🤨 “Data is The New Oil.”
You’ve probably heard this phrase, spoken as if it were the dogma of our times.
I’ve seen it in presentations. I’ve heard it while I’ve been giving presentations. “Data is the new oil,” nods someone, sagely, while I make an unrelated point.
When Clive Humby made this statement, in 2006, he also said:
“Oil is valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals etc to create a valuable entity that drives profitable activity; so too must data be broken down, analyzed for it to have value.”
That doesn’t look as good on social media, but it is a more satisfying analogy. The pithy banality “data is the new oil” floats, fittingly, like an oil slick, mesmerising while smothering something deeper.
Too many neglect to look beyond the surface. And yet, it is important that we all understand our relationship with data.
For one, oil is indelibly marked by its scarcity. Data suffers no such restrictions.
Nonetheless, it is true that data, like oil, must be processed before it can be used.
In particular, we refine data into models that describe and predict human behaviour.
So much of modern AI is really prediction writ large, its adoption driven by cheaper and faster computation, more data, and better tools.
The assumption built into these algorithms is that whatever happened will happen again.
It is a necessarily cyclical view of history.
Machines are exceptional at performing these arithmetical computations, at a scale we struggle to conceptualise and with an accuracy we cannot match.
Machine learning algorithms identify meaningful links between variables that we would simply never see.
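That cyclical assumption can be sketched in a few lines of Python: fit a line to past observations, then extrapolate it forward. The sales figures below are invented purely for illustration.

```python
# A minimal sketch of "prediction writ large": learn a trend from the
# past, then assume it continues. The history here is hypothetical.
past = [10.0, 12.0, 14.0, 16.0, 18.0]  # e.g. weekly sales figures

n = len(past)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(past) / n

# Ordinary least squares: slope and intercept of the best-fit line.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, past)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# The built-in assumption: the trend of the past holds in the future.
forecast = slope * n + intercept
print(forecast)  # → 20.0 for this perfectly linear history
```

Real systems use richer models and far more variables, but the logic is the same: the forecast is only as good as the assumption that tomorrow resembles yesterday.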
However, when this AI-driven prediction becomes commonplace, it also creates complements. In economic terms, a complement is a good that rises in value as a related good becomes cheap and plentiful.
Here, that means a skill like human judgement becomes more valuable as a complement to machine-based predictions.
We spend less time producing performance forecasts, for example, and can spend more time turning these forecasts into strategy.
Or, we can intervene to provide direction when an unexpected event occurs that the machine struggles to comprehend.
Furthermore, the self is defined by potential: by what a person might yet do, not only by what they have done.
That applies to our audience, who may act in unexpected ways in response to a product launch. It also applies to those of us (that’s all of us, soon) who work with prediction tools.
People and machines can complement each other’s strengths — and also minimize each other’s weaknesses.
Beyond a certain accuracy threshold, it is the reduction in errors that matters: improving accuracy from 98% to 99.9% sounds marginal, but it cuts the error rate twentyfold.
Indeed, the most effective uses of AI stem from a division of labour that not only separates out today’s tasks, but also incorporates the impact that AI will have on how we work.
All too often, businesses acquiesce to “the data”, without understanding or questioning its provenance.
It is important to understand that machines can give answers, but they also raise more questions for us. Our role should always be an active one.
Prediction machines are not Magic 8-Balls, revealing truths from the beyond.
There is nothing supernatural about their inner workings.
Artificial Intelligence reads the past and models the future, but it is we who must act in the present.
We diminish its potential — and ours — by assuming a subservient role to “the data”.