What if artificial intelligence is just a ‘normal’ technology?
Opinions about artificial intelligence tend to fall on a wide spectrum. At one extreme is the utopian view that AI will cause runaway economic growth, accelerate scientific research and perhaps make humans immortal.
At the other extreme is the dystopian view that AI will cause abrupt, widespread job losses and economic disruption, and perhaps go rogue and wipe out humanity.
So a paper published earlier this year by Arvind Narayanan and Sayash Kapoor, two computer scientists at Princeton University, is notable for the unfashionably sober manner in which it treats AI: as “normal technology”.
The work has prompted much debate among AI researchers and economists.
Both utopian and dystopian views, the authors write, treat AI as an unprecedented intelligence with agency to determine its own future, meaning analogies with previous inventions fail.
Messrs Narayanan and Kapoor reject this, and map out what they see as a more likely scenario: that AI will follow the trajectory of past technological revolutions.
They then consider what this would mean for AI adoption, jobs, risks and policy.
“Viewing AI as normal technology leads to fundamentally different conclusions about mitigations compared to viewing AI as being humanlike,” they note.
Adoption versus innovation
The pace of AI adoption, the authors argue, has been slower than that of innovation.
Many people use AI tools occasionally, but the intensity of use in America (in hours per day) remains low as a fraction of overall working hours.
For adoption to lag behind innovation is not surprising, because it takes time for people and companies to adapt habits and workflows to new technologies.
Adoption is also hampered by the fact that much knowledge is tacit and organisation-specific, data may not be in the right format and its use may be constrained by regulation.
Similar constraints applied a century ago, when factories were electrified: the process took decades, because it required a total rethink of floor layouts, processes and organisational structures.
Moreover, constraints on the pace of AI innovation itself may be more significant than they seem, argues the paper, because many applications (such as drug development, self-driving cars or even just booking a holiday) require extensive real-world testing.
This can be slow and costly, particularly in safety-critical fields that are tightly regulated. As a result, economic impacts “are likely to be gradual”, the authors conclude, rather than involving the abrupt automation of a big chunk of the economy.
Even a slow spread of AI would change the nature of work. As more tasks become amenable to automation, “an increasing percentage of human jobs and tasks will be related to AI control.”
There is an analogy here with the Industrial Revolution, in which workers went from performing manual tasks, such as weaving, to supervising machines that did those tasks and handling situations the machines could not, such as intervening when they got stuck.
Rather than AI stealing jobs wholesale, jobs might increasingly involve configuring, monitoring and controlling AI-based systems. Without human oversight, Messrs Narayanan and Kapoor speculate, AI may be “too error-prone to make business sense”.
That, in turn, has implications for AI risk. Strikingly, the authors criticise the emphasis on “alignment” of AI models, meaning efforts to ensure outputs align with their human creators’ goals.
Whether a given output is harmful often depends on context that humans may understand, but the model lacks, they argue.
A model asked to write a persuasive email, for example, cannot tell if that message will be used for legitimate marketing or nefarious phishing.
Trying to make an AI model that cannot be misused “is like trying to make a computer that cannot be used for bad things”, the authors write.
Instead, they suggest, defences against the misuse of AI, for example to create computer malware or bioweapons, should focus further downstream, by strengthening existing protective measures in cyber-security and biosafety.
This would also bolster resilience against versions of these threats that do not involve AI.
Terminator is fictional
Such thinking suggests a range of policies to reduce risk and increase resilience.
These include whistleblower protection (as seen in many other industries), compulsory disclosure of AI usage (as happens with data protection), registration to track deployment (as with cars and drones) and mandatory incident-reporting (as with cyber-attacks).
In sum, the paper concludes that lessons from previous technologies can be fruitfully applied to AI—and treating the technology as “normal” leads to more sensible policies than treating it as imminent superintelligence.
The paper is not without its flaws. At times it reads like a polemic against AI hype in general. It rambles in places, states beliefs as facts, and not all of its arguments are convincing, though the same is true of utopian and dystopian screeds.
Even AI-pragmatists may feel the authors are too blasé about the potential for labour-market disruption, underestimate the speed of AI adoption, are too dismissive of the risks of misalignment and deception, and are too complacent about regulation.
Their prediction that AI will not be able to “meaningfully outperform trained humans” at forecasting or persuasion seems oddly overconfident. And even if the utopian and dystopian scenarios are wrong, AI could still be far more transformative than the authors describe.
But many people, on reading this rejection of AI exceptionalism, will nod in agreement. The middle-ground view is less dramatic than predictions of an imminent “fast take-off” or apocalypse, so it tends not to receive much attention.
That is why the authors think it worthwhile to articulate this position: because they believe that “some version of our worldview is widely held”.
Amid current worries about the sustainability of AI investment, their paper makes for a refreshingly dull alternative to AI hysteria.
© 2025 The Economist Newspaper Limited. All rights reserved
Original URL: https://www.watoday.com.au/technology/what-if-artificial-intelligence-is-just-a-normal-technology-20251202-p5nk3q.html