Artificial Intelligence: Why the Biggest Revolution May Be Surprisingly Ordinary

Amid monumental hype, new research suggests AI is evolving like earlier technologies, making governance, not panic, the real priority. 

By Amitrajeet A. Batabyal*

Batabyal is a Distinguished Professor of economics and the Head of the Sustainability Department at the Rochester Institute of Technology, NY. His research interests span environmental, trade, and development economics.

February 20, 2026 at 5:52 AM IST

There is no gainsaying that there is a great deal of hype about what artificial intelligence might mean for humanity. On one end of the spectrum lies the belief that AI will solve all human problems and perhaps even make us immortal. On the other is the dystopian fear that AI will immiserate society in ways we have never seen before.

Given these polar extremes, it is refreshing to read a recent paper by Princeton University computer scientists Arvind Narayanan and Sayash Kapoor, which makes a compelling case for treating AI as just another, albeit powerful, technology. What, precisely, is their argument?

Speed Limits
The authors begin by outlining three stages of technological impact—methods, applications, and adoption—that unfold at different timescales. Contrary to popular claims of imminent upheaval, they present extensive evidence showing that AI diffusion, especially in significant or safety-critical domains, is slow.

Many applied systems still rely on decades-old statistical tools rather than cutting-edge deep learning. Safety constraints, regulatory oversight, and the complexity of real-world environments limit rapid deployment. Examples include Epic’s sepsis prediction model, which performed poorly in clinical settings, and early chatbots that failed in practice, illustrating how difficult it is to ensure reliability in complex arenas.

Even in non–safety-critical domains, adoption is constrained by human and organisational factors. Although millions have experimented with generative AI, actual usage intensity and sustained productivity gains remain modest. Historical parallels, such as the delayed productivity impact of electrification, reinforce the argument that meaningful diffusion often takes decades.

Progress in AI methods has also encountered “speed limits.” Despite explosive growth in research output, dominant paradigms such as transformer architectures have persisted, while large research ecosystems risk becoming ossified. Breakthroughs are rarer than headlines suggest.

Risk Reality
The authors also question the coherence of “superintelligence” as a policy concept. For risk analysis, they argue, the crucial variable is power rather than intelligence. Humans already achieve great capabilities through tools, and AI will extend this trajectory rather than replace it. Many tasks contain high “irreducible error”, limiting the scope for dramatic overperformance. Areas such as geopolitical forecasting or persuasion are therefore unlikely to experience superhuman leaps.

Future work, the authors suggest, will increasingly involve AI control: monitoring, auditing, specifying tasks, and intervening when necessary. This mirrors earlier shifts from manual labour to machine supervision after the Industrial Revolution. Existing safeguards, from audits to circuit breakers and fail-safes, support the view that AI systems can remain governable without exotic alignment breakthroughs.

The paper analyses several risk categories, including accidents, arms races, misuse, and systemic failures. Accidents resemble failures in other technologies and can be addressed through established safety-engineering practices. Arms races are familiar phenomena and can be mitigated through sector-specific regulation. The contrast between safety-driven firms such as Waymo and the struggles of Uber and Tesla suggests that markets often reward caution.

Misuse cannot be reliably prevented through model-level alignment alone, since harm depends heavily on user context. Defences must therefore focus downstream, fortifying systems against AI-enabled cyberattacks and biothreats through established security practices. Used wisely, AI can enhance bioscreening, content moderation, and cybersecurity, shifting offence–defence balances in society’s favour.

By contrast, systemic socioeconomic and political risks—inequality, labour disruption, power concentration, and democratic erosion—are far more plausible and historically precedented. Excessive focus on speculative superintelligence, the authors argue, distracts from these pressing challenges.

Given uncertainty and divergent worldviews, they advocate for resilience over non-proliferation. Resilience emphasises decentralisation, transparency, defensive capabilities, and adaptability. Non-proliferation, by contrast, creates brittle single points of failure and concentrates power. Realising AI’s benefits requires enabling diffusion through thoughtful regulation, investment in complements such as AI literacy and open data, and careful public-sector adoption.

In the final analysis, the real story of AI isn’t about machines escaping human control. It is about societies learning how to wield a powerful tool. The most transformative force, in that world, isn’t artificial intelligence. It’s human judgement.

Batabyal is a Distinguished Professor, the Arthur J. Gosnell Professor of Economics, and the Head of the Sustainability Department at the Rochester Institute of Technology in Rochester, NY. These views are his own.