The Dangerous Race to Superintelligence

The race to build superintelligence divides experts. Which view proves right may decide our future.

Two prominent voices have recently laid out opposing views in separate interviews. Roman Yampolskiy, professor of computer science at the University of Louisville, argues that AI capability is accelerating at an exponential pace, while safety progresses only in small steps. Sam Altman, chief executive of OpenAI, maintains that today’s systems remain tools and that their benefits will spread broadly across society. Their words frame the central question in the race toward superintelligence: will speed outpace safety?

What Superintelligence Means

Artificial General Intelligence, or AGI, refers to AI systems that can match or exceed human performance across most cognitive tasks. Superintelligence goes a step further. It would outperform the best human minds in science, strategy, and invention, and could even improve itself at a pace humans could not match. The differences between these stages are explained in more detail in AI, AGI, and ASI Made Clear.

Yampolskiy’s Warning

Roman Yampolskiy describes the imbalance in simple terms: “Progress in AI capabilities is exponential, while progress in safety is linear.” His concern is that once a system reaches superintelligence, it may be able to remove its own constraints and act in ways humans cannot control. He has suggested that AGI could arrive as early as 2027, leaving limited time for governments and institutions to adapt.
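To make that asymmetry concrete, here is a toy calculation in Python. The doubling rate and the fixed yearly increment are illustrative assumptions, not Yampolskiy’s figures; the point is only how quickly an exponential curve pulls away from a linear one.

```python
# Toy model of the capability-safety gap: capability compounds
# exponentially while safety improves by a fixed step each year.
# Both growth rates are illustrative assumptions, not measurements.
capability = 1.0  # arbitrary starting level
safety = 1.0

for year in range(2025, 2031):
    gap = capability - safety
    print(f"{year}: capability={capability:6.1f}  safety={safety:4.1f}  gap={gap:6.1f}")
    capability *= 2.0  # assumed doubling each year (exponential)
    safety += 1.0      # assumed fixed annual gain (linear)
```

Within a few simulated years the gap dwarfs the safety curve. The exact numbers do not matter; the shape of the divergence is Yampolskiy’s worry.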

Yampolskiy also points to the incentives behind the race. Technology companies operate in highly competitive markets where speed is rewarded. Safety, by contrast, can slow deployment. The economic consequences could be enormous. If most jobs are automated, the question of how people earn a living becomes urgent. Elon Musk raised this issue years ago, saying at the World Government Summit in 2017: “What to do about mass unemployment? I think, ultimately, we’ll have to have some kind of universal basic income.”

Roman Yampolskiy on The Diary of a CEO podcast, September 2025

Altman’s Reassurance

Sam Altman presents a calmer outlook. He says, “They do not have a sense of agency or autonomy. They do not do anything unless you ask.” In his view, current systems are tools: they generate text, code, or images, but they do not act on their own. He adds that what looks likely is “a huge upleveling of people, billions using this technology to be more productive and creative.”

Still, Altman leads one of the companies at the center of this acceleration. He benefits if growth remains fast and if regulators in Washington stay cooperative. Part of OpenAI’s model specification process involves writing explicit “moral rules” for its systems and allowing the company’s board to overrule those choices. That is transparent, but it still means a small group decides what billions of people can and cannot do with AI systems. It also raises the question of whether guardrails that hold today would survive once systems are far more powerful.

Sam Altman on The Tucker Carlson Show, September 2025

The Fragile State of Safety

Much of today’s AI safety framework relies on guardrails that filter or block certain outputs. Users have already demonstrated that some of these limits can be bypassed with clever wording or “jailbreak” prompts. The deeper concern is that tomorrow’s systems may learn to bypass these limits on their own. Researchers call this challenge “alignment,” which means training AI to follow human values. Whether alignment is possible once superintelligence arrives remains an open question.
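Why such guardrails are fragile is easier to see with a minimal sketch. The blocklist-style filter below is hypothetical (production systems use trained classifiers, not keyword lists), but the failure mode is analogous: a filter keyed to one phrasing is evaded by any rewording, which is essentially what jailbreak prompts exploit.

```python
import re

# Hypothetical blocklist-style output guardrail. Real deployments use
# trained classifiers, but they share the same basic weakness: they
# recognize the patterns they were built for, not the underlying intent.
BLOCKED_PATTERNS = [
    re.compile(r"\bhow to hotwire a car\b", re.IGNORECASE),
]

def guardrail(text: str) -> str:
    """Return the text unchanged unless it matches a blocked pattern."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return "[blocked by safety filter]"
    return text

# A literal match is caught...
print(guardrail("Step one: how to hotwire a car."))
# ...but a trivial rephrasing of the same request slips through.
print(guardrail("Step one: start a car's engine without its key."))
```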

Governance adds another layer of complexity. At the moment, many rules are written within companies or by small expert groups rather than through broad public debate. That dynamic reflects a wider shift toward technocracy, where power rests with technical experts and institutions rather than elected voices. The race is also constrained by hardware. Advanced semiconductor manufacturing underpins the most powerful AI models, and governments now treat cutting-edge chips as strategic assets, as described in the US–China tech race. Without this computing backbone, the push toward superintelligence would slow dramatically.

The Stakes Ahead

Yampolskiy warns that once superintelligence arrives, control may no longer be possible. Altman emphasizes opportunity and insists today’s systems remain tools. The honest answer is that no one knows how this race ends. If safety holds, superintelligence could bring breakthroughs in science, medicine, and productivity. If safety fails, the risks range from mass unemployment to a loss of human control itself. What is clear is that the contest is not a fair fight. The financial and political incentives all point toward speed, and safety is left struggling to catch up.

