'AGI Is Coming – And We’re Not Ready,' Warns Google DeepMind CEO Demis Hassabis

Google DeepMind CEO Demis Hassabis has issued a stark warning about the approaching reality of Artificial General Intelligence (AGI), saying that while the technology may arrive sooner than many expect, society remains alarmingly unprepared to handle its implications.

In a candid conversation with Time magazine, Hassabis, who recently received the 2024 Nobel Prize in Chemistry, said AGI — a form of AI capable of human-level cognitive performance across a broad range of tasks — is now within a five-to-ten-year horizon. Some experts believe it may arrive even sooner.

“It’s Coming Very Soon”

Asked what keeps him up at night, Hassabis didn’t mince words. “For me, it's this question of international standards and cooperation and also not just between countries, but also between companies and researchers as we get towards the final steps of AGI,” he said. “And I think we are on the cusp of that. Maybe we are five to 10 years out. Some people say shorter, I wouldn't be surprised.”

While AGI has long been a topic of speculation in tech circles, Hassabis made it clear that its arrival is no longer hypothetical. “It's coming, either way it's coming very soon, and I'm not sure society's quite ready for that yet,” he added, pointing to issues of system controllability and accessibility as critical concerns.

A Global Framework for AGI Safety

Hassabis has been a long-time advocate for stronger international collaboration to ensure AGI development remains safe and transparent. Reiterating a proposal he made earlier this year, he emphasised the need for a global governing body to monitor AGI progress and potential misuse.

"I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international research-focused high-end collaboration on the frontiers of AGI development to try and make that as safe as possible," he said.

He added, "You would also have to pair it with a kind of an institute like IAEA, to monitor unsafe projects. So a kind of like UN umbrella, something that is fit for purpose for that, a technical UN."

Existential Risks Are Real

The DeepMind chief’s remarks come on the heels of a recent paper published by the AI lab, which paints a grim picture of what could go wrong if AGI is mishandled. The paper highlights that AGI may pose risks so severe that they could “permanently destroy humanity.”

"Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm," the study warned, pointing to existential threats as prime examples of what’s at stake.

What Makes AGI Different?

Unlike current AI, which is typically designed to perform narrow, task-specific functions, AGI aims to replicate the breadth and depth of human intelligence. It would be capable of learning, reasoning, and applying knowledge across multiple fields — making it far more powerful and potentially unpredictable.

As AI continues its rapid evolution, Hassabis’s words stand as a pressing reminder: technological breakthroughs are only as beneficial as our ability to govern them responsibly.
