Habeeb Ibrahim
Geoffrey Hinton, often hailed as the “godfather of artificial intelligence,” has issued a dire warning that AI could surpass and come to dominate humanity within the next three decades. Hinton, who was awarded the 2024 Nobel Prize in Physics for his pioneering work on artificial neural networks, estimates there is a 10% to 20% chance of AI leading to human extinction by 2054.
Speaking on BBC Radio 4’s Today programme, Hinton emphasized the rapid development of AI, which he described as moving “much faster” than anticipated. “We’ve never had to deal with something more intelligent than ourselves before,” he remarked. Comparing humanity’s potential future relationship with AI to that of a toddler and an adult, he said, “We’ll be the three-year-olds.”
Hinton, who previously worked at Google, resigned last year so that he could speak freely about the dangers of unregulated AI development. He fears that without strict oversight, “bad actors” could exploit AI for destructive purposes, and that artificial general intelligence (machines smarter than humans) could eventually evade human control, posing an existential threat.
Reflecting on the field’s trajectory, Hinton conceded that progress has come faster than he once expected. “Most experts believe that within the next 20 years, we will create AIs smarter than humans. That’s a very scary thought,” he said.
Hinton has called for immediate government intervention to regulate AI, warning that relying on profit-driven companies to ensure the technology’s safety is insufficient. “The invisible hand is not going to keep us safe,” he cautioned, urging governments to enforce rigorous safety research.
While Hinton’s concerns are shared by many in the AI community, some experts, such as Yann LeCun, Meta’s chief AI scientist, downplay the risks, arguing that AI could instead help humanity address global challenges.
As AI continues to evolve at an unprecedented pace, the debate over its potential to either save or destroy humanity grows ever more urgent.