Geoffrey Hinton Issues Warning About AI Developing Its Own Language And Risks Involved

Geoffrey Hinton, often referred to as the Godfather of AI, has issued a new warning that seems like it belongs in a science fiction narrative. Speaking on the One Decision podcast, the Nobel Prize-winning scientist cautioned that artificial intelligence might soon develop its own language, one incomprehensible even to its creators. "Right now, AI systems do what's called 'chain of thought' reasoning in English, so we can follow what it's doing," Hinton explained. "But it gets more scary if they develop their own internal languages for talking to each other."

Hinton's concerns are not unfounded. Machines have already shown they can generate "terrible" thoughts, and there's no guarantee these will always be in a language humans can understand. His early work on neural networks laid the groundwork for today's deep learning models and large-scale AI systems. Despite this, he admits he didn't fully grasp the potential dangers until later in his career. "I should have realised much sooner what the eventual dangers were going to be," he admitted. "I always thought the future was far off and I wish I had thought about safety sooner." Now, that delayed realisation fuels his advocacy.


AI Learning and Its Implications

One of Hinton's primary concerns is how AI systems acquire knowledge. Unlike humans who learn gradually, digital brains can instantly share information among themselves. On BBC News, he illustrated this by saying, "Imagine if 10,000 people learned something and all of them knew it instantly; that's what happens in these systems." This interconnected intelligence allows AI to scale its learning at an unmatched pace.

Current models like GPT-4 already surpass humans in raw general knowledge. While reasoning remains a human stronghold for now, Hinton warns that this advantage is diminishing rapidly. He also observes that many industry insiders are not as vocal about these risks as he is. "Many people in big companies are downplaying the risk," he said, suggesting their private worries aren't reflected in their public statements.

Industry Response and Personal Decisions

Hinton singles out Google DeepMind CEO Demis Hassabis as an exception who is genuinely working to address these risks. As for his own departure from Google in 2023, Hinton clarifies that it wasn't a protest but a matter of his age and declining programming ability. "I left Google because I was 75 and couldn't program effectively anymore. But when I left, maybe I could talk about all these risks more freely," he states.

While governments introduce initiatives like the White House's new "AI Action Plan", Hinton believes regulation alone won't suffice. The real challenge lies in creating AI that is "guaranteed benevolent", especially since these systems may soon think in ways beyond human comprehension.

Hinton's insights highlight the urgent need for careful consideration of AI's future impact on society. As technology advances rapidly, understanding and mitigating potential risks becomes crucial to ensure AI remains beneficial to humanity.
