
UN Flags Risks Of Advancing Human-Like AI, Calls For Urgent Action

The United Nations has issued a cautionary statement regarding human-level artificial intelligence, commonly referred to as Artificial General Intelligence (AGI), calling for swift global coordination as the technology advances at a rapid pace.

The Council of Presidents of the United Nations General Assembly (UNCPGA) has released a report urging action to manage the potential dangers of AGI, which experts suggest could become a reality within the next few years.


The report acknowledged that while AGI may "accelerate scientific discoveries related to public health" and revolutionise various industries, it also carries serious potential drawbacks.

"While AGI holds the potential to accelerate scientific discovery, advance public health, and help achieve the Sustainable Development Goals, it also poses unprecedented risks, including autonomous harmful actions and threats to global security," the report stated.

"Unlike traditional AI, AGI could autonomously execute harmful actions beyond human oversight, resulting in irreversible impacts, threats from advanced weapon systems, and vulnerabilities in critical infrastructures. We must ensure these risks are mitigated if we want to reap the extraordinary benefits of AGI."

It stressed the need for immediate, coordinated international measures, ideally led by the United Nations, to prevent AGI from becoming a serious threat.

"Such actions should be initiated by a special UN General Assembly specifically on AGI to discuss the benefits and risks of AGI and potential establishment of a global AGI observatory, certification system for secure and trustworthy AGI, a UN Convention on AGI, and an international AGI agency."

DeepMind CEO issues warning

In February, Demis Hassabis, CEO of Google DeepMind, stated that AGI could begin to emerge within the next five to ten years. He also recommended the creation of a UN-style international body to supervise its development.

"I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international research focused high-end collaboration on the frontiers of AGI development to try and make that as safe as possible," said Hassabis.

"You would also have to pair it with a kind of an institute like the IAEA, to monitor unsafe projects and sort of deal with those. And finally, some kind of supervening body that involves many countries around the world that input how you want to use and deploy these systems. So a kind of like UN umbrella, something that is fit for purpose for that, a technical UN," he added.

According to a research paper by DeepMind, AGI might arrive as early as 2030 and could "permanently destroy humanity".
