
Leading AI researcher gives sobering warning about OpenAI’s AGI ambitions

The news: Eliezer Yudkowsky, a founding figure in artificial general intelligence (AGI) research, is calling on the industry to “shut it all down.”

  • Yudkowsky, co-founder of the Machine Intelligence Research Institute, says the letter urging a six-month moratorium on training AI models more powerful than GPT-4 doesn’t go nearly far enough, per Time.
  • He said he’s joined by others in the AI field who have privately concluded that the most likely result of building an AGI is that literally everyone on Earth will die and that it’s “the obvious thing that would happen.”

The problem: Companies like OpenAI and DeepMind are going full throttle on developing AGI—systems that surpass human intelligence—but no one understands how current advanced AI models work.

  • University of California, Berkeley, professor of computer science Stuart Russell said he asked Microsoft whether GPT-4 has internal goals of its own that it’s pursuing. The response was: “We haven’t the faintest idea.”
  • A truly safe AGI might be impossible unless its inner workings can be explained and aligned with human values.
  • Strained global diplomacy and a tech arms race between the US and China might make an international agreement to halt advanced model training a long shot.

Do we need AGI? Widespread workforce disruption and human extinction are high-stakes consequences for a technology that we probably don’t need.

  • Humans are skilled generalist thinkers thanks to evolution, and we could collectively get even smarter by investing more in human learning rather than machine learning.
  • The gaps in our intellectual capabilities involve difficulties in solving specific problems like climate change, disease, and space travel.
  • Instead of building AGI, humans could put more resources into small, focused AI models adept at specific use cases like discovery of new drugs and materials while ensuring we maintain control over AI.
