Artificial General Intelligence Explained: When Will AI Be Smarter Than Us? | Behind the Numbers

On today’s podcast episode, we discuss the various definitions of artificial general intelligence (AGI) and try to come up with the best one we can. Then we look at how smart humans are compared to current AI models. Join Senior Director of Podcasts and host Marcus Johnson, and Analysts Jacob Bourne and Gadjo Sevilla. Listen everywhere and watch on YouTube and Spotify.

Subscribe to the “Behind the Numbers” podcast on Apple Podcasts, Spotify, Pandora, Stitcher, YouTube, Podbean or wherever you listen to podcasts. Follow us on Instagram.

Cint is a global insights company. Our media measurement solutions help advertisers, publishers, platforms, and media agencies measure the impact of cross-platform ad campaigns by leveraging our platform’s global reach. Cint’s attitudinal measurement product, Lucid Measurement, has measured over 15,000 campaigns and has over 500 billion impressions globally. For more information, visit cint.com/insights.

Episode Transcript:

Marcus Johnson:

Are your brand campaigns as effective as they could be? Look, Marcus, I'm going to be real with you, probably not. I understand. If you're only getting insights when the campaign is over, then the answer is a resounding no. To make better campaign decisions you need real-time measurement. You need Lucid Measurement by Cint, let's be real. Discover the power of real-time brand lift measurement at cint.com/insights. That's C-I-N-T.com/insights. Hey, gang, it's Monday, June 30th. Gadjo, Jacob, and listeners, welcome to Behind the Numbers: an EMARKETER podcast made possible by Cint. I'm Marcus. And joining me for today's show we have two people. Senior analyst writing for our AI and tech briefings, based in New York, is Gadjo Sevilla.

Gadjo Sevilla:

Hey, Marcus, hey, Jacob. Happy to be with you guys.

Marcus Johnson:

We're also joined by our analyst who writes long form about the same topics, living in California: Jacob Bourne.

Jacob Bourne:

Thanks for having me today, Marcus.

Marcus Johnson:

Yes, sir. Today's fact, gentlemen. When you recall a memory you're actually reconstructing it, and it also changes each time. So what am I talking about? H.L. Roediger III from Washington University in St. Louis wrote a paper on the psychology of reconstructive memory. So they explain that when we perceive and encode events in the world, we construct rather than copy the outside world as we comprehend the events. So if perceiving is construction, then remembering the original experience involves reconstruction. We use traces of past events, general knowledge, our expectations, and our assumptions about what must have happened. Because of this, recollections may be filled with errors called false memories, which include inferences made during encoding, information we receive about an event after its occurrence, and our perspective during retrieval. This makes me feel better.

Jacob Bourne:

That's quite the fact of the day, Marcus. That's awesome.

Marcus Johnson:

Way too deep. So it says, "Contrary to popular belief, memory does not work like a video recorder faithfully capturing the past to be played back accurately at a later time, rather" ... "Even when we are accurate we are reconstructing events from the past when we remember." The CBC piece from The Nature of Things is an article by Canadian writer and director Josh Freed, who says, "Once our brain has a new version of the story, it forgets, erases, the former version." So it's almost like a game of telephone, we're just going to-

Jacob Bourne:

I mean, everything is-

Marcus Johnson:

Remember the last recollection.

Jacob Bourne:

Objective.

Marcus Johnson:

That's what it essentially says.

Jacob Bourne:

Our perception of the world is always subjective.

Marcus Johnson:

Exactly. If that wasn't heavy enough, today's real topic: what exactly is artificial general intelligence? So who exactly came up with the term artificial general intelligence, or AGI? Well, Gil Press of Forbes notes that the term AGI was coined in 2007 when a collection of essays on the subject was published. It was a book titled Artificial General Intelligence, co-edited by Ben Goertzel and Cassio Pennachin. Although they seem to say that they sourced the idea for the title from a former colleague, AI researcher Shane Legg.

In the book, gents, they define it ... Loosely speaking, they say, "AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn't know about at the time of their creation." So it's not the most defined term. It wasn't then, and it doesn't seem to be now. Jacob, when we were discussing this episode, you said that even definitions today are murky and not really agreed upon. So I've asked you and Gadjo to come up with your own definitions, to craft your own definition of AGI. What's yours?

Jacob Bourne:

I think an easy one is just an AI model that is on par in terms of intelligence and capabilities with most human beings. But the key here is that ... Is the word general. Because the thing about human intelligence is that we're good at a lot of different things, we can solve a lot of different problems. We have a wide variety of capabilities. And historically AI, when it was first starting to be developed in the 1960s, was really ... The goal was to make a machine that thinks like people. But it turned out that AI is, actually, generally good at a very narrow area of things. And so I think that's how this term AGI came about, because it's trying to get to an AI model ... Build an AI model that really is good at a wide variety of things like humans are.

Marcus Johnson:

Right.

Jacob Bourne:

It doesn't just excel in one category of things.

Marcus Johnson:

So I'll give an example. So IBM's supercomputer Deep Blue, when it beat Garry Kasparov, the chess master-

Jacob Bourne:

Yes, exactly.

Marcus Johnson:

It was good at just that one thing. So that's narrow AI, correct?

Jacob Bourne:

Narrow AI, right. I think with ChatGPT in 2022 we've seen it becoming more general, but it's still not as general as a human, I would say.

Marcus Johnson:

So that does seem to be a big part of this, Gadjo, that, obviously, it's in the name, artificial general intelligence. And Google says, "There's that generalization ability. AGI can transfer knowledge and skills learned in one domain to another enabling it to adapt to new and unseen situations effectively." So that's a big component. Would you agree with what Jacob said? And what else would you tack on?

Gadjo Sevilla:

I do agree. It requires going beyond a narrow understanding of various topics. I also think autonomy is a big part of AGI, so you have ... You can think of it as a live algorithm that's constantly learning, that can make decisions, that understands the nuances in between subject matters, right? And I think that's the elusive part of AGI. Sure, it could surpass a lot of human thought, but at the same time how it applies that thought might not be on a human level, right?

Marcus Johnson:

Yeah. One of the questions here is, what does a human level even mean? I looked at the IBM definition and they said, "AGI is a hypothetical stage in the development of machine learning in which an AI system can match or exceed the cognitive abilities" ... This word cognitive keeps coming up in a lot of these definitions. "Cognitive abilities of human beings across any tasks." McKinsey, their definition, they say, "AGI may replicate human-like cognitive abilities including reasoning, problem solving." But there's a ton. "Perception, learning, language comprehension, navigation, social, and emotional engagement," et cetera. But Jaron Lanier, who popularized the term virtual reality, asks, "Does crossing some threshold make something human or not?"

Gadjo Sevilla:

That's a great question: what is the threshold?

Marcus Johnson:

Exactly.

Jacob Bourne:

Human intelligence and cognition itself is poorly understood. And so now you're trying to take a machine and then compare it to a human, essentially. I mean, no one has really agreed upon where that threshold is. And so that's actually why I like that Anthropic CEO Dario Amodei says he prefers the term powerful AI to AGI, because AGI is a bit more vague. AGI has sort of become a marketing term because, again, it hasn't really been defined in a precise way, because it's difficult. How would you know when a model is really on par with the intelligence and capabilities of most people? Would AI companies agree upon that?

Gadjo Sevilla:

Following up on that, I think vagueness is going to be a continued aspect of this. No one wants to nail down a definition because the-

Marcus Johnson:

How interesting.

Gadjo Sevilla:

Competitors are just going to go back and say, "Well no, because this is what we think," right? I don't expect a consensus. Neither do I expect someone to say, "Yeah, this is AGI. We've achieved it, it does this." The fallout from that will be significant, right? So they're going to keep it vague and I think it's going to be nebulous. The target is a moving target. They say, "Oh, we're close to it." But some say, "No, we're not." And I think that just goes to show how complicated just defining AGI is going to be moving forward.

Jacob Bourne:

And to make it more complicated there's another term that-

Marcus Johnson:

Oh, good.

Jacob Bourne:

Has been floating around which is a superintelligence.

Marcus Johnson:

Right.

Jacob Bourne:

Which is an AI model-

Marcus Johnson:

So what's that?

Jacob Bourne:

That exceeds even the intelligence of the smartest people. And that's also something that some AI researchers think is possible. So even exceeding the capabilities of an AGI.

Marcus Johnson:

Is it fair to say that a big part of why we want to create AI that is on par with or smarter than a human is because of the Turing test, which came from English computer scientist Alan Turing? His question was basically, can you trick a person into thinking a computer is a human? Is that where all of this stems from?

Jacob Bourne:

Well, I think that the Turing test is just a test that grew out of this desire to create AI that's as smart as humans. But I think the Turing test speaks to this problem of how would you know. Because if it's just tricking you, then it's a performance; it's not really intelligent, right? So I think that's part of it-

Marcus Johnson:

Good point.

Jacob Bourne:

Is humans under ... We know that we understand the world we're living in. And so even though AI can do things, it doesn't really understand what it's doing. When you're talking to a chatbot and it ... Its output is really great, but does it understand the words that it's saying? And so I think that's a big part of what we think about in terms of human intelligence, is that we understand what ... The world, we understand the language we're using, the problems we're trying to solve. But it doesn't seem like AI does, at least not yet.

Marcus Johnson:

One of the definitions, this one coming from Amazon, says, "AGI is a field of theoretical AI research that attempts to create software with human-like intelligence and the ability to self-teach performing tasks it's not trained or developed for." The self-teaching part, do ... Would we agree that is AGI? Or do we think that actually goes beyond more towards superintelligence?

Gadjo Sevilla:

I think that's part of AGI just because ... As we discussed, AGI is an ongoing thing, right? It's an unfinished state. And in order for it to continue evolving it needs to continue learning. The issue there is it can definitely learn at least all the information that's on the internet. But what it lacks, again, is just general reasoning, common sense, empathy, social intelligence. That's what it needs to unlock to sort of not be smarter than humans but at least be on par with the way humans process the world around them cognitively, right?

Marcus Johnson:

Speaking about how they process the world cognitively, common sense knowledge seems to be part of this too. Google was saying, "AGI should have a vast repository of knowledge about the world including facts, relationships, and social norms allowing it to reason and make decisions based on this common understanding." I mean, how much is common sense intrinsically linked to being a human?

Jacob Bourne:

I mean, that seems like-

Marcus Johnson:

Go on, please.

Jacob Bourne:

Well, it seems very linked to being human. There is a distinction that even an AGI or super intelligence won't be human, but it's this measure of intelligence and capabilities that we're trying to determine. Not a specific human-

Marcus Johnson:

So you can be as smart as a human but still not be a human?

Jacob Bourne:

Right. This lack of common sense is where a lot of criticism of AI's capabilities comes in. But then the flip side is saying, "Well, people often act without common sense too."

Marcus Johnson:

Yes.

Jacob Bourne:

"People make mistakes. We sort of hallucinate. We do all these things we criticize AI for. And so maybe AI's hallucinations are different, maybe it's lack of common sense is different but it doesn't mean it's not as intelligent." So that's one counterargument. I think a big limitation with current AI models is that they're trained on internet data, not real-world data for the most part. Now that's changing because you're ... We're seeing this ... These sort of models being developed that are designed for robotics. And I think the future outlook is to have AI-powered robots collecting real-world data that then can be used for model training. It could indicate a threshold that once that model training is more heavily underway that we might see AI advance closer to an AGI.

Marcus Johnson:

By real-world data, could you give folks an example of what you're talking about?

Jacob Bourne:

So you have an AI-powered robot that's out in the world, it has sensors. It's collecting data from things it touches, interactions it has with people, things it's seeing as it's moving around the world. It's not just using internet data to produce output; it's collecting data from interactions in the real world in real time. And that could-

Marcus Johnson:

Like a driverless car so to speak?

Jacob Bourne:

Right.

Marcus Johnson:

That being fair? Yeah, exactly. Like a driverless car.

Marcus Johnson:

So speaking of driverless cars, actually, this is a good pivot here. OpenAI, they've got a few different definitions of AGI. They say it's a highly autonomous system that outperforms humans at most economically valuable work. And then in a profile in The New Yorker, OpenAI CEO Sam Altman defined AGI as the equivalent of a median human that you could hire as a co-worker. So there are a few of the definitions from OpenAI. But Maxwell Zeff of TechCrunch notes that "OpenAI created the five levels it internally uses to gauge its progress towards AGI." Jacob, I think we've talked about this before, Gadjo perhaps as well, about the six levels of autonomous driving. You've got these six different stages, from zero to five, everything from you driving the car completely yourself at level zero to the car driving itself completely by itself at level five.

With this, they have the five levels from which they measure AGI internally. So you have the first level, chatbots, like ChatGPT. Second level, the reasoners, like OpenAI's o1. Then the agents, level three. That's where we seem to be now, or coming to now. Innovators, level four, AI that can help invent things. And then the last level, organizational AI that can do the work of an entire organization, at level five. Do we think it's more and more likely that we end up with something more akin to this? We get a set of rough guidelines on when something has reached a certain level of AGI as opposed to this one overarching AGI threshold?

Yes. AI, at the end of the day, it's interesting, it's research, but it's also a marketable tool. And in order to market it you have to have the specs on what you're marketing. We're going to see more of these sorts of levels, I guess, be fleshed out as AI advances. I think it's a bit different from really arguing in essence what we mean when we say an AGI. I mean, reducing it to what Sam Altman's saying, in terms of an economic driver, is diminishing it a bit because human intelligence spans much farther than the tasks we do at work.

Marcus Johnson:

Economic value, exactly.

Jacob Bourne:

Right. So I think that almost constrains what an AGI is which is maybe good, again, if we're just thinking about it in terms of a product. I think that's just one limited way of looking at it.

Marcus Johnson:

That's a great point though. Because we have to think about the people who are telling us what AGI is or isn't are people who run companies.

Jacob Bourne:

Yes.

Gadjo Sevilla:

And they're all competing with each other. They're trying to productize their models and they're getting ... I mean, the competition is on every level now, right, from agents to chatbots to search. And I think for most people, most companies, the concept of AGI won't really move the needle. Specific solutions-based tools, updates to the functionality of what AI can do, I think that is what matters right now. And I think for the foreseeable future that's how it's going to be measured, right?

Marcus Johnson:

Yeah. We touched on this earlier but I want to come back to it because I think it's a really interesting question, which is, are AI systems smarter than people already? And Niccolo Conte of Visual Capitalist wrote a piece about the IQ levels of AI, using data from Tracking AI, that ranked the smartest AI models based on their performance on the Mensa Norway IQ test. There are a bunch of different types of IQ tests and this is one of the main ones. For context, the average human IQ score ranges from 90 to 110. A score above 130 is typically considered genius level.

Up top, ranked number one, was OpenAI's text-only o3 model, scoring a 135 on the Mensa IQ test, placing it comfortably in the genius category. There were six other models that were above the top end of the human average, so above 110. There were two Claude models from Anthropic, two Gemini models from Google, another one from OpenAI, and one from ... A Grok model from xAI. And then you had 10 more that were between the average human IQ score range of 90 to 110. So Gadjo, are AI systems smarter than people already?

Gadjo Sevilla:

I think if you break it down, we see that they've surpassed us in certain things. So image recognition, I think they surpassed humans in 2015. Speech recognition, that was 2017. And then language understanding, that was recent, 2020; they matched humans in that. And again, these are narrow fields of measure, right? You still need to put that together with the special sauce that makes us human to determine whether they are smarter. They're definitely capable at certain tasks. They don't get tired, there's no fatigue involved, right, so you could say they have that endurance factor.

So for specific tasks that are, I guess, really just crafted with guardrails, sure, they could probably match or surpass, also on a case-by-case basis, right? Generally, I still don't think so, they still lack common sense. They're not good at abstract reasoning. That's what sort of defines intelligence, the ability to problem solve on the spot, right, and to just shift paradigms. What AI will try to do is ... If it doesn't know the answer, it's going to make up something. It's not programmed to say, "Hey, you know what? I don't know that, no." No AI has ever told me that. Instead, they'll fabricate something-

Marcus Johnson:

Sounds like people.

Gadjo Sevilla:

And try to justify it.

Marcus Johnson:

Some people. People built them so they're going to be a reflection of us. But you're right, there are people out there who will say, "I don't know." And no AI, at the moment at least, is going to say that.

Jacob Bourne:

I mean, I agree mostly with what Gadjo said. I'd also add that I think that IQ tests aren't a great benchmark of AGI, determining if we're at AGI level or surpassing human intelligence. If they were-

Marcus Johnson:

How come?

Jacob Bourne:

Then we would say, "Oh, look, the AI model scored genius level, so we should be able to let it operate and perform tasks without human supervision," and it's not at that level yet.

Marcus Johnson:

Right.

Jacob Bourne:

And, of course, the reason why is because it really can't do what a human can do. And, again, it gets back to this general-level intelligence, which I don't think IQ tests really test for; they test for something very specific. Versus being able to have a deep understanding of the real world and solve problems in that real world. I don't think IQ tests really do that.

Gadjo Sevilla:

Any tests that you use would have to evolve with the AI available, right? You can't set a standard and say, "This is it" because it's continuously changing. Is it more of a numbers thing, more of a comprehension thing? I think that's going to be the challenge. It's going to be an-

Marcus Johnson:

Emotional intelligence.

Jacob Bourne:

But I think at the same time, you look at the vast quantities of data that AI models are able to process at a much faster rate than humans can think, and then make predictions and draw insights from. I mean, it's stunning, and people can't even come close to doing that. So I think it's getting more general, but there's still quite a long way to go before it reaches the general level of human intelligence.

Marcus Johnson:

Most Americans think AI will become more intelligent than people. According to a 2025 YouGov study, 47% of people said that AI will eventually become more intelligent than people. 13% think it already is. 24% said it's unlikely. The rest weren't sure. That's all we've got time for this episode. Friday we will be back talking about the ways that AGI might change our lives and when it's most likely to get here, if at all, of course. Thank you so much to my guests. Thank you to Gadjo.

Gadjo Sevilla:

Thanks again.

Marcus Johnson:

Yes, sir. And to Jacob.

Jacob Bourne:

Thanks for having me.

Marcus Johnson:

Yes, indeed. Thank you friends. Thank you to the whole editing crew, and to everyone for listening in to Behind the Numbers: an EMARKETER podcast made possible by Cint. Subscribe, follow, leave a rating, and also maybe a cheeky little review if the mood takes you. Sarah will be back with the Reimagining Retail show for you on Wednesday.



 
