Marcus Johnson (00:00):
In marketing, everything must work seamlessly, or efficiency, speed and ROI all suffer. That's why Quad is obsessed with making sure your marketing machine runs smoothly, with less friction and smarter integration. Better marketing is built on Quad. See how better gets done at www.quad.com/buildbetter, hit "Talk to our experts" and get help today.
(00:33):
Hey gang, it's Thursday, July 3rd. Jacob, Grace and listeners, welcome to Behind the Numbers, an EMARKETER video podcast made possible by Quad. Joining me today, we have two analysts, both living in California, both covering AI and technology for us. One writes our long form content, that's Jacob Bourne. Welcome, fella.
Jacob Bourne (00:54):
Thank you so much for having me today.
Marcus Johnson (00:55):
Yes, sir. And the other writes our short form stuff. It's Grace Harmon. Hello.
Grace Harmon (01:00):
Hi guys, nice to be here.
Marcus Johnson (01:02):
Yes, indeed. Today's fact. Does it matter if you drink from a wider or narrower drinking glass? Do you guys have a preference?
Jacob Bourne (01:16):
Wow, I've never considered that question before.
Marcus Johnson (01:18):
Yeah.
Jacob Bourne (01:18):
Ever. Not once in my life. If you're-
Marcus Johnson (01:22):
That's fair, because you have a life.
Jacob Bourne (01:23):
... an infant, you want narrow, right? But it kind of widens as you grow up.
Marcus Johnson (01:28):
Oh, okay. That's-
Jacob Bourne (01:30):
It's [inaudible 00:01:30] right?
Grace Harmon (01:30):
Yeah, I was going to say, "Narrows-"
Marcus Johnson (01:30):
... bad logic.
Grace Harmon (01:30):
... so-
Marcus Johnson (01:30):
Were you?
Grace Harmon (01:30):
I was going to say "Narrow."
Marcus Johnson (01:31):
Oh, yeah. Grace is like, "I needed a funnel." Apparently, it does matter. According to a recent study by Nathalie Spielmann and Patricia Rossi, published in the Journal of Business Research, I think it was last year, they found that people, apart from Grace, prefer wide-rimmed drinking glasses to narrow-rimmed ones. So, red wine glasses versus champagne flutes, that would be one example.
(02:05):
A write-up of the study by Lisa Ward of The Wall Street Journal notes that folks are not only prepared to spend more on beverages in wider glasses, but they're also more likely to reorder more expensive drinks that are served in a wider glass, and also drinking from wider glasses makes people feel better.
Jacob Bourne (02:22):
About-
Grace Harmon (02:24):
Okay, maybe I changed my mind then.
Jacob Bourne (02:26):
Yeah. But the champagne flute forces you to sip. I think that's the point of that, right? Sometimes-
Marcus Johnson (02:32):
Oh, yeah.
Jacob Bourne (02:32):
... maybe you don't want to be drinking things very quickly.
Marcus Johnson (02:34):
Yeah.
Jacob Bourne (02:35):
Depending on what though.
Marcus Johnson (02:36):
Wait, so if you had champagne in a wider-rimmed glass, you would down it?
Jacob Bourne (02:39):
Well, I don't know about-
Jacob Bourne (02:42):
I'd have the option, Marcus.
Jacob Bourne (02:42):
... down it, but certainly, yeah. I think it would go down quicker, yeah, than something narrow. Yeah.
Marcus Johnson (02:47):
Yeah. I just think they're easier to drink out of, aren't they?
Jacob Bourne (02:50):
Yeah, right. Yeah, that's right, yeah.
Marcus Johnson (02:53):
Because if it's narrow, it rushes at you. Doesn't it, Grace?
Grace Harmon (02:55):
I feel like I'm less likely to spill with a narrow glass.
Jacob Bourne (02:58):
Well, that's the whole... Yeah, that's true too.
Marcus Johnson (03:03):
Trying to get my order in a sippy cup, that's where I live. Not really, that's crazy. But sometimes I do think it would be better. It's like Velcro. Sometimes I wish Velcro were more socially acceptable for an adult. People have told me it is, but I disagree. Anyway, today's real topic: how AGI will change our lives, and when the hell is it actually going to get here?
(03:30):
So, on Monday, we talked about how to define artificial general intelligence, or AGI. And then we discussed how smart AI already is when compared to people. As I mentioned, today we're going to talk about how it's going to change our lives, how much AI companies need to ask society what it wants from AI, and when it might actually get here.
(03:55):
Jacob, let's start with what area of our lives... We came to the general consensus, if you will, that human-level intelligence is what we mean when we talk about AGI. So, what area of our lives will AGI change the most, do you think?
Jacob Bourne (04:20):
Yeah. I think the most, or probably one of the two most immediate, is going to be the workplace.
Marcus Johnson (04:28):
Okay, how so?
Jacob Bourne (04:28):
Well, people exchange their intelligence and capabilities for money through work. And if you have an AI that is as intelligent and as capable as a person, then that's of course going to shift the entire paradigm of work, the meaning of work. It could be that it just represents a shift of roles. People might adopt roles that are more based on human interaction and human relationships while AI does all the grunt work.
Marcus Johnson (05:12):
Is there a tipping point in terms of, okay, these are the things that I do as a person, and what's the threshold where, once AI starts doing X number of those things, I become obsolete, so to speak?
Jacob Bourne (05:28):
Yeah, I think-
Marcus Johnson (05:29):
Could AI ever do enough to make humans... because you just mentioned, a lot of it's going to be emotional intelligence, like empathy, how well you can motivate people. And so will AI ever replace-
Jacob Bourne (05:42):
Well, and I think AI can do some of those things too. Certain tests show that AI can be more empathetic than people. Not all tests show that, but some do. But I think that there might be just a desire to have people in certain roles, just because they're people. But at the same time... So I think the question is will we be seeing a shift in roles, or will we see people just shifting to universal basic income and not working anymore? I think that's a basic question, a fundamental question about this.
(06:15):
If AI really gets this general intelligence but still requires a lot of human supervision, well then those will be the roles. People will just be AI supervisors. So I think it really depends on the level of autonomy. How much can you trust AI to do certain things without human supervision? And so those are the questions that I think will show us whether it's going to be just people not working at all or just a shift in roles.
Marcus Johnson (06:43):
So the workplace. Grace, how about you? What area do you think AI is going to influence the most?
Grace Harmon (06:47):
Well, I think most immediately, like Jacob was saying, there's still a pretty big amount of human oversight needed. I think one of the big effects we're seeing on the workforce right now is reductions in workforce to make way for AI spending. So in preparation for AI innovation, for AI initiatives, there are these really big cuts just to reduce employee costs.
(07:09):
I was thinking in terms of the top two. To a degree, scientific discovery for sure. But work and the economy. We're already seeing a lot of wealth concentrated at these big AI companies, these big, big tech firms that are in a way becoming the banks of today, and it affords them a lot of legal sway and a lot of economic power. So that's what I was thinking about outside of the labor market. But that is just the big one, and I think Jacob had a lot of the big, important points there.
Jacob Bourne (07:38):
Yeah. And to add to what Grace said about already seeing cuts to make way for AI spending, there's also been some regret on that front. After some AI layoffs, firms are like, "Well, AI is not quite there. We wish we had those people back."
Grace Harmon (07:51):
Yeah. Well, I think it was Klarna that had to roll back its AI customer service. And I do think that was because the customer experience went down. And there are two things there that I would guess: one, that the AI wasn't doing a good enough job, but also that people just don't really want to engage with that.
Jacob Bourne (08:09):
And I think that's a big, big part of this and a big part of the future trajectory, that people might just want to interact with people sometimes.
Marcus Johnson (08:16):
Yeah. It was a weird assumption that I'm going to want to read stuff written by AI. And there was a really good quote, I can't remember the article it was in, and it wasn't from the author, but they were citing a member of the general public as saying, "Why would I want to read something that someone hasn't taken the time to write?"
(08:37):
And I think that gets to the heart of it for me: there's something about the human experience, and there's a reason that we like talking to other humans. We like to experience art or music or whatever it is from other humans. And when it comes to customer service, we like getting help from other humans.
Jacob Bourne (08:57):
Yeah. And that's really interesting, Marcus. Just this morning, I was reading a study about how people sometimes prefer AI-written poetry if they don't know whether it was written by an AI or a human. But once they learn it was written by an AI, then they want to-
Marcus Johnson (09:13):
There we are.
Jacob Bourne (09:16):
They're like, "I don't like it as much anymore."
Marcus Johnson (09:16):
Yeah, yeah. I think that's absolutely true.
Jacob Bourne (09:18):
There's a bias there.
Marcus Johnson (09:19):
There was an article by Charlie Warzel of The Atlantic that I was reading, I think a year or so ago. In the first few paragraphs, you're reading it fine. And then you get to the fourth paragraph and he's like, "Oh, by the way, the first three were written by AI." I felt some kind of way. I felt deceived, and I felt like I couldn't relate to the article as much because it wasn't written by a human.
Grace Harmon (09:39):
There's also just more distrust. That doesn't mean that no one has distrust in the flaws or the capabilities of a human journalist, but I think you then scrutinize what it's saying a lot more.
Marcus Johnson (09:53):
Yeah, yeah. Absolutely, absolutely. I think another part of it is when people are saying, "I don't want to speak to humans, I would rather deal with AI or a machine." What they're saying there is, "I'd like things to be a bit faster. I don't like having to wait on hold for 40 minutes for customer service with an airline. And if I can speak to an AI quicker, then I want that." But they might not really want that. What they're saying to you is, "I would like to speak to a person if it was really fast and if they could help me as efficiently as maybe an AI system could."
Grace Harmon (10:24):
Well, there's also some conflict there, where the most important thing, I think, for online shoppers is speed, but also the most important thing within the customer service experience is human connection. It's contradictory, and that's not a bad thing.
Marcus Johnson (10:39):
Yeah.
Jacob Bourne (10:39):
Yeah.
Marcus Johnson (10:40):
Jacob, what other areas do you expect AGI to really make an impact?
Jacob Bourne (10:45):
This one's interesting based on what we were just talking about. I think the other area is actually personal relationships, which seems counterintuitive to what we were just saying. But companionship is a rising use case for chatbots, as well as mental health support, life coaching and things like that. And probably it's just a subset of the population that feels comfortable using AI in that way.
(11:13):
But I think from the beginning, after ChatGPT launched and we saw these open-source models on the rise, there were dedicated companionship platforms that came about that were very popular. But now we're seeing that people are using ChatGPT for those kinds of use cases as well, especially with the advent of voice mode, which makes it a bit more personal. So with the advent of an AGI that's going to be able to understand the nuance of human emotion and social situations even more, and then of course pairing AGI with robotics, I think we're going to see sweeping changes in terms of people really turning to machines for companionship.
Marcus Johnson (12:02):
Yeah. Grace, is there another way that you think AGI is going to change things significantly?
Grace Harmon (12:09):
I think another key area is going to be scientific discovery. So creating cures for diseases, designing clean energy systems, things like that, if the AGI is able to autonomously make hypotheses and design experiments. But that also plays into the future of work again. Right now we talk a lot about how software developers and coders are really vulnerable to AI and to job loss, but that would also bring in entire other fields: medicine, scientists. But that is a level of AGI that would have to be far more advanced than customer service capabilities. I think far down the line, it is something that could be a big benefit.
Marcus Johnson (12:47):
Yeah, yeah.
Grace Harmon (12:48):
Far down the line.
Marcus Johnson (12:49):
I think scientific breakthroughs are a really good one, and one that might not get the level of pushback or concern as other areas that AI is disrupting. The Nobel Prize in chemistry last year went to some folks who were working on the protein structure problem, and they solved something people had been trying to solve for 50-odd years. So we're already seeing its influence in medicine, whether it's this or whether it's looking through scans to check for cancers that the human eye might not be able to spot. I think that's a really, really good way that it's going to make an impact in our lives.
(13:38):
It feels inevitable that it's going to... It has made an impact, and it's going to make even more of an impact in our lives. And it feels different from other technologies, in a sense. Sigal Samuel of Vox was writing a piece titled "AI companies are trying to build God. Shouldn't they get our permission first?" The public did not consent to artificial general intelligence.
(14:04):
And so there's a question here: how much permission do AI developers need to get from society before irrevocably changing it with AGI? Grace, when I first read that, I was thinking, well, no one asked for the iPhone, no one asked for Facebook, and they profoundly changed our worlds. But the more I read the article, Ms. Samuel makes some very convincing arguments as to no, this is bigger than that, this is more important than that. And actually, maybe we should be speaking to the public, maybe even having a referendum on it, which has been done in the UK.
(14:40):
We had a referendum on ranked choice voting: do we want to change how our voting works? We had a referendum on Brexit: do we want to ask the public what they felt about being in or out of the European Union? So we have, in the past, stopped and said, "Hang on, big decision. We should ask the general population what they think." What did you make of this idea of getting permission before developing AI even further?
Grace Harmon (15:06):
Yeah. I think the argument in the article that stuck out to me the most was that the simple fact of using AI means that you're giving consent to what the companies are doing.
Marcus Johnson (15:15):
Yes. Our use is our consent, exactly.
Grace Harmon (15:16):
Yeah. And you could say the same thing about using Facebook, or Instagram or anything like that, that your use of the platform equates, and by the user policies it does equate, to consent to data scraping and sometimes a lack of user privacy. In my opinion, consent to use shouldn't mean an agreement that you approve of or are okay with everything that a company is doing.
(15:39):
I also think that there is a lot of pressure from employers, and then just curiosity and interest, that's driving use, rather than a huge interest in having the tech be a big part of your personal or professional life. So there is the question of, if you've used it lightly or if you are being told to use it, does that mean that you automatically get factored into being okay with everything that the companies are doing? I would say that the control we have now in terms of permission is, like you were saying, more as voters being able to push lawmakers to set up some kind of framework, things like that. Because the ship has sailed otherwise.
Marcus Johnson (16:17):
Yeah. Yeah, the consent part's really interesting. That jumped out at me as well. Consent versus informed consent, where we fully understand the associated risks. You could argue that it's rarely informed consent, and maybe that's the responsibility of the individual to be more informed. Maybe it's the responsibility of the companies to help explain what the thing is in the first place. But Ms. Samuel was saying, "Sometimes we consent to technology because we fear," as you were saying, Grace, "that we will be at a professional disadvantage if we don't use it."
(16:47):
So if you're a journalist and you're using social media, did you really consent to it or are you doing it because you have to? If you're at a company and the company is saying, "We need to use AI," and you are using it because you're nervous about falling behind professionally, are you really consenting to use it?
Jacob Bourne (17:03):
Yeah.
Marcus Johnson (17:04):
Yeah.
Jacob Bourne (17:05):
I think the stakes are just so high with AGI. We talked about the sweeping changes to the workforce, and to personal relationships. Of course, scientific advancement is positive, generally speaking. But I think you can achieve that same type of advancement with powerful, narrow models, not general models.
(17:26):
And I think with AGI, there's also concern among the people building this technology themselves who say, "It actually poses an existential risk to humanity. We don't exactly know what this thing is going to do once we build it." Current testing of existing models shows that even when powerful AI models are aligned with human values, if they have a certain objective, they're willing to lie and deceive in order to achieve that objective.
Marcus Johnson (17:54):
Yes.
Jacob Bourne (17:54):
And so if you have a model that's as intelligent as or more intelligent than a person, that becomes pretty worrying. So I think there should be a strong level of public support needed to proceed with AGI. And I think this issue is really looming large right now, because you have legislation proposed by the Trump administration to block the ability of states to regulate AI for 10 years, which puts us at 2035. Now, under the current wording of the bill, states would either comply with that ban or lose federal funding.
(18:38):
But at the state level, that's the way that public support gets communicated into hopefully sensible regulation. And if states lose the ability to do that, then I think that kneecaps the permission-based AI development that's in question here.
Marcus Johnson (18:56):
Yeah.
Grace Harmon (18:58):
I would say I think it's a little bit different in the EU, but I do think in the US that ship has sailed in terms of being able to rein things in. Even the threat of losing federal funding isn't enough for some states to follow some of the policies being put into place by the administration. I think that in terms of being able to have any control over what these companies are doing, what they're developing, for the most part it's over, that ship has sailed.
Marcus Johnson (19:23):
Do you think that could be influenced by the international community? Because one of the things pointed out in this article was that we have a nuclear non-proliferation treaty. We have a biological weapons convention. We have treaties, difficult to implement, not perfect, but they are there to keep people across the world safe. Ms. Samuel was saying there's the idea that we can't stop technological innovation, that we're too far gone, it's going to happen.
(19:53):
But she points out we stopped trying to clone people, and we decided you can't put nuclear weapons in space. And so do you think there could be pressure from outside the US, from the UN, from somebody internationally, to put some rules in place and say, "Hey, actually, are there certain kinds of AI that shouldn't exist?" That, I think, was a good question in the piece. Do you think that's possible?
Jacob Bourne (20:16):
I think it's possible. But I think a problem with that is... So with nuclear weapons, we know what they do. We don't have an AGI yet, and so we-
Marcus Johnson (20:26):
That's a great point.
Jacob Bourne (20:27):
There's thoughts about what it could do, and concern about what it could do. But we haven't seen this thing in action, and so I think it's really hard to pass legislation about something that doesn't exist yet.
Marcus Johnson (20:40):
Yeah, yeah.
Grace Harmon (20:40):
Yeah. I absolutely agree with that, that we don't know exactly what the consequences are. I also think that within some of the legislation that's been proposed, like the California AI bill that was shot down, we still don't know what you're testing for in terms of capabilities. I think part of it was testing for whether you could use AI to create a nuclear or biological weapon, and does that mean being able to give instructions to a human? Does that mean being able to do its own coding? We don't really know exactly what to test for, because like you said, we don't know what it's capable of. We don't know the consequences.
Marcus Johnson (21:15):
The stakes do seem higher here, Jacob, as you said. I thought the line in the piece from Jack Clark, one of the co-founders of the AI company Anthropic, was interesting. He had told Vox that it's really weird that this is not a government project. This is someone who founded a private AI company saying that, because of how significant this is, it's strange that it is in the hands of private firms. And to your point, you have to wait for the thing to be built before you can regulate it. But if you wait for it to be built, maybe it's too late.
Jacob Bourne (21:50):
Yeah. And I think this is a big difference between the US and China too, where you see the Chinese government has much closer ties to its private tech sector, and much more control over it. And of course they're pushing for AGI as well. In the US, historically the tech industry has had very loose ties with the federal government. But I think we're seeing that maybe slowly change. Probably because of AI, we are seeing more partnerships between tech and the government, and certainly a lot of lobbying going on.
Marcus Johnson (22:29):
Yeah. All right, so let's end with this. When will AGI arrive? Will we have some form of AGI before 2030? Cade Metz of The New York Times was writing about this, and what we learned on Monday's episode is that identifying AGI is essentially a matter of opinion. So this is, I think, a very interesting question, but also maybe a bit of a silly one, because how will you know when it's here? But based on how you would both define AGI, Jacob, when does it get here? When does some form of AGI arrive?
Jacob Bourne (23:05):
Yeah. Yeah, based on the simpler definition, and also it could just be the kind of thing where we never really settle on a definition but we know it when we see it, I would say that something we could call AGI is going to arrive by 2030. And I say that because you just have to look at the giant leaps that we've already made. AI was first invented back in the 1950s, and it was a slow pace of development for many decades.
(23:37):
And then in 2022, you see ChatGPT, you see this enormous leap in capabilities. And over the past few years, we have seen that increase. But not only have we seen an increase, we've seen this huge amount of global investment and enthusiasm in AI advancement. And I think that for any kind of roadblocks, whether a lack of data quality or limitations with chips or model architecture, the amount of investment is going to blow past those limitations, and we're going to see this kind of powerful AI arrive in the next few years.
Marcus Johnson (24:13):
Yeah. Grace, where do you land?
Grace Harmon (24:15):
I had just about the same idea, of about five to seven years. I'd also posit that the other definition, for some companies, has been financially based. I think with Microsoft and OpenAI, it was whether AI can generate $100 billion in profits, which is complicated. Sam Altman said that the company is losing money on pro subscriptions, because it costs so much to run and people are using it more than expected. So from a financial point of view, it's a balancing act. I think it'll take at least five years for companies to find the balance between the cost of a powerful AI model and getting profits back from it. But in terms of the vague definition of what AGI is, I agree with Jacob.
Marcus Johnson (24:57):
Yeah. Yeah, OpenAI had said, we talked about this on Monday, that it's a highly autonomous system that outperforms humans at most economically valuable work. So again, focusing on the dollars-and-cents side of things. As for the arrival date for AGI, I think one thing everyone can agree on is that it varies radically based on who you ask. And I went and looked to see, okay, what do people think? If you picked any year in the future, someone will agree with you.
(25:28):
Anthropic's CEO, Dario Amodei, thinks powerful AI, that's his phrase for AGI, might arrive as early as next year, 2026. Google co-founder Sergey Brin and Google DeepMind CEO Demis Hassabis think AGI will arrive sometime around 2030, so they are in agreement with both of you. There was a recent analysis from Cem Dilmegani, principal analyst at AIMultiple, combing through close to 9,000 AGI predictions from scientists, AI experts and entrepreneurs made between 2009 and 2023. They averaged the data and found there's a 50% probability we will reach human-level intelligence in machines between 2040 and 2061.
(26:18):
However, McKinsey wrote that most researchers and academics believe we are decades away from realizing AGI. A few even predict we won't see AGI this century, or ever. Rodney Brooks, a roboticist at MIT and co-founder of the company iRobot, thinks AGI won't arrive until the year 2300, so hundreds of years away.
Jacob Bourne (26:39):
Well, if we can't ever agree on the definition, then I suppose we will never see it, right?
Grace Harmon (26:43):
Yeah.
Marcus Johnson (26:43):
Everyone could be right. Exactly. We're all right, we're all wrong.
Jacob Bourne (26:47):
Well, one more thing on this outlook: this whole 2030 prediction is also in line with when some people think we're going to see the first quantum computer that can outperform classical computers on practical tasks. There's a close relationship between AI and quantum computing, in that one can speed up the development of the other. And so that could be one reason why we're seeing this 2030 date thrown around: AI could push quantum computing forward, and vice versa, a symbiotic relationship. But I think if both were to be achieved within five years, we would see enormous changes from that.
Marcus Johnson (27:38):
Yeah.
Grace Harmon (27:39):
Yeah.
Marcus Johnson (27:40):
We shall see. Thank you so much to my guests for hanging out with me today, that's all we have time for. Thank you first to Jacob.
Jacob Bourne (27:46):
Thanks so much for having me.
Marcus Johnson (27:47):
Yes, indeed. And then to Grace.
Grace Harmon (27:49):
Great talking to you guys.
Marcus Johnson (27:51):
And thank you to the whole editing crew, and to everyone for listening in to Behind the Numbers, an EMARKETER podcast made possible by Quad. Make sure you subscribe and follow, and leave a rating and review if you have time. We'll be back on Monday. To our American listeners and viewers, happy 4th of July weekend.