Marcus Johnson (00:00):
Are your brand campaigns as effective as they could be? If you're only getting insights when the campaign is over, then the answer is, of course, no.
(00:08):
To make better campaign decisions, you need real-time measurement. You need Lucid Measurement by Cint. Discover the power of real-time brand lift measurement at cint.com/Insights. That's C-I-N-T.com/insights.
(00:31):
Hey, gang, it's Monday, June 9th. Grace, Henry, and listeners, welcome to Behind the Numbers, an EMARKETER video podcast made possible by Cint.
(00:39):
I'm Marcus, and today we'll be discussing how workers feel about AI, and the biggest gaps between AI experts and the general public. Joining me for that conversation, we have two people. First, our analyst covering everything technology and AI, living in California: it's Grace Harmon.
Grace Harmon (00:55):
Thanks for having me.
Marcus Johnson (00:57):
Of course, thank you for being here.
(00:58):
We also have with us our SVP of Media Content and Strategy, hanging out in Maine. It's Henry Powderly.
Henry Powderly (01:06):
Hey, Marcus.
Marcus Johnson (01:07):
Hey, fellow.
(01:08):
Today's fact: What are the oldest companies in the world? So the other day I did the oldest company in America, but now I've gone and found a piece by Iman Ghosh of Visual Capitalist, who wrote it a few years ago referencing Business Financing; that's where this information came from.
(01:35):
And the oldest company in the world, in first place, is a Japanese construction company. Can't pronounce this, but it's Kongō Gumi Company Limited. Founded when? When do you guys think this, the oldest company, has been in business since?
Henry Powderly (01:54):
1200?
Marcus Johnson (01:56):
Back further. A lot further.
Henry Powderly (01:58):
Wow.
Marcus Johnson (01:59):
Shockingly. I know 1200 is a good guess. That's the oldest university. I believe Oxford is around 1200. 1100, 1200 ish.
Grace Harmon (02:08):
900?
Marcus Johnson (02:09):
Even further: 578. This company's been in business 1,500-odd years.
(02:20):
Second place goes to Austrian restaurant St. Peter Stiftskulinarium, which got started in 803. And in third place is German winery and distillery, Staffelter Hof, in 862.
Henry Powderly (02:37):
I was wondering if there'd be a winery in there. That makes sense.
Marcus Johnson (02:41):
It should be illegal to say "established in" if you're not at least a hundred years old. You know when people are like, "Oh, established in 2016." What you're saying is you've been doing this for nine years, and that's not impressive, especially compared to this Japanese construction company, which can say, "We've been in business since the Western Roman Empire fell," around 500 A.D. That is amazing. Anyway, here's the real topic: how Americans feel about AI.
(03:13):
So AI is everywhere, at least it feels that way, but how are folks engaging with it? According to some May data from Pew Research, nearly 60%, six zero, of American adults had viewed a web page with an AI-generated summary, like AI Overviews. But just 13% had visited the site of a generative AI tool, and 10% had looked up an AI-related term. So people are using AI, but how do they feel about it? YouGov took two readings, one in December of last year and one in March of this year, to try and gauge how Americans' feelings on AI have changed.
(03:50):
Before I give the results, Henry, can you fill in the blank for us, to get your take on this? Americans' feelings towards AI have gotten more what in the past couple of months, or this year?
Henry Powderly (04:03):
I think Americans' feelings towards AI have gotten more complicated in the past few months.
(04:08):
I think on one side, you've got genuine enthusiasm growing. We know that more and more people are using tools like ChatGPT; I think it's cited at 5 billion queries a week now. We've seen viral things like the Studio Ghibli image generation, or the "action figure yourself" challenge that I saw a lot of people doing. And then, like you just said, I think most people are encountering AI in search and seeing how that's changing the experience.
(04:36):
But at the same time, you've got genuine concern growing around AI, around a number of situations. One, safety and creativity, and whether we really want to give up so much creativity to AI tools. There are concerns about the environmental impact of AI, and then of course there are concerns about jobs and how this continues to change, and will further change, how people work.
Marcus Johnson (05:02):
Creativity is a good one, and I'll come to that in a second. Grace, I want to get your take first, and then we'll come back to that word, "creativity", because that seems to be very important for folks.
(05:11):
In terms of how AI is sweeping the nation, the world, Grace, your word, if you had to describe how people's feelings are changing towards AI, what would it be?
Grace Harmon (05:23):
I would definitely agree about excitement. I think I would say cautious and skeptical. Like you said, there's a lot of interest, especially on the enterprise side, and a lot of excitement there. But I think that people are starting to see some of the more personal impacts AI can have, like on jobs and on human connection, and, like you said, sustainability. So I think there are a lot of concerns about its accuracy as well.
(05:47):
So I would say I think there is more of a cautious sentiment growing despite a lot of the fun aspects that AI can have, like the Studio Ghibli trend.
Marcus Johnson (05:57):
Your take seems to be reflected in this data, because that's exactly the word. In the YouGov survey, the strongest feelings towards AI in the last couple of months, since December: cautious, 54%, then concerned, 47%. They're both up a fraction. Then skeptical, which was also the feeling that had grown the most. So why were they concerned? Concerned was one of the top feelings, but why?
(06:21):
Most folks were concerned, with all of these shares between 50 and 60% of people, but most of all about deepfakes, then the erosion of privacy, then political propaganda, and then the replacement of human jobs and the manipulation of human behavior, which were joint fourth.
(06:35):
But Henry, you just listed a bunch of facets of AI, and it's almost... AI feels like too big of an umbrella to talk about all this, because you do have so many ways that it's being woven into people's lives that it feels like it's hard to have an opinion on AI as an umbrella term.
Henry Powderly (06:55):
I think that's fair. We talk about AI a lot in terms of writing, and creativity, and tasks that many people do in their jobs, and a lot of times that's generative AI, but there's a whole realm of AI that's much more on the data and the infrastructure side that we could talk about as well.
Marcus Johnson (07:14):
Speaking of creativity, there's some research on how workers feel about AI.
(07:20):
Pew found that last October workers felt more worried, 52%, than hopeful, 36%, but they're not miles apart. They're relatively close, but why might they feel worried?
(07:30):
Writer had a March study looking into the main reasons why US workers were working against their company's gen AI strategies, working against them. Most, 33%, thought that AI diminished their value or creativity, to use the word just referenced by Henry.
(07:50):
In joint second, with 28% each, were three other reasons: AI has too many security issues, I don't want AI to take over my job, and the company's AI tools are low quality. A quarter of workers said AI was adding to their workload.
(08:02):
Grace, you've written recently about employees trying to sabotage their company's AI efforts. What did you find there?
Grace Harmon (08:10):
I think, for one thing, it isn't necessarily always going to be sabotage in terms of turning in fake numbers, or completely falsifying information about how it's impacting your job. A lot of it might just be not giving your employer insights on what you're learning.
(08:25):
I think that there is not necessarily as much excitement from employees about AI as there is from executives. The people who are in the trenches actually using it are able to see where it's useful and where it's not.
(08:38):
And I think another really big issue is that there's just an enormous training gap. Employees are generally expected, at most companies, to figure out how to use it all on their own. A lot of times they need to pay for the tools themselves, or what the employer is giving them is just not that good a fit for their job.
(08:54):
Like Henry was saying, it isn't always just generative AI; that isn't the one catchall category for AI. There are a lot of different products and a lot of different companies. And I'd say, in terms of morale, there are also a lot fewer training opportunities afforded to women and older generations, which is a big issue.
(09:13):
I think that the people who are really using it are figuring out where it's actually useful and whether or not their employers are giving them the resources.
Marcus Johnson (09:22):
It does seem like the Wild West at the moment. Henry, I'm wondering two things: one, will that change? Will it be that most companies in a few years do have a policy, do have training, or is it going to be a test-and-learn phase for a much longer period of time?
(09:38):
And then two, this idea about folks not really knowing how to use it, being left to their own devices, not really knowing how to measure success when it comes to AI. Because there was some data saying employees thought AI chatbots sped up work, but fewer people thought that they improved the quality of that work, so they're trying to figure out when and where they should be applying AI in their jobs.
Henry Powderly (10:01):
That's an interesting question, because on one hand I do think the need for training is going to continue to grow, but I don't think it's going to be so easy for companies to decide to invest in that training. [inaudible 00:10:13] resource.
(10:13):
I think that's a resource question, and I think what you're seeing first is job changes as a result of AI. You're hearing a lot of perspective about how the new AI jobs of the future are going to emerge, that people who are displaced in jobs where AI is now pretty good at taking on that task are going to move into more of an operator role or something, but they need to be trained to do that. And I think we need to see companies make those investments, and I'm not entirely sure that there's a lot of anecdotal evidence right now that we're seeing that.
Marcus Johnson (10:50):
How likely are we, do you think, to see a significant consumer or employee pushback over the next 12 months or so?
(10:59):
Grace, does it feel like there's a brewing consumer/employee revolt happening, or is that just because the technology is so new and that will eventually fizzle out, or do you think it will build into something more significant?
Grace Harmon (11:11):
Absolutely. I don't know about to the level of a revolt, but a pushback, absolutely. Think about Duolingo coming out and really identifying themselves as an AI-first company, and how poorly that's gone over. I'll be interested, when their next earnings report comes out, to see how subscriptions have been affected, because I think they really will be.
Marcus Johnson (11:32):
That's a good one. So the context there, it's been written about by a lot of people, but I was reading about it in Fortune, and Matty Merritt at Morning Brew explained it: Duolingo's CEO, as Grace was saying, came out and said they're getting rid of contract workers and replacing them with AI. It's a language-learning company. They will only allow new hires once teams prove they can't automate the work.
(11:56):
Henry, they're not the only ones. Grace, you were saying Shopify has done this as well, as have a few others. What are your thoughts on this stance from Duolingo, and the idea that this movement, if you want to call it that, these frustrations, are going to build into something more?
Henry Powderly (12:11):
And I think it's natural to see that as frustrating, because the way Duolingo positioned it was as a reduction, and I don't think we should expect the public to see a reduction in a positive light. Had it been new jobs being created, and training being offered for people who wanted to move into those jobs, I think that could have been received a little differently.
(12:33):
And I think Klarna had a similar situation go on, where they famously talked, about a year ago, about how much they had outsourced to AI in the customer service center, and I think now they're hiring humans back. So the pushback is really going to be based on whether the public sees the positives or negatives in what happened, and when it's just a reduction, I think it's hard for a positive to be seen.
Grace Harmon (12:58):
I don't think it helped anything, either, that contractors are more vulnerable. They're not getting benefits, things like that.
Marcus Johnson (13:07):
You were saying about being seen, Henry, and that leads me to some research. I think, Grace, you had some research from Quadient: 80% of UK consumers think AI use in customer service should be disclosed.
(13:20):
It sounds like a lot of people are okay with the technology. They just would like to know when it's being used, how it's being used, that it's not being used necessarily to take people's jobs directly. There's a lot of workers excited about AI.
(13:33):
Research from Henley Business School in the UK found 56% of full-time professionals, over half, were optimistic about AI advancements. However, 61% said they were overwhelmed by the speed at which the technology is developing.
(13:46):
Henry, how do you tackle that? How do you tackle that excitement and enthusiasm, and try to make sure that the speed of development isn't muddying the water, so to speak, in terms of how people feel about AI?
Henry Powderly (14:02):
Well, I think transparency is probably the best way forward for now. Even if the tool itself is very much managed by human intervention, I think the public, because of this cautious feeling that folks have, wants to know. And so I don't think you can go wrong being overly transparent on AI use.
(14:26):
But again, I think sentiment will change when the positive benefit is overwhelmingly seen. And so if people see that opportunities are being created, that there is efficiency being gained, and that the efficiency is benefiting people either in their jobs or in their day-to-day lives... I think, overall, you need to demonstrate that positive reaction through those results.
Marcus Johnson (14:54):
Demonstrating the positive side of AI, Grace, I'm wondering, do you think there's, A, a time limit here? Are people just going to wait to get through all of the tough parts and the figuring this out, and eventually, when AI starts to show how capable it is and a lot of the positive things it can do, will people still be ready to welcome it with open arms?
(15:17):
There's a few quotes here; let me throw this at you. One, a BBC article has a quote from someone saying, "Why would I bother to read something someone couldn't be bothered to write?" So there does seem to be a bit of a sentiment of pushback to articles being written, pieces being written, art being created, music being developed with the use of AI.
(15:38):
And then there's this other piece of data as well, about potential employees. So the Duolingo case is replacing employees, contractors, with AI that can do the job; that's people at the company being pushed out. But people also seem to be struggling to get a foot in the door. There's a recent Oxford Economics report analyzing the US employment rate, sorry, unemployment rate, for those aged 22 to 27 with a bachelor's degree. That's basically recent college grads.
(16:05):
They looked at a three-month moving average, and the jobless rate of recent college grads was creeping up, closer to 6% in April, compared to just above 4% for the overall workforce. So what do you make of the potential stop clock, the countdown for AI to get its act together before people wash their hands of it?
Grace Harmon (16:32):
I don't think there's that much of an option just to entirely sit it out. Companies are going to use AI in a lot of different facets. Just because you're not choosing to use ChatGPT, or Perplexity, or more direct tools like that doesn't mean you're not going to be engaging with it.
(16:45):
I guess I'd also add, about the jobs aspect, I don't know that that's all AI. There are a lot of economic factors in terms of how recent graduates are able to get jobs, or whether they're able to get jobs.
Marcus Johnson (16:55):
That's a good point.
Grace Harmon (16:58):
I don't see there being a public sentiment of, "We are all going to just wait it out and see how this develops in a few years." I could see that happening a bit more on the enterprise side and the company side, especially for people who are having to pay for those tools.
(17:16):
There's been pretty low adoption of Copilot. I think that's starting to pick up, but companies just weren't seeing that much added value from it, and that is a tool you are paying for. So for the people who are paying for those tools, if companies are, maybe incorrectly, assuming that they have created a product that's good enough to monetize, those are some of the tools that I think we might see people waiting out on, giving them a bit more time to simmer and cook and get better.
Marcus Johnson (17:44):
There does seem to be, I don't want to say an education gap, but two different kinds of realities playing out, in a sense, between AI experts and the general public. And I guess that's the case with a lot of facets of life. But in this particular instance, Henry, Pew Research was looking at the shares of adults and of AI experts who are very concerned about different aspects of AI.
(18:10):
The biggest gap between the two groups was regarding their concerns about AI leading to job loss; there's a 26-point gap there. And then the next-largest gap between the two was AI leading to less human connection, and then concerns about AI impersonating people.
(18:25):
Where do you think the biggest gap exists between experts and the general public when it comes to AI attitudes and concern?
Henry Powderly (18:33):
To me, the sentiment gap makes a lot of sense. Experts are gathering tons of information about how AI can evolve over years and how it's going to change industries; they're deeply invested in research; they're forward-looking in their approach to the topic. And that's not generally how the public approaches the topic. The public bases its opinion on what it sees day to day, and if the news is generally bad, or there's a pervading feeling of unease around the growth of AI, I think that's what's going to drive most of that sentiment.
Marcus Johnson (19:09):
Perception is reality. We were talking about this even with regard to tariffs. I think very few people would know the exact price of all of the things they buy at the grocery store, but the idea, from reading articles about it, the headlines, seeing the news saying prices are going up... You probably couldn't even tell if the price had gone up on specific items, but you just feel like it has, and so that starts to affect your way of thinking.
(19:32):
There were some gaps between the two parties, but a lot of the time they were on the same page, actually. When it came to concerns about biases in decisions made by AI, AI spreading inaccurate information, and people not understanding what AI can do, experts and the general public were pretty much in lockstep.
(19:50):
I think the biggest gap might be people's awareness of how often they actually are using AI. Grace, you were alluding to this: it's hard to just sit on the sidelines and wait this out, because so much of what we do already has AI built into it, baked into it. Gadjo Sevilla was saying, "AI is both unavoidable online and also barely noticed."
(20:10):
He was citing a March Pew Research study where 93% of over 2 million page visits by a thousand adults touched a page mentioning AI. However, while 65% of users saw AI-related terms in search results, just 0.05% of total page visits, so basically none, involved substantive AI engagement. "Basically, meaning most people encounter AI incidentally," he wrote, "not intentionally."
(20:41):
There is supporting research from Gallup and Telescope as well that found nearly all Americans use products that involve AI features, but nearly two-thirds don't realize it.
(20:52):
So AI is here, and a lot of people are experimenting with it, or being asked to experiment with it by their employer, or doing it of their own accord. Maybe, as we said at the beginning, Grace, you nailed it with the words for how some of the general public feel about AI: it's concern, it's caution, it's skepticism.
(21:17):
But all that said, if you still have to use AI, or you still want to get involved in AI, what are the best ways to do it? Nicole Nguyen of The Wall Street Journal wrote a piece detailing how to get started using AI. She gave two pieces of advice.
(21:31):
One was: choose your bot. She said to think of it as a massive trove of information and learning. She then listed a few: Microsoft's Copilot, Anthropic's Claude, OpenAI's ChatGPT, Google's Gemini.
(21:43):
The second one was, "Undo your search brain." Henry, I thought this was good. Instead of searching "bikes under $500", the way a lot of the time we just punch a couple of words, or one word, into the search engine and that's how we look for things, "provide more detail," she said. So a couple of sentences, as much context as you can.
(22:00):
Henry, what's your best advice for folks, especially the more skeptical, concerned ones, on how to get started with AI?
Henry Powderly (22:07):
That's funny because when you asked us to prepare on that question, that's exactly what I was thinking. And my suggestion was to change your default search engine in your browser to either ChatGPT or Perplexity...
Marcus Johnson (22:19):
How interesting.
Henry Powderly (22:20):
For a week, that's something I did, and talk about jumping in the deep end.
(22:25):
It was incredibly unsettling, because search, and how we search... And Google is a verb. We have very deeply ingrained habits around how we search for information, and by just making that small change, it completely forces you to think differently about the way you ask questions, and just everything. And so my first piece of advice would be to just jump in the deep end, change a default engine, and explore.
Marcus Johnson (22:52):
Just focusing on that for two seconds: what did you like about the experience? What did you not like about it? What were some of the learnings from that?
Henry Powderly (22:59):
Well, sometimes I didn't want a long response. I just wanted the link to the restaurant in town so I could order my takeout. So it's interesting; not everything requires a deeply cited, long summary. But for some things I really found it to be beneficial. Obviously I'm conflicted as someone who works in publishing, but being able to engage with everything right on that page and get the information I was looking for was really helpful.
Marcus Johnson (23:31):
It's similar to how we use different social media platforms for different activities, because I was speaking to someone and they were saying, "I use Google to find things immediately. I use ChatGPT to get more context and information about a certain thing."
(23:43):
And I wonder at what point we'll use whatever service for both, and it will know what we're asking and give us the thing we want quickly or with a bit more thought put into it.
(23:55):
Grace, how about for you?
Grace Harmon (23:57):
I think, going back a little bit, you were talking about that Pew Research study on the gap between experts and the general public. And one stat that I really remember from that is just this enormous gap in understanding of how much consumers are actually using chatbots. So if I had advice for how to get started with AI, a lot of it would be understanding the best use cases: knowing when you should use a chatbot, when you should use a search engine, and, if you're coding, what tools are best.
(24:25):
And I think that's hard just because a lot of companies, especially OpenAI, have an enormous number of products that are still in their lineup. It isn't just ChatGPT, it isn't just GPT anymore. There's o3, there's o3-mini, there's 4.5. There's just an enormous number of options. So it's difficult, but they're all going to have their best functions and strengths. So I think being able to look at what you need to get done, whether it's coding, or summarizing, or campaign generation, and knowing where you should be going and what is going to be the best fit for your use, I think that's really, really important.
Marcus Johnson (25:02):
That's a good one. In every other part of life, we use the right tool for the right job, and so why would this be any different?
(25:10):
Terrific. Well, that's where we have to leave our episode for today, unfortunately. But thank you so much to my guests for hanging out with me today. Thank you to Grace.
Grace Harmon (25:17):
Thanks, Marcus.
Marcus Johnson (25:18):
And Henry.
Henry Powderly (25:20):
Thank you.
Marcus Johnson (25:21):
Yes, sir.
(25:21):
And thank you to the whole editing crew, and everyone for listening in to Behind the Numbers, an EMARKETER video podcast made possible by Cint. Please subscribe and follow to hear about new episodes, and leave a rating and review if you could. We really appreciate it.
(25:34):
Now, tomorrow you can hang out with Rob Rubin on The Banking & Payments Show, where he'll be discussing how banks might be failing customers by not helping them with specific life milestones.