Marcus Johnson (00:00):
Today's episode is brought to you by Amazon Ads, recognizing excellence in advertising innovation through the Amazon Ads Partner Awards. Discover how award-winning technologies are helping brands achieve success across solutions, including Amazon Marketing Cloud and streaming TV. Visit advertising.amazon.com/partner-awards to learn more.
(00:28):
Hey gang, it's Thursday, December 18th, and welcome in to a special EMARKETER Podcast miniseries: AI-Driven Media Management with Gigi, made possible by Amazon Ads.
(00:38):
I'm Marcus, and here is Episode 1 of this two-part series with our Senior Director of Content, Jeremy Goldman, and Adam Epstein, Co-Founder and CEO of Gigi. I hope you enjoy.
Jeremy Goldman (00:51):
So first off, Adam, really excited to talk to you today.
Adam Epstein (00:54):
Thanks, Jeremy. Really excited to be here. Thanks for having me.
Jeremy Goldman (00:57):
Yeah, I have a lot of different things I'd love to dive into that I think our audience would really appreciate learning more about from you. You've said that Gigi started as a non-consensus bet that later required a major pivot. I know we've got a lot of builders in our audience who are very curious: what did you learn from that early CTV-focused thesis, and how did it lead you to rebuild Gigi as an AI-native company?
Adam Epstein (01:24):
Yeah, it's a great question. Gigi was initially founded on the premise that CTV advertising would be increasingly concentrated among a small group of large tech and big media companies, which is proving to be true. But the non-consensus bet was that those companies, Disney, Netflix, Paramount, et cetera, were incentivized and had the means to become increasingly walled gardens, like Amazon and Google's YouTube.
(01:53):
That obviously didn't turn out to be true. Our bet was that if they became walled gardens, there'd be an opportunity to unify the future fragmentation of CTV buying across those platforms. And so, on account of that not being true, we had to take a step back. It wasn't true because those platforms became increasingly open. All of those companies said, "We want you to buy however you want to buy." That ended up meaning any enterprise DSP, of which the Amazon DSP was a primary beneficiary.
(02:24):
And so, once that wasn't true, we took a step back. We had a small group of customers with whom Gigi had started on the Amazon DSP, doing some really powerful data-collaboration work with Amazon Marketing Cloud and AWS Clean Rooms to provide a unified buying experience across those two properties in Amazon's ad tech stack, so brands could buy the best possible CTV ads with the best possible audience and deterministically measure outcomes across Amazon signals and their first-party signals.
(02:53):
We did okay, but we definitely didn't hit product-market fit, and all of our customers said, "Hey, it's really cool what you're doing on the Amazon DSP, but we want you to provide a full-funnel solution and not just CTV." This was around last year, a little over a year ago. We said, "Hey, how do we do this?" But also, we knew that we needed to be AI first or AI native, and we didn't really know what that meant at the time.
(03:16):
So we totally took a step back. We said, "We want to continue to build in this space. We know we need to be AI first. What are the common characteristics of the most successful companies at the application layer in AI, and how can we bring those characteristics to media buying and measurement within Amazon's ad tech?" If you look at the most successful companies, many of them are able to identify a job in the workforce, whether that be software engineering and coding, outbound sales, support, or legal, in which there are a lot of rote, manual, repetitive tasks, and in which there are a lot of people.
(03:47):
And so, if we identify the job of the enterprise media manager, if you talk to any enterprise media manager, whatever DSP they operate in, there's a lot of buttons to push and there's a lot of rote manual and repetitive tasks. There's over a million-
Jeremy Goldman (03:59):
Oh, yeah.
Adam Epstein (04:01):
... media managers at enterprise agencies and brands. We thought that there would be a really good opportunity to identify the ways in which those people worked in those jobs and create agentic AI workflows and automations to enhance the people in those jobs.
Jeremy Goldman (04:16):
A follow-up question to that, because I'm really interested in it. Obviously, everybody knows that there is a lot of waste, and a lot of people in that media management function are doing things that they don't necessarily want to do, but it's part of the job. So how did you break that role down into workflows that could be automated or augmented by agentic AI? Maybe you could talk more about that.
Adam Epstein (04:40):
Yeah, and happy to get really nerdy here. We would literally spend time with media managers that we worked with and we said, "How do you spend your days? What are the buttons that you press in the Amazon DSP that you press the most, and how can we augment that with agentic AI?" A really good anecdote is there's a history tab at the order and line item level in the Amazon DSP in which one of our customers said, "Look, I could tell you what I do, but just look at my history and just see what buttons we press."
(05:16):
And so, we were able to identify, across a small group of large independent agencies, the buttons that media managers press most often. Then, in agentic AI terms, we were able to create tool calling for the Gigi agents so Gigi had those buttons. Those buttons are API endpoints within the Amazon DSP, and those API endpoints became tools that the Gigi agent could call. Then we would create a set of RAG, retrieval-augmented generation, content around the Gigi agent's tool calling to instruct Gigi how to properly execute each task on a frequent basis for each individual advertiser.
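The pattern Adam describes here, exposing DSP API endpoints as tools an agent can call, might be sketched roughly like this. The `Tool` structure, the endpoint names, and their payloads are purely illustrative, not the actual Amazon DSP API or Gigi's implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str  # surfaced to the model so it knows when to call the tool
    func: Callable

# Hypothetical wrappers around DSP API endpoints; real endpoints and
# payloads differ. Each "button" a media manager presses becomes one tool.
def update_line_item_budget(line_item_id: str, budget: float) -> dict:
    return {"line_item_id": line_item_id, "budget": budget, "status": "updated"}

def get_order_history(order_id: str) -> list:
    return [{"action": "budget_change", "order_id": order_id}]

TOOLS = {
    t.name: t
    for t in [
        Tool("update_line_item_budget",
             "Set the budget on a single line item.", update_line_item_budget),
        Tool("get_order_history",
             "List recent actions taken on an order.", get_order_history),
    ]
}

def call_tool(name: str, **kwargs):
    # The agent emits a tool name plus arguments; the runtime dispatches
    # to the matching endpoint wrapper.
    return TOOLS[name].func(**kwargs)
```

The RAG layer Adam mentions would then retrieve per-advertiser instructions and best practices as context before the agent decides which of these tools to call.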
(06:00):
A really simple example of this would be updating budget flights to extend a campaign for a month. You need to update budgets in a variety of different places at the order, line item, and creative level. Our customers can just say to Gigi, "Hey, Gigi, can you update these budgets across all of these campaigns based on this?" We train Gigi to understand that the places that she needs to update those budgets are across these six dimensions in the Amazon DSP. Gigi would press all of those buttons on behalf of their customer, say, "Hey, this is done." And the customer would accept that plan from Gigi.
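As a sketch of that flight-extension example: a single natural-language request fans out into updates at the order, line-item, and creative level, collected into a plan before anything executes. The entity names and three levels shown here are illustrative, standing in for the six DSP dimensions Adam mentions:

```python
def plan_flight_extension(campaigns: list, new_end_date: str) -> list:
    """Build (but do not execute) the list of budget/flight updates a
    one-month extension requires across every level of each campaign."""
    plan = []
    for c in campaigns:
        plan.append(("order", c["order_id"], new_end_date))
        for li in c["line_items"]:
            plan.append(("line_item", li, new_end_date))
        for cr in c["creatives"]:
            plan.append(("creative", cr, new_end_date))
    return plan

campaigns = [{"order_id": "o-1",
              "line_items": ["li-1", "li-2"],
              "creatives": ["cr-1"]}]
plan = plan_flight_extension(campaigns, "2026-01-31")
# Each tuple is one "button press" the agent makes once the plan is accepted.
```

The point of returning a plan rather than acting directly is that the customer reviews and accepts it before any button is actually pressed.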
Jeremy Goldman (06:37):
Yeah. I mean, the cool thing to me about this, and I definitely want to talk about the human in the loop component that is clearly important to this, but you're making people-
Adam Epstein (06:44):
For sure.
Jeremy Goldman (06:45):
... more effective. In a lot of ways, it boils down to, it sounds like, being really close to figuring out what your perfect customer profile is, and getting very close to your users, and just figuring out what is their pain point, and not just introducing agentic AI for technology's sake, but actually figuring out where it's going to add some value.
Adam Epstein (07:07):
Yeah, 100%. Look, many of the customers that we work with, many of them are passionate about their job. They love working in media, they love working in advertising, they love being strategic, they love helping their clients or the brands that they work for achieve the business outcomes that they desire, but no one signed up to press a million buttons to extend a flight. No one signed up for, on a quarterly basis, updating creative tags for a week so new campaigns can run.
(07:40):
These are genuinely tasks that we've identified as ideal use cases for agentic AI so that the people that are in those seats, the people that are providing that human judgment and strategic vision to achieve those outcomes are able to do that and we can remove them from the mundane tasks that they don't enjoy doing.
Jeremy Goldman (07:59):
You said something to me once that I thought was really interesting. Everybody's thinking AI is going to be reviewing the human's work. I don't want to get this wrong, but I feel like you phrased it as, no, actually the human is reviewing the AI's work. I thought that was a really interesting way of framing it.
Adam Epstein (08:15):
Yeah. I think we're all becoming a little bit more sophisticated about the benefits and trade-offs of working with LLMs. I think ChatGPT had its three-year anniversary a couple of weeks ago, and we've been working with LLMs on both a personal and professional level for a while now. We all had this belief that we would do stuff and AI would review it and say, "Hey, Adam, this should be better, that should be better." Invariably, these models are super smart, but they're not that smart.
(08:46):
The work that these models are replacing, through the lens of Gigi, is not the work of the most senior folks, the people that are touching clients, but rather the lower-level work that people don't particularly enjoy doing, that requires less reasoning and less intelligence, and that is repetitive enough for AI to be trained to do on an ongoing basis. There's a really critical design decision that we made in building this product that I think has allowed us to be relatively successful: we've very intentionally made Gigi work so that Gigi does not change a bid, budget, flight, or campaign without human direction.
(09:26):
The mechanism we've architected into the product is such that you assign Gigi work. Gigi says, "Here's the work I'm presenting to you from the assignment you've given me. Here's a natural-language rationale of the work that I've done, this is my chain of thought, and this is the math that I've done to get to the outcome," and then it's ultimately up to you to press accept. If we think about agentic AI as hiring a junior employee, that's actually how junior employees work. You're not just going to automatically give a new hire the ability to draft a media plan and own a client meeting.
(10:08):
They need to gain trust. They need to demonstrate that they're able to show value on the relatively mundane tasks that a senior person doesn't particularly enjoy. Then once you start to do that, you can assign that person other more meaningful tasks. Or again, similar to a junior employee environment, if Gigi begins to repetitively do the same task and you're good with her work, you can say, "Hey, Gigi, next time you do this, you don't need to ask me for approval. Just say that it's done and allow me to acknowledge that it's done."
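The assign, review, accept loop Adam describes, including the "next time you don't need to ask me" escalation, can be sketched as a small state machine. This is a minimal illustration of the idea, not Gigi's actual implementation; the class, method names, and the acceptance threshold are all assumptions:

```python
class AgentTask:
    """One recurring task type assigned to the agent, e.g. 'extend flight'."""

    def __init__(self, name: str, auto_approve_after: int = 3):
        self.name = name
        self.accept_count = 0
        self.auto_approve_after = auto_approve_after
        self.auto_approved = False

    def propose(self, plan: list, rationale: str) -> dict:
        # Nothing executes without human direction unless this task type
        # has earned standing approval through repeated accepted runs.
        status = "executed" if self.auto_approved else "awaiting_approval"
        return {"status": status, "plan": plan, "rationale": rationale}

    def accept(self) -> None:
        # The human reviews the rationale and presses accept; after enough
        # accepted runs, the human can grant standing (auto) approval.
        self.accept_count += 1
        if self.accept_count >= self.auto_approve_after:
            self.auto_approved = True
```

The design choice mirrors the junior-employee analogy: trust is earned per task type, and even auto-approved work is still surfaced for the human to acknowledge.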
(10:43):
And so, I think it's really important to understand how we think about identifying the benefits that these LLMs can have in our day-to-day work, and understanding the trade-offs, because a common trade-off is that these are incredibly magical but incredibly imperfect workers that we're hiring to do jobs. If we let these artificial intelligences do the task without human supervision, that's when we have bad outcomes.
Jeremy Goldman (11:16):
Yeah, like a really fast intern, but you have to check their work, but they're really fast, so it's worth it. I mean, I think that you're right that the human roles will absolutely change. I think also there's another component to this, which is the necessity. You've seen the agency world change so much over the last year. There's been consolidation. There are some concerns about certain types of work that are going to dry up.
(11:37):
With the type of efficiencies that you're enabling, in a perfect world I can see agencies benefiting from this and having that contribute to margin, just because it's going to be difficult for them to thrive otherwise.
Adam Epstein (11:52):
Totally. We've very intentionally focused on building this product for agencies, and I believe at least in the near term, let's call it one to four years, agentic AI is going to be a massive tailwind for agencies for a variety of reasons. One, like many services businesses... Or more broadly, there are many vertical AI agents that are tackling services businesses because historically services businesses have always operated on a fixed ratio of heads on the team to customers and revenue.
(12:25):
The magic of agentic AI is that, if used appropriately, given that you're able to assign AI the work that the humans don't necessarily want to do, an agency or any services business should be able to exponentially increase customers and revenue while maintaining the same team. Thereby, if used successfully, an agency's operating margin should fundamentally change, and the degree to which agencies are valued on an EBITDA multiple might fundamentally change as well; they might be rewarded by being valued on a software multiple by prospective stakeholders and investors. That's one aspect from a financial standpoint.
(13:05):
We have two North Stars, and this is in the deck that we send to all of our customers. One North Star is: can we fundamentally, positively change the operating margin of our customers? The other: can we positively change the way in which we work? I think that's the better question. Again, getting back to what people signed up to do in this industry, how can we get them back to doing those tasks? How can people be more strategic? How can people spend more time with their clients? How can they spend less time pressing buttons in an enterprise DSP, and more time honing the right message at the right time for their client in order to achieve the optimal outcome?
(13:53):
Those are the North Stars that we hope to cultivate in working with Gigi.
Jeremy Goldman (13:57):
And where do you think we are in an overall adoption lifecycle, let's say? Because it's an interesting thing that you say, I think it's right that there is an advantage to leveraging a tool like Gigi to get these efficiencies to move faster and to be able to iterate. Then at the same time, I do imagine there probably will be a world where these are just table stakes at some point. I'm wondering how far do you think we are from that?
Adam Epstein (14:23):
We are unbelievably early in this journey together. As far as I know, we're one of the first companies approaching this problem in this vertical. We're very early in our journey, and we have a small subset of really awesome customers, still relatively small, in the high dozens. I think there have been some really awesome learnings now that we've had this product in market for six months.
(14:55):
One of those primary learnings, I'd say, is the way in which people are beginning to work with a product like ours. We typically begin working with an agency or brand on a defined pilot. I think one of the biggest things we learned is this: I've been working in ad tech for eight years, and typically we would run a pilot for six to eight weeks, maybe 10, whenever we'd onboard a new customer, and people would rigorously test the technology to see if it would work. I previously was co-president of a large retail media company that's currently owned by Omnicom.
(15:36):
When we had those pilots, I was testing the technology to see if the tech worked. Now, the tech is LLMs; the tech is generative AI and the models. I think we all recognize, again, that these are magical when used appropriately, but inherently imperfect. The test isn't the tech itself; the test is the change management required for a company to say internally, "I want to be AI native and AI first. This is the approach that we're taking. This is the hands-on role we're taking in training our Gigi agent like it would be a team member, customized for us. These are the steps we're taking, assuming it's successful, to re-architect our org and the way that we serve our clients in an AI-first manner."
(16:27):
And so, that's super early and we're all figuring this out right now.
Jeremy Goldman (16:30):
Well, speaking about figuring it out, I'm glad that you gave me that transition, because one of the things I was really curious about is that you have a lot of agency folks, some of whom are leaning in full steam ahead into AI. A lot of people also have some trepidation about the workforce changes, or they've had a bad experience with hallucinations in the past, and as a result, maybe that's coloring the way they feel right now.
(16:57):
I'm wondering, how do you address that skepticism around AI touching live campaigns? Obviously, there's the whole human-supervision component of it, but from a change management standpoint, how difficult do you feel that is for the average organization to address writ large?
Adam Epstein (17:17):
Again, we made some strategic product decisions so that we've mitigated as much AI off-the-rails hallucination risk as possible. Again, a lot of these decisions get back to change management. We were chatting with one agency that we're continuing to chat with, and that agency... Campaign building is one of the features that Gigi offers.
(17:41):
Gigi does this really cool thing where Gigi creates these things called agentic operating procedures, which are basically standard operating procedures for how someone wants to build a campaign, but done agentically: you're codifying all of your best practices and providing advertiser context, how each individual advertiser thinks about widely used targeting strategies within the DSP, and then simply writing a small order form that expresses the advertising intent of that campaign. Then Gigi will spit out an order with five line items in a couple of minutes, saving an agency anywhere between 30 to 40 minutes for each individual campaign; multiply that by X number of campaigns.
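An agentic operating procedure of the kind Adam describes can be thought of as codified best practices plus advertiser context that expand a short order form into a full campaign structure. This is a hypothetical sketch; the field names, the strategies listed, and the one-line-item-per-strategy expansion are all assumptions for illustration:

```python
# Hypothetical agentic operating procedure: codified best practices and
# the targeting strategies one advertiser uses.
AOP = {
    "advertiser": "ExampleBrand",
    "best_practices": {"frequency_cap": 3, "dayparting": "18:00-23:00"},
    "targeting_strategies": ["in-market", "retargeting", "lookalike",
                             "contextual", "lifestyle"],
}

def build_order(order_form: dict, aop: dict) -> dict:
    """Expand a small order form expressing campaign intent into an
    order with one line item per codified targeting strategy."""
    strategies = aop["targeting_strategies"]
    per_line_budget = order_form["budget"] / len(strategies)
    return {
        "order_name": order_form["campaign_name"],
        "budget": order_form["budget"],
        "line_items": [
            {"strategy": s, "budget": per_line_budget, **aop["best_practices"]}
            for s in strategies
        ],
    }

order = build_order({"campaign_name": "Q1 Launch", "budget": 50_000}, AOP)
# Five line items, one per strategy, each carrying the codified best practices.
```

The time saving comes from the expansion step: the human writes only the short order form, while the procedure carries everything that would otherwise be re-entered by hand for each campaign.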
(18:23):
It's been a huge value for us. The reason I share this anecdote is that one of the agencies we've spoken to said, "Hey, our process for QA on any campaign that gets built is downloading a bulk sheet and then uploading that bulk sheet into the Amazon DSP. So how would we be able to accommodate that process within Gigi?" I tried to be as respectful as possible to that agency leader in saying, "Hey, I think you need to rethink your QA process if you are going to begin adopting AI." I said to one of our colleagues after that call, downloading a bulk sheet to upload it into the DSP after Gigi agentically builds the campaigns is almost like printing out a fax to then send an email. But I don't say any of this dismissively.
(19:20):
I think these are real human problems that everyone is starting to ask themselves about: "Hey, what are the processes that we currently have? And if we want to be AI first, how can we change those processes so that we keep the same checks and balances, the same guardrails against hallucinations or human error, but do so in an optimal manner that leverages AI to the fullest extent possible rather than reverting back to the old ways of doing things?"
Jeremy Goldman (19:57):
By the way, this is great. Hopefully we can revert back to this conversation in the near future because, Adam, this was fantastic. I know our listeners will get a lot out of this, so I really appreciate you making the time.
Adam Epstein (20:08):
Of course. Happy to.
Marcus Johnson (20:10):
That's it for today's episode. Thank you so much to Jeremy and Adam for the conversation. Thanks to the production crew, of course. And thank you to everyone for listening to this special EMARKETER Podcast miniseries: AI-Driven Media Management with Gigi, made possible by Amazon Ads. Tune into Part 2 of this miniseries next week on Tuesday, December 23rd.