The news: A majority of US adults are ready for AI to take their bosses’ jobs.
- 73% said they support AI having a role in hiring, firing, and budget decisions, per a new ResumeNow survey.
- 69% are fine with AI monitoring for productivity purposes.
- 66% believe AI leadership would make the company more equitable and efficient.
AI is slowly bleeding into work roles and plays a part in the hiring process, but so far, it hasn’t typically made C-suite decisions. It has, however, become the voice of at least one CEO.
Use case—AI as talking head: Lovable created a CEO GPT to write in CEO Anton Osika’s voice for press releases and quotes, Lovable CMO Carilu Dietrich posted on LinkedIn.
- Vibe coder Lazar Jovanovic trained AntonGPT using podcasts, video scripts, and LinkedIn posts.
- “Within a day, [it] was creating better quotes for Anton than I ever could have myself, writing press briefings for him with the elements he’s used and loved best,” Dietrich said.
Using generative AI (genAI) to speak for executives frees up time for the C-suite as well as for marketers and public relations managers, letting them focus on tasks that require a human touch.
Don’t dismiss people: In cases that require a more human touch, like empathy and team dynamics, ResumeNow survey respondents would prefer to leave AI out of it.
- 64% said only humans can motivate teams well.
- 57% believe only humans can make difficult moral decisions.
- Just 19% trust AI to resolve conflicts among team members.
AI can simulate human behaviors because it’s trained on human-created content. But considering many genAI models demonstrate bias in their outputs, “fair” decisions involving morality, empathy, and understanding might be difficult to come by.
Our take: 41% of C-suite professionals are concerned about the ethical use of AI, per BearingPoint. AI can quickly become a yes-man, confirming decisions that might not be in a company’s best interest.
Balance and oversight are key when adopting AI solutions. Using AI for hiring and budgeting can streamline those processes, but AI decisions need to be monitored to keep bias and hallucinations in check.