ChatGPT has major issues that startups and Big Tech alike should care about

The trend: ChatGPT is one of many powerful AI systems released this year that are raising questions about the future of work, companies, and the ethics of commercial AI.

  • The stir prompted Morgan Stanley to issue a report saying that the technology could “disrupt Google’s position as an entry point for people on the internet,” but that Google is still in a strong position with its Search upgrades and similar AI products on the way.
  • The ongoing tech recession has affected the valuation and revenue growth of Big Tech companies like Google this year.
  • For this reason, concerned Google employees wonder if commercializing systems like ChatGPT could help bolster the tech giant’s financial position.
  • The technology is expected to transform a slew of creative occupations, as well as the work of SEO specialists and content marketers, who could use it to enhance and speed up their output.

The formidable balancing act: Google faces a greater reputational risk over product issues compared with a startup, but there are also legal risks that should concern any tech company that builds AI products.

  • AI-powered search tools provide authoritative-sounding answers to complicated questions without citing sources.
  • This means they could run afoul of legal and ethical standards if the bots give false or misleading answers on medical questions or other topics with social safety implications.
  • Despite being impressive, such bots have repeatedly demonstrated that they’ll portray fiction as fact and give biased and offensive responses.
  • Some point out that people make similar mistakes, but the difference is that our legal system is equipped to adjudicate human wrongdoing and hasn’t developed a framework for addressing generative AI’s culpability.

Careful consideration: Google’s cautious approach could signal that it's waiting to see what happens with the lawsuit filed against OpenAI and Microsoft over Copilot.

  • Based on the tech giant’s AI investments, we can expect to see significant announcements from Google on the AI front in 2023.
  • But the technology’s limitations, which include a lack of explainability, high compute costs, and monetization challenges, won’t be worked out within a matter of months.
  • Going forward, tech companies may want to more carefully consider whether the publicity benefits of releasing half-baked AI systems for public testing are worth the risk and expense.

This article originally appeared in Insider Intelligence's Connectivity & Tech Briefing—a daily recap of top stories reshaping the technology industry. Subscribe to have more hard-hitting takeaways delivered to your inbox daily.