Study: AI agents mimic human behavior through interaction

The news: AI models like ChatGPT can form social norms that mimic human behavior when used in group settings, according to new research published in the journal Science Advances.

In controlled experiments, clusters of AI agents developed naming conventions and behavioral norms without central input. The models not only communicated; they also evolved shared rules, slang, and roles, mirroring how humans behave in groups (a toy sketch of this naming dynamic follows the list below).

  • A few agents could influence the whole group, shifting the norm.
  • Group interaction could also create hidden biases that the agents reinforced and repeated in their outputs.
  • The norms weren’t pre-programmed; they emerged organically from the agents’ interactions.
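
The setup resembles the classic "naming game" used in earlier research on how conventions form. As a rough illustration of how a group can converge on a shared name with no central coordinator, here is a minimal sketch in Python; the agent count, word list, and update rule are invented for illustration and greatly simplify the study's LLM-based setup:

```python
import random

POP = 24                           # number of agents (hypothetical)
WORDS = ["blip", "zorp", "quux"]   # candidate names (hypothetical)

# Each agent keeps an inventory of names it currently considers acceptable.
inventories = [set() for _ in range(POP)]

for step in range(20_000):
    speaker, hearer = random.sample(range(POP), 2)
    if not inventories[speaker]:
        # An agent with no name yet invents one by picking at random.
        inventories[speaker].add(random.choice(WORDS))
    word = random.choice(tuple(inventories[speaker]))
    if word in inventories[hearer]:
        # Success: both agents drop all rival names and keep the shared one.
        inventories[speaker] = {word}
        inventories[hearer] = {word}
    else:
        # Failure: the hearer remembers the speaker's name for next time.
        inventories[hearer].add(word)
    # Stop once every agent agrees on a single name: a norm has emerged.
    if all(inv == {word} for inv in inventories):
        print(f"Population converged on '{word}' after {step + 1} pairings")
        break
```

The same dynamic bears on the first bullet above: seeding a few "committed" agents that never update their inventories can, past a critical mass, tip the whole population toward their preferred name.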

Why it’s worth watching: This challenges the view of AI models as isolated tools. Instead, it positions them as social actors capable of reinforcing norms—good or bad—across networks.

“Most research so far has treated large language models (LLMs) in isolation, but real-world AI systems will increasingly involve many interacting agents,” the study’s lead author, Ariel Flint Ashery, told The Guardian.

AI autonomy is a double-edged sword: The finding that AI clusters can autonomously develop human-like social norms raises concerns about unchecked bias reinforcement.

As AI clusters become more common—think autonomous agents coordinating in finance, content moderation, or customer service—biases and norms could be amplified at scale. Without oversight, group-based AI systems risk creating echo chambers or reinforcing harmful patterns.

Our take: Most available AI tools and LLMs are solitary, built for specific generative AI (genAI) use cases, but we now know that clusters of interacting AI agents can develop human-like language and conventions.

As AI transitions from solitary tools to interconnected systems, establishing ethical guidelines now could ensure a future where AI enhances, rather than destabilizes, human society. 

