The news: OpenAI announced a blockbuster deal to design its own AI chips in collaboration with Broadcom, continuing a trend of AI companies seeking partnerships and investments to own both ends of the AI stack, per The New York Times.
“By building our own chip, we can embed what we’ve learned from creating frontier models and products directly into the hardware, unlocking new levels of capability and intelligence,” said Greg Brockman, OpenAI co-founder and president, during the announcement.
Broadcom’s stock surged between 9% and 10% in trading Monday following the news, per Seeking Alpha.
Why it’s worth watching: OpenAI recently struck massive compute and chip-supply deals with Nvidia and AMD. The ChatGPT maker now joins Amazon (Trainium), Google (TPU), and Meta (Artemis) in building proprietary chips to lessen reliance on Nvidia’s overtaxed GPU supply.
For Broadcom, the deal is a shift from networking hardware to core AI infrastructure. For OpenAI, the partnership secures greater control over its compute pipeline and reduces dependence on external suppliers.
According to TrendForce, AI servers will account for 72% of total server industry revenue in 2025, up from 67% in 2024, a year-over-year surge of $107 billion. The market’s gravitational pull toward AI-specific systems underscores how compute is becoming the industry’s ultimate bottleneck and its biggest differentiator.
But at what cost? OpenAI hasn’t released specific financing details for the latest partnership, but Broadcom is neither investing in OpenAI nor providing it with stock, per the Times. Financing the deal will likely draw on funding rounds, pre-orders, strategic backing from Microsoft or SoftBank, and future revenues or credit lines.
Our take: OpenAI’s Broadcom deal marks a turning point in AI strategy, shifting the race from chasing smarter models to securing the power that fuels them.
For enterprises, this is the moment to pick sides. The competitive advantage in the next decade belongs to companies that align with partners who control their own infrastructure, not just their algorithms.