San Francisco & Palo Alto — October 13, 2025 — OpenAI and Broadcom Inc. (NASDAQ: AVGO) announced a multi-year collaboration to co-develop and deploy 10 gigawatts of custom AI accelerators and networking systems.
Under the agreement:
- OpenAI will design the accelerators and system architecture, embedding insights from its frontier AI models into the hardware itself.
- Broadcom will handle development and deployment, providing racks of both the accelerators and the networking systems (Ethernet, PCIe, optical connectivity). Deployment is targeted to begin in the second half of 2026 and to be completed by the end of 2029.
“Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI’s potential and deliver real benefits for people and businesses,” said Sam Altman, co-founder and CEO of OpenAI.
“OpenAI has been in the forefront of the AI revolution … we are thrilled to co-develop and deploy 10 gigawatts of next generation accelerators and network systems to pave the way for the future of AI,” said Hock Tan, President and CEO of Broadcom.
Other highlights and context:
- The racks will be “scaled entirely with Ethernet and other connectivity solutions from Broadcom.”
- OpenAI currently serves over 800 million weekly active users, which the company cites as justification for this scale of infrastructure investment.
- Financial terms of the agreement were not disclosed in the announcement.
Why It Matters
- Custom optimization & efficiency: By designing its own accelerators, OpenAI can embed what it has learned from its large models directly into hardware. This can improve performance (higher throughput, lower latency) and power efficiency, and potentially lower costs over the long run.
- Supply chain / dependency diversification: Owning more of the stack reduces OpenAI's dependence on external GPU vendors (e.g., Nvidia, AMD), while Broadcom gains a stronger role in AI infrastructure.
- Scaling compute for AI growth: With AI usage exploding, models growing larger, and inference and training demands skyrocketing, 10 GW of dedicated accelerator capacity is a massive step.
- Cost & infrastructure strategy: Running large AI systems is expensive not just in hardware, but in energy, cooling, networking, etc. Co-designing can help optimize across those layers. Broadcom’s networking contribution (Ethernet, optical, PCIe) is also critical.
Potential Challenges / Risks
- Energy consumption & infrastructure costs: 10 GW is a huge power draw. The electricity, cooling, physical space, and network fabric all have to scale with it.
- Manufacturing & production: Designing chips is one thing; producing them reliably at scale is nontrivial. Yield issues, delays, and cost overruns are real risks.
- Competition: Other players (Google, Microsoft, AWS, and Meta) are also heavily investing in custom AI accelerators and infrastructure. OpenAI will need to stay ahead technically.
- Timeline complexities: Starting in 2026 and completing in 2029 is a long horizon. Many things can change (technology, regulation, supply chains) over 3+ years.