The artificial intelligence (AI) company OpenAI announced on Wednesday an agreement with chip manufacturer Cerebras to add computing capacity to its technologies and thereby improve the quality and speed of its results.
The agreement is valued at about $10 billion, according to The Wall Street Journal, which cites sources familiar with the matter.
The goal is for Cerebras to provide OpenAI with about 750 megawatts of computing capacity over several years, starting in 2026, in what represents “the largest high-speed AI inference facility in the world,” OpenAI said in a statement.
OpenAI highlighted that integrating Cerebras’ capacity is meant to “speed up the AI response,” since currently, when you ask ChatGPT a question or use an AI agent, there is a “loop” in which “the request is sent, the model thinks and sends something” back.
OpenAI Computing Infrastructure Manager Sachin Katti said the result will be “faster responses, more natural interactions, and a stronger foundation to scale AI in real time to many more people.”
The company led by Sam Altman indicated that it plans to add Cerebras’ “low latency capability” to its “inference stack” in phases through 2028.
“Our teams have met frequently since 2017, sharing research, early efforts and the common belief that there would come a time when the scale of models and hardware architecture would have to converge. That time has arrived,” Cerebras added.
According to CNBC, Cerebras has data centers full of its chips in the US and abroad, and its co-founder and CEO, Andrew Feldman, indicated that the company plans to continue expanding their number with this deal.
With information from EFE













































