OpenAI has forged a partnership with Alphabet’s cloud service business, Google Cloud, to address its expanding computing requirements. According to reporting by Reuters, this collaboration – disclosed by three sources familiar with the matter – comes amid OpenAI’s efforts to diversify its computing resources away from Microsoft, its main backer.

The deal, reportedly finalised in May, underscores the enormous computing power required to train and deploy AI models. Under the agreement, Google's cloud division will provide additional computing capacity alongside OpenAI's existing infrastructure, which is critical to running its AI models. Even though OpenAI's ChatGPT poses a challenge to Google's search dominance, Google executives have reportedly acknowledged that the AI race may not produce a single winner.

For Google, the agreement marks an important step as the search giant broadens external access to its proprietary chips, known as tensor processing units (TPUs), which were previously reserved largely for internal operations. This strategic shift has helped Google attract a diverse range of clients, including major industry players such as Apple as well as emerging startups such as Anthropic and Safe Superintelligence. Both startups were founded by former OpenAI leaders and are positioned as competitors in the AI landscape.

Since the launch of ChatGPT in late 2022, OpenAI has needed ever more computing capacity to train its large language models and to run inference. The company recently announced that its annualised revenue run rate reached $10bn as of June, in line with its annual financial targets amid widespread AI adoption.

In January, OpenAI teamed up with SoftBank and Oracle on the $500bn Stargate infrastructure initiative. The AI company is also developing its own chip to reduce reliance on third-party hardware providers. Its partnership with Google marks another step in reducing its dependence on Microsoft's Azure cloud service, which previously served as the ChatGPT maker's sole infrastructure provider.

Possible IPO and data centre expansion signal OpenAI's growth plans

In March, OpenAI awarded an $11.9bn contract to CoreWeave to bolster its AI infrastructure and support the global distribution of its models. OpenAI also planned a $350m stock investment in CoreWeave, separate from the cloud company's anticipated IPO. As of the end of 2024, CoreWeave operated an AI-focused network of 32 data centres housing more than 250,000 Nvidia GPUs.

Last month, OpenAI's chief financial officer (CFO), Sarah Friar, suggested that an initial public offering (IPO) could be on the cards for the firm, contingent on favourable market conditions and the company's readiness. Speaking at the Dublin Tech Summit, Friar discussed OpenAI's recent restructuring, which involves converting its for-profit division into a public benefit corporation (PBC). She also disclosed that OpenAI wants to expand its data centre capacity to nearly 10GW over the next two years, an expansion that would require significant capital investment to keep the company at the forefront of AI development.

Read more: OpenAI restructuring may lead to future IPO, says CFO