OpenAI to Collaborate with Broadcom and TSMC on Its First AI Chip Production
OpenAI, known for its ChatGPT technology, is collaborating with Broadcom Inc. (NASDAQ:AVGO) and Taiwan Semiconductor Manufacturing Company (TSMC) to create its first in-house artificial intelligence chip. The move marks a retreat from earlier plans to build a network of chip fabrication plants, which were shelved because of the enormous costs and long timelines involved.
The newly announced strategy shifts OpenAI’s focus from ambitious factory-building plans to in-house chip design, while also adding Advanced Micro Devices, Inc. (NASDAQ:AMD) chips alongside its existing Nvidia Corporation (NASDAQ:NVDA) hardware to meet growing infrastructure needs.
As one of the major consumers of chips, OpenAI is diversifying its chip supply strategy to manage costs effectively. This practice is being embraced by other tech giants such as Amazon.com, Inc. (NASDAQ:AMZN), Meta Platforms, Inc. (NASDAQ:META), Google (NASDAQ:GOOGL), and Microsoft Corporation (NASDAQ:MSFT). The decision to source from various manufacturers while developing its custom chip could have repercussions for the broader technology sector.
The AI-focused company requires massive computing power to train its systems and to run inference, the process by which trained models make predictions or decisions. OpenAI is one of the largest buyers of Nvidia’s GPUs, which hold more than 80% of the AI chip market. However, the industry is grappling with shortages and rising costs, prompting companies to seek alternatives.
OpenAI has been working with Broadcom on its first AI chip for months. This chip will concentrate on inference, a field expected to grow as AI applications become more widespread. Broadcom, which assists companies like Google in chip design for production, is also helping OpenAI secure manufacturing capacity with TSMC for its custom-designed chip, anticipated to be produced in 2026, though this timeline may change.
The company has created a chip design team of about 20 members that includes engineers such as Thomas Norrie and Richard Ho, who have experience in developing Google’s Tensor Processing Units (TPUs).
OpenAI’s approach to chip supply includes planning to use AMD chips via Microsoft’s Azure. This move comes during a period when AMD aims to gain market share with its new MI300X chips, forecasting $4.5 billion in AI chip sales for 2024 after launching in the fourth quarter of 2023.
The high costs of training AI models and running services like ChatGPT weigh heavily on OpenAI, which forecasts a $5 billion loss on $3.7 billion in revenue this year. With computing costs its largest expense, the company is working to optimize usage and diversify its suppliers.
While expanding its chip partnerships, OpenAI is being careful not to strain its relationship with Nvidia, especially as it continues to work with the chipmaker for access to Nvidia's latest Blackwell chips. Nvidia has not commented on these developments.