Challenging NVIDIA? OpenAI Reportedly in Talks with Broadcom and Others to Develop AI Chips!
According to media reports on Thursday citing informed sources, ChatGPT maker OpenAI is in discussions with chip designers, including Broadcom, to explore the development of new artificial intelligence (AI) chips.
The AI models developed by OpenAI, such as GPT-4 and DALL-E 3, which power products like ChatGPT, rely on expensive graphics processing units (GPUs). To ease that dependence and its costs, OpenAI is considering developing its own AI chips.
The report cites three informed sources stating that the Microsoft-backed company is also hiring former Google employees to leverage their experience developing Google's Tensor Processing Units (TPUs) to create its own AI server chips. The move aims not only to reduce dependence on NVIDIA but is also part of CEO Sam Altman's vision of expanding global semiconductor infrastructure.
"OpenAI is engaged in ongoing discussions with industry and government stakeholders to expand access to the necessary infrastructure, ensuring that the benefits of AI are widely shared," said a spokesperson for OpenAI.
Additionally, the report indicates that many details are still pending finalization, and if the chip is eventually developed, production may not begin until 2026 at the earliest. Following this news, Broadcom's stock rose 2.91% on Thursday, with a year-to-date increase of about 48%.
OpenAI has made considerable efforts to secure its footing in AI hardware. According to earlier reports this year, CEO Sam Altman plans to raise billions of dollars and is considering partnerships with chip manufacturers such as Intel, TSMC, and Samsung Electronics to produce semiconductors.
Moreover, to capture more market share, OpenAI launched the "affordable yet powerful" GPT-4o mini model on Thursday, initiating a price war. According to the company, GPT-4o mini is an entry-level "small model" with significantly reduced pricing.
Reportedly, free and paid ChatGPT users have had access to the new model since Thursday, while enterprise users will receive it next week.
According to published pricing, GPT-4o mini is the cheapest of the mainstream "small models" from U.S. AI companies, costing 15 cents per 1 million input tokens and 60 cents per 1 million output tokens. By comparison, the GPT-4o model costs $5 per 1 million input tokens and $15 per 1 million output tokens.
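To see what these per-million-token rates mean for a single request, here is a minimal cost-comparison sketch. The dollar figures are the rates quoted above; the model names, example token counts, and the `request_cost` helper are illustrative, not part of any OpenAI API.

```python
# Per-million-token rates quoted in the article (illustrative constants,
# not fetched from any API; actual pricing may change).
PRICES_PER_MILLION = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
    "gpt-4o": {"input": 5.00, "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request under the quoted rates."""
    rates = PRICES_PER_MILLION[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

if __name__ == "__main__":
    # Hypothetical workload: 100k input tokens, 20k output tokens.
    mini = request_cost("gpt-4o-mini", 100_000, 20_000)
    full = request_cost("gpt-4o", 100_000, 20_000)
    print(f"gpt-4o-mini: ${mini:.4f}")  # $0.0270
    print(f"gpt-4o:      ${full:.4f}")  # $0.8000
```

Under these rates, the same workload is roughly 30 times cheaper on GPT-4o mini, which is the economics behind the "price war" framing above.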