Every time a query is sent to ChatGPT, for example about the ugliest town in Spain, running the AI model, and the training that preceded it, has required the work of millions of chips. The popularity of this and other generative AIs has accelerated many companies' plans to develop their own models.
That implies heavy spending and high energy consumption, an increase that translates into greater pressure on the specialized chip market dominated by Nvidia. For this reason, the creators of ChatGPT are considering manufacturing their own chips focused on artificial intelligence.
OpenAI, creator of ChatGPT, explores creating its own AI chip. (Nvidia)
As reported by Reuters, the company has even considered a possible acquisition, according to people familiar with its plans. OpenAI is not the first to consider designing and manufacturing its own chips as a response to the shortage and high price of these components. Google and Amazon are working on it, as are its partner Microsoft and other giants in this commercial race, such as Meta. However, the makers of ChatGPT have not yet taken a clear position on the matter.
Specialized chips, known as AI accelerators, are needed to train and run the artificial intelligence models that more and more companies are developing. The rise of generative artificial intelligence in recent years has heightened demand for these chips, a market that was already showing signs of serious shortages.
Running artificial intelligence models carries a high price tag. These programs consume more energy than almost any other kind of computing system: training a single AI model can use as much electricity as 100 US homes consume in a year.
For OpenAI, each ChatGPT query costs approximately 4 cents, according to Bernstein analyst Stacy Rasgon. If ChatGPT queries were to grow to one-tenth the scale of Google search, the service would require roughly $48.1 billion worth of GPUs up front and about $16 billion worth of chips per year to stay operational.
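To give a sense of how the per-query figure scales, the sketch below runs a rough back-of-envelope calculation. The daily Google search volume and the one-tenth scenario are illustrative assumptions added here, not figures from Rasgon's analysis; only the 4-cent cost per query comes from the text above.

```python
# Back-of-envelope sketch of ChatGPT inference costs at scale.
# COST_PER_QUERY_USD comes from the Bernstein estimate cited above;
# the query volume and scenario share are illustrative assumptions.

COST_PER_QUERY_USD = 0.04          # ~4 cents per query (Bernstein estimate)
GOOGLE_QUERIES_PER_DAY = 8.5e9     # assumed Google search volume (illustrative)
SHARE_OF_GOOGLE = 0.10             # scenario: one-tenth of Google's scale

queries_per_day = GOOGLE_QUERIES_PER_DAY * SHARE_OF_GOOGLE
daily_cost = queries_per_day * COST_PER_QUERY_USD
annual_cost = daily_cost * 365

print(f"Queries per day:       {queries_per_day:,.0f}")
print(f"Estimated daily cost:  ${daily_cost / 1e6:,.1f} million")
print(f"Estimated annual cost: ${annual_cost / 1e9:,.1f} billion")
```

Under those assumed inputs the scenario works out to on the order of ten billion dollars per year in inference costs alone, the same order of magnitude as the roughly $16 billion in annual chip spending cited in the analysis.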
OpenAI CEO Sam Altman has previously complained publicly about the shortage of graphics processing units. Since 2020, OpenAI has developed its generative AI technologies on a massive supercomputer built by Microsoft, one of its largest backers, using 10,000 Nvidia graphics processing units (GPUs).
The possibility of making its own chips has been under internal discussion since at least last year, as a way to address the shortage and high cost of chips driven by growing demand. However, the company has not yet made a final decision on the matter.
Beyond that, its options include working more closely with chipmakers, including Nvidia, and diversifying its suppliers beyond Nvidia, which is the dominant player in this market with around 80% of worldwide share.
It’s not the only one
If the plan goes ahead, it would mean a heavy investment of hundreds of millions of dollars per year for OpenAI. To speed up the process, one option would be to acquire a chip company that could shorten the design and manufacturing path, as Amazon.com did with its purchase of Annapurna Labs in 2015. However, the identity of the company the creators of ChatGPT may have considered for such an acquisition has not been disclosed.
OpenAI is not the only one with such plans: its partner Microsoft is also developing a custom AI chip that OpenAI is testing, The Information reported. Those plans could signal a growing distance between the two partners.
Facebook parent Meta recently announced its own effort to develop a proprietary chip to run its artificial intelligence. That custom chip effort has been plagued with problems, leading the company to scrap some of its AI chips before this new attempt.