Sam Altman Takes on Nvidia with New Global Network of AI Chip Factories
OpenAI CEO Sam Altman has embarked on a global campaign to build a network of artificial intelligence chip factories that could counter Nvidia's technology dominance.

Large AI labs like OpenAI are spending billions of dollars on Nvidia GPUs to train the next generation of large language models, and they are spending even more to run those models for consumers.

To address this problem, several large companies are looking at ways to reduce the size of their models, improve efficiency, and even create new custom, cheaper chips.

For his new chip project, which could cost billions of dollars, Altman has spoken with several investors. Abu Dhabi-based G42 and Japan's SoftBank Group are among the potential backers, and he is reportedly in talks with Taiwanese manufacturer TSMC about producing the chips.

Nvidia became the first trillion-dollar chipmaker last year on the strength of its near-total dominance of the high-end GPUs capable of training cutting-edge AI models.

Earlier this month, Meta announced the purchase of 350,000 Nvidia H100 GPUs to train a future superintelligence and open-source it. Billed as the first chips designed for generative AI, the H100 GPUs are in high demand at roughly $30,000 per chip.

Google trained its next-generation Gemini model on chips known as Tensor Processing Units (TPUs), which it has been developing for over a decade.

This likely reduced the overall cost of training such a large model and gave Google's developers greater control over how the model was trained and optimized. Semiconductor manufacturing, however, is expensive: it takes enormous amounts of natural resources, money, and research to bring a new chip to the highest level of performance.

Only a limited number of manufacturing facilities around the world can produce the type of high-end chips that OpenAI needs, creating potential bottlenecks in training next-generation models.

Altman hopes to expand this global capacity with a new network of manufacturing facilities dedicated to AI chips.

OpenAI is expected to partner with companies like Intel, TSMC, and Samsung for its own AI chips, or possibly with existing investor Microsoft. Microsoft announced last year that it is producing its own AI chip to run within Azure, its cloud platform for AI services.

Amazon has its own Trainium chip for AI models within its AWS cloud service, while Google Cloud uses TPUs. However, despite having their own silicon, all the major cloud companies still rely heavily on Nvidia's H100 processors.

Altman also faces the risk that Nvidia's continued improvements could draw investors away from OpenAI's proprietary chip project.

Nvidia's GH200 Grace Hopper chip was announced last year, and Intel has built new AI acceleration into its Meteor Lake processors.
