Microsoft has been developing its own artificial intelligence chips since 2019, according to a report from The Verge. The chips are designed to train large language models such as GPT-4 and to help reduce the company's costly dependence on Nvidia.
AI chips are a hot commodity right now, with Nvidia being the primary supplier of AI server chips. OpenAI may require up to 30,000 of Nvidia's A100 GPUs to commercialize ChatGPT, and Nvidia's high-end H100 GPUs are selling on eBay for over $40,000, underscoring the strong demand for hardware that can run AI software.
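To put those numbers in perspective, the short sketch below multiplies the reported 30,000-GPU estimate by two illustrative unit prices. The $10,000 figure is purely an assumed low-end price, and the $40,000 figure is the eBay resale price cited above for an H100 rather than an official A100 price, so the result is only a rough indication of scale, not a figure from the article.

```python
# Back-of-envelope estimate of the GPU acquisition cost implied by the figures above.
# The GPU count (30,000) comes from the article; the per-unit prices are illustrative
# assumptions: the $40,000 figure is the eBay resale price quoted for an H100,
# not an official A100 list price.

gpu_count = 30_000

assumed_unit_prices = {
    "assumed low-end price ($10k/GPU)": 10_000,
    "eBay H100 resale price ($40k/GPU)": 40_000,
}

for label, price in assumed_unit_prices.items():
    total = gpu_count * price
    print(f"{label}: ~${total / 1e9:.1f} billion")
```

Even under these rough assumptions, the hardware bill lands somewhere between hundreds of millions and a little over a billion dollars, which is the scale of spending that makes in-house chips attractive.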
Microsoft is looking to cut costs by developing its own in-house chips. The company is reportedly speeding up work on a project codenamed Athena, under which it is building its own AI chips. It is uncertain whether the chips will be offered to Azure cloud customers, but the plan is to make them more broadly available within Microsoft and OpenAI next year. Microsoft also has a road map for the chips that includes multiple future generations.
Microsoft's AI chips are not expected to fully replace Nvidia's, but they could reduce expenses significantly. They would also support Microsoft's push to build AI-powered features into Bing, Office apps, GitHub, and more.
Microsoft has also been working on its own ARM-based chips for some time. In late 2020, Bloomberg reported that Microsoft was considering designing its own ARM-based processors for servers, and possibly for a future Surface device. Those chips have yet to launch; however, Microsoft has previously worked with AMD and Qualcomm on custom chips for its Surface Laptop and Surface Pro X devices.
Microsoft is not alone in this effort. Other tech giants such as Amazon, Google, and Meta have their own in-house AI chips, but most companies still rely on Nvidia chips to power their large language models.
Microsoft's AI chip initiative marks an important step in the AI chip race, especially given its past strategic collaborations with AMD and Qualcomm. By attempting to cut costs and reduce external dependencies for its AI-powered applications, Microsoft aligns with the broader trend of tech companies building in-house AI capabilities, a shift that could shape how large language models like GPT-4 are trained and deployed in the future.
Read the full article: Microsoft working on its own AI chips vs Nvidia’s
Microsoft, the prominent software conglomerate, is reportedly designing artificial intelligence (AI) chips in-house. The chips are intended to power large language models and to reduce the company's dependence on Nvidia, the leading supplier of AI server chips.
The project has been kept under wraps since 2019, and only a select group of Microsoft and OpenAI employees reportedly has access to the chips for testing. They are running performance tests to see how well the chips handle current large language models such as GPT-4.
Presently, Nvidia is seen as the major supplier of AI server chips, which many organizations are competing to acquire. That demand is underscored by estimates that OpenAI will need more than 30,000 of Nvidia's A100 GPUs to commercialize ChatGPT, and Nvidia's high-end H100 GPUs are trading at over $40,000 on eBay, another sign of the industry's appetite for chips that can power AI applications.
While Nvidia strives to meet the growing demand, Microsoft is looking within its own ecosystem to curb spending on its AI push. The company has reportedly stepped up work on Project Athena, its effort to design its own AI chips. It is unclear whether the chips will be accessible to Azure cloud customers, but they are likely to become more widely available within Microsoft and OpenAI by next year.
Microsoft's planned AI chips are not seen as a direct substitute for Nvidia's, but the initiative could cut expenses substantially, given the company's ongoing efforts to incorporate AI-powered features across its product line, including Bing, Office apps, and GitHub.
Microsoft has also had an ARM-based chip project running in parallel for several years, as Bloomberg reported in late 2020, with the ultimate plan being to design its own ARM-based processors for servers and, in all probability, for an upcoming Surface device.
Microsoft's internal effort to create its own AI chips is part of a trend reportedly followed by other tech giants. Amazon, Google, and Meta have undertaken similar initiatives to develop AI chips internally, while many firms still depend on Nvidia's offerings for their large language model deployments.
Microsoft's past collaborations with AMD and Qualcomm on custom chips for its Surface Laptop and Surface Pro X devices can also be seen as part of this broader strategy. If successful, Microsoft's venture into AI chips could recast the AI landscape, giving the market an alternative in a sector currently dominated by Nvidia.