
Challenging Nvidia: Silicon Valley Giants Show the Strength of Their Self-Developed AI Chips

WsxMall | 2023-11-21 16:06:29 | Keywords: AI chip

In this era of both challenges and opportunities, can Nvidia CEO Jensen Huang afford to sit still?

Microsoft's Ignite conference opened in Seattle, where CEO Satya Nadella delivered an hour-long keynote covering the company's latest work on ESG, next-generation hollow-core optical fiber, and the Azure Boost data-center offload system. The highlight of the keynote, however, was AI: the debut of Microsoft's first self-developed AI chip, the Azure Maia 100, drew the most attention by far.

Microsoft's emphasis on AI is well known, and its chip ambitions are no longer a secret. The Maia 100 is the first installment of its answer, a public demonstration of its ambition and capability in AI silicon.

Interestingly, Jensen Huang also appeared on stage to announce the AI Foundry Service, a collaboration between Azure and Nvidia. Yet Nadella had just unveiled a homegrown AI chip right in front of him, and the awkwardness of that scene was hard to miss.

Nvidia's near-monopoly on high-performance compute chips has long been a sore point for Silicon Valley's giants. On one hand, they cannot do without Nvidia; on the other, none of them wants to be constrained forever. In-house chips have now become the trend, and Microsoft, Meta, Google, and Amazon have all shown their hands. In this fierce competition, who can truly break free of Nvidia's grip?

Facing an era where challenges and opportunities coexist, can Jensen Huang sit still? The question leaves plenty to think about.

Microsoft's first AI chip, the Maia 100: reaching for the stars, pushing the limits of compute


Microsoft's first AI chip, the Maia 100, has finally been unveiled. The accelerator's name is inspired by the galaxy NGC 2336, whose bright blue stars may be Microsoft's metaphor for the AI world: a universe of infinite possibility, and a statement of its determination to pursue high computing power.


The Maia 100's release was no surprise. As early as October, reports said Microsoft would unveil its first self-developed AI chip at its developer conference and supply it to Azure cloud customers. But Microsoft kept the project tightly under wraps, and only at the official launch did the details of its design, computing power, and target workloads become clear.


According to Nadella, the Maia 100 is an AI accelerator based on an Arm architecture design, intended mainly for cloud-based training, inference, and other heavy Azure workloads. Nadella, however, denied rumors that it would be sold to cloud-computing customers, saying the chip would first serve Microsoft's own needs and would be opened to partners and customers at the appropriate time.


Rani Borkar, the Microsoft vice president who heads the Azure chip division, added that the Maia 100 has already been tested against the AI workloads of Bing and Office. Partner OpenAI has also begun using the chip to test some of its products and features, such as GPT-3.5 Turbo. No concrete test results have been disclosed, but Nadella and Borkar stressed that the Maia 100 speeds up data processing, particularly for speech and image recognition.


Faster processing ultimately comes down to computing power. To deliver it, Microsoft invested heavily in the Maia 100: the chip is fabricated on TSMC's 5nm process and packs 105 billion transistors. Compared with the details leaked in April this year, the process node and design architecture are largely unchanged, and real-world performance remains to be proven in practice.


Set the Maia 100 against products from Nvidia and AMD, however, and a sizable gap in raw specifications appears. AMD's MI300X, released this year, packs 153 billion transistors, to say nothing of its formidable computing power.


Take Nvidia's recently announced H200 as an example: its GPU core is the same as the H100's, with 16,896 CUDA cores and a 1.83 GHz boost clock, but its memory offers larger capacity and higher bandwidth, supporting training and inference for models with larger parameter counts. Official figures show that, compared with the previous generation, the H200 improves training speed on Llama 2 and ChatGPT by 40% and 60% respectively.
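The spec comparison above can be collected into a small table. The sketch below uses only the figures cited in this article; they are publicly reported, approximate numbers, not verified benchmarks.

```python
# Chip figures as cited in this article (publicly reported, approximate).
chips = {
    "Microsoft Maia 100": {"process": "TSMC 5nm", "transistors_bn": 105},
    "AMD MI300X": {"transistors_bn": 153},
    "Nvidia H200": {"cuda_cores": 16_896, "boost_clock_ghz": 1.83},
}

# Transistor count alone illustrates the gap the article describes.
gap = (chips["AMD MI300X"]["transistors_bn"]
       - chips["Microsoft Maia 100"]["transistors_bn"])
print(f"MI300X leads Maia 100 by {gap} billion transistors")  # 48 billion
```

Transistor count is a crude proxy for performance, of course; memory bandwidth and software maturity matter at least as much, which is the article's larger point.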


From the MI300X to the H200 to the Maia 100, the big vendors' pursuit of parameter scale, training speed, and chip design clearly has no upper limit. As large-model iteration accelerates, every one of them is straining to run faster than its rivals.


In this computing-power race, the chip is the critical link, and no one wants to be the weak point. To shake off dependence on Nvidia, building your own silicon is the best way out. Microsoft is pushing hard in that direction, and more surprises may follow.

When a self-developed AI chip becomes a necessity

As artificial intelligence technology advances rapidly, the major technology companies have poured huge sums into in-house AI chip programs. Nvidia's chips dominate the market, but as demand keeps climbing, tight supply and high prices have become obstacles to further growth. Developing their own AI chips has therefore become Silicon Valley's route to higher performance at lower cost.

Nvidia's chips perform well, but capacity and pricing problems have long troubled the big technology companies. To address them, many have accelerated their embrace of in-house silicon; Microsoft, Google, Amazon, Meta, and others have all made measurable progress.

Google was among the earliest to pursue a custom-chip program. Its Tensor chips are already deployed in Google's AI systems, with performance reportedly up to 370% better than the previous generation. Google also reportedly plans to drop Broadcom from its AI chip supplier list by 2027 to cut procurement costs. Its TPU line has been in development for years, and a move toward full supply-chain independence cannot be ruled out.

Amazon is an old hand at custom silicon, with a complete in-house chip portfolio spanning networking chips, server chips, and AI chips. Its products include general-purpose compute chips, machine-learning training chips, and inference chips, far exceeding the other Silicon Valley players in both number and coverage. AWS's Trainium, among the earliest dedicated AI chips released by a major vendor, has helped AWS win customers worldwide.

Meta is a latecomer to custom chips, but it too is actively building silicon: it has launched the custom MTIA v1 accelerator, partnered with Qualcomm, and reorganized its R&D team. With the big companies' chip programs all accelerating, the market structure will keep shifting.

Replacing Nvidia, however, is not easy. The computing power of Silicon Valley's in-house chips keeps improving, yet a gap remains. Nvidia also commands a complete AI software and hardware ecosystem, an advantage the others can hardly match. Going forward, the big technology companies will have to seek cooperation even as they compete, jointly advancing AI chip technology.

Jensen Huang appeared at Microsoft Ignite to promote the H100-based NC H100 v5 virtual machines, an AI-foundry-style service that helps Azure customers and partners develop large language models. Microsoft Azure is also using AMD's MI300X to accelerate virtual machines, alongside the latest GPUs, to improve AI model training and inference speed.

At the conference on the 15th, Microsoft also announced Models as a Service (MaaS), opening API access so users can deploy open-source models of their choice in the cloud. Meta and other companies will reportedly join the open-source lineup, and well-known large models such as Llama 2 will be opened to third parties, backed by Nvidia's computing power.
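As a rough illustration of what "model as a service" means in practice, the sketch below assembles a chat-completion-style request for a hosted open-source model. The endpoint URL, model name, and payload schema here are hypothetical placeholders for illustration only, not Microsoft's documented API.

```python
import json

# Hypothetical MaaS-style endpoint; a placeholder, not a real Azure URL.
ENDPOINT = "https://example-maas-endpoint.invalid/v1/chat/completions"

def build_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble a chat-completion-style payload for a hosted model.

    The schema mirrors the common chat-completion convention; the actual
    service may differ.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("llama-2-70b-chat", "Summarize the Ignite keynote.")
print(json.dumps(payload, indent=2))
```

The appeal of this model for customers is that they rent the model behind an HTTP endpoint instead of provisioning GPU clusters themselves, which is exactly the kind of demand the in-house chips are meant to serve.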

Nadella and Huang may appear to cooperate smoothly on the surface, but their relationship may not be so simple. The tension between large companies and startups has a long history, and the big players' push into self-developed AI chips has acted as a catalyst. To survive, startups will have to show real capability.

Wave Computing was once a red-hot AI chip unicorn that claimed it would catch up with Nvidia, asserting that its DPU could accelerate neural-network training 1,000 times faster than Nvidia's GPUs. But even where its products beat Nvidia's GPUs on certain parameters, the advantage did not translate into adoption: the DPU lacked a general-purpose computing architecture, could not be adapted to different application scenarios, and had no sizable developer base. In the end, Wave Computing burned through its investors' capital and went into bankruptcy liquidation.

A quiet OpenAI update a few days ago left some AI startups feeling the end was near, with some foreign media writing that OpenAI is "killing" generative-AI startups. The survival pressure on large-model and AI-chip startups is clearly immense: R&D difficulty and high operating costs can crush them at any moment.

Caught in the tension between large companies and startups, the startups must bring real capability to find living space in the gaps between the giants. They need innovative technology and business models to stand out, and they must also build strong partnerships to secure resources and support.

In short, although large companies and startups compete and clash, the two sides can also cooperate for mutual benefit. Through partnerships, shared resources, and technological innovation, they can jointly advance artificial intelligence and create new business opportunities and value.



