Microsoft doesn’t want to be hollowed out by OpenAI

Zuckerberg has been in a good mood lately. Threads, Meta's new Twitter-style social platform, accumulated 100 million users in less than five days, making it the fastest new social platform ever to break the 100-million-user mark, and a forceful punch in Musk's face. But there is no referee in this boxing ring to call a halt: Zuckerberg has said the target for Threads is 1 billion users, roughly twice Twitter's current size, and only after reaching that goal will Meta seriously consider how Threads makes money.

Zuckerberg is serious. Both the angry users who resented Musk's version of Twitter and the vast network of business organizations the Meta empire has accumulated over the years flocked to Threads in a very short time. And among the myriad migrating users was one of Silicon Valley's most powerful bald heads: Microsoft CEO Satya Nadella.
Nadella used to be a venerable Twitter Blue verified account with 3.07 million followers, retweeting all sorts of Microsoft PR releases every day like an AI without emotion. This time, however, he registered a Threads account and enthusiastically announced the "big-model marriage" between Microsoft and Meta: Microsoft's Azure cloud will help Meta train and promote the Llama large model, and Llama will come to Azure's model catalog and be adapted to Windows. As the official announcements from both parties put it, Azure has been purpose-built at the facility, hardware, and software levels to support world-leading AI training, and through this partnership Llama developers will be able to use Azure AI's tools for training, fine-tuning, inference, and safety.

Cloud partnerships between giants are commonplace. But the world knows that Azure is OpenAI's royal cloud platform. After Microsoft invested $1 billion in OpenAI in 2019, Azure became OpenAI's exclusive cloud provider, and it has reworked parts of its architecture to better supply compute and external services for large models. In the eyes of some observers, the core reason Microsoft invested in OpenAI back then was actually the development of Azure: the Transformer had not yet been fully validated at the time, but Microsoft had been betting on AI for years and had begun to frame the future of its cloud business as "a supercomputing facility for AI." Whether in compute support or in opening chatbot testing to the world alongside OpenAI, Azure deserves much of the credit.
Altman himself even once tweeted his gratitude for the Azure team's strong support, praising Microsoft as "the world's best AI infrastructure." No sooner had that praise landed than Nadella turned around and rented this "world's best AI infrastructure" to Zuckerberg. Whether the two discussed it in advance is unknown; for Nadella, perhaps, it was all part of the plan. In any case, Zuckerberg looks happy. On his Instagram account he posted a friendly photo with Nadella to thank him, saying that Meta has open-sourced Llama 2 to Microsoft, and that this open-sourcing will be the foundation for the next generation of large-model work.
Microsoft and Meta do need each other. Meta is heading into the "deep water" of large models: at 70B parameters, Llama 2 has reached roughly the level of GPT-3 and has become arguably the best-regarded open base model. For the closed-source camp, the pressure from Llama 2's success is no less than the shock Threads gave Twitter: what closed-source companies spend tens of millions of dollars to build, the open-source community can now use directly, effectively raising the starting line of the global open-source community to GPT-3 level.
FreeWilly, the Stability AI model that recently topped Hugging Face's open-source leaderboard and is considered close to GPT-3.5 level, is precisely a product fine-tuned from Llama 2. For Meta, the growing parameter scale brings climbing compute demands, and the model's gradual maturation also reveals better commercial potential. But to overcome those difficulties and realize that potential, Zuckerberg needs a more capable partner. And the things Azure already has but Meta does not matter even more to Llama: Azure's compute experience, Azure's AI toolkit, and the Azure cloud itself.
Meta is one of the few internet giants without a public cloud service. Meta and Amazon have long been each other's mega-customers, and some of Meta's AI R&D compute has been procured from AWS. Microsoft's move this time, besides opening up Windows scenarios, also opens Azure's enterprise channel and adds Llama 2 to its own product roster. Azure still lags AWS in overall market share, but it is significantly ahead of its competitors in SaaS sales, and with the convergence of cloud and SaaS, Microsoft has a real differentiator at the channel level. With Azure, Meta and its ecosystem followers can sell and use Llama 2 products directly through the cloud.
For Microsoft, the large-model challenge is much more multifaceted. At the application level, Microsoft has gone nearly all in on OpenAI products: from the earliest Bing integration, to the Windows Copilot, to the developer-facing AI Studio toolchain, and even the new Azure OpenAI cloud service brand, OpenAI's shadow is behind all of them.
OpenAI is the best large-model company in the world, and Windows remains the most important productivity software ecosystem in the world. But in the global large-model arms race, the combination of the two does not guarantee an absolute winner.
Announced almost simultaneously with the Microsoft-Meta pairing was Apple's large-model program. According to overseas media reports, Apple has completed the basic framework of a large language model called "Ajax" and will develop a conversational AI similar to ChatGPT, with a consumer-grade product planned for release next year. Apple's entry is considered an important variable in Silicon Valley's large-model race.
Unlike the public cloud and other internet waves, AI is a technology direction that Apple's management firmly endorses, and executives have recently signaled a trend of increasing investment in it. Beyond its long-term focus on AI technology, Apple is the world's richest company. Its annual net profit is close to $100 billion, and its net operating cash flow exceeds $120 billion, roughly the sum of Microsoft's and Meta's. Apple's ecosystem has surpassed Microsoft's as the world's largest closed operating system, with more than 2 billion active devices to Microsoft's 1.5 billion. And what makes Apple more intriguing than its balance sheet is its semiconductor capability: it may be one of the few tech companies in the large-model race that will not need to source GPUs and CPUs externally in the future. Beyond that, Apple's chip efficiency looks even more promising.

At WWDC 2023, Apple introduced the M2 Ultra chip. Compared with the common vendor approach of deploying CPU and GPU separately, the M2 Ultra's unified memory architecture and its ultra-high memory bandwidth can even let developers run large models on a single chip. Consumer-grade chips like this are not yet comparable to NVIDIA's professional ones, but this small show of muscle has made observers curious about Apple's ability to scale GPU compute in the future.
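A back-of-the-envelope sketch of why unified memory matters for this claim (assumptions: fp16 weights at 2 bytes per parameter, the 192 GB figure is the M2 Ultra's advertised maximum unified memory, and activations and KV cache are ignored for simplicity):

```python
# Rough memory-footprint estimate: can a model's weights fit in unified memory?
# fp16 is assumed (2 bytes per parameter); this ignores activations and KV cache.

def weights_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate size of a model's weights in gigabytes."""
    return num_params * bytes_per_param / 1e9

M2_ULTRA_UNIFIED_GB = 192  # maximum configurable unified memory on M2 Ultra

for name, params in [("Llama 2 7B", 7e9), ("Llama 2 70B", 70e9), ("GPT-3 175B", 175e9)]:
    size = weights_gb(params)
    print(f"{name}: ~{size:.0f} GB fp16 -> fits in unified memory: {size < M2_ULTRA_UNIFIED_GB}")
```

By this rough measure, a 70B model's fp16 weights (about 140 GB) fit inside a single M2 Ultra's memory pool, while no single consumer GPU's dedicated VRAM comes close; that is the appeal of the architecture.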
Take OpenAI as an example: outsiders estimated that it used about 20,000 GPUs computing simultaneously. But Wang Xiaochuan recently told the media that OpenAI is testing a model in which 10 million GPUs compute at once, equivalent to ten years of NVIDIA's current capacity, "completely at the level of a moon-landing program."

With Google fierce and Apple circling, Microsoft and Meta have chosen to align. For Nadella, standing with Meta makes Microsoft more secure in the ecosystem war over large models. First, Microsoft still needs open source, and open source will continue to play an important role in future large-model competition. Open source naturally has the ecosystem strengths of drawing in talent, iterating quickly, and covering downstream applications more efficiently. Although OpenAI holds the top spot, the open-source community's rate of progress remains impressive: Llama took only half a year, with 70B parameters, to catch up with the 175B-parameter GPT-3, which took two years. Especially if the open-source route becomes the industry's mainstream solution, the deep combination of Llama and Azure may really help Microsoft's cloud business overtake AWS on the curve (at the end of 2022, Azure's market share was 23% to AWS's 32%). After all, compared with Windows and Office, Azure is Microsoft's most profitable and most promising business.
Second, as open-source large models keep developing, fewer and fewer manufacturers will be willing to spend money on closed systems.
Bard, for example, came under considerable pressure after Llama 2. Beyond the many onlookers debating Bard's long-term prospects, media reports say Google insiders have written that, against the strengths of the open-source community, Bard struggles with the latter's rapid progress, lower cost, and richer scenarios.
Absent a change in the industry structure of massive large-model investment, closed large models will still have their rationale, but they will probably be limited to a very few leaders, and OpenAI is likely to be among them. If OpenAI has a moat, it may be called Llama 2. Apple, Microsoft, Google, Meta, Amazon, and other giants all have plans to develop their own AI chips, but Apple, which already has top-tier semiconductor development capability, remains the best positioned to "work miracles with brute force."

Of course, OpenAI is not actually Microsoft's "own son." Despite the $10 billion investment, Microsoft holds 75% of the profit rights but only 49% of OpenAI's equity. In other words, Microsoft commands a great deal of OpenAI's resources without having absolute control of OpenAI. The Llama partnership, however, looks like a sign that Microsoft, with OpenAI in hand, is becoming the rule-maker of the game: it owns the most promising AI infrastructure, Azure, and the most cutting-edge commercial interface, Windows Copilot. When the core infrastructure and channel capabilities are all in Microsoft's hands, OpenAI is merely one "super programmer" working for Microsoft. As Meta and other platforms mature, Microsoft can bring in more "programmers," and even open more system-level scenarios to the open-source ecosystem, further boosting the productivity of Windows.
In fact, OpenAI was originally just one option in Nadella's bet on AI and language models.
Before ChatGPT, Microsoft and NVIDIA even co-developed Megatron-Turing, a 530-billion-parameter large language model, then the largest Transformer-based model, with several times GPT-3's parameters, a textbook case of brute-force scaling. But in the end Megatron lost to Altman, so Microsoft chose to buy Altman instead. Internally, however, Microsoft has not given up developing its own large-model technology. In June, for example, it released phi-1, a 1.3-billion-parameter "small" large language model. With OpenAI as its core asset, Microsoft did not pursue the brute-force route here but switched to what it calls a "textbook-quality" high-quality dataset, producing results that beat the 100-billion-parameter-class GPT-3.5. In July, Microsoft also proposed RetNet, a new large-model architecture that it says can outperform the Transformer at larger data scales. The battle of large models is far from halftime, and the game between Megatron and Altman may have only just begun.
