Finance

Broadcom Challenges Nvidia's AI Dominance


Last Friday marked an extraordinary event in the U.S. stock market, characterized by the phrase “Buy Broadcom, Sell Nvidia.” Broadcom's stock surged by an impressive 27%, its largest single-day gain on record, pushing its market capitalization past $1 trillion. In contrast, shares of chip giant Nvidia fell 3.3%. This dramatic shift in fortunes was ignited by an audacious projection from Hock Tan, Broadcom's CEO, during the company's earnings call: he forecast that the market for customized AI chips, specifically application-specific integrated circuits (ASICs), could reach $60 billion to $90 billion by 2027.

This revelation sparked significant investor enthusiasm. Analysts noted that if this projection materializes, Broadcom's ASIC-linked AI business could roughly double annually over the next three years (2025-2027). Such staggering growth potential significantly raises market expectations for ASICs, suggesting these chips may be on the brink of a transformative phase.
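As a rough sanity check on the "doubling annually" reading, three consecutive doublings amount to an 8x increase, which lets one back-solve an implied starting point from the 2027 targets. The base figures produced below are illustrative assumptions derived from the forecast, not numbers reported in the article:

```python
# Back-of-the-envelope check: doubling each year for three years is an
# 8x increase (2**3). Given the $60B-$90B 2027 targets, we can back out
# the implied starting revenue. These base figures are illustrative only.

def implied_base(target_2027: float, annual_growth: float, years: int = 3) -> float:
    """Starting revenue that reaches target_2027 at the given annual growth rate."""
    return target_2027 / (1 + annual_growth) ** years

low = implied_base(60e9, 1.0)   # doubling = 100% annual growth
high = implied_base(90e9, 1.0)
print(f"Implied starting base: ${low/1e9:.2f}B to ${high/1e9:.2f}B")
```

At 100% annual growth, the $60B-$90B range implies a starting base of roughly $7.5B-$11.25B, which is the scale of business the "double annually" interpretation presumes.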

However, discussions around AI training models highlight a critical concern: the supply of training data is being depleted, and scaling existing models is yielding diminishing marginal returns.

The pre-training phase of AI models, which involves continuously feeding data into the models for iteration and improvement, may be reaching a crossroads.

In pursuit of superior model performance, industry leaders have raced to acquire top-performing Nvidia GPUs, driven by the principle known as the “scaling law,” which holds that larger data, compute, and model-parameter scales yield better performance. Yet intense, large-scale training risks exhausting global data reserves, and coupled with the high costs of computing power, this situation has sparked debate about whether the pre-training phase of AI is coming to an end.
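The scaling-law idea is often expressed as a power law: loss falls smoothly as compute grows, but each additional order of magnitude buys a smaller absolute improvement. The sketch below illustrates only the shape of that curve; the constant and exponent are made-up placeholders, not the published coefficients from any scaling-law paper:

```python
# Illustrative power-law scaling: loss = constant * compute**(-alpha).
# The constant and exponent here are hypothetical placeholders chosen to
# show the shape of the curve, i.e. each 10x of compute shaves off a
# smaller absolute amount of loss (diminishing returns).

def loss(compute: float, constant: float = 10.0, alpha: float = 0.05) -> float:
    """Hypothetical loss as a power law in training compute (FLOPs)."""
    return constant * compute ** (-alpha)

for c in (1e21, 1e22, 1e23, 1e24):
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```

Running this shows the loss dropping at every step but by shrinking increments, which is the "diminishing marginal returns" the article points to: past some point, buying another 10x of GPUs no longer buys a proportional gain.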

Recently, at the NeurIPS 2024 conference, Ilya Sutskever, a co-founder of OpenAI, expressed a sentiment that resonated deeply within the AI community: the pre-training era could soon be coming to a close.

He referred to data as the fossil fuel of AI, asserting that the amount currently utilized for AI pre-training has reached its peak.

Noam Brown, another prominent figure from OpenAI, echoed these sentiments, emphasizing that the extraordinary advances in AI from 2019 to the present stem largely from the expansion of data and computational resources. However, he pointed out an unsettling paradox: despite these leaps, large language models can struggle with problems as simple as Tic-Tac-Toe.

This raises a crucial question: is scaling the only path forward? Do we genuinely need to expend ever-greater resources to build better AI? As these questions linger, attention has shifted toward the reasoning (inference) phase of AI, the natural progression after pre-training.

In this context, the reasoning phase refers to building AI applications for specialized vertical fields on top of the capabilities of existing large models.

Amidst evolving market conditions, AI agents such as Google's Gemini 2.0 and OpenAI's o1 have emerged as focal points for numerous companies, representing a shift toward practical applications of AI. As AI models mature, many observers posit that ASICs may gradually take over the traditional role of GPUs, becoming the favored choice among AI firms.

The optimistic ASIC-market forecast from Broadcom's CEO partially validates market anticipation of this shift in AI paradigms, which in turn catalyzed last Friday's surge in the stock price.

So, what exactly is an ASIC? Fundamentally, semiconductors can be classified into standard semiconductors and ASICs. Standard semiconductors adhere to established specifications and can be used across a wide range of electronic devices as long as basic requirements are met, while ASICs are custom-designed semiconductors tailored to the needs of a specific product.

Consequently, ASICs are used in uniquely designed and manufactured devices, fulfilling the specific functions those devices require.


This divergence has paved two pathways for AI computation: one championed by Nvidia's GPUs, suitable for general-purpose high-performance computing, and another led by ASICs, focusing on customized solutions.

As a quintessential standard semiconductor product, GPUs excel at handling vast, parallel computing tasks. However, they run into limitations, such as the “memory wall,” when processing extensive matrix multiplications. In stark contrast, ASICs, designed for specific tasks, can mitigate this limitation and promise a better cost-performance ratio, especially once mass production is achieved.
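The memory-wall point can be made concrete with a standard arithmetic-intensity calculation: an operation becomes memory-bound when the FLOPs it performs per byte it moves fall below the hardware's ratio of peak compute to memory bandwidth. The sketch below uses hypothetical hardware figures (100 TFLOP/s, 2 TB/s), not the specs of any particular chip:

```python
# Arithmetic intensity of a matrix multiply: FLOPs per byte moved.
# When intensity < (peak FLOP/s) / (memory bandwidth), the chip stalls
# waiting on memory -- the "memory wall". Hardware numbers are illustrative.

def matmul_intensity(m: int, n: int, k: int, bytes_per_elem: int = 2) -> float:
    """FLOPs per byte for C[m,n] = A[m,k] @ B[k,n], assuming each matrix
    is read or written exactly once (ideal caching)."""
    flops = 2 * m * n * k  # one multiply + one add per (m, n, k) triple
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return flops / bytes_moved

# Hypothetical accelerator: 100 TFLOP/s peak compute, 2 TB/s bandwidth.
machine_balance = 100e12 / 2e12  # 50 FLOPs must arrive per byte to stay busy

for size in (128, 1024, 8192):
    ai = matmul_intensity(size, size, size)
    verdict = "compute-bound" if ai >= machine_balance else "memory-bound"
    print(f"{size}x{size} matmul: {ai:.0f} FLOPs/byte -> {verdict}")
```

For square matrices the intensity works out to roughly n/3 FLOPs per byte, so small or skinny matrix multiplies fall below the machine balance and hit the memory wall, while large ones keep the compute units fed; fixed-function ASIC datapaths attack the same problem by tailoring on-chip memory and data movement to one workload.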

To summarize, GPUs benefit from a mature product and industry supply chain, while ASICs win on specificity and efficiency, achieving higher processing speeds and lower energy consumption in single-task operations. This makes ASICs ideal candidates for inference at the edge.

As GPU supplies dwindle and prices soar, various tech giants are now delving into self-developed ASIC chips for their own use.

Observers regard Google as a trailblazer in this domain, having launched its first-generation TPU (an ASIC) back in 2015. Other pioneering examples include Amazon's Trainium and Inferentia, Microsoft's Maia, Meta's MTIA, and Tesla's Dojo.

Amidst this evolution, two chip-design powerhouses, Marvell and Broadcom, have long dominated the upstream supply chain for custom AI chips. Marvell's ascent stems from strategic pivots since CEO Matt Murphy took the helm in 2016: seizing a moment of corporate restructuring, Murphy redirected the company's focus toward customizing chips for tech giants, successfully capitalizing on the AI wave.

In addition to its major clients, Google and Microsoft, Marvell recently inked a five-year agreement with Amazon AWS to help craft its proprietary AI chips. Industry insiders predict that this partnership will drive substantial growth in Marvell's custom AI chip division in the upcoming fiscal year.

Broadcom, a fierce competitor to Marvell, also holds significant contracts with giants like Google, Meta, and ByteDance.
