Though the race to power the huge ambitions of AI companies may seem to be all about Nvidia, there is real competition in AI accelerator chips. The latest example: at Intel's Vision 2024 event this week in Phoenix, Ariz., the company gave the first architectural details of its third-generation AI accelerator, Gaudi 3.
With the predecessor chip, Gaudi 2, the company had touted performance near parity with Nvidia's top chip of the time, the H100, and claimed a superior price-to-performance ratio. With Gaudi 3, Intel is pointing to large-language-model (LLM) performance where it can claim outright superiority. But looming in the background is Nvidia's next GPU, the Blackwell B200, expected to arrive later this year.
Gaudi Architecture Evolution
Gaudi 3 doubles down on its predecessor Gaudi 2's architecture, literally in some cases. Instead of Gaudi 2's single chip, Gaudi 3 is made up of two identical silicon dies joined by a high-bandwidth connection. Each has a central region of 48 megabytes of cache memory. Surrounding that is the chip's AI muscle: four engines for matrix multiplication and 32 programmable units called tensor processor cores. All of that is surrounded by connections to memory and capped with media processing and network infrastructure at one end.
Intel says all of that combines to produce double the AI compute of Gaudi 2 when using the 8-bit floating-point format that has emerged as key to training transformer models. It also provides a fourfold boost for computations using the BFloat16 number format.
Gaudi 3 LLM Performance
Intel projects a 40 percent faster training time for the GPT-3 175B large language model versus the H100, and even better results for the 7-billion- and 8-billion-parameter versions of Llama 2.
For inferencing, the contest was much closer, according to Intel: the new chip delivered 95 to 170 percent of the H100's performance for two versions of Llama. For the Falcon 180B model, though, Gaudi 3 achieved as much as a fourfold advantage. Unsurprisingly, the advantage was smaller against the Nvidia H200: 80 to 110 percent for Llama and 3.8x for Falcon.
Intel claims more dramatic results when measuring power efficiency, where it projects as much as 220 percent of the H100's value on Llama and 230 percent on Falcon.
"Our customers are telling us that what they find limiting is getting enough power to the data center," says Intel's Habana Labs chief operating officer Eitan Medina.
The power-efficiency results were best when the LLMs were tasked with delivering a long output. Medina puts that advantage down to the Gaudi architecture's large-matrix math engines, which are 512 bits across. Other architectures use many smaller engines to perform the same calculation, but Gaudi's supersize version "needs almost an order of magnitude less memory bandwidth to feed it," he says.
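Medina's bandwidth claim follows from a standard arithmetic-intensity argument, which a back-of-envelope calculation can illustrate. This is a sketch, not Intel's own figures: the tile dimensions below are hypothetical, chosen only to show that a square matmul tile performs work proportional to the cube of its dimension while moving data proportional to the square, so bytes of traffic per FLOP fall as the engine gets wider.

```python
# Why a wider matrix engine needs less memory bandwidth per unit of compute.
# A d x d matmul tile performs ~2*d**3 FLOPs (multiply-accumulates) but moves
# only ~3*d**2 elements (read the A and B tiles, write the C tile), so
# traffic per FLOP scales as 1/d.

def bytes_per_flop(tile_dim: int, bytes_per_element: int = 1) -> float:
    """Approximate off-chip traffic per FLOP for a square matmul tile."""
    flops = 2 * tile_dim ** 3
    traffic = 3 * tile_dim ** 2 * bytes_per_element
    return traffic / flops

small = bytes_per_flop(64)    # many small engines, e.g. 64-wide tiles
large = bytes_per_flop(512)   # one 512-wide engine

print(f"64-wide tile : {small:.4f} bytes/FLOP")
print(f"512-wide tile: {large:.4f} bytes/FLOP")
print(f"ratio: {small / large:.0f}x less bandwidth per FLOP")
```

Under these assumptions an eightfold-wider engine needs roughly an eighth of the bandwidth per FLOP, consistent with the "almost an order of magnitude" Medina cites.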
Gaudi 3 Versus Blackwell
It's speculation to compare accelerators before they're in hand, but there are a couple of data points to compare, particularly in memory and memory bandwidth. Memory has always been important in AI, and as generative AI has taken hold and popular models reach tens of billions of parameters in size, it's become even more crucial.
Both make use of high-bandwidth memory (HBM), which is a stack of DRAM dies atop a control chip. In high-end accelerators, it sits inside the same package as the logic silicon, surrounding it on at least two sides. Chipmakers use advanced packaging, such as Intel's EMIB silicon bridges or TSMC's chip-on-wafer-on-silicon (CoWoS), to provide a high-bandwidth path between the logic and memory.
As the chart shows, Gaudi 3 has more HBM than the H100, but less than the H200, B200, or AMD's MI300. Its memory bandwidth is also superior to the H100's. Possibly of importance to Gaudi's price competitiveness, it uses the less expensive HBM2e versus the others' HBM3 or HBM3e, which are thought to account for a significant fraction of the tens of thousands of dollars the accelerators reportedly sell for.
One more point of comparison is that Gaudi 3 is made using TSMC's N5 (sometimes called 5-nanometer) process technology. Intel has mostly been a process node behind Nvidia for generations of Gaudi, so it has been stuck comparing its latest chip to one that was at least one rung higher on the Moore's Law ladder. With Gaudi 3, that part of the race is narrowing slightly. The new chip uses the same process as the H100 and H200. What's more, instead of moving to 3-nm technology, the coming competitor Blackwell is done on a process called N4P. TSMC describes N4P as being in the same 5-nm family as N5 but delivering an 11 percent performance boost, 22 percent better efficiency, and 6 percent higher density.
In terms of Moore's Law, the big question is what technology the next generation of Gaudi, currently code-named Falcon Shores, will use. So far, the product has relied on TSMC technology while Intel gets its foundry business up and running. But next year Intel will begin offering its 18A technology to foundry customers and will already be using 20A internally. These two nodes bring the next generation of transistor technology, nanosheets, with backside power delivery, a combination TSMC is not planning until 2026.
