By Max A. Cherney
SAN FRANCISCO (Reuters) - Broadcom’s chip unit unveiled on Tuesday a new networking processor that aims to speed up artificial intelligence data crunching, which requires stringing together hundreds of chips to work in concert.
The new chip is the latest piece of hardware that Broadcom has brought to bear against rival AI giant Nvidia. Broadcom helps Alphabet’s Google produce its AI chips, which developers and industry experts regard as one of the few viable alternatives to Nvidia’s powerful graphics processing units (GPUs).
Dubbed the Tomahawk Ultra, Broadcom’s chip acts as a traffic controller for data whizzing between dozens or hundreds of chips that sit relatively close together inside a data center, such as within a single server rack.
The chip aims to compete with Nvidia’s NVLink Switch chip, which serves a similar purpose, but the Tomahawk Ultra can tie together four times the number of chips, Ram Velaga, a Broadcom senior vice president, told Reuters in an interview. And instead of a proprietary protocol to move the data, it uses a boosted-for-speed version of Ethernet.
Both companies’ chips help data center builders and others tie as many chips as possible together within a few feet of each other, a technique the industry calls “scale-up” computing. By ensuring nearby chips can communicate with one another quickly, software developers can summon the computing horsepower necessary for AI.
Taiwan Semiconductor Manufacturing will manufacture the Ultra line of processors with its five-nanometer process, Velaga said. The processor is now shipping.
It took Broadcom’s teams of engineers roughly three years to develop the design, which was originally built for a segment of the market known as high-performance computing. But as generative AI boomed, Broadcom adapted the chip for use by AI companies because it is suited to scaling up.
(Reporting by Max A. Cherney in San Francisco; Editing by Leslie Adler)