Nvidia is using misleading practices and abusing its market dominance to quash competitors, according to Cerebras Systems CEO Andrew Feldman, after the firm unexpectedly announced its latest GPU product roadmap in October 2023.
Nvidia outlined new graphics cards set for annual release between 2024 and 2026, adding to the industry-leading A100 and H100 GPUs currently in such high demand, with organizations across the business world snapping them up for generative AI workloads.
But Feldman, speaking to HPCWire, labelled this news a "predatory pre-announcement," highlighting that the firm has no obligation to follow through on releasing any of the components it has teased. By doing this, he speculated, it has only confused the market, especially in light of the fact that Nvidia was, say, a year late with the H100 GPU. And he doubts Nvidia can follow through on this strategy, nor might it want to.
Nvidia is just 'throwing sand up in the air'
Nvidia teased yearly leaps on a single architecture in its announcement, with the Hopper-Next following the Hopper GPU in 2024, followed by the Ada Lovelace-Next GPU, a successor to the Ada Lovelace graphics card, set for release in 2025.
"Companies have been making chips for a long time, and nobody has ever been able to succeed on a one-year cadence because the fabs don't change at a one-year pace," Feldman countered to HPCWire.
"In many ways, it has been a terrible block of time for Nvidia. Stability AI said they were going to go on Intel. Amazon said that Anthropic was going to run on them. We announced a monstrous deal that will produce enough compute so it would be clear that you could build… large clusters with us.
"[Nvidia's] response, not surprising to me, in the strategy realm, is not a better product. It's… throw sand up in the air and move your arms a lot. And you know, Nvidia was a year late with the H100."
Feldman designed the world's largest AI chip, the Cerebras Wafer-Scale Engine 2 – which measures 46,226 square millimeters and contains 2.6 trillion transistors across 850,000 cores.
He told the New Yorker that big chips are better than smaller ones because cores communicate faster when they're on the same chip rather than being scattered across a server room.