No One Can Beat Nvidia: It’s Set to Earn $10 Billion for Manufacturing 450,000 Blackwell GPUs

  • The B200 GPU contains 208 billion transistors and works with up to 192 GB of VRAM.

  • Nvidia intends to manufacture 450,000 B200 GPUs in the fourth quarter of 2024.


Nvidia isn’t the only manufacturer of GPUs for AI applications on the market. AMD, Intel, and Huawei are some of its main competitors, but none of them has come close to overshadowing it, at least for now.

Nvidia’s dominance is overwhelming. According to tech outlet HPCWire, Nvidia reached a market share of 98% in 2022 and 2023. Although this figure only covers data center GPUs, it’s spectacular. In practice, it amounts to a de facto monopoly in the industry.

More importantly, the near-term future looks promising. According to analysts at Morgan Stanley, Nvidia expects to produce no less than 450,000 Blackwell GPUs in the fourth quarter of this year, thanks to its strong alliance with TSMC. If it sells them all, which seems reasonable to expect, the company will earn more than $10 billion from this alone.
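As a quick sanity check on that figure, here’s the back-of-the-envelope arithmetic in a short Python sketch. The per-unit price range is an assumption based on widely reported Blackwell pricing, not an official Nvidia list price:

```python
# Back-of-the-envelope check on the "$10 billion" estimate.
# The price range is an assumption: figures of roughly $30,000-$40,000 per
# Blackwell GPU have been reported, but Nvidia hasn't published a list price.

UNITS_Q4 = 450_000                                      # planned B200 output for Q4 2024
ASSUMED_PRICE_LOW, ASSUMED_PRICE_HIGH = 30_000, 40_000  # USD per GPU (assumed)

revenue_low = UNITS_Q4 * ASSUMED_PRICE_LOW
revenue_high = UNITS_Q4 * ASSUMED_PRICE_HIGH
price_to_clear_10b = 10_000_000_000 / UNITS_Q4

print(f"Revenue at assumed prices: ${revenue_low / 1e9:.1f}B to ${revenue_high / 1e9:.1f}B")
print(f"Average price needed to clear $10B: ${price_to_clear_10b:,.0f} per GPU")
```

At those assumed prices, the total would land between $13.5 billion and $18 billion, and clearing $10 billion would only require an average of about $22,000 per GPU, so the estimate looks conservative rather than optimistic.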

However, Nvidia CEO Jensen Huang and his engineers haven’t had an easy few months, for one reason: Manufacturing the B200 GPU is no small feat.

An AI Beast That Is Far From Easy to Produce

When Nvidia unveiled its B200 chip for AI applications in March 2024, it was clear that it had a real powerhouse on its hands. Its specs are impressive: 208 billion transistors, the next-generation Blackwell architecture, and up to 20 petaflops of FP4 compute in its liquid-cooled configuration. It also works with up to 192 GB of VRAM and reaches a memory bandwidth of 8 TB/s.

Interestingly, the first B200 units have been slow to arrive. Nvidia has admitted that its manufacturing yields were lower than expected, forcing its engineers to redesign some layers of the chip to address a problem that threatened to delay initial deliveries. The company said in a statement, “We executed a change to the Blackwell GPU mask to improve production yield.”

In any case, the modifications appear to have been successful. As mentioned earlier, mass production of the chip will begin in the fourth quarter of this year.

In an August blog post, Nvidia shared initial benchmarks. Experts expected the B200 to be extremely powerful, and the numbers suggest it will be the most capable AI GPU on the market. While it’s wise to approach Nvidia’s own figures with some skepticism, they do give a rough idea of the chip’s capabilities.

According to the initial MLPerf Inference 4.1 results, the new B200 offers four times the performance of its predecessor, the H100, which is built on the Hopper architecture. If accurate, that’s a substantial performance gap.

Additionally, Nvidia says the B200 achieves 10,755 tokens per second in the server inference scenario and 11,264 tokens per second in the offline scenario. Evidently, the B200 is set to be the leading AI chip, but you can expect AMD, Huawei, and other companies to respond soon.

Image | Nvidia

Related | This Is How NVIDIA Makes Money: A Breakdown of a Company That Began in Gaming and Now Dominates Data Centers in the Era of AI
