Nvidia unveils its latest AI GPUs: Blackwell Ultra, Vera Rubin, and Rubin Ultra

  • 19.03.2025 07:07
  • businesstoday.in
  • Keywords: AI

Nvidia has unveiled its next-generation AI GPUs at GTC 2025: Blackwell Ultra, Vera Rubin, and Rubin Ultra. The new processors promise steep generational gains in AI computing performance and reflect Nvidia's growing focus on its data center business over gaming.

Analysis and Summary: Nvidia's Latest AI GPU Launches

Market Impact

  • AI Computing Demand: Jensen Huang highlighted a 100x increase in demand for AI computing power compared to last year, signaling exponential growth in the AI sector.
  • Profit Shift: Nvidia's data center business now surpasses its gaming GPUs, with profits running at roughly $2,300 per second (a back-of-envelope annualization of that figure follows this list).
  • Global Adoption: Top buyers have already purchased 1.8 million Blackwell GPUs, with $11 billion in revenue generated in 2025 alone.
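
As a rough sanity check on the profit figure above, the sketch below simply annualizes $2,300 per second. The only assumption is a plain 365-day year; the result is an illustration of scale, not a reported financial metric.

```python
# Back-of-envelope check: what does "$2,300 of profit per second" imply per year?
# Assumption: a plain 365-day year; illustrative only, not a reported figure.
PROFIT_PER_SECOND_USD = 2_300
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

annual_profit_usd = PROFIT_PER_SECOND_USD * SECONDS_PER_YEAR
print(f"Implied annual profit: ${annual_profit_usd / 1e9:.1f} billion")
# Implied annual profit: $72.5 billion
```

The implied total of roughly $72.5 billion a year is consistent with the order of magnitude of Nvidia's recent annual profits, which is likely where the per-second framing comes from.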

Product Launches

Blackwell Ultra GB300 (H2 2025)

  • Performance: 20 petaflops of FP4 inference.
  • Memory: 288GB HBM3e memory (up from 192GB).
  • Enterprise Cluster: DGX GB300 Superpod delivers 11.5 exaflops and 300TB memory.

Vera Rubin (Late 2026)

  • Performance: 50 petaflops of FP4 inference, 2.5x the FP4 throughput of Blackwell Ultra (see the arithmetic sketch after this section).
  • Memory: 1TB HBM memory.

Rubin Ultra (2027)

  • Dual GPUs: Combines two Vera Rubins for 100 petaflops of FP4 and 2TB HBM memory.
  • NVL576 Rack System: Delivers 15 exaflops of FP4 inference and 5 exaflops of FP8 training.
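
The per-GPU FP4 figures quoted above imply the generational multiples Nvidia cites. The sketch below restates that arithmetic using only the numbers from this article (20, 50, and 100 petaflops); the labels are shorthand for this comparison, not official product names.

```python
# FP4 inference figures quoted in this article, in petaflops per GPU package.
fp4_petaflops = {
    "Blackwell Ultra (H2 2025)": 20,
    "Vera Rubin (late 2026)": 50,
    "Rubin Ultra (2027)": 100,
}

# Ratio of each generation to the one before it.
names = list(fp4_petaflops)
for prev, curr in zip(names, names[1:]):
    ratio = fp4_petaflops[curr] / fp4_petaflops[prev]
    print(f"{curr}: {ratio:.1f}x the FP4 throughput of {prev}")
# Vera Rubin (late 2026): 2.5x the FP4 throughput of Blackwell Ultra (H2 2025)
# Rubin Ultra (2027): 2.0x the FP4 throughput of Vera Rubin (late 2026)
```

The 2x step from Vera Rubin to Rubin Ultra follows directly from the article's statement that Rubin Ultra combines two Vera Rubin GPUs.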

New Hardware

DGX Station (Desktop)

  • Features: Single GB300 GPU, 784GB unified system memory, 800Gbps networking, 20 petaflops AI performance.

NVL72 Rack System

  • Performance: 1.1 exaflops FP4 computing, 20TB HBM memory, 14.4TB/sec networking.

Competitive Landscape

  • Market Leadership: Nvidia continues to dominate the AI GPU market with its advanced architectures and performance improvements.
  • Strategic Focus: The company’s shift to prioritize data center solutions over gaming GPUs underscores its commitment to AI innovation.

Long-Term Strategy

  • Future Roadmap: Next-generation architecture named Feynman (after physicist Richard Feynman) is confirmed for 2028, signaling Nvidia’s long-term focus on AI advancements.
  • Performance Claims: New architectures promise dramatic reductions in AI processing times, with the NVL72 cluster reducing response times by 10x compared to 2022 H100 GPUs.

Regulatory and Industry Implications

  • Demand Growth: The rapid adoption of AI across industries is driving unprecedented demand for high-performance computing solutions.
  • Investment in Innovation: Nvidia’s aggressive product launches indicate significant R&D investment, further solidifying its position as a market leader.