Nvidia announces Blackwell Ultra and Vera Rubin AI chips

  • 18.03.2025 16:47
  • nbcchicago.com
  • Keywords: AI

Nvidia announced Blackwell Ultra AI chips for building and deploying AI models, and the Vera Rubin GPU family for next-generation AI systems, set to launch later this year and in 2026, respectively. The move to annual updates reflects Nvidia's response to surging demand for AI computing power.


Context

Nvidia Announces Blackwell Ultra and Vera Rubin AI Chips: Business Insights and Market Implications

Key Announcements

  • Blackwell Ultra:
    • New chip family announced for building and deploying AI models.
    • Expected to ship in the second half of 2025.
    • Capable of generating more tokens per second, enabling cloud providers to offer premium AI services.
    • Revenue potential: 50 times that of the Hopper generation (launched in 2023).
    • Available in versions:
      • GB300 (paired with Nvidia Arm CPU).
      • B300 (single GPU version).
      • Rack version with 72 Blackwell chips.
  • Vera Rubin:
    • Next-generation GPU family expected to ship in the second half of 2026.
    • System components:
      • Vera: Custom CPU design, twice as fast as the CPU used in the Grace Blackwell chips (announced in 2024).
      • Rubin: New GPU design capable of managing 50 petaflops during inference, double the performance of current Blackwell chips.
    • Memory capacity: Up to 288 gigabytes of fast memory.
    • Architecture:
      • Rubin is two GPUs combined into one chip.
      • Future plans: "Rubin Next" in late 2027 will combine four dies into one chip, doubling speed.

Market Impact

  • Sales Growth: Nvidia's sales have surged over sixfold since the release of OpenAI's ChatGPT in late 2022, driven by demand for its GPUs in AI training.
  • Cloud Provider Focus: Major cloud providers are Nvidia's most important customers, and their spending on Nvidia chips is closely monitored by investors.
  • Annual Release Cadence: Nvidia is transitioning to an annual chip release strategy, a response to the hyper-accelerated demand for AI computing power.

Competitive Dynamics

  • DeepSeek R1 Model: While DeepSeek's model initially raised concerns due to its efficiency, Nvidia sees it as an opportunity. Blackwell Ultra chips are optimized for reasoning models, which require more computing power.
  • Strategic Advantage: Nvidia's focus on inference and reasoning capabilities positions it to handle the growing demand for advanced AI models.

Strategic Considerations

  • Partnerships and Ecosystem: The GTC conference (with an expected 25,000 attendees) highlights Nvidia's broad ecosystem of partners, including Microsoft and Waymo.
  • Future Roadmap:
    • Next chip architecture named after physicist Richard Feynman, expected in 2028.
    • Continued investment in AI-focused hardware and software solutions (e.g., Dynamo package for optimizing GPU performance).

Long-Term Effects

  • AI Training Dominance: Nvidia's leadership in AI training is solidified, with cloud companies deploying three times as many Blackwell chips as Hopper chips.
  • Regulatory and Industry Impact: While not explicitly mentioned, the rapid evolution of AI hardware may influence future regulatory considerations for semiconductor and AI industries.

Conclusion

Nvidia's announcements underscore its commitment to innovation in AI hardware, driven by the exploding demand for advanced AI models. The annual release cadence, combined with strategic partnerships and ecosystem development, positions Nvidia to maintain its dominance in the AI chip market.