
Intel and NVIDIA Partner on Hybrid Rack-Scale AI Platform

·552 words·3 mins
Intel NVIDIA Gaudi 3 Blackwell AI Infrastructure Rack-Scale Data Center Ethernet

Intel and NVIDIA Collaborate on Hybrid AI Rack Solution

At the OCP Global Summit 2025, Intel announced a major initiative: a hybrid rack-scale server platform that integrates Intel Gaudi 3 AI accelerators with NVIDIA Blackwell GPUs. This partnership marks a significant pivot in Intel’s AI hardware strategy — moving away from direct competition toward collaborative system integration aimed at enhancing AI infrastructure efficiency and market reach.

Gaudi 3 Rack Scale Solution: Architecture Overview

The platform, officially named the Gaudi 3 Rack Scale Solution, features a modular rack design comprising multiple compute trays and switch trays.
Each compute tray includes:

  • 2× Intel Xeon processors
  • 4× Gaudi 3 AI accelerators
  • 4× NVIDIA ConnectX-7 400GbE NICs
  • 1× BlueField-3 DPU

The rack accommodates 16 compute trays, interconnected via a Broadcom Tomahawk 5 switch delivering up to 51.2 Tb/s bandwidth. The architecture prioritizes Ethernet-based scaling, enabling high-bandwidth, low-latency communication optimized for AI inference workloads.
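The fabric numbers above can be sanity-checked with quick arithmetic: 16 trays with four 400GbE NICs each yields 25.6 Tb/s of aggregate host bandwidth, roughly half the Tomahawk 5's 51.2 Tb/s switching capacity. A minimal sketch (figures are from the spec above; reading the gap as "2x headroom" is our own interpretation):

```python
# Back-of-the-envelope check of the rack's Ethernet fabric,
# using the tray and NIC counts from the article.
TRAYS_PER_RACK = 16
NICS_PER_TRAY = 4               # ConnectX-7 400GbE NICs per compute tray
NIC_SPEED_GBPS = 400            # per-NIC line rate, Gb/s
SWITCH_CAPACITY_GBPS = 51_200   # Broadcom Tomahawk 5: 51.2 Tb/s

aggregate_nic_gbps = TRAYS_PER_RACK * NICS_PER_TRAY * NIC_SPEED_GBPS
print(f"Aggregate NIC bandwidth: {aggregate_nic_gbps / 1000:.1f} Tb/s")   # 25.6 Tb/s
print(f"Switch headroom: {SWITCH_CAPACITY_GBPS / aggregate_nic_gbps:.1f}x")  # 2.0x
```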

Hybrid AI Execution: Blackwell and Gaudi in Tandem

Unlike traditional single-vendor systems, the Gaudi 3 Rack Scale Solution integrates NVIDIA's Blackwell B200 GPUs directly into the rack.
It employs a disaggregated inference model, splitting execution into two stages:

  • Blackwell B200 handles the prefill stage — the compute-intensive portion of large-model execution.
  • Gaudi 3 manages the decode stage — the latency-sensitive, high-concurrency phase.

This division of labor lets each architecture play to its strengths:
Blackwell maximizes matrix throughput during prefill, while Gaudi 3 leverages its memory bandwidth and Ethernet interconnects for efficient parallel decoding.
According to SemiAnalysis, this hybrid configuration achieves up to 1.7× higher prefill throughput compared to racks using only Blackwell GPUs.
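Conceptually, this split works like a scheduler that runs the compute-bound prefill on one accelerator pool, ships the resulting KV cache over the fabric, and streams the latency-sensitive decode loop on another pool. The sketch below is purely illustrative: the class names, KV-cache hand-off, and stub pools are our assumptions, not an actual Intel or NVIDIA API.

```python
# Illustrative sketch of prefill/decode disaggregation. All names here
# (HybridScheduler, StubPool, decode_step, ...) are hypothetical.
from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    max_new_tokens: int


class StubPool:
    """Toy stand-in for an accelerator pool (no real hardware involved)."""

    def prefill(self, prompt):
        # Toy "KV cache": just the prompt characters.
        return list(prompt)

    def decode_step(self, kv):
        # Emit one token per step and grow the cache.
        tok = f"t{len(kv)}"
        return tok, kv + [tok]


class HybridScheduler:
    def __init__(self, prefill_pool, decode_pool):
        self.prefill_pool = prefill_pool  # e.g. Blackwell B200 GPUs
        self.decode_pool = decode_pool    # e.g. Gaudi 3 accelerators

    def run(self, req: Request) -> list:
        # Stage 1: compute-bound prefill builds the KV cache on one pool.
        kv_cache = self.prefill_pool.prefill(req.prompt)
        # The KV cache is then handed to the decode pool (over Ethernet,
        # in the real system).
        tokens = []
        for _ in range(req.max_new_tokens):
            # Stage 2: latency-sensitive decode emits one token at a time.
            tok, kv_cache = self.decode_pool.decode_step(kv_cache)
            tokens.append(tok)
        return tokens


sched = HybridScheduler(prefill_pool=StubPool(), decode_pool=StubPool())
print(sched.run(Request(prompt="hi", max_new_tokens=3)))  # ['t2', 't3', 't4']
```

The design point this illustrates: because prefill and decode have different bottlenecks (compute vs. memory bandwidth and concurrency), routing them to different hardware pools can raise utilization of both.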

Strategic Implications: From Rivalry to Integration

Intel’s Gaudi platform has struggled to gain standalone traction in a market dominated by NVIDIA’s ecosystem. By embracing rack-level integration with Blackwell, Intel can now leverage CUDA, NVLink, and NVIDIA’s software stack, expanding its reach into mixed-architecture AI deployments.

This cooperation demonstrates Intel’s pragmatic shift — focusing on open networking and Ethernet-based scaling rather than isolated chip-level competition. It showcases how Intel’s Xeon CPUs, DPUs, and AI accelerators can interoperate within diverse, multi-vendor clusters.

Challenges and Limitations

Despite the architectural advantages, several hurdles remain:

  • The Gaudi software stack lags behind NVIDIA’s mature CUDA ecosystem, so mixed deployments demand more integration effort from adopters.
  • Gaudi 3, built on a 5 nm process, is a transitional product expected to be succeeded by a next-generation design soon.
  • Analysts view the rack-scale initiative as a “showcase strategy,” intended to demonstrate Intel’s flexibility in AI infrastructure rather than to compete head-to-head on raw performance.

NVIDIA’s Role and Gains

NVIDIA also benefits from the collaboration: the Gaudi 3 Rack Scale Solution relies heavily on NVIDIA networking technologies, including ConnectX NICs and BlueField DPUs, reinforcing the company’s dominance in high-bandwidth data interconnects.
For Intel, meanwhile, the partnership boosts Gaudi shipments and validates its system-level AI integration capabilities.

A Glimpse into the Future of AI Infrastructure

The Intel–NVIDIA collaboration may foreshadow a new paradigm for data centers: heterogeneous, cross-architecture clusters replacing single-vendor dominance.
By adopting an open, interoperable design philosophy, Intel positions itself as a system integrator rather than merely a chip competitor.

In this light, the Gaudi 3 Rack Scale Solution is not just a hardware launch — it’s a strategic signal.
Intel is redefining its role in the AI ecosystem, moving from chip-level competition to rack-scale integration and optimization, reflecting a broader industry transition toward flexible, multi-vendor AI infrastructure.
