DataCenterNews Asia Pacific - Specialist news for cloud & data center decision-makers
[Image: modern AI data centre aisle with GPU servers and blue cabling]

OpenNebula integrates Nvidia Spectrum-X for AI clouds

Wed, 11th Feb 2026

OpenNebula Systems has been validated by Nvidia as an orchestration platform integrated with Nvidia Spectrum-X Ethernet networking, linking OpenNebula's cloud management software with Nvidia's data centre network stack for large-scale AI infrastructure.

The companies framed the work as part of "AI Factory" infrastructure, an industry term for shared platforms that run training and inference across pooled compute, GPU and network resources. The validation covers a fully integrated cloud environment built on Spectrum-X Ethernet.

Network Bottlenecks

Training large AI models places heavy demand on data centre networks, particularly when many GPUs exchange data during distributed training. Latency, congestion and jitter can limit throughput as AI workloads scale across nodes. Spectrum-X Ethernet is positioned as a response to those constraints in conventional Ethernet deployments.

OpenNebula is used for private, hybrid and edge cloud deployments, including multi-tenant environments where one physical infrastructure stack is shared across users or teams. In AI environments, this model can increase utilisation of scarce GPU resources, but it also adds complexity around isolation and governance.

The integration links OpenNebula orchestration with Spectrum-X Ethernet fabrics, bringing compute, GPU and networking layers under a single control plane. The goal is a production environment for AI workloads delivered through a software-defined cloud model.

Multi-Tenant Controls

OpenNebula said the validated configuration provides end-to-end multi-tenancy across compute, GPU and network layers on a shared Spectrum-X Ethernet fabric. The announcement also cited direct GPU and SuperNIC passthrough, approaches commonly used to reduce overhead by giving workloads more direct access to hardware.
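For context on what passthrough looks like in practice, OpenNebula's documented mechanism for attaching PCI devices such as GPUs directly to a virtual machine is a `PCI` attribute in the VM template. The fragment below is a minimal sketch, assuming a host with an Nvidia GPU; the `DEVICE` ID and resource sizes are illustrative, and the announcement does not describe the actual templates used in the validated configuration.

```
# Hedged sketch of an OpenNebula VM template fragment with GPU passthrough.
NAME   = "ai-training-node"
CPU    = 16
MEMORY = 131072

# Pass a GPU through to the guest by PCI vendor/device ID.
# 10de is Nvidia's PCI vendor ID; the DEVICE value is illustrative.
PCI = [
  VENDOR = "10de",
  DEVICE = "2330"
]
```

Matching by vendor and device ID lets the scheduler place the VM on any host exposing that hardware, rather than pinning it to a specific PCI address.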

The platform includes governance and lifecycle management features for large-scale accelerated infrastructure, including provisioning, resource quotas, scheduling and enforcement of separation between tenants sharing the same hardware estate.
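As an illustration of the resource-quota side of that governance model, OpenNebula supports per-group and per-user quota templates. The fragment below is a hedged sketch of a group quota, with all limits chosen for illustration; the announcement does not specify how quotas are set in the validated stack.

```
# Illustrative OpenNebula group quota: caps total VMs, CPU cores and
# memory (in MB) that one tenant may consume on the shared estate.
VM = [
  VMS    = "20",
  CPU    = "64",
  MEMORY = "262144"
]
```

Quotas like this are one way a shared GPU estate can be partitioned without dedicating physical hardware to each tenant.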

OpenNebula also said the integration is automated, with tenant provisioning, network configuration and device attachment orchestrated across the Spectrum-X Ethernet fabric. This points to tighter coupling between the cloud management layer and the network fabric than models that rely on manual configuration or separate tooling.

Testing on Nvidia Air

Part of the work centres on Nvidia Air, Nvidia's environment for testing, integration and validation. OpenNebula said its control plane runs on Nvidia Air, allowing customers to evaluate the Spectrum-X Ethernet integration and run simulations and automation workflows.

For enterprises and service providers, such test environments can shorten evaluation cycles for new infrastructure designs and help validate automation and configuration patterns before deployment on physical equipment.

OpenNebula and Nvidia positioned the combined stack for European enterprise and service provider deployments. They cited large-scale AI Factories and "AI Gigafactories", a term used for very large deployments that pool substantial GPU capacity.

OpenNebula also linked the work to Nvidia compute platforms, referencing Nvidia Grace Blackwell and Nvidia Grace Blackwell Ultra systems alongside Spectrum-X Ethernet networking. These platforms combine CPUs and GPUs for AI and high performance computing use cases.

Ignacio M. Llorente, CEO of OpenNebula Systems, said the collaboration addresses the infrastructure requirements of modern AI environments.

"Through our collaboration with NVIDIA, we are extending OpenNebula to support the networking and performance requirements of modern AI infrastructures," said Ignacio M. Llorente, CEO, OpenNebula Systems.

He added that the orchestration layer manages Nvidia's latest compute systems and Spectrum-X Ethernet networking as a single, unified platform.

"This integration allows customers to manage multi-tenant AI environments where NVIDIA Grace Blackwell and NVIDIA Grace Blackwell Ultra compute and Spectrum-X Ethernet networking are tightly orchestrated and optimized as a single platform," said Llorente.

Amit Katz, VP of Networking at Nvidia, said the integration brings "cloud-native agility" to AI Factory deployments, with an emphasis on predictability in shared, accelerated infrastructure.

"OpenNebula's integration with NVIDIA Spectrum-X Ethernet brings cloud-native agility to the AI Factory, enabling customers to orchestrate multi-tenant accelerated infrastructure with maximum performance and predictability," said Amit Katz, VP of Networking, NVIDIA.

Katz also pointed to Nvidia Air as a way for customers to validate large-scale deployments.

"NVIDIA Air enables OpenNebula and our ecosystem partners to validate and simulate large-scale AI Factory deployments, giving customers a powerful environment to evaluate and accelerate their AI cloud strategies," said Katz.

OpenNebula said it has more than 5,000 cloud deployments worldwide and has positioned its software as an alternative for organisations reassessing virtualisation and private cloud strategies. The company also said the platform supports deployments that scale to thousands of hosts and tens of thousands of GPUs.

The validated integration is available for organisations assessing shared AI infrastructure designs that combine Spectrum-X Ethernet networking with OpenNebula's orchestration and multi-tenant controls.