At KubeCon Europe 2026, Mirantis reached a milestone by integrating the Nvidia NCX Infra Controller, now open to the community, into its k0rdent AI platform. The goal is to turn Kubernetes into a unified orchestration layer for GPU compute, networking, and storage, giving cloud operators and large enterprises a composable, reproducible, multi-tenant AI infrastructure free from the proprietary silos of the hyperscalers.
The next phase of enterprise AI deployment revolves around orchestrating large-scale GPU environments, automating high-performance network and storage provisioning, and delivering a cohesive platform without relying on manual integrations. Mirantis positioned itself in this space with announcements made in Amsterdam during KubeCon + CloudNativeCon Europe 2026.
Having built expertise over more than a decade of OpenStack and Kubernetes deployments in production for top-tier companies and service providers, Mirantis is now repositioning its k0rdent AI platform as a unified control plane for the entire AI stack. One of the key signals of this strategy is the integration of the Nvidia NCX Infra Controller, recently open-sourced by Nvidia as part of the progressive standardization of its infrastructure ecosystem. Mirantis actively contributes to the project and integrates it into k0rdent AI following the standards of the Nvidia AI Cloud Ready Initiative.
To harness the hardware's full potential for AI workloads, k0rdent AI supports Nvidia's range of reference architectures, including Ampere, Hopper, and Blackwell, as well as various networking technologies. The operational goal is to enable cloud-native environments and enterprises to industrialize their GPU setups through reproducible models rather than ad hoc configurations.
Warren Barkley, Vice President of Product Management at Nvidia, explains the rationale behind this collaboration: “AI infrastructure service providers must maximize resource utilization and operational efficiency. Our partnership with Mirantis helps neoclouds deploy validated AI infrastructure faster, combining ISV validation with the automated lifecycle management of the Nvidia NCX Infra Controller.” The statement underscores that operational efficiency matters as much as raw GPU performance, a crucial point for infrastructure teams managing GPU clusters in multi-tenant environments.
The announcements at KubeCon also include integrations with Netris, Supermicro, and Vast Data to complement the k0rdent AI layer on the networking, compute, and storage fronts. These partnerships aim to streamline Kubernetes cluster delivery, automate infrastructure provisioning for sovereign AI environments, and provide high-performance storage without the need for manual integration.
Mirantis’ strategy revolves around leveraging open source as the de facto standard for enterprise AI infrastructure, akin to what OpenStack and Kubernetes have achieved in the private cloud realm. Shaun O’Meara, Mirantis’ CTO, highlights the company’s pivotal role in deploying open and production-ready AI cloud infrastructure for the next generation of workloads.
By standardizing AI infrastructure architectures around community-driven technologies like Kubernetes, k0rdent, and the NCX Infra Controller, Mirantis offers a viable alternative to the proprietary architectures of major hyperscalers. This proposition caters to IT leaders seeking agility, cost control, and contractual reversibility, built on a composable platform with open-source foundations and operationally validated according to Nvidia’s standards.