Red Hat to support NVIDIA NIM microservices on OpenShift AI
Red Hat has announced upcoming support for NVIDIA NIM microservices on Red Hat OpenShift AI. This will allow organisations to use NVIDIA NIM on Red Hat OpenShift AI and accelerate the delivery of generative AI (GenAI) applications, shortening time to value.
Red Hat's integration with NVIDIA aims to let users combine AI models built with Red Hat OpenShift AI with NVIDIA NIM microservices, facilitating the development of GenAI-driven applications on a familiar, trusted Machine Learning Operations (MLOps) platform.
Support for NVIDIA NIM on Red Hat OpenShift AI builds on the existing optimisation for NVIDIA AI Enterprise across Red Hat's open hybrid cloud technologies, including Red Hat Enterprise Linux and Red Hat OpenShift. As part of this collaboration, NVIDIA will enable NIM interoperability with KServe, a Kubernetes-based open-source project for highly scalable AI workloads and a core upstream component of Red Hat OpenShift AI.
This arrangement will sustain interoperability for NVIDIA NIM microservices in future releases of Red Hat OpenShift AI. Enterprises will be able to boost productivity with GenAI capabilities such as expanding customer service with virtual assistants, summarising IT tickets, and accelerating business operations with sector-specific copilots.
Running NVIDIA NIM on Red Hat OpenShift AI offers organisations several advantages. A streamlined integration path for deploying NVIDIA NIM in a shared workflow alongside other AI deployments brings greater uniformity and easier management. Scaling and monitoring for NVIDIA NIM deployments are integrated with other AI model deployments across hybrid cloud environments. Enterprises also gain enterprise-grade security, support, and stability, ensuring a smooth transition from prototype to production for businesses that run on AI.
NVIDIA NIM microservices are designed to speed up GenAI deployment in enterprises. By supporting a broad range of AI models, including open-source community models and NVIDIA AI Foundation models, NIM offers seamless, scalable AI inferencing on-premises or in the cloud through industry-standard application programming interfaces (APIs).
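To illustrate what "industry-standard APIs" means in practice, here is a minimal sketch of building an OpenAI-style chat-completion request such as a NIM microservice exposes. The endpoint URL and model name below are hypothetical placeholders for a local deployment, not values from this article; adjust them for your own environment.

```python
import json

# Hypothetical endpoint for a locally deployed NIM container (assumption,
# not from the article) -- replace with your deployment's address.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model, prompt, max_tokens=256):
    """Assemble an OpenAI-compatible chat-completion payload, the
    request shape used by NIM's industry-standard inference API."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Example payload; the model name is illustrative.
payload = build_chat_request(
    "meta/llama3-8b-instruct",
    "Summarise this IT ticket: user cannot access the VPN.",
)
print(json.dumps(payload, indent=2))

# Sending it requires a running NIM service, e.g.:
#   import requests
#   resp = requests.post(NIM_URL, json=payload, timeout=60)
#   print(resp.json()["choices"][0]["message"]["content"])
```

Because the request and response formats follow the widely used OpenAI convention, existing client libraries and tooling can typically target a NIM deployment by changing only the base URL.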
Commenting on the collaboration, Chris Wright, the chief technology officer and senior vice president of Global Engineering at Red Hat, stated, "Red Hat is hyper-focused on breaking down the barriers and complexities associated with rapidly building, managing and deploying Gen AI-enabled applications. Red Hat OpenShift AI provides a scalable, flexible foundation to extend the reach of NIM microservices, empowering developers with pre-built containers and industry-standard APIs, all powered by open-source innovation."
Simultaneously, Justin Boitano, vice president of Enterprise Products at NVIDIA, noted, "Every enterprise development team wants to get their generative AI applications into production as quickly and securely as possible. Integrating NVIDIA NIM in Red Hat OpenShift AI marks a new milestone in our collaboration as it will help developers rapidly build and scale modern enterprise applications using a performance-optimized foundation and embedding models across any cloud or data centre."