Nvidia launches a set of microservices for optimized inferencing



At its GTC conference, Nvidia today announced Nvidia NIM, a new software platform designed to streamline the deployment of custom and pre-trained AI models into production environments. NIM takes the software work Nvidia has done around inference and model optimization and makes it easily accessible by combining a given model with an optimized inference engine, packaging the pair into a container, and exposing that container as a microservice.
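To make that concrete, here is a minimal sketch of what consuming such a microservice could look like from an application. It assumes a NIM container already running locally and serving an OpenAI-style chat endpoint; the URL, port and model name are illustrative assumptions, not details from Nvidia's announcement.

```python
import requests

# Hypothetical local NIM deployment: the container bundles the model and an
# optimized inference engine and serves an OpenAI-style HTTP API.
# The URL, port, and model identifier below are assumptions for illustration.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama2-70b",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarize what a microservice is."}
    ],
    "max_tokens": 128,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()

# The response follows the familiar chat-completions shape.
print(response.json()["choices"][0]["message"]["content"])
```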

Typically, it would take developers weeks — if not months — to ship similar containers, Nvidia argues — and that is if the company even has any in-house AI talent. With NIM, Nvidia clearly aims to create an ecosystem of AI-ready containers that use its hardware as the foundational layer with these curated microservices as the core software layer for companies that want to speed up their AI roadmap.

NIM currently includes support for models from Nvidia, AI21, Adept, Cohere, Getty Images and Shutterstock, as well as open models from Google, Hugging Face, Meta, Microsoft, Mistral AI and Stability AI. Nvidia is already working with Amazon, Google and Microsoft to make these NIM microservices available on SageMaker, Google Kubernetes Engine and Azure AI, respectively. They’ll also be integrated into frameworks like Deepset, LangChain and LlamaIndex.
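As an illustration of what such a framework integration might look like in practice, the sketch below uses LangChain's publicly available NVIDIA connector package; the package name, class and model identifier are assumptions drawn from that integration, not specifics of the announcement.

```python
# A minimal sketch of calling a NIM-hosted model through LangChain.
# Assumes the langchain-nvidia-ai-endpoints package is installed; the model
# id and endpoint URL are placeholders for illustration.
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(
    model="mistralai/mistral-7b-instruct-v0.2",  # placeholder model id
    base_url="http://localhost:8000/v1",          # self-hosted NIM endpoint
)

answer = llm.invoke("What does containerizing a model buy you?")
print(answer.content)
```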

Image Credits: Nvidia

“We believe that the Nvidia GPU is the best place to run inference of these models on […], and we believe that NVIDIA NIM is the best software package, the best runtime, for developers to build on top of so that they can focus on the enterprise applications — and just let Nvidia do the work to produce these models for them in the most efficient, enterprise-grade manner, so that they can just do the rest of their work,” said Manuvir Das, the head of enterprise computing at Nvidia, during a press conference ahead of today’s announcements.

As for the inference engine, Nvidia will use the Triton Inference Server, TensorRT and TensorRT-LLM. Some of the Nvidia microservices available through NIM will include Riva for customizing speech and translation models, cuOpt for routing optimizations and the Earth-2 model for weather and climate simulations.
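Because Triton Inference Server sits underneath, a lower-level path is also conceivable: talking to the server directly with Nvidia's tritonclient library. The sketch below shows that pattern under stated assumptions; the model name and tensor names are placeholders, since each NIM defines its own signature.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a hypothetical local Triton Inference Server deployment.
client = httpclient.InferenceServerClient(url="localhost:8000")

# "text_input", "text_output" and the model name are placeholders; a real
# deployment publishes its own tensor names and dtypes.
inp = httpclient.InferInput("text_input", [1], "BYTES")
inp.set_data_from_numpy(np.array(["What is NIM?"], dtype=object))

result = client.infer(model_name="example_model", inputs=[inp])
print(result.as_numpy("text_output"))
```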

The company plans to add additional capabilities over time, including, for example, making the Nvidia RAG LLM operator available as a NIM, which promises to make it a lot easier to build generative AI chatbots that can pull in custom data.
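Retrieval-augmented generation itself is a straightforward pattern, whatever operator ends up packaging it: retrieve the passages most relevant to a query and prepend them to the model prompt so the answer is grounded in the user's own data. The toy sketch below shows that flow with stand-in components; it is not the Nvidia operator's API.

```python
# Toy retrieval-augmented generation flow, using word overlap as a stand-in
# for real embeddings. The Nvidia RAG LLM operator would package production
# versions of these pieces; this only illustrates the pattern.

DOCS = [
    "NIM bundles a model with an optimized inference engine in a container.",
    "Triton Inference Server handles request batching and scheduling.",
    "cuOpt is Nvidia's microservice for route optimization problems.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model can ground its answer."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# The resulting prompt would be sent to a chat endpoint like the one above.
print(build_prompt("What does NIM put inside a container?"))
```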

This wouldn’t be a developer conference without a few customer and partner announcements. Among NIM’s current users are the likes of Box, Cloudera, Cohesity, DataStax, Dropbox and NetApp.

“Established enterprise platforms are sitting on a goldmine of data that can be transformed into generative AI copilots,” said Jensen Huang, founder and CEO of NVIDIA. “Created with our partner ecosystem, these containerized AI microservices are the building blocks for enterprises in every industry to become AI companies.”




