SLMs are trained on smaller, domain-specific, and higher-quality datasets than large language models (LLMs). The IT major added that these models are developed as part of the Infosys center of excellence dedicated to NVIDIA technologies and are built to help businesses quickly adopt and scale AI.
The small language models use general and industry-specific data, enhanced with NVIDIA AI Enterprise and NVIDIA AI Foundry in collaboration with Sarvam AI. The models are fine-tuned with Infosys data and integrated into existing offerings such as Infosys Finacle and Infosys Topaz for business and IT operations. Infosys also offers these models as services, including pretraining-as-a-service and fine-tuning-as-a-service, to help businesses build their own custom AI models securely and in compliance with industry standards, the company said.
Balakrishna D. R., executive vice president, global services head, AI and industry verticals, Infosys, said, “As we further our enterprise AI journey with NVIDIA, our focus is now on delivering foundational small language models as services for businesses to build on. By integrating the NVIDIA AI stack with Infosys Topaz, we are taking advantage of very advanced enterprise AI capabilities to tackle unique business challenges, enhance operational efficiency, and deliver bespoke solutions that drive business value for our clients. Our dedicated center of excellence ensures continuous innovation and establishes Infosys as a preferred partner for our clients’ AI-powered transformation.”
On Wednesday, the IT major announced that it is strengthening its collaboration with Meta to drive innovation in generative AI through open-source initiatives. Leveraging Meta’s Llama stack, a family of open-source large language models and tools, Infosys is driving advancements in AI and fostering innovation across industries.
Infosys also unveiled a Meta center of excellence focused on accelerating enterprise AI integration. The center will train a large pool of talent on the Llama stack, develop industry-specific use cases, and work closely with Meta to help customers adopt the Llama stack seamlessly.