[Exclusive] Tech Mahindra to Announce Project Indus 2 Soon

Nikhil Malhotra, chief innovation officer at Tech Mahindra, announced at Cypher 2024, India’s biggest AI conference, that the company will launch Project Indus 2 within a couple of months. 

“NVIDIA is a great partner, and you should all look forward to next month when we plan to launch Project Indus 2. We’re going to make it state-of-the-art in terms of Hindi and its various dialects, using a modern, open-source approach. I developed this model to raise awareness and show the world that India has the capability to achieve this,” said Malhotra. 

Malhotra also recently met NVIDIA chief Jensen Huang and his team at NVIDIA’s offices to discuss sovereign LLMs.

Tech Mahindra launched Project Indus earlier this June through its R&D arm, Makers Lab. This AI model is built to converse in multiple Indic languages and dialects, starting with Hindi and its 37+ variations.

The model has 1.2 billion parameters and was trained on 22 billion tokens. It is built on GPT-2 and designed to handle the complexities of the Hindi language.
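For context, the 1.2-billion-parameter figure is roughly what standard GPT-2-style architecture arithmetic produces. The sketch below uses hypothetical hyperparameters (layer count, hidden size, vocabulary size) chosen only to land near the reported count; the actual Project Indus configuration is not detailed here.

```python
# Back-of-the-envelope parameter count for a GPT-2-style decoder.
# All hyperparameters below are illustrative assumptions, not the
# published Project Indus configuration.
n_layer = 22        # transformer blocks (assumed)
d_model = 2048      # hidden size (assumed)
vocab   = 50_257    # GPT-2's default BPE vocabulary size (assumed)
n_ctx   = 1024      # context length (assumed)

# Each block carries roughly 12 * d_model**2 weights
# (attention projections ~4*d^2 plus the MLP ~8*d^2).
block_params     = 12 * n_layer * d_model ** 2
embedding_params = (vocab + n_ctx) * d_model   # token + position embeddings

total = block_params + embedding_params
print(f"~{total / 1e9:.2f}B parameters")       # comes out near 1.2B with these values
```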

Malhotra said they managed to build the model for $400,000 in response to the challenge from OpenAI’s chief, Sam Altman, who claimed that India wouldn’t be able to build a model for under $10 million.

Interestingly, CP Gurnani, co-founder of AIonOS and former CEO of Tech Mahindra, previously revealed at the MachineCon GCC Summit 2024 that Tech Mahindra was able to develop an Indian LLM for local languages and over 37 dialects in just five months, spending less than $5 million, a claim that Malhotra disputed.

Furthermore, Malhotra shared that when they began working on Project Indus, the goal was not to focus on any specific language but to address various dialects. “India is home to 24 mother tongues and 1,645 dialects. Officially, the country speaks 19,200 dialects, some of which have become extinct or are endangered,” he said.

“It’s not just about Hindi. It’s going to include variations of Hindi like Dogri, as well as languages such as Pancha Pargania and Magahi,” he added.

He said that Project Indus can work on AI PCs locally without the requirement of GPUs. “The model is benchmarked on Intel Xeon servers, not GPUs. Since it’s based on Intel Xeon, our model runs on these AI PCs.”
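To illustrate the CPU-only deployment claim, here is a minimal, hypothetical sketch of generating text from a GPT-2-style checkpoint without a GPU, using the Hugging Face transformers pipeline. The public gpt2 checkpoint is used purely as a stand-in, since the Project Indus model identifier and weights are not given in this article.

```python
# Minimal sketch: CPU-only text generation with a GPT-2-style model,
# mirroring the claim that Project Indus runs on Xeon/AI-PC hardware
# without GPUs. "gpt2" is a stand-in checkpoint, not the Indus weights.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gpt2",   # stand-in public checkpoint (assumption)
    device=-1,      # -1 forces CPU; no GPU required
)

prompt = "नमस्ते, आज का दिन"
print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
```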

Malhotra said they are not in the race to build 175-billion-parameter models. “We are not in a race of saying you’re building 170 billion parameters, because most of my research over the last 20 years in language says that after 3 billion to 4 billion parameters, the model knows the language,” he explained. 

Era of Sovereign LLMs

Malhotra said that it’s not just India that is working on sovereign LLMs. “There are countries in Southeast Asia that have now taken the lead. All of them want to build large language models for their country, using their data hosted within the country. These models will reflect their culture, be aware of biases within that culture, and align with their country’s vision,” he said. 

Moreover, Malhotra shared that Tech Mahindra has built the first sovereign LLM for Indonesia, called Garuda. “This is a big model. It’s about an eight to nine billion parameter model. It’s actually trained on the entire Nvidia stack of Nemo,” said Malhotra. 

Earlier this year, Tech Mahindra partnered with Indosat Ooredoo Hutchison to build Garuda, an LLM for Bahasa Indonesia and its dialects.

“The era of sovereign LLMs has just begun,” he said, adding that soon many countries, including Malaysia, Australia, and New Zealand, will be building indigenous LLMs that understand local languages and not just English.

What’s Next?

Malhotra said that one of the things he wants to do with AI, and one of his research goals, is to explore how AI can become less compute-intensive. 

He also unveiled a concept for AI development that he calls the ‘min-max regret model.’ This approach moves away from traditional reward models and aims to let AI “dream” about its capabilities and understand its existence in a more profound way.

Malhotra explained that this dreaming model allows AI to contemplate its own questions and aspirations. “Can you dream about yourself? Once you develop your dream, now come back to life and start with the life that you have,” he said, highlighting the physical aspect of existence and how it relates to AI.

Drawing a parallel between human cognition and AI functionality, Malhotra said that just as humans subconsciously store information in the hippocampus, AI systems will use their ‘memory’ to inform decision-making processes. 

“A lot of the data that you collect to do a lot of your information still resides at the back of the hippocampus, which is a memory. And as a result, you pull out that memory when you have to cycle,” he elaborated, likening it to learning to ride a bike—an instinctual skill honed from childhood that remains stored in our subconscious.




Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
