E2E Networks Prepares for AI Inference Market Boom in India

Unless you have millions of dollars to spare, it is not easy to get your hands on the most advanced NVIDIA graphics processing units (GPUs).

There is, however, a more affordable way to access high-end GPUs. Companies such as E2E Networks are acquiring these GPUs and making them readily available to enterprises, startups, and developers in India at pocket-friendly hourly prices.

The AI hyperscaler offers NVIDIA’s H100 GPUs at a price of just INR 412 per hour.
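
For a rough sense of what that pricing implies for a typical workload, the back-of-the-envelope sketch below multiplies the quoted hourly rate out over a multi-GPU job. The GPU count, job duration, and exchange rate are illustrative assumptions, not figures from E2E Networks.

```python
# Back-of-the-envelope cost estimate for renting H100s by the hour.
# Only the INR 412/hour rate comes from the article; the GPU count,
# job duration, and exchange rate are illustrative assumptions.

HOURLY_RATE_INR = 412   # per H100 per hour, as quoted
NUM_GPUS = 8            # assumed size of a fine-tuning job
JOB_HOURS = 72          # assumed duration of the job
INR_PER_USD = 83.0      # assumed exchange rate

total_inr = HOURLY_RATE_INR * NUM_GPUS * JOB_HOURS
total_usd = total_inr / INR_PER_USD

print(f"Estimated cost: INR {total_inr:,.0f} (~USD {total_usd:,.0f})")
```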

So far, the National Stock Exchange (NSE)-listed company has around 700 NVIDIA H100s at its disposal. Kesava Reddy, chief revenue officer at E2E Networks, recently told AIM that the company has now placed an order for NVIDIA H200 GPUs as well.

GPU Agnostic 

Reddy refrained from sharing further details about the H200 order. Meanwhile, E2E Networks recently raised around $50 million in a strategic investment via a preferential issue of equity shares.

According to Reddy, a major part of this investment will go towards acquiring new GPUs for the company’s AI infrastructure. E2E Networks also offers nearly 600 non-H100 GPUs on its cloud GPU platform, including the A30, A40, RTX 8000, V100 (32 GB), T4, L4, and L40S.

However, Reddy says the company is GPU-agnostic and has not locked itself into NVIDIA hardware alone.

“Currently nobody is running heavy inference on AMD or Intel, but, having said that, I do believe 18 months down the line, some inference might move to, let’s say, for instance, Groq,” Reddy said.

He added that the hardware the company chooses to acquire is driven largely by customer demand, and currently most customers want NVIDIA.

“We have initiated talks with Groq. If customers request Groq’s hardware tomorrow, we can provide it. We just need to determine the price point that customers are willing to pay, as we have the software ready to go,” he said.

Over the years, AI chip startups have emerged with solutions they claim are significantly superior to NVIDIA’s offerings. For instance, d-Matrix, a California-based startup backed by Microsoft, is developing chips designed specifically for running inference on smaller language models.

Inference Market Boom in India 

A few other AI hyperscalers have emerged in India over the past 15 to 18 months. The most notable among them is the Hiranandani Group-backed Yotta Data Services, which plans to build a 32,000-GPU cluster within a few years. Other smaller players include NeevCloud and Jarvis Labs.

Earlier this year, Tata Communications announced a partnership with NVIDIA to build an AI cloud infrastructure with a focus on the inference market. 

However, given that only a handful of companies train large language models (LLMs), the question is whether there is demand for such a large number of GPUs in India.

It remains to be seen how many NVIDIA GPUs these AI hyperscalers will accumulate over the years, but according to Reddy, demand for GPUs will only ‘explode’ in the coming years.

“Currently, our clusters are utilised at 90-95% by customers, with the remaining 5% reserved for internal use,” he said, adding that this reflects strong demand.

Many of E2E Networks’ customers are already using its infrastructure for fine-tuning and inference workloads. Reddy believes that, going forward, many enterprises will want to harness the power of LLMs by fine-tuning existing models to suit their specific use cases.
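
For readers curious what such a fine-tuning workload looks like on rented GPUs, here is a minimal, illustrative sketch using LoRA adapters with the Hugging Face transformers and peft libraries. The base model, dataset file, and hyperparameters are assumptions for illustration, not details of any E2E Networks customer setup.

```python
# Minimal sketch of adapting a pretrained LLM with LoRA on a rented GPU.
# Model name, dataset file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"   # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Attach small trainable LoRA adapters instead of updating all weights.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Assumed enterprise dataset: JSON lines with a "text" field.
dataset = load_dataset("json", data_files="train.jsonl")["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      remove_columns=dataset.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```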

Most of the demand in India, according to Reddy, will be driven by the desire among enterprises to deliver their services in vernacular languages.

AIM spoke to a few experts who echoed similar sentiments; many feel that most AI workloads in the future will be inference.

OCC and GPU Cluster for Startups 

E2E Networks’ notable customers include Zomato, IndiaMART, CarDekho, Zoomcar, Niyo, Nykaa, MobiKwik, Reverie, IIIT Hyderabad, ISB, IIT Guwahati, and Matrimony.com, among others.

The company is also in talks with People+AI to be part of its Open Cloud Compute (OCC) network. People+AI, an initiative of the Nandan Nilekani-backed non-profit EkStep Foundation, is on a mission to develop an interoperable cloud computing network.

Reddy said he is actively in dialogue with the People+AI team. The company also wants to be part of the government’s plan to build a cluster of around 10,000-20,000 GPUs, which will be made available to research institutions and startups in India.

“We have given our pre-bid queries to the government and are waiting for a response,” Reddy said.

E2E Networks also recently received its empanelment from the Ministry of Electronics and Information Technology (MeitY), meaning the company can now provide its cloud services to government departments and agencies. “This is a side of our business we want to explore,” Reddy said.


