RHEL Gives Linux A Much Needed AI Update

A few days ago, Dell Technologies partnered with Red Hat to bring the Red Hat Enterprise Linux AI (RHEL AI) platform to its popular PowerEdge servers, paving the way for Dell hardware to serve as a foundation for AI development.

The idea behind the partnership was to make it easier for organisations to scale their IT infrastructure for AI and ML workloads without having to host them in the cloud. Dell’s PowerEdge servers can be deployed in an organisation’s own on-premises data centre or used as part of a larger hybrid cloud setup.

Apart from partnering with Dell, Red Hat recently unveiled Enterprise Linux AI, which is primarily focused on developers. One good example is a bootable RHEL image with pre-configured AI libraries like PyTorch, which allows users to quickly set up an AI-ready environment without going through complex installation and configuration processes.
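To make that concrete, here is a minimal sketch of what an “AI-ready” image means in practice: on a RHEL AI system with PyTorch pre-configured, a check like this should pass without installing any toolkits by hand. Which accelerators show up depends on the image and hardware, so treat the output as illustrative.

```python
# Minimal environment check, assuming a RHEL AI image with PyTorch pre-installed.
# On a correctly provisioned system this runs as-is, with no manual CUDA/ROCm setup.
import torch

print(f"PyTorch version    : {torch.__version__}")
print(f"Accelerator found  : {torch.cuda.is_available()}")  # True for CUDA and ROCm builds
if torch.cuda.is_available():
    print(f"Device             : {torch.cuda.get_device_name(0)}")
```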

AMD has also been active in improving its hardware support on Linux and recently released the third version of its XDNA Linux driver, which enables the Ryzen AI Neural Processing Unit (NPU) on Linux systems. The driver is expected to be merged in Linux kernel 6.13.
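As a hedged illustration of what that enables, the sketch below checks whether an accelerator device has appeared under the kernel’s compute-accelerator (accel) subsystem, which is how recent NPU drivers typically expose hardware. The /dev/accel and /sys/class/accel paths are assumptions about the driver’s interface and may differ on a given kernel.

```python
# Hypothetical check: list compute-accelerator devices exposed by the kernel.
# The accel subsystem paths below are assumptions and may vary by kernel version.
from pathlib import Path

accel_dir = Path("/dev/accel")
accel_nodes = sorted(accel_dir.glob("accel*")) if accel_dir.exists() else []

if not accel_nodes:
    print("No accel devices found (NPU driver not loaded, or a different interface is used).")
for node in accel_nodes:
    driver_link = Path(f"/sys/class/accel/{node.name}/device/driver")
    driver = driver_link.resolve().name if driver_link.exists() else "unknown"
    print(f"{node} -> driver: {driver}")
```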

Linux-centric projects like openSUSE are also contributing so that AI remains accessible. openSUSE, for example, recently joined Hugging Face and published its first dataset, cavil-licence-patterns, aimed at more accurate detection of licence and compliance issues.
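For anyone who wants to poke at the data, a hedged sketch using the Hugging Face datasets library is below. The repository id openSUSE/cavil-licence-patterns is inferred from the dataset name above and may differ from the actual listing.

```python
# Hedged sketch: load the licence-pattern dataset from Hugging Face.
# The repository id is inferred from the dataset name and may need adjusting.
from datasets import load_dataset

patterns = load_dataset("openSUSE/cavil-licence-patterns")
print(patterns)  # available splits, row counts and column names
```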

The Contribution of Red Hat

RHEL AI combines several key components to create a powerful foundation for AI innovation. At its core are the open-source Granite models, a family of LLMs developed by IBM Research. Complementing these models is InstructLab, an open-source project that simplifies model experimentation and fine-tuning. 

This allows domain experts to contribute to AI models without extensive data science skills. All these components are packaged into a bootable Red Hat Enterprise Linux image, streamlining deployment across hybrid cloud environments.
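Because the Granite checkpoints are published openly, they can also be pulled outside of RHEL AI for quick local experimentation. The sketch below uses the transformers library; the model id ibm-granite/granite-7b-base is one publicly released Granite checkpoint and stands in for whichever model a team actually uses, and RHEL AI itself ships and serves its models through its own tooling rather than this path.

```python
# Illustrative only: load an openly published Granite checkpoint with transformers.
# The model id is an assumption standing in for whichever Granite model you use;
# RHEL AI serves its bundled models through its own tooling (e.g. InstructLab).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-7b-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Red Hat Enterprise Linux AI is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```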

This approach also leans towards more ethical AI. Many of Red Hat’s customers could not go near AI because of its copyright implications, and an open approach like this is arguably as ethical as AI currently gets.

Red Hat’s approach addresses several challenges in enterprise AI adoption. By leveraging open-source principles, RHEL AI lowers the barriers to entry for AI innovation, making it more accessible to a broader range of organisations. The platform offers up to 50% lower costs compared to similar solutions, making AI development more economically viable for enterprises.

A Reddit user praised the close integration with CI/CD, noting that you can create and share host images just like you would container images, so developers and operators can run the exact same image in a container or as bare metal on the host.

“The technology is neat, but what it can do when integrated into your development, build and deployment pipeline is where the magic happens. It’s not a huge leap forward, just a few modest but highly useful steps forward,” he added. 

Linux Matters a Lot for AI Developers

Developers favour using Linux to train models. A developer on Reddit mentioned that compiling applications is much easier on Linux: when he uses Windows for the same task, he has to find the VS 2022 installer, download and run it, and Google which options to tick, all of which comes with several gigabytes of unnecessary dependencies he never asked for.

“Then I have to go to the NVIDIA site, download the CUDA Toolkit, and install it. Then I need CMake, download it, and install it. If it all works, compiling takes about 25 minutes. On Arch Linux, just type pacman -S base-devel cuda, and you’re ready to go. Compiling takes like 5-10 minutes, and inference is ~25% faster too,” he added, suggesting Linux is more efficient not only for compiling but also for inference.
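A rough way to sanity-check that claim on your own machine is to time the same workload on CPU and GPU once the stack is installed. The snippet below is a toy matrix-multiplication timing, not a benchmark, and the numbers will vary widely by hardware.

```python
# Toy timing sketch: compare a large matmul on CPU vs GPU after the CUDA stack
# is installed (e.g. via `pacman -S base-devel cuda` on Arch). Illustrative only.
import time
import torch

def avg_matmul_seconds(device: str, size: int = 4096, repeats: int = 10) -> float:
    x = torch.randn(size, size, device=device)
    y = torch.randn(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure setup work is finished before timing
    start = time.perf_counter()
    for _ in range(repeats):
        _ = x @ y
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU kernels to complete
    return (time.perf_counter() - start) / repeats

print(f"CPU : {avg_matmul_seconds('cpu'):.4f} s per matmul")
if torch.cuda.is_available():
    print(f"CUDA: {avg_matmul_seconds('cuda'):.4f} s per matmul")
```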

Efforts like RHEL AI matter to the Linux community because almost everything around AI is researched and developed on Linux, which in turn makes it an ever more capable platform for training and running AI models.

For the most part, you are just one command away from a working AI development environment on Linux, and it just works, whereas on Windows you can spend hours configuring the same setup.


