
Red Hat makes custom GenAI model development a cinch

You know that the potential of generative AI is immense. You’ve undoubtedly seen opportunities to make your processes more efficient using GenAI. But to achieve that, you need an AI model tailored to your company. And that requires a lot of resources, expertise, and training data to build, refine, and deploy. RHEL AI offers an innovative solution that simplifies the whole process.

Many companies today are eager to move away from generic AI solutions such as ChatGPT and instead develop and deploy their own GenAI models, tailored to their specific business processes and trained on their own business data. To make this ambitious leap possible, Red Hat is launching an extension of its renowned Linux distribution: Red Hat Enterprise Linux AI (RHEL AI).


Drastically lowering the threshold

With RHEL AI you can develop, train, and deploy large language models (a type of GenAI model) for business applications. You could of course use the new Red Hat OpenShift AI platform for this, but the advantage of RHEL AI is that it makes it significantly easier to get onto the custom GenAI ladder.

The starting point for RHEL AI is the installation of a bootable container image of the software, with all the tools and drivers you need on board to develop, train, and deploy AI models. The image also includes a series of pre-trained large language models from IBM’s Granite family. These are completely open source and, importantly, suitable for commercial use thanks to the Apache 2.0 license. That combination is ideal for companies looking to improve their processes.

Fewer resources

Using open-source InstructLab — also included with RHEL AI — you can further train and improve a Granite LLM to achieve a specific business goal. And you can do this with far fewer resources than were previously needed. Just as importantly, you’ll also need much less human-generated training data.
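To give a feel for how a tuned Granite model might fit into your own applications, here is a minimal Python sketch. It assumes the model is already being served locally with an OpenAI-compatible endpoint (InstructLab can serve models this way); the URL, port, API key value, and model name below are illustrative placeholders, so adjust them to match your own setup.

# Minimal sketch: query a locally served Granite model through an
# OpenAI-compatible API. The base_url and model name are assumptions
# for illustration only; use the values from your own setup.
from openai import OpenAI

# Local inference servers usually don't check the API key, but the
# client library requires one to be set.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="granite-7b-lab",  # hypothetical model name; use the model you trained
    messages=[
        {"role": "system", "content": "You are an assistant for our internal processes."},
        {"role": "user", "content": "Summarize the approval steps for a purchase order."},
    ],
)

print(response.choices[0].message.content)

Because the local endpoint speaks the same API as hosted services, the application code that consumes your custom model stays largely the same as you move from experimentation to a larger deployment.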

No doubt you’ll have questions as you develop your AI application and put it into use. But fear not! Red Hat Enterprise Linux AI is backed by a vast open-source community that’s continuously improving the software. You can always rely on their expertise to help you build and train your custom AI model, so you can rest assured that you’ll never have to figure everything out by yourself.

Eager to get started?

Officially, Red Hat Enterprise Linux AI won’t be available until later this year. But if you have a Red Hat developer account, you can already download and try out a preview. So if you’re planning to use generative AI to improve your company processes, you can get started right now.

If you then want to scale up the AI model you’ve developed in RHEL AI, that’s straightforward too. Whereas a virtual machine has to be scaled vertically, with more RAM, a stronger CPU, and so on, an AI model in RHEL AI can easily be scaled horizontally because it’s container-image based. Or you can scale up your AI workflows with Red Hat OpenShift AI, Red Hat’s powerful new AI/ML platform.

Many companies that want to dive into generative AI have a long list of questions: How do we get started? What resources do we need? Where do we find the training data and expertise we need? How can AI improve our processes? RHEL AI lets you discover the answers to these questions for yourself. With RHEL AI you can take your first steps in AI development without vast amounts of training data, specialist know-how, computing power, or financial resources. And it’s not theoretical but truly practical. So be sure to check out the preview and see for yourself what RHEL AI can do for your organization.

Would you like to get started with Red Hat Enterprise Linux AI? Read more about it here and download the preview.

Are you looking for an experienced partner to implement your AI application? Contact us.
