
Red Hat OpenShift Operators target AI, big data workloads

Speedy deployment of machine learning jobs can't arrive too soon for many enterprises. Container platform enhancements from Red Hat aim to quicken the uptake of such innovations.

BOSTON -- Open source Linux and OpenShift container platform vendor Red Hat moved to advance its support for AI and machine learning in enterprises. The company discussed work with Nvidia to speed GPU-based AI application building on OpenShift via new OpenShift Operators.

Red Hat also highlighted, at its Red Hat Summit 2019 conference here May 9, operators intended to bring AI and machine learning out of R&D. Examples include an operator from H2O.ai that eases configuration of machine learning jobs and one from ProphetStor Data Services that improves monitoring of such jobs once they are deployed.

OpenShift Operators are prescribed methods for packaging, deploying and managing containerized applications. Enterprises are finding a need for them as they seek to move machine learning experiments into actual operations.
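At its core, an Operator runs a control loop that compares the desired state of an application -- declared in a custom resource -- with what is actually running, and corrects any drift. The Go sketch below illustrates that reconciliation idea using only the standard Kubernetes client; the deployment name, namespace and image are hypothetical, and a production Operator would typically be built with the Operator SDK or controller-runtime rather than as a one-shot program like this.

    package main

    import (
        "context"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the local kubeconfig; inside a cluster an Operator would use rest.InClusterConfig().
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        ctx := context.Background()
        name, namespace := "ml-scoring-service", "default" // hypothetical workload

        // Reconcile step: if the desired Deployment is missing, create it.
        _, err = client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
        if errors.IsNotFound(err) {
            replicas := int32(1)
            desired := &appsv1.Deployment{
                ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
                Spec: appsv1.DeploymentSpec{
                    Replicas: &replicas,
                    Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": name}},
                    Template: corev1.PodTemplateSpec{
                        ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": name}},
                        Spec: corev1.PodSpec{
                            Containers: []corev1.Container{{
                                Name:  "scorer",
                                Image: "registry.example.com/ml/scorer:latest", // hypothetical image
                            }},
                        },
                    },
                },
            }
            _, err = client.AppsV1().Deployments(namespace).Create(ctx, desired, metav1.CreateOptions{})
        }
        if err != nil && !errors.IsNotFound(err) {
            panic(err)
        }
        fmt.Println("reconciled", namespace+"/"+name)
    }

A full Operator repeats this check continuously in response to cluster events, which is what lets it keep a machine learning workload configured correctly without manual intervention.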

Such "AIOps" have emerged as a major driver of Red Hat's AI efforts as Watson AI originator IBM moves toward closing its $34 billion acquisition of Red Hat, the leading open source player.

Optimized deployment for GPU jobs

For AI and machine learning to reach into mainstream computing, performance optimizations at chip, firmware, driver, algorithmic library and other levels in the software stack will be needed, said Chris Lamb, vice president of computing software at Nvidia, in a keynote at the conference.


Nvidia's goal in a new early access program it is launching with Red Hat is to better enable OpenShift containers to run such optimizations, Lamb said. Like others, he pointed to containers as a likely means by which AI jobs will take shape in operations.

"Our goal here is to simplify the management of the full stack [for] accelerated computing," Lamb said, referring to the Red Hat-Nvidia effort to forge repeatable container deployment and management practices for AI and machine learning in data centers. These often take the form of OpenShift Operators.

Many parts of the AI pipeline require automated configuration, so activity around persistent containers for data is also an area of development. At the event, Red Hat also introduced new certifications for OpenShift Operators, including operators from Crunchy Data, MariaDB, MemSQL, NuoDB and others.
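On the persistent-data side, one common building block is a PersistentVolumeClaim that a training or data-preparation container can mount so the data outlives any single pod. The Go sketch below creates such a claim, assuming a client-go release from around that time, in which the claim's storage request uses the standard ResourceRequirements type; the claim name, namespace and storage class are hypothetical.

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // A claim for 100Gi of storage that training containers can mount,
        // so the data persists independently of any individual pod.
        storageClass := "standard" // hypothetical storage class name
        pvc := &corev1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{Name: "training-data", Namespace: "default"}, // hypothetical names
            Spec: corev1.PersistentVolumeClaimSpec{
                AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                StorageClassName: &storageClass,
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceStorage: resource.MustParse("100Gi"),
                    },
                },
            },
        }

        if _, err := client.CoreV1().PersistentVolumeClaims("default").Create(context.Background(), pvc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }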

The show floor at Red Hat Summit 2019, where AIOps proved to be a strong suit.

In pursuit of operations

All of this could position OpenShift Operators as a key step toward applied machine learning in the enterprise.


Successful automation of machine learning modeling is especially necessary because the data and tools involved are constantly being updated, according to Sri Satish Ambati, CEO and co-founder of data science and machine learning platform vendor H2O.ai. Here, again, containers play a role.

"Today, we think of the world as a set of continuously learning containers. Now, data is part of the container," Ambati said in an interview.

"As data is streaming into containers today you can package it easily, as well as make it simple for programmers to use," Ambati said. He demonstrated H2O's Driverless AI automated machine learning platform running on OpenShift at the Red Hat Summit.

"The containers allow us to take advantage of the latest developments in algorithms -- in the latest set of recipes that data scientists have created," he said.

Automating excellence

Still, machine learning workloads today bring new complexity to computing infrastructure that already held significant challenges, said Daniel Riek, senior director at Red Hat's AI center of excellence, in an interview.

Riek said Red Hat development activity has focused on AI as a customer workload requirement, AI for general business process improvement and the application of AI to IT infrastructure itself.

"We've seen growing complexity in IT. With AI, you need to automate that even more," Riek said. And, as AI moves out of the cloud and into edge computing and IoT, automation of machine learning configurations will become more pressing still, he added.

AI on the merge

In recent years, Red Hat has particularly emphasized predictive and AI analytics capabilities for data center operations, and those AIOps efforts will become an expanding focus of its OpenShift container work, according to James Kobielus, an analyst at Wikibon.

But it is too early to speculate on how Red Hat and IBM AI efforts will be harmonized upon completion of the merger, Kobielus said.

"IBM has a much deeper pool of research, patents and products. It is safe to assume that premium AI functionality will come from intellectual property that the future parent will be providing to Red Hat," he said.

Meanwhile, the IBM-Red Hat pairing took an important step forward earlier this month, as an IBM SEC filing disclosed that the U.S. Department of Justice had effectively approved the planned merger.

The conference was May 7 to 9 at the Boston Convention and Exhibition Center.
