
Machine learning in production challenges developers' skills

Deploying machine learning models requires an entirely different skill set than developing them, and data scientists and engineering teams need to be ready to bridge this gap.

Enterprises face different challenges in developing machine learning algorithms and in putting machine learning into production. Machine learning development is an experimental and exploratory process, whereas deployment demands consistent results that are secure and well-managed.

In the development phase, the goal is to optimize an algorithm for accuracy.

"Research is inherently experimental, and failure is accepted," said JF Huard, CTO of data science at AppDynamics, a vendor that sells an application performance monitoring platform.

In the deployment phase, a machine learning model is launched to internal or external consumers. This phase has more constraints, and a higher level of accuracy and performance is expected. Managing the cost and scale of the implementation can also be challenging, and sometimes cost-prohibitive.

"In production and deployment, there are more constraints, such as cost and resources and new patterns in data, that were not observed while doing research, partly because, in research, one cannot evaluate all possibilities," Huard said.

Making sense of unstructured data

One of the biggest challenges in putting machine learning in production is making sense of unstructured data. In production, a model may encounter unstructured data or data types different from those it was trained on in the controlled research environment.

"In production, you have a lot of unexpected data and situations that creep up," Huard said.

For example, when a machine learning algorithm is used to tag images, an image labeling process determines whether a given image is or is not an apple. Over time, new labeled input trains the algorithm to correctly identify images of apples.
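The labeling loop described above can be sketched with a toy nearest-centroid classifier. The (redness, roundness) features and the training data here are hypothetical stand-ins for real pixel data, not anything from the article:

```python
# Toy sketch of label-driven training: each labeled example contributes to a
# per-class centroid; prediction picks the class with the nearest centroid.
# The (redness, roundness) features are hypothetical stand-ins for pixels.

def train(examples):
    """examples: list of ((redness, roundness), label) pairs."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    # Centroid = mean feature vector of each class's labeled examples.
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def predict(centroids, point):
    px, py = point
    # Squared Euclidean distance to each class centroid; nearest wins.
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2 +
                               (centroids[lbl][1] - py) ** 2)

labeled = [((0.9, 0.8), "apple"), ((0.8, 0.9), "apple"),
           ((0.1, 0.2), "not apple"), ((0.2, 0.1), "not apple")]
model = train(labeled)
print(predict(model, (0.85, 0.9)))   # a red, round object
```

As the article notes, each new labeled example refines the model; here, it shifts the class centroids toward the true feature distribution.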

[Figure: Example of the machine learning process. Many machine learning processes follow similar steps.]

The issue is that someone needs to label images in a training data set, said Huard. To take the image labeling example a step further, on a social media site, a user would have to tag people in a photo so the algorithm can subsequently learn to recognize a group of pixels as the face of one person or another.

But this process is more complex for many common enterprise applications of machine learning models, such as mining support tickets as a data set to improve IT management. In this case, data scientists need to find a way to take the support ticket incidents and correlate each data point with in-house application data to train the algorithm and make it actionable.
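The correlation step Huard describes might look like the following sketch, which joins ticket incidents to application telemetry by application and hour. All field names and values here are hypothetical illustrations, not a real schema:

```python
# Hypothetical sketch: join support-ticket incidents to in-house application
# telemetry by (app_id, hour) to build labeled training rows.
from datetime import datetime

tickets = [
    {"app_id": "billing", "opened": datetime(2018, 9, 3, 14, 5), "severity": "high"},
    {"app_id": "search",  "opened": datetime(2018, 9, 3, 9, 40), "severity": "low"},
]
telemetry = {
    ("billing", datetime(2018, 9, 3, 14)): {"cpu": 0.92, "error_rate": 0.07},
    ("search",  datetime(2018, 9, 3, 9)):  {"cpu": 0.41, "error_rate": 0.01},
}

def to_training_rows(tickets, telemetry):
    rows = []
    for t in tickets:
        # Truncate the ticket timestamp to the hour to match telemetry keys.
        hour = t["opened"].replace(minute=0, second=0, microsecond=0)
        metrics = telemetry.get((t["app_id"], hour))
        if metrics:  # drop tickets with no matching telemetry
            rows.append({**metrics, "label": t["severity"]})
    return rows

rows = to_training_rows(tickets, telemetry)
```

Even this simplified join assumes both data sets are accessible and share a key, which, as Huard notes, is often a challenge in itself.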

"It's very challenging, and even getting access to the right data sets can be a challenge in itself," Huard said.

Planning for data drift

Another challenge lies in tracking and responding to changes in the performance of machine learning models in production. Unlike most other applications, machine learning applications are subject to data drift.

"Contrary to other engineering systems and products, a machine learning product that works today could fail tomorrow," said Jennifer Prendki, vice president of machine learning at Figure Eight, which offers an AI platform that improves machine learning training data.

This is because the data driving machine learning in production is subject to trends, seasonality patterns and changes over time. This creates a need for models to be regularly retrained on new, real-world data. The work is never complete.
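A minimal drift check along these lines compares a statistical signature (here, mean and standard deviation) of recent production data against the training baseline. The threshold and data are illustrative assumptions, not recommendations from the article:

```python
# Sketch of a simple drift check: compare the mean and standard deviation of
# a recent production window against the training baseline and flag drift
# when either shifts by more than a tolerance (in baseline-std units).
import statistics

def signature(values):
    return statistics.mean(values), statistics.pstdev(values)

def drifted(baseline, recent, tol=0.25):
    b_mean, b_std = signature(baseline)
    r_mean, r_std = signature(recent)
    scale = b_std or 1.0  # avoid division by zero on constant baselines
    return (abs(r_mean - b_mean) / scale > tol or
            abs(r_std - b_std) / scale > tol)

train_window = [10, 11, 9, 10, 12, 10, 11]   # feature values at training time
same_window  = [10, 12, 9, 11, 10, 10, 11]   # similar distribution: no drift
shifted      = [15, 16, 14, 17, 15, 16, 15]  # e.g., a seasonal shift: drift

print(drifted(train_window, same_window))  # False
print(drifted(train_window, shifted))      # True
```

When the check fires, the model is a candidate for retraining on fresh, real-world data, which is the ongoing work the article describes.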

Prendki said managers should consider machine learning lifecycle management. Unfortunately, it can be challenging to generalize this kind of process across all machine learning models in the enterprise because the rules related to the management of models are specific to particular use cases and are difficult to validate. This adds an extra layer of complexity to the task of putting models into production.

Track accuracy to optimize retraining schedule

One strategy is to develop a machine learning operations process, sometimes called MLOps, which complements DevOps-related processes. Whereas DevOps focuses on sporadic, unexpected system failures that reduce availability, MLOps addresses the progressive and inexorable decay of machine learning models.

The process starts by creating a minimum viable product, or MVP, that enables the early identification of gaps that exist between the research and deployment phases. There will always be time to refine the model and improve accuracy once there is proof that the product actually addresses the problems of the customer and that there is no disconnect between the training data and the actual data.

Enterprises should allow the model and the system to run in the production environment for a couple of cycles before attempting to automate its lifecycle after rollout. In order to keep pace with data drift, the creator of a model might suggest a training frequency and set up a schedule according to this recommendation. Still, there is no guarantee that the optimal interval between model retraining will stay constant, which could mean that models are either not retrained enough, leading to inaccurate predictions, or they are trained too often, which leads to high computing costs.

Prendki suggested a much better practice is to keep track of the statistical signatures of machine learning models' inputs and outputs by evaluating the instantaneous accuracy of the model. This requires an investment in data monitoring tools and in hiring data analysts for machine learning teams.
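One way to act on such accuracy tracking is to retrain when a rolling accuracy window dips below a floor, rather than on a fixed calendar schedule. The window size and floor below are illustrative assumptions, not figures from the article:

```python
# Sketch: track a rolling window of instantaneous accuracy and trigger
# retraining when it falls below a floor, instead of retraining on a
# fixed schedule that may be too frequent (costly) or too rare (inaccurate).
from collections import deque

class AccuracyMonitor:
    def __init__(self, window=100, floor=0.90):
        self.outcomes = deque(maxlen=window)  # True/False per prediction
        self.floor = floor

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes)

    def needs_retraining(self):
        # Only decide once the window is full, to avoid noisy early readings.
        return (len(self.outcomes) == self.outcomes.maxlen and
                self.accuracy() < self.floor)

monitor = AccuracyMonitor(window=10, floor=0.9)
for pred, actual in [(1, 1)] * 9 + [(1, 0)] * 3:  # accuracy decays over time
    monitor.record(pred, actual)
print(monitor.needs_retraining())  # True: 7/10 of the last window correct
```

This ties the retraining cadence to observed model behavior, so the interval adapts as drift accelerates or slows.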

Improve communications

The field of DevOps arose because enterprises recognized there were communication gaps between developers and operations teams. MLOps can address similar gaps between the data scientists building machine learning models and the operations teams tasked with keeping them running. This is more of a cultural challenge than a technical one.

Putting machine learning into production can expose a gap in expertise and experience between an organization's data science and operations teams. Both teams are often tasked with machine learning runtime responsibilities without truly understanding the core considerations of the other side.

"Most organizations make the mistake of assuming that their data team will solve the whole problem and bring them to the finish line of the machine learning journey," said Sivan Metzger, CEO of ParallelM, provider of a tool that manages machine learning models in production. "This mistake inevitably leads to frustration by all sides and is guaranteed not to yield the desired results."

Organizations must realize that the success of putting machine learning models into production requires operations teams to work with data scientists from the beginning to ensure that the resulting models yield good results, are easy to manage and are regularly updated. This involves having both teams collaboratively set guidelines for optimizing the automation of machine learning deployment, management and scaling, Metzger said.

This was last published in September 2018


What do you think are the biggest challenges to putting machine learning into production?
Handoffs between personnel and groups are where delays and inefficiency appear. When a model requires a handoff between data science, engineering and DevOps for deployments and updates, there are likely to be significant delays.

One method is to have data scientists do everything, but this is a wildly inefficient way to scale.
