Serverless machine learning reduces development burdens

Getting started with machine learning throws multiple hurdles at enterprises. But the serverless computing trend, when applied to machine learning, can help remove some barriers.

IT infrastructure that enables rapid scaling, integration and automation is highly valued in a marketplace that evolves ever faster. Fast, serverless machine learning is a prime example.

Serverless computing is a cloud-based model wherein a service provider accepts code from a customer, dynamically allocates resources to the job and executes it. This model can be more cost-effective than conventional buy-or-rent server models. Elasticity replaces scalability, relieving the customer of deployment grief. Code development can be far more modular. And headaches such as HTTP request handling and multithreading vanish altogether.
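To make the model concrete, here is a minimal sketch of what the customer actually supplies on a FaaS platform such as AWS Lambda: a single handler function. The provider does the request routing, scaling and billing; the event payload shape (a `"name"` key) is a hypothetical assumption for illustration only.

```python
import json

def handler(event, context):
    # Business logic only -- no server setup, no threading, no HTTP plumbing.
    # The provider invokes this function on demand, per request.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# The handler can be exercised locally before handing it to the provider:
result = handler({"name": "dev"}, None)
print(result["statusCode"])  # 200
```

The developer writes and tests this function like any other; deployment is a matter of uploading it, not provisioning machines.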

Few arrangements are more efficient in time and money: the enterprise pays the provider job by job, billed only for the resources consumed in each execution. This simple pay-as-you-go model frees up enterprise resources for faster app and service development and levels the playing field for development shops below the enterprise level.

Attractive as it is, how can this paradigm accommodate machine learning, which is becoming a mission-critical competitive advantage in many industries?

A common problem in working with machine learning is moving trained models into production at scale: getting the model to perform for a great many users, often in different places, as fast as those users require. Nested in this broad problem is the more granular headache of concept drift -- the model's performance degrades over time as the data it sees diverges from the data it was trained on -- which forces frequent retraining. That, in turn, creates versioning issues, and so on.
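Concept drift can be monitored with a simple loop: track the model's rolling accuracy on recent labeled predictions and flag a retrain when it falls below a threshold. The sketch below is illustrative only; the window size and threshold are assumptions, not recommendations.

```python
from collections import deque

def make_drift_monitor(window=100, threshold=0.9):
    """Flag retraining when rolling accuracy over the last `window`
    labeled predictions drops below `threshold` (illustrative values)."""
    recent = deque(maxlen=window)

    def record(prediction, actual):
        recent.append(prediction == actual)
        accuracy = sum(recent) / len(recent)
        return accuracy < threshold  # True -> model likely drifting

    return record

# Usage: feed (prediction, ground-truth) pairs as labels arrive.
record = make_drift_monitor(window=5, threshold=0.8)
needs_retrain = False
for pred, actual in [(1, 1), (0, 0), (1, 0), (0, 1), (1, 0)]:
    needs_retrain = record(pred, actual)
print(needs_retrain)  # True: rolling accuracy fell to 2/5
```

In a serverless setting, a check like this could itself run as a small function triggered as prediction feedback arrives.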

Function as a service

Function as a service (FaaS) is an implementation of serverless computing that works well for many application deployment scenarios -- serverless machine learning included. The idea is to create a pipeline by which code is moved, in series, from testing to versioning to deployment, using FaaS throughout as the processing resource. When the pipeline is well-conceived and implemented, most of the housekeeping difficulties of development and deployment are minimized, if not removed.

A machine learning model deployment adds two steps to this pipeline:

  • training, upon which the model's quality depends; and
  • publishing -- timing the go-live of the code in production, once it's deployed.
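The two extra steps can be sketched end to end with a deliberately tiny model: train it, serialize the artifact with a version tag, and "publish" by having a FaaS-style handler load the versioned artifact at invocation time. Everything here -- the one-parameter threshold model, the file layout -- is a hypothetical illustration, not a production pattern.

```python
import json, os, tempfile

# -- Training step: fit a trivial one-parameter model (a mean threshold). --
def train(samples):
    return {"version": "v1", "threshold": sum(samples) / len(samples)}

# -- Publishing step: write the versioned artifact the function will serve. --
artifact_dir = tempfile.mkdtemp()
model = train([2.0, 4.0, 6.0])
path = os.path.join(artifact_dir, f"model-{model['version']}.json")
with open(path, "w") as f:
    json.dump(model, f)

# -- Deployed function: loads the published model and serves predictions. --
def handler(event, context=None):
    with open(path) as f:
        m = json.load(f)
    return {"above_threshold": event["value"] > m["threshold"],
            "model_version": m["version"]}

print(handler({"value": 5.0}))  # {'above_threshold': True, 'model_version': 'v1'}
```

Because the model version travels with the artifact, swapping in a retrained model is a matter of publishing a new file and pointing the function at it -- which is where the versioning discipline mentioned earlier pays off.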

FaaS is a great platform for this kind of process, given its versatile and flexible nature.

All the major public clouds provide FaaS. The list begins with AWS Lambda, Microsoft Azure Functions, Google Cloud Functions and IBM Cloud Functions, and it includes many others.

Easier AI development

The major FaaS platforms accommodate JavaScript, Python and a broad range of other languages. Azure Functions, for example, supports both Python and JavaScript.

Beyond the languages themselves, there are many machine learning libraries available through serverless machine learning offerings: TensorFlow, PyTorch, Keras, MLpack, Spark ML, Apache MXNet and a great many more.

A key point about AI development in the FaaS domain is that it vastly simplifies the developer's investment in architecture: autoscaling is built in; multithreading goes away, as mentioned above; and fault tolerance and high availability are provided by default.

Moreover, if machine learning models are essentially functions handled by FaaS, then they are abstracted and autonomous in a way that relieves timeline pressure when different teams are working with different microservices in an application system. The lives of product managers get much easier.

Turnkey FaaS machine learning

Vendors are doubling down on the concept of serverless computing and continuing to refine their options. Amazon, Google and others have services set aside to do your model training for you, on demand: Amazon SageMaker and the Google Cloud ML Engine are two of many such services, which also include IBM Watson Machine Learning, Salesforce Einstein and Seldon Core -- which is open source.

These services do more than just train machine learning models. Many serverless machine learning offerings handle the construction of data sets to be used in model training, provide libraries of machine learning algorithms, and configure and optimize the machine learning libraries mentioned above.

Some offer model tuning -- automated adjustment of algorithm parameters to bring the model to its highest predictive capacity.

The FaaS provider you choose, then, could offer one-stop shopping for your machine learning application.
