Over the past year there has been a great deal of talk about both automation and machine learning (ML). Increasingly, though, the two appear together, particularly in the form of automated machine learning, or AutoML.

What is AutoML?

In a recent explanatory document, Redmond, WA-based Microsoft explained that automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity all while sustaining model quality. Automated ML in Azure Machine Learning is based on work done in the Microsoft Research division.

Traditional machine learning model development is resource-intensive, requiring significant domain knowledge and time to produce and compare dozens of models. With automated machine learning, users can sharply reduce the time it takes to get production-ready ML models.

According to Wayne Butterfield, director of Stamford, Conn.-based ISG Automation, a unit of technology research and advisory firm ISG, AutoML is to 'no data scientists' what robotic process automation is to 'no developers.' It is the low-code/no-code equivalent of a useful tool that would otherwise require a skill set the masses do not possess.

This does not remove the need for training and some basic understanding of data, data labeling and desired outcomes, he said. However, it does allow a plethora of existing machine learning algorithms to be tried and tested against data sets as autonomously as possible, hence the AutoML label for this technology type.

Data is, and always will be, complex to deal with. Although AutoML goes some way toward assisting with selecting and tuning an algorithm, moving from an idea to proof of concept, to pilot and finally into production requires a whole new set of tools and capabilities. “This means AutoML is a useful tool to have in your tool bag, but it doesn't replicate every stage of getting your ML models into production,” he added.


AutoML Feeding Data Into Models

John Kane is head of signal processing and machine learning at Boston-based Cogito. He explained that training a neural network is an automated process in which data is continuously fed as input to the model. The model outputs predictions given its current weights; based on these, an error term is computed and then propagated back through the network to update the weights. This repeats until some stopping criterion is met.
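The loop Kane describes can be sketched in a few lines. As a minimal, illustrative stand-in for a full neural network, a single linear model trained with squared error shows the same cycle: forward pass, error term, weight update, stopping criterion. All names and data here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                       # input data
true_w = np.array([1.5, -2.0, 0.5])                 # "ground truth" for the toy problem
y = X @ true_w + rng.normal(scale=0.01, size=100)   # targets with a little noise

w = np.zeros(3)        # current model weights
lr = 0.1               # learning rate
prev_loss = np.inf

for epoch in range(1000):
    pred = X @ w                       # forward pass: predictions given current weights
    err = pred - y                     # error term
    grad = X.T @ err / len(y)          # gradient of the squared error w.r.t. the weights
    w -= lr * grad                     # propagate the error back: update the weights
    loss = float(np.mean(err ** 2))
    if abs(prev_loss - loss) < 1e-9:   # stopping criterion: loss has converged
        break
    prev_loss = loss

print(np.round(w, 2))
```

The recovered weights end up close to `true_w`; in a real neural network the gradient computation runs through many layers (backpropagation), but the outer loop has this same shape.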

Although the weights of the model are updated during training, the structure (or architecture) of the model is not. That is, until AutoML came along. AutoML seeks to minimize the need for human intervention in the machine learning development process. One major focus of AutoML is to optimize not just the model weights but also the architecture during training. The intention is to automate the architecture selection process, which is traditionally done by scientific practitioners or via brute-force grid search.
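The brute-force grid search that AutoML aims to automate can be sketched as follows: train one small network per candidate architecture, then keep the one with the lowest validation error. This is a toy illustration, not a NAS implementation; the one-hidden-layer network, the search space of hidden-layer widths and all hyperparameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])                        # a nonlinear, non-monotonic target
X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

def train(width, epochs=3000, lr=0.1):
    """Train a one-hidden-layer tanh network of the given width;
    return its validation mean squared error."""
    W1 = rng.normal(scale=0.5, size=(1, width))
    b1 = np.zeros(width)
    W2 = rng.normal(scale=0.5, size=(width, 1))
    for _ in range(epochs):
        h = np.tanh(X_train @ W1 + b1)         # forward pass
        pred = (h @ W2)[:, 0]
        err = (pred - y_train)[:, None]
        # backward pass: gradients of the squared error (up to a constant factor)
        gW2 = h.T @ err / len(y_train)
        gh = err @ W2.T * (1 - h ** 2)
        gW1 = X_train.T @ gh / len(y_train)
        gb1 = gh.mean(axis=0)
        W2 -= lr * gW2
        W1 -= lr * gW1
        b1 -= lr * gb1
    val_pred = (np.tanh(X_val @ W1 + b1) @ W2)[:, 0]
    return float(np.mean((val_pred - y_val) ** 2))

# The "search space": candidate architectures, here just hidden-layer widths.
results = {w: train(w) for w in (1, 4, 16)}
best = min(results, key=results.get)
print(best, round(results[best], 4))
```

A single-unit network can only produce a monotone function, so the search favors a wider architecture for this target. Real NAS replaces this exhaustive loop with a learned or heuristic search over a vastly larger space, which is exactly why its resource demands are a concern.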

“Although AutoML (in particular so-called Neural Architecture Search; aka NAS) is an attractive current area of research, it is still to gain traction with engineers working on commercial ML applications,” Kane said. He cites the example of speech and language processing: although there have been interesting research developments in this area (see this Interspeech paper by researchers from Google), the complexity and resource requirements make the approach largely prohibitive.

However, other focus areas under the AutoML umbrella have seen much wider adoption, in particular those related to ML Ops, the term typically associated with efforts to bring rigorous software engineering, data engineering and DevOps practices to machine learning. ML Ops began to receive intense research and commercial interest around 2015, when the now-famous Google paper “Hidden Technical Debt in Machine Learning Systems” was published at the NeurIPS conference.

State-of-the-art commercial machine learning systems now adopt ML Ops best practices, which ensure that the entire workflow from raw data to selected models (and even model deployment) is fully reproducible from start to finish. They also ensure that machine learning models are properly versioned, efficiently deployed to production, and instrumented with monitors, dashboards and alarms that give machine learning engineers transparency into how their models behave out in the real world.


This also enables activities like periodic or continuous retraining of machine learning models as new data becomes available. Further, it lets machine learning engineers react to issues observed with their models in production, including model drift or bias toward certain demographic variables, by retraining with new, more balanced data sampling.
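One way such a monitoring-and-retraining loop detects drift is by comparing the distribution of incoming production data against the training data. The sketch below uses the Population Stability Index, one common drift statistic; the 0.2 threshold is a widely cited rule of thumb, and the data and variable names are illustrative assumptions.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of one feature.
    Bins are decile edges of the reference (training) sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    # clip live values into the reference range so outliers fall in the end bins
    a = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    e = np.clip(e, 1e-6, None)   # avoid log(0) for empty bins
    a = np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(2)
train_feature = rng.normal(0.0, 1.0, 5000)   # feature as seen at training time
live_feature = rng.normal(0.8, 1.0, 5000)    # same feature, drifted in production

score = psi(train_feature, live_feature)
needs_retraining = score > 0.2               # rule-of-thumb cutoff for major drift
print(round(score, 3), needs_retraining)
```

In a production ML Ops setup this check would run per feature on a schedule, feed a dashboard or alarm, and trigger the retraining pipeline when the threshold is crossed.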

AutoML and Its Role in the Enterprise 

As for its role in the enterprise? “In my opinion, any commercial organization, particularly ones with enterprise customers, involved in developing machine learning products need to take ML Ops extremely seriously and they ought to invest heavily in these systems,” Kane said. The benefit is that machine learning engineers will identify problems with their production models before customers do, react to those issues efficiently, adjust the models by adding more training data or modifying the training protocol slightly, and deploy regular improvements to maintain an optimal user experience.

Automated machine learning (AutoML), mostly in the form of Neural Architecture Search, in which a neural network's architecture is designed automatically, has a lot more to give. It can help more companies access machine learning, even those with little in-house AI expertise, Leif-Nissen Lundbæk, co-founder and CEO of Germany-based Xayn, told us.

However, it is just one step in the creation of an efficient AI system. The labeling and data preparation, for example, are at least as important as choosing and training the correct model, and often considerably more laborious and time-consuming.

In addition, users of AutoML should be wary of assuming that anyone can use it. It is a bit like GitHub's Copilot, an AI that helps developers code: there is much more that goes into developing a good model, and especially into using it in production. “Companies will certainly need AI engineers to use and develop efficient models. For example, it took us more than a year to get the whole AI system production-ready while the design and training of the actual models took only a couple of weeks,” he said.

Nevertheless, it makes sense to further invest in automating the whole AI development workflow, both to democratize the use of machine learning and to decrease development costs, especially since AI engineering resources are extremely scarce, particularly at non-AI companies.

AI and Its Potential for Personalization

The global pandemic permanently changed consumer behavior and commerce, and business owners must learn how to target and market to consumers in new ways, Jason Perry, CEO and co-founder of Austin-based Engagency, added. Current martech systems can target segments of people with specific information and services. Perry believes this year will bring a seismic shift toward AI that can personalize information for a specific person rather than groups of people, experiment with which marketing techniques succeed, and continually fine-tune the response.

Think of it this way: currently you can target a group of 20-year-old females who buy a certain brand; in 2021, you can target a specific person, a 20-year-old female in Austin, Texas, with specific interests and behaviors, he said. “In the future, there will be a day when a computer will tell a human what it needs. No human will need to interpret the data — the machine will do it. That will be the next phase,” he said.