What is ML Model Monitoring, and Why is It Important?

Once an ML model is deployed, its performance must be monitored continuously. That's because model performance can degrade over time, and that degradation needs to be detected and addressed promptly.

[Figure: Where machine learning model monitoring sits in the ML development lifecycle]

Why Do ML Models Degrade?

Models can degrade due to a number of issues, including changes in customer behavior and problems with the data (known as concept drift and data drift, respectively).

Let's take a product recommendation system trained on consumer purchase data from 2019.

This model may not have worked well in 2020, because consumer behavior changed suddenly that year due to the pandemic. Consumers were busy stocking up on groceries, disinfectants, and masks.

Travel items, apparel, and party supplies would probably have been low on the priority list as the world limited travel and gatherings. Imagine getting recommendations for business outfits while working from home, or for natural cleaners when all you wanted were disinfectants.

In this case, the model has lost touch with the customer's buying behaviors; its performance has deteriorated. Customers would either ignore the recommendations or churn as they seek a better e-commerce platform. And in the end, sales take a hit.

So you can see why monitoring models in production is necessary. Any performance degradation needs to be caught early on and addressed. The corrective actions can include retraining models, fixing data quality issues, revamping models, and auditing upstream systems.
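
To make this concrete, below is a minimal sketch of one way to catch degradation early: comparing a live feature's distribution against its training distribution with a two-sample Kolmogorov-Smirnov test. The feature, the synthetic data, and the alpha threshold are illustrative assumptions, not a prescribed setup.

```python
# A minimal sketch of drift detection: compare a live feature's
# distribution against its training distribution with a two-sample
# Kolmogorov-Smirnov test. The feature, the synthetic data, and the
# alpha threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(train_values, live_values, alpha=0.01):
    """Return True if the live distribution differs significantly
    from the training distribution."""
    _statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Stand-ins for a feature such as basket size in 2019 vs. 2020
train_basket_size = np.random.normal(45.0, 10.0, size=5_000)
live_basket_size = np.random.normal(80.0, 25.0, size=5_000)

if has_drifted(train_basket_size, live_basket_size):
    print("Drift detected: consider retraining or auditing upstream data.")
```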

You may need special tooling to store the model predictions so that they can later be used for diagnosis and quality analysis. 
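
As a rough illustration of such tooling, here is a minimal sketch that appends each prediction, its inputs, and a timestamp to a JSON-lines file. The schema, file path, and function name are assumptions for illustration; production systems would typically write to a database or a dedicated monitoring store.

```python
# A minimal sketch of prediction logging for later diagnosis. The schema,
# file path, and function name are illustrative assumptions.
import json
import time

def log_prediction(model_version, features, prediction,
                   path="predictions.jsonl"):
    """Append one prediction record, with its inputs and a timestamp,
    to a JSON-lines file for later quality analysis."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example call at serving time
log_prediction("recsys-v1",
               {"user_id": 123, "recent_category": "groceries"},
               ["disinfectant", "masks"])
```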

[Figure: Model degradation (left) vs. a periodically refreshed model (right)]

Apart From Monitoring For Degradation: Feedback

Apart from monitoring models for performance, you can also collect feedback from the usage of these models. This feedback can be captured directly from customers or collected implicitly by monitoring clicks and usage patterns surrounding the model.

All this data can then be used to understand user preferences, which can come in handy when fine-tuning models.
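
Here is a minimal sketch of implicit-feedback capture, under the assumption that each recommendation carries a prediction ID that clicks can be keyed to; the field names and storage format are illustrative.

```python
# A minimal sketch of implicit-feedback capture: record whether a
# recommended item was clicked, keyed to the prediction that produced it.
# The prediction_id convention and field names are illustrative assumptions.
import json
import time

def log_click(prediction_id, item_id, clicked, path="feedback.jsonl"):
    """Append one click / no-click record so it can later be joined
    with the logged prediction when fine-tuning the model."""
    record = {
        "timestamp": time.time(),
        "prediction_id": prediction_id,
        "item_id": item_id,
        "clicked": clicked,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: the user clicked the first recommendation but not the second
log_click("pred-001", "disinfectant", clicked=True)
log_click("pred-001", "masks", clicked=False)
```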

By continuously monitoring models in production and collecting user feedback, you can systematically improve models, fix problems as they arise, and better understand your customers. 

Common Pitfalls In Monitoring & Feedback

In practice, teams often take a set-it-and-forget-it approach to monitoring ML models: models are deployed and not looked at again until a problem arises.

This may not be harmful initially, but when an issue eventually occurs, teams rush to gather data to diagnose the problem.

By then, it's too late.

Since data was not being actively collected, there's little diagnosis you can do at that point. Teams have to either suspend the ML model altogether or start collecting diagnostic data only then, which makes for a poor experience for the people affected. You can avoid much of this by monitoring models and collecting feedback early on.

