Production ML Monitoring: Outliers, Drift, Explainers & Statistical Performance

The lifecycle of a machine learning model only begins once it’s in production. In this talk we provide a practical deep dive into best practices, principles, patterns and techniques around production monitoring of machine learning models. We will cover standard microservice monitoring techniques applied to deployed machine learning models, as well as more advanced paradigms for monitoring machine learning models with Python, leveraging concepts such as concept drift detection, outlier detection and explainability. We’ll dive into a hands-on example, where we will train an image classification machine learning model from scratch using TensorFlow, deploy it, and introduce advanced monitoring components as architectural patterns with hands-on examples. These monitoring techniques will include AI explainers, outlier detectors, concept drift detectors and adversarial detectors. We will also cover high-level architectural patterns that abstract these complex and advanced monitoring techniques.
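As a concrete starting point, here is a minimal sketch of the kind of image classifier the walkthrough trains from scratch. The dataset (Fashion-MNIST), the architecture, and the `image_classifier.keras` artifact name are illustrative assumptions, not the talk's actual setup:

```python
import tensorflow as tf

# Illustrative dataset: Fashion-MNIST (the talk's dataset may differ).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small CNN trained from scratch.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Reshape((28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))

# Saved artifact to be deployed and then monitored in production.
model.save("image_classifier.keras")
```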
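One monitoring component the talk introduces is an outlier detector on incoming requests. A common approach, sketched below under illustrative assumptions (stand-in data and an arbitrary 99th-percentile threshold), is to train a small autoencoder on the training distribution and flag inputs with high reconstruction error:

```python
import numpy as np
import tensorflow as tf

def build_autoencoder(input_dim=784, latent_dim=32):
    """Small dense autoencoder; inputs it reconstructs poorly are outliers."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(latent_dim, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(input_dim, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Stand-in for flattened image pixels scaled to [0, 1].
x_train = np.random.rand(1000, 784).astype("float32")
ae = build_autoencoder()
ae.fit(x_train, x_train, epochs=5, verbose=0)

# Calibrate a threshold on training data: flag the worst 1% as outliers
# (the percentile is an illustrative choice, not a recommendation).
recon = ae.predict(x_train, verbose=0)
errors = np.mean((x_train - recon) ** 2, axis=1)
threshold = np.percentile(errors, 99)

def is_outlier(batch):
    """batch: 2-D array (n_samples, input_dim); returns a boolean mask."""
    err = np.mean((batch - ae.predict(batch, verbose=0)) ** 2, axis=1)
    return err > threshold
```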
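Drift detection can similarly be framed as a statistical test comparing a reference window (e.g. training data) against live traffic. A minimal sketch using a per-feature two-sample Kolmogorov-Smirnov test from SciPy; the function and batch names are illustrative, and production detectors typically add multiple-testing corrections and dimensionality reduction:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference_batch, live_batch, p_val=0.05):
    """Flag drift if any feature's KS test rejects equality of distributions.

    reference_batch, live_batch: 2-D arrays (n_samples, n_features).
    Returns (drift_detected, list of drifted feature indices).
    """
    drifted = []
    for i in range(reference_batch.shape[1]):
        _, p = ks_2samp(reference_batch[:, i], live_batch[:, i])
        if p < p_val:
            drifted.append(i)
    return len(drifted) > 0, drifted

# Illustrative usage: a shifted live batch should be flagged.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(500, 10))
live = rng.normal(0.5, 1.0, size=(500, 10))
is_drift, features = detect_drift(ref, live)
print(is_drift, features)
```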