Open-source tutorials on MLOps, ML model evaluation, and monitoring for data scientists and machine learning engineers. All tutorials contain end-to-end code examples.
In this tutorial, you will learn how to run batch ML model inference and deploy a monitoring dashboard for production ML models using open-source tools.
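As a quick illustration of the batch scoring step this tutorial builds on, here is a minimal sketch assuming a pre-trained scikit-learn model stored with joblib; the file names and feature columns are hypothetical.

```python
# Minimal batch inference sketch: load a trained model, score the latest batch,
# and store the predictions for monitoring and downstream use.
import joblib
import pandas as pd

model = joblib.load("model.joblib")          # hypothetical pre-trained model
batch = pd.read_csv("daily_batch.csv")       # hypothetical daily production batch

batch["prediction"] = model.predict(batch[["feature_1", "feature_2"]])
batch.to_csv("scored_batch.csv", index=False)
```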
A beginner-friendly MLOps tutorial on how to evaluate ML data quality, data drift, and model performance in production, and track them over time using open-source tools.
How do you monitor unstructured text data? In this code tutorial, we’ll explore how to track interpretable text descriptors that assign measurable properties to each text.
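To give a sense of what a descriptor is, here is a simplified sketch that computes a few by hand with pandas; the tutorial itself uses Evidently for this, and the example texts are made up.

```python
import pandas as pd

# Toy texts; in practice these would be production inputs or model outputs.
df = pd.DataFrame({"text": ["Great product, works as expected!",
                            "Terrible. Broke after one day.",
                            "ok"]})

# Interpretable descriptors: each one maps a raw text to a number you can monitor.
df["length"] = df["text"].str.len()                                  # character count
df["word_count"] = df["text"].str.split().str.len()                  # number of words
df["exclamation_share"] = df["text"].str.count("!") / df["length"]   # rough tone proxy

# Once texts become numbers, standard data quality and drift checks apply.
print(df.describe())
```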
In this code tutorial, you will learn how to create interactive visual ML model cards to document your models and data using Evidently, an open-source Python library.
In this code tutorial, you will learn how to set up an ML monitoring system for models deployed with FastAPI. This is a complete deployment blueprint for ML serving and monitoring using open-source tools.
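For a flavor of the setup, here is a minimal sketch of a FastAPI prediction endpoint that logs every request for later monitoring; it is not the tutorial's exact code, and the model file and feature names are hypothetical.

```python
# A FastAPI endpoint that serves predictions and appends each request to a log,
# so a separate monitoring job can compute data quality and drift metrics later.
import json
from datetime import datetime, timezone

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained model


class Features(BaseModel):
    feature_1: float
    feature_2: float


@app.post("/predict")
def predict(features: Features):
    prediction = float(model.predict([[features.feature_1, features.feature_2]])[0])
    # Append the request and prediction to a JSONL log read by the monitoring job.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "feature_1": features.feature_1,
        "feature_2": features.feature_2,
        "prediction": prediction,
    }
    with open("prediction_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return {"prediction": prediction}
```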
In this code tutorial, you will learn how to run batch ML model inference, collect data and ML model quality monitoring metrics, and visualize them on a live dashboard.
In this tutorial, you will learn how to create a data quality and ML model monitoring dashboard using two open-source libraries: Evidently and Streamlit.
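As a rough sketch of how the two libraries fit together (assuming the Evidently 0.4.x Report API; check your installed version), you can render an Evidently HTML report inside a Streamlit app. The CSV paths and dashboard layout below are placeholders.

```python
# Compute a data drift report with Evidently and embed its HTML in a Streamlit app.
import pandas as pd
import streamlit as st
import streamlit.components.v1 as components
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

reference = pd.read_csv("reference.csv")  # data the model was trained on
current = pd.read_csv("current.csv")      # latest production batch

report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("drift_report.html")

st.title("Data drift dashboard")
with open("drift_report.html") as f:
    components.html(f.read(), height=1000, scrolling=True)
```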
In this tutorial, we will explore issues that affect the performance of NLP models in production, simulate them on a toy dataset, and show how to monitor and debug them.
You can look at historical drift in your data to understand how it changes over time and choose sensible monitoring thresholds. Here is an example with Evidently, Plotly, MLflow, and some Python code.
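The idea in a simplified sketch: score drift for every historical period and log it to MLflow so you can see how it evolves and pick thresholds. Here a two-sample KS test stands in for Evidently's drift detection, and the column names and data source are hypothetical.

```python
import mlflow
import pandas as pd
from scipy.stats import ks_2samp

data = pd.read_csv("historical_data.csv", parse_dates=["date"])
reference = data[data["date"] < "2023-01-01"]                 # training-period data
recent = data[data["date"] >= "2023-01-01"]
monthly_batches = recent.groupby(pd.Grouper(key="date", freq="M"))

with mlflow.start_run(run_name="historical_drift"):
    for step, (month, batch) in enumerate(monthly_batches):
        if batch.empty:
            continue
        # p-value of the KS test for one feature; low values suggest drift
        p_value = ks_2samp(reference["feature_1"], batch["feature_1"]).pvalue
        mlflow.log_metric("feature_1_drift_p_value", p_value, step=step)
```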
There is more to performance than accuracy. In this tutorial, we explore how to evaluate the behavior of a classification model before production use.
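As a taste of the kind of checks involved, here is a small scikit-learn sketch that looks beyond accuracy at per-class precision and recall, the confusion matrix, and the effect of the decision threshold, using a synthetic imbalanced dataset.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

# Imbalanced binary classification problem: accuracy alone can look deceptively good.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probabilities = model.predict_proba(X_test)[:, 1]

# Compare behavior at two decision thresholds: precision and recall shift, accuracy barely moves.
for threshold in (0.5, 0.3):
    predictions = (probabilities >= threshold).astype(int)
    print(f"--- threshold {threshold} ---")
    print(confusion_matrix(y_test, predictions))
    print(classification_report(y_test, predictions, digits=3))
```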
Want to learn production ML monitoring? Sign up for our Open-source ML observability course for data scientists and ML engineers. It's free!