There is an overwhelming set of potential metrics to monitor. In this blog, we'll try to introduce a reasonable hierarchy.
"ML monitoring" can mean many things. Are you tracking service latency? Model accuracy? Data quality? This blog organizes everything you can look at into a single framework.
Meet the new feature in the Evidently open-source Python library! You can easily integrate data and model checks into your ML pipeline with a clear success/fail result. It comes with presets and defaults to make the configuration painless.
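To give a sense of the interface, here is a minimal sketch of such a pipeline check. It assumes the TestSuite API of recent Evidently versions; the file paths are placeholders.

```python
import pandas as pd

from evidently.test_suite import TestSuite
from evidently.test_preset import DataStabilityTestPreset

# Placeholder paths: the data the model was trained on and the latest batch
reference = pd.read_csv("reference.csv")
current = pd.read_csv("current.csv")

# A preset bundles sensible default checks, so there is little to configure
suite = TestSuite(tests=[DataStabilityTestPreset()])
suite.run(reference_data=reference, current_data=current)

# A clear success/fail result that a pipeline can branch on
if not suite.as_dict()["summary"]["all_passed"]:
    raise RuntimeError("Data checks failed, stopping the pipeline")
```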
Our CTO Emeli Dral gave a tutorial on how to use Evidently at the Stanford Winter 2022 course CS 329S: Machine Learning Systems Design. Here is the written version of the tutorial and a code example.
Data and prediction drift often need contextual interpretation. In this blog, we walk you through possible scenarios for when you detect these types of drift together or independently.
Meet the new Data Quality report in the Evidently open-source Python library! You can use it to explore your dataset and track feature statistics and behavior changes.
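For illustration, a minimal call might look like this (a sketch assuming the Report API of later Evidently versions; `reference` and `current` are pandas DataFrames with two batches of your data):

```python
from evidently.report import Report
from evidently.metric_preset import DataQualityPreset

# Summary statistics, missing values, and correlations for every feature
report = Report(metrics=[DataQualityPreset()])
report.run(reference_data=reference, current_data=current)
report.show()  # renders an interactive dashboard in the notebook
```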
We are building an open-source tool to evaluate, monitor, and debug machine learning models in production. Here is a look back at what has happened at Evidently AI in 2021.
Now, you can easily customize the pre-built Evidently reports: add your own metrics and statistical tests, or change the look of the dashboards with a bit of Python code.
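For example, instead of a full preset you can pick individual metrics and swap the statistical test. A rough sketch, assuming the metrics API of later Evidently versions ("price" is a stand-in column name):

```python
from evidently.report import Report
from evidently.metrics import ColumnDriftMetric, ColumnSummaryMetric

report = Report(metrics=[
    # Override the default drift test for a single column
    ColumnDriftMetric(column_name="price", stattest="wasserstein"),
    ColumnSummaryMetric(column_name="price"),
])
report.run(reference_data=reference, current_data=current)
report.save_html("custom_report.html")
```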
Even if you can calculate the model quality metric, monitoring data and prediction drift can often be useful. Let’s consider a few examples when it makes sense to track the distributions of the model inputs and outputs.
What can you do once you detect data drift for a production ML model? Here is an introductory overview of the possible steps.
Now, you can use Evidently to display dashboards not only in Jupyter notebooks but also in Colab, Kaggle, and Deepnote.
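Under the hood this is just a rendering switch: in hosted notebooks you ask for inline output. A version-dependent sketch:

```python
# In Colab, Kaggle, or Deepnote, render the dashboard inline
# instead of relying on a separate iframe
report.show(mode="inline")
```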
When monitoring ML models in production, we can apply different techniques. Data drift and outlier detection are among those. What is the difference? Here is a visual explanation.
You can use Evidently together with Prometheus and Grafana to set up live monitoring dashboards. We created an integration example for Data Drift monitoring. You can easily configure it to use with your existing ML service.
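One possible shape of the service side, sketched with the prometheus_client package (an illustration rather than the exact integration example; the result-dictionary layout differs between Evidently versions):

```python
import pandas as pd
from prometheus_client import Gauge, start_http_server

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# A gauge that Prometheus scrapes and Grafana charts
drift_share = Gauge("data_drift_share", "Share of drifting features")
start_http_server(8000)  # expose /metrics on port 8000

def update_drift_metrics(reference: pd.DataFrame, current: pd.DataFrame) -> None:
    report = Report(metrics=[DataDriftPreset()])
    report.run(reference_data=reference, current_data=current)
    # The dictionary path below is version-dependent; adjust if needed
    share = report.as_dict()["metrics"][0]["result"]["share_of_drifted_columns"]
    drift_share.set(share)
```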
You can look at historical drift in data to understand how your data changes and choose the monitoring thresholds. Here is an example with Evidently, Plotly, MLflow, and some Python code.
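The core of the example fits in a short loop: compute drift for each historical period against a fixed reference and log the numbers to MLflow. A sketch where `monthly_batches` is a hypothetical list of DataFrames:

```python
import mlflow

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

with mlflow.start_run(run_name="historical_drift"):
    for step, batch in enumerate(monthly_batches):
        report = Report(metrics=[DataDriftPreset()])
        report.run(reference_data=reference, current_data=batch)
        share = report.as_dict()["metrics"][0]["result"]["share_of_drifted_columns"]
        # One point per period: inspect the series to pick alert thresholds
        mlflow.log_metric("share_of_drifted_columns", share, step=step)
```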
Is it time to retrain your machine learning model? Even though data science is all about… data, the answer to this question is surprisingly often based on a gut feeling. Can we do better?
Now, you can use Evidently to generate JSON profiles. It makes it easy to send metrics and test results elsewhere.
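At the time, this looked roughly as follows; later versions replaced profiles with `report.json()`, so treat this as a sketch of that era's API:

```python
from evidently.model_profile import Profile
from evidently.model_profile.sections import DataDriftProfileSection

profile = Profile(sections=[DataDriftProfileSection()])
profile.calculate(reference, current, column_mapping=None)

json_output = profile.json()  # a JSON string you can log or ship elsewhere
```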
Can you train a machine learning model to predict your model’s mistakes? Nothing stops you from trying. But chances are, you are better off without it.
There is more to performance than accuracy. In this tutorial, we explore how to evaluate the behavior of a classification model before production use.
You can now use Evidently to analyze the performance of classification models in production and explore the errors they make.
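A minimal call might look like this (a sketch assuming the later Report API; the column names in ColumnMapping are stand-ins for your dataset):

```python
from evidently import ColumnMapping
from evidently.report import Report
from evidently.metric_preset import ClassificationPreset

# Tell Evidently which columns hold the ground truth and the predictions
mapping = ColumnMapping(target="label", prediction="predicted_label")

report = Report(metrics=[ClassificationPreset()])
report.run(reference_data=reference, current_data=current, column_mapping=mapping)
report.save_html("classification_performance.html")
```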
What can go wrong with an ML model in production? Here is a story of how we trained a model, simulated deployment, and analyzed its gradual decay.
You can now use Evidently to analyze the performance of production ML models and explore their weak spots.
Our second report is released! Now, you can use Evidently to explore the changes in your target function and model predictions.
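In later versions of the library, the equivalent check is a target drift preset. A rough sketch (column names are stand-ins):

```python
from evidently import ColumnMapping
from evidently.report import Report
from evidently.metric_preset import TargetDriftPreset

report = Report(metrics=[TargetDriftPreset()])
report.run(reference_data=reference, current_data=current,
           column_mapping=ColumnMapping(target="target", prediction="prediction"))
report.show()
```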
We are excited to announce our first release. You can now use the Evidently open-source Python package to estimate and explore data drift for machine learning models.
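In today's terms, the minimal usage looks roughly like this (a sketch; the file paths are placeholders):

```python
import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

reference = pd.read_csv("train.csv")      # data the model was trained on
current = pd.read_csv("production.csv")   # fresh production inputs

report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("data_drift_report.html")
```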
No model lasts forever. While the data quality can be fine, the model itself can start degrading. A few terms are used in this context. Let’s dive in.
A bunch of things can go wrong with the data that goes into a machine learning model. Our goal is to catch them on time.