Give us a star ⭐️ on GitHub to support the project!
New! You can now self-host an ML monitoring dashboard. Read the release blog ->

The open-source ML observability platform

Evaluate, test, and monitor ML models from validation to production.
From tabular data to NLP and LLM. Built for data scientists and ML engineers.

TRY CLOUD
DEPLOY OPEN-SOURCE
Evidently ML observability platform

All you need to reliably run ML systems in production 

Start with simple ad hoc checks. Scale to the complete monitoring platform. All within one tool, with a consistent API and metrics.

Build reports

Useful, beautiful, and shareable. Get a comprehensive view of data and ML model quality to explore and debug. Takes a minute to start.

GET STARTED
Evidently Reports
Evidently Test Suites

Test your pipelines

Test before you ship, validate in production, and run checks at every model update. Skip the manual setup by generating test conditions from a reference dataset.

GET STARTED

Monitor it all

Monitor every aspect of your data, models, and test results. Proactively catch and resolve production issues, maintain optimal performance, and continuously improve your models.

GET STARTED
Evidently ML Monitoring

Understand, visualize, and track with 100+ metrics

Data quality

Stay on top of data quality throughout the ML lifecycle.

  • Run exploratory analysis and profile your data with a single line of code.
  • Spot and solve nulls, duplicates, and range violations in production pipelines.
  • Track model features over time and ensure compliance with data quality KPIs.
GET STARTED
Evidently data quality metrics

Data drift

Catch shifts in predictions and input data distributions.

  • Learn from past drift patterns to know what to expect.
  • Get early warnings about potential model decay without labeled data.
  • Speed up debugging by easily pinpointing the source of change.
GET STARTED
Evidently data drift metrics

Model performance

Track and improve your ML models in the real world.

  • Get visibility into all your production models. Grasp trends and catch deviations quickly. 
  • Use templates for common model types and add custom metrics for anything else.
  • Find the root cause of model quality drops with ready-made dashboards.
GET STARTED
Evidently model performance metrics

LLM and NLP models

Keep tabs on text-based models and unstructured data.

  • Monitor the quality of model responses and data inputs.
  • Extract meaningful descriptors from text data and track how they evolve.
  • Detect distribution drift in texts and embeddings to spot the change before you get the labels.
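To illustrate the descriptor idea itself (in plain pandas rather than the Evidently API; the three descriptors chosen here are illustrative): descriptors map raw text to numeric features that can then be tracked like any tabular column.

```python
# Illustration of the text-descriptor idea in plain pandas (not the Evidently
# API): map raw text to numeric features, then monitor those features over time.
import pandas as pd

responses = pd.DataFrame({"response": [
    "Sure, here is your refund status.",
    "I cannot help with that request.",
    "ERROR: upstream timeout",
]})

descriptors = pd.DataFrame({
    "text_length": responses["response"].str.len(),
    "word_count": responses["response"].str.split().str.len(),
    "share_of_caps": responses["response"].str.count(r"[A-Z]")
                     / responses["response"].str.len(),
})
print(descriptors)
```

A sudden jump in, say, average response length or share of capital letters is often visible long before labels arrive.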
GET STARTED
Evidently for NLP and LLM

Join 2,500+ data scientists and ML engineers

Get support, contribute, and chat ML in production in our Discord community.

JOIN DISCORD

What the community says

Dayle Fernandes

MLOps Engineer, DeepL

“We use Evidently daily to test data quality and monitor production data drift. It takes away a lot of headache of building monitoring suites, so we can focus on how to react to monitoring results. Evidently is a very well-built and polished tool. It is like a Swiss army knife we use more often than expected.”

Read the blog →
Moe Antar

Senior Data Engineer, PlushCare

“We use Evidently to continuously monitor our business-critical ML models at all stages of the ML lifecycle. It has become an invaluable tool, enabling us to flag model drift and data quality issues directly from our CI/CD and model monitoring DAGs. We can proactively address potential issues before they impact our end users.”

Ming-Ju Valentine Lin

ML Infrastructure Engineer, Plaid

“We use Evidently for continuous model monitoring, comparing daily inference logs to corresponding days from the previous week and against initial training data. This practice prevents score drifts across minor versions and ensures our models remain fresh and relevant. Evidently’s comprehensive suite of tests has proven invaluable, greatly improving our model reliability and operational efficiency.”

Javier López Peña

Data Science Manager, Wayflyer

“Evidently is a fantastic tool! We find it incredibly useful to run the data quality reports during EDA and identify features that might be unstable or require further engineering. The Evidently reports are a substantial component of our Model Cards as well. We are now expanding to production monitoring.”

Read the blog →
Jonathan Bown

MLOps Engineer, Western Governors University

“The user experience of our MLOps platform has been greatly enhanced by integrating Evidently alongside MLflow. Evidently's preset tests and metrics expedited the provisioning of our infrastructure with the tools for monitoring models in production. Evidently enhanced the flexibility of our platform for data scientists to further customize tests, metrics, and reports to meet their unique requirements.”

Ben Wilson

Principal RSA, Databricks

“Check out Evidently: I haven't seen a more promising model drift detection framework released to open-source yet!”

Niklas von Maltzahn

Head of Decision Science, JUMO

“Evidently is a first-of-its-kind monitoring tool that makes debugging machine learning models simple and interactive. It's really easy to get started!”

Manoj Kumar

Data Scientist, Walmart Labs

“I was searching for an open-source tool, and Evidently perfectly fit my requirement for model monitoring in production. It was very simple to implement, user-friendly and solved my problem!”

Emmanuel Raj

Senior Machine Learning Engineer, TietoEVRY

“I love the plug-and-play features for monitoring ML models.”

How it works

Turn predictions into metrics, and metrics into dashboards.

Evidently presets

1. Pick your preset

Decide what to collect: from individual metrics to complete statistical data snapshots. Customize everything or go with defaults.

Evidently logging

2. Log snapshots

Capture metrics, summaries, and test results with the Evidently Python library. Send data from anywhere in your pipeline, batch or real-time.

Evidently ML monitoring dashboard

3. Get a dashboard

Visualize the results on a monitoring dashboard. Explore your data over time, customize the views, and share with others on your team. 

Install Evidently

pip install evidently

Check the complete documentation.

Get started

Add Evidently to existing workflows, no matter where you deploy. 

Evidently Cloud

Evidently Cloud is the easiest way to get ML monitoring up and running.

START FREE

Open-source

Deploy and run Evidently on your own.
Apache 2.0 license.

DEPLOY NOW