

Free to start. Scales as you add more models. Powered by open-source.  


Get started
Great for getting started and trying Evidently.
  • All evaluations: data drift, data quality, model quality
  • All data types: tabular, text, embeddings
  • Batch and real-time monitoring
  • Custom metrics
  • Open-Source (Apache 2.0)
Community support


/ month
billed annually
Reliable monitoring for growing teams.
All of Free, plus:
  • Ready-to-use web app: we host it for you
  • Pre-built monitoring tabs with zero config
  • SSO with Google and GitHub
  • Alerting integrations
  • 1 project
  • 5 users
Email support


Get a demo
For enterprises running ML at scale.
All of Cloud, plus:
  • Private deployment options
  • Extended limits
  • Role-based access control
  • SSO with Okta, SAML
  • Audit logs (coming soon)
  • Raw data debugging (coming soon)
Enterprise support and SLA

Compare the plans

Deployment options
  • Self-hosted or SaaS

Core features
  • Reports and Test Suites
  • Monitoring dashboard
  • Batch ML monitoring
  • Online ML monitoring

Supported evaluations
  • Data quality and integrity
  • Data drift
  • ML model quality
  • Feature importance
  • Bias and fairness (coming soon)
  • Custom metrics

Supported data types
  • Tabular data
  • Text data
  • Time series data

Supported model types
  • Ranking and RecSys
  • Unsupervised models (coming soon)

Monitoring and analytics
  • Custom monitoring panels
  • Dashboard as code
  • Pre-built monitoring tabs
  • Dashboard configuration in UI
  • Raw data debugging (coming soon)

Limits
  • Number of projects: 1 project
  • Number of rows / predictions
  • Number of columns
  • Number of users: 5 users
  • Data retention
  • Snapshot storage

Security
  • Role-based access controls
  • Single sign-on: Okta, SAML
  • Activity logs (coming soon)

Support
  • Support channels: Discord, GitHub
  • Custom support SLA
  • Onboarding session

Frequently Asked Questions

Is Evidently open-source?

The Evidently Python library is open source under the Apache 2.0 license. The open-source version is tailored for individual data scientists and small ML teams. You can also use the open-source version to run a proof of concept without additional approvals.

Evidently Cloud and Evidently Enterprise are commercial products that build upon the functionality of the open-source Evidently library. They are designed for growing teams and larger enterprises that operate ML at scale and need advanced analytics, reliability, and support.

What is the difference between paid and open-source versions?

You can self-host and manage the open-source version independently. 

Choose Evidently Cloud if you prefer a hassle-free experience: we will take care of managing the monitoring service, storage backups, and upgrades for you. Evidently Cloud also includes features not available in the open-source version, such as pre-built monitoring tabs and alerting integrations. 

The Enterprise edition provides extended limits for companies that operate ML at scale and unlocks additional team management and security features, such as dedicated roles and permissions. The Enterprise version is also available for self-hosting and includes extra support and training options.

What is a project?

A project refers to any machine learning model, dataset, or data stream you wish to monitor. You can also group multiple ML models within a single project, using tags to distinguish between them. For example, you can log data for shadow and production models together, or for multiple models of the same type (e.g., models for different locations). 

Each project provides a unified monitoring view, with a central monitoring dashboard and related tabs for comprehensive monitoring and analysis.

What is a snapshot?

A snapshot is a single log “unit” that serves as a data source for Evidently monitoring. It can be an individual Report or Test Suite computed for a given period for a specific model or dataset. You can define which metrics or tests to include in each snapshot and set a custom computation frequency. 

For example, you might compute daily snapshots for a batch ML model to evaluate data quality and drift, and a weekly snapshot for model performance once you get the labels. For a near real-time model, you might compute snapshots every minute, every 10 minutes, or every hour.

The size of the resulting snapshot depends on the chosen metrics and tests and the number of columns and rows in each batch of data.
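In code, defining a snapshot comes down to choosing the metrics and running them on a batch of data. The sketch below assumes the legacy (pre-0.7) Evidently Python API; the file names are hypothetical placeholders, so check the current documentation for exact imports and methods:

```python
import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset, DataQualityPreset

# Hypothetical daily batch for one model (file names are placeholders)
reference = pd.read_csv("reference.csv")
current = pd.read_csv("current_day.csv")

# Choose which evaluations go into this snapshot
report = Report(metrics=[DataDriftPreset(), DataQualityPreset()])
report.run(reference_data=reference, current_data=current)

# Persist the result as a snapshot file
report.save("snapshot_day.json")
```

Running this on a schedule (e.g., once per day in a pipeline) produces the series of snapshots that the monitoring dashboard reads from.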

How do I estimate my storage volume needs?

You can assess it using the Evidently Python library by generating sample snapshots for your data in any Python environment (e.g., Jupyter notebook). Choose the metrics you’d like to compute, generate a few snapshots, and estimate their volume. You can then multiply it by the expected computation frequency. Check out this Quickstart tutorial that explains how to generate individual snapshots.

Example: if you run a data drift report for 50 columns and 10,000 rows of current and reference data, the resulting snapshot can be ~1 MB. (For 100 columns × 10,000 rows: ~3.5 MB; for 100 columns × 100,000 rows: ~9 MB.) Note that these estimations may vary significantly based on the metrics you use, so it’s best to perform your own sizing. 
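Turning a per-snapshot measurement into a storage budget is simple arithmetic. The numbers below are illustrative assumptions based on the example above, not guarantees:

```python
def estimate_storage_gb(snapshot_size_mb: float,
                        snapshots_per_day: int,
                        retention_days: int) -> float:
    """Total snapshot storage over the retention window, in GB."""
    return snapshot_size_mb * snapshots_per_day * retention_days / 1024


# Illustrative only: ~1 MB per drift snapshot (50 columns, 10,000 rows),
# computed hourly, kept for 90 days.
print(round(estimate_storage_gb(1.0, 24, 90), 1))  # → 2.1
```

Swap in the snapshot size you measured for your own metrics and data shape to get a realistic figure.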

You can also perform these estimations for a particular model during the trial use of the platform.

How do I send data to Evidently Cloud?

You can compute snapshots locally and send them to Evidently Cloud using the API key provided after creating an account. You have two options: 

1. Use the Evidently Python library to compute snapshots in your data pipelines. For example, you can orchestrate monitoring jobs using a workflow manager. 

2. Deploy the Evidently Collector service. Then, you can send inferences from your ML prediction service to the Collector, which will compute the snapshots and send them to Evidently Cloud.
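For the first option, uploading a locally computed snapshot looks roughly like this. This is a sketch assuming the legacy (pre-0.7) Evidently Python API; the project name, file paths, and API key are hypothetical placeholders:

```python
import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset
from evidently.ui.workspace.cloud import CloudWorkspace

# Connect with the API key from your Evidently Cloud account (placeholder)
ws = CloudWorkspace(token="YOUR_API_KEY", url="https://app.evidently.cloud")
project = ws.create_project("Demand forecasting")  # hypothetical project name

# Compute a snapshot in your pipeline (file names are placeholders)
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=pd.read_csv("reference.csv"),
           current_data=pd.read_csv("current.csv"))

# Upload the snapshot to the project
ws.add_report(project.id, report)
```

In production, you would typically wrap this in a scheduled job run by your workflow manager rather than create the project on every run.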

We are working on adding direct integrations with raw data sources, such as data lakes or object storage. Want to request this option? Contact us to discuss more.

Do you store raw predictions?

No, Evidently does not store raw predictions. Monitoring relies on snapshots, which include various summary statistics, metrics, and metadata on test results. For example, a snapshot might contain a statistical profile of each column in the dataset, such as the share of missing values, the min, max, and average values, and a binned distribution histogram. However, it does not retain raw predictions. 

As an exception, some metrics retain a small amount of raw data. For example, metrics related to text data drift might retain 10 examples of words that help differentiate between the reference and current datasets. If you are concerned about potentially logging such data, you can explore what is included in each Metric or Test and choose an alternative.

Can I add custom metrics or tests?

Yes, you can implement custom Metrics using the interface of the Evidently Python library. Evidently already has 100+ individual metrics, many of which can be parameterized, and we continue adding more. Feel free to open a GitHub issue with your feature request if you have a particular metric you’d like us to add.

Can I migrate from open-source to the cloud?

Yes, you can upgrade from the open-source version to a paid plan once you feel the need for additional features, better performance, or no longer want to manage the services independently. The transition is designed to be smooth to ensure continuity in your projects with minimal change to existing pipelines.

Can I get a trial of the enterprise version?

Yes. Contact us to learn more. 
