Free to start. Scales as you add more models. Powered by open-source.
The Evidently Python library is open source under the Apache 2.0 license. The open-source version is tailored for individual data scientists and small ML teams. You can also use the open-source version to run a proof of concept without additional approvals.
Evidently Cloud and Evidently Enterprise are commercial products that build upon the functionality of the open-source Evidently library. They are designed for growing teams and larger enterprises that operate ML at scale and need advanced analytics, reliability, and support.
You can self-host and manage the open-source version independently.
Choose Evidently Cloud if you prefer a hassle-free experience: we will take care of managing the monitoring service, storage backups, and upgrades for you. Evidently Cloud also includes features not available in the open-source version, such as pre-built monitoring tabs and alerting integrations.
The Enterprise edition provides extended limits for companies that operate ML at scale and unlocks additional team management and security features, such as dedicated roles and permissions. The Enterprise version is also available for self-hosting and includes extra support and training options.
A project corresponds to a machine learning model, dataset, or data stream you wish to monitor. You can also group multiple ML models within a single project, using tags to distinguish between them. For example, you can log data together for shadow and production models, or for multiple models of the same type (e.g., models for different locations).
Each project provides a unified monitoring view, with a central monitoring dashboard and related tabs for comprehensive monitoring and analysis.
A snapshot is a single log “unit” that serves as a data source for Evidently monitoring. This can be an individual Report or a Test Suite computed for a given period for a specific model or dataset. You can define which metrics or tests to include in each snapshot and set a custom computation frequency.
For example, you might have daily snapshots for a batch ML model to evaluate data quality and drift, and then a weekly snapshot for model performance once you get the labels. For a near real-time model, you might compute snapshots every minute, every 10 minutes, or every hour.
The size of the resulting snapshot depends on the chosen metrics and tests and the number of columns and rows in each batch of data.
You can estimate snapshot size using the Evidently Python library: generate sample snapshots for your data in any Python environment (e.g., a Jupyter notebook). Choose the metrics you’d like to compute, generate a few snapshots, and measure their volume. You can then multiply it by the expected computation frequency. Check out this Quickstart tutorial that explains how to generate individual snapshots.
Example: if you run a data drift report for 50 columns and 10,000 rows of current and reference data, the resulting snapshot is roughly 1 MB. (For 100 columns × 10,000 rows: ~3.5 MB; for 100 columns × 100,000 rows: ~9 MB.) Note that these estimates can vary significantly based on the metrics you use, so it’s best to perform your own sizing.
You can also perform these estimations for a particular model during the trial use of the platform.
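As a back-of-the-envelope illustration, multiplying a per-snapshot size by the computation frequency gives a storage estimate (the ~1 MB figure is the illustrative one from the example above; your metrics will differ):

```python
# Rough storage sizing: snapshot size x computation frequency.
snapshot_size_mb = 1.0   # ~1 MB for 50 columns x 10,000 rows (illustrative)

# Daily batch model: one snapshot per day.
snapshots_per_day = 1
monthly_mb = snapshot_size_mb * snapshots_per_day * 30
yearly_gb = snapshot_size_mb * snapshots_per_day * 365 / 1024
print(f"Daily batch: ~{monthly_mb:.0f} MB/month, ~{yearly_gb:.2f} GB/year")

# Near real-time model: one snapshot every 10 minutes.
snapshots_per_day = 24 * 6  # 144 snapshots/day
monthly_mb = snapshot_size_mb * snapshots_per_day * 30
print(f"10-minute frequency: ~{monthly_mb:.0f} MB/month")
```

The same arithmetic applies to any frequency you pick; only the measured per-snapshot size needs to come from your own data.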
You can compute snapshots locally and send them to the Evidently Cloud using the API key provided after creating an account. You have two options:
1. Use the Evidently Python library to compute snapshots in your data pipelines. For example, you can orchestrate monitoring jobs using a workflow manager.
2. Deploy the Evidently Collector service. Then, you can send inferences from your ML prediction service to the Collector, which will compute the snapshots and send them to Evidently Cloud.
We are working on adding direct integrations with raw data sources, such as data lakes or object storage. Want to request this option? Contact us to discuss.
No, Evidently does not store raw predictions. The monitoring relies on snapshots, which include summary statistics, metrics, and metadata on test results. For example, a snapshot might contain a statistical profile of each column in the dataset, such as the share of missing values, min, max, and average values, and a binned distribution histogram. However, it does not retain raw predictions.
As an exception, some metrics retain a small amount of raw data. For example, metrics related to text data drift might retain 10 examples of words that help differentiate between reference and current datasets. If you are concerned about potentially logging such data, you can explore what is included in each Metric or Test and choose an alternative.
Yes, you can implement custom Metrics using the interface of the Evidently Python library. Evidently already has 100+ individual metrics, many of which can be parameterized, and we continue adding more. Feel free to open a GitHub issue with your feature request if you have a particular metric you’d like us to add.
Yes, you can upgrade from the open-source version to a paid plan once you need additional features or better performance, or no longer want to manage the services independently. The transition is designed to be smooth, ensuring continuity in your projects with minimal changes to existing pipelines.
Yes. Contact us to learn more.