We build tools to evaluate, test and monitor machine learning models, so you don't have to.
Yes, it is open source.
Evaluate ML model quality, and go beyond aggregate performance to discover where the model fails.
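As a rough sketch of what this looks like in code (assuming a recent version of the evidently library and its Report API; the tiny DataFrames and the output file name are placeholders):

```python
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import ClassificationPreset

# Placeholder data: the default column mapping expects "target" and
# "prediction" columns; "age" stands in for the model's input features.
reference = pd.DataFrame({
    "age": [34, 45, 23, 51, 38, 29],
    "target": [0, 1, 1, 0, 1, 0],
    "prediction": [0, 1, 0, 0, 1, 1],
})
current = pd.DataFrame({
    "age": [19, 22, 25, 21, 48, 33],
    "target": [1, 0, 1, 1, 0, 0],
    "prediction": [1, 0, 0, 1, 1, 0],
})

# The preset goes beyond a single accuracy number: confusion matrix,
# per-class quality metrics, and quality broken down by feature.
report = Report(metrics=[ClassificationPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("model_quality.html")
```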
Run statistical tests to compare the input feature distributions, and visually explore the drift.
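A minimal drift check might look like the following, again assuming a recent Report API; the DataDriftPreset picks a statistical test per feature based on its type and sample size, and the data here is a placeholder:

```python
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Two windows of the same features: a reference sample (e.g. training
# data) and the current production sample.
reference = pd.DataFrame({"age": [34, 45, 23, 51, 38], "income": [40, 52, 31, 78, 45]})
current = pd.DataFrame({"age": [19, 22, 25, 21, 24], "income": [18, 21, 25, 19, 22]})

# One statistical test per feature, plus distribution plots for
# side-by-side visual exploration of the drift.
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.show()  # renders the interactive report inline in a notebook
```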
Get a snapshot of data health, and drill down to explore feature behavior and statistical properties.
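For the data health snapshot, the same pattern should apply; in recent versions reference data is optional for this report, so a sketch over a single placeholder dataset might be:

```python
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataQualityPreset

# Placeholder dataset with some missing values to surface in the summary.
current = pd.DataFrame({
    "age": [34, None, 23, 51, 38],
    "city": ["Austin", "Austin", None, "Boston", "Austin"],
})

# Summarizes missing values, feature statistics, and correlations.
report = Report(metrics=[DataQualityPreset()])
report.run(reference_data=None, current_data=current)

# Besides the visual report, results are available programmatically.
summary = report.as_dict()
```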
Generate interactive reports in the notebook or export them as an HTML file. Use them for visual evaluation, debugging and sharing with the team.
Run data and model checks as part of your pipeline. Integrate with tools like MLflow or Airflow to schedule the tests and log the results.
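As an illustration (not a prescribed integration), a pipeline step could run a drift test suite and log the outcome to MLflow; the parquet paths and metric names below are hypothetical:

```python
import mlflow
import pandas as pd
from evidently.test_suite import TestSuite
from evidently.test_preset import DataDriftTestPreset

# Hypothetical paths: replace with wherever the pipeline stages its data.
reference = pd.read_parquet("reference.parquet")
current = pd.read_parquet("current.parquet")

# Each test returns an explicit status, which makes it easy to gate
# a pipeline run or raise an alert on failure.
suite = TestSuite(tests=[DataDriftTestPreset()])
suite.run(reference_data=reference, current_data=current)

results = suite.as_dict()
failed = sum(1 for test in results["tests"] if test["status"] == "FAIL")

# Record the outcome so every scheduled run leaves a trace in MLflow.
with mlflow.start_run():
    mlflow.log_metric("drift_tests_total", len(results["tests"]))
    mlflow.log_metric("drift_tests_failed", failed)
```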
Collect model quality metrics from a deployed ML service. This currently works through an integration with Prometheus and Grafana.
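The built-in integration handles the wiring; purely to illustrate the idea, a hand-rolled exporter using the prometheus_client package could compute a drift metric on a schedule and expose it for Prometheus to scrape (and Grafana to chart). The loader functions and the result field name are assumptions:

```python
import time
import pandas as pd
from prometheus_client import Gauge, start_http_server
from evidently.report import Report
from evidently.metrics import DatasetDriftMetric

def load_reference() -> pd.DataFrame:
    # Placeholder: in practice, read a stored reference sample.
    return pd.DataFrame({"age": [34, 45, 23, 51], "income": [40, 52, 31, 78]})

def load_latest_window() -> pd.DataFrame:
    # Placeholder: in practice, query recent inputs from the ML service.
    return pd.DataFrame({"age": [19, 22, 25, 21], "income": [18, 21, 25, 19]})

drift_share = Gauge("dataset_drift_share", "Share of drifted features")

start_http_server(8000)  # Prometheus scrapes this endpoint
while True:
    report = Report(metrics=[DatasetDriftMetric()])
    report.run(reference_data=load_reference(), current_data=load_latest_window())
    result = report.as_dict()["metrics"][0]["result"]
    drift_share.set(result["share_of_drifted_columns"])  # field name may vary by version
    time.sleep(600)  # re-evaluate every 10 minutes
```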
“Check out Evidently: I haven't seen a more promising model drift detection framework released to open-source yet!”
“I love the plug-and-play features for monitoring ML models.”
“Evidently is a first-of-its-kind monitoring tool that makes debugging machine learning models simple and interactive. It's really easy to get started!”
“I was searching for an open-source tool, and Evidently perfectly fit my requirement for model monitoring in production. It was very simple to implement, user-friendly and solved my problem!”
If you need help using the tool or want to chat about doing ML in production, we have a place for that!
Join the community
Get a roundup of product updates, events, and the best blogs every few weeks. No spam.
By signing up you agree to receive emails from us. You can opt out any time.