December 14, 2023

7 new features at Evidently: ranking metrics, data drift on Spark, and more


Did you miss some of the latest updates to the Evidently open-source Python library? We've summed up a few recently shipped features in one blog post.

All these features are available in Evidently 0.4.11 and above.

We also send open-source release notes like this in the newsletter every couple of months. Sign up here.

🛒 Ranking and RecSys metrics

You can now evaluate and monitor your ranking and recommendation models in Evidently.

Monitor ranking and recommendation models with Evidently AI

What’s cool about it?

We covered not only standard metrics like Normalized Discounted Cumulative Gain (NDCG) or Recall at top-K but also behavioral metrics like Serendipity or Popularity Bias.
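
To make this concrete, here is a minimal sketch of computing a few ranking metrics on a toy interaction log. The metric classes and the recommender-specific fields in ColumnMapping follow the Evidently 0.4.x API, but treat the toy data and the chosen k as placeholders and check the docs for the exact arguments in your version.

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metrics import NDCGKMetric, PrecisionTopKMetric, RecallTopKMetric
from evidently.report import Report

# Toy interaction log: each row is a (user, item) pair with the
# model-assigned rank and the ground-truth relevance label.
current = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 2],
    "item_id": ["a", "b", "c", "a", "c", "d"],
    "rank": [1, 2, 3, 1, 2, 3],    # model output: position in the ranked list
    "target": [1, 0, 0, 0, 1, 0],  # ground truth: 1 = relevant
})

column_mapping = ColumnMapping(
    recommendations_type="rank",  # predictions are ranks, not scores
    user_id="user_id",
    item_id="item_id",
    prediction="rank",
    target="target",
)

report = Report(metrics=[
    NDCGKMetric(k=3),
    PrecisionTopKMetric(k=3),
    RecallTopKMetric(k=3),
])
report.run(reference_data=None, current_data=current, column_mapping=column_mapping)
report.save_html("recsys_report.html")
```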


🚦 Warnings in Test Suites 

You can set Warnings for non-critical Tests in a Test Suite. If you want to get a "Warning" instead of a "Fail" for a particular Test, set the "is_critical" parameter to False.

Test criticality for Evidently Test Suites

What’s cool about it?

You can flexibly design alerting and logging workflows by splitting the Tests into groups: for example, set alerts only on critical failures and treat the rest as informational reports. 
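
For example, a minimal sketch (the specific Tests and thresholds are illustrative, and reference_df and current_df stand in for your own pandas DataFrames):

```python
from evidently.test_suite import TestSuite
from evidently.tests import TestNumberOfMissingValues, TestShareOfDriftedColumns

suite = TestSuite(tests=[
    # Critical by default: a failed check returns "Fail"
    TestNumberOfMissingValues(eq=0),
    # Non-critical: a failed check returns "Warning" instead
    TestShareOfDriftedColumns(lt=0.3, is_critical=False),
])
suite.run(reference_data=reference_df, current_data=current_df)
```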

📈 Monitoring test outcomes 

Are you computing Test Suites on a cadence? You can now add a new type of monitoring panel to track the results of each Test Suite over time in the Evidently UI.

Monitoring test outcomes with Evidently AI

This is in addition to all the panels that help visualize metric values. You can also use tags to choose which subset of tests to show together. For example, you can add one monitoring panel to track failed data quality checks, another for data drift, and so on.

What’s cool about it?

You can choose a detailed view option. It will show not just the combined results but also a granular breakdown of all tests, such as which exact features drifted. 
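
In code, adding such a panel to a project might look roughly like this. Take it as a sketch: the class and argument names (DashboardPanelTestSuite, ReportFilter, TestSuitePanelType, time_agg) follow the self-hosted monitoring docs, while the workspace path, project ID, and tag are placeholders, so check the docs for the exact signatures.

```python
from evidently.ui.dashboards import (
    DashboardPanelTestSuite,
    ReportFilter,
    TestSuitePanelType,
)
from evidently.ui.workspace import Workspace

ws = Workspace.create("workspace")      # local workspace directory
project = ws.get_project("PROJECT_ID")  # replace with your project ID

project.dashboard.add_panel(
    DashboardPanelTestSuite(
        title="Data quality tests",
        filter=ReportFilter(
            metadata_values={},
            tag_values=["data_quality"],  # show only tagged Test Suites
            include_test_suites=True,
        ),
        panel_type=TestSuitePanelType.DETAILED,  # per-test breakdown over time
        time_agg="1D",                           # aggregate results by day
    )
)
project.save()
```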


🏗 Near real-time monitoring 

You can deploy an Evidently collector service to integrate with your ML service.

Near real-time ML monitoring with Evidently AI

In this scenario, you can POST your input data and model predictions directly from your ML service. The Evidently service will collect online events into batches, create Reports or Test Suites over them, and save them as snapshots you can later visualize in the monitoring UI.

What’s cool about it?

No need to write Python code or manage monitoring jobs: you can define the monitoring setup via a configuration file.  
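
As a rough illustration of the flow, your ML service could send each scored request to the collector over HTTP. The endpoint path, port, and payload shape below are assumptions made for the sketch; see the collector service docs for the exact API contract.

```python
import requests

# One scored request from your ML service: input features plus the prediction.
event = {
    "feature_1": 0.42,
    "feature_2": "category_a",
    "prediction": 1,
}

# Hypothetical endpoint: the collector batches incoming rows, computes the
# configured Reports or Test Suites, and stores them as snapshots.
requests.post("http://localhost:8001/my_collector/data", json=[event])
```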


🛢️ Data drift on Spark

You can finally run data drift calculations on Spark.

Data drift calculation on Spark with Evidently AI

We currently support only some of the drift detection methods on Spark, but we'll be adding more over time. Which metrics would you like to see on Spark next? Open an issue on GitHub to tell us.

What’s cool about it?

If you deal with large datasets, your life just got much easier!
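
A minimal sketch of what this looks like, assuming the Spark extra is installed (pip install evidently[spark]); the SparkEngine import path follows the 0.4.x docs, and the file paths are placeholders.

```python
from pyspark.sql import SparkSession

from evidently.metric_preset import DataDriftPreset
from evidently.report import Report
from evidently.spark.engine import SparkEngine

spark = SparkSession.builder.appName("evidently-drift").getOrCreate()

# Placeholder paths: point these at your own reference and current data
reference = spark.read.parquet("reference.parquet")
current = spark.read.parquet("current.parquet")

report = Report(metrics=[DataDriftPreset()])
# Passing the Spark engine runs the supported drift calculations on Spark
report.run(reference_data=reference, current_data=current, engine=SparkEngine)
report.save_html("drift_report.html")
```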


📊 Monitoring UI updates

Our Monitoring UI is getting better day by day!

Evidently ML Monitoring UI

You can now browse tags in the interface as you look for individual Reports or Test Suites, easily switch between different monitoring periods, view metadata, and more!


🔢 Feature importance in drift detection

You can show the feature importances on the Data Drift dashboard. 

Feature importance in drift detection with Evidently AI

This helps you sort features by importance when viewing the data drift results.

What’s cool about it?

You can pass the feature importances as a list. If you don't, Evidently can train a background model and derive the importances from it.
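
A sketch of the background-model option; the feature_importance parameter name is an assumption here, so check the Data Drift docs for the exact argument (they also describe how to pass precomputed importances instead):

```python
from evidently.metrics import DataDriftTable
from evidently.report import Report

# feature_importance=True (name assumed) asks Evidently to train a simple
# background model and use its importances to sort the drift table.
report = Report(metrics=[DataDriftTable(feature_importance=True)])
report.run(reference_data=reference_df, current_data=current_df)
```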


☁️ Evidently Cloud is in private beta

Evidently Cloud

Do you want to use the Evidently Monitoring UI without self-hosting? Evidently Cloud is currently in private beta. Sign up here to join our early tester program.

Want to stay in the loop?
Sign up for the User newsletter to get updates on new features, integrations, and code tutorials. No spam, just good old release notes.

Subscribe ⟶

Elena Samuylova
Co-founder and CEO, Evidently AI
https://www.linkedin.com/in/elenasamuylova/
