October 31, 2022 · Last updated: April 26, 2023

Evidently 0.1.59: Migrating from Dashboards and JSON profiles to Reports


In Evidently v0.1.59, we moved the existing dashboard functionality to the new API.    

Here is a quick guide on migrating from the old to the new API. In short, it is very, very easy.

What has changed

Previously, there were two different objects in Evidently.

One was a Dashboard: a pre-built report that computes different metrics and renders rich interactive visualizations. There were several Tabs for different types of dashboards to choose from, like RegressionPerformanceTab or DataDriftTab.

The other was a JSON profile: a JSON “version” of the Dashboard that contains only raw metrics and simple plot data that you can log and use elsewhere. There were several ProfileSections, like RegressionPerformanceProfileSection or DataDriftProfileSection.

We have now simplified things.

Now, there is a single Report object that has Presets, like DataDriftPreset or RegressionPreset. You can generate the Report object and choose to render it in the notebook, save it as HTML, generate (or save) a JSON, or get it as a Python dictionary. 

That’s it! 

The rest works just the same: you pass two datasets to Evidently as the reference and current data, you can use column mapping, and you can choose between pre-built reports that combine different metrics.

Here is how it works now.

If you used Dashboards

After installing Evidently and preparing the data, you can create a Report object and list the presets to include. 

Here is how you create a Data Drift report and show it in the notebook.

data_drift_report = Report(metrics=[
    DataDriftPreset(),
])

data_drift_report.run(reference_data=ref, current_data=cur)
data_drift_report


Here is an example notebook to replay with all available presets. You can also save the report as an HTML file.

If you used JSON profiles

You repeat the same steps to create and run the Report object.

data_drift_report = Report(metrics=[
    DataDriftPreset(),
])

data_drift_report.run(reference_data=ref, current_data=cur)


To get the output as JSON instead of a visual report, simply write:

data_drift_report.json()


You can also get the output as a Python dictionary.

data_drift_report.as_dict()


You can see how it works in the same example notebook.
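Because the JSON output is plain text, it is easy to post-process with standard tooling. Here is a minimal sketch that parses a report payload and pulls out a single value. The payload shape below is a simplified, hypothetical one for illustration only; check the actual output of data_drift_report.json() for the real schema.

```python
import json

# A simplified, hypothetical report payload for illustration only;
# the real schema from report.json() will differ.
payload = """
{
  "metrics": [
    {"metric": "DatasetDriftMetric",
     "result": {"drift_share": 0.5, "number_of_drifted_columns": 3}}
  ]
}
"""

report = json.loads(payload)

# Index results by metric name for convenient lookup
results = {m["metric"]: m["result"] for m in report["metrics"]}
drift_share = results["DatasetDriftMetric"]["drift_share"]
print(drift_share)  # 0.5
```

The same lookup works directly on the dictionary returned by as_dict(), without the JSON round-trip.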

Can I use the old API?

Don’t worry, the old Dashboards API will remain in the library for a while to ensure there are no breaking changes. Your old code still works.

However, we will eventually remove it. We suggest you move to the new API soon by updating existing pipelines or notebooks that use Evidently.

If something does not work as expected during your migration, please ask in GitHub issues or our Discord. All the new cool features will be added to the new Reports only!

You can also fix the version of Evidently in your code if you want to keep using the older version.
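If you prefer to postpone the migration, you can pin the dependency, for example in requirements.txt. The exact version to pin is up to you; v0.1.59 still ships both APIs:

```
evidently==0.1.59
```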

Why did you change it?

Things are now more elegant and consistent, but this is not the only reason. 

This refactoring ensures that we can continue scaling Evidently with ease.

Unified metrics. We created a new underlying Metrics component that is reused across Evidently Reports and Tests. This way, whether you calculate a certain metric or run a test, it is always computed in the same way, across all reports, without code duplication.

Ease of customization. Since the beginning, users have asked for small and big tweaks to the Evidently dashboards. We get it! We implemented a few options, but they remained a bit clunky. Now, it has become super easy to build your own Report. Instead of “editing” the existing Dashboards, you can just list the Metrics to include and build exactly the Report you want. 

Here is how you can do it now:

data_quality_column_report = Report(metrics=[
    ColumnDistributionMetric(column_name="education"),
    ColumnQuantileMetric(column_name="education-num", quantile=0.75),
    ColumnCorrelationsMetric(column_name="education"),
    ColumnValueListMetric(column_name="relationship", values=["Husband", "Unmarried"]),
    ColumnValueRangeMetric(column_name="age", left=10, right=20),
])


You can check this example notebook to see all the metrics and how to customize them.

Enabling future development. With this new backend in place, it is much easier to add new types of Reports, add options, and build new functionality. Yahoo!

Tackling big data is the next big thing we are investigating. If you want to use Evidently on Spark, come chat with us on Discord!

Reports and Test Suites: what is the difference?

Difference between Evidently Reports and Test Suites

We hope you have already had the chance to work with Test Suites. This is an alternative interface that you can use to evaluate your models and data.

We treat Reports and Test Suites as complementary, as they fit different workflows.

When it’s best to use Reports:

  • You want to explore or debug your data and model performance: HTML Reports focus on interactive visualizations and do not require setting any expectations upfront.
  • You want to compute and log metrics: you can just get the JSON metrics and use them elsewhere. For example, send them to a different BI system or write them to a database.
  • You are doing ad hoc analytics over model logs, for example, after model evaluation on the training set or when comparing several models.   
  • Reports are great for documentation and reporting, too.
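The “log metrics elsewhere” workflow can be as simple as writing selected values into a table. Here is a minimal sketch using SQLite from the standard library; the metric names and values are made up for illustration, standing in for numbers you would pull out of report.as_dict().

```python
import sqlite3

# Made-up values standing in for metrics extracted from report.as_dict()
metrics = {"drift_share": 0.25, "number_of_drifted_columns": 2}

conn = sqlite3.connect(":memory:")  # use a file path in a real pipeline
conn.execute("CREATE TABLE IF NOT EXISTS model_metrics (name TEXT, value REAL)")
conn.executemany(
    "INSERT INTO model_metrics (name, value) VALUES (?, ?)",
    metrics.items(),
)
conn.commit()

rows = conn.execute("SELECT name, value FROM model_metrics ORDER BY name").fetchall()
print(rows)
```

In practice you would also store a timestamp or model version per row so you can track metrics over time.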

When it’s best to use Test Suites:

  • You want to run automated model checks as part of the pipeline, and you can set up expectations upfront (or derive them from the reference dataset). Tests force you to think through what you expect from your data and models, and you can run them at scale, only reacting to the failure alerts. 

Tests are made to integrate with tools like Airflow and fit into your production pipelines. For example, you can run them whenever you receive a new batch of data, new labels, or generate predictions. And if tests fail, you can always get the Report to debug!
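In a pipeline, such a check usually boils down to a boolean gate on the test output. Here is a sketch of that control flow, using hand-written dictionaries in place of a real Test Suite summary; the keys shown are illustrative, not Evidently's exact schema.

```python
def should_promote(test_summary: dict) -> bool:
    """Gate a pipeline step on test results: proceed only if every test passed."""
    return all(t["status"] == "SUCCESS" for t in test_summary["tests"])

# Hand-written stand-ins for a real test suite's output
passing = {"tests": [{"name": "share_of_drifted_columns", "status": "SUCCESS"}]}
failing = {"tests": [{"name": "share_of_drifted_columns", "status": "FAIL"}]}

print(should_promote(passing))  # True
print(should_promote(failing))  # False
```

A pipeline orchestrator would branch on this value: continue to deployment on True, and raise an alert (and render the full Report for debugging) on False.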

Share your feedback

We hope you liked this update!

We are always happy to learn about how you use Evidently, and what else is missing. Please share your feedback in GitHub, on Discord, or simply drop us an email.

Elena Samuylova
Co-founder and CEO, Evidently AI
https://www.linkedin.com/in/elenasamuylova/

Emeli Dral
Co-founder and CTO, Evidently AI
https://www.linkedin.com/in/emelidral/
