Why does the model err?
Why does the model work this way?
How can I trust this model?
Is this model fair?

The interpretability layer for your 
black-box models.

With black-box models, the machine learns, and you don’t. 
We help businesses interpret and explain their models.

Coming Soon! Sign up now to request early access.

The simplest way to understand your models

Let us show you how easy it is to empower your operational users and data scientists!

Selling a Home Example

Imagine you are planning to sell your home. To price the listing more accurately, you decide to build a black-box model.

So you gather a ton of data on similar homes in your area and start training your model.

Your out-of-sample testing shows promising results, and you are confident the model will predict an accurate price for your home.

But how do you know? What evidence do you have besides the model score? Can you really be sure the model isn't undervaluing or overvaluing your home?

Traditional black-box models can't provide this information. But with DataWhys Interpreter, you get evidence-based outputs that are human-readable and easy to understand, without sacrificing model performance.

The best part: with just a couple of API integrations, DataWhys Interpreter works with any tabular data model, in any industry.

Black-box model output
Listing Price: 310,000
Black-box model output with DataWhys

When:

Square ft. is between 1200 and 1650
AND
Bedrooms is 3
AND 
Bathrooms is 2.5
AND 
Has Deck is Yes
AND
Roof Condition is New
THEN
Listing Price is 325,000 (+5%)
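
To make the structure of that rule concrete, here is a minimal Python sketch that applies the same conditions to a single home record. The thresholds come straight from the example above; the function name and dictionary keys are placeholders of our choosing.

```python
def explain_listing_price(home: dict) -> str:
    """Apply the example rule above to one home record."""
    if (1200 <= home["square_ft"] <= 1650
            and home["bedrooms"] == 3
            and home["bathrooms"] == 2.5
            and home["has_deck"] == "Yes"
            and home["roof_condition"] == "New"):
        return "Listing Price is 325,000 (+5%)"
    return "The rule does not apply to this home"


print(explain_listing_price({
    "square_ft": 1450,
    "bedrooms": 3,
    "bathrooms": 2.5,
    "has_deck": "Yes",
    "roof_condition": "New",
}))
```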

Data Scientist

Work directly within your Python notebook

DataWhys Interpreter was designed with Data Scientists in mind. We integrate directly into your Python notebook, so you can analyze your models without leaving your current working environment. You can quickly find insights which are both interpretable and transparent, making it even easier to explain your models to business managers.
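
The Interpreter's client API isn't documented on this page, so the sketch below is only a stand-in for the general idea, written with scikit-learn alone: fit a black-box model on tabular data, then fit a shallow surrogate tree on its predictions and print human-readable rules. The dataset and parameter choices are illustrative, not part of the product.

```python
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

# A "black-box" model trained on tabular housing data.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# A shallow surrogate tree fit on the black-box predictions yields
# simple IF/THEN rules, similar in spirit to the example above.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=list(X.columns)))
```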

Data Analyst

Simple interface with zero coding required

With our easy-to-use interface, Data Analysts can leverage the power of the DataWhys Interpreter without any coding knowledge. Once you have completed the setup, we will automatically generate an API call that can be used to ingest data from your model.
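
The exact call is generated for you at setup time; the snippet below only illustrates the general pattern with the Python requests library. The endpoint URL, token, and payload fields are hypothetical placeholders, not the real DataWhys API.

```python
import requests

# Hypothetical placeholders: the real endpoint, token, and payload
# schema come from the auto-generated API call produced at setup.
INGEST_URL = "https://example.com/interpreter/ingest"
API_TOKEN = "YOUR_API_TOKEN"

payload = {
    "features": {"square_ft": 1450, "bedrooms": 3, "bathrooms": 2.5},
    "prediction": 325000,
    "actual": 318000,
}

response = requests.post(
    INGEST_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
```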

Why is Model Interpretability Important?

Explainability

Understand why your model works. Easily spot the segments where your model performs well and where it typically errs.

Trust

Gain visibility into how your model behaves, and enable business managers to both understand and trust the model output.

Maintenance

Model analysis made easy. Know when model predictions are drifting away from the original training data, and track behavior over time to ensure models are performing as expected, so you know when it's time to retrain or retire a model.

Compliance

Ensure your model output is compliant with regulations in your industry. Provide clear explanations of model behavior to better capture risks, and use bias analysis to build confidence that your model outcomes and predictions are ethical and fair.

Model Transparency Helps Data Scientists

Advancements in AI and ML have improved businesses' ability to make data-driven decisions. However, those decisions are not always transparent or understandable.

Data Scientists typically rely on tools like Shapley values or LIME to help them interpret the behavior of their models. While these tools are useful in helping data experts understand the factors that lead to a model's decision, they frequently fail to provide a higher-level understanding to those outside the data science team. Data Scientists therefore need to translate these findings into a form the business can understand, which is often a painful and time-consuming process.

Having model transparency ensures Data Scientists can easily explain model behavior.
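
For reference, this is roughly what the raw output of those tools looks like in practice. The sketch below uses the shap package with a scikit-learn regressor (the dataset choice is ours); the result is one signed contribution per feature per row, which is precise but still needs translating for a business audience.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# A tabular model, then Shapley-value attributions for a few predictions.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # one value per feature, per row
print(shap_values)
```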

How Does it Work?

Start finding insights about your model in three easy steps

Upload Data

Upload tabular validation data and select your actuals and predicted columns (see the example after these steps).

Connect to Model

Drop our auto-generated code snippet into your data workflow where your model resides.

Get Insights

Use our model evidence dashboard to find insights into your model's behavior.
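
As a concrete example of what step one expects, here is a minimal pandas sketch of a validation table with actuals and predicted columns. The column names and values are placeholders of our choosing, not a required schema.

```python
import pandas as pd

# Illustrative validation table: feature columns plus the two columns
# selected at upload time, the actuals and your model's predictions.
validation = pd.DataFrame({
    "square_ft":       [1450, 2100, 980],
    "bedrooms":        [3, 4, 2],
    "actual_price":    [318000, 402000, 211000],
    "predicted_price": [325000, 395000, 205000],
})
validation.to_csv("validation_with_predictions.csv", index=False)
```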

Direct Integration with Existing Workflows

DataWhys Interpreter bolts onto your existing workflow with little effort and can be used without making any disruptive or expensive changes to your model.

Once you are connected, our evidence dashboard receives updates automatically every time your model makes a prediction, giving you insights into your model without ever breaking your workflow.

With our clear, interpretable explanations, you will empower your business and operational experts to understand model behavior and more easily communicate your model's limitations and value to the organization.

Coming Soon!

Sign up today for early beta access.
Limited licenses available.
Beta may contain limited feature sets.
