Why does the model err?
Why does the model work this way?
How can I trust this model?
Is this model fair?
Coming Soon! Sign up now to request early access.
Let us show you how easy it is to empower your operational users and data scientists!
Imagine you are planning to sell your home. To price the listing well, you decide to build a black-box model.
You collect a large set of data representing similar homes in your area and begin to train your model.
Your out-of-sample testing shows promising results, and you are confident that the model will predict an accurate price for your home.
But how do you know? What evidence do you have besides the model score? Can you really be sure that the model isn't undervaluing or overvaluing the home price?
Traditional black-box models can't provide this information. But with DataWhys Interpreter, you get evidence-based outputs that are human-readable and easy to understand, without sacrificing model performance.
The best part: with just a couple of API integrations, DataWhys Interpreter works with any tabular data model in any industry.
Listing Price: $310,000
Square ft. is between 1,200 and 1,650
Bedrooms is 3
Bathrooms is 2.5
Has Deck is Yes
Roof Condition is New
Listing Price is $325,000 (+5%)
Work directly within your Python notebook
DataWhys Interpreter was designed with Data Scientists in mind. We integrate directly into your Python notebook, so you can analyze your models without leaving your current working environment. You can quickly find insights which are both interpretable and transparent, making it even easier to explain your models to business managers.
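The in-notebook pattern described above can be sketched as follows. Note that `InterpreterClient`, `wrap_model`, and the toy model are hypothetical stand-ins for illustration, not the actual DataWhys Interpreter API: the sketch shows only the general wrap-and-log pattern such an integration uses.

```python
# Illustrative sketch only: InterpreterClient is a hypothetical stand-in,
# not the real DataWhys API.

class InterpreterClient:
    """Records each (features, prediction) pair for later analysis."""
    def __init__(self):
        self.records = []

    def log(self, features, prediction):
        self.records.append({"features": features, "prediction": prediction})

def wrap_model(predict_fn, client):
    """Return a predict function that also streams evidence to the client."""
    def wrapped(features):
        prediction = predict_fn(features)
        client.log(features, prediction)
        return prediction
    return wrapped

# Toy pricing model standing in for your trained estimator.
def toy_price_model(features):
    return 150 * features["sqft"] + 10_000 * features["bedrooms"]

client = InterpreterClient()
predict = wrap_model(toy_price_model, client)

price = predict({"sqft": 1400, "bedrooms": 3})
print(price)                # 240000
print(len(client.records))  # 1
```

Because the wrapper preserves the original predict signature, existing notebook cells keep working unchanged while every prediction is captured for analysis.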
Simple interface with zero coding required
With our easy-to-use interface, Data Analysts can leverage the power of DataWhys Interpreter without any coding knowledge. Once you have completed the setup, we automatically generate an API call that can be used to ingest data from your model.
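An ingestion call of the kind described above might be shaped like the sketch below. The endpoint URL, model identifier, and payload field names are all illustrative assumptions, not the real auto-generated DataWhys call; the request is built but deliberately never sent.

```python
import json
import urllib.request

# Hypothetical sketch: the URL and field names below are assumptions,
# not the actual auto-generated DataWhys API call.
payload = {
    "model_id": "home-price-v1",  # assumed identifier
    "features": {"sqft": 1400, "bedrooms": 3, "bathrooms": 2.5},
    "prediction": 240000,
}

request = urllib.request.Request(
    "https://api.example.com/interpreter/ingest",  # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would actually send it; omitted here.
print(request.get_method())  # POST
```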
Understand why your model works. Easily spot segments where your model performs well and where it typically errs.
Gain visibility into how your model behaves, and enable business managers to both understand and trust the model output.
Model analysis made easy. Know when model predictions are drifting away from the original training data, and track behavior to ensure models perform as expected, so you know when it's time to retrain or retire a model.
Advancements in AI and ML have improved businesses' ability to make data-driven decisions. However, those decisions are not always transparent or understandable.
Data Scientists typically rely on tools like Shapley values or LIME to help them interpret the behavior of their models. While these tools are useful in helping data experts understand the factors that lead to a model's decision, they frequently fail to provide a higher-level understanding to those outside the Data Science team. Data Scientists therefore need to translate these findings into a form the business can understand, which is often a painful and time-consuming process.
Model transparency ensures Data Scientists can easily explain model behavior.
Start finding insights about your model in three easy steps
Upload tabular validation data and select your actual and predicted columns.
Drop our auto-generated code snippet into your data workflow where your model resides.
Use our model evidence dashboard to find insights into your model's behavior.
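Step 1 above can be sketched with the standard library alone. The column names "actual" and "predicted" and the example rows are assumptions for illustration; match whatever names your own validation data uses.

```python
import csv
import io

# Sketch of step 1: tabular validation data with actual and predicted
# columns (column names and values here are illustrative assumptions).
rows = [
    {"sqft": 1400, "bedrooms": 3, "actual": 310000, "predicted": 325000},
    {"sqft": 1650, "bedrooms": 4, "actual": 405000, "predicted": 398000},
]

buffer = io.StringIO()
writer = csv.DictWriter(
    buffer, fieldnames=["sqft", "bedrooms", "actual", "predicted"]
)
writer.writeheader()
writer.writerows(rows)

csv_text = buffer.getvalue()
print(csv_text.splitlines()[0])  # sqft,bedrooms,actual,predicted
```

The resulting CSV (one header row plus one row per validation example) is the kind of tabular file the upload step expects.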
DataWhys Interpreter bolts onto your existing workflow with little effort and can be used without making any disruptive or expensive changes to your infrastructure.
Once you are connected, our evidence dashboard receives updates automatically every time your model makes a prediction, giving you insights into your model without ever breaking your workflow.
With our clear interpretable explanations, you will empower your business and operational experts to understand model behavior and more easily communicate your model’s limitations and value to the organization.
Sign up today for early beta access.
Limited licenses available.
Beta may contain limited feature sets.