10 min read
Apr 21, 2020
Business Intelligence

5 SaaS Toolkits to Build Explainable AI

Explainable AI helps tech teams prevent bias and build fair, accurate algorithms. These SaaS toolkits can help you achieve it.

Lauren MaffeoPrincipal Analyst

Explainable AI (XAI) is a design decision to build algorithms with transparency in mind. It helps machine learning (ML) developers view their algorithms’ abilities and limitations throughout the algorithmic training process, which allows technical teams to decrease the risk of bias entering their algorithms. 

Algorithm bias can have serious consequences for end users. From credit cards that offer lower limits for women to algorithms that correlate recidivism with race, the consequences of not knowing how AI works are well documented. As a result, XAI is no longer optional. 

If you want to build and deploy AI in your business, everyone from senior stakeholders to customers will expect you to explain how it works. The good news? Increased demand for XAI means an increased range of tools and techniques your technical team can use to get started with building explainable AI. 

When evaluating explainability tools and toolkits: 

  • Document your algorithm’s requirements early. This should include outlining which methods of fairness you’ll use and how you plan to prioritize them. 

  • Don’t try to find the perfect toolkit for explainable AI. Instead, aim for incremental implementation using a wider range of tools. 

  • Make sure to compare each toolkit against the business problem that your algorithm must solve. Different toolkits support various requirements for explainability (e.g., if you want to build an algorithm explaining customer decisions, avoid using a toolkit built for regulatory requirements). 


5 SaaS tools you and your tech team can use to build explainable AI

These five Software-as-a-Service (SaaS) tools can aid your technical team's explainable AI efforts (and were cited in a recent Gartner article as example techniques and methodologies that can help build XAI; full research available to Gartner clients). 

When reviewing the toolkits below, look for features that address explainability and interpretability. The former helps you hold your model accountable during algorithmic training, while the latter helps you explain the model’s results to stakeholders and customers. (Products are presented in alphabetical order.) 


1. DataRobot

DataRobot’s software lets teams build and deploy their own AI models in-house, without the need for explicit programming. If you don’t have a data scientist on your team yet (or your business can’t afford one), DataRobot can serve as your substitute. It automates standard data science tasks and aims to help customers solve specific business problems (as one example, United Airlines used DataRobot to predict which customers are most likely to gate-check bags).

Since DataRobot automates machine learning, it also supports interpretable models. The tool includes a model blueprint, which shows you the preprocessing steps that each model uses to make its conclusions. This feature makes DataRobot an especially strong choice for teams building models that must comply with regulatory agencies. 

DataRobot also has prediction explanations, which show the top variables impacting the model’s outcome for each record. This is important since algorithms assign different weights to various data points throughout the training process, which impacts its recommendations. Prediction explanations prevent possible bias by explaining how each algorithm reaches its conclusions. 
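DataRobot’s exact method isn’t public, but the underlying idea behind prediction explanations can be sketched for a simple linear model, where each feature’s contribution to one prediction is its coefficient times the feature’s deviation from a baseline. All names and data below are illustrative, not DataRobot’s API:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: two features influencing an outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

def explain_prediction(model, x, baseline):
    """Per-feature contribution to one prediction, relative to a baseline."""
    return model.coef_ * (x - baseline)

baseline = X.mean(axis=0)
contribs = explain_prediction(model, X[0], baseline)

# Sanity check: the contributions plus the baseline prediction
# recover the model's actual output for this record.
assert np.isclose(model.predict([X[0]])[0],
                  model.predict([baseline])[0] + contribs.sum())
```

For nonlinear models, tools like DataRobot rely on more general techniques (such as Shapley-value approximations), but the output has the same shape: one signed contribution per feature, per record.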

Cost: DataRobot offers custom three-year contracts based on your business goals. Before choosing your software configuration, you’ll speak with a member of DataRobot’s customer-facing data science team. You can get started by contacting them through DataRobot’s website. 


2. Google Cloud Platform

With more than one billion users, Google Cloud’s suite of platform services is hard to match in size and scope, and it includes a robust set of tools for AI and machine learning. In November 2019, Google Cloud added an explainable AI service, which evaluates algorithmic models throughout the product lifecycle. 

Features such as AutoML Tables and AI Platform give users the transparency to know whether they should improve their models’ datasets, architecture, or both. Once you deploy models on AutoML Tables or AI Platform, you’ll get real-time scores indicating how certain factors impact final results. Used in tandem with Google Cloud’s continuous feedback feature, these scores let you compare model predictions and optimize performance.

Google Cloud’s What-If tool displays interactive dashboards for users to review AI Platform prediction models. It integrates with Jupyter and Colab notebooks, and comes pre-installed on AI Platform Notebooks TensorFlow instances. If your models’ outputs don’t match the What-If tool’s requirements, you can define adjustment functions in your code. 
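The core idea behind what-if analysis, re-scoring a single record with one feature changed and comparing the outcomes, can be sketched with scikit-learn. This is a toy stand-in for illustration, not Google Cloud’s API:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy classifier: approves when the feature sum is positive (illustrative only).
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (X.sum(axis=1) > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def what_if(clf, x, feature, new_value):
    """Re-score a single record with one feature value changed."""
    x2 = x.copy()
    x2[feature] = new_value
    return clf.predict_proba([x2])[0, 1]

record = np.array([-0.5, 0.2])
before = clf.predict_proba([record])[0, 1]
after = what_if(clf, record, feature=0, new_value=1.5)

# Raising feature 0 should raise the approval probability for this model.
assert after > before
```

The What-If tool layers an interactive dashboard on top of exactly this kind of perturb-and-re-score loop, so you can explore counterfactuals without writing the loop yourself.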

Cost: Google Cloud’s explainable AI tools are free for those already using AutoML Tables or AI Platform. Visit their website for pricing details. 


3. H2O Driverless AI

H2O Driverless AI automates several aspects of the ML workflow, including model validation, tuning, selection, and deployment. It runs on commodity hardware and is designed to use graphical processing units (GPUs). This is a strong selling point, since GPUs play a crucial role in deep learning. 

H2O Driverless AI also provides machine learning interpretability (MLI) as a core feature. Its offerings include:

  • Shapley values (which show how features directly impact each row’s unique prediction)

  • k-LIME (which can generate reason codes and English-language explanations for more complex models)

  • Surrogate decision trees (which provide a flowchart showing how a model made decisions based on the original features)

  • Partial dependence plots (which show average model predictions and standard deviations for initial features’ values)
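As one illustration of the surrogate-tree idea (a generic scikit-learn sketch, not Driverless AI’s implementation): fit a shallow, human-readable tree to a complex model’s predictions, then check how faithfully the simple tree mimics the complex one.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A complex "black box" model on synthetic data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate: a shallow tree fit to the black box's *predictions* (not the
# true labels), yielding a readable approximation of its decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate))  # flowchart-style if/else rules

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

A surrogate is only trustworthy to the extent its fidelity is high, which is why tools like Driverless AI report agreement metrics alongside the tree itself.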

H2O Driverless AI includes a feature called disparate impact analysis. If a model produces adverse effects for specific groups of users, disparate impact analysis lets you test that model for possible bias. Since bias can creep into models at several points during the algorithmic training process, this is a crucial feature to have. 
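A minimal version of a disparate impact check can be computed directly from model outcomes and group labels, comparing favorable-outcome rates between groups against the common “four-fifths rule” threshold. The data below is made up for illustration:

```python
import numpy as np

def disparate_impact(outcomes, group):
    """Ratio of favorable-outcome rates between two groups.

    Values below ~0.8 are a common red flag (the 'four-fifths rule').
    """
    rate_a = outcomes[group == 0].mean()
    rate_b = outcomes[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions (1 = favorable outcome) for two groups.
outcomes = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])
group    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact(outcomes, group)
print(ratio)  # 0.25: group 0 favored at 0.8, group 1 at only 0.2
```

Here the ratio of 0.25 falls well below 0.8, so a tool running this check would flag the model for review.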

Cost: H2O is an open source ML platform, which makes it free to use. Driverless AI is a standalone enterprise product that requires payment. Running this system on Google Cloud will cost approximately $2,281 per month. 


4. IBM Watson OpenScale

In 2011 Watson (a question-and-answering system that IBM developed) shocked the world when it beat two human Jeopardy champions to win a $1 million prize. Today, businesses use IBM Watson OpenScale to build models predicting credit risk, asset failures, claims processing, and more. 

Watson OpenScale has several model control features for users. It alerts users if it finds AI model drift, which occurs when models encounter data in production that differs from the data they were trained on. Such alerts are crucial since model drift carries the risk of introducing bias to models. 
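One common way to detect such drift (a generic sketch, not OpenScale’s method) is to compare a feature’s training distribution against live production data with a two-sample statistical test:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, size=1000)  # distribution seen in training
live_feature = rng.normal(loc=0.8, size=1000)   # shifted production data

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# production distribution differs from training, i.e. possible drift.
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print("drift alert: feature distribution has shifted")
```

Running a check like this per feature on a schedule, and alerting when the test fires, is the essence of the automated drift monitoring that OpenScale packages up.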

Watson OpenScale also offers contrastive explanations for any classification models you build. That means it displays pertinent positives and pertinent negatives, which both help explain each model’s behavior. Two types of de-biasing (passive and active) are also available. 

Cost: Watson OpenScale offers two pricing plans based on how many models you plan to deploy and monitor. Its Lite plan is free, but instances are deleted after 30 days of inactivity. 


5. Microsoft Azure

Microsoft Azure is a cloud computing service that lets users build, test, deploy, and manage applications. It supports a wide range of programming languages, tools, and frameworks within and beyond Microsoft’s ecosystem. In the 10 years since its creation, it has grown to support more than 600 services. 

Azure’s Basic and Enterprise edition users can access the platform’s model interpretability. Per Azure’s documentation, this offers three key benefits for users:

  • Feature importance values for raw and engineered features

  • Interpretability on real-world datasets at scale, during the training and inference stages

  • Interactive visualizations to find patterns within data

Azure allows you to apply these features globally on all data, or on specific local data points. Likewise, you can apply interpretability methods to either global behavior or specific predictions. 
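Feature importance of the kind Azure describes can be sketched generically with scikit-learn’s permutation importance, which measures how much a model’s score drops when each feature is shuffled. This is an illustration of the concept, not Azure’s implementation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data where only 2 of 5 features carry signal.
X, y = make_classification(n_samples=400, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global importance: accuracy drop when each feature is shuffled,
# averaged over several shuffles per feature.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: {score:.3f}")
```

Global scores like these summarize behavior across the whole dataset; local methods (such as the SHAP explainers mentioned below) instead attribute a single prediction to its features.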

Azure’s model interpretability offers nine explainer techniques to choose from. This allows you to match your explainer technique to the technique your team used to train your model. For example, if you used the deep learning technique, you can use the SHAP Deep Explainer to peek under its hood. 

Cost: Azure offers pay-as-you-go pricing based on region, usage type, billing options, and more. You can find pricing calculators, prices per product, and more on Azure’s website. There’s a dedicated tab for Azure’s suite of AI and machine learning products. 


Want more software to help build explainable AI?

Machine learning has the potential to grow your business by automating key tasks and saving valuable time. But the days of building opaque ML algorithms that don’t reveal how they make decisions are over. To pass regulatory scrutiny and build trust with users, businesses that want to benefit from ML’s power must use toolkits like these to do so responsibly.

Back to top