As a wider range of tasks becomes automated, from diagnosing cancer to driving vehicles, we are all increasingly beholden to AI. But most of the public isn’t ready for it. In Edelman’s 2019 Trust in Technology Survey, 47% of global respondents said that tech innovation is happening too quickly and is not in their best interests.
This lack of trust has a direct business impact. Gartner expects that by 2020, companies considered “digitally trustworthy” will earn 20% more in profits than their less trustworthy peers.
If you’re in any position to train AI that makes decisions about people, whether in healthcare or mortgage loan approval, your customers will expect you to explain any AI that you use to make decisions about them. If you want to earn and keep their trust, you can’t afford not to know how your algorithms reach their conclusions.
With trust in AI decidedly low, and stories about biased AI on the rise, there are increased calls for explainable AI. But given its relatively new role in modern computer science, there’s a lot of confusion about the concept.
To clear some of it up, we'll answer five common questions about explainable AI.
Explainable AI is a technique that holds algorithms accountable for the results they recommend. Businesses that use explainable AI apply open source frameworks and toolkits to ensure that algorithms’ outputs are fair and accurate for their end users.
These toolkits don’t provide perfect explanations for why algorithms make certain decisions over others. Instead, they let machine learning (ML) developers see their algorithms’ capabilities and limitations.
This results in greater understanding of how algorithmic models arrive at certain outputs (like incorrectly predicting someone’s likelihood of reoffending based on their skin color).
Dr. Matt Turek, program manager in DARPA’s Information Innovation Office, offers an expanded explanation. He defines explainable AI as:
“Machine learning techniques that:
Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy).
Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.”
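To make the idea of an explainability toolkit concrete, here is a minimal sketch of one common approach: using the open source SHAP library to estimate how much each feature contributed to a trained model’s predictions. The model, synthetic dataset, and library choice are illustrative assumptions, not a prescribed stack.

```python
# Minimal sketch: using an open source explainability toolkit (SHAP is
# assumed here) to inspect a trained model's behavior. The synthetic
# dataset and model choice are purely illustrative.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a model on synthetic data standing in for real business data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP estimates how much each feature pushed each prediction up or down,
# helping developers see the model's capabilities and limitations.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])
print(shap_values)
```

Output like this doesn’t prove a model is fair; it gives developers a starting point for spotting features that carry more weight than they should.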
Not all AI needs the same degree of explainability; different applications have different requirements.
AI that generates internal ideas and insights—which can then be further tested before deployment—doesn’t need as much documentation on explainability.
For example, let’s say your tech team trains an algorithm on your company’s sales data. Your end goal is for the algorithm to learn which sales lead characteristics (business size, industry, etc.) best predict that a lead will convert into a customer. In this case, it’s less essential to explain how the algorithm reaches its conclusions.
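As a rough illustration of that scenario, the sketch below trains a simple model on made-up lead data and reads off which characteristics it associates with conversion. Every column name and value is hypothetical.

```python
# Illustrative sketch of the sales-lead scenario above: train a simple
# model on hypothetical lead attributes and inspect which ones are
# associated with conversion. Column names and data are made up.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

leads = pd.DataFrame({
    "business_size":  [10, 250, 40, 5000, 80, 15, 900, 60],
    "is_tech_sector": [1, 0, 1, 0, 1, 0, 0, 1],
    "past_contacts":  [1, 4, 2, 6, 3, 0, 5, 2],
    "converted":      [0, 1, 0, 1, 1, 0, 1, 0],
})

X = StandardScaler().fit_transform(leads.drop(columns="converted"))
model = LogisticRegression(max_iter=1000).fit(X, leads["converted"])

# Coefficients give a rough, directional read on which characteristics
# the model associates with conversion -- useful as an internal insight.
for name, coef in zip(leads.columns[:-1], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

Because insights like these are reviewed internally before anyone acts on them, the explainability bar is lower than it would be for a model making automated decisions about individuals.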
However, AI that makes decisions in a closed loop with important consequences (such as self-driving cars) has a high need for explainability. DARPA predicts that explainable AI will be essential in the following fields:
Transportation
Security
Medicine
Finance
Legal
Military applications
The ethical and legal risks of using AI to make decisions in these contexts make explainable AI a crucial part of the design and documentation process. Without it, “next-gen” products like self-driving cars will fail to earn mass adoption. Users’ lives aren’t worth the risk.
Explainable AI doesn’t solve machine bias. Rather, it’s a solution to help tech teams and consumers build trust in tools that use AI to make decisions about users.
As mentioned above, trust is a prerequisite to mass AI adoption. Without transparency into how a system works, stakeholders hesitate to approve products. After years of falling behind, the law is finally starting to catch up.
We’re now seeing legislation that gives citizens the right to sue businesses that can’t explain how their algorithms work or what data those algorithms were trained on. Europe’s GDPR, for instance, guarantees a “right to explanation” of how businesses handle EU citizens’ personal data. And that kind of transparency is exactly what users want.
Research from the University of Chicago and the University of Pennsylvania shows that users place more trust in algorithms they can modify than in those built solely by experts: people prefer algorithms whose workings they can clearly see, even when those algorithms turn out to be wrong.
With that said, explainable AI is not a catch-all solution. Explaining how an algorithm arrived at a decision increases trust, but it won’t solve the problem of machine bias. That’s because, to quote Prajwal Paudyal, “Explainable AI (XAI) is NOT an AI that can explain itself, it is a design decision by developers.”
In other words: if you don’t design your models to be interpretable upfront, you probably won’t be able to explain them in the future.
Algorithms make more and more decisions that impact all our lives. These decisions range from choosing which job candidates deserve in-person interviews to predicting which criminals are most likely to re-offend.
Without explainable AI, people who are rejected for interviews or denied bail have little recourse—even if the algorithm’s results are incorrect.
Within the past decade, deep learning techniques started to replace traditional training methods like linear regression and decision trees. Deep learning improved the speed at which tech teams could deploy algorithms and offered more accurate results than traditional training methods.
The problem is that when tech teams use deep learning, they can’t always see how the model combines and weighs the data points in a dataset throughout the training process. As a result, even an algorithm’s creators can’t always understand how it reaches its conclusions. In these cases, they’ve created black box algorithms: models whose internal decision-making can’t be readily inspected or explained.
To judge how accurate explainable AI can be, start by asking which training technique a team used for its algorithm. Adding more input and hidden layers to a deep learning algorithm tends to make the model more accurate, but it also makes the end outcome harder to interpret than with traditional training methods.
More traditional algorithmic training techniques (such as linear regression and decision trees) tend to be less complex and thus more explainable: it’s easier to trace an output back to specific features and steps in the training process. But because these models are simpler, they’re not always as accurate. So if someone criticizes an explainable model’s accuracy, they’re more likely critiquing the training technique than the concept of explainable AI itself.
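As a hedged illustration of that traceability, the sketch below fits a shallow decision tree and prints its learned rules; the bundled iris dataset is used only for convenience.

```python
# Minimal sketch of why simpler models are easier to trace: a shallow
# decision tree's learned rules can be printed and read as if/then paths.
# The bundled iris dataset is used purely for illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Each prediction can be traced back to an explicit rule in this printout,
# which is not something a deep neural network offers out of the box.
print(export_text(tree, feature_names=list(data.feature_names)))
```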
A separate problem occurs when teams build additional algorithmic models just to explain how an initial black box algorithm worked. In this case, “explainable AI” provides little value.
This process creates extra work for tech teams, who must build more models in addition to the ones already produced. To make matters worse, the explainable model risks copying problematic outcomes from the black box model.
As Cynthia Rudin of Duke University writes, it’s best for teams using such technology to design interpretable models from the start. Rudin’s research disproves the theory that explainable and accurate models are mutually exclusive:
“When considering problems that have structured data with meaningful features, there is often no significant difference in performance between more complex classifiers (deep neural networks, boosted decision trees, random forests) and much simpler classifiers (logistic regression, decision lists) after preprocessing…
Even for applications such as computer vision, where deep learning has major performance gains, and where interpretability is much more difficult to define, some forms of interpretability can be imbued directly into the models without losing accuracy.”
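A rough way to sanity-check that claim on your own structured data is to compare a complex and a simple classifier under cross-validation, as in the hedged sketch below. The dataset and models are illustrative, and results will vary by problem.

```python
# Rough sketch of the comparison Rudin describes: on structured, preprocessed
# data, score a complex classifier against a much simpler one. The dataset
# and models are illustrative; results will vary by problem.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

models = {
    "random forest (complex)": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression (simple)": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}

# Similar cross-validated accuracy here would echo Rudin's point that
# interpretability does not have to cost predictive performance.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```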
If you’re in a position to build AI, you should strive to make it explainable from the start. To do so:
Document the business problem that you’re trying to solve. Bias in AI can occur when business goals aren’t easily interpreted by computers. So it’s not enough to say that you want your algorithm to predict a customer’s “creditworthiness.” You must also decide (and document) how you define “creditworthiness” in specific business terms (e.g., the share of loans that are repaid).
Design your algorithm’s requirements early. During your product specification phase, write guidelines for inference and explanation accuracy, interpretability, and execution. Clearly defining these goals upfront will make you more aware of bias throughout the product lifecycle.
Explore ML explainability toolkits based on the business problem you’re trying to solve. As with any software, don’t expect one toolkit to solve all your problems. Instead, review a range of toolkits that you can use in tandem.
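For instance, one toolkit-style check you might combine with others is model-agnostic permutation importance, sketched below with scikit-learn; the model and bundled dataset are placeholders for your own problem.

```python
# Hedged sketch of one explainability check among several you might combine:
# scikit-learn's model-agnostic permutation importance. The model and the
# bundled dataset are placeholders for your own problem.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# large drops flag the features the model leans on most heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: importance drop {result.importances_mean[idx]:.3f}")
```

Checks like this complement, rather than replace, the upfront design and documentation steps above.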
Lauren Maffeo