Explainable AI: Why It Matters

AI can deliver amazing answers to seemingly unsolvable problems, but it’s often a black box. You can’t see the “why” behind the answers. Here’s why Explainable AI matters.

Today’s Problem: Black Box AI

Traditional AI systems, mostly based on neural networks, are incredibly powerful at generating accurate answers. But how those answers are reached is extremely difficult to comprehend, because the dizzying array of internal calculations obscures the effect of any single input. You can’t easily trace the “why” of an answer. This is often referred to as Black Box AI.

Here’s why we have Black Box AI: in a neural network, input data flows into a dense web of computations. Each layer produces intermediate results, which are combined with other intermediate results and recombined many times over. The specific relationship between any individual input and the output (the answer) is all but impossible to trace.
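To see this entanglement concretely, here is a minimal sketch in plain NumPy (an illustration only; the network shape and random weights are invented for this example) of a tiny two-layer computation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)          # four inputs
W1 = rng.random((5, 4))    # layer 1 weights (invented values)
W2 = rng.random((2, 5))    # layer 2 weights (invented values)

hidden = np.tanh(W1 @ x)   # every hidden value blends all four inputs
output = W2 @ hidden       # every output blends all hidden values
print(output)
```

Perturb any one input and every intermediate value shifts at once, so the contribution of that single input is smeared across the whole computation. Real networks repeat this mixing across dozens or hundreds of layers.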

If you can’t understand why an AI system gives you certain answers, how can you be completely confident about the decisions you make using those answers? And how can you be sure those answers aren’t skewed by bias?

Explanations Matter

Saying “the machine made the decision” is simply not good enough for many real-world decisions, such as accepting or denying a loan application, or making an important medical diagnosis. In situations like these, determining why a specific prediction was made is critical to those affected by it. Governments around the world are taking a legal stand on this issue with right-to-explanation laws and policies.

Explainable AI Today

Not every machine learning system is a black box; some simple learning models are very transparent. The trouble with getting explainable results today is the tradeoff: simple models provide simple explanations but lack accuracy, while more complex models like neural networks perform well on accuracy but provide little or no explanation.

Here’s a look at how three common machine learning approaches fare on explainability: decision trees, K-nearest neighbors, and neural networks.

Decision Trees

A decision tree provides a hierarchy of very specific questions. The answer to one question guides the prediction process down various branches of the tree, and at the bottom of the tree is the prediction. Determining why this prediction was made is straightforward: simply follow each decision from the top of the tree to the bottom and note how each question was answered. However, as a decision tree becomes more complex, interpreting the results becomes increasingly difficult; a realistic breast cancer detection tree, for example, quickly grows beyond what a person can follow at a glance.
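As a minimal sketch of this kind of tracing (assuming scikit-learn, which the article does not name), the following trains a small tree on the classic breast cancer dataset and replays the question answered at each step of one prediction’s path:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(data.data, data.target)

# Explain a single prediction by walking its root-to-leaf path.
sample = data.data[:1]
path = clf.decision_path(sample)   # nodes this sample visited
leaf = clf.apply(sample)[0]        # the leaf holding the prediction

for node in path.indices:
    if node == leaf:
        print(f"Leaf {node}: predict {data.target_names[clf.predict(sample)[0]]}")
        break
    feature = clf.tree_.feature[node]
    threshold = clf.tree_.threshold[node]
    value = sample[0, feature]
    answer = "yes" if value <= threshold else "no"
    print(f"Is {data.feature_names[feature]} <= {threshold:.2f}? "
          f"(value {value:.2f}) -> {answer}")
```

Each printed line is one human-readable question, which is exactly the property deeper trees gradually lose.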

K-nearest Neighbors

Another fairly transparent machine learning technique is the K-nearest Neighbors (KNN) method. When a query is presented to this machine learning system, the system uses the closest-matching examples from the training set to determine the prediction.

For example, when a loan application is denied, the KNN machine essentially says, “This loan application was denied because it looks very much like other loan applications from the training examples that belong to the ‘denied’ category.” The presumption is that whoever constructed the training dataset can satisfactorily explain why the examples in the “denied” and “accepted” categories were labeled the way they were.
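Here is a hedged sketch of that idea (scikit-learn assumed; the loan features and figures below are invented purely for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy training set: [income in $k, debt-to-income %]; 1 = approved, 0 = denied
X_train = np.array([[85, 10], [90, 15], [40, 45], [35, 50], [70, 20], [30, 55]])
y_train = np.array([1, 1, 0, 0, 1, 0])

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

applicant = np.array([[38, 48]])
prediction = knn.predict(applicant)[0]
distances, indices = knn.kneighbors(applicant)

print("Decision:", "approved" if prediction else "denied")
print("Because it resembles these training applications:")
for dist, idx in zip(distances[0], indices[0]):
    label = "approved" if y_train[idx] else "denied"
    print(f"  {X_train[idx]} -> {label} (distance {dist:.1f})")
```

The explanation is simply the list of nearest training examples, so its quality depends entirely on how well those examples themselves are understood.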

Neural Networks

Neural networks (NNs) dominate mainstream machine learning, but they fall short on explainability. These systems are more complex than decision trees and KNNs, which is why they can be very accurate, but their predictions are virtually impossible to explain. You can see what goes into the neural network and what comes out, but looking inside the machine to determine what exactly is happening is extremely difficult. It’s a black box.
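To give a feel for the scale of the problem, here is a small sketch (again assuming scikit-learn; the layer sizes are arbitrary) that trains a modest network on the same breast cancer data and counts its learned weights:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier

data = load_breast_cancer()
mlp = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000, random_state=0)
mlp.fit(data.data, data.target)

n_params = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print(f"Accuracy on training data: {mlp.score(data.data, data.target):.3f}")
print(f"Learned parameters: {n_params}")  # every one participates in every answer
```

Even this modest network holds thousands of parameters, every one of which participates in every prediction, and none of which corresponds to a readable rule the way a decision tree node does.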


Natural Intelligence and Explainability

The Natural Intelligence Machine Learning (NIML) system is a pattern-based approach to machine learning. Like a neural network, the NIML system learns through reinforcement and can provide very accurate results: common patterns in the training set lead to strengthened connections within the machine learning fabric. NIML achieves accuracy similar to a neural net, often with a fraction of the data.

Explainability is an inherent part of the NIML model. The computations within the NIML system are localized, which makes it possible to trace data flow through the system. In other words: NIML lets you see the “why” behind the answers.

Benefits of NIML

NIML’s explainability provides the following benefits:

  1. The decision process can be reversed back through the model. Important features can be distinguished from non-informative features, and the value of each feature can be compared to the values the model was trained on.
  2. The specific data ranges that were important to the prediction are identified. For each classification, the model describes which features are relevant and which have no impact.
  3. Complex relationships between the inputs and the predictions are learned. The system continuously updates its internal state and evolves learning structures to capture the nuances within the training data. This learning is captured in a way that makes it possible to tease apart: when inference-time predictions are made, they can be transposed back to the input data space.

This information provides significant benefit to the users of the system. For example, someone whose loan application was denied can see what needs to change to improve their likelihood of approval, and a doctor gets information about why a certain diagnosis was made, which helps determine the appropriate treatment.

Machine learning systems are increasingly being used to make important decisions. And explainable machine learning will fundamentally transform the field of AI in the very near future.
