How to Transform from Blackbox to Transparent AI

If a stranger approached you and said it would begin to rain in exactly two minutes, so you had better take out your umbrella, would you believe them without knowing their qualifications or how they reached that conclusion? Probably not. On a basic level, this is the problem with blackbox artificial intelligence (AI), and it is the fuel driving the move away from these opaque solutions and toward transparent, explainable AI and ethics.

Why has explainable AI become more widely discussed and practiced in recent years?

Human beings, by their very nature, are disinclined to trust an outcome that cannot be explained. This is the basis for the upward trend in explainable AI solutions and machine learning (ML) algorithms. On one hand, this distrust encourages people to question how algorithms arrive at their conclusions and to learn more about the ML/AI development process overall. On the other, AI solutions that cannot be explained fall into the “blackbox” category: they analyze data and spit back an answer without being able to explain why or how the result was reached, and that is an issue.

Blackbox AI is problematic because it leaves those employing these solutions unaware of which factors the algorithm weighs most heavily and how those influential factors are identified in the first place. In low-risk situations, such as Netflix show recommendations, this may not matter much. In high-stakes use cases, however, such as self-driving cars, medical imaging diagnostics, or AI in aviation safety, blackbox solutions are not only unacceptable but unethical.

Consider a scenario in which an algorithm is created to provide more accurate patient diagnoses by analyzing medical imaging results. Both the doctor and the patient are unlikely to accept a diagnosis based primarily on technology that cannot be explained. Furthermore, if the diagnosis turns out to be wrong, who is responsible? The creator of the algorithm? The doctor? The hospital? In this type of situation, it is difficult to settle on any one answer.

This lack of transparency reduces our ability to trust the outcome. To fully trust the outcome an algorithm has arrived at, we must be sure the algorithm is free of bias, which leads to the next question:

How can we be sure that the algorithm hasn’t fallen victim to bias? 

After all, humans are not infallible, so is it reasonable to expect that algorithms developed by humans will be free of bias?

The answer: we can’t, not without knowing precisely which factors play a role in the model’s decision process and to what extent. It is too easy for bias to permeate the training data and, consequently, the outcomes derived from AI-assisted solutions.

Now that we’ve established why explainable AI has been on the rise, it’s time to discuss how to ensure the algorithms governing these technologies are transparent. The following offers an overview of some of the techniques available for identifying the areas of high influence in a machine learning model, the weight of those areas, and how they relate to specific outputs.

Transforming to Transparent AI

Technique #1: RETAIN (Reverse Time Attention Model)

The RETAIN technique makes use of the attention mechanism in place of the traditional Seq2Seq model. The Seq2Seq model can be inefficient because of its processing structure: it compresses the input into a fixed-length context vector, and if the input is complex, that vector has difficulty “remembering” all of the information from the encoder’s last hidden state. When the attention mechanism is employed instead, the context vector can refer back to the entire input sequence, eliminating the chance that it “forgets” more complicated inputs. The attention mechanism then weighs the input influences to determine each output one at a time, which lets the analyst see where the model focuses when producing each component of the output.
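
To make the attention idea concrete, here is a minimal sketch in Python (assuming NumPy; the shapes, names, and random values are purely illustrative, not the actual RETAIN implementation). The printed weights are what an analyst would inspect to see which input steps drove a given output:

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical encoder hidden states, one per input step (4 steps, dimension 3).
encoder_states = np.random.rand(4, 3)

# Hypothetical query vector for the current output step.
query = np.random.rand(3)

# Score each input step against the query, then normalize into attention weights.
scores = encoder_states @ query
weights = softmax(scores)            # one weight per input step, summing to 1

# The context vector is a weighted sum of the encoder states rather than a single
# fixed-length summary, so nothing has to be "remembered" in one vector.
context = weights @ encoder_states

print("attention weight per input step:", weights)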

Technique #2: LIME (Local Interpretable Model-Agnostic Explanations)

This technique can be used to explain the output after the algorithm has already reached its outcome. In this method, individual inputs are perturbed, blocked, or changed to observe how each one affects the final output.
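
The following is a minimal sketch of that perturbation idea in Python (assuming NumPy and a scikit-learn-style model exposing predict_proba; the helper name and baseline value are hypothetical). The real LIME library goes further and fits a local surrogate model around the instance, but the intuition is the same:

import numpy as np

def feature_influence(model, instance, baseline=0.0):
    # Score each feature by how much masking it shifts the model's prediction.
    original = model.predict_proba(instance.reshape(1, -1))[0, 1]
    influences = []
    for i in range(instance.size):
        perturbed = instance.copy()
        perturbed[i] = baseline                        # "block" feature i
        shifted = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
        influences.append(original - shifted)          # large shift = influential feature
    return np.array(influences)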

Technique #3: LRP (Layerwise Relevance Propagation)

The LRP technique starts with an input and the corresponding probability of its classification. Then, working backward through a redistribution (LRP) equation, it calculates each input factor’s relevance.
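
As an illustration, here is a minimal sketch of a single LRP backward step for one dense layer in Python (assuming NumPy and the basic LRP-epsilon redistribution rule; the names and shapes are illustrative only). In a full network this step is repeated layer by layer, from the output back to the inputs:

import numpy as np

def lrp_dense(activations, weights, relevance_out, eps=1e-6):
    # activations:   (n_in,)  inputs to the layer
    # weights:       (n_in, n_out)
    # relevance_out: (n_out,) relevance arriving from the layer above
    z = activations @ weights                     # contribution of inputs to each output
    z = z + eps * np.where(z >= 0, 1.0, -1.0)     # stabilizer to avoid division by zero
    s = relevance_out / z                         # relevance per unit of contribution
    return activations * (weights @ s)            # relevance redistributed to each input

# Toy usage: all relevance sits on class 1 and is pushed back onto 3 inputs.
a = np.array([0.2, 0.5, 0.3])
W = np.random.rand(3, 2)
print(lrp_dense(a, W, np.array([0.0, 1.0])))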

Technique #4: Gradient Descent

Gradient descent determines which weight (input) will produce the least error, or, as a Skymind article on neural networks puts it, “Which one correctly represents the signals contained in the input data, and translates them to a correct classification?”
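
Here is a minimal sketch in Python of that search for the least-error weight, assuming a single weight and a squared-error loss (the data and learning rate below are illustrative only):

import numpy as np

# Toy data: y is roughly 3 * x, so the "correct" weight is near 3.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 5.9, 9.2, 11.8])

w = 0.0                                   # initial weight guess
lr = 0.01                                 # learning rate

for step in range(200):
    error = x * w - y                     # prediction error per example
    grad = 2 * np.mean(error * x)         # derivative of mean squared error w.r.t. w
    w -= lr * grad                        # move the weight against the gradient

print("learned weight:", w)               # converges toward roughly 3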

The Competitive Advantage

Building explainable AI that complies with ethical specifications increases trust and encourages more widespread use of the technology, and it also lends a competitive advantage to companies that make their AI solutions transparent. If a model does fail or calculates an output incorrectly, explainability makes it easier to understand what caused the failure and points toward how to fix it.

Having this awareness lets you iterate faster and reach an improved solution sooner. A clear view of which factors drive outputs also makes it easier to identify business drivers to capitalize on. As PwC puts it in its 2018 XAI research, “The inability to see inside the black box can only hold up AI development and adoption.”

The first step to developing and deploying explainable ML/AI technology in a company is digitization. If you’d like some direction, shoot us a message to set up a free consultation; we’d love to work with you!
