Expanding explainable AI examples key for the industry

Improving AI explainability and interpretability is key to building trust with consumers and to the continued success of the technology.

AI systems have tremendous potential, but the average user has little visibility into how these machines make their decisions. AI explainability can build trust and further push the capabilities and adoption of the technology.

When a human makes a decision, you can ask how the decision was made. But with many AI algorithms, an answer is provided without any explanation of the reasoning behind it. This is a problem.

AI explainability is a big topic in the tech world right now, and experts have been working to create ways for machines to start explaining what they are doing.

What is AI explainability?

Determining how an AI model works isn't as simple as lifting the hood and taking a look at the programming. For most AI algorithms and models, especially ones using deep learning neural nets, it is not particularly apparent how the model came to its decision.

AI models can be placed in positions of great responsibility, such as driving autonomous vehicles or assisting in hiring decisions, and users demand information about those decisions. They want to be able to see how the AI arrives at a decision and to get clear answers and explanations.

Explainable AI, also referred to as XAI, is an emerging field in machine learning that aims to address how decisions of AI systems are made. This area inspects and tries to understand the steps involved in the AI model making decisions.

Many in the research community, and in particular the U.S. Defense Advanced Research Projects Agency (DARPA), have been working to improve the understanding of these models. DARPA is pursuing explainable AI through a number of funded research initiatives, and many companies are also working to bring explainability to AI.

Why explainability is important

For certain use cases of AI, being able to explain the decision-making process the AI system went through isn't pressing. But when it comes to autonomous vehicles and other high-stakes applications, the need to understand the rationale behind the AI is heightened.

On top of needing to know that the logic used is sound, it is also important to know that the AI is completing its tasks safely and in compliance with laws and regulations. This need is especially important in heavily regulated industries such as insurance, banking and healthcare. If an incident does happen, the humans involved need to be able to understand why and how that incident happened.

Behind the desire for a better understanding of AI is the need to trust the systems people are using. For artificial intelligence and machine learning to be useful, there must be trust. And for trust to be earned, there needs to be a way to understand how decisions are being made by these intelligent machines. The challenge is that some of the technologies being adopted for AI are not transparent and, therefore, make it difficult to fully trust their decisions, especially when humans are only operating in a limited capacity or are removed from the loop entirely.

We also want to make sure that decisions are made for fair and unbiased reasons. There have been numerous cases of AI systems making the news for biased decision-making. In one example, an AI system created to determine the likelihood of a criminal reoffending was found to be biased against people of color. Being able to identify this kind of bias in both the data and the AI model is essential to creating models that perform as expected.
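As an illustration only, one first-pass bias check is to compare the rate of favorable model outcomes across demographic groups. The data, column names and interpretation below are illustrative assumptions, not taken from any system discussed in this article.

```python
# Minimal sketch of a basic fairness check: compare the rate of favorable
# model outcomes across demographic groups. The DataFrame and the "group"
# and "predicted_positive" columns are illustrative assumptions.
import pandas as pd

results = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_positive": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Selection rate: fraction of favorable predictions within each group
rates = results.groupby("group")["predicted_positive"].mean()
print(rates)

# Disparate impact ratio: lowest group rate divided by the highest;
# values far below 1.0 suggest the model favors one group over another
print("Disparate impact ratio:", rates.min() / rates.max())
```

A check like this only surfaces a disparity; understanding why the model produces it is where explainability methods come in.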

Why is AI explainability so difficult?

Today, there are numerous AI algorithms that lack explainability and transparency. Some algorithms, such as decision trees, can be examined by humans and understood. However, the more sophisticated and powerful neural network algorithms, such as deep learning, are much more opaque and difficult to interpret.

These popular and successful algorithms have given AI and machine learning potent capabilities; however, they produce systems that are not easily understood. Relying on black box technology can be dangerous.

But explainability isn't as easy as it sounds. The more complicated a system gets, the more connections it makes between different pieces of data. For example, when a system performs facial recognition, it matches an image to a person. But because of that complex web of connections, the system can't explain specifically how the parts of the image are mapped to that person.

How we can make AI explainable

There are two main ways to provide explainable AI. The first is to use machine learning approaches that are inherently explainable, such as decision trees, Bayesian classifiers and other interpretable techniques. These offer a degree of traceability and transparency in their decision-making, which can provide the visibility needed for critical AI systems without sacrificing too much performance or accuracy.
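As a minimal sketch of what that inherent explainability looks like in practice, a small decision tree can be trained and its learned rules printed as readable if/else logic. The use of scikit-learn and its built-in Iris dataset here is an illustrative assumption, not a tool named in this article.

```python
# Minimal sketch of an inherently explainable model: a shallow decision tree
# whose learned rules can be printed and read directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the learned decision rules as human-readable if/else logic
print(export_text(tree, feature_names=list(data.feature_names)))
```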

The second is to develop new approaches to explain the more complicated but powerful models, such as deep neural networks. Researchers and institutions, such as DARPA, are currently working to create methods for explaining these more complicated machine learning techniques. However, progress in this area has been slow.

The more that AI is a part of our everyday lives, the more we need these black box algorithms to be transparent. Having AI that is trustworthy, reliable and explainable, without greatly sacrificing AI performance or sophistication, is a must.

There are several good examples of tools to help with AI explainability, including many vendor offerings as well as open source options.
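As one illustrative open source example (SHAP is an assumption here, not a tool named in this article), a post-hoc explanation library can assign each input feature a contribution score for an individual prediction from an otherwise opaque model.

```python
# Minimal sketch of post-hoc explanation with the open source SHAP library.
# The gradient boosting model and breast cancer dataset are placeholders.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)

# Explain individual predictions: which features pushed each one up or down
explainer = shap.Explainer(model, X.iloc[:100])
explanations = explainer(X.iloc[:5])
print(explanations.values[0])  # per-feature contributions for the first row
```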

Some organizations, such as the Advanced Technology Academic Research Center, are working on transparency assessments. This self-assessed, multifactor score takes into account algorithm explainability, identification of the data sources used for training and the methods used for data collection.

By taking all these factors into account, people are able to self-assess their models. While not perfect, it's a necessary starting point that allows others to gain insight into what's going on behind the scenes. First and foremost, making AI trustworthy and explainable is essential.
