
IBM interprets machine learning models with AI Explainability toolkit

IBM's open source AI Explainability 360 toolkit packages algorithms and training examples to help humans better understand the decision-making process of machine learning models.

IBM has introduced an open source toolkit to help humans interpret the decision-making process of machine learning models.

Machine learning models have gained traction across many use cases, such as processing loan applications, and their accuracy continues to improve. However, the end users who interact with these systems don't necessarily understand how the models reach their decisions.

As these systems proliferate, it's important that the people affected by them can trust their decisions. Can users be expected to accept results when a machine learning-based system denies a bank loan, identifies someone in an image as a potential suspect, flags a person for additional security or fraud detection measures, or otherwise impacts lives? AI whose decisions can't be explained is hard to trust.

The challenge is that the most powerful machine learning algorithms in use today, especially deep learning, are essentially black boxes. Once a decision has been made, there is no straightforward way to examine why the model arrived at that particular outcome.

"What we are working on is to increase the trust that people have for the systems by making them more explainable," said Kush Varshney, a research staff member and manager in IBM Research AI at the company's Thomas J. Watson Research Center in Yorktown Heights, NY.

IBM's goal is to help overcome this black-box syndrome with the AI Explainability 360 toolkit. It features algorithms for case-based reasoning, directly interpretable rules, and post-hoc local and global explanations. The toolkit extends the Watson OpenScale platform IBM released last year, and it complements AI Fairness 360, the toolkit IBM released last year to address bias in AI models.

In the larger picture, AI Explainability 360 is a "universal translator" for AI processes and decision-making: a toolkit with a common interface that supports the different ways of explaining how machine learning models work and why they make particular decisions, said Charles King, an analyst at Pund-IT, an IT research company based in Hayward, Calif.


IBM's approach doesn't specifically aim to solve the "inherent inexplainability" of widely used deep learning algorithms, but rather provides pathways to help end users understand how AI is applied to a decision that affects them, said Ronald Schmelzer, an analyst at Cognilytica, a research consultancy in Ellicott City, Md., that focuses on AI.

For example, these pathways can steer the user to decision trees, directly interpretable models or post-hoc explanations of how the system arrived at a particular decision. The system can also provide case-based reasoning for data sampling choices, explain why specific features were chosen, and provide end-user training examples.
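To make the "directly interpretable" pathway concrete, here is a minimal sketch that uses plain scikit-learn rather than the AI Explainability 360 package itself; the dataset and model settings are illustrative assumptions. A shallow decision tree can be printed as human-readable rules, so an end user or auditor can trace exactly how an input leads to a decision.

```python
# Minimal sketch of a directly interpretable model using plain scikit-learn,
# not the AI Explainability 360 package; dataset and depth are illustrative.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# A shallow tree stays small enough that its decision path reads as plain rules.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Print the learned rules so any individual decision can be traced by hand.
print(export_text(model, feature_names=list(data.feature_names)))
```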

"[IBM is] trying to approach this problem as a set of tooling, meant for the data scientist, to give them support for their models without trying to solve the fundamental underlying problem of inexplainability for certain kinds of algorithms," Schmelzer said.


The AI toolkit also is part of IBM's effort to make these techniques available for the developer community to use and integrate into their natural workflows, Varshney said. The software, available for download on GitHub, is extensible and written in a programming paradigm very similar to scikit-learn, a machine learning library for Python. It includes tutorials, notebooks and sample use cases that came from IBM's customers, Varshney said.
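As an illustration of that scikit-learn-style fit-then-explain workflow, the sketch below uses scikit-learn's own permutation importance as a stand-in for a post-hoc global explanation; it does not call the AI Explainability 360 API, and the model and dataset choices are assumptions made only for the example.

```python
# Illustrative fit-then-explain workflow in the scikit-learn style the toolkit
# follows; permutation importance stands in for a post-hoc global explanation
# and is not part of the AI Explainability 360 API.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Step 1: fit an opaque model, exactly as a data scientist normally would.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Step 2: ask, after the fact, which features drive the model's behavior.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(data.feature_names, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.4f}")
```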

The initial release contains eight algorithms recently created by IBM Research, plus metrics from the developer community that serve as quantitative proxies for the quality of explanations, said Aleksandra Mojsilovic, an IBM Fellow and a lead researcher on the project, in a blog post.

One of these algorithms, Boolean Decision Rules via Column Generation, is a scalable method for learning directly interpretable classification rules. Another, the Contrastive Explanations Method, is a local post-hoc technique that helps explain why one outcome occurred instead of some alternative, an aspect of explainable AI often overlooked by researchers and practitioners, she said.
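A toy sketch of that contrastive idea, not IBM's CEM algorithm: search for the smallest change to a single feature that flips a model's prediction, which answers "why this outcome rather than that one?" The dataset, model and search range below are assumptions chosen only for illustration.

```python
# Toy sketch of the contrastive idea (not IBM's CEM algorithm): find the
# smallest single-feature change that flips the model's decision, answering
# "why this outcome rather than the alternative?" All choices are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
model = LogisticRegression(max_iter=10000).fit(data.data, data.target)

x = data.data[0]                                  # the instance to explain
original = model.predict(x.reshape(1, -1))[0]

best = None                                       # (feature index, delta)
for i in range(data.data.shape[1]):
    scale = data.data[:, i].std()
    # Sweep perturbations of this one feature, smallest magnitude first.
    for delta in sorted(np.linspace(-3, 3, 61) * scale, key=abs):
        x_new = x.copy()
        x_new[i] += delta
        if model.predict(x_new.reshape(1, -1))[0] != original:
            if best is None or abs(delta) < abs(best[1]):
                best = (i, delta)
            break

if best is None:
    print("No single-feature change in the searched range flips the prediction")
else:
    print(f"Prediction flips if '{data.feature_names[best[0]]}' "
          f"changes by {best[1]:+.3f}")
```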

"Virtually every commercial technology has gone through a similar process where the common jargon used by product developers is translated and normalized for larger audiences," King said.
