

How to solve the black box AI problem through transparency

Black box AI complicates user trust in algorithms' decision-making. As AI evolves, experts urge developers to take a glass box approach.

If "black box" sounds insidious, that's because experts constantly warn about the complications of non-transparent AI, which include bias and ethical concerns.

But black box development is currently the primary method of deep learning modeling. In this approach, millions of data points are fed into the algorithm, which then identifies correlations between specific data features to produce an output. The process inside the box, however, is mostly self-directed and is generally difficult for data scientists, programmers and users to interpret.

How to clear the box

The solution to these issues around black box AI is not as easy as cleaning training data sets. Currently, most AI tools are underpinned by neural networks, which are hard to decipher. Trust in the company and its training process is a starting point, but experts say the real solution to black box AI is shifting to a training approach called glass box or white box AI.


Glass box modeling requires reliable training data that analysts can explain, change and examine to build user trust in the ethical decision-making process. When a white box AI application makes decisions that affect humans, the algorithm itself can be explained and has undergone rigorous testing to ensure accuracy.
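The glass box idea can be made concrete with a small sketch. In the hypothetical scorer below, every factor, weight and threshold is visible, so an analyst can explain, change and examine each one; the feature names and weights are invented purely for illustration.

```python
# Hypothetical glass box scorer: each factor's contribution is explicit,
# so the decision can be traced line by line. Weights are invented.
WEIGHTS = {
    "payment_history": 0.40,
    "income_ratio": 0.30,
    "account_age": 0.20,
    "recent_inquiries": 0.10,
}
APPROVAL_THRESHOLD = 0.6

def score(applicant):
    """Return the total score plus per-factor contributions."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

def decide(applicant):
    total, contributions = score(applicant)
    return {
        "approved": total >= APPROVAL_THRESHOLD,
        "score": round(total, 3),
        "why": contributions,  # the explanation ships with the decision
    }

applicant = {"payment_history": 0.9, "income_ratio": 0.7,
             "account_age": 0.5, "recent_inquiries": 0.2}
print(decide(applicant))
```

Because the "why" dictionary travels with every decision, a reviewer can see exactly which factors drove the outcome and adjust any weight directly, which is the transparency black box models lack.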

"AI needs to be traceable and explainable. It should be reliable, unbiased and robust to be able to handle any errors that happen across the AI solution's lifecycle," said Sankar Narayanan, chief practice officer at Fractal Analytics Inc.

Modeling is complicated by a fundamental issue: AI is meant to mimic the human thought process, but behavioral economics research shows that human thought is often irrational and unexplainable, Narayanan said.

"We rely on our thought process even if we are not able to rationally explain it, i.e. a black box," he continued.

A human touch

One key to successful glass box AI is more human interaction with the algorithm. Jana Eggers, CEO of Boston-based AI company Nara Logics, said that strictly black box AI reflects both human bias and data bias, which affect the development and implementation of AI. Explainability and transparency begin with the context developers provide to both the algorithm and its users: broad familiarity with the training data and strict parameters on the algorithm's calculations and capabilities.

Another step toward dismantling the black box problem is to analyze the algorithm's inputs and outputs to deduce its decision-making process. Once the process is clear to developers, they can adjust it to reflect human ethics.
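One simple form of this input/output analysis is a sensitivity probe: nudge one input at a time and watch how the output moves. The sketch below assumes the model is callable as a plain function; `black_box` here is an invented stand-in, not a real trained model.

```python
# Minimal sensitivity probe: perturb one input at a time and measure
# how strongly the opaque model's output responds.
def black_box(features):
    # Stand-in for an opaque model; in practice this would be a
    # trained network whose internals are not inspectable.
    return 0.8 * features["a"] + 0.15 * features["b"] + 0.05 * features["c"]

def sensitivity(model, baseline, delta=0.01):
    """Estimate how much each feature drives the output."""
    base_out = model(baseline)
    impact = {}
    for name in baseline:
        nudged = dict(baseline, **{name: baseline[name] + delta})
        impact[name] = (model(nudged) - base_out) / delta
    return impact

baseline = {"a": 0.5, "b": 0.5, "c": 0.5}
print(sensitivity(black_box, baseline))
```

Even without seeing the model's internals, the probe reveals that feature "a" dominates the output, giving developers a starting point for checking whether that influence is appropriate.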

"There are plenty of times when the AI is wrong and the humans are right. You have to prepare for that," Eggers said.

Clarifying black box AI means building an explainable methodology at human scale: simple, understandable data science that outlines the program's decision-making process, including which factors were weighed and how heavily.

"When we think about explanations, we need to think about what is appropriate for a human cognitive scale," said Brian D'Alessandro, director of data science at SparkBeyond. "Most people can fully consume a rule that has five or six different factors in it. Once you get beyond that, the complexity starts to get overwhelming for people."

Figure: White and black box testing models, two popular testing models that differ in explainability.

The future of AI and ethics

Recently, there's been a lot of discussion about AI bias, ethical concerns and accountability. Vendors, engineers and users can do their part, but these problems are hard to spot and stamp out in a black box AI application.

Black box AI makes it harder for programmers to filter out inappropriate content and measure bias, because developers can't know which parts of the input are weighed and analyzed to produce the output.
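Even when the internals are opaque, a model's decisions can still be audited for group-level disparity. Below is a minimal sketch of one common check, a disparate-impact ratio comparing favorable-outcome rates between two groups; the decision records are invented for illustration.

```python
# Audit a black box model from the outside: compare approval rates
# across groups in its recorded decisions. Data invented.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(rows, group):
    members = [r for r in rows if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

def disparate_impact(rows, protected, reference):
    """Ratio of the protected group's approval rate to the reference's."""
    return approval_rate(rows, protected) / approval_rate(rows, reference)

ratio = disparate_impact(decisions, "B", "A")
print(round(ratio, 2))  # a common rule of thumb flags ratios below 0.8
```

This kind of outcome audit doesn't explain why the model discriminates, but it detects that it does, which is often the first signal that a black box needs to be opened up.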

Sometimes the data was collected in a way that makes it biased, and black box functionality creates the risk that these problems are replicated and magnified, D'Alessandro said.

In one example, Amazon created an AI recruiting tool that analyzed 10 years of applications in order to create a system that automatically identified characteristics of high-performing employees and scored new candidates against those standards. The tool made headlines in 2018 when it was revealed that, due to societal influences such as wage gaps and gender bias in technology jobs, the algorithm favored male applicants.

Now, companies using AI are left individually searching for ethical guidelines for AI data collection and deployment. In May, the European Union released a standard guideline defining ethical AI use that experts are hailing as a step toward tackling the black box AI problem.

"The [EU's] Ethics Guidelines for Trustworthy AI is path breaking, not just for Europe, but for the world at large," Narayanan said. "The guidelines will nudge businesses to start to trust the specialists and veer away from generalist services providers that have rebadged themselves as AI companies."
