Expert panel warns developers to beware of AI bias

Despite new development tools to build AI applications, developers must be wary of bias creeping into their systems.

Artificial intelligence systems are only as good as the data fed into them, which often results in bias of one form or another.

The advent of AI systems that mimic human thinking and behavior has caused many to question the ethical use of the technology, as some early systems showed signs of bias regarding sex, race, social standing and other issues. Developers must beware of AI bias in the systems they build, said a panel of experts at a recent IBM developer event.

AI system developers must watch for the bias that is almost certain to appear in the current crop of AI applications, said Francesca Rossi, a distinguished researcher in AI ethics at IBM's Thomas J. Watson Research Center in Yorktown Heights, N.Y.

IBM researchers are working on tools to help developers detect this kind of bias before applications launch, Rossi said, because "AI systems can replicate or enhance our own bias."
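Rossi did not describe IBM's tools in detail, but one simple pre-launch check of the kind she alludes to is the "four-fifths rule": flag a model when one group's favorable-outcome rate falls below 80% of another's. A minimal sketch in Python, on made-up loan-approval data:

```python
# Minimal sketch of a pre-launch bias check using the disparate impact
# ratio (the "four-fifths rule"). Illustrative only -- not IBM tooling;
# the data below is made up.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Hypothetical model decisions (1 = loan approved) by applicant group.
outcomes = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact(outcomes, groups, privileged="a")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths threshold
    print("Possible bias: group 'b' is approved far less often.")
```

Production fairness toolkits compute many such metrics across many group definitions; the ratio above is only the simplest starting point.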

"We allocate a number of resources to this issue; we're working on it right now," said Yuri Smirnoff, an AI and optimization expert at Facebook. Smirnoff noted that Facebook analyzes its data sets to try to root out bias.

Moreover, badly designed statistical models can introduce AI bias, said Alex Smola, director of machine learning and deep learning at AWS.

"There are plenty of places where we see undesirable systems, but it's often the result of sloppy engineering," so it's not always necessary to hit things with the "heavy hand" of social issues, Smola said.

Yet bias in AI systems is real, Rossi said. Many see the makeup of the teams building these systems as part of the problem.

That's a key reason diversity has emerged as an important topic on the AI landscape, said Sam Charrington, founder and principal analyst at CloudPulse Strategies and host of This Week in Machine Learning & AI.

There is a growing recognition that when machine learning algorithms are trained on biased data, they take on these biases, he said. Famous examples of algorithmic bias include natural language models that identify "homemaker" as the likely profession of women, and computer vision models that tag black people as gorillas, Charrington added.
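The "homemaker" example traces to research on word2vec-style embeddings (Bolukbasi et al., 2016) and can be reproduced with standard tools. A sketch using the gensim library, assuming the pretrained Google News vectors have been downloaded locally:

```python
# Sketch: probing a word embedding for occupational gender bias via
# vector arithmetic (programmer - man + woman). Assumes the pretrained
# Google News word2vec file is available at the path below.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

results = vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3)
for word, score in results:
    print(f"{word}\t{score:.3f}")
# On these vectors, "homemaker" ranks at or near the top, reflecting
# gender associations absorbed from the training corpus.
```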

"The argument is that with greater diversity in the teams creating the models, there will be a greater awareness of these issues, and they'll be less likely to occur," he said.

Diversity in AI is important in two main respects: diversity in AI talent and the avoidance of bias in AI training data, said Kathleen Walch, co-founder and senior analyst at Cognilytica, an analyst firm specializing in AI issues and based in Ellicott City, Md.

"If the researchers and developers developing our AI systems are themselves lacking diversity, then the problems that AI systems solve and training data used both become biased based on what these data scientists feed into AI training data," she said. "Diversity brings about different ways of thinking, different ethics and different mindsets. Together, this creates more diverse and less biased AI systems.  This will result in more representative data models, diverse and different problems for AI solutions to solve, and different use cases feed to these systems if there is a more diverse group feeding that information."

Indeed, diversity in AI and data science teams is important for the same reasons as in other parts of organizations and society at large -- to provide broader perspective, and because diverse teams are smarter, more creative and higher-performing, Charrington said.

"Fighting algorithmic bias isn't the only thing people are talking about when they're talking about diversity in AI, though," he said.

Some AI developers recognize that diversity is important as a way to ensure that the AI systems they create are a reflection of society's ideals. Others are focused on the transformational opportunity that machine learning and AI offer various users and their communities, and want to ensure that all of those communities are prepared to take advantage of it. Similarly, some see the ability to wield AI as the next potential digital divide, and view diversity in the field as a way to avoid this, Charrington said.

"The need for diversity is almost a foregone conclusion now," said Erin McKean, an IBM developer advocate and American lexicographer. "I think it's hard to argue that monocultures make better stuff in general."
