5 ways AI bias hurts your business

A biased AI system can lead businesses to produce skewed, harmful and even racist predictions. It's important for enterprises to understand the power and risks of AI bias.

As a visiting research scientist at Spotify, Chirag Shah saw the business case for addressing biases in AI-based systems.

Shah said he and his colleagues determined that the algorithms Spotify used to make music recommendations could evolve to only recommend the most popular songs, thereby shutting out new and lesser-known artists, as well as less-popular tunes that users would still like.

Over time, the algorithms would serve up a repetitive selection of songs, leading users to become bored with the service, determined Shah, an associate professor in the Information School at the University of Washington.

The limited recommendations would also hurt artists and their talent management companies and ultimately hurt Spotify's business model, which relies on satisfied users.
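To see how such a feedback loop can take hold, consider a minimal, hypothetical sketch -- not Spotify's actual system -- of a recommender that ranks tracks purely by play count. Because recommended tracks get played and played tracks get recommended, the early leaders lock in:

```python
# Hypothetical catalog: three well-known hits and seven lesser-known tracks.
play_counts = {f"hit_{i}": 1000 for i in range(3)}
play_counts.update({f"indie_{i}": 10 for i in range(7)})

def recommend(counts, k=3):
    """A naive popularity-based recommender: surface the k most-played tracks."""
    return sorted(counts, key=counts.get, reverse=True)[:k]

# Each listening session plays whatever was recommended, reinforcing its lead.
for _ in range(1000):
    for track in recommend(play_counts):
        play_counts[track] += 1

print(recommend(play_counts))  # the same three hits, every time
# Lesser-known tracks never accumulate plays, so they can never break in.
```

Real recommender systems typically counteract this with some form of exploration, deliberately surfacing less-played items so the feedback loop can't fully close.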

Spotify leaders agreed and took action; Shah said executives everywhere should do the same.

"We should be addressing the issue of bias and fairness in AI. It's not only the right thing to do, but it's helpful for the business," Shah said.

What is AI bias?

For businesses, AI bias is a real and significant issue that stems from how artificial intelligence is constructed.

AI relies on humans to learn. An AI-powered computer system runs on human-designed algorithms, trains with data sets selected by humans and uses data to perform tasks assigned by humans.

Artificial intelligence systems are designed to learn as they run through their tasks, quickly detecting patterns in huge amounts of data and then using those insights to recommend or perform an action. The system uses the results of those actions, its ongoing performance and analysis from any other new data sources to refine the whole process continuously.

Yet flaws in this process can create bias -- skewed results that lead to inaccurate predictions.

AI bias often originates in training data that human prejudices or assumptions have skewed. When certain types of data are overrepresented, the system puts more emphasis on that data instead of assigning equal weight to different data points.
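A stripped-down, hypothetical example shows how overrepresentation alone can skew results. A model trained mostly on one group can look reasonably accurate overall while failing almost entirely on the underrepresented group:

```python
from collections import Counter

# Hypothetical training set: 95 examples labeled "A", only 5 labeled "B".
training_labels = ["A"] * 95 + ["B"] * 5

# A naive model that always predicts the most common label it saw in training.
majority_class = Counter(training_labels).most_common(1)[0][0]

def predict(example):
    return majority_class  # ignores the input entirely

# On a balanced test set, the model is 50% accurate overall -- but it is
# 100% accurate on "A" and 0% accurate on "B": the underrepresented
# group absorbs all of the error.
test_labels = ["A"] * 50 + ["B"] * 50
accuracy = sum(predict(x) == x for x in test_labels) / len(test_labels)
print(f"Overall accuracy: {accuracy:.0%}")  # 50%
```

An aggregate accuracy number can therefore hide exactly the kind of bias the experts describe, which is why model results are often broken out by subgroup.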

"It's any insight that doesn't really represent reality," explained Shay Hershkovitz, SparkBeyond's research lead for its climate change initiatives and an expert in AI bias.

These biased results can have costly consequences for organizations -- from legal ramifications to lost financial opportunities -- so understanding and managing biases has become a business imperative for all enterprise leaders adopting AI.

"You have to have awareness of potential biases and how to reduce them, because the reduction of bias means more accurate models and more accuracy means better business outcomes," said Yoav Schlesinger, director and principal with the ethical AI practice at Salesforce.

AI bias in action

AI experts pointed to a Microsoft-designed chatbot called Tay as an example of how bias works and how it can hurt a business.

Microsoft used machine learning and natural language processing technologies to create Tay, a chatbot meant to learn from and engage with the online community in the persona of a teenage girl, and released it on Twitter in 2016.

Online trolls quickly bombarded the bot with racist, misogynistic and anti-Semitic language. The overrepresentation of hate speech and a lack of rules preventing the chatbot from learning and repeating that language quickly led Tay to post harmful messages.

As a result, Microsoft suspended the experiment the same day.

Experts said that incident shows how AI bias can hurt companies. A biased AI system can damage a company's credibility and reputation while also producing unfair and harmful or useless results.

How AI bias hurts business

Experts highlighted five specific ways AI bias can be detrimental to an organization:

  1. Ethical issues. Law enforcement and some private entities already use facial recognition technology for identification, even though the technology has proven problematic. Police have arrested and jailed people whom the technology falsely identified, raising questions about the ethics of facial recognition and leading some local governments and corporations to ban its use. Google also had a very public problem with AI: when it first rolled out its Photos app in 2015, the app's photo recognition technology tagged two Black people as gorillas. Google drew further criticism when it decided to address the problem by removing tags related to primates rather than developing technology that could make accurate distinctions.
  2. Reputational damage. Amazon's AI-infused hiring practices garnered some negative press back in 2018 when it came to light that its computer models -- which had been trained predominantly on resumes submitted by men -- were biased against female applicants. Society has become less tolerant of such missteps, Hershkovitz said. He added that many -- particularly those in the millennial and Generation Z demographics -- are willing to ostracize companies for their errors.
  3. Lost opportunities. AI is often used to help businesses forecast customer demand so they can have adequate supplies of the right items for the targeted audiences. But biases can throw off such equations, leaving companies with too many or too few in-demand products and services.
  4. Lack of trust from users. Employees who see their company's AI investments deliver poor results won't trust the technology and therefore won't use it, even if AI engineers address biases and improve the processes. Consequently, executives will find that it takes longer to incorporate AI-generated insights into decision-making and, thus, longer to see returns on their AI-related investments. "The biggest barrier to AI success is AI adoption, and the biggest barrier to AI adoption is trust," said Svetlana Sicular, an analyst at Gartner.
  5. Regulatory and compliance problems. Consider, for example, a financial institution that uses a problematic algorithm or data that introduces race or gender into lending decisions. The institution may not even be conscious that it's doing so, Sicular said; she explained that some information, such as names, could serve as proxies for categorizing and identifying applicants in illegal ways. Yet even if the bias is unintentional, it still puts the organization at odds with regulatory requirements and could lead to certain groups of people being unfairly denied loans or lines of credit. A minimal sketch of one such proxy check appears after this list.
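As a rough illustration of the proxy problem Sicular describes, here is a minimal, hypothetical sketch -- all data and column names invented -- of one basic audit: comparing outcomes across groups even when the protected attribute was never a model input:

```python
import pandas as pd

# Hypothetical loan decisions. "zip_code" is not a protected attribute,
# but it can act as a proxy for one (the invented "group" column).
df = pd.DataFrame({
    "zip_code": ["10001", "10001", "10001", "20002", "20002", "20002"],
    "group":    ["x",     "x",     "x",     "y",     "y",     "y"],
    "approved": [1,       1,       1,       0,       0,       1],
})

# One simple fairness check: compare approval rates across groups.
rates = df.groupby("group")["approved"].mean()
print(rates)  # group x: 100% approved, group y: ~33% approved

# A large gap flags a potential disparate-impact problem even though
# "group" never appeared as a model input -- zip_code carried the signal.
# The "four-fifths rule" is one common threshold for flagging such gaps.
print("Selection-rate ratio:", rates.min() / rates.max())
```

Checks like this don't prove discrimination on their own, but they surface the kinds of gaps that regulators look for.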
