
How Getty Images reduces bias in AI algorithms to avoid harm

In applications from internal job recruiting to law enforcement technology, AI bias is a widespread issue. Here's what enterprises can do to reduce bias in training and deployment.

The rapid development of AI has brought with it a sharp rise in bias in AI algorithms. Major companies are facing controversies over the perception that their AI tools contain biases, and developers are now warning of the potential for human biases to creep in through data collection, training and deployment.

According to Dan Gifford, senior data scientist at Getty Images, bias in AI can only be mitigated through a series of validation techniques. His team has seen success in reducing bias in AI algorithms by hiring diverse teams, rigorously testing applications and keeping human workers involved at every stage of the AI development process.

How does bias in AI algorithms negatively affect research and enterprise use?


Dan Gifford: Biases can reveal themselves in all kinds of ways -- from the way we hire and build teams, to the training sets we construct to teach algorithms to identify patterns. These biases sometimes have small effects on algorithms, but they can also be so large that your company winds up on the front page facing allegations of discrimination in your product. Even though the encoded bias in an algorithm may be completely unintentional, that does not change the negative effects it can have on the happiness, usability or safety of the communities that adopt the technology. If those communities lose trust, the appetite for further research and future adoption of the technology in enterprise settings decreases.

How does Getty navigate AI biases?

Gifford: We believe that bias can never be completely removed from AI systems so long as we are attempting to have AI replicate human-defined skills based on human-created data. However, there are a variety of techniques and efforts that can and should be made to minimize the risk of large biases impacting our algorithms. The first is hiring and working with diverse teams -- in every way a team can be diverse -- to help uncover and avoid the biases any one person on that team may have. The second is developing a set of validation tools that can help identify imbalances in the training and testing sets an algorithm learns from. Finally, we make sure the algorithm is tested with a diverse set of users so that anything that slips through the cracks is identified early and can be corrected promptly.
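Gifford does not detail Getty's internal tooling, but a minimal sketch of the kind of imbalance check he describes might look like the following: it measures how often each value of a labeled attribute appears in a data set and flags values whose share deviates sharply from a chosen reference distribution. The function names, the 10% tolerance and the 50/50 reference are illustrative assumptions, not Getty's actual implementation.

```python
from collections import Counter

def attribute_distribution(labels):
    """Relative frequency of each attribute value in a labeled data set."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

def flag_imbalances(labels, reference, tolerance=0.10):
    """Flag attribute values whose observed share deviates from a reference
    distribution by more than `tolerance` (an illustrative threshold)."""
    observed = attribute_distribution(labels)
    flags = {}
    for value, expected in reference.items():
        actual = observed.get(value, 0.0)
        if abs(actual - expected) > tolerance:
            flags[value] = {"expected": expected, "observed": round(actual, 3)}
    return flags

# Example: compare a training set's gender labels against a 50/50 reference.
training_labels = ["female"] * 880 + ["male"] * 120
print(flag_imbalances(training_labels, {"female": 0.5, "male": 0.5}))
# {'female': {'expected': 0.5, 'observed': 0.88}, 'male': {'expected': 0.5, 'observed': 0.12}}
```

The reference distribution itself is a judgment call, which is why Gifford pairs automated checks like this with diverse teams and human review.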

Should there be an independent training method or set of guidelines for mitigating bias in AI algorithms?

Gifford: This is difficult to answer, because defining when bias becomes harmful is a challenging task. For instance, in the United States, almost 90% of nurses are women. If an algorithm accepting applications for a nursing school favors female candidates over male candidates, that is a clear case of not just bias, but discrimination. Similarly, when you search for "nurse" on the Getty Images website, roughly 90% of the results show female nurses. But is that the right percentage? Would a 50/50 male/female split be better? Here, there isn't a right answer, and regulations likely can't be specific enough to address every case. I believe the area where regulation will help is requiring AI algorithms to better explain their decisions, which today is not a requirement in many instances.

For as much good as AI can do, there is a lot of anxiety over its malicious applications. Why? How can companies avoid malicious use of data?

Gifford: With algorithms that learn from data sets larger than any human can inspect and that produce outputs that are difficult to cross-examine, any company that prioritizes the pure efficiency of these algorithms over concerns such as bias or customer deception is going to find itself defending its trustworthiness before long. There has been a long-held belief that removing the human from the decision-making process will lead to more objective, data-driven choices. However, it didn't take long to discover that the big data movement many saw as the hero can just as easily act the villain when that data is biased or incomplete. When it comes to mitigating these effects, common sense usually prevails when deciding how much to rely solely on the output of algorithms in place of human judgment.

AI technology is not inherently problematic or helpful -- it's the application. One example, deepfakes, is a result of advancements in generative adversarial networks (GANs), which can generate images or portions of images from a learned representation of data. In the case of deepfakes, GANs can be used to control the facial motions of someone in a video and, combined with audio, make them say or do something they didn't. GANs are a relatively new advancement in the field of AI and have been a large step forward in creating synthetic data in many domains.
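Gifford's description of a GAN corresponds to a two-player training loop: a generator produces synthetic samples from random noise while a discriminator learns to separate them from real data, and each improves against the other. The PyTorch sketch below is a toy illustration on 2-D points rather than images; the network sizes, hyperparameters and data are assumptions chosen only to show the adversarial setup, not a deepfake implementation.

```python
import math
import torch
import torch.nn as nn

# Toy generator and discriminator working on 2-D points instead of images,
# purely to illustrate the adversarial setup (sizes are arbitrary assumptions).
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for real training data: points on the unit circle.
    angles = torch.rand(n, 1) * 2 * math.pi
    return torch.cat([torch.cos(angles), torch.sin(angles)], dim=1)

for step in range(2000):
    # 1) Train the discriminator to label real samples 1 and generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake), torch.zeros(fake.size(0), 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to produce samples the discriminator labels as real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

A deepfake pipeline replaces the toy data with face images and far larger networks, but the underlying adversarial loop, and the reason the output can be so convincing, is the same.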

We can use facial recognition as another example. All the problems with that technology -- from racial or gender bias to controversial military and surveillance use -- are human-generated problems, not inherent to the system itself. In reinforcement learning, the situation is the same. Technologies like DeepMind's AlphaGo and its successors have managed to best some of the top Go players in the world, allowing those players to learn from the algorithm's creative play and employ new strategies. Can reinforcement learning be deployed in harmful ways? Absolutely. But that is a human decision, rather than something inherent to the algorithm.
