

DataRobot releases new feature to detect AI bias

DataRobot is the latest AI vendor to address AI bias, releasing a new feature to help organizations detect and prevent bias in their models.

A new feature in DataRobot's automated machine learning platform allows users to detect bias in their models.

The Bias and Fairness Testing feature, introduced Dec. 15, can automatically identify model bias and its source and then suggest how users can prevent similar bias in the future.

Bias detection

The feature provides users with a guided workflow to help them choose the most appropriate fairness metrics for their model, such as whether the predictions themselves should be balanced across groups, said Nenshad Bardoliwalla, senior vice president of product at DataRobot.

"There are different ways to assess fairness and insist on fairness," he said.

The Bias and Fairness Testing feature then provides users with visual insights to surface its results.
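To illustrate the kind of fairness metric such a guided workflow might let users choose, here is a minimal sketch of one widely used measure, the demographic parity ratio, which checks whether positive predictions are balanced across groups. This is a generic illustration, not DataRobot's actual implementation or API; the function name and example data are hypothetical.

```python
def demographic_parity_ratio(predictions, groups):
    """Ratio of positive-prediction rates between groups.

    A value near 1.0 suggests balanced predictions; values well
    below 1.0 indicate one group receives favorable outcomes
    less often than another.
    """
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)  # positive-prediction rate per group
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Hypothetical loan-approval predictions (1 = approve) for two groups
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_ratio(preds, groups))  # group B is approved far less often
```

Here group A is approved 75% of the time and group B only 25%, giving a ratio of 1/3, the kind of imbalance a bias-testing workflow would flag for investigation.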

DataRobot's new feature comes as AI vendors put more emphasis on rooting out bias. Earlier this month, AWS released Amazon SageMaker Clarify, a new tool designed to help developers better understand how their models work in order to mitigate bias.

These new tools can help enterprises create fairer AI models and build more profitable, accurate ones.

"AI fairness is no longer just an ethical issue," said Ritu Jyoti, program vice president of AI research at IDC. "It is fast becoming an economic and competitive imperative. Organizations can be more profitable and productive by mitigating biases."

[Image: DataRobot releases a new Bias and Fairness feature]

"The costs of bias in terms of compliance efforts or as fines or damage to reputation are substantial," she said.

Still, products to eliminate AI bias have been around for a while. Two years ago, IBM released Watson OpenScale, a model management platform that can help users explain, govern and scale their AI models.


According to Forrester analyst Mike Gualtieri, OpenScale is "the most impressive tool in the market for bias and fairness testing."

DataRobot's Bias and Fairness Testing feature builds on some of the Boston-based AI vendor's existing bias detection tools, including Prediction Explanations, which can surface factors that contribute to model outcomes.

With Bias and Fairness Testing, "now we actually help you in a much more practiced and diagnostic way," Bardoliwalla said.

Jyoti noted that "this capability by DataRobot … is certainly useful to their customers to help drive faster and safer realization of business value."

Alongside Bias and Fairness Testing, DataRobot released several other new or updated features, including Portable Prediction Servers, which enable users to integrate any model with outside applications, including those in AWS, Azure, Kubernetes and Google Cloud.

