
It's time to address AI ethics

Enterprises need to focus on creating and adopting AI ethical guidelines, especially for emerging technologies such as facial recognition and home assistants.

It's 2020. Modern-day AI has been around for years now, and enterprises are continuing to automate and augment their business processes with AI technology.

As enterprises become more comfortable using AI technologies, they are turning their efforts from adopting them to making them reliable and safe to use. To do so, enterprises are beginning to focus on AI ethics.

AI ethics

Ethical guidelines for data and privacy aren't new to AI, noted Ed McLaughlin, president of operations and technology at Mastercard. While the practices and capabilities of AI are relatively recent, the principles of ethics, security and responsibility have long been established.

"Ethics aren't situational," McLaughlin said. "You shouldn't have to think about what you believe just because you have new capabilities."

Enterprises should build explainability, privacy and security into their models from the start. Companies need to ensure they are benefitting consumers who entrust them with their data and lay out easy-to-read, understandable data privacy policies, McLaughlin said.

Enterprises that don't consider ethics can open themselves up to legal, public relations and even model accuracy risks. AI ethics, then, can help mitigate risks, said Beena Ammanath, executive director of the Deloitte AI Institute.

Ammanath helped produce Deloitte's 2020 "State of AI in the Enterprise" report, which surveyed 2,737 IT and line-of-business executives in nine countries. Most respondents ranked managing AI-related risks as a top challenge for their AI initiatives, with many reporting "major" or "extreme" concerns regarding potential strategic, operational and ethical risks in AI.

[Figure: Components of digital ethics]

About 56% of respondents said their organization is slowing AI adoption due to emerging risks in AI, including project failures, misuse of personal data, ethical problems and regulatory uncertainty. The same percentage of respondents said they believe public perceptions will either slow or stop the adoption of some AI technologies.

Part of the concern about AI is due to several high-profile examples of AI ethical problems, as well as several Senate hearings on AI and data ethics over the past few years, Ammanath said.

Business executives and policymakers also have a firmer grasp of AI technologies than they did even a few years ago, and enterprises have significantly increased their rate of AI adoption over the past few years.

Policymakers and enterprises have a trust gap, according to a survey of 71 policymakers and more than 280 global organizations conducted by global professional services network EY in collaboration with The Future Society.

Policymakers vs. enterprises

As for regulation of AI, policymakers mostly don't trust the intentions of enterprises.


While most enterprises think self-regulation of AI by industry is better than government regulation, most government policymakers disagree. Enterprises also largely see themselves as investing in ethical AI, even when it reduces profits. Lawmakers, however, appear increasingly unwilling to let business regulate AI by itself.

Both enterprises and lawmakers need to work together to bridge this trust gap, said Nigel Duffy, EY global AI leader. They should also look ahead to emerging AI ethical risks, such as the privacy risks posed by facial recognition, human emotion analysis and home assistants.

Still, Duffy noted, among both enterprises and policymakers, "there is an increasing realization that a framework is needed."

The groups will likely increase their focus on emerging AI ethical risks over the next two years, he said.

Enterprises should work more closely with policymakers and regulators to create stronger AI ethical standards.

Mastercard formed a group of regulators, business executives and technology professionals to help guide the credit card giant's AI practices.

"That was very, very helpful for us," McLaughlin said.


Join the conversation

Interesting title. I've been addressing it for three years on www.diginomica.com/author/neil-raden, leading workshops and authoring the landmark report "Ethical Use of Artificial Intelligence for Actuaries," sponsored by the Society of Actuaries. But one thing that puzzles me is: why now? Ethics in automatic inferencing has been around for decades. Another thing I've noticed is that clients tell me their people come back from our class and workshop and have a grasp of the material, but it doesn't change their work. Reason? Too much ethics, not enough practicum or forbearance.