3 ways to create an AI ethics framework for responsible tech

AI can often reflect the biases and limits of its human developers. Experts say diversity, review boards and a strong AI ethics framework will lead the way toward ethical AI.

The message from March's MIT EmTech Digital conference in San Francisco was clear: AI is here to stay, and we must start considering the ethical implications of emerging technology. Executives from Microsoft, Amazon and Autodesk took the stage to discuss major AI ethics concerns, along with strategies for mitigating AI's negative effects. AI-generated content, conversational AI and emotion interpretation are leading the future of AI development, and all demand ethical guidelines for their use.

In the short run, developing an ethics framework, diversifying teams and collaborating across companies will generate conversation around strong AI ethics. In the long run, companies must identify specifically how AI can benefit or hinder workers, users and the communities they serve in order to make ethical decisions with a sustainable impact.

Ethics review boards

As major companies begin assigning teams to study ethical AI use, Microsoft's standing AI ethics committee has been working with other companies to identify solutions to AI's technological problems. Microsoft is developing tools to detect fake news by tracking how information has changed from its original sources, and it is working with Media Forensics, a program of the U.S. government agency DARPA, to help identify manipulated media distributed on the internet.

"We need to educate consumers about where their content comes from," said Harry Shum, vice president of AI & research at Microsoft at his "State of AI" talk. "We need a multidisciplinary effort at the center of the development cycle."

Microsoft doesn't ship products without adequate security, privacy and accessibility reviews, and an AI ethics review will soon be added to that list of requirements, Shum said. Microsoft evaluates products across several categories of ethical review -- reliability, safety, fairness, privacy, security, inclusiveness, transparency and accountability -- which are analyzed by a developer ethics committee of engineers, legal staff and business managers.
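
One way to operationalize such a review is as an explicit release gate that blocks shipping until every category has a sign-off. The sketch below is purely illustrative -- the category names come from Shum's list, but the structure, names and logic are assumptions, not Microsoft's actual process:

```python
from dataclasses import dataclass, field

# Review categories taken from Shum's list; everything else is hypothetical.
CATEGORIES = [
    "reliability", "safety", "fairness", "privacy",
    "security", "inclusiveness", "transparency", "accountability",
]

@dataclass
class EthicsReview:
    """Tracks per-category sign-off before a product can ship."""
    signoffs: dict = field(default_factory=lambda: {c: False for c in CATEGORIES})

    def approve(self, category: str, reviewer: str) -> None:
        if category not in self.signoffs:
            raise ValueError(f"Unknown review category: {category}")
        self.signoffs[category] = True
        print(f"{reviewer} signed off on {category}")

    def ready_to_ship(self) -> bool:
        # The release gate: every category needs an explicit sign-off.
        return all(self.signoffs.values())

review = EthicsReview()
review.approve("fairness", "ethics-committee")
print(review.ready_to_ship())  # False until all eight categories pass
```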

Shum has been struggling to create an appropriate ethical framework for addressing the potential for humans to form emotional attachments to chatbots, and for moderating chatbot conversations appropriately. He is also looking at concerns about how AI-enabled technologies -- such as facial recognition -- could be misused. It's hard to control how a tool is used once it has shipped, and if Microsoft prohibits certain uses, it risks losing business to unregulated competitors. These conundrums require a framework of cross-enterprise responsibility for ensuring these technologies are used appropriately. Good intentions and trust alone may not be enough to create a solid ethical foundation for AI development; some regulation may be required to address companies that place profits above ethical considerations and, in doing so, create new problems.

Start with a press release

When developing AI, take a cue from Amazon and preempt user concerns in a press release. Amazon's strategy for releasing its virtual assistant Alexa was to start with a press release that addressed anticipated consumer anxieties, said Rohit Prasad, vice president and head scientist for Alexa AI.


Prasad's team knew that privacy was users' biggest concern, so the press release -- published years before the assistant itself -- described how notifications and data sharing would work on the platform. Feedback on that release helped guide investment in areas like the notification LED on the device, and it shaped Amazon's framework for limiting data collection from people's conversations.

Since then, Amazon has grappled with ethical AI issues such as how Alexa should respond to sexually explicit or demeaning inquiries. Prasad's team is working with sociologists and psychologists to improve Alexa's understanding, with the goal of teaching her the intonation and language cues that let her decline inappropriate requests.

Another ethical concern is that children naturally learn speech patterns from Alexa and turn to it for information. Prasad and his team then had to look at tailoring responses to the user. If a child asks a question about the solar system, Alexa generates a more basic response, appropriate for a child's level of understanding rather than an adult's.
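
The behavior Prasad describes amounts to routing a query to an audience-appropriate answer. As a minimal sketch -- the function names, detection logic and sample answers below are hypothetical, not Amazon's implementation -- it might look like this:

```python
def detect_audience(voice_profile: dict) -> str:
    # Hypothetical: a real system might infer age from an enrolled
    # voice profile or a household's child account settings.
    return "child" if voice_profile.get("is_child_account") else "adult"

ANSWERS = {
    # The same fact, phrased for different levels of understanding.
    ("solar_system", "child"): "The solar system is the sun and the "
                               "eight planets that travel around it.",
    ("solar_system", "adult"): "The solar system comprises the sun and all "
                               "objects gravitationally bound to it, including "
                               "eight planets, dwarf planets and moons.",
}

def answer(topic: str, voice_profile: dict) -> str:
    # Pick the response phrased for the detected audience.
    audience = detect_audience(voice_profile)
    return ANSWERS[(topic, audience)]

print(answer("solar_system", {"is_child_account": True}))
```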

Alexa also offers opt-in and opt-out features to consumers who are concerned about a constantly recording microphone. From whisper mode to broken-glass detection, consumers can choose their level of comfort and configure the technology accordingly.
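
One way to picture this opt-in model is as a per-device settings object in which every sensitive capability defaults to off, so enabling it is always an explicit consumer choice. The field names here are illustrative, not Amazon's actual API:

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Illustrative per-device settings: each capability defaults to off,
    so turning one on is an explicit opt-in."""
    whisper_mode: bool = False
    glass_break_detection: bool = False
    store_voice_recordings: bool = False

settings = PrivacySettings()
settings.whisper_mode = True  # the user opts in to just this feature
print(settings)
```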

"We want more control in the hands of consumers," Prasad said. 

Create diversity

Implementing ethical AI will also require bringing diversity into development teams. Companies risk building further bias into AI by thinking only about results, not about how AI can affect different populations.

There are many things that matter to AI algorithms that we don't know how to measure properly, said Rediet Abebe, a researcher at Cornell University and cofounder of Black in AI. As a result, new algorithms may not adequately account for bias or discrimination. Without diverse teams, it is harder to engage different communities and to spot the biases, errors and disadvantages affecting particular populations. Measures of economic welfare used in algorithms that analyze opportunities for assistance, housing and education tend to focus on income -- but a growing body of research shows that income alone does not capture the economic disadvantage that marginalized communities face.
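
Abebe's point about income can be made concrete with a toy comparison (all numbers and weights below are invented for illustration, not drawn from that research): two households with identical incomes can face very different levels of disadvantage once factors such as debt and housing instability are counted.

```python
# Toy illustration only: the weights and data are invented.
def income_only_score(h: dict) -> float:
    return h["income"]

def multidimensional_score(h: dict) -> float:
    # Lower is more disadvantaged; burdens reduce the welfare signal.
    return h["income"] - 0.5 * h["debt"] - 20_000 * h["housing_insecure"]

a = {"income": 40_000, "debt": 0,      "housing_insecure": 0}
b = {"income": 40_000, "debt": 30_000, "housing_insecure": 1}

# Income alone ranks the two households as identical...
print(income_only_score(a) == income_only_score(b))          # True
# ...while the broader measure surfaces household b's disadvantage.
print(multidimensional_score(a), multidimensional_score(b))  # 40000.0 5000.0
```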

Creating a path to ethical AI also requires input from a sampling of the workforce expected to use the new tools -- not just developers and executives. AI developers can come up with clever ideas, but if AI does not fit smoothly into existing jobs, the technology will not be adopted.

"You can partner with the ecosystem and see how to work with it, or shove it into the ecosystem and [employees] are going to push back," said Andrew Anagnost, president and CEO of Autodesk.

