Enterprises work toward AI trust and transparency

If ignored, a lack of trust in AI algorithms could diminish user adoption. To mitigate this risk, enterprises are working to make their applications more transparent and explainable.

As artificial intelligence becomes increasingly ubiquitous, there is a natural hesitation to embrace some of its uses and applications. Many see AI as futuristic science fiction and something to be feared. To address these worries, both real and imagined, some companies are trying to boost AI trust and transparency, and governments are considering laws and regulations that would require greater explainability.

Being transparent about how AI models arrive at decisions is crucial to gaining people's trust. Without trust, users won't embrace systems that embed AI capabilities. This is why so many organizations are now taking active steps to explain how these machine learning-based systems work. The goal is to help people recognize AI as a tool for good, rather than a soulless machine trained to conspire against them.

Building trust in AI systems

As AI becomes more ingrained in products with decision-making capabilities, companies are making efforts to be more open about what the systems actually do and how they operate. This means taking the time to share information about how algorithms work and details about the data used to train machine learning models. The goal is to help people become more comfortable with the technology and improve AI trust and transparency.

Explainability also makes it easier for users to learn how decisions are made and how these systems work. Some companies are trying to show not only that their training data is reliable, but also that their systems produce fair and trustworthy results. There's a lot of discussion today about the use of biased training data, which has spurred some organizations to be more open about how their systems collect and evaluate data in an effort to stay ahead of potential criticism.
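One way an organization might demonstrate that openness is by publishing simple checks of its training data. The sketch below is a minimal, illustrative example, assuming a tabular loan dataset; the file name and the "gender" and "approved" columns are hypothetical stand-ins, not any vendor's actual pipeline.

```python
# A minimal sketch of a training-data check an organization might share:
# comparing outcome rates across groups to surface potential bias.
# The CSV file and column names below are hypothetical.
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Return the mean outcome (e.g., approval rate) for each group."""
    return df.groupby(group_col)[outcome_col].mean()

if __name__ == "__main__":
    training_data = pd.read_csv("loan_training_data.csv")  # hypothetical dataset
    rates = outcome_rates_by_group(training_data, "gender", "approved")
    print(rates)
    # A large gap between groups does not prove the model will be unfair,
    # but it flags data worth explaining or rebalancing before training.
```

A gap surfaced by a check like this doesn't settle the fairness question on its own, but documenting and sharing such results is one concrete form the transparency described above can take.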

As AI has the ability to alter many industries and the way we work, live and interact, lawmakers are paying more attention to regulating AI systems. While there are doubts about how realistic AI regulations might be, some are quick to note that the technology could have dangerous implications if left unregulated. Such concerns include the use of AI in autonomous vehicles and aircraft systems, facial recognition technology, bots in software and social media, and the ability of machine learning systems to influence public opinion and votes. The potential for machine learning-driven attacks on people, systems and governments is one of many concerns prompting people to think more deeply about how open AI-driven companies need to be about what they are doing behind the scenes.

While some are concerned with narrow uses of AI in specific applications, others worry more about the futuristic prospect of machines becoming superintelligent. Some see DeepMind's recent successes with AI systems beating humans at multiplayer games as a sign of breakaway progress in AI algorithms. AI systems are also being used to generate images that, while realistic, are not real. This prospect of fake images, audio, video and text poses challenges in an environment where you can't believe what you see, read or hear.

As AI continues to become more widely used, with technologies such as facial recognition and autonomous vehicles growing more common, regulations will need to be put in place soon to address the use and potential misuse of these technologies. Without oversight, will companies pay enough attention to AI trust and transparency? It's likely that governments will need to develop an ethics framework for AI to prevent not only malicious uses, but also day-to-day uses that give people reason for concern.

Corporate governance of AI

Some argue that companies should be left to self-regulate, while others feel that government should step in. In early 2019, Google famously created an AI ethics board only to disband it one week later in response to controversy over the makeup of the board. Because of this, many don't believe corporations can self-regulate their use of AI technology.

AI transparency is even more pressing in heavily regulated industries. In banking and insurance, where AI is used to manage loans and determine credit risk, these systems need to be trusted. AI makes it easy for financial institutions to quickly evaluate whether a person should qualify for a loan or comparable product. However, if someone is denied, these companies need to be able to explain what steps the AI took to reach that decision. Within emerging technology groups and standards organizations, such as ATARC, there is talk of standardizing levels of explainability to give users and regulators additional insight into how explainable certain algorithms are.
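To make the idea of explaining an individual credit decision concrete, here is a minimal sketch using a linear model, whose per-feature contributions (coefficient times feature value) are directly readable. The feature names and synthetic data are illustrative assumptions, not how any particular lender's system works; real deployments often use more complex models and dedicated explanation tooling.

```python
# A minimal sketch of surfacing the factors behind one credit decision.
# A logistic regression is used because its per-feature contributions are
# easy to read off; the features and training data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "late_payments"]  # hypothetical features

# Synthetic training data standing in for a real, audited dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] - 0.5 * X[:, 2] > 0).astype(int)  # 1 = approve

model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.2, 1.5, 0.8]])          # one (hypothetical) applicant
contributions = model.coef_[0] * applicant[0]      # per-feature push toward approval

print("decision:", "approve" if model.predict(applicant)[0] else "deny")
for name, value in sorted(zip(feature_names, contributions), key=lambda p: p[1]):
    print(f"{name}: {value:+.2f}")  # the most negative factors pushed toward denial
```

An output like this gives a reviewer a ranked list of the factors that drove a denial, which is the kind of trace regulators and customers increasingly expect lenders to produce.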

The pharmaceutical industry, another heavily regulated field, has seen great advancements with AI, including help with drug discovery, drug manufacturing and clinical trials. AI technologies can better identify candidates for clinical trials by drawing on a wider range of data, including social media activity, doctor visits and other alternative sources. This allows for better targeting and quicker, less expensive trials overall. However, as with any regulated industry, pharmaceutical companies need an audit trail that documents the explainable steps and decisions behind how individuals were selected for trials. This is why addressing explainability is so important.
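What such an audit trail might look like in practice is sketched below: each selection decision is appended to a log along with the criteria that drove it, so the steps can be reviewed later. The log format, file name and criteria are hypothetical assumptions for illustration only.

```python
# A minimal sketch of an audit trail for trial-candidate selection:
# every decision is appended to a log with its timestamp and the criteria
# behind it. The log file and criteria shown here are hypothetical.
import json
from datetime import datetime, timezone

AUDIT_LOG = "trial_selection_audit.jsonl"  # hypothetical append-only log file

def record_decision(candidate_id: str, eligible: bool, criteria: dict) -> None:
    """Append one selection decision, with its reasons, to the audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "eligible": eligible,
        "criteria": criteria,  # e.g., {"age_in_range": True, "prior_condition": False}
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example usage with hypothetical values:
record_decision("candidate-0042", True, {"age_in_range": True, "prior_condition": False})
```

Keeping this record at the moment each decision is made, rather than reconstructing it afterward, is what makes the selection process auditable in the sense regulators expect.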

Challenges of regulating AI

When it comes to regulating AI, governments are still trying to catch up with the reality of this new technology. One unique approach some governments are taking is regulating investment in AI. The belief is that controlling the flow of money to those working on AI will help direct the technology's expansion into areas of public benefit, while allowing laws to catch up with the technology itself. However, with the vast sums flowing into AI startups and vendors, the sheer quantity of capital makes any effort to stanch growth through funding regulations a non-starter.

Governments around the world -- particularly in Europe, where GDPR applies -- are working on requiring transparency from companies. Companies would be required to tell users and anyone else involved what the technology actually does, and then allow users to give input into how the technology affects the outcome. If AI were used to abuse rights or to work actively against people, those affected could bring their cases to court and work with the government to curb or end unethical practices. While this might be difficult to do now, only time will tell whether it ultimately ends up helping the public.

For companies that face potential regulatory penalties for non-compliance, AI can bring a lot of value in helping with compliance, auditability and risk management. So while AI introduces governance challenges, it can also help solve existing ones. Having a governance framework in place is becoming increasingly important, especially for AI-driven companies. In the coming years, as AI becomes even more a part of our everyday lives, governments and organizations around the world will need to resolve key questions concerning transparency, levels of explainability and compliance-related issues.
