The complex nature of regulating AI

Regulating AI is a difficult task because the technology changes rapidly, yet governments must be able to employ preventative regulation before misuse occurs.

When AI began gaining popularity, there were few regulations or laws governing its proper use. However, a slew of newsworthy data breaches and instances of data misuse is leading customers and users to demand greater visibility into what happens with their data. Similarly, the market is already starting to push back on the use of AI for hyperpersonalization and facial recognition. In the year ahead, regulators will need to step in to meet the demand for more legislative oversight of AI development.

Where should regulatory efforts be focused?

Many governments worldwide have begun to see the deployment of artificial intelligence as strategically important for their countries. Whereas in decades past only a few developed nations spent any of their budgets on AI research and advancement, now it seems almost every country has invested in it. However, these countries differ in their basic approaches to privacy, data transparency and the relationship between the economy and governmental oversight. Western countries apply varying levels of government oversight to business operations, while China maintains closer cooperation between government and business activities but has been slow to regulate privacy and data transparency.

The problem with regulating AI is that it is not a single, discrete technology but a collection of different technologies and patterns that use machine learning to achieve different objectives. Some AI technologies focus on autonomous systems, while others enable conversational interfaces or recognition capabilities. As such, deciding where to focus regulatory attention is a challenge for governmental and legislative bodies.

It is difficult to decide whether governments should address the potential risks of AI during the research phase or only once such technologies are applied in the market. Some groups encourage governments to preempt certain controversial uses of AI technology, such as generative adversarial networks, social media bots and deepfakes. However, regulating AI can come at the cost of slowing research and development.

Applying existing regulations and laws to AI systems

Currently, many U.S. states and various countries have rules regarding the privacy of conversations and call recording, but emerging voice assistants pose a challenge to these laws. Amazon Alexa offers an interesting example of privacy impact: it gathers conversational information but must not redistribute private or confidential content. This means Amazon, Apple, Microsoft, Samsung and others are responsible for making sure conversations are not distributed outside the intended use of the system, or they risk running afoul of wiretapping and call recording laws. However, it was recently disclosed that these companies use human reviewers to listen to recorded conversations in order to improve their machine learning algorithms.
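To make the compliance burden concrete, here is a minimal, purely illustrative sketch of one way a vendor might scrub obvious personal identifiers from a transcript before it leaves the system for human review. The patterns and function names are hypothetical assumptions, not any vendor's actual process, and a real compliance program would require far more than simple pattern matching.

```python
# Illustrative sketch only: a naive redaction pass that strips obvious personal
# data from a voice-assistant transcript before it is shared with human reviewers.
# The patterns below are simplistic and hypothetical.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),          # U.S. Social Security numbers
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),         # likely payment card numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),   # email addresses
]

def redact(transcript: str) -> str:
    """Replace obvious personal identifiers before the text leaves the system."""
    for pattern, placeholder in REDACTIONS:
        transcript = pattern.sub(placeholder, transcript)
    return transcript

print(redact("My card is 4111 1111 1111 1111 and my email is pat@example.com."))
```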

The challenges of regulating AI don't apply just to corporations and other entities making broadly legitimate use of the technology. Opportunistic criminals can turn to AI to help commit crimes. Robbing a bank with a drone, while ridiculous, seems like a clear-cut case for prosecuting the responsible operator, but it does not set an applicable precedent for addressing autonomous technology. A criminal using AI-enabled technology to impersonate a bank manager and authorize the improper disbursement of funds wouldn't be as clear-cut. These sorts of attacks, while novel now, might become more common in the future, and it isn't clear whether regulation will be in place in time.

In other areas of regulatory concern, 2018 saw the first fatality in which an autonomous vehicle struck and killed a pedestrian. Congress is looking at regulations to guide the use of autonomous vehicles and assign liability in the case of fatal accidents, and groups such as the Congressional Artificial Intelligence Caucus exist to inform policymakers of the technological, economic and social effects of advances in AI.

Legal precedent exists for addressing vehicle autonomy. In 2007, Toyota was subject to a lawsuit when customers experienced stuck gas pedals that caused several tragic accidents. The court found Toyota liable because, with the pedal stuck, the car was essentially driving itself, so liability fell on the manufacturer, not the operator. Under this legal approach, the maker of an autonomous vehicle would be at fault for a struck pedestrian, whether the failure lay with the technology or with a distracted human operator behind the wheel.

Preventative AI regulation

Not all regulation responds to situations that have already occurred. In some instances, governing bodies want to prevent the emergence of capabilities before they happen. Such preventative laws have historical precedent: the Outer Space Treaty of 1967 banned the placement of weapons of mass destruction in outer space, and several laws govern human cloning and stem cell research efforts. Similar precautions could apply to laws about weaponizing AI. Using predictive measures to ensure lawful and ethical application of advancements that are still in the conceptual stages can have some preventative value.

We're also starting to see regulatory movement addressing ambiguous communication and privacy. For example, the 2018 Google Duplex demo, in which an AI system convincingly mimicked a human while speaking with a real person, caused many to worry about a world where we can't differentiate between computer bots and real people. Future regulation might require AI systems to disclose that they are not human at the beginning of an interaction and to refrain from retaining confidential information without explicit approval or human oversight.
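As a thought experiment, the sketch below shows what complying with such a hypothetical disclosure-and-consent rule might look like in code. The class, rule wording and behavior are assumptions for illustration, not a reference to any actual regulation or product.

```python
# Minimal sketch of a conversational agent built to satisfy two hypothetical rules:
# (1) disclose non-human status at the start of the interaction, and
# (2) retain no conversation data without explicit user approval.

DISCLOSURE = "Note: you are talking to an automated assistant, not a human."

class DisclosingAssistant:
    def __init__(self):
        self.disclosed = False           # has the bot identified itself yet?
        self.retention_consent = False   # did the user explicitly allow retention?
        self.transcript = []             # populated only after consent is granted

    def respond(self, user_message: str) -> str:
        # Rule 1: prepend the disclosure to the very first reply.
        prefix = ""
        if not self.disclosed:
            prefix = DISCLOSURE + "\n"
            self.disclosed = True

        # Rule 2: only store the message if the user has opted in.
        if self.retention_consent:
            self.transcript.append(user_message)

        return prefix + self.generate_reply(user_message)

    def grant_retention_consent(self):
        self.retention_consent = True

    def generate_reply(self, user_message: str) -> str:
        # Placeholder for the actual AI model call.
        return f"You said: {user_message}"

if __name__ == "__main__":
    bot = DisclosingAssistant()
    print(bot.respond("Can you book a table for two tonight?"))
    print(bot.respond("My card number is 4111 1111 1111 1111."))
    print("Retained messages:", bot.transcript)  # empty: no consent was given
```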

Is AI regulation even possible?

In the 1980s, no laws dealt with cell phone use in cars, but by the end of the 1990s, state and federal bodies across the U.S. were dealing with the use of cell phones, especially by drivers. Now, we have rules not only for talking on the phone but also for texting and other forms of distracted driving. The concept of distracted driving wasn't relevant in the 1980s, so legislators couldn't even conceive of it when preparing legislation. Legislators face the same challenge with AI today.

It is almost impossible to anticipate all the ways AI will be used or misused in the future. Unexpected breakthroughs and unusual applications can make laws instantly obsolete. Furthermore, the world's regulations increasingly reach beyond their physical geographic boundaries. For example, European Union laws such as the General Data Protection Regulation (GDPR) affect U.S. companies, and U.S. regulations affect European companies, which adds confusion to an already complex topic.

Instead of each country working on its own regulations, experts might need a world summit to resolve these issues. Already we're seeing countries collaborate, such as through the Organization for Economic Cooperation and Development's recommendation on AI, which identifies principles for the responsible stewardship of trustworthy AI. But even so, such laws and recommendations apply only to good actors -- the real challenge is dealing with those who apply AI technologies with malicious intent at large scale.

For regulators dealing with the here and now, it will certainly be a challenge to predict AI advancements 10 years out, let alone many decades into the future. As such, we will continue to see efforts to regulate AI worldwide run up against the reality of how AI is being used (and misused) today.
