
Threats of AI include malicious use against enterprises

As sophisticated AI tools become easier to use, enterprises need to defend themselves against the malicious attacks those same tools make possible.

As AI and machine learning expand their footprint in the enterprise, companies are starting to worry about their exposure to a new class of threats: malicious uses of AI and machine learning themselves, ranging from mischief and criminal activity to new forms of state-sponsored attacks and cyberwarfare.

A report on malicious AI, "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," produced by a group of 26 experts from 14 academic, governmental and industry organizations, warns of three major categories of AI-driven threats gaining traction -- automated bots, convincing fake media and threat detection avoidance. The report's authors argue that AI and machine learning are expanding the existing threat landscape by increasing the scale and decreasing the cost of attacks that would otherwise require substantial human labor.

Rise of the bots spells trouble for businesses

The most notable example of these expanded AI threats is the use of bots to interfere with social media networks. AI-enabled bots are infiltrating social media groups and accounts to influence perceptions or mine personal information for criminal use. According to tech firm Distil Networks, over 25% of internet traffic in 2017 was the result of bot activity, a share that has climbed sharply over the past few years.
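To give a sense of what bot mitigation looks like in practice, below is a minimal sketch of one common behavioral heuristic: flagging clients whose request timing is too fast and too regular to be human. It is an illustration only -- not Distil Networks' methodology -- and the threshold values are hypothetical.

```python
from statistics import mean, pstdev

def looks_automated(timestamps, min_requests=20,
                    max_mean_gap=2.0, max_gap_stdev=0.25):
    """Flag a client whose request timing is fast and unnaturally regular.

    `timestamps` is a sorted list of request times (in seconds) for one
    client. The thresholds are illustrative guesses, not tuned values.
    """
    if len(timestamps) < min_requests:
        return False  # too little data to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Humans browse in irregular bursts; bots often fire at a steady,
    # machine-like cadence with little variation between requests.
    return mean(gaps) < max_mean_gap and pstdev(gaps) < max_gap_stdev
```

Production bot mitigation layers many such signals -- header fingerprints, mouse movement, IP reputation -- but timing regularity remains one of the simplest tells.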

The announcement of Google Duplex -- a voice assistant that can make calls for users to book appointments or restaurant reservations -- raises the specter of criminals using voice-based services to disrupt business operations and threaten personal safety. Voice-based systems introduce the threat of retail denial of service, in which bogus appointments are scheduled and then canceled, causing significant business harm.

Automating criminal behavior also extends to using text and voice interactions to scale social engineering attacks massively. The combination of phishing and robocalls, paired with the volumes of personal data already exposed in breaches, is a truly frightening prospect.

AI facilitates new types of threats

The malicious AI report also calls attention to ways in which new techniques can exploit the vulnerabilities of systems that depend on AI and machine learning. For example, the recent deepfakes activity, in which celebrities' faces were superimposed onto explicit videos, has caused significant concern.

The ability of AI systems to easily create false or misleading photos, audio and video is a sign that, soon, we will have trouble trusting what we read, see or hear. Criminals and state actors can use fake imagery or audio to cause personal or business harm or to interfere with government operations.

In addition, the increasing use of AI for a wide range of tasks, from image recognition and autonomous vehicle operation to augmented intelligence, smart home operation and predictive analytics, is providing new avenues of attack for those looking to cause harm.

Autonomous vehicle hardware companies are working to protect their image recognition systems from adversarial attacks. Companies using chatbots are finding they need to keep bad actors from interfering with learning systems and training data, and marketers are looking for ways to prevent competitors from tainting their machine learning models. IoT vendors, meanwhile, are finding that their devices are an increasingly common vector for attacks against individual and enterprise networks and systems.
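To make the adversarial threat concrete, the sketch below implements the fast gradient sign method (FGSM), a well-known published technique for perturbing an image so that a classifier misreads it even though a human sees nothing wrong. It assumes a PyTorch classifier and a batched image tensor with pixel values in [0, 1]; it illustrates the general technique, not the systems of any company mentioned above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Fast gradient sign method: nudge every pixel a small step in the
    direction that increases the model's loss, yielding images that look
    unchanged to people but can flip the classifier's predictions.

    `images` is a (N, C, H, W) tensor in [0, 1]; `labels` holds the true
    class indices; `epsilon` bounds the per-pixel change.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of its gradient.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Hardening a model against such perturbations -- for instance, by including adversarially perturbed examples in training -- remains an active research area.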

Threats of AI get harder to see

Furthermore, the malicious AI report identifies ways in which AI and machine learning change the nature of threats, making them harder to detect, more random in appearance, more adaptive to systems and environments, and more efficient at identifying and targeting vulnerabilities.

Criminals are using the same cloud-based infrastructure that enterprises rely on to power their AI platforms, creating constantly evolving, highly targeted attacks on company systems. Simply blacklisting an IP address or email address becomes difficult when Amazon, Google, IBM and Microsoft infrastructure is being used to conduct zero-day, highly customized attacks.
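A minimal sketch illustrates why the static approach fails: an attacker who tears down and respins cloud instances arrives with a fresh egress address each time, so yesterday's list never matches. The addresses below come from reserved documentation ranges, not real attack sources.

```python
# Yesterday's observed attacker addresses (documentation-range placeholders).
blocklist = {"203.0.113.7", "203.0.113.8"}

def is_blocked(source_ip):
    """Static check: only addresses already on the list are stopped."""
    return source_ip in blocklist

# Each respun cloud instance shows up with a new egress address,
# so the static check never fires on the attacker's current traffic.
for source_ip in ("203.0.113.9", "198.51.100.4", "198.51.100.23"):
    print(source_ip, "blocked" if is_blocked(source_ip) else "allowed")
```

This is why defenders are shifting toward behavioral and reputation-based detection rather than fixed address lists.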

A more frightening prospect is that criminals will combine drone technology with facial recognition and autonomous capabilities to inflict physical harm on people, buildings and infrastructure. Last year, University of California, Berkeley professor Stuart Russell, working with the Future of Life Institute, produced a video that was presented at the United Nations Convention on Certain Conventional Weapons to raise awareness of the dangers of autonomous weapons enabled by AI and drone technology.

The report concludes, "The malicious use of AI will impact how we construct and manage our digital infrastructure as well as how we design and distribute AI systems, and will likely require policy and other institutional responses." Clearly, work has only begun in this area.

While AI is undoubtedly enabling enterprises to accomplish tasks and deliver value, it is also enabling new and more dangerous criminal and malicious behavior. As the threats of AI reshape cybersecurity, they are creating opportunities both for cybersecurity businesses and for the criminals those businesses defend against.
