15 AI risks businesses must confront and how to address them

Organizations that want to use AI ethically and with as little liability as possible must acknowledge the risks that come with implementing the technology.

Companies have always had to manage risks associated with the technologies they adopt to build their businesses. They must do the same when it comes to implementing artificial intelligence.

Some of the risks with AI are the same as those when deploying any new technology: poor strategic alignment to business goals, a lack of skills to support initiatives and a failure to get buy-in throughout the ranks of the organization.

For such challenges, executives should lean on the best practices that have guided the effective adoption of other technologies. Management consultants and AI experts said they advise CIOs and their C-suite colleagues to identify areas where AI can help them meet organizational objectives, develop strategies to ensure they have the expertise to support AI programs and create strong change management policies to smooth and speed enterprise adoption.

However, executives are finding that AI in the enterprise also comes with unique risks that need to be acknowledged and addressed head-on.

Here are 15 areas of risk that can arise as organizations implement and use AI technologies in the enterprise.

1. A lack of employee trust can shut down AI adoption

Not all workers are ready to embrace AI.

Professional services firm KPMG, in partnership with the University of Queensland in Australia, found in its "Trust in Artificial Intelligence: Global Insights 2023" report that 61% of respondents are either ambivalent about or unwilling to trust AI.

Without that trust, an AI implementation will be unproductive, according to experts.

Consider, for example, what would happen if workers don't trust an AI system on a factory floor that determines when a machine must be shut down for maintenance. Even if the system is nearly always accurate, employees who don't trust it will second-guess or ignore its recommendations -- and that AI is a failure.

2. AI can have unintentional biases

At its most basic level, AI ingests large volumes of data and, using algorithms, learns to perform tasks from the patterns it finds in that data.

But when the data is biased or problematic, AI produces faulty results.

Similarly, problematic algorithms -- such as those that reflect the biases of the programmers -- can lead AI systems to produce biased results.

"This is not a hypothetical issue," according to "The Civil Rights Implications of Algorithms," a March 2023 report from the Connecticut Advisory Committee to the U.S. Commission on Civil Rights.

The report explained how certain training data could lead to biased results, noting as an example that "in New York City, police officers stopped and frisked over five million people over the past decade. During that time, Black and Latino people were nine times more likely to be stopped than their White counterparts. As a result, predictive policing algorithms trained on data from that jurisdiction will over predict criminality in neighborhoods with predominantly Black and Latino residents."
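
The dynamic is easy to reproduce. The following sketch -- built entirely on synthetic, invented data rather than the report's figures -- shows how a model trained on labels skewed by uneven data collection ends up predicting higher "risk" for one neighborhood even when the underlying behavior is identical:

```python
# A minimal, hypothetical sketch of how skewed training data produces skewed
# predictions. All data here is synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One feature: neighborhood (0 or 1). Assume the true underlying offense rate
# is identical in both neighborhoods.
neighborhood = rng.integers(0, 2, size=n)
true_offense = rng.random(n) < 0.05

# Biased data collection: neighborhood 1 is policed far more heavily, so
# offenses there are much more likely to be recorded.
stop_rate = np.where(neighborhood == 1, 0.9, 0.1)
recorded_arrest = true_offense & (rng.random(n) < stop_rate)

# Train on the recorded (biased) labels, not the true behavior.
model = LogisticRegression()
model.fit(neighborhood.reshape(-1, 1), recorded_arrest)

# The model predicts far higher "risk" for neighborhood 1, even though the
# underlying offense rate is the same in both neighborhoods.
print(model.predict_proba([[0], [1]])[:, 1])
```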

3. Biases, errors greatly magnified by volume of AI transactions

Human workers, of course, have biases and make mistakes, but the consequences of their errors are limited to the volume of work they complete before the errors are caught -- which is usually modest. However, the consequences of biases or hidden errors in operational AI systems can be exponentially larger.

As experts explained, humans might make dozens of mistakes in a day, but a bot handling millions of transactions a day repeats any single error millions of times.
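
A back-of-the-envelope calculation makes the scale difference concrete; the volumes and error rate below are assumptions chosen only to illustrate the point:

```python
# Illustrative only: the workloads and error rate are assumptions.
human_daily_tasks = 200
bot_daily_transactions = 5_000_000
error_rate = 0.001  # one mistake per thousand decisions, for both

human_errors_per_day = human_daily_tasks * error_rate
bot_errors_per_day = bot_daily_transactions * error_rate

print(f"Human: ~{human_errors_per_day:.1f} errors/day")
print(f"Bot:   ~{bot_errors_per_day:,.0f} errors/day")
```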

4. AI might be delusional

Most AI systems are stochastic or probabilistic. This means machine learning algorithms, deep learning, predictive analytics and other technologies work together to analyze data and produce the most probable response in each scenario. That's in contrast to deterministic AI environments, in which an algorithm's behavior can be predicted from the input.

Because most real-world AI environments deal in probabilities rather than certainties, their output is not 100% accurate.

"They return their best guess to what you're prompting," explained Will Wong, principal research director at Info-Tech Research Group.

In fact, inaccurate results are common enough -- particularly with more and more people using ChatGPT -- that there's a term for the problem: AI hallucinations.

"So, just like you can't believe everything on the internet, you can't believe everything you hear from a chatbot; you have to vet it," Wong advised.

5. AI can create unexplainable results, thereby damaging trust

Explainability, or the ability to determine and articulate how and why an AI system reached its decisions or predictions, is another term frequently used when talking about AI.

Although explainability is critical to validate results and build trust in AI overall, it's not always possible -- particularly when dealing with sophisticated AI systems that are continuously learning as they operate.

For example, Wong said, AI experts often don't know how AI systems reached those faulty conclusions labeled as hallucinations.

Such situations can stymie the adoption of AI, despite the benefits it can bring to many organizations.

In a September 2022 article, "Why businesses need explainable AI -- and how to deliver it," global management consulting firm McKinsey & Company noted that "Customers, regulators, and the public at large all need to feel confident that the AI models rendering consequential decisions are doing so in an accurate and fair way. Likewise, even the most cutting-edge AI systems will gather dust if intended users don't understand the basis for the recommendations being supplied."
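
One way teams attempt to provide that basis is with feature-importance techniques. The sketch below, which uses synthetic data and made-up feature names, shows a common approach -- permutation importance -- for reporting which inputs drive a model's decisions:

```python
# A minimal sketch of one explainability technique: permutation feature
# importance, which estimates how much each input contributes to a trained
# model's predictions. Data and feature names are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["credit_utilization", "payment_history", "account_age"]

# Synthetic data in which only the first two features actually matter.
X = rng.normal(size=(2_000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=2_000)) > 0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)

# Reporting importances alongside predictions helps users and regulators see
# which inputs drive a consequential decision.
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```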

6. AI can have unintended consequences

Similarly, the use of AI can have consequences that enterprise leaders either fail to consider or were unable to contemplate, Wong said.

A 2022 report posted by the White House, "The Impact of Artificial Intelligence on the Future of Workforces in the European Union and the United States of America," spoke to this point and cited the findings of Google researchers who studied "how natural-language models interpret discussions of disabilities and mental illness and found that various sentiment models penalized such discussions, creating bias against even positive phrases such as 'I will fight for people with mental illness.'"

7. AI can behave unethically, illegally

Some uses of AI might result in ethical dilemmas for their users, said Jordan Rae Kelly, senior managing director and head of cybersecurity for the Americas at FTI Consulting.

"There is a potential ethical impact to how you use AI that your internal or external stakeholders might have a problem with," she said. Workers, for instance, might find the use of an AI-based monitoring system both an invasion of privacy and corporate overreach, Kelly added.

Others have raised similar concerns. The 2022 White House report also highlighted how AI systems can operate in potentially unethical ways, citing a case in which "STEM career ads that were explicitly meant to be gender neutral were disproportionately displayed by an algorithm to potential male applicants because the cost of advertising to younger female applicants is higher and the algorithm optimized cost-efficiency."

8. Employee use of AI can evade or escape enterprise control

The April 2023 "KPMG Generative AI Survey" polled 225 executives and found that 68% of respondents haven't appointed a central person or team to organize a response to the emergence of the technology, noting that "for the time being, the IT function is leading the effort."

KPMG also found that 60% of those surveyed believe they're one to two years away from implementing their first generative AI solution, 72% said generative AI plays a critical role in building and maintaining stakeholder trust, and 45% think it might have a negative effect on their organization's trust if the correct risk management tools aren't implemented.

But while executives consider which generative AI solutions and guardrails to implement in upcoming years, many workers are already using such tools. A recent survey from Fishbowl, a social network for professionals, found that 43% of its 11,793 respondents had used AI tools for work tasks, and almost 70% of those users did so without their boss's knowledge.

Info-Tech Research Group's Wong said enterprise leaders are developing a range of policies to govern enterprise use of AI tools, including ChatGPT. However, he said companies that have prohibited such tools are finding that those restrictions aren't popular or even feasible to enforce. As a result, some are reworking their policies to allow the tools in certain cases and only with nonproprietary, nonrestricted data.
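
In practice, such a policy often takes the form of a guardrail in front of the external tool. The following is a hypothetical sketch of that idea; the patterns and the send_to_external_chatbot() placeholder are illustrative, not a real integration:

```python
# A hypothetical guardrail that blocks prompts containing restricted data
# before they reach an external AI tool. Patterns and the chatbot call are
# placeholders, not a real vendor integration.
import re

RESTRICTED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # U.S. Social Security numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),         # likely payment card numbers
    re.compile(r"confidential|internal only", re.IGNORECASE),
]

def violates_policy(prompt: str) -> bool:
    """Return True if the prompt appears to contain restricted data."""
    return any(pattern.search(prompt) for pattern in RESTRICTED_PATTERNS)

def send_to_external_chatbot(prompt: str) -> str:
    # Placeholder for whatever external AI service is approved.
    return f"(response to: {prompt})"

def submit_prompt(prompt: str) -> str:
    if violates_policy(prompt):
        return "Blocked: prompt appears to contain restricted or proprietary data."
    return send_to_external_chatbot(prompt)

print(submit_prompt("Summarize this CONFIDENTIAL roadmap"))  # blocked
print(submit_prompt("Draft a polite meeting reminder"))       # allowed
```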

9. Liability issues are unsettled and undetermined

Legal questions around accountability have emerged as organizations use AI systems to make decisions and embed AI into the products and services they sell. Who is liable when those systems produce bad results remains undetermined.

For example, FTI Consulting's Kelly said it's unclear who -- or what -- would or should be faulted if AI writes a bad piece of computer code that causes problems. That issue leaves executives, along with lawyers, courts and lawmakers, to move forward with AI use cases with a high degree of uncertainty.

10. Enterprise use could run afoul of proposed laws and expected regulations

Governments around the world are looking at whether they should put laws in place to regulate the use of AI and what those laws should be. Legal and AI experts said they expect governments to start passing new rules in the coming years.

Organizations might then need to adjust their AI roadmaps, curtail their planned implementations or even eliminate some of their AI uses if they run afoul of any forthcoming legislation, Kelly said.

Executives could find that challenging, she added, as AI is often embedded in the technologies and services they purchase from vendors. This means enterprise leaders will have to review their internally developed AI initiatives and the AI in the products and services bought from others to ensure they're not breaking any laws.

11. Key skills might be at risk of being eroded by AI

After two plane crashes involving Boeing 737 Max jets, one in late 2018 and one in early 2019, some experts expressed concern that pilots were losing basic flying skills as they relied more and more on cockpit automation.

Although those incidents are extreme cases, experts said AI will erode other key skills that enterprises might want to preserve in their human workforce.

"We're going to let go of people who know how to do things without technology," said Yossi Sheffi, a global supply chain expert, director of the MIT Center for Transportation & Logistics and author of The Magic Conveyer Belt: Supply Chains, A.I. and the Future of Work.

12. AI could lead to societal unrest

A May 2023 survey titled "AI, Automation and the Future of Workplaces," from workplace software maker Robin, found that 61% of respondents believe AI-driven tools will make some jobs obsolete.

And many employees who don't lose their jobs will see shifts in how they work and what kind of work they do, experts added.

Sheffi said such technology-driven changes in the labor market in the past have led to labor unrest and could possibly do so again.

Even if such a scenario doesn't happen with AI, Sheffi and others said organizations will need to adjust job responsibilities, as well as help employees learn to use AI tools and accept new ways of working.

13. Poor training data, lack of monitoring can sabotage AI systems

In 2016, Microsoft released a chatbot named Tay on Twitter. Engineers had designed the bot to engage in online interactions and then learn patterns of language so that she -- yes, Tay was designed to mimic the speech of a female teenager -- would sound natural on the internet.

Instead, trolls taught Tay racist, misogynistic and antisemitic language, with her language becoming so hostile and offensive within hours that Microsoft suspended the account.

Microsoft's experience highlights another big risk with building and using AI: It must be taught well to work right.
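
One lesson teams have drawn from the episode is to screen user-generated content before a system is allowed to learn from it. The sketch below is a deliberately simplified, hypothetical filter; a production system would rely on a trained toxicity classifier and human review rather than a tiny blocklist:

```python
# A hypothetical safeguard: screen incoming messages before they enter a
# model's training data. The blocklist terms are placeholders; a real
# deployment would use a toxicity classifier plus human review.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms only

def is_safe_for_training(message: str) -> bool:
    """Reject messages containing blocked terms before they become training data."""
    words = set(message.lower().split())
    return not (words & BLOCKLIST)

incoming = ["what a lovely day", "slur1 something hateful"]
training_batch = [msg for msg in incoming if is_safe_for_training(msg)]
print(training_batch)  # only the benign message survives
```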

14. Hackers can use AI to create more sophisticated attacks

Bad actors are using AI to make their attacks more sophisticated, more effective and more likely to penetrate their victims' defenses.

"AI can speed up the effectiveness of the bad guys," Kelly said.

Experienced hackers aren't the only ones leveraging AI. Wong said AI -- and generative AI in particular -- lets inexperienced would-be hackers develop malicious code with relative ease and speed.

"You can have a dialogue with ChatGPT to find out how to be a hacker," Wong said. "You can just ask ChatGPT to write the code for you. You just have to know how to ask the right questions."

15. Poor decisions around AI use could damage reputations

After the February 2023 shooting at a private Nashville school, Vanderbilt University's Peabody Office of Equity, Diversity and Inclusion responded to the tragic event with an email that included, at its end, a note saying the message had been written using ChatGPT. Students and others quickly criticized the technology's use in such circumstances, leading the university to apologize for "poor judgement."

The incident highlights the risk that organizations face when using AI: How they opt to use the technology could affect how their employees, customers, partners and the public view them.

Organizations that use AI in ways that some believe are biased, invasive, manipulative or unethical might face backlash and reputational harm. "It could change the perception of their brand in a way they don't want it to," Kelly added.

How to manage risks

The risks stemming from or associated with the use of AI can't be eliminated, but they can be managed.

Organizations must first recognize and understand these risks, according to multiple experts in AI and executive leadership. From there, they need to implement policies to help minimize the likelihood of such risks negatively affecting their organizations. Those policies should ensure the use of high-quality data for training and require testing and validation to root out unintended biases.

Policies should also mandate ongoing monitoring to keep biases from creeping into systems, which learn as they work, and to identify any unexpected consequences that arise through use.
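
In practice, such monitoring can be as simple as a recurring job that compares outcome rates across groups and flags the model for review when they diverge. The sketch below is a minimal, hypothetical demographic parity check; the threshold and group labels are assumptions:

```python
# A minimal, hypothetical bias-monitoring check: compare a model's
# positive-outcome rate across groups and alert when the gap is too large.
from collections import defaultdict

def positive_rate_by_group(decisions):
    """decisions: iterable of (group_label, approved: bool) from production logs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def parity_alert(rates, max_gap=0.10):
    """Flag the model for review if approval rates diverge by more than max_gap."""
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap

rates = positive_rate_by_group([("A", True), ("A", True), ("B", True), ("B", False)])
print(rates, parity_alert(rates))  # a 50-point gap triggers a review
```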

And although organizational leaders might not be able to foresee every ethical consideration, experts said enterprises should have frameworks to ensure their AI systems contain the policies and boundaries to create ethical, transparent, fair and unbiased results -- with human employees monitoring these systems to confirm the results meet the organization's established standards.

Organizations seeking to be successful in such work should involve the board and the C-suite. As Wong said, "This is not just an IT problem, so all executives need to get involved in this."

Next Steps

What is trustworthy AI and why is it important?

Artificial intelligence vs. human intelligence: How are they different?

Top advantages and disadvantages of AI

How businesses can measure AI success with KPIs

AI vs. machine learning vs. deep learning: Key differences
