NEW YORK -- When IBM's Watson competed on Jeopardy! in 2011, the team working on the cognitive computing platform insisted the television broadcast include a box at the bottom of the screen showing Watson weighing different answers and assigning probabilities to various possibilities.
The reason, according to David Ferrucci, who led the group developing the Watson artificial intelligence (AI) tool at the time, was that the team wanted people to know Watson wasn't simply looking up the correct answer, but engaging in a deliberative process, much as a person would.
"People think that computers either have the right answer, or they're broken," said Ferrucci, who is now senior technologist at investment firm Bridgewater Associates LP in Westport, Conn., and founder and CEO of New York-based AI systems company Elemental Cognition. Ferrucci was participating in a media question-and-answer session at the O'Reilly Artificial Intelligence Conference here this week.
But that's not how Watson and other AI systems work. Instead of accepting a command and executing a corresponding action, like most software does, AI systems sift through data looking for correlations among many variables. They then return a probabilistic answer to a question, like identifying various objects in an image. There's a chance the answer is right and a chance it's wrong.
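As a toy illustration of that probabilistic behavior (the labels and scores below are invented, not from any real system), a classifier's raw scores for the objects in an image can be converted into a probability for each candidate answer rather than a single verdict:

```python
import math

def classify(scores):
    """Turn raw model scores into a probability distribution (softmax)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a model might emit for three candidate labels.
labels = ["cat", "dog", "fox"]
probs = classify([2.0, 1.0, 0.1])

for label, p in zip(labels, probs):
    print(f"{label}: {p:.2f}")
```

No label gets probability 1.0; even the top answer carries a measurable chance of being wrong, which is exactly the uncertainty the article describes.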
As businesses move to integrate AI into their operations, they're going to have to fundamentally rethink how they use software to make decisions.
Ready to accept AI uncertainty?
The biggest question is whether your enterprise is ready to handle the uncertainty that comes with AI. Today, most business users -- and computer users in general -- think computers deliver definitive answers to questions. A software system or analytical model may have incomplete data or be built on bad assumptions, but in a perfect world, its answers can be considered right.
But to embrace AI, businesses are going to have to deal with much more uncertainty.
"You have to be sure your organization is ready and capable of learning," said Jana Eggers, CEO of AI systems vendor Nara Logics Inc. in Cambridge, Mass. "Thirty years ago, we were looking at numbers and coming up with answers. That's what we're comfortable with. What we need to get to is that cultural shift where we're OK with trying different things and being more probabilistic."
Failure is OK
The biggest cultural change that organizations need to make, Eggers said, is accepting failure and wrong decisions -- an attitude many in the business and technology world aspire to, but struggle to apply.
For example, a company might test an AI-based customer service chatbot that delivers a natural language response to customers' queries that is helpful 90% of the time. If it says something nonsensical and off-topic the rest of the time, the company may abandon the project. But the errors don't mean the system didn't work. It may just need some tweaking to sharpen its responses.
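One common way to manage those residual errors -- sketched here hypothetically, not as a description of any specific product -- is to gate the bot's replies on its own confidence score and escalate low-confidence queries to a human agent:

```python
# Hypothetical sketch: only send the chatbot's answer directly to the
# customer when the model's confidence clears a threshold; otherwise,
# hand the query off for human review. Threshold and confidence values
# here are invented for illustration.

CONFIDENCE_THRESHOLD = 0.75

def route(answer, confidence, threshold=CONFIDENCE_THRESHOLD):
    """Return who should deliver the reply: the bot or a human agent."""
    if confidence >= threshold:
        return ("bot", answer)
    return ("human", answer)  # escalate uncertain replies

print(route("Your order ships Tuesday.", 0.92))
print(route("Perhaps a banana?", 0.31))
```

Tuning that threshold is one of the "tweaks" the article alludes to: raising it reduces nonsensical replies at the cost of routing more traffic to humans.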
A prominent example of this misunderstanding of uncertainty came during the previous U.S. presidential election. Nearly every predictive model favored Hillary Clinton over Donald Trump; some put her chances of winning at more than 90%. When she lost, many people said the models had failed. But any prediction short of 100% acknowledges uncertainty and assigns a probability to the other outcome -- every model said a Trump win was possible.
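A short simulation makes the point concrete: a forecast giving one side a 90% chance still implies the other side wins roughly one time in ten, so a single upset is consistent with the model, not proof it failed.

```python
import random

# Simulate many elections where the favorite has a 90% chance of winning
# and count how often the underdog wins anyway.
random.seed(42)  # fixed seed so the result is reproducible

trials = 100_000
upsets = sum(1 for _ in range(trials) if random.random() > 0.90)

print(f"Upset rate: {upsets / trials:.3f}")  # close to 0.10
```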
That situation was a more straightforward exercise in predictive modeling. Things get only more complicated with AI, which typically uses more opaque methods, like deep learning, to model much larger data sets. Often, the relationships it identifies as significant are subtle and all but undetectable by humans. In this situation, uncertainty only grows.
Corporate leadership needs to understand that with AI systems, uncertainty is a necessary component.
"In good AI systems, we want the uncertainty to propagate throughout the model," said Peter Norvig, a director of research at Google, in a presentation at the conference.
As an example, he discussed how a speech-to-text system might not transcribe a recording completely accurately due to some noise in its output. That doesn't mean it failed, however, he explained -- it simply wouldn't be possible to do speech-to-text with a more deterministic method. Businesses have to decide if they can live with that kind of haze.
Norvig pointed to software development as a good application. Many businesses have databases full of examples of code and related bugs. When a programmer writes a line of code, an AI algorithm might review it and warn of potential bugs that are likely to result. This could work similarly to Google's "Did you mean" search suggestion tool. The recommendation might occasionally be wrong, but the programmer can live with a wrong recommendation in that case. It's fixable.
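The idea Norvig describes could be sketched like this (a toy illustration only -- the patterns and bug probabilities below are invented, and a real system would learn them from the company's bug database rather than hard-code them):

```python
# Hypothetical "did you mean" check for code: flag a new line when it
# matches patterns that historically preceded bugs, with an estimated
# probability attached rather than a definitive verdict.

BUG_PATTERNS = {
    "== None": 0.7,   # invented stat: None comparisons usually want 'is'
    "except:": 0.6,   # invented stat: bare except clauses hide errors
}

def warn_on_bugs(line, threshold=0.5):
    """Return (pattern, probability) warnings that clear the threshold."""
    warnings = [(p, prob) for p, prob in BUG_PATTERNS.items() if p in line]
    return [w for w in warnings if w[1] >= threshold]

print(warn_on_bugs("if result == None:"))
print(warn_on_bugs("x = 1"))
```

As the article notes, a false warning here is cheap: the programmer glances at it, disagrees, and moves on.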
Embracing this uncertainty will be productive for businesses as they look to turn more operations over to machines, Ferrucci said. But it might be a slow transition. "I think that that's the right kind of relationship to start developing between humans and machines, but it's a hard one to get over."