Say you're interested in automation software for some customer service tasks. You visit a vendor's website, provide your contact information to request a live software demo and a sales assistant emails you to make sure you have the information you need.
If you don't respond to that email, the sales assistant sends another the next day and the next -- this last one with an apology for bugging you and a smiley face. You feel a bit bad and finally answer this personable, persistent sales assistant, who, as it turns out, isn't a person at all. It's a chatbot.
That scenario is already playing out on virtual assistant platforms such as Conversica. It isn't pervasive yet, but within the next five years, the "person" emailing you about a product or service you've shown interest in will most likely be an artificial intelligence (AI) chatbot, thanks to leaps forward in technologies that understand customer intent and generate dialog to simulate natural conversation.
In fact, artificial intelligence has come so far so fast in recent years, Gartner predicts it will be pervasive in all new products by 2020, with technologies including natural language capabilities, deep neural networks and conversational capabilities.
Other analysts share that expectation. The technologies gathered under the umbrella term artificial intelligence -- including image recognition, machine learning, AI chatbots and speech recognition -- will soon be ubiquitous in business applications as developers gain access to them through platforms such as the IBM Watson Conversation API and the Google Cloud Natural Language API.
"Every enterprise application you use will become a smart application in the next five years. Who is going to buy a dumb application at that point?" asked Dave Schubmehl, research director, cognitive systems and content analytics for IDC.
AI expectations: Avoid the anthropomorphic trap
To assert that all products will include AI so soon may come as a surprise given how far off artificial intelligence can seem. But seeds of AI have already taken root in many business apps and will soon be deeply tangled in the application ecosystem.
Deep neural networks -- the technology behind many artificial intelligence projects -- along with image recognition, speech recognition and natural language processing, are efficient content classifiers when properly set up and fed with data against analytical models, according to Tom Austin, a Gartner vice president who led a recent webinar about the realities of AI.
To be sure, companies are still far from getting their own Star Trek-esque Data android. But that's not what businesses need from AI, and those sci-fi fantasies are what push people toward the type of "anthropomorphic thinking" that sets false expectations about what AI is and what it can do, according to Austin. He cautioned businesses to beware of AI promoters who "weave seductive tales, talk up successes and hide failures."
"Intelligent machines don't think, they don't have common sense … they aren't self-aware, they aren't conscious," Austin said in the webinar. "People who go around talking about those things are deceiving you, or they are insufficiently informed in the facts."
Indeed, plenty of technology vendors claim to provide artificial intelligence, but misuse the term or throw it around too loosely. In reality, app developers have begun to replace traditional rules and heuristics with machine learning statistical data models generated automatically or with lots of care and feeding, IDC's Schubmehl explained. This change amounts to much smarter applications.
"In the old days, you would need a ruleset to say ‘if this happens, do that.' The applications that don't rely on rules and heuristics, but rely on a data model that comes from good, valid data can actually correct [themselves] as part of their learning," Schubmehl said. "The idea is, these systems get smarter and smarter and better and better at things like predicting when a part is likely to fail."
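The distinction Schubmehl draws can be sketched in a few lines. The example below is purely illustrative -- the names, data and threshold logic are hypothetical, not taken from any real product -- but it contrasts a fixed, hand-written rule with a model whose threshold is derived from observed data and corrects itself as new failure observations arrive.

```python
# Hypothetical contrast between the two approaches Schubmehl describes.
# All names, readings and thresholds here are illustrative.

def rule_based_alert(temp_reading):
    """Old style: a hand-written ruleset -- 'if this happens, do that'."""
    return temp_reading > 90.0  # threshold fixed by a human, never updated

class LearnedAlert:
    """New style: a threshold derived from data; it updates itself
    every time a new failure observation arrives."""
    def __init__(self):
        self.failure_temps = []

    def observe_failure(self, temp_reading):
        # Each recorded failure reshapes the model's idea of "risky."
        self.failure_temps.append(temp_reading)

    def predict_failure(self, temp_reading):
        if not self.failure_temps:
            return False  # no data yet -- cannot predict
        # Alert when a reading approaches the average observed failure temp.
        avg = sum(self.failure_temps) / len(self.failure_temps)
        return temp_reading >= 0.9 * avg

model = LearnedAlert()
for t in [100.0, 110.0, 105.0]:  # temperatures at which parts actually failed
    model.observe_failure(t)

print(rule_based_alert(95.0))       # rule fires at its fixed, hand-picked point
print(model.predict_failure(95.0))  # model's threshold came from the data
```

The point of the sketch is the second class: feed it different data and its behavior changes with no human rewriting the rule.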
IBM, Google move the AI conversation forward
Various levels of AI are on the market now; Apple's Siri and Microsoft's Cortana are essentially verbal search engines in that they can't answer strings of questions and don't understand context. But new conversational platforms from Google and IBM can.
IBM's Watson Conversation API is available for developers and Google launched two Cloud Machine Learning products in beta this summer: Cloud Natural Language API and Cloud Speech API. With these APIs, companies can add a natural language processing interface to a business app to derive meaning and sentiments from what their customers write about products and services on the web. They can also automate interactions with end users through virtual agents or an AI chatbot.
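As a rough sketch of what "deriving sentiment from what customers write" looks like at the API level, the snippet below builds a request body for the Cloud Natural Language API's `documents:analyzeSentiment` method. The field names follow the public v1 REST reference; actually sending the request requires real credentials, so the network call is only described in a comment.

```python
import json

# Endpoint from the public Cloud Natural Language v1 REST reference.
ENDPOINT = "https://language.googleapis.com/v1/documents:analyzeSentiment"

def build_sentiment_request(customer_text):
    """Package a piece of customer feedback for sentiment analysis."""
    return {
        "document": {
            "type": "PLAIN_TEXT",
            "content": customer_text,
        },
        "encodingType": "UTF8",
    }

payload = build_sentiment_request("The demo was great, but setup took too long.")
print(json.dumps(payload, indent=2))
# A real call would POST this body to ENDPOINT with an Authorization header;
# the response includes a documentSentiment score (-1.0 to 1.0) and magnitude.
```

A business app would run text like product reviews or support emails through this call and aggregate the returned scores.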
Schubmehl of IDC said he believes voice will soon be the primary interface, as speech recognition and conversation apps improve. Today's conversational API systems can enable a developer to build an AI chatbot that can have "a reasonable conversation," he said.
This new level of conversational technology goes beyond interactive voice response systems to give companies more flexibility in the way they answer customer questions and increase the percentage of questions they are equipped to handle, Schubmehl explained.
This is the culmination of decades of research into conversational systems and the hardening of algorithms developed decades ago, explained IBM Watson Platform Director Steve Abrams.
"I don't want anyone to think this will be an overnight success, but we are at a tipping point … and we are seeing the rapid rollout of new applications," Abrams said.
Welltok Inc., an IBM partner, hasn't put the Watson Conversation API into action yet. However, the company already uses Watson machine learning and cognitive computing technologies to drive its CaféWell Concierge platform, which provides healthcare consumers with the information they need.
For example, a health insurance customer might reach out to Welltok's Concierge for information about an insurance deductible or for help understanding insurance plans written in non-consumer friendly language. Welltok's Concierge comprehends the intent of the question and provides understandable answers, said Jeff Cohen, the company's co-founder and vice president of functional architecture.
In instances where the Concierge doesn't understand a customer question, the system determines how to disambiguate and follow up with clarifying questions, he said.
"That's the tricky part -- weaving all these technologies together to mimic a conversation with a human and not a canned chatbot," Cohen said. "You want an intelligent dialog, a system that can learn and has context about you -- your health plan, your age, your dependents covered, the type of coverage, other questions you have asked in the past -- to provide a personalized response so that [customers] have confidence in the technology."
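In outline, the clarifying-question step Cohen describes works like this: score the utterance against known intents, answer when one intent clearly wins, ask a follow-up when several tie, and escalate when nothing matches. The keyword-overlap scoring below is a deliberately crude stand-in for a real cognitive system, and the intents are invented for illustration, not Welltok's.

```python
# Illustrative disambiguation sketch -- intents and scoring are hypothetical.
INTENTS = {
    "deductible_amount": {"deductible", "amount", "much"},
    "deductible_definition": {"deductible", "what", "mean"},
    "plan_comparison": {"plan", "compare", "difference"},
}

def match_intents(utterance):
    """Return the intent(s) with the highest keyword overlap."""
    words = set(utterance.lower().replace("?", "").split())
    scored = {name: len(words & kw) for name, kw in INTENTS.items()}
    best = max(scored.values())
    if best == 0:
        return []
    return [name for name, score in scored.items() if score == best]

def respond(utterance):
    matches = match_intents(utterance)
    if len(matches) == 1:
        return f"answer:{matches[0]}"
    if len(matches) > 1:
        # Ambiguous: ask a clarifying question instead of guessing.
        return "clarify: Do you want the amount of your deductible, or what a deductible is?"
    return "escalate: route to a live agent"

print(respond("How much is my deductible?"))  # one clear winner
print(respond("deductible"))                  # tie -> clarifying question
```

A production system would replace the keyword sets with trained intent classifiers, but the branching logic -- answer, clarify, or escalate -- is the same.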
AI isn't plug and play
Though the new Conversation APIs give developers an easier path to implement AI in their applications, these cognitive systems aren't plug and play. They require a significant investment of time to get to the confidence levels companies need, Cohen said.
"You need to go in with your eyes wide open," he said. "Healthcare is conservative, and we don't ever want to give a wrong answer about health benefits, so we spent three to six months in a focused subject area until we felt confident enough to put it in front of pilot users."
Indeed, an AI chatbot can only provide information that is part of its knowledge base, and if you feed it bad information, your customers will get bad information, IDC's Schubmehl said. Companies can't just call it a day when a system is fed. Just as with employees, training must continue to keep the information current, he added.
Welltok set a confidence level of 95%, so if its cognitive system isn't at least that sure of an answer, it points the customer to a different resource or to a live agent.
"We won't give a customer a wrong answer; instead we will redirect them and give no answer instead of a wrong one," Cohen said.
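The confidence gate itself is simple to express. In this sketch the 95% threshold comes from the article; the scoring of candidate answers is assumed to come from an upstream cognitive system and is passed in as a plain number here.

```python
# Sketch of a confidence gate like the one Welltok describes: below the
# threshold, decline to answer and redirect rather than risk a wrong answer.
CONFIDENCE_THRESHOLD = 0.95

def route_answer(candidate_answer, confidence):
    """Return the answer only when the system is at least 95% sure."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return candidate_answer
    # Redirect: no answer is better than a wrong one.
    return "Let me connect you with a live agent who can help."

print(route_answer("Your deductible is $500.", 0.97))  # confident -> answer
print(route_answer("Your deductible is $500.", 0.80))  # unsure -> redirect
```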
The artificial intelligence tipping point
After 65 years of pie-in-the-sky visions and disappointments, artificial intelligence is finally being put to use thanks to some important breakthroughs over the past five years.
In 2010 and 2011, the best visual recognition scores were in the 25-30% error rate range, based on results from the ImageNet Large Scale Visual Recognition Challenge -- a machine learning and computer vision competition. That error rate has since fallen sharply, with the best systems dropping below 5% by 2015.
It's not just visuals; deep neural networks have also produced the best text-to-speech outcomes to date, Austin said.
Austin traced the gains to a "big bang of 2012": the combination of GPUs -- which have improved significantly over the past decade -- bigger deep neural network models and the ability to ingest far more data. The resulting model, which runs on high-performance GPUs and trains on large amounts of data, had been adopted by essentially every major vendor as of 2014.
Beyond that, there have been leaps forward in conversational user interfaces, which use dialog to control a bot or to get information without knowing the "magic words" required by earlier language processing systems. The most recent iterations of conversational platforms from Google and IBM follow years of research into conversational systems.
Future uses for AI include jobs that require the ability to read, search, remember and find answers to complicated questions. Another is processing terabytes of information quickly so that professionals can find answers and, in effect, talk to the data set.
"For those scenarios, there is a killer application waiting to be built," IBM Watson's Steve Abrams said. "We are only beginning to scratch the surface."
The time and effort companies invest in getting to a high confidence level is worthwhile when it results in reduced call volume and frees up call center employees to focus on complex customer service issues. But that's the low-hanging fruit.
"What we are trying to accomplish -- the bigger plan -- is to do things like provide intelligent, personalized responses and provide the right guidance; more than just an answer to your question, but guide you to resources that you may not have known about," Cohen said. "We call it anticipatory next best actions, which guide the consumer to additional resources to enhance the entire experience."
Cohen has begun to look at the ROI of Welltok's cognitive computing efforts, but beyond dollars and cents, he said, AI creates value by eliminating some call center agent tasks and by the help it provides to customers.
Where to start with AI
AI experts and users caution companies to start small with cognitive computing projects: pick an area with well-defined boundaries and rely on subject-matter experts to teach the system what real customers ask and how they ask it. Companies also need to keep their business goals, and the results they need to derive, clearly in sight, Gartner's Austin said in his webinar.
"Choose [an application] that offers short time to value," he said. "You don't have to build a moonshot ... All you have to do is get the little sunfish to sail across the pond. Maybe a little bit of smart is good enough. Go simple versus complex."
For companies without development teams or with limited IT resources, software as a service (SaaS) apps are available from vendors such as Conversica, which provides AI chatbots for lead engagement.
KnowledgeVision, an online business presentation platform provider, uses Conversica's virtual assistant platform to follow up on low-priority leads. The SaaS app virtual assistant is a combination of AI technologies -- one to decipher intent, another for sentiment, a neural network and more -- which work together. KnowledgeVision's virtual assistant follows up with inbound leads via email and creates a dialog to gather information, which is ultimately passed on to live sales or marketing reps.
Though it works well in most instances, the virtual assistant can get tripped up if, for example, it receives an out-of-office reply that includes a return-to-work date; it may mistake that date for a requested meeting date. So KnowledgeVision reviews interactions before following up with customers, said Susan Zaney, KnowledgeVision's vice president of marketing, whose department relies on Conversica.
But the upsides far outweigh the limitations of the AI chatbot, according to Zaney. Now the company is able to follow up on every lead, and virtual assistants are 24/7 employees who never get sick. If a potential customer opens an email at 1 a.m. and responds, the virtual assistant replies immediately. And customers are never the wiser that this persistent, tireless salesperson is a chatbot.
In part 2 of this article, see where AI won't be able to replace live employees.