
Address anonymity and data privacy in chatbot security

A key to successful enterprise chatbot security is to program your chatbot to recognize personal or sensitive information and treat it accordingly.

Chatbots continue to play ever-larger roles in enterprises, allowing streamlined internal and external online communication. With the expanding enforcement of GDPR and state-level privacy laws in the U.S., it's more important than ever that businesses prioritize privacy in their AI strategies.

Chatbot security, in particular, is pivotal. Andy Peart, chief marketing and strategy officer at Artificial Solutions, advises companies developing chatbot security to focus on one key trait: Make sure it can keep personal information secret.

Editor's note: The following has been edited for clarity and brevity.

Why do chatbots require data security?

Andy Peart: Data security is a key consideration for any enterprise, particularly when dealing with regulatory frameworks and customers' personal information. Personally identifiable information (PII) is very valuable to the business -- and very dangerous. The information gleaned from your customers' conversations is like gold dust. But you absolutely cannot afford to compromise that data, because you will lose your customers' trust in an instant.  

It's critical to look after the data and privacy of your users, and enterprises need to consider their options. It may be that your application provides very general information and there is no opportunity for data misuse. Or it may be that you do need to worry about the data collected. It's imperative that systems are able to anonymize or pseudonymize conversational data, replacing identifiable data with placeholders, so you can still understand the intent for analytics purposes without knowing the customer's identity.
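The placeholder approach Peart describes can be sketched in a few lines. This is a minimal, illustrative example, not Artificial Solutions' implementation; the regular expressions and placeholder labels are assumptions, and a production system would cover far more PII categories.

```python
import re

# Illustrative pseudonymization sketch: swap identifiable data
# (here, emails and phone numbers) for placeholder tokens so the
# conversational intent survives for analytics while the customer
# identity does not. Patterns and labels are assumptions.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s\-()]{7,}\d"),
}

def pseudonymize(utterance: str) -> str:
    """Return the utterance with recognized PII replaced by placeholders."""
    for label, pattern in PATTERNS.items():
        utterance = pattern.sub(f"<{label}>", utterance)
    return utterance

print(pseudonymize("My email is jane.doe@example.com, call +1 555-010-9999"))
# My email is <EMAIL>, call <PHONE>
```

The analytics pipeline then sees "My email is &lt;EMAIL&gt;, call &lt;PHONE&gt;" -- enough to classify the intent, with the identity stripped out.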

Can you have personalized chatbots without personal data?

[Image: a chatbot greeting a customer. Chatbots are often the front line of a business's customer service.]

Peart: The internet is still, by far, the preferred communication channel for a large majority of customers, and smartphones and other mobile devices are rapidly becoming the way most people go online. But consumers are demanding a better conversational experience. They want to use their own words and terminology and receive the correct response the first time, every time. They want intelligent interactions, where the context and the sentiment of the conversation are fully understood. They want to talk to technology as if it were another human, even though they still want to know that they are talking to a machine.

But [personal] data is at the heart of conversational AI systems. Even when it's been anonymized, it still holds a wealth of information that enterprises can learn from and use to add value to the business. Having access to this clean [data that has been stripped of PII] enables organizations to move beyond basic business intelligence.

Machine learning delivers significant benefits in conversational AI. By taking a hybrid approach combined with built-in language resources, enterprises can not only get their conversational AI application up and running faster, but also avoid the risk of alienating their customers in the process.

Can chatbots be legally compliant?

Peart: From a GDPR perspective, enterprises need to be able to easily recognize personal and sensitive data within their automated conversations. They also need to know the context of that conversation to be able to decide what should be done with that information.
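The GDPR requirement Peart outlines has two parts: detect that a chat turn contains sensitive data, and use the conversational context to decide what to do with it. A hedged sketch of that decision logic, with made-up term lists, context labels and policy outcomes purely for illustration:

```python
# Illustrative sketch (terms, contexts and outcomes are assumptions):
# decide how to handle a chat turn based on whether it contains
# sensitive data and on the conversational context it appeared in.
SENSITIVE_TERMS = {"passport", "diagnosis", "credit card", "ssn"}

def handle_turn(text: str, context: str) -> str:
    """Return a handling decision: 'store', 'redact' or 'discard'."""
    sensitive = any(term in text.lower() for term in SENSITIVE_TERMS)
    if not sensitive:
        return "store"        # no PII concern; keep for analytics
    if context == "support_case":
        return "redact"       # legitimate need: keep intent, drop identity
    return "discard"          # sensitive data with no business need

print(handle_turn("Here is my credit card number", "small_talk"))  # discard
```

Real systems would replace the keyword list with a trained entity recognizer, but the shape of the decision -- detection plus context-dependent policy -- is the same.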

Using conversational AI allows you to extract more from a conversation than the usual prescriptive feedback survey. It allows you to truly get into the heads of your customers. Intelligent features such as sentiment analysis help enterprises more accurately gauge the mindset of their customers, while in-depth analysis of free-format conversational data delivers insight into what you're doing well and how you can improve.
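To make the sentiment-analysis idea concrete, here is a deliberately minimal lexicon-based scorer. The word lists and scoring rule are assumptions for illustration only; production systems use trained models rather than fixed lexicons.

```python
# Minimal illustrative sketch of lexicon-based sentiment scoring on
# free-format conversational text. Word lists are assumptions; this is
# a toy, not a production approach.
POSITIVE = {"great", "thanks", "love", "helpful"}
NEGATIVE = {"angry", "broken", "useless", "cancel"}

def sentiment(utterance: str) -> str:
    """Classify an utterance as positive, negative or neutral."""
    words = utterance.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("this bot is useless and broken"))  # negative
```

Even this toy version shows why the underlying text matters: the signal comes from the customer's own free-format words, which is exactly why that data must be anonymized before analysis.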

The challenge for enterprises is how to maximize the benefits of deploying advanced AI technologies within their business and remain within the requirements of data protection legislation, which is continually being extended to afford data subjects more rights and increased protection. Organizations must be able to implement advanced conversational AI applications across all platforms, devices and operating systems, and benefit from extensive data analysis, without contravening regulations such as GDPR.

This was last published in March 2019
