Data privacy and marketing: At times, the concepts appear at odds. One seeks to limit the flow of a person's data, and the other attempts to take and use as much data as possible.
In his first book, The Invisible Brand, William Ammerman, executive vice president of digital media at Engaged Media, a digital and traditional media publishing firm based in Irvine, Calif., describes how AI-powered software, when used by marketers, can influence and predict people's behaviors.
"I really wanted to share my experiences and knowledge with both marketers and the general public to help them see something that's operating invisibly in their lives and which is oftentimes hidden from view," Ammerman said.
The goal of The Invisible Brand, published by McGraw-Hill, is to provide readers with a general background as to how AI technologies work, their potential uses, and how corporations and political organizations wield them to attempt to change the way people think and act.
The technology presents both opportunities and risks, Ammerman said. "I think it's important for people to understand them."
Ammerman ranges over all these issues in this Q&A.
How much does the public know about what's going on behind the scenes of some of these AI, data privacy and marketing issues?
William Ammerman: I think the general public has a collective sense that their data is being gathered, that they're being manipulated and that, you know, big business, big tech corporations have an unfair kind of advantage in the process. I think it's less understood how that works, and the how is where I really take the book.
I try to explain, in the first half of the book, the origins of where we are today and how we got here. I detail four trends that are collectively working together.
And what are those trends? How do they relate to AI, data privacy and marketing issues?
Ammerman: The first trend is the personalization of information. We have reached the point where we can deliver personalized advertising -- highly targeted messaging down to the individual level based on their online behaviors and their offline behaviors and, increasingly, more detailed information about their location and personality, and even their current sentiment.
People are getting their news and information from social media feeds that are personalized. That puts us into these echo chambers, these filter bubbles, where we are consuming news that we like, rather than the news we necessarily need, and that's transformative.
The second major trend is the science of persuasion. We have the ability to perform testing in real time and optimize campaigns toward specific KPIs [key performance indicators] defined by the advertiser or the marketer, and we can really be persuasive at a scientific level.
The third major trend is that we now have a branch of artificial intelligence called machine learning, where machines can actually learn how to persuade you using personalized information.
The fourth major trend is a second branch of artificial intelligence known as natural language processing, in which we can actually talk directly to the machine. So, now, we're having a conversation with a machine that's designed to learn how to persuade us using personalized information. Today, those conversations are relatively simple, but they're going to get more complex.
When we're having those kinds of conversations with machines that are designed to persuade us using personalized information, we're going to be very vulnerable to the places that are answering back. We have to understand that the voices that are answering us through artificial intelligence are essentially motivated by corporate interests or by political interests.
AI technologies, like you mentioned, can help influence people's behaviors. At the same time, however, AI can save time and energy by automating processes. How can people find those benefits, while limiting the negative aspects?
Ammerman: I decided that the solution I was going to propose was simply to start by educating people about the technology. I dug in deeply on issues like cookies and device IDs, and targeting, and all of the technology that delivers the messaging, including the ad exchanges, to really help people get a stronger sense of exactly how it works.
I felt that education was a good starting point, rather than taking strong stances on whether or how it should be regulated, or taking a political position on it. I stayed away from those topics, because I didn't want to alienate anyone.
I've concluded that the first step is education: helping people see and understand how it works is really vital in helping them use it ethically and protect against its abuse.
How do those trends -- AI, data privacy and marketing -- influence people?
Ammerman: Those four trends merge together to create what I call psychological technology, which I've shortened to psychotechnology. I write about how psychotechnology impacts areas of our economy as diverse as the media, finance, education, the arts, politics, government and even religion.
My goal wasn't to frighten people; it was to give people a vision for what opportunities lie ahead. There are many benefits that could perhaps be used for the greater good that I talk about in the book. These technologies can be used for really positive social benefits, if they're applied ethically and if people who use them understand how they're being used, as well as what kind of data is being gathered and what benefits they could potentially derive from the use of the technology.
We should start being cautious about the risks by acknowledging that we don't have all the answers and that people aren't as educated about these technologies as they should be. Then, we should start educating ourselves to be better consumers of this technology.
Editor's note: This interview has been edited for clarity and conciseness.