The expectation these days is that data science will deliver smarter, fairer, less biased and more consistent decisions. But according to Cathy O'Neil, data scientist and author of the recently published Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, it will be a while before data science makes good on that promise.
O'Neil, a data skeptic, likens data science to the early days of the automobile industry, before drivers knew to question the fallibility of cars and before safety standards were established. Today, O'Neil said businesses appear to trust mathematical algorithms without question, putting blind faith in algorithms that may rely on immoral and possibly illegal methods. She said she believes it is time to establish a national regulatory board to ensure algorithms do less harm -- an idea she borrowed from Ben Shneiderman's recent talk at the Alan Turing Institute.
O'Neil sat down with SearchCIO ahead of a talk she's giving at the Real Business Intelligence Conference in Cambridge, Mass., to discuss the CIO's role as data skeptic, the importance of data literacy and why Uber, in addition to its leadership issues these days, has an algorithm problem. In part one of this two-part Q&A, O'Neil talked about the dark side of data science. This Q&A has been edited for clarity and brevity.
CIOs can be seen as an obstruction to the business moving fast. Being a data skeptic for the company would likely add to that impression. How can a CIO be a data skeptic and help the business make money?
O'Neil: Unfortunately, right now, they are going to seem like a pill at a party because we don't have any laws against unfair algorithms. I'm going to be straight up with you: There's not a lot of leverage I have at this moment to talk somebody into worrying about their algorithm being unfair -- even illegal. I know plenty of algorithms that are illegal based on laws about hiring or about how a company discriminates in promotions. Those laws are not being enforced simply because, under the [Donald] Trump administration, we can't expect our regulators to learn how to use technology tools to investigate and interrogate algorithms.
So, in other words, the leverage coming from the regulators is very light right now, even for those things that are outright illegal. And then there are a massive number of the things that are not illegal, but are just immoral. And so it might just sound like you're whining to the person who is focusing on the bottom line, because the dirty truth is, it is expensive to be fair.
If you're building a new kind of credit score with big data, and you're using social media data, which serves as a proxy for race, class and gender, what you're doing is technically illegal under the ECOA -- Equal Credit Opportunity Act -- but it's not being enforced. And you're getting a business edge by bypassing that law.
Should CIOs -- and businesses -- focus on making employees data literate?
O'Neil: The short answer is yes. The longer answer is that this is not a math test. I'm not asking for everyone to go and learn calculus. I'm asking for very basic literacy, which is to say show me evidence that this works. What I mean by that is show me this doesn't unduly burden poor people, or African-Americans, or people with mental health status, or people with veteran status, or the disabled, or the elderly or children.
In other words, I'm not asking for nuance. I'm asking for a very crude standard of evidence that this stuff isn't ruining people's lives. Let me give you an analogy: When we first invented cars, there were no safety standards for transportation. The National Transportation Safety Board was only created in, I believe, the [1960s]. And people didn't even ask the question, 'How likely am I to die when I drive this car into a tree?' Now, we have crash test dummies, and we have safety standards and cars that are much safer.
That's where we are right now: We're at the very beginning of this whole new field called algorithms or big data or whatever you want to call it, where we don't even ask how many people are dying. And I don't want to be histrionic about it, but I do think people's lives are being ruined.
Who should be in charge of developing standards and overseeing that they're adhered to?
O'Neil: There's an article about this guy, Ben Shneiderman, who recently gave a talk at the Alan Turing Institute in London. One of the reasons I mentioned the National Transportation Safety Board is because he actually suggested that we create a national algorithms safety board -- or maybe even an international one. And so I've been playing around with that.
Will there be backlash for companies that use unethical or immoral algorithms?
O'Neil: There are three different forms of backlash: One of them is that the regulators get on it; another one is that individuals sue, and they get taken to court and they win. That's possible. I'm hoping for that to happen. And then the third possibility is just a public relations mess.
So, for example, there's an algorithm that Uber was [recently] discovered to be using that lowballs salary offers. This works against women because of the already existing pay gap between women and men. For that matter, it propagates or even exacerbates any salary discrimination that already exists. It seems to have been modified since it came to light. Actually, Uber has a whole suite of algorithms that are really bad and have made public relations messes for them.
What is the remedy for Uber and other companies that are using bad algorithms?
O'Neil: To be completely honest, I think in the short-term sense, our best bet is to have lawsuits, and in a medium-term sense, I think we need to have regulators that take this on.