Responsible AI, supported by internal data scientists and external acquisitions, is key to the digital revolution, according to Kush Saxena, CTO of Mastercard.
At the MIT Sloan CIO Symposium, Saxena outlined how his team has used AI-driven fraud prevention platforms and technologies, and how Mastercard improves internal business processes while gleaning data from transactions, customer intent and action. Saxena also offered his thoughts on one prominent question: Is big data as important as clean data when rolling out AI applications?
Did you implement AI in a way that's been helpful to your company over the past year?
Kush Saxena: We've been leveraging AI for the last seven or eight years in very serious ways, so at some level, everything we've done in the last year is really just a continuation of what we've done in the past. We have lots of use cases for AI -- but our most prominent use cases, and the ones that we go to market with, center around fraud, and data and analytics for predicting customer intent and driving commerce behavior.
Those are areas where we are using it at large scale. We move about 50-60 billion transactions on our payments network, and we score all of them for fraud. We've got some of the most sophisticated fraud scoring technology. Then, on the data and analytics side, from a commerce standpoint, we see a lot of transaction data, which we cleanse and anonymize and then drive commerce value out of. Our data represents not just customer interest -- it represents true customer intent, on which customers have acted because they've actually transacted -- so merchants and banks find the data very useful. Yes, we've used AI extensively and will continue to do so.
For these AI use cases, have you sourced your own internal talent? Are you hiring more data scientists and data engineers?
Saxena: It's all of the above. We've leveraged third-party help where we've needed it, and we've gone ahead and acquired some of those companies. We bought a company called Brighterion, which powers all of our fraud technology. We bought a company called NuData, which does behavioral biometrics for authentication. We bought a company called Applied Predictive Technologies, which does simulation for customers on commercial value propositions and pilots for retailers and hospitality companies. So we've acquired companies and, of course, we are hiring quite aggressively for our own internal talent in data science as well.
Having more data can seem axiomatic to having good AI, but there are challenges to doing good AI on small data. How do you manage that?
Saxena: I think having good data is axiomatic with good AI. You can have a lot of data at relatively low levels of quality, which may lead you to completely wrong outcomes. If you think about the bias inherent in data sets and the value of explainability, more data actually makes that harder: more data can make bias more pronounced and amplify it, and more data can make explainability harder. So, in my mind, there are certain problem sets and problem types where more data helps because outcomes are incontrovertible -- but if you are looking at AI for judgment-type problems, I think good data is much more important than more data.