With the launch of ChatGPT in 2022, artificial intelligence transformed from a science fiction movie plot to an easily accessible tool for the masses. Suddenly, everyone was talking about how AI would transform the world, for better or for worse.
But credit card issuers have been using AI for decades. When you get an alert about a suspicious charge, are offered a credit limit increase, or interact with a chatbot, you’re experiencing some of the ways your credit card company employs AI to get to know you as a customer and help save you time.
And future use cases could potentially go a lot further, such as AI personal shoppers you can authorize to spend your money. Really.
First, though, here are some helpful terms to keep in mind, because not all AI is created equal:
Rules-based AI: When there’s a specific input, the program delivers a specific output. A customer service chatbot is one example — if it can’t answer your question, it won’t get creative to help you solve your issue. (See the sketch that follows these definitions.)

Generative AI: This form of AI can learn from the data it’s given to create new, unique responses when prompted by a user. ChatGPT is a generative AI model. Generative AI is a newer form of machine learning: where traditional machine learning analyzes data and recognizes patterns, generative AI can generate new content, including answers to credit card customers’ questions.

Agentic AI: Given some initial parameters, this form of AI can act autonomously to perform tasks or solve problems.
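To make the rules-based idea concrete, here’s a minimal, hypothetical sketch in Python. The keywords and replies are invented for illustration; the point is that every input maps to a fixed output, and anything unrecognized gets escalated rather than improvised.

```python
# Hypothetical rules-based support bot: each recognized input maps to a
# fixed output, and anything else falls through to a human agent.
RESPONSES = {
    "balance": "Your current balance is available under Account > Summary.",
    "due date": "Payments are due on the 15th of each month.",
    "lost card": "We've frozen your card. A replacement ships in 5-7 days.",
}

def rules_based_bot(message: str) -> str:
    for keyword, reply in RESPONSES.items():
        if keyword in message.lower():
            return reply
    # No matching rule: the bot can't improvise, so it escalates.
    return "I can't help with that. Transferring you to a representative."

print(rules_based_bot("When is my due date?"))
```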
How the credit card industry is currently using AI
Credit card companies collect large quantities of consumer data — demographic and financial information, spending patterns and more. Using AI makes it possible to efficiently put that data to work in a number of ways.
“AI gives us access to far more data than we can process as humans,” says Courtney Cardin, co-founder and chief product and business officer at Aura Finance, a financial wellness platform. “In the past 50 years, we’ve created massive amounts of electronic records, but our capacity to synthesize them is limited. AI bridges that gap.”
Detecting and preventing fraud
When you get a text alert about a suspicious charge on your credit card, that’s AI at work. It recognizes when a transaction falls outside of your typical spending habits, such as a purchase in another country.
“That’s a classic use case: AI understanding normal customer behavior and flagging deviations,” says Michael Storiale, senior vice president of innovation, payments and AI at Synchrony. “We’ve used AI for years in that way.”
Older uses of AI for fraud detection were more rules-based, according to Ranjita Iyer, executive vice president, services, North America at Mastercard. The model was programmed to respond a certain way if a card transaction fit, or drifted away from, certain patterns. But generative AI has made it possible to detect fraud more quickly, and it can even put up roadblocks to stop potentially fraudulent charges before they happen.
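A rules-based check of the older style Iyer describes can be pictured as a short list of hard-coded conditions. This is an illustrative sketch only; the fields and thresholds are invented, and real issuer rule sets are far more extensive.

```python
# Illustrative rules-based fraud check: fixed conditions, fixed responses.
def flag_transaction(txn: dict, profile: dict) -> bool:
    """Return True if the transaction should be flagged for review."""
    if txn["country"] not in profile["usual_countries"]:
        return True
    if txn["amount"] > 3 * profile["average_amount"]:
        return True
    if txn["merchant_category"] in profile["blocked_categories"]:
        return True
    return False

profile = {
    "usual_countries": {"US"},
    "average_amount": 85.00,
    "blocked_categories": set(),
}
txn = {"country": "FR", "amount": 40.00, "merchant_category": "dining"}
print(flag_transaction(txn, profile))  # True: outside the usual countries
```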
“We can simulate things that haven’t happened yet but might happen in the future,” Iyer says. “We’re able to predict whether a transaction falls into the realm of possibility given that combination of merchant, customer, country. And if it falls outside that, we can flag it as high-risk.”
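The model-based approach Iyer describes learns what “normal” looks like from past transactions and scores new ones against it. As a rough illustration (not Mastercard’s actual system), here’s a toy anomaly detector built on scikit-learn’s IsolationForest, trained on made-up transaction features.

```python
# Toy anomaly detector: learn "normal" spending from history, then score
# a new transaction. Real issuer models are far more sophisticated.
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented history: [amount, hour_of_day, is_foreign]
history = np.array([
    [12.50, 9, 0], [48.00, 13, 0], [7.25, 8, 0],
    [95.00, 19, 0], [22.10, 12, 0], [61.75, 18, 0],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(history)

new_txn = np.array([[1450.00, 3, 1]])  # large purchase, 3 a.m., foreign
print(model.predict(new_txn))          # -1 means anomalous, 1 means normal
```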
Understanding and serving customers
According to Cristián Bravo — professor and Canada Research Chair in banking and insurance analytics at Western University in London, Ontario — AI plays a role in a number of ways issuers get to know customers. It’s used to create targeted offers for potential new customers, and to determine someone’s likelihood of qualifying for a card. If an existing customer is at risk of taking their business to another issuer, AI can help create a retention offer to keep that customer loyal.
“Machine learning and AI, it’s ingrained across the full life cycle of the credit card process,” Bravo says. “It’s probably one of the most automatized and AI-supported processes.”
Intelligent virtual agents — such as Synchrony’s Sydney, Capital One’s Eno and Bank of America’s Erica — are often accessible via banking apps and websites, and they allow customers to ask questions or get insights on their spending habits. Customer service chatbots can be limited in the kinds of questions they can answer, but they still reduce the number of queries that require human assistance. Storiale says Sydney has fielded around 20 million customer conversations this year, resolving 80% of the issues raised.
Processes that ‘give people their time back’
Using AI speeds up a number of internal processes, such as customer support, credit decisions, compliance checks and the development of different tools.
According to Storiale, this saves Synchrony thousands of hours of manual work per year.
“We’re focused on how AI gives people their time back,” Storiale says.
AI agents: The next frontier (and hurdle)
Agentic AI, which can make decisions and take actions on your behalf with your consent, is the latest buzzword in the payments and commerce space. Think of it like having a personal assistant with permission to shop for you.
Let’s say you’re planning a vacation. While generative AI can help you craft an itinerary, you still have to book the trip yourself. With an AI agent, you can set parameters (such as location, size of your travel group, ages, activity preferences, and budget), and the agent can not only build the itinerary but also book flights, hotels, rental cars and tickets to attractions for you.
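As a hypothetical sketch of how those parameters might be expressed, the snippet below defines a trip request and a stub agent that “books” items only while staying under budget. Every name here is invented, and no real booking APIs are called; it only illustrates the idea of setting constraints up front and letting the agent act within them.

```python
# Invented sketch of an agentic AI request: constraints first, actions after.
from dataclasses import dataclass, field

@dataclass
class TripRequest:
    destination: str
    travelers: int
    ages: list[int]
    activity_preferences: list[str]
    budget_usd: float
    bookings_made: list[str] = field(default_factory=list)

def run_agent(request: TripRequest) -> TripRequest:
    # A real agent would call flight and hotel APIs; we just record intent.
    spent = 0.0
    for item, cost in [("flights", 600.0), ("hotel", 450.0),
                       ("museum tickets", 60.0)]:
        if spent + cost <= request.budget_usd:  # never exceed the budget
            request.bookings_made.append(item)
            spent += cost
    return request

trip = run_agent(TripRequest("Lisbon", 2, [34, 36], ["food", "museums"], 1500.0))
print(trip.bookings_made)
```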
But if an AI agent can spend your money, it needs to be secure. Ironically, AI itself is used to monitor AI agents and identify instances where an agent is acting unpredictably.
“AI is critical here — trying to figure out, ‘Is this agent behaving as expected? Does it have the right consent? Could it have been taken over? Is it going rogue?’” Iyer says. “Those are all things we’re addressing now.”
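One way to picture that oversight is a guardrail check that runs before any agent action executes, comparing the action against the consent the user actually granted. The fields below are invented for illustration; this is a sketch of the concept, not any company’s implementation.

```python
# Hypothetical "AI watching AI" guardrail: verify each proposed agent
# action against the consent the user granted before it executes.
def action_allowed(action: dict, consent: dict) -> bool:
    if action["merchant"] not in consent["approved_merchants"]:
        return False
    if action["amount"] > consent["per_purchase_limit"]:
        return False
    if action["amount"] + consent["spent_so_far"] > consent["total_budget"]:
        return False
    return True

consent = {
    "approved_merchants": {"AirlineX", "HotelY"},
    "per_purchase_limit": 800.00,
    "total_budget": 1500.00,
    "spent_so_far": 600.00,
}
print(action_allowed({"merchant": "HotelY", "amount": 450.00}, consent))   # True
print(action_allowed({"merchant": "CasinoZ", "amount": 450.00}, consent))  # False
```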
Other obstacles may require human fixes
Another major concern when it comes to using AI for credit decisions — extending offers, approving or denying applications, creating customer profiles based on spending patterns — is bias.
For example, bias can be built into the algorithms used to determine whether you qualify for a card in the first place and, if you do, what credit limit and interest rate you’ll be assigned. In 2019, the New York State Department of Financial Services launched an investigation into Apple’s co-branded credit card following complaints by opposite-sex couples who claimed that husbands were granted higher credit limits on the card than their wives were, despite the couples otherwise sharing finances.
The investigation determined that there was no deliberate discrimination, but the report on the investigation acknowledged how current credit scores are affected by decades of past racial and gender discrimination. And because underwriting models rely on past lending data, unintended bias can affect access to credit.
But fixing the algorithm requires getting humans on the same page first. “The complex discussions need to happen more, not at the AI level, but at the political level, at the regulatory level,” Bravo says. Algorithms can then be designed around what the law defines as fair access to credit, with weighting built in to account for past access (or lack of access) to credit.
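One published example of the kind of weighting Bravo alludes to is “reweighing” (Kamiran and Calders, 2012), which weights training records so that a protected attribute and the lending outcome look statistically independent. The sketch below is illustrative only, with made-up data, and is not how any particular issuer builds its models.

```python
# Reweighing sketch: weight each record by P(group) * P(label) / P(group, label)
# so the protected attribute and the outcome appear independent in training.
from collections import Counter

def reweighing_weights(groups: list[str], labels: list[int]) -> list[float]:
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    # Expected frequency under independence divided by observed frequency.
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]  # group "a" was approved more often in past data
print(reweighing_weights(groups, labels))
# Approvals in group "b" get upweighted; approvals in group "a" downweighted.
```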