Why Fake Customers Are a Real Problem for Lenders
Credit unions are still at high risk of being targeted by synthetic fraudsters, but AI and machine learning can help.
The bad news: Small lenders and credit unions are still at high risk of being targeted by synthetic fraudsters.
Even as synthetic fraud dips at the big banks, it remains an incredibly serious challenge for all lenders. In a synthetic fraud scheme, a scammer steals the Social Security number of someone who doesn't already have a credit file, such as a child, and slowly builds up a credit history by going after low-hanging credit fruit (think department store credit cards). Even if the scammer is rejected for those cards, they've established a credit history they can use to apply for increasingly mainstream credit products, climbing the credit ladder until they land a car loan, and then they quickly disappear.
Spotting synthetic fraud is a real challenge. According to a Federal Reserve report, 85% to 95% of synthetic applicants are not flagged by traditional fraud models, a problem that, in the U.S. alone, is expected to cost banks $1.26 billion by 2020, up from $968 million in 2008, according to the Aite Group. One consumer credit company we work with attributes 45% of its losses to synthetic fraud. (According to TransUnion, synthetic fraud in credit card applications was actually up slightly in the second quarter of 2019.)
The problem is that synthetic fraudsters are so meticulous about the details they add to their credit files that they almost look like the real thing. When screening applicants, most banks rely heavily on FICO scores, which aren't great at catching synthetic fraud because they look at only a handful of factors.
Artificial intelligence and machine learning can help. By going beyond the FICO score, ML can find hidden connections that could indicate an applicant is not who they say they are. But smaller banks have been slow to adopt ML. According to a 2018 survey by Fannie Mae, 76% of larger banks and 67% of mid-sized banks are familiar with AI and ML technology, compared to only 47% of smaller institutions.
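To make the idea of "hidden connections" concrete, here is a minimal sketch of the kind of beyond-the-score signals a fraud model might consume. The field names (phone, address, dob, credit_history_months, fico) and the specific checks are illustrative assumptions, not a description of any particular lender's or vendor's model.

```python
# Illustrative feature engineering for synthetic-fraud screening.
# Column names and checks are hypothetical examples, not a real schema.
import pandas as pd

def build_fraud_features(apps: pd.DataFrame) -> pd.DataFrame:
    """Derive beyond-the-score signals from a batch of credit applications."""
    feats = pd.DataFrame(index=apps.index)

    # Velocity signals: how many applications in this batch share a phone or address?
    feats["apps_sharing_phone"] = apps.groupby("phone")["phone"].transform("count")
    feats["apps_sharing_address"] = apps.groupby("address")["address"].transform("count")

    # Thin-file signal: a credit history far shorter than the applicant's age implies.
    feats["history_months"] = apps["credit_history_months"]
    feats["age_years"] = (pd.Timestamp("today") - pd.to_datetime(apps["dob"])).dt.days / 365.25
    feats["history_vs_age_ratio"] = feats["history_months"] / (feats["age_years"] * 12)

    # Keep the score too: the point is to add signals, not discard it.
    feats["fico"] = apps["fico"]
    return feats

# Tiny hypothetical batch: the first two applications share a phone and address.
apps = pd.DataFrame({
    "phone": ["555-0101", "555-0101", "555-0199"],
    "address": ["12 Oak St", "12 Oak St", "9 Elm Ave"],
    "dob": ["1990-04-01", "1990-04-01", "1958-11-20"],
    "credit_history_months": [6, 7, 240],
    "fico": [640, 645, 710],
})
print(build_fraud_features(apps))
```

In a real deployment these features would sit alongside hundreds of others and feed a trained classifier; the sketch only shows that useful signals exist outside the score itself.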
And when using ML, all banks need to proceed with caution. The models banks use to screen applicants can't be black boxes: systems where the bank knows what information goes in and what decision comes out, but can't explain how the algorithm decided whether to approve a loan. That opacity creates the risk that banks could violate fair lending laws without even knowing it. To protect themselves from accusations that their ML is biased, banks often route flagged applications to call centers, where a human reviewer, finding no clear credit reason to reject, will likely decide everything looks in order and approve the loan.
It's possible, however, to build ML algorithms that aren't black boxes. Understanding the connections an algorithm finds is as important as the results coming out of the model. With the right kind of explainability, an ML system can pinpoint exactly why someone was rejected for a loan and give that applicant a valid credit reason for the rejection. For example, an applicant may be legitimately rejected because they have an unused $100,000 home equity line at 3.25% yet are applying for a $10,000 car loan at 4.75%; a real borrower with cheaper credit sitting idle rarely pays more to borrow less, so the mismatch itself is a defensible reason to decline.
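As an illustration of how a model can hand back a concrete reason rather than a black-box verdict, here is a minimal sketch that ranks per-feature contributions to an individual decision. It uses a plain logistic regression on made-up data purely because its contributions are easy to read off; it is not the author's system, and the feature names are hypothetical.

```python
# Minimal "reason code" sketch: rank each feature's contribution to one decision.
# Logistic regression on synthetic data, chosen only for its transparency.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["unused_cheaper_credit", "credit_history_months", "inquiries_90d"]

# Made-up training data: label 1 means the application was declined or flagged.
X = rng.normal(size=(500, 3))
y = (0.9 * X[:, 0] - 0.6 * X[:, 1] + 0.4 * X[:, 2]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def reason_codes(x: np.ndarray, top_k: int = 2) -> list:
    """Rank features by their contribution to the log-odds of rejection,
    measured relative to the average applicant in the training data."""
    contributions = model.coef_[0] * (x - X.mean(axis=0))
    order = np.argsort(contributions)[::-1]  # biggest push toward rejection first
    return [(feature_names[i], round(float(contributions[i]), 3)) for i in order[:top_k]]

# Hypothetical applicant: lots of unused cheaper credit, a thin file, few inquiries.
print(reason_codes(np.array([2.0, -1.5, 0.2])))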
Bank modelers also have to target the right time period in which to spot fraud. For example, working with a credit card client, we discovered that most fraudulent accounts default within six months. So we trained an ML system on data from accounts that had defaulted in that short time frame. It turned out the model was excellent at finding fraudulent applicants, because so many of the signals that someone was going to default early also indicated that the applicant was fake.
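A minimal sketch of that setup, assuming a 180-day cutoff, made-up column names, synthetic data, and an off-the-shelf gradient-boosted classifier (none of which come from the client engagement described above), might look like this:

```python
# Sketch: label accounts that defaulted within ~six months of booking and
# train a classifier on application-time features only. All data is synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Synthetic stand-in for a book of credit card accounts; real work would start
# from portfolio data with application-time features joined to outcomes.
accounts = pd.DataFrame({
    "fico": rng.integers(550, 800, n),
    "credit_history_months": rng.integers(3, 300, n),
    "inquiries_90d": rng.poisson(2.0, n),
    "apps_sharing_phone": 1 + rng.poisson(0.3, n),
    # Days from origination to default; NaN means the account never defaulted.
    "days_to_default": np.where(rng.random(n) < 0.1,
                                rng.integers(30, 720, n), np.nan),
})

# Proxy label: the account defaulted within roughly six months of booking.
accounts["early_default"] = (accounts["days_to_default"] <= 180).astype(int)

# Train only on information available at application time to avoid leakage.
features = ["fico", "credit_history_months", "inquiries_90d", "apps_sharing_phone"]
X_train, X_test, y_train, y_test = train_test_split(
    accounts[features], accounts["early_default"],
    test_size=0.25, random_state=0, stratify=accounts["early_default"])

model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", round(model.score(X_test, y_test), 3))
```

The key design choice is restricting the feature set to what was known at application time, so the early-default label can serve as a proxy for fraud without leaking outcome data into the model.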
Fraudsters are going to get better at fooling the algorithms, but technology will evolve apace. Loan applications could eventually require biometric inputs such as fingerprint, retina or facial scans, which would drastically reduce synthetic fraud and save financial institutions from unacceptably high losses and countless hours in customer service calls. But it's an open question whether financial institutions and consumers will embrace them or balk at the privacy concerns. While the industry develops better techniques to separate the fake from the real, machine learning can pick up the slack.
Jay Budzik is the chief technology officer for Zest AI (formerly ZestFinance). He can be reached at partner@zestfinance.com.