Fair Is Fair: How Credit Unions Can Unbias AI

Trust in AI will be hard-won, but we must develop and adopt the tools to make it safe to use.

Credit unions should lead the charge in the fair use of AI.

Artificial intelligence has ushered in an underwriting revolution, promising broader access to credit with fewer defaults. Machine learning (ML) algorithms use complex math to hunt for patterns in mountains of data, allowing lenders to assess risk more accurately than they could in the past.

That’s why more credit unions are embracing ML. By the end of next year, according to a Fannie Mae survey of mortgage lenders, 71% of credit unions plan to investigate, test or fully implement AI/ML solutions – up from just 40% in 2018.

But even as credit unions embrace ML, they're wary of the risk of bias. Humans can express bias without realizing they're doing it, and because humans write the algorithms, it's reasonable to worry that the algorithms will be biased too.

In October, the New York State Department of Financial Services said it would investigate insurance giant UnitedHealth Group for using an algorithm that, according to a study published in Science, the journal of the American Association for the Advancement of Science, lowballed the number of African American patients in need of extra care by more than half. Two weeks later, Goldman Sachs, backer of the new Apple Card, came under fire when Apple co-founder Steve Wozniak revealed that his wife was approved for a fraction of the credit he received, even though the couple files joint tax returns.

Credit unions, which pride themselves on member service and human touch, are recognizing the potential impact of AI. But the key to making the trend stick, especially in the current climate, is fairness testing – and it’s well within the industry’s grasp.

Today, underwriters rely on primitive methods to test their credit-scoring models for fairness. If they notice a racial or gender disparity, they might remove a variable (such as income or ZIP code) to restore balance. But dropping a key variable dilutes a model's predictive power, and in an ML model it may simply shift the bias onto other variables. Either way, a chorus of angry regulators could soon be knocking at the door.
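To make that baseline concrete, here is a minimal sketch in Python of the kind of disparity check most fairness testing starts with: the "four-fifths" adverse-impact ratio, a rule of thumb borrowed from employment law and commonly applied in fair-lending analysis. The groups, decisions and threshold here are illustrative, not any lender's actual method.

```python
# Minimal sketch of an adverse-impact check: compare approval rates
# across two groups of applicants. All data here is made up.

def approval_rate(decisions):
    """Share of applicants approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, reference):
    """Protected-group approval rate divided by reference-group rate.
    By the four-fifths rule of thumb, values below 0.8 are a red flag."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical model decisions for two groups of applicants
reference_group = [True, True, True, False, True, True, False, True]
protected_group = [True, False, False, True, False, True, False, False]

ratio = adverse_impact_ratio(protected_group, reference_group)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold; investigate further.")
```

A check like this tells you a disparity exists, but not which variables are driving it, which is why simply deleting a variable so often fails to fix the problem.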

The good news is that AI providers have come a long way in building tools to keep algorithms in check. My company, Zest AI, has been tackling this problem since 2009. Google, IBM and Microsoft have all recently introduced new techniques for diagnosing bias in ML models. Any "fair AI" solution should be able to trace the impact of every variable through the model to its outcomes, and do so across all the various protected classes such as gender, age and race. The best solutions take that a step further and actively remove the bias from the model.

One technique we use, called adversarial debiasing, pits two algorithms against each other: one model estimates creditworthiness while a second predicts the race or gender of the borrower being scored. The dueling algorithms "learn" through the competition until the predictions of the credit-scoring model are race- and gender-blind.
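To show what that duel looks like in code, here is a short, illustrative sketch in Python using PyTorch. It is a toy under stated assumptions, not Zest AI's production system: the network sizes, the synthetic data and the `lam` penalty weight are all placeholders.

```python
# Illustrative adversarial debiasing: a scorer predicts repayment while
# an adversary tries to predict a protected attribute from the score.
# The scorer is rewarded for fooling the adversary, pushing its scores
# toward independence from race or gender. Toy data throughout.
import torch
import torch.nn as nn

n_features = 10
scorer = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_scorer = torch.optim.Adam(scorer.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # fairness penalty weight (a tuning choice, assumed here)

# Synthetic stand-ins: features X, repayment labels y, protected attribute z
X = torch.randn(256, n_features)
y = torch.randint(0, 2, (256, 1)).float()
z = torch.randint(0, 2, (256, 1)).float()

for step in range(200):
    # 1) Train the adversary to guess z from the (frozen) credit score
    opt_adv.zero_grad()
    adv_loss = bce(adversary(scorer(X).detach()), z)
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the scorer to predict repayment while fooling the adversary
    opt_scorer.zero_grad()
    score = scorer(X)
    loss = bce(score, y) - lam * bce(adversary(score), z)
    loss.backward()
    opt_scorer.step()
```

The subtraction in the scorer's loss is the competition: the scorer gets credit for making the adversary's job harder, so over many rounds its scores carry less and less information about the protected attribute.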

Credit unions will need these tools and more as the fairness debate spills onto the national stage. Democratic presidential candidates Elizabeth Warren, Bernie Sanders and Pete Buttigieg have voiced concerns about model bias. In a joint letter opposing a Trump administration bid to restrict housing-finance discrimination suits, 22 state attorneys general, including those of California, Illinois, North Carolina and Virginia, pointed to "extensive literature" suggesting that algorithmic models yield "discriminatory results." Agencies such as the Department of Housing and Urban Development, the Federal Trade Commission and the Federal Reserve have also signaled their intention to police AI bias.

This conversation is welcome – and overdue. We know that traditional underwriting methods are oversimplified and can lead to biased outcomes. We also know that customers want a fairer system and are willing to share more personal data to get it. A recent Harris Poll survey found that eight out of 10 people – hailing from all demographic groups – think banks should use cutting-edge technologies to assess their creditworthiness.

Three-digit credit scores are anything but cutting edge and don’t fully capture the creditworthiness of millions of American consumers. ML can give credit unions a far more nuanced picture – and with the right monitoring can also give members better and fairer access to credit.

Trust in AI will be hard-won, but we have to develop and adopt the tools to make it safe to use.

Jay Budzik

Jay Budzik is Chief Technology Officer for Zest AI, a provider of artificial intelligence tools based in Burbank, Calif.