Can AI and Fair Lending Coexist?

The premise of our current fair lending laws is incompatible with the concept of AI. What should CUs do?


That is the key question that lenders of all shapes and sizes will be struggling with for years to come as they seek to harness paradigm-shifting artificial intelligence within a framework of laws developed when AI was the stuff of fantasy gleaned from the latest episode of Star Trek. The resulting disconnect between technology and the law raises substantial compliance and legal risks that, if left unaddressed, pose challenges for even the biggest credit unions and will inevitably widen the growing divide between large and small lenders within the credit union industry.

The field is so new that we don’t even have settled terms to guide our legal and operational analysis. For purposes of this column, I like this description of AI provided by the Acting Comptroller of the Currency in a recent speech touching on these very topics: “AI systems, which are generally based on neural networks, are not programmed explicitly like most software. They require training, and their outputs are not predictable … [this] creates a fundamental problem: Since AI systems are built to ‘learn,’ they may or may not do what we want or behave consistent with our values.”

The most basic question facing early adopters is the classic one: Are the benefits of AI lending worth the risk? Simply put, our fair lending laws, most notably the ECOA, are predicated not only on the assumption that we should ban intentional discrimination, but also on the assumption that we can identify lending criteria that have the effect of discriminating. That premise is incompatible with AI, which by definition extrapolates lending decisions from huge data sets and draws conclusions that the human mind alone cannot reach. On the bright side, these issues have been discussed and analyzed within the technology sector for years now, and published research by the FinTecLab strongly suggests that, over time, at least some of them can be addressed. But in the short to medium term, this still means that your typical financial institution would have to employ people knowledgeable enough to translate AI algorithms into terms comprehensible to regulators and borrowers alike.
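To make the effects test concrete, consider a deliberately simplified sketch of my own (not drawn from regulatory guidance or any actual underwriting system): a short Python script that compares approval rates across applicant groups and flags any group whose rate falls below roughly 80% of the best-performing group’s rate, a rule of thumb borrowed from employment-discrimination analysis. The group names and figures are hypothetical, and a real fair lending review is far more involved.

```python
# Hypothetical illustration of an "effects test" style disparate impact check.
# Group names and counts are made up; this is not a compliance tool.

def adverse_impact_ratios(approvals):
    """approvals maps group name -> (approved applications, total applications)."""
    rates = {group: approved / total for group, (approved, total) in approvals.items()}
    benchmark = max(rates.values())  # the highest approval rate among the groups
    return {group: rate / benchmark for group, rate in rates.items()}

sample = {
    "group_a": (450, 600),  # 75% approved
    "group_b": (210, 400),  # 52.5% approved
}

for group, ratio in adverse_impact_ratios(sample).items():
    # Ratios below roughly 0.8 are commonly treated as a signal warranting further review.
    status = "warrants review" if ratio < 0.8 else "no flag"
    print(f"{group}: ratio = {ratio:.2f} ({status})")
```

The arithmetic is trivial. The difficulty described above is that when the “criteria” live inside a trained model rather than a rate sheet, explaining and defending why the rates diverge becomes the hard part.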

Furthermore, since the AI systems used by institutions will be forever evolving, institutions will have to legally defend an ever-shifting target. Think about the number of financial institutions that already face hugely complicated and contentious fair housing discrimination claims predicated on analysis of a fixed universe of criteria. That same legal framework will make it almost impossible to successfully defend against claims that AI-derived criteria discriminate in lending decisions. In other words, even if the regulators can be placated, AI lending will expose any financial institution that adopts it to never-ending litigation. And aside from the legal risk, there are the enormous operational investments that will have to be made. In some ways, data is as valuable to the modern-day economy as oil was in the 20th century. AI will simply accelerate this trend, making access to clean data sets essential for all financial institutions while putting the cost of such data sets beyond the reach of the average institution.

So what are the solutions? First, the financial industry in general, and consumer lenders in particular, need to explain to Congress the need to update fair lending laws by modifying the effects test and developing a framework for certifying data quality. Just as we created the FDA to ensure the safety of drugs before they are released to the public, it is time for the government to develop a mechanism for the pre-approval of data integrity, with certified data sets accessible to all financial institutions regardless of their size.

Assuming that congressional action will not be coming anytime soon, it is time for credit unions to consider funding their own association dedicated to cost-effectively researching and sharing AI information across the entire industry. The credit union industry is, after all, a cooperative one.

At the very least, credit unions also need to advocate for due diligence protocols that allow small institutions to work with vendors without being at the mercy of one-sided contracts that expose them to all the risks inherent in AI lending. A final option is, of course, for your institution to simply forgo AI lending and continue to do things the way you have always done them. But this is tantamount to admitting that your credit union will be merged out of existence.

AI is a paradigm-shifting technology that will fundamentally change the way business is done in general and the way consumer banking is conducted in particular. Unfortunately, the laws are not keeping pace with that shift.

Henry Meier, Esq.

Henry Meier is the former General Counsel of the New York Credit Union Association, where he authored the popular New York State of Mind blog. He now provides legal advice to credit unions on a broad range of legal, regulatory and legislative issues. He can be reached at (518) 223-5126 or via email at henrymeieresq@outlook.com.