July 23, 2019
Excerpt: Artificial intelligence (AI) may be streamlining the lending process by helping lenders determine who is creditworthy enough for a loan, but there is a concern that AI may be creating disparate effects among different classes of applicants.
Algorithms that apply artificial intelligence (AI) to assess the creditworthiness of loan applicants may still be just as biased as human practices.
Technology has been exploding in the world of banking in recent years. Banks are investing heavily in digital technology, giving their clients a more convenient platform to do their banking and obtain loans while also helping banks stay relevant and competitive, particularly in the face of FinTech.
But as useful as technology continues to be, could it have a downside for banking? More specifically, is technology creating a biased atmosphere in credit scoring among consumers?
These days, financial institutions are increasingly using artificial intelligence (AI) to help determine who is eligible for loan approval, relying on it to make decisions about credit scores and credit risk.
Digital technology allows banks to save a lot of time and effort by factoring in many different variables to help them make much faster, more accurate decisions while weeding out fraudulent applications. It's this streamlined process that has more and more financial institutions adopting AI into their lending tactics.
AI algorithms help to find associations between consumer behavior and various relevant factors and variables. As a result, much larger sets of data can be factored into credit calculations.
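As a rough illustration of the idea above, the sketch below fits a simple logistic regression on synthetic applicant data. The feature names, coefficients, and data are all hypothetical assumptions for demonstration; real lender models are far larger and proprietary, but the principle is the same: the model learns associations between applicant variables and default risk, then scores new applications.

```python
# Hypothetical sketch: how a credit model associates applicant variables
# with default risk. All data and feature names here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: income (in $10k), debt-to-income ratio,
# and years of credit history.
income = rng.normal(6, 2, n)
dti = rng.uniform(0.1, 0.6, n)
history = rng.uniform(0, 20, n)

# Synthetic labels: higher DTI raises default odds; higher income
# and a longer credit history lower them.
logits = 15.0 * dti - 0.4 * income - 0.1 * history
defaulted = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([income, dti, history])
model = LogisticRegression().fit(X, defaulted)

# Score a new applicant: predicted probability of default.
applicant = np.array([[5.0, 0.45, 3.0]])
prob_default = model.predict_proba(applicant)[0, 1]
print(f"Estimated default probability: {prob_default:.3f}")
```

The bias concern discussed in this article arises when an apparently neutral feature in such a model acts as a proxy for a protected attribute, so that the learned associations disadvantage certain classes of applicants even though the protected attribute itself is never an input.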
But as useful and convenient as technology may be in helping financial institutions sift through loan applications, is it inadvertently creating discriminatory practices?
The software used in this realm depends on algorithms that make use of AI, but such algorithms might actually be perpetuating the bias that the U.S. Equal Credit Opportunity Act has been trying to combat.
Are loan applicants being discriminated against based on their socioeconomic class, even with the use of AI to decipher who's eligible for a loan, and who's not?
Discrimination based on race is illegal under this Act, but bias still persists, even with machine-based assistance. According to a recent study, lending decisions made by machine learning systems charged some borrowers slightly higher interest rates; the study concluded that algorithms meant to streamline the lending process still discriminate based on socioeconomic status. And while the bias may not necessarily be worse than in human-made decisions, it still exists.
If these AI models are giving applicants of certain classes higher interest rates, they appear to be creating disparate effects across classes, especially given that these platforms are not supposed to consider ethnicity at all. Variables that are directly associated with loan risk and potential default, by contrast, can and should be considered.
It's this "loophole" that gives lenders the go-ahead to make credit decisions that many would consider unfair.
Considering the potential societal consequences of bias from AI, lawmakers will have to determine which practices are unlawful and what structures may be required to protect consumers against discriminatory lending practices.
Policymakers will have to think long and hard about anti-discriminatory structures - for both traditional human practices and today's machine-led processes - to understand and combat any new issues that AI and big data have brought about.
While banks and other financial institutions have technological tools at their fingertips to help make the right decisions about where to allocate their loan capital, due diligence is still important to ensure that sound loan assets are kept on the books. Otherwise, selling off potentially risky assets and replacing them with stronger ones should be on the agenda, and Garnet Capital can help.
Sign up for our newsletter today.