Are Your AI Tools Making Discriminatory Lending Decisions?

Algorithms that use Artificial Intelligence (AI) and Machine Learning (ML) to automate complex tasks are revolutionizing the financial services industry—but they can also involve risk.
Credit scoring algorithms are helping financial institutions reduce loan default rates, refine their loan approval criteria, streamline the application process, reduce staffing, and enhance the customer experience.
However, these methods carry inherent risks. Even though AI-driven algorithms appear to be neutral, they may not be. If the results of their computations can be shown to discriminate against a class of borrowers, such as members of an ethnic group, financial institutions may face financial, reputational, and legal risk. Unintended discrimination is still discrimination. It can mean paying damages to settle government enforcement actions or private litigation, and apologizing to the public, which is never a good thing.
When assessing creditworthiness, lenders have traditionally focused on data such as borrowers' debt-to-income and loan-to-value ratios and their payment and credit histories. But newer creditworthiness analytics, drawing on internet usage data, also consider borrowers' browsing, shopping, and entertainment consumption habits.
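To make the traditional measures concrete, here is a minimal Python sketch of the two ratios; the figures are hypothetical, and exact underwriting definitions vary by lender and product.

```python
def debt_to_income(monthly_debt_payments: float, gross_monthly_income: float) -> float:
    """Debt-to-income (DTI): share of gross monthly income consumed by debt payments."""
    return monthly_debt_payments / gross_monthly_income


def loan_to_value(loan_amount: float, appraised_property_value: float) -> float:
    """Loan-to-value (LTV): loan amount as a fraction of the collateral's appraised value."""
    return loan_amount / appraised_property_value


# Hypothetical borrower: $2,100/month in debt payments on $6,000 gross income,
# seeking a $240,000 loan against a $300,000 property.
print(f"DTI: {debt_to_income(2_100, 6_000):.0%}")     # DTI: 35%
print(f"LTV: {loan_to_value(240_000, 300_000):.0%}")  # LTV: 80%
```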
Data from social media, such as the average credit score of an applicant's "friends," may be a useful predictor of default—but can also lead to denying loans to individuals who are in fact creditworthy.
How algorithms select and analyze variables and identify meaningful patterns within large pools of data is not always clear, even to a program's developers. This lack of algorithmic transparency makes it hard to determine where and how bias enters the system.
Lenders should expect to have to explain the basis for algorithm-based denials of credit in ways that are easily understood by the customer and yet formulaic enough to work at scale.
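One common pattern, sketched below with a hypothetical linear scoring model and purely illustrative weights, averages, and reason codes, is to translate the features that most lowered an applicant's score into standardized, plain-language adverse-action reasons. A production system would derive these values from the lender's actual model.

```python
# Hypothetical linear scoring model: all weights and population means are illustrative.
WEIGHTS = {"dti": -120.0, "ltv": -80.0, "late_payments_24m": -35.0, "credit_history_years": 6.0}
POPULATION_MEANS = {"dti": 0.30, "ltv": 0.75, "late_payments_24m": 1.0, "credit_history_years": 8.0}
REASON_CODES = {
    "dti": "Debt obligations are high relative to income.",
    "ltv": "Requested loan amount is high relative to collateral value.",
    "late_payments_24m": "Recent history of late payments.",
    "credit_history_years": "Limited length of credit history.",
}


def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Rank features by how much they lowered the applicant's score relative to
    the population average, and map the worst offenders to plain-language reasons."""
    contributions = {
        name: WEIGHTS[name] * (applicant[name] - POPULATION_MEANS[name])
        for name in WEIGHTS
    }
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [REASON_CODES[name] for name in worst if contributions[name] < 0]


print(adverse_action_reasons(
    {"dti": 0.45, "ltv": 0.95, "late_payments_24m": 3, "credit_history_years": 2}
))
# ['Recent history of late payments.', 'Limited length of credit history.']
```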
Companies should ensure that developers of AI and ML programs are trained on fair lending discrimination laws and can identify and prevent discriminatory outcomes.
Analyzing data inputs to identify systemic or selection bias is essential. Some financial services companies are starting to publish technical details of their algorithms' design and data sets so that an independent third party can review and test them for potentially discriminatory results.
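As a rough illustration of one such input check, the sketch below compares a training sample's group composition against a reference population; all group names and figures are hypothetical.

```python
# Compare a training sample's group composition with a reference population;
# a large gap suggests selection bias in how the training data were gathered.
def representation_gaps(sample_counts: dict, population_shares: dict) -> dict:
    total = sum(sample_counts.values())
    return {g: sample_counts[g] / total - population_shares[g]
            for g in population_shares}


# Hypothetical figures: this sample under-represents group B by about 13 points.
print(representation_gaps(
    sample_counts={"group_a": 9_000, "group_b": 1_000},
    population_shares={"group_a": 0.77, "group_b": 0.23},
))
# group_a: roughly +0.13, group_b: roughly -0.13
```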
Financial institutions that use borrowers' personal attributes as inputs to a creditworthiness algorithm should document both a business justification and testing that demonstrates unbiased outcomes.
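One widely used first screen for such testing is the four-fifths (80%) rule, which compares each group's approval rate to that of the most-favored group. The sketch below applies it to hypothetical outcomes; it is a quick screen, not a substitute for a full fair-lending analysis.

```python
# Four-fifths (80%) rule check, a common first screen for disparate impact:
# compare each group's approval rate with the most-favored group's rate.
def disparate_impact_ratios(approvals: dict, applications: dict) -> dict:
    rates = {g: approvals[g] / applications[g] for g in applications}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}


# Hypothetical outcomes: group B's ratio of 0.64 falls below the 0.8 threshold,
# flagging the model's results for closer fair-lending review.
ratios = disparate_impact_ratios(
    approvals={"group_a": 700, "group_b": 450},
    applications={"group_a": 1_000, "group_b": 1_000},
)
for group, ratio in ratios.items():
    print(group, f"{ratio:.2f}", "FLAG" if ratio < 0.8 else "ok")
```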
Smart AI users will get the benefits of this transformative technology while anticipating and mitigating the risks of unintended discrimination resulting from its use.