Are Your AI Tools Making Discriminatory Lending Decisions?

  • Published: July 5, 2018
  • Categories: ICT, Enterprise Network, Banking & Insurance

Algorithms that use Artificial Intelligence (AI) and Machine Learning (ML) to automate complex tasks are revolutionizing the financial services industry—but they can also involve risk.

Credit scoring algorithms are helping financial institutions reduce loan default rates, tweak their loan approval criteria, streamline the application process, reduce staffing and enhance the customer experience.

However, these methods carry inherent risks. Even though AI-driven algorithms appear to be neutral, they may not be. If the results of their computations can be shown to discriminate against a class of borrowers, such as members of an ethnic group, financial institutions may face financial, reputational and legal risk. Unintended discrimination is still discrimination: it can lead to damages paid to settle government enforcement actions or private litigation, and to the need for a public apology, which is never a good thing.

When assessing creditworthiness, lenders have traditionally focused on data such as borrowers’ debt-to-income and loan-to-value ratios and their payment and credit histories. But with the help of internet-usage data, newer creditworthiness analytics also consider borrowers’ browsing, shopping and entertainment consumption habits.
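
As a simple illustration, the two traditional ratios mentioned above are plain arithmetic. The sketch below uses made-up figures, not data from any real lender:

```python
# Illustrative only: the two traditional underwriting ratios, with made-up figures.

def debt_to_income(monthly_debt_payments: float, gross_monthly_income: float) -> float:
    """DTI: share of gross monthly income that goes to debt payments."""
    return monthly_debt_payments / gross_monthly_income

def loan_to_value(loan_amount: float, appraised_property_value: float) -> float:
    """LTV: loan amount relative to the value of the collateral."""
    return loan_amount / appraised_property_value

# Hypothetical applicant
print(f"DTI: {debt_to_income(1_400, 4_000):.0%}")     # 35%
print(f"LTV: {loan_to_value(180_000, 225_000):.0%}")  # 80%
```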

Data from social media, such as the average credit score of an applicant's "friends," may be a useful predictor of default—but can also lead to denying loans to individuals who are in fact creditworthy.

How algorithms select and analyze variables and identify meaningful patterns within large pools of data is not always clear, even to a program's developers. This lack of algorithmic transparency makes it hard to determine where and how bias enters the system.

Lenders should expect to have to explain the basis for algorithm-based denials of credit in ways that are easily understood by the customer and yet formulaic enough to work at scale.
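
One common way to keep such explanations both understandable and scalable is to turn the per-feature contributions of the scoring model into customer-facing "reason codes." The sketch below assumes a simple linear score; the feature names, weights, cutoff and reason texts are hypothetical, not any particular lender's model:

```python
# A minimal sketch of reason codes for a credit denial, assuming a linear score.
# All feature names, weights and the cutoff are hypothetical illustrations.

WEIGHTS = {
    "debt_to_income": -40.0,      # higher DTI lowers the score
    "loan_to_value": -25.0,       # higher LTV lowers the score
    "late_payments_24m": -15.0,   # recent late payments lower the score
    "credit_history_years": 2.0,  # a longer history raises the score
}
BASE_SCORE = 700.0
CUTOFF = 660.0

REASON_TEXT = {
    "debt_to_income": "Monthly debt is high relative to income",
    "loan_to_value": "Requested loan is large relative to the collateral value",
    "late_payments_24m": "Recent late payments on existing obligations",
    "credit_history_years": "Limited length of credit history",
}

def score_and_reasons(applicant: dict, top_n: int = 2):
    """Return the score and, for denials, the main reasons in plain language."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BASE_SCORE + sum(contributions.values())
    if score >= CUTOFF:
        return score, []
    # The most negative contributions become the customer-facing reasons.
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return score, [REASON_TEXT[f] for f in worst]

score, reasons = score_and_reasons(
    {"debt_to_income": 0.55, "loan_to_value": 0.95,
     "late_payments_24m": 2, "credit_history_years": 3}
)
print(round(score), reasons)  # 630, late payments and loan-to-value cited
```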

Companies should ensure that developers of AI and ML programs are trained on fair-lending and anti-discrimination laws and can identify and prevent discriminatory outcomes.

Analyzing data inputs to identify systemic or selection bias is essential. Some financial services companies are starting to publish technical details of their algorithms’ design and data sets—so that an independent third party can review and test for potential discriminatory results.
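
One basic input check is to compare the demographic composition of the training data with that of the overall applicant population; a training set that under-represents a group is a classic source of selection bias. The group names, counts and threshold below are invented for illustration:

```python
# A minimal sketch of a selection-bias check, assuming the lender can compare
# group shares in the training data against the applicant population.
# Group names, counts and the 0.8 threshold are invented for illustration.

from collections import Counter

training_sample = ["group_a"] * 880 + ["group_b"] * 120
applicant_population = ["group_a"] * 700 + ["group_b"] * 300

def group_shares(records):
    counts = Counter(records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

train = group_shares(training_sample)
population = group_shares(applicant_population)

for group, pop_share in population.items():
    ratio = train.get(group, 0.0) / pop_share
    flag = "UNDER-REPRESENTED" if ratio < 0.8 else "ok"
    print(f"{group}: training share {train.get(group, 0.0):.0%} "
          f"vs. population {pop_share:.0%} -> {flag}")
```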

Financial institutions using borrowers’ personal attributes as inputs for a creditworthiness algorithm should document a business justification, along with test results that demonstrate unbiased outcomes.
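
On the outcome side, one widely used rule of thumb is the "four-fifths" disparate-impact test, borrowed from US employment-discrimination guidelines and often applied in fair-lending analysis: each group's approval rate should be at least 80% of the most-favoured group's rate. The approval counts below are invented for illustration:

```python
# A minimal sketch of a disparate-impact ("four-fifths rule") check on model
# outcomes. The approval and application counts are invented for illustration.

approvals = {"group_a": 620, "group_b": 160}      # approved applications
applications = {"group_a": 1000, "group_b": 400}  # total applications

rates = {g: approvals[g] / applications[g] for g in applications}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    status = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, "
          f"impact ratio {impact_ratio:.2f} -> {status}")
```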

Smart AI users will get the benefits of this transformative technology while anticipating and mitigating the risks of unintended discrimination resulting from its use.

Author: Tamás Keiger, Lead Sales Manager, Financial Sector, T-Systems Hungary