Verge

An AI audit tool for banks and lenders that reviews banks’ algorithmic loan decisions for fairness in lending.

Overview
Verge is an AI audit tool for banks and lenders that reviews banks’ algorithmic loan decisions for fairness in lending. As a cross-functional team of data scientists, a UX researcher, and a UX designer, we took Verge from ideation and research through high-fidelity design and algorithm training, grounded in interviews with banking staff and Kaggle data. Guided by principles of trustworthiness, transparency, and ethics, I designed this human-in-the-loop AI tool to help financial decision makers make fair decisions and stay compliant with financial regulations. Our project won second place in the final project voting and peer evaluation.
My Role
UX Designer
Co-Researcher
Duration
3 months
Mar - May 2021
Teammates
Data Science
Yunyi Li, Nicholas Wolczynski
UX
Elena Gonzales Melinger, Yiyan Huang
Tools
Paper & pen
Figma, Illustrator


Project context

Bias in loan application decisions is a persistent issue. The Federal Trade Commission (FTC) points to two federal laws, the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (FHA), which aim to offer protections against lending discrimination for protected groups. These laws make it illegal to offer less favorable loan terms to applicants on the basis of their race, national origin, gender, age, or marital status. Despite the laws, cases of discrimination in home loan and credit approval persist.

[Image: news headlines — “Is an Algorithm Less Racist Than a Loan Officer?” (The New York Times) and “A troubling tale of a Black man trying to refinance his mortgage” (CNBC)]
Stories from NYT and CNBC about racial discrimination in loan applications. Akridge, a Black man who had all the necessary financial credentials, including a steady, well-paid job and a high FICO credit score, was denied when he tried to refinance his mortgage.
According to a CNBC report, "A majority (59%) of Black homebuyers are concerned about qualifying for a mortgage, while less than half (46%) of White buyers are, according to a recent survey by Zillow. Lenders deny mortgages for Black applicants at a rate 80% higher than that of White applicants, according to 2020 data from the Home Mortgage Disclosure Act."
The Home Mortgage Disclosure Act (HMDA) requires home mortgage lenders to publicly disclose their mortgage application decisions. A 2019 HMDA report by the Consumer Financial Protection Bureau states that ‘the denial rates for conventional home-purchase loans were 16.0 percent for Black borrowers and 10.8 percent for Hispanic White borrowers. In contrast, denial rates for such loans were 8.6 percent for Asian borrowers and 6.1 percent for non-Hispanic White borrowers.’ In 2019, the Apple Card was investigated after complaints of gender discrimination; however, investigators ultimately found no wrongdoing. These findings show not only that bias in loan application decisions persists, but also that it is difficult to address.
Verge supports the decision-making process for loan approvals, helping lenders double-check whether the predictions of the AI tool they use to approve loan applications meet fairness standards. It offers a ‘second opinion’ on the bank’s original algorithmic decision and makes visible the cases where the bank’s model disagrees with our models, each optimized for a different fairness metric. Verge presents predictions from fairer algorithms in a transparent, trustworthy, and intuitive way, and highlights instances where human-in-the-loop review may prevent an unfair decision caused by algorithmic bias. Our product also increases trust in the remaining AI decisions by making it clear when a decision satisfies both the bank’s algorithm’s criteria and our fairness-optimized criteria.
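To make the ‘second opinion’ concrete, the sketch below shows one way the disagreement flagging could work, assuming two scikit-learn-style classifiers; flag_disagreements and the column names are hypothetical illustrations, not Verge’s actual implementation.

```python
import pandas as pd

def flag_disagreements(applications: pd.DataFrame,
                       bank_model, fair_model) -> pd.DataFrame:
    """Return the applications where the bank's model and the
    fairness-optimized model disagree, for human-in-the-loop review."""
    flagged = applications.copy()
    flagged["bank_decision"] = bank_model.predict(applications)
    flagged["fair_decision"] = fair_model.predict(applications)
    # Agreement between the two models reinforces trust in the decision;
    # only disagreements are routed to a loan officer for review.
    return flagged[flagged["bank_decision"] != flagged["fair_decision"]]
```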

Discovery

Secondary research
Historically, bank lending has been riddled with biases against protected characteristics such as race, gender, and age. Race and ethnicity are among the most concerning fairness areas in bank loan applications, and many earlier studies provide evidence of racial biases in loan and housing markets. Bartlett et al. found evidence of racial discrimination by studying more than 13 million mortgage and refinance applications in 2019, showing that racial bias against Black borrowers can come from both face-to-face lending and algorithmic lending (Bartlett et al., 2019).

Gender differences in bank loan access have also been studied thoroughly. Female entrepreneurs find it more difficult, or more costly, to access bank credit (Calcagnini et al., 2015; Fay, 1993; Ongena et al., 2015). Given the same level of income, female loan applicants encounter higher rejection rates and lower approved loan amounts.

Structural biases exist in machine learning. While many people might assume that machine learning and AI-driven systems bring more objectivity to the decision-making process in loan applications, that is unfortunately not true. AI-driven systems rely on algorithms and models for making predictions, and algorithms do what they are taught: biased societal patterns hidden in the data are unintentionally learned by the algorithms and reproduced in their predictions. Left unchecked, the algorithms could exacerbate existing bias and injustice in society. Our study thus aims to provide a tool that checks whether algorithmic errors systematically disadvantage a group of people and, when they do, corrects that tendency with algorithms optimized for fairness metrics.
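As an illustration of such a check, here is a minimal sketch that computes per-group approval rates and true positive rates from historical decisions; the column names (group, approved, repaid) are hypothetical.

```python
import pandas as pd

def group_fairness_report(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize loan outcomes by protected group.

    Assumed hypothetical columns:
      group    - protected attribute value (e.g. race or gender)
      approved - 1 if the loan was approved, else 0
      repaid   - 1 if the applicant actually repaid (ground truth)
    """
    return df.groupby("group").apply(
        lambda g: pd.Series({
            # Demographic parity: overall approval rate per group.
            "approval_rate": g["approved"].mean(),
            # Equal opportunity: approval rate among creditworthy applicants.
            "true_positive_rate": g.loc[g["repaid"] == 1, "approved"].mean(),
        })
    )

# Large gaps between groups on either column suggest systematic disadvantage.
```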

Fairness interventions usually fall into three categories: preprocessing the data, adding fairness constraints during the training phase, and post-processing model outputs. The trade-off between the existing objective (e.g., accuracy) and fairness measures can be treated as a user parameter, adjustable to different contexts and stakeholders’ interests. Assigning different decision thresholds to different groups can directly satisfy certain fairness measures (e.g., equal opportunity or equalized odds), and post-processing can be applied on top of any classifier. Because post-processing is one of the most flexible yet efficient and simple approaches to improving fairness, our modeling framework employs it as our fairness algorithm.
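Below is a minimal sketch of that post-processing step, assuming a classifier that outputs approval scores; the group names and threshold values are illustrative, not the ones we tuned for Verge.

```python
import numpy as np
import pandas as pd

def apply_group_thresholds(scores: np.ndarray,
                           groups: pd.Series,
                           thresholds: dict) -> np.ndarray:
    """Post-process model scores with a group-specific decision threshold.

    Tuning thresholds per group so that, for example, true positive rates
    match across groups is a standard way to satisfy equal opportunity.
    """
    cutoffs = groups.map(thresholds).to_numpy()
    return (scores >= cutoffs).astype(int)

# Illustrative usage with hypothetical group names and thresholds:
# scores = model.predict_proba(X)[:, 1]
# decisions = apply_group_thresholds(
#     scores, X["group"], {"group_a": 0.55, "group_b": 0.48})
```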
Competitor Analysis
The market needs new tools and methods to evaluate traditionally overlooked loan applicants beyond the credit score.

AI is already used in the fintech market for credit scoring and lending assessment. We studied ZestAI, Lenddo, and Upstart as competitors. ZestAI aims to facilitate underwriting with machine learning by providing streamlined data and algorithm training with model explainability. Seeking to give more opportunities to groups disadvantaged by bias, without depending too heavily on credit scores, Lenddo uses applicants’ digital history to predict default, while Upstart aims to give young people more opportunities by considering test scores in credit approval. Because most of these products are B2B, we were not able to view competitors’ actual interfaces and features. However, we learned about the significance of financial explanations and the market’s need to evaluate applicants fairly beyond the single metric of a credit score. While these tools look for alternative ways to evaluate applicants, none of them mentions fairness metrics. Verge will cover this gap in the market.
Survey
We surveyed 30 people about their experiences with, and opinions of, algorithmically judged loan applications.

Loan applicants were not sensitive to whether their application was judged algorithmically or by a human in person.

55% of respondents reported having been declined for a loan instantly, but this did not translate into high rates of dissatisfaction or mistrust in the decision. While most customers reported that they understood what factors went into their loan decision, most disagreed that there was a way to get a satisfactory explanation of their loan application results, and two-thirds of all respondents agreed or strongly agreed with the statement “Hypothetically, if my application were rejected, I would want to know why”.
User interview
We interviewed professionals working in banks, both to understand which aspects of fairness and compliance bank employees are sensitive to and to understand their current feelings about automated loan decisions and AI. One interviewee was certain that AI was not used at his workplace to judge applications; the other was unsure whether the platform they used to record customer information had an AI component. The bank employees we spoke with had generally positive impressions of AI and fairness, and saw incorporating algorithmically judged loan applications as an important step in scaling up their operations.

After collecting this stakeholder feedback, we concluded that there was potential for this tool, especially around explanations of loan application results, which both customers and bank employees identified as important.
This project is still under construction. Please refer to our slides:

Final slides for report