[House Hearing, 117 Congress]
[From the U.S. Government Publishing Office]


                       EQUITABLE ALGORITHMS: HOW
                     HUMAN-CENTERED AI CAN ADDRESS
                   SYSTEMIC RACISM AND RACIAL JUSTICE
                   IN HOUSING AND FINANCIAL SERVICES

=======================================================================

                            VIRTUAL HEARING

                               BEFORE THE

                 TASK FORCE ON ARTIFICIAL INTELLIGENCE

                                 OF THE

                    COMMITTEE ON FINANCIAL SERVICES

                     U.S. HOUSE OF REPRESENTATIVES

                    ONE HUNDRED SEVENTEENTH CONGRESS

                             FIRST SESSION

                               __________

                              MAY 7, 2021

                               __________

       Printed for the use of the Committee on Financial Services

                           Serial No. 117-23
                           

                              __________

                    U.S. GOVERNMENT PUBLISHING OFFICE                    
44-838 PDF                 WASHINGTON : 2021                     
          
----------------------------------------------------------------------------------- 

                 HOUSE COMMITTEE ON FINANCIAL SERVICES

                 MAXINE WATERS, California, Chairwoman

CAROLYN B. MALONEY, New York         PATRICK McHENRY, North Carolina, 
NYDIA M. VELAZQUEZ, New York             Ranking Member
BRAD SHERMAN, California             FRANK D. LUCAS, Oklahoma
GREGORY W. MEEKS, New York           BILL POSEY, Florida
DAVID SCOTT, Georgia                 BLAINE LUETKEMEYER, Missouri
AL GREEN, Texas                      BILL HUIZENGA, Michigan
EMANUEL CLEAVER, Missouri            STEVE STIVERS, Ohio
ED PERLMUTTER, Colorado              ANN WAGNER, Missouri
JIM A. HIMES, Connecticut            ANDY BARR, Kentucky
BILL FOSTER, Illinois                ROGER WILLIAMS, Texas
JOYCE BEATTY, Ohio                   FRENCH HILL, Arkansas
JUAN VARGAS, California              TOM EMMER, Minnesota
JOSH GOTTHEIMER, New Jersey          LEE M. ZELDIN, New York
VICENTE GONZALEZ, Texas              BARRY LOUDERMILK, Georgia
AL LAWSON, Florida                   ALEXANDER X. MOONEY, West Virginia
MICHAEL SAN NICOLAS, Guam            WARREN DAVIDSON, Ohio
CINDY AXNE, Iowa                     TED BUDD, North Carolina
SEAN CASTEN, Illinois                DAVID KUSTOFF, Tennessee
AYANNA PRESSLEY, Massachusetts       TREY HOLLINGSWORTH, Indiana
RITCHIE TORRES, New York             ANTHONY GONZALEZ, Ohio
STEPHEN F. LYNCH, Massachusetts      JOHN ROSE, Tennessee
ALMA ADAMS, North Carolina           BRYAN STEIL, Wisconsin
RASHIDA TLAIB, Michigan              LANCE GOODEN, Texas
MADELEINE DEAN, Pennsylvania         WILLIAM TIMMONS, South Carolina
ALEXANDRIA OCASIO-CORTEZ, New York   VAN TAYLOR, Texas
JESUS ``CHUY'' GARCIA, Illinois
SYLVIA GARCIA, Texas
NIKEMA WILLIAMS, Georgia
JAKE AUCHINCLOSS, Massachusetts

                   Charla Ouertatani, Staff Director
                 TASK FORCE ON ARTIFICIAL INTELLIGENCE

                    BILL FOSTER, Illinois, Chairman

BRAD SHERMAN, California             ANTHONY GONZALEZ, Ohio, Ranking 
SEAN CASTEN, Illinois                    Member
AYANNA PRESSLEY, Massachusetts       BARRY LOUDERMILK, Georgia
ALMA ADAMS, North Carolina           TED BUDD, North Carolina
SYLVIA GARCIA, Texas                 TREY HOLLINGSWORTH, Indiana
JAKE AUCHINCLOSS, Massachusetts      VAN TAYLOR, Texas
                            
                            
                            C O N T E N T S

                              ----------                              
                                                                   Page
Hearing held on:
    May 7, 2021..................................................     1
Appendix:
    May 7, 2021..................................................    27

                               WITNESSES
                          Friday, May 7, 2021

Girouard, Dave, CEO and Co-Founder, Upstart......................    10
Hayes, Stephen F., Partner, Relman Colfax PLLC...................     4
Koide, Melissa, Founder and CEO, FinRegLab.......................     5
Rice, Lisa, President and CEO, National Fair Housing Alliance....     7
Saleh, Kareem, Founder and CEO, FairPlay.........................     8

                                APPENDIX

Prepared statements:
    Garcia, Hon. Sylvia..........................................    28
    Girouard, Dave...............................................    30
    Hayes, Stephen F.............................................    34
    Koide, Melissa...............................................    40
    Rice, Lisa...................................................    55
    Saleh, Kareem................................................    69

              Additional Material Submitted for the Record

Garcia, Hon. Sylvia:
    Written responses to questions for the record from Lisa Rice.    72

 
                       EQUITABLE ALGORITHMS: HOW
                     HUMAN-CENTERED AI CAN ADDRESS
                   SYSTEMIC RACISM AND RACIAL JUSTICE
                   IN HOUSING AND FINANCIAL SERVICES

                              ----------                              


                          Friday, May 7, 2021

             U.S. House of Representatives,
             Task Force on Artificial Intelligence,
                           Committee on Financial Services,
                                                   Washington, D.C.
    The task force met, pursuant to notice, at 12 p.m., via 
Webex, Hon. Bill Foster [chairman of the task force] 
presiding.
    Members present: Representatives Foster, Sherman, Casten, 
Pressley, Adams, Garcia of Texas, Auchincloss; Gonzalez of 
Ohio, Loudermilk, Budd, Hollingsworth, and Taylor.
    Ex officio present: Representative Waters.
    Chairman Foster. The Task Force on Artificial Intelligence 
will come to order. Without objection, the Chair is authorized 
to declare a recess of the task force at any time.
    Also, without objection, members of the full Financial 
Services Committee who are not members of this task force are 
authorized to participate in today's hearing.
    As a reminder, I ask all Members to keep themselves muted 
when they are not being recognized by the Chair. The staff has 
been instructed not to mute Members, except when a Member is 
not being recognized by the Chair and there is inadvertent 
background noise. Members are reminded that they may only 
participate in one remote proceeding at a time. If you are 
participating today, please keep your camera on, and if you 
choose to attend a different remote proceeding, please turn 
your camera off.
    Today's hearing is entitled, ``Equitable Algorithms: How 
Human-Centered AI Can Address Systemic Racism and Racial 
Justice in Housing and Financial Services.''
    I now recognize myself for 4 minutes to give an opening 
statement.
    Thank you, everyone, for joining us today for what should 
be a very interesting discussion. We have a great panel of 
witnesses that I know will provide some stimulating and 
thought-provoking points of view. Today, we are here to explore 
how artificial intelligence (AI) can be used to increase racial 
equity in housing and financial services. There has been 
extensive discussion around this topic, mostly focusing on the 
real problems that can occur when we use AI that can inherently 
or unknowingly be biased. I think that a lot of these issues 
can be more complicated and nuanced than how they are portrayed 
in the media, but it is clear that the use of AI is hitting a 
nerve with a lot of folks, and that concern is well-founded. No 
one should be denied the opportunity to own a home, a 
pillar of the American Dream, because of a non-human, 
automated, and, often, unlawfully discriminatory decision. 
Regulators and policymakers have a big responsibility here, 
too.
    We must actively engage in these sorts of discussions to 
determine what the best practices are and to enact laws that 
reflect and encourage those practices, while also fostering 
innovation and improvements. Ideally, we should get to a space 
where AI is not only compliant with the standards 
that we have set for fairness, but exceeding those standards. 
It should be a tool that augments and automates fairness, not 
something that we have to babysit to make sure that it is still 
meeting our standards. The real promise of AI in this space is 
that it may eventually produce greater fairness and equity in 
ways that we may not have contemplated ourselves. So, we want 
to make sure that the biases of the analog world are not 
repeated in the AI and machine-learning world.
    I am excited to have this conversation to see how we can 
make AI the best version of itself, and how to design 
algorithmic models that best capture the ideals of fairness and 
transparency that are reflected in our fair lending laws. Thank 
you all again for being part of this important discussion, and 
the Chair will now recognize the ranking member of the task 
force, Mr. Gonzalez of Ohio, for 5 minutes for an opening 
statement.
    Mr. Gonzalez of Ohio. Thank you, Chairman Foster. First of 
all, I want to say how pleased I am to work with you as I take 
on the role of ranking member of this important task force. You 
have always shown a great willingness to be a thoughtful, 
bipartisan partner, and I look forward to continuing our work 
together. I also want to thank Ranking Member McHenry, ranking 
member of the full Financial Services Committee, for putting 
his trust in me to lead on this task force. He has been a 
tremendous mentor to me, and a thoughtful leader on policies 
that promote and expand the use of innovative technologies.
    Financial services is an industry that continues to be on 
the cutting edge of technology, as is evident through the use 
of AI and other emerging technologies. I believe that this 
committee, and particularly this task force, should embrace 
this innovation and continue to consider ways that Congress can 
provide helpful clarity to industry without stifling 
innovation. Technology can help to not only propel forward our 
advancements in the financial services industry, but can also 
foster further inclusion and opportunities for our unbanked and 
underbanked communities.
    Advanced credit decision models can use AI to improve 
lenders' confidence in extending credit, reduce defaults, and 
draw on data that is not readily available for traditional 
assessments of creditworthiness.
    Additionally, it is my belief that AI technologies can 
provide Federal regulators with additional oversight tools to 
reduce and prevent financial crimes. We should be encouraging 
Federal agencies to be working more with the industry in a way 
that fosters adoption and can assist anti-money laundering 
efforts. On top of using AI to catch bad actors, Federal 
entities can take steps to work with industry to further adopt 
the use of artificial intelligence through the use of RegTech, 
in order to help automate and streamline regulatory compliance.
    Today's hearing is an important one. We are having an 
important discussion about some of the challenges the industry 
faces by employing this technology, specifically on bias in 
algorithms. I believe these discussions are important to have. 
We must acknowledge and recognize that these technologies, at 
times, are not perfect due to the inherent nature of a 
technology created by humans. It is vital, though, that we do 
not take steps backwards by overregulating this industry, which 
may have a chilling effect on the deployment of these 
technologies. Instead, my hope is that we will continue to work 
with the experts in industry in order to move forward in a 
bipartisan way that both celebrates the technological 
advancements and ensures that there is transparency and 
fairness through the use of artificial intelligence.
    I look forward to hearing from our witnesses today about 
the importance of this technology in the financial services 
sector and how Congress can act to encourage innovation and 
promote fairness. And with that, I yield back.
    Chairman Foster. Thank you. The Chair will now recognize 
the Chair of the full Financial Services Committee, the 
gentlewoman from California, Chairwoman Waters, for 1 minute.
    Chairwoman Waters. Thank you so very much, Chairman Foster. 
I am so delighted and excited about artificial intelligence, 
and I am very pleased that you chose to provide the leadership 
for this task force that will help us to understand how we can 
get rid of bias in lending, and other efforts that should be 
made throughout our society in dealing with, simply, fairness 
and justice. I am very pleased, and I think that our committee 
will provide the leadership in the Congress of the United 
States for dealing with this issue.
    As a matter of fact, we created a Subcommittee on Diversity 
and Inclusion, and your Task Force on Artificial Intelligence 
works very well with that subcommittee, because actually, you 
are going down the same paths, looking at the same issues, and 
dealing with what we can do to get rid of injustice and 
unfairness. Thank you so very much, and, please, go forward, 
and you are the one to do it. Thank you very much. I yield 
back.
    Chairman Foster. Thank you, Madam Chairwoman. Today, we 
welcome the testimony of our distinguished witnesses: Stephen 
Hayes, a partner at Relman Colfax PLLC; Melissa Koide, the 
founder and CEO of FinRegLab; Lisa Rice, the president and CEO 
of the National Fair Housing Alliance; Kareem Saleh, the 
founder of FairPlay AI; and Dave Girouard, the founder and CEO 
of Upstart.
    Witnesses are reminded that their oral testimony will be 
limited to 5 minutes. You should be able to see a timer on your 
screen that will indicate how much time you have left, and a 
chime will go off at the end of your time. I would ask you to 
be mindful of the timer and quickly wrap up your testimony if 
you hear the chime so we can be respectful of both the 
witnesses' and the task force members' time.
    And without objection, your full written statements will be 
made a part of the record.
    Mr. Hayes, you are now recognized for 5 minutes to give an 
oral presentation of your testimony.

   STATEMENT OF STEPHEN F. HAYES, PARTNER, RELMAN COLFAX PLLC

    Mr. Hayes. Chairwoman Waters, Chairman Foster, Ranking 
Member Gonzalez, and members of the task force, thank you for 
giving me the opportunity to testify. My name is Stephen Hayes, 
and I am a partner at Relman Colfax, a civil rights law firm. 
We have a litigation practice focused on combating 
discrimination in housing and lending. We also provide legal 
counsel to entities, including counsel on testing algorithms 
for discrimination risks. I previously worked at the Consumer 
Financial Protection Bureau (CFPB).
    Credit markets reflect our nation's history of 
discrimination. There are stark gaps in credit access, 
disparities in credit scoring, and large populations with thin or 
no credit histories. There is evidence that some alternative 
data and AI-based machine-learning models (ML models) can help 
lenders make credit decisions for these groups, and so have the 
potential to expand access. Whether that is true in practice 
and whether any increases will improve or exacerbate 
disparities is a context-specific question. Use of alternative 
data and alternative models can also raise serious risks 
related to explainability, validity, and, of course, 
discrimination.
    The Equal Credit Opportunity Act (ECOA) and the Fair 
Housing Act prohibit lending and housing discrimination. They 
prohibit intentional discrimination, sometimes called disparate 
treatment, as well as an unintentional type of discrimination 
called disparate impact. Disparate impact focuses on outcomes 
rather than intent. Unlawful disparate impact occurs when: one, a 
policy disproportionately harms members of a protected class; 
and two, either the policy does not advance a legitimate 
interest, or three, there is a less discriminatory way to serve 
that interest. And what 
that means in practice is that entities should not adopt 
policies, like models, that unnecessarily cause disparities.
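
    [Editor's note: A minimal sketch, in Python, of how step one 
of this framework is often screened in practice, using an adverse 
impact ratio over approval rates. The data, group labels, and the 
0.8 ``four-fifths'' threshold are illustrative assumptions, not 
part of the testimony.]

        import pandas as pd

        # Hypothetical lending decisions: 1 = approved, 0 = denied.
        decisions = pd.DataFrame({
            "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
            "approved": [1, 1, 1, 0, 1, 0, 0, 0],
        })

        # Step one: does the policy disproportionately harm a group?
        rates = decisions.groupby("group")["approved"].mean()
        air = rates.min() / rates.max()  # adverse impact ratio
        print(rates, f"adverse impact ratio: {air:.2f}", sep="\n")

        # The four-fifths rule is a common screening heuristic, not
        # a statutory test: ratios below 0.8 prompt steps two and
        # three, business justification and the search for less
        # discriminatory alternatives.
        if air < 0.8:
            print("disparity flagged for steps two and three")
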
    These frameworks, in particular disparate impact, 
translate well to lending models, including to ML models. Some 
banks have been testing models for discrimination for years, 
and, of course, disparities remain in credit markets, and model 
fairness alone is not going to solve that problem. But these 
programs demonstrate that discrimination testing is possible, 
and it can be effective.
    As a general matter, the best programs align with legal 
principles. First, disparate treatment: the programs ensure that 
models don't include protected classes or proxies as variables, 
and that the models are accurate across groups. That is 
important, but insufficient to eliminate discrimination on its 
own, so the programs also include a disparate impact assessment 
using the three-step framework that I mentioned before.
    The final step in that framework, minimizing the 
disparities caused by models, is key to this process. In the 
case of traditional models, this involves substituting 
variables in the models with the goal of identifying variations 
of models that maintain performance, but that have less 
disparate impact, and newer methods exist now that can improve 
upon that process for ML models.
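
    [Editor's note: A sketch, under stated assumptions, of the 
variable-substitution search described above: refit the model 
with each candidate variable dropped, and keep variants that 
maintain performance with less disparity. The logistic regression 
stands in for a traditional scoring model; the AUC floor, the 0.5 
approval cutoff, and the data layout are hypothetical.]

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        def adverse_impact_ratio(approved, group):
            # Lower group approval rate divided by the higher one.
            group = np.asarray(group)
            rates = [approved[group == g].mean()
                     for g in np.unique(group)]
            return min(rates) / max(rates)

        def search_alternatives(X, y, group, min_auc=0.70):
            # X is a pandas DataFrame of candidate variables. Drop
            # one variable at a time, refit, and record variants
            # that keep performance while shrinking the disparity.
            results = []
            for col in X.columns:
                kept = X.drop(columns=[col])
                model = LogisticRegression(max_iter=1000).fit(kept, y)
                scores = model.predict_proba(kept)[:, 1]
                approved = (scores >= 0.5).astype(int)
                auc = roc_auc_score(y, scores)
                if auc >= min_auc:
                    results.append(
                        (col, auc,
                         adverse_impact_ratio(approved, group)))
            # A higher ratio means a less disparate variant with
            # comparable performance.
            return sorted(results, key=lambda r: -r[2])
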
    Disparate impact testing can benefit businesses and 
consumers. It can create more representative training samples 
and increase access to credit over time. It can also counteract 
the legacies of historic and existing discrimination. These 
tests are also paired with more holistic measures, like fair 
lending training for modelers, ensuring that teams have diverse 
backgrounds, reviewing policies within which models operate, 
and monitoring areas of discretion.
    Finally, banks are expected to comply with agency model 
risk guidance, which is meant to help mitigate safety and 
soundness risks. And these principles are not focused on 
discrimination, but they can help facilitate discrimination 
testing because they create an audit trail for models, and they 
help establish monitoring systems for models.
    In my experience, many companies understand that models can 
perpetuate discrimination, and they don't want to use 
discriminatory models. But at the same time, discrimination 
testing is very uneven, and oftentimes nonexistent, which is 
the result of legal and structural background characteristics 
that incentivize testing in some areas, but not in others.
    Policymakers can take steps to ensure more uniform and 
effective testing. First, agencies like the CFPB can routinely 
test models for discrimination, including assessing whether 
less discriminatory models exist.
    Second, agencies should announce the methodologies that 
they use to test models, and they should encourage adoption of 
discrimination-specific model risk principles.
    And third, agencies should clarify that discrimination, 
including unnecessary disparate impact, is illegal across 
markets outside of traditional areas like credit and housing.
    Thank you for considering my testimony today.
    [The prepared statement of Mr. Hayes can be found on page 
34 of the appendix.]
    Chairman Foster. Thank you. Ms. Koide, you are now 
recognized for 5 minutes.

     STATEMENT OF MELISSA KOIDE, FOUNDER AND CEO, FINREGLAB

    Ms. Koide. Thank you so much, Chairman Foster. Good 
afternoon. And thank you, Chairwoman Waters, Ranking Member 
McHenry, Ranking Member Gonzalez, and the entire AI Task Force. 
My name is Melissa Koide, and I am the founder and CEO of 
FinRegLab. FinRegLab is a nonprofit research organization 
evaluating the use of new technologies and data in financial 
services to drive greater financial inclusion.
    FinRegLab has focused on the use of alternative financial 
data and machine learning algorithms in credit underwriting 
because credit not only helps bridge short-term gaps, but it is 
critical for enabling longer-term investments for families and 
homes, education and small business.
    The credit system, as we all realize, reflects and 
influences the ability of families and small businesses to 
participate in the broader economy, yet I think we also realize 
that about 20 percent of adults in the U.S. lack a sufficient 
credit history to be scored under the most widely-used models. 
Another 30 percent have struggled to access affordable credit 
because their scores were non-prime. Communities of color and 
low-income populations are substantially more likely to be 
affected. Nearly 30 percent of African Americans and Hispanics 
cannot be scored under traditional means compared to 16 percent 
of Whites and Asians.
    Our work at FinRegLab directly intersects with the task 
force's inquiry into ways to safely harness the power of AI and 
data to increase opportunity, equity, and inclusiveness. 
FinRegLab's first empirical research evaluated cash flow data 
as a means to risk-assess underserved people and small 
businesses for credit. We found cash flow data has substantial 
potential to increase credit inclusion.
    Our latest project, launched last month, focuses on machine 
learning algorithms and their use in credit underwriting. We 
are empirically evaluating the capability and performance of 
diagnostic tools that seek to explain machine learning 
underwriting models with respect to reliability, fairness, and 
transparency.
    Financial services providers have begun using machine 
learning models in a variety of contexts because of the 
potential to increase prediction accuracy. There are many ways 
AI and machine learning may be beneficial for consumers and small 
businesses, and the technology could be especially 
transformational where information gaps and other obstacles 
currently heighten the costs and risks of serving particular 
populations. Yet, we all realize that the complexity of AI and 
machine learning models can make them harder to understand and 
manage, and they raise important concerns around exacerbating 
historical disparities as well as flaws in the underlying data.
    Publicly-available research is limited, but what there is 
supports the general predictiveness benefits of machine 
learning. Yet, it also suggests the effects on fairness and 
inclusion may vary depending upon--and this is important--the 
underlying data used. Some sources suggest machine learning can 
increase inclusion when used to analyze traditional credit 
bureau data, while other studies find mixed or even negative 
effects when additional supplemental data sources are used. For 
this reason, 
we believe more research is needed to better understand the 
effect of machine learning alone and in conjunction with 
promising types of financial data.
    So, what is happening in the market today? Some banks and 
non-banks are beginning to use machine learning algorithms 
directly in their underwriting models in order to evaluate 
applications for credit cards, and personal auto and small 
business loans. They are doing so to improve the credit risk 
accuracy, to leverage the speed and efficiency of the 
technology, and to keep up with competitors. Yet, while 
interest in machine learning is increasing, there are 
fundamental questions about the ability to diagnose and manage 
these models, given both general concerns about reliability, 
transparency, and fairness, and the specific Federal regulatory 
requirements that Steve just discussed.
    FinRegLab is, therefore, partnering with researchers from 
the Stanford Graduate School of Business to evaluate the 
performance and the capabilities of explainability tools 
designed to help lenders develop and manage machine learning 
algorithms in credit underwriting. We will use the Federal 
requirements concerning model risk governance, fair lending, 
and adverse action disclosures as a starting point, but expect 
that our research may be useful to address broader questions 
about machine learning reliability and the use of diagnostic 
tools for managing algorithmic decisions in a range of 
contexts.
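
    [Editor's note: A minimal illustration of one family of 
diagnostics in this space, permutation importance: shuffle one 
input at a time and measure how much accuracy degrades. The 
synthetic data and model are stand-in assumptions, not the 
specific tools under evaluation in the research described above.]

        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.inspection import permutation_importance

        # Hypothetical underwriting-style data and model.
        X, y = make_classification(n_samples=1000, n_features=5,
                                   random_state=0)
        model = GradientBoostingClassifier(random_state=0).fit(X, y)

        # Larger accuracy drops mean the model leans harder on that
        # input; a lender can then ask whether heavily-used inputs
        # are valid, explainable, and not proxies for protected
        # classes.
        result = permutation_importance(model, X, y, n_repeats=10,
                                        random_state=0)
        for i, imp in enumerate(result.importances_mean):
            print(f"feature_{i}: importance {imp:.3f}")
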
    In addition to focusing on machine learning 
explainability, we intend to continue to study the role of 
alternative financial data, both alone and in conjunction with 
AI and machine learning, to foster greater financial inclusion. 
Thank you very much.
    [The prepared statement of Ms. Koide can be found on page 
40 of the appendix.]
    Chairman Foster. Thank you, Ms. Koide. Ms. Rice, you are 
now recognized for 5 minutes to give an oral presentation of 
your testimony.

   STATEMENT OF LISA RICE, PRESIDENT AND CEO, NATIONAL FAIR 
                        HOUSING ALLIANCE

    Ms. Rice. Chairman Foster, Ranking Member Gonzalez, and 
members of the task force, thank you so much for inviting me to 
testify at today's hearing. The National Fair Housing Alliance 
is the country's only national civil rights agency dedicated 
solely to eliminating all forms of housing and lending 
discrimination, and this includes eliminating bias in 
algorithmic-based systems used in housing and financial 
services through our recently-launched Tech Equity Initiative.
    How AI systems are designed, the data used to build them, 
the subjective renderings applied by the scientists creating the 
models, and other issues, can cause discrimination, create or 
further entrench structural inequality, and deny people 
critical opportunities. On the other hand, innovations in the 
area of artificial intelligence have the potential to reduce 
discriminatory outcomes and help millions of people. Much as 
scientists used the coronavirus to develop lifesaving vaccines, 
we can use AI to detect, diagnose, and cure harmful 
technologies that are extremely detrimental to people and 
communities.
    We have biased AI systems because the data used to build 
the models is deeply flawed. Technicians developing the systems 
are not educated about how technology can render discriminatory 
outcomes, and regulators are not equipped to sufficiently 
handle the myriad manifestations of bias generated by the 
technologies we use in financial services and housing. Let's 
start with the data.
    The building blocks for algorithmic tools are tainted data 
that is embedded with bias generated from centuries of 
discrimination. Not only are we building systems with biased 
data, but oftentimes datasets are underinclusive and not 
representative of underserved groups. As a result, for example, 
traditional credit scoring systems, as you just heard Melissa 
say, oftentimes cannot see the behavior of consumers that are 
not represented in the data. This is why communities of color 
are disproportionately credit invisible or inaccurately scored. 
For example, in Detroit, Michigan, almost 40 percent of Black 
adults are credit invisible. This pattern is common throughout 
our nation.
    So, how do these consumers access quality credit 
opportunities, rent apartments, obtain affordable insurance, or 
access other important opportunities necessary for people to 
lead productive lives? Technology does not have to be biased. 
There are mechanisms for producing fair systems, and I will 
mention just a few. One method of de-biasing tech is to 
integrate the review of racial and other forms of bias into 
every phase of the algorithm's life cycle, including data 
selection, development, deployment, and monitoring. The 
European Union's newly-proposed regulation for AI offers one 
way of addressing this issue. It creates a risk-based framework 
that considers technologies, like credit scoring, as a high-
risk category because of the grave impact they have on people's 
lives. The proposal holds high-risk models to a higher standard 
and incorporates a review for discrimination risk in all 
aspects of the algorithm life cycle.
    To help de-bias tech, all AI stakeholders, including 
regulators, scientists, engineers, and more, should be trained 
on fair housing and fair lending issues. Trained professionals 
are better able to identify red flags and design solutions for 
de-biasing tech. In fact, recent innovations in building fair 
tech have come from AI experts trained on issues of fairness. 
Increasing diversity will also lead to better outcomes for 
consumers. Research shows that diverse teams are more 
innovative and productive. Moreover, in several instances, it 
has been people of color working in the field who have been able to 
identify potentially discriminatory AI systems.
    I will close by calling out the need for the creation of a 
publicly-available dataset to be used for research and 
educational purposes. Congress should encourage the release of 
more loan-level data from the National Mortgage Survey and the 
National Mortgage Database so researchers, advocacy groups, 
and the public can study bias in housing and finance markets 
and, in particular, as it may relate to AI systems.
    Thank you so much for the opportunity to testify today.
    [The prepared statement of Ms. Rice can be found on page 55 
of the appendix.]
    Chairman Foster. Thank you, Ms. Rice. Mr. Saleh, you are 
now recognized for 5 minutes.

      STATEMENT OF KAREEM SALEH, FOUNDER AND CEO, FAIRPLAY

    Mr. Saleh. Thank you, Chairwoman Waters, Chairman Foster, 
Ranking Member Gonzalez, and members of the task force, for the 
opportunity to testify today. My name is Kareem Saleh, and I am 
the founder and CEO of FairPlay, the world's first fairness-as-
a-service company. I have witnessed firsthand the extraordinary 
potential of AI algorithms to increase access to credit and 
opportunity, but I have also seen the risks these algorithms 
pose to many Americans. If we are to fully harness the benefits 
of AI, we must commit to building infrastructure that embeds 
fairness in every step of the algorithm decisioning process.
    Despite the passage of the fair lending laws almost 50 
years ago, people of color and other historically-
underprivileged groups are still denied loans at an alarming 
rate. The result is a persistent wealth gap and fewer 
opportunities for minority families and communities to create a 
prosperous future.
    Why are we still so deeply unfair? The truth is that the 
current methods of bias detection in lending are completely 
unsuited to the AI era. Even though lending has become AI-
powered and automated, fair lending compliance is stuck in the 
analog past.
    So how can we bring fair lending compliance into the 21st 
Century? We must give lenders the tools and guidance they need 
to increase fairness without putting their businesses at risk. 
Today, lenders are required to measure and remediate bias in 
their credit decisioning systems. If, say, Black applicants are 
approved at materially lower rates than White applicants, 
lenders must evaluate whether this disparity is justified by a 
business necessity or determine whether the lender's objectives 
could be met by a less discriminatory alternative. It is at 
this stage, the search for alternatives and the invocation of 
business justifications, where our current fair lending system 
has the greatest potential to evolve.
    The way most lenders search for less discriminatory models 
involves taking credit scores out of an algorithm, re-running 
it, and evaluating the differences in outcomes for protected 
groups. This method almost always results in a fairer model, 
but also a less profitable one. This puts lenders in a catch-
22. They would like to be fair, but they would also like to 
stay in business, plus there is no guidance on what constitutes 
an appropriate tradeoff between profitability and fairness, 
creating uncertainty for lenders about how to meet regulatory 
requirements. Worse still, lenders fear that the very act of 
trying to find a fairer, better means of underwriting or 
pricing loans could be used against them as evidence they knew 
their algorithms were biased to begin with.
    Faced with this problem, most lenders opt for safety, 
writing explanations for the use of unfair models instead of 
searching for alternatives that may yield fairer results. The 
upshot is that fair lending compliance has become an exercise 
in justifying unfairness rather than an opportunity to increase 
inclusion.
    Today, a better, fairer option exists, using AI fairness 
tools to de-bias algorithms without sacrificing profitability. 
Several AI techniques allow lenders to take a variable, like 
credit score, and disentangle its predictive power from its 
disparity-driving effects. In many instances, these AI fairness 
tools have increased approval rates for protected groups 
anywhere from 10 to 30 percent without increasing risk.
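
    [Editor's note: A sketch, under stated assumptions, of 
constraint-based de-biasing in the spirit described above: 
retraining a model under a fairness constraint rather than simply 
deleting a predictive variable. It uses the open-source fairlearn 
library; the synthetic data, the base model, and the choice of 
demographic parity as the constraint are illustrative, and this 
is not FairPlay's methodology.]

        import numpy as np
        from fairlearn.reductions import (DemographicParity,
                                          ExponentiatedGradient)
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 4))         # hypothetical features
        group = rng.integers(0, 2, size=1000)  # protected-class labels
        y = (X[:, 0] + 0.5 * group
             + rng.normal(size=1000) > 0).astype(int)

        # Retrain under a demographic parity constraint: the
        # reduction reweights training rounds to shrink
        # approval-rate gaps while preserving as much predictive
        # power as it can.
        mitigator = ExponentiatedGradient(
            LogisticRegression(), constraints=DemographicParity())
        mitigator.fit(X, y, sensitive_features=group)
        approved = mitigator.predict(X)
        for g in (0, 1):
            print(f"group {g}: approval rate "
                  f"{approved[group == g].mean():.2f}")
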
    Of course, industry will need support in order to fully 
embrace the benefits of AI fairness. Here, Congress and 
regulators can play an important role by ensuring that fairness 
testing is being done by more lenders more often, applied to 
their underwriting, pricing, marketing, and collections models, 
and includes a robust search for less discriminatory 
alternatives.
    In addition, policymakers should ease the fear of liability 
for lenders who commit to thoroughly searching for disparities 
and less discriminatory alternatives, to reward rather than 
punish those who proactively look for fairer systems. 
Regulators can provide guidance on how lenders should view the 
tradeoffs between profitability and fairness, and set 
expectations for what lenders should do if disparities are 
identified.
    To bring fairness to AI decisions, we must build the 
fairness infrastructure of the future, not justify the 
discrimination of the past. Using AI de-biasing tools, we can 
embed fairness into the algorithmic decisions to promote 
opportunity for all Americans while allowing financial 
institutions to reap the rewards of a safe and inclusive 
approach. If we prioritize fairness, the machines we build will 
follow.
    Thank you. I am happy to answer your questions.
    [The prepared statement of Mr. Saleh can be found on page 
69 of the appendix.]
    Chairman Foster. Thank you, Mr. Saleh. Mr. Girouard, you 
are now recognized for 5 minutes to give us an oral 
presentation of your testimony.

    STATEMENT OF DAVE GIROUARD, CEO AND CO-FOUNDER, UPSTART

    Mr. Girouard. Chairwoman Waters, Chairman Foster, Ranking 
Member Gonzalez, and members of the Task Force on Artificial 
Intelligence, thank you for the opportunity to participate in 
today's conversation. My name is Dave Girouard, and I am co-
founder and CEO of Upstart, a leading artificial intelligence 
lending platform headquartered in San Mateo, California, and 
Columbus, Ohio.
    I founded Upstart more than 9 years ago in order to improve 
access to affordable credit through application of modern 
technology and data science. In the last 7 years, our bank and 
credit union partners have originated more than $9 billion in 
high-quality consumer loans using our technology, about half of 
which were made to low- and moderate-income borrowers. Our AI-
based system combines billions of cells of training data with 
machine learning algorithms to more accurately determine an 
applicant's creditworthiness.
    As a company entirely focused on improving access to 
affordable credit for the American consumer, fairness and 
inclusiveness are issues we care about deeply. The opportunity 
for AI-based lending to improve access to credit for the 
American consumer is dramatic, but equally dramatic is the 
opportunity to reduce disparities and inequities that exist in 
the traditional credit scoring system.
    In the early days at Upstart, we conducted a retroactive 
study of a large credit bureau, and we uncovered a jarring pair 
of statistics: just 45 percent of Americans have access to bank-
quality credit, yet 83 percent of Americans have never actually 
defaulted on a loan. That is not what we would call fair 
lending. The FICO score was introduced in 1989 and has since 
become the default way banks judge a loan applicant, but, in 
reality, FICO is extremely limited in its ability to predict 
credit performance because it is narrow in scope and inherently 
backward-looking. And as consumer protection groups, such as 
the National Consumer Law Center, have highlighted, for the 
past 2 decades, study after study has found that African-
American and Latino communities have lower credit scores as a 
group than White borrowers.
    At Upstart, we use modern technology and data science to 
find more ways to prove that consumers are indeed creditworthy, 
to bridge that 45 percent versus 83 percent gap. We believe 
that consumers are more than their credit scores, and going 
beyond the FICO score and including a wide variety of other 
information, such as a consumer's employment history and 
educational background, results in significantly more accurate 
and inclusive credit modeling. While most people believe a more 
accurate credit model means saying, ``no'' to more applicants, 
the truth is just the opposite. Accurately identifying the 
small fraction of borrowers who are unlikely to be able to 
repay a loan is a better outcome for everyone. It leads to 
significantly higher approval rates and lower interest rates 
than a traditional model, especially for underserved 
demographic groups, such as Black and Hispanic applicants.
    Since our early days, skeptics have asked whether AI models 
will hold up in a down economy. The tragedy of the COVID 
pandemic, where unemployment rose from 4 percent to more than 
14 percent in just a few weeks, required that we prove our 
mettle, and, in fact, we did just that. Despite the elevated 
level of unemployment, the pandemic had no material impact on 
the performance of Upstart-powered loans held by our bank 
partners. With the support of a more accurate credit model 
powered by AI, our bank and credit union partners can have the 
confidence to lend regardless of the state of the economy. 
Imagine banks lending consistently and responsibly just when 
credit is needed most. That is an outcome for which we can all 
cheer.
    The concern that AI in credit decisioning could replicate 
or even amplify human bias is well-founded. We have understood 
since our inception that strong consumer protection laws, 
including the Equal Credit Opportunity Act, help ensure that 
good intentions are actually matched by good outcomes. This is 
especially true when it comes to algorithmic lending. For these 
reasons and more, we proactively met with the appropriate 
regulator, the Consumer Financial Protection Bureau, well 
before launching our company. Quite simply, we decided to put 
independent oversight into the equation. After significant 
good-faith efforts, starting in 2015, between Upstart and the 
CFPB to determine the proper way to measure bias in AI models, 
we demonstrated that our AI-driven model doesn't result in an 
unlawful disparate impact against protected classes of 
consumers.
    Because AI models change and improve over time, we 
developed automated tests with the regulator's input to test 
every single applicant on our platform for bias, and we provide 
the results of these tests to the CFPB on a quarterly basis.
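
    [Editor's note: The record does not describe Upstart's test 
design; the sketch below shows the general shape of such 
automated monitoring under stated assumptions: recompute group 
outcome metrics over each quarter's applicants and flag ratios 
past a chosen threshold. All column names and the threshold are 
hypothetical.]

        import pandas as pd

        def quarterly_fairness_report(applicants, threshold=0.8):
            # applicants: DataFrame with one row per applicant and
            # columns "quarter", "group", and "approved" (1/0).
            rows = []
            for quarter, batch in applicants.groupby("quarter"):
                rates = batch.groupby("group")["approved"].mean()
                air = rates.min() / rates.max()
                rows.append({"quarter": quarter,
                             "adverse_impact_ratio": air,
                             "flagged": air < threshold})
            return pd.DataFrame(rows)
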
    In September 2017, we received the first no-action letter 
from the CFPB recognizing that Upstart's platform improves 
access to affordable credit without introducing unlawful bias. 
Thus far, we have been able to report to the CFPB that our AI-
based system significantly improved access to credit. 
Specifically, the Upstart model approves 32 percent more 
consumers and lowers interest rates by almost 3\1/2\ percentage 
points compared to a traditional model. For near prime 
consumers, our model approves 86 percent more consumers and 
reduces their interest rates by more than 5 percentage points 
compared to a traditional model.
    Upstart's model also provides higher approval rates and lower 
interest rates for every traditionally-underserved demographic. 
For example, over the last 3 years, the Upstart model helped 
banks that use Upstart approve 34 percent more Black borrowers 
than a traditional model would have, with 4-percentage-point 
lower interest rates. That is the type of consumer benefit we 
should all get excited about.
    I apologize that I am running long, so I will be happy to 
just cut it here if that is what the committee would prefer.
    [The prepared statement of Mr. Girouard can be found on 
page 30 of the appendix.]
    Chairman Foster. Thank you, Mr. Girouard, for your 
testimony.
    The Chair will now recognize himself for 5 minutes for some 
questions.
    One big prerequisite to racial and gender equity is 
socioeconomic integration. Minorities and traditionally-
disenfranchised individuals should have the same access to 
communities with quality schools, banks, grocery stores, and 
other community staples, all of which stem from where they are 
able to work and live. Additionally, socioeconomically-
integrated communities foster a greater sense of understanding 
and tolerance across people from different walks of life and 
experiences. So to that end, I am interested in exploring how 
AI, as well as optimally-designed subsidies, can help improve 
socioeconomic integration.
    There are many possibilities on how to proceed. For 
example, one might decide to subsidize investments in 
communities that have historically suffered from redlining, but 
if those communities have subsequently gentrified, then blanket 
subsidies in those areas might not be justified, so a broader 
set of data would be needed.
    Or perhaps we should just acknowledge that there are many 
situations where there is an essential tradeoff between 
fairness and profitability, so we should explicitly subsidize 
lenders to adopt a more fair model while retaining the power of 
AI to identify the most promising loans to subsidize. For 
example, there is a program in Ottawa, Canada, that has been 
using AI to identify areas undergoing gentrification or 
disinvestment by analyzing home improvements that are visible in 
Google Earth and satellite images. This sort of technology might 
show us where we are gaining or losing socioeconomic 
integration and where subsidies might be appropriate.
    My question is for, I guess, all of the witnesses here. If 
our goals are not only to eliminate unfairness going forward, 
but also to correct for past unfairness, what sort of changes 
to the objective functions or explicit subsidies would we want 
to optimize an AI program to measure and reward socioeconomic 
integration and other things that we are interested in 
promoting? You can take it in any order you want.
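
    [Editor's note: One way to make the Chairman's question 
concrete is sketched below: add an explicit, tunable subsidy term 
that rewards a measured policy goal alongside the usual profit 
term. Every quantity and name here is a hypothetical placeholder, 
not a proposal from the hearing record.]

        def lending_objective(expected_profit, integration_score,
                              subsidy_weight):
            # Score a candidate lending policy: profit plus an
            # explicit subsidy for its measured contribution to
            # socioeconomic integration.
            return expected_profit + subsidy_weight * integration_score

        # A slightly less profitable policy can win once the subsidy
        # term is counted, which makes the tradeoff explicit.
        status_quo = lending_objective(100.0, 0.0, subsidy_weight=40.0)
        alternative = lending_objective(95.0, 0.3, subsidy_weight=40.0)
        print(status_quo, alternative)  # 100.0 vs. 107.0
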
    Ms. Rice. I can kick it off. One of the things that we have 
been championing, Chairman Foster, is the building and 
development of a really robust publicly-available dataset for 
research purposes and to help fashion technology that is more 
fair. What we are finding is that a lot of the discrimination 
and bias that we are seeing in the AI systems we use is not just in 
financial services and housing, but in every area--criminal 
justice, education, employment, et cetera. One of the 
challenges is that the datasets upon which the models are built 
are extremely flawed and insufficient. They are 
underrepresentative.
    So, if we can build more robust datasets, we can even use 
synthetic data so we don't have to use completely pure original 
data that may raise privacy concerns. But if we had more robust 
datasets, not only could we ensure that we are building better 
models that are less discriminatory and that provide more 
socioeconomic benefits for everyone in our society, but it 
would also give us a better foundation for 
diagnosing different forms of discrimination and building more 
accurate tools for rooting out discrimination in algorithmic-
based systems.
    Chairman Foster. Thank you. Does anyone else want to take 
on the sort of optimal subsidy part of the question?
    Mr. Saleh. Congressman, I will say that our experience 
working in emerging markets is that if you can provide some 
sort of credit enhancement for lenders to incentivize them to 
lend into these subpopulations that are not well-represented in 
the data, you can both give people a bridge to being scorable 
in the future, and also incentivize the creation of a more 
robust corpus of data that is truly representative of the 
ability and willingness of some of these historically-
underprivileged communities to pay back loans. So, I endorse 
very much the comments Lisa made, and I think that we should 
look at credit enhancement programs for lenders to incentivize 
exactly the kind of lending development you are talking about.
    Ms. Rice. Yes. And Kareem's statement just reminded me that 
Canada has a program that does that. They actually subsidize, 
on the insurance side, consumers who get declined from the 
voluntary market, and so there is a subsidy program to provide 
insurance for those consumers. And it has actually helped build 
a more robust dataset, and we can provide more information 
about that later.
    Chairman Foster. Yes, thank you. I think this is a very 
important area to pursue, to really use AI to promote what we 
want instead of just looking at it to prevent it from acting 
badly.
    I now recognize the ranking member of the task force, Mr. 
Gonzalez of Ohio, for 5 minutes.
    Mr. Gonzalez of Ohio. Thank you, Chairman Foster. Mr. 
Girouard, I want to start with you. I find your testimony and 
your entire business model, frankly, to be inspiring and 
interesting in so many ways. But I am curious as to how 
scalable the process was with the CFPB from the very beginning, 
because I think one concern I have is that the CFPB, or any 
other entity, might not be able to handle, say, 100 companies 
doing what you did, Mr. Girouard.
    So I guess my first question would be, from a structure 
standpoint, how did you go about approaching the CFPB from the 
beginning, because you sort of embedded compliance in the very 
beginning, which makes perfect sense. But I am curious how that 
all played out, how that evolved, and whether or not you think 
whatever program you used could handle, let's say, 100 Upstarts 
if we ever got to that point. So, I will just kind of turn it 
over to you to comment on that.
    Mr. Girouard. Sure. Thank you, Congressman. First of all, I 
will say one thing, which is that the Equal Credit Opportunity 
Act actually is quite useful. You might think of it as old 
legislation from decades ago that is irrelevant today or has not 
kept up with the times, but it actually holds up, to a large 
extent. It works and it can be implemented. But, of course, 
there is some ambiguity when you get into sort of algorithmic 
lending and such.
    So, we introduced ourselves to the Consumer Financial 
Protection Bureau (CFPB) before we ever launched as a company 
because we were naive. People told us, you shouldn't go talk to 
the regulators, just sort of hide out, but we didn't believe 
that was the right path, so we introduced ourselves, and told 
them what we were hoping to achieve. And after years of good 
work, we got what is termed a no-action letter, which is 
basically meant to provide some clarity where there is ambiguity 
in the regulation. That, of course, is not a scalable path for 
anybody.
    And we also necessarily took on a bit of risk in our early 
days because we didn't know what the outcomes of our models 
would be, but we were a startup, so we had the capacity to take 
on that risk. The reality is, if there is going to be a path 
forward where these tools are broadly used, and used in a 
responsible manner where they do not introduce bias and do 
improve credit outcomes, it is going to require some form of 
legislation or rulemaking to standardize how testing is done. 
We have sort of done that one-off, but it is really not 
scalable to the larger industry, which is, I think, what is 
necessary.
    Mr. Gonzalez of Ohio. Yes, I couldn't agree more, and I 
would love to follow up with you--I only have 3\1/2\ minutes 
left--to get your ideas on what that might look like because I 
think it is really important.
    Ms. Koide, I want to move to you. We know that bank 
regulators are increasingly open to new kinds of underwriting 
as a driver for more inclusive lending and even for sounder 
lending. The agencies put out a joint statement on this. The 
CFPB provided the no-action letter with Upstart, as we all 
know. What are the obstacles to industry adoption of these new 
models? Is it mostly regulatory risk, or technological or 
cultural, or something else, and what else could be done to 
sort of clear the obstacles?
    Ms. Koide. Yes, thank you for the question. We have been 
quite focused on providing some of the empirical analysis on 
alternative financial data, namely cash flow information. And to 
clarify here, it is transaction data that you can see in a bank 
account and, importantly, even in a prepaid card product, for 
which we have greater coverage among underserved communities and 
populations, in terms of bank and prepaid access, as compared to 
credit records and histories. And 
that research, I think, helped to inform the regulators' 
awareness. They had been thinking about alternative data for a 
while as well, but, nevertheless, providing that kind of 
research and empirical insight, I think, helped to inform the 
steps that the regulators took jointly to issue that statement.
    There are, nevertheless, important questions around using 
new types of data in underwriting, and more generally as well. 
They extend from, how are we ensuring consumer-permissioned 
information is able to flow--we have Section 1033 under the 
Dodd-Frank Act, for which we do not yet have rules written that 
would articulate that process and the data that would then flow 
under that authority--to, how are adverse action notice 
obligations sufficiently met? If you are going to be 
extending credit to somebody on terms that are different from 
what they expected to receive, you have to explain it. And I 
think articulating those explanations to consumers is an area 
where the industry has continued to think about: how do they 
provide those kinds of explanations in a way that is comfortable 
for consumers and 
responsive to [inaudible].
    Mr. Gonzalez of Ohio. Great. Thank you so much, and I yield 
back.
    Chairman Foster. Thank you, and I will now recognize the 
Chair of the Full Committee, Chairwoman Waters, for 5 minutes.
    Chairwoman Waters. Thank you so very much. This will be 
directed to Ms. Rice and Mr. Hayes. The Equal Credit 
Opportunity Act and the Fair Housing Act prohibit 
discrimination for protected classes in the extension of credit 
in housing. Earlier this year, the Federal Reserve, the FDIC, 
the OCC, the NCUA, and the Consumer Financial Protection Bureau 
sent out a request to financial institutions and other 
stakeholders on how AI and ML are being used in the financial 
services space, and how these activities conform with these 
laws. Additionally, the Federal Trade Commission issued a 
separate guidance that racial or gender bias in AI can prompt 
law enforcement action.
    Ms. Rice and Mr. Hayes, are these Federal agencies doing 
enough to ensure that existing laws prevent bias and 
discrimination or providing sufficient accountability for 
disparate impacts that can result from the use of AI models? 
What should they be doing? Ms. Rice?
    Ms. Rice. Chairwoman Waters, thank you so much for the 
question. The National Fair Housing Alliance is currently 
working with all of those institutions and all of those Federal 
agencies that you have just named on the issue of AI fairness. 
And one of the challenges that we face is that the institutions 
themselves don't necessarily have sufficient staff and 
resources in order to effectively diagnose AI systems, detect 
discrimination, and generate mechanisms and solutions for 
overcoming bias.
    As an example, financial services institutions have been 
using credit scoring systems, automated underwriting systems, 
and risk-based pricing systems for decades, right? And we are now 
finding out, in part by using AI tools, that these systems have 
been generating bias for decades and decades, but for all of 
these years, the financial regulators were really not able to 
detect the deep level of bias ingrained in these systems. So, 
we really have to support the Federal regulatory agencies, make 
sure they are educated, make sure they are well-equipped so 
that they can do an efficient job, not only working with 
financial services institutions, but also helping to make their systems 
more fair.
    Chairwoman Waters. Let me interrupt you here for a minute, 
Ms. Rice and Mr. Hayes. We would like this information brought 
to us because when we talk about the longstanding biases, we 
should be on top of fighting for resources and insisting that 
the agencies have what they need to deal with it. And because 
they are embedded now, it is because we have not done 
everything we could do to make sure that they are equipped to 
do what they needed to do to avoid and to get rid of these 
biases. So, we want the information. We want you guys to bring 
the information to us so that we can now legislate and we can 
go after the funds that are needed. I thank you for continuing 
to work on these issues, but I want you to bring that 
information to us so we can do some legislation.
    Mr. Hayes, do you have anything else to add to this?
    Mr. Hayes. I completely agree with Lisa. I am hearing what 
you are saying. I think that is a great idea. I would say the 
agencies have been in learning mode for a few years, and now it 
is actually time to provide more guidance on how you should 
test AI models. I think industry is ready for that. We are 
ready for that. We would like to help inform that process, but 
I do think now is the time for some more generally applicable 
guidance and action in this space.
    Chairwoman Waters. I think that Mr. Foster would welcome 
additional information, as would other Members of Congress, 
including me, the Chair of this Financial Services Committee, 
because we cannot just wait, wait, wait, and tell the agencies 
to do better. We have to force them to do better. And forcing 
them to do better means that we understand where the biases 
are, and we actually legislate and we tell the agencies what 
they have to do.
    So, I am so pleased about this hearing today. And I am so 
pleased about the leadership of Mr. Foster. But this is a 
moment in history for us to deal with getting rid of 
discrimination and biases in lending and housing and all of 
this, and so help us. Help us out. Don't just go to them. Come 
to us and tell us what we need to do. Is that okay?
    Thank you very much. I yield back the balance of my time.
    Chairman Foster. Thank you, Madam Chairwoman. And I just 
wanted to say that if any of the Members or the witnesses are 
interested in sort of hanging around informally after the close 
of the hearing--it is something that we often do with in-person 
hearings, and we are happy to try to duplicate that in the 
online era here.
    And the Chair will now recognize the gentleman from 
Georgia, Mr. Loudermilk, for 5 minutes.
    Mr. Loudermilk. Thank you, Mr. Chairman. I appreciate 
having another very intriguing hearing on a very important 
matter here, especially as we adopt newer technologies in the 
financial services sector.
    Last year, the FDIC issued a request for information 
regarding standard setting and voluntary certification for 
technology providers. The idea was to have a voluntary 
certification program to streamline the process for banks and 
credit unions to partner with third-party FinTech and AI 
providers. The proposal is intriguing to me because when I met 
with both financial institutions and technology providers, one 
of their biggest concerns with the current regulatory 
requirements is that it takes an enormous amount of time and 
due diligence every time they want to form a partnership. I 
believe streamlining the onboarding process is an important 
step toward encouraging these types of partnerships.
    Mr. Girouard, what are your thoughts on this issue?
    Mr. Girouard. Yes, this is a really important issue. We 
tend to serve community banks, smaller banks which are often 
struggling to compete with the larger banks that have a lot 
more technical resources and people to put against the 
diligence they are required to do before using any type of 
third-party technology in their business. And if you are Wells Fargo,
or Chase, or PNC, you can spend all day and millions of dollars 
evaluating technology solutions. But if you are a community 
bank, that is not possible.
    Mr. Loudermilk. Right.
    Mr. Girouard. I think if you want to even the playing 
field, if you want to keep the smaller banks alive and viable 
in the communities they serve, you need to make it easier for them
to adopt technology. And that doesn't mean sort of foregoing 
the evaluations or the prudence that you need to responsibly 
adopt it. It just means allowing them to essentially put their 
efforts together on some sort of standard that would allow 
small banks across the country to keep up with all the 
investment going on in the top handful of banks out there.
    Mr. Loudermilk. So if we were able to streamline the 
ability to form these partnerships, would that benefit 
consumers by expanding access to FinTech and AI products?
    Mr. Girouard. Oh, for sure. Every month or so, we turn on 
another community bank that suddenly offers attractively-priced 
products with higher approval rates and lower interest rates in
their communities, and it is happening regularly. But, 
honestly, it is just the tip of the iceberg. The opportunity is 
so much larger, and most banks, frankly, just don't have those 
kinds of resources. This is a process that can take 6 months. 
You can go through hundreds of hours of meetings and 
discussions. You have to bring in and talk to your regulator, 
whether it is the FDIC, the OCC, et cetera. There is this
incredible process that most banks just don't have the time and 
resources to take on, so it just gets sidelined.
    Mr. Loudermilk. Another topic that I have brought up in 
these hearings before is dealing with the issue of bias. We 
need to recognize the difference between what types of bias we 
want to have in AI versus those that need to be rooted out. 
Obviously, you have to have a level of bias to discriminate 
between those who can and those who cannot pay a loan back. Not 
all types of biases are bad. If you think about it, the whole 
purpose of using AI in loan underwriting is to be biased 
against those who are unable to repay a loan, or at least to 
identify those whose data suggest they are unlikely to repay a 
loan, or even just to set an interest rate. At the same time,
algorithms obviously should not contain bias that is based on 
factors that are irrelevant to the actual creditworthiness of 
the borrower, like race, or gender, or any other factor.
    Mr. Girouard, do you agree that we need to be careful not 
to eliminate all bias in AI, but, rather, we should be working 
to eliminate the types of bias that really don't belong there?
    Mr. Girouard. Congressman, perhaps it is a bit of 
semantics, but we believe that bias is always wrong. Accuracy 
in a credit model is what we seek. And giving a loan to 
somebody who is going to fail to pay it back is not doing any 
good for them, so, of course, wanting to lend to people who 
have the capacity to pay it back is always our goal. But we 
don't view an accurate credit model, or making offers of credit 
as good as possible for people who are likely to pay it back, 
as in any sense biased against everybody else. It is really just
accuracy in predicting and understanding who has the capacity 
to repay.
    Mr. Loudermilk. And maybe it is semantics, but what we are 
looking at is for AI to look at data, just hard data, 
regardless of any other demographic factor, just looking at the 
creditworthiness of the borrower. And I see that, in the 
technical sense, as a level of bias: just being able to 
determine, is this person able to pay back the loan in the 
amount that they are borrowing, or are they not? Set all that 
other stuff aside. That
is really what we want AI to be able to do, not look at race, 
or gender, or any of those factors. Just, are they of the 
income level, do they have the credit history, do they have a 
history of paying back loans, et cetera? That is really what we 
are trying to get to, correct?
    Mr. Girouard. It is true that we are trying to have an 
accurate model that will lend to people who can pay it back, 
and we constantly strive to make our model more accurate 
because when we do that, it tends to approve more people at 
lower rates, and it disproportionately benefits underserved 
people--Black Americans, the Hispanic community--so that is all 
good. But having said that, my firm belief is that you need a 
supervisory system, a separate system that watches and makes 
sure that we are not introducing bias.
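    As a minimal sketch of the kind of separate bias check Mr. 
Girouard describes, the following Python fragment computes the 
adverse impact ratio (the ``four-fifths rule'' used in fair 
lending and employment testing). The function, the 0.8 
threshold, and the toy data are illustrative assumptions, not a 
description of any witness's actual system.

    def adverse_impact_ratio(approved, group):
        # Approval rate of each group divided by the rate of the
        # most-favored group; ratios below roughly 0.8 are a common
        # red flag under the four-fifths rule.
        rates = {}
        for g in set(group):
            decisions = [a for a, gg in zip(approved, group) if gg == g]
            rates[g] = sum(decisions) / len(decisions)
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Toy data: 1 = approved, 0 = denied.
    approved = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1]
    group = ["A"] * 5 + ["B"] * 5
    print(adverse_impact_ratio(approved, group))
    # Group A approves 4/5 and group B 2/5, so B's ratio is 0.5,
    # which would prompt a closer fair lending review.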
    Mr. Loudermilk. I agree, and I appreciate your answer. And 
I yield back.
    Chairman Foster. Thank you. The Chair now recognizes the 
gentlewoman from Massachusetts, Ms. Pressley, for 5 minutes.
    Ms. Pressley. Thank you, Mr. Chairman, for convening this 
task force hearing, and to each of our witnesses for their 
testimony. Last year, I had the opportunity to ask the former 
CFPB Director about a practice that remains a serious concern 
to me: the use of information about people's education, 
including where they went to college, when making decisions 
about access to credit and the cost of credit. An investigation 
by consumer advocates shows that the artificial intelligence 
lending company, Upstart, was charging customers who went to 
Historically Black Colleges and Universities more money for 
student loans than customers who went to other schools, holding 
all else equal. Now, I know Upstart has vigorously denied these 
allegations, but I have here the first report prepared by Mr. 
Hayes and his colleagues as a part of a settlement the company 
reached with the NAACP Legal Defense Fund and the Student 
Borrower Protection Center.
    On page 23, it appears to say that Upstart made significant 
changes to its business model after coming under fire for its 
lending practices. I will certainly be watching closely to see if
Mr. Hayes' firm can independently verify that these changes 
actually address the disturbing effects of Upstart's approach 
to lending. It is hard to imagine a practice that better 
illustrates the deep and lasting legacy of systemic racism in 
American higher education than educational redlining. That is 
why I was so troubled to see that yet another FinTech lender 
that uses AI, a company called Stride Funding, was engaged in 
what sounds like the very same discriminatory practices as 
Upstart. Mr. Hayes, should we be worried that these practices 
are driving racial inequality and leading to disparate outcomes 
for former students?
    Mr. Hayes. Thank you, Representative. I will say as a 
general matter, every time you use data in a model, part of the 
reason for using that data is to replicate some patterns in 
that data, and we also know that there are disparities in our 
education system. As you pointed out, there are disparities 
with respect to race, national origin, and sex. Those could be 
replicated if you use that data in a model; that is the risk. 
It is not inevitable.
There are lots of ways to use data to design models so that you 
don't do that.
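    As a minimal sketch of the risk Mr. Hayes describes--a 
feature replicating disparities because it acts as a proxy for 
a protected attribute--the following Python fragment tabulates 
how sharply a candidate feature splits along protected-group 
lines. All names and data here are hypothetical.

    from collections import Counter, defaultdict

    def group_shares_by_feature(feature, protected):
        # For each feature value, the share of records in each
        # protected group; a heavily skewed split flags the risk
        # that the feature proxies for the protected attribute.
        buckets = defaultdict(Counter)
        for f, p in zip(feature, protected):
            buckets[f][p] += 1
        return {f: {g: n / sum(c.values()) for g, n in c.items()}
                for f, c in buckets.items()}

    # Toy data: school attended versus protected-group membership.
    school = ["X", "X", "X", "Y", "Y", "Y", "Y", "X"]
    member = ["A", "A", "A", "B", "B", "B", "A", "A"]
    print(group_shares_by_feature(school, member))
    # School ``X'' is 100 percent group A here, so using it as a
    # model input risks replicating the underlying disparity.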
    Our role in the Upstart and Student Borrower Protection 
Center matters was as an independent monitor, so I don't have 
views at this point about whether that has happened, whether 
those reports are accurate or not. That is part of our charge 
as an independent monitor. I think it is a risk. It is one that 
should be guarded against, and I think any company that uses 
this type of data should be very careful with it and test its 
intuition.
    Ms. Pressley. Okay. So, Mr. Hayes, how can Congress and 
financial regulators ensure that complex algorithms and machine 
learning [inaudible] have skewered the disparate and illegal 
impact of these lending practices? What can we do?
    Mr. Hayes. That is a great question. I will say as an 
initial matter, there is a [inaudible] in AI and ML models, and 
some of them are quite difficult or even impossible to explain. 
Others are explainable. And if an institution cannot explain its
model, why it is reaching certain conclusions, it should be 
very hesitant or maybe not use it at all for important 
decisions. I think that is pretty key.
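    To illustrate what ``explainable'' can mean in practice, 
here is a minimal sketch of a linear scorecard whose decisions 
decompose exactly into per-feature contributions--the kind of 
reason codes a lender can report in an adverse action notice. 
The features, weights, and cutoff are invented for illustration 
and do not describe any witness's model.

    # A linear scorecard is explainable by construction: each
    # feature's contribution to the score can be reported directly.
    WEIGHTS = {"income_10k": 0.6, "debt_to_income": -3.0,
               "years_history": 0.25}
    INTERCEPT = -1.0
    CUTOFF = 0.0

    def score(applicant):
        return INTERCEPT + sum(WEIGHTS[k] * v for k, v in applicant.items())

    def reason_codes(applicant):
        # Per-feature contributions, most negative first: the
        # factors that hurt the applicant's score the most.
        contribs = {k: WEIGHTS[k] * v for k, v in applicant.items()}
        return sorted(contribs.items(), key=lambda kv: kv[1])

    applicant = {"income_10k": 3.0, "debt_to_income": 0.5,
                 "years_history": 2.0}
    s = score(applicant)
    print("approved" if s >= CUTOFF else "denied", round(s, 2))
    print(reason_codes(applicant))
    # Prints "denied -0.2"; the leading reason code is the -1.5
    # contribution from debt_to_income.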
    This goes also back to the point that Chairwoman Waters had 
made. I think it is a great opportunity for the CFPB to come in 
and start actively testing some of these models, to test some 
of these intuitions, to test if these risks are real. That is a 
role it can play. As an outside advocate, there is only so much 
you can do with the model. It takes an agency with supervisory 
authority to really help institutions understand how their 
models work and make sure they are not going to violate the 
law.
    Ms. Pressley. Okay. Thank you. These patterns are certainly 
very disturbing, and it seems that people have not learned from 
Upstart's errors. The discrimination against students who have 
gone to HBCUs and minority-serving institutions exacerbates the 
disproportionate burden of student loans on Black Americans and 
perpetuates economic discrimination. If the use of AI in 
lending is to continue and expand in the financial services 
sector, Congress and Federal regulators must be positioned to 
provide proper oversight. And, as I mentioned, I will be 
watching closely. Thank you. I yield back.
    Chairman Foster. Thank you. The Chair now recognizes the 
gentleman from Texas, Mr. Taylor, for 5 minutes.
    Mr. Taylor. Thank you, Mr. Chairman. It is great to be on 
the task force, and I appreciate the opportunity for this 
hearing. Ms. Pressley, I certainly hope you won't discriminate 
against me for having gone to college and business school in 
your district. Since Upstart has been named here, I would love 
to give the CEO an opportunity to respond to that question set.
    Mr. Girouard. Sure. Thank you. And, Congresswoman, I 
certainly appreciate your concern, but I will say, first and 
foremost, I have dedicated my career to improving access to 
credit, and I stand proud with what we have accomplished and 
how we have done it. The use of education data, without 
question, improves access to credit for Black Americans, for 
Hispanic Americans, for almost any demographic that you can 
speak to. Our models aren't perfect, but they certainly are not 
discriminatory.
    We had a disagreement with the Student Borrower Protection 
Center, and their conclusions, in our view, were inaccurate. 
Having said that, we very willingly began to work with them and 
to engage with them to figure out, are there ways we can make 
even more improvements to our testing and to our methodology, 
and we continue to do that, as well as with the NAACP Legal 
Defense Fund. So, I think Upstart has demonstrated good faith 
in trying to improve credit access for all and to do it in a 
fair way: we are working proactively with regulators, we are 
here working with lawmakers, and we will work with consumer 
advocates if they want to. We have nothing to hide, and
frankly, we are proud of the effort we are making to improve 
access to credit for Americans.
    Mr. Taylor. Ms. Pressley, do you want to ask a follow up? I 
would be happy to yield the floor to you to ask a follow up to 
Mr. Girouard, or I can continue on with my questioning.
    [No response.]
    Mr. Taylor. Okay. So, Mr. Girouard, I really appreciate 
what you are doing. I think you have an impressive model, and 
it is amazing to see the application of AI in the way you have 
done it. How do you source your loans? Are you doing those 
directly or are you doing those through traditional banking 
platforms?
    Mr. Girouard. Borrowers come to Upstart through our brand, 
recognizing our marketing efforts that say, come here and you 
can get a better loan than you can get elsewhere. They can also 
come directly through our bank partners. There are
more than 15 banks on our platform which also can, using our 
technology, offer loans to their own customers. So, they can 
find us in many different ways.
    Mr. Taylor. How big are your 15 banking partners? Are those 
kind of regional banks? Are those G-SIBs? Are those community 
banks?
    Mr. Girouard. They vary from community banks to credit 
unions, and credit unions are, on our platform, growing quite
quickly.
    Mr. Taylor. What is your average loan size?
    Mr. Girouard. In the range of $10,000 to $12,000.
    Mr. Taylor. Okay. I just want to put this card on the 
table--I was on a bank board for 12 years, and I sat on the 
loan committee, and so, I was part of approving every loan for 
12 years. I can honestly say that never once was credit score 
determinative of a loan. To be very honest, in the director 
discussions, I would say that credit score didn't come up in 
[inaudible] percent of our loan decisions. So, the statement 
that you made about it being a primary means of making 
decisions at least was antithetical to my own limited 
experience. We were one of the 5,000 banks in the United 
States, in terms of how we thought about credit. And I will say 
that--
    Mr. Girouard. I have yet to meet a bank that doesn't have a 
minimum credit score requirement for a loan, typically 680 or 
something of that nature. So if they are out there, I haven't 
met them yet.
    Mr. Taylor. Okay. I see where you are coming from. I think 
I understand what you are saying. Thank you for that. That just 
kind of clarifies where you are coming from in that particular 
assessment. But again, I would just say that underwriting 
credit is very important, and the other thing is you want to 
have costs be lower. The final thing I would say is, if I add a 
whole bunch of regulations on AI commerce, doesn't that make it
more expensive for you to do business and then, in turn, force 
you to raise your rates?
    Mr. Girouard. It depends what that regulation is. A lot of 
times regulation can be clarity that actually helps adoption of 
the technology--
    Mr. Taylor. If I make it more expensive for you to operate, 
doesn't that increase the cost of operating?
    Mr. Girouard. Oh, by definition, it for sure does, 
Congressman.
    Mr. Taylor. Okay. Thank you. I just would encourage my 
colleagues as we think about this, to make sure that we don't 
increase the cost of operating, and then, in turn, lower access 
to capital, which I think is our mutual objective. I yield 
back.
    Chairman Foster. Thank you. The Chair will now recognize 
the gentlewoman from North Carolina, Ms. Adams, for 5 minutes.
    Ms. Adams. Thank you, Mr. Chairman. Thank you for calling 
this hearing, and Chairwoman Waters, we appreciate your support 
as well. And to the witnesses, thank you for offering your 
expertise and your insights.
    I am grateful to Representative Pressley for diving into 
educational redlining and its harmful impacts on HBCU students 
and graduates. Over the past year, we have seen examples of how 
the use of such data and algorithms by lenders could result in
borrowers facing thousands of dollars in additional charges if 
they attended a minority-serving institution, like a 
Historically Black College or University (HBCU). I am a proud
product of an HBCU, a 2-time graduate of North Carolina A&T, 
and a 40-year professor at Bennett College, also an HBCU. And I 
do know how invaluable these schools have been to my success, 
and their outsized role in the economic and social mobility of 
millions of Black people in this country. They play a critical 
role in diversifying the workforce, particularly the tech 
sector.
    Ms. Rice, and Mr. Saleh, we know that AI bias is real. Can 
you speak to the importance and value of increasing the 
diversity among AI researchers, scientists, and developers to 
improve quality of algorithm development and datasets, and how 
can we ensure that HBCUs play a greater role in diversifying 
the AI pipeline?
    Ms. Rice. Congresswoman Adams, thank you so much for that 
question. It is critically important. I mentioned earlier that 
the National Fair Housing Alliance has launched the Tech Equity 
Initiative. One of the major goals of the Tech Equity 
Initiative is to increase diversity in the tech field, and one 
of the ways of doing that, of course, as you just mentioned, is 
partnering with Black, Indigenous, and People of Color (BIPOC)-
serving financial institutions and HBCUs. I hinted in my 
statement that the National Fair Housing Alliance has been 
working on tech bias issues since our inception almost 40 years 
ago. So, these issues--tech bias, AI algorithmic bias--are not 
new. They are just gaining more media attention.
    But we have found, as we work with financial services 
institutions on the issue of tech bias--and we have been doing 
this, again, for almost 40 years--that as these institutions--
lenders, insurance companies, et cetera--diversify their 
employee base, they adopt better policies that are more 
inclusive and fair, and they design better systems that are not 
only more accurate, but have less discriminatory outcomes. And
oftentimes, it is because those people of color who are working 
inside those institutions can see signs of discrimination. They 
can pick up on variables that are being used in the algorithm 
and, from their own personal experience, can detect and sort of 
understand how those variables can generate a discriminatory 
outcome.
    I mentioned that a lot of the innovations that we are 
seeing in the AI field, and a lot of the documentation of tech 
bias, have come from scientists like Joy Buolamwini, who is 
one of the most noted data scientists in the world. How did she
detect that facial recognition systems were discriminatory? 
Because she was working on a project and facial recognition 
technology did not work for her Black face.
    Ms. Adams. Right. Okay.
    Ms. Rice. If she had not been Black, she wouldn't have 
noticed that. So, I yield to my colleague, Mr. Saleh.
    Ms. Adams. Mr. Saleh?
    Mr. Saleh. I don't have much to add to Lisa's excellent 
comments. Congresswoman, you are absolutely right. We must do 
more to diversify the population of people who are building AI 
systems, governing AI systems, and monitoring AI systems. The 
technology industry has not been sufficiently good in that 
regard.
    Ms. Adams. We know that tenant-screening algorithms have 
been increasingly employed by landlords, but there is evidence 
that algorithms adversely affect Black and Latino renters. For 
example, when a Navy veteran named Marco Fernandez returned 
from deployment and was trying to rent a house, the tenant-
screening algorithm [inaudible]. I am going to have to yield back,
Mr. Chairman. Thank you so very much, and thank you to our 
guests for your responses.
    Chairman Foster. Thank you. The Chair now recognizes the 
gentleman from Indiana, Mr. Hollingsworth, for 5 minutes.
    Mr. Hollingsworth. I appreciate the Chair, and I certainly 
appreciate the ranking member for having this great hearing 
today, talking about these very important topics. I certainly 
welcome and hope for more diversity in the technology field 
writ large, and to find more opportunities for more people to 
contribute their great talents to this country. I think that is 
what has made us a leader around the world in technology, and I 
hope it is what will continue to make us a leader of technology 
around the world.
    Mr. Girouard, I wanted to talk a little bit about this for 
a second. I certainly know that you are a fan of making sure 
that your workforces and other workforces are very diverse. But 
I also want to recognize the incentive you have for ensuring 
that your platform isn't biased: you make money by making 
loans, and if you can find more creditworthy individuals than 
other technologies can--no matter what walk of life they come 
from, no matter what color their skin, no matter what 
background they may have--then you are better off because of 
that. Wouldn't you agree that you are
incentivized to make sure that you find as many opportunities 
to make creditworthy loans as possible?
    Mr. Girouard. Yes, absolutely. The way my company grows is 
the AI models get smarter at identifying who will and won't pay 
a loan, and that might seem odd. You might think that could 
make you shrink, not grow, but, in reality, millions and 
millions of people who are actually creditworthy, in reality 
are not recognized as such by a credit score.
    Mr. Hollingsworth. Right.
    Mr. Girouard. And that little oddness there means that the 
better our models get, on balance, the more people get 
approved, and the lower the interest rates are. So, it is a sort of win
for everybody as long as the technology keeps improving, and, 
thus far, it has worked well for us.
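    As a toy contrast between the hard credit score cutoff 
discussed later in this exchange and a rule that also weighs 
other repayment signals, the following sketch shows how a 
creditworthy applicant can be missed by a score alone. All 
applicants, thresholds, and rules here are invented for 
illustration.

    # Toy contrast: a hard score cutoff versus a rule that also
    # looks at other repayment signals.
    applicants = [
        {"name": "A", "score": 655, "dti": 0.15, "repaid_before": True},
        {"name": "B", "score": 700, "dti": 0.65, "repaid_before": False},
        {"name": "C", "score": 690, "dti": 0.30, "repaid_before": True},
    ]

    def cutoff_rule(a):
        # Deny anyone below a fixed minimum score.
        return a["score"] >= 680

    def richer_rule(a):
        # A hypothetical model-like rule: low debt-to-income and a
        # repayment track record can offset a thinner credit score.
        return a["score"] >= 680 or (a["dti"] < 0.4 and a["repaid_before"])

    for a in applicants:
        print(a["name"], cutoff_rule(a), richer_rule(a))
    # Applicant A is denied by the cutoff but approved by the
    # richer rule; applicant B passes the cutoff despite weaker
    # signals elsewhere.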
    Mr. Hollingsworth. And I definitely want to get back to, 
how do we keep improving the technology, but I just want to hit 
this point once again because I think, frequently, it goes 
unsaid, that the wind is at your back. The goal is to increase 
the number of loans and, frankly, to find opportunities to make 
loans where others might not be able to make those loans or may 
not find that same opportunity. So it is not as if we are 
struggling to hold back a problem, but, instead, the problem 
resolution and the market incentive here are working in the 
same direction. And I think that is really important for us to 
remember because in many other places, they work in opposite 
directions.
    Second, I want to come back to exactly what you said, which 
is, how do we improve this technology over time? How do we 
expand the breadth of this technology over time? And I wondered 
whether there are stories or narratives or specific points as 
to how we might do that, how we as policymakers might empower 
you, your cohorts, your colleagues, your counterparts, and, 
frankly, the next generation of ``you's'' to develop this 
technology and be able to make it mainstream so that we can 
empower more Americans, no matter the color of their skin, no 
matter their background, to be able to get access to financial 
capital.
    Mr. Girouard. Yes. First, thank you for the question, 
Congressman. I think, first of all, one of the most important 
things that could happen is just to provide clarity. We are all 
for testing, as you can see. We believe we are leading the
charge on how rigorous testing for bias can be and should be. 
And as much as it is probably to our benefit that no one else 
figured out how to do it and deploy this technology, it is to 
the country's benefit that there is as much of this used 
responsibly as possible.
    The problem, of course, is that banks are regulated not by 
one agency, but by at least four, if not more than that, and 
you have State-level regulators as well. So, it is really 
difficult for technology like this to take hold when, even
within one regulator, there is not a consistent opinion. A 
supervisor of this bank might say one thing, and a supervisor 
of another bank says another thing, so the adoption ends up 
being very slow.
    There is one other important matter I want to raise, which 
is that banks have to worry about consumer protection, et 
cetera. But on the other side, they have the bank solvency 
side, the people who care about whether the bank is going to go out of
business, and these are sometimes at odds because they are 
prevented from making loans to what the regulator would 
perceive as risky borrowers. So, you have this sort of 
governance of banks that is oftentimes in conflict with moving 
toward a more equitable, more inclusive lending program. And 
that is difficult--
    Mr. Hollingsworth. Mr. Girouard, I think that is a great 
point and something we really need to hit home. What you are 
saying is, we care about the solvency of our financial markets, 
the safety, but we also care about the efficiency, and making 
sure we don't push one too far in favor of the other is a 
really important dynamic going forward. And I think Van Taylor 
hit on this: regulation can help efficiency, but it 
can also hurt efficiency greatly, and making sure we monitor
that is very important. I yield back to the Chair.
    Chairman Foster. Thank you. The Chair now recognizes the 
gentleman from Massachusetts, Mr. Auchincloss, for 5 minutes.
    Mr. Auchincloss. Thanks, Mr. Chairman, for organizing this 
hearing, and to our witnesses for their terrific testimony and 
Q&A. Massachusetts has been really on the cutting edge of 
artificial intelligence and its use in computational biology, 
in insurance, in the provision of legal services, in investing 
in real estate, and also in thinking about the regulatory 
dimensions.
    The Massachusetts State House has formed a Facial 
Recognition Commission, led by State Senator Cindy Creem, in
my district, because of concerns over facial recognition 
application. A study from MIT in 2018 found that while facial 
recognition accuracy rates for White men were north of 99 
percent, the rates for Black women were significantly lower. 
And, Ms. Rice, this is why I was very happy to hear you
raise this issue.
    I was wondering if I could really bring up two questions 
with you. The first is any concerns you may have about proposed 
regulations for the introduction of facial recognition 
technology into housing settings. We are seeing already
that smart home technology--like Latch, smart keypads, and Nest 
devices--is really becoming standard fare, and I don't think 
cameras that are linked up for facial recognition are very far 
behind. Has this been an area that you have looked at with 
regard to housing, and are there safeguards in place?
    Ms. Rice. Yes, Congressman. Thank you for the question, and 
one other area that we have particularly been focusing on is 
the use of facial recognition technology in the area of 
financial services. So, for example, more transactions have 
been happening in the virtual space, and there is certainly the 
opportunity to use facial recognition technology as a fraud 
detection mechanism, for example. So, yes, this is an area of 
deep and grave concern. It is one of the reasons why we have 
been calling for the building and development of more 
inclusive, robust datasets in many different areas. One of the 
ways that Joy Buolamwini and other data scientists were able to 
work with IBM, and Google, and Facebook, et cetera, to help 
them improve their systems and lessen the discrimination was
by building better training datasets.
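    As a minimal sketch of the kind of audit that exposed those 
facial recognition disparities, the following Python fragment 
computes accuracy separately for each demographic group in a 
labeled evaluation set. The data and group labels are invented 
for illustration.

    from collections import defaultdict

    def accuracy_by_group(y_true, y_pred, group):
        # Overall accuracy can hide large gaps; computing it per
        # demographic group is the kind of audit that surfaced the
        # facial recognition disparities discussed above.
        hits = defaultdict(int)
        total = defaultdict(int)
        for t, p, g in zip(y_true, y_pred, group):
            total[g] += 1
            hits[g] += int(t == p)
        return {g: hits[g] / total[g] for g in total}

    # Toy evaluation set with self-reported demographic labels.
    y_true = [1, 1, 0, 1, 0, 1, 1, 0]
    y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
    group = ["a", "a", "a", "b", "b", "b", "b", "a"]
    print(accuracy_by_group(y_true, y_pred, group))
    # {'a': 1.0, 'b': 0.25} -- a disparity the overall rate
    # (0.625) completely obscures.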
    Mr. Auchincloss. That was actually the second point I 
wanted to raise. You have been ahead of me this whole hearing. 
You had mentioned earlier in your comments the idea of 
synthetic data as a way to buttress training sets. My 
understanding of how the original facial recognition training
sets were composed is that the faces were really scraped off of 
a lot of media sites and elsewhere, and they were pulling, it 
seems like, disproportionately White faces. Has there been work 
done, and maybe just describe more how those training sets have 
been fixed because, as you say, really the raw data is the core 
of undoing bias in the actual outcomes?
    Ms. Rice. Yes, and I should have been more specific. I was 
sort of myopically focused on financial and housing services in 
terms of my reference to a synthetic, publicly-available 
dataset for research and education only. I don't
think we should be building real systems and models using a lot 
of synthetic data, so I am sorry I didn't get a chance to make 
that distinction.
    Mr. Auchincloss. Absolutely. Ms. Koide, maybe you could 
weigh in here as well about any oversight that you think is 
necessary for facial recognition technology.
    Ms. Koide. Thank you for the question. We have been much 
more focused on tabular data, data that is being contemplated 
or used in credit underwriting. We have not been evaluating 
visual recognition data, but it is a great question.
    Mr. Auchincloss. Understood. Yes, it is an area that we 
have been leaning into in Massachusetts and, I think, 
increasingly nationally just because, in some ways, the 
technology is both really good and really bad. Really good in 
the sense that it has been incredibly effective and has 
produced some compelling results in its accuracy, but very bad
in the sense that these kinds of biases have snuck through in a 
way that, as Ms. Rice pointed out, were not identified for too 
long. So, it has been an area of concern for me both at the 
State and the Federal level, and I will yield back the balance 
of my time, Mr. Chairman.
    Chairman Foster. Thank you, and I would like to thank all 
of our witnesses for their testimony today.
    The Chair notes that some Members may have additional 
questions for this panel, which they may wish to submit in 
writing. Without objection, the hearing record will remain open 
for 5 legislative days for Members to submit written questions 
to these witnesses and to place their responses in the record. 
Also, without objection, Members will have 5 legislative days 
to submit extraneous materials to the Chair for inclusion in 
the record.
    This hearing is now adjourned.
    [Whereupon, at 1:24 p.m., the hearing was adjourned.]

                            A P P E N D I X


                              May 7, 2021
                              
[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]