[House Hearing, 118th Congress]
[From the U.S. Government Publishing Office]


                  ARTIFICIAL INTELLIGENCE (AI): INNOVATIONS 
                        WITHIN THE LEGISLATIVE BRANCH

=======================================================================

                                HEARING

                               BEFORE THE

                           COMMITTEE ON HOUSE
                             ADMINISTRATION

                        HOUSE OF REPRESENTATIVES

                    ONE HUNDRED EIGHTEENTH CONGRESS

                             SECOND SESSION

                               __________

                            JANUARY 30, 2024

                               __________

      Printed for the use of the Committee on House Administration
      


                             www.govinfo.gov
                           www.cha.house.gov
                           
                              __________

                   U.S. GOVERNMENT PUBLISHING OFFICE                    
54-784                      WASHINGTON : 2024                    
          
-----------------------------------------------------------------------------------                              

                   Committee on House Administration

                    BRYAN STEIL, Wisconsin, Chairman

BARRY LOUDERMILK, Georgia            JOSEPH MORELLE, New York,
H. MORGAN GRIFFITH, Virginia              Ranking Member
GREG MURPHY, North Carolina          TERRI A. SEWELL, Alabama
STEPHANIE BICE, Oklahoma             DEREK KILMER, Washington
MIKE CAREY, Ohio                     NORMA TORRES, California
ANTHONY D'ESPOSITO, New York
LAUREL LEE, Florida

                      Mike Platt,  Staff Director 
                 Jamie Fleet,  Minority Staff Director 
                        
                        C  O  N  T  E  N  T  S

                              ----------                              
                           Opening Statements

Chairman Bryan Steil, Representative from the State of Wisconsin
    Prepared statement of Chairman Bryan Steil
Ranking Member Joseph Morelle, Representative from the State of 
  New York
    Prepared statement of Ranking Member Joseph Morelle

                                Witnesses

Hugh Halpern, U.S. Government Publishing Office Director
    Prepared statement of Hugh Halpern
Judith Conklin, Chief Information Officer for the Library of 
  Congress
    Prepared statement of Judith Conklin
John Clocker, Deputy Chief Administrative Officer for the House 
  of Representatives
    Prepared statement of John Clocker
Taka Ariga, Chief Data Scientist and Director of the Innovation 
  Lab, Government Accountability Office
    Prepared statement of Taka Ariga

                        Submissions for the Record

Letter Announcing Vacancy
Letter Appointing Clerk

                         Questions for the Record

Hugh Halpern answers to submitted questions
Judith Conklin answers to submitted questions
John Clocker answers to submitted questions
Taka Ariga answers to submitted questions

 
ARTIFICIAL INTELLIGENCE (AI): INNOVATIONS WITHIN THE LEGISLATIVE BRANCH

                              ----------                              


                            January 30, 2024

                 Committee on House Administration,
                                  House of Representatives,
                                                    Washington, DC.

    The Committee met, pursuant to notice, at 10:33 a.m., in 
room 1013, Longworth House Office Building, Hon. Bryan Steil 
[Chairman of the Committee] presiding.
    Present: Representatives Steil, Loudermilk, Griffith, 
Murphy, Bice, Carey, Morelle, Sewell, Torres, and Kilmer.
    Also present: Representative Lieu.
    Staff present: Caleb Hays, Deputy Staff Director, General 
Counsel, Parliamentarian; Jessica Smith, Professional Staff; 
Jordan Wilson, Director of Member Services; Caitlin O'Dell, 
Legal Assistant and Deputy Clerk; Kristen Monterroso, 
Legislative Clerk; Khalil Abboud, Minority Deputy Staff 
Director, Chief Counsel; Matt DeFreitas, Minority Director of 
House Communication Standards Commission; and Jamie Fleet, 
Minority Staff Director.

    OPENING STATEMENT OF HON. BRYAN STEIL, CHAIRMAN OF THE 
 COMMITTEE ON HOUSE ADMINISTRATION, A U.S. REPRESENTATIVE FROM 
                           WISCONSIN

    Chairman Steil. The Committee on House Administration will 
come to order. I note that a quorum is present.
    Without objection, the Chair may declare a recess at any 
time.
    Also, without objection, the meeting record will remain 
open for 5 legislative days so Members may submit any materials 
they wish to include therein.
    Thank you, Ranking Member Morelle, Members of the 
Committee, and our witnesses, for participating in today's 
hearing.
    AI has sparked interest around the world. With any new 
technology come risks and rewards. We have seen AI used to 
mimic President Biden's voice in a recent election robocall. We 
have also seen it used to spur important research and reduce 
burdensome Government paperwork.
    We must ensure Congress is ready to manage the risks AI 
poses while leaning into its rewards. Today, we will explore 
how the legislative branch is developing AI governance plans 
for Congress to innovate effectively and efficiently.
    In the early months of 2023, generative AI disrupted 
multiple industries, including Government operations. It 
triggered a global conversation about the power of AI and its 
social implications.
    By the middle of last year, the Committee took steps to 
meet with each of our legislative-branch entities to discuss 
the power of AI and the need to develop responsible governance 
plans.
    By the end of the year, we saw several AI use cases emerging 
from our legislative-branch agencies. For example, we have seen 
innovative experiments with optical character recognition to 
assist visually impaired Library patrons. The U.S. Copyright 
Office is using AI to improve digital accessibility to 
copyright registration records and other data. Natural language 
processing has helped rapidly summarize legislation. Testing is 
underway on enhanced search tools that could help the public 
find Government publications more quickly.
    These use cases are just some examples of what will enable 
Congress to be more effective. The Committee is dedicated to 
promoting transparency around these use cases. This 
transparency is essential to ensure Congress maintains a 
detailed understanding of the use of AI for this institution 
and for the American people.
    Congress must also ensure our legislative-branch agencies 
are developing comprehensive AI governance plans. These plans 
are foundational IT best practices and are necessary to 
effectively manage AI over the long term. The NIST AI framework 
is a critical resource that can help legislative-branch 
agencies turn AI principles into specific Government policies.
    As the legislative branch experiments and develops these 
governance plans, we must also be looking ahead to the need for 
upskilling the legislative-branch workforce. AI is new to many 
of us, making training a critical element of successful 
implementation. I was once told that AI will not replace 
humans, but humans who use AI could replace those who do not 
use it. I look forward to hearing how legislative-branch staff 
will be upskilled and made familiar with generative AI 
technology.
    The Committee on House Administration is focused on 
ensuring the legislative branch is equipped to address 
the challenges AI presents while utilizing its benefits. I look 
forward to hearing from each of our witnesses today. I am 
pleased to welcome later today into this room Congressman 
Obernolte and Congressman Lieu to the Committee to discuss AI.

    With that, I will yield to the Ranking Member for 5 minutes 
for opening remarks, and I now recognize you, Mr. Morelle.

OPENING STATEMENT OF HON. JOSEPH MORELLE, RANKING MEMBER OF THE 
 COMMITTEE ON HOUSE ADMINISTRATION, A U.S. REPRESENTATIVE FROM 
                            NEW YORK

    Mr. Morelle. Thank you so much, Mr. Chairman, for the 
introduction and also for convening this important hearing. I 
am very grateful to you and your staff, who have worked 
tirelessly to make this a bipartisan conversation. It is 
critically important, and I look forward to discussing the ways 
that recent innovations in generative artificial intelligence 
can improve the efficiency of the legislative-branch operations 
and help us better serve our constituents.
    I want to express appreciation to all the folks who have 
gathered with us to speak on this issue. I am looking forward 
to listening to them. We have witnesses, as you have said, from 
the Library of Congress, Government Publishing Office, 
Government Accountability Office, and Chief Administrative 
Office.
    I am grateful to all of you and your organizations, which 
are the backbone of Congress and on which we rely each and 
every day to provide us with administrative research and 
technical tools that we need to carry out all of our 
constitutional duties. I am looking forward to discussing your 
work with AI so far and how the Committee can help best support 
your efforts in each of your responsibilities.
    Over the past several years, the combination of enhanced 
computing power, a proliferation of data, improvements to 
underlying models, and an increased availability of AI tools 
has led to widespread use of AI across sectors. Today, it would 
be difficult, if not impossible, to find a single business or 
single industry that has not or will not soon be impacted by 
advancements in this technology.
    The U.S. Government is no exception. Here in the 
legislative branch, for instance, the Library of Congress is 
using AI to create standardized records from e-books, extract 
data from historic documents, and help blind and print-disabled 
patrons access Library resources.
    At the same time the Copyright Office is grappling with how 
it should consider registration applications for AI-produced 
works, the Government Publishing Office is using AI to 
transcribe meetings and is exploring ways to make proofreaders 
more efficient, public information more accessible, and data 
more secure.
    The House Chief Administrative Office is using AI to 
supplement its help-desk services and is working with Members' 
offices to use the technology to deconflict schedules and 
assist in constituent correspondence--things that we are all 
intimately involved in.
    The Government Accountability Office has established a lab 
to design and implement new AI technologies as well as an 
internal working group to analyze AI governance issues.
    All these are critically important and all very exciting.
    AI can simplify complex tasks, provide insights into data, 
build capacity, improve workflows, and more. For all the 
exciting opportunities that AI presents, we must also be 
cognizant of its threats and its risks.
    We have seen some of the dangers associated with AI in the 
headlines recently. Last week, an unknown party attempted to 
confuse and disenfranchise New Hampshire voters using AI-
generated robocalls imitating the voice of President Biden in 
advance of the primary election.
    Recognizing the grave threat that AI may pose to our 
elections, I asked the Attorney General of the United States to 
immediately investigate this attempt at election subversion and 
to ward off future actors who would attempt to use generative 
AI to undermine our Nation's elections.
    Deepfake pornography, which makes up 96 percent of all 
deepfakes online, almost exclusively targets women and is 
becoming increasingly pervasive. Astoundingly, it is not a 
Federal crime, although I have introduced a bipartisan bill 
with Congressman Kean, the Preventing Deepfakes of Intimate 
Images Act, which would change that. We have seen elements of 
that with Taylor Swift's situation in the past week.
    Threats and risks exist here in the legislative branch too, 
and we need to be mindful of them as we establish what will be 
the operating culture for congressional AI use for years to 
come. What are the implications for the institutional workforce 
here? How do we control for bias and other data quality issues 
that affect the trustworthiness of these systems? There are 
certainly questions that are important to sort out as we work 
through this.
    To that end, I am grateful that the Committee and our 
institutional partners have utilized the administration's 
National Institute of Standards and Technology AI framework and 
related executive orders as models for the legislative branch 
as it relates to AI governance policies. With these as a guide, 
Congress, I think, will be better equipped to minimize risk and 
adopt this technology responsibly and ethically.
    It has great promise, great risks, but I think the fact 
that we are gathered here--and, Mr. Chairman, thank you again, 
to you and your staff, for bringing us together and bringing 
all the witnesses to testify on this incredibly important 
issue.
    With that, I yield back. Thank you.

    Chairman Steil. The gentleman yields back.
    Without objection, all other Members' opening statements 
will be made part of the hearing record if they are submitted 
to the Committee clerk by 5 p.m. today.
    Today we have one witness panel. We welcome Hon. Hugh 
Halpern, Ms. Judith Conklin, Mr. John Clocker, and Mr. Taka 
Ariga.
    We appreciate you being with us today, and I look forward 
to all of your testimony.
    Pursuant to paragraph B of Committee Rule 6, the witnesses 
will please stand and raise their right hands.
    [Witnesses sworn.]
    Chairman Steil. Let the record show the witnesses all 
answered in the affirmative and may be seated.
    I will now proceed to introduce our panel of witnesses.
    Our first witness, Honorable Hugh Halpern, is the U.S. 
Government Publishing Office Director, the agency's chief 
executive officer. President Donald Trump nominated Mr. Halpern 
to be GPO Director, and the U.S. Senate confirmed him in 2019.
    Our next witness, Ms. Judith Conklin, is the Chief 
Information Officer for the Library of Congress. She serves as 
the primary advisor to the Librarian of Congress on all 
technology matters and is a member of the Library's Executive 
Committee. She chairs the legislative branch CIO Council and is 
the Library's senior agency official for records management.
    Our next witness, Mr. John Clocker, serves as Deputy Chief 
Administrative Officer for the House of Representatives. Mr. 
Clocker has over three decades of legislative-branch 
experience, which touches all aspects of the administrative, 
financial, and technical operations of the House.
    Our next witness, Mr. Taka Ariga, is the Government 
Accountability Office's Chief Data Scientist and Director of 
the Innovation Lab within its Science, Technology Assessment, 
and Analytics team. He has helped GAO develop and implement 
advanced analytical capabilities for its auditing practices.
    We appreciate each of you being here today, and we look 
forward to your testimony.
    Let me remind the witnesses that we have read your written 
statements and they will appear in full in the hearing record. 
Under Committee Rule 9, you are to limit your oral presentation 
to a brief summary of your written statements.
    I will now recognize Mr. Hugh Halpern for 5 minutes.

  STATEMENTS OF HON. HUGH HALPERN, DIRECTOR, U.S. GOVERNMENT 
 PUBLISHING OFFICE; JUDITH CONKLIN, CHIEF INFORMATION OFFICER, 
LIBRARY OF CONGRESS; JOHN CLOCKER, DEPUTY CHIEF ADMINISTRATIVE 
 OFFICER, U.S. HOUSE OF REPRESENTATIVES; AND TAKA ARIGA, CHIEF 
     DATA SCIENTIST AND DIRECTOR, INNOVATION LAB, GOVERNMENT 
                     ACCOUNTABILITY OFFICE

                   STATEMENT OF HUGH HALPERN

    Mr. Halpern. Thank you, Mr. Chairman, Ranking Member 
Morelle, and Members of the Committee. I am pleased to appear 
here before you today to share some potential uses of AI and 
related technologies at GPO.
    GPO is fundamentally a manufacturing operation. We publish, 
produce, and maintain materials for all three branches of 
Government. Our 1,600 craftspeople and professionals produce 
virtually all of Congress's documents, along with numerous 
other publications, and manufacture secure-credential products 
like the U.S. passport.
    We also provide digital information, either through our own 
trusted digital repository, GovInfo.gov, or by serving data to 
our partners like the Library of Congress, where they use that 
data on Congress.gov.
    No matter what you call it--artificial intelligence, 
machine learning, or a large language model--GPO's operations 
are just as susceptible to disruption as any commercial firm's. 
That is not necessarily a bad thing.
    My written testimony describes GPO's policy approach to 
this new generation of tools, so today I am going to focus on 
three potential applications for AI and related technologies in 
our day-to-day operations.
    First, we believe these tools can improve our quality-
assurance process by automatically recognizing defects that a 
human inspector might miss.
    We already use a rudimentary form of this technology in the 
production of the current version of the U.S. passport. GPO 
uses equipment that optically scans the pages that will become 
the identity page in a personalized passport. This equipment 
looks at each strip of three pages for variances that exceed 
the material's specifications and rejects pages that do not 
conform to the standard.
    AI technology has the potential to further refine this 
review, allowing machines to learn what constitutes a natural 
variation and what does not. This has the potential to 
reduce defect rates, lower waste, and free up our quality-
assurance team to focus on solving more difficult quality 
problems as they arise.
    I also expect future printing presses acquired by GPO to 
incorporate similar technology.
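The contrast Mr. Halpern draws, between a fixed tolerance taken from the material specification and a learned sense of natural variation, can be sketched as follows. This is a minimal illustration only: GPO's inspection equipment and specifications are not public, so the measurements, tolerance value, and three-sigma rule here are invented for the example.

```python
from statistics import mean, stdev

# Hypothetical fixed tolerance from a material spec (invented value).
SPEC_TOLERANCE = 0.05

def fixed_spec_reject(variance: float) -> bool:
    """Today's approach: reject any page whose measured variance
    exceeds a hard threshold from the specification."""
    return variance > SPEC_TOLERANCE

def learned_reject(variance: float, good_samples: list[float],
                   k: float = 3.0) -> bool:
    """A learned tolerance: estimate the natural variation of
    known-good pages and flag values more than k standard
    deviations above their mean."""
    mu, sigma = mean(good_samples), stdev(good_samples)
    return variance > mu + k * sigma

# Variances observed on known-good identity pages (invented data).
good = [0.010, 0.012, 0.011, 0.013, 0.009]

# A page measuring 0.03 passes the fixed spec (0.03 < 0.05) but is
# far outside the learned band of natural variation.
fixed_spec_reject(0.03)   # False: within spec
learned_reject(0.03, good)  # True: anomalous relative to good pages
```

The point of the sketch is that a model fitted to known-good output can catch subtle defects a single spec-wide threshold misses, which is the refinement the testimony describes.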
    Second, we see tremendous potential for supplementing our 
proofreading team. Proofreaders are very difficult to hire, and 
we need to free them from making routine, repetitive 
corrections and allow them to focus on more subtle issues that 
require a human being to interpret.
    One example is capitalization. GPO style says that we 
capitalize the ``S'' in the word ``State'' when referring to a 
political subdivision in the United States. Currently, we use 
scripts that perform global search-and-replace functions on 
documents to correct our most common errors. Those scripts are 
blunt instruments. They cannot tell the difference between the 
State of Wisconsin and a ``New York State of mind.''
    AI holds the promise of tools that understand context and 
know when text refers to one kind of ``state'' or the other. 
That will cut down on the need for our proofreaders to review 
and correct material that has already been run through our 
automated tools, freeing them up to focus on more difficult 
contextual issues.
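The difference between the blunt search-and-replace scripts and a context-aware correction can be sketched as follows. This is a toy heuristic, not GPO's actual tooling: the abridged State list and the ``state of <name>`` pattern are invented for illustration.

```python
import re

# Abridged, illustrative list; a real tool would use all 50 States.
US_STATES = {"Wisconsin", "New York", "Alabama", "Ohio"}

def blunt_fix(text: str) -> str:
    """The global search-and-replace script: every 'state'
    becomes 'State', regardless of context."""
    return text.replace("state", "State")

def context_fix(text: str) -> str:
    """Capitalize 'state' only when 'state of <name>' actually
    names a U.S. State (a crude stand-in for the contextual
    understanding an AI tool would provide)."""
    def repl(match: re.Match) -> str:
        following = match.group(1)
        if any(following.startswith(name) for name in US_STATES):
            return "State of " + following
        return match.group(0)
    # Capture the word or two after "state of" and inspect them.
    return re.sub(r"state of (\w+(?: \w+)?)", repl, text)

blunt_fix("a New York state of mind")           # wrongly capitalizes
context_fix("the great state of Wisconsin")     # capitalizes correctly
context_fix("a New York state of mind")         # leaves it alone
```

Even this crude heuristic shows why context matters: the blunt script turns ``a New York state of mind`` into ``a New York State of mind``, while the context check leaves it untouched and still fixes ``state of Wisconsin``.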
    My final example comes from GPO's public information 
mission. We have had great success making congressionally 
mandated reports publicly available, with nearly 200 now on 
GovInfo.
    Most of those reports come to us as PDFs. While that is a 
good format to show how the printed document looks, it is not 
always the best format for viewing on a phone or tablet, or for 
those who may have vision or other impairments. While GPO would 
like to get these reports in more flexible formats like XML, 
the agencies offering the reports are not always equipped to 
supply them that way.
    AI technologies hold the promise of allowing us to extract 
the information from a PDF, understand the document structure, 
and produce an alternative view that works on different kinds 
of devices, all without manual, time-consuming work from our 
team.
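The structure recovery described, turning flat text extracted from a PDF into an alternative view for other devices, can be illustrated with a deliberately simple sketch. The all-caps-heading heuristic below is invented for the example; a production pipeline of the kind the testimony envisions would rely on layout-aware AI models rather than this rule.

```python
def to_html(extracted_lines: list[str]) -> str:
    """Toy structure recovery: treat all-caps lines as headings,
    blank lines as paragraph breaks, everything else as body text,
    and emit a simple HTML alternative view."""
    parts: list[str] = []
    para: list[str] = []

    def flush() -> None:
        # Emit any accumulated body lines as one paragraph.
        if para:
            parts.append("<p>" + " ".join(para) + "</p>")
            para.clear()

    for line in extracted_lines:
        s = line.strip()
        if not s:
            flush()
        elif s.isupper():
            flush()
            parts.append("<h2>" + s + "</h2>")
        else:
            para.append(s)
    flush()
    return "\n".join(parts)

# Flat lines as they might come out of a PDF text extractor
# (invented sample content):
to_html(["EXECUTIVE SUMMARY", "", "This report was",
         "mandated by Congress."])
```

The sketch only recovers headings and paragraphs, but the same idea, inferring document structure from extracted content and re-rendering it, is what would let a PDF-only report be reflowed for phones, tablets, and screen readers.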
    These are just three examples where we see applications in 
GPO's operations and are considering pilots in the future. All 
of these are intended to act as a force multiplier for our 
team, allowing our folks to be more productive and deliver a 
higher value for our customers and taxpayers.
    Mr. Chairman, Ranking Member Morelle, thank you for the 
opportunity to testify before the Committee today. I look 
forward to any questions you may have.
    [The prepared statement of Mr. Halpern follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Steil. Thank you very much, Mr. Halpern.
    Mr. Morelle. May I--may I just interrupt for 1 second?
    As a Representative of the Empire State, script 
notwithstanding, ``New York State of Mind'' should always be 
capitalized. I want to make that point.
    Chairman Steil. I will hold back on my Wisconsin comment 
there.
    Thank you very much, Mr. Halpern.
    We will now recognize Ms. Judith Conklin for 5 minutes.

                  STATEMENT OF JUDITH CONKLIN

    Ms. Conklin. Chairman Steil, Ranking Member Morelle, and 
Members of the Committee, thank you for this opportunity to 
appear on behalf of the Library of Congress to share our work 
with artificial intelligence.
    As the Library's CIO, I work directly for the Librarian of 
Congress, Dr. Carla Hayden. We agree technology is baked into 
all facets of the Library of Congress. AI has recently become a 
focus in Government and industry, but the Library began working 
with AI and machine-learning technologies more than a decade 
ago.
    The Library of Congress is fortunate to possess data 
necessary for thoughtful AI adoption. We serve Congress and the 
American people with a universal and enduring source of 
knowledge and creativity. We steward vast collections of 
cultural heritage materials.
    We also function as the research arm of Congress through 
the Congressional Research Service and the Law Library of 
Congress. We are home to the U.S. Copyright Office and have an 
abundance of copyright deposits, rich historical records and 
data, and legislative and policy materials in our care.
    We trace the beginning of our work with AI to optical 
character recognition more than 10 years ago. We saw the power 
of this technology to enhance the ability to search our 
collections.
    The most compelling example of this early AI in action can 
be seen in our ``Chronicling America'' initiative, where we 
have empowered users to search more than 20 million digitized 
historic newspaper pages.
    We have seen how aspects of AI can help researchers, 
genealogists, historians, educators, and creators to cast a 
wider net and journey through our vast collections faster than 
ever before.
    The Library's AI experiments have a consistent finding: AI, 
when properly understood, empowers, but it cannot replace 
humans.
    The role of people at the Library of Congress informs how 
we use AI. Every day, hundreds of cataloguers create detailed 
records for material in hundreds of languages to help 
researchers across the globe navigate our collections. 
Legislative analysts help Congress and the American people 
understand complex issues and bill language through historic 
and legal lenses. Our colleagues in the U.S. Copyright Office 
promote creativity and support research and commerce by 
providing access to records of over a century of registered 
copyrighted works and recordations.
    For over 5 years, the Library has engaged with the public 
to experiment with our collections data. As much as possible, 
we make our curated data sets available to the public.
    CRS has embarked on several AI activities. Currently, the 
Office of the Chief Information Officer (OCIO) and CRS are 
conducting a controlled experiment to determine AI 
approaches that could assist with the creation of high-quality 
and trustworthy bill summaries. OCIO is collaborating with the 
U.S. Copyright Office to discern ways AI can help make 
digitized historical copyright records more discoverable.
    In October 2023, the Library released its new strategic 
plan, ``A Library for All.'' The Library once had a separate 
digital strategy. As a sign of our growing digital maturity, 
our new strategic plan embeds digital strategy throughout.
    We also engage in external conversations. The Library 
monitors AI executive-branch actions, participates in the GSA 
AI Community of Practice, and engages with NIST regarding AI 
risks. We are a leading member of the International AI for 
Libraries, Archives, and Museums, AI4LAM.
    We have also provided reports to this Committee on our 
progress with both AI experiments and governance. We 
established an agency-wide AI working group co-led by the 
Principal Deputy Librarian of Congress and myself. Early last 
year, CRS established its internal AI working group to
examine developments, policy implications, and AI potential.
    In addition to these activities, I serve as chair of the 
legislative branch CIO Council. Members have shared their AI 
initiatives and discussed the technical aspects of AI. We will 
establish an AI working group to further explore AI 
opportunities and challenges. This will enable us to inform 
Congress on our technological AI capabilities and properly 
safeguard our data.
    In closing, for more than two centuries, the Library of 
Congress has embraced the Herculean charge of keeping pace with 
human knowledge. Computers can perform remarkable tasks with 
data, but we will always need people at the center of this work 
if we are to truly remain a source of authentic, enduring 
knowledge and creativity.
    AI is empowering the Library of Congress in remarkable 
ways. More will be accomplished with this technology in the 
future. None of this would be possible without your continued 
support. Thank you for this opportunity to share our work 
today.
    [The prepared statement of Ms. Conklin follows:]
   [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Steil. Thank you very much, Ms. Conklin.
    Mr. Clocker, you are now recognized for 5 minutes.

                   STATEMENT OF JOHN CLOCKER

    Mr. Clocker. Thank you, Chairman Steil, Ranking Member 
Morelle, and Members of the Committee, for holding today's 
hearing on AI integration--specifically, emerging generative 
AI.
    With me here today, representing the CIO's Innovation 
Division, is Senior Director for Innovation Stephen Dwyer. Also 
with me today, representing our cybersecurity teams, is our 
Deputy Chief Information Security Officer, Addie Adeniji. These 
two and their teams are at the forefront of this endeavor for 
the CAO, making sure the House strikes the right balance 
between innovation and security.
    Over the past few years, AI advancements have gained 
significant momentum and prominence with the advent of gen AI, 
large language models capable of analyzing massive data sets
and producing original, human-like content.
    This technology has transformative potential for House 
operations. For Member offices faced with an ever-increasing 
volume of constituent engagement and oversight 
responsibilities, AI has the potential to augment staff 
workload capacity. It can also help the House, as an 
institution, improve business efficiencies.
    To support its integration, the CAO, with the support of 
this Committee, implemented a disciplined, methodical approach. 
We have conducted legal and security reviews of available 
products. As a result of our security and legal reviews, the 
Committee has initially approved ChatGPT Plus for limited use 
in the House environment.
    We have established a House-wide advisory group to collect, 
analyze, and share information about how Member, Committee, and 
leadership offices can use this technology. Last spring, our 
House Digital Services team established the AI advisory group 
because it is important we understand these tools' capabilities 
and also the tool limitations within the House environment.
    The advisory group is conducting its case studies in two 
phases. The first phase included over 200 House staffers, 
representing approximately 150 Member, Committee, and 
leadership offices.
    Initial feedback has been very positive. Offices have 
touted these tools as great for proofreading; as very effective 
for producing first drafts of memos, press releases, and 
letters; and as useful for summarizing reports, transcripts, 
and large data sets.
    However, they have also experienced limitations. The tools 
are not quite ready for drafting legislation or producing legal 
documents. The tools do not always understand the particular 
context of social media. The tools today do not adequately 
capture a Member's voice and style.
    The advisory group's second phase will focus on 
experimenting within institutional offices, supporting such 
functions as quality assurance or summarizing complex 
procurement responses and much more.
    We have conducted a governance assessment to identify 
needed improvements.
    This is all very exciting, but we need to be careful. The 
House will face elevated cyber risks. Our Nation's adversaries 
and cyber criminals will use these tools to try to harm the 
House.
    Because of this, we must evolve and be increasingly 
vigilant about the AI tools and websites we access. Just 
because an employee can access a site through their House 
device does not mean it is safe.
    We need to develop AI-specific policies and processes. We 
need to develop guidance and training opportunities for House 
staff. When ready, we need to provide expanded access to House-
validated AI tools.
    Working with this Committee, the CIO will develop a new AI-
specific policy. It will also determine how our security 
protocols need to be expanded. There will be costs associated 
with this expansion.
    We plan to establish guidance and training opportunities so 
offices can correctly optimize use of AI tools. We also 
recommend offices adopt internal AI policies customized to each 
Member's preference and risk tolerance for the use of these 
tools. Finally, when ready, the House needs to expand access to 
secure and validated AI tools.
    If we execute these correctly and deliver validated and 
safe tools, we believe these tools will significantly improve 
our capacity to serve Members and the American people.
    Thank you.
    [The prepared statement of Mr. Clocker follows:]
   [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Steil. Thank you, Mr. Clocker.
    Mr. Ariga, you are now recognized for 5 minutes.

                    STATEMENT OF TAKA ARIGA

    Mr. Ariga. Chairman Steil, Ranking Member Morelle, and 
distinguished Members of the Committee, thank you for today's 
hearing on AI innovations within the legislative branch.
    As I previously testified before Congress, we are living in 
an algorithmic renaissance where the confluence of data, cloud 
computing, and mathematics is allowing us to do things that 
were once in the realm of fiction only a few years ago. Today, 
we can use common descriptions to create images or functioning 
software code and extrapolate existing research into exciting
scientific findings.
    Yet these incredible advances are offset by real impacts on 
jobs, privacy, and equity. That is why GAO established the AI 
Accountability Framework in 2021 to serve as a blueprint for 
responsible implementations of AI. The framework is a beacon 
for the Oversight Committee, one from which a growing body of 
GAO's AI work follows.
    We are heartened to see recent executive-agency action 
toward similar accountability goals and strongly believe that 
robust oversight is essential as AI technologies continue to 
leap forward at a remarkable pace.
    An integral part of strengthening GAO's AI capacity is 
developing our own hands-on technical know-how. This allows us 
to separate hype from reality, but, more importantly, allows us 
to grasp nuanced complexities of machine-learning techniques so 
that we can design meaningful AI solutions for the benefit of 
GAO.
    I am proud to lead the data scientists and technologists 
working on groundbreaking AI projects within GAO's Innovation 
Lab. The eight described in our AI use-case inventory exemplify 
GAO's boundary-pushing ingenuity. The value proposition is 
simple: GAO's effective use of AI means we can better serve 
Congress and the American people.
    We recently crossed an important milestone with our 
successful deployment of a large language model. This is 
allowing us to benefit from generative AI on trusted data while 
meeting our core values of accountability, integrity, and 
reliability.
    We are hard at work to make sure that GAO's reports and 
recommendations can be summarized effortlessly on any 
particular topic; GAO leadership can improve interpretation of 
feedback from our annual employee experience survey to ensure 
GAO remains the best midsize agency to work for; and GAO 
employees can ask routine questions 24/7 on operational 
policies or IT-related issues with instantaneous responses.
    While generative AI has garnered recent attention, we are 
also exploring internal use of more general AI. One prototype 
applies neural networks to automate copyediting functions based 
on GAO's extensive style guide. This means that our analysts 
can focus more time on refining clarity of GAO reports.
    We are also looking to embed computer vision models using 
extended reality glasses to improve site visit collaboration 
while reducing safety risks. Through strong internal governance 
and following the principles of our AI Accountability 
Framework, we are using AI to amplify critical mission and 
operational functions with a sense of urgency.
    AI is not a technological fad. That is why GAO continues to 
carry out vigorous oversight and foresight work on AI. Just 
last month, we reported on the current state of AI 
implementations across Federal agencies. We also issued several 
technology assessments, including one on AI modeling for 
natural disasters, and have large-scale efforts focused on 
generative AI underway. Since 2018, GAO has issued nearly 250 
products on AI. Today, we have 20 ongoing AI engagements, with 
many more planned.
    This volume of work means GAO's capacity-building on AI 
must be broad and holistic. We are strategically using direct 
hiring authority to expand a cadre of specialists with hard-to-
find skills. We are scaling digital literacy and technical 
training opportunities. We are modernizing our compute 
infrastructure in the cloud to support the next generation of 
AI development. We are keeping close tabs on a variety of AI 
issues across a global network of academic, industry, policy, 
and oversight entities.
    GAO is committed to credibly embracing the promises of AI while
shedding light on its adverse impacts. In short, we are 
applying human intelligence to achieve sound use of artificial 
intelligence.
    As a watchdog, GAO remains a steadfast partner to Congress 
on matters related to AI and other emerging technologies. Our 
data scientists, technologists, engineers, and cybersecurity 
professionals are always available to advise both chambers and 
provide timely technical assistance.
    We appreciate this Committee's continued support, and I am 
happy to answer any questions you may have.
    [The prepared statement of Mr. Ariga follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Steil. Thank you, Mr. Ariga.
    Thank you all for your testimony today.
    I will begin questions today, followed by the Ranking 
Member. We will then alternate between the parties.
    I would now recognize myself for the purpose of questioning 
our witnesses.
    I will let everybody get set here.
    Alright. Let us just start with you, if I can, Mr. Ariga. 
In particular, your agency, the GAO, reported that the Federal 
Government made $247 billion in improper payments, including 
roughly $200 billion in overpayments, last year. I know the 
Innovation Lab has been working with the executive branch to 
strengthen payment integrity using technologies such as AI. I 
think it could be a real game-changer here for congressional 
oversight.
    In your submitted statement, you mentioned using AI to 
organize large volumes of text, like public comments from 
regulation.gov. Is that the same system or something that could 
help identify some of these improper payments? Could you give 
us a little color into that?
    Mr. Ariga. Thank you, Chairman.
    Certainly, improper payment is a significant concern for 
the Comptroller General. We currently have a project looking at 
something similar across the Federal single audits to identify 
persistent and systemic issues that may indicate instances of 
improper payments. We are sort of extrapolating that to say 
what might be the future likelihood of those systemic issues 
continuing to persist.
    I think this is something that--as long as we can get the 
right data governance structure in place--holds tremendous 
promise for us.
    Chairman Steil. Thank you very much. I appreciate the work 
you are doing in that space.
    Let me jump to you, Mr. Clocker, if I can. In your 
submitted testimony, you had mentioned that the House has 
approximately 160 approved cloud service providers in use today 
throughout our Member and Committee offices. That is a lot of 
different pieces of software, companies for CAO to manage in 
support of this institution.
    We are working to build these tools to make sure that this 
software is moving us in the right direction. How do you plan 
to evaluate providers as it relates to AI for both legal and 
cybersecurity concerns, so that we avoid a rush to roll out but 
are also leaning into the advantages they provide?
    Mr. Clocker. That is a great question. We know there is 
going to be a transition time where we are using our existing 
risk-management framework to evaluate products. We think that 
is a useful approach right now, but it is going to need to 
evolve and evolve very quickly. We are going to have to have 
AI-specific frameworks, similar to this framework, and we
are going to have AI-specific protocols and procedures to 
evaluate these products.
    We are already talking to the vendors about how their 
products handle House data, protect House data, and how those 
products use AI in various use cases.
    We are developing the questions that we need to ask these 
vendors. We do not have all the answers to those questions yet, 
so it is an ongoing process.
    Chairman Steil. Do you think you have the human capital to 
be able to make that evaluation of the vendors?
    Mr. Clocker. I think that is a great question, right? We 
have got--you mentioned the 160 products. These are cloud 
products we have authorized for use by Members and Committees 
and leadership. Right now, we have an annual process to review 
these products. When we do review those products, we review the 
quality of the firm. Are they a mature firm providing 
technology? That is a great foundation, but we are going to 
have to add some resources to do this correctly, or we are 
going to have to cut down the number of products that we have 
authorized here in the House. It is a challenge.
    Chairman Steil. Thank----
    Mr. Clocker. And--yep.
    Chairman Steil. No, finish your thought.
    Mr. Clocker. I think the final point on that is, you know, 
I just--please be increasingly vigilant about the products you 
use in your office and the data you use for it during this 
transition.
    Chairman Steil. Thank you very much. I think this is going 
to be an ongoing dialog as we work to continue to evaluate the 
technological products that are in front of us, to make sure 
that we are leveraging the benefits but also managing any 
downside risk.
    Let me shift to you, if I can, Ms. Conklin. In your written 
testimony, you talked about improving discoverability as it 
relates to supporting the U.S. Copyright Office. I think that 
is an area where we have great potential here for a use case in 
AI.
    Can you walk through a little bit more as it relates to the 
Copyright Office registration records and how AI could be 
leveraged in that space?
    Ms. Conklin. Thank you for the question. As I mentioned, 
we do have a significant amount of historical data that 
includes historical copyright records, and we are currently 
performing an experiment on the records----
    Chairman Steil. Could I just ask you to just double-check 
your mike? I can hear you, but I know those listening at home 
might not be able to.
    Ms. Conklin. Thank you.
    We have a current experiment with the Copyright Office on 
their historical records. Within the agency, we have a 
significant amount of historical data, and we are currently 
experimenting with the Copyright Office's historical data. That 
will make the digitized records that exist today more 
discoverable.
    The difference between digitizing the records and making 
them discoverable via AI is that a machine can search much 
faster using AI technologies. Our hope is that researchers can 
use that data for their research.
    Chairman Steil. Thank you very much.
    Thank you all for your testimony today.
    I will now recognize Mr. Morelle for 5 minutes.
    Mr. Morelle. Thank you.
    Again, thank you all for your testimony and your good work.
    I wanted to just step back a little bit. I think the 
benefit of AI to Congress--or any other institution, but 
particularly as it relates to Congress--lies in how it 
contributes to our system of democracy, which ultimately relies 
on a trustworthy data governance strategy.
    It seems to me that the task cuts two ways. Congress 
generates as well as collects data. If data governance is to 
succeed in building trust--and I think we are really at a 
critical juncture in history now--data has to be accessible, 
transparent, and representative.
    I just wondered if you could each step back and just give 
me a little snippet of how you think we can appropriately use 
artificial intelligence to increase trust in the institution of 
Congress and what concerns we ought to have relative to 
increasing mistrust or distrust in the institution. Sort of 
broadly speaking, and in your mission, how you think about 
that.
    I will put the burden on you first, but--Mr. Halpern will 
have it easiest, because he will have had the benefit of having 
heard each one of you. I would just be curious, big picture, 
how you think about that.
    Mr. Ariga. Thank you, Ranking Member Morelle. I absolutely 
agree; data governance is crucial, making sure that the data we 
use to train and run AI systems are complete, accurate, timely. 
GAO has a lot of experience dealing with data-quality 
challenges historically.
    What I also think is important is whether users of AI 
systems can interpret information in a probabilistic way. AI, 
by definition, gives a sort of value, a confidence value, on 
what the outcome might look like. What that means is, it is 
never 100-percent certain. Do the users of AI understand how to 
interpret that information, then act upon that information?
    I think digital literacy is an important attribute of a 
successful implementation and usage of AI.
    Mr. Morelle. I am sorry to interrupt, but I think you make 
a really important point. This is about probabilities. People 
who are using it should not assume that the data is always 100 
percent correct, because it is clearly based on probabilities 
derived from huge data sets. Is that correct?
    Mr. Ariga. That is right.
    Mr. Morelle. Yes.
    Sorry. Mr. Clocker?
    Mr. Clocker. It is a great question. It is one we are 
already thinking a lot about.
    I think, for the CAO, we do need to improve our data 
governance, right? We do have good controls around our 
procurement data, our H.R. data, our financial data, but there 
are other types of data we are going to have to look at if 
they start being ingested or analyzed by AI.
    I think improving trust--I think the Committee has already 
taken a lead on this by being transparent about how we are 
using these tools in the House and, you know, publishing use 
cases and being honest when we are using AI to analyze data or 
produce data.
    Then a final thought: I really think we need to focus on 
always reminding folks where the authoritative sources of data 
are, right? They are on GovInfo.gov, they are on Congress.gov, 
they are on House.gov.
    Mr. Morelle. Ms. Conklin?
    Ms. Conklin. Thank you.
    The Library of Congress established an agency data 
management initiative to take a look at all of our data, not 
just our collections data or our copyrighted data, but the 
financial data--all aspects.
    From an AI perspective, we believe that what I call 
``humans in the loop'' are integral in determining what we can 
do with our data in AI experiments and AI production. We need 
the humans. The experts at the Library of Congress need to 
train the tools prior to AI utilization. Then, after the AI 
output, humans are needed to determine the quality of that 
data--from an authoritative, nonpartisan perspective for 
congressional data--and ensure that it is accurate.
    Humans are still required. We have an agency-wide AI 
working group, as I stated in my oral testimony, and we are 
looking at all aspects of that. We have a very robust AI 
experimentation process, and we follow it very closely.
    Mr. Morelle. Mr. Chairman, if I may ask Mr. Halpern----
    Chairman Steil. Absolutely. Please, go ahead.
    Mr. Halpern. Thank you, Mr. Morelle.
    I actually want to take a step back, because your question 
dealt with how do we improve trust in the institution. One of 
the ways to do that is to give our fellow Americans and people 
around the world a view into how we make laws. That is really 
GPO's mission. Our vision is for an America informed.
    To the extent that AI tools can increase our speed and 
accuracy in delivering that kind of transparent, primary 
information to those folks trying to figure out how Congress is 
working, that is a benefit to the agency and, we think, 
ultimately, a benefit to Americans and citizens of the world.
    Mr. Morelle. Thank you all for your thoughtful answers. I 
am sure we will have many more opportunities to connect.
    I appreciate your indulgence. I yield back. Thank you, Mr. 
Chairman.
    Chairman Steil. The gentleman yields back.
    Mr. Loudermilk is recognized for 5 minutes.
    Mr. Loudermilk. Well, thank you, Mr. Chairman.
    I appreciate you having this hearing. This is a very 
important topic and very timely. You know, artificial 
intelligence is a topic that spans every jurisdiction of every 
Committee that we have, and it is something that we need to get 
in front of, not behind.
    I think we are still a long way from C-3PO chairing any of 
these Committees, but, in the view of a lot of people out 
there, they are concerned----
    Chairman Steil. Well, thank goodness, Mr. Loudermilk, that 
that is still the case.
    Mr. Loudermilk. I opened a can of worms there, didn't I?
    Seriously, Mr. Clocker, over the course of 2023, I 
understand there were many competing AI frameworks that 
agencies could have followed.
    Here in the House, we have a very complicated ecosystem, 
and we are in a situation where everyone is better off when we 
all row in the same direction, instead of proverbially rowing 
in different directions and ending up going in circles--which, 
in fact, we are very good at, you know, in Congress.
    What framework did the CAO decide to use?
    Mr. Clocker. Mr. Loudermilk, I am glad you recognize this 
is a very difficult environment sometimes.
    We are using the NIST framework for AI. We think it is a 
good framework. It is very thoughtful. It is general, and we 
are going to tailor it for the challenging environment here in 
the House of Representatives.
    When we do that, we are going to be working with this 
Committee, we are going to be working with the Sergeant at 
Arms, we are going to work with the clerk and the other 
officers of the House. Because they all use the same--when we 
adopt cyber policies for the House, we send them to the 
Committee, the Committee approves them, and all the officers 
and institutional offices follow them.
    Mr. Loudermilk. Okay. I may have missed it, but did you
just--did you say which framework that you are----
    Mr. Clocker. It is NIST. We will be using the NIST----
    Mr. Loudermilk. The NIST. Okay.
    Mr. Clocker. We will use NIST. We will tailor it for our 
environment; we just----
    Mr. Loudermilk. Right.
    Mr. Clocker [continuing]. do not take it as-is. Yes.
    Mr. Loudermilk. Okay.
    Mr. Clocker. Yes.
    Mr. Loudermilk. Is that the same framework--and you may be 
addressing this in your comment--is your intention for all 
House agencies to use that same framework then?
    Mr. Clocker. It is. It is. Yes.
    For example, when we adopt the framework and the Committee 
approves it--you know, it is a decision-making framework; it is 
how you evaluate risk. The Sergeant at Arms will use it, and, 
you know, at the end of the day, the Sergeant at Arms will make 
the risk assessment with the available facts the framework 
produces.
    Mr. Loudermilk. Oh, great. We are going to end up having 
robocops too, right? Yes. Boy, what Hollywood has done with 
this environment.
    Can you walk us through which function of the framework you 
have actually focused on first?
    Mr. Clocker. Our primary interest right now is governance, 
right? Getting the governance right, building that basic 
foundation of the policies.
    We are also focusing on mapping the existing functions of 
the House--you know, understanding where AI is being used or 
considered being used in the House today. That is the use-case 
inventory that has been developed so far.
    We are going to use that use-case inventory to understand 
how people intend to use these tools so we can, you know, 
implement this framework correctly to understand where the risk 
really is, whether it is using these tools with financial data, 
using these tools with procurement data, or constituent data at 
the right time.
    Mr. Loudermilk. Have you reached out to other House offices 
to make sure that we are going in the same direction, that 
everyone is prepared to adopt these policies, the framework 
that you are leading on?
    Mr. Clocker. It is an ongoing conversation. We have 
certainly talked to the clerk. We have certainly talked to 
Legislative Counsel. We have talked to over 200 staffers in 
Member, Committee, and leadership offices to understand how 
they are using those tools. These conversations need to 
continue.
    I think we will have a draft policy to the Committee 
probably in 2 or 3 months. And----
    Mr. Loudermilk. Okay.
    Mr. Clocker [continuing]. we would expect additional back-
and-forth as we develop the final policy.
    Mr. Loudermilk. Coming from an IT background, one thing 
about AI is, its output is only as good as the data that is 
used in the input. Is there any consideration of clarifying or 
making sure that everything that we use it for is coming from a 
good data set?
    Mr. Clocker. I think that is one of the most important 
elements to the use of AI. The answer to that is going to be 
different depending on the data set.
    I think we envision a future state where you will have AI 
tools in your office that will work on your data. It is your 
data, right? Today, the tools we have been testing so far, it 
is the entire internet, right? It is--we know there is bias, we 
know that is----
    Mr. Loudermilk. Yes.
    Mr. Clocker [continuing]. probably not where we want to be 
long term.
    Mr. Loudermilk. Surprising, there is bias on the internet. 
I thought----
    Mr. Clocker. Yes, exactly.
    Mr. Loudermilk [continuing]. everything there was true.
    Mr. Clocker. Yes.
    Mr. Loudermilk. Well, one thing--if you could use AI to 
figure out how to get the escalators in Rayburn to work, that 
would be an awesome achievement.
    With that, Mr. Chairman, I yield back.
    Chairman Steil. The gentleman yields back.
    Ms. Sewell is recognized for 5 minutes.
    Ms. Sewell. Thank you, Mr. Chairman.
    I want to thank all of our witnesses.
    I actually want to drill down on what this last 
conversation was about. We have spent a lot of time talking 
about the positive benefits of AI. I would like to drill down 
on some of the associated risks of AI--in particular, bias, 
ethics, and cybersecurity.
    Let us start with bias. One particularly difficult 
challenge of these tools is they inherit the biases of their 
source, which is often the internet at large. The internet is 
filled with biases across all spectrums of opinion, and these 
opinions are often presented as facts.
    In addition, many of the existing data sources are derived 
predominantly from White, Anglo-Saxon, mostly English-speaking 
American academics, and that leaves underrepresented 
communities of different races, genders, and languages without 
the same weight of consideration in the creation of these 
systems. The AI will then deliver what appear to be facts and 
data-based conclusions when, in fact, they are not.
    I know that time is limited, so I am going to ask this 
question of you, Mr. Halpern, with respect to the GPO and to 
you, Ms. Conklin, with respect to the Library of Congress.
    Mr. Halpern. It may be a simpler answer or a more complex 
answer to your question, but GPO does not create content. We 
deliver content. So----
    Ms. Sewell. You stand by the content that you deliver, 
right?
    Mr. Halpern. Which is created by our customers. We are 
authenticating the fact that this hearing, when it is 
published, came from Congress. We are not creating the material 
inside of that hearing.
    Ms. Sewell. Gotcha.
    Maybe my question is more for the Library of Congress. In 
particular, if you could also talk a little bit about the 
copyright and ethics concerns around copyright.
    The New York Times recently sued OpenAI for copyright 
infringement for using its content to train ChatGPT. Plagiarism 
is unacceptable; I know that, and we all know that. Are 
language, text, videos, and images generated by machines also 
allowable?
    Ms. Conklin. Thank you----
    Ms. Sewell. Bias and ethics.
    Ms. Conklin. I will cover bias first.
    Ms. Sewell. Okay.
    Ms. Conklin. Thank you.
    We have been including the issue of bias and ethics in our 
research and collaboration since 2018 when we began considering 
AI and its impacts. Our strategic plan--our new strategic plan 
for the agency has a pair of objectives that reflect the 
Library's commitment to providing trustworthy and authoritative 
data and building and enriching Library collections and content 
to serve Congress and America's communities.
    Examining and expanding our collections with our strong 
guardrails of data security and data provenance for AI are 
cornerstones of the Library of Congress. We adhere to the NIST 
risk framework to understand the spectrum of biases. And----
    Chairman Steil. Will--just to check your microphone. Again, 
in the room we can hear you, but I know online we might not.
    Ms. Conklin. I will put it closer.
    Chairman Steil. Thank you.
    Ms. Conklin. The Library agrees that bias is an important 
issue that needs to be addressed in AI. Transparency is 
imperative and often not part of the current AI commercial 
products----
    Ms. Sewell. I am down to the last minute, and----
    Ms. Conklin. Oh, Okay.
    Ms. Sewell [continuing]. I really do want to ask Mr. 
Clocker. I will allow you to--if you could submit something to 
the record with respect to that, that would be great.
    Ms. Conklin. Thank you.
    Ms. Sewell. Mr. Clocker, when it comes to information, we 
know that there is disinformation, misinformation. As keeper of 
the administration of Congress, you are deluged with 
information. How do you protect against cybersecurity
threats with respect to that information, whether that has 
personal identifiable information that you get through 
employment records or the like? Can you speak a little bit 
about the guardrails that you have in place to protect us 
against cybersecurity threats?
    Mr. Clocker. Certainly. This is obviously a very 
significant concern of ours as these tools become more and more 
prevalent.
    I can tell you, we have very strong controls and technology 
controls around our administrative data, whether it is H.R. 
data about Members or staff, whether it is financial data, 
procurement data. These areas are probably where we will be 
most careful and implement AI technologies later in the process 
as we develop it.
    I do think, you know, Members will be overwhelmed with data 
soon, because these tools are going to be very ubiquitous, 
producing, you know, more constituent engagement, which could 
be real content--but you will get more of it--and not all of 
it may be real, right? That is something we
are thinking about--you know, how can we work to give Members 
the tools to sift through potentially fake data.
    Ms. Sewell. Mr. Chairman, we should really dig a little 
deeper when it comes to some of the threats, especially 
cybersecurity.
    Thank you all for your testimony.
    Chairman Steil. The gentlewoman yields back.
    Mr. Griffith is recognized for 5 minutes.
    Mr. Griffith. Thank you, Mr. Chairman. I am going to play 
off of that, a slightly different tack, and rearrange the way I 
was going to ask my questions.
    Mr. Clocker, you were talking about data for Members. One 
of the things I think could be a real opportunity for us in AI 
is to do things like quickly and automatically have an AI 
create bill summaries. Getting hearing transcripts a lot 
faster, that can help us internally.
    Two of the other things I think would help the institution, 
both on the floor and Committee activity, would be a voice 
recording of all of the documents so that--a lot of people fly. 
They can read on the airplane if they want to. I drive home, 4 
hours each way, and then I drive all over my district. I listen 
to a lot of stuff in the car. If suddenly, you know, without 
too much trouble, I can get--and I know there are some apps I 
could purchase that would convert them, but you have got to 
take pictures of them and run them through. If we could get 
those things done quickly so that we could listen to reports, 
bills, amendments.
    Then last but not least, again, helping Members get through 
all the data, I believe AI could help us do, you know, almost 
instantaneously--not instant, but close to it--amendments on 
the fly on the floor that would really help us out in making 
the process go forward.
    Are you all looking at some of those things?
    Mr. Clocker. We are. I think what is interesting about all 
those use cases is that a lot of it is the work your
staff do today for you to prepare you or to provide a summary 
for you as you travel home----
    Mr. Griffith. Well, and what I do is----
    Mr. Clocker. Yes.
    Mr. Griffith [continuing]. I actually read the text of any 
bill that I think I am going to vote for, which is a lot of 
them--and then I get dissuaded as I read them. If I think I am 
going to vote for it, I read all the words. It sure would be a 
lot easier to listen to it.
    Then, if I have got a problem on page 14 or page 5,000--
Okay, they are not quite that bad, but sometimes in the 
thousands--I can call my staff and say, get me the text to read 
that specifically; you know, email me that particular section 
of the bill.
    Do you think you all could work on that?
    Mr. Clocker. I do, absolutely. A lot of those use cases you 
talk about are about summarizing public information----
    Mr. Griffith. Yes.
    Mr. Clocker [continuing]. right? I think we can move 
rapidly on those.
    When you talk about drafting amendments, I think that is a 
very different story, involving Leg Counsel. I can kind of tell 
you, they are concerned about the appropriate use of drafting 
legislative language----
    Mr. Griffith. Well, I think that is up for the Member to 
decide----
    Mr. Clocker. Sure.
    Mr. Griffith [continuing]. but it is a tool where, if you 
hear something and you want to make a quick----
    Mr. Clocker. Great.
    Mr. Griffith [continuing]. amendment--I mean, I can 
remember Don McEachin and I both agreed on something in 
Committee that needed to be changed, and everybody was like, 
``Well, let us get staff to do that.'' Don and I could have 
probably plugged that into AI and had it out in about 5 
minutes. We might have been able to do it on paper in about 10.
    Mr. Clocker. Yes.
    Mr. Griffith. They wanted to get it through and run it 
through the staff. I think we could be more efficient.
    Mr. Halpern wants to answer on this, I think.
    Mr. Halpern. Well, to facilitate a lot of these 
applications, you need good data underneath. That is one of the 
things that GPO tries to do, is, we will take text files from 
a lot of folks involved in the process here; we try and convert
that into good, machine-readable data, XML-based.
    The legislative-branch standard is based around United 
States Legislative Markup. That is our flavor of XML. The more 
documents we can deliver in that format, the broader the range 
of applications you will have. Whether it is reader 
applications, whether it is the ability to synthesize that 
data, the more of that that is in that machine-readable data 
format, the more flexibility we are going to have going 
forward.
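[The USLM pipeline Mr. Halpern describes can be sketched briefly. The XML fragment and element names below are simplified assumptions for illustration only; the actual USLM schema uses different, namespaced elements:]

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment loosely modeled on USLM-style markup; the real
# United States Legislative Markup schema differs in names and namespaces.
USLM_FRAGMENT = """
<bill>
  <longTitle>A bill to improve legislative data access.</longTitle>
  <section id="s1">
    <num>1.</num>
    <heading>Short title</heading>
    <text>This Act may be cited as the Example Act.</text>
  </section>
</bill>
"""

def section_headings(xml_text: str) -> list[str]:
    """Return the heading text of every <section> in the fragment."""
    root = ET.fromstring(xml_text)
    return [sec.findtext("heading", default="") for sec in root.iter("section")]

print(section_headings(USLM_FRAGMENT))  # -> ['Short title']
```

[Once documents are in a machine-readable format like this, reader applications, text-to-speech tools, and comparison services can all consume the same structured data.]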
    Mr. Griffith. I appreciate that.
    Mr. Clocker, I am going to switch back to what was 
originally my first question, but I was feeding off of Ms. 
Sewell's questions. Are you all exploring whether AI could be 
helpful in tracking utility costs or other operating costs on 
the Hill?
    As an example, I was told one time that everything in 
Rayburn is pretty much on the same system, and so we really do 
not know what costs are in this area or that area.
    Are you looking at ways that--for commercial operations, or 
if a particular office, even if it is my office, that uses more 
electricity, you know, by 15 or 20 percent more than other 
offices--so that we might be able to figure out ways to bring 
down our utility costs and our use of that energy or utilities?
    Mr. Clocker. What we are exploring is looking at public 
House data about how the House spends money and using the AI 
tools to analyze it and look for opportunities, whether it is 
certain products Members are buying where we need to be a 
little quicker about finding a better price.
    The Architect of the Capitol, you know, runs the facility, 
and I am sure they are thinking about those things around 
utility costs. If these tools are going to be appropriate for 
those use cases, you are going to have to think--you know, be 
careful about what data you are putting into it at this stage, 
if that makes sense.
    Mr. Griffith. We have some privacy concerns, too.
    Mr. Clocker. Yes.
    Mr. Griffith. My time is up, and so I will yield back.
    Chairman Steil. The gentleman yields back.
    Mr. Kilmer is recognized for 5 minutes.
    Mr. Kilmer. Thank you, Mr. Chairman.
    Thanks for being with us. I spent 4 years chairing the 
Modernization Committee, and now I am ranker on the 
Subcommittee. I want to thank each of you for your work in 
trying to help us get some of the recommendations of that 
Committee actually implemented. I want to use my time to ask 
about some of those.
    Ms. Conklin, I remain interested in the potential of 
artificial intelligence to help Members and their staff just do 
their jobs more efficiently, and the Select Committee made a 
handful of recommendations in that regard.
    One of them was No. 120, which states that Congress.gov 
should do a clearer accounting of Member contributions to 
legislation. Oftentimes we will introduce, you know, bills that 
may be singles or doubles, and they get put into a big bill, 
like the NDAA or something like that. Right now, if a Member's 
bill is part of the NDAA, there is not really a way to figure 
that out on Congress.gov.
    One of the recommendations was to make sure that Members, 
so that their constituents know it and our colleagues know it, 
actually sort of get credit on Congress.gov for when they have 
a bill part of a broader bill.
    Any status update on that, on the implementation of that 
recommendation? Do you need anything from this further--
anything further from this Committee to get that cooking?
    Ms. Conklin. Thank you. Thank you, Congressman Kilmer, for 
all the attention you have given to legislative branch 
technology.
    We continue to make progress on this recommendation. In 
March, we expect to deploy new bill summary workflow tools 
which use natural language processing. That will do two things: 
assign bills to legislative analysts at CRS based on subject 
and identify similar bills, so bill comparison.
    There are other bill relationship enhancement services 
related to Congress.gov that are being implemented in 
collaboration with our legislative branch data partners.
    Congress.gov is downstream of the legislative process, so 
it is downstream from the House and the Senate and makes 
congressional authoritative data available to the public. This 
information has been included in the June 2023 Congress.gov 
update study.
    We appreciate your recommendations. In response to your 
question about your support regarding this recommendation, the 
CRS analyst tool we plan to implement may require funding for 
additional licenses to expand access to Congress. We are 
implementing it within CRS for the analysts, but if we were to 
bring it to Congress for Congress to use, that would require 
additional funding.
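[As a toy illustration of the "identify similar bills" comparison Ms. Conklin mentions (not the Library's actual natural-language-processing tool), bill texts can be ranked by bag-of-words cosine similarity; the bill titles below are invented examples:]

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

bill_a = "a bill to modernize legislative branch data systems"
bill_b = "a bill to modernize house data systems"
bill_c = "a bill to designate a post office"

# The two modernization bills score as more similar than the unrelated one.
assert cosine_similarity(bill_a, bill_b) > cosine_similarity(bill_a, bill_c)
```

[A production system would use richer representations than raw word counts, but the ranking idea is the same.]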
    Mr. Kilmer. This notion specifically of--the notion of 
identifying when a Member's bill is part of a broader bill----
    Ms. Conklin. Yes.
    Mr. Kilmer [continuing]. that may be part of this updated 
tool.
    Ms. Conklin. Yes.
    Mr. Kilmer. Great.
    Let me ask about one other thing on that front. Mr. 
Clocker, you already spoke to the bill summaries. It sounds 
like that is work in progress and maybe, as a first draft, not 
too shabby.
    Have you looked at using AI to have a--you know, right now 
there is a related bills tab in Congress.gov. Has there been a 
contemplation of doing like a previously related bills tab 
where Members and staff and the public can look at the history 
of bills with a longer history, you know, when they first 
started, who was involved, how they changed over time, which 
can be helpful as Members evaluate legislation? Is that 
something that you have or could look into as you do this 
Congress.gov update?
    Ms. Conklin. It will be part of the tool. We are also 
running an experiment on CRS bill summaries to determine what 
is possible from a generative AI perspective, and that will 
help, depending on the results of the experiment. Our hope is 
that it will generate initial bill summaries with the items 
you are talking about.
    Mr. Kilmer. Being able to look at bills from a past 
Congress.
    Ms. Conklin. From a past Congress.
    Mr. Kilmer. Okay.
    Ms. Conklin. Well, and currently legislative analysts have 
that ability within CRS with a tool they have; but with what we 
are delivering in March, they will be able to do that further.
    Mr. Kilmer. Great.
    I have another question, but I think I only have 20 
seconds, so I will yield back. Thanks.
    Chairman Steil. The gentleman yields back.
    Dr. Murphy is recognized.
    Dr. Murphy. Thank you, Mr. Chairman.
    This has like opened up a huge Pandora's box, and we kind 
of--as we all know, a box of chocolates can give us a lot of 
good things and a lot of bad things too.
    I have been in medicine for now 35 years and I have been 
able, fortunately, to witness so many different transformative 
changes in medicine. I look at the first two decades, this, 
that, and the other stuff. The last 5 to 8 years in medicine 
has been an absolute explosion with new technology. We are 
really required in medicine to adapt every day for new 
technologies, new medicines, and all these other things for the 
improvement of our patients.
    The whole idea of disruptive behavior, if you will, is not 
foreign to me. I know it is for some, you know, governmental 
bureaucracies, and that is absolutely understandable.
    This is the major challenge I would see that you guys are 
facing, because this is something unlike anything we have ever 
seen before in medicine and, absolutely understandably, unlike 
anything that has happened in a Government bureaucracy.
    You know the old ``Terminator'' movies we are all now 
afraid of, where Skynet becomes aware; this is where the old 
``human in the loop'' idea comes in. I do not like to look at 
it that way. I like
to see the humans holding the loop, because we are probably at 
some point going to need AI that is over-watching the AI.
    Because, to Representative Sewell's comments, there are 
going to be biases in, because all this is doing is giving us 
information but then correlating it, synthesizing it, and 
pulling it down correctly just into a quick response. It is not 
anything that does queries or any type of analysis really of 
any type of import that a human would do.
    My main queries really are going to be about security 
concerns, because I will tell you--now being in politics for 
about 7 years, I was in the private sector for a very long
time. Where are we going with making sure that we are having 
watchdogs not only within the Government but outside the 
Government? Because I think sometimes we get a little bit 
cloistered in our thoughts as far as the Government. I would 
love us to have a private-public partnership with anything like 
this.
    I would be interested, Mr. Clocker or Ms. Conklin, if you 
have any comments about that.
    Mr. Clocker. Dr. Murphy, those are some very challenging 
questions that we are going to have to face. You mentioned in 
the area of medicine. Well, within the CAO, we have many 
professions. We have financial professionals. We have 
cybersecurity professionals. We have creative professionals. 
You know, the integration of those tools into those professions 
is going to have transformative effects.
    You are exactly right; these tools are nowhere ready to 
replace human judgment. The CAO will not use these tools to 
replace human judgment anytime soon, and I cannot imagine us 
doing that anytime soon.
    You are also absolutely right. You know, we need to talk to 
industry. We need to talk with executive branch partners. We 
need to talk with/see what other legislatures are doing and how 
are they dealing with these challenges. It is not just an 
inward-looking issue. We need to talk across various parts of 
America.
    Dr. Murphy. This is happening in every single segment of 
our society. I really honestly would love something like 
Deloitte to come in and actually start auditing our own 
Government agencies.
    Because if you take CBO, for example, I always thought it 
was the great, almighty judge of things but found it to be 
vastly inaccurate. Why couldn't we use these
private agencies, which do a really good job at some of these 
things, to do some of these things for us?
    Ms. Conklin, and maybe the same thing with Mr. Clocker, are 
we going to be able to use this--and when I say ``security,'' I 
am talking about not only security of data but about security 
of the House, being able to look at the--work with the DOD, 
work with other administrations to make sure that our Members 
are safe/secure in all the different attacks now. I think the 
Chairman was stating there has been a 300-percent increase in 
the number of threats against House Members. Anything like 
that?
    Mr. Clocker. From a cyber perspective, we see an increase 
in the size, scope, and sophistication of the cyber attacks on 
the House. We are responding. We are concerned about it. We 
are--we will need to add more resources. We will need to add 
more tools, and at some point we are going to need to add more 
funds to all of that.
    Dr. Murphy. Yes, that is the big problem, is we need more 
funds. Maybe AI can tell us how we cannot need more funds. This 
is going to be different.
    Just last question: Over the last 6 months, the NIST 
Framework has emerged as the benchmark for Federal agencies, 
private agencies. Quickly, do you guys have an AI governance 
plan in place, or is that something that is still being 
assimilated?
    Mr. Halpern. We have had an AI governance directive in 
place for about a month and a half. Our AI Governance Committee 
actually had its inaugural meeting yesterday, and it is based 
off of the NIST Framework. We are starting to move forward on 
that.
    Dr. Murphy. Who comprises that plan?
    Mr. Halpern. The Governance Committee. It is 
representatives of a variety of our business units. It is 
chaired by our chief information officer, but our chief 
technology officer is also part of it as well as 
representatives from our major business units at GPO.
    Dr. Murphy. Do you have--so you have private individuals 
coming in?
    Mr. Halpern. It is internal to us. We are consulting--GPO 
essentially runs as a business.
    Dr. Murphy. Right.
    Mr. Halpern. We are consulting all the time with folks from 
industry, because we need to adopt the latest tools to make 
sure that we remain competitive for our customers.
    Dr. Murphy. Great. Thank you.
    With that, I will yield back, Mr. Chairman.
    Chairman Steil. The gentleman yields back.
    Mrs. Torres is recognized for 5 minutes.
    Mrs. Torres. Thank you, Chairman Steil and Ranking Member 
Morelle, for convening us today on these very important issues.
    Most of all, thanks to everyone in the audience today for 
your interest in being here to listen to what AI can do for us 
in the future.
    Artificial intelligence is a powerful, transformative tool 
that can improve the efficiency, effectiveness, and 
accountability of Congress. However, AI could also bring 
unintended consequences. It opens the door to issues about 
authenticity and quality of work.
    As this Committee looks at AI in the legislative branch, we 
must diligently look at AI innovation and its impacts on our 
society. Is AI going to be used to write bills, to solve some 
of mankind's greatest problems? How will we continue to center 
the human experience on legislation? Is AI going to be used to 
automate and expand the ability of special interests to 
influence Members of Congress or our staff?
    I do hope that this hearing is the start of a much larger 
conversation, and I look forward to working with all of my 
colleagues on responsible and ethical AI innovation, respecting 
the rights and dignity of all Americans, and upholding the core 
values of our democracy.
    Let me give you an example, and my questions are going to 
be to Mr. Clocker. Congressional casework staff help 
constituents access critical Federal benefits that oftentimes 
are denied, and they need that human moral value in order to 
have some oversight over opinions on these cases. However, 
congressional offices can have huge casework loads, and it is 
overwhelming for our district staffs to deal with some of
that.
    How can AI help improve our staff--and I say ``improve,'' 
not replace, our staff--when dealing with casework issues?
    Mr. Clocker. That is a great question. We agree; it is not 
going to replace staff. Constituents want to talk to a human. 
They are not going to want to talk to a chatbot; we know that. 
We have not seen--you know, these chatbot technologies have 
been out there a long time. We have not seen those implemented 
by Member offices, because it just does not work for that 
relationship.
    Mrs. Torres. I wrote a letter recently asking this very 
question: How can AI be improved to help staff monitor their 
casework load, right, to help improve communication, responses 
to our constituents? That is what I am looking for.
    Mr. Clocker. Yes.
    Mrs. Torres. Assistance for the staff.
    Mr. Clocker. I think where we should focus on right now, we 
know the tools that we have today are very good at helping 
staff produce the first draft or to summarize information.
    I think the focus to help caseworkers or other staff, you 
know, handling constituent inquiries, just focus on the 
training, right? Here is what these tools are good for today, 
and here is how to safely integrate into your current operation 
while we look at the technology tools that are available to 
your office.
    Mrs. Torres. How do we protect the very sensitive personal 
data that is shared with our offices if we utilize a system 
like AI to help process this casework?
    Mr. Clocker. I think that is the hard problem here, right? 
We will--this is one area we will probably be extraordinarily 
cautious before we integrate AI tools, looking into, you know, 
private PII constituent data, which has health information, all 
sorts of information that we need to protect.
    Mrs. Torres. I hope when we are convening these meetings 
with professionals that are knowledgeable on all of these 
issues that this conversation is happening. Protecting the 
privacy of our constituents has to be a priority.
    Ensuring that our staff has every tool available to them to 
improve their job has to be a priority, but in many of these 
jobs, you just cannot replace a human being with a computer. 
Take that to heart and take that seriously.
    Mr. Clocker. We agree.
    Mrs. Torres. Thank you. I yield back.
    Chairman Steil. The gentlewoman yields back.
    Mrs. Bice is recognized for 5 minutes.
    Mrs. Bice. Thank you, Mr. Chairman.
    Thank you to the witnesses for joining us this morning/
afternoon.
    First, I am going to make a shameless plug. The 
Modernization Subcommittee will also be holding some AI 
hearings in the near future. This seems to be a very 
interesting topic, because I do not often see this room packed 
full of visitors. Feel free to join the Subcommittee when we 
post that particular hearing.
    We have seen AI in the legislative branch assist library 
patrons with optical character recognition, building LLMs to 
allow experimentation and tests based on high-quality
Government data or reports, and work to upskill legislative 
branch staff to become familiar with generative AI technology.
    The first question I have: In the 2023 year-end report on 
the Federal judiciary, Chief Justice Roberts cautioned the 
judicial branch that, quote, ``Any use of AI requires caution 
and humility.'' I think we are here today in that spirit to 
acknowledge the first steps of the legislative branch must be 
cautious, humble, thoughtful and forward-looking.
    If I can start with Mr. Halpern: the GPO completed a 
formal AI directive that was issued internally in October, 
correct? Can you explain the directive and the impact it has
had on GPO operations over the last few months?
    Mr. Halpern. Sure. The--we are in the nascent stages of 
implementing that directive. Largely the directive incorporated 
the NIST Framework into our own decision-making process. As I 
mentioned, our AI Governance Committee established by that 
directive held its inaugural meeting yesterday.
    We actually authorized three pilot programs to move forward 
using AI technologies. One was a simple intranet chatbot for 
search. Another is a module for our Contract Lifecycle 
Management system to test that to see if that can assist our 
acquisitions process. We are also looking at tools to assist 
our public information search.
    As I said, the directive is to provide that framework for 
our decisions as we look at these technologies and incorporate 
them into GPO's operations. Really, you know, our first stage 
is to do no harm, so to test things thoroughly before they move 
into production.
    Mrs. Bice. Great. Thank you.
    Mr. Clocker, you talked earlier about the need for 
potentially staff in reviewing and evaluating vendors, and I 
think it is an important point. Certainly, we want to make sure 
that there is a thorough vetting of any applications that are 
getting put forward to use by House offices. I also think we 
have to think through the speed at which we do that, because 
technology is changing at a very rapid pace.
    If we are taking 6 months or a year to actually vet and 
approve these types of applications, it is, you know, putting 
us, I think, at a competitive disadvantage in being able to 
actually utilize those programs.
    Large language models are being weaponized. In December, 
VentureBeat published an article about LLM attacks against 
members of the U.K. Parliament. In this instance, hackers used 
them to personalize emails and kept sending new emails in rapid 
succession until they got through.
    Mr. Clocker, is there anything like this happening in the 
House today?
    Mr. Clocker. I do not think we have seen that specific 
example. We are certainly concerned about it. We have seen 
increased speed and sophistication of cyber attacks. Our 
adversaries, those who want to harm the House, whether 
politically motivated or just fraudsters--
these tools are going to be widely available. They are going to 
use it to generate emails that look like constituent letters, 
social media posts that look like they are coming from 
constituents. We are going to have to figure out how to get in 
front of that.
    Mrs. Bice. Absolutely. Have you--has the CAO actually 
identified any caseworker-related AI use cases?
    Mr. Clocker. We have. I think we are starting with one of 
the Modernization Subcommittee priorities, which was to 
anonymize and aggregate casework information. We are not using 
AI in that project right now.
    We are being very cautious. There is sensitivity. It is 
anonymized data, but there is still sensitivity about that 
data. We want to prove that project as we roll it out with the 
additional pilot participants.
    I do think, once that dataset gets large enough, there are 
definitely opportunities to use AI. Not this year. You know, 
maybe next year.
    Mrs. Bice. Perfect. Thank you for those answers.
    Mr. Chairman, I yield.
    Chairman Steil. The gentlewoman yields back.
    We appreciate having Mr. Lieu on our Committee with us 
today.
    Mr. Lieu, you are now recognized for 5 minutes.
    Mr. Lieu. Thank you, Chairman Steil and Ranking Member 
Morelle, for allowing me to be on this Committee today. I 
really appreciate it.
    Thank you to the witnesses for your presentations and your 
work. I am a recovering computer science major. I am aware of 
both the benefits as well as the risks that AI poses.
    I have some questions first to Mr. Clocker.
    You all had approved ChatGPT Plus on a limited basis. You 
had not approved Microsoft's Bing Chat or, for example, 
Google's Bard, or other large language models.
    What would be some of the risks of a House office using 
those large language models?
    Mr. Clocker. We talked to all those companies, and we are 
still talking to them. ChatGPT Plus, which is, as you know, run 
by OpenAI, we have some stated terms and conditions around 
protection of House data, protection of Member data, and they 
agreed to them. The other products have not.
    It is really about how they handle your data and, for 
example, to make sure they do not share it with anybody else. 
Until they agree to that, we probably will not authorize them.
    Mr. Lieu. OpenAI has said, anything that a House office 
views through that particular license, they are not going to 
incorporate into training or sharing or anything like that.
    Mr. Clocker. That is correct. That is correct. Now, one 
of----
    Mr. Lieu. All our computers are under that license, or how 
does that work? If I just go on my laptop and use ChatGPT 
Plus----
    Mr. Clocker. No, it is the paid version only. I think it is 
about $20 a month. It is very low cost. You do have to go in 
and get the ChatGPT Plus version, not the free version. If you 
use----
    Mr. Lieu. It has to be specifically through that license 
that they do not share that information?
    Mr. Clocker. That is correct. Yes, yes.
    Mr. Lieu. If I go on my desktop and use Microsoft's Bing 
Chat----
    Mr. Clocker. That is correct.
    Mr. Lieu [continuing]. then the risk is my queries could 
sort of go somewhere into their, I do not know, large----
    Mr. Clocker. Obviously, the risk is very low. We do not 
know what the typical House staffer is going to do with the 
data they put in through those tools. It is more than just are 
they going to use the data to train the model, right? There are 
some requirements in our terms and conditions around reporting 
on cyber incidents, that sort of thing.
    These are all fine companies. Obviously, Google is a very 
mature company. Where we are today is just the ChatGPT Plus.
    Mr. Lieu. Thank you.
    I did want to follow up on something that Representative 
Bice said about LLMs targeting and being weaponized against 
other folks, including legislators.
    You may have seen that there was a fake audio of Joe 
Biden's voice in a recent election. There is technology called 
liveness detection technology that is pretty good, that will be 
pretty darn accurate in letting you know if the voice at the 
other end is a human being or not a human being.
    Could the CAO look at that technology and see whether it 
might be useful to incorporate?
    Mr. Clocker. Absolutely. We think we are going to need to 
use technology in a lot of instances. Is it a real voice? Is it 
a real message generated by AI that is coming in? We definitely 
think there is an opportunity to use that technology.
    Mr. Lieu. Thank you.
    I have some questions for Mr. Ariga. Am I saying that 
correctly?
    Mr. Ariga. Yes.
    Mr. Lieu. Your agency is developing your own large language 
model. As you know, one of the issues of large language models 
is they are not designed to seek the truth. They are 
essentially designed to seek the most popular response to your 
query.
    What sometimes happens is they do this thing--the 
technical term is ``hallucination''--where they have a perfectly 
grammatically correct paragraph that is bonkers, totally false.
    One of the ways they try to correct for that is they 
literally, after they have the OpenAI model, go 6 months and 
hire thousands of human beings that basically go through and 
make sure it does not say bonkers things.
    Is your agency going to do that and have all these human 
beings test that model when you develop it?
    Mr. Ariga. Yes. Our approach to large language models 
takes a slightly different tack than what Mr. Clocker may have 
described. We chose to deploy a large language model inside 
GAO so we can control what comes in and what goes out.
    Specifically, that is allowing us to then add GAO-trusted 
data sources--for example, GAO published reports or even 
Congress.gov--as sources of information.
    Just as an anecdote: when we first started 
experimenting with generative AI technology, we asked a very 
basic question: What was Abraham Lincoln's opinion of GAO's AI 
accountability framework? Not surprisingly, it told us Abraham 
Lincoln hated our framework.
    That sort of told us that there is certainly a danger of 
hallucination, and that really informed our approach in terms 
of how we can augment this pretrained model with trusted data 
and then also describe the rationale behind the answers that 
are produced. It is a combination of technology and digital 
literacy for us to make sure that we can recognize the 
signatures of hallucinations.
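    [The grounding approach Mr. Ariga describes, retrieving 
passages from trusted sources and citing them so the rationale 
behind each answer is visible, can be sketched as follows. This 
is an illustrative sketch only; the document IDs, passage 
texts, and word-overlap scoring are hypothetical stand-ins, not 
GAO's actual system:]

```python
# Minimal sketch of retrieval-augmented grounding: answers are built only
# from trusted documents, and the prompt cites which sources were used.
# Document IDs, texts, and the overlap-based ranking are hypothetical.

TRUSTED_SOURCES = {
    "GAO-21-519SP": "GAO's AI accountability framework covers governance, "
                    "data, performance, and monitoring.",
    "Congress.gov": "Congress.gov provides legislative information from the "
                    "Library of Congress.",
}

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank trusted documents by simple word overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        TRUSTED_SOURCES.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query: str) -> str:
    """Build a prompt that restricts the model to the retrieved passages
    and asks it to cite them, so hallucinations are easier to spot."""
    passages = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer using ONLY the sources below, and cite the source ID. "
        "If the sources do not answer the question, say so.\n"
        f"{context}\nQuestion: {query}"
    )

prompt = grounded_prompt("What does the AI accountability framework cover?")
```

    [A real deployment would replace the word-overlap ranker 
with an embedding-based search, but the control point is the 
same: the model only sees vetted passages, and every answer 
carries its citations.]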
    Mr. Lieu. Thank you. I yield back.
    Chairman Steil. The gentleman yields back.
    Mr. Carey is recognized for 5 minutes.
    Mr. Carey. I want to thank the Chairman, and I want to 
thank Ranking Member Morelle for holding this hearing.
    You know, I can see how AI can be confusing. Last Sunday, I 
was sitting in my--at my house, and somebody sent me something 
that was generated. I opened it. I really thought that there 
was going to be a ``Back to the Future 4'' movie coming out. 
Now, this was something very lighthearted, but it was something 
that really looked like we were, in fact, going to have that 
movie.
    I know, Mr. Clocker, you described a lot of this stuff in 
your testimony, but I do want to go to what Ms. Sewell said 
earlier about the AI algorithms and how we can do things to
avoid any type of unintended consequences, if you will.
    Briefly, if you could, how does the CAO regulate, collect, 
and inventory AI use cases?
    Mr. Clocker. We started talking to staffers and Members and 
Committee leadership offices. We have----
    Mr. Carey. How many offices have you surveyed?
    Mr. Clocker. We had 150 offices total who participated in 
the pilot.
    Mr. Carey. How many Committees would you say?
    Mr. Clocker. I think it is about half the Committees, so 
roughly 10.
    Mr. Carey. Along that, the CAO's ChatGPT working group, 
what has been the most common popular office use, and what do 
you think were some of the biggest shortcomings?
    Mr. Clocker. The most popular uses are very similar uses 
you have heard in other areas. It is really that--producing 
that first draft and then giving it to a human to go from 
there--first draft of testimony, first draft of witness 
questions, first draft of a speech.
    A lot of times what people talked about is that, if you have 
writer's block, it is going to get you over that writer's 
block, and it is going to give you a good framework to actually 
customize in the Member's voice.
    The pitfalls of what we have heard here, right; it does 
hallucinate. It is also very confident, right, that it knows 
what it is talking about. It sounds very confident even though 
it is hallucinating.
    We will address--we need to address that through training 
and how to use the tools effectively.
    Mr. Carey. I think the thing that scares me the most is 
that we are going to have a system where we are going to have 
AI-generated letters which we receive from constituents that 
are then going to be AI-generated letters back to those AI-
generated--it is just going to be this vicious circle.
    To whatever extent--I know that Mrs. Bice mentioned that we 
are going to be doing some more investigating of that.
    Again, Mr. Chairman, I appreciate the hearing.
    I appreciate all of your testimony, and thank you for 
everybody being here today.
    With that, I will yield back, Mr. Chairman.
    Chairman Steil. Thank you very much. The gentleman yields 
back.
    We have concluded the questions for our witnesses. I will 
yield to the Ranking Member for any concluding remarks.
    Mr. Morelle. Yes. I just want to say thank you to, first of 
all, Chairman Steil and the staff on both sides, who have been 
working so hard at pulling this together. I know this will be, 
I am sure, just one of many, many conversations on this topic.
    I am grateful to the Chairman for his leadership and his 
friendship.
    I want to thank all the witnesses for the work--not only 
for being here today, but the work you are doing day in and day 
out to help secure and to improve this Congress and the work 
that we do and the Members are doing to support those Members.
    I look forward to more conversations. This is obviously a 
very, very big topic. Again, I want to thank the Members and 
thank Mr. Lieu for joining us as well today.
    Thank you, Mr. Chairman. With that, I will yield back.
    Chairman Steil. Thank you very much.
    I also just want to reiterate my appreciation for our 
witnesses coming in and testifying. This is going to be an 
ongoing conversation about how we can leverage the benefits of 
this new technology while managing the downside risk. There is 
the potential for all of our Members to look back on these 
early AI hearings the way we look back at some of the early 
hearings on email, when Members talked about email, and now we 
have a good chuckle.
    As AI develops and moves forward, I think we are going to 
see some amazing opportunities that we may not know of today. I 
think this conversation is an opportunity to make sure
we are leveraging the benefits of this technology and managing 
the downside risk.
    One thing I heard time and again is that AI will not 
replace people. I think it is a real opportunity to make sure 
that we are using AI, allowing people to upskill with that and 
allowing people to leverage the technology to improve the work 
that is being produced at the end of the day. I appreciate all 
of our testimony today.
    Without objection, each Member will have 5 legislative days 
to insert additional materials into the record or to revise and 
extend their remarks.
    Members of the Committee may have some additional questions 
for you, our witnesses, and we ask that you respond to those 
questions in writing.
    Also now, pursuant to paragraph (b) of rule 14 of the rules 
of the Committee, I announce the vacancy of the position of 
clerk of the Committee and hereby appoint Kristen Monterroso to 
fill the vacancy.
    Without objection, a letter announcing the vacancy as well 
as a letter announcing the appointment of Kristen Monterroso as 
clerk will be placed into the record.
    [The letters referred to follow:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Steil. A copy of these letters will be made 
available to all Committee Members.
    If there is no further business, I want to thank the 
Members for their participation.
    Without objection, the Committee stands adjourned.
    [Whereupon, at 12:05 p.m., the Committee was adjourned.]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
                             [all]