[Senate Hearing 118-135]
[From the U.S. Government Publishing Office]
S. Hrg. 118-135
GOVERNING AI THROUGH ACQUISITION AND PROCUREMENT
=======================================================================
HEARING
BEFORE THE
COMMITTEE ON
HOMELAND SECURITY AND GOVERNMENTAL AFFAIRS
UNITED STATES SENATE
ONE HUNDRED EIGHTEENTH CONGRESS
FIRST SESSION
__________
SEPTEMBER 14, 2023
__________
Available via the World Wide Web: http://www.govinfo.gov
Printed for the use of the
Committee on Homeland Security and Governmental Affairs
[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]
U.S. GOVERNMENT PUBLISHING OFFICE
53-707 PDF WASHINGTON : 2024
COMMITTEE ON HOMELAND SECURITY AND GOVERNMENTAL AFFAIRS
GARY C. PETERS, Michigan, Chairman
THOMAS R. CARPER, Delaware RAND PAUL, Kentucky
MAGGIE HASSAN, New Hampshire RON JOHNSON, Wisconsin
KYRSTEN SINEMA, Arizona JAMES LANKFORD, Oklahoma
JACKY ROSEN, Nevada MITT ROMNEY, Utah
ALEX PADILLA, California RICK SCOTT, Florida
JON OSSOFF, Georgia JOSH HAWLEY, Missouri
RICHARD BLUMENTHAL, Connecticut ROGER MARSHALL, Kansas
David M. Weinberg, Staff Director
Lena C. Chang, Director of Governmental Affairs
Michelle M. Benecke, Senior Counsel
Evan E. Freeman, Counsel
Liana S. Keesing, Research Assistant
William E. Henderson III, Minority Staff Director
Christina N. Salazar, Minority Chief Counsel
Kendal B. Tigner, Minority Professional Staff Member
Laura W. Kilbride, Chief Clerk
Ashley A. Gonzalez, Hearing Clerk
C O N T E N T S
------
Opening statements:
Page
Senator Peters............................................... 1
Senator Paul................................................. 3
Senator Hawley............................................... 17
Senator Blumenthal........................................... 20
Senator Hassan............................................... 25
Senator Lankford............................................. 27
Senator Rosen................................................ 30
Senator Carper............................................... 33
Prepared statements:
Senator Peters............................................... 41
Senator Paul................................................. 44
WITNESSES
THURSDAY, SEPTEMBER 14, 2023
Rayid Ghani, Distinguished Career Professor, Machine Learning
Department and the Heinz College of Information Systems and
Public Policy, Carnegie Mellon University...................... 5
Fei-Fei Li, Ph.D., Sequoia Professor, Computer Science Department
and Co-Director, Human-Centered AI Institute, Stanford
University..................................................... 7
Devaki Raj, Former Chief Executive Officer and Co-Founder,
CrowdAI........................................................ 9
William Roberts, Director of Emerging Technologies, ASI
Government..................................................... 11
Michael Shellenberger, Founder, Public........................... 13
Alphabetical List of Witnesses
Ghani, Rayid:
Testimony.................................................... 5
Prepared statement........................................... 46
Li, Fei-Fei, Ph.D.:
Testimony.................................................... 7
Prepared statement........................................... 53
Raj, Devaki:
Testimony.................................................... 9
Prepared statement........................................... 58
Roberts, William:
Testimony.................................................... 11
Prepared statement........................................... 66
Shellenberger, Michael:
Testimony.................................................... 13
Prepared statement........................................... 75
APPENDIX
Statements submitted for the Record:
Tim Cooke, CEO and Owner, ASI Government LLC................. 86
Scale AI..................................................... 93
Anjana Susarla, Professor of Responsible AI, Michigan State
University................................................. 97
GOVERNING AI THROUGH ACQUISITION AND PROCUREMENT
----------
THURSDAY, SEPTEMBER 14, 2023
U.S. Senate,
Committee on Homeland Security
and Governmental Affairs,
Washington, DC.
The Committee met, pursuant to notice, at 10 a.m., in room
562, Dirksen Senate Office Building, Hon. Gary Peters, Chair of
the Committee, presiding.
Present: Senators Peters [presiding], Carper, Hassan,
Sinema, Rosen, Ossoff, Blumenthal, Paul, Lankford, Scott,
Hawley, and Marshall.
OPENING STATEMENT OF SENATOR PETERS\1\
Chairman Peters. The Committee will come to order. Today's
hearing is the third in a series that I have convened on
artificial intelligence (AI). At our first meeting in March, we
discussed the transformative potential of AI, as well as the
possible risk that these technologies may pose.
---------------------------------------------------------------------------
\1\ The prepared statement of Senator Peters appears in the
Appendix on page 41.
---------------------------------------------------------------------------
At our second hearing in May, we considered the role of AI
in government. How AI tools can improve the delivery of
services to the American people, and how to ensure they are
being used both responsibly and effectively.
Today, we are going to do a deeper dive into how government
will purchase AI technologies, and how the standards and
guardrails that government sets for these tools will shape
their development and use really all across all industries.
The Federal Government is already using AI and its use
across agencies is only expected to grow in the coming years.
These systems could help provide more efficient services,
assess potential security threats, and automate routine tasks
to enhance the Federal workforce.
For example, the Department of Homeland Security (DHS) is
using natural language processing to evaluate employee surveys
and improve workplace experience. The Federal Aviation
Administration (FAA) deploys machine learning to update the
weather models that help land planes successfully. Other
technologies that are continuing to develop, such as generative
AI, offer the potential to improve government services even
more.
For example, many agencies, from the Office of Personnel
Management (OPM) to the Department of Health and Human Services
(HHS), to the Department of Education, have rolled out chat
bots to provide better service to Federal employees and the
larger American public. AI is here and is already being put to
good use.
Many of these systems are not developed by the government,
but rather the private sector. Over half of the AI tools used
by Federal agencies have been purchased from commercial
vendors. This collaboration between the public and the private
sector is crucial. It ensures that government is using the most
effective AI systems.
American companies are breaking new ground with these
technologies, and we have a chance to share in the benefits of
that incredible innovation. But these tools also bring
potential risk and policy implications. They require new
knowledge from procurement officials, as well as increased
coordination across agencies.
In order to successfully and effectively purchase and use
AI tools, Federal agencies have to be prepared to address
issues like privacy concerns about the use of Federal data to
train commercial models and bias in government decisionmaking.
We must be nimble, whenever the government collaborates with a
private sector, but this is especially true with AI, where new
developments emerge almost every single day.
The tools that are purchased are often actively learning
and are changing as they are used. Last Congress, I authored
and enacted a law that requires officials that procure AI
tools, be trained in their capabilities as well as their
potential risk. This year, I introduced legislation that would
extend this training to all Federal managers and supervisors.
I have also introduced legislation that would designate a
chief AI officer at every Federal agency so that they have
leadership and expertise to maximize the potential of these
technologies and effectively address risk. These guardrails are
more important than ever. Federal agencies are inundated with
sales pitches and technology demos, promising the next big
thing.
While the Federal Government must be forward thinking, we
also have to be cautious in procuring these new tools. We must
continue to work past the initial purchase, testing and fine
tuning our models to ensure that they are effectively serving
the American people. As AI development accelerates, private
industry has yet to standardize practices for evaluating AI
systems for risk, for trustworthiness and responsibility.
Through the Federal procurement policy, the government has
a unique opportunity to shape these standards and frameworks
for development and deployment of these technologies across the
private sector more broadly. I look forward to hearing from our
expert witnesses here today.
We look forward to working with you not just today, but in
the weeks, months, and years ahead, and continue our bipartisan
work to help encourage American development of AI and ensure
that it is being used appropriately. I would now like to turn
the microphone over to Ranking Member, Senator Paul, for his
opening statement.
OPENING STATEMENT OF SENATOR PAUL\1\
Senator Paul. Thank you. In 2021, the Pentagon, through the
Defense Advanced Research Projects Agency (DARPA), asked for
proposals for real time comprehensive tools that established
ground truth for how countries are conducting domestic
information control.
---------------------------------------------------------------------------
\1\ The prepared statement of Senator Paul appears in the Appendix
on page 44.
---------------------------------------------------------------------------
DARPA's goal in developing AI technology for measuring the
information control environment was to help the U.S. Government
better understand how digitally authoritarian regimes repress
their populations at scale over the internet via censorship,
blocking, or throttling.
Of course, the solicitation made it clear that the Pentagon
did not want the proposals to look at activities of the U.S.
Government. The Pentagon and the U.S. Government as a whole
enjoy professing moral superiority over authoritarian
governments when it comes to upholding basic democratic values.
American politicians have no qualms about criticizing
foreign governments like Russia and China for their suppression
of civil liberties and efforts to eliminate dissent. Yet there
seems to be a complete unwillingness to have an honest
conversation about the disturbingly similar actions our own
government is actively engaged in and financing.
For decades, the Pentagon and other Federal agencies have
been quietly partnering with private organizations to develop
powerful surveillance and intervention tools designed to
monitor and influence narratives on social media.
For example, a 2021 Pentagon program called Civil Sanctuary
sought to use artificial intelligence tools to scale the
moderation capability of social media platforms to create what
it describes as a more stable information environment. In other
words, the goal of this Pentagon program was to exponentially
multiply the government's ability to coordinate censorship of
online speech.
The Pentagon has invested millions of dollars to develop
these tools, not only for use by social media companies, but
also the intelligence community (IC) and law enforcement.
Meanwhile, the Department of Commerce (DOC) is awarding million
dollar grants for cognitive research into how the U.S.
Government can foster trust in artificial intelligence with the
general public.
While the Federal Government is using taxpayer dollars to
develop AI to surveil and monitor Americans' online speech, it
is also spending money to figure out how to get you to trust
the government with AI.
Over the last year, starting with the Twitter Files,
journalists started to expose the deep coordination between the
Federal Government and social media. When it comes to content
moderation, these decisions in policing the speech of
Americans, we have seen this enormous connection between
Government and private entities.
As Michael Shellenberger rightly points out, the threat to
our civil liberties comes not from AI, but from the people who
want to control it and use it to censor information. It is not
the tool, it is the corruption of power that is always the
problem.
Last week, the Fifth Circuit affirmed the government likely
violated the First Amendment, a big deal, violated the First
Amendment by coercing social media companies to remove speech
that the government disagreed with related to the origins of
Coronavirus Disease 2019 (COVID-19), the pandemic lockdowns,
vaccine efficacy, and the Hunter Biden laptop story.
The court cited numerous examples of U.S. Government
officials engaging in domestic information control on social
media. Our concern is not that they are doing it, but they are
going to do it even more efficiently and even more ruthlessly
if they get artificial intelligence and are able to comb
through the entire internet.
They are already doing this. Our concern with artificial
intelligence is they will take that tool and much more
efficiently go through millions and millions opposed to say,
that is not allowed. Government officials demanded that the
platforms implement stronger COVID misinformation monitoring
programs, and then they threatened the platforms. They
threatened them with taking away Section 230.
They threatened them with antitrust action. It is amazing.
This was no sort of please take down some information. It is,
take it down or else. That is why the court enjoined and said
to the Biden Administration, you must stop. Currently, the
Biden Administration is not meeting with them.
They have had to cancel their meetings with the Federal
Bureau of Investigation (FBI), with the Department of Homeland
Security. But this is a big deal. But it seems to be only one
side of the aisle has been concerned at all with what has
happened because some of it involves politics. But it should
not. I mean, free speech should be something that both parties
really are concerned with trying to protect.
After one meeting with Federal officials, one platform,
social media platform, committed to reducing the visibility of
information that was skeptical of the government's COVID
vaccine policy, even when it did not contain any
misinformation.
They were saying, even if it is true, we want you to take
it down, because some people might not get vaccinated because
you said something that actually did occur, but we do not want
people to know that because it would lessen people's
enthusiasm.
That is a crazy notion. Facebook promised to label and
demote a popular video after officials flagged it, even though
they acknowledged it did not qualify for removal under its
policies. I fear that we are likely only in the beginning
stages of understanding the extent of the Federal Government's
involvement in content moderation and the decisions that
private social media platforms make.
What we do know is that our government is funding the
development of powerful artificial intelligence tools for
monitoring and shaping online discourse. I want to be clear, AI
is not inherently malicious. It has the potential to
revolutionize basic aspects of society, from health care to
education.
However, in the hands of unchecked government, AI can be
weaponized as a tool to suppress fundamental values like
speech, things that our country was founded upon--the open
exchange of ideas, the freedom to question, and the right to
dissent. This should not be a partisan issue.
Chairman Peters. It is the practice of the Homeland
Security and Governmental Affairs Committee (HSGAC) to swear in
witnesses. If each of you would please rise and raise your
right hand. Do you swear that the testimony that you will give
before this Committee will be the truth, the whole truth, and
nothing but the truth, so help you, God?
Mr. Ghani. I do.
Ms. Li. I do.
Ms. Raj. I do.
Mr. Roberts. I do.
Mr. Shellenberger. I do.
Chairman Peters. Thank you. You may be seated. Our first
witness is Professor Rayid Ghani. Professor Ghani is a
Distinguished Career Professor in the Machine Learning
Department at the Heinz College of Information Systems and
Public Policy at Carnegie Mellon University (CMU). At CMU, he
leads the Data Science and Public Policy Group, and the Data
Science for Social Good program, as well as the Responsible AI
Initiative, which he co-leads. Professor Ghani, it is great to
have you here at the Committee. You are recognized for your
opening statement.
TESTIMONY OF RAYID GHANI,\1\ DISTINGUISHED CAREER PROFESSOR,
MACHINE LEARNING DEPARTMENT AND THE HEINZ COLLEGE OF
INFORMATION SYSTEMS AND PUBLIC POLICY, CARNEGIE MELLON
UNIVERSITY
Mr. Ghani. Thank you. Thank you, Chair Peters, Ranking
Member Paul, and other Members of the Committee. Thanks for
hosting this hearing today and for giving me the opportunity to
present this testimony.
---------------------------------------------------------------------------
\1\ The prepared statement of Mr. Ghani appears in the Appendix on
page 46.
---------------------------------------------------------------------------
As Chair Peters mentioned, my name is Rayid Ghani. I am a
Professor of Machine Learning and Public Policy at Carnegie
Mellon. I am here today because I believe that AI has enormous
potential in helping us tackle critical societal problems that
our governments are focused on.
Much of the work I have done over the last decade has been
in this space through working extensively with governments at
the Federal, State, and local level, including helping and
using AI systems to tackle problem across health, criminal
justice, education, public safety, human services, workforce
development, particularly on supporting fair and equitable
outcomes.
Based on my experience, I believe that I can benefit every
Federal, State and local agency. However, any AI system, or any
other type of system affecting people's lives, has to be
explicitly designed to promote our societal values, such as
equity, and not just narrowly optimized for efficiency.
I think it is critical for us, Government agencies,
policymakers, to ensure that these systems are designed in a
way that they do result in promoting our values. Now, while the
entire lifecycle of AI systems, from scoping, to procurement,
to designing, to testing, to deploying needs to have those
guidelines in place that maximizes societal benefits and
minimize potential harms, there has been a lack of attention to
the earlier phases of this process, particularly on the problem
scoping and procurement parts.
As Chairman Peters mentioned, many of the AI systems being
used in government are not built in-house. They are procured
through vendors, consultants, and researchers. That makes
getting the procurement phase correct critical. Many problems
and harms discovered downstream can be avoided by a more
effective procurement process.
We need to make sure that the government procurement of AI
follows a responsible process and in turn makes the AI vendors
accountable for the systems they design. They themselves have
to promote accountability, transparency, and fairness.
Government agencies often go on the market to buy AI
without understanding and defining and scoping the problem they
want to tackle, without assessing whether AI is even the right
tool, and without including individuals and communities that
will be affected. AI systems are not one size fits all.
Procuring AI is first and foremost procuring a solution
that is helping solve a problem and should be assessed in the
ability to better solve the problem at hand. In that respect,
procuring AI is not that different from procuring other
technologies. Now, there are a few areas where it is different.
One, AI algorithms are neither inherently biased or
unbiased, nor have inherent fixed values. The design of these
systems requires making hundreds and sometimes thousands of
choices that determine the behavior of the system. If these
choices explicitly focus on outcomes we care about and we
evaluate the systems against those intended outcomes, the AI
system can help us achieve what we want to achieve.
Unfortunately, today, those decisions are too often left
typically to the AI system developer who defines those values
implicitly or explicitly. The procurement process needs to
define these goals and values very explicitly. AI requires
that. Society requires that. Ensure that the vendors address
those appropriately in the system being procured and provide
evidence of that.
Building responsible AI systems requires a structured
approach, and the procurement process needs to set
expectations, enforce transparency and accountability from
vendors in each of these steps.
That includes defining goals, translating them into
requirements that the vendors to design the system to achieve,
and setting up a continuous monitoring and evaluation process,
because the system will both itself change, as well as have to
live and function in an ever changing world.
It is critical and urgent for policymakers to act and
provide guidelines and regulations for procuring, developing,
and using AI, in order to ensure that these systems are built
in a transparent and accountable manner, and result in fair and
equitable outcomes for our society.
As initial steps, here are some of my recommendations. No.
1, focusing the AI procurement process on specific use cases
rather than general purpose, one size fits all AI, both to
support intended outcomes around that use case, as well as to
prevent harm through misuse.
No. 2, development of common procurement requirements for
AI and templates the government agencies can start from. That
does not exist today. No. 3, create guidelines that ensure
meaningful involvement of the communities that will be impacted
by the AI system, right from the inception stage and
continuously.
No. 4, and last, creating trainings, processes, and tools
to support the procurement teams within government agencies. As
the government teams expand their role and start procuring AI
augment the systems more regularly, they will need to be
supported by increasing their capacity to fulfill this role.
I recommend creating a set of trainings and processes and
collaboration mechanisms and tools to help them achieve that.
The overall goal behind these recommendations is to set some
standards around procurement of AI by government agencies, and
to support and enable the agencies to implement those standards
effectively and procure AI systems that can help us achieve
their policy and societal goals. Thank you.
Chairman Peters [continuing]. Dr. Li is the Sequoia
Professor of Computer Science at Stanford University, and Co-
Director of the Stanford Institute for Human Centered
Artificial Intelligence (HAI). Before that, Dr. Li spent five
years as the Director of Stanford's AI Lab. During that time,
she was also Vice President at Google and the Chief Scientist
of AI-ML for Google Cloud.
Dr. Li is the inventor of ImageNet and the ImageNet
Challenge, a large scale data set effort that contributed to
significant advances in deep learning and computer vision. Dr.
Li, thank you for being here today. We look forward to your
opening comments.
TESTIMONY OF FEI-FEI LI, PH.D.,\1\ SEQUOIA PROFESSOR, COMPUTER
SCIENCE DEPARTMENT AND CO-DIRECTOR, HUMAN-CENTERED AI
INSTITUTE, STANFORD UNIVERSITY
Dr. Li. Thank you. Thank you, Chair Peters, Ranking Member
Paul, Members of the Committee. Thank you for the privilege of
appearing before this prestigious body. It is truly an honor.
---------------------------------------------------------------------------
\1\ The prepared statement of Dr. Li appears in the Appendix on
page 53.
---------------------------------------------------------------------------
I have spent my life working in the field of artificial
intelligence, over 20 plus years, studying, developing, and
understanding the technology that has entered the public
consciousness due to recent breakthroughs. There is no doubt
that we have arrived at an inflection point in AI, largely
powered by generative AI, including large language models (LLM)
and my own field of computer vision, where we essentially teach
computers to see.
For example, a notable application is health care, where AI
is augmenting the capabilities of caretakers and medical
professionals by detecting anomalies in medical imagery such as
X-rays or MRI scans, thereby aiding early diagnosis and
treatment. Most importantly, AI presents many opportunities to
the U.S. Government.
One area is to streamline the efficiency of government.
Many Federal agencies are already experimenting with AI powered
tools. For example, HHS has initiated a pilot program that
employs AI to enhance the efficiency of fraud detection within
the Centers for Medicare and Medicaid Services (CMS).
Second, AI in health care reduces the burden on public
health care resources, including Medicare and Medicaid. Medical
AI tools can decrease the frequency of unneeded emergency
medical interventions and hospital readmissions, which are
significant cost drivers in health care expenditures.
However, while AI, like most technologies, promises to
solve many problems for the common good, it can also be misused
to cause harm and carry unintended consequences. Let me just
give you two examples of when the harms of AI can affect how
the government approaches it.
First, bias in AI is well-documented. For example, in
credit risk scoring, research shows that predictive tools used
to approve or reject loans are less accurate for low income
minority groups in the United States due to the lack of data in
their credit histories. To ensure that AI applications deliver
reliable results for all Americans, the availability of high
quality representative datasets is crucial.
Second, in an area of heightened public concern over data
collection and misuse, we must integrate strong privacy and
security protocols into these applications from the beginning.
To build Privacy-preserving technology, we must engage diverse
stakeholders in health care. This includes AI developers,
public sector, health care professionals, patients, and more.
This is why I founded Stanford Institute for Human Centered
Artificial Intelligence, where we study AI and its impact not
as a field exclusive to computer science, but instead as a
multidisciplinary field that includes the social sciences,
engineering, law, medicine, humanities, and more.
The Federal Government should adopt a similar approach to
properly understood the future of AI. It falls upon the U.S.
Government to spearhead the ethical procurement and deployment
of these systems, setting norms for AI development and
ultimately shaping the field of responsible AI.
I applaud this Committee and the work that has been done
thus far on AI, including the AI Training Act and the AI Lead
Act, which create powerful tools for Federal Government to set
such norms. In the United States, government spending on AI
related contracts--as the U.S. Government's funding has surged
in AI related contracts, it is more crucial than ever to
closely examine these vendors to ensure their goals align with
those of the Federal Government. One key component is
evaluation, especially in key areas like health care,
education, agriculture, finance, and more.
Having created one of the most consequential evaluation
datasets for AI models, ImageNet, I firmly believe that
evaluation should consider every factor in a holistic way, from
accuracy, to fairness, to the reliability of models performing
under real world conditions. Second, we must build-in
transparency measures.
Vendors should disclose key information about their
systems, including how they collect and annotate datasets, what
potential risks their systems pose, and how they mitigate those
risks. But the procurement is just one piece of the puzzle.
For the United States to maintain its leadership in AI, the
Federal Government must make the needed critical public
investments in AI. Due to the vast amount of compute and data
required to train these systems, only a select few industry
players can shape the future of AI, leaving an imbalance in the
innovation ecosystem that lacks the diverse voices of public
sector and government labs.
The lack of public sector investment in AI hampers not only
in thoughtful regulation, but also proper Federal procurement.
Without the ability to train AI talents, the Federal Government
will not have the necessary human capital to create meaningful
regulation, ensure ethical AI procurement, and be the true AI
leader it has the potential to be.
This is why I am unequivocally a strong supporter of the
Create AI Act, a strong bipartisan legislation introduced this
summer in both chambers. The Create AI Act will establish a
national AI research resource that provides the needed
resources to allow public sector researchers to innovate and
train the next generation of AI leaders.
In June, I personally shared with President Biden how I
believe the United States is not prepared for this imminent AI
moment. What we need right now is a coordinated moonshot effort
for the Nation to ensure America's leadership in AI for the
good of humanity.
This task will be no small feat, but with meticulous
coordination, significant investment in scientific AI research,
and robust collaboration across government, public sector, and
industry, we can rise to meet this challenge and ensure
America's leadership in AI is both impactful and enduring.
Thank you to the Chairman, Ranking Member, and all the
Members of the Committee for allowing me to testify today.
Chairman Peters. Thank you. Our next witness is Devaki Raj.
Ms. Raj is the former Chief Executive Officer (CEO) and co-
founder of CrowdAI, a computer vision startup that has been
contracting with the U.S. Government since 2019. Ms. Raj was
recognized in 2019 on the Forbes 30, under 30, as an AI leader
to watch.
Previously, she worked for Google as a Data Scientist in
their maps and android sector. Ms. Raj, thank you for being
here today, and we look forward to your testimony.
TESTIMONY OF DEVAKI RAJ,\1\ FORMER CHIEF EXECUTIVE OFFICER AND
CO-FOUNDER, CROWDAI
Ms. Raj. Chairman Peters, Ranking Member Paul, and
distinguished Members of the Committee, thank you for this
opportunity to testify on governing AI through acquisition and
procurement from the perspective of a small business.
---------------------------------------------------------------------------
\1\ The prepared statement of Ms. Raj appears in the Appendix on
page 58.
---------------------------------------------------------------------------
My name is Devaki Raj. I am honored to be here representing
CrowdAI, a startup developing no-code artificial intelligence
tools since 2016. Until a recent acquisition by Saab, I served
as CrowdAI's CEO and Co-Founder.
I admire Chair Peters and Ranking Member Paul respectively
for their leadership on AI initiatives and on improving Small
Business Innovation Research (SBIR) procurement. Today, there
is an emphasis on procurement of commercial technology.
However, AI needs to be procured in a manner that reflects this
novel technology.
First commercial off the shelf AI solutions need government
curated data to be mission ready. Second, AI procurement needs
to include ongoing AI retraining to support it. Third,
commercial AI technologies must undergo rigorous testing and
evaluation. Finally, it is important to establish paths to
programs of record for small businesses through project
transition milestones.
First, for government missions be it public health or
homeland security, commercial AI does not just transfer out of
the box. While the tools to create, modify, and operate AI are
available commercially, the algorithms themselves are trained
on commercially available datasets.
For example, imagine an AI algorithm built for self-driving
cars used to analyze nighttime drone video during a maritime
search and rescue mission. Yes, both models were trained to
identify vehicles, but their domains, the operational context,
and sensors are different.
Robust AI must learn from specific domain mission data it
is applied to. If not, models remain brittle. It is important
to note that there are constraints that come with the use of
government data, statutory limitations, privacy, access,
security, etcetera. Appreciating this, offices desiring to
implement AI should curate their domain specific data to
accelerate development, testing, and transition to operations.
Second, AI is a journey not a destination. AI procurement
must include ongoing AI model retraining for continued
operational relevance. In 2018, CrowdAI collaborated with the
California Air National Guard to automate wildfire mapping
using MQ-9 drones, a collaboration we proudly continue.
Our predictive models performed extremely well in Northern
California's forested regions where wildfires were common, and
we had access to government furnished data. However, as
evolving wildfire epicenters shifted to urban areas in the
South, our models required retraining to maintain operational
relevance.
While AI models are flexible, they still require
contracting officers to include model training and contracts,
ensuring alignment with evolving mission data. It is a dynamic
process akin to software updates, not just a one-time
procurement. Third, open publicly available AI code requires
rigorous testing and evaluation. Today, anyone can go online
and download an open source AI model.
This poses a challenge for government selection panels
because they have little expertise of verifying commercial
claims. For example, we developed an AI model to identify
remote airstrips often used for drug trafficking in South
America. From a statistical standpoint, our model performed
expertly, finding 100 percent of the airstrips.
But from a mission perspective, it generated excessive
false positives by also identifying significant amounts of dirt
roads, which was fundamentally detrimental to intelligence
analysis. I share these negative results with you to show that
evaluating AI is not simple.
It is therefore critical that procurement activities
include both qualitative and quantitative evaluation for
evaluation metrics for AI throughout both the solicitation
process and post-delivery phases. Finally, small business AI
procurement should have project transition milestones.
While SBIR's phased approach offers a structured path for
validation to transition, the increasing number of awardees
contrasted with fewer transitions suggests misalignment of
incentives. I believe that is crucial for any government
procurement to have clear transition milestones for a path to a
program of record.
The Naval Air Warfare Center's record of including
transition milestones in SBIR awards is a shining example. In
conclusion, the needs and resources of government missions are
unique, requiring tailored AI solutions.
Procurement vehicles must be modernized to reflect the
iterative nature of AI for government missions and introduce
stringent standards for testing and evaluation. Thank you for
your time and I look forward to answering questions.
Chairman Peters. Thank you. Our next witness is William
Roberts. Mr. Roberts is the Director of Emerging technologies
for ASI Government. He previously served as the Head of
Acquisitions for the Joint Artificial Intelligence Center
(JAIC), and Chief Digital and AI Officer (CDAO) for the
Department of Defense (DOD).
Before that, Mr. Roberts was the contracting officer for
the Office of the Secretary of the Air Force and worked on
acquisition policy for the Department of Defense Education
Activity (DODEA). Mr. Roberts, welcome to the Committee. You
may proceed with your opening remarks.
TESTIMONY OF WILLIAM ROBERTS,\1\ DIRECTOR OF EMERGING
TECHNOLOGIES, ASI GOVERNMENT
Mr. Roberts. Thank you, Chairman Peters, Ranking Member
Paul, and distinguished Members of the panel. I am Will
Roberts. As Chairman Peters mentioned, I am the Director of
Emerging Technologies for ASI Government.
---------------------------------------------------------------------------
\1\ The prepared statement of Mr. Roberts appears in the Appendix
on page 66.
---------------------------------------------------------------------------
Previously, I was the Director of Acquisition for the
Department of Defense Joint AI Center (JAIC), and I have a
particular passion for government contracting. During my time
at the JAIC, I became very aware of the procurement related
challenges to buying and delivering AI to the end user for
adoption, for true, real adoption.
I also developed a genuine belief that the acquisition
professional, and specifically the contracting officer, holds a
very historically important role for us in this space, but they
have to be trained so they can step up into this historic role.
Currently, they are not, and this is the basis of my testimony.
Considering the historic role of the contracting officer,
you could look at U.S. history as a string of transactions. It
is a series of deals that led us and brought us to the Nation
that we are today. That is because since the Revolutionary War,
before we became a nation, we have always relied on industry to
deliver.
The government has always connected the services and the
innovations to the mission for the benefit of the citizens, but
it is the ingenuity of industry that has brought us the
innovations, the stuff, the airplanes, the ships, the tanks,
the things that brought us to the moon in the 1960s.
This is all spelled out in the four corners of a string of
business transactions between the government and industry
created by the American Government dealmaker. Now we are facing
a new chapter in our history with technology we have never seen
before, and our American Government dealmaker is called to
action once again.
The question that the government must ask itself is not how
to develop it. If we are to follow the historic path that led
us to be the successful nation we are today, and the question
that the government must ask itself is how to buy it.
This question of how to buy it was my life for the last
four years. It is complicated. It requires people who take
their jobs very seriously. It led myself and my team and others
in this space to rethink the way we do procurement within the
bounds of the law. It required a special talent and one that
had to be learned and developed.
First and foremost, the diligence AI acquisition official
must realize that AI is a means to the end, and the end is
always the mission. We are never really buying AI. We are
buying an enhancements to our mission. Until acquisition
professionals realize that they will never really deliver
anything truly valuable to the end users.
But within the four corners of the contract vehicle, the
parties negotiate very important intellectual property terms,
which require knowledge of the various components of AI, the
data rights, the cloud, the platforms, the infrastructure, the
trained and untrained model, all of which could have a separate
intellectual property strategy, which could make or break the
project.
Within the four corners of the contract, the parties decide
upon the perimeters of the responsible and safe use of the AI
and its associated risk mitigation. Within the four corners of
the contract, the parties agree upon how this will actually be
delivered, really delivered and adopted--not just talk.
Through iterative methodologies, through flexibility, and
being able to pull out if the project is not working to prevent
wasting taxpayer dollars. Within the four corners of the
contract rests the fate of our success in truly delivering AI
into the government. The American Government dealmaker and
their role becomes very important.
I will close with my two recommendations. One is to provide
more contract authorities for contracting officers across the
entire government. We had the privilege of using a lot of
various authorities, but they were only available to the
Department of Defense. These are tools that should be used by a
skilled contracting officer in her tool belt. The skilled as is
the emphasis there because the tools are useless if you do not
know how to use them.
My second and more important point is there needs to be a
robust training program that completely re-skills the
acquisition workforce. The AI Training Act was a great step
forward, but more needs to be done. There needs to be three
core competencies. One on AI technical knowledge, which the Act
covers, the functionalities and the risks. But the second two
actually lead to true delivery.
One is AI business acumen, knowledge of the unique AI
market. Many of the members in the market which have never
worked with the government before. The third is AI unique
contract domain knowledge, to include internet protocol (IP),
ethics, agile contracting, and use of these various contract
authorities.
I think it is only then that we would really realize the
benefits and the savings of the costs and savings to the lives,
and the benefits to the welfare and defense of our Nation.
Thank you.
Chairman Peters. Thank you. Our final witness will be
introduced by Ranking Member Paul.
Senator Paul. We are pleased today to have Michael
Shellenberger here with us. He has become one of the most
prominent contemporary writers on the scene with regard to
censorship. He has been a long time writer on the environment.
But has come to prominence to a lot of people's attention
because of his being chosen by Elon Musk to look at the Twitter
Files and to see firsthand the interaction between government
and a large social media company, and how censorship was
brought about by the government pushing and forcing a social
media company to adhere to a government interpretation of
policy events.
We are excited to have him. Michael graduated with a
Bachelor of Arts (B.A.) in Peace and Global Studies from
Earlham College, has a Masters of Arts (M.A.) in Cultural
Anthropology from the University of California Santa Cruz, is
the Founder and President of Environmental Progress from
Berkeley, California. And Michael, we are happy to have you
today. Thanks for coming.
TESTIMONY OF MICHAEL SHELLENBERGER,\1\ FOUNDER, PUBLIC
Mr. Shellenberger. Thank you very much. Chairman Peters,
Ranking Member Paul, and Committee Members, thank you for your
stated concern with the implications of AI for our civil
liberties and our Constitutional rights, and for requesting my
testimony. I am honored to provide it.
---------------------------------------------------------------------------
\1\ The prepared statement of Mr. Shellenberger appears in the
Appendix on page 75.
---------------------------------------------------------------------------
The ability to create deepfakes and fake news through the
use of AI is a major threat to democracy, say many experts. The
Washington Post recently reported that AI generated images and
videos have triggered a panic among researchers, politicians,
and even some tech workers who warn that fabricated photos and
videos could mislead voters in what a United Nations AI adviser
called in one interview, the deep fake election.
Never before in the United States have we been better
prepared to detect deepfakes and fake news than we are today.
In truth, the U.S. Department of Defense has been developing
such tools both for the creation and the detection of deep
fakes for decades. Before elaborating on this point, I want to
emphasize that I view AI as a human not machine problem, as
well as a dual use technology with the potential for good and
bad.
My attitude toward AI is the same fundamentally as it is
toward other powerful tools we have developed, from nuclear
energy to biomedical research. With such powerful tools,
democratic civilian control and transparent use of these
technologies would allow for their safe use, while secretive,
undemocratic, and military control increases the danger.
The problem in a nutshell is not with the technology of
computers attempting to emulate human thinking through
algorithms, but rather, who will control it and how they will
do so. There is a widespread belief that users already choose
their content on social media platforms.
In reality, social media platforms decide a significant
portion of what users see. The heavy lifting of censorship, or
what we call content moderation, was by 2021 already
overwhelmingly determined by AI. Mark Zuckerberg, the CEO of
Meta, said that more than 95 percent of the hate speech that
Facebook took down is done by AI, not by a person, ``98 or 99
percent of terrorist content that we take down is identified by
AI.''
Similarly, 99 percent of Twitter's content takedowns
started with machine learning. The problem with AI technology
today, funded by the U.S. Government, whether DARPA or National
Science Foundation (NSF), is fundamentally around the control
of these technologies by small groups of individuals and
institutions unaccountable to the citizens of the United
States.
The censorship industrial complex of government agencies
and government contractors has its roots in the war on
terrorism and the expansion of surveillance after 9/11. In
2003, DARPA told Congress that National Security Agency (NSA)
was its experimental partner, using total information awareness
and AI to detect false information.
In 2013, the New York Times reported on the NSA's use of
AI, which foreshadowed how counter disinformation experts would
nearly a decade later describe fighting misinformation online.
In 2015, DARPA launched the funding track that directly
resulted in AI tools that leading internet and social media
companies used today.
Their goal was to develop a science and practice for
determining the authenticity and establishing the integrity of
visual media. DARPA's warning eight years ago is identical to
the Washington Post warning about deepfakes last month. The
adoption of AI has grown alongside alarmism about deepfakes,
and misinformation and disinformation more broadly.
Also in 2019, a new non-governmental organization (NGO)
called the Deep Trust Alliance launched a series of events
called The Fix Fake Symposia. The Deep Trust Alliance described
itself as, ``the ecosystem to tackle disinformation'' and its
website invited audiences to join the global network, actively
driving policy and technology to confront the threat of
malicious deepfakes.
Yet, the goal of this Deep Trust Alliance appears to be to
advocate for policies to censor and even criminalize digital
harms. The head of the organization said that laws needed to be
extended to digital harms. There needs to be a set of practices
across social media platforms.
It was during this period that the U.S. Government, DHS,
created Election Integrity Partnership (EIP) to censor
elections skepticism. The year afterwards, it created a
project, The Virality Project, to censor COVID skepticism and
COVID criticism, or most famously, the Biden White House
demanded widespread censorship by Facebook of what Facebook
itself called, often true documentation of vaccine side
effects.
While social media platforms use AI to identify and censor
content, the decisions of what to censor, of course, remain in
the hands of humans. The Federal Trade Commission (FTC) in June
of last year warned Congress specifically about the dangers of
using AI for censorship, urging great caution.
Good intentions were not enough, said the FTC, because it
turns out that even well-intended AI uses can have the same
problems like bias, discrimination, and censorship, often
discussed in connection with the uses of AI.
My recommendations to the Congress, rather, are that we
have much stricter oversight of these programs, making sure
that we have greater understanding and greater control of how
these censorship technologies are used. They should be in the
hands of users, not in the hands of big platforms working with
big government. Thank you very much.
Chairman Peters. Thank you. Mr. Roberts, you spoke about
the responsible use of AI in your opening comments. My question
for you is, from your experience at the Joint AI Center, would
you tell the Committee more specifically about what responsible
risk management actually looks like at each stage of the
procurement process?
Mr. Roberts. Yes, Senator. I will mention that it was an
evolving process when I started at JAIC. We started with the
two levels of trustworthiness, which was the trustworthiness
and the functionality itself, whether it will work or created
distrust.
But the trustworthiness and the safety and responsible use
became a big theme for us. It started in the very planning
phase. This is the first instance that we noticed we had to
have a more balanced team.
It was not like a relay race where the money people, sent
it to the contracting, and the program manager sent it over,
and the testers and evaluators were at the very end. We needed
to have the input from testing and evaluation from the ethics
policy professionals and the end users at the very planning
stage, so that the team worked more like a football team where
we all ran the ball down the field at the same time.
This was a cultural change for us. This is not common in
acquisition. We had to adjust accordingly for that. The two
areas where we can provide the most meaningful value in the
procurement process is in the evaluation phase and the testing
phase. Evaluation, where we can make the responsible use of AI
a discriminator for source selection, for award. Then the
testing, of course, we can make as a metric for successful
performance.
This was challenging for us though, because we did not want
to be too restrictive at the time. We wanted to bring in
players and companies. We started our efforts by turning it
around to industry and asking them.
We had five principles for responsible use of AI the
Secretary of Defense released, and we asked them to give us a
quality control plan of how they were going to live out those
principles. For the most part, it became this ongoing dialog,
since we were all in uncharted territory, about how to do
things responsibly.
But I will also mention that aside from some specific
projects, especially those that dealt with health and
warfighting, so much of our projects had low risk in terms of
responsible use. We were able to talk with our ethics person
and they were able to weigh the risks.
But for those that were high risk, we treaded carefully,
and we worked hand in hand with industry, specially making sure
that when the contract was created, the post award performance,
which is where it really matters, was all set up to make sure
that as this AI was being delivered, it was being done in a
responsible way.
Chairman Peters. Thank you. Ms. Raj, right now, we are
hearing from our Federal procurement officers that they are
basically being bombarded by companies wanting to demonstrate
the promise of their products, and they have no shortage of
promises that they are presenting.
My question for you is, how can we ensure that the Federal
agencies avoid being caught up in what is clearly AI hype right
now around the country and spend taxpayer money responsibly on
services that will actually deliver tangible results for the
American people?
Ms. Raj. Thank you, Senator, for the question. As a former
small business owner, we have to continuously rise against the
same noise to provide the best quality computer vision for our
customers.
We first started working with the U.S. Government in 2017,
and in 2017 we were one of the handful of Silicon Valley
companies working with the U.S. Government. Now we see large
tech companies with massive internal resources, or a handful of
companies that are being massively funded by venture
capitalists, in the hopes that they can land meaningful
contracts from the U.S. Government in AI.
I think I have two points. The first is point one and three
of my oral testimony. For Federal agencies looking to bring in
AI technology, it is important to have datasets that are
curated and readily available during the contracting
solicitation time, as well as during when that contracting is
evaluated.
Models can be benchmarked even before the technology can be
acquired. Often when we worked on contracts, waiting for
government furnished data was the longest lead time on
contracts. For faster evaluation, the less taxpayer money is
spent. I want to commend National Geospatial Intelligence
Agency (NGA). They have been doing a great job getting their
data ready for AI.
Second, to your point, right, anyone can download AI models
off the internet and claim AI expertise. Again, having testing
and evaluation datasets ready, both at the contracting process
and throughout the entire execution of the contract.
It will be easier for procurement officials to be able to
both find and evaluate tech through quantitative and
qualitative means.
Chairman Peters. All right. Thank you. Dr. Li, as
legislators work to establish key values for the American use
of AI, the idea of explainability has received substantial
attention. However, building responsibility and trust into the
use of AI seems like it is going to require more than just an
understanding of the math behind a model.
Could you offer some specific suggestions as to how we
ought to prioritize when evaluating AI systems for potential
use by Federal agencies?
Dr. Li. Yes, thank you for the question. Actually,
explainability is a very much used word in the talks of AI. If
we think about it, it is actually a very nuanced word. I will
give you an example that is not AI. For example, you talk about
math.
When a bottle drops, we can actually use a Newtonian
mathematical equation to explain why the bottom drops. But when
it comes to the usage of Tylenol, other than some doctors and
biomedical researchers, even as a consumer, I do not know how
to explain how Tylenol works, yet there is an explainable
process of how the Federal Government has regulated it so that
I can trust it.
Or another example of using Google Map to go from point A
to point B. There, the explainability is not in mathematical
equations nor in regulatory framework. It is more about the
options it has provided to me, the fastest route, or the avoid
the tolls, and so on. I am using these three examples to show
you explainability is a very nuanced term and depend on the use
cases. It depends on the system approach.
We have to think about it carefully. Again, I advocate for
a systematic approach in thinking about explainability, but do
put the human values, as well as our society's value at the
foreground of this, and depending on how we use it, will
require a different kind of explainability.
Chairman Peters. All right. Thank you. Ranking Member Paul,
you are recognized for your questions.
Senator Paul. If it is OK, could I defer my questions and
allow Senator Hawley to go?
Chairman Peters. Yes, that is fine. Senator Hawley, you are
recognized for your questions.
OPENING STATEMENT OF SENATOR HAWLEY
Senator Hawley. Thank you very much. Thank you, Senator
Paul. I thank you, Mr. Chair. Thanks to all the witnesses for
being here. Mr. Shellenberger, I want to start with you, if I
could. I am so glad that you are here with us today, and you
are here at a significant time. I am looking at a piece from
yesterday, I think it is, yes, that you published, U.S.
Intelligence Dangerously Compromised, Warned CIA and FBI
Whistleblowers.
You are not the only one to report this, of course, but I
was reading your report on it this morning. This is something
that you have been warning about for quite some time. The
allegations stem from a whistleblower who has come forward to
the House, a whistleblower from the Central Intelligence
Agency.
I have the letter, the relevant letter here from the House
Oversight Committee. The whistleblower alleges that a CIA team
was paid to change its assessment of the origins of COVID-19.
Do I have that broadly correct? Is that your understanding of
the report?
Mr. Shellenberger. Yes, sir.
Senator Hawley. This is obviously a bombshell report.
Deeply troubling. I am glad that the House is going to look
into it. We should look into it. What caught my attention, as
you point out in your article on this, that the government has
deliberately violated the COVID Origins Act, which this body
passed unanimously, which the House passed, the President
signed into law.
Maybe wasn't so happy about signing it into law, but he
did. It is the law of the land, and which required that all of
the government's intelligence on the origins of COVID be made
public. Instead, what the Administration did was offer up a
summary, which they then in turn heavily redacted.
You point out that in addition, the Administration refused
to report the names of scientists who fell ill at the Wuhan
Institute of Virology in 2019, despite the fact they know the
names.
The intelligence community knows the names. Now, you are
absolutely right to say this is a violation of the COVID
Origins Act, and I would know because I wrote it. I am not very
happy about the fact that this Administration continues to
flaunt, flout, completely ignore public law passed, again,
unanimously by the U.S. Senate.
For what end? I cannot tell. I cannot figure out why in the
world. I do not know what partisan gain there is to it. Why in
the world they want to lie to the American people. You conclude
your article by saying, the government has become extremely
comfortable with lying to us. Just explain what you mean by
that and tell us why you think this is so significant.
Mr. Shellenberger. Sure. Just on the very specific point,
if we were the first to identify the three people that
contracted the coronavirus in China. They were the people
working on gain of function research in the Wuhan Institute of
Virology.
The Wall Street Journal confirmed our reporting two weeks
later, and then I think it was one week after that or a few
days after that, the Director of National Intelligence (DNI)
report came out and it did not reveal this information. We had
multiple sources. We have no idea if the Wall Street Journal's
sources were the same.
But I think we are clearly seen a lot of abuses of power
occurring in multiple executive agencies. We have seen it with
the FBI. One of the things that we noted yesterday was that we
saw perverse incentives in the FBI to go after so-called
domestic violent extremism (DVE), pulling an agent off of
things like child exploitation, onto really hyping a set of
cases that particularly appeared to be aimed at spreading
disinformation around the idea that there is a significant
increase of domestic extremism when we do not think that the
evidence shows that.
Now we see this report that came out that suggests that
there is an FBI whistleblower who says that six of the seven
analysts had said it was a laboratory origin and that they had
reversed their position in some exchange for some sort of a
salary bonus or some sort of a financial incentive.
We keep documenting it. We just keep finding agencies and
agencies, DHS involved in trying to create a disinformation
governance board. The censorship industrial complex, we just
keep finding new parts of it. In the research for this
testimony, we discovered this Deep Trust Alliance that had what
appears to be ties to the security and intelligence agencies of
the U.S. Government, appears to be trying to set itself up,
although it is now kind of ghosted after 2021.
But it appeared to be trying to set itself up to decide
what is reality and what are fakes for people, and I think it
should have a chilling effect that is not how we do free speech
in America.
We do not have government agencies, we do not have cut outs
or front groups that appear to have support from those
agencies, telling the American people what is true, what is
false, or telling social media companies behind the scenes what
they should be censoring.
Senator Hawley. To that last point, we now know, thanks to
the case of Missouri v. Biden, that that is exactly what this
Administration, from the White House, to the FBI, to the State
Department, to the CDC, to Cybersecurity and Infrastructure
Security Agency (CISA), have all been meeting with the social
media companies for years now, giving them direct commands
about what to censor and takedown, naming specific accounts and
specific speech they want suppressed, threatening the social
media platforms if they do not do it.
Remarkably, and I am quoting the court here, the Fifth
Circuit Court of Appeals, and there is a huge evidentiary
record. Everybody don't take my word for it. Go read the
record. It is all on the record from the District Court. What
the Fifth Circuit said, it is remarkably that the social media
platforms all complied. All of them.
They all agreed to be tools of the U.S. Government and to
censor what they were ordered to censor, to suppress the speech
they were ordered to suppress. You are a journalist. Tell us
about the threat to the First Amendment--and by the way, just
for the record, I think it is important to establish, the
Federal Court of Appeals said directly in no uncertain terms,
this was a clear violation of the United States Constitution.
The First Amendment does not allow the Federal Government
to use private companies to censor what they would not be able
to do it themselves, and that is exactly what this
Administration has done.
Tell us, as a journalist, the threat to free speech, to
freedom of the press from this kind of collusion between a very
powerful government trying to hijack every media company it can
get its hands on.
Mr. Shellenberger. Sure. If you start on the issue of the
COVID vaccine, for example, public interest advocates spent a
very long time requiring the pharmaceutical companies to list
the side effects of their drugs in their advertisements.
Here we saw a situation where people were sharing
information about the side effects of the vaccine on Facebook
and other social media platforms, and the White House demanding
that it be taken down. Facebook complying, acknowledging that
it was often true information.
We also saw that Facebook's own internal research showed
that actually it increases vaccine hesitancy when you censor
those stories. That people, if they want to be comfortable with
a new drug, they need to be able to talk it out of it. Facebook
told the White House that actually it would backfire.
The White House insisted. Facebook caved in because
according to the Facebook executive Nick Clegg, he said, well,
we have this other business that we need to do with the White
House, which is the data flows. Meaning we need the White House
to help us negotiate with the Europeans to bring our data back
to the United States.
I think the Fifth Circuit Court did a great job in
identifying the clearly coercive measures, but I do not think
it went far enough because the First Amendment, it prevents the
government from abridging or infringing on free speech.
Offering an incentive to social media platforms such as
helping them with their dispute with Europe in exchange for
censoring often true content, though, of course, the First
Amendment also protects false content, I think is a very
chilling effect.
I think it is very disturbing. Anybody that cares about
holding powerful entities to account should be disturbed by
what we saw take place on Facebook, on Twitter. I think we have
to remind ourselves--and what disturbs me, when I hear sort of
the conversation around AI coming into it sort of with a
beginner mind, I hear a lot of talk about how to protect the
public from harm.
We have to protect the public from harm. What people are
saying is that we need to censor speech, censor certain voices,
censor disfavored voices because of this idea that it will
cause real world harm. This is a well-documented phenomenon
that psychologists have measured where over decades people have
grossly expanded their definition of things that cause harm.
I think this should be a moment for a reset. That free
speech is almost absolute in the United States, with a few
exceptions, around immediate incitement to violence, around
fraud, around child exploitation. But we allow very open
conversation in the United States. It is what makes us so
special. It has been a chilling effect. As a journalist, I
personally have been censored by Facebook. I think the
platforms are out of control.
Senator Hawley. Thank you, Mr. Shellenberger. Thank you,
Mr. Chair.
Chairman Peters. Thank you, Senator. Senator Blumenthal,
you are recognized for your questions.
OPENING STATEMENT OF SENATOR BLUMENTHAL
Senator Blumenthal. Thanks very much, Mr. Chair. As you may
know, Senator Hawley and I have authored a framework for
protecting the public against some of the perils, I would like
to think all the perils, of AI through an oversight entity that
would license new models, would require testing and red
teaming, require transparency, a notice when AI is used, but
also accountability on the part of AI companies, and a means of
enforcement.
That is a very rough summary. But the point is that we have
been working through the Judiciary committee, the subcommittee
of the Judiciary committee on Privacy, Technology and Law, and
I want to thank Senator Peters for his focus on AI in this
Committee as well. But we had a very useful and productive
forum yesterday.
My hope is that many of the people who came before us in
that forum will agree to come before our Subcommittee and
testify in public, under oath, and give us the benefit of their
views, which they expressed to us privately as Senators,
because I think the public has a right to know, and we should
be putting these views on the record.
Transparency in our process is as important as transparency
in the disclosure of how algorithms work, how AI works, so the
public has a better understanding of it. I think we are going
to be pursuing our framework, putting it into legislative form.
We have gotten tremendously positive response. I would say
that almost all the provisions of our framework, in fact, all
of them were endorsed by one member of that group yesterday or
another, and the vast majority of the group endorsed the core
provisions of our framework.
We are making progress. I think what is important about the
chair's actions here is he has sponsored a bill called the
Transparent Automated Governance Act. I do not know whether he
has mentioned it yet, but I am going to be joining as a co-
sponsor. What it requires is more disclosure to the public
about when they encounter AI.
In fact, a Subcommittee of this Committee, which I chair,
the Permanent Subcommittee on Investigations (PSI), held a
hearing recently on Medicare Advantage, which is a government
program, health insurance, that provides key coverage for
people who are eligible for Medicare, and they can choose to go
into this program.
I am vastly oversimplifying, but not in any way diminishing
the key point here, which is, Medicare Advantage insurance
companies are using AI to make decisions about what they will
cover or not. Some of these decisions cause denial of coverage
to people who then have to try to access the system to get
overruled a decision which essentially is made through AI.
I know that I am somewhat simplifying, but the key here is
that AI is making decisions that hugely impact people's lives,
often without their knowing it. That is why the chairman's bill
I think is so important. I see a number of you nodding, I hope
it is in agreement, with my basic point, which is that
disclosure is very important here.
I will just ask you, perhaps beginning with Dr. Ghani, will
you support a legal obligation for AI companies to disclose
when a person is interacting with an AI or decisions being made
about them using AI?
Mr. Ghani. Absolutely. I think it has to be critical. I
think more than that, it is not just disclosing when you are
interacting with the system, because you might be interacting
with a human that is informed by an AI.
It is nuanced, but you want to make sure that if a decision
is being supported by AI, it is not just the person knows but I
think even more importantly, they need to have recourse. That
exists in other areas. In financial services, we have these
things called adverse action notices.
When a decision is made against you for denying you a loan,
the bank is supposed to tell you why, and then allow you to
change those characteristics, and then give you the loan if you
changed that. I think the same thing needs to be present as one
example of extending the current AI systems. I think
procurement, again, is the perfect vehicle to force that.
Senator Blumenthal. Thank you. Dr. Li.
Dr. Li. Thank you, Senator. I have to say, when you talk
about Medicare Advantage, this is the life I live in, home
caring two elderly parents who are chronically ill, and I have
personally experienced claim denials and getting on the phone
forever to talk about all these cases.
Yes, I think disclosure is part of a systematic approach to
how to use these powerful tools intelligently and also
responsibly. It is really important to recognize AI is a tool,
and every tool can be used to our advantage but can also have
unintended consequences. For example, we all take airplane
rides, and we know there is autopilot in the airplane.
Yet we know there is enough regulation and disclosure and
responsible use to feel safe to a large degree about this.
Right now, this technology is so new, we, multistakeholders,
need to get on the table and have a nuanced approach to these
critical issues in high risk areas, like disclosure,
trustworthiness, privacy preserving, all these issues.
Yes, thank you for your effort.
Senator Blumenthal. Thank you.
Ms. Raj. Thank you, Senator. These are issues that are
important to the American people. They are common sense
measures. We realize that we do not want to stifle innovation,
but there are guardrails in place that make it important, so
thank you.
Mr. Roberts. Yes, Senator, I also agree with your
statements. But I would say also that this is one of the
challenges of government contracting, is putting the language
in the contract versus once the contract is awarded, making
sure that these perimeters are followed.
What I have seen is usually when we award a contract, the
focus of trustworthiness on the functionality of the tool
itself usually takes a front seat, and then all the planning of
how we would do this responsibly sometimes takes a backseat. We
focus so much on will this work, will this benefit the users.
We are the largest, the Federal Government is the largest
buyer in the world. We have so much ability to put forceful
language in our contracts. But it is also what happens after
the award and the quality assurance measures we are taking to
make sure this responsible use is not taking a backseat.
Senator Blumenthal. Thank you. Mr. Shellenberger.
Mr. Shellenberger. Yes, absolutely.
Senator Blumenthal. Thank you. Mr. Chair, again, thank you
for your leadership. The chairman's bill was reported out of
the Commerce committee in July, and disclosure is also an
essential part of the framework that Senator Hawley and I have
advanced.
I am looking forward to supporting his bill, and perhaps in
our efforts to combine our ideas and our forces, taking them up
together. But this panel has offered some really important
insights and I really want to thank you all for being here
today. Thanks, Mr. Chair.
Chairman Peters. Thank you, Senator. Thank you for co-
sponsoring the bill. We will hopefully move it through the
Senate as quickly as possible. I need to step aside briefly for
another committee hearing that is going on to ask questions, so
I am going to pass the gavel to Senator Hassan. But before I do
that, Ranking Member Paul is recognized for his questions.
Senator Paul. Thank you, and thanks to the panel for being
here today. I think we have had a good discussion, pros and
cons of AI, how we can kind of control it to make sure it does
not lead to abuses.
But I think as we have, and if you think about potential
uses, we had a hearing a month or two ago talking about
classified data. We have like 25 million bits of data or
something sitting out there. It is supposed to be declassified
after 25 years. No human is ever getting through it. We need
help.
Something like AI could be of great benefit. But as we have
talked about different ways to try to control AI, either it is
contracting, or transparency, or this rule or that rule, I
think what we are missing is how far apart we actually are. We
think we are all together on let us have some controls, but I
do not think we are very much together, if you think about the
most basic of rights would be our Bill of Rights.
Among the Bill of Rights, the first one is supposed to be
one of the most important. The Supreme Court says it has
special scrutiny for the First Amendment, and yet we have
absolute complete disagreement on the First Amendment.
The idea of whether or not it is in breach of the First
Amendment for the FBI and the Department of Homeland Security
to meet on a regular basis with Twitter and Facebook and talk
about the content, what people are saying, people's speech, and
limit that.
If you look at the court case, it is worse than that. It is
not just talking about or notifying, it is actually threats to
say that we may take away your Section 230 if you do not comply
and take down this. The Section 230 is the liability
protection, we may get rid of that.
We may institute antitrust rules to try to break your
company up, which they are apparently going after Google now
anyway. We also may notify the top guy, the President will be
informed that you are not taking down this information.
But even more than that, I think they went even further. It
is my understanding that when Twitter said, oh, yes, we will
take this down, but it is a heck of a lot of work, can you pay
us? I think the FBI actually paid them $3 million. Mr.
Shellenberger, can you comment on did Twitter take money from
the FBI to take down content?
Mr. Shellenberger. It did. You are absolutely right that
that happened. There was some controversy around it because
Twitter was being reimbursed for the time that it was spent in
helping FBI. But nonetheless, it appeared as though Twitter had
refused that money previously because they recognized there
would be a conflict of interest.
After the former Chief Counsel for FBI, Jim Baker, came to
Twitter, they changed that policy and they did take that money
from the FBI after working with them on this.
It was one of many, what the Fifth Circuit Court recently
called it, kind of close nexus, which is the kind of thing that
you are worried about, a close nexus in terms of censoring
content online that we saw. To see the financial incentives
there was very troubling.
Senator Paul. This is sort of what concerns me, because we
think, oh, through contracting or through rules or
transparency, somehow we are going to control this.
When I am concerned that I do not think there is one Senate
Democrat that has criticized the FBI and Department of Homeland
Security meeting on a regular basis to discuss taking down
speech.
Even true speech, as you mentioned, the Virality Project,
stuff they said, well, so-and-so had the vaccine and they are
in the hospital today, and it was verifiably true. Now, cause
and effect, everybody has an opinion on what caused what, and
yet they were taking that down, even admitting that it was
true, taking that down, and that we do not seem to have any
concern on one side of the aisle.
How are we going to get to fixing this problem with
contracting and transparency if we cannot even agree on what
the First Amendment is. This is a big deal, and I hope this
case, it has now been decided at the appellate court level. I
hope it makes it to the Supreme Court level so it can be
basically adhered to across the land.
But, as much as I want to say, oh, let us all get together,
we are going to hold hands and have transparency and fix the
contracting in AI, I am more than worried than ever. I am not
an opponent of AI. My son does AI. I am a fan of technology. I
think AI can do great things.
Yet, I am worried that one entire party, about half of our
country, representative wise, does not seem to have any concern
about the First Amendment, about the FBI meeting. Now, if this
were the 1960s, it would be interesting where it would be.
In the 1960s, if and when people heard that J. Edgar Hoover
was meeting and looking through MLK's mail, looking through
Vietnam War protesters, at that point in time, the left was
much better than the right, and they were absolutely exercised.
They wanted to defend the First Amendment.
Somehow we have lost that. It really should not be
partisan. I do not care who is the President and I do not want
any President sending the FBI. The way I try to put it so
people can comprehend this. I will do interviews on television.
Let us say I am on there and the woman on television, the
man on the television interviewing me and I say, after you are
done, what would you think if the FBI called you and wanted to
sit down and discuss my interview, and because I said I do not
think masks work in the general public and do not prevent the
trajectory of the virus, that is my opinion, I can give you 25
studies to support it, but how would you feel if the FBI sat
you down and said, we are worried about that and we think that
is misinformation.
No broadcast TV would ever stomach that. No newspaper would
ever. The Washington Post would not stomach that. Yet nobody in
the left seems to care at all about the FBI meeting with
Twitter, which is arguably maybe more powerful than all of the
traditional legacy media anymore.
You mentioned that already in their algorithms, Mr.
Shellenberger, that 99 percent of it is being taken down
through artificial intelligence. You mentioned that a lot of
this originated in DARPA. I assume you mean the funding for
originating and discovering how to do artificial intelligence.
Is that what you are saying came out of DARPA?
Mr. Shellenberger. Yes. Also, and I did not get in as much
detail about it, but also obviously DARPA has been--or maybe
not obviously, but in the process of creating deepfakes and
then creating technologies to detect those deepfakes, we have
also seen, we know there is cases where they are actually
identifying persons of interest to be developing deep fakes
around.
I think there is a set of things going on that people are
not aware of that they should be aware of in the development of
these technologies. Then there is an ostensibly civilian
process been going on to try to govern these deepfakes, to try
to kind of establish some separate authority to decide what is
deepfake, what is not.
Senator Paul. To point out the problem, though, if DARPA is
in charge of a lot of this, they are a small, secretive group
that won't respond to me. They are the group that was going to
fund research in Wuhan to stick a furin cleavage site into a
virus, which turns out what COVID looks like.
Yet when I asked them, even today, I have been asking DARPA
for their information, they won't give it to me. Now I am
supposed to trust DARPA with artificial intelligence, an agency
that won't give me unclassified documents with proper request.
That is a real danger that DARPA, Defense Threat Reduction
Agency (DTRA), all these defense and intelligence agencies that
are developing this to spy on others are so secretive that we
don't have oversight. I think they are in government without
oversight, and I am worried about that as we move forward, that
we are sort of saying, oh, yes, we will just have oversight, we
will have contracting.
I am not against that. I think that is good. But how can I
contract something when I don't even know what their budget is.
Any comment on the secrecy of DARPA?
Mr. Shellenberger. The secrecy in some ways is the main
event and that is how the censorship has been taking place,
behind closed doors. It is not jawboning. They have sort of
said there has been this argument that politicians should be
able to get up there and criticize somebody publicly, but that
is not what was going on.
It was behind the scenes work. I am actually glad to hear
that they have been as unresponsive to you, Senator, as they
have been to us. We went to over 50 or 60 of these
organizations, most of which had some sort of government
funding, to just have a conversation with them about their
censorship activities, and not a single one of them agreed to
be interviewed. This is not the kind of transparency you would
expect from government contractors.
OPENING STATEMENT OF SENATOR HASSAN
Senator Hassan. Thank you, Senator Paul. I am now going to
recognize myself for the next round of questions. I am really
glad we are having this hearing. I am really grateful to all of
you for being here and being a part of it.
I want to start with a question to you, Dr. Li. I am Chair
of the Subcommittee on Emerging Threats and Spending Oversight
(ETSO), and I am concerned about potential public safety and
national security ramifications of AI. Dr. Li, you have raised
concerns about AI behaving in unintended or unpredictable ways
that could be detrimental to Americans.
What are the key considerations for Federal acquisition and
procurement policy to ensure the safety of AI?
Dr. Li. Thank you, Senator. I have been known to say there
is no independent machine values. Machine values are human
values. Especially when it comes to U.S. Government, I think we
really need to care about, as Senator Paul said, our
Constitutional values, our Bill of Rights, and all that.
In the procurement process, again, I think we must take a
systematic approach to ensure that the kind of investment and
support of the AI systems we want to develop and deploy reflect
our values. This includes starting from procurement of data,
the privacy issues, the bias issues, the trustworthiness
issues, all the way to the development of the system itself.
All the way to what in machine learning we call inference,
which is when you have developed the algorithm, now you are
ready to deploy it and do things. Here you, again, have privacy
issues, bias issues, and trustworthy issues, and the whole
ethical framework.
Again and again, I want to say that this is a powerful
tool, and it has good and bad sides. We need to take a
systematic approach, and every step of the way we should apply
responsible and ethical values to this----
Senator Hassan. Acquisition--to the standards that we use
in acquiring this technology.
Dr. Li. Yes.
Senator Hassan. Thank you. Mr. Roberts, I want to turn to
you. Last year, I worked with Chair Peters to lead the
bipartisan effort to codify Federal Risk and Authorization
Management Program (FedRAMP), the Federal program run by
General Services Administration (GSA) that evaluates cloud
service providers (CSP) and their products for use by Federal
agencies. FedRAMP promotes efficiency and increases security by
having one agency responsible for vetting and approving these
companies and their products.
Now, I understand AI is a little bit different, but could a
FedRAMP type program, or an entity with aspects that FedRAMP
has, that is designed to specifically evaluate AI products be a
feasible option for evaluating the safety of those AI products?
Mr. Roberts. Yes, Senator. I think that would be
beneficial. So much of AI is software centric and we have had
our projects challenged by security requirements to include
authorities to operate in FedRAMP.
We found ways of working with that, especially with
prototypes. But we have been looking for new guidance and new
ways in which we can make this easier, especially for the
developers and contractors.
Senator Hassan. OK. Because what we are really looking
toward is conserving resources as we try to buildup these
standards and apply them in the acquisition process, but also
really having a centralized place and one set of standards that
are applied across. So, you think something like FedRAMP might
be applicable?
Mr. Roberts. I think that would be helpful, Senator.
Senator Hassan. Another question for you, Mr. Roberts.
During my time in the Senate, I have focused on reducing the
Federal Government's reliance on aging technology in order to
save taxpayer dollars, improve security, and obviously provide
better service to the American people. I am concerned that
Federal efforts to adopt and use AI may not be successful if we
continue to rely on legacy IT.
Could the Federal Government's aging infrastructure prevent
us from effectively adopting AI technology? Alternatively, are
there ways in which artificial intelligence could help agencies
convert from costly legacy IT to modern systems that provide
better services and are more efficient?
Mr. Roberts. Yes, Senator. I do not think legacy systems
will necessarily prevent us from implementing AI, but I do
think that the Federal acquisition workforce needs to be more
trained on making that analysis of whether to sunset a legacy
system, which has been done successfully in various areas of
the Federal Government, and bring about a more modernized, a
flexible system with little disruption, or to try to modify
that system through Application Programming Interface (APIs)
with AI functionalities.
But I would say that the best approach for an agency that
wants to introduce AI into their organization and their systems
that leads to the most success is to start small, and start
narrow, and start feasible, with minimal risks in terms of
responsible use.
We have seen many cases of that where it is just automation
through AI of business systems that create huge impact, that
bypass any risks toward ethics or responsible use. But also,
because they are done in isolation, they can yield immediate
impact to the end users.
Senator Hassan. OK. Thank you. Last question. Again, to
you, Mr. Roberts. I helped lead a bipartisan effort to codify
the General Service Administration's IT Modernization Centers
of Excellence, including the Center of Excellence for
Artificial Intelligence.
The AI Center of Excellence assists Federal agencies
seeking to use AI tools and acts as a centralized resource
center for agencies looking to develop policies around AI. In
your view, is the AI Center of Excellence equipped to provide
support to all agencies, especially smaller agencies seeking to
procure and adopt AI? If not, how could it be improved?
Mr. Roberts. Yes, Senator. We had a good relationship in
the JAIC with the GSA Center of Excellence for AI. I would say
that this is still a very challenging topic to try to at scale
create policies and guidance to deliver and procure and deliver
artificial intelligence.
I think right now it is still a pocket. They are a good
organization, but it is a pocket. Whereas it should not be an
anomaly. This should be more mainstream and widespread, and
hopefully they will contribute to that effort.
Senator Hassan. OK. Thank you. Senator Lankford.
OPEMNING STATEMENT OF SENATOR LANKFORD
Senator Lankford. Thank you. Thanks to all of you all. This
is an exceptionally complicated issue, obviously, as you are
trying to wrap your head around that, but some things are
pretty straightforward with this.
The most difficult thing that I continue to be able to hear
is the continual term everyone throws around, we just want to
make sure there is responsible use of AI, to which I always
smile and say, define that for me. I have yet to have anyone be
able to really wrap a good definition of what responsible use
of AI means and does not mean.
Does anyone want to try to jump in on that and give me a
concise definition of what responsible use of AI means? I am
seeing a long pause. This is not a trick question. I am just
trying to because this is an issue that we have to be able to
both regulate and be able to wrap around.
Everyone is saying, hey, there needs to be some kind of
controls here for responsible use. What does that mean?
Senator Carper. If I could interrupt for just a moment. I
am going to Chair this Committee for just a little while. He is
asking a great question. Do not be afraid to answer--and even
if you are going to change your answer later on, but give it a
shot, if you would, please. Go back to Senator Lankford.
Ms. Raj. Thank you for your question. AI is a powerful
tool, we believe, in large part that can keep people and
society safe. But fundamentally, as Dr. Li said, it is really
about the human knowledge guiding it.
I think that when you think about responsible AI and
implementing responsible AI, it is really about equal
protections of the American people. Making sure that everyone
can benefit from the AI system in an equal manner.
Dr. Li. Can I answer that?
Senator Lankford. Yes, ma'am.
Dr. Li. First of all, great question. AI is a tool, just
like many tools. When we ask about responsible AI, I personally
start with a responsible tool use. It is not just AI. I do
believe there is no one size fits all answer.
A responsible drug is different from a piece of financial
service product because every use case when rubber meets the
road is different, and when we think about these, we start with
the value.
There are very high level, Constitutional values America
represents, and within that framework, it is important to look
at the different use cases and define responsibility,
responsible, and trustworthiness within those framework.
Senator Lankford. Who would you suggest actually sets that
framework for what is responsible AI in each one of those
categories?
Dr. Li. I personally believe in multi-stakeholder approach.
I think, for example, going back to health care, right, if it
comes to drugs and food and all that, FDA should be a huge part
of it. So does the consumers and the industry, as well as
public sector.
Senator Lankford. It is not a simple issue, to say the
least on it. But it is one of those things that hangs out there
because as we discuss a regulatory framework, you want to hang
a regulatory framework on a set of values and to be able to say
these match our values.
We have not been able to get good clarity among all of us
on, how do you define what is that value of what is responsible
use of AI in each sector, and I encourage anyone who is
participating in this, because I agree, multi-stakeholder is
helpful. There is 320 million people smarter than any one of
us.
To have that engagement, as people have ideas on that, they
need to be able to contribute those because we have to set that
value set on this. Obviously, the Constitution is the first set
of values, but then we have to be able to set for each one of
those, what is that responsible use?
We have another big challenge in the Federal acquisition
procurement process that deals with software is who owns the
intellectual property. Because if we do weapon systems, there
is just, we are going to keep updating it, and we have to know
this.
With AI, it becomes particularly difficult on who owns the
intellectual property for this and when changes need to be
made. How do you know that you have a safe process to be able
to do updates on this?
Or if someone were to get into the system and to be able to
change some of the system, how do you know it and how do you
track that? The first thing on AI for me is on the procurement
process, how do we handle who owns it?
It is one thing to be able to buy it off the shelf and to
say we are going to use this product. It is another thing to
say this is a unique product that we are going to use as a
Federal product that has AI that is built into it. Somebody
want to try to step into that?
Ms. Raj. Yes. That is something that we had to deal with as
a vendor for the JAIC, licensing of the architecture, licensing
of the model, with licensing of the government data. That was
something that we had to really navigate alongside JAIC.
I think fundamentally, AI needs to be purchased in a manner
that reflects the updating nature of this rapidly changing
technology. To my second point, I mentioned that a contracting
should have the ability to include ongoing retraining.
We do not want taxpayers buying model 2.0 when weeks later,
model 4.0 is released. It is about really taxpayer saving money
while ensuring the latest technology is in the hands of the
U.S. Government.
Senator Lankford. Yes, we have a weapon system that was
delivered about five years ago that was delivered with
Microsoft 7 on board five years ago, and the entire system was
built on Microsoft's 7, and all the software that connects to
it is all built on Microsoft 7, which is not even updated
anymore.
We cannot have a situation where we are that far out of
date, especially with something like this. But the other
challenge is, when we do security checks for most Federal
systems, we are trying to inspect to make sure that there is
nothing within the system that is a problem.
That is uniquely challenging with AI, because we have to
get into how has it been trained, what is the decisionmaking
and the process, how do we verify that decisionmaking process
when we are verifying it for security, for biases, for all
those things?
If there are ideas that are out there that you have seen
for that type of verification, that would be exceptionally
important to us because we are not going to buy an off the
shelf item for AI and just assume it has good ethics that is
built into it, or it has good security built into it.
One of the thing that I want to be able to mention, then I
want to pass this on to other colleagues who want to be able to
ask questions. You had mentioned before about AI is a tool, and
I totally agree. There has been some dialog even here in this
Committee, about putting an AI representative in every agency.
I have real concerns on that because if you have an AI
representative in each agency, their job is to increase AI
usage. That would be like having a screwdriver representative
in each agency to find a way to increase the screwdriver uses.
It is a tool.
We need to treat it as a tool. I have concerns when there
is a focus on, it is the latest thing we talk about, so let us
proliferate that in the Federal Government. If it is a tool
that works, fine, but we need to have some ways to be able to
verify security, verify on the updating, verifying the IP
address, and then verifying what is not the IP, but who owns
the IP on it, and then verifying the whole process of updating
it, using it, and then also what is responsible use of it.
But I just have to be able to tell this body, I do not
think we should have an AI office in each one of our Federal
entities. I think we have AI specialists within our technology
folks, but not focused on trying to get more of it. Did you
want to make a quick comment on that, Ms. Li?
Dr. Li. I want to respond by saying AI is very powerful and
it is a horizontal technology. It is really important that many
of us leaders of our society have that knowledge.
This is why under the AI Training Act, Stanford HHI is
actually committed to creating educational opportunities to our
policy leaders and policy members of the policy world, because
it is not just one person or one person per agency's
responsibility.
It is our collective responsibility, and having that basic
level knowledge, and also in some cases, specialized knowledge.
It is really important to recognize that.
Senator Lankford. To the former chairman and the current
fill-in interim chairman of the day, thanks for your gift of an
extra minute there in time. I appreciate that.
Senator Carper. Any time. Thank you for those very
thoughtful questions and good responsive answers. Senator
Rosen, you are next.
OPENING STATEMENT OF SENATOR ROSEN
Senator Rosen. thank you, Senator Carper. Thank you to
everyone here. I appreciate all the work that you have done and
being here today, and I am just going to get right into it
because I know we think AI, it is all iterative and it is going
to learn on its own and do all of that.
But then we know it does not all build itself, and so we
need a Federal AI workforce, one that is trained. The White
House Select Committee on AI has worked to enhance the overall
effectiveness and productivity of the Federal Government's
efforts, of course, related to AI. Earlier this year, the
Select Committee on AI released the Strategic Plan for AI,
which identified significant challenges with the AI workforce.
I have a two part question for you, Mr. Ghani, and then a
follow up for you, Ms. Raj. Mr. Ghani, to that point, how do we
ensure the existing Federal AI workforce has the necessary
skills to buy, procure as we are talking about, build, and
field these AI enabled technologies?
Are there programs that currently exist to upskill our
Federal workforce? You can think about it, Ms. Raj, your follow
up will be, if you can speak to it, how can academia and
industry talent be leveraged to meet these dynamic needs?
Because it is not a static industry for sure.
Mr. Ghani. Thank you for the question. It is a little bit
related to the previous conversation on, do you train everyone
on everything, or do you have specialized people? I think this
is a dilemma that even the private sector has been focusing on
for the last 20 years in this space.
Where do you build one central team that is a tech team, an
IT team, an AI team, and then you have them help everybody
else? Or do you enable each agency, each department, upskill
them? For me to answer that--what has worked is the latter. It
is because, again, AI is not a generic tool, it is not
Microsoft Office, it is not Windows, it is not a word
processor. It is different. Then what you need to have is you
need to have it de-configured and used for specific
applications, specific programs, specific policy areas.
If you are bringing in tech people in a centralized way,
you need to now have them be trained in every single thing that
every single agency might do.
That is not possible. I think in my mind, the way to do
this is to augment the existing trainings of people within
different departments, within agencies, and enable them to do
their work in an AI augmented world rather than AI first
training.
To your question, Senator Rosen, on do these types of
programs exist? Not exactly. There are programs that exist in
pockets. The majority of programs that exist are in upskilling
and in several universities and several nonprofits, including
Carnegie Mellon, and Coleridge Initiative that I have been part
of.
We have created programs to train government agencies and
workers in the use of these types of technologies. But not
enough of them exist and they are not at the scale that we need
very quickly.
We need to enhance the training, but I think there are two
pieces there, and I will echo some of the things my colleagues
have said.
Senator Rosen. I am a former coder. I wrote software for a
living. I get it.
Mr. Ghani. They need to be kind of experience centered,
grounded in real problems, and have people solve these things,
which requires access to data, which requires access to real
problems and access to experts who can help.
These programs are expensive, and these programs do not
scale because you cannot just put them up and have Massive Open
Online Courses (MOOCs).
I think we need an investment in training government
agencies and not just in using them but being at the forefront
of helping design them. Because, again, the needs are very
different for governments than they are for the private sector.
Senator Rosen. Ms. Raj.
Ms. Raj. Thank you, Senator, for your question. It is
evolving at a rapid pace that even folks like myself in the
field have a hard time keeping up. There have been pockets of
incredible initiatives across the U.S. Government, namely the
MIT Air Force AI Accelerator, which is a multi-disciplinary
team of embedded officers and enlisted airmen who joined MIT
faculty, researchers, and students.
But what is really interesting about it is that leadership
at all levels participate in these workshops that are
reflective of the mission need. I believe it is
interdisciplinary teaming through AI workshops and training
alongside both industry and academia that make this possible.
But again, it is not a one-off workshop. It is a
continuously evolving relationship between academia and
industry.
Senator Rosen. Thank you. I want to move on a little bit to
AI procurement, because we have heard testimony in this
Committee about, of course, we all know how quickly AI is
evolving. This is no secret there.
Giving the breakneck pace of the AI evolution and, of
course, much iterations, how quickly is going to learn on its
own, I am concerned that our Federal Government's acquisition
process is just going to not be able to keep up with the rapid
pace of the development of the AI tools.
This question is for you, Mr. Roberts. How should the
Federal procurement process, how could we improve it,
streamline it, to be sure that AI products are purchased at the
speed of relevance, meaning that they are not rendered obsolete
before we even benefit from their use? How should our Federal
contracts account for the need to retrain a procured AI system?
They often require ongoing updates, security patches, and
monitoring.
I am a former computer programmer myself. I have been
advocating for what we call a software bill of materials (SBOM)
so you know--I guess like if you look at the back of a cereal
box, you know all the ingredients that are in there.
We should be able to know that for our government software,
so we have to make the appropriate changes so we know the list
of ingredients, if you will. If you want to speak to that.
Mr. Roberts. Yes, Senator. I would start by saying I think
the best way to change the processes is to invest aggressively
in our talent. I would say that there should be nothing short
of a mandatory AI training for all interns coming into the
acquisition workforce.
I think this is the only way that professionals coming in
will really understand. The reason why is twofold. I think one
is if we really truly believe this is transformative
technology, which I think it is. It will affect every mission.
Every contracting officer in the future will have to know what
this is.
Right now, they are set up for failure. The second reason
is, to acquire AI, it requires a major reskilling of the way
processes are done. There is a mantra in contracting that says,
poor planning equals poor contracting. What that is translated
into is that if you do not have all your ducks in a row about
exactly how this project is going to pan out, then you should
not release your solicitation.
Obviously, that does not work with artificial intelligence.
What we have, what we need is a team that works in an iterative
fashion, which is hard right now in the government culture.
What we need is a team that focuses on, we have something
called modular contracting in Federal Acquisition Regulation
(FAR) Part 39.
It is underused, but it is there in the Federal Acquisition
Regulation. But we also need performance that is based on value
add to the actual end user. There is a surprising lack of focus
in the end user just because there is so many rules and
regulation and compliance.
Senator Rosen. I am the last one here, so I am going to ask
my follow up question, which you have led me into perfectly. We
have some authorities at the Federal Government to acquire
these tools, maybe do this training. What authorities are
missing to help move us forward?
Mr. Roberts. Yes, Senator. In the Department of Defense, we
had a lot of the authorities we needed. The Federal Acquisition
Regulation did a lot for us, but we also needed other
transactions when it required.
We needed public, private partnerships, which is a great
tool for AI because it incentivizes industry for dual use
application. But that was the Department of Defense. These
authorities are not given to other civilian agencies who will
also be entering this field.
My recommendation would be to also provide that authority
to those civilian agencies, especially other transnational (OT)
authority, and things like partnership intermediary agreements,
public, private partnerships. AI is so diverse that these are
truly tools that match perfectly depending on the situation. We
have used them all.
Senator Rosen. Thank you. I see our chairman is back, so I
will turn it over to you. Thank you.
Chairman Peters. Thank you, Senator. Senator Carper is
recognized for his questions, so thank you for holding the
gavel.
OPENING STATEMENT OF SENATOR CARPER
Senator Carper. God, it was fun doing that again. I used to
be him. Now I am me. [Laughter.]
Right. I got to Chair a Committee on Environment Public
Works (EPW), which is pretty good gig as well.
All right. My name is Senator Carper. I have been called
worse. Welcome, everybody. We are delighted that you are here.
We had this big seminar yesterday, as some of you probably
participated in, or at least followed it.
I was asked by the press, the press stakeout after it was
over and I was leaving, and they said, well, what do you think?
Do you know a lot about AI? I told him earlier this year, when
we started getting really focused on this, I said I could
barely spell AI.
Another reporter said to me, how do you feel? Do you feel
like you are coming along on AI now? I said, I quoted the wife
of Albert Einstein. She allegedly was asked a long time ago if
she understood her husband's theory of relativity. She was very
smart woman.
She said, I understand the words, but not the sentences. I
feel that pretty well sums up my approach released earlier this
year on AI. Hope I am coming along a little bit, taking baby
steps, but we will eventually get there. I just remind us all
that our members tend to have broad and different backgrounds.
I got an MBA a long time ago, became a naval flight officer
(NFO) in Vietnam War, and many years retired Navy captain.
I have been Delaware's Treasurer, Congressman, Governor,
and Senator. So, I should be able to spell AI. I am not going
to give up and will eventually hopefully be able to make some
real contributions through this Committee and others as well.
Among the questions I would like to ask you, I have a couple of
them, and we will just see how much time we have. Dr. Li.
Where was your family from originally?
Dr. Li. New Jersey.
Senator Carper. New Jersey. West Virginia, here, so.
Dr. Li. Oh, great. I was a small town. Parsippany, New
Jersey.
Senator Carper. All right, good. Very good. Dr. Li, let me
just begin by thanking, again, all of you for being here and
for your efforts to help make us, I like to say guided missiles
as opposed to unguided missiles. But thank you for your
testimony and your thoughtful testimony on the impact of AI.
As we heard this morning, AI has a potential to transform
many aspects, not all aspects, but many aspects of our lives,
including the speed and effectiveness of government services.
We are servants. Our job is to serve the people of this
country.
The question here is for me is--how can AI help us to be
better servants and to serve the people in this country in a
wide variety of ways? Along with helping us with the spread of
information, helping us with respect to economic
competitiveness as a Nation, and help us with respect to other
changing worker responsibilities.
I am curious about how AI will be incorporated throughout
the Federal Government, specifically with regards to efforts to
streamline service delivery and waste reduction. I do not like
to waste my money. I do not like the waste of the taxpayer
money. I think that is probably true for all of us who serve
here.
Dr. Li, my question. What are some of the ways that you can
see artificial intelligence improving the delivery of services
to our constituents? That is pretty broad range because we want
to provide services in a broad range of ways, but just a couple
of examples, Dr. Li, of how AI can help us improve delivery of
services to our constituents.
Dr. Li. Thank you, Senator, for the question. First of all,
this is actually one of the upsides of this technology, is that
it can help productivity greatly. I want to continue on the
example earlier that, for example, Medicare Advantage Services,
right.
As a user of that for my elderly parents, it is actually
highly inefficient right now to have a conversation talking
about claims. You can just imagine, especially with these
language model technology, that it actually can greatly help
our Federal Government to become much more efficient in
handling claims and all that.
When I say efficient, I do not mean replacing humans. I
mean augmenting humans. This technology can serve as a
companion and working assistant to many aspects of our Federal
Government's work.
Senator Carper. If I could use an aviation term, it is more
as a copilot.
Dr. Li. Copilot is actually the fantastic term. Even in
software engineering now, we call it co-pilot. This is,
absolutely there is opportunity for a co-piloting. Also, there
are what we can call machine in the loop of human work.
For example, there are a vast amount of documents and
knowledge we have to sort through. Sometimes AI can become that
kind of copilot to help to preliminarily sorting that. These
are just simple examples, as long as we adhere to our
responsible framework.
Senator Carper. Doctor, thanks for that. Can you also
discuss for us how the adoption of artificial intelligence
tools will impact the Federal workforce on a daily basis, and
how these tools will impact how Federal agencies plan for the
future.
Dr. Li. A lot of Federal Government work is knowledge
based. Again, whether we are processing documents, making
decisions, and AI right now, especially in the recent
breakthroughs, a lot of these language based models are
extremely helpful in knowledge work, as a copilot.
In Stanford, we have colleagues working with the Internal
Revenue Service (IRS), looking at, for example, tax and looking
at fraud detections. I can imagine EPA looking at environmental
issues, understanding different aspect of environmental
problems. We can use this for example, firefighting and climate
help.
There are many ways that AI can help in Federal work. Like
my colleague here just said, if we empower our Federal workers,
if we continue to educate them, train them, their productivity
at work can be really elevated, and that is what I personally
hope to see. That is part of American leadership.
Senator Carper. Mr. Chairman, just one last thought before
I yield back to the real Chair. You mentioned the IRS. We have
a fairly new commissioner, right, in charge of the IRS. Daniel
Werfel is doing a great job, and we are actually beginning to
do a much better job, as the chairman knows, collecting taxes
that are owed by a lot of folks, including people of great
wealth and companies that are highly profitable.
The idea of having better tools for our folks at the IRS is
a big plus. I Chaired the Committee on Environment Public
Works. We were all about the climate change and the kind of
weather that we are seeing, weather conditions we are seeing
around the country, and fires, and so forth.
But we have our hands full, and our firefighters have their
hands full, and they can use some help. Maybe AI at the end of
the day can be of help there too. Thank you. Thanks, Mr.
Chairman.
Chairman Peters. Thank you, Senator Carper. Professor
Ghani, a couple of questions. We are going to wrap this up. All
of you have been great, but we are starting to wrap this up
with just a few more questions. Mr. Ghani, what do you see as
essential provisions of contracts involving AI systems that
governments are not currently incorporating and should be
required?
Mr. Ghani. I think simply what the majority of the
procurement process I have seen, it over focuses on the
mechanics of the system being procured.
I will give you an example. Many years ago, I was working
with police departments on these systems called early
intervention systems that were designed to identify police
officers who were going to use unjustified force in shootings
and detect them early.
When you look at procurement documents, request for
proposals (RFPs) for systems, what they talked about was
measuring what is the uptime of the system? Can people log into
the system? Does it show up? As opposed to the truly functional
requirements, which are, does it reduce police shootings? Does
it prevent those shootings? Does it help save people's lives?
What is happening is that the focus has been on the
mechanics, because that is easy to do. It is easy to measure.
It does not require that much thinking and effort, and so we do
the easy thing, and we forget the hard stuff.
There are many such other examples where we have contracts
that we get stuck into that are unnecessarily long term. We do
not allow the systems to get data out and put more data in. We
do not have them interoperate, because as we talked about, AI
systems do not work in isolation, they are connected to
different pieces.
There is a lack of customization and configuration. I think
there is a whole set of things. What we need to do is create a
much more holistic procurement process that has both
requirements around what did you designed the system to do, how
did you validate that it did what you wanted it to do, and how
are you developing a continuous monitoring process that is
continuing to do so? Most of those things, there are no
standards that exist today for that procurement, and we need to
create those.
Chairman Peters. A follow up on that. Many others were
nodding their head. This is important. What those requirements
are. The other question is, can we standardize those
requirements across a variety of government agencies, or does
it have to be more niche? You want to take the first stab at
that.
Then I see Ms. Raj, you are nodding your head, too, so your
thoughts on that. Then any others want to jump in. But, Mr.
Ghani.
Mr. Ghani. I think it is going to have to be an 80-20 thing
here, where there is a series of things that we can
standardize, and what we can standardize on is what do we ask
for. We can standardize if we need to figure out what values
this system is built on.
We need to figure out how you built it. What design choices
did you build? What artifacts were produced? Does it work for
people? Who does it work for? Who does it not work for? How is
it validated? Those are kind of high level questions that we
can standardize that need to be there.
Data in, data out, interoperability, configuration. What we
cannot standardize on is what specific values it should have.
That needs to come from the use case. That is where that
collaboration is going to happen, is people with expertise in
understanding the policy issue.
We are talking about service delivery. What is the goal of
that service? Is it to improve people's lives or is to save
money? When there is a tradeoff, who decides that? The
questions that we were talking about earlier, those are the
things that are not going to be standard. They are going to
depend on the specific use case and specific policy and the
specific service.
But everything, the process that was used to design that
system, to come up with those values, and to validate it, and
all the other things, those can be standardized. Again, I think
it is going to be an 80-20 thing, where 80 percent can be
standardized, and 20 percent will need to be customized.
Chairman Peters. Right. If anyone else has thoughts. But
Ms. Raj, you have some thoughts.
Ms. Raj. Yes. Thank you, Senator Peters, for your question.
At a high level, I think we can also think about it from the
other way, which is what data is available for AI to tackle
from a low hanging fruit perspective, right. I think there is a
way to organize, hey, this is the data that is available that
potentially can be used for automation.
This is the level of the responsible AI that can be applied
to these particular questions. Perhaps getting AI slowly
integrated via this is the data available, and these are the
questions that there are more guardrails around, that could be
a good way to start the standardization process. Because
starting standards without actually tying it to specific use
cases and mission need, then you will have that misalignment.
Chairman Peters. Mr. Roberts.
Mr. Roberts. Yes, Senator. I will piggyback on something
Professor Ghani said, too, about obsessing over the mechanics
in terms of valuing the performance. This is why it is so
important for acquisition professionals to be mission focused
and look at AI as an enhancement of the mission.
Because it is so easy to, as Professor Ghani mentioned, to
measure performance based on mechanics, based on how is it
working rather than is it working, is it actually valuable to
the end user? When you have a focus on that, I think you will
find that the acquisition team changes the way they approach
even risks and responsible use of AI.
They change the way they look at the intellectual property.
It all focuses on mission value and value to end user, which
trickles down into the way we look at everything.
Chairman Peters. Very good. Mr. Ghani, back to you. We have
heard previously about the need to audit AI systems to account
for drift and unintended consequences. The question for you is,
what procedures should be put in place to audit the government
and these AI systems within the government, and should this
need be accounted for upfront in the procurement process? And
if so, how?
Mr. Ghani. If so, absolutely, yes. I think the audit has to
be there. I think there are, in my mind there are three stages
of this audit, right. The first audit, and I think I keep going
back to and we are all sort of saying the same word, values,
right.
When we are designing a system, the procurement has to ask
for a system to help achieve certain values. We need to audit
those and validate whether those are the values we should care
about. That is the first audit.
That is not a technical audit, that is a values audit. Two,
when a system is being procured, we need to audit how did the
vendor, consultant, researcher build this system to help it
achieve those values? That is a technical audit. Three, once it
is deployed, it is not going to work in isolation.
In most cases it is informing, especially high stakes
decision, it is informing humans. You can audit the system for
what it outputs, but we need to audit, how does it interact
with this person, and how does this human decision change?
Because that is eventually what we care about, is this impact
on people.
We need to audit the interaction between the AI system and
the human system, and then audit the outcomes that it produces.
So those are the four pieces. It is not a one-off. It is a
continuous thing because it is going to change.
Chairman Peters. Good. Mr. Roberts, how can the government
ensure that the data that is used for testing and training the
Government's AI platforms is actually secure and protected?
Anyone else can jump in on these too.
I will pick out one individual, but feel free if you want
to say anything. Raise your hand as we wrap up. But only a
couple of more questions and then you will be free. Mr.
Roberts.
Mr. Roberts. Yes, Senator. I will start by saying that the
more we restrict the free flow of data and access from
contractors to the data, there is also a sense in which that
becomes more problematic as well, especially for the
functioning of the model.
We have seen instances just on the other side of over-
classification, over-regulation, over-protection of data that
has killed projects. However, having said that, data security,
protection, and privacy, is essential, and especially in areas
that I have worked with, with the classified data that affects
national security and personally identifiable information
(PII), especially with health records.
We have seen some things that have helped, especially with
health records. The use of synthetic data was beneficial to us
that we were able to use. There are other sources of data. But
not to oversimplify, I think the most important thing for the
acquisition field is to put the ethics professional, the person
who is dealing with privacy, the security professional, into
the planning phase.
Again, having a balanced team that has all these
professionals involved at the very beginning. Supply chain risk
is another big problem with security. It is something that is
not looked at much. We are finding with AI, because a lot of
these rules such as supply chain risk were always rules, but
they are reemerging in much more important ways when we are
looking at the risks, the adversarial threats.
It is looking at all these risks in new ways, and it is
making sure you have a full, balanced team to be with you at
the planning stage, on how to deal with it.
Chairman Peters. Right. Ms. Raj.
Ms. Raj. Yes. I want to talk about it from the perspective
of small business. CrowdAI has worked with the U.S. Government
on AI initiatives on a wide range of sensors, all unclassified.
As we move forward and mature, we started working with more
with the Defense Counterintelligence and Security Agency (DCSA)
to make sure that we could be ready for other types of data,
more sensitive data.
Many systems that we worked with, with U.S. Government
data, were either in bare metal servers or data that remained
in government clouds. I believe that it is important that if a
company that is in dual use wants to work with the U.S.
Government, they also need to ensure that the data is treated
with responsibility and privacy to the maximum extent possible.
I think that as companies start putting their technology in
a more dual use manner, they also need to comply with privacy
and regulation that is so standard across a lot of large
companies.
AI is the ever evolving technology, and so the way you make
sure that companies of all sizes continue this type of
evaluation around privacy and ethics is making sure that there
is qualitative and quantitative testing, because often
aggregated statistics may not paint the full picture.
Chairman Peters. Very good. I would like to thank our
witnesses for being here today. I am certainly grateful to your
contributions. This is a very important discussion, and it does
not end here. We are going to have many more discussions going
forward. We hope all of you are available to help this
Committee work on this important issue.
Certainly, as we heard today, the use of automated systems
to help the government provide public services more efficiently
is nothing new. We have been dealing with this for a long time.
Mr. Roberts, you have been dealing with it for a long time,
as well as everybody on the panel. However, as we enter this
age of rapid development of advanced machine learning models
and other forms of artificial intelligence, now is the time to
ensure that the algorithmic systems that the government buys do
not have unintended or harmful consequences.
I think, as each of our witnesses have emphasized, enacting
appropriate guardrails and oversight policies for the
procurement of AI in Government will shape its development and
use across all industry, and industries in the years to come.
Americans deserve a government that is modern, that is
efficient, and innovative, as well as one that is transparent,
fair, trustworthy, and protects their privacy. As Chair of this
Committee, I will continue to work to ensure that government
lives up to these principles and that promise.
Your testimony will help inform the Committee's future and
legislative activities going forward. Again, we hope this is an
ongoing dialog in a very fast moving and challenging area, but
essential for us to understand and act appropriately.
The record for this hearing will remain open for 15 days
until 5.00 p.m. on September 29, 2023 for the submission of
statements and questions for the record. This hearing is now
adjourned.
[Whereupon, at 12:01 p.m., the hearing was adjourned.]
A P P E N D I X
----------
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
[all]