[Senate Hearing 118-164]
[From the U.S. Government Publishing Office]


                                                        S. Hrg. 118-164

    THE PHILOSOPHY OF AI: LEARNING FROM HISTORY, SHAPING OUR FUTURE

=======================================================================

                                HEARING

                               BEFORE THE

                              COMMITTEE ON
               HOMELAND SECURITY AND GOVERNMENTAL AFFAIRS
                          UNITED STATES SENATE

                    ONE HUNDRED EIGHTEENTH CONGRESS


                             FIRST SESSION
                               __________

                            NOVEMBER 8, 2023
                               __________

        Available via the World Wide Web: http://www.govinfo.gov

                       Printed for the use of the
        Committee on Homeland Security and Governmental Affairs
        

                  [GRAPHIC NOT AVAILABLE IN TIFF FORMAT]
                  

                    U.S. GOVERNMENT PUBLISHING OFFICE
                    
53-996 PDF                 WASHINGTON : 2024                     
                  
        

        COMMITTEE ON HOMELAND SECURITY AND GOVERNMENTAL AFFAIRS

                   GARY C. PETERS, Michigan, Chairman
THOMAS R. CARPER, Delaware           RAND PAUL, Kentucky
MAGGIE HASSAN, New Hampshire         RON JOHNSON, Wisconsin
KYRSTEN SINEMA, Arizona              JAMES LANKFORD, Oklahoma
JACKY ROSEN, Nevada                  MITT ROMNEY, Utah
ALEX PADILLA, California             RICK SCOTT, Florida
JON OSSOFF, Georgia                  JOSH HAWLEY, Missouri
RICHARD BLUMENTHAL, Connecticut      ROGER MARSHALL, Kansas

                   David M. Weinberg, Staff Director
            Lena C. Chang, Director of Governmental Affairs
                  Michelle M. Benecke, Senior Counsel
                        Evan E. Freeman, Counsel
                        Avery M. Blank, Counsel
           William E. Henderson III, Minority Staff Director
              Christina N. Salazar, Minority Chief Counsel
          Kendal B. Tigner, Minority Professional Staff Member
                     Laura W. Kilbride, Chief Clerk
                   Ashley A. Gonzalez, Hearing Clerk

                            C O N T E N T S

                                 ------                                
Opening statements:
                                                                   Page
    Senator Peters...............................................     1
    Senator Johnson..............................................     8
    Senator Hassan...............................................    14
    Senator Butler...............................................    16
    Senator Hawley...............................................    18
    Senator Blumenthal...........................................    21
    Senator Ossoff...............................................    24
    Senator Rosen................................................    31
Prepared statements:
    Senator Peters...............................................    35

                               WITNESSES
                      WEDNESDAY, NOVEMBER 8, 2023

Daron Acemoglu, Ph.D. Institute Professor, Department of 
  Economics, Massachusetts Institute of Technology...............     3
Margaret Hu, Taylor Reveley Research Professor, Professor of Law, 
  Director, Digital Democracy Lab, William & Mary Law School.....     5
Shannon Vallor, Ph.D. Baillie Gifford Chair in the Ethics of Data 
  and Artificial Intelligence, Director, Centre for Technomoral 
  Futures, Edinburgh Futures Institute, The University of 
  Edinburgh......................................................     7

                     Alphabetical List of Witnesses

Acemoglu, Daron, Ph.D.:
    Testimony....................................................     3
    Prepared statement...........................................    37
Hu, Margaret:
    Testimony....................................................     5
    Prepared statement...........................................    55
Vallor, Shannon:
    Testimony....................................................     7
    Prepared statement...........................................    67

                                APPENDIX

Senator Hawley Chart.............................................    84
Senator Hawley Letter to Inspector General.......................    85

 
    THE PHILOSOPHY OF AI: LEARNING FROM HISTORY, SHAPING OUR FUTURE

                              ----------                              


                      WEDNESDAY, NOVEMBER 8, 2023

                                     U.S. Senate,  
                           Committee on Homeland Security  
                                  and Governmental Affairs,
                                                    Washington, DC.
    The Committee met, pursuant to notice, at 10:01 a.m., in 
room SD-562, Dirksen Senate Office Building, Hon. Gary Peters, 
Chair of the Committee, presiding.
    Present: Senators Peters [presiding], Hassan, Rosen, 
Blumenthal, Ossoff, Butler, Johnson, and Hawley.

             OPENING STATEMENT OF SENATOR PETERS\1\

    Chairman Peters. The Committee will come to order. We are 
living through perhaps one of the most exciting times in human 
history as artificial intelligence (AI) becomes more advanced 
each and every day.
---------------------------------------------------------------------------
    \1\ The prepared statement of Senator Peters appears in the 
Appendix on page 35.
---------------------------------------------------------------------------
    AI tools have the capacity to revolutionize medicine, 
expand the frontiers of scientific research, ease the burdens 
of physical work, and create new instruments of art and 
culture. AI has the potential to transform our world for a 
better place, but these technologies also bring new risks to 
our democracy, to civil liberties, and even our human agency.
    As we shape and regulate AI, we cannot be blinded by its 
potential for good. We must also understand how it will shape 
us and be prepared for the challenges that these tools will 
also bring. Some of that work will be accomplished with 
innovative policy, and I am proud to have passed numerous bills 
to improve the government's use of AI through increased 
transparency, responsible procurement, workforce training, and 
more.
    I have convened hearings that explore AI safety risks, 
procurement of these tools, and how to prepare our Federal 
workforce to properly utilize them. But as policymakers, we 
also have to explore the broader context surrounding this 
technology. We have to examine the historical, the ethical, and 
philosophical questions that it raises. Today's hearing and our 
panel of witnesses give us the opportunity to do just that. 
This is not the first time that humans have developed 
staggering new innovations. Such moments in history have not 
just made our technologies more advanced, they have affected 
our politics, influenced our culture, and changed the fabric of 
our society.
    The Industrial Revolution is one useful example of that 
phenomenon. During that era, humans invented new tools that 
drastically changed our capacity to make things. The means of 
mass production spread around the world and allowed us to usher 
in the modern manufacturing economy. But that era brought with it 
new challenges. It led to concerns about monopolies, worker 
safety, unfair wages, and child labor. It produced the weapons 
that were used to fight two world wars. In short, it was not 
just a story of technology used for good.
    I am grateful our first witness, Daron Acemoglu, has 
studied these phenomena. He has not only examined the history 
of technological change, but also the democratic institutions 
that are needed in response. In the 20th century, we had trade 
unions to protect workers' rights, and effective government 
regulation to keep those industries in check. What tools do we 
need to meet this moment and what else should we learn from the 
history?
    Artificial intelligence also brings unique challenges. The 
history of technological change has largely centered on human 
strength and how we can augment it through the use of new 
machines. AI will affect physical work, but unlike other 
technologies, it is more directly tied to our intellectual and 
cultural capacities. It has already introduced new ways to ask 
and answer questions, synthesize information, conduct research, 
and even make art.
    These qualities, the ability to understand ideas and create 
culture, are the very foundation of our humanity. We must work 
to preserve them as they become influenced by artificial tools. 
Perhaps most importantly, AI's influence on these capacities is 
not neutral. These tools, like the humans who make them, are 
biased. We must define what values lie at the core of our human 
experience and create technological tools that support them.
    Our second witness, Shannon Vallor, will be a helpful 
resource in understanding these ethical questions. She studies 
the way that new technologies reshape our habits, our practices 
and moral character. With her help, we can understand the 
values embedded in these technologies and the effect that they 
will have on our human character.
    Finally, we will explore AI through a constitutional law 
framework. AI poses risks to our civil liberties. New 
surveillance tools can be used to target vulnerable 
communities. Biometric systems like facial recognition can 
endanger a citizen's right to due process. Advanced technology 
brings renewed questions about our privacy and our personal 
information.
    If we do not understand how AI can be used in ways that 
erode our constitutional rights, it can pose a grave danger to 
our democracy and civic institutions. Our third witness, 
Margaret Hu, will help us understand these intersections. She 
researches the risks that AI poses to constitutional rights, 
due process, and civil liberties.
    Artificial intelligence has already begun to shape the 
fabric of our society. Our response cannot come through 
piecemeal policy alone or isolated technological fixes. It must 
include a deeper examination of our history, our democracy, and 
our values, and how we want this technology to shape our 
future.
    We must look to the past and learn the lessons of previous 
technological revolutions. We must answer the ethical questions 
that AI poses, and use these new technologies to build a world 
where all humans can thrive. We must protect our civil 
liberties and democratic institutions against the risks that these 
tools can pose.
    This hearing provides an excellent opportunity to focus on 
this work, and I would like to thank our witnesses for joining 
us today. We certainly look forward to your testimony. It is 
the practice of this Homeland Security and Governmental Affairs 
Committee (HSGAC) to swear in witnesses. If each of you would 
please rise and raise your right hands. Do you swear that the 
testimony you give before this Committee will be the truth, the 
whole truth, and nothing but the truth, so help you, God?
    Dr. Acemoglu. I do.
    Dr. Vallor. I do.
    Ms. Hu. I do.
    Chairman Peters. Thank you. You may be seated.
    Our first witness is Daron Acemoglu. Professor Acemoglu is 
an economist at the Massachusetts Institute of Technology 
(MIT). His work focuses on the intersection of technological 
change with economic growth, prosperity, and inequity.
    Professor, welcome to the Committee. You are recognized for 
your opening comments.

TESTIMONY OF DARON ACEMOGLU,\1\ INSTITUTE PROFESSOR, DEPARTMENT 
      OF ECONOMICS, MASSACHUSETTS INSTITUTE OF TECHNOLOGY

    Dr. Acemoglu. Thank you for inviting me to testify on this 
important topic. I will argue that there is a pro-human, meaning 
pro-worker and pro-citizen, direction of artificial intelligence 
that would be much better for democracy and shared prosperity.
---------------------------------------------------------------------------
    \1\ The prepared statement of Dr. Acemoglu appears in the Appendix 
on page 37.
---------------------------------------------------------------------------
    Unfortunately, we are currently on a very different and 
worrying trajectory. Digital technologies have already 
transformed our lives, and AI has further amplified these 
trends, but all has not been good. U.S. inequality has surged 
since 1980; many workers, especially men without a high school 
degree or with just a high school degree, have seen very 
significant declines in their real earnings, and inequality has 
multiplied in other dimensions as well.
    My research indicates that the most important cause of 
these trends is automation, meaning the substitution of 
machines and algorithms for tasks previously performed by 
workers. Automation accounts for more than half of the increase 
in U.S. inequality. Other trends such as offshoring and imports 
from China have played a somewhat smaller role.
    Technological change is a force for good, but we need to 
use it the right way. During the mechanization of agriculture 
and the three decades following World War II, automation was 
rapid, but the U.S. economy created millions of good jobs and 
built shared prosperity. The main difference from the digital 
age was that the new technologies not only automated some 
tasks, but also created new ones for workers.
    Henry Ford's factories used new electrical machinery that 
automated some work, but at the same time, they also introduced 
many new technical tasks for blue collar workers. 
Simultaneously, manufacturing became much more intensive in 
information activities and created a lot of jobs through these 
channels as well, such as in design, planning, inspection, 
quality control, and accounting. Overall, new tasks were 
critical for employment and wage growth during these eras.
    Unfortunately, my research shows that emerging AI 
technologies today are predominantly targeting automation and 
surveillance. The emphasis on surveillance is, of course, much 
more intense in China. We are already seeing the social and 
democratic implications of the rising inequality. In the United 
States, areas that have been most hit by Chinese competition or 
the introduction of robots show much greater degrees of 
polarization. Inequality undermines support for democracy, and 
this lack of support makes democracies more unstable and less 
capable of dealing with challenges.
    This path is not inevitable. To improve human performance, 
we need to think beyond creating AI systems that seek to 
achieve artificial general intelligence or human parity. The 
emphasis on general intelligence is not just a chimera, but 
distracts from the more beneficial uses of digital 
technologies to expand human capabilities.
    Making machines useful to humans is not a new aspiration. 
Many people were working on this agenda as early as 1949, 
and many technologies that have been foundational to our lives 
today, including the computer mouse, hyperlinks, and menu-driven 
computer systems, came out of this vision.
    Machine usefulness is more promising today than in the 
past. The irony of our current age is that information is 
abundant, but useful information is scarce. AI can help humans 
become better problem solvers and decisionmakers by presenting 
useful information. For example, an electrician can diagnose 
rare problems and accomplish more complex tasks when presented 
with useful information by AI systems.
    The analog to the pro-worker agenda in communication is a 
pro-citizen perspective to provide better information to 
individuals and enable them to participate in deliberations 
without manipulation or undue bias. The opposite approach is 
one that focuses on surveillance, manipulation, and the 
manufacturing of false conformity. The evolution of social 
media illustrates this manipulative path, with algorithms used 
to create echo chambers and extremism.
    The survival of any political regime depends on how 
information is controlled and presented. Authoritarian rulers 
have understood this for ages. The rulers in China, 2,200 years 
ago, reputedly burned books and executed people who could 
rewrite them to control information. The anti-democratic use of 
computers is clearly visible in Russia, Iran, and China.
    Whoever controls information matters no less for democratic 
regimes. Digital platforms' monopoly over information today is 
completely unprecedented. Their business model is based on 
monetizing data via digital ads, and much work in social 
psychology documents (and, in fact, unfortunately teaches 
platforms) how to increase engagement by manipulating user 
perceptions and presenting them with varying stimuli and 
emotional triggers.
    AI is a new technology, but as you pointed out, history 
offers important clues about how best to manage it. The 
British Industrial Revolution is today remembered as the origin 
of our prosperity. This is true, but only part of the story. 
The first 100 years of the British Industrial Revolution were 
simply awful for the working people. Real income stagnated, 
working hours increased, working conditions deteriorated, and 
health and life expectancy became much worse in the face of 
uncontrolled epidemics and intensifying pollution.
    The more positive developments after 1850 were due to a 
major direction of technology away from just automation and 
toward pro-human goals. This was embedded in fundamental 
political and social changes, including democratization and new 
laws to protect worker voice and worker rights.
    Just like during the Industrial Revolution, we have widely 
different paths ahead of us. A pro-human direction of AI would 
be much better for prosperity, democracy, and national 
security. Yet that is not where we are heading. My five minutes 
are up, but I will be happy to discuss policy proposals for 
redirecting AI toward a more beneficial trajectory later. Thank 
you.
    Chairman Peters. Thank you, Professor.
    Professor Hu is a Professor of Law and Director of the 
Digital Democracy Lab at William & Mary Law School. She is a 
constitutional law expert, and her research focuses on 
the intersection of technology, civil rights, and national 
security. Professor Hu previously served in the Civil Rights 
Division at the U.S. Department of Justice (DOJ).
    Professor Hu, welcome to the Committee. You are recognized 
for your opening comments.

TESTIMONY OF MARGARET HU,\1\ TAYLOR REVELEY RESEARCH PROFESSOR, 
 PROFESSOR OF LAW, DIRECTOR, DIGITAL DEMOCRACY LAB, WILLIAM & 
                        MARY LAW SCHOOL

    Ms. Hu. Good morning. It is an honor to be a part of this 
critically important dialog on the philosophical and historical 
dimensions of the future of AI governance. The reason we must 
consider the philosophy of AI is because we are at a critical 
juncture in history where we are faced with a decision: either 
the law governs AI or AI governs the law.
---------------------------------------------------------------------------
    \1\ The prepared statement of Ms. Hu appears in the Appendix on 
page 55.
---------------------------------------------------------------------------
    Today, I would like to place AI side by side with 
constitutional law. Doing so allows us to visualize how both 
function on a philosophical level. It also provides us with a 
window into how they are philosophically in conversation with 
one another and gives us a method on how we must best respond 
when we see that they are philosophically in conflict with one 
another.
    Constitutional law is more than just the text of the 
Constitution and cases. Similarly, AI is more than its 
technological components. AI can be understood as more of a 
philosophy than a technology. Like constitutional law, it is 
highly philosophical in nature. Specifically, AI is 
animated by multiple sciences and philosophies, including 
epistemology, the science in philosophy concerning the 
structure of knowledge, and ontology, the philosophy of 
existence.
    The intersection of AI and the law is highly complex and 
requires grappling with these interdisciplinary consequences, 
just as constitutional law is highly nuanced and 
contextualized. In the 
past year, we have entered a new phase of large commercially 
driven AI investments. This new phase brings into sharp relief 
the need for a dialog on rights-based AI governance.
    The creators of generative AI have shared that their 
ambition is to advance artificial general intelligence, which 
aims to surpass human capacities. Generative AI and Artificial 
General Intelligence (AGI) ambitions force us to confront these 
epistemological and ontological questions head on and with some 
urgency in a constitutional democracy.
    AI is already being deployed as a governing tool in 
multiple contexts. AI, particularly due to its combined 
ontological and epistemological powers, as well as its combined 
economic, political, and social power, has the potential to 
evolve into a governance philosophy as well as potentially a 
governance ideology.
    AI is constitutive of not only a knowledge structure, but 
also a market structure in an information society and a 
governing structure in a digital political economy. The 
incentives of AI privatization and the exponential growth of 
datafication can operate as an invisible governing 
superstructure under an invisible and potentially unaccountable 
hand. Additionally, AI can execute both private and public 
ordering functions, sometimes without authorization, rapidly 
shifting power toward centralized and privatized and often 
automated and semi-automated methods of governing.
    The Constitution is inspired by a philosophy of how to 
guarantee rights and how to constrain power. Constitutional law 
is animated by commitment to a governing philosophy surrounding 
self-governance through a republican form of government. In 
theory and philosophy, it separates and decentralizes power, 
installs checks and balances to prevent or mitigate power 
abuses, and supports a government that is representative of 
the people, by the people, and for the people.
    An important question at this critical juncture is how to 
ensure that AI, as it potentially evolves into a governing 
philosophy, will not compete with and rival constitutional law 
as a governing philosophy in a way that sacrifices our 
philosophical commitments to fundamental rights and to 
constraints on power, including separations of power.
    The Constitution is more than a text; it is a philosophy. 
AI is more than a technology; it is a philosophy. I would like 
to return to my opening question, which is: will AI govern the 
law, or will the law govern AI? In order to preserve our democracy 
and reinforce it, there can only be one answer. The law must 
govern AI.
    Thank you.
    Chairman Peters. Thank you, Professor.
    Finally, Shannon Vallor is a Professor at the University of 
Edinburgh. She is appointed to the university's Department of 
Philosophy, as well as the Edinburgh Futures Institute. Her 
research centers on the ethical challenges of AI and how these 
new technologies reshape human character, habits, and 
practices.
    Professor Vallor, wonderful to have you here all the way 
from Edinburgh. You may proceed with your opening comments.

 TESTIMONY OF SHANNON VALLOR,\1\ BAILLIE GIFFORD CHAIR IN THE 
 ETHICS OF DATA AND ARTIFICIAL INTELLIGENCE, DIRECTOR, CENTRE 
   FOR TECHNOMORAL FUTURES, EDINBURGH FUTURES INSTITUTE, THE 
                    UNIVERSITY OF EDINBURGH

    Dr. Vallor. Thank you, Chairman Peters, and distinguished 
Members of the Committee for this opportunity to testify today. 
It is a profound honor to address you on a matter of such vital 
importance to the Nation and the human family.
---------------------------------------------------------------------------
    \1\ The prepared statement of Dr. Vallor appears in the Appendix on 
page 67.
---------------------------------------------------------------------------
    I direct the Centre for Technomoral Futures at the 
University of Edinburgh, which integrates technical and moral 
knowledge in new models of responsible innovation and 
technology governance. My research has focused on the ethical 
and political implications of AI for over a decade. It is 
deeply informed by our philosophical and historical 
perspectives on AI's role in shaping human character and 
capabilities.
    The most vital of these capabilities is self-governance. 
This capability to reason, think, and judge for oneself how 
best to live underpins the civil and political liberties 
guaranteed by the U.S. Constitution and by international law. 
It also underpins democratic life. My written testimony 
explores the deep tension between AI and our capacity for 
democratic self-governance, and some important and powerful 
lessons from history for resolving it.
    The power of AI is one we must govern. In modern 
democracies, free peoples may not justifiably be subjected to 
social and political powers which determine their basic 
liberties and opportunities, but over which they have no say, 
which they cannot see and freely endorse, and which are in no 
way constrained by or answerable to them.
    Many of the greatest risks of AI technologies have arrived 
before the promised social benefits, which prove harder to 
deliver at scale. Yet, the gap between AI's social power and 
our democratic will to govern it remains vast. As a result, 
public attitudes toward AI are souring. This is a grave warning 
for those of us who want AI technologies to mature and succeed 
for human benefit. GMOs and nuclear power also suffered public 
backlash in ways that greatly limited their beneficial use and 
advancement. AI may become a similar target.
    Yet, we do know how to govern AI technologies, and 
responsible AI researchers have given us plenty of tools to get 
started. The United States has a long and proud history of 
regulatory ambition in making powerful and risky technologies 
safer, more trustworthy, and more effective, all while fueling 
innovation and enabling wider adoption. It was done first in 
the 19th century with steamboat regulation, then automobiles, 
aviation, pharmaceuticals, and medical devices, to name just a 
few.
    This required the courage to give manufacturers, operators, 
and users irresistible incentives to cooperate. It required the 
capacity to keep learning, and innovating, and adjusting our 
regulatory systems to accommodate technological change. It also 
required persistence of shared governance aims in the public 
interest across changes in political administration.
    This was all within our democratic capacity and still is, 
but the political will to use that capacity is now damaged for 
many reasons. The mischaracterization and misuse of AI 
technologies makes this problem worse by undermining our 
confidence in our own capabilities to reason and govern 
ourselves. This was predicted by early AI pioneers.
    In 1976, Joseph Weizenbaum lamented that intelligent 
automation was emerging just when humans had ``ceased to 
believe in, let alone to trust, our own autonomy.'' Norbert 
Wiener, who developed the first theories of machine learning 
and intelligent automation, warned in 1954 that for humans to 
surrender moral and political decisionmaking to machines, ``is 
to cast our responsibility to the winds and to find it coming 
back seated on the whirlwind.''
    Yet, many of today's powerful AI scientists and business 
leaders claim that the truly important decisions will soon be 
out of our hands. As just one example, OpenAI's Sam Altman has 
suggested that we are merely the biological bootloader for a 
form of machine intelligence that will dwarf ours not just in 
computing power, but in wisdom and fairness.
    These careless and unscientific AI narratives are pressing 
on democratic cultures already riddled with stress fractures. 
If we do not assert and wisely exercise our shared capacity for 
democratic governance of AI, it might be the last chance at 
democratic governance we get.
    Had AI arrived in a period of democratic health, none of 
its risks would be unmanageable. But we are in a weakened 
political condition and dangerously susceptible to manipulation 
by AI evangelists who now routinely ask, ``What if the future 
is about humans writing down the questions and machines coming 
up with the answers?'' That future is an authoritarian's 
paradise.
    The question upon which the future of democracy hangs, and 
with it our fundamental liberties and capacity to live together 
is not, ``What will AI become and where is it taking us?'' That 
question is asked by someone who wants you to believe that you 
are already out of the driver's seat. The real question is, 
``What kind of future, with AI, will democracies choose to 
preserve and sustain with the power we still hold?'' One where 
human judgment and decisions matter, or one where they don't.
    Thank you to the Committee.
    Chairman Peters. Thank you, Professor.
    You can see there is a lot going on today. We have people 
coming and going. But Senator Johnson has to leave shortly. 
Senator Johnson, if you have a moment for a question or two, 
you are recognized.

              OPENING STATEMENT OF SENATOR JOHNSON

    Senator Johnson. First of all, I appreciate this hearing. I 
really do. I thank the witnesses. I like the hearing title 
Philosophy of AI because I think it is crucial. I have been 
interested in science fiction all my life, and now we have been 
holding these seminars or hearings here in the Senate trying to 
understand what this is. But I have also been reading some 
pretty interesting science fiction books.
    Science fiction writers are unbelievably prescient. These 
things are researched pretty well. These things go off in 
different directions, and some pretty troubling directions, 
which is, I think, what the Chairman was talking about in his 
opening remarks as well as what you are talking about.
    Professor Vallor, you are talking about our capacity to 
regulate this and our ability to do so. President Eisenhower in 
his farewell address not only talked about the military 
industrial complex, he also warned us about government funding 
of science and research, which would lead to scientists more 
concerned about obtaining a government grant than really 
addressing pure scientific knowledge. It could lead to a 
scientific and technological elite that drove public policy.
    I think that was the concern with AI. He was concerned 
about human beings that were technologically and scientifically 
elite. Now we have computer capability that is going to vastly 
outpace our ability in terms of volumes and speed of 
calculations. It is highly concerning. I would argue, just with 
the latest pandemic, scientific research was certainly looking 
at how to take a virus with gain of function, make it more 
dangerous, and then come up with a countermeasure anticipating 
biological warfare.
    I would argue that obviously got out of hand. We do not 
know the exact origin, but I guess I am less convinced that we 
are going to really be able to control this and that we have 
the governing capacity to do so. It is hard to really put a 
question in on this, but this is an incredibly important issue 
and question, and I really do not know whether this 
dysfunctional place is going to come up with the right answers.
    I think of the Hippocratic Oath; first, do no harm. Again, 
I am not a computer scientist. I cannot even begin to grapple 
with how they create these algorithms. We have a few smart 
people that know this, that are warning us about certainly a 
possibility of AI destroying this country, destroying humanity. 
I guess just speak to that.
    Dr. Vallor. Happy to. Thank you very much. I think the 
important point in what you are saying is that AI today is an 
accelerant of many of the dynamics that are currently present 
in our society, in our economy. Among those, for example, is 
rising economic inequality, and declining social and economic 
mobility, which has been an issue now in this country for 
decades.
    One of the greatest worries about AI is that it will 
accelerate those trends unless we actively govern AI in ways 
that ensure that its benefits are realized by everyone who has 
a right to have the infrastructure that AI will build serve 
them.
    I will just say that, I think, the fact that we have done 
this in the past with other technologies that were at the time 
equally unprecedented, equally powerful, and equally challenging 
to regulate actually leaves us, if we have the political will, 
in a better place than we have ever been to govern a complex 
technology like AI, because we have 200 years of experience in 
regulatory innovation, in adjusting the incentives for powerful 
actors.
    It has been effective before. That is why airplanes now are 
safer than driving. That would not have happened if we had 
stepped back and let airlines operate without any kind of 
regulatory oversight, accountability, or constraint.
    Senator Johnson. But regulating transportation devices is 
completely different. I would even say nuclear power versus 
nuclear weapons is completely different from this, which we 
cannot even begin to grapple with once it is unleashed and it 
is starting to learn and maybe even becoming self-aware. What 
is that going to actually result in?
    Professor Acemoglu, you talked about, we have all talked 
about growing inequality. I would argue that we have put 
ourselves in a huge pickle over the last couple of generations 
as we have told all of our children you have to get a 4-year 
degree. To me, the greatest threat AI represents in terms of 
loss of jobs is to those college-educated kids, because machines 
can learn a lot quicker. You are seeing what is happening with 
ChatGPT.
    Certainly, we are seeing a real shortage of workers in 
manufacturing, in the trades. We are always going to need those 
folks. Unfortunately, certainly in Wisconsin, those employers 
try to hire people, our kids are not doing it because we send 
them all to college and they think that kind of work is beneath 
them.
    They are screaming for legal immigration reform, which I am 
all for, but here is an instance where we did not really 
regulate: society en masse told all of our kids you have to get 
college educated, thereby implying that being in construction 
or being a tradesperson was somehow a lesser occupation, that 
you are a second-class citizen. I think all work has value. Why 
do you not speak to that in my remaining time?
    Dr. Acemoglu. That's a very important point. I have also 
come to believe exactly like you said that we have undervalued 
a lot of important work. But we have not just undervalued it 
philosophically, we have also failed to create the right 
training environment and the right technologies for these 
workers.
    It is a tragedy in this country that we have a tremendous 
shortage of skilled craftspeople, electricians, but they are 
not even paid that well.
    Senator Johnson. They are starting to get paid well.
    Dr. Acemoglu. They are going to get paid somewhat better 
because of scarcity, but they can be paid even more if we 
create the right environment for them to acquire more expertise 
and provide the tools for them.
    The promise of AI, if we strip away all of the hype, is 
really in doing that. Because what is generative AI good at? 
It is taking a vast body of knowledge and some specific 
context, and finding what is the relevant bit of that vast body 
for that specific context. That is a great technology for 
training. That is a great technology for boosting expertise. 
That is the way to actually use AI in a way that is not 
inequality inducing.
    Now, you have raised another very important point, which 
many economists also make, which is: well, these ChatGPT-like 
technologies are going to go after college-educated jobs. I am 
not actually sure. This is not the first technology that has 
promised to automate white collar workers or white collar work 
and thereby reduce inequality that way.
    My work finds that many of these technologies end up 
actually going after the lower skilled jobs. Like, you are not 
going to automate managerial jobs or those of people who have 
power, but you are going to do that to the sort of information 
technology (IT) security-type jobs, which are not very well 
paid anyway.
    Moreover, that is not a very effective way of reducing 
inequality, because what happens to people who, let us say, 
used to do IT security or advertisement writing, et cetera? 
They go and compete for other white collar jobs that were lower 
paid, and the burden, again, falls on lower educated workers.
    You are 100 percent right. Four-year college for everybody 
is not the solution, but skills for everybody, building 
expertise for everybody is the right solution.
    Senator Johnson. Sorry, I cannot stay around. This really 
is an important subject. Thank you.
    Chairman Peters. Thank you, Senator Johnson.
    I want to kind of have a little bit of a dialog perhaps, 
kind of ask a broad question and then we will go through, a 
little different from hearings where there are questions and 
answers. If you want to chat among each other too, that would 
be very much appreciated, because you bring some different 
perspectives here.
    I am going to ask a very broad question. First, Professor 
Acemoglu, I hope you can answer based on your historical 
research, your understanding of economics and framing that. All 
of you have some specific examples that would be helpful for 
us to have in the record. Professor Hu, I would like to 
us to have in the record. Professor Hu, I would like to 
obviously hear your perspective based on your understanding of 
constitutional law. Professor Vallor, I hope you can do it 
based on your study of a future worth wanting. What should 
humans want and how do we achieve that?
    The first open question is that there is a popular line of 
thought out there touted by many influential people that 
unfettered technological innovation will solve all of our 
problems, and it is going to lead to increased well-being for 
all. Just let it go and we should all be very happy about the 
end result. Do each of you agree with that line of reasoning? 
If not, why and what should we be thinking about? We will start 
with you.
    Dr. Acemoglu. I completely disagree. First of all, we as 
humans decide how to use technology. Technology is never a 
solution to problems. It could be a helper or it could be a 
distractor, exactly like you said in your opening remarks. 
Moreover, unfettered competition is not always the vehicle for 
finding the right direction of technology. There are so many 
different things we can do with technology, and profit 
incentives sometimes align with the social good and sometimes 
do not.
    I am certainly not arguing that government bureaucrats or 
central planning could be a rival to the market processes for 
creating entrepreneurial energy or innovative energy. I do not 
think there is any evidence that anything better than the 
market process for innovation has been invented by humans. But 
that does not mean that the market process is going to get the 
direction of technology right, and it does not mean that 
without regulation, we are going to use these technologies the 
right way.
    That is why, exactly as Professors Hu and Vallor also 
pointed out, we need the right regulatory framework, and the 
right regulatory framework has to be broadly construed. It is 
not like we create the technologies and then we put some 
regulations on how they can be used. I think we need to create 
the right ecosystem where social input, democratic input, and 
government expertise are part of setting the agenda for 
innovation.
    In the past, the U.S. Government played a very important 
leadership role in many of the transformative technologies, 
from antibiotics, computers, sensors, aerospace, and 
nanotechnology to, of course, the Internet. I think setting the 
right priorities for redirecting technological change in a 
socially beneficial direction is very important, and that is 
the way to make use of these technological innovations.
    But if I could add one other thing, which is a reaction to 
the question that Senator Johnson raised, which is, is it 
possible to regulate AI? I certainly believe it is possible to 
regulate AI, but I agree with Senator Johnson that it is much 
harder than the previous technologies. But the reason for that 
is not just the nature of technology, it is because we have 
become completely mesmerized with AI in the wrong way.
    Both Professors Hu and Vallor emphasized that AI is a 
philosophy. You could say AI is also an ideology. We have 
chosen one specific way of perceiving that AI ideology, as this 
general intelligence that is going to solve our problems and is 
going to take away human agency, and it is not only dangerous, 
it is also really making it much harder for us both to find the 
right technologies to solve social problems and to regulate it.
    I think we need a general change in perspective to help 
with the regulation of AI. Thank you.
    Chairman Peters. Thank you.
    Professor Hu. We can go beyond this clock. You are all 
professors. I know you usually like to expand on your answers, 
and you are free to do that.
    Ms. Hu. Thank you for that very important question. I think 
this type of techno-utopianism is something that we really need 
to look at with an eye of skepticism, especially in a 
constitutional democracy. We need to ask the question whether 
or not we have the proper means in order to achieve those ends. 
With that type of invitation to see technology as something 
that can solve all problems and needs to be unfettered, I think 
that it poses the problem that the ends may not justify the 
means.
    In a constitutional democracy, we must always consider the 
means. I think that especially when we are faced with very 
compelling ends that are being presented before us; that AI can 
resolve pressing issues in national security or in health, for 
example, then it seems even more compelling. But I think that 
this really also opens the door to the conversation of whether 
or not when we are thinking about AI regulation, we really need 
to think about an ex-ante approach and not just an ex-post 
approach.
    In the law, oftentimes, we are highly reactive. We look 
at the harms, and then we try to find some type of structure to 
deal with those harms. But with AI, I think that this is now a 
moment for us to ask what type of laws and regulations and 
rules do we need in order to anticipate the harms before they 
occur and address them.
    Chairman Peters. Thank you, Professor.
    Professor Vallor.
    Dr. Vallor. Thank you for this important question. I will 
echo some things that my fellow witnesses have said. First of 
all, technology is a tool. We solve problems often with the 
help of technology, but technology does not solve problems. 
When we start expecting technology to solve our problems for us 
without human wisdom and responsibility guiding it, our 
problems actually tend to get worse.
    That is what a lot of people are seeing with AI, that we 
have something that is not being used wisely and responsibly as 
a tool to solve problems, but something that we are 
increasingly expecting to solve our problems for us. To 
Professor Acemoglu's point, that is really undermining some of 
the confidence and ambition that we need in order to govern AI 
responsibly and wisely.
    In response to Senator Johnson's earlier question, we 
certainly cannot cut and paste, from aviation or any other 
sector, a regulatory model that is going to work for AI, but we 
do not need to. We can do what we have done every time before, 
which is innovate in the sphere of governance and adjust 
different incentives and powers within the AI ecosystem until 
we have the results that we want.
    But this brings me to my second point. You talked about the 
idea of unfettered technological innovation, and this ideology 
that that kind of unfettered innovation leads us to human well-
being for all. But notice that we always hear this promise now 
made with the word ``innovation'', almost never the word 
``progress''. There is a reason for that.
Technology, not just the machines we build but the techniques 
and systems that we create, has for all of history been an 
essential driver of human progress. But progress means 
meaningful, measurable improvements in things like life 
expectancy, infant mortality, sanitation, literacy, political 
equity, justice and participation, economic opportunity and 
mobility, and protections of fundamental rights and liberties.
    Today, there is more advanced technology in the United 
States than anywhere else, but we have actually started seeing 
measurable declines in many of those metrics that I just 
mentioned. What does that tell us about the connection between 
technology and progress? It suggests that it is breaking down, 
because we have substituted the concept of innovation, where we 
do not need to prove that a new technology actually meets a 
human need, only that we can invent a market for it, often by 
changing our social infrastructure so that we cannot opt out of 
it.
    We need to go back to the heart of technology, which is the 
ambition to improve the human condition. You asked about this 
work that I have done on building a future worth wanting. The 
Spanish philosopher Jose Ortega y Gasset said in 1939 that 
technology is, strictly speaking, not the beginning of things. 
He says it will, within certain limits, of course, succeed in 
realizing the human project, but it does not draw up that 
project. The final aims it has to pursue come from elsewhere.
    Those aims come from us. We have to identify what we want 
and need technology to help us achieve. AI can be a powerful 
tool in helping us do that, but not if we treat innovation as 
an end in itself.
    Chairman Peters. Thank you, Professor.
    Senator Hassan, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR HASSAN

    Senator Hassan. Thank you very much, Chair Peters. I want 
to thank you and the Ranking Member for holding this hearing, 
and a thank you to the witnesses for being here. We really 
appreciate it.
    Professor Vallor, I want to start with a question to you. 
Your testimony discusses the possibility that bad actors can 
use AI in ways that threaten national security, such as in the 
bioengineering and nuclear fields. What would you recommend 
Congress do to minimize national security risks posed by the 
misuse of AI?
    Dr. Vallor. I think one point is that we have to be 
realistic and recognize that AI is not a technology that we 
will always and everywhere be able to keep out of the hands of 
bad actors.
    One of the important things to recognize is that this is 
part of the risk profile of AI that needs to be managed in the 
same way that we manage many other inherently risky 
technologies that can also be abused and exploited.
    One of the most important things is to identify what are 
the powerful incentives that bad actors have to abuse this 
technology, and where can we remove those incentives, or 
increase the costs for bad actors abusing AI in harmful ways. 
For example, the use of AI to produce disinformation is a worry 
for a lot of researchers. But actually there are cheaper ways 
to produce disinformation that a lot of bad actors have been 
relying on. It is not clear, for example, that AI will be the 
most attractive path for people who want to do harm through 
that pathway.
    I think from a national security perspective, we obviously 
need to have close monitoring of AI developments. This is 
something that we need in the commercial ecosystem as well--
forms of early warning systems, where we see incidents being 
reported back to us that we can then chase back. Many platform 
companies can be incentivized to do that kind of incident 
reporting, so that if we see signs of bad actors exploiting 
their tools, we have some advanced warning and ability to act.
    Senator Hassan. Thank you very much.
    Professor Acemoglu, as we discuss AI and how it can magnify 
threats to democracy, I am particularly concerned about Chinese 
AI tools that are used for surveillance and censorship, and how 
these tools may undermine democracy and freedom around the 
world. Based on your research, how are Chinese surveillance and 
censorship tools spreading throughout the world, and what is 
the effect of these tools on democracy and free speech?
    Dr. Acemoglu. Thank you, Senator Hassan, and you are 
absolutely right to be concerned. China is not at the frontier 
of most AI areas, but facial recognition, censorship and other 
control tools have received the most investment in China. That 
is one area in which China is on a par with the United States 
and other nations that are leading in AI knowledge.
    Those tools are not only being developed intensively in 
China, they are also being used very much both at the local 
level and the national level for controlling the population. 
There is 
evidence suggesting that they are not completely ineffective. 
In fact, one of the things that is quite surprising in China is 
that the middle class has multiplied and there are a lot of 
aspirations, but those aspirations are not reflecting 
themselves in the political domain. A lot of that is because of 
this very intense use of data collection and control.
    You are also absolutely right that those technologies are 
not just staying in China, China is actively exporting them to 
many other nations. The Huawei company alone has exported 
surveillance technologies to more than 60 other countries, most 
of them non-democratic, and those countries are also using them 
for surveillance.
    This is part of the reason why AI leadership coming from 
the United States is so important because the United States has 
the resources, scientific resources and corporate resources to 
set the direction of research and it can choose a very 
different one from China. If the United States makes those 
choices, other countries will follow because the advances in 
the United States are going to provide profit opportunities for 
companies.
    This is part of the reason why setting the right priorities 
with government support, but also with shifting priorities in 
the corporate and the tech world is so important. Thank you.
    Senator Hassan. Thank you. Another question for you. In 
today's political climate, extremism can sometimes boil over 
into acts of violence. Just last week, the Committee heard from 
the Federal Bureau of Investigation (FBI) Director Christopher 
Wray, that the most persistent terrorist threats to the United 
States are small groups or lone actors who sometimes commit 
acts of violence after watching or reading online content that 
glorifies extreme or hateful views.
    Professor, what lessons can we learn from history about how 
major technology advancements can contribute to a climate of 
extremism, and what recommendations do you have for Congress to 
mitigate how AI systems may contribute to extremism in the 
United States?
    Dr. Acemoglu. Thank you for this important question as 
well. I think it is inevitable that digital tools are going to 
be used for spreading misinformation and disinformation. It 
cannot be stopped. But then again, the printing press was used 
for the same thing, the radio was used for the same thing, and 
lots of other vehicles were available to actors for fomenting 
extremism.
    The issue is that AI and digital platforms in general 
increase the capabilities for bad actors to use these tools, 
and this is an obvious area for regulation. But more 
importantly, I think we have to ask questions about how the 
business models of the leading tech companies are playing out 
in this domain. Part of the reason for the phenomenon that you 
are pointing out is that many of these digital platforms are 
actually not just displaying misinformation, but they are 
actively promoting it. I think displaying misinformation is 
very difficult to solve, but promoting is a choice. It is a 
choice that they make because of their business model which is 
based on monetizing information via digital ads.
    This is something that provides a lot of alternative 
directions for us. It is possible to use AI technologies in a 
way that is much more reliable, in a way that does not create 
the most pernicious echo chambers, in a way that does not 
promote misinformation and disinformation.
    I think three types of policies are particularly important. 
First, government regulation of where extremism is taking 
place, and going after it, is very important. The government 
has to invest 
more in tracking where this is happening, and I think your 
Committee is at the forefront of this.
    Second, I think digital ad-based business models are 
creating a lot of negative social effects. My proposal has been 
for a while that we should consider digital ad taxes, meaning 
taxes on advertisement that uses personalized data collected 
from digital platforms. I think when we do that, we are going 
to, first, discourage, to some extent, the most pernicious uses 
of these digital ads and, second, open up the market for 
alternative models.
    The marriage of data collection, and venture capital, and 
other kinds of funding has created a business environment in 
the tech world where the most successful companies are those 
that try to collect as much data as possible, and try to get as 
much market share as possible for sometimes a decade or more.
    They do not even make money, but they can get funding 
because this is the way of the future as viewed by venture 
capitalists. But that also means that alternative business 
models cannot enter because the market is being captured by 
these things. A meaningful digital ad tax would actually be a 
pro-competitive tool.
    Then the final policy is data markets. Right now, a lot of 
this is also completely entangled with digital platforms being 
able to take data as they wish. I think we need to have better 
regulation about who has rights to data, and also perhaps start 
building legislation to create data markets in which, for 
example, for creative data, artists or writers have collective 
data ownership rights.
    This way there will be a fairer division of the gains from 
digital technologies and AI, but also it could encourage better 
use of data and better ways of developing new monetization 
models in the digital age. Thank you.
    Senator Hassan. Thank you very much.
    Thank you, Mr. Chair, for your indulgence.
    Chairman Peters. Thank you, Senator Hassan.
    Senator Butler, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR BUTLER

    Senator Butler. Thank you so much, Chair, and colleagues 
for helping us have more deep discussion about AI. This has 
been a topic that all of us have been talking about, it feels 
like, in my short time, for a good deal of time. I appreciate 
all of you for your work and your leadership on the topic.
    Dr. Hu, I think I will start with you, if that's OK. You 
have been doing an incredible amount of academic examination in 
this area. I understand that AI could become a critical asset 
to stakeholders in the criminal justice system.
    However, we have already begun to see cutting edge 
artificial intelligence-based technology like facial 
recognition systems drive wrongful arrests of innocent people. 
The reality is that this technology is already widening pre-
existing inequities by empowering systems that have long 
histories of racist and anti-activist surveillance.
    Here's my question. I am curious to hear your thoughts, 
really, on how we can best use this sort of tension, because 
without action, we know that communities of color will continue 
to disproportionately face the harmful consequences of this 
technology. I would love to hear your thoughts on how we can 
best respond, acknowledging that it is a technology that is 
going to exist, and we have these sort of built-in inequities 
in our current system.
    Ms. Hu. Yes, thank you so much for that important question. 
I think that this is one of the critical inquiries that we are 
faced with when we are talking about AI: that it can, in the 
way in which it absorbs vast oceans of data, also absorb very 
historically problematic information and then translate and 
interpret that in ways that are not consistent with our 
constitutional values or principles or civil rights.
    This is particularly troubling in the field of criminal 
justice and criminal procedure, and in the way in which the 
technologies are being enrolled there, because of our deep 
commitments to fairness in criminal justice and the ways in 
which we have those protections embedded in the Fourth, Fifth, 
and Sixth Amendments, for example, of the Bill of Rights. We 
are now faced with these types of evolutions in AI and these 
technologies, and algorithmic decisionmaking in particular, in 
ways that leave us having a hard time trying to preserve those 
fundamental constitutional rights.
    I think that this is an opportunity for us to try to think 
through exactly what types of new jurisprudential methods and 
interpretations do we need in order to, for example, expand our 
interpretation of the Fourth Amendment in a way that 
encompasses these types of challenges so that we can stay true 
to our first principles of protections.
    Senator Butler. Thank you so much for that.
    Ms. Vallor, if I could turn to you quickly. I think the 
three practical recommendations in your written testimony to 
the Committee are very compelling. I was struck by the idea 
that, just like in the Dutch childcare benefits scandal, it is 
inevitable that we will get some of this stuff wrong, even 
despite our best efforts.
    Can you talk a little bit about why you think it is so 
important to create new systems of liability, contestability, 
and redress for impacted groups, which adjacent to my first 
question often includes the most vulnerable communities?
    Dr. Vallor. Absolutely. Thank you for that important 
question. I think we have seen plenty of evidence that if we do 
nothing, the use of AI technologies will continue to 
disproportionately harm the most vulnerable and marginalized 
communities here in this country and also in other countries. 
As you noted, it has been seen in multiple places in the world 
where this dynamic occurs.
    A researcher in our field, Abeba Birhane, has described 
these technologies as conservative, not in the political sense, 
but in the sense that they take patterns of the past and they 
literally conserve them and push them into the present and the 
future. They make it harder for us to overcome some of the 
patterns of the past that we are rightly committed to 
addressing.
    We have to direct AI then as a tool for identifying harmful 
patterns, harmful biases, and mitigating those. AI can be used 
as a tool for that as well and has been in many cases. It comes 
down to who absorbs the risks that new technologies inevitably 
introduce. No new technology can be completely safe or risk-free, 
but it's about who absorbs those risks, and who reaps the 
benefits.
    When you allow large companies and wealthy investors to 
reap the benefits of innovation in ways that push all the risk 
and cost of that process onto society, and particularly onto 
the most vulnerable members of society, as we are seeing today, 
you produce an engine of accelerating inequality and 
accelerating injustice.
    What Congress needs to do is to ensure that those who stand 
to profit the most from innovation are asked to take on most of 
those costs and risks. We have done this before in areas like 
environmental regulation with the Polluter Pays Principle, 
right? When it's implemented correctly, it actually 
incentivizes for-profit companies to build safety and 
responsibility into their operations so that instead of 
spending money to have to clean up pollution, they can spend 
money to make their operations cleaner and safer in the first 
place.
    I would love to see that dynamic be pursued in AI 
regulation as well, where we think about how can we incentivize 
companies to build more responsibly in the first place. I think 
we can, obviously, begin where we already have some power, and 
that is something that has come out of bills in this Committee 
to address the uses of AI in the public sector, to address uses 
by Federal agencies.
    You see also in the Executive Order (EO) recently released 
many moves to empower and instruct Federal agencies to begin to 
take action so that we can in a way start by making sure that 
government uses of AI are appropriately governed, audited, 
monitored, and that the powers that government has to use AI 
are used to increase the opportunity, and equity, and justice 
in society rather than decrease it, which can happen even when 
we do not intend it if we are not actually implementing many of 
the measures that I and other panelists here have described in 
the regulatory environment.
    Senator Butler. Thank you.
    Thank you, Mr. Chair.
    Chairman Peters. Thank you, Senator Butler.
    Senator Hawley, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR HAWLEY

    Senator Hawley. Thank you very much, Mr. Chair. Thanks to 
the witnesses for being here.
    I want to start my time with a piece of oversight business, 
if I could. Last week when the Secretary of Homeland Security 
was here, Secretary Mayorkas, I asked him about a whistleblower 
claim. A whistleblower who had come forward to my office and 
alleged that as many as 600 security special agents from 
Homeland Security Investigations (HSI), 600, had been removed 
from felony investigations, including particularly child 
exploitation investigations and sent to the Southern Border to 
do things like make sandwiches for illegal immigrants.
    That's a quote from the whistleblower, not from me. Here is 
what she said. ``We are being told to shut down investigations 
to go hand out sandwiches, and escort migrants to the shower, 
and sit with them while they are in the hospital, and those 
types of tasks.''\1\ Now, Secretary Mayorkas did not deny this. 
He did say that, well, they are working on fentanyl, or they 
may be working on fentanyl claims while they are at the border.
---------------------------------------------------------------------------
    \1\ The quote referenced by Senator Hawley appears in the Appendix 
on page 84.
---------------------------------------------------------------------------
    After that testimony, multiple additional whistleblowers 
came forward to my office from across the country, different 
whistleblowers unrelated to each other from different offices 
across the country, and directly contradicted Secretary 
Mayorkas' testimony. One whistleblower said Secretary Mayorkas 
was, and I am going to quote him now, ``Absolutely lying,'' and 
that agents were not in fact being reassigned to investigate 
fentanyl cases.
    Another whistleblower claimed that he was reassigned to the 
border to, in his words, ``Babysit illegal immigrants.'' A 
fourth whistleblower confirmed that special agents had been 
pulled off child exploitation investigations, and all of these 
whistleblowers provided documentation about being asked to drop 
felony investigations, move to the Southern Border to conduct, 
essentially, ministerial tasks along the lines that the first 
whistleblower alleged.
    Mr. Chair, of course, I do not personally know whether this 
is accurate or not. I know now we have multiple whistleblowers 
who are all alleging the same thing. These whistleblowers have 
also pointed out to me that there may be violations of the 
law. In fact, the whistleblowers allege that these practices 
violate 31 USC 1301, that they violate Office of Management 
and Budget (OMB) Circular A-76, and that they violate internal 
U.S. Immigration and Customs Enforcement (ICE) travel policies.
    What I have done, Mr. Chair, as per my normal practice and 
the practice that I think all of us follow on this Committee, I 
have collected this information. I have written a letter to the 
Inspector General (IG) of the Department of Homeland Security 
(DHS) asking his office to investigate these claims, which I am 
sharing with the Committee today, and I have asked him to 
report back to me and to the Committee so that we can see what 
he says. I would like to submit this for the record,\2\ if I 
could, Mr. Chairman.
---------------------------------------------------------------------------
    \2\ The information referenced by Senator Hawley appears in the 
Appendix on page 85.
---------------------------------------------------------------------------
    I want to thank you, as always, for your work with 
whistleblowers, and to thank those who have come forward to my 
office and other offices before. I am putting this on the 
record. We will
see what he says. I hope that he will look into this, and he 
will get back to us and we can evaluate these claims. Thank 
you.
    Chairman Peters. Without objection.
    Senator Hawley. Now, turning, if I could, to you, Professor 
Acemoglu. Let me ask you a little bit about AI in the 
recruiting and hiring context. My understanding is that, 
increasingly, companies are using AI recruiting tools to, as 
they would say, enhance efficiency in the hiring process. This 
is especially true among large established companies.
    My concern is this: AI's application to recruitment 
is often controversial because hiring is an inherently 
subjective process. We were just discussing, in fact, some of 
the issues when you use AI to make what we might call ``people 
decisions'' and to some of the biases that AI tends to scoop up 
and replicate. One example of this is Amazon in 2018, where 
it was reported that AI software used in the hiring process 
systematically discriminated against women.
    My question to you is this: where would you draw the line 
on AI decisionmaking in hiring practices? What should we be 
aware of or concerned about there?
    Dr. Acemoglu. Excellent question. Thank you very much, 
Senator Hawley. I think that's a very difficult question. I am 
very concerned about all uses of AI that take away human 
judgment, especially when human judgment is very important. 
This becomes particularly concerning when AI practices then 
legitimize things that were not previously completely accepted.
    Let me give you an example. For instance, imagine that we 
have an AI system that puts a lot of weight on somebody having 
completed a 4-year college for essentially a semi-manual task. 
It is quite easy to see how that might come about. Four-year 
college workers are doing much better in the labor market. But 
for many semi-manual tasks, those college skills are not that 
important. But if the AI system starts putting that emphasis, 
it is going to start turning a lot of good candidates down.
    The more it does that, the more it becomes accepted that you 
should really have a 4-year college to become an electrician. 
Then our social norms and our expectations completely shift, 
even though the original decision to turn down people who had 
just a high school degree was not right.
    This is not a hypothetical situation, because we are having 
a lot of similar cases happen when AI systems are engaged in 
decisionmaking, especially when people do not know how to 
evaluate them. There's a lot of evidence, for example, that 
doctors who get AI recommendations do not know how to evaluate 
them, and they sometimes get confused about where the AI 
recommendation comes from. They may put too much weight on 
things that they should not, because they do not understand the 
black-box nature of the system.
    I think human judgment, and the expert opinion of people 
who have accumulated expertise is going to be very important. 
This is particularly true when we start using AI, not just for 
recruitment, but lots of other human resource tasks. For 
example, promotion, or deciding who's doing well, or how to 
assign workers to different shifts.
    We are going to do much better if we do something 
broadly consistent with what I tried to emphasize in my written 
testimony: choose a pro-human direction for AI. That means we 
try to choose the trajectory of AI technologies in a way that 
empowers humans. Then we train the decisionmakers so that they 
have the right expertise to work with AI, and that includes 
valuing their own judgment not becoming slaves of the AI 
system. Thank you for that question.
    Senator Hawley. Oh, very good, and your answer touches on 
something that I think is so important: we cannot lose 
sight of who has control of the AI, and who the AI is 
benefiting. I have said over and over, I am sure that these 
giant corporations who are developing AI, I am sure that they 
will make lots of money on it. I have no doubt about that. Will 
it be good for the people that they employ and in particular, 
will it be good for working people in this country? I am less 
certain about that.
    I see my friend Senator Blumenthal across the dais. In a 
hearing that we had recently, I still remember the testimony of 
a large corporate executive who just remarked offhand that it 
was wonderful that AI was doing things like replacing people 
who work at fast food restaurants. I think he just expected 
everyone to agree because, of course, those are not creative 
tasks. It is good we can do without them.
    I thought, wait a minute, it is easy for you to say as you 
sit in your position in the C-suite, maybe not so much for the 
person for whom that is the first job that is getting a 
foothold in the labor market from which she can advance to 
something else.
    I think who controls the AI, and what the biases are in it, 
in the way that you point out, is very important. Thank you, Mr. 
Chairman.
    Chairman Peters. Thank you, Senator Hawley.
    Senator Blumenthal, you are recognized for your questions.

            OPENING STATEMENT OF SENATOR BLUMENTHAL

    Senator Blumenthal. Thank you. I will just expand on the 
line of questioning that Senator Hawley was asking because he 
and I actually have been having hearings on the Judiciary 
Subcommittee on Privacy, Technology, and the Law. The labor aspects 
have been perhaps less important for us than preventing the 
deep fakes and impersonations. We have developed a framework 
for legislation, including a licensing regime with an oversight 
entity and testing before products are released so as to 
prevent the kind of deep fakes and impersonations that so scare 
people. At the same time, it preserves the promise of AI that I 
think all of you, and we too, agree is very important.
    But the impact on the labor market in terms of inequality, 
aggravating inequality, eliminating tasks without creating new 
tasks, I think is a very important point that you made, 
Professor. You say, and I know that you cite in footnote 5, a 
lot of studies that have been done on electricians and plumbers, 
could you make it real for us? How can AI enhance the work done 
by electricians and plumbers? Then also, can you give us an 
example of how AI could create new tasks, so that it can be 
pro-worker, as you say, pro-citizen, pro-human?
    Dr. Acemoglu. Thank you very much, Senator Blumenthal. Let 
me actually start by giving a different example than the 
electrician, and I will come back to the electrician: educators, 
teachers. A lot of the emphasis today is on using AI in 
classrooms for automated grading, automated teaching, and also 
large language models that take the place of experts in 
informing students.
    But actually, one of the problems in many U.S. schools is 
that a lot of students are falling behind. There is quite a bit 
of evidence in the education science literature showing that 
personalized teaching is tremendously useful for the students. 
If we had the resources to have a teacher work with one or two 
students identifying their weaknesses, and how the material 
could be presented to them so that they could understand, they 
could have a chance to catch up. But schools do not have those 
resources, students do not have those opportunities, so those 
students fall behind. That is part of our educational crisis 
right now.
    One quite feasible direction for AI, which is actually well 
within the technological frontier and does not even require any 
advances, is the use of existing AI tools in real time to identify 
which students are having trouble with which part of the 
curriculum. You can do that actually as the class progresses, 
and then provide suggestions to teachers.
    You would need more and better trained teachers to do that. 
But you provide suggestions to these teachers to say, let us 
take these two or three students and present the material 
differently, spend a little bit more time, give some remedial 
help. That's the kind of system recommendation that AI can 
easily do. You can see here that the tasks that the teachers 
will start performing will become new tasks. Current 
educators teach to the whole class, 30 people or something; they 
do not have this aspect of identifying and working one on one 
in a systematic way. Those would be examples of new tasks.
    Having given that example to the educators, I come to the 
electricians. It is exactly the same issue. Electricians are 
going to face more and more problems as, for example, the grid 
is electrified or new electrical machinery arrives. There 
are going to be more and more rare problems, more and more 
troubleshooting problems. Right now, even for the very regular 
issues that I have in my house, the electrician who comes is a 
semi-skilled electrician, not the very best. They will look 
at the problem and they cannot solve it, and they have to go 
and do some research, and then another expert 
comes and they try to deal with these issues.
    One way that you could make them much better, and this will 
help a lot of semi-skilled craftspeople in general, is that real-
time AI tools would draw on the expertise of many more 
electricians with similar problems and would make 
recommendations to them. They can do on-the-spot troubleshooting, 
problem solving, and deal with the more complex and new tasks 
that are going to emerge with the changing environment.
    The benefit of that is not just that it is going to help with 
our shortage of electricians and increase the earning capacity 
of electricians and the economy; it is 
actually going to be an equalizing tool. Because who is going 
to benefit most from this? It is not going to be the very best 
electrician because he or she would have been able to solve 
these problems, it is going to be those with middle expertise 
who are good enough to do certain tasks, but they need help, 
they need training, they need additional recommendations to 
deal with the more complex problems.
    That is where the promise lies. Thank you for your 
question.
    Senator Blumenthal. Thank you. That is a really helpful 
answer. Will that in turn, address the phenomenon of growing 
inequality in our system?
    Dr. Acemoglu. I think it has a real chance of being a major 
contributing factor. It is not going to be sufficient by 
itself, but one of the major reasons for why we have so much 
inequality is that we have not helped low education workers. We 
have replaced their jobs exactly like Senator Hawley pointed 
out, and we have not given them new opportunities, and we have 
not given them new tools.
    Those workers can become much more productive if we give 
them better technologies and better training opportunities. 
Again, AI has that capacity, especially generative AI. Forget 
the hype, I really think the hype is misleading. But there are 
some very impressive aspects of it. The most impressive one is 
that you can load on a tremendous amount of information, and 
then give some clues about a context, and it finds from that 
vast amount of information which bits are relevant for that 
context.
    If we use that, we can really deploy it for making more 
helpful technologies for low education workers, for skilled 
craftsmen, for semi-skilled craftspeople, for service workers, 
for healthcare workers, for educators.
    Senator Blumenthal. That is a very exciting prospect. At 
the same time, you know, I guess there is good AI and less 
effective AI. I read an article recently about hallucination 
that said that there is a variation from three percent 
hallucination to 27 percent hallucination, depending on the 
system. I hope the plumber or electrician gets the more 
accurate version rather than 27 percent because they will be 
fired.
    Dr. Acemoglu. That is actually a very important point. 
Right now, you could not use ChatGPT or similar models to do 
that exactly because they are not providing reliable 
information. This goes back to Senator Hawley's comment. These 
technologies are developed in a way that is good for the bottom 
line of the large companies, but not good for the workers or 
for the people.
    That is actually very easy to deal with. If, instead of 
training these models on the vast amount of unreliable 
information on Reddit and lots of other places, you 
give them reliable information so that the training set of these 
models is much more reliable, then the information they 
give us will be much more reliable.
    Why are we training these models on the entire Internet and 
the speech patterns that you see on Twitter, Facebook, Reddit, 
and so on? Because the agenda of many of these companies was to 
create the hype that these are general intelligence-like 
technologies, and to do that they wanted to mimic the way that 
humans talk. The amount of information was not important. It 
was just important to get the human-like speech out of this.
    So different agendas. One is good for the corporations, the 
other one is going to be good for the workers. I think this is 
where government leadership is going to be important.
    Senator Blumenthal. Thank you very much. Fascinating topic, 
and my time has expired. But there is a lot more to discuss, and 
appreciate all your work. Thanks, Mr. Chairman.
    Chairman Peters. Thank you, Senator Blumenthal.
    Senator Ossoff, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR OSSOFF

    Senator Ossoff. Thank you, Mr. Chairman. Thank you to our 
panelists for your testimony, and your expertise, and your 
work.
    Obviously, there are and will be intersections between 
privacy law and privacy policy, and any regulatory regime 
established that touches on or manages the development and 
deployment of artificial intelligence.
    Dr. Acemoglu, you mentioned in your statement suggestions 
about a property rights model for data. Professor Hu, you cited 
some of the work of Jack Balkin in your opening statement, who 
as I understand it, suggested a fiduciary model for data 
whereby custodians and recipients of data from persons would 
have inherent duties of care and confidentiality to the 
individuals whose data they have collected, and which they are 
storing or using.
    When I think about the failure of Congress to make 
effective privacy law, one of the things we see is an effort to 
imitate the European Union's (EU's) regime. Another thing we 
see is a sort of Whac-A-Mole regulatory approach that looks at 
current problems faced by consumers and individuals and tries 
to isolate and target them with certain specific regulatory 
prohibitions, but doesn't seem to propose any kind of more 
basic law upon which fundamental obligations could be 
established that judges then over time could evolve into a more 
comprehensive regime protecting the privacy of individuals.
    Professor Hu, if you could just opine for a moment on your 
thoughts on the notion of a fiduciary model as a means for 
establishing some fundamental obligations for software 
companies, Internet platforms, and others across the private 
sector and the public sector who will receive data from private 
individuals?
    Ms. Hu. Yes. Thank you so much for that excellent question. 
I do think that we are seeing a renegotiation of the social 
contract. This is where Jack Balkin's theory of the fiduciary 
model for privacy and of information fiduciaries comes in: 
especially under First Amendment rights, there is an 
acknowledgement that you have a triangle of negotiation of 
constitutional First Amendment rights among the tech 
companies, the citizen, and the government. That 
renegotiation of rights and obligations is representative 
of our modern digital economy.
    But I want to go back to your question about do we need 
something more fundamental. I think that this is where we are 
opening the dialog to potentially needing a constitutional 
amendment that enshrines privacy as a fundamental right. If we 
look at that as a launching pad from which, through some 
type of constitutional amendment, we empower Congress to enact 
legislation in order to try to ensure that fundamental privacy 
rights are extended to all citizens, then I think we do not 
need to see it as much of a negotiation in a triangle with the 
companies.
    As we have heard from the other witnesses, we have to 
always ask the question of who is benefiting and how our data, 
for example, is being monetized in a way that is adverse to the 
best interests of the citizenry.
    Senator Ossoff. Thank you, Professor Hu. You know, it is an 
intriguing proposition; of course, procedurally, in terms of 
the difficulty of the process, such an amendment would require 
a tremendous amount of effort. That is not to say it may not be 
worth the effort, as compared with a statute. Although the 
record of Congress thus far in enacting any kind of meaningful 
privacy statute is one of failure, I think that there is an 
interest on both sides of the aisle in privacy law.
    Dr. Acemoglu, could you comment, please, on your reaction 
to this proposal of a fiduciary model for the protection of 
data and how it contrasts with other sort of property rights 
regime, which you have suggested in your opening remarks?
    Dr. Acemoglu. I think we just do not know which one of 
these different models is going to be most appropriate for the 
emerging data age. I think the fiduciary model has a lot of 
positives. The European Union's General Data Protection 
Regulation (GDPR) was motivated by the right 
philosophy, but in the end, we are seeing that it has 
backfired. It is not very effective, and it may have actually 
advantaged some of the large companies because they are better 
able to bear the costs of complying with the regulation.
    I think the general point that we should bear in mind is 
that data, and who controls data, is going to become more and more 
important. It has become one of the major reasons why the tech 
sector has become more oligopolized, because a few companies 
have a big advantage in controlling data.
    So privacy issues are very important, as Professor Hu also 
mentioned; privacy is a right. But I think right now 
they completely intersect with the question of who controls 
data, and that is the reason why I am tempted to favor models in 
which we try to systematize data markets.
    At the end of the day, if data is going to become the 
lifeblood of the new AI economy, it is not going to be OK to 
treat data as an afterthought to solve privacy issues. We 
really need to institute the right sort of regulations and 
legislation about what rights people have to different types of 
data that they have created, and whether those rights are going 
to be exercised individually or collectively.
    That is actually a very tricky and new issue. The most 
natural thing for economists, and I think for policymakers, is 
to say, ``OK. We are going to create property rights on data. 
So you own your data.'' That may not be a very workable 
model, both because it will be very expensive for individuals 
to track whether their data is being used, and because there are 
also lots of market-driven reasons why individual data rights 
may not work. After all, if my data is about 
identifying cats, other people can do that as well as I can. 
That creates a race to the bottom. So you may need some sort 
of collective ownership of data.
    Senator Ossoff. With my remaining time, Dr. Acemoglu, you 
have also talked a lot about centralization. The development of 
these frontier models is very energy intensive, technology 
intensive. This is IP produced at great cost, and there are few 
entities in the world with access to the processing power to do 
it.
    Just comment, if you could, on the risks of centralization, 
of ownership of such models, and what kind of policy remedies 
might be available to Congress, if they are necessary at all, 
in order to prevent some of the negative consequences of such 
centralization and market concentration.
    Dr. Acemoglu. Again, this is an area we just do not know 
enough about because there are some people who think open 
source is going to be a sufficiently powerful check on the 
power of the largest tech companies. On the other hand, there 
is a lot of doubt about whether open source is going to work. I 
think the most important issue is exactly as you have pointed 
out: there are two resources that are very centralized at the 
moment. The first is compute power, which is becoming more and 
more expensive because there is a shortage of it at the moment, 
and the second is data.
    Both of these are going to create potentially a much more 
monopolized system which is not good for innovation. It is not 
good for the direction of innovation because then it is going 
to be just a few companies that set the agenda.
    I think antitrust tools are very effective ones. I am 
talking as much about stopping mergers and acquisitions. If you 
look over the last 10 years, the largest tech companies have 
acquired dozens of rivals. Often, they actually sideline the 
technologies of those rivals because they just do not want the 
competition.
    The second thing is to create the data infrastructure that 
I was talking about that is going to be a channel to create 
more competition. Then the final one that I think we should 
think about is whether there are reasons for the government to 
get involved in the allocation of compute power. If it becomes 
more and more scarce and it is a critical resource, especially 
if it is a critical resource from a national security point of view, 
I think the government may need to worry about where that 
compute power is going. Thank you.
    Chairman Peters. Thank you, Senator Ossoff.
    Last question for all three of you. We will start with 
Professor Vallor, and we will work this way just to change 
things up a little bit here. Another really broad question. We 
have talked about a variety of issues here today at the 
Committee. There is a huge conversation going on across the 
world right now, but what do you think is missing from the AI 
conversation right now?
    I want to be specific to governments and lawmakers, to 
those of us sitting up here who are thinking about how we deal 
with these issues. Is there something missing from the 
conversation that you really think we should be thinking about?
    Professor Vallor, you are going to have the first shot at 
that, and then we will work down the dais.
    Dr. Vallor. Thanks for that question. I think one of the 
things that is not entirely missing, but is 
underemphasized in the current regulatory conversation, is the 
ability to see these systems as governable, as opposed to 
things that are being thrust upon us, as if they were natural 
forces that have arrived.
    Every AI technology has been shaped by human decisions to 
serve particular ends, and driven by particular incentives that 
our systems have set. I do not think we talk enough about the 
incentives that we have created for some of the harms that we 
are seeing perpetuated and accelerated across the AI 
ecosystem, both profit incentives and power incentives, and 
where those can be changed.
    I also think we are still not talking enough about ensuring 
that when we use AI, we are using it to amplify human 
intelligence rather than as a cheap replacement for it. 
AI tools, as has been mentioned, are overhyped, not because 
they are not powerful, but because their power is of a 
different sort than the people who market them want us to 
believe.
    These tools are not intelligent in the way that we are. 
They do not have the capacity for good judgment and common 
sense. They are very powerful pattern amplifiers, and that is a 
tool that we can use in our decisions. But many people are 
still talking about AI as if it is going to be able to make the 
hard decisions for us. These systems do not make decisions. 
They make calculations. Even if we automate a process by 
letting a computer determine the outcome, that is a human 
decision.
    I do not think we are going to be served well if we forget 
how to make decisions well or even lose sight of the fact that 
we are still the ones making the decisions no matter how much 
we are relying on the technology. Because in that case, we are 
making decisions in the dark, and that is a terrible strategy 
for human progress.
    Chairman Peters. Thank you, Professor. Professor Hu.
    Ms. Hu. Thank you so much, Mr. Chair for that question. I 
think part of what is missing from the discussion is whether or 
not a fundamental assumption is being challenged by AI. That 
assumption is that the rule of law takes precedence over all 
other forms of power, and that the law can govern effectively, 
especially if you have these tech companies and AI seeing 
themselves as co-equals and being able to speak with the law as 
an equal. Therefore, you might have the difficulty of the AI, 
apparently, in some instances, being presented as something 
that can now take precedence over the law.
    If we are going to, I think, really address how the law 
will govern AI, I think we need to understand that that is the 
fundamental question. Under our constitutional democracy, 
Article 1 of the Constitution gives Congress the power to 
legislate. But how is AI trying to challenge that power?
    Chairman Peters. Thank you. Professor.
    Dr. Acemoglu. I think I have been emphasizing this for a 
while, but it is still, I believe, underappreciated that there 
are different directions in which we can develop AI tools, and 
that we have to make the right choices understanding what these 
different directions are.
    I am an economist. As I responded to Senator Hawley, of 
course, the profit motive and the benefits to corporations 
matter. Those are very important. But I think we are 
underestimating how much the founding ideology or vision of AI 
has influenced the way that the industry has developed.
    That founding vision, as I have argued in a number of 
different contexts, is that we have a desire or a social 
benefit from creating autonomous machine intelligence, meaning 
machines that are as intelligent as humans, and 
autonomous. Once we do that, a number of conclusions follow 
from that.
    One, is a lot of automation, because if machines are really 
intelligent and autonomous, then they should do a lot of the 
tasks that we do because they can perform them as well as we 
can. Second, much less need for human judgment because they 
are autonomous and intelligent. Third, much less emphasis on 
humans actually controlling them.
    But this vision has become completely foundational to the 
tech industry. A lot of the emphasis on general intelligence 
follows from that. I think it is very difficult to change the 
direction of the tech industry, with only regulation, unless we 
also cultivate different types of priorities among tech 
leaders, and the leading engineers, and computer scientists.
    That is why I have emphasized, and not just in my work but 
as many important people, including Professor Vallor, have also 
emphasized, that for example Norbert Wiener and many other 
inspiring scientists, as early as 1949 and the 1950s, came up 
with different visions, but those visions have been completely 
overshadowed by the artificial general intelligence or 
autonomous machine intelligence vision.
    Putting that on the table, encouraging a broader 
perspective on AI and encouraging or articulating the idea that 
having pro-human AI is both feasible and desirable is both 
missing, and I think quite important for the future of this 
industry. Thank you.
    Chairman Peters. Thank you. Another question for all three 
of you. This time, we will start with Professor Hu. We will mix 
it up here, then Professor Vallor, and Daron. This is a tough 
question, but given how complex this issue is and 
all of the issues that we have talked about, as lawmakers 
we have to distill things down to concrete actions that we need 
to take.
    If the United States government can do just one thing, one 
thing to increase the chances that AI is going to increase the 
well-being for everyone, not just a few, but for everyone, what 
would that one thing be given your areas of expertise? 
Professor Hu.
    Ms. Hu. I think the one thing that I would prioritize is an 
amendment to the Civil Rights Act of 1964, so that we 
incorporate, and then try to anticipate and address, the types of 
AI-driven civil rights concerns that we have seen over the 
last decade.
    I think that we can see this across the spectrum of the 
ways in which AI, automated systems, and algorithmic 
decisionmaking can cut across discrimination in the criminal 
justice context, in housing, in mortgage financing, and in 
employment. 
That would be the one thing that I would emphasize.
    Chairman Peters. Thank you. Professor Vallor.
    Dr. Vallor. I think I would emphasize examining the 
misaligned incentives that we have permitted in the AI 
ecosystem, particularly with the largest and most powerful 
players, and learn the lessons from the past where we have had 
success realigning the incentives of innovation with the public 
interest so that we can create clear and compelling penalties 
for companies who innovate irresponsibly, for companies that 
get it wrong because they have not put in the work to get it 
right.
    At the same time, we could perhaps cap the liabilities or 
reduce the risk for innovators who do invest in innovating 
safely and responsibly, and then want to find new ways of using 
those tools to benefit humans. Because we often see that some of 
the good actors hear about the risks of AI systems, the ways 
that they might fabricate falsehoods or amplify bias, and that 
can actually reduce innovation and narrow it to only those 
powerful actors who can afford to get it wrong.
    I think if we adjust those incentives, so that the best and 
most innovative actors in the ecosystem are rewarded for 
innovating responsibly, and the most powerful ones have to be 
held liable for producing harms at scale, then I think we can 
see a way forward that looks much more positive for AI.
    Chairman Peters. Thank you. Professor Acemoglu.
    Dr. Acemoglu. Thank you for this question. There is no 
silver bullet. But I think one of the first steps is to redress 
the fact that the vision of AI is pushing us more and more 
toward automation, surveillance, and monitoring.
    This is really an ecosystem. Senator Johnson pointed out 
the Eisenhower quote that government support for university 
scientists could have negative consequences because it makes 
scientists cater to the government's needs.
    Right now, it is actually much worse when it comes to AI. 
All leading computer scientists and AI scientists in leading 
universities are funded and get generous support from AI 
companies, and the leading digital platforms.
    It really creates an ecosystem in academia, as well as in 
the industry, where incentives are very much aligned toward 
pushing more and more for bigger and bigger models, and more 
and more of this machine intelligence vision and trying to 
automate a lot of work.
    I think if we want to have a fighting chance for an 
alternative, the government may need to invest in a new Federal 
agency which is tasked with doing the same things that the U.S. 
Government used to do, for example, with DARPA or with other 
agencies: playing a leadership role for new technologies. In 
this instance, that would be a more pro-worker, pro-citizen 
agenda.
    I think something along the lines of, for example, the 
National Institutes of Health (NIH), which has both expertise 
and funding for new research, could be very necessary for the 
field of AI, with an explicit aim of investing in things that 
are falling by the wayside and steering the field in more pro-
human, pro-worker, pro-citizen directions. Thank you.
    Chairman Peters. Thank you.
    Professor Vallor, you have written quite a bit about the 
connection between technology and human values. Would you share 
some concrete examples of this connection, and in particular, 
talk about, based on your research, how you see AI changing 
our basic societal values?
    Dr. Vallor. Sure, thank you. First of all, I think it is 
important to recognize, and you mentioned this in your opening 
remarks, that AI is not neutral. In fact, no technology is 
neutral. All technologies are mirrors of human values. Every 
technology that human beings have ever created has been a 
reflection of what humans at particular times and places 
thought was worth doing, or enabling, or building, or trying.
    But technologies also change what we value. If we think of 
these kinds of AI systems that we are building today trained on 
human-generated data that reflects the patterns of our own 
behaviors and past judgments, we are using AI much like a 
mirror. We are increasingly looking to AI to tell us what we 
value, to reflect our patterns and preferences, and to instruct 
us on the patterns and preferences that we and others hold.
    This, to me, creates a very perverse relationship between 
technology and values. Because instead of our most fundamental 
human values, the things that are connected most deeply to our 
need for shared human flourishing, instead of those driving the 
tools that we need--and this is to Professor Acemoglu's point 
that there is so much untapped opportunity to direct AI to 
address unmet needs in areas from health, to infrastructure, 
to the environment. But that is only if the values that 
connect us to shared human flourishing are what are driving 
those decisions.
    Instead, what is happening, and I mentioned this earlier in 
my testimony, is that we are looking at a mirror of ourselves 
with these systems that actually reflects very old patterns of 
historical valuation, very old prejudices, very old biases and 
priorities that do not, in fact, reflect the values of the 
human family as a whole.
    I think it is partly about being able to recognize what our 
values are without having to find them in the AI mirror. In 
that way, we can ensure that the technology continues to be 
shaped by the values that we hold most deeply.
    Chairman Peters. Thank you, Professor.
    Professor Acemoglu, a question, do you believe or would you 
argue that AI is either causing or going to cause increased 
dysfunction in government? How would we manage that? I think 
you have written on some of these areas.
    Dr. Acemoglu. I do not think right now AI is causing 
increased dysfunction in the government yet, except that I 
think we are falling behind the necessary regulation and 
building of the necessary expertise in AI in the government. It 
is wonderful to see this and several other Senate committees 
deal with the issues of AI because I think the lawmakers need 
to be at the forefront of it.
    But as we move forward, and AI systems become more widely 
used, exactly like my fellow witnesses have pointed out, we 
need to introduce the right safeguards for making sure that 
individual rights, including privacy rights, but more 
importantly, human and civil rights are correctly recognized 
and protected.
    I do not see huge issues there in the United States at the 
moment, but there are a few local law enforcement agencies that 
started using systems that are not very reliable for law 
enforcement. That needs to be brought under control.
    But you can see from China and other countries how the 
emphasis on surveillance and monitoring is already having a 
tremendous effect. It is particularly important for democratic 
countries to set the right legislation to ensure that both 
companies and government agencies are not tempted to follow 
China in the use of AI in the next decade.
    Dr. Vallor. Can I just briefly add to that?
    Chairman Peters. Please, go.
    Dr. Vallor. Just pointing out that that dynamic is an 
excellent example of how AI can in fact warp our human values 
because it can cause us to become increasingly resigned to 
control and efficiency as values that become more accepted and 
important than particular liberties and considerations of 
justice that are inscribed in our Constitution and in 
international law.
    I think it is also important to remain anchored in those 
value commitments that are written in those documents for a 
reason and ensure that we are not letting the direction of the 
technology that is currently ungoverned, undermine those very 
commitments.
    Chairman Peters. Thank you.
    Professor Hu, you have talked about changing the social 
contract, do you want to talk about that in relation to the----
    Ms. Hu. Yes, absolutely. I think that what we are seeing is 
really a quadrilateral situation with our social contract where 
the rights are being mediated and negotiated across the 
spectrum of not just the government and citizens as it used to 
be when we first established the social contract. But now, it 
is negotiated across the government, citizens, civil society, 
the tech companies, and then AI as the fourth vertex. I 
think that we need to think through whether or not that type of 
negotiation and mediation is consistent even with our 
constitutional democracy at all in the first instance.
    Chairman Peters. Thank you.
    Senator Rosen, you are recognized for your questions.

               OPENING STATEMENT OF SENATOR ROSEN

    Senator Rosen. Thank you, Chair Peters. This really is an 
important hearing, and thank you all for being here today.
    Lo and behold, you set me up perfectly for my question. We 
are going to follow-up about prioritizing values in AI because 
I am a former computer programmer and systems analyst, and so 
I understand how evolving technology can revolutionize how 
Americans work. But as you have already been talking about, in 
all technology there are traces of human values, whether in how 
it is used or in the math behind it, as with many large 
language models (LLMs).
    So human bias and human values, they are baked into the 
system in some form or fashion. Some LLMs, we know, perform 
better under pressure. For example, when a user tells a model 
that their job is at risk, or that people will be hurt if a 
certain decision is made, that is one thing. But those same 
human values can make large language models more fallible and 
easier to manipulate.
    I am going to go to Dr. Vallor, because in one recent 
study, we found it was easier to evade an AI system's safety 
mechanisms when the system thought the stakes were higher. You 
have talked about this with Senator Peters' question, what 
should we consider when balancing values like efficiency versus 
accuracy? In what contexts should more accuracy be required 
from the model than efficiency, and vice versa?
    Dr. Vallor. That is a great question. The answer is one 
that, I think, highlights the need to invest more in the kind 
of interdisciplinary expertise around these systems that is 
needed to make these kinds of decisions wisely. Because 
whether, for example, efficiency, or accuracy matters more 
depends entirely on the sociotechnical context that you are 
designing and deploying the system in.
    If you are using an AI system to make high stakes, 
irreversible decisions, right, where it is life or death and 
there is no undoing it if you get it wrong, then very clearly 
accuracy becomes a far more vital priority than efficiency. In 
a lower stakes environment where what you are trying to do is 
simply automate a process in a way that actually uses resources 
in the most efficient way so that you do not have a lot of 
waste, which is something, obviously, that from an 
environmental standpoint is of great urgency, right, then 
accuracy in that case, perhaps matters less than the efficiency 
with which the system can drive the operation.
    But one of the things that we have not talked about 
although I think it is in the background, is that AI is not 
just large language models. AI is not just even machine 
learning, right, AI is a host of many different kinds of 
technologies that are suited for different purposes and that 
work well in some environments and not in others.
    What I think we really need to see more investment in 
is the kind of combined technical, and social, and political, 
and legal expertise such that people understand the 
capabilities of the systems and their effects at the same time.
    Right now, what we have is a lot of very smart 
technologists who understand very well the emerging 
capabilities of the systems they are building, but have a very 
limited view of the contexts in which they are being deployed 
and the likely consequences. On the other hand, you have a lot 
of social scientists, and humanities researchers, and law 
scholars who understand those contexts deeply, but are often 
not given the opportunity to, or have not invested enough time 
themselves in, understanding the technical capabilities of the 
systems that they are proposing to regulate or govern.
    But at the University of Edinburgh and lots of other 
programs around the world, we are seeing more of an investment 
in that kind of interdisciplinary expertise that will be needed 
to govern these systems well, and perhaps this is something 
that Congress can accelerate as well.
    Senator Rosen. It is not enough to have a brain; maybe you 
have to have heart as well. That is when you marry both 
together in the easiest of ways. It is much more complicated 
than that, for sure. I have a few minutes left, and I want to 
move on from this. We could talk about this all day because I 
really think this is the fear and the future of AI. This is the 
fear that we have: will it have a heart, right? It will be 
really plenty smart, but we need more.
    But I want to move on to international cooperation because 
that is also really important. Last week, multiple countries, 
including China, India, and Japan, signed their first 
international agreement regarding AI, committing to develop 
and deploy AI in a safe and responsible way.
    But this agreement, of course, is very broad and only a 
first step. Dr. Acemoglu, in the United States, we hold 
ideals like equality, democracy, and freedom, values that are 
not always shared by our foreign adversaries. That is why 
``safe and responsible'' is in the eye of the beholder, 
perhaps.
    How do we ensure that international standards uphold these 
key values? Again, we are talking about values, the same as 
the last question: how do we prioritize them when we set 
standards of allowable AI uses?
    Dr. Acemoglu. That is a very important issue, and I do not 
think we can control what China will do directly. But we can 
form alliances with our allies and neutral countries for 
developing the right ethical standards. Both for this type of 
relationship with friendly countries and even for where the 
direction of innovation will go in China and Russia, I think 
U.S. scientific leadership is really important.
    The same sort of concerns that people are voicing about AI 
right now, were also voiced when it came to mitigating climate 
change. The key objection was: we can try to fight climate 
change in Europe and the United States, but China and India 
won't. But what we have seen over the last decade is that when 
the world, led by Western countries but some others as well, 
invests in renewables, renewables become attractive in China 
and India as well.
    I think it is the same thing. If we let China set the 
leadership, AI is going to go more and more in a direction that 
takes control and agency from humans; it does not value 
equality, and it emphasizes surveillance, monitoring, and 
censorship. But if we can take that leadership and push it in a 
direction that's much more pro-human, supporting of democracy, 
supporting of equality, I think it will even have beneficial 
effects on China.
    Senator Rosen. Mr. Chair, can I ask one last thing, a 
follow-up on this? Because we talked about the human and the 
values, and there is a workforce transition that we are 
grappling with. Dr. Acemoglu, you have been talking about pro-
worker use of AI. Could you explain to us here in Congress what 
you mean by that? Because all of this does not happen on its 
own, and we have to prepare our workforce all across the 
spectrum. Could you, just as a finish, explain that to us?
    Dr. Acemoglu. Absolutely.
    I think it is very important. If I may, I will go back to 
the values discussion that Professor Vallor and Senator Peters 
had, and you contributed as well. The values that we are going 
to have for AI really depend on the direction in which we use 
AI and who controls AI.
    If AI is in the hands of a few elite entrepreneurs and tech 
barons, that's going to be a very different direction of AI. We 
want to put the resources into helping low-skilled workers. The 
more we do not put in the resources to train them, to provide 
them with the technologies and the knowledge, the more they 
will appear to us to be useless. That will centralize AI; that 
will centralize all of the resources.
    Part of the pro-worker agenda is using these AI tools to 
help the workers and provide them with better information. I 
think with the capabilities that we have, with even the 
existing knowledge in AI research, we can provide much better 
information so that educators become better decisionmakers. 
Skilled craftspeople like electricians and plumbers become 
better decisionmakers. Healthcare workers and nurses become 
better decisionmakers. Blue collar workers can do much better. 
We have seen in a few companies how the right type of augmented 
reality and AI tools, used to train them in precision work, can 
increase their productivity. There is a lot we can do for the 
workers, and there is a lot that we can do for democracy.
    Senator Rosen. Thank you. That is a great way to finish. I 
appreciate it.
    Chairman Peters. Thank you, Senator Rosen.
    I would like to take this opportunity to thank our 
witnesses. Thank you for being here. We are certainly all 
grateful for the work you do and for your contribution to this 
very important conversation. Today's hearing offered a new 
perspective on artificial intelligence. Our witnesses helped us 
step back from some of the exciting developments and the hype 
of AI, and consider historical, ethical, and philosophical 
questions that this technology poses.
    In order to ensure that AI truly works on behalf of the 
American people, we cannot stop here. We must continue this 
deeper examination of our values, our agency, and the future that 
we want from this technology as we build it in the years ahead.
    The record for this hearing will remain open for 15 days 
until 5 p.m. on November 23, 2023, for the submission of 
statements and questions for the record.
    This hearing is now adjourned.
    [Whereupon, at 11:21 a.m., the hearing was adjourned.]

                            A P P E N D I X

                              ----------                              

[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]

                                 [all]