[Senate Hearing 118-108]
[From the U.S. Government Publishing Office]
S. Hrg. 118-108
OVERSIGHT OF A.I.:
PRINCIPLES FOR REGULATION
=======================================================================
HEARING
BEFORE THE
SUBCOMMITTEE ON PRIVACY,
TECHNOLOGY, AND THE LAW
of the
COMMITTEE ON THE JUDICIARY
UNITED STATES SENATE
ONE HUNDRED EIGHTEENTH CONGRESS
FIRST SESSION
__________
JULY 25, 2023
__________
Serial No. J-118-27
__________
Printed for the use of the Committee on the Judiciary
__________
U.S. GOVERNMENT PUBLISHING OFFICE
53-503 PDF WASHINGTON : 2024
-----------------------------------------------------------------------------------
COMMITTEE ON THE JUDICIARY
RICHARD J. DURBIN, Illinois, Chair
LINDSEY O. GRAHAM, South Carolina, Ranking Member
Majority: DIANNE FEINSTEIN, California; SHELDON WHITEHOUSE, Rhode Island; AMY KLOBUCHAR, Minnesota; CHRISTOPHER A. COONS, Delaware; RICHARD BLUMENTHAL, Connecticut; MAZIE K. HIRONO, Hawaii; CORY A. BOOKER, New Jersey; ALEX PADILLA, California; JON OSSOFF, Georgia; PETER WELCH, Vermont
Minority: CHARLES E. GRASSLEY, Iowa; JOHN CORNYN, Texas; MICHAEL S. LEE, Utah; TED CRUZ, Texas; JOSH HAWLEY, Missouri; TOM COTTON, Arkansas; JOHN KENNEDY, Louisiana; THOM TILLIS, North Carolina; MARSHA BLACKBURN, Tennessee
Joseph Zogby, Chief Counsel and Staff Director
Katherine Nikas, Republican Chief Counsel and Staff Director
Subcommittee on Privacy, Technology, and the Law
RICHARD BLUMENTHAL, Connecticut, Chair
JOSH HAWLEY, Missouri, Ranking Member
Majority: AMY KLOBUCHAR, Minnesota; CHRISTOPHER A. COONS, Delaware; MAZIE K. HIRONO, Hawaii; ALEX PADILLA, California; JON OSSOFF, Georgia
Minority: JOHN KENNEDY, Louisiana; MARSHA BLACKBURN, Tennessee; MICHAEL S. LEE, Utah; JOHN CORNYN, Texas
David Stoopler, Democratic Chief Counsel
John Ehrett, Republican Chief Counsel
C O N T E N T S
----------
JULY 25, 2023, 3:07 P.M.
STATEMENTS OF COMMITTEE MEMBERS
Blumenthal, Hon. Richard, a U.S. Senator from the State of Connecticut
Hawley, Hon. Josh, a U.S. Senator from the State of Missouri
Klobuchar, Hon. Amy, a U.S. Senator from the State of Minnesota
WITNESSES
Witness List
Amodei, Dario, chief executive officer, Anthropic, San Francisco, California
    prepared statement
Bengio, Yoshua, founder and scientific director, Mila--Quebec AI Institute, and professor, Department of Computer Science and Operations Research, Universite de Montreal, Quebec, Canada
    prepared statement
Russell, Stuart, professor of computer science, University of California, Berkeley, Berkeley, California
    prepared statement
MISCELLANEOUS SUBMISSION FOR THE RECORD
Submitted by Ranking Member Hawley:
    ``Cleaning Up ChatGPT Takes Heavy Toll on Human Workers,'' Wall Street Journal, July 24, 2023
OVERSIGHT OF A.I.:
PRINCIPLES FOR REGULATION
----------
TUESDAY, JULY 25, 2023
United States Senate,
Subcommittee on Privacy, Technology,
and the Law,
Committee on the Judiciary,
Washington, DC.
The Subcommittee met, pursuant to notice, at 3:07 p.m., in
Room 226, Dirksen Senate Office Building, Hon. Richard
Blumenthal, Chair of the Subcommittee, presiding.
Present: Senators Blumenthal [presiding], Klobuchar,
Ossoff, Hawley, and Blackburn.
OPENING STATEMENT OF HON. RICHARD BLUMENTHAL,
A U.S. SENATOR FROM THE STATE OF CONNECTICUT
Chair Blumenthal. This hearing of the Privacy and
Technology Subcommittee will come to order. Thank you to our
three witnesses for being here, I know you've come a long
distance, and to the Ranking Member, Senator Hawley, for being
here, as well, on a day when many of us are flying back. I got
off a plane less than an hour ago, so forgive me for
being a little bit late. I know many of you have flown in, as
well. And thank you to all of our audience, and many are
outside the hearing room.
Some of you may recall at the last hearing I began with a
voice, not my voice, although it sounded exactly like mine
because it was taken from floor speeches, and an introduction,
not my words but concocted by ChatGPT, that actually mesmerized
and deeply frightened a lot of people who saw and heard it.
The opening today, my opening, at least, is not going to be
as dramatic, but the fears that I heard as I went back to
Connecticut--and also heard from people around the country,
were supported by that kind of voice impersonation and content
creation. And what I have heard, again and again and again, and
the word that has been used so repeatedly, is ``scary''--
``scary,'' when it comes to artificial intelligence.
And as much as I may tell people, ``You know, there's
enormous good here, potential for benefits in curing diseases,
helping to solve climate change, workplace efficiency,'' what
rivets their attention is the science fiction image of an
intelligence device out of control, autonomous, self-
replicating, potentially creating diseases, pandemic-grade
viruses, or other kinds of evils purposely engineered by people
or simply the result of mistakes, not malign intention. And,
frankly, the nightmares are reinforced, in a way, by the
testimony that I've read from each of you.
In no way disparagingly do I say that those fears are
reinforced, because I think you have provided objective, fact-
based views on what the dangers are and the risks and
potentially even human extinction: an existential threat, which
has been mentioned by many more than just the three of you,
experts who know firsthand the potential for harm. But these
fears need to be addressed, and I think can be addressed,
through many of the suggestions that you are making to us and
others, as well.
I've come to the conclusion that we need some kind of
regulatory agency, but not just a reactive body, not just a
passive, rules-of-the-road maker, edicts on what guardrails
should be, but actually investing proactively in research so
that we develop countermeasures against the kind of autonomous,
out-of-control scenarios that are potential dangers: an
artificial intelligence device that is, in effect, programmed
to resist any turning off, or a decision by AI to begin a nuclear reaction to a nonexistent attack.
The White House certainly has recognized the urgency with a
historic meeting of the seven major companies which made eight
profoundly significant commitments, and I commend and thank the
President of the United States for recognizing the need to act.
But we all know, and you have pointed out in your testimony,
that these commitments are unspecific and unenforceable. A
number of them, on the most serious issues, say that they will
give attention to the problem. All good, but it's only a start.
And I know the doubters about Congress and about our
ability to act, but the urgency here demands action. The future
is not science fiction or fantasy. It's not even the future.
It's here and now. And a number of you have put the timeline at
2 years before we see some of the biological, most severe
dangers. It may be shorter, because the pace of development is not only stunningly fast, it is also accelerating because of the quantity of chips, the speed of chips, and the effectiveness of algorithms. It
is an inexorable flow of development. We can condemn it, we can
regret it, but it is real.
And the White House's principles actually align with a lot
of what we have said among us in Congress and, notably, in the
last hearing that we held. We're here now because AI is already
having a significant impact on our economy, safety, and
democracy. The dangers are not just extinction but loss of jobs, potentially one of the worst nightmares that we have.
Each day, these issues are more common, more serious, and more
difficult to solve, and we can't repeat the mistakes that we
made on social media, which was to delay and disregard the
dangers.
So, the goal for this hearing is to lay the groundwork for
legislation, go from general principles to specific
recommendations, to use this hearing to write real laws,
enforceable laws.
In our past two hearings, we heard from panelists that
Section 230, the legal shield that protects social media,
should not apply to AI. Based on that feedback, Senator Hawley
and I introduced the No Section 230 Immunity for AI Act.
Building on our previous hearing, I think there are core
standards that we are building bipartisan consensus around.
And I welcome hearing from many others on these potential
rules: establishing a licensing regime for companies that are
engaged in high-risk AI development, a testing and auditing
regimen by objective third parties or by, preferably, the new
entity that we will establish, imposing legal limits on certain
uses related to elections--Senator Klobuchar has raised this
danger directly--related to nuclear warfare--China apparently
agrees that AI should not govern the use of nuclear warfare--
requiring transparency about the limits and use of AI models.
This includes watermarking, labeling, disclosure when AI is
being used, and data access--data access for researchers.
So, I appreciate the commitments that have been made by
Anthropic, OpenAI, and others at the White House related to
security testing and transparency last week. It shows these
goals are achievable and that they will not stifle innovation,
which has to be an objective--avoid stifling innovation. We need
to be creative about the kind of agency or entity, the body or
administration. It can be called an administration, an office.
I think the language is less important than its real
enforcement power and the resources invested in it.
We are really lucky--very, very fortunate to be joined by
three true experts today, one of the most distinguished panels
I have seen in my time in the United States Congress, which is
only about 12 years: the chief executive of one of the leading AI companies, which was
founded with the goal of developing AI that is helpful, honest,
and harmless; a researcher whose groundbreaking work led him to
be recognized as one of the godfathers of AI; and a computer
science professor whose publications and testimony on the
ethics of AI have shaped regulatory efforts like the EU AI Act.
So, welcome to all of you, and thank you so much for being
here. I turn to the Ranking Member, Senator Hawley.
OPENING STATEMENT OF HON. JOSH HAWLEY,
A U.S. SENATOR FROM THE STATE OF MISSOURI
Senator Hawley. Thank you very much, Mr. Chairman. Thanks
to all of our witnesses for being here. I want to start by
thanking the Chairman, Senator Blumenthal, for his terrific
work on these hearings. It's been a privilege to get to work
with him. These have been incredibly substantive hearings. I'm
really looking forward to hearing from each of you today.
I want to thank his staff for their terrific work. It takes
a lot of effort to put together hearings of this substance.
And I want to thank Senator Blumenthal for being willing to do
something about this problem. As he alluded to a moment ago, he
and I, a few weeks ago, introduced the first bipartisan bill to
put safeguards around AI development--the first bill to be
introduced in the United States Senate, which will protect the
right of Americans to vindicate their privacy, their personal
safety, and their interests in court against any company that
would develop or deploy AI.
This is an absolutely critical foundational right. You can
give Americans paper rights, parchment rights, as our Founders
said, all you want. If they can't get into court to enforce
them, they don't mean anything. And so, I think it's
significant that our first bipartisan effort is to guarantee
that every American will have the right to vindicate their
rights, their interests, their privacy, their data protection,
their kids' safety, in court. And I look forward to more to
come with Senator Blumenthal and with other Members who I know
are interested in this.
I think that, for my part, I have expressed my own sense of
what our priorities ought to be when it comes to legislation.
It's very simple: workers; kids; consumers; and national
security. As AI develops, we've got to make sure that we have
safeguards in place that will ensure this new technology is
actually good for the American people.
I'm confident it'll be good for the companies. I have no
doubt about that. The biggest companies in the world, who
currently make money hand over fist in this country and benefit
from our laws, I know they'll be great: Google, Microsoft,
Meta--many of whom have invested in the companies we're going
to talk to today. And we'll get into that a little bit more in
just a minute, but I'm confident they're going to do great.
What I'm less confident of is that the American people are
going to do all right. So, I'm less interested in the
corporations' profitability. In fact, I'm not interested in
that at all. I'm interested in protecting the rights of
American workers and American families and American consumers
against these massive companies that threaten to become a total
law unto themselves.
You want to talk about a dystopia? Imagine a world in which
AI is controlled by one or two or three corporations that are
basically governments unto themselves and then, the United
States Government, and foreign entities. Talk about a massive
accretion of power from the people to the powerful. That is the
true nightmare. And for my money, that is what this body has
got to prevent. We want to see technology developed in a way
that actually benefits the people, the workers, the kids, and
the families of this country.
And I think the real question before Congress is, will
Congress actually do anything? Senator Blumenthal, I think, put
his finger on it precisely. I mean, look at what this Congress
did, or did not do, with regard to these very same companies,
these same behemoth companies, when it came to social media.
It's all the same players. Let's be honest. We're talking about
the same people in AI as we were in social media. It's Google,
again. It's Microsoft. It's Meta. It's all the same people.
And what I notice is, in my short time in the Senate,
there's a lot of talk about doing something about Big Tech and
absolutely zero movement to actually put meaningful legislation
on the floor of the United States Senate and do something about
it.
So, I think the real question is, will the Senate actually
act? Will the leadership in both parties--both parties--will it
actually be willing to act? We've had a lot of talk, but now is
the time for action. And I think if the urgency of the new
generative AI technology does not make that clear to folks,
then you'll never be convinced. And to me, that really defines
the urgent needs of this moment. Thank you, Mr. Chairman.
Chair Blumenthal. I'm going to turn to Senator Klobuchar in
case she has some remarks.
Senator Klobuchar. Thank you. A woman of action, I hope,
Senator Hawley.
Chair Blumenthal. Definitely a woman of action and someone
who has invested a lot of time and----
OPENING STATEMENT OF HON. AMY KLOBUCHAR,
A U.S. SENATOR FROM THE STATE OF MINNESOTA
Senator Klobuchar. Yes. Well, I just want to thank both of
you for doing this. I mostly just want to hear from the
witnesses.
I do agree with both Senator Blumenthal and Senator Hawley: This is the moment. And this has been bipartisan so far: the work that Senator Schumer and Senator Young are doing, the work that is going on in this Subcommittee with the two of you, and the work Senator Hawley and I are also engaged in on some of the other issues related to this.
I actually think that if we don't act soon, we could decay
into not just partisanship but inaction. And the point that
Senator Hawley just made is right. We didn't get ahead of--the
Congress didn't get ahead with Section 230 and the like and
some of the things that were done for maybe good reasons at the
time and then didn't do anything.
And now you've got kids getting addicted to fentanyl that they get online, you've got privacy issues, you've got kids being exposed to content they shouldn't see, you've got small businesses that have been pushed down search engines, and the like. And I still think we can fix some of
that, but this is certainly a moment to engage.
And I'm actually really excited about what we can get done,
the potential for good here, but what we can do to put in
guardrails and have an American way of putting things in place
and not just defer to the rest of the world, which is what's
starting to happen on some of the other topics I raised.
So, I'm particularly interested, though it is not as much our focus today, in the election side and democracy: making sure that we do not have these ads that aren't the real people, whatever political party people are with, that we give voters the information they need to make a decision, and that we are able to protect our democracy. And there's some good work
being done on that front. So, thank you.
Chair Blumenthal. Let me introduce the witnesses and seize
this moment to let you have the floor.
We're going to be joined by Dario Amodei, who is the CEO of
Anthropic, an AI safety and research company. It's a public
benefit corporation dedicated to building steerable AI systems
that people can rely on and generating research about the
opportunities and risks of AI. Anthropic's AI assistant,
Claude, is based on its research into training helpful, honest,
and harmless AI systems.
Yoshua Bengio is a worldwide-recognized leading
expert in artificial intelligence. He is known for his
conceptual and engineering breakthroughs in artificial neural
networks and deep learning. He pioneered many of the
discoveries and advances that have led us to this point today.
And he's a full professor in the Department of Computer Science
and Operations Research at the University of Montreal, and the
founder and scientific director of Mila--Quebec Artificial
Intelligence Institute, one of the largest academic institutes
in deep learning and one of the three federally funded centers
of excellence in AI research and innovation in Canada. With
apologies, I'm not going to repeat all the awards and
recognitions that you've received, because it would probably
take the rest of the afternoon.
We're also honored to be joined by Stuart Russell. He
received his B.A. with first-class honors in physics from
Oxford University in 1982 and his Ph.D. in computer science
from Stanford, 1986. He then joined the faculty at the
University of California at Berkeley, where he is professor and
formerly chair of Electrical Engineering and Computer Sciences
and the holder of the Smith-Zadeh Chair in Engineering,
director of the Center for Human-Compatible AI, and director of
the Kavli Center for Ethics, Science, and the Public. He's also
served as an adjunct professor of neurological surgery at UC
San Francisco.
Again, many honors and recognitions all of you have
received.
In accordance with the custom of our Committee, I'm going
to ask you to stand and take an oath.
[Witnesses are sworn in.]
Chair Blumenthal. Thank you. Mr. Amodei, we'll begin with
you.
STATEMENT OF DARIO AMODEI, CHIEF EXECUTIVE OFFICER, ANTHROPIC,
SAN FRANCISCO, CALIFORNIA
Mr. Amodei. Chairman Blumenthal, Ranking Member Hawley, and
Members of the Committee, thank you for the opportunity to
discuss the risks and oversight of AI with you. Anthropic is a
public benefit corporation that aims to lead by example in
developing and publishing techniques to make AI systems safer
and more controllable and by deploying these safety techniques
in state-of-the-art models.
Research conducted by Anthropic includes constitutional AI,
a method for training AI systems to behave according to an
explicit set of principles; early work on red teaming, or
adversarial testing of AI systems to uncover bad behavior; and
foundational work in AI interpretability, the science of trying
to understand why AI systems behave the way they do. This
month, after extensive testing, we were proud to launch our AI
model Claude 2 for U.S. users. Claude 2 puts many of these
safety improvements into practice. While we're the first to
admit that our measures are still far from perfect, we believe
they're an important step forward in a race to the top on
safety. We hope we can inspire other researchers and companies
to do even better.
AI will help our country accelerate progress in medical
research, education, and many other areas. As you said in your
opening remarks, the benefits are great. I would not have
founded Anthropic if I did not believe AI's benefits could
outweigh its risks. However, it is critical that we
address the risks.
My written testimony covers three categories of risks:
short-term risks that we face right now, such as bias, privacy,
misinformation; medium-term risks related to misuse of AI
systems as they become better at science and engineering tasks;
and long-term risks related to whether models might threaten
humanity as they become truly autonomous, which you also
mentioned in your opening testimony.
In these short remarks, I want to focus on the medium-term
risks, which present an alarming combination of imminence and
severity.
Specifically, Anthropic is concerned that AI could empower
a much larger set of actors to misuse biology. Over the last 6
months, Anthropic, in collaboration with world-class
biosecurity experts, has conducted an intensive study of the
potential for AI to contribute to the misuse of biology.
Today, certain steps in bioweapons production involve
knowledge that can't be found on Google or in textbooks and
requires a high level of specialized expertise, this being one
of the things that currently keeps us safe from attacks.
We've found that today's AI tools can fill in some of these
steps, albeit incompletely and unreliably. In other words, they
are showing the first nascent signs of danger. However, a
straightforward extrapolation of today's systems to those we
expect to see in 2 to 3 years suggests a substantial risk that
AI systems will be able to fill in all the missing pieces,
enabling many more actors to carry out large-scale biological
attacks. We believe this represents a grave threat to U.S.
national security.
We have instituted mitigations against these risks in our
own deployed models; briefed a number of U.S. Government
officials, all of whom found the results disquieting; and are
piloting a responsible disclosure process with other AI
companies, to share information on this and similar risks.
However, private action is not enough. This risk, and many
others like it, requires a systemic policy response.
We recommend three broad classes of actions.
First, the U.S. must secure the AI supply chain in order to
maintain its lead while keeping these technologies out of the
hands of bad actors. This supply chain runs from semiconductor
manufacturing equipment to chips and even the security of AI
models stored on the servers of companies like ours.
Second, we recommend a testing and auditing regime for new
and more powerful models. Similar to cars or airplanes, AI
models of the near future will be powerful machines that
possess great utility but can be lethal if designed incorrectly
or misused. New AI models should have to pass a rigorous
battery of safety tests before they can be released to the
public at all, including tests by third parties and national
security experts in Government.
Third, we should recognize that the science of testing and
auditing for AI systems is in its infancy. It is not currently
easy to detect all the bad behaviors an AI system is capable of
without first broadly deploying it to users, which is what
creates the risk. Thus, it is important to fund both
measurement and research on measurement to ensure a testing and
auditing regime is actually effective. Funding NIST and funding the National AI Research Resource are two examples of ways to
ensure America leads here.
The three directions above are synergistic. Responsible
supply chain policies help give America enough breathing room
to impose rigorous standards on our own companies without
ceding our national lead to adversaries, and funding
measurement, in turn, makes these rigorous standards
meaningful. The balance between mitigating AI's risks and
maximizing its benefits will be a difficult one, but I'm
confident that our country can rise to the challenge. Thank
you.
[The prepared statement of Mr. Amodei appears as a
submission for the record.]
Chair Blumenthal. Thank you very much. Why don't we go to
Mr. Bengio.
STATEMENT OF YOSHUA BENGIO, FOUNDER AND SCIENTIFIC DIRECTOR,
MILA--QUEBEC AI INSTITUTE, AND PROFESSOR, DEPARTMENT OF
COMPUTER SCIENCE AND OPERATIONS RESEARCH, UNIVERSITE DE
MONTREAL, QUEBEC, CANADA
Professor Bengio. Chairman Blumenthal, Ranking Member
Hawley, Members of the Judiciary Committee, thank you for the
invitation to speak today. The capabilities of AI systems have
steadily increased over the last two decades, thanks to
advances in deep learning that I and others introduced. While
this revolution has the potential to enable tremendous progress
and innovation, it also entails a wide range of risks, from
immediate ones like discrimination, to growing ones like
disinformation, and even more concerning ones in the future
like loss of control of superhuman AIs.
Recently, I, and many others, have been surprised by the
giant leap realized by systems like ChatGPT to the point where
it becomes difficult to discern whether one is interacting with
another human or a machine. These advancements have led many
top AI researchers, including myself, to revise our estimates
of when human-level intelligence could be achieved. Previously
thought to be decades or even centuries away, we now believe it
could be within a few years or decades.
The shorter timeframe, say, 5 years, is really worrisome
because we'll need more time to effectively mitigate the
potentially significant threats to democracy, national
security, and our collective future. As Sam Altman said here,
if this technology goes wrong, it could go terribly wrong.
These severe risks could arise either intentionally, because of
malicious actors using AI systems to achieve harmful goals, or
unintentionally, if an AI system develops strategies that are
misaligned with our values and norms.
I would like to emphasize four factors that governments can
focus on in their regulatory efforts to mitigate all AI harms
and risks. First, access: limiting who has access to powerful
AI systems, structuring the proper protocols, duties,
oversight, and incentives for them to act safely. Second,
alignment: ensuring that AI systems will act as intended, in
agreement with our values and norms. Third, raw intellectual
power, which depends on the level of sophistication of the algorithms and the scale of computing resources and of datasets. And fourth, scope of action: the potential for harm an AI system can effect, whether indirectly, for example, through human actions, or directly, for example, through the internet. So,
looking at risks through the lens of each of these four
factors--access, alignment, intellectual power, and scope of
action--is critical to designing appropriate Government
intervention.
I firmly believe that urgent efforts, preferably in the
coming months, are required in the following three areas.
First, the coordination of highly agile national and
international regulatory frameworks and liability incentives
that bolster safety. This would require licenses for people and
organizations with standardized duties to evaluate and mitigate
potential harm, allow independent audits, and restrict AI
systems with unacceptable levels of risk.
Second, because the current methodologies are not
demonstrably safe, significantly accelerate global research
endeavors focused on AI safety, enabling the informed creation
of essential regulations, protocols, safe AI methodologies, and
governance structures.
And, third, research on countermeasures to protect society
from potential rogue AIs, because no regulation is going to be
perfect. This research in AI and international security should
be conducted with several highly secure and decentralized labs
operating under multilateral oversight to mitigate an AI arms
race.
Given the significant potential for detrimental
consequences, we must therefore allocate substantial additional
resources to safeguard our future, at least as much as we are
collectively globally investing in increasing the capabilities
of AI. I believe we have a moral responsibility to mobilize our
greatest minds and make major investments in a bold and
internationally coordinated effort to fully reap the economic
and social benefits of AI while protecting society and our
shared future against its potential perils.
Thank you for your attention to this pressing matter. I
look forward to your questions.
[The prepared statement of Professor Bengio appears as a
submission for the record.]
Chair Blumenthal. Thank you very much, Professor. Professor
Russell?
STATEMENT OF STUART RUSSELL, PROFESSOR OF COMPUTER SCIENCE,
UNIVERSITY OF CALIFORNIA, BERKELEY, BERKELEY, CALIFORNIA
Professor Russell. Thank you, Chair Blumenthal and Ranking
Member Hawley and Members of the Subcommittee, for the
invitation to speak today and for your excellent work on this
vital issue. AI, as we all know, is the study of how to make
machines intelligent. Its stated goal is general purpose
artificial intelligence, sometimes called AGI or artificial
general intelligence, machines that match or exceed human
capabilities in every relevant dimension.
The last 80 years have seen a lot of progress toward that
goal. For most of that time, we created systems whose internal
operations we understood, drawing on centuries of work in
mathematics, statistics, philosophy, and operations research.
Over the last decade, that has changed. Beginning with vision
and speech recognition and now with language, the dominant
approach has been end-to-end training of circuits with billions
or trillions of adjustable parameters. The success of these
systems is undeniable, but their internal principles of
operation remain a mystery. This is particularly true for the
large language models, or LLMs, such as ChatGPT.
Many researchers now see AGI on the horizon. In my view,
LLMs do not constitute AGI, but they are a piece of the puzzle.
We're not sure what shape the piece is yet or how it fits into
the puzzle, but the field is working hard on those questions,
and progress is rapid. If we succeed, the upside could be
enormous. I've estimated a cash value of at least $14
quadrillion for this technology, a huge magnet in the future
pulling us forward.
On the other hand, Alan Turing, the founder of computer
science, warned in 1951 that once AI outstrips our feeble
powers, we should have to expect the machines to take control.
We have pretty much completely ignored this warning. It's as if
an alien civilization warned us by email of its impending
arrival, and we replied, ``Humanity is currently out of the
office.'' Fortunately, humanity is now back in the office and
has read the email from the aliens.
Of course, many of the risks from AI are well recognized
already, including bias, disinformation, manipulation, and
impacts on employment. I'm happy to discuss any of these, but
most of my work over the last decade has been on the problem of
control: How do we maintain power forever over entities more
powerful than ourselves?
The core problem we have studied comes from AI systems
pursuing fixed objectives that are mis-specified, the so-called
King Midas problem. For example, social media algorithms were
trained to maximize clicks and learned to do so by manipulating
human users and polarizing societies. But with LLMs, we don't
even know what their objectives are. They learn to imitate
humans and probably absorb all-too-human goals in the process.
Now, regulation is often said to stifle innovation, but
there is no real tradeoff between safety and innovation. An AI
system that harms human beings is simply not good AI. And I
believe analytic predictability is as essential for safe AI as
it is for the autopilot on an airplane. This Committee has
discussed ideas such as third-party testing, licensing,
a national agency, and an international coordinating body--all of
which I support.
Here are some more ways to, as it's said, move fast and fix
things. First, an absolute right to know if one is interacting
with a person or a machine. Second, no algorithms that can
decide to kill human beings, particularly when attached to
nuclear weapons. Third, a kill switch that must be activated if
systems break into other computers or replicate themselves.
Fourth, go beyond the voluntary steps announced last Friday:
Systems that break the rules must be recalled from the market
for anything from defaming real individuals to helping
terrorists build biological weapons.
Now, developers may argue that preventing these behaviors
is too hard, because LLMs have no notion of truth and are just
trying to help. This is no excuse. Eventually, and the sooner
the better, I would say, we will develop forms of AI that are
provably safe and beneficial, which can then be mandated. Until
then, we need real regulation and a pervasive culture of
safety. Thank you.
[The prepared statement of Professor Russell appears as a
submission for the record.]
Chair Blumenthal. Thank you very much. I'll begin the
questioning. We're going to have 7-minute rounds. I expect
we'll have many more than one, given the challenges and
complexity that you all have raised so eloquently.
I have to say, Professor Russell, you also, in your written testimony, recount a remark of Lord Rutherford on September 11th, 1933, at a conference, when he was
asked about atomic energy, and he said, quote, ``Anyone who
looks for a source of power in the transformation of the atoms
is talking moonshine,'' end quote.
The ideas about the limits of human ingenuity have been
proven wrong, again and again and again, and we've managed to
do things that people thought unthinkable, whether it's the
Manhattan Project under the guidance of Robert Oppenheimer, who
now has become a boldface term in popular print, or putting man
on the moon, which many thought was impossible to do.
So, we know how to do big things. This is a big thing that
we must do, and we have to be back in the office to answer that
email that is, in fact, a siren blaring for everyone to hear
and see: AI is here, and beware of what it will do if we don't
do something to control it. And not just in some distant point
in the future but, as all of you have said, with a time horizon
that would've been thought unimaginable just a few years ago.
Unimaginably quick.
Let me ask each of you--because part of that time horizon
is our next election, in 2024, and if there's anything that focuses the attention of Congress, it is an election. Nothing better than an election to focus the attention of Congress. Let
me ask each of you what you see as the immediate threats to the
integrity of our election system, whether it's the result of
misinformation or manipulation of electoral counts or any of
the possible areas where you see an immediate danger as we go
into this next election. I'll begin with you, Mr. Amodei.
Mr. Amodei. Yes. So, thanks for the question, Senator. You
know, I think this is obviously a very timely thing to worry
about. You know, when I think of the risks here, my mind goes
to misinformation, generation of deepfakes, use of AI systems
to manipulate people or produce propaganda or just do anything
deceptive.
You know, I can speak a little bit about some of the things
we're doing. You know, we train our model with, you know, this
method called constitutional AI, where you can lay out explicit
principles. It doesn't mean the model will follow the
principles, but there are terms in our constitution, which is
publicly available, that tells the model not to generate
misinformation. The same is true in our business terms of use.
One of the commitments with the White House was to start to
watermark content, particularly in the audio and the visual
domain. I think that's very helpful but would also benefit
from--watermarking gives you the technical capability, you
know, to detect that something is AI generated, but requiring
it on the side of the law to be labeled, I think, would be
something that would be very helpful and timely.
Chair Blumenthal. Thank you. Mr. Bengio?
Professor Bengio. I agree with all of that. I will add a
few things. One concern I have is that even if companies use
watermarking, and especially because there are now several open-source versions to train LLMs or use them, including model
weights that have been made available to the global community,
we also need to understand how things can go wrong on that
front. In other words, people are not all going to obey that
law.
And one important thing I'm concerned about is that one can take a pretrained model, say by a company that made it public, and then, without huge computing resources--so, not the hundred-million-dollar cost that it takes to train them, but something very cheap--tune these systems to a particular task, which could be to play the game of being a troll, for example. There are plenty of examples of that to train them on, or other examples for generating deepfakes in a way that might be more powerful than what we've seen up to now. So, I don't know how to fix
this, but I want to bring that to the attention of this
Committee.
Chair Blumenthal. Thank you. Well, on that point, and on
both of the excellent points that you both have raised, I would
invite fixes, and----
Professor Bengio. Well, I mean, one immediate fix is to
avoid releasing more of these pretrained large models. That's
the thing that governments can do, because right now, very few
companies, including, you know, the seven you brought last
week, can do that. And so that's a place where Government can
act.
Chair Blumenthal. Professor Russell?
Professor Russell. Yes, I would certainly like to support
the remarks of the other two witnesses. And I would say my
major concern with respect to elections would be disinformation
and, particularly, external influence campaigns, because with
these systems, we can present to the system a great deal of
information about an individual, everything they've ever
written or published on Twitter or Facebook, their social media
presence, their floor speeches, and train the system and ask it
to generate a disinformation campaign particularly for that
person. And then we can do that for a million people before
lunch. And that has a far greater effect than, you know, the
sort of spamming and broadcasting of false information that
isn't tailored to the individual.
I think labeling is important. For text, it's going to be
very difficult to tell whether a short piece of text is machine
generated, if someone doesn't want you to know that it's
machine generated. I think an important proposal from the
Global Partnership on AI is actually for a kind of an escrow,
an encrypted storage where every output from a model is stored
in an encrypted form, enabling, for example, a platform to
check whether a piece of text that's uploaded is actually
machine generated by testing it against the escrow storage
without revealing private information, et cetera. So, that can
be done.
Another problem we face is that there are many, many
extremely well-intended efforts to create standards around
labeling and how platforms should respond to labels, in terms
of what should be posted, and media organizations like the BBC,
The New York Times, Wall Street Journal, et cetera, et cetera--
there are dozens of these coalitions. The effort is very
fragmented, and, you know, there are as many standards as there
are coalitions. I think it really needs national and probably
international leadership to bring these together, to have
pretty much a unified approach and standards that all
organizations can sign up to.
And, third, I think there's a lot of experience in other
spheres such as in the equity markets, in real estate, in the
insurance business, where truth is absolutely essential. If you
take the equity markets, if companies can make up their
quarterly figures, then the equity markets collapse. And so
we've developed this whole regulated third-party structure of
accountants' audits, so that the information is reasonably
trustworthy. In real estate, we have title registries, we have
notaries, all kinds of stuff to make it work.
We don't really have that structure in the public
information sphere. And we see, you know, again, it's very
fragmented. There's FactCheck.org, there's Snopes, there's--I
suppose Elon Musk is going to have his TruthGPT, and so on.
Again, this is something that I think governments can help, in
terms of licensing and standards for how those organizations
should function and, again, what platforms do with the
information that the third-party institutions supply to enable
users to have access to high-quality information streams. So, I
think there's quite a lot we can do, but it's pretty urgent.
Chair Blumenthal. Thank you. I think all of these points
argue very, very powerfully against fragmentation, for some
kind of single entity that would establish oversight standards,
enforcement of rules, because as you say, malign actors can not
only eliminate quarterly reports, they can also make up numbers
for corporations that can disastrously impact the stock of the
corporation. I'm going to call Mr.----
Professor Russell. If I just might add one point. We're
absolutely not talking about a Ministry of Truth. In some
sense, it's similar to what happens in the courts. The courts
have standards for finding out what the truth is, but they
don't say what the truth is. And that's what we need.
Chair Blumenthal. But protecting our election system has to
be a priority. I think all of you are very, very emphatically
and cogently making that point. Professor Bengio?
Professor Bengio. Yes. I would like to add one suggestion
which may sound drastic but isn't if you look at other fields
like banking. In order to reduce the chances that AI systems
will massively influence voters through social media, one thing
that should've been done a long time ago is that social media
accounts should be restricted to actual human beings that have
identified themselves, ideally in person. Right?
And right now, social media companies are spending a lot of
money to figure out whether an account is legitimate or not.
They will not, by themselves, force these kinds of regulations,
because it's going to create friction to recruit more users.
But if the Government says everyone needs to do it, they'll be
happy. Well, I'm not them, but that's what I would--if I were
them.
Chair Blumenthal. Thank you. Senator Hawley.
Senator Hawley. Let's start, if we could, by talking about
who controls this technology currently and who's developing it.
Mr. Amodei, if I could just start with you, just help me
understand some of the structure of your company, of Anthropic.
Google owns a significant stake in your company, doesn't it?
Mr. Amodei. Yes. Google was an investor in Anthropic. They
don't control any board seats, but yes, Google is an investor
in Anthropic.
Senator Hawley. Give us a sense of--what are we talking
about? What kind of stake are we talking about?
Mr. Amodei. I don't remember exactly, couldn't give it to
you exactly. I suspect it's low double digits but would need to
follow up on this.
Senator Hawley. Well, the press has reported it at $300
million in investment, with at least a 10 percent stake in the
company. Does that sound broadly correct?
Mr. Amodei. That sounds broadly correct.
Senator Hawley. That's a pretty big stake. Let's talk about
OpenAI, where you used to work. Right?
Mr. Amodei. Yes.
Senator Hawley. OpenAI, it's been reported, has a very
significant chunk of funding that comes from another massive
technology company, Microsoft. It's been reported in the press
that this was one of the reasons that you left the company, you
were concerned about this. You can speak to that, if you want
to. I don't want to put words in your mouth. But the stake that
I believe Microsoft is reported to have in OpenAI approaches 49
percent. So, it's not controlling, but it's awfully, awfully
close.
Tell me this. When Google's stake in your company occurred,
the Financial Times broke the story on this but reported that
the transaction wasn't publicized when it actually happened.
Why was that, do you know?
Mr. Amodei. I couldn't speak to the--yes, I couldn't speak
to the decisions made by Google here. I do want to make one
point, which is our relationship with Google at the present
time--it's primarily focused on hardware. So, in order to train
these models, you need to purchase chips. And, you know, this
investment came with a commitment to spend on the cloud. And
our relationship with Google has been primarily focused on
hardware, hasn't primarily been, you know, commercial or
involved with governance.
Senator Hawley. So, there's no plans to integrate your
Claude, your equivalent of ChatGPT--there's no plans to
integrate that with Google Search, for example?
Mr. Amodei. That's not occurring at the present time.
Senator Hawley. Well, I know it's not occurring, but are
there plans to do it, I guess is my question.
Mr. Amodei. I mean, I can't speak to what--you know, I
can't speak to what the possibilities are for the future, but
that's not something that's occurring at the present.
Senator Hawley. Don't you think that that would be
frightening? I mean, just to come back to something Professor
Russell said a moment ago, he talked about the ability, in the election context, of AI--fed the information from, let's say, one political figure, everything about that person--to come up with a very convincing misinformation campaign. Now, imagine if that technology also--if the same
large language model, for example, also had the information,
the voter files of millions of voters and knew exactly what
would capture those voters' attention, what would hold it, what
arguments they found most persuasive, the ability to weaponize
misinformation and to target it toward particular voters would
be exceptionally powerful. Right?
Now, Search is all about getting and keeping users'
attention. That's how Google makes money. I'm just imagining
your technology, a generative AI, aligned and integrated and
folded into Search, the power that that would give Google to
get users' attention, keep their attention, push information to
them. It would be extraordinary, wouldn't it?
Mr. Amodei. Yes. So, I mean, I think--Senator, I think
these are very important issues and, you know, I want to raise
a few points in here. One is some of the things I said in
response to Senator Blumenthal's questioning, which is, you
know, on misinformation. So, we put terms in Claude's
constitution that tell it not to generate misinformation or
political bias in any direction. I, again, want to emphasize,
over and over again, that these methods are not yet perfect,
and the science of producing this is not exact yet, but this is
something we work on.
You know, I think you're also getting at some important
privacy issues here about personal information. And this is an
area where, also, in our constitution, we discourage our models
from producing personal information. We don't train on, you
know, publicly available information. So, you know, it's very
core to our mission, you know, to produce models that at least
try not to have these problems.
Senator Hawley. Well, you say that you tell the model not
to produce misinformation. I'm not sure exactly what that
means, but do you tell it not to help massive companies make a
profit?
Mr. Amodei. Well----
Senator Hawley. This would be Google's interest. Right?
Above all, profits. The whole reason they want to get users'
attention and then keep users' attention and keep us searching
and scrolling is so that they can push products to us and make
lots and lots of money, which they do. It seems to me that your
technology melded with theirs could make them an enormous sum
of money. That would be great for them. Would it be so good for
the American consumer?
Mr. Amodei. Again, I can't speak to--you know, I can't
speak to the decisions made by a different company like Google,
but, you know, we are doing the best we can to make our systems
ethical. You know, in terms of, you know, how do we tell our
model not to do things, there's a training process where, you
know, we train the model in a loop, to tell it, for some given
output, you know, is your response in line with these
principles? And, you know, over the last 6 months, since we've
developed this method of constitutional AI, we've gotten better
and better at getting the model to be in line with what the
constitution says. Again, I would still say it's not perfect,
but, you know, we very much focus on the safety of the model so
that it doesn't do the things that you're concerned about,
Senator.
Senator Hawley. Well, listen, I think this has surfaced an
important point, and I just want to underscore this, because I
think it's important. I appreciate that you want your models to
be ethical and so forth. That's great. But I would just suggest
that that is in the eye of the beholder, and the talk of what
is ethical or what is appropriate is going to really vary
significantly, determined by or depending on who controls the
technology. So, I'm sure that Google or Microsoft, using these
generative models, linking it up with their ad-based models,
would say, ``Oh, it's perfectly ethical for us to try and get
the attention of as many consumers as possible, by any means
possible, and to hold it as long as possible.'' And they would
say, ``There's no problem with that. That's not misinformation.
That's business.''
Now, would that be good for American consumers? I doubt it.
Would that be respectful of American consumers' privacy and
their integrity? Would it prevent them--or would it protect
them, rather, from manipulation? I doubt it. I mean, so I think
we've got to give some serious thought here to who controls
this technology and how they are using it. And I appreciate all
that you're doing. I appreciate your commitments. I think
that's great. I just want to say, I just want to underline,
this is a very serious structural issue here that we're going
to have to think hard about, and the control of this technology
by just a handful of companies and governments is a huge, huge
problem. Hopefully we can come back to this. Thanks, Mr.
Chairman.
Chair Blumenthal. Thanks, Senator Hawley. Senator
Klobuchar.
Senator Klobuchar. Thank you very much. So, I chair the
Rules Committee, and we're working on a number of pieces of
legislation, and I've really appreciated working with Senator
Hawley on some of this. But one bill is, you know, watermarks
and making sure that the election materials say, ``Produced by
AI.'' But I don't think that's enough, when you look at the
fact that someone's going to watch a fake Joe Biden or a fake
Donald Trump or a fake Elizabeth Warren--all of this has really
happened--and then not know who the person is and not know if
it's really them.
And it's not going to help, just at the very end. It might,
for some things, but to just say at the end, ``Oh, by the way,
that was `Produced by AI.' Hope you saw our little mark at the
end that says that.''
So, could you address that, Professor Russell? Within the clear confines of the Constitution, allowing for things like satire, are we going to have to do more than just watermarks?
Professor Russell. I do want to be careful not to veer
into, once again, the sort of Ministry of Truth idea.
Senator Klobuchar. Mm-hmm.
Professor Russell. But I think clear labeling--I mean, if
you look at what happened with credit cards, for example, it
used to be that credit cards came with 14 pages of tiny, tiny
print, and that allowed companies to rip off the consumer all
the time. And eventually, Congress said no, there's got to be
disclosure. You've got to say, ``This is the interest rate,
this is the grace period, this is the late fee,'' and a couple
of other things, and that has to be in big print on the front
of the envelope or on the front page. There are very strict
rules now about how you direct market credit cards and other
lending products, and that's been enormously beneficial,
because it actually allows competition on those primary
features of the product, as opposed----
Senator Klobuchar. Yes, but you can't really compare a
credit card to someone who's telling the United States of
America that there's some kind of a nuclear explosion when
there isn't.
Professor Russell. Right. But the point being, we can
mandate much clearer labeling than just a little thing in the
corner at the end of a 90-second piece. Right? We could say,
for example, there's got to be a big red frame around the
outside of the image, when it's a machine-generated image.
Senator Klobuchar. Okay. I'm just going to--Professor
Bengio, what do you think?
Professor Bengio. Well, my view on this is we should be
very careful with any kind of use of AI for political purposes,
political advertising, whether it's done officially through
some agency that does advertising or in a more direct way.
Senator Klobuchar. But it might not be actual advertising.
It's just put out for----
Professor Bengio. Yes.
Senator Klobuchar [continuing]. Circulation. That's always
what we've----
Professor Bengio. Yes.
Senator Klobuchar [continuing]. Confronted, because----
Professor Bengio. Yes.
Senator Klobuchar [continuing]. The Federal Election
Commission, while deadlocking on this, has asked for authority,
including the Republican-appointed----
Professor Bengio. Yes. So----
Senator Klobuchar [continuing]. Members, to do more. But go
ahead.
Professor Bengio. In many countries, any kind of
advertising, which would include disseminating such videos, is
not allowed for some period before the election, to try to
minimize, you know, the potential effect of these things.
Senator Klobuchar. Right. Could I just--Mr. Amodei, one
significant concern--I'm just switching gears here, because I
talked to some people in the banking community about this,
small banks, is that they are really worried they're going to
see AI used to scam people. You know, pretending to be your
mom's voice or your, more likely, granddaughter's voice,
actually getting that voice right, making a call for money. How
can Congress ensure that the AI platforms companies create cannot be used for those deceptive purposes? What kind of
rules should we put in place so that doesn't happen?
Mr. Amodei. Yes, Senator. So, I think these questions about
deception and scams are probably closely related to these
questions about misinformation. Right?
Senator Klobuchar. Yes.
Mr. Amodei. They're a little bit two sides of the same
coin. So, I think on the misinformation, I wanted to kind of
clarify, you know, there's technical measures and there's
policy measures. So, you know, watermarking is a technical
measure. Watermarking makes it possible to take the output of
an AI system, run it through some automated process that will
then return an answer that it was generated by AI or not
generated by AI. That's important, and, you know, we're working
on that, and others are working on that.
But I think we also need policy measures, so, going back to
what the other two witnesses said, focusing on, you know, a
requirement to label AI systems is not the same as a
requirement to watermark them. One is for the designer of the
AI system to embed something. The other is for wherever the AI
system ends up----
Senator Klobuchar. Yes.
Mr. Amodei [continuing]. In the end, for----
Senator Klobuchar. That it----
Mr. Amodei [continuing]. Someone to be required to label
it. So, I think we need both and probably, you know, this
Congress can do more on the second thing, and the companies and
researchers can do more on the first thing.
Senator Klobuchar. Mm-hmm. Okay. And so what are you
talking about? The scams where the granddaughter calls, and the
grandma goes out and takes all her money out? We're just going
to----
Mr. Amodei. Yes, I mean----
Senator Klobuchar [continuing]. Let that happen? Or----
Mr. Amodei. Well, I mean, certainly, it's already illegal
to do that. I can think of a number of authorities----
Senator Klobuchar. Mm-hmm.
Mr. Amodei [continuing]. That we could use to strengthen
that for AI in particular. I think, you know, that's kind of up
to the Senate and the Congress to figure out what the best
measure is, but, you know, certainly I'd be in favor of
strengthened protections there.
Senator Klobuchar. Well, I hope so. About half of the
States have laws that give individuals control over the use of
their name, image, and voice, but in the other half of the
country, someone who is harmed by a fake recording purporting
to be them has little recourse. Senators Coons and Tillis just
did a hearing on this. Would you support a Federal law, Mr.
Bengio, that gives individuals control over the use of their
name, image, and voice?
Professor Bengio. Certainly, but I would go further.
Senator Klobuchar. Mm-hmm.
Professor Bengio. If you think about counterfeiting money,
the criminal penalties are very high, and that deters a lot of
people. And when it comes to counterfeiting humans, it should
be at least at the same level.
    Senator Klobuchar. Okay. One last thing I wanted to ask
about here is just the ability of researchers to be able to
figure out what is going on. There's a bill that a number of us
are supporting, including Senator Blumenthal and Senators
Cassidy, Cornyn, Coons, and Romney, that gives researchers the
transparency that we need. It's called the Platform
Accountability and Transparency Act, and it would require social
media companies to share data with researchers, so we can try
to figure out what's happening with the algorithms and the
like. Dr. Russell, why is researcher access to social media
platform data so important for regulating AI?
Professor Russell. So, our experience actually involved 3
years of negotiating an agreement with one of the large
platforms, only to be told at the end that actually they didn't
want to pursue this collaborative agreement after all.
Senator Klobuchar. We don't really have 3 years to spare on
AI, it sounds like, so----
Professor Russell. No, we don't.
Senator Klobuchar. Continue on. Yes.
Professor Russell. And, you know, I then discussed this
with the director of the digital division of OECD, and he said
I was about the tenth person who had told him the same story.
So, it seems there's a modus operandi of appearing to be open
to collaborations with researchers, only to terminate that
collaboration right before it actually begins. There have been
claims that they have provided open datasets to researchers, to
allow this type of research, but I've talked to those
researchers, and it hasn't happened. It's been----
Senator Klobuchar. And why is it----
Professor Russell [continuing]. Extraordinarily difficult.
Senator Klobuchar [continuing]. So important to have it put
in place, these regulations? We know we'll be--we can't wait
for you to get all the data, obviously, and we can't let it
take 3 years, but putting in place a clear mandate that that
data be shared--why is that helpful?
Professor Russell. Because the effects--for example, the
social media recommender systems, they're correlated across
hundreds of millions of people. So, those systems can shift
public opinion in ways that are not even necessarily
deliberate. They're probably not deliberate. But they can be
massive and polarizing. Unless we have access to the data,
which the companies internally certainly do--and I think the
Facebook revelations from a few years ago suggested that they
are totally aware of what's happening, but that information is
not available to governments and researchers. And I think, you
know, in a democracy, we have a right to know if our democracy
is being subverted by an algorithm, and that seems absolutely
crucial.
Senator Klobuchar. All right. Do you want to add one more
thing, Mr. Bengio?
Professor Bengio. Yes. Trying to respond to your question
from another angle, why researchers? I would say academic
researchers--not all of them, but many of them don't have any
commercial ties. They have a reputation to keep in order to
continue their career. So, they're not perfect, but I think
it's a very good yardstick to judge that something's----
Senator Klobuchar. Except for Professor Russell. Okay. Very
good. Do you agree with it, too, then?
Mr. Amodei. Yes. I just wanted to say I think transparency
is important even as a broader issue. You know, a number of our
research efforts go into looking inside to see what happens
inside AI systems, why they make the decisions that they make.
Senator Klobuchar. Okay.
Mr. Amodei. And--oh.
Senator Klobuchar. Yes, I've got to turn it over to my
colleagues, who have been patiently waiting. Thank you.
Chair Blumenthal. Thank you. We'll circle back to the black
box algorithms, which is a major topic of interest. Senator
Blackburn.
Senator Blackburn. Thank you, Mr. Chairman, and thank you
all for being here. Mr. Amodei, I think you got a little
aggravated trying to answer Senator Hawley's question about
something you may create that you think of as an ethical use.
But let me tell you why this bothers us, the unethical use.
Senator Blumenthal and I have worked together for nearly 4
years on looking at social media and the harms that have
happened to our Nation's youth, and hopefully, this week our
Kids Online Safety Act comes out of Committee.
It wasn't intended. Social media wasn't intended--the
intent was not to harm children, cause mental health crises,
put children in touch with drug dealers and pedophiles. But we
have heard story after story and have uncovered instance after
instance where the technology was used in a way that nobody
ever thought it was, and now we're trying to clean it up,
because we've not put the right guardrails in place. So, as we
look at AI, the guardrails are very important.
And, Professor Russell, I want to come to you, because the
U.S. is behind the--we're really behind our colleagues in the
EU, the UK, New Zealand, Australia, Canada, when it comes to
online consumer privacy and having a way for consumers to
protect that name, image, voice; having a way for them to
protect their data, their writings, so that AI is not trained
on their data. So, talk for just a minute about how we keep our
position as a global leader in generative AI and, at the same
time, protect consumer privacy. Would a Federal privacy
standard help? What are your recommendations there?
Professor Russell. I think there needs to be absolutely a
requirement to disclose if the system is harvesting the data
from individual conversations. And my guess is that immediately
people would stop using a system that says, ``I am taking your
conversation. I am folding it into the next version of the
model, and anyone in the country can basically listen in on
this conversation because they're going to be asking questions
about what I did.''
Senator Blackburn. Let me ask you this.
Professor Russell. Yes.
Senator Blackburn. Do you think the industry is mature
enough to self-regulate?
Professor Russell. No.
Senator Blackburn. You do not. So, therefore----
Professor Russell. No.
Senator Blackburn [continuing]. It is going to be necessary
for us to mandate a structure?
Professor Russell. Yes. I think there's certainly a change
of heart at OpenAI. Initially, they were harvesting the data
produced by individual conversations, and then more recently
they said, ``We're going to stop doing that.''
And clearly, if you're in a company, even not considering
personal conversation but just in a company, and you want the
system to help you with some internal operation, you're going
to be divulging company proprietary information to the chatbot
to get it to give you the answers you want. And if that
information is then available to your competitors by simply
asking ChatGPT what's going on over in that company, this would
be terrible.
So, having a clear definition of what it is--there's a
technical term, ``oblivious.'' Right? Which basically says,
whatever we talk about, I am going to forget completely. Right?
That's a guarantee that systems should offer. I actually
believe that browsers and any other device that interacts with
individuals should offer that as a formal guarantee.
Let me also make the point about enforcement, which I think
Senator Hawley mentioned at the beginning, a right of action.
But, for example, we have a Federal Do Not Call List. So, as I
understand it, it is a Federal crime for a company to do
robocalls to people who are on the Federal Do Not Call list. My
estimate is that there are hundreds of billions, or possibly a
trillion, Federal crimes happening every year.
Senator Blackburn. Every day. Yes. So----
Professor Russell. And we're not----
Senator Blackburn [continuing]. You would say existing----
Professor Russell [continuing]. Really enforcing anything.
Yes. Right.
Senator Blackburn. Right. So, you would say existing law is
not sufficient for AI?
Professor Russell. Correct.
Senator Blackburn. Okay.
Professor Russell. And existing----
Senator Blackburn. All right. Let me----
Professor Russell [continuing]. Enforcement patterns, as
well.
Senator Blackburn. Yes. Let me move on. In Tennessee, AI is
important. Our auto industry uses so many AI applications, you
know, and we followed this issue for quite a period of time,
because of the auto industry, because of the healthcare
industry and the healthcare technology industry that is
headquartered in Nashville. And, of course, predictive
diagnosis, disease analysis, research, pharmaceutical research
benefits tremendously from AI.
And then you look at the entertainment industry and the
voice cloning, and you look at what our entertainers, our
songwriters, our artists, our authors, our publishers, our TV
actors, our TV producers are facing with AI, and to them, it is
absolutely robbing them of their ability to
make a living off of their creative work. So, our creative
community has a different set of issues.
Martina McBride, who is no stranger to country music, went
in to Spotify. And the playlists are a big thing, building your
own playlist. So, she was going to build a country music
playlist out of Spotify. She had to refresh that 13 times
before a song by a female artist came up--13 times. So, you
look at the power of AI to shape what people are hearing.
And in Nashville, we like to say you can go on Lower Broad,
you can go to one of the honky-tonks, your band can have a
great night, you can be discovered, and you, too, could end up
with a record deal.
But if you've got these algorithmically AI-generated
playlists that cut out new artists or females or certain
sounds, then you are limiting someone's potential, just as if
you allow AI-generated content like on Jukebox, which OpenAI is
experimenting with, and you train it on that artist's sound and
their songs to imitate them, then you are robbing them of the
ability to be compensated.
So, how do we ensure that that creative community is still
going to have a way to make a living without having AI become a
way to steal their creative talents and works?
Professor Russell. I think this is a very important issue.
I think it also applies to book authors, some of whom are suing
OpenAI. And I'm not really an expert on copyright at all, but
some of my colleagues are, like Pam Samuelson, for example, and
I think she would be a great witness for a future hearing. And
I think the view is that the law, as it's written, simply
wasn't ready for this kind of thing to be possible. So, if by
accident the system produces a song that has the same melody,
then it's going to fall under existing law, that you're
basically plagiarizing. And there have been cases of human
plagiarism----
Senator Blackburn. Well, we've just----
Professor Russell [continuing]. That have succeeded.
Senator Blackburn. We've explored the fair use issue----
Professor Russell. Yes.
Senator Blackburn [continuing]. In this Committee and will
continue to do so. And my time is expired. Thank you, Mr.
Chairman.
Chair Blumenthal. Thanks, Senator Blackburn. We'll begin a
second round of questions, and I want to begin with one of the
points that Senator Blackburn was making about private rights
of action, which I think Senator Hawley and I have discussed
incorporating in legislation.
In many instances, let's be very blunt, agencies become
captive of the industries they're supposed to regulate, and
this one is too important to allow it to become captive.
And one very good check on the captivity of Federal
entities, agencies, or offices is, in fact, private rights of
action. So, I would hope that you would endorse that idea. I
recognize you're not lawyers, you're not in the business of
litigating, but I'm hoping that you would support that idea. I
see nodding heads, for the record.
Let me turn to--also to recap the very important comments
that you all have made about elections, to take action against
deepfakes, against impersonation, whether it's by labeling or
watermarks, some kind of disclosure--without censorship. We
don't want a Ministry of Truth. We want to preserve civil
rights and liberties. The Free Speech rights are fundamental to
our democracy, but the kinds of manipulation that can take
place in an election, including interfering with vote counts,
misdirection to election officials about what's happening,
presents a very dangerous specter.
Superhuman AI. Superhuman AI. I think all of you agree
we're not decades away. We're perhaps just a couple of years
away. And you describe it--well, all of you do, in terms of the
biologic effects, the development of viruses, pandemics, toxic
chemicals.
But superhuman AI evokes, for me, artificial intelligence
that could, on its own, develop a pandemic virus; on its own,
decide Joe Biden shouldn't be our next President; on its own,
decide that the water supply of Washington, DC, should be
contaminated with some kind of chemical and have the knowledge
to do it through the public utility system. And I think that
argues for the urgency--and these are not sort of science
fiction anymore. You describe them in your testimony. Others
have done it, as well.
So, I think your warning to us has really graphic content,
and it ought to give us impetus, with that kind of urgency, to
develop an entity that can not only establish standards and
rules but also research on countermeasures that detect those
misdirections, whether they're the result of malign actors or
mistakes by AI or malign operation of AI itself.
Do you think those countermeasures are within our reach as
human beings? And is that a function for an entity like this
one to develop?
Mr. Amodei. Yes. I mean, I think this is--yes, this is one
of the core things that, you know, whether it's the bio risks
from models that, you know, I kind of stated in testimony, you
know, are likely to come in 2 to 3 years or the risks from
truly autonomous models, which I think are more than that but
might not be a whole lot more than that, I think this idea of
being able to even measure that the risk is there is really the
critical thing. If we can't measure, then, you know, we can put
in place all of this regulatory apparatus, but, you know,
it'll all be a rubber stamp. And so funding for the measurement
apparatus and the enforcement apparatus, working in concert, is
really going to be central here.
I mean, our suggestion was, you know, NIST and the National
AI Research cloud, you know, which can help kind of allow a
wider range of researchers to study these risks and develop
countermeasures. So, I think that seems like a very important
measure. I'm worried about our ability to do this in time, but,
you know, we have to try, and we have to put in all the effort
that we can.
Chair Blumenthal. Mr. Bengio?
Professor Bengio. Yes. I completely agree. About the
timeline, there's a lot of uncertainty, so as I wrote in my
testimony, it could be a few years, but it could also be a
couple of decades, because, you know, research is impossible to
predict. But if we follow the trend, it's very concerning. And
regulation, liability--they will help a lot. By my calculations,
you know, we could reduce the probability of a rogue AI showing
up by maybe a factor of 100, if we do the right things in terms
of regulation. So, it's really worth it, but it's not going to
bring those risks to zero, and especially for bad actors that
don't follow the rules anyways. So, we need that investment in
countermeasures, and AI is going to help us with that, but we
have to do it carefully so that we don't create the problem
that we're trying to solve in the first place.
Another aspect of this is, it's not just AI. You know, it
needs to bring expertise in national security, in bioweapons,
chemical weapons, and AI people together. The organizations
that are going to do that, in my opinion, shouldn't be for-
profit. We shouldn't mix the objective of making money, which,
you know, makes a lot of sense in our economic system, with the
objective, which should be single minded, of defending humanity
against a potential rogue AI.
Also, I think we should be very careful to do this with our
allies in the world and not do it alone. There is--first, we
can have a diverse set of approaches, because we don't know how
to really do this. We are hoping that, as we move forward and
we try to solve the problem, we'll find solutions. But we need
a diversity of approaches. And we also need some kind of
robustness against the possibility that one of the governments
involved in this kind of research isn't democratic anymore, for
some reason. Right? This can happen. We don't want a country
that was democratic and has power over a superhuman AI to be
the only country working on this. We need a resilient system of
partners, so that if one of them ends up being a bad actor, the
others are there.
Chair Blumenthal. Thank you very much. I'll turn to
Professor Russell, if you have a comment.
Professor Russell. Yes. So, I completely agree that if
there is a body that's set up, that it should be enabled to
fund and coordinate this type of research, and I completely
agree with the other witnesses that we haven't solved the
problem yet. I think there are a number of approaches that are
promising. I tend toward approaches that provide mathematical
guarantees rather than just best-effort guarantees.
And, you know, we've seen that in the nuclear area, where
originally the standard, I believe, was, you know, you could
have a major core accident every 10,000 years, and you had to
demonstrate that your system design met that requirement. You
know, then it was a million years, and now it's 10 million
years. And so that's progress, and it comes from actually
having a real scientific understanding of the materials, the
designs, redundancy, et cetera. And we are just in the infant
stages of a corresponding understanding of the AI systems that
we're building.
I would also say that no Government agency is going to be
able to match the resources that are going into the creation of
these AI systems. The numbers I've seen are roughly $10 billion
a month going into AGI startups. And just for comparison,
that's about 10 times the annual budget of the entire National
Science Foundation of the United States, which has to cover physics,
chemistry, basic biology, et cetera, et cetera, et cetera.
So, how do we get that resource flow directed toward
safety? I actually believe that the involuntary recall
provisions that I mentioned would have that effect, because if a
company puts out a system that violates one of the rules, and
that system is then recalled until the company can demonstrate
that it will never do that again, the company could go out of
business.
So, they have a very strong incentive to actually understand
how their systems work and, if they can't, to redesign their
systems so that they do understand how they work. That just
seems like basic common sense to me.
I also want to mention, on rogue AI, right, the bad
actors--Professor Bengio has mentioned an approach based on AI
systems that are developed to try to counteract that
possibility. But I also feel that we may end up needing a very
different kind of digital ecosystem, in general. What do I mean
by that? Right now, to a first approximation, a computer runs
any piece of binary code that you load into it. We put layers
on top of that that say, ``Okay, that looks like a virus. I'm
not running that.''
We actually need to go the other way around. The system
should not run any piece of binary code unless it can prove to
itself that this is a safe piece of code to run. So, it's sort
of flipping the notion of permission, and with that approach, I
think we could actually have a chance of preventing bad actors
from being able to circumvent these controls, because for them
to develop their own hardware resources would run into the tens or
hundreds of billions of dollars. And so that's an approach I
would recommend.
Chair Blumenthal. I have more questions, but I'm going to
turn to Senator Hawley.
Senator Hawley. Let's talk a little bit about national
security and AI, if we could. Mr. Amodei, to come back to you,
you mentioned in your written testimony, in your policy
recommendations--your first recommendation, in fact, is the
United States must secure the AI supply chain. And then you
mention immediately, as an example of this, chips used for
training AI systems. Where are most of the chips made now?
Mr. Amodei. So----
Senator Hawley. I think your----
Mr. Amodei [continuing]. What I had in mind----
Senator Hawley. Your microphone, I think, may be----
Mr. Amodei. I'm sorry.
Senator Hawley. That's okay. Everyone's eager to hear what
you have to say. Go ahead.
[Laughter.]
Mr. Amodei. Yes. What I had in mind here, yes, is that, you
know, there are certain bottlenecks in the production of AI
systems. You know, that ranges from semiconductor manufacturing
equipment to chips to the actual produced systems, which then
have to be stored on a server somewhere and, in theory, could
be stolen or released in an uncontrolled way. So, I think, you
know, compared to some of the more software elements, those are
areas where there are substantially more bottlenecks.
Senator Hawley. Well, so, okay, understood. But we've heard
a lot about chips, GPUs, about the shortage of them. My
question is--and maybe you don't know the answer to this. Maybe
somebody else does. But do you know where most of them are
currently manufactured?
Mr. Amodei. Yes. There are a number of steps in the
production process for chips. Right?
Senator Hawley. Okay.
Mr. Amodei. If you produce the raw chip or the actual GPU,
you know, those happen in a number of places.
Senator Hawley. For example?
Mr. Amodei. So, you know, an important player on the, you
know, kind of like making up the base fabrication side would be
TSMC, which is in Taiwan. And then within--you know, companies
like NVIDIA within the United States, you know, then, you know,
produce those into GPUs. And I don't know exactly where that
process happens. It could be in a large number of places.
Senator Hawley. As part of securing our supply chain here
in this area, should we consider limitations, if not outright
prohibitions, on components that are manufactured in China?
Mr. Amodei. You know, I think on that particular issue, you
know, that's not one where I have a huge amount of knowledge. I
mean, I think we should think a little bit in the other
direction, of--are things that are produced by our supply
chain, do they end up in places that we don't want them to be?
So, we've worried a lot about that in the context of
models. We just had a blog post out today about AI models,
saying, ``Hey, you might've spent a large number of millions of
dollars--maybe someday it's going to be billions of dollars--to
train an AI system, and then, you don't want some state actor
or criminal or rogue organization to then steal that and use it
in some irresponsible way that you don't endorse.''
Senator Hawley. Let me get at this problem from a slightly
different angle, which is, let's imagine a hypothetical in
which the Communist government of Beijing decides to launch an
invasion of Taiwan. And let's imagine--and, sadly, it doesn't
take very much imagination--let's imagine that they're
successful in doing so. Just give me a back-of-the-envelope
forecast. What might that do to AI production?
Mr. Amodei. Yes. So, I mean, you know, I'm not an
economist, and it's hard to forecast, but a very large fraction
of the chips are indeed--you know, somewhere go through the
supply chain in Taiwan, so I think there's--you know, there's
no doubt that that is a hot spot and, you know, something that
we should be concerned about, for sure.
Senator Hawley. Do either of the other panelists want to
say anything on this, about the--Professor Russell, perhaps?
Professor Russell. Yes. I mean, there are studies. My
colleague Orville Schell, who is a China expert, has been
working on a study of these issues. There are already plans to
diversify away from Taiwan. TSMC is trying to create a plant in
the U.S. Intel is now building some very large plants in the
U.S. and in Germany, I believe, so--but it's taking time. I
think if the invasion that you mention happened tomorrow, we
would be in a huge amount of trouble.
As far as I understand it, there are plans to sabotage all
the TSMC operations in Taiwan, if an invasion were to take
place. So, it's not that all that capacity would then be taken
over by China.
Senator Hawley. What's sad about that scenario is, that
would be the best-case scenario. Right? I mean, if there's an
invasion of Taiwan, the best we could hope for is, maybe all of
their capacity or most of it gets sabotaged, and maybe the
whole world has to be in the dark for however long. That's the
best-case scenario. The point I'm trying to make is, I think
your point, Mr. Amodei, about securing our supply chains is
absolutely critical. And thinking very seriously about
decoupling efforts, strategic decoupling efforts, I think, is
absolutely vital at every point in the supply chain that we
can. And I think if we don't do that with China soon--frankly,
we should've done it a long time ago--if we don't do it very,
very quickly, I think we're in real trouble, and I think we've
got to think seriously about what may happen in the event of a
Taiwan invasion. Yes, go ahead.
Mr. Amodei. Yes. I just wanted to emphasize Professor
Russell's point even more strongly: that we are trying to move
some of the chip fab production capabilities to the U.S., but
that needs to be faster. Right? We're talking about, you know,
2 to 3 years for some of these very scary applications and
maybe not much longer than that for truly autonomous AI.
Correct me if I'm wrong, but I think the timelines for moving
these production facilities look more like, you know, 5 years,
7 years, and we've only started on a small component of them.
So, just to emphasize: I think it is absolutely essential.
Senator Hawley. Yes. Good. Let me ask you about a different
issue related to labor overseas and labor exploitation. The
Wall Street Journal published a piece today entitled,
``Cleaning Up ChatGPT Takes Heavy Toll on Human Workers.''
Contractors in Kenya say they were traumatized by the effort to
screen out descriptions of violence and sexual abuse during the
run-up to OpenAI's hit chatbot, namely ChatGPT. The article
details the widespread use of labor in Kenya to do this
training work on the ChatGPT model. I encourage everyone to
read it, and I'd like to ask the Chairman to be able to enter
this into the record.
Chair Blumenthal. Without objection.
[The information appears as a submission for the record.]
Senator Hawley. One of the disturbing--a couple of
disturbing things. I mean, one is that we're talking about a
thousand or more workers, outsourced overseas. We're talking
about exploitation of those workers. They work 'round the
clock. The material they're exposed to is incredible and I'm
sure extremely damaging, and that is now the subject of
lawsuits that they're bringing. Here's another interesting
tidbit. The workers on the project were paid an average of
between $1.46 an hour and $3.74 an hour. Let me say that again.
The workers on the project were paid, on average, between $1.46
an hour and $3.74 an hour.
Now, OpenAI says, ``Oh, we thought that they were being
paid over $12 an hour.'' And so we have the classic, classic
corporate outsource maneuver, where a company outsources jobs--
couldn't be done in the United States--outsources jobs,
exploits foreign workers to do it, and then says, ``Oh, we
don't know anything about it. We're asking them to engage in
this psychologically harmful activity, we're probably
overworking them doing it, and we're not paying them. But, you
know, oops.''
I guess my question is, how widespread is this in the AI
industry? Because it strikes me that we're told that AI is new
and it's a whole new kind of industry and it's glittery and
it's almost magical, and yet it looks like it depends, in
critical respects, on very old-fashioned, disgusting, immoral
labor exploitation. So, go ahead, Mr. Amodei.
Mr. Amodei. Yes. So, this is actually one area where
Anthropic has a substantially different approach from the one
that you've described. I can't----
Senator Hawley. Good.
Mr. Amodei [continuing]. Speak for what other companies are
doing, but a couple points. One is this constitutional AI
method, which I mentioned, is a way for one copy of the AI
system to moderate or help to train another copy of the AI
system. This is something that reduces--it does not eliminate
but it substantially reduces the need for the kind of human
labor that you're describing.
Second, in our own contracting practices--and, you know, I
would have to talk to you directly for exact numbers, but I
believe that the companies we contract out to are something
like north of 75 percent workers from the U.S. and Canada
and all paid above the California minimum wage. So, I share
your concern about these issues, and, you know, we're committed
to both developing research that kind of obviates the need for
some of this kind of moderation and, you know, not exploiting
these workers.
    Senator Hawley. Well, that's good, because here is what I
think would be terrible to see: this new technology
built by foreign workers, not American workers. That
all seems like the same old story we've heard for 30, 40 years,
in this country, where we're told, ``Oh, no, American workers,
they cost too much. American workers, they're just too
demanding. American workers, they don't have the skills, so
we're going to outsource it. We're going to give it to other
foreign workers.''
Then you mistreat the foreign workers. Then you don't pay
the foreign workers. And then who benefits from it, at the end
of the day? These few companies that we talked about earlier
who make all the profit and control all of it. That seems like
an old, old story that I frankly don't want to see replicated
again. That seems like a dystopia, not like a new future.
So, I think it's critical that we find out what the labor
practices are of these companies. I'm glad that you're charting
a different course, Mr. Amodei, and certainly we want to hold
you to that. But I think it's vital that as we continue to look
at how this technology's developing that we actually push for--
I mean, what's wrong with having a technology that actually
employs people in the United States of America and pays them
well? I mean, why shouldn't American workers and American
families, protected by our labor laws, benefit from this
technology?
I don't think that's too much to ask. And, frankly, I think
that we ought to expect that of companies in this country, with
access to our markets, who are working on this technology. Mr.
Chairman.
Chair Blumenthal. Thank you. I don't think you'll find much
disagreement with that proposition, but to have American
workers do those jobs, we need to train them. Correct? And you
all, in some sense, because you're all teachers, you're all
professors, are engaged in that enterprise. Mr. Amodei, I don't
know whether you can be called, still, a professor, but
probably not.
Mr. Amodei. I was never a professor.
Chair Blumenthal. But we need to train workers to do these
jobs. And for those who want to pause, and some of the experts
have written that we should pause AI development, I don't think
it's going to happen. We right now have a gold rush, literally
much like the Gold Rush that we had in the Wild West, where, in
fact, there are no rules, and everybody's trying to get to the
gold without very many law enforcers out there preventing the
kinds of crimes that can occur. So, I am totally in agreement
with Senator Hawley in focusing on keeping it in America, made
in America, when we're talking about AI, and I think he is
absolutely right that we need to build those kinds of
structures, provide the training and incentives that enable it
and enforce it.
Let me, though, come back to this issue of national
security. Who are our competitors, among our adversaries and
our allies? Who are closest to the United States in terms of
developing AI? Is it China? Are there other adversaries out
there that could be rogue nations, not just rogue actors but
rogue nations, and whom we need to bring into some
international body of cooperation?
Professor Russell. So, I think the closest competitor we
have is probably the UK, in terms of making advances in basic
research, both in academia and in DeepMind, in particular,
which is based in London, now being merged more forcefully into
the larger Google organization. But they have a very distinct
approach, and they've created an ecosystem in the UK that's
really quite productive.
I've spent a fair amount of time in China. I was there a
month ago, talking to the major institutions that are working
on AGI. And my sense is that we've slightly overstated the
level of threat that they currently present. They've mostly
been building copycat systems that turn out not to be nearly as
good as the systems that are coming out from Anthropic and
OpenAI and Google. But the intent is definitely there. I mean,
they publicly stated their goal to be the world leader, and
they are investing probably larger sums of public money than we
are in the U.S., smaller sums in the private sector.
The areas where they are actually most effective--and I was
actually on a panel in Tianjin for the top 50 Chinese AI
startups, and they were giving out awards. But I think about 40
of those 50, their primary customer was state security. So,
they're extremely good at voice recognition, face recognition,
tracking and recognition of humans based on gait, and similar
capabilities that are useful for state security.
Other areas like reasoning and so on, planning--they're
just not in--they're not really that close. They have a pretty
good academic sector, that they are in the process of ruining
by forcing them to meet numerical publication targets and
things like that. They don't give people the freedom to think
hard about the most important problems, and they are not
producing the basic research breakthroughs that we've seen both
in the academic and the private sector in the U.S. I'm also----
Chair Blumenthal. Hard to produce a superhuman thinking
machine if you don't allow humans to think.
Professor Russell. Yup. You know, I've also looked a lot at
European countries. I'm working with the French government
quite a bit, and I don't think anywhere else is in the same
league as those three. Russia, in particular, has been
completely denuded of its experts and was already well behind.
Chair Blumenthal. Mr. Bengio? Professor?
    Professor Bengio. On the Allied side, there are a few
countries, including Canada, which I come from, that have a
really important concentration of talent in AI. And, you know,
in Canada we've contributed a lot of the principles behind what
we're seeing today. There are also a lot of really good European
researchers in the UK and outside the UK. So, I think that we
would all gain by making sure we work with these countries to
develop these countermeasures as well as the improved
understanding of the potentially dangerous scenarios and what
methodologies in terms of safety can protect us.
Chair Blumenthal. You've advocated decentralized labs.
Professor Bengio. Yes.
Chair Blumenthal. But under----
Professor Bengio. A common umbrella that would be
multilateral. Maybe this could be--a good starting place could
be Five Eyes or G7, and that would capture pretty much the bulk
of the expertise in these very strong AI systems that could be
important here.
Chair Blumenthal. And there would probably be some way for
our entity, our national oversight body doing licensing and
registration, to still cooperate. In fact, I would guess
that's----
Professor Bengio. Oh, yes.
Chair Blumenthal [continuing]. One of the reasons to have a
single entity, to be able to work and collaborate----
Professor Bengio. Yes. So----
Chair Blumenthal [continuing]. With other countries.
Professor Bengio [continuing]. There's no doubt that
individual countries have their own national security
organizations and are going to do their own laws, but the more
we can coordinate on this, the better. Of course, I think some
of that research should be classified and not shared with
anyone except trusted parties.
So, there are aspects of what we have to do that have to be
really broad, at the international level, and I think the
guidelines or the maybe mandatory rules for safety should be
something we do internationally, like with the U.N. Like, we
want every country to follow some basic rules, because even if
they don't have the technology, some rogue actor, even here in
the U.S., might just go and do it somewhere else, and then, you
know, viruses--computer or biological viruses don't see any
border. So, we need to make sure there's an international
effort, in terms of these safety measures. We need to agree
with China on these safety measures, as the first interlocutor.
And we need to work with our allies on these countermeasures.
Chair Blumenthal. I think that all those observations are
extremely timely and important. And on the issue of safety, I
know that Anthropic has developed a model card for Claude that
essentially involves evaluation capabilities. Your red teaming
considered the risk of self-replication or a similar kind of
danger. OpenAI engaged in the same kind of testing. We've been
talking a lot about testing and auditing. So, apparently you
share the concern that these systems may get out of control.
Professor Russell recommended an obligation to be able to
terminate an AI system. Microsoft called this requirement,
``safety brakes.'' When we talk about legislation, would you
recommend that we impose that kind of requirement as a
condition for testing and auditing the evaluation that goes on
when deploying certain AI systems? Obviously, again, focusing
on risk, I think everybody has talked about systems that are
vulnerable, risk systems. An AI model spreading like a virus
seems a bit like science fiction, but these safety brakes could
be very, very important to stop that kind of danger. Would you
agree?
Mr. Amodei. Yes. I, for one, think that makes a lot of
sense. I mean, the way I would think about it is, you know, in
the testing and auditing regime that we've all discussed, you
know, the best case is if all of these dangers that we're
talking about don't happen in the first place because we run
tests that detect the dangers and there's basically prior
restraint. Right? If these things are a concern for public
safety and national security, we never want the bad things to
happen in the first place.
But precisely because we're still getting good at the
science of measurement, probably it will happen, at least once,
and unfortunately, perhaps repeatedly, that we run these tests,
we think things are safe, and then they turn out not to be
safe. And so I agree, we also need a mechanism for recalling
things--or modifying things--if the tests ended up being wrong.
So, that seems like common sense to me, for
sure.
Chair Blumenthal. And I think there's been some talk about
AutoGPT? Maybe you can talk a little bit about how that relates
to a safety brake.
Mr. Amodei. Yes. So, AutoGPT refers to use of, you know,
currently deployed AI systems, which are not designed to be
agents, right, which are just chatbots, but kind of
commandeering such systems for taking actions on the internet.
You know, to be honest, such systems are not particularly
effective at that yet, but they may be a taste of the future
and the kinds of things we're worried about in the future, the
long-term risks--that I described in the short-, medium-, and
long-term risks. So, I don't, as of yet, see a particularly
high amount of danger from things like the system you describe,
but it tells us where we're going, and where we're going is
quite concerning to me.
Chair Blumenthal. You know, in some of the areas that have
been mentioned like medicines and transportation, there are
public reporting requirements. For example, when there's a
failure, the FAA's system has an accident and incident report.
They collect data about failures in those kinds of machinery,
and it serves as a warning to consumers. It creates a
deterrence for putting unsafe products on the market, and it
adds to oversight of public safety issues.
We've discussed this afternoon both short-term and long-
term kinds of risks that can cause very significant public
harm. It doesn't seem like AI companies have an obligation to
report issues right now. In other words, there's no place to
report it. They have no obligation to make it known. If they
discover the ``Oh, my God, how did that happen?'' incident, it
can be entirely undisclosed.
Would you all favor some kind of requirement for that kind
of reporting?
[Witnesses nod their heads.]
Professor Bengio. Absolutely.
Chair Blumenthal. And it may be obvious, but let me ask all
of you. I see, again, your heads nodding, for the record. Would
that inhibit creativity or innovation, to have that kind of
requirement? I would think not.
    Mr. Amodei. I don't think so. I mean, there are many areas
where there are important tradeoffs. I don't think this is one of
them. I think such requirements make sense. I mean, to give a
little of our experience in, you know, red teaming for these
biological harms, you know, we've had to work on, you know,
piloting a responsible disclosure process. I think that's less
about reporting to the public, more about making the other
companies aware, but, you know, the two things are similar to
each other. So, you know, a lot of this is being done on
voluntary terms, and you see some of it coming up in the
commitments that the seven companies make, but, yes, I think
there's a lot of legal and process infrastructure that's
missing here and should be filled in.
Professor Russell. Yes. I think, to go along with the
notion of an involuntary recall, there has to be that reporting
step happening first.
Chair Blumenthal. You know, you mentioned recalls. Both
Senator Hawley and I were State Attorneys General before we got
this job, and both of us are familiar with consumer issues. One
of the frustrations for me always was that even with a recall,
a lot of consumers didn't do anything about it. And so I think
the recall as a concept is a good one, but there have to be
teeth to it. There has to be a cop on the beat, a cop on the AI
beat. And I think the enforcement powers here are tremendously
important.
And the point that you made about the tremendous amount of
money is very important. You know, right now it's all private
funding or mostly private funding, but the Government has an
obligation to invest--I think all of you would agree--invest in
safety, just as it has in other technology and innovation,
because we can't rely on private companies to police
themselves.
That cop on the beat in the AI context has to be not only
enforcing rules but, as I said at the very beginning,
incentivizing innovation and sometimes funding it, to provide
the air bags and the seat belts and the crash-proof kinds of
safety measures that we have in the automobile industry. I
recognize that the analogy is imperfect, but I think the
concept is there. Senator Hawley.
Senator Hawley. This has been a tremendously helpful
hearing. I just want to thank each of you, again, for taking
the time to be here. Can I just ask you, if you could give us
your one or, at most, two recommendations for what you think
Congress ought to do right now, what should we do right now,
based on your expertise, what we've talked about today? I would
be very, very curious to hear. So maybe we'll start with you,
Professor Russell, and go that way.
Professor Russell. So, I gave some, you know, ``move fast
and fix things'' recommendations in my opening remarks, and I
think there's no doubt that we're going to have to have an
agency. You know, if things go as expected, AI is going to end
up being responsible for the majority of economic output in the
United States. So, it cannot be the case that there's no
overall regulatory agency for this technology.
    And the second thing, I think, would be just to focus,
again, on ensuring that systems that violate a certain set of
unacceptable behaviors are removed from the market. And I think
that will
have not only a benefit in terms of protecting the American
people and our national security, but also stimulating a great
deal of research on ensuring that the AI systems are well
understood, predictable, controllable. And that's it. Thank
you.
Senator Hawley. Very good. Professor Bengio?
    Professor Bengio. What I would suggest, in addition to what
Professor Russell said, is to make sure, either through
incentives to companies or through direct investments in
nonprofit organizations, that we invest heavily in safety--as
much as we spend on, you know, making more capable AIs--whether
it's at the level of the hardware or at the level of
cybersecurity and national security, to protect the public.
Senator Hawley. Very good. Mr. Amodei?
Mr. Amodei. I would, again, emphasize the testing and
auditing regime for all the risks, ranging from, you know, those
we face today--like misinformation, which came up--to the biological
risks that I'm worried about in 2 or 3 years, to the, you know,
risks of autonomous replication that are some unspecified
period after that. You know, all of those can be tied to
different kinds of tests that we can run on our model. And so,
that strikes me as a, you know, as a scaffolding on which we
can build lots of different concerns about AI systems. Right?
If we start by testing for only one thing, we can, in the end,
test for a much, much wider range of concerns. And I think
without such testing, we're blind. Like, I give you an AI
system, another company gives you an AI system, you talk to it,
it's not straightforward to determine whether this is a safe
system or a dangerous system.
So, I would, again, make the analogy to, you know, it's
like we're making these machines--you know, cars, airplanes.
These are complex machines. We need an enforcement mechanism
and people who are able to look at these machines and say,
``What are the benefits of these, and what is the danger of
this particular machine, as well as machines in general?'' Once
we measure that, I feel it's all going to work out well.
But, you know, before we've identified and have a process
for this, we're, from a regulatory perspective, shooting in the
dark.
And the final thing I would emphasize is, you know, I don't
think we have a lot of time. You know, I personally am open to
whatever administrative mechanism puts those kinds of tests in
place. You know, very agnostic to whether it's, you know, a new
agency or extending the authorities of existing agencies, but
whatever we do, it has to happen fast. And I think, to focus
people's minds on the biorisks, I would really target 2025,
2026, maybe even some chance of 2024. If we don't have things
in place that are restraining what can be done with AI systems,
we're going to have a really bad time.
Senator Hawley. Thank you, each of you. That's really
helpful. Let me just throw an idea out to you while I have you
here, so to speak, which is, when we think about protecting
individuals and their personal data and making sure that it
doesn't end up being used to train one of these generative AI
systems without the individual's consent--and we know that
there's just an enormous amount of our own personal information
out there in public, kind of, you know, I mean, it's really
without our permission, but it's out there on the Web,
everything from our credit histories to social media posts, et
cetera, et cetera. Should we, in addition to assigning property
rights in individual data, you know, explicitly giving every
American a property right in their data, should we also require
monetary compensation if AI companies want to use individual
data in their model in some way? Professor Bengio, go ahead.
Professor Bengio. It's not always going to be possible to
know, to attribute the output of a system to a particular piece
of data, because these systems are not just copying. They're
integrating information from many, many sources. And so we need
other mechanisms to share value with the people who are losing
something, for example, artists. But in some cases, it could be
identified, if an output is close enough to something that has
been, you know, has copyright or something. I think in that
case, yes, we should do it.
Senator Hawley. Any other thoughts? That's all of my
questions, Mr. Chairman.
Chair Blumenthal. I just have----
Senator Hawley. Remarkably.
Chair Blumenthal [continuing]. A couple more questions. I
promise they will be brief. You've been very patient, but this
panel is such a great resource that I want to impose on your
patience and your wisdom. The point that you were making
earlier about the red teaming and the importance of testing and
auditing reminded me about your testimony, your prepared
testimony, but also a conversation that you and I had about how
Anthropic ``went about testing its large language model,
particularly as related to the biological dangers''--where you
``worked with world-class biosecurity experts,'' I think was
your quote, ``over many months, in order to be able to identify
and mitigate the risks that Claude 2 might raise.''
On the other hand, I think you may have mentioned a company
that basically used graduate students to do the same task.
There's an enormous difference in those two testing regimens.
Now--right now, there's no requirement, there's no legal duty,
but would you recommend that when we write legislation, that we
impose some kind of qualifications on the testers and the
evaluators, so as to have that expertise?
Mr. Amodei. Yes. So, spiritually, I'm very aligned with
that. I mean, I want to say clearly, like, all of us--all of
the companies, all the researchers--are trying our best to
figure this out. So, you know, I don't want to call out, you
know, any companies here. I think we're all trying to figure it
out together. But I think it is an object lesson, in that in
testing these models, you know, you can do something that you
might think is a very reliable way of soliciting bad behavior
from the models or, you know, a test that you think is
truthful, and, you know, you can find out later that that
really wasn't the case, even if you had all the good intent in
the world.
In the case of bio, the key was, you know, to have world
experts and to zero in on a few things. In other areas, the key
might be different. And so I think the most important thing may
be not so much the static requirements, although, you know, I
would certainly endorse, you know, the level of expertise has
to be very high, but making the process have some living
element to it, so that it can be adjusted: ``We used to think
that this test was okay. This test was not okay.''
You know, just imagine we're a few years after, you know,
the invention of flying, and we're looking at these big
machines, and we're like, ``Well, how do we know if this thing
is going to crash?'' Right now, we know very little. Somehow,
we need to design the regulatory architecture so that we can
get to the point where, if we learn new things about what makes
planes safe and what makes planes crash, they get kind of
automatically hooked into whatever architecture we built. I
don't know the best way to do that, but I think that should be
the goal.
Chair Blumenthal. Well, you know, that's a very timely
analogy, because a lot of the military aircraft we're building
now basically fly on computers. And the pilot is in the planes,
right now, but we're moving toward such sophisticated and
complicated aircraft, which I know a little bit about because
I'm on the Armed Services Committee, that, you know, they're a
lot smarter than pilots in some of the flying they can do. But
at the same time, they are certainly red teamed to avoid
misdirection and mistakes.
And the kinds of specifics that you just mentioned are
where the rubber hits the road. These kinds of specifics are
where the legislation will be very important. President Biden
has enlisted--or elicited commitments to security, safety,
transparency, announced on Friday. Important step forward. But
this red teaming is an example of how voluntary nonspecific
commitments are insufficient. The advantages are in the
details, not just the devil. The details are tremendously
important, and when it comes to economic pressures, companies
can cut corners. Again, the Gold Rush. These decisions have
real economic consequences.
I want to just, in the last--maybe the last question I
have. On the issue of open source, you each raised the security
and safety risk of AI models that are open source or are leaked
to the public, the danger. There are some advantages to having
open source, as well. It's a complicated issue. I appreciate
that open source can be an extraordinary resource. But even in
the short time that we've had some AI tools and they've been
available, they have been abused. For example, I'm aware that a
group of people took Stable Diffusion and created a version for
the express purpose of creating nonconsensual sexual material.
So, on the one hand, access to AI data is a good thing for
research, but on the other hand, the same open models can
create risks, just because they are open.
Senator Hawley and I, as an example of our cooperation,
wrote to Meta about an AI model that they released to the
public. You're familiar with it, I'm sure, LLaMA. They put the
first version of LLaMA out there with not much consideration of
risk, and it was leaked or it was somehow made known. The
second version had more documentation of its safety work, but
it seems like Meta or Facebook's business decisions may have
been driving its agenda. So, let me ask you about that
phenomenon. I think you have commented on it, Dr. Bengio, so
let me----
Professor Bengio. Yes.
Chair Blumenthal [continuing]. Talk to you first.
Professor Bengio. Yes. I think it's really important,
because when we put open source out there for something that
could be dangerous, which is a tiny minority of all the code
that's open source, essentially we're opening the door to all
the bad actors. And as these systems become more capable, bad
actors don't need to have very strong expertise, whether it's
in bioweapons or cybersecurity, in order to take advantage of
systems like this. And they don't even need to have huge
amounts of compute, either, to take advantage of systems like
this.
Now, I believe that the different companies that committed
to these measures last week probably have a different
interpretation of what is a dangerous system, and I think it's
really important that the Government comes up with some
definition, which is going to keep moving, but make sure that
future releases are going to be very carefully evaluated for
that potential before they're released.
I've been a staunch advocate of open source for all my
scientific career. Open source is great for scientific
progress. But as Geoff Hinton, my colleague, was saying, ``If
nuclear bombs were software, would you allow open source of
nuclear bombs?'' Right?
Chair Blumenthal. And I think the comparison is apt. You
know, I've been reading the most recent biography of Robert
Oppenheimer, and every time I think about AI, the specter of
quantum physics, nuclear bombs, but also atomic energy, both
peaceful and military purposes, is inescapable.
Professor Bengio. So, I have another thing to add on open
source. Some of it is coming from companies like Meta, but
there's also a lot of open source coming out of universities.
Now, usually these universities don't have the means of
training the kind of large systems that we're seeing in
industry. But the code could be then, you know, used by a rich
bad actor and turned into something dangerous. So, I believe
that we need ethics review boards in universities for AI, just
like we have for biology and medicine.
    Right now, there's no such thing. I mean, there are ethics
boards that, in principle, could do that, but they're not set up for
that. They don't have the expertise. They don't have the kind
of protocols. We need to move into a culture where universities
across the world but, you know, in the VLOP nations in
particular, adopt these ethics reviews with the same principles
we're doing for other sciences where there is dangerous output,
but in the case of AI.
Mr. Amodei. Yes, I strongly share Professor Bengio's view
here. I want to make sure I'm kind of precise in my views,
because I think there is nuance to it. You know, in line with
Professor Bengio, I think in most scientific fields, open
source is a good thing. It accelerates progress. And I think
even within AI there's room for models on the smaller and
medium side. I don't think anyone thinks those models are
seriously dangerous. They have some risks, but the benefits may
outweigh the costs.
And I think, to be fair, even up to the level of open
source models that have been released so far, the risks are
relatively limited. So, construed very narrowly, I'm not sure I
have an objection. But I'm very concerned about where things
are going. If we talk about 2 to 3 years for the frontier models for the biorisks, and probably less than that for things like misinformation--we're there now--then the path things are on, in terms of the scaling of open source models, is a very dangerous path. And, again, if that path continues, I think we could get to a very dangerous place.
I think it's worth saying something about open source models that is clear to all the experts but that I want to make sure is understood by this Committee, which is this: when you control a model and you're deploying it, you have the ability to monitor its usage. It might be misused at one point, but then you can alter the model. You can revoke a user's access. You can change what the model is willing to do.
When a model is released in an uncontrolled manner, there's
no ability to do that. It's entirely out of your hands. And so
I think that should be attended to carefully. There may be ways
to release models open source so that it's harder to circumvent
the guardrails, but that's a much harder problem, and we should
confront the advocates of this with that problem and challenge
them to solve it.
Finally, I'd say open source is a little bit of a misnomer
here. Right? Open source normally refers to, you know, smaller
developers who are iterating quickly, and I think that's a good
thing. But I think here we're talking about something a little
bit different, which is a more uncontrolled release of larger
models by, you know, again to your point, Senator Hawley, like
much larger entities that pay tens or even hundreds of millions
of dollars to train them. I think we should think of that, and their obligations, as being in a little bit of a different category.
Professor Russell. So, I'd just like to add a couple of
points. I agree with everything the other witnesses said. So,
one issue is being able to trace provenance--from the output that is problematic, through to which model was used to create it, through to where that model came from.
And a second point is about liability. And it's not
completely clear where exactly the liability should lie. But to
continue the nuclear analogy, if a corporation decided they wanted to sell a lot of enriched uranium in supermarkets, and someone decided to buy several pounds of that enriched uranium and make a bomb, we would say that some liability should reside with the company that decided to sell the enriched
uranium. They could put advice on it saying, ``Do not use more
than,'' you know, ``three ounces of this in one place,'' or
something. But no one's going to say that that absolves them
from liability.
So, I think those two are really important. And the open
source community has got to start thinking about whether they
should be liable for putting stuff out there that is ripe for
misuse.
Chair Blumenthal. I want to invite any of you who have
closing comments or thoughts that you haven't had an
opportunity to express. Professor?
Professor Bengio. So, I would like to add a point about
international or multilateral collaboration on these things and
how it's related to having maybe a single agency here in the
United States.
If there are 10 different agencies trying to regulate AI in its various forms, that could be useful--as Stuart Russell was saying, this is going to be very big in terms of the space it takes up in the economy--but we also need to have a single voice that coordinates with the other countries. And having one agency that does that is going to be very important.
Also, we need an agency in the first place because we can't
predict--we can't put in the law every protection that is
needed, every regulation that is needed. We don't know yet what
the regulations should be in 1 year or 2 years, 3 years from
now. So, we need to build something that's going to be very
agile. And I know it's difficult for governments to do that.
Maybe we can do research to improve on that front, agility in
doing the right thing. But having an agency is at least a tool
toward that goal.
Chair Blumenthal. I would just close by saying that is
exactly why we're here today: to develop an entity or a body
that will be agile, nimble, and fast, because we have no time
to waste. I don't know who the Prometheus is on AI, but I know
we have a lot of work to make sure that the fire here is used
productively.
And there are enormously productive uses, which we haven't really talked about much. Whether it is curing cancer, treating diseases, some of them mundane, by screening X-rays, or developing new technology that can help stop climate change, there is a vast variety of potentially productive uses, and it should be done with American workers--I think we're very much in agreement here.
And the last point I would make is on agreement. What you've seen here is not all that common, which is bipartisan unanimity that we need guidance from the Federal Government. We
can't depend on private industry. We can't depend on academia.
The Federal Government has a role that is not only reactive and
regulatory. It is also proactive in investing in research and
development of the tools that are needed to make this fire work
for all of us.
So, I want to thank every one of you for being here today.
We look forward to continuing this conversation with you. Our
record is going to remain open for 2 weeks, in case any of my
colleagues have written questions for you. I may have some,
too. If you have additional thoughts, feel free to submit them.
I've read a number of your writings, and I'm sure I will
continue reading them and look forward to talking again. With
that, this hearing is adjourned.
[Whereupon, at 5:22 p.m., the hearing was adjourned.]
[Additional material submitted for the record follows.]
A P P E N D I X
Additional Material Submitted for the Record
[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
[all]