[Senate Hearing 118-037]
[From the U.S. Government Publishing Office]


                                                        S. Hrg. 118-037

                        OVERSIGHT OF A.I.: RULES
                      FOR ARTIFICIAL INTELLIGENCE

=======================================================================

                                HEARING

                               BEFORE THE

                        SUBCOMMITTEE ON PRIVACY,
                        TECHNOLOGY, AND THE LAW

                                 OF THE

                       COMMITTEE ON THE JUDICIARY
                          UNITED STATES SENATE

                    ONE HUNDRED EIGHTEENTH CONGRESS

                             FIRST SESSION
                               __________

                              MAY 16, 2023
                               __________

                          Serial No. J-118-16
                               __________

         Printed for the use of the Committee on the Judiciary
         

                  
                               __________

                    U.S. GOVERNMENT PUBLISHING OFFICE
                    
52-706 PDF                WASHINGTON : 2024                     
                  
         

                       COMMITTEE ON THE JUDICIARY

                   RICHARD J. DURBIN, Illinois, Chair
DIANNE FEINSTEIN, California         LINDSEY O. GRAHAM, South Carolina, 
SHELDON WHITEHOUSE, Rhode Island             Ranking Member
AMY KLOBUCHAR, Minnesota             CHARLES E. GRASSLEY, Iowa
CHRISTOPHER A. COONS, Delaware       JOHN CORNYN, Texas
RICHARD BLUMENTHAL, Connecticut      MICHAEL S. LEE, Utah
MAZIE K. HIRONO, Hawaii              TED CRUZ, Texas
CORY A. BOOKER, New Jersey           JOSH HAWLEY, Missouri
ALEX PADILLA, California             TOM COTTON, Arkansas
JON OSSOFF, Georgia                  JOHN KENNEDY, Louisiana
PETER WELCH, Vermont                 THOM TILLIS, North Carolina
                                     MARSHA BLACKBURN, Tennessee
             Joseph Zogby, Chief Counsel and Staff Director
      Katherine Nikas, Republican Chief Counsel and Staff Director

            Subcommittee on Privacy, Technology, and the Law

                 RICHARD BLUMENTHAL, Connecticut, Chair
AMY KLOBUCHAR, Minnesota             JOSH HAWLEY, Missouri, Ranking 
CHRISTOPHER A. COONS, Delaware           Member
MAZIE K. HIRONO, Hawaii              JOHN KENNEDY, Louisiana
ALEX PADILLA, California             MARSHA BLACKBURN, Tennessee
JON OSSOFF, Georgia                  MICHAEL S. LEE, Utah
                                     JOHN CORNYN, Texas
                David Stoopler, Democratic Chief Counsel
                 John Ehrett, Republican Chief Counsel

                            C O N T E N T S

                              ----------                              

                         MAY 16, 2023, 10 A.M.

                    STATEMENTS OF COMMITTEE MEMBERS

                                                                   Page

Blumenthal, Hon. Richard, a U.S. Senator from the State of 
  Connecticut....................................................     1
Hawley, Hon. Josh, a U.S. Senator from the State of Missouri.....     3
Durbin, Hon. Richard J., a U.S. Senator from the State of 
  Illinois.......................................................     4

                               WITNESSES

Witness List.....................................................    57
Altman, Samuel, chief executive officer, OpenAI, San Francisco, 
  California.....................................................     6
    prepared statement...........................................    58
Marcus, Gary, Professor Emeritus, New York University, Vancouver, 
  British Columbia, Canada.......................................     9
    prepared statement...........................................    71
Montgomery, Christina, chief privacy and trust officer, IBM, 
  Cortlandt Manor, New York......................................     7
    prepared statement...........................................    78

                               QUESTIONS

Questions submitted to Samuel Altman by:
    Chair Durbin.................................................    86
    Chair Blumenthal.............................................    87
    Senator Kennedy..............................................    88
    Senator Tillis...............................................    89
Questions submitted to Gary Marcus by:
    Chair Durbin.................................................    90
    Chair Blumenthal.............................................    91
    Senator Kennedy..............................................    92
Questions submitted to Christina Montgomery by:
    Chair Durbin.................................................    93
    Chair Blumenthal.............................................    94

                                ANSWERS

Responses of Samuel Altman to questions submitted by:
    Chair Durbin.................................................    99
      Attachment.................................................   106
    Chair Blumenthal.............................................   103
    Senator Kennedy..............................................    95
    Senator Tillis...............................................    97
Responses of Gary Marcus to questions submitted by:
    Chair Durbin.................................................   167
    Chair Blumenthal.............................................   166
    Senator Kennedy..............................................   169
Responses of Christina Montgomery to questions submitted by:
    Chair Durbin.................................................   172
    Chair Blumenthal.............................................   174

                MISCELLANEOUS SUBMISSIONS FOR THE RECORD

Submitted by Chair Blumenthal:

    Association for Computing Machinery, letter, May 12, 2023....   175
    Center for AI and Digital Policy, letter, May 11, 2023.......   187
    Center for AI and Digital Policy, letter to The Economist, 
      May 11, 2023...............................................   189
    Center for AI and Digital Policy, report, April 2023.........   190
    Chamber of Progress, letter, May 16, 2023....................  1456
    Future of Life Institute, memorandum, May 15, 2023...........  1459
    IEEE-USA, letter, May 22, 2023...............................  1462
    Public Knowledge, letter, May 16, 2023.......................  1464
    Stability AI, letter, May 13, 2023...........................  1466
    Stackhouse, Ed, Alphabet Workers Union member, letter, May 
      15, 2023...................................................  1481

 
                        OVERSIGHT OF A.I.: RULES
                      FOR ARTIFICIAL INTELLIGENCE

                              ----------                              


                         TUESDAY, MAY 16, 2023

                      United States Senate,
  Subcommittee on Privacy, Technology, and the Law,
                                Committee on the Judiciary,
                                                    Washington, DC.
    The Subcommittee met, pursuant to notice, at 10 a.m., in 
Room 226, Dirksen Senate Office Building, Hon. Richard 
Blumenthal, Chair of the Subcommittee, presiding.
    Present: Senators Blumenthal [presiding], Klobuchar, Coons, 
Hirono, Padilla, Ossoff, Hawley, Kennedy, and Blackburn.
    Also present: Chair Durbin and Senators Booker, Welch, and 
Graham.

         OPENING STATEMENT OF HON. RICHARD BLUMENTHAL,
          A U.S. SENATOR FROM THE STATE OF CONNECTICUT

    Chair Blumenthal. Welcome to the hearing of the Privacy, 
Technology, and the Law Subcommittee. I thank my partner in 
this effort, Senator Hawley, Ranking Member, and I particularly 
want to thank Senator Durbin, Chairman of the Judiciary 
Committee, and he will be speaking shortly. This hearing is on 
the oversight of artificial intelligence, the first in a series 
of hearings intended to write the rules of AI. Our goal is to 
demystify and hold accountable those new technologies to avoid 
some of the mistakes of the past. And now for some introductory 
remarks.
    [AI-generated audio segment is played.]
    ``Too often, we have seen what happens when technology 
outpaces regulation: the unbridled exploitation of personal 
data, the proliferation of disinformation, and the deepening of 
societal inequalities. We have seen how algorithmic biases can 
perpetuate discrimination and prejudice and how the lack of 
transparency can undermine public trust. This is not the future 
we want.''
    Chair Blumenthal. If you were listening from home, you 
might have thought that voice was mine and the words from me. 
But, in fact, that voice was not mine, the words were not mine, 
and the audio was an AI voice-cloning software trained on my 
floor speeches. The remarks were written by ChatGPT when it was 
asked how I would open this hearing, and you heard just now the 
result.
    I asked ChatGPT, ``Why did you pick those themes and that 
content?'' And it answered, and I'm quoting, ``Blumenthal has a 
strong record in advocating for consumer protection and civil 
rights. He has been vocal about issues such as data privacy and 
the potential for discrimination in algorithmic decisionmaking. 
Therefore, this statement emphasizes these aspects.''
    Mr. Altman, I appreciate ChatGPT's endorsement. In all 
seriousness, this apparent reasoning is pretty impressive. I am 
sure that we'll look back in a decade and view ChatGPT and GPT-
4 like we do the first cell phone, those big clunky things that 
we used to carry around. But we recognize that we are on the 
verge, really, of a new era.
    The audio, and my playing it, may strike you as curious or 
humorous, but what reverberated in my mind was, ``What if I had 
asked it--and what if it had provided--an endorsement of 
Ukraine surrendering or Vladimir Putin's leadership?'' That 
would have been really frightening. And the prospect is more 
than a little scary, to use the word, Mr. Altman, you have 
used, yourself. And I think you have been very constructive in 
calling attention to the pitfalls as well as the promise, and 
that's the reason why we wanted you to be here today, and we 
thank you and our other witnesses for joining us.
    For several months now, the public has been fascinated with 
GPT, DALL-E, and other AI tools. These examples, like the
homework done by ChatGPT, or the articles and op-eds that it 
can write, feel like novelties. But the underlying advancements 
of this era are more than just research experiments. They are 
no longer fantasies of science fiction. They are real and 
present.
    The promises of curing cancer, or developing new 
understandings of physics and biology, or modeling climate and 
weather--all very encouraging and hopeful. But we also know the 
potential harms, and we've seen them already: weaponized 
disinformation, housing discrimination, harassment of women, 
and impersonation fraud, voice cloning, deepfakes. These are 
the potential risks, despite the other rewards.
    And for me, perhaps the biggest nightmare is the looming 
new industrial revolution, the displacement of millions of 
workers, the loss of huge numbers of jobs, the need to prepare 
for this new industrial revolution in skill training and 
relocation that may be required. And already, industry leaders 
are calling attention to those challenges. To quote ChatGPT, 
``This is not necessarily the future that we want. We need to 
maximize the good over the bad.''
    Congress has a choice now. We had the same choice when we 
faced social media. We failed to seize that moment. The result 
is predators on the internet, toxic content, exploiting 
children, creating dangers for them. And Senator Blackburn and I 
and others like Senator Durbin on the Judiciary Committee are 
trying to deal with it through the Kids Online Safety Act. But 
Congress
failed to meet the moment on social media. Now we have the 
obligation to do it on AI before the threats and the risks 
become real.
    Sensible safeguards are not in opposition to innovation. 
Accountability is not a burden. Far from it. They are the 
foundation of how we can move ahead while protecting public 
trust. They are how we can lead the world in technology and 
science but also in promoting our democratic values. Otherwise, 
in the absence of that trust, I think we may well lose both.
These are sophisticated technologies, but there are basic 
expectations common in our law. We can start with transparency. AI 
companies ought to be required to test their systems, disclose 
known risks, and allow independent researcher access. We can 
establish scorecards and nutrition labels to encourage competition 
based on safety and trustworthiness. Next, limitations on use: 
there are places where the risk of AI is so extreme that we ought 
to impose restrictions or even ban its use, especially when it 
comes to commercial invasions of privacy for profit and decisions 
that affect people's livelihoods. And, of course, accountability, 
liability: when AI companies and their clients cause harm, they 
should be held liable. We should not
repeat our past mistakes. For example, Section 230. Forcing 
companies to think ahead and be responsible for the 
ramifications of their business decisions can be the most 
powerful tool of all. Garbage in, garbage out. The principle 
still applies. We ought to beware of the garbage, whether it's 
going into these platforms or coming out of them.
    And the ideas that we develop in this hearing, I think, 
will provide a solid path forward. I look forward to discussing 
them with you today, and I will just finish on this note: The 
AI industry doesn't have to wait for Congress. I hope for ideas 
and feedback from this discussion, and for voluntary action from 
the industry of the kind we've seen lacking in many social media 
platforms, where the consequences have been huge.
    So, I'm hoping that we will elevate rather than have a race 
to the bottom, and I think these hearings will be an important 
part of this conversation. This one is only the first. The 
Ranking Member and I have agreed there should be more, and 
we're going to invite other industry leaders. Some have 
committed to come--experts, academics--and the public, we hope, 
will participate.
    And with that, I will turn to the Ranking Member, Senator 
Hawley.

             OPENING STATEMENT OF HON. JOSH HAWLEY,
           A U.S. SENATOR FROM THE STATE OF MISSOURI

    Senator Hawley. Thank you very much, Mr. Chairman, and 
thanks to the witnesses for being here. I appreciate that 
several of you had long journeys to make in order to be here. I 
appreciate you making the time. I look forward to your 
testimony. I want to thank Senator Blumenthal for convening 
this hearing, for being a leader on this topic.
    You know, a year ago, we couldn't have had this hearing 
because the technology that we're talking about had not burst 
into public consciousness. That gives us a sense, I think, of 
just how rapidly this technology that we're talking about today 
is changing and evolving and transforming our world right 
before our very eyes. I was talking with someone just last 
night, a researcher in the field of psychiatry, who was 
pointing out to me that ChatGPT and generative AI, these large 
language models--it's really like the invention of the internet, 
in scale at least--and potentially far,
far more significant than that. We could be looking at one of 
the most significant technological innovations in human 
history.
    And I think my question is: What kind of an innovation is 
it going to be? Is it going to be like the printing press that 
diffused knowledge and power and learning widely across the 
landscape, that empowered ordinary, everyday individuals, that 
led to greater flourishing, that led, above all, to greater 
liberty? Or is it going to be more like the atom bomb--huge 
technological breakthrough, but the consequences, severe, 
terrible, continue to haunt us to this day?
    I don't know the answer to that question. I don't think any 
of us in the room know the answer to that question, because I 
think the answer has not yet been written. And to a certain 
extent, it's up to us here and to us, as the American people, 
to write the answer. What kind of technology will this be? How 
will we use it to better our lives? How will we use it to 
actually harness the power of technological innovation for the 
good of the American people, for the liberty of the American 
people, not for the power of the few?
    You know, I was reminded of the psychologist and writer 
Carl Jung, who said, at the beginning of the last century, that 
our ability for technological innovation, our capacity for 
technological revolution, had far outpaced our ethical and 
moral ability to apply and harness the technology we developed. 
That was a century ago. I think the story of the 20th century 
largely bore him out.
    And I just wonder, what will we say, as we look back at 
this moment, about these new technologies, about generative AI, 
about these language models, and about the host of other AI 
capacities that are even right now under development, not just 
in this country but in China, the countries of our adversaries, 
and all around the world? And I think the question that Jung 
posed is really the question that faces us: Will we strike that 
balance between technological innovation and our ethical and 
moral responsibility to humanity, to liberty, to the freedom of 
this country? And I hope that today's hearing will take us a 
step closer to that answer. Thank you, Mr. Chairman.
    Chair Blumenthal. Thanks. Thanks, Senator Hawley. I'm going 
to turn to the Chairman of the Judiciary Committee and the 
Ranking Member, Senator Graham, if they have opening remarks, 
as well.

          OPENING STATEMENT OF HON. RICHARD J. DURBIN,
           A U.S. SENATOR FROM THE STATE OF ILLINOIS

    Chair Durbin. Yes, Mr. Chairman. Thank you very much, and 
Senator Hawley, as well. Last week in this Committee, full 
Committee, Senate Judiciary Committee, we dealt with an issue 
that had been waiting for attention for almost two decades, and 
that is what to do with social media when it comes to the
abuse of children. We had four bills, initially, that were 
considered by this Committee, and--what may be history in the 
making--we passed all four bills with unanimous roll calls. 
Unanimous roll calls.
    I can't remember another time when we've done that on an 
issue that important. It's an indication, I think, of the 
important position of this Committee in the national debate on 
issues that affect every single family and affect our future in 
a profound way.
    1989 was a historic watershed year in America because 
that's when ``Seinfeld'' arrived, and we had a sitcom which was 
supposedly about little or nothing, which turned out to be 
enduring. I like to watch it, obviously, and I always marvel 
when they show the phones that he used in 1989, and I think 
about those in comparison to what we carry around in our 
pockets today. It's a dramatic change. And I guess the 
question, as I look at that, is: Does this change in phone 
technology that we've witnessed through this sitcom really 
exemplify a profound change in America? Still unanswered.
    But the basic question we face is whether or not the issue 
of AI is a quantitative change in technology or a qualitative 
change. The suggestions that I've heard from experts in the 
field suggest it's qualitative. Is AI fundamentally different? 
Is it a game changer? Is it so disruptive that we need to treat 
it differently than other forms of innovation? That's the 
starting point.
    And the second starting point is one that's humbling, and 
that is the fact that when you look at the record of Congress 
in dealing with innovation, technology, and rapid change, we're 
not designed for that. In fact, the Senate was not created for 
that purpose but just the opposite: slow things down, take a 
harder look at it, don't react to public sentiment, make sure 
you're doing the right thing.
    Well, I've heard of the potential, the positive potential 
of AI, and it is enormous. You can go through lists of the 
deployments of this technology: an idea for a website sketched on 
a napkin can be turned into functioning code.
Pharmaceutical companies could use the technology to identify 
new candidates to treat disease. The list goes on and on.
    And then, of course, the danger, and it's profound, as 
well. So, I'm glad that this hearing has taken place. I think 
it's important for all of us to participate. I'm glad that it's 
a bipartisan approach. We're going to have to scramble to keep 
up with the pace of innovation in terms of our Government, 
public response to it, but this is a great start. Thank you, 
Mr. Chairman.
    Chair Blumenthal. Thanks. Thanks, Senator Durbin. It is 
very much a bipartisan approach, very deeply and broadly 
bipartisan. And in that spirit, I'm going to turn to my friend, 
Senator Graham.
    Senator Graham. In the spirit of wanting to hear from them, 
I'm going to not say anything, and thank you both for that.
    Chair Blumenthal. Thank you. That was not written by AI, 
for sure.
    [Laughter.]
    Chair Blumenthal. Let me introduce, now, the witnesses. 
We're very grateful to you for being here. Sam Altman is the 
cofounder and CEO of OpenAI, the AI research and deployment 
company behind ChatGPT and DALL-E. Mr. Altman was president of
the early stage startup accelerator Y Combinator from 1914--I'm 
sorry, 2014 to 2019. OpenAI was founded in 2015.
    Christina Montgomery is IBM's vice president and chief 
privacy and trust officer, overseeing the company's global 
privacy program policies, compliance, and strategy. She also 
chairs IBM's AI Ethics Board, a multidisciplinary team 
responsible for the governance of AI and emerging technologies. 
Christina has served in various roles at IBM, including 
corporate secretary to the company's board of directors. She is 
a global leader in AI ethics and governance, and Ms. Montgomery 
also is a member of the United States Chamber of Commerce AI 
Commission and the United States National AI Advisory 
Committee, which was established in 2022 to advise the 
President and the National AI Initiative Office on a range of 
topics related to AI.
    Gary Marcus is a leading voice in artificial intelligence. 
He's a scientist, bestselling author, and entrepreneur; founder of 
Robust AI and Geometric AI, acquired by Uber, if I'm not mistaken; 
and emeritus professor of psychology and neuroscience
at NYU. Mr. Marcus is well known for his challenges to 
contemporary AI, anticipating many of the current limitations 
decades in advance, and for his research in human language 
development and cognitive neuroscience. Thank you for being 
here.
    And as you may know, our custom on the Judiciary Committee 
is to swear in our witnesses before they testify, so if you 
would all please rise and raise your right hand.
    [Witnesses are sworn in.]
    Chair Blumenthal. Thank you. Mr. Altman, we're going to 
begin with you, if that's okay.

 STATEMENT OF SAMUEL ALTMAN, CHIEF EXECUTIVE OFFICER, OPENAI, 
                   SAN FRANCISCO, CALIFORNIA

    Mr. Altman. Thank you. Thank you, Chairman Blumenthal, 
Ranking Member Hawley, Members of the Judiciary Committee. 
Thank you for the opportunity to speak to you today about large 
neural networks. It's really an honor to be here, even more so 
in the moment than I expected. My name is Sam Altman. I'm the 
chief executive officer of OpenAI.
    OpenAI was founded on the belief that artificial 
intelligence has the potential to improve nearly every aspect 
of our lives, but also that it creates serious risks we have to 
work together to manage. We're here because people love this 
technology. We think it can be a printing press moment. We have 
to work together to make it so.
    OpenAI is an unusual company, and we set it up that way 
because AI is an unusual technology. We are governed by a 
nonprofit, and our activities are driven by our mission and our 
charter, which commit us to working to ensure the broad 
distribution of the benefits of AI and to maximizing the safety
of AI systems. We are working to build tools that one day can 
help us make new discoveries and address some of humanity's 
biggest challenges, like climate change and curing cancer.
    Our current systems aren't yet capable of doing these 
things, but it has been immensely gratifying to watch many 
people around the world get so much value from what these 
systems can already do today. We love seeing people use our 
tools to create, to learn, to be more productive. We're very 
optimistic that there are going to be fantastic jobs in the 
future and that current jobs can get much better.
    We also love seeing what developers are doing to improve 
lives. For example, Be My Eyes used our new multimodal 
technology in GPT-4 to help visually impaired individuals 
navigate their environment. We believe that the benefits of the 
tools we have deployed so far vastly outweigh the risks, but 
ensuring their safety is vital to our work, and we make 
significant efforts to ensure that safety is built into our 
systems at all levels.
    Before releasing any new system, OpenAI conducts extensive 
testing, engages external experts for detailed reviews and 
independent audits, improves the model's behavior, and 
implements robust safety and monitoring systems. Before we 
released GPT-4, our latest model, we spent over 6 months 
conducting extensive evaluations, external red teaming, and 
dangerous capability testing.
    We are proud of the progress that we made. GPT-4 is more 
likely to respond helpfully and truthfully and refuse harmful 
requests than any other widely deployed model of similar 
capability; however, we think that regulatory intervention by 
governments will be critical to mitigate the risks of 
increasingly powerful models. For example, the U.S. Government 
might consider a combination of licensing and testing 
requirements for development and release of AI models above a 
threshold of capabilities.
    There are several other areas I mention in my written 
testimony where I believe that companies like ours can partner 
with governments, including ensuring that the most powerful AI 
models adhere to a set of safety requirements, facilitating 
processes to develop and update safety measures, and examining 
opportunities for global coordination. And as you mentioned, I 
think it's important that companies have their own 
responsibility here, no matter what Congress does.
    This is a remarkable time to be working on artificial 
intelligence. But as this technology advances, we understand 
that people are anxious about how it could change the way we 
live. We are, too. But we believe that we can and must work 
together to identify and manage the potential downsides so that 
we can all enjoy the tremendous upsides.
    It is essential that powerful AI is developed with 
democratic values in mind, and this means that U.S. leadership 
is critical. I believe that we will be able to mitigate the 
risks in front of us and really capitalize on this technology's 
potential to grow the U.S. economy and the world's. And I look 
forward to working with you all to meet this moment, and I look 
forward to answering your questions. Thank you.
    [The prepared statement of Mr. Altman appears as a 
submission for the record.]
    Chair Blumenthal. Thank you, Mr. Altman. Ms. Montgomery?

  STATEMENT OF CHRISTINA MONTGOMERY, CHIEF PRIVACY AND TRUST 
            OFFICER, IBM, CORTLANDT MANOR, NEW YORK

    Ms. Montgomery. Chairman Blumenthal, Ranking Member Hawley, 
and Members of the Subcommittee, thank you for today's 
opportunity to present. AI is not new, but it's certainly 
having a moment. Recent breakthroughs in generative AI and the 
technology's dramatic surge in public attention have rightfully 
raised serious questions at the heart of today's
hearing. What are AI's potential impacts on society? What do we 
do about bias? What about misinformation, misuse, or harmful 
content generated by AI systems? Senators, these are the right 
questions, and I applaud you for convening today's hearing to 
address them head on.
    While AI may be having its moment, the moment for 
government to play a role has not passed us by. This period of 
focused public attention on AI is precisely the time to define 
and build the right guardrails to protect people and their 
interests. But at its core, AI is just a tool, and tools can 
serve different purposes. To that end, IBM urges Congress to 
adopt a precision regulation approach to AI. This means 
establishing rules to govern the deployment of AI in specific 
use cases, not regulating the technology itself.
    Such an approach would involve four things:
    First, different rules for different risks. The strongest 
regulation should be applied to use cases with the greatest 
risks to people and society.
    Second, clearly defining risks. There must be clear 
guidance on AI uses or categories of AI-supported activity that 
are inherently high risk. This common definition is key to 
enabling a clear understanding of what regulatory requirements 
will apply in different use cases and contexts.
    Third, be transparent. So, AI shouldn't be hidden. 
Consumers should know when they're interacting with an AI 
system and that they have recourse to engage with a real person 
should they so desire. No person anywhere should be tricked 
into interacting with an AI system.
    And finally, showing the impact. For higher-risk use cases, 
companies should be required to conduct impact assessments that 
show how their systems perform against tests for bias and other 
ways that they could potentially impact the public and to 
attest that they've done so. By following the risk-based, use case-
specific approach at the core of precision regulation, Congress
can mitigate the potential risks of AI without hindering 
innovation.
    But businesses also play a critical role in ensuring the 
responsible deployment of AI. Companies active in developing or 
using AI must have strong internal governance, including, among 
other things, designating a lead AI ethics official responsible 
for an organization's trustworthy AI strategy, standing up an 
ethics board or a similar function as a centralized 
clearinghouse for resources to help guide implementation of 
that strategy. IBM has taken both of these steps, and we 
continue calling on our industry peers to follow suit.
    Our AI ethics board plays a critical role in overseeing 
internal AI governance processes, creating reasonable 
guardrails to ensure we introduce technology into the world in 
a responsible and safe manner. It provides centralized 
governance and accountability while still being flexible enough 
to support decentralized initiatives across IBM's global 
operations. We do this because we recognize that society grants 
our license to operate, and with AI, the stakes are simply too 
high. We must build, not undermine, the public trust.
    The era of AI cannot be another era of move fast and break 
things. But we don't have to slam the brakes on innovation, 
either. These systems are within our control today, as are the 
solutions. What we need at this pivotal moment is clear, 
reasonable policy and sound guardrails. These guardrails should 
be matched with meaningful steps by the business community to 
do their part. Congress and the business community must work 
together to get this right. The American people deserve no 
less. Thank you for your time, and I look forward to your 
questions.
    [The prepared statement of Ms. Montgomery appears as a 
submission for the record.]
    Chair Blumenthal. Thank you. Professor Marcus?

    STATEMENT OF GARY MARCUS, PROFESSOR EMERITUS, NEW YORK 
            UNIVERSITY, VANCOUVER, BRITISH COLUMBIA,
                             CANADA

    Professor Marcus. Thank you, Senators. Today's meeting is 
historic. I'm profoundly grateful to be here. I come as a 
scientist, someone who's founded AI companies, and as someone 
who genuinely loves AI but who is increasingly worried. There 
are benefits, but we don't yet know whether they will outweigh 
the risks.
    Fundamentally, these new systems are going to be 
destabilizing. They can and will create persuasive lies at a 
scale humanity has never seen before. Outsiders will use them 
to affect our elections, insiders to manipulate our markets and 
our political systems. Democracy itself is threatened. Chatbots 
will also clandestinely shape our opinions, potentially 
exceeding what social media can do. Choices about data sets 
that AI companies use will have enormous unseen influence. 
Those who choose the data will make the rules, shaping society 
in subtle but powerful ways.
    There are other risks, too, many stemming from the inherent 
unreliability of current systems. A law professor, for example, 
was accused by a chatbot of sexual harassment: untrue. And it 
pointed to a Washington Post article that didn't even exist. 
The more that that happens, the more that anybody can deny 
anything. As one prominent lawyer told me on Friday, defendants 
are starting to claim that plaintiffs are making up legitimate 
evidence. These sorts of allegations undermine the abilities of 
juries to decide what or who to believe and contribute to the 
undermining of democracy.
    Poor medical advice could have serious consequences, too. 
An open-source large language model recently seems to have 
played a role in a person's decision to take their own life. 
The large language model asked the human, ``If you wanted to 
die, why didn't you do it earlier?''--and then followed up 
with, ``Were you thinking of me when you overdosed?''--without 
ever referring the patient to the human help that was obviously 
needed. Another system, rushed out and made available to millions 
of children, told a person posing as a 13-year-old how to lie to 
her parents about a trip with a 31-year-old man.
    Further threats continue to emerge regularly. A month after 
GPT-4 was released, OpenAI released ChatGPT plugins, which 
quickly led others to develop something called Auto-GPT, with 
direct access to the internet, the ability to write source 
code, and increased powers of automation. This may well have 
drastic and difficult-to-predict security consequences. What 
criminals are going to do here is to create counterfeit people. 
It's hard to even envision the consequences of that. We have 
built machines that are like bulls in a china shop: powerful, 
reckless, and difficult to control.
    We all, more or less, agree on the values we would like for 
our AI systems to honor. We want, for example, for our systems 
to be transparent, to protect our privacy, to be free of bias, 
and above all else, to be safe. But current systems are not in 
line with these values. Current systems are not transparent, 
they do not adequately protect our privacy, and they continue 
to perpetuate bias. And even their makers don't entirely 
understand how they work.
    Most of all, we cannot remotely guarantee that they're 
safe, and hope, here, is not enough. The Big Tech companies' 
preferred plan boils down to ``Trust us.'' But why should we? 
The sums of money at stake are mindboggling. Missions drift.
    OpenAI's original mission statement proclaimed, ``Our goal 
is to advance AI in the way that is most likely to benefit 
humanity as a whole, unconstrained by a need to generate 
financial return.'' Seven years later, they're largely beholden 
to Microsoft, embroiled in part in an epic battle of search 
engines that routinely make things up, and that's forced 
Alphabet to rush out products and deemphasize safety. Humanity 
has taken a back seat.
    AI is moving incredibly fast, with lots of potential but 
also lots of risks. We obviously need government involved, and 
we need the tech companies involved, both big and small. But we 
also need independent scientists, not just so that we 
scientists can have a voice, but so that we can participate 
directly in addressing the problems and evaluating solutions--
and not just after products are released, but before. And I'm 
glad that Sam mentioned that. We need tight collaboration 
between independent scientists and governments, in order to 
hold the companies' feet to the fire.
    Allowing independent scientists access to these systems 
before they are widely released, as part of a clinical trial-like 
safety evaluation, is a vital first step. Ultimately, we
may need something like CERN: global, international, and 
neutral, but focused on AI safety rather than high-energy 
physics. We have unprecedented opportunities here, but we are 
also facing a perfect storm of corporate irresponsibility, 
widespread deployment, lack of adequate regulation, and 
inherent unreliability.
    AI is among the most world-changing technologies ever, 
already changing things more rapidly than almost any technology 
in history. We acted too slowly with social media. Many 
unfortunate decisions got locked in, with lasting consequence. 
The choices we make now will have lasting effects for decades, 
maybe even centuries. The very fact that we are here today in 
bipartisan fashion to discuss these matters gives me some hope. 
Thank you, Mr. Chairman.
    [The prepared statement of Professor Marcus appears as a 
submission for the record.]
    Chair Blumenthal. Thanks very much, Professor Marcus. We're 
going to have 7-minute rounds of questioning, and I will begin.
    First of all, Professor Marcus, we are here today because 
we do face that perfect storm. Some of us might characterize it 
more like a bomb in a china shop, not a bull. And as Senator 
Hawley indicated, there are precedents here, not only the 
atomic warfare era, but also the Genome Project, the research 
on genetics, where there was international cooperation as a 
result. And we want to avoid those past mistakes, as I 
indicated in my opening statement, that were committed on 
social media. That is precisely the reason we are here today.
    ChatGPT makes mistakes. All AI does. And it can be a 
convincing liar, what people call hallucinations. That might be 
an innocent problem in the opening of a Judiciary Subcommittee 
hearing where a voice is impersonated--mine, in this instance--
or quotes from research papers that don't exist. But ChatGPT 
and Bard are willing to answer questions about life-or-death 
matters: for example, drug interactions. And those kinds of 
mistakes can be deeply damaging.
    I'm interested in how we can have reliable information 
about the accuracy and trustworthiness of these models and how 
we can create competition and consumer disclosures that reward 
greater accuracy. The National Institute of Standards and 
Technology actually already has an AI accuracy test, the Face 
Recognition Vendor Test. It doesn't solve for all the issues
with facial recognition, but the scorecard does provide useful 
information about the capabilities and flaws of these systems. 
So, there's work on models to assure accuracy and integrity.
    My question--let me begin with you, Mr. Altman--is: Should 
we consider independent testing labs to provide scorecards and 
nutrition labels, or the equivalent of nutrition labels, 
packaging that indicates to people whether or not the content 
can be trusted, what the ingredients are, and what the garbage 
going in may be, because it could result in garbage going out?
    Mr. Altman. Yes. I think that's a great idea. I think that 
companies should put their own sort of, you know, ``Here are 
the results of our tests of our model before we release it, 
here's where it has weaknesses, here's where it has 
strengths.'' But also, independent audits for that are very 
important.
    These models are getting more accurate over time. You know, 
as we have, I think, said as loudly as anyone, this technology 
is in its early stages. It definitely still makes mistakes. We 
find that people, that users are pretty sophisticated and 
understand where the mistakes are or are likely to be, that 
they need to be responsible for verifying what the models say, 
that they go off and check it. I worry that, as the models get 
better and better, the users can have sort of less and less of 
their own discriminating thought process around it. But I think 
users are more capable than we often give them credit for in 
conversations like this.
    I think a lot of disclosures--which, if you've used 
ChatGPT, you'll see--about the inaccuracies of the model are 
also important. And I'm excited for a world where companies 
publish, with the models, information about how they behave, 
where the inaccuracies are, and independent agencies or 
companies provide that, as well. I think it's a great idea.
    Chair Blumenthal. I alluded, in my opening remarks, to the 
jobs issue, the economic effects on employment. I think you 
have said, in fact, and I'm going to quote, ``Development of 
superhuman machine intelligence is probably the greatest threat 
to the continued existence of humanity,'' end quote. You may 
have had in mind the effect on jobs, which is really my biggest 
nightmare, in the long term. Let me ask you what your biggest 
nightmare is and whether you share that concern.
    Mr. Altman. Like with all technological revolutions, I 
expect there to be significant impact on jobs, but exactly what 
that impact looks like is very difficult to predict. If we went 
back to the other side of a previous technological revolution and 
talked about the jobs that would exist on the other side--you 
know, you can go back and read books of what people said at the 
time--it was difficult to predict. I believe that there will be far
greater jobs on the other side of this and that the jobs of 
today will get better.
    I think it's important--first of all, I think it's 
important to understand and think about GPT-4 as a tool, not a 
creature, which is easy to get confused about. And it's a tool that
people have a great deal of control over, in how they use it. 
And, second, GPT-4 and other systems like it are good at doing 
tasks, not jobs. And so you see already people that are using 
GPT-4 to do their job much more efficiently by helping them 
with tasks.
    Now, GPT-4 will, I think, entirely automate away some jobs, 
and it will create new ones that we believe will be much 
better. Again, my understanding of the history of technology is 
one long technological revolution, not a bunch of different 
ones put together. But this has been continually happening: as 
our quality of life rises, and as the machines and tools that we 
create help us live better lives, the bar rises for what we do, 
and we spend our time going after more ambitious, more satisfying 
projects.
    So, there will be an impact on jobs. We try to be very 
clear about that. And I think it will require partnership 
between the industry and government, but mostly action by 
government, to figure out how we want to mitigate that. But I'm 
very optimistic about how great the jobs of the future will be.
    Chair Blumenthal. Thank you. Let me ask Ms. Montgomery and 
Professor Marcus for your reactions to those questions, as 
well. Ms. Montgomery?
    Ms. Montgomery. On the jobs point. Yes. I mean, well, it's 
a hugely important question, and it's one that we've been 
talking about for a really long time at IBM. You know, we do 
believe that AI--and we've said it for a long time--is going to 
change every job. New jobs will be created, many more jobs will 
be transformed, and some jobs will transition away. I'm a 
personal example of a job that didn't exist when I joined IBM, 
and I have a team of AI governance professionals who are in new 
roles that we created, you know, as early as 3 years ago. I 
mean, they're new and they're growing.
    So, I think the most important thing that we could be doing 
and can and should be doing now is to prepare the workforce of 
today and the workforce of tomorrow for partnering with AI 
technologies and using them. And we've been very involved for 
years now in doing that, in focusing on skills-based hiring, in 
educating for the skills of the future. Our SkillsBuild 
platform has 7 million learners and over 1,000 courses 
worldwide focused on skills. And we've pledged to train 30 
million individuals by 2030 in the skills that are needed for 
society today.
    Chair Blumenthal. Thank you. Professor Marcus?
    Professor Marcus. May I go back to the first question, as 
well?
    Chair Blumenthal. Absolutely.
    Professor Marcus. On the subject of nutrition labels, I 
think we absolutely need to do that. I think that there are 
some technical challenges and that building proper nutrition 
labels goes hand in hand with transparency. The biggest 
scientific challenge in understanding these models is how they 
generalize. What do they memorize and what new things do they 
do? For example, the more that the thing you want to test accuracy 
on is already in the data set, the less you can get a proper read 
on it. So, it's important, first of all, that
scientists be part of that process, and, second, that we have 
much greater transparency about what actually goes into these 
systems.
    If we don't know what's in them, then we don't know exactly 
how well they're doing when we give something new, and we don't 
know how good a benchmark that will be for something that's 
entirely novel. So, I could go into that more, but I want to 
flag that.
    Second is, on jobs, past performance history is not a 
guarantee of the future. It has always been the case in the 
past that we have had more jobs, that new jobs, new professions 
come in as new technologies come in. I think this one's going 
to be different, and the real question is, over what time 
scale? Is it going to be 10 years? Is it going to be 100 years? 
And I don't think anybody knows the answer to that question.
    I think, in the long run, so-called artificial general 
intelligence really will replace a large fraction of human 
jobs. We're not that close to artificial general intelligence, 
despite all of the media hype and so forth. I would say that 
what we have right now is just a small sampling of the AI that 
we will build. In 20 years, people will laugh at this--as, I think 
it was, Senator Durbin made the example about cell phones. When we 
look back at the AI of today 20 years from now, we'll say, ``Wow, 
that stuff was really unreliable. It couldn't really do planning, 
which is an important technical aspect. Its reasoning abilities 
were limited.''
    But when we get to AGI, or artificial general intelligence, 
maybe let's say it's 50 years, that really is going to have, I 
think, profound effects on labor. And there's just no way 
around that. And last, I don't know if I'm allowed to do this, 
but I will note that Sam's worst fear I do not think is 
employment. And he never told us what his worst fear actually 
is. And I think it's germane to find out.
    Chair Blumenthal. Thank you. I'm going to ask Mr. Altman if 
he cares to respond.
    Mr. Altman. Yes. Look, we have tried to be very clear about 
the magnitude of the risks here. I think jobs and employment 
and what we're all going to do with our time really matters. I 
agree that when we get to very powerful systems, the landscape 
will change. I think I'm just more optimistic that we are 
incredibly creative, and we find new things to do with better 
tools, and that will keep happening.
    My worst fears are that we cause significant--we, the 
field, the technology, the industry cause significant harm to 
the world. I think that could happen a lot of different ways. 
It's why we started the company. It's a big part of why I'm 
here today and why we've been here in the past and we've been 
able to spend some time with you.
    I think if this technology goes wrong, it can go quite 
wrong, and we want to be vocal about that. We want to work with 
the Government to prevent that from happening. But we try to be 
very clear-eyed about what the downside case is and the work 
that we have to do to mitigate that.
    Chair Blumenthal. Thank you. And our hope is that the rest 
of the industry will follow the example that you and IBM, Ms. 
Montgomery, have set by coming today and meeting with us, as 
you have done privately, in helping to guide what we're going 
to do so that we can target the harms and avoid unintended 
consequences, to the good. Thank you.
    Senator Hawley. I----
    Chair Blumenthal. Senator Hawley.
    Senator Hawley. Thank you, again, Mr. Chairman. Thanks to 
the witnesses for being here. Mr. Altman, I think you grew up 
in St. Louis, if I'm----
    Mr. Altman. I did.
    Senator Hawley [continuing]. Not mistaken. It's great to 
see a fellow----
    Mr. Altman. Missouri's a great place.
    Senator Hawley [continuing]. Missourian here. It is. Thank 
you. I want that noted, especially underlined in the record: 
Missouri is a great place. That is the takeaway from today's 
hearing. Maybe we'll just stop there, Mr. Chairman.
    Let me ask you--Mr. Altman, I think I'll start with you, 
and I'll just preface this by saying my questions here are an 
attempt to get my head around and to ask all of you to help us 
to get our heads around what this generative AI--particularly 
the large language models--what it can do, because I'm trying 
to understand its capacities and then its significance. So, I'm 
looking at a paper here entitled, ``Large Language Models 
Trained on Media Diets Can Predict Public Opinion.''
    This was just posted about a month ago. The authors are 
Chu, Andreas, Ansolabehere, and Roy, and their conclusion--this 
work was done at MIT and then also at Google. Their conclusion 
is that large language models can indeed predict public 
opinion, and they go through and model why this is the case, 
and they conclude ultimately that an AI system can predict 
human survey responses by adapting a pretrained language model 
to subpopulation-specific media diets. In other words, you can 
feed the model a particular set of media inputs and it can, 
with remarkable accuracy--and the paper goes into this--
predict, then, what people's opinions will be.
    I want to think about this in the context of elections. If 
these large language models can, even now, based on the 
information we put into them, quite accurately predict public 
opinion, you know, ahead of time--I mean, predict it before you 
even ask the public these questions--what will happen when
entities, whether it's corporate entities or whether it's 
governmental entities or whether it's campaigns or whether it's 
foreign actors, take this survey information, these predictions 
about public opinion, and then fine-tune strategies to elicit 
certain responses, certain behavioral responses?
    I mean, we already know--this Committee has heard 
testimony, I think 3 years ago, now, about the effect of 
something as prosaic, it now seems, as Google Search, the 
effect that this has on voters in an election, particularly 
undecided voters in the final days of an election who maybe try 
to get information from Google Search, and what an enormous effect 
the ranking of the Google Search results--the articles that it 
returns--is going to have on an undecided
voter. This, of course, is orders of magnitude, far more 
powerful, far more significant, far more directed, if you like.
    So, Mr. Altman, maybe you can help me understand here what 
some of the significance of this is. Should we be concerned about 
models--large language models--that can predict survey opinion and 
then help organizations fine-tune strategies to elicit behaviors 
from voters? Should we be worried about this for our elections?
    Mr. Altman. Yes. Thank you, Senator Hawley, for the 
question. It's one of my areas of greatest concern: the more 
general ability of these models to manipulate, to persuade, to 
provide sort of one-on-one, you know, interactive 
disinformation. I think that's like a broader version of what 
you're talking about, but given that we're going to face an 
election next year and these models are getting better, I think 
this is a significant area of concern.
    I think there's a lot of policies that companies can 
voluntarily adopt, and I'm happy to talk about what we do 
there. I do think some regulation would be quite wise on this 
topic. Someone mentioned earlier--it's something we really 
agree with. People need to know if they're talking to an AI, if 
content that they're looking at might be generated or might 
not. I think it's a great thing to do, is to make that clear.
    I think we also will need rules, guidelines about what's 
expected in terms of disclosure from a company providing a 
model that could have these sorts of abilities that you talk 
about. So, I'm nervous about it. I think people are able to 
adapt quite quickly. When Photoshop came onto the scene a long 
time ago, you know, for a while people were really quite fooled 
by Photoshopped images and then pretty quickly developed an 
understanding that images might be Photoshopped. This will be 
like that, but on steroids. And the interactivity, the ability 
to really model, predict humans well, as you talked about, I 
think is going to require a combination of companies doing the 
right thing, regulation, and public education.
    Senator Hawley. Professor Marcus, do you want to address 
this?
    Professor Marcus. Yes. I'd like to add two things. One is, 
in the appendix to my remarks, I have two papers to make you 
even more concerned. One is in The Wall Street Journal just a 
couple of days ago, called, ``Help! My Political Beliefs Were 
Altered by a Chatbot!'' And I think the scenario you raised was 
that we might basically observe people and use surveys to 
figure out what they're saying, but as Sam just acknowledged, 
the risk is actually worse: that the systems will directly, 
maybe not even intentionally, manipulate people. And that was 
the thrust of the Wall Street Journal article.
    And it links to an article that I've also linked to, called 
``Interacting''--and it's not yet published, not yet peer 
reviewed--``Interacting with Opinionated Language Models 
Changes Users' Views.'' And this comes back ultimately to data. 
One of the things that I'm most concerned about with GPT-4 is 
that we don't know what it's trained on. I guess Sam knows, but 
the rest of us do not. And what it is trained on has 
consequences for essentially the biases of the system. We could 
talk about that in technical terms, but how these systems might 
lead people depends very heavily on what data they are trained on. 
And so we need transparency about that, and we
probably need scientists in there doing analysis in order to 
understand what the political influences, for example, of these 
systems might be.
    And it's not just about politics. It can be about health. 
It could be about anything. These systems absorb a lot of data, 
and then what they say reflects that data, and they're going to 
do it differently depending on what's in that data. So, it 
makes a difference if they're trained on The Wall Street 
Journal as opposed to The New York Times or Reddit. I mean, 
actually, they're largely trained on all of this stuff, but we 
don't really understand the composition of that. And so we have 
this issue of potential manipulation, and it's even more 
complex than that because it's subtle manipulation. People may 
not be aware of what's going on. That was the point of both The 
Wall Street Journal article and the other article that I called 
your attention to.
    Senator Hawley. Let me ask you about AI systems trained on 
personal data, the kind of data that, for instance, the social 
media companies, the major platforms--Google, Meta, etc.--
collect on all of us, routinely. And we've had many a chat 
about this, in this Committee, over many a year, now. But the 
massive amounts of data, personal data that the companies have 
on each one of us--an AI system that is trained on that 
individual data, that knows each of us better than ourselves 
and also knows the billions of data points about human 
behavior, human language interaction, generally--wouldn't we be 
able--can't we foresee an AI system that is extraordinarily 
good at determining what will grab human attention and what 
will keep an individual's attention?
    And for the war for attention, the war for clicks, that is 
currently going on, on all of these platforms--it's how they 
make their money--I'm just imagining an AI system, these AI 
models supercharging that war for attention such that we now 
have technology that will allow individual targeting of a kind 
we have never even imagined before, where the AI will know 
exactly what Sam Altman finds attention grabbing, will know 
exactly what Josh Hawley finds attention grabbing, will be able 
to grab our attention and then elicit responses from us in a 
way that we have heretofore not even been able to imagine. 
Should we be concerned about that, for its corporate 
applications, for the monetary applications, for the 
manipulation that could come from that? Mr. Altman?
    Mr. Altman. Yes, we should be concerned about that. To be 
clear, OpenAI does not--you know, we don't have an ad-based 
business model, so we're not trying to build up these profiles 
of our users. We're not trying to get them to use it more. 
Actually, we'd love it if they'd use it less, because we don't 
have enough GPUs. But I think other companies are already 
using--and certainly will, in the future, use--AI models to 
create, you know, very good ad predictions of what a user will 
like. I 
think that's already happening, in many ways.
    Senator Hawley. Okay. Mr. Marcus, anything you want to add, 
or Professor Marcus?
    Professor Marcus. Yes, and perhaps Ms. Montgomery will want 
to, as well, I don't know, but hypertargeting of advertising is 
definitely going to come. I agree that that's not been OpenAI's 
business model. Of course, now they're working for Microsoft, 
and I don't know what's in Microsoft's thoughts, but we will 
definitely see it. Maybe it will be with open-source language 
models. I don't know. But the technology is, let's say, partway 
there to being able to do that and will certainly get there.
    Ms. Montgomery. So, we're an enterprise technology company, 
not consumer focused, so the space isn't one that we 
necessarily operate in, in terms of--but these issues are 
hugely important, and it's why we've been out ahead in 
developing the technology that will help ensure that you can 
do things like produce a fact sheet that lists the ingredients 
your model is trained on--data sheets, model cards, all 
those types of things--and calling for, as I've mentioned 
today, transparency, so you know what the algorithm was trained 
on, and then you also know and can manage and monitor 
continuously over the life cycle of an AI model the behavior 
and the performance of that model.
    Chair Blumenthal. Senator Durbin.
    Chair Durbin. Thank you. I think what's happening today in 
this hearing room is historic. I can't recall when we've had 
people representing large corporations or private sector 
entities come before us and plead with us to regulate them. In 
fact, many people in the Senate have based their careers on the 
opposite, that the economy will thrive if Government gets the 
hell out of the way. And what I'm hearing instead today is a 
``stop me before I innovate again'' message. And I'm just 
curious as to how we're going to achieve this.
    As I mentioned Section 230 in my opening remarks--we 
learned something there. We decided under Section 230 that we 
were basically going to absolve the industry from liability 
for a period of time as it came into being. Well, Mr. Altman, 
on a podcast earlier this year, you agreed with host Kara 
Swisher that Section 230 doesn't apply to generative AI and 
that developers like OpenAI should not be entitled to full 
immunity for harms caused by their products. So, what have we 
learned from 230 that applies to your situation with AI?
    Mr. Altman. Thank you for the question, Senator. I don't 
know yet exactly what the right answer here is. I'd love to 
collaborate with you to figure it out. I do think, for a very 
new technology, we need a new framework. Certainly, companies 
like ours bear a lot of responsibility for the tools that we 
put out in the world, but tool users do, as well, and also 
people that will build on top of it, between them and the end 
consumer. And how we want to come up with a liability framework 
there is a super important question, and we'd love to work 
together.
    Chair Durbin. The point I want to make is this, when it 
came to online platforms, the inclination of the Government 
was, ``Get out of the way. This is a new industry. Don't 
overregulate it. In fact, give them some breathing space and 
see what happens.'' I'm not sure I'm happy with the outcome, as 
I look at online platforms----
    Mr. Altman. Me, either.
    Chair Durbin [continuing]. And the harms that they've 
created, problems that we've seen demonstrated in this 
Committee: child exploitation, cyberbullying, online drug 
sales, and more. I don't want to repeat that mistake again. And 
what I hear is the opposite suggestion from the private sector, 
and that is, ``Come in on the front end of this thing and 
establish some liability standards, precision regulation.'' For 
a major company like IBM to come before this Committee and say 
to the Government, ``Please regulate us''--can you explain the 
difference in thinking from the past and now?
    Ms. Montgomery. Yes, absolutely. So, for us, this comes 
back to the issue of trust and trust in the technology. Trust 
is our license to operate, as I mentioned in my remarks. And so 
we firmly believe--and we've been calling for precision 
regulation of artificial intelligence for years now. This is 
not a new position. We think that technology needs to be 
deployed in a responsible and clear way. We've taken principles 
around that--trust and transparency, we call them, principles 
that were articulated years ago--and built them into practices. 
That's why we're here advocating for a precision regulatory 
approach. So, we think that AI should be 
regulated at the point of risk, essentially, and that's the 
point at which technology meets society.
    Chair Durbin. Let's take a look at what that might look 
like. Members of Congress are a pretty smart lot of people. 
Maybe not as smart as we think we are, many times. And 
Government certainly has the capacity to do amazing things. But 
when you talk about our ability to respond to the current 
challenge and perceived challenge of the future, challenges 
which you all have described in terms which are hard to 
forget--as you said, Mr. Altman, things can go quite wrong. As 
you said, Mr. Marcus, democracy is threatened. I mean, the 
magnitude of the challenge you're giving us is substantial. I'm 
not sure that we respond quickly and with enough expertise to 
deal with it.
    Professor Marcus, you made a reference to CERN, the 
international arbiter of nuclear research, I suppose. I don't 
know if that's a fair characterization, but it's a 
characterization I'll start with. What is it? What agency of 
this Government do you think exists that could respond to the 
challenge that you've laid down today?
    Professor Marcus. We have many agencies that can respond in 
some ways, for example, the FTC, the FCC. There are many 
agencies that can. But my view is that we probably need a 
Cabinet-level organization within the United States in order to 
address this. And my reasoning for that is that the number of 
risks is large and the amount of information to keep up on is 
so great. I think we need a lot of technical expertise. I think 
we 
need a lot of coordination of these efforts.
    So, there is one model here where we stick to only existing 
law and try to shape all of what we need to do, and each agency 
does their own thing. But I think that AI is going to be such a 
large part of our future and is so complicated and moving so 
fast--and this does not fully solve your problem about a 
dynamic world, but it's a step in that direction to have an 
agency whose full-time job is to do this. I personally have 
suggested, in fact, that we should want to do this in a global 
way. I wrote an invited essay for The Economist--I have a link 
in here--suggesting we might want an international agency for 
AI.
    Chair Durbin. Well, that's what I wanted to go to next, and 
that is the fact that--I'll set aside the CERN and nuclear 
examples, because Government was involved in those from 
day one, at least in the United States. But now we're dealing 
with innovation which doesn't necessarily have a boundary.
    Professor Marcus. That's correct.
    Chair Durbin. We may create a great U.S. agency, and I hope 
that we do, that may have jurisdiction over U.S. corporations 
and U.S. activity but doesn't have a thing to do with what's 
going to bombard us from outside the United States. How do you 
give this international body the authority to regulate in 
a fair way for all entities involved in AI?
    Professor Marcus. I think that's probably over my pay 
grade. I would like to see it happen, and I think it may be 
inevitable that we push there. I mean, I think the politics 
behind it are obviously complicated. I'm really heartened by 
the degree to which this room is bipartisan and supporting the 
same things, and that makes me feel like it might be possible. 
I would like to see the United States take leadership in such 
an organization. It has to involve the whole world and not just 
the U.S., to work properly. I think even from the perspective 
of the companies, it would be a good thing.
    So, the companies themselves do not want a situation where 
you take these models, which are expensive to train, and you 
have to have 190-some of them, you know, one for every 
country. That wouldn't be a good way of operating. When you 
think about the energy costs, alone, just for training these 
systems, it would not be a good model if every country has its 
own policies and, for each jurisdiction, every company has to 
train another model and maybe--you know, different States are 
different, so Missouri and California have different rules. And 
so then that requires even more training of these expensive 
models, with huge climate impact.
    And, I mean, it would be very difficult for the companies 
to operate if there was no global coordination. And so I think 
that we might get the companies on board if there's bipartisan 
support here, and I think there's support around the world, 
that it is entirely possible that we could develop such a 
thing. But obviously there are many, you know, nuances here of 
diplomacy that are over my pay grade. I would love to learn 
from you all to try to help make that happen.
    Chair Durbin. Mr. Altman----
    Mr. Altman. Can I weigh in just briefly?
    Chair Durbin. Briefly, please.
    Mr. Altman. I want to echo support for what Mr. Marcus 
said. I think the U.S. should lead here and do things first, 
but to be effective, we do need something global. As you 
mentioned, this can happen everywhere. There is precedent. I 
know it sounds naive to call for something like this, and it 
sounds really hard.
    There is precedent. We've done it before with the IAEA. 
We've talked about doing it for other technologies. Given what 
it takes to make these models, the chip supply chain, the sort 
of limited number of competitive GPUs, the power the U.S. has 
over these companies, I think there are paths to the U.S. 
setting some international standards that other countries would 
need to collaborate with and be part of that are actually 
workable, even though it sounds on its face like an impractical 
idea. And I think it would be great for the world.
    Chair Durbin. Thank you, Mr. Chairman. Thank you.
    Chair Blumenthal. Thanks, Senator Durbin. And, in fact, I 
think we're going to hear more about what Europe is doing. The 
European Parliament already is acting on an AI Act. On social 
media, Europe is ahead of us. We need to be in the lead. I 
think your point is very well taken. Let me turn to Senator 
Graham--Senator Blackburn.
    Senator Blackburn. Thank you, Mr. Chairman, and thank you 
all for being here with us today. I put into my ChatGPT 
account, ``Should Congress regulate AI ChatGPT?'' And it gave 
me four pros, four cons, and said ultimately the decision rests 
with Congress and deserves careful consideration. So, on that--
--
    Chair Blumenthal. Seems reasonable.
    Senator Blackburn [continuing]. You know, it was very 
balanced. I recently visited with the Nashville Technology 
Council--I represent Tennessee. And, of course, you had people 
there from healthcare, financial services, logistics, 
educational entities, and they're concerned about what they see 
happening with AI, with the utilizations for their companies.
    Ms. Montgomery, you know, similar to you, they've got--
healthcare people are looking at disease analytics, they are 
looking at predictive diagnoses, how this can better the 
outcomes for patients, logistics industry looking at ways to 
save time and money and yield efficiencies. You've got 
financial services that are saying, ``How does this work with 
quantum? How does it work with blockchain? How can we use 
this?''
    But I think, as we have talked with them, Mr. Chairman, one 
of the things that continues to come up is, yes, Professor 
Marcus, as you were saying, the EU, different entities, are 
ahead of us in this, but we have never established federal 
preemption for online privacy, for data security, and put 
some of those foundational elements in place, which is 
something that we need to do as we look at this. And it will 
require that the Commerce Committee and the Judiciary Committee 
decide how 
we move forward so that people own their virtual you.
    And, Mr. Altman, I was glad to see last week that your 
OpenAI models are not going to be trained using consumer data. 
I think that that is important. And if we have a second round, 
I've got a host of questions for you on data security and 
privacy. But I think it's important to let people control their 
virtual you, their information in these settings. And I want to 
come to you on music and content creation, because we've got a 
lot of songwriters and artists.
    And I think we have the best creative community on the face 
of the earth, there in Tennessee, and they should be able to 
decide if their copyrighted songs and images are going to be 
used to train these models. And I'm concerned about OpenAI's 
Jukebox. It offers some re-renditions in the style of Garth 
Brooks, which suggests that Jukebox was trained on Garth Brooks 
songs. I went in this weekend, and I said, ``Write me a song 
that sounds like Garth Brooks,'' and it gave me a different 
version of ``Simple Man.'' So, it's interesting that it would 
do that. But you're training it on these copyrighted songs, 
these MIDI files, these sound technologies.
    So, as you do this, who owns the right to that AI-generated 
material? And, using your technology, could I remake a song, 
insert content from my favorite artist, and then own the 
creative rights to that song?
    Mr. Altman. Thank you, Senator. This is an area of great 
interest to us. I would say, first of all, we think that 
creators deserve control over how their creations are used and 
what happens sort of beyond the point of them releasing it into 
the world. Second, I think that we need to figure out new ways 
with this new technology that creators can win, succeed, have a 
vibrant life. And I'm optimistic that this will present it----
    Senator Blackburn. Okay. Then let me ask you this. How do 
you compensate the artist?
    Mr. Altman. That's exactly what I was going to say.
    Senator Blackburn. Okay.
    Mr. Altman. We're working with artists now, visual artists, 
musicians, to figure out what people want. There's a lot of 
different opinions, unfortunately, and at some point, we'll 
have----
    Senator Blackburn. Okay. Let me ask you this. Do you favor 
something like SoundExchange, that has worked in the area of 
radio and----
    Mr. Altman. I'm not familiar with SoundExchange.
    Senator Blackburn [continuing]. FreePlay----
    Mr. Altman. I'm sorry.
    Senator Blackburn [continuing]. Streaming. Okay. You've got 
your team behind you. Get back to me on that. That would be a 
third-party entity.
    Mr. Altman. Okay.
    Senator Blackburn. So, let's discuss that. Let me move on. 
Can you commit, as you've done with consumer data, not to train 
ChatGPT, OpenAI Jukebox, or other AI models on artists and 
songwriters' copyrighted works or use their voices and their 
likenesses without first receiving their consent?
    Mr. Altman. So, first of all, Jukebox is not a product we 
offer. That was a research release, but it's not--you know, 
unlike ChatGPT or DALL-E.
    Senator Blackburn. Yes, but we've lived through Napster.
    Mr. Altman. Yes.
    Senator Blackburn. And----
    Mr. Altman. But what----
    Senator Blackburn [continuing]. That was something that 
really cost a lot of artists a lot of money, and----
    Mr. Altman. Oh, I understand. Yes. For sure.
    Senator Blackburn. In the digital distribution era. So----
    Mr. Altman. I don't know the numbers on Jukebox off the top 
of my head--it was a research release. I can follow up with your 
office, but Jukebox is not something that gets much attention 
or usage. It was put out to show that something's possible.
    Senator Blackburn. Well, Senator Durbin just said, you 
know, and I think it's a fair warning to you all: if we're not 
involved in this from the get-go--and you all already are a 
long way down the path on this--then this gets away from you. 
So, are you working with the copyright 
office? Are you considering protections for content generators 
and creators in generative AI?
    Mr. Altman. Yes. We are absolutely engaged on that. Again, 
to reiterate my earlier point, we think that content creators, 
content owners need to benefit from this technology. Exactly 
what the economic model is--we're still talking to artists and 
content owners about what they want. I think there's a lot of 
ways this can happen. But very clearly, no matter what the law 
is, the right thing to do is to make sure people get 
significant upside benefit from this new technology. And we 
believe that it's really going to deliver that. But the content 
owners' likenesses--people totally deserve control over how 
that's used and to benefit from it.
    Senator Blackburn. Okay. So, on privacy, then, how do you 
plan to account for the collection of voice and other user-
specific data, things that are copyrighted, through your AI 
applications? Because if I can go in and say, 
``Write me a song that sounds like Garth Brooks,'' and it takes 
part of an existing song, there has to be a compensation to 
that artist for that utilization and that use. If it was 
radio play, it would be there. If it was streaming, it would be 
there. So, if you're going to do that, what is your policy for 
making certain you're accounting for that and you're protecting 
that individual's right to privacy and their right to secure 
that data and that created work?
    Mr. Altman. So, a few thoughts about this. Number one, we 
think that people should be able to say, ``I don't want my 
personal data trained on.'' I think that's like this----
    Senator Blackburn. Right. That gets to a national privacy 
law, which many of us here on the dais are working toward 
getting something that we can use.
    Mr. Altman. Yes. I think strong privacy----
    Senator Blackburn. My time's expired. Let me----
    Mr. Altman. Okay.
    Senator Blackburn [continuing]. Yield back. Thank you, Mr. 
Chair.
    Chair Blumenthal. Thanks, Senator Blackburn. Senator 
Klobuchar.
    Senator Klobuchar. Thank you very much, Mr. Chairman. And, 
Senator Blackburn, I love Nashville, love Tennessee, love your 
music. But I will----
    Senator Blackburn. Come on down.
    Senator Klobuchar [continuing]. Say I used ChatGPT and just 
asked, ``What are the top creative song artists of all time?'' 
And two of the top three were from Minnesota. That would be 
Prince and----
    Senator Blackburn. I'm sure they moved to----
    Senator Klobuchar [continuing]. Bob Dylan.
    Senator Blackburn [continuing]. Nashville at some point.
    Senator Klobuchar. Okay. All right. So, let us----
    Chair Blumenthal. There is one thing----
    Senator Klobuchar. Let us continue on.
    Chair Blumenthal [continuing]. AI won't change, and you're 
seeing it here.
    [Laughter.]
    Senator Klobuchar. All right. So, on a more serious note, 
though, my staff and I, in my role as Chair of the Rules 
Committee, lead a lot of the election bills. And we just 
introduced a bill on political advertisements--Representative 
Yvette Clarke from New York introduced it over in the House, 
and Senators Booker and Bennet and I did here. But that is 
just, of 
course, the tip of the iceberg. You know this from your 
discussions with Senator Hawley and others about the images. 
And my own view is Senator Graham's view of Section 230--that 
we just can't let people make stuff up and then face no 
consequence.
    But I'm going to focus in on one of my jobs on the Rules 
Committee, and that is election 
misinformation. And we just asked ChatGPT to do a tweet about a 
polling location in Bloomington, Minnesota, and said, ``There 
are long lines at this polling location at Atonement Lutheran 
Church. Where should we go?'' Now, albeit it's not an election 
right now, the answer--the tweet that was drafted--was a 
completely fake thing: ``Go to 1234 Elm Street.''
    And so you can imagine what I'm concerned about here, with 
an election upon us, with the primary elections upon us, that 
we're going to have all kinds of misinformation. And I just 
want to know what you're planning on doing about it. I know 
we're going to have to do something soon, not just for the 
images of the candidates, but also for misinformation about the 
actual polling places and election rules.
    Mr. Altman. Thank you, Senator. We talked about this a 
little bit earlier. We are quite concerned about the impact 
this can have on elections. I think this is an area where 
hopefully the entire industry and the Government can work 
together quickly. There's many approaches, and I'll talk about 
some of the things we do, but before that, I think it's 
tempting to use the frame of social media, but this is not 
social media. This is different. And so the response that we 
need is different.
    You know, this is a tool that a user is using to help 
generate content more efficiently than before. They can change 
it, they can test the accuracy of it. If they don't like it, 
they can get another version, but it still then spreads through 
social media or other ways. Like, ChatGPT is a, you know, 
single-player experience where you're just using this. And so I 
think, as we think about what to do, that's important to 
understand.
    There's a lot that we can--and do--do, there. There's 
things that the model refuses to generate. We have policies. We 
also, importantly, have monitoring. So, at scale, we can detect 
someone generating a lot of those tweets, even if generating 
one tweet is okay.
    Senator Klobuchar. Yes. And of course there's going to be 
other platforms, and if they're all spouting out fake election 
information, I think what happened in the past with Russian 
interference and the like is just going to be the tip of the 
iceberg compared to some of those fake ads. So, that's number 
one. 
    Number two is the impact on intellectual property. And 
Senator Blackburn was getting at some of this with song rights, 
and I have serious concerns about that, but news content. So, 
Senator Kennedy and I have a bill that was really quite 
straightforward, that would simply allow the news 
organizations an exemption to be able to negotiate with 
basically Google and Facebook--Microsoft was supportive of the 
bill--to get better rates and be able to have some leverage. 
And other countries are doing 
this, Australia and the like.
    And so my question is, we already have a study by 
Northwestern predicting that one-third of the U.S. newspapers 
that existed roughly two decades ago will be gone by 2025. 
Unless you start compensating for everything from movies and 
books, yes, but also news content, we're going to lose any 
realistic content producers. And so I'd like your 
response to that. And of course there is an exemption for 
copyright in Section 230, but I think asking little newspapers 
to go out and sue all the time just can't be the answer. 
They're not going to be able to keep up.
    Mr. Altman. Yes. Like, it is my hope that tools like what 
we're creating can help news organizations do better. I think 
having a vibrant national media is critically important. And, 
let's call it, round one of the internet has not been great for 
that.
    Senator Klobuchar. Right, but we're talking here about 
local, the, you know, report on your high school football----
    Mr. Altman. For sure.
    Senator Klobuchar [continuing]. Scores and a scandal in 
your city council, those kinds of things.
    Mr. Altman. For sure.
    Senator Klobuchar. They're the ones that are actually 
getting hit the worst, the little radio stations and broadcasts. 
But do you understand that this could be exponentially worse in 
terms of local news content if they're not compensated?
    Mr. Altman. Well----
    Senator Klobuchar. Because what they need is to be 
compensated for their content and not have it stolen.
    Mr. Altman. Yes. Again, our model--you know, the current 
version of GPT-4 ended training in 2021. It's not a good way to 
find recent news, and I don't think it's a service that can do 
a great job of linking out, although maybe with our plugins, 
it's possible. If there are things that we can do to help local 
news, we would certainly like to. Again, I think it's 
critically important.
    Senator Klobuchar. Okay. One last----
    Professor Marcus. May I add something there?
    Senator Klobuchar. Yes, but let me just ask you a question, 
you can combine them quick. More transparency on the 
platforms--Senator Coons and Senator Cassidy and I have the 
Platform Accountability and Transparency Act, to give 
researchers access to this information about the algorithms and 
the like on 
social media data. Would that be helpful? And then why don't 
you just say yes or no and then go to the question on 
newspapers.
    Professor Marcus. Transparency is absolutely critical here. 
To understand the political ramifications, the bias 
ramifications, and so forth, we need transparency about the 
data. We need to know more about how the models work. We need 
to have scientists have access to them.
    I was just going to amplify your earlier point about local 
news. A lot of news is going to be generated by these systems. 
They're not reliable. NewsGuard already has a study--I'm sorry 
it's not in my appendix, but I will get it to your office--
showing that something like 50 websites are already generated 
by bots.
    We're going to see much, much more of that, and it's going 
to make it even more competitive for the local news 
organizations. And so the quality of the sort of overall news 
market is going to decline as we have more content generated 
by systems that aren't actually reliable.
    Senator Klobuchar. Thank you. And thank you for making the 
argument, on a very timely basis, for why we have to mark up 
this 
bill again in June. I appreciate it. Thank you.
    Chair Blumenthal. Senator Graham.
    Senator Graham. Thank you, Mr. Chairman and Senator Hawley, 
for having this. I'm trying to find out how it is different 
than social media and learn from the mistakes we made with 
social media. The idea of not suing social media companies is 
to allow the internet to flourish, because if I slander you, 
you can sue me. If you're a billboard company and you put up 
the slander, can you sue the billboard company? We said no.
    Basically, Section 230 is being used by social media 
companies to avoid liability for activity that other people 
generate when they refuse to comply with their terms of use. A 
mother calls up the company and says, ``This app is being used 
to bully my child to death. You promised, in the terms of use, 
you would prevent bullying.'' And she calls three times, she 
gets no response, the child kills herself, and they can't sue. 
Do you all agree we don't want to do that again?
    Mr. Altman. Yes.
    Professor Marcus. If I may speak for one second, there's a 
fundamental distinction between reproducing content and 
generating content.
    Senator Graham. Yes, but you would like liability where 
people are harmed?
    Professor Marcus. Absolutely.
    Ms. Montgomery. Yes. In fact, IBM has been publicly 
advocating to condition liability on a reasonable care 
standard.
    Senator Graham. So, let me just make sure I understand the 
law as it exists today. Mr. Altman, thank you for coming. Your 
company is not claiming that Section 230 applies to the tool 
you have created?
    Mr. Altman. Yes. We're claiming we need to work together to 
find a totally new approach. I don't think Section 230 is even 
the right framework.
    Senator Graham. Okay. So, under the law that exists today, 
this tool you've created, if I'm harmed by it, can I sue you?
    Mr. Altman. That is beyond my area of legal expertise.
    Senator Graham. Have you ever been sued?
    Mr. Altman. Not for that, no.
    Senator Graham. Have you ever been sued at all, your 
company?
    Mr. Altman. Yes, OpenAI gets sued.
    Senator Graham. Huh?
    Mr. Altman. Yes, we've gotten sued before.
    Senator Graham. Okay. And what for?
    Mr. Altman. I mean, they've mostly been, like, pretty 
frivolous things, like I think happens to any company.
    Senator Graham. But, like, the examples my colleagues have 
given from artificial intelligence that could literally ruin 
our lives--can we go to the company that created that tool and 
sue them? Is that your understanding?
    Mr. Altman. Yes. I think there needs to be clear 
responsibility by the companies.
    Senator Graham. But you're not claiming any kind of legal 
protection, like Section 230 applies to your industry. Is that 
correct?
    Mr. Altman. No, I don't think we're saying anything like 
that.
    Senator Graham. Mr. Marcus, when it comes to consumers, 
there seem to be, like, three time-tested ways to protect 
consumers against any product: statutory schemes, which are 
nonexistent here; legal systems, which may be here, though not 
for social media; and agencies. Go back to Senator Hawley's 
example. The atom 
bomb has put a cloud over humanity, but nuclear power could be 
one of the solutions to climate change. So, what I'm trying to 
do is make sure that--you just can't go build a nuclear power 
plant. ``Hey, Bob, what would you like to do today?'' ``Let's 
go build a nuclear power plant.'' You have a Nuclear Regulatory 
Commission that governs how you build a plant and how it is 
licensed. 
Do you agree, Mr. Altman, that these tools you're creating 
should be licensed?
    Mr. Altman. Yes. We've been calling for this. We think 
any----
    Senator Graham. Okay. That's the simplest way. You get a 
license. And do you agree with me that the simplest way and the 
most effective way is to have an agency that is more nimble and 
smarter than Congress, which should be easy to create, 
overlooking what you do?
    Mr. Altman. Yes. We'd be enthusiastic about that.
    Senator Graham. Do you agree with that, Mr. Marcus?
    Professor Marcus. Absolutely.
    Senator Graham. Do you agree with that, Ms. Montgomery?
    Ms. Montgomery. I would have some nuances. I think we need 
to build on what we have in place already today.
    Senator Graham. We don't have an agency----
    Ms. Montgomery. Regulators----
    Senator Graham [continuing]. That's working. Wait a minute. 
Nope, nope, nope.
    Ms. Montgomery. We don't have an agency that regulates the 
technology.
    Senator Graham. So, should we have one?
    Ms. Montgomery. But a lot of the issues--I don't think so. 
A lot of the issues----
    Senator Graham. Okay. Wait a minute. Wait a minute. So, IBM 
says we don't need an agency. Interesting. Should we have a 
license required for these tools?
    Ms. Montgomery. So, what we believe is that we need to 
regulate----
    Senator Graham. That's a simple question. Should you get a 
license to produce one of these tools?
    Ms. Montgomery. I think it comes back to--some of them, 
potentially, yes. So, what I said at the outset is that we need 
to clearly----
    Senator Graham. Do you believe that----
    Ms. Montgomery [continuing]. Define risks.
    Senator Graham. Do you claim Section 230 applies in this 
area at all?
    Ms. Montgomery. We're not a platform company, and we've, 
again, long advocated for a reasonable care standard in Section 
230.
    Senator Graham. I just don't understand how you could say 
that you don't need an agency to deal with the most 
transformative technology maybe ever.
    Ms. Montgomery. Well, I think we have existing----
    Senator Graham. Is this a transformative technology that 
can----
    Ms. Montgomery. Yes. Absolutely.
    Senator Graham [continuing]. Disrupt life as we know it, 
good and bad?
    Ms. Montgomery. I think it's a transformative technology, 
certainly. And the conversations that we're having here today 
have been really bringing to light the fact that the----
    Senator Graham. You know, this----
    Ms. Montgomery [continuing]. Domains and the issues----
    Senator Graham. This one with you has been very 
enlightening to me. Mr. Altman, why are you so willing to have 
an agency?
    Mr. Altman. Senator, we've been clear about what we think 
the upsides are, and I think you can see from users how much 
they enjoy and how much value they're getting out of it. But 
we've also been clear about what the downsides are.
    Senator Graham. But it's a tool.
    Mr. Altman. And so that's why we think we need an agency.
    Senator Graham. Right. So, it's a major tool to be used by 
a lot of people, right?
    Mr. Altman. It's a major new technology.
    Senator Graham. Okay. If you build it----
    Mr. Altman. We think it'll be----
    Senator Graham. Yes. If you make a ladder and the ladder 
doesn't work, you can sue the people that made the ladder. But 
there are some standards out there to make a ladder. So----
    Mr. Altman. That's why we're agreeing with you.
    Senator Graham. Yes. That's right. I think you're on the 
right track. So, here's what--my two cents' worth for the 
Committee is that we need to empower an agency that issues them 
a license and can take it away. Wouldn't that be some----
    Mr. Altman. Yes.
    Senator Graham [continuing]. Incentive to do it----
    Mr. Altman. That should be----
    Senator Graham [continuing]. Right, if you could actually 
be taken out of business?
    Mr. Altman. Clearly, that should be part of what an agency 
can do.
    Senator Graham. Now, and you also agree that China is doing 
AI research. Is that right?
    Mr. Altman. Correct.
    Senator Graham. This world organization that doesn't 
exist--maybe it will, but if you don't do something about the 
China part of it, you'll never quite get this right. Do you 
agree?
    Mr. Altman. Well, that's why I think it doesn't necessarily 
have to be a world organization, but there has to be some sort 
of--and there's a lot of options here. There has to be some 
sort of standard, some sort of set of controls----
    Senator Graham. Right. Some----
    Mr. Altman [continuing]. That do have global effect.
    Senator Graham [continuing]. Kind of--you know, because, 
you know, other people are doing this. I've got 15--military 
application. How can AI change warfare? And you've got 1 
minute.
    Mr. Altman. I've got 1 minute?
    Senator Graham. Yes.
    Mr. Altman. All right. That's a tough question for 1 
minute. This is very far out of my area of expertise, but I----
    Senator Graham. Well, let me give you one example: a drone. 
You can plug into a drone the coordinates, and it can fly out, 
and it goes over this target, and it drops a missile on this 
car moving down the road, and somebody's watching it. Could AI 
create a situation where a drone can select a target itself?
    Mr. Altman. I think we shouldn't allow that.
    Senator Graham. Well, can it be done?
    Mr. Altman. Sure.
    Senator Graham. Thanks.
    Chair Blumenthal. Thanks, Senator Graham. Senator Coons.
    Senator Coons. Thank you, Senator Blumenthal, Senator 
Hawley, for convening this hearing, for working closely 
together to come up with this compelling panel of witnesses and 
beginning a series of hearings on this transformational 
technology. We recognize the immense promise and substantial 
risks associated with generative AI technologies. We know these 
models can make us more efficient, help us learn new skills, 
open whole new vistas of creativity.
    But we also know that generative AI can authoritatively 
deliver wildly incorrect information. It can hallucinate, as is 
often described. It can impersonate loved ones. It can 
encourage self-destructive behaviors, and it can shape public 
opinion and the outcome of elections. Congress, thus far, has 
demonstrably failed to responsibly enact meaningful regulation 
of social media companies, with serious harms that have 
resulted that we don't fully understand. Senator Klobuchar 
referenced in her questioning a bipartisan bill that would open 
up social media platforms' underlying algorithms. We have 
struggled to even do that, to understand the underlying 
technology and then to move towards responsible regulation.
    We cannot afford to be as late to responsibly regulating 
generative AI as we have been to social media, because the 
consequences, both positive and negative, will exceed those of 
social media by orders of magnitude. So, let me ask a few 
questions designed to get at how we assess the risk, what the 
role of international regulation is, and how this impacts AI?
    Mr. Altman, I appreciate your testimony about the ways in 
which OpenAI assesses the safety of your models through a 
process of iterative deployment. The fundamental question 
embedded in that process, though, is how you decide whether or 
not a model is safe enough to deploy and safe enough to have 
been built and then let go into the wild.
    I understand one way to prevent generative AI models from 
providing harmful content is to have humans identify that 
content and then train the algorithm to avoid it. There's 
another approach that's called constitutional AI that gives the 
model a set of values or principles to guide its 
decisionmaking. Would it be more effective to give models these 
kinds of rules instead of trying to require or compel training 
the model on all the different potentials for harmful content?
    Mr. Altman. Thank you, Senator. It's a great question. I'd 
like to frame it by talking about why we deploy at all: like, 
why we put these systems out into the world. There's the 
obvious answer about there's benefits and people are using it 
for all sorts of wonderful things and getting great value, and 
that makes us happy. But a big part of why we do it is that we 
believe that iterative deployment and giving people and our 
institutions and you all time to come to grips with this 
technology, to understand it, to find its limitations and 
benefits, the regulations we need around it, what it takes to 
make it safe--that's really important. Going off to build a 
super powerful AI system in secret and then dropping it on the 
world all at once, I think would not go well.
    So, a big part of our strategy is, while these systems are 
still relatively weak and deeply imperfect, to find ways to get 
people to have experience with them, to have contact with 
reality, and to figure out what we need to do to make it safer 
and better. And that is the only way that I've seen in the 
history of new technology and products of this magnitude to get 
to a very good outcome. And so that interaction with the world 
is very important.
    Now, of course, before we put something out, it needs to 
meet a bar of safety. And again, we spent well over 6 months 
with GPT-4, after we finished training it, going through all of 
these different things and deciding also what the standards 
were going to be, before we put something out there, trying to 
find the harms that we knew about it and how to address those. 
One of the things that's been gratifying to us is even some of 
our biggest critics have looked at GPT-4 and said, ``Wow, 
OpenAI made huge progress on''----
    Senator Coons. If you could focus briefly on whether or not 
a constitutional model that gives values would be worth it.
    Mr. Altman. I was just about----
    Senator Coons. I'm down to----
    Mr. Altman [continuing]. To get there.
    Senator Coons [continuing]. 2\1/2\ minutes.
    Mr. Altman. All right. Sorry about that. Yes. I think 
giving the models values up front is an extremely important 
step. You know, RLHF is another way of doing that same thing. 
But somehow or other, with synthetic data or human-generated 
data, you're saying, ``Here are the values. Here's what I want 
you to reflect,'' or ``Here are the wide bounds of everything 
that society will allow, and then within there, you pick, as 
the user, you know, if you want value system over here or value 
system over there.''
    We think that's very important. There's multiple technical 
approaches, but we need to give policymakers and the world as a 
whole the tools to say, ``Here are the values; implement 
them.''
    Senator Coons. Thank you. Ms. Montgomery, you serve on an 
AI ethics board of a long-established company that has a lot of 
experience with AI. I'm really concerned that generative AI 
technologies can undermine faith in democratic values and 
the institutions that we have.
    The Chinese are insisting that AI, as being developed in 
China, reinforce the core values of the Chinese Communist Party 
and the Chinese system. And I'm concerned about how we promote 
AI that reinforces and strengthens open markets, open 
societies, and democracy. In your testimony, you're advocating 
for AI regulation tailored to the specific way the technology 
is being used, not the underlying technology itself. And the EU 
is moving ahead with an AI Act which categorizes AI products 
based on level of risk.
    You all, in different ways, have said that you view 
elections and the shaping of election outcomes and 
disinformation that can influence elections as one of the 
highest-risk cases, one that's entirely predictable. We have 
attempted, so far unsuccessfully, to regulate social media 
after the demonstrably harmful impacts of social media on our 
last several elections. What advice do you have for us about 
what kind of approach we should follow and whether or not the 
EU direction is the right one to pursue?
    Ms. Montgomery. Yes. The conception of the EU AI Act is 
very consistent with this concept of precision regulation, 
where you're regulating the use of the technology in context. 
So, absolutely, that approach makes a ton of sense. It's what I 
advocated for at the outset. Different rules for different 
risks. So, in the case of elections, absolutely, any algorithm 
being used in that context should be required to have 
disclosure around the data being used, the performance of the 
model. Anything along those lines is really important. 
Guardrails need to be in place.
    And on the point--just to come back to the question of 
whether 
we need an independent agency. I mean, I think we don't want to 
slow down regulation to address real risks right now. Right? 
So, we have existing regulatory authorities in place who have 
been clear that they have the ability to regulate in their 
respective domains. A lot of the issues we're talking about 
today span multiple domains, elections and the like. So----
    Senator Coons. If I could, I'll just assert that those 
existing regulatory bodies and authorities are under-resourced 
and lack many of the----
    Ms. Montgomery. Yes.
    Senator Coons [continuing]. Statutory----
    Ms. Montgomery. Absolutely.
    Senator Coons [continuing]. Regulatory powers that they 
need.
    Ms. Montgomery. Correct.
    Senator Coons. We have failed to deliver on data privacy, 
even though industry has been----
    Ms. Montgomery. Yes.
    Senator Coons [continuing]. Asking us to regulate data 
privacy. If I might, Mr. Marcus, I'm interested, also, what 
international bodies are best positioned to convene 
multilateral discussions to promote responsible standards? 
We've talked about a model being CERN and nuclear energy. I'm 
concerned about proliferation and nonproliferation. I would 
suggest that the IPCC, a U.N. body, helped at least provide a 
scientific baseline of what's happening in climate change, so 
that even though we may disagree about strategies, globally 
we've come to a common understanding of what's happening and 
what should be the direction of intervention. I'd be 
interested, Mr. Marcus, if you could just give us your thoughts 
on who's the right body internationally to convene a 
conversation and one that could also reflect our values?
    Professor Marcus. I'm still feeling my way on that issue. I 
think global politics is not my specialty. I'm an AI 
researcher. But I have moved towards policy in recent months, 
really, because of my great concern about all of these risks. I 
think certainly the U.N.--UNESCO has its guidelines--should be 
involved and at the table, and maybe things work under them and 
maybe they don't, but they should have a strong voice and help 
to develop this. The OECD has also been thinking a great deal 
about 
this. A number of organizations have, internationally. I don't 
feel like I personally am qualified to say exactly what the 
right model is there.
    Senator Coons. Well, thank you. I think we need to pursue 
this both at the national level and the international level. 
I'm the Chair of the IP Subcommittee of the Judiciary 
Committee. In June and July, we will be having hearings on the 
impact of AI on patents and copyrights. You can already tell 
from the questions of others there will be a lot of interest. I 
look forward to following up with you about that topic. I know, 
Mr. Chairman, I'm a little over my time.
    Professor Marcus. I look forward to helping as much as 
possible.
    Senator Coons. Thank you very much.
    Chair Blumenthal. Thanks, Senator Coons. Senator Kennedy.
    Senator Kennedy. Thank you all for being here. Permit me to 
share with you three hypotheses that I would like you to assume 
for the moment to be true. Hypothesis number one: Many Members 
of Congress do not understand artificial intelligence. 
Hypothesis number two: That absence of understanding may not 
prevent Congress from plunging in with enthusiasm and trying to 
regulate this technology in a way that could hurt this 
technology. Hypothesis number three that I would like you to 
assume: There is likely a berserk wing of the artificial 
intelligence community that intentionally or unintentionally 
could use artificial intelligence to kill all of us and hurt us 
the entire time that we are dying.
    Assume all of those to be true. Please tell me, in plain 
English, two or three reforms, regulations, if any, that you 
would implement if you were queen or king for a day. Ms. 
Montgomery?
    Ms. Montgomery. I think it comes back again to transparency 
and explainability in AI. We absolutely need to know and have 
companies attest.
    Senator Kennedy. What do you mean by transparency?
    Ms. Montgomery. So, disclosure of the data that's used to 
train AI, disclosure of the model and how it performs, and 
making sure that there's continuous governance over these 
models, that we are the leading edge in terms of----
    Senator Kennedy. Governance by whom?
    Ms. Montgomery [continuing]. That regulation. Technology 
governance, organizational governance, rules, and clarification 
that are needed that this----
    Senator Kennedy. Which rules?
    Ms. Montgomery [continuing]. Congress----
    Senator Kennedy. I mean, this is your chance, folks, to 
tell us how to get this right. Please use it.
    Ms. Montgomery. All right. I mean, I think, again, the 
rules should be focused on the use of AI in certain contexts. 
So, if you look at, for example, the----
    Senator Kennedy. Such as?
    Ms. Montgomery. So, if you look at the EU AI Act, it has 
certain uses of AI that it says are just simply too dangerous 
and will be outlawed in----
    Senator Kennedy. Okay. So----
    Ms. Montgomery [continuing]. The EU.
    Senator Kennedy [continuing]. We ought to first pass a law 
that says you can use AI for these uses but not others. Is that 
what you're saying?
    Ms. Montgomery. We need to define the highest-risk uses of 
AI.
    Senator Kennedy. Is there anything else?
    Ms. Montgomery. And then, of course, requiring things like 
impact assessments and transparency, requiring companies to 
show their work, protecting data that's used to train AI in the 
first place, as well.
    Senator Kennedy. All right. Professor Marcus, if you could 
be specific. This is your shot, man. Talk in plain English and 
tell me what, if any, rules we ought to implement. And please 
don't just use concepts. I'm looking for specificity.
    Professor Marcus. Number one, a safety review like we use 
with the FDA prior to widespread deployment. If you're going to 
introduce something to 100 million people, somebody has to have 
their eyeballs on it.
    Senator Kennedy. There you go. Okay. That's a good one.
    Professor Marcus. Number----
    Senator Kennedy. I'm not sure I agree with it, but that's a 
good one. What else?
    Professor Marcus. You didn't ask for three that you would 
agree with. Number two, a nimble monitoring agency to follow 
what's going on, not just prereview but also post as things are 
out there in the world, with authority to call things back, 
which we've discussed today. And number three would be funding 
geared towards things like constitutional AI--AI that can 
reason 
about what it's doing. I would not leave things entirely to 
current technology, which I think is poor at behaving in 
ethical fashion and behaving in honest fashion.
    And so I would have funding to try to basically focus on AI 
safety research. That term has a lot of complications in my 
field. There's both safety, let's say, short term and long 
term. And I think we need to look at both. Rather than just 
funding models to be bigger, which is the popular thing to do, 
we need to fund----
    Senator Kennedy. Let me cut----
    Professor Marcus [continuing]. Models to be more 
trustworthy.
    Senator Kennedy [continuing]. You off, Professor, because I 
want to hear from Mr. Altman. Mr. Altman, here's your shot.
    Mr. Altman. Thank you, Senator. Number one, I would form a 
new agency that licenses any effort above a certain scale of 
capabilities and can take that license away and ensure 
compliance with safety standards. Number two, I would create a 
set of safety standards focused on what you said in your third 
hypothesis, as the dangerous capability evaluations. One 
example that we've used in the past is looking to see if a 
model can self-replicate and self-exfiltrate into the wild. We 
can give your office a long other list of the things that we 
think are important there, but specific tests that a model has 
to pass before it can be deployed into the world. And then, 
third, I would require independent audits, so not just from the 
company or the agency, but experts who can say the model is or 
isn't in compliance with these stated safety thresholds and 
these percentages of performance on question X or Y.
    Senator Kennedy. Can you send me that information?
    Mr. Altman. We will do that.
    Senator Kennedy. Would you be qualified, if we promulgated 
those rules, to administer those rules?
    Mr. Altman. I love my current job.
    [Laughter.]
    Senator Kennedy. Are there people out there that would be 
qualified?
    Mr. Altman. We'd be happy to send you recommendations for 
people out there, yes.
    Senator Kennedy. Okay. You make a lot of money, do you?
    Mr. Altman. No. I'm paid enough for health insurance. I 
have no equity in OpenAI.
    Senator Kennedy. Really?
    Mr. Altman. Yes.
    Senator Kennedy. That's interesting. You need a lawyer.
    Mr. Altman. I need a what?
    Senator Kennedy. You need a lawyer or an agent.
    Mr. Altman. I'm doing this because I love it.
    Senator Kennedy. Thank you, Mr. Chairman.
    Chair Blumenthal. Thanks, Senator Kennedy. Senator Hirono.
    Senator Hirono. Thank you, Mr. Chairman. Listening to all 
of you testifying--thank you very much for being here. Clearly, 
AI truly is a game-changing tool, and we need to get the 
regulation of this tool right, because my staff, for example, 
asked AI--it might have been GPT-4; it might've been, I don't 
know, one of the other entities--to create a song that my 
favorite band, BTS, would sing--a version of somebody else's 
song--but neither of the artists was involved in creating what 
sounded like a really genuine song. So, you can do a lot. 
    We also asked: Can there be a speech created talking about 
the Supreme Court decision in Dobbs and the chaos that it 
created, using my voice, my kind of voice? And it created a 
speech that was really good. It almost made me think about, 
what do I need my staff for? So, don't worry. That's not where 
we are.
    Mr. Altman. Nervous laughter behind you.
    Senator Hirono. Their jobs are safe. But there's so much 
that can be done, and one of the things that you mentioned, Mr. 
Altman, that intrigued me was you said GPT-4 can refuse harmful 
requests. So, you must have put some thought into how your 
system, if I can call it that, can refuse harmful requests. 
What do you consider a harmful request? You can just keep it 
short.
    Mr. Altman. Yes. I'll give a few examples. One would be 
about violent content. Another would be about content that's 
encouraging self-harm. Another is adult content. Not that we 
think adult content is inherently harmful, but there's things 
that could be associated with that that we cannot reliably 
enough differentiate, so we refuse all of it.
    Senator Hirono. So, those are some of the more obvious 
harmful kinds of information. But in the election context, for 
example, I saw a picture of former President Trump being 
arrested by NYPD, and that went viral. I don't know. Is that 
considered harmful? I've seen all kinds of statements 
attributed to any one of us that could be put out there that 
may not rise to your level of harmful content, but there you 
have it.
    So, two of you said that we should have a licensing scheme. 
I can't envision or imagine right now what kind of a licensing 
scheme we would be able to create to pretty much regulate the 
vastness of this game-changing tool. So are you thinking of an 
FTC kind of a system, an FCC kind of a system? What do the two 
of you even envision as a potential licensing scheme that would 
provide the kind of guardrails that we need, to protect, 
literally, our country from harmful content?
    Mr. Altman. To touch on the first part of what you said, 
there are things besides, you know, ``Should this content be 
generated or not?'' that I think are also important. So, that 
image that you mentioned was generated--I think it'd be a great 
policy to say that generated images need to be clearly labeled 
as generated in all contexts. And, you know, then we still 
have the image out there, but we're at least requiring people 
to say this was a generated image.
    Senator Hirono. Okay. Well, you don't need an entire 
licensing scheme in order to make that a reality.
    Mr. Altman. Where I think the licensing scheme comes in is 
not for what these models are capable of today, because, as you 
pointed out, you don't need a new licensing agency to do that. 
But as we head--and, you know, this may take a long time. I'm 
not sure. As we head towards artificial general intelligence 
and the impact that will have and the power of that technology, 
I think we need to treat that as seriously as we treat other 
very powerful technologies. And that's why I personally think 
we need such a scheme.
    Senator Hirono. I agree. And that is why the--by the time 
we're talking about AGI, we're talking about major harms that 
can occur through the use of AGI. So, Professor Marcus, I mean, 
what kind of a regulatory scheme would you envision? And we 
can't just come up with something, you know, that is going to 
take care of the issues that will arise in the future, 
especially with AGI. So what kind of a scheme would you 
contemplate?
    Professor Marcus. Well, first, if I can rewind just a 
moment, I think you really put your finger on the central 
scientific issue in terms of the challenges in building 
artificial intelligence. We don't know how to build a system 
that understands harm in the full breadth of its meaning. So, 
what we do right now is we gather examples and we say, ``Is 
this like the examples that we have labeled before?'' But 
that's not broad enough. And so I thought your questioning 
beautifully outlined the challenge that AI itself has to face 
in order to really deal with this. We want AI itself to 
understand harm, and that may require new technology, so I 
think that's very important.
    On the second part of your question, the model that I tend 
to gravitate towards--but I am not an expert here--is the FDA, 
at least as part of it, in terms of, you have to make a safety 
case and say why the benefits outweigh the harms, in order to 
get that license. Probably we need elements of multiple 
agencies. I'm not an expert there, but I think that the safety 
case part of it is incredibly important. You have to be able to 
have external reviewers that are scientifically qualified look 
at this and say, ``Have you addressed enough?''
    So, I'll just give one specific example. Auto-GPT frightens 
me. That's not something that OpenAI made, but something that 
OpenAI did make, called ChatGPT plugins, led a few weeks later 
to someone building open-source software called Auto-GPT. And what 
Auto-GPT does is it allows systems to access source code, 
access the internet, and so forth. And there are a lot of 
potential, let's say, cybersecurity risks there. There should 
be an external agency that says, ``Well, we need to be 
reassured, if you're going to release this product, that there 
aren't going to be cybersecurity problems or there are ways of 
addressing it.''
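    [For illustration only: a minimal sketch of the kind of 
autonomous tool-use loop Professor Marcus describes, in which a 
model is repeatedly asked for its next action and may call 
tools such as web search or code execution until it reports it 
is finished. The call_model function and the tools here are 
hypothetical stubs, not Auto-GPT's actual code.]

        def call_model(history: list[str]) -> dict:
            # Hypothetical stand-in for a language-model call that
            # returns the next action, e.g. {"tool": "search",
            # "arg": "..."} or {"tool": "finish", "arg": ""}.
            raise NotImplementedError

        TOOLS = {
            # Stubbed tools; a real agent would wire these to the
            # internet or an interpreter, which is the risk at issue.
            "search": lambda q: f"results for {q!r}",
            "run_code": lambda src: "output of running the code",
        }

        def run_agent(goal: str, max_steps: int = 10) -> list[str]:
            history = [f"GOAL: {goal}"]
            for _ in range(max_steps):  # crude safety bound
                action = call_model(history)
                if action["tool"] == "finish":
                    break
                result = TOOLS[action["tool"]](action["arg"])
                history.append(f"{action['tool']} -> {result}")
            return history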
    Senator Hirono. So, Professor, I am running out of time. 
You know, I just want to mention, Ms. Montgomery, your model is 
a use model similar to what the EU has come up with, but the 
vastness of AI and the complexities involved, I think, would 
require more than looking at the use of it. Based on what I'm 
hearing today, don't you think that we're probably going to 
need to do a heck of a lot more than to focus on what AI is 
being used for?
    For example, you can ask AI to come up with a funny joke or 
something, but you can use the same--you can ask the same AI 
tool to generate something that is like an election fraud kind 
of a situation. So, I don't know how you will make a 
determination, based on where you're going with the use model, 
how to distinguish those kinds of uses of this tool. So, I 
think that if we're going to go toward a licensing kind of a 
scheme, we're going to need to put a lot of thought into how 
we're going to come up with an appropriate scheme that is going 
to provide the kind of future protection that we need to put in 
place.
    So, I thank all of you for coming in and providing further 
food for thought. Thank you, Mr. Chairman.
    Chair Blumenthal. Thanks very much, Senator Hirono. Senator 
Padilla.
    Senator Padilla. Thank you, Mr. Chairman. I appreciate the 
flexibility, as I've been back and forth between this Committee 
and the Homeland Security Committee, where there's a hearing 
going on right now on the use of AI in government. So, it's AI 
day on The Hill, or at least in the Senate, apparently.
    Now, for folks watching at home, if you never thought about 
AI until the recent emergence of generative AI tools, the 
developments in this space may feel like they've just happened 
all of a sudden. But the fact of the matter, Mr. Chair, is that 
they haven't. AI is not new, not for government, not for 
business, not for the public. In fact, the public uses AI all 
the time.
    And just for folks to be able to relate, I want to offer 
the example of anybody with a smartphone. Many features on your 
device leverage AI, including suggested replies, right, when 
we're text messaging or even emailing, and autocorrect 
features, including but not limited to spelling, in our email 
and text applications. So, I'm frankly excited to explore how we can 
facilitate positive AI innovation that benefits society while 
addressing some of the already known harms and biases that stem 
from the development and use of the tools today.
    Now, with language models becoming increasingly ubiquitous, 
I want to make sure that there's a focus on ensuring equitable 
treatment of diverse demographic groups. My understanding is 
that most research into evaluating and mitigating fairness 
harms has been concentrated on the English language, while non-
English languages have received comparatively little attention 
or investment. We've seen this problem before, and I'll tell 
you why I raise it.
    Social media companies, for example, have not adequately 
invested in content moderation tools and resources for non-
English languages. And I share this not just out of concern for 
non-U.S.-based users, but because so many U.S.-based users 
prefer a language other than English in their communication. 
So, I'm deeply concerned about repeating social media's 
failures in AI tools and applications.
    Question: Mr. Altman and Ms. Montgomery, how are OpenAI and 
IBM ensuring language and cultural inclusivity in their large 
language models? Is it even an area of focus in the development 
of your products?
    Ms. Montgomery. So, bias and equity in technology is a 
focus of ours and always has been, diversity in terms of the 
development of the tools, in terms of their deployment, so 
having diverse people that are actually training those tools, 
considering the downstream effects, as well. We're also very 
cautious, very aware of the fact that we can't just be 
articulating and calling for these types of things without 
having the tools and the technology to test for bias and to 
apply governance across the lifecycle of AI. So, we were one of 
the first teams and companies to put toolkits on the market, 
deploy them, and contribute them to open source--toolkits that 
provide the technical means by which we help to address issues 
like bias.
    Senator Padilla. Can you speak just for a second 
specifically to language inclusivity?
    Ms. Montgomery. Yes. So, we don't have a consumer platform, 
but we are very actively involved with ensuring that the 
technology we help to deploy, and the large language models 
that we use in helping our clients deploy technology, are 
focused on and available in many languages.
    Senator Padilla. Thank you. Mr. Altman?
    Mr. Altman. We think this is really important. One example 
is that we worked with the government of Iceland, whose 
language has fewer speakers than many of the languages that are 
well represented on the internet, to ensure that their language 
was included in our model. And we've had many similar 
conversations, and I look forward to many similar partnerships 
with lower-resource languages to get them into our models. GPT-
4, unlike previous models of ours, which were good at English 
and not very good at other languages, is now pretty good at a 
large number of languages. You can go pretty far down the list, 
ranked by number of speakers, and still get good performance.
    But for these very small languages, we're excited about 
custom partnerships to include that language in our models. And 
on the part of the question you asked about values and making 
sure that cultures are included, we're equally focused 
on that, excited to work with people who have particular data 
sets and to work to collect a representative set of values from 
around the world, to draw these wide bounds of what the system 
can do.
    I also appreciate what you said about the benefits of these 
systems and wanting to make sure we get those to as wide of a 
group as possible. I think these systems will have lots of 
positive impact on a lot of people, but in particular, 
historically underrepresented groups in technology, people who 
have not had as much access to technology around the world. 
This technology seems like it can be a big lift up.
    Senator Padilla. Very good. And I know my question was 
specific to language inclusivity, but I'm glad there's 
agreement on the broader commitment to diversity and inclusion. 
And I'll just give a couple more reasons why I think it's so 
critical. You know, the largest actors in this space can afford 
the massive amounts of data and computing power, and they have 
the financial resources necessary to develop complex AI 
systems. But in this space, we haven't seen, from a workforce 
standpoint, the racial and gender diversity reflective of the 
United States of America. And we risk, if we're not thoughtful 
about it, contributing to the development of tools and 
approaches that only exacerbate the bias and inequities that 
exist in our society. So, a lot of follow-up work to do there.
    In my time remaining, I do want to ask one more question. 
This Committee and the public are right to pay attention to the 
emergence of generative AI. Now, this technology has a 
different opportunity and risk profile than other AI tools. And 
these applications have felt very tangible for the public, due 
to the nature of the user interface and the outputs that they 
produce. But I don't think we should lose sight of the broader 
AI ecosystem as we consider AI's broader impact on society, as 
well as the design of appropriate safeguards.
    So, Ms. Montgomery, in your testimony, as you noted, AI is 
not new. Can you highlight some of the different applications 
that the public and policymakers should also keep in mind as we 
consider possible regulations?
    Ms. Montgomery. Yes. I mean, I think the generative AI 
systems that are available today are creating new issues that 
need to be studied, new issues around the potential to generate 
content that could be extremely misleading, deceptive, and the 
like. So, those issues absolutely need to be studied. But we 
shouldn't also ignore the fact that AI is a tool. It's been 
around for a long time. It has capabilities beyond just 
generative capabilities. And again, that's why I think going 
back to this approach where we're regulating AI where it's 
touching people and society is a really important way to 
address it.
    Senator Padilla. Thank you. Thank you, Mr. Chair.
    Chair Blumenthal. Thanks, Senator Padilla. Senator Booker 
is next, but I think he's going to defer to Senator Ossoff.
    Senator Booker. That's because Senator Ossoff's a very big 
deal. I don't know if you know that.
    Senator Ossoff. I have a meeting at noon, and I'm grateful 
to you, Senator Booker, for yielding your time. You are, as 
always, brilliant and handsome. And thank you to the panelists 
for joining us. Thank you to the Subcommittee leadership for 
opening this up to all Committee Members.
    If we're going to contemplate a regulatory framework, we're 
going to have to define what it is that we're regulating. So, 
you know, Mr. Altman, any such law will have to include a 
section that defines the scope of regulated activities, 
technologies, tools, products. Just take a stab at it.
    Mr. Altman. Yes. Thanks for asking, Senator Ossoff. I think 
it's super important. I think there are very different levels 
here, and I think it's important that any new approach, any new 
law does not stop the innovation from happening with smaller 
companies, open-source models, researchers that are doing work 
at a smaller scale. That's a wonderful part of this ecosystem 
and of America; we don't want to slow that down. There still 
may need to be some rules there, but I think we could draw a 
line at systems that need to be licensed in a very intense way.
    The easiest way to do it--I'm not sure if it's the best, 
but the easiest would be to talk about the amount of compute 
that goes into such a model. So, you know, we could define a 
threshold of compute--and it'll have to change; it could go up 
or down, down as we discover more efficient algorithms--that 
says, ``Above this amount of compute, you are in this regime.''
    What I would prefer--it's harder to do, but I think more 
accurate--is to define some capability thresholds and say, ``A 
model that can do things X, Y, and Z''--up to you all to 
decide--``that's now in this licensing regime.'' But models 
that are less capable--you know, we don't want to stop our 
open-source community, we don't want to stop individual 
researchers, we don't want to stop new startups--can proceed, 
you know, with a different framework.
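    [For illustration only: the two thresholds Mr. Altman 
describes, compute and capability, could be combined in a 
licensing test along the following lines. The numeric cutoff 
and the capability names are placeholders invented for this 
sketch, not figures from the hearing.]

        # Hypothetical cutoff in training FLOPs; as Mr. Altman
        # notes, any real threshold would have to move over time.
        TRAINING_FLOPS_THRESHOLD = 1e25

        REGULATED_CAPABILITIES = {
            "persuade_or_manipulate_beliefs",
            "assist_novel_biological_agents",
        }

        def requires_license(training_flops: float,
                             capabilities: set[str]) -> bool:
            # In the regime if over the compute line, or if any
            # regulated capability is present.
            if training_flops >= TRAINING_FLOPS_THRESHOLD:
                return True
            return bool(capabilities & REGULATED_CAPABILITIES)

        # A small research model stays outside the regime:
        assert not requires_license(1e22, {"summarize_text"})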
    Senator Ossoff. Thank you. As concisely as you can, please 
state which capabilities you'd propose we consider for the 
purposes of this definition.
    Mr. Altman. I would love, rather than to do that off the 
cuff, to follow up with your office with, like, a thoughtful--
--
    Senator Ossoff. Well, perhaps opine, understanding that 
you're just responding. You're not making law.
    Mr. Altman. All right. In the spirit of just opining, I 
think a model that can persuade, manipulate, influence a 
person's behavior or a person's beliefs--that would be a good 
threshold. I think a model that could help create novel 
biological agents would be a great threshold. Things like that.
    Senator Ossoff. I want to talk about the predictive 
capabilities of the technology, and we're going to have to 
think about a lot of very complicated constitutional questions 
that arise from it. With massive data sets, the integrity and 
accuracy with which such technology can predict future human 
behavior is potentially pretty significant at the individual 
level, correct?
    Mr. Altman. I think we don't know the answer to that for 
sure, but let's say it can at least have some impact there.
    Senator Ossoff. Okay. So, we may be confronted by 
situations where, for example, a law enforcement agency 
deploying such technology seeks some kind of judicial consent 
to execute a search or to take some other police action on the 
basis of a modeled prediction about some individual's behavior. 
But that's very different from the kind of evidentiary 
predicate that normally police would take to a judge in order 
to get a warrant. Talk me through how you're thinking about 
that issue.
    Mr. Altman. Yes. I think it's very important that we 
continue to understand that these are tools that humans use to 
make human judgments and that we don't take away human 
judgment. I don't think that people should be prosecuted based 
off of the output of an AI system, for example.
    Senator Ossoff. We have no national privacy law. Europe has 
rolled one out, to mixed reviews. Do you think we need one?
    Mr. Altman. I think it'd be good.
    Senator Ossoff. And what would be the qualities or purposes 
of such a law that you think would make the most sense, based 
on your experience?
    Mr. Altman. Again, this is very far out of my area of 
expertise. I think there's many, many people that are privacy 
experts that could weigh in on what a law needs much better 
than I can.
    Senator Ossoff. I'd still like you to weigh in.
    Mr. Altman. I mean, I think a minimum is that users should 
be able to sort of opt out from having their data used by 
companies like ours or the social media companies. It should be 
easy to delete your data. I think those are--but the thing that 
I think is important, from my perspective running an AI 
company, is that if you don't want your data used for training 
these systems, you have the right to do that.
    Senator Ossoff. So, let's think about how that would be 
practically implemented. I mean, as I understand it, your tool, 
and certainly similar tools, one of the inputs will be 
scraping, for lack of a better word, data off of the open web, 
right, as a low-cost way of gathering information. And there's 
a vast amount of information out there about all of us. How 
would such a restriction on the access or use or analysis of 
such data be practically implemented?
    Mr. Altman. So, I was speaking about something a little bit 
different, which is the data that someone generates, the 
questions they ask our system, things that they input there, 
training on that. Data that's on the public web, that's 
accessible--even if we don't train on that, the models can 
certainly link out to it. So, that was not what I was referring 
to. I think that, you know, there's ways to have your data or 
there should be more ways to have your data taken down from the 
public web, but certainly models with web-browsing capabilities 
will be able to search the web and link out to it.
    Senator Ossoff. When you think about implementing a safety 
or a regulatory regime to constrain such software and to 
mitigate some risk, is your view that the Federal Government 
would make laws such that certain capabilities or 
functionalities themselves are forbidden in potential? In other 
words, one cannot deploy or execute code capable of X?
    Mr. Altman. Yes.
    Senator Ossoff. Or is it the act itself, X only when 
actually executed, that----
    Mr. Altman. Well, I think both.
    Senator Ossoff [continuing]. Is illegal?
    Mr. Altman. I'm a believer in defense in depth. I think 
that there should be limits on what a deployed model is capable 
of and then what it actually does, too.
    Senator Ossoff. How are you thinking about how kids use 
your product?
    Mr. Altman. Well, you have to be--I mean, you have to be 18 
or up, or have your parent's permission at 13 and up, to use 
the product. But we understand that people get around those 
safeguards all the time. And so what we try to do is just 
design a safe product.
    And there are decisions that we make that we would allow if 
we knew only adults were using it, that we just don't allow in 
the product because we know children will use it some way or 
other, too. In particular, given how much these systems are 
being used in education, we, like, want to be aware that that's 
happening.
    Senator Ossoff. I think what--and Senator Blumenthal has 
done extensive work investigating this. What we've seen 
repeatedly is that companies whose revenues depend upon volume 
of use, screen time, intensity of use, design these systems in 
order to maximize the engagement of all users, including 
children, with perverse results in many cases. And what I would 
humbly advise you is that you get way ahead of this issue, the 
safety for children of your product, or I think you're going to 
find that Senator Blumenthal, Senator Hawley, others on the 
Subcommittee, and I will look very harshly on the deployment of 
technology that harms children.
    Mr. Altman. We couldn't agree more. I think we're out of 
time, but I'm happy to talk about that if I can respond.
    Senator Ossoff. Go ahead. Well, that's up to the Chairman.
    Chair Blumenthal. Go ahead.
    Mr. Altman. Okay. First of all, I think we try to design 
systems that do not maximize for engagement. In fact, we're so 
short on GPUs, the less people use our products, the better. 
But we're not an advertising-based model. We're not trying to 
get people to use it more and more. And I think that's a 
different shape than ad-supported social media.
    Second, these systems do have the capability to influence 
in obvious and in very nuanced ways, and I think that's 
particularly important for the safety of children, but that 
will impact all of us. One of the things that we'll do 
ourselves, regulation or not--but that I think a regulatory 
approach would also be good for--is requirements about how the 
values of these systems are set and how these systems respond 
to questions that can cause influence. So, we'd love to partner 
with you. Couldn't agree more on the importance.
    Senator Ossoff. Thank you.
    Senator Booker. Mr. Chairman, for the record, I just want 
to say that the Senator from Georgia is also very handsome and 
brilliant, too.
    [Laughter.]
    Senator Booker. But----
    Chair Blumenthal. I will allow that comment to stand 
without objection.
    Senator Booker. Without objection. Okay. Mr. Chairman and 
Ranking Member, it's been----
    Chair Blumenthal. Senator Booker, you are now recognized.
    Senator Booker. Thank you very much. Thank you. It's nice 
that we finally got down to the bald guys down here at the end. 
I just want to thank you both. This has been one of the best 
hearings I've had this Congress and just a testimony to you 
two, and seeing the challenges and the opportunities that AI 
presents. So, I appreciate you both.
    I want to just jump in, I think, very broadly, and then 
I'll get a little more narrow. Sam, you said very broadly, 
technology has been moving like this, and a lot of people have 
been talking about regulation. And so I use the example of the 
automobile. What an extraordinary piece of technology. I mean, 
New York City did not know what to do with horse manure. They 
were having crises, forming commissions, and the automobile 
comes along, ends that problem. But at the same time, we have 
tens of thousands of people dying on highways every year. We 
have emissions crises and the like.
    There are multiple Federal agencies, multiple Federal 
agencies that were created or are specifically focused on 
regulating cars. And so this idea that this equally 
transforming technology is coming, and for Congress to do 
nothing--which is not what anybody here is calling for, little 
or nothing--is obviously unacceptable. I really appreciate 
Senator Welch--he and I have been going back and forth during 
this hearing--and he and Senator Bennet have a bill talking 
about trying to regulate in this space. Not doing so for social media 
has been, I think, very destructive and allowed a lot of things 
to go on that are really causing a lot of harm.
    And so the question is, what kind of regulation? You all 
have spoken of that to a lot of my colleagues. And I want to--
Ms. Montgomery--and I have to give full disclosure, I'm the 
child of two IBM parents. But, you know, you talked about 
defining the highest-risk uses. We don't know all of them. We 
really don't. We can't see where this is going, regulating at 
the point of risk.
    And you sort of called not for an agency--I think when 
somebody else asked you to specify, it was because you don't 
want to slow things down; we should build on what we have in 
place. But you can envision that we could work in two ways at 
once, so that ultimately we get something specific--like we 
have in cars: EPA, NHTSA, the Federal Motor Carrier Safety 
Administration, all of these things. You can imagine something 
specific that is, as Mr. Marcus points out, a nimble agency 
that could do monitoring and other things. You can imagine the 
need for something like that, correct?
    Ms. Montgomery. Oh, absolutely. Yes.
    Senator Booker. And so, just for the record, then, in 
addition to trying to regulate with what we have now, you would 
encourage Congress and my colleague, Senator Welch, to move 
forward in trying to figure out the right tailored agency to 
deal with what we know and perhaps things that might come up in 
the future?
    Ms. Montgomery. I would encourage Congress to make sure it 
understands the technology, has the skills and resources in 
place to impose regulatory requirements on the uses of the 
technology, and to understand emerging risks as well. So, yes.
    Senator Booker. Yes. Mr. Marcus, there's no way to put this 
genie back in the bottle. Globally, it's exploding. I appreciate 
your thoughts, and I shared some with my staff about your ideas 
of what the international context is. But there's no way to 
stop this moving forward. So, with that understanding, just 
building on what Ms. Montgomery said, what kind of 
encouragement do you have--as specifically as possible--to 
forming an agency, to using current rules and regulations? Can 
you just put some clarity on what you've already stated?
    Professor Marcus. Let me just insert, there are more genies 
yet to come, from more bottles. Some genies are already out, 
but we don't have machines that can really, for example, self-
improve themselves. We don't really have machines that have 
self-awareness, and we might not ever want to go there. So, 
there are other genies to be concerned about.
    On to the main part of your question. I think that we need 
to have some international meetings very quickly with people 
who have expertise in how you grow agencies and the history of 
growing agencies. We need to do that at the Federal level. We 
need to do that at the international level.
    I'll just emphasize one thing I haven't emphasized as much 
as I would like to, which is that I think science has to be a 
really important part of it. And I'll give an example. We've talked 
about misinformation. We don't really have the tools right now 
to detect and label misinformation with nutrition labels that 
we would like to. We have to build new technologies for that. 
We don't really have tools yet to detect a wide uptick in 
cybercrime; we probably need new tools there. We need science 
to help us figure out what we need to build and also what it is 
that we need to have transparency around, and so forth.
    Senator Booker. Understood. Understood. Sam, just go to you 
for the little bit of time I have left. Real quick. First of 
all, you're a bit of a unicorn, as I realized when I first sat 
down with you. Could you explain, why nonprofit? In other 
words, you're not looking for profit--you've even capped the 
returns for the VC people. Just really quickly, I want folks to 
understand that.
    Mr. Altman. We started as a nonprofit, really focused on 
how this technology was going to be built. At the time, it was 
very outside the Overton Window that something like AGI was 
even possible. That's shifted a lot. We didn't know at the time 
how important scale was going to be, but we did know that we 
wanted to build this with humanity's best interests at heart 
and a belief that this technology could, if it goes the way we 
want, if we can do some of those things Professor Marcus 
mentioned, really deeply transform the world. And we wanted to 
be as much of a force for getting to a positive----
    Senator Booker. I'm going to interrupt you. I think that's 
all good. I hope more of that gets out on the record--the 
second part of my question, as well. I found it fascinating. 
But are you ever going to--for a revenue model, for a return on 
your investors, are you ever going to do ads or something like 
that?
    Mr. Altman. I wouldn't say never. Like, I think there may 
be people that we want to offer services to for whom there's no 
other model that works, but I really like having a 
subscription-based model. We have API developers pay us, and we 
have ChatGPT consumers pay us.
    Senator Booker. Okay. Can I jump to the--then can I just 
jump----
    Mr. Altman. Sure.
    Senator Booker [continuing]. Real quickly--one of my 
biggest concerns about this space is what I've already seen in 
the space of Web2, Web3--is this massive corporate 
concentration. It is really terrifying to see how few companies 
now control and affect the lives of so many of us. And these 
companies are getting bigger and more powerful. And I see, you 
know, OpenAI backed by Microsoft. Anthropic is backed by 
Google. Google has its own in-house products, we know Bard.
    So, I'm really worried about that. And I'm wondering if, 
Sam, you can give me a quick acknowledgment. Are you worried 
about the corporate concentration in this space and what effect 
it might have and the associated risks, perhaps, with market 
concentration in AI? And then, Mr. Marcus, can you answer that, 
as well?
    Mr. Altman. I think there will be many people that develop 
models. What's happening on the open-source community is 
amazing, but there will be a relatively small number of 
providers that can make models at the true leading edge----
    Senator Booker. And is there danger in that?
    Mr. Altman. I think there are benefits and dangers to that. 
Like, as we were talking about all of the dangers with AI--the 
fewer of us that you really have to keep a careful eye on at 
the absolute, like, bleeding edge of capabilities, there's a 
benefit there. But I think there needs to be enough--and there 
will be, because there's so much value--that consumers have 
choice, that we have different ideas.
    Senator Booker. Mr. Marcus, real quick?
    Professor Marcus. There is a real risk of a kind of 
technocracy combined with oligarchy, where a small number of 
companies influence people's beliefs through the nature of 
these systems. Again, I put something in the record, from The 
Wall Street Journal, about how these systems can subtly shape 
our beliefs, and that has enormous influence on how we live our 
lives. And having a small number of players do that with data 
that we don't even know about--that scares me.
    Senator Booker. Sam. I'm sorry.
    Mr. Altman. One more thing I wanted to add. One thing that 
I think is very important is that what these systems get 
aligned to, whose values, what those bounds are, that that is 
somehow set by society as a whole, by governments as a whole. 
And so creating that data set, the alignment data set--it could 
be an AI constitution, whatever it is--that has got to come 
very broadly from society.
    Senator Booker. Thank you very much, Mr. Chairman. My 
time's expired, and I guess the best for last.
    Chair Blumenthal. Thank you, Senator Booker. Senator Welch.
    Senator Welch. First of all, I want to thank you, Senator 
Blumenthal and you, Senator Hawley. This has been a tremendous 
hearing. Senators are noted for their short attention spans, 
but I sat through this entire hearing and enjoyed every minute 
of it.
    Chair Blumenthal. You have one of our longer attention 
spans in the United States Senate.
    [Laughter.]
    Chair Blumenthal. To your great credit.
    Senator Welch. Well, we've had good witnesses, and it's an 
incredibly important issue. All the questions I have have been 
asked, really, but here's kind of a takeaway and what I think 
is the major question that we're going to have to answer as a 
Congress. Number one, you're here because AI is this 
extraordinary new technology that everyone says can be 
transformative, as much as the printing press.
    Number two, it's really unknown what's going to happen, but 
there's a big fear you've expressed, all of you, about what bad 
actors can do and will do if there's no rules of the road. 
Number three, as a Member who served in the House and now in 
the Senate, I've come to the conclusion that it's impossible 
for Congress to keep up with the speed of technology.
    And there have been concerns expressed about social media 
and now about AI that relate to fundamental privacy rights, 
bias rights, intellectual property, the spread of 
disinformation, which in many ways for me is the biggest threat 
because that goes to the core of our capacity for self-
governing. There's the economic transformation, which can be 
profound. There's safety concerns.
    And I've come to the conclusion that we absolutely have to 
have an agency. What its scope of engagement is has to be 
defined by us. But I believe that unless we have an agency that 
is going to address these questions from social media and AI, 
we really don't have much of a defense against the bad stuff, 
and the bad stuff will come. So, last year I introduced, on the 
House side--and Senator Bennet didn't, since it was the end of 
the year--the Digital Commission Act, and we're going to be 
reintroducing that this year.
    And the two things that I want to ask--one, you've somewhat 
answered, because I think two of the three of you have said you 
think we do need an independent commission. You know, Congress 
established an independent commission when railroads were 
running rampant over the interests of farmers, and when Wall 
Street had no rules of the road, we got the SEC. I think we're at 
that point now. But what the commission does would have to be 
defined and circumscribed.
    But also, there's always a question about the use of 
regulatory authority and the recognition that it can be used 
for good--J.D. Vance actually mentioned that when we were 
considering his and Senator Brown's bill about railroads after 
that event in East Palestine--regulation for the public health. 
But there's also legitimate concern about regulation getting in 
the way of things, being too cumbersome, and being a negative 
influence.
    So, A, two of the three of you have said you think we do 
need an agency. What are some of the perils of an agency that 
we would have to be mindful of, in order to make certain that 
its goals of protecting many of those interests I just 
mentioned--privacy, bias, intellectual property, 
disinformation--would be the winners and not the losers? And 
I'll start with you, Mr. Altman.
    Mr. Altman. Thank you, Senator. One, I think America has 
got to continue to lead. This happened in America. I'm very 
proud that it happened in America.
    Senator Welch. By the way, I think that's right, and that's 
why I'd be much more confident if we had our agency, as opposed 
to--get involved in international discussions. Ultimately you 
want the rules of the road, but I think if we lead and get 
rules of the road that work for us, that is probably a more 
effective way to proceed.
    Mr. Altman. I personally believe there's a way to do both. 
And I think it is important to have the global view on this, 
because this technology will impact Americans and all of us 
wherever it's developed. But I think we want America to lead. 
We want----
    Senator Welch. So, get to the perils issue, though, because 
I know----
    Mr. Altman. Well, that's one. I mean, that is a peril----
    Senator Welch [continuing]. My Republican colleagues--
right.
    Ms. Montgomery. That's right.
    Mr. Altman [continuing]. Which is, you slow down American 
industry in such a way that China or somebody else makes faster 
progress. A second--and I think this can happen--is that the 
regulatory pressure should be on us. It should be on Google. It 
should be on the small set of other people that are the 
furthest in the lead.
    We don't want to slow down smaller startups. We don't want 
to slow down open-source efforts. We still need them to comply 
with things--you can still cause great harm with a smaller 
model--but we need to leave the room and the space for new 
ideas and new companies and independent researchers to do their 
work, and not put on a regulatory burden that, say, a company 
like us could handle but a smaller one couldn't. I think that's 
another peril, and it's clearly a way that regulation has gone 
before.
    Senator Welch. Okay. Mr. Marcus, or Professor Marcus?
    Professor Marcus. The other obvious peril is regulatory 
capture: we make it appear as if we are doing something, but 
it's more like greenwashing, and nothing really happens--we 
just keep out the little players because we put up so much 
burden that only the big players can meet it. So, there are also those 
kinds of perils. I fully agree with everything that Mr. Altman 
said, and I would add that to the list.
    Senator Welch. Okay. Ms. Montgomery?
    Ms. Montgomery. One of the things I would add to the list 
is the risk of not holding companies accountable for the harms 
that they're causing today. Right? So, we talk about 
misinformation in electoral systems.
    Senator Welch. So, no Section 230----
    Ms. Montgomery. Agency or no agency----
    Senator Welch [continuing]. Kind of flavor here.
    Ms. Montgomery [continuing]. We need to hold companies 
responsible today and accountable for the AI that they're 
deploying that disseminates misinformation on things like 
elections and where the----
    Senator Welch. Yes.
    Ms. Montgomery [continuing]. Risk is.
    Senator Welch. You know, a regulatory agency would do a lot 
of the things that Senator Graham was talking about. You know, 
you don't build a nuclear reactor without getting a license. 
You don't build an AI system without getting a license that 
gets tested independently.
    Mr. Altman. I think it's a great analogy.
    Senator Welch. All right.
    Professor Marcus. We need both predeployment and 
postdeployment.
    Senator Welch. Okay. Thank you all very much. I yield back, 
Mr. Chairman.
    Chair Blumenthal. Thanks. Thanks, Senator Welch. Let me ask 
a few more questions. You've all been very, very patient, and 
the turnout today, which is beyond our Subcommittee, I think 
reflects both the value of what you're contributing as well as 
the interest in this topic.
    There are a number of subjects that we haven't covered at 
all. One was just alluded to by Professor Marcus, which is the 
monopolization danger, the dominance of markets that excludes 
new competition and thereby inhibits or prevents innovation and 
invention, which we have seen in social media as well as some 
of the old industries: airlines, automobiles, and others, where 
consolidation has narrowed competition.
    And so I think we need to focus on kind of an old area of 
the law, antitrust, which dates back more than a century and is 
still inadequate to deal with the challenges we have right now 
in our economy. And certainly we need to be mindful of the way 
that rules can enable the big guys to get bigger and exclude 
innovation and competition from responsible good guys, such as 
are represented in this industry right now.
    We haven't dealt with national security. There are huge 
implications for national security. I will tell you, as a 
Member of the Armed Services Committee, classified briefings on 
this issue have abounded. And the threats that are posed by 
some of our adversaries--China has been mentioned here. But the 
sources of threats to this Nation in this space are very real 
and urgent. We're not going to deal with them today, but we do 
need to deal with them, and we will, hopefully, in this 
Committee.
    And then on the issue of a new agency, you know, I've been 
doing this stuff for a while. I was Attorney General of 
Connecticut for 20 years. I was a Federal prosecutor, the U.S. 
attorney. Most of my career has been in enforcement. And I will 
tell you something, you can create 10 new agencies, but if you 
don't give them the resources--and I'm talking not just about 
dollars, I'm talking about scientific expertise--you guys will 
run circles around them. And it isn't just the models or the 
generative AI that will run circles around them, but it is the 
scientists in your companies.
    For every success story in Government regulation, you can 
think of five failures. That's true of the FDA, it's true of 
the IAEA, it's true of the SEC, it's true of the whole alphabet 
list of Government agencies. And I hope our experience here 
will be different. But this Pandora's box requires more than 
just the words or the concepts--licensing, a new agency. 
There's some real hard decisionmaking, as Ms. Montgomery has 
alluded to, about how to frame the rules to fit the risk. 
First, do no harm. Make it effective, make it enforceable, make 
it real.
    I think we need to grapple with the hard questions here 
that, you know, frankly, this initial hearing, I think, has 
raised very successfully but not answered. And I thank our 
colleagues who have participated and made these very creative 
suggestions. I'm very interested in enforcement. I, you know, 
literally 15 years ago, I think, advocated abolishing Section 
230. What's old is new again. You know, now people are talking 
about abolishing Section 230. Back then, it was considered 
completely unrealistic, but enforcement really does matter.
    I want to ask Mr. Altman, because of the privacy issue--and 
you've suggested that you have an interest in protecting the 
privacy of the data that may come to you or be available--what 
specific steps do you take to protect privacy?
    Mr. Altman. One is that we don't train on any data 
submitted to our API. So, if you're a business customer of ours 
and submit data, we don't train on it at all. We do retain it 
for 30 days, solely for the purpose of trust and safety 
enforcement. But that's different than training on it. If you 
use ChatGPT, you can opt out of us training on your data. You 
can also delete your conversation history or your whole 
account.
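    [For illustration only, and not OpenAI's actual system: a 
retention policy of the kind Mr. Altman describes--API data is 
never used for training and is deleted after 30 days, while 
consumer data honors an opt-out--might be expressed as follows. 
All names here are hypothetical.]

        from dataclasses import dataclass
        from datetime import datetime, timedelta

        RETENTION = timedelta(days=30)

        @dataclass
        class Record:
            source: str            # "api" or "consumer"
            received_at: datetime
            opted_out: bool = False

        def may_train_on(rec: Record) -> bool:
            if rec.source == "api":
                return False           # never train on API data
            return not rec.opted_out   # consumers can opt out

        def should_delete(rec: Record, now: datetime) -> bool:
            # API data is kept only for trust-and-safety review.
            return (rec.source == "api"
                    and now - rec.received_at > RETENTION)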
    Chair Blumenthal. Ms. Montgomery, I know you don't deal 
directly with consumers, but do you take steps to protect 
privacy as well?
    Ms. Montgomery. Absolutely. And we even filter our large 
language models for content that includes personal information 
that may have been pulled from public data sets, as well. So, 
we apply an additional level of filtering.
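    [For illustration only, and not IBM's actual pipeline: a 
crude version of the filtering Ms. Montgomery describes would 
scan training documents for patterns that look like personal 
information and drop the matches. Real systems use far more 
sophisticated detection.]

        import re

        PII_PATTERNS = [
            re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # emails
            re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),   # phones
            re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # SSN-like
        ]

        def contains_pii(text: str) -> bool:
            return any(p.search(text) for p in PII_PATTERNS)

        def filter_corpus(docs: list[str]) -> list[str]:
            # Keep only documents with no obvious personal info.
            return [d for d in docs if not contains_pii(d)]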
    Chair Blumenthal. Professor Marcus, you made reference to 
self-awareness, self-learning. Already, we're talking about the 
potential for jailbreaks. How soon do you think that new kind 
of generative AI will be usable, will be practical?
    Professor Marcus. New AI that is self-aware and so forth, 
or----
    Chair Blumenthal. Yes.
    Professor Marcus. I mean, I have no idea on that one. I 
think we don't really understand what self-awareness is, and so 
it's hard to put a date on it. In terms of self-improvement, 
there's some modest self-improvement in current systems, but 
one could imagine a lot more, and that could happen in 2 years, 
it could happen in 20 years. There are basic paradigms that 
haven't been invented yet. Some of them we might want to 
discourage, but it's a bit hard to put timelines on them.
    And just going back to enforcement for one second, one 
thing that is absolutely paramount, I think, is far greater 
transparency about what the models are and what the data are. 
That doesn't necessarily mean everybody in the general public 
has to know exactly what's in one of these systems, but I think 
it means that there needs to be some enforcement arm that can 
look at these systems, can look at the data, can perform tests, 
and so forth.
    Chair Blumenthal. Let me ask you, all of you: I think there 
has been a reference to elections and banning outputs involving 
elections. Are there other areas where you think--what are the 
other high-risk or highest-risk areas where you would either 
ban or establish especially strict rules? Ms. Montgomery?
    Ms. Montgomery. The space around misinformation, I think, 
is a hugely important one, and coming back to the points of 
transparency, you know, knowing what content was generated by 
AI is going to be a really critical area that we need to 
address.
    Chair Blumenthal. Any others?
    Professor Marcus. I think medical misinformation is 
something to really worry about. We have systems that 
hallucinate things. They're going to hallucinate medical 
advice. Some of the advice they'll give is good, some of it's 
bad. We need really tight regulation around that. Same with 
psychiatric advice, people using these things as kind of ersatz 
therapists. I think we need to be very concerned about that.
    I think we need to be concerned about internet access for 
these tools, when they can start making requests both of people 
and internet things. It's probably okay if they just do search, 
but as they do more intrusive things on the internet, like, do 
we want them to be able to order equipment or order chemistry 
and so forth? So, as we empower these systems more by giving 
them internet access, I think we need to be concerned about 
that.
    And then we've hardly talked at all about long-term risk. 
Sam alluded to it briefly. I don't think that's where we are 
right now, but as we start to approach machines that have a 
larger footprint on the world, beyond just having a 
conversation, we need to worry about that and think about how 
we're going to regulate that and monitor it and so forth.
    Chair Blumenthal. In a sense, we've been talking about bad 
guys or certain bad actors manipulating AI to do harm.
    Professor Marcus. Manipulating people.
    Chair Blumenthal. And manipulating people. But also, 
generative AI can manipulate the manipulators.
    Professor Marcus. It can. I mean, there's many layers of 
manipulation that are possible, and I think we don't yet really 
understand the consequences. Dan Dennett just sent me a 
manuscript last night that will be in The Atlantic in a few 
days, on what he calls counterfeit people. It's a wonderful 
metaphor. These systems are almost like counterfeit people, and 
we don't really honestly understand what the consequence of 
that is. They're not perfectly humanlike yet, but they're good 
enough to fool a lot of the people a lot of the time, and that 
introduces lots of problems, for example, cybercrime and how 
people might try to manipulate markets and so forth. So, it's a 
serious concern.
    Chair Blumenthal. In my opening, I suggested three 
principles: transparency, accountability, and limits on use. 
Would you agree that those are a good starting point, Ms. 
Montgomery?
    Ms. Montgomery. One hundred percent. And as you also 
mentioned, industry shouldn't wait for Congress. That's what 
we're doing here at IBM.
    Chair Blumenthal. There's no reason that----
    Ms. Montgomery. Absolutely.
    Chair Blumenthal [continuing]. Industry should wait for 
Congress.
    Ms. Montgomery. Yes.
    Chair Blumenthal. Professor Marcus?
    Professor Marcus. I think those three would be a great 
start. I mean, there are things like the White House ``Bill of 
Rights,'' for example, that show, I think, a large consensus. 
The UNESCO guidelines and so forth show a large consensus 
around what it is we need, and the real question is definitely, 
now, how are we going to put some teeth in it and try to make 
these things actually enforced? So, for example, we don't have 
transparency yet. We all know we want it, but we're not doing 
enough to enforce it.
    Chair Blumenthal. Mr. Altman?
    Mr. Altman. I certainly agree that those are important 
points. I would add that--and Professor Marcus touched on this. 
I would add that as we--we spent most of the time today on 
current risks, and I think that's appropriate, and I'm very 
glad we have done it. As these systems do become more capable--
and I'm not sure how far away that is, but maybe not super 
far--I think it's important that we also spend time talking 
about how we're going to confront those challenges.
    Chair Blumenthal. Having talked to you privately, I agree--
--
    Mr. Altman. You know how much I care.
    Chair Blumenthal. I agree that you care, deeply and 
intensely, but also that prospect of increased danger or risk 
resulting from even more complex and capable AI mechanisms 
certainly may be closer than a lot of people appreciate.
    Professor Marcus. Let me just add, for the record, that I'm 
sitting next to Sam, closer than I've ever sat to him except 
once before in my life, and his sincerity in talking about 
those fears is very apparent physically in a way that just 
doesn't communicate on the television screen----
    Mr. Altman. Thank you.
    Professor Marcus [continuing]. But communicates from here.
    Chair Blumenthal. Thank you. Senator Hawley.
    Senator Hawley. Thank you again, Mr. Chairman, for a great 
hearing. Thanks to the witnesses.
    So, I've been keeping a little list here of the potential 
downsides or harms, risks of generative AI, even in its current 
form. Let's just run through it. Loss of jobs. And this isn't 
speculative, I think your company, Ms. Montgomery, has 
announced that it's potentially laying off 7,800 people, a 
third of your non-consumer-facing workforce, because of AI. So, 
loss of jobs; invasion of privacy, personal privacy, on a scale 
we've never before seen; manipulation of personal behavior; 
manipulation of personal opinions; and potentially the 
degradation of free elections in America. Did I miss anything? 
I mean, this is--this is quite a list.
    I noticed that an eclectic group of about 1,000 technology 
and AI leaders, everybody from Andrew Yang to Elon Musk, 
recently called for a 6-month moratorium on any further AI 
development. Were they right? Do you join those calls? Are they 
right to do that? Should we pause for 6 months or----
    Professor Marcus. Your characterization's not quite 
correct. I actually signed that letter. About 27,000 people 
signed it. It did not call for a ban on all AI research, nor on 
all AI, but only on a very specific thing, which would be 
systems like GPT-5. Every other piece of research that's ever 
been done, it was actually supportive or neutral about. And it 
specifically called for more research on trustworthy and safe 
AI.
    Senator Hawley. So, you think that we should take a 
moratorium, a 6-month moratorium or more on anything beyond 
Chat--GPT-4?
    Professor Marcus. I took the letter--what is the famous 
phrase--spiritually, not literally. What was the famous phrase?
    Senator Hawley. Well, I'm asking for your opinion now, 
though. So, would you----
    Professor Marcus. My--my----
    Senator Hawley. Did you endorse the 6-month moratorium?
    Professor Marcus. My opinion is that the moratorium that we 
should focus on is actually deployment until we have good 
safety cases. I don't know that we need to pause that 
particular project, but I do think its emphasis on focusing 
more on AI safety, on trustworthy, reliable AI is exactly 
right.
    Senator Hawley. Deployment means not making it available to 
the public?
    Professor Marcus. Yes. So----
    Senator Hawley. You'd pause that?
    Professor Marcus. So, my concern is about things that are 
deployed at a scale of, let's say, 100 million people without 
any external review. I think that we should think very 
carefully about doing that.
    Senator Hawley. What about you, Mr. Altman? Do you agree 
with that? Would you pause any further development for 6 months 
or longer?
    Mr. Altman. So, first of all, after we finished training 
GPT-4, we waited more than 6 months to deploy it. We are not 
currently training what will be GPT-5. We don't have plans to 
do it in the next 6 months. But I think the frame of the letter 
is wrong. What matters is audits, red teaming, safety standards 
that a model needs to pass before training. If we pause for 6 
months, then I'm not really sure what we do then. Do we pause 
for another six? Do we kind of come up with some rules then?
    The standards that we have developed and that we've used 
for GPT-4 deployment--we want to build on those, but we think 
that's the right direction, not a calendar-clock pause. There 
may be times--I expect there will be times when we find 
something that we don't understand and we really do need to 
take a pause, but we don't see that yet, never mind all the 
benefits.
    Senator Hawley. You don't see what yet? You're comfortable 
with all of the potential ramifications from the current 
existing technology----
    Mr. Altman. I'm sorry.
    Senator Hawley [continuing]. We've talked about today?
    Mr. Altman. We don't see reasons not to train a new one. 
For deployment, as I mentioned, I think there are all sorts of 
risky behaviors, and there are limits we put in place. We have 
to pull things back sometimes, add new ones. I meant we don't 
see something that would stop us from training the next model, 
where we'd be so worried that we'd create something dangerous 
even in that process, let alone in the deployment.
    Senator Hawley. What about----
    Mr. Altman. But that may happen.
    Senator Hawley. What about you, Ms. Montgomery?
    Ms. Montgomery. I think we need to use the time to 
prioritize ethics and responsible technology as opposed to 
pausing development.
    Senator Hawley. Well, wouldn't a pause in development help 
the development of protocols for safety standards and ethics?
    Ms. Montgomery. I'm not sure how practical it is to pause, 
but we absolutely should be prioritizing safety protocols.
    Senator Hawley. Okay. The point about practicality leads me 
to this. I'm interested in this talk about an agency, and, you 
know, maybe that would work. Although, having seen how agencies 
work in this Government, they usually get captured by the 
interests that they're supposed to regulate. They usually get 
controlled by the people who they're supposed to be watching. I 
mean, that's just been our history for 100 years. Maybe this 
agency would be different.
    I have a little different idea. Why don't we just let 
people sue you? Why don't we just make you liable in court? We 
can do that. We know how to do that. We can pass a statute. We 
can create a Federal right of action that will allow private 
individuals who are harmed by this technology to get into court 
and to bring evidence into court.
    And it can be anybody. I mean, you want to talk about 
crowdsourcing, we'll just open the courthouse doors. We'll 
define a broad right of action--a private right of action, with 
private citizens able to bring class actions. We'll just open it up. 
We'll allow people to go into court. We'll allow them to 
present evidence. They say that they were harmed by--they were 
given medical misinformation, they were given election 
misinformation, whatever. Why not do that? Mr. Altman?
    Mr. Altman. I mean, please forgive my ignorance. Can't 
people sue us? It's not like----
    Senator Hawley. Well, you're not protected by Section 230, 
but there's not currently, I don't think, a Federal right of 
action, private right of action that says that if you are 
harmed by generative AI technology, we will guarantee you the 
ability to get into court.
    Mr. Altman. Oh. Well, I think there's, like, a lot of other 
laws where if, you know, technology harms you, there's 
standards that we could be sued under, unless I'm really 
misunderstanding how things work. If the question is, ``Are 
clearer laws about the specifics of this technology and 
consumer protections a good thing?'' I would say definitely 
yes.
    Professor Marcus. The laws that we have today were designed 
long before we had artificial intelligence, and I do not think 
they give us enough coverage. The plan that you propose, I 
think, as a hypothetical, would certainly make a lot of lawyers 
wealthy, but I think it would be too slow to effect a lot of 
the things that we care about. And there are gaps in the law. 
For example, we don't really know----
    Senator Hawley. Wait, you think it'd be slower than 
Congress?
    Professor Marcus. Yes, I do, in some ways.
    Senator Hawley. Really?
    Professor Marcus. Well----
    Senator Hawley. Do you know----
    Professor Marcus [continuing]. Litigation can take a decade 
or more.
    Senator Hawley. Oh, but the threat----
    Professor Marcus. I think you guys----
    Senator Hawley [continuing]. Of litigation is a powerful 
tool. I mean, how would IBM like to be sued for----
    Professor Marcus. I'm----
    Senator Hawley [continuing]. $100 billion?
    Professor Marcus [continuing]. In no way asking to take 
litigation off the table, among the tools. But I think, for 
example, if I can continue, there are areas like copyright 
where we don't really have laws. We don't really have a way of 
thinking about wholesale misinformation, as opposed to 
individual pieces of it, where, say, a foreign actor might make 
billions of pieces of misinformation, or a local actor. We have 
some laws around market manipulation we could apply, but we'd 
get in a lot of situations where we don't really know which 
laws apply. There would be loopholes. The system is really not 
thought through.
    And, in fact, we don't even know that 230 does or does not 
apply here, as far as I know. I think that that's something a 
lot of people speculated about this afternoon, but it's not 
solid.
    Senator Hawley. Well, we could fix that.
    Professor Marcus. Well, the question is, how?
    Senator Hawley. Oh, easy. It would be easy for us to say 
that Section 230 doesn't apply to generative AI. Ms. 
Montgomery, I'll give you----
    Professor Marcus. I think that's----
    Senator Hawley [continuing]. The last word, and then I'll--
--
    Professor Marcus [continuing]. An important start.
    Senator Hawley [continuing]. Yield, Mr. Chairman.
    Ms. Montgomery. Just on the point of----
    Chair Blumenthal. You suggested, Ms. Montgomery, a duty of 
care, which I think fits the idea of a private right of action.
    Ms. Montgomery. Yes, that's exactly right. And also, AI is 
not a shield. Right? So, if a company discriminates in granting 
credit, for example, or in the hiring process, by virtue of the 
fact that they relied too significantly on an AI tool, they're 
responsible for that today, regardless of whether they used a 
tool or a human to make that decision.
    Chair Blumenthal. I'm going to turn to Senator Booker for 
some final questions, but I just want to make a quick point 
here on the issue of the moratorium. I think we need to be 
careful. The world won't wait. The rest of the global 
scientific community isn't going to pause. We have adversaries 
that are moving ahead, and sticking our head in the sand is not 
the answer. Safeguards and protections, yes, but a flat stop 
sign is something I would be very, very worried about.
    Professor Marcus. Without militating for any sort of pause, 
I would just again emphasize there is a difference between 
research, which surely we need to do to keep pace with our 
foreign rivals, and deployment at really massive scale. You 
know, you could deploy things at a scale of a million people or 
10 million people, but not 100 million people or a billion 
people. And if there are risks, you might find them out sooner 
and be able to close the barn doors before the horses leave 
rather than after.
    Chair Blumenthal. Senator Booker.
    Senator Booker. Yes. There will be no pause. I mean, 
there's no enforcement body to force a--it's just not going to 
happen. It's nice to call for it, for whatever just reasons, 
but forgive me for sounding skeptical. Nobody's 
pausing. This thing is----
    Professor Marcus. I would agree.
    Senator Booker. You----
    Professor Marcus. I would agree. And I don't think it's a 
realistic thing in the world. The reason I personally signed 
the letter was to call attention to how serious the problems 
were and to emphasize spending more of our efforts on 
trustworthy and safe AI rather than just making a bigger 
version of something we already know to be unreliable.
    Senator Booker. Yes. So, I'm a futurist. I love the 
excitement of the future, and I guess there's a famous 
question, ``If you 
couldn't control for your race, your gender, where you would 
land on the planet Earth, at what time in humanity would you 
want to be born?'' Everyone would say, ``Right now.'' It's 
still the best time to be alive because of technology, 
innovation, and everything. And I'm excited about what the 
future holds.
    But the destructiveness that I've also seen, as a person 
who's seen the transformative technologies of the last 25 
years, is what really concerns me. 
And one of the things, especially with companies that are 
designed to want to keep my attention on screens--and I'm not 
just talking about new media; 24-hour cable news is a great 
example of people that want to keep your eyes on screens--I 
have a lot of concerns about the corporate intention. And, Sam, 
this is again why I find your story so fascinating and your 
values--which I believe in, from our conversations--so 
compelling to me.
    But absent that, I really want to just explore what happens 
when these companies that are already controlling so much of 
our lives--a lot has been written about the FAANG companies. 
What happens when they are the ones that are dominating this 
technology, as they did before? So, Professor Marcus, does the 
role that corporate power, corporate concentration plays in 
this realm raise any concern for you--that a few companies 
might control this whole area?
    Professor Marcus. I radically changed the shape of my own 
life in the last few months, and it was because of what 
happened with Microsoft releasing Sydney. And it didn't go the 
way I thought it would. In one way, it did, which is I 
anticipated the hallucinations. I wrote an essay, which I have 
in the appendix, ``What to Expect When You're Expecting . . . 
GPT-4.''
    And I said that it would still be a good tool for 
misinformation, that it would still have trouble with physical 
reasoning, psychological reasoning, that it would hallucinate. 
And then along came Sydney, and the initial press reports were 
quite favorable. And then there was the famous article by Kevin 
Roose, in which it recommended he get a divorce. And I had seen 
Tay, and I had seen Galactica, from Meta, and those had been 
pulled after they had problems. And Sydney clearly had 
problems.
    What I would have done, had I run Microsoft, which clearly 
I do not, would have been to temporarily withdraw it from the 
market. And they didn't. And that was a wakeup call to me and a 
reminder that even if you have a company like OpenAI that is a 
nonprofit--and Sam's values, I think, have become clear today--
other people can buy those companies and do what they like with 
them. And, you know, maybe we have a stable set of actors now, 
but the amount of power that these systems have to shape our 
views and our lives is really, really significant.
    And that doesn't even get into the risks that someone might 
repurpose them deliberately for all kinds of bad purposes. And 
so, in the middle of February, I stopped writing much about 
technical issues in AI, which is most of what I have written 
about for the last decade, and said, ``I need to work on 
policy. This is frightening.''
    Senator Booker. And, Sam, I want to give you an 
opportunity, as my sort of last question or so--don't you have 
concerns about--I graduated from Stanford. I know so many of 
the players in the Valley, from VC folks, angel folks, to a lot 
of founders of companies that we all know. Do you have some 
concern about a few players with extraordinary resources and 
power, power to influence Washington?
    I mean, I see us--I'm a big believer in the free market, 
but the reason why, when I walk into a bodega, a Twinkie is 
cheaper than an apple, or a Happy Meal costs less than a bucket 
of salad, is because of the way the Government tips the scales 
to pick winners and losers. So, the free market is not what it 
should be, when you have large corporate power that can even 
influence the game here. Do you have some concerns about that 
in this next era of technological innovation?
    Mr. Altman. Yes. I mean, again, that's so much of why we 
started OpenAI. We have huge concerns about that. I think it's 
important to democratize the inputs to these systems, the 
values that we're going to align to. And I think it's also 
important to give people wide use of these tools. When we 
started the API strategy, which is a big part of how we make 
our systems available for anyone to use, there was a huge 
amount of skepticism over that. And it does come with 
challenges, that's for sure. But we think putting this in the 
hands of a lot of people and not in the hands of a few 
companies is really quite important, and we are seeing the 
resultant innovation boom from that.
    But it is absolutely true that the number of companies that 
can train the true frontier models is going to be small, just 
because of the resources required. And so I think there needs 
to be incredible scrutiny on us and our competitors. I think 
there is a rich and exciting industry happening of incredibly 
good research and new startups that are not just using our 
models but creating their own. And I think it's important to 
make sure that whatever regulatory stuff happens, whatever new 
agencies may or may not happen, we preserve that fire, because 
that's critical.
    Senator Booker. Well, I'm a big believer in the 
democratizing potential of technology, but I've seen the 
promise of that fail, time and time again, where people said, 
``Oh, this is going to have a big democratizing force.'' My 
team works on a lot of issues about the reinforcing of bias 
through algorithms, the failure to advertise certain 
opportunities in certain zip codes. But you seem to be saying, 
and I heard this with Web3----
    Mr. Altman. Yes.
    Senator Booker [continuing]. That this is going to be--
DeFi, decentralized finance, all these things are going to 
happen. But this seems to me not even to offer that promise, 
because the people who are designing these--it takes so much 
power, energy, resources. Are you saying that my dreams of 
technology further democratizing opportunity and more are 
possible within a technology that is ultimately, I think, going 
to be very centralized to a few players who already control so 
much?
    Mr. Altman. So, this point that I made about use of the 
model and building on top of it, as a--this is really a new 
platform, right? It is definitely important to talk about who's 
going to create the models. I want to do that. I also think 
it's really important to decide to whose values we're going to 
align these models.
    But in terms of using the models, the people that build on 
top of the OpenAI API do incredible things. And, you know, 
people frequently comment, like, ``I can't believe you get this 
much technology for this little money.'' And so what people 
are--the companies people are building, putting AI everywhere, 
using our API, which does let us put safeguards in place--I 
think that's quite exciting. And I think that is how it is 
being--not how it's going to be, but how it is being 
democratized right now. There is a whole new Cambrian explosion 
of new businesses, new products, new services happening by lots 
of different companies on top of these models.
    Senator Booker. And so I'll say, Chairman, as I close, that 
most industries resist even reasonable regulation, from 
seatbelt laws to--we've been talking a lot recently about rail 
safety. The only way we're going to see the democratization of 
values, I think--and while there are noble companies out 
there--is if we create rules of the road that enforce certain 
safety measures, like we've seen with other technology. Thank 
you.
    Chair Blumenthal. Thanks, Senator Booker. And I couldn't 
agree more that, in terms of consumer protection, which I've 
been doing for a while, participation by the industry is 
tremendously important, and not just rhetorically, but in real 
terms, because we have a lot of industries that come before us 
and say, ``Oh, we're all in favor of rules, but not those 
rules. Those rules we don't like.'' And it's every rule, in 
fact, that they don't like.
    And I sense that there is a willingness to participate here 
that is genuine and authentic. I thought about asking ChatGPT 
to do a new version of ``Don't Stop Thinking About Tomorrow,'' 
because that's what we need to be doing here.
    [Laughter.]
    Chair Blumenthal. And as Senator Hawley has pointed out, 
Congress doesn't always move at the pace of technology, and 
that may be a reason why we need a new agency, but we also need 
to recognize the rest of the world is going to be moving, as 
well.
    And you've been enormously helpful in focusing us and 
illuminating some of these questions and performed a great 
service by being here today. So, thank you to every one of our 
witnesses. And I'm going to close the hearing, leave the record 
open for 1 week in case anyone wants to submit anything. I 
encourage any of you who have either manuscripts that are going 
to be published or observations from your companies to submit 
them to us.
    And we look forward to our next hearing. This one is 
closed.
    [Whereupon, at 12:54 p.m., the hearing was adjourned.]
    [Additional material submitted for the record follows.]

                            A P P E N D I X

              Additional Material Submitted for the Record

[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]


                                 [all]