[House Hearing, 118 Congress]
[From the U.S. Government Publishing Office]






                      ADVANCES IN AI: ARE WE READY 
                         FOR A TECH REVOLUTION?

=======================================================================

                                HEARING

                               before the

               SUBCOMMITTEE ON CYBERSECURITY, INFORMATION
                 TECHNOLOGY, AND GOVERNMENT INNOVATION

                                 of the

                         COMMITTEE ON OVERSIGHT
                           AND ACCOUNTABILITY

                        HOUSE OF REPRESENTATIVES

                    ONE HUNDRED EIGHTEENTH CONGRESS

                             FIRST SESSION

                               __________

                             MARCH 8, 2023

                               __________

                            Serial No. 118-7

                               __________

  Printed for the use of the Committee on Oversight and Accountability


                       Available on: govinfo.gov 
                         oversight.house.gov or 
                             docs.house.gov 
                             _________
                              
                 U.S. GOVERNMENT PUBLISHING OFFICE
                 
51-473 PDF               WASHINGTON : 2023
                             
               COMMITTEE ON OVERSIGHT AND ACCOUNTABILITY

                    JAMES COMER, Kentucky, Chairman

Jim Jordan, Ohio                     Jamie Raskin, Maryland, Ranking 
Mike Turner, Ohio                        Minority Member
Paul Gosar, Arizona                  Eleanor Holmes Norton, District of 
Virginia Foxx, North Carolina            Columbia
Glenn Grothman, Wisconsin            Stephen F. Lynch, Massachusetts
Gary Palmer, Alabama                 Gerald E. Connolly, Virginia
Clay Higgins, Louisiana              Raja Krishnamoorthi, Illinois
Pete Sessions, Texas                 Ro Khanna, California
Andy Biggs, Arizona                  Kweisi Mfume, Maryland
Nancy Mace, South Carolina           Alexandria Ocasio-Cortez, New York
Jake LaTurner, Kansas                Katie Porter, California
Pat Fallon, Texas                    Cori Bush, Missouri
Byron Donalds, Florida               Shontel Brown, Ohio
Kelly Armstrong, North Dakota        Jimmy Gomez, California
Scott Perry, Pennsylvania            Melanie Stansbury, New Mexico
William Timmons, South Carolina      Robert Garcia, California
Tim Burchett, Tennessee              Maxwell Frost, Florida
Marjorie Taylor Greene, Georgia      Becca Balint, Vermont
Lisa McClain, Michigan               Summer Lee, Pennsylvania
Lauren Boebert, Colorado             Greg Casar, Texas
Russell Fry, South Carolina          Jasmine Crockett, Texas
Anna Paulina Luna, Florida           Dan Goldman, New York
Chuck Edwards, North Carolina        Jared Moskowitz, Florida
Nick Langworthy, New York
Eric Burlison, Missouri

                       Mark Marin, Staff Director
       Jessica Donlon, Deputy Staff Director and General Counsel
             Raj Bharwani, Senior Professional Staff Member
                 Lauren Lombardo, Senior Policy Analyst
                      Peter Warren, Senior Advisor
      Mallory Cogar, Deputy Director of Operations and Chief Clerk

                      Contact Number: 202-225-5074

                  Julie Tagen, Minority Staff Director
                      Contact Number: 202-225-5051
                                 ------                                

 Subcommittee on Cybersecurity, Information Technology, and Government 
                               Innovation

                 Nancy Mace, South Carolina, Chairwoman
William Timmons, South Carolina      Gerald E. Connolly, Virginia 
Tim Burchett, Tennessee                  Ranking Minority Member
Marjorie Taylor Greene, Georgia      Ro Khanna, California
Anna Paulina Luna, Florida           Stephen F. Lynch, Massachusetts
Chuck Edwards, North Carolina        Kweisi Mfume, Maryland
Nick Langworthy, New York            Jimmy Gomez, California
Eric Burlison, Missouri              Jared Moskowitz, Florida

                         C  O  N  T  E  N  T  S

                              ----------                              
Hearing held on March 8, 2023

                               Witnesses

Dr. Eric Schmidt, Chair, Special Competitive Studies Project
Oral Statement

Dr. Aleksander Madry, Director, MIT Center for Deployable
  Machine Learning, and Cadence Design Systems Professor of
  Computing, Massachusetts Institute of Technology
Oral Statement

Dr. Scott Crowder, Vice President & CTO, IBM Quantum/IBM
  Systems, Technical Strategy, and Transformation
Oral Statement

Ms. Merve Hickok, Chair and Research Director, Center for AI and
  Digital Policy
Oral Statement

Written opening statements and statements for the witnesses are 
  available on the U.S. House of Representatives Document 
  Repository at: docs.house.gov.

                           Index of Documents

                              ----------                              


  * Questions for the Record: to Dr. Crowder; submitted by Rep. 
  Mace.

  * Questions for the Record: to Dr. Crowder; submitted by Rep. 
  Connolly.

  * Questions for the Record: to Dr. Schmidt; submitted by Rep. 
  Mace.

  * Questions for the Record: to Dr. Schmidt; submitted by Rep. 
  Connolly.

  * Questions for the Record: to Ms. Hickok; submitted by Rep. 
  Connolly.

  * Questions for the Record: to Dr. Madry; submitted by Rep. 
  Mace.

  * Questions for the Record: to Dr. Madry; submitted by Rep. 
  Connolly.

 
                      ADVANCES IN AI: ARE WE READY 
                         FOR A TECH REVOLUTION? 

                              ----------                              


                        Wednesday, March 8, 2023

                        House of Representatives

               Committee on Oversight and Accountability

 Subcommittee on Cybersecurity, Information Technology, and Government 
                               Innovation

                                           Washington, D.C.

    The Subcommittee met, pursuant to notice, at 2:19 p.m., in 
room 2154, Rayburn House Office Building, Hon. Nancy Mace 
[Chairwoman of the Subcommittee] presiding.
    Present: Representatives Mace, Timmons, Burchett, Greene, 
Luna, Edwards, Langworthy, Burlison, Connolly, Lynch, Khanna, 
Mfume, and Gomez.
    Ms. Mace. All right. Good afternoon, everyone. The 
Subcommittee on Cybersecurity, Information Technology, and 
Government Innovation will come to order.
    Welcome and good afternoon to everyone who is here on both 
sides of the aisle. Without objection, the Chair may declare a 
recess at any time. I recognize myself for the purpose of 
making an opening statement, if I may.
    Thank you all for being here today and for the time, effort, 
and commitment you have devoted to this congressional hearing on 
artificial intelligence. As Chair of this committee, I recognize 
myself for five minutes to provide an opening statement on this 
very important topic, which many of us here today are extremely 
passionate about.
    The field of artificial intelligence is rapidly evolving, 
and one of the most exciting developments in recent years has 
been the emergence of generative models. These models have 
shown the ability to produce human-like language and even 
generate images, videos, and music. While the potential 
applications of generative models are vast and impressive, 
there are also serious concerns about the ethical implications 
of their use.
    As we explore the potential of AI and generative models, it 
is essential that we consider the impact they may have on 
society. We must work together to ensure that AI is developed 
and used in a way that is ethical, transparent, and beneficial 
to all of society. This will require collaboration between 
government, industry, and academia to ensure that the AI we 
develop is reliable, trustworthy, and aligned with public 
policy goals.
    Moreover, we must consider the operational legal 
responsibilities of companies that use these models. AI can 
help us make better decisions, but we must also ensure that 
those decisions are ethical, unbiased, and transparent. To 
achieve this, we need to establish guidelines for AI 
development and use. We need to establish a clear legal 
framework to hold companies accountable for the consequences of 
their AI systems.
    The Federal Government has an important role to play in the 
development and deployment of AI. As the largest employer in 
the United States, the government can use AI to improve 
operations and provide better services to the public. AI can 
help reduce costs, improve efficiency, and enhance the accuracy 
of decision-making, for example. AI can be used to analyze vast 
amounts of data to identify patterns and make predictions which 
can help government agencies make more informed decisions.
    As we move forward, we must also ensure that AI is used for 
the benefit of society as a whole. While AI has the potential 
to improve efficiency, increase productivity, and enhance the 
quality of life, it can also be used to automate jobs, invade 
privacy, and perpetuate inequality. We must also work together 
to ensure that AI is used in a way that benefits everyone, not 
just a privileged few.
    In conclusion, the emergence of generative models 
represents a significant step forward in the development of 
artificial intelligence. However, with the progress comes 
responsibility. We must ensure that AI is developed and used in 
a way that is ethical, transparent, and beneficial to society, 
and the Federal Government has an important role in this 
effort.
    I look forward to working with my colleagues on both sides 
of the aisle on this committee to ensure that the U.S. remains 
a leader in the development of AI technologies. Thank you for 
your time and attention.
    Now before I yield back, I'd like to note that everything I 
just said in my opening statement was, you guessed it, written 
by ChatGPT, an AI.
    The advances that have been made just in the last few weeks 
and months have been radical, they've been amazing, and show 
the technology is rapidly evolving. Every single word up until 
this sentence was generated entirely by ChatGPT. And perhaps 
for the first time in a committee hearing--I know Jake 
Auchincloss said a statement on the floor a couple weeks ago, 
but I believe this is the first opening statement of a hearing 
generated by ChatGPT or other AI models.
    I now yield to the distinguished Ranking Member, Mr. 
Connolly, for your opening statement.
    Mr. Connolly. Thank you, Madam Chairwoman. And let me first 
thank you for reaching out on a bipartisan basis to talk about 
this Subcommittee and our agenda. I really appreciate that, and 
I wish more committees and subcommittees operated that way. And 
I think we had fruitful conversation. We actually had a meeting 
with certain cyber officials of the executive branch while we 
were in Munich at the Security Conference. And, again, I just 
appreciate your approach, and hope we can collaborate and make 
music together over the next two years.
    The Cybersecurity, Information Technology, and Government 
Innovation Subcommittee has dedicated its first hearing to 
examining advances in artificial intelligence and its 
revolutionary impact on society. This decision reflects our 
membership's interest in and commitment to exploring, 
understanding, and implementing emergent technologies.
    Last Congress, Chairwoman Nancy Mace, Representative Ro 
Khanna, and I introduced the Quantum Computing and 
Cybersecurity Preparedness Act, which encourages Federal 
agencies to adopt post-quantum cryptography. I'm also pleased 
the bill was signed into law just a few months ago. I look 
forward to future bipartisan collaboration as we define the 
problem sets associated with AI and design solutions that 
promote innovation while simultaneously mitigating the dangers 
and risks inherent in AI technology.
    The Federal Government has a historic, necessary, and 
appropriate role in guiding and investing in research and development for 
new and emerging technologies. The Defense Advanced Research 
Projects Agency, DARPA, the well-known research and development 
agency of the United States Department of Defense, is 
responsible for the development of myriad emerging 
technologies.
    One of the most famous successes includes the ARPANET, 
which eventually evolved into the internet which we know today. 
Other innovations include microelectronics, global positioning 
systems, infrared night imaging, unmanned vehicles, 
and what eventually became cloud technology.
    AI will require similar Federal investment and engagement. 
As stated in the January 2023 final report from the National 
Artificial Intelligence Research Resource Task Force, the recent 
CHIPS and Science Act reinforces the importance of democratizing 
access to a national AI research cyberinfrastructure for U.S. 
talent in frontier science and engineering, including AI. The 
report calls for $2.6 billion over the next six years for the 
purpose of funding national AI research infrastructure.
    While government certainly plays a role in R&D, a very 
important role, it also has a regulatory role. Congress has the 
responsibility to posture careful and thoughtful discussions to 
balance the benefits of innovation with the potential risks of 
emerging technology.
    A recent National Bureau of Economic Research report found 
that AI could save the United States healthcare industry more 
than $360 billion a year and be used as a powerful tool to 
detect health risks. A GAO report predicts AI could help 
identify and patch vulnerabilities and defend against cyber 
attacks, automate arduous tasks, and expand jobs within the 
industry.
    As with all technologies, in the wrong hands, AI could be 
used to hack financial data, steal national intelligence, and 
create deep fakes, blurring people's ability to discern reality 
and sowing further distrust within our democracy. AI can 
cause unintentional harms. GAO found that certain groups, such 
as workers with no college education, tended to hold jobs 
susceptible to automation and eventually unemployment.
    Another concern relates to machine learning and data. ML, 
machine learning, uses data samples to learn and recognize 
patterns, such as scanning hundreds or thousands of pictures of 
lungs to better understand pulmonary fibrosis and revolutionize 
medical care. But what happens if those lung samples only come 
from a homogeneous portion of the population, and that medical 
breakthrough is inaccurately applied? When it comes to data, 
equity is accuracy, and we must ensure datasets include as much 
and as comprehensive a universe of data as possible.
    It is paramount that during this hearing we begin to create 
a flexible and robust framework, particularly for government's 
use of AI to protect democratic values and preemptively address 
social, economic, and moral dilemmas AI might raise.
    During the last Congress, this committee voted to pass the 
AI Training Act and the AI in Counterterrorism Oversight 
Enhancement Act, with bipartisan support. The committee is not 
entirely new to the AI space, and we look forward to continuing 
efforts to support transformative research. We also look 
forward to building on the Biden Administration's efforts such 
as the National Artificial Intelligence Resource Task Force. 
Just over a month ago, that task force released its report, 
providing a roadmap to stand up a national research 
infrastructure that would broaden access to the resources 
essential to AI.
    AI is already integrated within the world around us, and 
its growing use throughout society will continue to drive 
advancements. America must implement an aggressive, research-
forward Federal AI policy to spur competition with other 
countries that have already established nationwide strategies, 
and additional supporting policy strategies might also include 
promoting open data policies or outcome-based strategies when 
assessing algorithms.
    Finally, and more importantly, our country needs the work 
force to properly develop, test, understand, and deploy AI. 
This work force of the future will include technologists who 
will help govern AI responsibly.
    I look forward to hearing from our witnesses today. I look 
forward to collaborating with you, Madam Chairwoman, on any 
subsequent legislation we might want to develop.
    I yield back.
    Ms. Mace. Thank you, Congressman Connolly. And I, too, 
agree; I hope and believe we will make music together and 
continue to do that. Cybersecurity has been one of the few 
places in Congress where we have been able to be bipartisan and 
not crazy. And so, I appreciate the ability to work with folks 
on both sides of the aisle.
    I'm pleased to introduce our four witnesses today for this 
Subcommittee's inaugural hearing of the 118th Congress. Our 
first witness is Dr. Eric Schmidt, Chair of the Special 
Competitive Studies Project. Dr. Schmidt is a former Google 
executive, where he held multiple senior-level positions, 
working alongside founders Sergey Brin and Larry Page.
    Google literally changed the world, and it's a huge honor 
to have one of the godfathers of modern day technology here 
with us today talking about the advent of AI and what comes 
next, because I believe this will be one of the greatest 
technological revolutions of our lifetime and around the world.
    Dr. Schmidt is an accomplished technologist, entrepreneur, 
and philanthropist. Dr. Schmidt founded SCSP in 2021. This is a 
bipartisan, nonprofit initiative that works on issues relating 
to AI and other emerging technologies. Dr. Schmidt also co-
authored a book in 2021 with Dr. Henry Kissinger and MIT dean, 
Dr. Daniel Huttenlocher, titled, ``The Age of AI: And Our Human 
Future.'' The book attempts to explain artificial intelligence 
while raising thought-provoking questions about the role of AI 
in topics such as security and world order. And there is a Wall 
Street Journal article that was an excerpt from the book that 
folks should pick up and read, ``ChatGPT Heralds an 
Intellectual Revolution.'' I'm going to encourage folks in this 
space to read it.
    Our second witness is Dr. Aleksander Madry, director of 
the MIT Center for Deployable Machine Learning. Dr. Madry is 
also a member of the MIT Computer Science and Artificial 
Intelligence Laboratory, Cadence Design Systems professor of 
computing, and co-lead of the MIT AI Policy Forum. Dr. Madry's 
research interests span algorithms, continuous optimization, 
the science of deep learning, and developing reliable, 
trustworthy, and secure machine learning systems.
    We look forward to hearing from you about the policy 
challenges and moral and ethical questions surrounding AI.
    Our third witness is Dr. Scott Crowder, vice president of 
quantum computing at IBM, and chief technology officer, IBM 
Systems, Technical Strategy and Transformation. Dr. Crowder's 
responsibilities include leading the commercialization effort 
for quantum computers and accelerating innovation within 
development through special projects.
    The Subcommittee is very interested in learning more about 
quantum AI and how quantum computing may some day change the 
way AI models can store, process, and even report data.
    Our fourth witness is Ms. Merve Hickok, Chair and research 
director for the Center for AI and Digital Policy.
    We welcome everyone who is here today, and we are so 
pleased to have all of you here this afternoon.
    Pursuant to committee rule 9(g), witnesses, if you will, 
please stand up and raise your right hands.
    Do you solemnly swear or affirm that the testimony you are 
about to give is the truth, the whole truth, and nothing but 
the truth, so help you God?
    Let the record show that the witnesses all answered in the 
affirmative.
    Thank you, and you may be seated.
    We appreciate all of you being here today and look forward 
to your testimony. I want to remind the witnesses that we have 
read your written statements, and they will appear in full in 
the hearing record. Please limit your oral arguments to five 
minutes, initially. As a reminder, please press the button on 
the microphone in front of you so we can all hear you when you 
are speaking.
    When you begin to speak, the light in front of you will turn 
green. After four minutes, the light will turn yellow. And then 
the red light comes on after your five minutes have expired. 
And we would ask that you please try to 
wrap up your comments at that time so that all the Members who 
are here today as part of this Subcommittee will get a chance 
to speak and ask you all questions.
    I would like to first recognize our first witness, Dr. 
Schmidt, to please begin your testimony.

              STATEMENT OF DR. ERIC SCHMIDT, CHAIR
              SPECIAL COMPETITIVE STUDIES PROJECT

    Mr. Schmidt. Chairwoman and Ranking Member, thank you so 
much, all of you, for spending some time on this incredibly 
important issue.
    I've been doing this for 50 years, and I have never seen 
something happen as fast as this round. It took five days for 
ChatGPT to get to a million users, and now we have it being 
used here in Congress. And, if you look throughout the country, 
throughout America, throughout the world I live in, machine 
learning in the broad form has taken it by storm. I'm used to 
hype cycles, but this one is real in the sense that enormous 
amounts of money are being raised to implement and build these 
systems.
    The sense to me is that this moment is a clear demarcation: 
A before and an after. And in our book, ``Age of AI,'' which 
you kindly mentioned, we actually talk about how this is more 
than just an industrial strategy; it is actually a new epoch in 
human experience. The last epoch, of course, was the age of 
reason 400 years ago, which came from the century of the 
printing press and the Reformation and things like that.
    The ability to have nonhuman intelligences that we work 
with and occasionally have to deal with is a major change in 
human history and not one that we will go back to. And you can 
imagine, if you speculate 10, 20, 30 years from now, at the 
rate at which this innovation is going, what it would be like 
to have these nonhuman intelligences in our midst, right? A 
topic for another day.
    The two most interesting things that have emerged in the 
last year are large language models and generative AI. Large 
language models can be understood as systems originally built 
to predict the next word, the next sentence, the next 
paragraph. But if you make them big enough--and when I say big, 
I mean huge, at a cost of a hundred million dollars, 200 
million to build them, they appear to have emergent properties. 
They have what is technically known as capability overhang. In 
other words, we don't know exactly what they know. Although we 
do know they know an awful lot of things that are wrong, but we 
also know that they have a lot of insights.
    This has spurred enormous industry and a set of competitors 
that will be emerging in the next month or two. It's literally 
that fast. So, boom, boom, boom.
    The other one is the term ``generative AI,'' which for me 
is largely focused on the ability to generate new language, new 
pictures, new videos, and so forth. It's reasonable to expect 
that, in the next few years, a great deal of the content that 
we consume will be generated for us.
    Now, these are very, very, very powerful technologies. And 
the impact on society is going to be profound, and I don't 
think any of us understand how broad and how deep it will go.
    When I look at some of the issues that you all should face, 
I think the most obvious one is, what do you do about how 
people interact with the platforms? And I'll offer three 
principles.
    One is the platforms need to know where the content came 
from and they need to be able to tell you--this is to avoid 
misinformation, Russian actors, that sort of thing. You need to 
know who the users are. Even if you don't tell the end user who 
they are, there needs to be some notion of who they are and 
where they came from. True anonymity hidden behind a paywall 
would allow nation-state attacks. And the third is that these 
systems have to publish how their algorithms work, and then 
they have to be held to how their algorithms work. Those simple 
principles, I think, will help us manage the extreme cases 
here.
    We all, everyone in this room, want the U.S. to win in 
this. And, again, Ranking Member Connolly, you 
mentioned this issue around the national resource. My colleague 
to the left can speak about what it's like to be in a 
university where you don't have access to these models. We need 
that, and we need the computing capability as it transforms, 
not just language, but also every aspect of science and health 
and biology and material science.
    We want democratic partners, that is other countries. This 
is something where the West can do this together, and we can 
beat China, who is my primary focus. And, obviously, we need 
more AI and software talent in the government. And we wrote a 
long report for you all, the NSCAI report, that goes into that 
in great detail.
    What I want you to do is imagine the alternative. China 
announced a couple of years ago that they are going to be the 
dominant force in AI in 2030. Can you imagine the technology 
that imbues how we think, how we teach, how we entertain, and 
how we interact with each other imbued with Chinese values, not 
American values, not the values and rules that we have in our 
democracy? It's chilling.
    The military consequences are also profound, as are the 
biological, which we can talk about if you're interested. But 
the most important thing to understand is that we need to win 
because we want America to win, and this is our best, great 
opportunity to create trillions of dollars of wealth for 
American firms and American partners.
    Thank you.
    Ms. Mace. Thank you, Dr. Schmidt.
    I would now like to recognize our second witness, Dr. 
Madry, for his opening statement.

  STATEMENT OF DR. ALEKSANDER MADRY, DIRECTOR, MIT CENTER FOR 
    DEPLOYABLE MACHINE LEARNING, AND CADENCE DESIGN SYSTEMS 
 PROFESSOR OF COMPUTING, MASSACHUSETTS INSTITUTE OF TECHNOLOGY

    Mr. Madry. Chairwoman Mace, Ranking Member Connolly, 
Members of the committee, thank you for inviting me to testify.
    Today, I want to make three points: First, AI is no longer 
a matter of science fiction, nor is it confined to research 
labs. The genie is out of the bottle. AI is being deployed and 
broadly adopted as we speak.
    The key factor that made recent AI tools so popular is 
their accessibility. Tools like ChatGPT can be directed using 
simple language commands. We can ask them to draft us a memo or 
a speech or summarize a movie in much the same way we would ask 
any human. No AI expertise is required.
    As the barrier to adopting AI gets lower and lower, AI will 
spread across our economy and our society. It will assist us in 
mental and creative tasks, such as writing, visual design, and 
coding. It will bolster and expand our capabilities. It can 
even help us integrate our accumulated knowledge; for example, 
in healthcare, in science, and engineering.
    But along with these opportunities, AI also brings risks: 
its lack of reliability; its propensity for promoting bias 
and enhancing social inequities; its undermining of 
accountability; its facilitation of deep fakes and manipulated 
media; its ability to fuel personalized, online phishing and 
harassment at scale.
    It's critical we proactively identify these emerging risks 
and develop clear and actionable ways to mitigate them. While 
doing that, we need to recognize, though, all the positives of 
AI and balance them against the negatives. In the end, the 
impact of AI is not a foregone conclusion, as much as the rapid 
progress of AI might suggest otherwise.
    This brings me to my second point. As we engage with AI 
more directly, we expose ourselves to interactions that go 
against our intuition. Because AI exploits our cognitive 
biases, we are often too likely to accept its results as 
gospel. Indeed, as we are able to communicate with AI so 
easily, so seamlessly, it's natural for us to think of them as 
human, but this is a mistake. These tools aren't human. They're 
a simple computation executed at impressive scale.
    By treating them as human, we fool ourselves 
into thinking that we understand how AI tools behave. We fool 
ourselves into thinking that we can straightforwardly adapt 
policies designed for humans to work in AI-driven contexts.
    Indeed, our intuition often fails us. Take ChatGPT. Given a 
question, it can write a really convincing answer, even if 
everything it is writing is factually incorrect. It can trick 
us into thinking the answer is credible by using prose that 
sounds like that of human experts. Therefore, an unmitigated reliance 
on such tools in our day-to-day lives, or even worse in our 
education, can have disastrous consequences. It can erode our 
analytical and reasoning capabilities.
    The final point I want to make is that we also need to pay 
attention to how AI is deployed. That is, if we focus solely on 
what I have discussed so far, we'll have a major blind spot. A 
key feature of modern AI systems is that they can be used as a 
foundation on top of which other systems are built, forming 
what I would call an AI supply chain.
    At the upstream end of this chain are organizations that 
create the foundation AI tools, like ChatGPT. And here, very few 
players will be able to compete, given the highly specialized 
skills and enormous capital investments the building of such 
systems requires. In contrast, we should expect an almost 
Cambrian explosion of startups and new use cases downstream of 
the supply chain, all leveraging the capabilities of upstream 
AI systems.
    This leads to a couple of policy-relevant observations. 
First, the limited number of large upstream systems may 
introduce new challenges, such as hidden systemic fragilities 
or structural biases. Imagine, for instance, if one of these 
upstream models goes suddenly offline. What happens downstream?
    Second, AI systems won't be developed by a single entity. 
They will be products of multiple AI systems grouped together, 
each from a different place. These composite systems will 
become even harder to predict, harder to audit, harder to 
regulate. For instance, who will be responsible and legally 
liable when something goes wrong?
    Third, this AI supply chain can redistribute power and 
control over where, when, and how AI is used. This factor will be 
paramount from a societal standpoint, from a geopolitical 
standpoint, from a national security standpoint.
    To conclude, let me say, we are at an inflection point in 
terms of what future AI will bring. Seizing this opportunity 
means discussing the role of AI, what exactly we want it to do 
for us, and how to ensure it benefits us all. This will be a 
difficult conversation, but we do need to have it and have it 
now.
    Thank you for the opportunity to speak with the 
Subcommittee. I look forward to the questions.
    Ms. Mace. Thank you.
    And I would like to recognize our third witness, Dr. 
Crowder, for your opening statement.

 STATEMENT OF DR. SCOTT CROWDER, VICE PRESIDENT, IBM QUANTUM, 
 AND CTO, IBM SYSTEMS, TECHNICAL STRATEGY AND TRANSFORMATION, 
                              IBM

    Mr. Crowder. Chairwoman Mace, Ranking Member Connolly, and 
distinguished Members of the Subcommittee, thank you for this 
opportunity to testify before you today.
    Today, I represent IBM Quantum where we have two goals: to 
bring usable quantum computing to industry and research and to 
make our digital infrastructure quantum safe. We have a network 
of over 200 industry and research partners exploring the use of 
quantum computing for business and science, and have developed 
technology to make the transition to quantum safe cryptography 
easier.
    There is a common perception that classical computers can 
solve any problem if they're just big enough. That is not the 
case. There is a whole class of problems that classical 
computers are not good at and never really will be.
    When I talk to leading U.S. companies about their unsolved 
problems that, if solved, could bring them huge economic 
benefit, these types of problems turn up everywhere. Some of 
these longstanding problems could be solved with a combination 
of quantum computing and artificial intelligence.
    Quantum computing is a rapidly advancing and radically 
different computing paradigm which could launch a new age of 
human discovery. Just seven years ago, the notion of a quantum 
developer didn't exist. IBM was the first to put a real quantum 
computer on the cloud; at the time it was just five qubits. 
Today, IBM has systems with over 400 qubits. And if we continue on 
this technology roadmap, by the middle of this decade, we'll 
have 4,000 qubit systems and will demonstrate the first 
practical use of quantum computing.
    IBM alone has deployed over 60 systems, and our 500,000 
registered users have published over 2,000 research papers. One 
key thread in this research is the application of quantum 
computation within artificial intelligence. Many of our 
partners have published research results using quantum machine 
learning techniques. Examples include financial institutions 
exploring quantum algorithms for improved fraud detection; 
Boeing exploring optimization of composite materials for better 
airplane wings; and CERN exploring applications in high-energy 
physics.
    One primary reason quantum computing has benefit for 
artificial intelligence is because it uses a different method 
to find patterns in data. For example, in fraud detection, a 
quantum algorithm may be better at detecting true fraud and 
reducing false positives. A data scientist may choose to use 
either a quantum fraud model or a classical AI fraud model or a 
combination for the best results. Put simply, quantum will be 
another computational tool to use to improve AI results.
    Generally, we see the future of computing as a combination 
of classical, specialized AI, and quantum computing resources. 
It will not be based solely on classical bits, but rather built 
upon bits and neurons and quantum bits, or qubits. This will 
enable the next generation of intelligent, mission-critical 
systems and accelerate the rate of science-driven discovery. 
Researchers, companies, and governments that leverage this 
technology will have a distinct competitive advantage.
    That leads to a critical point: When one examines the 
financial commitment other countries are making in quantum 
computing, our belief is the U.S. Government investment in 
driving this critical technology is insufficient to stay 
competitive. At its inception in 2018, the $1.7 billion 
National Quantum Initiative stood as a leading public 
investment. Today, the planned global public investment in 
quantum technology is estimated to exceed $30 billion, with 
China at $15 billion. It is critical that we not only 
reauthorize the NQI, but also increase its investment in the 
critical area of research on the use of quantum computers for 
mission-critical applications.
    The same importance for ethical and trustworthy AI applies 
whether classical compute or quantum compute underpins the 
solution. We know that trustworthiness is key to AI adoption, 
and the first step in promoting trust is effective risk 
management policies and practices. Companies must have strong 
internal governance processes, including, among other things, 
designating a lead AI ethics official responsible for its 
trustworthy AI strategy, and standing up an AI ethics board as 
a centralized clearinghouse for resources to help guide that 
strategy. IBM has implemented both, and we continue to urge 
others in the industry to do likewise.
    Additionally, it's important to establish best practices 
for AI bias mitigation, similar to BSA's framework published in 
2021.
    It's difficult to pinpoint the precise benefits and 
possible challenges presented by any new emerging technology. 
Quantum computing is no different. However, those countries 
that make investments in this transformative technology today 
will reap benefits in the years to come. Those countries that 
do not will be at a competitive disadvantage in the future. At 
the same time, countries will also need to invest time and 
energy in developing an appropriate regulatory environment that 
supports the adoption of trustworthy AI regardless of the 
underlying compute technology.
    Thank you again for inviting me to testify, and I look 
forward to today's discussion.
    Ms. Mace. Thank you.
    And I would like to recognize our fourth witness, Ms. 
Hickok, for your opening statement.

  STATEMENT OF MS. MERVE HICKOK, CHAIR AND RESEARCH DIRECTOR, 
                CENTER FOR AI AND DIGITAL POLICY

    Ms. Hickok. Thank you so much.
    Good afternoon, Chairwoman Mace and distinguished Members 
of the committee. I'm Merve Hickok, Chair and research director 
for Center for AI and Digital Policy. It's an honor to be here 
today, and thank you for the opportunity to testify.
    CAIDP is a global research organization based in D.C. We 
educate and train future AI policy leaders and collaborate with 
AI policy experts around the world. We also publish the AI and 
Democratic Values Index, analyzing AI policies and practices 
across 75 countries.
    I also train and build capacity in organizations on 
responsible AI development and governance. And prior to CAIDP, 
I was in the corporate world as a senior leader at Bank of 
America Merrill Lynch, responsible for recruitment technologies 
internationally.
    I provide this background because we believe in the promise 
of AI. However, we also know that AI systems, if not developed 
and governed with safeguards, have negative impacts on 
individuals and society. We believe that AI should first and 
foremost serve members of the society, their rights, their 
freedoms; our social, moral, and ethical values.
    The title of the hearing today asks if we are ready for a 
tech revolution. My brief answer is no. We don't have the 
guardrails in place, the laws that we need, the public 
education, or the expertise in the government to manage the 
consequences of the rapid technological changes.
    Internationally, we are losing AI policy leadership. 
Domestically, Americans say they are more concerned than 
excited by AI making important life decisions about them and 
knowing their behavior. AI systems now produce 
results we cannot assess or replicate. Opaque systems put 
governments, companies, and individuals at risk. AI expands our 
research and innovation capabilities; however, it also 
replicates existing biases in the datasets and biases in the 
choices of the developers, resulting in disadvantaging people 
with disabilities in hiring, for example; inaccurate health 
predictions for patients of color; offering women lower credit, 
lower home valuations; innocent people being arrested due to 
biased facial recognition.
    We are now debating generative systems which produce 
synthetic text, image, video, and audio. The systems will 
certainly add to our creativity, there is no doubt about it, 
but they're already impacting the original creators. They will 
also be used by malicious actors to fabricate events, people, 
speeches, and news for disinformation, cyber fraud, 
blackmailing, and propaganda purposes.
    I give this testimony on International Women's Day, when 
unregulated opaque AI systems deepen discrimination and online 
harassment against women.
    Both governments and private companies know that public 
trust is a must-have for further innovation, investment, 
adoption, and expansion. Companies, large and small, are 
calling for regulatory guidance.
    Administrations of both parties have called for trustworthy 
AI. President Trump's Executive Order 13960 explained that 
ongoing adoption and acceptance of AI will depend significantly 
on public trust, that AI should be worthy of people's trust, 
and that the order signals to the world the U.S. commitment to 
develop and use AI underpinned by democratic values. The order 
characterized trustworthy AI as being lawful, respectful of 
civil rights, accurate, reliable, safe, understandable, 
responsible, transparent, accountable, and regularly monitored.
    The Office of Science and Technology Policy has recently 
published the Blueprint for an AI Bill of Rights, a critical 
policy framework underscoring similar qualities for AI, 
emphasizing democratic values and civil rights.
    President Biden has called for bipartisan legislation to 
keep companies accountable, and reiterated the same principles: 
transparency, accountability, and safeguarding our values.
    We very much support this committee and its bipartisan 
nature, but there are real challenges ahead, and I will 
conclude with a few recommendations toward addressing those.
    We really need the Congress to hold more hearings like 
this, explore the challenges, the risks and benefits, and hear 
from the public and those impacted. We need the Office of 
Management and Budget to move forward with the long-delayed 
rulemaking for the use of AI in Federal agencies as part of the 
executive order. We need to build multidisciplinary capacity in 
Federal Government to ensure the work force understands the 
benefits and risks of AI. We need the wider work force to 
understand benefits and risks of AI as well. We need R&D 
capabilities expanded beyond a handful of companies, campuses, 
and labs, and we need to demand trustworthy AI in our research 
agenda. I urge you to act now to enact legislation reflecting 
this bipartisan spirit.
    Absent a legislative agenda or implementation of AI policy, 
the American people, American companies, and our allies are 
left uncertain about U.S. AI policy objectives.
    Thank you.
    Ms. Mace. And thank you.
    And I think one of the things that sticks out to me today 
is, actually, this is the first AI hearing this Congress in the 
U.S. House of Representatives. But also, this same day, the 
U.S. Senate held its first hearing on this subject matter; 
they beat us by four hours this morning at 10 a.m.
    But I would now like to recognize myself for five minutes 
for a few questions of our panelists.
    Thank you, Dr. Schmidt, for painting what I would describe 
as a very vivid picture of what is happening in this space, 
because I agree with you, it's been rapid and, in your words, 
epochal. And I'm not sure that the world is ready for what is to 
come in the next few months, years, et cetera. And so, it 
reminds me of Einstein. He said: ``I never think of the future. 
It comes soon enough.'' And it is here. And it is moving faster 
than the speed of light.
    So, my first question today is for Dr. Schmidt. How can we 
ensure that AI technology is developed in a way that is safe, 
transparent, and beneficial for society without stifling 
innovation?
    Mr. Schmidt. I'm always worried about AI conversations, 
because everyone believes the AI that we are building is what 
they see in the Terminator movies. And we are precisely not 
working on those things. So we are clear, we are not doing----
    Ms. Mace. Not yet.
    Mr. Schmidt. We are not doing it yet, and we are not likely 
to. But what we are doing is working on systems that will 
affect the way people perceive their world. And I think the 
best thing for America to do is to follow American values, 
which include robust competition with government funding of 
basic research, and using the innovators, including the folks 
to my left, to actually deliver on this.
    I think that one of the things that is not appreciated in 
the last 30 or 40 years of tech companies--speaking as a 
person who is associated with a number of them--is how good 
they are as American exports of our values. So, I come back to 
a much simpler formulation that American ingenuity, American 
scientists, the American government, and American corporations 
invent this future and will get something pretty close to what 
we want. And then you guys can work on the edges where you have 
misuse.
    The alternative is, think about if it comes from somewhere 
else which doesn't have our values. And I really believe that. 
Everything that you can do to support that innovation cycle--the 
universities, the graduate students, getting high-skilled 
foreigners in to help us, building those corporations, creating 
shareholder wealth, hiring lots of people--it's the American 
formula, and it will work here too.
    Ms. Mace. And then, on that note, in terms of the 
personnel, the resources, training folks in the technology so 
that it can advance and we have that innovation: a lot of it is 
on the software side, but how does hardware figure into that? 
CHIPS, for example.
    Mr. Schmidt. So, in our AI commission that you all 
commissioned a while ago, we spent a lot of time on this. We 
felt it was very important for America to retain its 
leadership, which of course we didn't have, we gave it to 
Taiwan. The best result was to basically get the Taiwanese 
firms, primarily TSMC, and Korean firms, primarily Samsung, to 
locate plants in the United States, which has occurred.
    The Trump and Biden administrations have done a good job of 
restricting some of the sales and access to these tools to the 
Chinese. But, fundamentally, this is a race, it's a 
competition, and we're not that far ahead. So, we have to keep 
innovating, which is why your support for the CHIPS Act was so 
helpful. And so, on behalf of the whole industry I'll thank all 
of you for doing that. That's a good example of the government 
getting ahead of a real problem.
    Ms. Mace. Thank you.
    And, Dr. Madry, my next question for you: do we need to be 
worried about too much advancement too fast in AI? Are we 
capable of developing AI that could pose a danger to humanity's 
existence all over the world, some of the things that people 
talk about out of fear in this conversation because of a lack 
of knowledge, or is that just science fiction?
    Mr. Madry. Well, it really depends on what we view as this 
kind of catastrophic risk. So, a Terminator-style scenario, I'm 
not too worried about this, as Dr. Schmidt just said. What I am 
worried about is something more mundane but essentially very, 
very corrosive, right. So, we see how this works out in social 
media, where, essentially, AI is what decides what we see. And 
we are seeing the effect of that. Well, this is kind of not 
really transparent, not really aligned with societal goals.
    So, now think about things like these new generative models 
being developed essentially in a way where we just maximize 
profit, we just try to get maximum adoption. I'm worried about 
that.
    Having said that, I do think we can figure out how not to 
stifle innovation, just moderate it so we can still progress, 
but, again, ensure that the companies we talk to are not driven 
only by profit but realize they have some responsibilities, and 
they need to acknowledge them.
    Ms. Mace. And I would agree. And I think, you know, we've 
talked about algorithms for years, like on social media, and 
the use of divisiveness of politics today, and each side 
getting the extreme of their side and getting fed more of that 
information. I sort of feel like we would be putting it on 
steroids in the immediate future with the advances in AI. What 
are your concerns there?
    Mr. Madry. Yes. So, essentially, I think saying that this 
might be like social media on steroids is very much justified. 
So, again, as I told you, ChatGPT will be so much more 
pervasive than social media. And, essentially, we don't exactly 
know what the effects will be on our thinking here, or on the 
way our children learn to think. Like, do they just fully trust 
what ChatGPT tells them, or do they learn how to reason?
    So, again, I'm really worried about this, but I think that 
is where the government really needs to step up. You know, the 
involvement of government, which I think might not have been 
enough in the context of social media--here we have to do it 
differently, and I think that we will do it well.
    Ms. Mace. Thank you.
    I would now like to recognize my esteemed colleague, Mr. 
Connolly, for questions.
    Mr. Connolly. Thank you so much, Madam Chairwoman.
    Listening to Ms. Hickok, the potential of AI is actually 
really positive in terms of how it can complement the quality 
of life for humans and make things better and promote peace and 
harmony. But we know that technology can be used for good and 
evil.
    And I'm listening to what you just said, Dr. Madry, in 
terms of your hope for the government's role. And yet, if you 
look at social media and you look at technology in general, 
Congress has been very reluctant to get into the game of 
regulation. And as a result, awesome power has been developed 
by and deployed by entrepreneurs who became billionaires in, 
largely, Silicon Valley without any interference by the 
government. They make all kinds of massive decisions in terms 
of content, in terms of what will or won't be allowed, in terms 
of who gets to use it, et cetera.
    And so, why should we believe that AI would be much 
different in terms of its future in the hands of the Federal 
Government?
    Mr. Madry. Well, again, the hope here is that we don't play 
the same playbook we played for social media. And, first of 
all, I want to say that I strongly believe that regulation is a 
very important tool to make sure that certain technologies are 
aligned with broad societal benefits, and it needs to be used.
    Having said that, before we rush to premature regulation--
and even rushed regulation might not be fast enough for AI, 
because AI is a very fast-moving target--I think what we need 
to start with is asking questions. And, in particular, 
government needs to ask questions of these companies: What are 
you doing? Why are you doing this? What are the objectives of 
the algorithms that you are developing? Why these objectives? 
How will we know that you are accomplishing these objectives? 
How can we have some input into what these objectives should 
be?
    I think this change of tone, together with the government 
recognizing that you cannot abdicate AI to big tech--as capable 
as they are, they have different use cases and different 
priorities--that's what needs to change. If this doesn't 
change, I'm extremely worried.
    Mr. Connolly. Well, I just think, if we look at the past 
and we look at social media, I wouldn't bet the farm on any 
kind of rapid regulatory regime coming from the Federal 
Government.
    Mr. Madry. And just to clarify, that's exactly what I'm 
worried about. So, let's have the conversations we can have 
now.
    Mr. Connolly. Right.
    Mr. Madry. Hopefully, we'll learn from the mistakes.
    Mr. Connolly. Thank you.
    Dr. Schmidt, you want to see the United States get ahead in 
this lane of technology and to compete successfully against the 
Chinese. Can you talk a little bit about what is the nature of 
that threat? How well are they doing in this sphere, and what 
do we need to be concerned about?
    Mr. Schmidt. There are four or five companies, Mr. Leader, 
that are American or British firms that have extremely large 
language models. There's also at least one large one at Baidu 
in China. I was interested to note that the largest such model 
in the world not owned by a corporation is at Tsinghua 
University in Beijing.
    So, there's every reason to believe that the Chinese 
understand everything that we're talking about now extremely 
well. They've published their intent, so we can read about it. 
And I view it as a national emergency. This technology is so 
powerful in terms of its ability to transform science, 
innovation, biology, semiconductors, you name it--and along 
with quantum, I should add--that we need to get our act 
together to win and to win a competition.
    If we don't--let me give you some examples. AI can be used 
to generate good things in biology, but also lots of bad 
viruses. You all have created a Bioterror Commission, which I'm 
fortunate to serve on, to take a look at this and the impact of 
that. That's another example of national security.
    The issues of nation-state misinformation could be very 
significant. Think about the progress of war and conflict 
where decision-making can be done faster than the OODA loop or 
faster than human decision-making. These are all challenges, 
and our government is behind where it needs to be in the 
adoption of these technologies for national security as well.
    Mr. Connolly. I just would end by saying, I couldn't agree 
with you more. And I think really we need to be looking at sort 
of a race-to-the-moon kind of shot in, you know, quantum 
computing, AI, cyber, and 5G. Because if the Chinese 
dominate those areas, the future is theirs.
    I yield back. Thank you, Madam Chair.
    Ms. Mace. Thank you, Mr. Connolly.
    I would now like to recognize a fellow South Carolinian, 
Congressman Timmons.
    Mr. Timmons. Thank you, Madam Chair. That's great to say: 
Madam Chairwoman. Congratulations on being the Chair.
    Ms. Mace. Thank you.
    Mr. Timmons. First up, thank you so much for your 
attendance here today. You all are experts in your field, and 
we really appreciate you taking the time to come and share your 
thoughts on this important topic.
    Congress is grappling with technology. Our country's 
grappling with technology. And we're doing our best to try to 
figure out a regulatory environment that fosters innovation and 
allows economic growth, while managing the potential adverse 
impacts that technology can have.
    Obviously, we are working on cryptocurrency and digital 
assets, and that's a major challenge for us. Congress is not 
the youngest, most tech savvy part of our society, and we are 
doing our best.
    But I do want to talk about AI's potential impact on our 
work force, particularly how tech can be leveraged to further 
individual efficiency rather than possibly displace workers.
    So, Mr. Crowder, I want to start with you. What are the 
most promising use cases for AI as a tool in the work force, 
and how do you anticipate AI will influence industries 
such as the financial services sector?
    Mr. Crowder. Yes, I think it's going to be pretty broad. 
And one of the exciting things that we didn't really talk about 
is that some of the underlying technologies, like base or 
foundation models, can be applied to things beyond writing a 
haiku or coming up with a speech--also, you know, looking at 
language-like things in other fields, like tabular data in 
finance, et cetera, et cetera.
    So, in addition to the kind of things I talked about fraud 
detection, I think we've all experienced, you know, maybe 
traveling abroad and having your credit card be declined, and 
that's bad for banks because they want that credit card money. 
So, even a small percentage improvement in false positives is a 
lot of money for our financial institutions. So, there's lots 
and lots of applications.
     But to your point, I think we need to look at AI as 
augmenting what humans can do as opposed to replacing them. 
Good utilization of AI makes our work force more efficient. And 
I would argue that one of the things we do a good job of in the 
United States is funding basic science. But we also need to 
look at how we encourage our work force to use the technology, 
not just develop it. Because I think the use of AI is going to 
be a differentiating factor in making the U.S. Government, as 
well as our companies, more effective and more competitive.
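     [To make the false-positive arithmetic above concrete, here is a 
minimal sketch in Python. Every number in it--transaction volume, 
decline rate, ticket size, margin--is an assumption chosen for 
illustration, not a figure from the testimony:

    # Back-of-the-envelope model of why a small cut in false-positive
    # card declines matters to a bank. All numbers are illustrative
    # assumptions, not figures from this hearing.
    transactions_per_year = 1_000_000_000  # assumed charges screened
    false_positive_rate = 0.01             # assumed: 1% of good charges declined
    avg_transaction_value = 80.00          # assumed average ticket, in dollars
    margin_per_dollar = 0.02               # assumed revenue margin per dollar

    def annual_fp_cost(fp_rate):
        """Revenue lost to wrongly declined (false-positive) charges."""
        return (transactions_per_year * fp_rate
                * avg_transaction_value * margin_per_dollar)

    baseline = annual_fp_cost(false_positive_rate)
    improved = annual_fp_cost(false_positive_rate * 0.90)  # 10% relative gain
    print(f"Baseline loss:  ${baseline:,.0f}")            # $16,000,000
    print(f"Improved loss:  ${improved:,.0f}")            # $14,400,000
    print(f"Annual savings: ${baseline - improved:,.0f}")  # $1,600,000

Even a modest relative improvement recovers millions of dollars a year 
at realistic volumes, which is the point Mr. Crowder is making.]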
     Mr. Timmons. As businesses try to compete in the free 
market, they're inevitably going to try to cut costs and 
replace parts of the work force with technology. How are we 
going to manage that challenge?
     Mr. Crowder. That's a good question. I don't know if I 
have a perfect answer for it. But I think having a more 
productive work force that is focused on value creation is, at 
the end of the day, what really drives success in business. And 
the more you can automate tasks that aren't really value 
creation, so you can free up your work force to create value, 
the better. That is a more positive way of driving additional 
productivity than thinking about it as removal of cost.
    Mr. Timmons. Sure, sure. Dr. Schmidt, what jobs do you 
think will be created in the wake of AI and what jobs do you 
think will be threatened?
     Mr. Schmidt. One general comment to make is that I've 
spent 20 years listening to the theme that jobs will be 
displaced or lost because of technology. And today we have a 
huge shortage of workers to fill the jobs in America. The 
biggest category is truck drivers. Remember how truck drivers 
were going to be replaced by automation?
     So, it looks to me like the most likely scenario in the 
next 20 years or so is not enough people to fill these jobs. 
And the best way to get people who can fill those jobs is to 
give them better tools, better education, better knowledge, and 
a partner, if you will. All of the evidence that I've studied 
indicates that having a digital partner increases your wage, 
right? Literally, when you are using a computer to help do the 
job, the job has a higher salary.
    So, it looks to me like as we get more diffusion of this 
technology, on average, jobs go up. There are jobs that are 
lost, there are jobs that are created.
    Mr. Timmons. Sure. I couldn't imagine my life without 
Google, Apple, and Amazon. I feel like I'm attached to my 
phone. And I haven't been to the grocery store in three years, 
and it is great. So, I'm sure that this is going to create 
additional opportunities to make my life more efficient and 
make me more capable of having a greater impact. So, I 
appreciate that.
    And thank you, Madam Chair. I yield back.
    Ms. Mace. Thank you.
    I now recognize Congressman Lynch for five minutes.
    Mr. Lynch. Thank you, Madam Chair. And congratulations to 
you and to the Ranking Member.
     Dr. Schmidt, in March 2021, the National Security 
Commission on Artificial Intelligence released its 
comprehensive report--I think it was something like 800 pages. 
It actually defended itself against the risk of being read by 
its sheer thickness. But right after your report came out, I 
was the Chair of the National Security Subcommittee, and we 
invited you to testify regarding it. I see your staff all 
nodding; they have painful memories of this, I'm sure. But we 
invited you in, and we went over the report. It had 16 major 
recommendations, and then probably another 20 subsidiary, 
ancillary recommendations.
    I'm wondering if you could talk about the progress, or the 
lack of progress, we've made over these two years now since you 
last testified before this committee about this issue.
    You had some very pointed recommendations, you know, for 
DARPA. You had recommendations and action items for Congress, 
for the executive, for this interagency task force that you 
envisioned.
     Can you talk a little bit about how much progress you 
think we have made? And what kind of grade would you give all 
of us?
    Mr. Schmidt. Well, in the first place, you guys give us the 
grade, and we are happy to serve. I would say about half of our 
recommendations have been adopted through the NDAA and other 
processes. We were kind enough to write the legislation for you 
as a hint, and you all were able to adopt it fairly quickly, 
and it worked.
     The area that I'm most focused on right now is basically 
the training problem. I just don't see progress in the 
government toward reforming the way it hires and promotes 
technical people. As part of the AI report, we proposed, 
essentially, a civilian training academy for digital skills. 
There are various different forms of this. But I don't think 
the government is going to get what it needs unless it has a 
program modeled on the military academies but for civilians, 
where civilians get trained in technology in return for serving 
the government in a civilian capacity.
     This is a young person's game. These technologies are too 
new. You need new students, new ideas, new invention. I think 
that is going to be the fastest way to get it, and I don't 
think the government is going to get there without it. I think 
that's the largest omission.
     Mr. Lynch. So, I actually was confronted with this same 
problem in my district, where a lot of high-tech firms, biotech 
firms, are moving in. I grew up in the local public housing 
projects, and our kids weren't getting those jobs. So, I 
founded a charter school that focuses on STEM--math, science, 
technology. And it's doing really, really well. But it's one 
school out of a hundred.
     And is there a way to scale that? I'm not so sure it is 
efficacious to take somebody who is coming out of high school 
or is in college and then make them a tech person. I think 
there's a much longer runway and a better chance of success if 
we start at a very early age. We are having problems with our 
public education system anyway. But is there a way to amp that 
up in the early grades to produce the type of workers you 
envision will be necessary to maintain our edge, not only in 
artificial intelligence, but in everything else we have got to 
do?
     Mr. Schmidt. Israel does something interesting in this 
area. If you are 15 or 16 and a math prodigy, they actually put 
you in a special school that is combined with their mandatory 
military training. I'm not suggesting mandatory military 
training for our math people--God knows how they would do. But 
the important thing is identifying the talent early and then 
getting it onto the right track. And the educators to my left 
can talk about this at more length.
     But I think that at the Federal level, the easiest thing 
to do is to come up with some program that is administered by 
the states or by leading universities. Every state has a big 
land-grant university that is interested in this. Get them 
money so that they can build these programs, and then get paid 
back for it with service. I like those models, and that is a 
model that takes your idea and scales it. There is a lack of 
money to build these systems at scale, and that idea, or some 
variant of it, would do it.
    Aleksander?
    Mr. Lynch. Thank you.
    Yes. Mr. Mafdry.
     Mr. Mafdry. If I can just add: AI is a technology you 
learn best by applying it to use cases. I actually was 
discussing exactly this problem with DOD, because they have 
suffered from exactly this. And instead of thinking of it as a 
weakness, it could be a strength. Once you try to apply AI 
internally to your own problems, that's where people learn. And 
this way, people come, let's say, to DOD or to government 
programs for three or five years, and then they go back to the 
civilian sector--and we are lacking this talent in our civilian 
economy too. So, I think that is the way to go, and the 
government could play a big role here.
    Mr. Lynch. Thank you.
     Madam Chair, I know I have one more witness who needs to 
answer the question, but I think I've run out of time.
    Mr. Burchett. Chairlady, why don't you let him go. I'm 
next, but go ahead.
    Mr. Lynch. OK. Ms. Hickok?
     Ms. Hickok. I just wanted to follow up on the last remark 
as well, in terms of education. I would echo the national task 
force reports and recommendations I have researched on 
democratizing research and development capabilities within the 
Federal Government as well.
     Sometimes our brightest minds are forced to go to a 
handful of companies and labs and campuses to do their research 
in the areas they are interested in. But if you had this 
capacity within the government's infrastructure as well, that 
would be another way to attract this work force.
     And I would also expand the education piece from schools 
to consumers, and expand the education need from technology 
jobs to all jobs. We need lawyers who understand these 
concepts. We need sociologists, anthropologists, ethicists, 
policymakers. We need understanding and capacity building on 
this topic across every domain and industry.
    Mr. Lynch. Thank you.
    Madam Chair, I yield back. And I thank you for the 
courtesy, and I thank the gentleman as well. Thank you.
    Ms. Mace. Thank you.
    I would now like to yield to Congressman Burchett from 
Tennessee.
     Mr. Burchett. I'd tell my friend across the aisle, you'll 
get nowhere calling me a gentleman; I just want you to know 
that. And I didn't lose my train of thought when I came in 
here: we have a lady as chairman, and it is International 
Women's Day. I think that is pretty cool that you Chair. If my 
momma were alive, she would think that is very cool, too, 
because she was a pretty cool lady.
     Thank you all for being here. I'm probably the least 
qualified person to ever ask y'all questions, but as the 435th 
most powerful Member of Congress, I feel very empowered today, 
and I'm kind of digging this subject matter. I'll try to go 
through these quick.
    Mr. Schmidt, I did Google, brother. And I don't know what 
it is. I hit that button--you know, my mom and daddy would 
always say look it up. Now I tell my daughter, Google it, 
honey. You know, so I think it is pretty cool.
     But the development of AI--how will that impact our 
international relations, specifically with China? I fear what 
they will do if they get control of it, as you have stated. I 
think you mentioned the date they said they were going to 
control it by, and I would say they will probably be doing that 
five years ahead of time.
    Go ahead, brother.
     Mr. Schmidt. Thank you, Congressman. I worry about the 
following scenario: In the future, there's a war--say, an 
attack by North Korea on the U.S. China stops the war between 
North Korea and the U.S., and the entire war takes one 
millisecond.
    And the reason I worry about that is I can't figure out how 
we are going to build offensive and defensive systems that are 
reliable enough to put them in charge of a war that occurs 
faster than human decision-making. That, to me, is the ultimate 
threat of this technology, that the things occur faster than 
humans can judge them. I don't have a good solution for that.
     My second observation is that China is very smart, and 
they have identified these underlying technologies as providing 
the leadership that dominates industries. A good example is 
synthetic biology, an area that was invented in the United 
States and is likely to be worth, again, trillions of dollars 
of wealth. China has now maximized its investment in this area. 
Not only is it good for their national security, but it's good 
for their businesses.
     So, when you have got a nation-state that's smart, 
technocratic, focused on its own defense and innovation, and 
promoting its own companies in the form of civil-military 
fusion, we have got a serious competitor. That's why we have to 
act.
     Mr. Burchett. Are you aware--or maybe you are not; I'm 
not--of the Chinese infiltrating any of our AI companies?
    Mr. Schmidt. I am not. You must assume the Chinese have 
operatives pretty much everywhere, based on their history.
    Mr. Burchett. OK. Is there any way that we could 
proactively protect against AI-generated cyber attacks?
    Mr. Schmidt. Well, you defend against them.
    Mr. Burchett. Right.
     Mr. Schmidt. We looked a lot at this. The question is 
whether you could create the equivalent of a Manhattan Project 
that was secret, where you'd keep it all in one place, one 
location--New Mexico, what have you. The knowledge is moving 
too quickly. There are too many people globally. We are going 
to have to win by staying ahead, which means building powerful 
defensive systems.
    Mr. Burchett. OK. Thank you.
     Dr. Mafdry, what are some of the risks to personal privacy 
that are associated with the use of AI?
    Mr. Mafdry. Sir, could you clarify what kind of risk?
    Mr. Burchett. Well, I guess I should ask my research 
person. As I stated, it is a little out of my league.
    Mr. Mafdry. No, I just didn't hear. I just didn't hear.
    Mr. Burchett. No, I just said what are some of the 
potential risks?
     Mr. Mafdry. I see. So, again, there are many, and they 
really depend on which sector you look at, because there are 
different levels of criticality. One big risk--and something I 
research myself, so I'm intimately familiar with it--is that 
this technology is not fully reliable. It works most of the 
time, but not always.
    Mr. Burchett. It's not fully what? I didn't----
     Mr. Mafdry. It's not fully reliable. Essentially, it works 
most of the time but not always. And what is worse, you might 
not realize when it's not working. For instance, models like 
ChatGPT hallucinate things sometimes, and you might not realize 
they are hallucinating because the output looks very 
convincing.
     The other aspect is that, as these systems ingest our 
data, they can essentially come to know us better than we know 
ourselves. That was true of Google, too, and of social media, 
but I think with this next generation of models it will become 
even more so.
     And the third level of risk is exactly the one that Dr. 
Schmidt talked about. I'm actually really worried about that--
not even about an actual war happening, but about us preparing 
for war in a way where something can go wrong. When things are 
happening within a millisecond, we have no good intuition and 
no good ways to figure out how to make them safe.
    Mr. Burchett. OK. Thank you.
     Running out of time but, Dr. Crowder, real quickly: how 
will quantum computing impact the security of encrypted data?
     Mr. Crowder. In the long term--someone proved this on a 
blackboard--a lot of our current cryptography, how we basically 
send keys around and how we digitally sign things, will 
eventually get broken by a quantum computer. The good news is 
that people have come up with algorithms based on problems that 
quantum computers are not good at solving and classical 
computers are not good at solving either.
     So, our challenge is really transitioning from the 
cryptography we use today to that new form of cryptography. And 
we want to do that as quickly as possible, once we have got 
really safe standards, because we're worried that people will 
harvest all the data today and decrypt it later. For some 
applications, that doesn't matter, but for a lot of 
applications it's a big deal.
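     [A toy sketch of the risk Mr. Crowder describes. The public-key 
cryptography in wide use today rests on factoring-style problems, and 
Peter Shor's 1994 algorithm is the blackboard proof that a large 
quantum computer could solve them efficiently. The primes below are 
deliberately tiny assumptions, so even classical trial division 
cracks them instantly; real moduli are thousands of bits long:

    # Toy illustration of "harvest now, decrypt later." RSA-style
    # security rests on the difficulty of factoring n = p * q.
    # These primes are tiny, illustrative assumptions; real keys are
    # thousands of bits, where only a quantum computer running
    # Shor's algorithm could recover the factors efficiently.
    p, q = 2003, 3001   # toy secret primes
    n = p * q           # public modulus an eavesdropper can record today

    def factor(n):
        """Classical trial division: instant on toys, hopeless at real sizes."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return n, 1

    print("Recorded modulus:", n)          # 6011003
    print("Recovered primes:", factor(n))  # (2003, 3001)
    # Post-quantum schemes swap factoring for problems believed hard
    # for both classical and quantum computers, e.g., lattice problems.

Data recorded under today's cryptography stays decryptable later, which 
is why the transition he describes is urgent even before large quantum 
computers exist.]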
    Mr. Burchett. OK. Thank you. I've run out of time.
    Chairlady, thank you, ma'am, very much.
    Ms. Mace. You did a great job.
    I would now like to recognize Congressman Mfume.
     Mr. Mfume. Thank you very much, Madam Chair. Again, my 
real thanks to you and the Ranking Member for having the 
discussions that led us to this point.
     This, for lack of a better term, has scared the hell out 
of me. I thought I knew something about AI--I'm bopping around 
on campuses, talking to students in a classroom now and then, 
teaching them--but what I have heard today is unlike anything I 
have ever heard, particularly in terms of our national 
security.
     I think the Chairwoman mentioned or quoted Einstein a few 
minutes ago. He also said that great minds have always 
encountered violent opposition from mediocre spirits. I don't 
know if you are encountering violent opposition, but I think 
you are encountering a great deal of inattention from 
population groups who, for whatever reason, are not paying 
attention to what is going on. It is very scary, and I would 
strongly support, Madam Chair, any future hearings on this. I 
just don't think that we have much of a choice. It is that 
imperative.
    Dr. Schmidt, you said it was a national imperative, almost 
a national emergency. That got my attention, and it will keep 
my attention.
    I don't know that we can do enough to ring the bell on this 
so that our institutions, whether it's government or business 
or academia, all start paying the kind of attention that we 
really, really need.
     Dr. Mafdry, in your testimony you talked about overarching 
points, and the fourth one was that we pay critical attention 
to the artificial intelligence supply chain, because it will 
structure the distribution of power in a new AI world. Could 
you take just a moment to explain that?
     Mr. Mafdry. Of course. Essentially, the way AI is being 
deployed right now, it is no longer just one entity that 
gathers the data, trains the model, and applies and deploys it 
to a given task. What happens now is that there is a supply 
chain, in particular with these new generative models: they are 
very expensive to train, but they are very capable. And 
essentially, one of these companies--there are very few 
companies that can afford to train such a model--develops it 
and then lets other companies build on top of it.
     So, think of this as taking an initial capability and 
adjusting it. For instance, you have a model like ChatGPT that 
is able to summarize text and, to some extent, understand it. 
Maybe you then build a hiring tool on top of it that uses this 
capability to screen resumes. There are many risks there, but 
this is just an example. So, we are heading toward this kind of 
supply chain of interdependencies.
     And again, upstream there are very few players--very few 
critical, important models on which a lot of the economy 
depends--and then there are all these interactions among the 
things built on them. So, this is something we now have to 
think about. There's an ecosystem.
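     [A sketch of the layering Dr. Mafdry describes: a downstream 
"hiring tool" that is nothing but a thin wrapper around an upstream 
foundation model. This assumes the open-source Hugging Face 
transformers library; the screen_resume helper and the sample resume 
are invented for illustration, and the small open summarization model 
stands in for the expensive proprietary models he mentions:

    # Sketch of the AI supply chain: a handful of firms train the
    # upstream model; downstream products merely wrap its capability.
    # Assumes `pip install transformers torch`; helper names are
    # hypothetical.
    from transformers import pipeline

    # Upstream layer: a general-purpose pretrained capability.
    summarizer = pipeline("summarization",
                          model="sshleifer/distilbart-cnn-12-6")

    def screen_resume(resume_text):
        """Downstream layer: a 'hiring tool' built on the upstream
        model. Any flaw or bias upstream flows silently into every
        such tool."""
        summary = summarizer(resume_text, max_length=60, min_length=15)
        return summary[0]["summary_text"]

    resume = ("Ten years building fraud-detection models for banks, "
              "leading a team of six engineers, and shipping machine "
              "learning systems that cut false-positive declines.")
    print(screen_resume(resume))

The dependency runs one way: if the upstream model errs, every 
downstream tool inherits the error, which is why he calls it an 
ecosystem with very few critical upstream players.]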
    Mr. Mfume. I see. I see.
     This sort of boasting that China has been doing, that by 
2030 they will be the dominant player, is scary also. And the 
fact that the universities in Beijing and elsewhere are openly 
trying to develop this, and to develop thinking that way, and 
that 2030 is only seven years from now, makes me, again, very 
concerned.
     I want to talk about risk for just a moment, and then my 
time will have expired. There is this whole notion of a war in 
which decisions are being made with the use of AI in a 
millisecond, countering and then counter-countering each 
decision. I don't know to what extent the military 
establishment--I assume they're looking at this as much as you 
are, but it is interesting.
     Now, I'd like to ask also: there are, as you know, 
fallible algorithms. You know better than I. They are 
misleading, or they are incorrect. What happens in consumer, 
business, law enforcement, and military contexts that frightens 
you the most as a risk resulting from a fallible algorithm? Any 
of you?
     Mr. Schmidt. Quickly, the biggest issue, as I mentioned, 
is the compression of time. But let's assume you have time. 
Then the question is, who gets to decide between the system and 
the human? And I'm very concerned about a misalignment of 
interests, where the human has one set of incentives, the 
computer has been trained against a different set of outcomes, 
and the whole society wants an even different goal.
     And I've spent lots of time with the military, who I'm 
really, really proud of and fond of, and they all want systems 
that help them automate their decisions. In practice, their use 
of the technology will be largely to replace boring and 
uninteresting jobs, like watching screens and things like that. 
These are things like Project Maven and its successors and so 
forth. So, I think at the moment, the government at the 
military level is going to use these more for sensing and 
analysis, not decision-making. And just to make it very clear, 
I think we would all agree it is not ready to make a final, 
life-critical decision. It may never be, but it is certainly 
not now.
     Mr. Mfume. Yes. Thank you. My time has expired.
    Thank you, Madam Chair, appreciate it.
    Ms. Mace. Thank you. Great questions.
    I would now like to recognize Congressman Burlison.
    Mr. Burlison. Thank you, Madam Chair.
     I, for one, am not afraid of the advent of AI. In fact, I 
want to welcome our future overlords. But I will say, I do see 
a lot of promise. Working in healthcare technology, we see an 
amazing opportunity to comb the data records of patients and 
use that data to diagnose what a patient might be facing to a 
greater degree of accuracy than probably any medical 
professional ever could. To me, there is tremendous 
opportunity, but I also do recognize some of the threats, 
obviously.
     To that end, my question for you, Dr. Schmidt, first is 
this: given the size and scope of the equipment that's 
necessary today, we're limited in which actors have the ability 
to use AI, right? So, at least we know who has access to it, 
who's utilizing it. It's not like we have people in a Nigerian 
criminal syndicate using AI at this point. Is that correct?
     Mr. Schmidt. I can assure you it is coming, because of 
diffusion. Basically, the models are trained very expensively, 
but when they're used for inference, which is where they answer 
questions, it's quite simple. So, I would expect to see 
terrorists and other bad actors begin to use this technology in 
ways that we should anticipate.
     Mr. Burlison. But at this point, they would have to access 
it on another platform? Someone would have to spend the 
resources to develop the technology, to house the data, et 
cetera?
     Mr. Schmidt. And Aleksander--Professor Mafdry--can help me 
here, because we work together. The simplest way to understand 
it is that the training part is really expensive, but you can 
take the trained model and put it on a laptop, and then it can 
be used. So unfortunately, in this scenario, all you need is a 
computer or a phone to do your evil acts.
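     [A minimal sketch of the diffusion point Dr. Schmidt makes: once 
trained weights are published, inference runs in a few lines on an 
ordinary laptop. This assumes the Hugging Face transformers library 
and uses the small, openly released GPT-2 weights purely as an 
example; it says nothing about how any particular frontier model is 
distributed:

    # Training costs millions of dollars of compute; running the
    # *trained* model (inference) fits on a laptop. Assumes
    # `pip install transformers torch`.
    from transformers import pipeline

    # Downloads pretrained GPT-2 weights once (~500 MB), then runs locally.
    generator = pipeline("text-generation", model="gpt2")
    result = generator("Large language models can", max_new_tokens=20)
    print(result[0]["generated_text"])

Once weights diffuse, controlling who runs them is far harder than 
controlling who can afford to train them.]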
     Mr. Burlison. OK. Dr. Crowder, my question to you relates 
to quantum computers. These are not machines you could just 
walk around with, not handheld devices, right? Can you walk us 
through the environmental requirements--what it takes to have a 
quantum computer?
     Mr. Crowder. Yes. We deploy them right now in regular data 
centers, but they are not laptops. They are not mobile phones. 
They are large, complex systems; they are very, very hard to 
calibrate and manage, and how to keep them up and running is a 
major trade secret. That's probably going to be true for quite 
a while.
     So, right now, we don't actually sell systems. We sell 
cloud access, because there's a small number of people who know 
how to actually keep them running, operating, et cetera. 
Obviously, that has some benefits from an IP-protection and 
security point of view as well.
    Mr. Burlison. And so, this is being used at what scale? How 
many businesses or----
     Mr. Crowder. So, we have over 200 partners, and we 
carefully select which regions of the world we do business in 
and who we partner with. We have got over 200 industry, 
academic, and research partners leveraging about 26 computers 
right now that are accessible via the cloud.
     Mr. Burlison. To that end, I know from testimony at other 
hearings that the Chinese Government has a pattern of sending 
students to American universities, who are then able to glean 
data while working in cooperation on some of these projects.
    Are you aware of that activity?
     Mr. Crowder. Not personally aware of that activity. For 
business reasons, with our--whatever you want to call them--
crown jewels, our most protected IP, we are very careful about 
who is part of that work.
     Mr. Burlison. And then my last question to you: on the 
subject of quantum entanglement, has that had any real-world 
applications or potential real-world applications?
     Mr. Crowder. Right now, from our perspective, nobody has 
proved a practical use, meaning something better than 
simulating it or just using classical computers. But we think 
that is going to change very soon. The computers are rapidly 
advancing, and we think that by the middle of this decade there 
will be practical use.
     We are working with a lot of U.S. companies on 
applications today. I mentioned a couple of them before: 
Boeing, on better airplane-wing optimization and materials; 
fraud detection for banks; and looking at medical health 
records to try to predict more efficient treatment for 
patients, with healthcare and life sciences companies. We have 
a big partnership with Cleveland Clinic looking more broadly 
across the life sciences. But it is not practical today to be 
better than what we have classically. We think that's going to 
come in the next couple of years.
    Mr. Burlison. Thank you.
    Ms. Mace. Thank you.
    I would now like to recognize Congressman Gomez for five 
minutes.
    Mr. Gomez. Thank you, Madam Chair.
     Before we begin, I was thinking about this: How do we rank 
AI in the history of human development? It is something that I 
believe could be extremely startling. It's one issue that 
random people bring up to me on the street. Some people compare 
it to the invention of the television, or the computer, or the 
internet, and I think it's beyond that, because this is 
something that makes it hard to discern whether something you 
are looking at--a photo, a video, or even words on a piece of 
paper--was actually written or developed by a human.
     And that is something that I think most people are trying 
to wrap their minds around. How is this revolutionary 
technology going to fundamentally change the way we live our 
lives, the way we interact with one another, the way we 
interpret the information coming in? Because when you can't 
discern what was actually created by a person and what was 
developed by a computer program, people start questioning all 
sorts of things.
     And that's one of the challenges. Maybe it is a 
philosophical challenge. Maybe it is a real-life government 
regulatory challenge. But it is something that really, I think, 
is at the heart of it, when people start to question what is 
real and what is not.
     But I recognize that AI has a lot of great potential--
everything from predicting new variants of COVID to detecting 
certain types of cancers that doctors miss. The potential is 
staggering. But people want guardrails on this new technology. 
If not, it can and will be misused.
     When I first got elected, somebody ran a test on Members 
of Congress, and I think about 28 of us got matched with people 
who had committed crimes. And this was under the best 
circumstances: they were using our photos from our websites, 
taken with the best lighting and the best quality. So, AI also 
has the potential to have inherent bias built into it, and it 
often disproportionately impacts people of color and women, and 
that is a concern.
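     [Rough arithmetic on the test Mr. Gomez recalls, as a sketch: it 
assumes the scan covered all 535 Members of Congress--the widely 
reported 2018 ACLU test did, though the transcript itself gives only 
the "about 28" figure:

    # False-match rate implied by the figures above; the 535 total is
    # an assumption from the published test, not from this hearing.
    members_scanned = 535
    false_matches = 28
    print(f"{false_matches / members_scanned:.1%}")  # about 5.2%

A roughly 5 percent false-match rate under favorable photo conditions 
is the kind of measured bias the question is pointing at.]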
     So, how do we address those limitations on AI? How do we 
safeguard against the violation of people's civil liberties? 
This is something that, even when Mark Meadows was on this 
committee, he and I and others agreed was a problem. We just 
couldn't figure out a solution.
     So, Ms. Hickok, how can Congress best help address AI's 
racial bias? What can we do as a body, and what can the Federal 
Government do, to protect individuals' civil liberties and, at 
times, their right to privacy as well?
     Ms. Hickok. Thank you for the question. And I'm really 
thankful that you mentioned civil liberties, because with AI 
systems, as I mentioned earlier, you're talking about every 
single industry and domain being even further impacted by this.
     You talk about civil liberties and access to resources, 
and, unfortunately, that spans anything from housing to 
employment to education to insurance and loans to policing, 
criminal justice decisions, and judicial decisions. My concern 
is that if you don't have the guardrails now, and these systems 
are embedded in public-sector services as well as private 
services, they are also eventually going to be connected.
     So, one erroneous decision from one system is going to be 
the input to another system, and we are going to completely 
lock people out of resources and opportunities. That is going 
to widen the gap between the haves and have-nots, and it is 
also going to widen the gaps within society that we are all 
trying to narrow.
     How can we narrow that? How can we keep these systems 
accountable? It is really the people and organizations, and how 
we use the systems, that we should be focusing on: putting 
civil liberties, freedoms, and rights at the center, and making 
sure that the systems we use, especially those that impact 
resources and rights, are built accountably, transparently, and 
replicably. We heard from my co-panelists and witnesses that a 
lot of the time these systems are opaque. We don't know how 
they work, and we also cannot replicate their decisions.
     So, you might be denied credit. You might be denied 
insurance or a job. And if you are trying to hold the 
organization accountable, we will not be able to trace the 
decision back and keep them liable. So, we need to make sure 
from the very start, from the data and design stages, that we 
put those guardrails in place, and that we keep organizations 
and users accountable.
    Mr. Gomez. Thank you. And my time has expired, but the 
question is: Is it too late to put those guardrails on?
     Ms. Hickok. It is not at all. In fact, at CAIDP, our 
students, especially our law students, asked that question last 
week: Is it too late? Is it inevitable? Has the ship sailed? 
No, it is not. The humans behind it--organizations, lawmakers, 
users--hold the power.
    Mr. Gomez. Thank you so much. I yield back.
    Ms. Mace. Thank you, Congressman.
    I would now like to recognize Congresswoman Greene.
    Ms. Greene. Thank you. I think this is a very important 
hearing to have as AI is progressing and working in a lot of 
different sectors. And I really appreciate each of you being 
here.
    I'm definitely not an expert in AI, but I would like to 
talk about the fears and concerns that people in my district 
and people all over the country have when it comes to AI.
     We certainly don't like the idea of AI replacing humans, 
especially when it comes to jobs. So, when there are headlines 
like Alphabet announcing 12,000 job cuts globally while its 
chief executive officer singled out AI as a key investment 
area, that is what people start to think about. Or when 
Microsoft announces its $10 billion investment in OpenAI just 
days after saying it would lay off 10,000 employees, those are 
the kinds of things people think about.
     Now, there is a difference between robotics and AI. 
Obviously, robotics can be a good thing--for example, 
tightening bolts or moving heavy objects in manufacturing; we 
really appreciate that. But when it comes to AI being able to 
be smarter than humans or replace humans on the job, I think 
that is a major concern, especially for a country that's over 
$30 trillion in debt, with an economy that is struggling.
     This is also concerning for education. Learning that 
ChatGPT scored higher than many people on the MBA exam 
administered at Penn's elite Wharton School is definitely 
concerning, especially when thinking about how that could 
affect education. Just recently, ChatGPT was banned in New York 
City schools over cheating concerns. But then you think about 
what it would look like if AI became the teacher, especially 
after the devastation caused to children's education levels by 
kids being taught at home on computers. That was more 
devastating to them.
    The idea that AI could replace software engineers, 
journalists, graphic designers, that is also extremely 
concerning. So, I think these are important conversations to 
have.
     But something I just learned about in researching for this 
hearing is that there are scams where AI is so capable it is 
able to imitate people's voices and images. People have been 
taken advantage of in horrible ways: they have gotten phone 
calls from what they thought was a loved one but was not. The 
voice, really an artificial intelligence mimicking their loved 
one, was calling for help in serious distress, and then they 
got scammed out of a lot of money. That's terrifying, and it's 
concerning that that can happen.
     Another thing that happened recently: San Francisco 
officials voted in December against a controversial measure 
that would have allowed police to deploy robots to use lethal 
force in extreme situations. This came a week after the Board 
of Supervisors had voted to approve the policy allowing what 
people called killer robots. But this is what people think 
about when they think of AI: a robot with the artificial 
intelligence to replace the police officer.
     But the application to the military is what I thought was 
pretty concerning.
     Dr. Schmidt, I wanted to ask you, because I took to Google 
on this issue. I saw a headline that said ``AI's impact on 
warfare will be as big as nuclear weapons.'' And I also saw 
another headline that said ``Eric Schmidt Is Building the 
Perfect AI War-Fighting Machine.'' So, I thought you would be 
the perfect person to ask about this. Could you explain a 
little bit?
    Mr. Schmidt. Let me be clear, that's for the benefit of the 
United States.
    Ms. Greene. Only if it is in the United States' hands, 
though, Dr. Schmidt.
    Mr. Schmidt. And it will be.
     The trends in the military are fundamentally autonomy, 
drones, and intelligence gathering and sensing. The military 
spends most of its time looking at things and trying to analyze 
them. So, in the near term, the benefits to the military are 
profound. It allows the service people we have trained 
exquisitely, who are now watching dumb screens, to use their 
higher faculties and have the computer say: hey, look, this 
tank moved, or, hey, this thing happened over here, can you 
analyze it?
     I think a better framing for your constituents' fear is to 
say that AI will make people much more successful in what they 
do, and that will drive higher incomes and better jobs. I think 
that's the best narrative about AI, at least for the next 20 
years. It is true in the military; it is true for civilians as 
well.
     Ms. Greene. One more question; my time has expired. With 
China and their ability to constantly spy on us and steal our 
technology and information, how could we prevent China from 
stealing this type of artificial intelligence from our 
military? And thank you.
     Mr. Schmidt. Of course. The bad news is that these 
research ideas are in the public domain and international, so 
we can't prevent China from getting them. The Trump and Biden 
Administrations have done a good job of restricting access to 
hardware, which is helpful. So good job, all of you.
    With respect to software, the biggest answer is more 
software people, trained in the West, trained under our values, 
building systems that you as our Representative have some level 
of regulatory control over. When they do it in China, you can't 
pass a law to change that, but you can in the United States.
    Ms. Mace. Thank you.
    All right. I would like to now toss it over to Congressman 
Khanna for five minutes.
    Mr. Khanna. Thank you, Madam Chair, and thank you for your 
leadership, for your bipartisan cooperation and collaboration 
on the quantum bill, and the approach you have taken to work 
across the aisle.
     Dr. Schmidt, I respect your leadership in Silicon Valley. 
There's a paradox in my mind that I would love your insight on. 
On the one hand, DARPA in the Department of Defense gave us the 
internet, as you know, with Vinton Cerf; gave us GPS; gave us 
the drone; gave us the mouse--probably the most innovation in 
the history of the 20th century from defense technology. And 
yet now it seems there is this problem of the adoption of 
innovative technology.
     Why is the model that gave us all of this revolutionary 
technology not working?
    Mr. Schmidt. Thank you, Congressman. And you have really 
helped in a lot of these areas.
     If you go back to Vannevar Bush, the National Science 
Foundation, and DARPA, those are the engines that got us all 
here. We are all here fundamentally because of early decisions 
made by the Federal Government to invest in researchers whose 
work we then built on top of. So, I'm incredibly grateful to 
them.
     In the case of the government, and particularly the 
military, those innovations go into a bureaucracy that is not 
organized in a way to take them. A simple example is software. 
The military is organized around procurements on a 15-year 
cycle with complicated bidding among a small number of 
contractors. That is not how software works. A number of us 
have worked hard to get software treated more as a continuous 
process. But the military, for example, would benefit from a 
large expansion in the number of software people, just fixing 
stuff, making things work, making them smarter. That is a 
simple thing that you could do.
    Mr. Khanna. To that end, what do you think about an actual 
service academy around technology, cyber, AI?
     Mr. Schmidt. We looked hard at creating a military service 
academy when I was doing the AI commission. The military has 
really, really good people in its academies. But because of the 
way military promotions work, you take some brilliant person 
and you make them go stand guard duty for a while, which is 
stupid--sorry to be blunt. It is much better to change the HR 
policies, which the military is trying to do now. In 
particular, Secretary Brown in the Air Force is trying to 
create a technical path to keep these people. That is how you 
solve that problem. And let me give the rest of my time to 
Aleksander.
     Mr. Mafdry. Yes. I just wanted to add, because it is a 
very important question that you ask. I actually happen to co-
lead and co-develop at MIT an executive education class, AI for 
national security leaders, which hosts a number of general 
officers from the Pentagon and other places who come and learn 
about AI. It is a three-day program, and half of it is not 
about AI; it's about organizational management aspects.
     And this is what you recognize: there's a lot of 
frustration in DOD, among your top military leaders, that the 
technology is developed--DARPA did its part, although it should 
do more, particularly in generative language models--but then 
we hit the bureaucracy. There are just a lot of organizational 
problems, kind of silly ones, such that the DOD is practically 
crippled in terms of adoption of AI. So, that is where we need 
the attention.
    Mr. Khanna. Well, I look forward to working with you
    [inaudible] with Representative Mace and Representative 
Gallagher.
     One other question. I'm back home in Silicon Valley, and 
it seems the new thing there is that everyone is doing AI. I'd 
be curious, Dr. Schmidt and Dr. Mafdry: will Silicon Valley 
lead the world in AI? How are we doing compared to China?
     And then one comment from my own version of American 
exceptionalism: it drives me crazy when Europeans lecture us 
about AI and technology. I don't see a European Google, Apple, 
or Tesla. They say they are going to innovate in policy; are 
they going to also innovate in technology? How are we doing 
compared to Europe as well?
    Mr. Schmidt. My cynical answer about Europe is that Europe 
is going to lead in regulation and, therefore, not lead in 
anything else. Their efforts do not appear to be successful, as 
you have pointed out.
    The reason we are so excited about AI is that anything that 
makes humans smarter and makes algorithms smarter and makes 
discoveries quicker is a horizontal technology that is 
transformative. The opportunities to make basic advancements 
outside of language models, right, are profound in terms of 
science, materials, plastics, every kind of logistics, every 
kind of analytical problem, as has been summarized by the 
panel.
    So, I think that AI is here to stay. It is the next big 
wave. I don't know when it will end, but we are still very 
early. Remember, we still don't understand exactly how these 
algorithms work. We also don't understand how big the models 
have to be. At some point, we'll know. But we are not anywhere 
close to being able to answer those questions.
     Mr. Mafdry. If I can just add very quickly, because you 
asked the question about Silicon Valley: Silicon Valley is 
doing great. They will do a great job. They are clearly 
harnessing this progress. But we as a country should not leave 
progress on this strategically important technology just to 
Silicon Valley. Again, they will do great, but we should be 
doing more, and the U.S. Government should be doing more.
    Mr. Schmidt. Speaking as a professor at MIT.
    Mr. Mafdry. Yes. But I like Silicon Valley.
     Ms. Mace. In closing this afternoon, first of all, I just 
want to thank all of our panelists for your time and your 
talent and everything that you have shared with us today.
     This will be, Congressman Mfume, the first of a series of 
hearings that I hope we'll have on AI. I don't think that we 
are ready for what is going to happen in a very short period of 
time. And I think, if it is not happening already, it will be 
within the next five years that AI will be programming AI--and 
then what's next?
     And so, this was a great first discussion to start the 
conversation about what we need to be talking about in this 
regard.
     So, in closing, I want to thank all of our panelists once 
again for your insightful testimony today. You have given us a 
tremendous amount to think about. AI was created by humans, but 
that doesn't mean it is going to be easy for all of us, 
especially up here on the Hill, to grasp what is before us and 
what is imminently coming. We appreciate the panel's expertise 
and ability to shed light on the state of the science and the 
broader societal implications that policymakers must consider.
    And I would like to yield to the Ranking Member, 
Congressman Connolly, for your closing remarks.
     Mr. Connolly. Thank you so much, Madam Chairwoman.
     I found this an intriguing conversation, but one that 
leaves me wanting more. Like Mr. Mfume, I think we have opened 
the door to a lot of further in-depth exploration, hopefully by 
this Subcommittee and by the Congress, because there are lots 
of issues we have to face.
     And while you may be right, Dr. Schmidt, in dismissing the 
Europeans as regulators but not innovators, on the other hand, 
given what we heard from Ms. Hickok and Dr. Mafdry about the 
need for some Federal intervention here, there have to be 
guidelines and guideposts so that we are off on the right foot 
and not facing profound issues later on, when the technology is 
advanced and we never either anticipated it or addressed it. 
Maybe there are things we can learn from the Europeans in the 
regulatory guidelines--either things to do or things not to do.
     But at any rate, I just think there is a lot more for us 
to explore, and I really appreciate this being the first of a 
series of hearings.
    Thank you, Madam Chairwoman. I yield back.
    Ms. Mace. Thank you. And I look forward to working with 
everyone on both sides of the aisle on this issue. It is very 
important.
    With that and without objection, all Members will have five 
legislative days within which to submit materials and to submit 
additional written questions for the witnesses which will be 
forwarded to the witnesses for their response.
    If there is no further business, without objection, my 
first Subcommittee stands adjourned.
    [Whereupon, at 3:57 p.m., the Subcommittee was adjourned.]

                                 [all]