[Senate Hearing 119-143]
[From the U.S. Government Publishing Office]
S. Hrg. 119-143
WINNING THE AI RACE: STRENGTHENING U.S.
CAPABILITIES IN COMPUTING AND INNOVATION
=======================================================================
HEARING
before the
COMMITTEE ON COMMERCE,
SCIENCE, AND TRANSPORTATION
UNITED STATES SENATE
ONE HUNDRED NINETEENTH CONGRESS
FIRST SESSION
__________
MAY 8, 2025
__________
Printed for the use of the Committee on Commerce, Science, and
Transportation
Available online: http://www.govinfo.gov
______
U.S. GOVERNMENT PUBLISHING OFFICE
61-426 PDF WASHINGTON : 2025
SENATE COMMITTEE ON COMMERCE, SCIENCE, AND TRANSPORTATION
ONE HUNDRED NINETEENTH CONGRESS
FIRST SESSION
TED CRUZ, Texas, Chairman
MARIA CANTWELL, Washington, Ranking Member

JOHN THUNE, South Dakota
ROGER WICKER, Mississippi
DEB FISCHER, Nebraska
JERRY MORAN, Kansas
DAN SULLIVAN, Alaska
MARSHA BLACKBURN, Tennessee
TODD YOUNG, Indiana
TED BUDD, North Carolina
ERIC SCHMITT, Missouri
JOHN CURTIS, Utah
BERNIE MORENO, Ohio
TIM SHEEHY, Montana
SHELLEY MOORE CAPITO, West Virginia
CYNTHIA LUMMIS, Wyoming

AMY KLOBUCHAR, Minnesota
BRIAN SCHATZ, Hawaii
EDWARD MARKEY, Massachusetts
GARY PETERS, Michigan
TAMMY BALDWIN, Wisconsin
TAMMY DUCKWORTH, Illinois
JACKY ROSEN, Nevada
BEN RAY LUJAN, New Mexico
JOHN HICKENLOOPER, Colorado
JOHN FETTERMAN, Pennsylvania
ANDY KIM, New Jersey
LISA BLUNT ROCHESTER, Delaware
Brad Grantz, Republican Staff Director
Nicole Christus, Republican Deputy Staff Director
Liam McKenna, General Counsel
Lila Harper Helms, Democratic Staff Director
Melissa Porter, Democratic Deputy Staff Director
Jonathan Hale, Democratic General Counsel
C O N T E N T S
----------
Hearing held on May 8, 2025
Statement of Senator Cruz
Statement of Senator Cantwell
Statement of Senator Sheehy
Statement of Senator Moreno
Statement of Senator Klobuchar
Statement of Senator Schatz
Statement of Senator Budd
Statement of Senator Kim
Statement of Senator Schmitt
Statement of Senator Hickenlooper
Statement of Senator Curtis
Statement of Senator Duckworth
Statement of Senator Young
Statement of Senator Blunt Rochester
Statement of Senator Moran
Statement of Senator Lujan
Statement of Senator Lummis
Statement of Senator Rosen
Statement of Senator Sullivan
Statement of Senator Markey
Statement of Senator Peters
Statement of Senator Fetterman

Witnesses

Sam Altman, Co-Founder and Chief Executive Officer, OpenAI
    Prepared statement
Dr. Lisa Su, Chief Executive Officer and Chair, Advanced Micro Devices (AMD)
    Prepared statement
Michael Intrator, Co-Founder and Chief Executive Officer, CoreWeave
    Prepared statement
Brad Smith, Vice Chair and President, Microsoft Corporation
    Prepared statement

Appendix

Response to written questions submitted to Sam Altman by:
    Hon. Roger Wicker
    Hon. Marsha Blackburn
    Hon. Maria Cantwell
    Hon. Amy Klobuchar
    Hon. Brian Schatz
    Hon. Edward Markey
    Hon. Tammy Baldwin
    Hon. Jacky Rosen
    Hon. Lisa Blunt Rochester
Response to written questions submitted to Dr. Lisa Su by:
    Hon. Todd Young
    Hon. Maria Cantwell
    Hon. Amy Klobuchar
    Hon. Brian Schatz
    Hon. Jacky Rosen
    Hon. John Fetterman
Response to written questions submitted to Michael Intrator by:
    Hon. Maria Cantwell
    Hon. Amy Klobuchar
    Hon. Brian Schatz
    Hon. Edward Markey
    Hon. Jacky Rosen
Response to written questions submitted to Brad Smith by:
    Hon. Roger Wicker
    Hon. Todd Young
    Hon. Maria Cantwell
    Hon. Brian Schatz
    Hon. Edward Markey
    Hon. Tammy Baldwin
    Hon. Jacky Rosen
    Hon. John Fetterman
    Hon. Lisa Blunt Rochester
WINNING THE AI RACE: STRENGTHENING U.S.
CAPABILITIES IN COMPUTING AND INNOVATION
----------
THURSDAY, MAY 8, 2025
U.S. Senate,
Committee on Commerce, Science, and Transportation,
Washington, DC.
The Committee met, pursuant to notice, at 10:02 a.m., in
room SR-253, Russell Senate Office Building, Hon. Ted Cruz,
Chairman of the Committee, presiding.
Present: Senators Cruz [presiding], Moran, Sullivan, Young,
Budd, Schmitt, Curtis, Moreno, Sheehy, Lummis, Cantwell,
Klobuchar, Schatz, Markey, Peters, Duckworth, Rosen, Lujan,
Hickenlooper, Fetterman, Kim, and Blunt Rochester.
OPENING STATEMENT OF HON. TED CRUZ,
U.S. SENATOR FROM TEXAS
The Chairman. Good morning. The Senate Committee on
Commerce, Science, and Transportation is called to order.
Welcome to our witnesses. Thank you for joining us this
morning.
In the last two years AI has brought the United States and
the world to a critical inflection point. AI may be a
technology as transformative as the Internet or even more so.
It has unleashed a new global industrial revolution with
the potential to unlock opportunities that improve our quality
of life, create jobs, and stimulate economic growth.
The country that leads in AI will shape the 21st century
global order. As a matter of economic security, as a matter of
national security, America has to beat China in the AI race.
China has made AI central to its national strategy and
China aims to lead the world in AI by 2030, investing heavily
in AI adoption across industries like manufacturing and
defense.
In this race the United States is facing a fork in the
road. Do we go down the path that embraces our history of
entrepreneurial freedom and technological innovation or do we
adopt the command and control policies of Europe?
I would suggest that Congress draw on the lessons we can
learn from the dawn of the internet. In the early 1990s
Washington embraced the Internet and explicitly adopted a style
of regulation that was intentionally and decisively a light
touch.
Congress chose to deregulate under the Telecommunications
Act of 1996 while President Clinton pursued tariff agreements
and treaties that protected America's intellectual property and
technological exports.
Further, in 1998 Congress enacted a 10-year Internet tax
moratorium so that state laws would not balkanize and stymie
the promise of e-commerce.
The results of these decisions were extraordinary. By 2000,
the United States had recorded five straight years of historic
highs in productivity gains and investment growth.
Hundreds of thousands of new jobs were created and the
United States became a top tech exporter with massive sums of
private investment pouring into the U.S. digital economy.
By contrast, EU countries pursued a series of heavy-handed
regulations that proved enormously costly. In 1993 the United
States and Europe had economies virtually identical in size.
Today, the American economy is more than 50 percent larger
than Europe's. The drivers of that are tech and the shale
revolution. Those two comprise virtually the entirety of that
massive growth over Europe.
According to one EU Commission report only 6 percent of
global AI startup funding flows to EU firms--6 percent. That is
one-tenth of the amount that is going to American companies.
The report directly blames this yawning chasm on the EU's
nasty regulatory approach. And, yet, the Biden administration
for inexplicable reasons tried to align AI policy with the EU
to adopt their failed policies.
President Biden's sweeping AI executive order, the longest
executive order in American history, cast AI as dangerous and
opaque, laying the groundwork for audits, for risk assessments,
and regulatory certifications.
Biden's approach inspired similar efforts in state
legislatures across the country, threatening to burden
startups, developers, and AI users with heavy compliance costs.
Some of my colleagues suggest that a friendlier version of
the Biden approach makes sense. They want a testing regime to
guard against AI, quote, ``discrimination'' and have government
agents provide, quote, ``guidance documents,'' seemingly
something out of Orwell, that will usher in what they call best
practices, as if AI engineers lack the intelligence to
responsibly build AI without the bureaucrats.
Many in the industry foolishly have supported such
paternalism. Harmful regulations take many forms. Biden's
misguided midnight AI diffusion rule on chips and model weights
would have crippled American tech companies' ability to sell AI
to the world.
The Biden plan would have handed over key markets to China.
We should want foreign countries, particularly our allies, to
buy American.
I vocally opposed this rule for months and, indeed, the
Ranking Member and I together urged the Biden administration
not to adopt it and I am very pleased that President Trump has
now confirmed he plans to rescind it.
All of this busybody bureaucracy, whether Biden's
industrial policy on chip exports or industry and regulator-
approved guidance documents, is a wolf in sheep's clothing.
To lead in AI the United States cannot allow regulation,
even the supposedly benign kind, to choke innovation or
adoption.
American dominance in AI depends on two factors: innovation
and adoption. Innovation drives breakthroughs in global
competitiveness. Adoption ensures that these tools empower
American workers and businesses, enabling the United States to
become the world's leading adopter and exporter of AI.
Thankfully, President Trump has, largely, reversed Biden's
misguided AI agenda. In fact, I think AI was a sleeper issue in
this last election.
Americans wanted to see President Trump and Republicans
and, indeed, all senators champion AI policies focused on
innovation and adoption.
The contrast has been astounding. This year, there have
been over $1 trillion of new AI projects including major
investments in Texas like the CoreWeave data center in Plano
and the $500 billion project Stargate in Abilene by OpenAI and
Oracle and others.
Adopting a light touch regulatory style for AI will require
Congress to work alongside the President just as Congress did
with President Clinton. We need to advance legislation that
promotes long-term AI growth and innovation.
That is why I will soon release a new bill that creates a
regulatory sandbox for AI modeled on the approach taken by
Congress and President Clinton at the dawn of the Internet that
will remove barriers to AI adoption, prevent needless state
overregulation, and allow the AI supply chain to rapidly grow
here in the United States.
That is how we will accelerate economic growth, secure U.S.
dominance in AI, and beat China.
And with that, I turn to Ranking Member Cantwell.
STATEMENT OF HON. MARIA CANTWELL,
U.S. SENATOR FROM WASHINGTON
Senator Cantwell. Thank you, Mr. Chairman.
Thank you for this hearing, and welcome to the witnesses
before us--Mr. Altman, Dr. Su, Mr. Intrator, and Mr. Smith.
It is a great pleasure to have all of you here, but it is
an especially prideful moment for the Pacific Northwest to have
Mr. Smith and Mr. Altman here, both representing an OpenAI
approach.
By that I mean an approach where we want to win against
China and a closed system by making sure that what is developed
here in the United States and around the globe is an
architecture where the United States wins and is open.
To do that we need to focus in winning on computing power,
on algorithms, and on robust data sources. All of that will be
key.
Personally, I believe a continued investment in NSF helps
in all of those areas as a good public-private partnership with
the industry that is represented here today.
I am so proud that we passed the CHIPS and Science Act,
because the CHIPS and Science Act also set a foundation for
investing in the United States of America and bringing more of
the supply chain back to the United States of America to build
on a future leadership that we already have, I believe, in the
computing power.
But we also need to understand that we have to move forward
on the CHIPS Act like the University of Washington $10 million
grant on multi-design sets for chips, the very large-scale
integrated designs I am sure that Dr. Su will tell us about
today.
But the fact that the United States has to continue to lead
on the future designs and the implementation of that also
requires us to be very smart about data centers, about sources
of electricity, and how we are going to build that supply that
could be up to 12 percent of electricity demand in the very
near future.
So how do we do that? I have noticed in each of your
testimonies you all explain this, but I am also very proud that
Microsoft has already signed an agreement with one company,
Helion Energy, a fusion power company in Everett, Washington,
for a power source supply--maybe Mr. Altman in his testimony
will talk about this--but they hope to get an energy source
from that in the very near future.
So, clearly, the United States needs to lead on electricity
and its development. Mr. Smith, I very much appreciate your
testimony's emphasis on the fact that the United States of
America needs hundreds of thousands of new electricians,
something we should all want to get behind.
Having the electricity, the electricians, and the data
centers here in the United States and in other places will be
key.
While I want to see us move forward, as the Chairman said--
we signed a letter saying we needed a broader support for
export controls--I want to be clear.
Export controls are not a trade strategy. They are not a
back pocket issue that the President of the United States whips
out in trade negotiations.
We are going to move fast because we are going to set
standards. I believe those standards should be encouraging very
broad distribution of U.S.-manufactured and -made AI chips and
technology, and that we are asking our partners overseas to
comply with the rules that we establish, things like making
sure that there is no circumvention of the supply that somehow
gets into China's hands, making sure that we have access, and
making sure that we can verify on that, and also making sure
that U.S. data companies and cloud-based companies are allowed
to be in that market.
We should not be going to markets overseas only to have
them tell us that organizations with cloud services from the
U.S. would not be allowed. This, I believe, would be a robust
initiative on getting U.S. AI chips and U.S. AI open systems
dominated around the globe.
Why do we need to move fast? We need to move fast because
if we do not we are looking at another Huawei, another instance
where the United States is behind and also saying we should
tear out this system that now we do not like for lots of
reasons and back door policies.
So I am all for winning. That is why we passed the CHIPS
and Science Act. I am all for winning and that is why we have
passed seven bills out of this committee last year that, kind
of, got stuck in the lame duck.
I think the Chairman of the Committee was not ready to move
forward in negotiations with the House and Senate on those
seven bills. But those bills included a bill between myself and
Senator Young on standards at NIST--the National Institute of
Standards and Technology--which I think we still need to do.
Senator Moran and I had one on education, scholarships, and
small business, and there was the bill by my colleagues here,
Senators Klobuchar and Thune, which was also related to the
NIST standards.
So we had an opportunity a year ago to move fast. We did
not do it. So let us do this now. Let us get together and
figure this out. The faster the United States moves now, the
better. I like this great Paul Romer quote, that collaboration
is the next phase of innovation.
If we do not collaborate here, if we throw down on politics
instead of getting the policy right, we will not move fast. Let
us allow these people to do what they do best and let us make
sure the United States has the right policies in place so that
our OpenAI standard wins the day.
Thank you, Mr. Chairman.
The Chairman. Thank you.
I would now like to introduce our witnesses for today. Each
of our witnesses and their companies represent critical parts
of the AI infrastructure, hardware, and software supply chain.
Our first witness is Sam Altman, the Co-Founder and CEO of
OpenAI. OpenAI is one of the world's most advanced AI
companies, known best for its ChatGPT product.
Our second witness is Lisa Su, the Chair and CEO of
Advanced Micro Devices--AMD. AMD develops high-performance
processors, graphic chips, and AI accelerators that power
artificial intelligence, and Dr. Su is also a Texan.
Our third witness today is Michael Intrator, the CEO and
Co-Founder of CoreWeave, an AI hyperscaler. CoreWeave is the
world's largest purpose built AI cloud platform.
And our final witness is Brad Smith, the Vice Chair and
President of Microsoft. I believe everyone is familiar with his
company.
Mr. Altman, you are recognized for your opening statement.
If you could turn on the volume.
Mr. Altman. Sorry about that.
The Chairman. And I do enjoy telling techies how to operate
the tech.
Mr. Altman. It is pretty embarrassing that I could not
figure that out.
STATEMENT OF SAM ALTMAN, CO-FOUNDER AND CHIEF EXECUTIVE
OFFICER, OPENAI
Mr. Altman. Anyway, thank you, Chairman. Thank you, Ranking
Member Cantwell. Thank you, all senators and fellow panelists.
It is a real honor to be here.
I was here about two years ago and at that time ChatGPT had
recently launched. It was a curiosity in the world. People were
not sure what it was going to mean, what it was going to be
used for.
Today, we have made significant progress. ChatGPT is used
by more than 500 million people a week. I just saw yesterday
that according to Similarweb it is now the fifth biggest
website on the Internet globally and growing very quickly, but
most of all, it is being used in really important ways.
It is significantly increasing productivity. We hear
scientists say they are two or three times more productive than
they could be before.
We hear people that are getting medical advice or learning
in ways they could not before and it is really--it is no longer
this thing that was going to come in the future but it is here
now and people are really using it. We are very proud to be one
of the leaders of this.
We are very proud that America is leading in AI so
significantly and I think that is critical. What Senator Cruz
said about the importance of innovation in America and that we
have the--what happened with the Internet we have happen again.
I believe this will be at least as big as the internet,
maybe bigger. That needs to happen. For that to happen
investment in infrastructure is critical.
I believe the next decade will be about abundant
intelligence and abundant energy, making sure that those--that
America leads in both of those, that we are able to usher in
these dual revolutions that will change the world we live in I
think in incredibly positive ways. It is critical.
I got to go to Abilene, Texas yesterday where we are
building out what will be the largest AI training facility in
the world. It is coming along beautifully. Super exciting to
see. We need a lot more of that.
There is a whole sort of AI factory, like, a supply chain
of energy, chips, standing up data centers, building the racks
and more.
We have got to do that really well in the U.S. so that we
can continue to innovate, continue to lead, and continue to,
sort of, shape this revolution.
Speaking of that, I was very inspired by what Chairman Cruz
said so I would like to deviate from script here and tell a
story. In my prepared written testimony I covered the basics.
So if it is OK I would love to tell you a story. I grew up
in St. Louis and I was a computer nerd, and it was the time of
the Internet boom and I thought it was the coolest thing ever.
We kind of lived in this beautiful, old brick house in this
suburb of St. Louis and I lived in the attic, and I had this
computer and I would stay up all night and I would learn to
program, and I got to kind of use the Internet and it was,
like, a crazy time of tons of innovation. All sorts of stuff
was happening.
It was amazing and it was all happening here. All the
Internet companies were in the U.S. I used a Mac that was built
here. I used chips that were started, you know, near where I
now live.
And I learned about computers. I thought it was the coolest
thing ever, and I can draw a straight line from that experience
to founding OpenAI and getting to work on companies like
Helion.
The spirit of American innovation and support of
entrepreneurship. I do not think the Internet could have
happened anywhere else and if that did not happen I do not
think the AI revolution would have happened here.
I am a child of the Internet revolution. I have the great
honor to be one of the many parents of the AI revolution, and
I think it is no accident that that is happening in America
again and again and again.
But we need to make sure that we build our systems and that
we set our policy in a way where that continues to happen.
I think this is magic. I do not want to live in Europe
either. I think America is just an incredible and special
thing, and it will not only be the place where the AI
revolution happens but all the revolutions after.
I was home visiting St. Louis recently. Drove by our old
house and I kind of, like--it was at night and I looked up and
in that, like, top floor window the light was on and I thought,
you know, hopefully there is some kid in there staying up late
at night playing with ChatGPT, figuring out how he or she is
going to start whatever company comes next and whatever the
next thing is after AI will happen here, too.
That is, to me, the magic of this country. It is incredibly
personally important and I hope it keeps going.
Thank you very much for having me.
[The prepared statement of Mr. Altman follows:]
Prepared Statement of Sam Altman, Co-Founder
and Chief Executive Officer, OpenAI
Thank you, Chairman Cruz, Ranking Member Cantwell, and Members of
the Committee.
I'm Sam Altman, Chief Executive Officer of OpenAI. It is an honor
to return to the Senate and share our view of where AI is today and
where we see it going.
OpenAI is not a normal company and never will be.
Our mission is to ensure that artificial general intelligence--
AGI--benefits all of humanity. AGI is a weakly defined term, but
generally speaking we mean it to be a system that can tackle
increasingly complex problems, at human level, in many fields. When we
formed OpenAI more than 10 years ago, we stared at each other around a
kitchen table, wondering how to get started. AI then was a niche tool
for researchers, not the general public.
In 2016, Chairman Cruz convened his first AI hearing, and my co-
founder, Greg Brockman, testified that AGI models were probably between
10 and 100 years away. Today, the science of AI has advanced so
significantly that we are now confident that we'll reach that milestone
during President Trump's time in office.
Throughout history, people have crafted tools to scale our
abilities--and we believe AGI will be the most powerful tool ever
created. It will enable people to build incredible things for each
other and improve their quality of life.
But AGI's full potential won't be realized unless it's safe. The
same capabilities that will enable AGI to support scientific
breakthroughs and accelerate human progress will also create new risk
areas. That's why we red-team relentlessly and lead the industry in
transparency.
Ultimately, I believe the good will outweigh the bad by orders of
magnitude, and that AGI will help bring us into what I call the
Intelligence Age--an era when everyone's lives can be better than
anyone's life today.
This future can be almost unimaginably bright, but only if we take
concrete steps to ensure that an American-led version of AI, built on
democratic values like freedom and transparency, prevails over an
authoritarian one.
The stakes could not be higher--and Congress is right that the
United States must lead the way.
At OpenAI, we're committed to the path of democratic AI, and we are
humbled that ChatGPT is being used by more than 500 million people each
week to create, discover, and achieve breakthroughs that were once out
of reach.
America is a nation of innovators, and we want to supercharge
people's ability to use our technology to make their lives better.
We want to open source very capable models.
We want to give our users a great deal of freedom in how they use
our tools, and let them personalize ChatGPT to best meet their needs.
We want to build a brain for the world and make it super easy for
people to use it, with common-sense restrictions to prevent harm.
And the truth is that AI is already changing the world for the
better.
Scientists at the U.S. National Laboratories--including Oak Ridge
National Laboratory, Los Alamos National Laboratory, Argonne National
Laboratory, the Princeton Plasma Physics Laboratory, and the Pacific
Northwest National Laboratory--are using our reasoning models to
accelerate breakthroughs in areas like energy.
In Pennsylvania, ChatGPT is helping state employees do
administrative tasks more quickly, freeing up more time to improve the
delivery of public services.
And universities in states like Texas, North Carolina, and
California are putting ChatGPT in the hands of students and educators
to build an AI-ready workforce.
AI will be vitally important to ensuring that today's students are
ready for tomorrow's jobs. In the US, more than one-third of college-
aged young people use our models, mainly for learning and tutoring.
Around the world, most ChatGPT users are under age 35.
We're proud to offer free access to a technology that is doing so
much for so many people, but AI's biggest gains are still to come.
Our work at OpenAI suggests that as AI advances, progress
accelerates and becomes increasingly affordable, as reflected in these
three scaling principles:
Investing more in AI will continue to make it better and more
capable. The intelligence of an AI model roughly equals the log of the
resources used to train and run it. Until recently, scaling progress
has primarily come from training compute and data, but we have
shown how to make intelligence scale from inference compute, as
well. The
scaling laws that predict these gains are incredibly precise over many
orders of magnitude. It follows that further investment will lead to
further gains, and further benefits to society: We believe that the
socioeconomic value of linearly increasing intelligence is super-
exponential in nature.
The cost to use a given level of AI capability falls by about 10x
every 12 months, and lower prices lead to much more use. We saw this in
the change in token cost between GPT-4 in early 2023 and GPT-4o in mid-
2024, where the price per token dropped about 150x in that time period.
Moore's Law predicted that the number of transistors on a microchip
would double roughly every two years; the decrease in the cost of using
AI is even more dramatic.
The amount of time it takes to improve an AI model keeps
decreasing. Put another way, AI models are catching up with human
intelligence at an increasing rate. The typical time it takes for a
computer to beat humans at a given benchmark has fallen from 20 years
after the benchmark was introduced, to five years, and now to one to
two years--and we see no reason why those advances will stop in the
near future.
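    [Editor's note: The cost trend described in the testimony above--a
roughly 10x drop in the price of a given level of AI capability every 12
months--can be sketched with simple arithmetic. The following is an
illustrative back-of-the-envelope calculation based only on the figures
in the prepared statement; the function name is our own, not from the
record.]

```python
# Sketch of the testimony's claimed cost trend: the cost of a given
# level of AI capability falls about 10x every 12 months.

def cost_after(initial_cost: float, months: float) -> float:
    """Projected cost after `months`, assuming a 10x drop per 12 months."""
    return initial_cost / (10 ** (months / 12.0))

if __name__ == "__main__":
    # Under this trend, $1.00 of capability today costs ~$0.10 in a year
    # and ~$0.01 in two years.
    print(cost_after(1.00, 12))          # 0.1
    print(cost_after(1.00, 24))          # 0.01
    # Over the ~18 months between GPT-4 (early 2023) and GPT-4o
    # (mid-2024), the generic 10x/year trend alone implies roughly a
    # 30x decline; the testimony reports ~150x for that period, i.e.,
    # faster than the baseline trend.
    print(round(10 ** (18 / 12.0), 1))   # 31.6
```

Note that the ~150x figure cited for GPT-4 to GPT-4o exceeds what the
general 10x-per-year trend would predict over that interval, which is
consistent with the testimony's point that declines can be even steeper
for particular capability levels.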
So what does that mean practically?
I believe we'll see many major advances over the next three years,
but here are some examples.
In 2025, we will release AI-powered tools that can handle
sophisticated software engineering, and AI agents that can handle real-
world tasks like making doctor's appointments and helping to run a
business. These agents will be super assistants who can collaborate
with workers in every industry, doctors in all specialties, and
scientists in every field of research.
In 2026, AI may unlock a new wave of scientific breakthroughs by
designing experiments to tackle America's toughest challenges in
climate, health, and national security.
And in 2027, AI-powered robotics could push AI-driven productivity
gains into the physical world, handling routine tasks so people can
spend more time on the work and activities they enjoy.
As AI systems become more capable, people will want to use them
even more. Meeting that demand requires more chips, training data,
energy, and supercomputers.
Infrastructure is destiny, and we need a lot more of it.
Earlier this year I joined President Trump and the CEOs of Oracle
and SoftBank to announce the Stargate Project, a $500 billion dollar
investment in American AI infrastructure.
Since launching Stargate, governments around the world have asked
about bringing AI infrastructure to their countries and how we can
ensure that democratic AI systems become the global standard.
In response, we're offering a new kind of partnership--OpenAI for
Countries--to help these countries build up their data center capacity
and ecosystems of AI start-ups and developers. In exchange, these
countries would invest in the Stargate Project--and thus in continued
US-led AI leadership and a global, growing network effect for
democratic AI.
To close on a personal note, I grew up in St. Louis, part of a
close-knit and competitive family that played 20 Questions to guess
what we were having for dessert. When I was eight, my parents bought me
a Mac LC II. The computer was a literal dividing line in my life. There
was the time before I had a computer, and there has been the time
after. I believe that AI will play a similarly formative role for kids
across the country, including my own.
I want to thank Chairman Cruz, Ranking Member Cantwell, and the
members of this Committee for your continued leadership on AI. I
appreciate the opportunity to testify today and look forward to
answering your questions.
The Chairman. Thank you.
Dr. Su.
STATEMENT OF DR. LISA SU, CHIEF EXECUTIVE OFFICER AND CHAIR,
ADVANCED MICRO DEVICES (AMD)
Ms. Su. Chairman Cruz, Ranking Member Cantwell, members of
the Committee, it is a real honor to be here on such an
important topic.
    I am Chair and CEO of AMD. We are a U.S.-headquartered
semiconductor company founded 56 years ago, and we build high-
performance computing chips for the modern economy.
    Every day, billions of people rely on our products and
services powered by our technologies, but our chips are also
extremely important to supporting critical missions, including
powering defense systems and secure communications as well as
enabling breakthrough scientific research.
I have to say our proudest moments, though, are when we see
amazing public-private partnerships and our work in
supercomputing is an example of that.
Through more than a decade of partnership with the
Department of Energy, AMD now powers the two fastest
supercomputers in the world, one that is housed at Oak Ridge
National Labs that was put into place in 2021 and the other at
Lawrence Livermore National Labs that was just recently put
into commission late last year.
These systems are really critical from a national
infrastructure standpoint and solve many, many large research
issues as well as national security and scientific leadership.
Now, in terms of AI, you know, there is so much that has
been stated about AI. I really want to thank Chairman Cruz and
Ranking Member Cantwell for having this hearing. I think it is
a wonderful opportunity to talk about how we win.
AI is truly the most transformative technology of our time.
The United States leads today, but what I would like to say is
it is a race. Leadership is absolutely not guaranteed. It is a
global race that will shape the outcome of national security
and economic prosperity for many decades to come.
Now, maintaining our lead actually requires excellence at
every layer of the stack so I am really honored to be here with
my panelists as well.
We have deep partnerships with Microsoft and OpenAI that
demonstrate how you need silicon, you need software, you need
systems and, really, the application layer to be successful.
Now, in terms of what to do, I thought about what would be
the most important things to say today and I put them in five
categories.
I think the first and probably the foremost is we must
continue to run faster. This is a race and the race does not
stand still. Nobody in the world stands still.
We lead today because of the bold decisions that we have
made and because of the innovation economy that we have. But we
need to continue to run faster and that means ensuring that we
have computing available.
I think Sam's story about Abilene is an excellent example
of how when you allow the computing infrastructure to expand at
the rate and pace that the private sector wants, you actually
make tremendous progress.
I would also like to mention the importance of open
ecosystems. I think open ecosystems are really a cornerstone of
U.S. leadership and that allows, you know, frankly, ideas to
come from everywhere and every part of the innovation cycle,
reducing barriers to entry and strengthening security as well
as creating, frankly, a competitive marketplace for ideas.
Third, we are very happy to see the focus on a robust
domestic supply chain. For us in the semiconductor world we
used to not get so much attention.
Now we get a lot of attention thanks to the importance of
chips, and the fact is we need more manufacturing in the U.S.
The efforts so far have made good progress but there is a lot
more that can be done and that should be done in public-private
partnership.
Fourth, we must invest in talent. Frankly, the United
States should be the best place to study AI, to work in AI, to
really move forward all of the innovations that we need and I
think, again, this can also be done in significant public-
private partnership.
And then fifth, of course, in the area of export controls
we must--we totally understand as an industry the importance of
national security and that is, you know, without--that goes
without saying as a U.S. company.
But we also want to ensure--as Chairman Cruz and Ranking
Member Cantwell stated, it is important to have widespread
adoption of U.S. technologies. We lead today because we have
the best technology.
However, if we are not able to fully have our technology
adopted in the rest of the world there will be other
technologies that will come to play. They may not be as good as
we are today but, frankly, usage really spurs innovation and
this is something that we certainly need to work with in
public-private partnership.
And I would, frankly, end by saying, you know, like Sam I
had a computer when I was growing up. I grew up in New York. I
am a little older than Sam so my first computer was a Commodore
64 and then I graduated to the Apple II. But the fact is this
is the best place to do computing innovation in the world. We
want it to stay that way with, really, a very rich and broad
ecosystem.
So thank you again for the opportunity to be here today.
[The prepared statement of Ms. Su follows:]
Prepared Statement of Dr. Lisa Su, Chair and CEO,
Advanced Micro Devices Inc. (AMD)
Chairman Cruz, Ranking Member Cantwell, Members of the Committee,
thank you for the opportunity to speak with you at such a consequential
moment.
I am Chair and CEO of AMD, a U.S.-headquartered semiconductor
company founded 56 years ago. We build high-performance computing chips
that power the modern economy.
Every day, billions of people rely on products and services powered
by AMD technologies. AMD chips also play a vital role advancing many of
our Nation's most critical missions, from powering defense systems and
secure communications to enabling breakthrough scientific research,
medical innovations, and quantum computing.
Our work in supercomputing showcases the full strength of AMD's
innovation and public impact. Through more than a decade of partnership
with the Department of Energy, AMD now powers the world's two fastest
supercomputers: Frontier, which went into operation at Oak Ridge
National Labs in 2021, and El Capitan, which went into operation at
Lawrence Livermore National Labs late last year. These systems are
critical infrastructure for U.S. national security and scientific
leadership, including the latest advances in drug discovery, medical
research, climate research, hypersonic flight, and even training future
generations of more capable AI models.
Today we are here to talk about AI. No technology today better
demonstrates the power of high-performance computing than AI.
AI is the most transformative technology of our time. The United
States leads today, but leadership is not guaranteed. This is a global
race, and the outcome will shape economic growth, national security,
and technological influence for decades to come.
Maintaining our lead requires excellence at every layer of the AI
stack. AMD's collaborations with Microsoft and OpenAI demonstrate how
industry leaders can work together across hardware, software, and
systems to advance state-of-the-art AI.
Underneath every model, every breakthrough, and every application
is massive amounts of computing power. If we want to lead in AI, we
must lead in the infrastructure that powers it. That requires urgency
across five national priorities.
First, we must keep running faster. America leads when it moves
fast and thinks big. From semiconductors to the internet, speed has
turned bold American ideas into global industries. In AI, speed
requires accelerating chip and system innovations that deliver more
performance with greater efficiency. It also means making AI compute
infrastructure readily available across the industry. This will require
rapidly building data centers at scale and powering them with reliable,
affordable, and clean energy sources. Moving faster also means moving
AI beyond the cloud. To ensure every American benefits, AI must be
built into the devices we use every day and made as accessible and
dependable as electricity. From vehicles and sensors to PCs and medical
tools, bringing the power of AI to every enterprise and every American
will enable faster decisions, smarter systems, and better services
where they matter most. We have the technology, intellectual property,
and talent to do that today, but it is a global race and we must keep
accelerating our pace.
Second, we must champion open ecosystems. Open standards have long
been a cornerstone of U.S. leadership. The same approach must guide our
path with AI as well. Open ecosystems allow hardware, software, and
models from different vendors to work together. This accelerates
innovation, reduces barriers to entry, strengthens security through
transparency, and creates healthier, more competitive markets.
Third, we must build a robust domestic supply chain for advanced
semiconductor manufacturing and packaging. AI leadership depends on the
ability to build complete, integrated systems. That means ensuring we
have domestic capabilities in both wafer manufacturing at the most
advanced nodes and next-generation packaging technologies as well as
the advanced system capabilities needed to bring it all together. AMD
is proud to be one of the first partners producing leading-edge chips
at TSMC's new fab in Arizona. The domestic semiconductor manufacturing
projects announced to date represent meaningful progress, but there is
much more that we can do. This is an area where strong public-private
partnerships are critical. The entire semiconductor industry is aligned
on the need to work together and partner with the government to
significantly scale U.S. chip production and advanced packaging
capabilities here at home.
Fourth, we must invest in talent and ensure our national strategy
for STEM education, workforce training, and immigration supports
sustained AI leadership. The private sector can certainly do more,
including expanding university partnerships, investing in reskilling
programs, and developing the cross-disciplinary talent required for
success. We should incentivize companies to increase their most
critical AI R&D efforts here at home and ensure our immigration
policies attract and retain the world's best AI talent. We should make
America the absolute best place for AI talent in the world.
Fifth, we must balance the need for national security with the
imperative to enable the widespread adoption of U.S. technologies. As
the government considers policies like AI diffusion, it is important to
remember that the U.S. leads in AI today and we want the rest of the
world building on our platforms. If our international partners cannot
access U.S. platforms, they will adopt alternatives that may be less
advanced today but will mature over time. Threading this needle
requires closer collaboration between government and industry to ensure
rules are clear, consistent, and aligned with both competitiveness and
security.
This is a pivotal moment. A once-in-a-generation opportunity to
secure U.S. leadership in AI and advanced computing. This is not just
about developing a transformative technology. It's about shaping the
future of our economy, safeguarding our national security, and
enhancing our global competitiveness.
Now is the time to ensure the United States doesn't just keep up,
but takes the decisive steps needed to cement our leadership.
Thank you again. I look forward to your questions.
The Chairman. And I had an Apple II as well with a shoe box
of floppy disks and somehow I ended up taking a wrong turn and
ending up in politics instead.
Mr. Intrator.
STATEMENT OF MICHAEL INTRATOR, CO-FOUNDER AND CHIEF EXECUTIVE
OFFICER, COREWEAVE
Mr. Intrator. I started out with a VIC-20.
Chairman Cruz, Ranking Member Cantwell, and distinguished
members of the Committee, thank you for the opportunity to
testify today. I am honored to appear alongside my industry
colleagues and partners.
    My name is Michael Intrator. I am the Co-Founder and CEO of
CoreWeave, founded 7 years ago. CoreWeave started like many
innovative ventures, humbly in a garage, experimenting initially
with graphics processing units, or GPUs, for cryptocurrency
mining.
Recognizing the transformational potential, we pivoted to
support powerful AI applications, dramatically scaling the
vision and operation.
Today, CoreWeave stands at the forefront of America's AI
infrastructure revolution, operating more than 30 data centers
across 15 states. We manage more than 250,000 GPUs currently
using 360 megawatts of power.
Over two short years our revenue has surged by 12,000
percent, reaching $1.9 billion in 2024. As a result of this
progress, CoreWeave became a publicly traded company on March
28th of 2025.
CoreWeave's rapid growth is a testimony not only to the
technology but also to the surging global demand for advanced
AI infrastructure. Our infrastructure enables American
businesses to rapidly translate AI aspirations into impactful
economic realities.
By empowering companies to accelerate innovation we are
fueling America's competitive edge while improving productivity
and prosperity.
    Modern AI requires specialized infrastructure: purpose-built
computing capabilities that surpass traditional cloud computing
in scale and performance. Today's general-purpose cloud was not
built to support the scale and complexity of AI workloads.
We cannot run a 21st century economy on the 20th century's
infrastructure. AI workloads involve trillions of simultaneous
calculations demanding unprecedented computing power, advanced
cooling systems, cutting-edge chip technology, ultra high-speed
networks and accelerated storage.
    Since 2018 the computing power necessary for advanced AI
models has multiplied approximately 100,000-fold. At CoreWeave
our facilities symbolize America's great tradition of
innovation.
Our data centers built, maintained, and staffed by skilled
American workers embody how modern technology not only
stimulates economic growth and enhances national security, but
also improves humans' lives.
We are at a critical juncture in the global AI competition.
The nation that leads in infrastructure will set the global
economic agenda and shape human outcomes for decades.
Our largest competitor, China, recognizes the stakes and is
spending significant resources to strengthen their position. I
want to focus on four elements of policy that will help
determine whether the U.S. secures its leadership role in the
AI race.
First, strategic investment stability. AI infrastructure is
deeply capital intensive and requires a significant level of
coordination across industry stakeholders.
Stable predictable policy frameworks, secure supply chains,
and regulatory environments that foster innovation are crucial.
Policy makers must provide clear and consistent policy and
regulations across all jurisdictions that enables long-term
investment and rapid scaling of AI technology.
Second, energy infrastructure development. To support the
rapid deployment of AI infrastructure America must ensure
abundant and affordable supplies of energy. Careful reforms to
permitting and regulatory processes are necessary to accelerate
infrastructure projects and facilitate more rapid construction,
interconnection, and energy delivery for data centers.
Third is global market access. Maintaining America's
leadership also means ensuring our technology has fair access
to global markets.
Export controls and trade agreements can be calibrated to
both address national security risks and support global
diffusion of American AI technology.
And, finally, public-private partnerships and workforce
development. America's unique advantage in the AI race is
enhanced by our powerful tradition of public-private
partnership.
    CoreWeave is proud to co-found the New Jersey AI Hub with
Microsoft, Princeton University, and the New Jersey Economic
Development Authority.
Initiatives like this develop critical workforce skills,
foster innovation, and ensure economies and communities are
prepared for the AI-driven future.
America stands ready to lead the AI revolution, which will
bring enormous benefits. It is a rare moment in time that we
must meet.
If government, industry, and all affected parties work
together the United States can win this race and seize the vast
opportunity ahead of us.
Thank you again for the opportunity to testify. I look
forward to answering your questions.
[The prepared statement of Mr. Intrator follows:]
Prepared Statement of Michael Intrator, Co-Founder
and Chief Executive Officer, CoreWeave
Introduction
Chairman Cruz, Ranking Member Cantwell, and distinguished Members
of the Committee, I want to thank you for the opportunity to provide
testimony today on how we can ensure the United States remains the
global leader in Artificial Intelligence (AI) innovation.
Today we're on the verge of the next great revolution of AI: a
technology dramatically reshaping industries, innovation, and
productivity at a massive scale through systems of unprecedented
complexity. Millions of hours of training, billions of inference
queries, trillions of model parameters, and continuous dynamic scaling
are all driving an insatiable hunger for compute and energy that
borders on exponential.
At CoreWeave, we are not only deploying the infrastructure that
hosts some of the most massive and powerful AI models in existence, but
also doing it at scale.
I appreciate the opportunity to describe CoreWeave's journey and
our role in this historic race.
CoreWeave's Role
CoreWeave powers AI innovations by bridging the gap between AI
ambition and execution. CoreWeave provides the cloud platform, purpose-
built for AI, that delivers the speed, performance, and expertise
needed to unleash AI's full potential.
To train their next generation of AI models, AI labs require
compute resources. And to maintain their leadership position at the
forefront of AI innovation, researchers demand the latest and most-
performant computing infrastructure. They leverage these compute
resources to run the trillions of operations that execute algorithms
and process data to train the next generation of models.
The ``Scaling Law'' \1\ has demonstrated that increasing the
compute deployed against models translates to better performance. The
relationship is exponential. Orders of
---------------------------------------------------------------------------
    \1\ Scaling laws describe how the performance of AI systems improves
as the size of the training data, model parameters, or computational
resources increases.
---------------------------------------------------------------------------
magnitude of increased compute are required to unlock incremental
gains in model performance. For example, in 2018, one of the most
popular and advanced generative AI models required a certain amount of
compute to train. Just seven years later, in 2025, the amount of
compute needed for the latest frontier models grew by about 100,000
times--an extraordinary increase.\2\ Performance, usefulness, and real-
world adoption have increased dramatically. This demonstrates that the
limiting boundary of AI development is high-performance compute,
delivered at scale and operated at peak efficiency.
---------------------------------------------------------------------------
\2\ Epoch AI, ``Training Compute of Frontier AI Models Grows by 4-
5x per Year,'' May 28, 2024.
---------------------------------------------------------------------------
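The cited growth can be sanity-checked with a short calculation (an illustrative sketch; the 100,000-fold and seven-year figures are those given in the testimony and footnote 2):

```python
# Back-of-the-envelope check of the growth cited above: roughly a
# 100,000-fold increase in frontier training compute over the seven
# years from 2018 to 2025 implies a steep year-over-year multiplier.
total_growth = 100_000
years = 7

annual_multiplier = total_growth ** (1 / years)
print(round(annual_multiplier, 1))  # ~5.2x per year, near the cited 4-5x/year
```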
    To meet the increasing need for compute and to avoid being left
behind, enterprises and labs are continuously demanding deployments of
the latest-generation chips at larger scales. In just the two short
years between the most recent generations of Nvidia chips, training
performance increased by 4x from Hopper to Grace Blackwell.\3\
---------------------------------------------------------------------------
\3\ Nvidia, ``NVIDIA GB200 NVL72: Powering the new era of
computing.''
---------------------------------------------------------------------------
CoreWeave enables AI labs, platforms, and enterprises to move the
boundaries of compute forward. Our end-to-end cloud platform is
purpose-built for the scale, performance, and expertise needed to power
AI innovation and meet the demands of accelerated computing. We
construct and power data centers, and provision high-performance
computing infrastructure which enables enterprise and labs to access
these essential resources. Our expertise is in managing a complex and
fragile ecosystem across supply chain, energy, financing, and
technology partnerships to build, optimize, and deploy our platform at
scale.
As of December 2024, we operated in 32 data centers in the U.S. and
Europe, deploying more than 250,000 GPUs utilizing 360MW of active
power. We have now contracted for 1.3GW of power. Adequate, reliable
supplies of power are essential to drive this revolution and for the
U.S. to win this race.
    This year, CoreWeave was the sole cloud to be ranked Platinum and
the #1 leader in AI cloud performance by SemiAnalysis's Platinum
ClusterMAXTM Rating.\4\ And we have
established a track record as among the first to market with the latest
generations of hardware, such as Nvidia's most recent GB200 NVL72 chip,
which leading labs, such as IBM, Mistral AI, and Cohere, are already
using to improve and accelerate their training jobs.\5\ We also support
OpenAI--our strategic deal of nearly $12B provides compute capacity for
training and delivering its latest models at scale to its hundreds of
millions of users around the world.\6\
---------------------------------------------------------------------------
\4\ CoreWeave, ``CoreWeave Ranks as #1 AI Cloud, Backed by
SemiAnalysis' Platinum ClusterMAXTM Rating,'' April 10,
2025.
\5\ Nvidia, ``Thousands of NVIDIA Grace Blackwell GPUs Now Live at
CoreWeave, Propelling Development for AI Pioneers,'' April 15, 2025.
\6\ CoreWeave, ``CoreWeave Announces Agreement with OpenAI to
Deliver AI Infrastructure,'' March 10, 2025.
---------------------------------------------------------------------------
CoreWeave is purpose-built for the demands of accelerated
computing. We deliver this infrastructure with cutting-edge performance
and scale, and provide the expertise with the infrastructure that AI
needs today and in the future. And as a result, our customers are able
to train earlier, build quicker, and get to market faster, which is
critical for the U.S. to maintain its lead in AI. CoreWeave is the
engine that will propel the U.S. forward in the AI race.
CoreWeave's History
CoreWeave began as many start-ups and great entrepreneurial
companies do--in a garage with an idea, which was to try leveraging
GPUs for crypto mining. CoreWeave's founders purchased their first GPU
in 2016, which turned into hundreds, then thousands, and now hundreds
of thousands. Over the course of the next few years, CoreWeave began
looking for opportunities to use its fleet of GPUs for other high-
performing use cases beyond crypto mining, such as visual effects
(VFX), and then AI.
In 2020, CoreWeave launched as the ``world's first
specialized cloud.''
In 2021, CoreWeave operated the largest Nvidia A40 fleet in
North America.
By 2022, the world began to realize that more compute was
required to scale AI model training. CoreWeave began to scale
even more rapidly.
By 2023, the company had three data centers running more
than 17,000 GPUs with approximately 10 megawatts (MWs) of
active power.
By 2024, CoreWeave had ten data centers running more than
53,000 GPUs with more than 70MW of active power.
And, one year later, at the end of 2024, we had 32 data
centers running more than 250,000 GPUs with approximately 360MW
of active power.
CoreWeave became a publicly traded company on March 28, 2025.
This progress occurred in five short years. And that is the speed
which is required to drive this technological revolution. Most
recently, CoreWeave completed its acquisition of Weights & Biases, a
leading AI developer platform. Our vision is that CoreWeave + Weights &
Biases will deliver the leading AI Cloud Platform--purpose-built to
develop, deploy, and iterate AI faster. Together, we will enable
faster, more efficient AI development, empower AI developers to
innovate quickly, provide seamless integrations for AI development, and
support the world's most advanced AI innovators to unleash AI's full
potential.
The Need for AI Infrastructure and Re-platforming
AI requires a fundamentally different computing infrastructure from
the existing one. Training state-of-the-art models and running
inference at scale requires trillions of simultaneous calculations
across billions of parameters. To fulfill this requirement, high-
performing compute infrastructure necessitates a more concentrated
power footprint, increased cooling needs, the latest chips, high-
throughput networking, accelerated storage, and more.
In contrast, the generalized clouds that serve the world today were
not built to serve the specific requirements of AI. These cloud
platforms were built for day-to-day web hosting, database management,
and running SaaS applications--workloads that rely on simple, fixed-
logic calculations and lightweight processing.
Operating compute at scale and at the intensity of AI is highly
complex. There are significant inefficiencies associated with operating
AI workloads ranging from hardware failures to scheduling optimal
usage. A single 32,000 GPU cluster may require the deployment of
approximately 600 miles of fiber cables and approximately 80,000 fiber
connections, along with highly-specialized heat management capabilities
to support high-power density. Each of these variables increases the
number and complexity of possible failure points. When a cluster
suffers a component failure (GPU, network, memory, cable, cooling,
etc.), it can adversely impact the entire cluster by reducing training
performance or even causing the entire project to fail.
The difficulty in managing large clusters leads to what we call the
``AI Efficiency gap,'' which we evaluate based on Model FLOPs
Utilization (MFU). This is a measure of the observed throughput
compared to system maximum if the system were operating at peak
capacity. Typically, the complexity of managing AI infrastructure means
that a majority of the compute capacity in GPUs can be lost to system
inefficiencies, with empirical evidence suggesting observed levels of
performance in the 35 percent to 45 percent range.
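The MFU measure just described can be sketched as a simple ratio (a hypothetical illustration; the throughput numbers below are invented for the example and are not CoreWeave figures):

```python
def model_flops_utilization(observed_tflops: float, peak_tflops: float) -> float:
    """MFU: observed training throughput expressed as a fraction of
    the hardware's theoretical peak capacity."""
    return observed_tflops / peak_tflops

# A cluster sustaining 400 TFLOP/s per GPU against a 1,000 TFLOP/s
# theoretical peak sits at 40% MFU, inside the 35-45 percent range
# described above. (These throughput numbers are illustrative only.)
print(f"{model_flops_utilization(400.0, 1000.0):.0%}")  # 40%
```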
As a result, the world is undergoing a ``re-platforming'' from
traditional generalized cloud computing infrastructure to AI cloud
computing infrastructure. And to achieve this, cloud platforms are
being fundamentally reimagined, with every layer of the technology
stack being specifically optimized for AI workloads. This is the
purpose-built computing infrastructure needed to support the scale and
complexity of AI workloads.
CoreWeave's Cloud Platform
We have built our platform for the new requirements of AI cloud
computing infrastructure.
CoreWeave's cloud platform is an integrated solution that is
purpose-built for running AI workloads such as model training and
inference at superior performance and efficiency. It includes
infrastructure services, managed software services, and application
software services, all of which are augmented by our proprietary
Mission Control and observability software. This proprietary software
enables the provisioning of infrastructure, the orchestration of
workloads, and the monitoring of our customers' training and inference
environments to ensure high availability and minimize downtime.
To unlock the full potential of AI infrastructure, CoreWeave helps
to bridge the MFU ``efficiency gap'' between the observed 35-45 percent
and the theoretical 100 percent, driving as much as 20 percent higher
performance than public benchmarks.\7\ To achieve this, performance
optimizations are built into every layer of the platform to enhance
distributed training throughput. And our ability to close this gap
significantly enhances performance, improves model quality, accelerates
development timelines, and reduces overall AI model costs.
---------------------------------------------------------------------------
\7\ CoreWeave, ``CoreWeave leads the Charge in AI Infrastructure
Efficiency, with up to 20 percent Higher GPU Cluster Performance than
Alternative Solutions,'' March 19, 2025.
---------------------------------------------------------------------------
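Connecting the two figures above (an illustrative calculation only): a 20 percent throughput gain over a 40 percent MFU baseline corresponds to lifting utilization to roughly 48 percent of theoretical peak.

```python
# Illustrative arithmetic: a 20 percent performance gain applied to a
# 40% MFU baseline (the midpoint of the 35-45 percent range cited
# earlier). Figures are for illustration, not measured CoreWeave data.
baseline_mfu = 0.40   # typical observed utilization
speedup = 1.20        # "up to 20 percent higher performance"
improved_mfu = baseline_mfu * speedup
print(f"{improved_mfu:.0%}")  # 48%
```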
What does this mean for the U.S.? As we improve our efficiency, and
close this gap, the United States will maintain its edge in the global
AI race, stimulating economic activity, and enhancing national security
while improving the provision of essential services for all. This is
what the race is all about.
U.S. Global Leadership in AI
We stand at a critical inflection point in the global AI race,
representing a pivotal moment that will influence economic prosperity,
national security, technological standards and how we provide essential
services to all Americans. AI represents the next major evolution of
technology with the potential to transform society. This is America's
AI moment, and a strategic opportunity America cannot afford to miss.
Economic Prosperity: AI is projected to generate a cumulative
global economic impact of $20 trillion, representing 3.5 percent of
global GDP, by 2030.\8\ The country that leads this transformation will
capture a disproportionate share of this new economic frontier. If
America maintains global leadership in AI, the productivity gains, new
products, high-value jobs, and breakthroughs across industries from
healthcare to manufacturing created by AI will help drive prosperity
across the American economy benefitting all people.
---------------------------------------------------------------------------
\8\ IDC, ``The Global Impact of Artificial Intelligence on the
Economy and Jobs,'' September 2024.
---------------------------------------------------------------------------
National Security: As advanced AI capabilities become essential to
modern defense including improvements to weapons systems and
battlefield capabilities, intelligence, and cybersecurity systems,
maintaining America's technological edge becomes inseparable from our
national security. Falling behind is not an option when other countries
are rapidly advancing their own AI capabilities with explicit aims to
challenge American global economic and military leadership.
Shaping the Future of AI: The country that leads AI development
will shape how this technology evolves globally. The standards,
protocols, and ethical frameworks that will govern AI will reflect the
values of whichever country wins this race.
The foundational AI infrastructure being built today will help
determine where AI development occurs. Success in the global AI race
will increasingly depend on purpose-built AI computing infrastructure,
not just general-purpose systems deployed at scale. Nations that
successfully ``re-platform'' gain compounding advantages in model
capabilities and development speed.
CoreWeave is at the forefront of developing the purpose-built
infrastructure that powers America's AI capabilities. Leading companies
and AI labs such as IBM, Mistral, and Cohere rely on CoreWeave's
infrastructure. Our success supports broader national objectives by
ensuring the U.S. maintains the world's most advanced computing
infrastructure which is required to drive AI.
Factors Critical to Continued U.S. Leadership in AI Infrastructure
America's leadership position in AI depends in part on maintaining
its edge in the underlying infrastructure that drives it. Based on
CoreWeave's experience building and operating AI computing
infrastructure, I would like to highlight several critical areas that
will determine if our Nation maintains its leadership position. Many of
these areas focus on the critical elements of policy which will impact
how this sector evolves.
Strategic Investment Stability
AI infrastructure investment requires a significant level of
coordination across multiple industry and government stakeholders due
to the scale and timeline of these projects, representing substantial
capital commitments with years-long development and operational
horizons. CoreWeave benefits from robust collaborations with leading
chipmakers, original equipment manufacturers (``OEMs''), and software
providers to supply us with infrastructure components and other
products. The highly specialized infrastructure that is required to
unlock the potential of AI is immensely challenging to build and
operate, especially at scale. This requires: (i) tens of thousands of
GPUs, (ii) thousands of miles of high-speed networking cables, (iii)
hundreds of thousands of interconnects coming together to create
``superclusters'' for training and serving AI models, and (iv) hundreds
of megawatts of power and substantial amounts of storage.
To sustain U.S. leadership in AI, it is important for U.S. AI cloud
computing companies to maintain a reliable supply chain for all of the
components necessary to develop and run cutting-edge AI
infrastructure. Acquiring these high-performance components to power
AI workloads requires managing a
complex global supply chain and maintaining robust supply chain
relationships. Continued engagement with leading global suppliers and
strategic partners is vital to ensuring the continued operation,
expansion, and rapid deployment of U.S. AI infrastructure and to uphold
U.S. competitiveness. Predictable policy is essential for this.
Significant private sector investment and development has helped
the United States establish an early and important lead in AI
infrastructure. The U.S. accounts for roughly 40 percent of the global
market for data center capacity, with six of the top ten markets.\9\
---------------------------------------------------------------------------
\9\ Cushman & Wakefield Research, ``2024 Global Data Center Market
Comparison.''
---------------------------------------------------------------------------
The importance of AI and U.S. leadership is not lost on our
competitors. Intensifying global competition for AI infrastructure
demands that this initial lead must be carefully and actively
maintained. The European Union launched its AI Continent Action Plan in
April setting out ambitious goals to triple data center capacity across
member states in the next five to seven years. The EU also announced a
20 billion euro investment into five gigafactories--massive high-
performance computing facilities equipped with approximately 100,000
state-of-the-art AI chips--and reforms related to permitting, energy
issues, and water usage.\10\ China has made its ambitions regarding AI
clear
through coordinated national strategies and streamlined deployment
timelines that can compress years into months in their effort to shrink
America's current AI lead.
---------------------------------------------------------------------------
\10\ European Commission, ``The AI Continent Action Plan,'' April
9, 2025, https://digital-strategy.ec.europa.eu/en/library/ai-continent-
action-plan.
---------------------------------------------------------------------------
Countries around the world are aggressively pursuing coordinated AI
strategies, implementing policies that subsidize infrastructure and
accelerate deployment timelines. In this high-stakes environment, the
capacity of American companies to build AI infrastructure swiftly and
with assurance will be a decisive factor in the AI race and determine
whether the United States retains its leadership position.
Sustained American leadership in AI infrastructure faces potential
headwinds from multiple sources of uncertainty. These include
volatility in the global supply chain for critical components, such as
advanced semiconductors and networking equipment, which can disrupt
deployment timelines. These fluctuations can lead to delays or
unanticipated cost overruns, adversely affecting American companies'
ability to rapidly scale AI capabilities.
Changes in regulation at both the Federal and state levels
introduce substantial uncertainty for leading businesses making
investment decisions. Changes in export controls, energy policies, the
potential need to add gigawatts of power to meet increased demand, and
the emerging landscape of AI-specific regulations at different levels
of government also affect the pace and scope of infrastructure
deployment. The lack of regulatory clarity can deter investment and
slow down the innovation cycle. Additionally, American companies will
be affected by the rules and institutions that will govern the use of
AI, which are being developed around the world both in individual
nations and in important forums in which key competitors participate.
As the world's dominant economic player and technological leader, the
U.S. government should drive the rules that shape the future playing
field for American companies.
Finally, the potential for a fragmented regulatory framework, with
differing requirements from state to state and potentially at the
Federal level, poses a unique challenge. For instance, inconsistent
definitions of the key terms describing various AI activities across
jurisdictions could force companies to navigate a complex web of
compliance regimes for fundamentally similar activities. These types of
policies would require participants in the AI infrastructure sector to
consider designing alternative products and strategies to do business
in different jurisdictions. This regulatory patchwork would lead to
increased costs, operational inefficiencies, and ultimately, a
competitive disadvantage for American companies in the global AI race.
These uncertainties disproportionately affect newer entrants like
CoreWeave and other specialized providers, potentially stifling the
very innovation that drives American leadership in this critical
sector.
To ensure the United States remains at the forefront of AI,
American companies must lead AI infrastructure development. This
requires a coordinated policy strategy to mitigate key uncertainties,
many of which this section touches upon, maintain appropriate
oversight, and create a stable, predictable policy environment that
fosters investment, continued growth and innovation.
Energy and Infrastructure Development: Powering AI Leadership
The race to build AI infrastructure is fundamentally tied to the
Nation's ability to continue to develop a new generation of data
centers that drive innovation, to bring them online and ensure there is
sufficient electricity to power them. This will be affected by the
processes and permitting systems that are used to develop data centers
and to develop adequate, reliable supplies of new power generation and
the interconnection and transmission systems capable of delivering it
at pace and scale.
The dual challenges of adding new power supplies and streamlining
infrastructure development are not merely logistical hurdles, but
critical factors that will determine whether America can maintain its
global AI leadership. Failure to address these challenges effectively
risks ceding ground to international competitors, particularly China,
who are aggressively pursuing their own AI ambitions.
Energy considerations are critical to the development and operation
of AI infrastructure. After a prolonged period of relatively flat
electricity consumption, the U.S. is now experiencing a significant
and accelerating increase in power demand.
This surge is driven by several concurrent trends, including the
onshoring of new manufacturing facilities, the widespread
electrification of transportation and heating, and the growth in data
centers.
AI computation is energy-intensive. Training large language models,
running complex simulations, and deploying AI applications all require
significant amounts of power. Widespread AI adoption will further
increase this demand, even as companies continue to innovate and
improve efficiency. According to a report released by the U.S.
Department of Energy, data centers consumed approximately 4.4 percent
of total U.S. electricity in 2023. This figure is projected to rise in
the coming years, potentially consuming between 6.7 percent and 12
percent of total U.S. electricity by 2028.\11\ This projected increase
underscores the urgent need for policymakers at all levels of
government to put policies in place that will enable the development of
new power supplies and the infrastructure to deliver it. Given that
these projects can cost hundreds of millions of dollars and take years
to implement, there is no time to lose in getting started.
---------------------------------------------------------------------------
\11\ ``2024 United States Data Center Energy Usage Report,''
Lawrence Berkeley National Laboratory, December 2024.
---------------------------------------------------------------------------
The implications for global AI leadership are clear and
consequential. Regions that can provide abundant, reliable, and cost-
effective energy will attract billions of dollars of AI infrastructure
investment. Conversely, energy constraints, whether in the form of
limited supply, unreliable delivery, policy uncertainty, or prohibitive
costs, can and will push development and the associated investment
elsewhere.
CoreWeave's site selection consideration for data centers
illustrates these priorities:
Availability of abundant, reliable power, and where
available, non-emitting sources
Competitive rates
Diverse energy sources
Pathways for capacity expansion
Efficient permitting processes that provide timeline
certainty
However, obtaining the necessary approvals to build both energy
infrastructure and data centers is often a critical bottleneck. Every
month of delay represents lost ground in a field where the pace of
innovation is measured in weeks, not years.
In particular, energy projects are challenging to develop.
Securing the necessary approvals and permits can take a significant
amount of time. Inefficiencies in the permitting process can
significantly impact both energy availability and whether attractive
sites for data center development can move forward. The variability in
permitting timelines across jurisdictions and the potential for
multiple, sequential review processes and litigation can increase the
time required to develop a project, leading to delays or potentially
stopping projects altogether.
There will be challenges in streamlining the permitting and
regulatory processes required to develop the energy and data center
infrastructure necessary for the U.S. to maintain its leadership in AI.
Goals in streamlining these processes include:
Maintaining and growing a balanced portfolio of generation
powered by diverse energy sources that can meet increasing
demand to ensure availability and reliability at reasonable
costs
Expanding and modernizing the Nation's transmission systems
Providing developers of data center capacity and associated
infrastructure with predictable timelines and reduced wait
times for feasibility studies, interconnections, and builds
Streamlining permitting processes while maintaining
appropriate oversight
CoreWeave understands that the processes put in place to achieve
these important objectives need to consider the views of key players
that will make these investments and the communities in which these
facilities are located.
We hope efforts to streamline the permitting process to enable the
addition of new sources of generation and the transmission
infrastructure to deliver it will receive attention from this
Congress.\12\
---------------------------------------------------------------------------
\12\ CoreWeave is a member of the Data Center Coalition. DCC's AI
Action Plan submission includes additional discussion related to
permitting and energy infrastructure, available at [https://
static1.squarespace.com/static/63a4849eab1c756a1d3e97b1/t/
67d84a70db36cf08e2a329cb/1742228107114/DCC+Comments+-+RFI+AI+Action+Plan.pdf]
---------------------------------------------------------------------------
We recognize that this issue will not be resolved solely at the
Federal level. All levels of government have a role to play in
addressing the challenges in adding the necessary infrastructure to
meet energy requirements. A coordinated effort among federal, state,
and local government, industry, and other affected parties is required
to address these interrelated challenges, including creating
efficient, transparent processes that allow infrastructure to be built
at the pace and scale that technological advancement requires and that
the U.S. needs to maintain its dominant position in AI.
Global Diffusion of the American AI Stack
Global market access is a pivotal factor in determining which
nation will lead in the AI domain. Export controls and trade agreements
can be designed to achieve multiple objectives: they can facilitate
legitimate market access for American businesses while also mitigating
potential national security risks. However, controls that are not
carefully calibrated can inadvertently bolster foreign competitors by
incentivizing AI development and deployment outside of the U.S.,
allowing competitors to fill the void left by U.S. firms. This could
result in the loss of technological expertise and economic benefits.
To bolster American AI leadership, export controls and
international agreements should consider:
Precision Targeting of National Security Risks: Controls
should be focused on technologies, entities and nations that
pose genuine and demonstrable threats to national security,
with clear and specific parameters.
Supporting American Technological Leadership: Restrictions
imposed on U.S. technologies and where they can be exported
should consider negative impacts on the ability of U.S.
companies to compete in global markets. This includes
considering the potential for retaliatory measures from other
nations and the risk of creating a ``chilling effect'' on
investment and innovation.
Strategic Alignment with Allies: Close coordination with
like-minded international partners is essential to ensure the
effectiveness of export controls and prevent the fragmentation
of the global AI market. Aligning with allies can also foster
the expansion of a collaborative, secure, and trustworthy AI
ecosystem.
These considerations are crucial in shaping the future landscape of
AI innovation. A well-calibrated approach will ensure that the next
generation of AI development is anchored in the United States,
leveraging American infrastructure and expertise.
Conversely, controls could inadvertently have the effect of
limiting opportunities for the export of U.S. technology and expertise,
with adverse impacts. A strategy that carefully differentiates between
markets, tailors export restrictions to mitigate specific risks, and
fosters international cooperation can effectively protect national
security while simultaneously enhancing America's ability to lead and
shape the global AI diffusion race.
Public-Private Partnerships Accelerating American Innovation
A unique American advantage in the AI race is our ability to forge
effective partnerships between government, industry, academia and
elements of civil society. These collaborations combine the agility,
ingenuity, expertise and resources of the private sector with the long-
term vision of the public sector and the basic and applied research
capabilities of academic and research institutions. This approach helps
foster an innovation ecosystem that is difficult for competitors to
replicate.
CoreWeave is proud to be a founding partner of the New Jersey AI
Hub, along with Microsoft, Princeton University, and the New Jersey
Economic Development Authority. This AI Hub will focus on research and
development efforts, applications of AI in several industry sectors,
and AI workforce development and education.\13\
---------------------------------------------------------------------------
\13\ Governor Phil Murphy, ``Governor Murphy, Princeton University,
Microsoft & CoreWeave Cut Ribbon on Major Artificial Intelligence
Hub,'' March 27, 2025, https://www.nj.gov/governor/news/news/562025/
approved/20250327a.shtml.
---------------------------------------------------------------------------
CoreWeave is deeply committed to supporting AI education and
research, and public-private collaborative partnerships. Similar
partnerships across the country could further accelerate America's AI
capabilities, and we encourage policymakers to explore this model of
collaboration. These types of partnerships also accelerate AI and data
center workforce development in the U.S. Worker shortages in the data
center industry are becoming commonplace as the skills gap widens. Data
center employers have struggled to find enough trained workers. Half of
surveyed data center managers in 2020 reported having difficulty
finding skilled workers to fill positions, and 71 percent
continued to report being concerned about finding qualified staff in
2023.\14\ A skilled and trained workforce is vital for the stability
and expansion of AI data centers--which rely on specialized data center
technicians, network and electrical engineers, cybersecurity
professionals, and project managers. CoreWeave supports efforts to
develop a domestic workforce comprised of the skilled workers required
to meet the growing AI demand and to accelerate AI innovation while
creating the skilled good-paying jobs of the future.
---------------------------------------------------------------------------
\14\ The White House, ``AI Talent Report,'' January 14, 2025.
---------------------------------------------------------------------------
Looking Forward: Ensuring Continued American Leadership in AI
The United States has built a remarkable lead in artificial
intelligence through our unique combination of innovative and
entrepreneurial private companies, world-class research institutions, a
talented workforce, and a policy environment that fosters dynamic
growth. This advantage is especially pronounced in AI infrastructure,
where companies like CoreWeave have established global technological
leadership in this critical layer of the AI stack.
However, the conditions that have enabled this position must be
actively maintained while being flexible in order to adjust to new
technological developments and political considerations. Countries
around the world rightfully recognize the strategic importance of AI
and are making coordinated efforts to build AI infrastructure. The
decisions we make now will help determine whether America can maintain
AI leadership in the years to come.
The current moment demands a thoughtful, transparent, and
predictable approach that maintains our competitive edge, and seizes
future opportunities while addressing legitimate concerns. As we
consider policy options to address this dynamic sector, we should be
attentive to how different approaches affect the entire AI ecosystem,
from established players to new entrants, from model developers to
infrastructure providers like CoreWeave.
In order to further America's lead in AI, we encourage the Federal
Government to consider policies that:
Foster a predictable investment environment through the
implementation of nationally consistent regulatory frameworks
for areas most critical to strengthen competitiveness and drive
innovation.
Ensure that there are adequate, reliable supplies of power
at the lowest possible cost through policies which enhance the
ability to add generation and transmission to power next-
generation AI infrastructure. Careful reforms of existing
permitting and regulatory processes that enable affected
parties to participate in the process are needed for this to
occur.
Maintain global competitiveness and strengthen U.S. industry
through strategically calibrated export policies that protect
national security while supporting the diffusion of the
American AI stack.
Strengthen public-private partnerships, like the New Jersey
AI Hub, that accelerate innovation across research, industry,
and government.
CoreWeave appreciates the Committee's leadership on these critical
issues and we look forward to working with the Committee as it develops
policies enabling the U.S. to maintain its leadership in the AI race.
The Chairman. Thank you.
Mr. Smith.
STATEMENT OF BRAD SMITH, VICE CHAIR AND PRESIDENT, MICROSOFT
CORPORATION
Mr. Smith. Chairman Cruz, Ranking Member Cantwell, members
of the Committee, thank you for the opportunity to be here
today. Let me just build on what my three colleagues have said
and offer a few thoughts.
The first is I just wanted to refer to the chart on this
easel that shows the AI tech stack that was also in my written
testimony. It makes a simple but, I think, important point--we
are all in this together.
If the United States is going to succeed in leading the
world in AI it requires infrastructure. It requires success at
the platform level. It requires people who create applications.
Interestingly, we at Microsoft get to work with all three
of these leaders and their companies. Our success, each of our
success, depends on each other's success, and what is true of
the four of us is true when you look across the country and
around the world at open source developers, people who are
building power plants, electricians and pipe fitters who are
going to work every single day.
So what do we need from the Congress and the country in
order to succeed? I think it is three things. I described them
in my written testimony.
First, as Chairman Cruz said, we need innovation.
Innovation will go faster with, as Sam said, more
infrastructure, faster permitting, more electricians.
We need more innovation fueled, as Ranking Member Cantwell
said, by support from our universities and the Federal agencies
that support basic research across the country, one of this
country's crown jewels.
We also need, as Chairman Cruz said, faster adoption, what
people refer to as AI diffusion--the ability to put AI to work
across every part of the American economy to boost
productivity, to boost economic growth, to enable people to
innovate in their work, and the number-one ingredient for that,
history shows time and time again, is skilling, investing in
education.
And, finally, we need to export. If America is going to
lead the world we need to connect with the world. We need to
remember--I believe always--that as a country only 4 and a half
percent of the world's people live in the United States of
America.
Our global leadership relies on our ability to serve the
world with the right approach to export controls and always,
especially in technology, in our ability to sustain the trust
of the rest of the world.
Ultimately, I think people who take the time--if they take
the time to watch or read about this hearing may wonder what is
this all about--what are we at this table trying to do? What do
these two letters, AI, really mean to them?
Are we who are working in this industry trying to build
machines that are better than people or are we trying to build
machines that will help people become better?
Emphatically, it is and needs to be the latter. Are we
trying to build machines that will outperform people in all the
jobs that they do today or are we trying to build machines that
will help people pursue better jobs and even more interesting
careers in the future?
Indisputably, it needs to be the second, not the first, and
I believe that is what we are and can do together.
As somebody who has now spent almost 32 years in this
industry there are two things that always strike me. The first
probably will not surprise you.
Never underestimate what technology can do, how quickly it
can move, what it can accomplish. But the second is one that I
think is too seldom discussed even though every day it stares
us in the face.
Never underestimate what people can do. Never underestimate
human ambition. Never underestimate what a person can do if
given a better technology tool and the ability to learn how to
put it to use.
That is the story of this industry. It is the story of the
country. It is, as you heard, the story of Sam Altman. Not
everybody becomes a Sam Altman or a Satya Nadella or a Bill
Gates but everybody deserves the opportunity to try.
Tonight across America, whether it is the attic of a house
or the basement or just an everyday bedroom there are kids with
computers, with phones, with access to the Internet and now the
ability to put AI to work.
Let us invest in their education. Let us invest in the
skills that the American public needs. Let us then invest in
creating the future that the American public deserves.
Thank you.
[The prepared statement of Mr. Smith follows:]
Prepared Statement of Brad Smith, Vice Chair and President,
Microsoft Corporation
Chairman Cruz, Ranking Member Cantwell, and Members of the
Committee,
Thank you for the opportunity to testify on the critical issue of
artificial intelligence. I am Brad Smith, the Vice Chair and President
of Microsoft Corporation.
AI has the potential to become the most useful tool for people ever
invented. Like the general purpose technologies that preceded it, such
as electricity, machine tools, and digital computing, AI will impact
every part of our economy. It will shape not just how we work and live,
but how we compete, prosper, and stay secure as a nation between now
and the middle of this century.
The notice for this hearing aptly refers to an ``AI race.'' I would
like to talk today about what is needed to win this race.
The AI race involves both technology and economics. It requires
both innovation and diffusion. It is both a sprint and a marathon. The
country can win a lap but lose the race if it fails to bring together
all the ingredients needed for success.
It is a race that no company or country can win by itself.
To win the AI race, the United States will need to support the
private sector at every layer of the AI tech stack. The nation will
need to partner with American allies and friends around the world.
In my testimony today, I will focus on three strategic priorities
where this Congress and the Federal government will make a difference.
First, the country must win the AI innovation race. This will
require massive datacenters and AI infrastructure that need Federal
support to expand and modernize the electrical grid on which they
depend. The country must recruit and train skilled labor like
electricians and pipefitters that are in short supply. We all must
summon the best of our researchers at national labs and universities,
supported by Federal basic research programs and partnerships that have
become the envy of the world. We will need to continue to excel in
moving innovative ideas from academic labs into companies and new
products. And we will need to support AI developers with open and broad
access to public data.
Second, the Nation must win the AI diffusion race. This will
require that we promote broad AI adoption that will enable productivity
growth across every sector of the economy. More than anything, this
requires new initiatives to promote the AI skilling of the American
workforce. This will involve basic AI fluency in our schools and new AI
training programs in our community colleges. It will also include
advanced AI education that will represent the next generation of
computer science degrees, organizational skills that will be mastered
in the country's business schools, and new courses in the nation's law
schools. When combined, these will enable companies, non-profits, and
government agencies alike to put AI to effective use. Governments at
the federal, state, and local levels can then help accelerate this
diffusion by adopting AI services to improve the effectiveness and
efficiency of the services they provide to the public.
Third, the United States must export AI to American allies and
friends. No company or country is so powerful that it can master the
future of AI without friends. The United States and China are competing
not only to innovate but to spread their respective technologies to
other countries. This part of the race likely will be won by the
fastest first mover. The United States needs a smart export control
strategy that protects our national security while assuring other
countries that they will have reliable and sustained access to critical
American AI components and services. Perhaps as much as anything, this
requires that we collectively sustain international trust in our
products, our companies, and the country itself.
AI as a General Purpose Technology
Economists sometimes put technologies into two categories, general
purpose technologies and single-purpose tools. Most things in the world
are single-purpose tools, like a smoke detector or a lawn mower. They
do one thing very well. But over the course of history, certain so-
called general purpose technologies impact and sometimes even redefine
almost every sector of the economy. Electricity is the prototypical
example, because when you think about it, electricity changed the way
every economic sector works.
The key to mastering the future of AI starts in part by
understanding the role technology has played in the past. The past
three centuries have brought the world three industrial revolutions,
each driven by these general purpose technologies. First, it was iron
working in the United Kingdom, starting in the 1700s. And then it was
electricity and machine tools in the 1800s, when the United States
overtook the United Kingdom by putting these technologies to work more
broadly than any other country. And then there was the third industrial
revolution during the last 50 years, driven by computer chips and
software.
Without question, being a global leader in advancing a general
purpose technology gives a country a major edge. But one lesson of
history is that the countries that benefit the most and advance the
fastest are not necessarily the countries where the technology is
invented. Rather, it's where the technology is diffused--or adopted--
the most quickly and broadly. This is for good reason. If a technology
improves productivity and changes every part of an economy, then the
country that uses it the most broadly and quickly will benefit the
most.
This both frames and defines the AI opportunity and challenge for
the United States. As a nation, we need to focus both on advancing
innovation and driving diffusion, both domestically and as a leading
American export.
The AI Tech Stack
The key to driving both innovation and diffusion is to recognize
that AI, like all general purpose technologies, is built on what we in
the industry call a tech stack--a stack of technologies that are used
together. This is true for every great general purpose technology. You
can see this, for example, if we go back in time and think about
electricity. Thomas Edison first succeeded in 1878 in using electricity
to light a lightbulb. But the illumination of lights across a city
quickly required the construction of power plants, the fuel to run
them, the creation of an electrical grid, the standardization of
circuits, and a wide range of electrical appliances beyond the
lightbulb itself. In short, a tech stack for electricity.
Artificial intelligence similarly is built on an AI tech stack.
Fundamentally, it is divided into three layers: infrastructure, the
platform layer, and applications. You can see this illustrated below.
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
The infrastructure layer is massive. Microsoft is spending more
than $80 billion this Fiscal Year on the capital investment needed for
this layer, with more than half this amount being spent in the United
States. This goes to buying land, investing in electricity and
broadband connectivity, procuring chips like GPUs, and installing
liquid cooling. These lead to the construction of datacenters--or often
datacenter campuses with many buildings housing potentially hundreds of
thousands of computers. This infrastructure supports both the training
of new AI models and their deployment, so they can be used for AI-based
services around the world.
On top of this infrastructure, there is the platform layer. The
heart of this layer consists of AI foundation models, including
frontier models created by companies like OpenAI, as well as open
source and other models from a wide variety of other firms--including
Anthropic, Google, Mistral, DeepSeek, and Microsoft itself. The
platform layer relies on data to train and ground models. And it
includes a new generation of software-based AI platform services that
are used to help build AI applications.
Ultimately, both the infrastructure and platform layers support the
applications layer. These are devices and software applications that
use AI to deliver better services to people. ChatGPT and Microsoft's
Copilot are both examples of AI applications. One of the amazing things
about the applications layer is that it's not just companies--large or
small, established or startup--that are creating AI applications. It's
everybody. It's researchers using new AI-infused applications to change
drug discovery. It's non-profits changing the way they deliver
services. It's teachers using AI as a tool to improve the way they
prepare material for a classroom. It's governments making everything
from the filing of a tax return to the renewal of a driver's license
easier and more efficient.
To build a new AI economy, it's critical to get all three of these
layers working and to get a flywheel turning across the ecosystem. It's
essential to build the infrastructure layer so people can develop and
deploy the models at the platform layer. It's essential to use the AI
models so that people will build the applications on top of them. And
it's essential for customers to adopt the applications, so the market
can grow, and drive increased investment to expand the infrastructure
further. The process repeats itself. This is how a new economy is born.
Success Requires an Entire Ecosystem
The flywheel effect makes clear that success requires not only
national progress at one layer of the tech stack, but at every layer.
That is what the private sector currently is pursuing in the United
States better than in any other country. And it's what this Congress
and the Executive Branch can help support with a strategy that promotes
both AI innovation and diffusion up and down this stack.
National AI leadership requires not only success by a few
companies, but by many. Today's panel, involving leading firms such as
OpenAI, AMD, CoreWeave, and Microsoft, reflects important slices of the
new AI economy. The AI economy requires a multifaceted and integrated
ecosystem that includes ``Big Tech'' and ``Little Tech,'' startups and
more established firms, open source and proprietary developers,
suppliers and customers, firms that create data and firms that consume
it, all working together. Governments as both regulators and leading AI
adopters have critical roles to play.
Commentators sometimes focus on the tensions between different
participants in this tech ecosystem. These deserve attention. What's
often overlooked is that the different participants also depend on each
other. And this means that the different contributors to the AI
ecosystem all need to be healthy.
A large technology company like Microsoft has a unique
opportunity--and responsibility--to partner with and support the
participants at every level of the tech stack. We strive to advance not
just innovation but an economic architecture, business models, and
responsible practices that will help grow the AI market on a long-term
basis. Not just for the United States, but also for the country's
friends and allies.
Winning the Innovation Race
Although the AI economy is being built mostly by the private
sector, government policies and initiatives need to play a critical
role. This starts with work needed to help fuel innovation. A few areas
deserve particular attention in this hearing.
Power the growth of datacenters
Just as you can't have reliable electricity in your home without a
powerplant, you can't have AI without datacenters and AI
infrastructure. And these datacenters require a vast supply chain to
construct and large amounts of electricity to operate.
America's advanced economy relies on 50-year-old infrastructure
that cannot meet the increasing electricity demands driven by AI,
reshoring of manufacturing, and increased electrification. The United
States will need to invest in more transmission and energy resources,
onshore our supply chains, and modernize our electric grid to support
forecasted increases in electrical loads. Microsoft is investing in
these areas itself.
We urge the Federal government to streamline the Federal permitting
process to accelerate growth in all these areas. The current Federal
permitting processes often involve multiple agencies and complex,
unpredictable, multi-year reviews. This hinders progress. The Federal
government should take immediate steps to establish reliable,
reasonable, and transparent timelines for permitting decisions. This
can also be done by standardizing Federal permitting processes and
designating a lead agency to shepherd the permits through the process.
Further, the permitting agencies should utilize AI and digital tools to
improve timelines and transparency for applicants and ensure the
permitting agencies have quick access to information to assist them in
their review and decision-making process.
We were pleased to see President Trump's recent Executive Order,
``Updating Permitting Technology for the 21st Century,'' directing
agencies to make maximum use of technology in the environmental review
and permitting process. The Congress should also look to the Federal-
State Modern Grid Deployment Initiative as a proven program that can be
leveraged to deliver results.
This is just the start of what is needed to modernize and expand
America's energy grid. We need to recognize that new investments in the
grid are just as important today as they were a century ago, when the
United States led the world in private and public sector support for
electricity.
Grow the AI Infrastructure workforce
Perhaps the single biggest challenge for data center expansion in
the United States is a national shortage of people--including skilled
electricians and pipefitters. Electricians, for example, are essential
to datacenter construction, installing a complex system of electrical
panels, transformers and backup power systems. We have hired thousands
of electricians across the country, including in Arizona, Georgia,
Virginia, Washington, and Wisconsin. But the United States doesn't have
enough electricians to fill the growing demand. We estimate that over
the next decade, the United States will need to recruit and train half
a million new electricians to meet the country's growing electricity
needs. We need a national strategy to ensure we meet this opportunity
for American workers.
These are good jobs that will provide great long-term careers for
people across the country. We recommend making existing Federal
education and training funds, as well as tax incentives, available to
scale up these opportunities. These could include targeting current
Federal apprenticeship investments in regions that have identified
major AI infrastructure initiatives and supporting existing training
centers to quickly increase the number of registered apprenticeships
focused on electricians.
We commend President Trump's recent Executive Order, ``Preparing
Americans for High-Paying Skilled Trade Jobs of the Future,'' for
highlighting the importance of skilled trades in the building of AI
infrastructure and for paving the way to meet this moment. As Federal
agencies work to implement the order, it will be critical that industry
forecasters and union training centers work together to maximize
impact.
Ultimately, we need new steps at every level of government and in
communities across the country. For example, we need to do more as a
nation to revitalize the industrial arts and shop classes in American
high schools. This should be a priority for local school boards and
state governments. Similarly, the nation's community colleges will need
to do more to support a national initiative to help train a new
generation of skilled labor, including electricians and pipefitters.
Invest in AI research and development
To uphold America's position as a global scientific leader, it is
imperative to enhance Federal investment in fundamental scientific
research. The United States boasts a storied history of employing
public-private partnerships. The decisions made decades ago to publicly
fund research infrastructure and provide financial support to talented
scientists and entrepreneurs paved a pathway to American technological
leadership. Through federal, state and local government initiatives,
investments were made in regional economies and programs, betting on
the ingenuity of the American people. Notable incubators of the 20th
century--such as Bell Labs and the network of Federal national
laboratories--were the result of deliberate efforts to unite industry,
government, and academia to propel scientific advancement. We must
deploy a similar strategy today for AI and quantum technologies.
Investments in these areas are critical to advancing the development of
innovative technological solutions that address complex global
challenges.
To outcompete nations like China, which have significantly boosted
their research and development (R&D) investments, the United States
must accelerate strategic investments in scientific research for future
technologies. Experts predict China will continue to invest substantial
resources in next-generation technologies such as AI, advanced
manufacturing, clean energy, quantum computing, and semiconductors over
the next decade.
Since the Second World War, America's technological innovation has
been driven by R&D based on two critical ingredients that the rest of
the world has both studied and envied. The first is sustained support
for basic research. While a few tech companies invest substantial sums
in basic research, as we do through Microsoft Research (MSR), most
world-leading basic research is pursued by academics at American
universities, often based on funding from the National Science
Foundation and other Federal agencies. Driven by curiosity rather than
a profit motive, this research often leads to unexpected but profound
discoveries that are published publicly.
The second ingredient is a sustained commitment to investments in
product development by companies of all sizes. The United States, more
than any other country, has mastered the process of moving new ideas
quickly from universities to the private sector. This success rests on
healthy investments in both R and D, recognizing that basic research is
often publicly funded and typically conducted in universities, while product
development is robustly and privately funded through companies. It's
the combination of the two that makes American R&D so successful.
In 2019, President Trump approved an executive order designed to
strengthen America's lead in artificial intelligence. It rightly
focused on Federal investments in AI research and making Federal data
and computing resources more accessible. Six years later, the President
and Congress should expand on these efforts to support advancing
America's AI leadership. More funding for basic research at the
National Science Foundation and through our universities is one good
place to start.
Ensure public data is open and accessible
Data is the fuel that powers artificial intelligence. The quality,
quantity, and accessibility of data directly determine the strength
and sophistication of AI models. While the Internet has been a major
source of training data, the Federal government remains one of the
largest untapped sources of high-quality and high-volume data. Yet
today, many of these datasets are either inaccessible or not usable for
AI development.
By making government data readily available for AI training, the
United States can significantly accelerate the advancement of AI
capabilities, driving innovation and discovery. Opening access to these
datasets would allow for the analysis of themes, patterns, and insights
across broad datasets, propelling the country to the forefront of
global AI development.
Importantly, accessible public data levels the playing field. It
empowers not only large companies but startups, academic institutions,
and nonprofits to train and refine AI models. This fosters a more
competitive and inclusive AI ecosystem, where innovation is driven by
ideas and ingenuity--not just proprietary data.
In comparison, countries like China and the United Kingdom are
already investing heavily in their data resources, recognizing the
economic and strategic value of national-scale data management. China's
comprehensive system to manage datasets as a strategic resource and the
UK's National Data Library underscore a growing global trend of
treating data as a common good for economic competitiveness.
Winning the AI Diffusion Race
History teaches us that the true impact of a general purpose
technology is not measured solely by the caliber of its leading
inventions, but by how quickly, widely, and effectively these are
adopted across society. But the reality is that technology diffusion
takes time, investment, partnerships, and sound public policy.
The history of electricity offers an important insight for AI. Once
Thomas Edison proved in 1878 that electricity could power a lightbulb,
why would anyone choose to sit at night in a room illuminated by a
candle or kerosene? Yet tonight, almost 150 years later, more than 700
million people on the planet still live without electricity in their
homes. Diffusion requires not only great technology, but sound
economics.
The economics of tech diffusion start with skilling. Countries need
to invest in the skills needed to use new technology, both as
individuals and across organizations. It is easy to underestimate both
the role that skilling plays and the need for public policy to support
it. But in each industrial revolution, the country that best harnessed
the leading general purpose technology of its time was the Nation that
skilled its population the most quickly and broadly.
Skill the American workforce
In the new AI economy, Americans of all backgrounds will need
critical AI skills to compete. To meet the totality of the skilling
challenge, the country must pursue a new national goal to make AI
skilling accessible and useful for every American. This will require a
very broad range of partnerships and new policy ideas, spanning across
geographic, organizational, economic, and political divides.
President Trump's recent executive orders focused on AI education
and the workforce provide critical steps towards a national skilling
strategy for AI. The ``Advancing Artificial Intelligence Education for
American Youth'' EO establishes a clear policy to promote AI literacy
by responsibly integrating AI into education for teachers and students.
This early exposure will leave the Nation's youth better
positioned for AI-enabled work. Congress can also consider leveraging
existing Federal funding to the Nation's school districts to encourage
AI learning and literacy in K-12 education.
Businesses and non-profits have important roles to play. At
Microsoft, we are seeking to do our part to meet this skilling
challenge. In 2025 alone, we are on a path to train 2.5 million
Americans in basic AI skills. We're partnering with the National Future
Farmers of America (FFA) to train educators in every state to integrate
AI into the agricultural classroom through our Farm Beats for Students
program. We are partnering with the American Federation of Teachers
(AFT), the largest organization representing the Nation's educators,
to deliver a co-developed training program to 10,000 AFT
members. And we're partnering with the State of New Jersey, Princeton
University, and CoreWeave on an AI Hub in New Jersey that will include
support for AI education in local community colleges.
When it comes to AI skilling, the most important thing we need to
do is recognize that this is a critical field that is ripe for
attention, learning, partnership, and innovation. It will have a huge
impact on broadening access to this technology across our economy and
society. Generative AI is a new and young technology. So is our
knowledge of the full extent of need in terms of AI skilling programs
and support. This is a first-class priority that deserves as much
attention and support as innovation in AI technology itself.
Encourage AI adoption
The Federal Government also will play a critical role in AI
diffusion by using AI itself. There are opportunities across the
government to use AI to improve the quality and efficiency of public
services for citizens.
It's encouraging to see the recent OMB publication of M-Memos
focused on Federal government use and procurement of AI. Both memos
emphasized the importance of removing barriers to innovation,
maximizing the use of domestically developed AI products, and
encouraging AI leaders within the Federal government to facilitate
responsible AI adoption.
We're seeing activity in the states as well. We partnered with the
Texas Department of Transportation to launch a six-week pilot program
aimed at boosting productivity and improving decision-making across
various departments. The program saw strong results: 97 percent of
participants used the AI digital assistant during the pilot, 68
percent integrated it into their daily workflow, and participants
reported saving an average of 12 hours a week on routine tasks.
Exporting American AI
The ability to export our AI is essential to sustaining our global
competitiveness and ensuring that our technological progress benefits
not only our nation, but also our allies and partners around the world.
Building on recent AI diplomacy efforts, the United States offers a
compelling and trusted value proposition in the global technology
landscape.
American tech companies, including Microsoft, are making
unprecedented investments in AI infrastructure around the world.
Microsoft alone is building AI infrastructure in more than forty
countries, including regions where China has focused its investments.
We urgently need a national policy that provides the right balance of
export controls and trade support for these investments.
While the U.S. government rightly has focused on protecting
sensitive AI components in secure datacenters through export controls,
an even more important element of AI competition will involve a race
between the United States and China to spread their respective
technologies to other countries. Given the nature of technology markets
and their potential network effects, this race between the United
States and China for international influence likely will be won by the
fastest first mover. The United States needs a smart international
strategy to rapidly support American AI around the world.
This fundamental lesson emerges from the past twenty years of
telecommunications equipment exports. Initially, American and European
companies such as Lucent, Alcatel, Ericsson, and Nokia built innovative
products that defined international standards. But as Huawei invested
in innovation and China's government subsidized sales of its products,
especially across the developing world, adoption of these Chinese
products outpaced the competition and became the backbone of numerous
countries' telecommunications networks. This created the technology
foundation for what later became an important issue for the Trump
Administration in 2020, as it grappled with the presence of Huawei's 5G
products and their implications for national and cybersecurity.
Early signs suggest the Government of China is interested in
replicating its successful telecommunications strategy. China is
starting to offer developing countries subsidized access to scarce
chips, and it's promising to build local AI datacenters. The Chinese
wisely recognize that if a country standardizes on China's AI platform,
it likely will continue to rely on that platform in the future.
International partnerships will be critical. This is why Microsoft
has partnered with entities like the UAE's G42 and investment funds
like BlackRock and MGX, aiming to raise up to $100 billion for AI
infrastructure and supply chains. American tech companies and private
capital markets are forging stronger ties with key nations and
sovereign investors in the Middle East, surpassing previous efforts to
counter Chinese subsidies in telecommunications and reflecting our
commitment to innovation and cooperation. While China's government may
subsidize its technology adoption in developing regions, it will
struggle to match the scale and impact of America's private sector
investments.
Pragmatic American export control policies are essential, balancing
security protections with the ability to expand rapidly. Protecting
national security by preventing adversaries from acquiring advanced AI
technology is crucial. Rules should include qualitative standards for
secure datacenter deployments to prevent chip diversion to China and
ensure advanced AI services are safeguarded. We support this type of
approach.
However, we have expressed our concerns about the quantitative caps
imposed on GPU shipments by the interim final AI Diffusion Rule issued
in January. These caps place key American allies and partners in a Tier Two
category, imposing limits on AI datacenter expansion. This includes
countries like Switzerland, Poland, Greece, Singapore, India,
Indonesia, Israel, the UAE, and Saudi Arabia. Customers in these
countries now fear restricted access to American AI technology and may
turn to alternatives--potentially benefiting China's AI sector.
The Trump administration has an opportunity to revise the rule,
eliminating quantitative caps and retaining qualitative standards. This
approach ensures American allies and partners remain confident in
accessing American AI products.
Ultimately, we need to recognize that countries around the world
will use American AI only if they can trust it. This creates
responsibilities for American companies to develop and deploy AI
infrastructure and products in a responsible manner that meets local
needs. And it requires that countries have confidence in sustained and
uninterrupted access to critical AI components and services. The United
States has long built a reputation for trustworthy technology that
China has been unable to match. But this reputation, like everything
that truly matters, requires constant attention and care.
STATEMENT OF HON. TIM SHEEHY,
U.S. SENATOR FROM MONTANA
Senator Sheehy [presiding]. Thank you, witnesses, for your
testimony and I will start off with the first round of
questions and move down the dais to our Ranking Member here.
Thank you for your testimony. It certainly makes me sleep
better at night worried about Terminator and Skynet coming
after us, knowing that you guys are behind the wheel.
But in five words or less, starting with you, Mr. Smith,
what are the first--what are the five words you need to see
from our government to make sure we win this AI race?
Mr. Smith. More electricians. That is two words.
[Laughter.]
Mr. Smith. Broader AI education.
    Senator Sheehy. And no using ChatGPT as a phone-a-friend.
Mr. Intrator. Thank you. I would say that we need to focus
on streamlining the ability to build large things.
Ms. Su. Policies to help us run faster in the innovation
race.
Mr. Altman. Allow supply chain. Sensible policy.
Senator Sheehy. That was good. So what I hear there is
something pretty similar to the races we have won before--
nuclear energy, for example.
You know, the Germans and Austrians really led the
innovation around that but we won the race because we put a
massive government effort, collaborating with our universities
and others to win that race.
Space--you know, the Soviets put the first satellite up,
put the first man in space, but we won the space race because
we adopted a framework to ensure that we won. Aviation,
automobiles, et cetera.
So what I hear from you is you do need support from our
government but you also need the government to stay out of your
way so you can innovate and win this race.
How do we incentivize companies to do business here in
America to make sure we win this race in America and America
leads not just China but other nonstate actors, too?
I mean, I think that the scariest thing about AI from a
capability standpoint is it does not have to be a state actor
to win this race. It is not like nuclear energy. It is not like
space technology. A nonstate actor could just as easily win
this race and wield more power than anyone else.
So how do we encourage innovators' investment to happen
here in America to ensure we win this race?
Mr. Altman, do you want to start?
Mr. Altman. We were honored to announce back in January,
Project Stargate, a $500 billion investment in United States
infrastructure. That is now well underway; as I mentioned, I got to
see it yesterday in Abilene. The first site was incredible.
We need a lot more of that. We need certainty on the
ability to build out this entire supply chain, build the data
centers, permit the electricity. We would love to bring chip
production here, network production here, server rack
production here.
And I think the world does want to invest. We have a lot of
global investment flowing into the U.S. to do this. We also
want to make sure that other countries are able to build with
our technology, use our models, and sort of, like, be in our
orbit and, you know, use U.S. diffusion of technology here.
So that is really important. We need to make sure that the
highest skilled researchers that want to come work at U.S.
companies can come here and do that. We need to make sure that
companies like OpenAI and others have legal clarity on how we
are going to operate.
Of course, there will be rules. Of course, there need to be
some guardrails. This is a very impactful technology. But we
need to be able to be competitive globally. We need to be able
to train.
We need to be able to understand how we are going to offer
services and sort of where the rules of the road are going to
be, so clarity there. And I think an approach like the
internet, which did lead to flourishing of this country in a
very big way, we need that again.
Senator Sheehy. Dr. Su.
Ms. Su. I would add I think computing is a foundation to
all of this. We want to have more compute built in the U.S. by
U.S. companies and ensure that we have a great environment for
that. We want to ensure that our technology around the world is
also used broadly and in the right ways.
So I think the conversation about export controls and rules
should just be simple, easy to follow, easy to enforce, and
enable U.S. AI platforms to be the foundation.
And then, certainly, the comments around bringing
manufacturing back home and ensuring that we have the right
talent base are all extremely important elements of that.
Senator Sheehy. Are companies weighing doing business in AI
in America versus China? Are the companies making that side by
side comparison?
Ms. Su. I think if you look across the world there are
countries and companies that will ask those questions. You
know, if it is hard to obtain U.S. technology--although U.S.
technology is the best, if it is hard to obtain then there is a
hunger for AI and they will choose what is available and if
China is available, that will certainly be an outcome that we
would not like to see.
Senator Sheehy. Well, I think I hear the words
infrastructure, electricians, universities, regulatory
framework, and I think those are things we can help with.
I hear words like innovation and talent and I say--I hear
Dr. Su, run faster. Those are not things--the government cannot
manufacture talent. We cannot make you run faster.
But we can give you the tools to do that and I think it is
time that we create a framework so that you have the tools you
need to win this race because you are going to be the ones that
win it, not us.
Thank you for your testimony.
Ranking Member Cantwell.
Senator Cantwell. Thank you, Mr. Chairman. I would like to
continue that same theme generally about competitiveness. Do we
need NIST to set standards?
If you could just yes or no and just go down the line.
Mr. Altman. I do not think we need it. It can be helpful.
Ms. Su. Yes.
Mr. Intrator. Yes.
Mr. Smith. Yes.
Senator Cantwell. OK. So in the context of what we are
talking about here, we are really just talking--I do not know,
Mr. Smith or Mr. Intrator or Dr. Su, any.
The issue here is if we want to move fast we want to create
just like with electricity the standards by which we want to
move fast.
Here, I would just call it code for code is what we want,
right? We want NIST to do something in the standard setting
that will allow us to move much faster. Is that right?
Either Mr. Smith or Mr. Intrator.
Mr. Smith. What I would say is this.
First of all, NIST is where standards go to be adopted but
it is not necessarily where they first go to be created. So we
have got what----
Senator Cantwell. Right. Thank you for that clarity. We are
talking about industry--IEEE, you know, lots of different
organizations, industry input, and then they are adopted.
So yes, let us clarify that. Let us clarify that.
Mr. Smith. Yes. I think that is the way it works.
Senator Cantwell. Yes. But you think we need to do that,
particularly if the United States wants to lead?
Mr. Smith. We will need industry standards. We will need
American adoption of standards. And you are right, we will need
U.S. efforts to really ensure that the world buys into these
standards.
Senator Cantwell. OK. Mr. Intrator.
Mr. Intrator. I think it is important that when you are
working with standards what that allows for is a common
vocabulary which allows for acceleration.
And so to the extent that we can step into that role and
establish touch points where everyone can agree on specific
things that will lead to an acceleration both domestically and
abroad.
Senator Cantwell. And I do not know if, you know, drilling
down more on what you think those are but in general, you know,
when I think about the Internet and HTTP or HTML or any of
the--TCP/IP--we are talking about things that allowed us to move
faster and getting those standards established helped us do
that.
On the export issue, Mr. Intrator, the issue of cloud
sources should not be left out. If we say let us go with
Malaysia, Malaysia is going to tell us that they can certify
that there is no, you know, diversion of these chips to--you
know, to China and we basically have a way that we can make
sure that this is understood and monitored then we also want
access, right? We want access by U.S. companies.
Mr. Intrator. Yes. I think Lisa's point was excellent,
right? At the end of the day, the world wants to be able to
build and deploy artificial intelligence in a very broad way
and if we--you know, nature abhors a vacuum.
If we do not step into that role other technology will step
in that role. If it is suboptimal so be it. It is better to
have something that is suboptimal than have nothing and so that
is what----
    Senator Cantwell. Well, we do not want a recurrence of a
Huawei that develops faster and then has a government back door
and then we all have to raise opposition. I am for a tech NATO.
I am for the five most sophisticated democracies and tech
nations setting the rules of the road and saying, this is who
you should buy from. Do not buy from anybody else who has a
government back door. Not a good idea.
So that is how we get leverage. You know, I am not so hot
on the President's tariff agenda for this very reason because
we are not building the alliances.
We are creating the enemies, and what I want to do is get
the supply chain here, get the semiconductor flow here, lower
the cost, and go as fast as we can.
Mr. Intrator. Yes, I agree with that. I do not think that
that is--I do not think anybody is not going to agree with
that, right? I think that is an excellent objective.
I just think that what will happen beyond the five NATO
countries is that there will be a demand for artificial
intelligence and they will proceed with what they can proceed
with.
Senator Cantwell. Dr. Su, what is your view of this about
how we win, how we protect our objectives, but we are more
aggressive on the export strategy?
Ms. Su. Well, I think there is a clear recognition that we
need an export strategy and so having--you know, having this
conversation is very important, and from our perspective the
idea is to ensure that our allies--and, frankly, I use allies
in the very broadest sense--get access to the great American
technology that we have with the appropriate controls in place,
and I think you can do both.
To your earlier comment, Ranking Member Cantwell, about the
need to have U.S. technologies in those countries I think those
countries are actually very interested in doing that because we
do have the best technology today, and using that to really
build this broad AI ecosystem is really our opportunity.
Senator Cantwell. I agree. Thank you so much.
Senator Sheehy. The senior senator from Ohio.
STATEMENT OF HON. BERNIE MORENO,
U.S. SENATOR FROM OHIO
Senator Moreno. Thank you, Chairman Sheehy. Make sure
Senator Cruz heard that one.
So, first of all, thank you for being here and taking the
time. If I could just real quickly just confirm that I have
heard what you said pretty unanimously, which is we need
dramatically more power generation in this country. Is that
correct?
All right. So, Dr. Su, you just recently did a partnership
with TSMC to manufacture your chips here in America. Thank you.
I think it is a little bit long overdue. I wish you had--we had
done more of that earlier.
Are those semiconductor fabs high energy users?
Ms. Su. Thank you, Senator.
We are very pleased with our efforts together with the
government on bringing more manufacturing back to the United
States.
To your question, certainly, semiconductor manufacturing
plants are high energy users and we do need more power for both
manufacturing as well as for data centers, as you mentioned.
Senator Moreno. And without chips this just does not work.
Like, if we do not have the highest performance chips made here
in the United States this is not going to happen here, correct?
Ms. Su. We absolutely need the highest performing chips and
we also need the entire ecosystem for chip manufacturing. So
wafers are one piece but there are many other pieces as well.
Senator Moreno. And are those chips powered by solar power
and windmills?
Ms. Su. Today they are not but I think there are
opportunities to certainly do that.
Senator Moreno. So do you think it is outrageous that last
year because of the policies of the Biden administration that
90 percent of new power generation in this country was
windmills and solar panels and we absolutely kneecapped
American energy?
We have a thousand years of natural gas sitting in
Pennsylvania, Ohio, and West Virginia, and yet 90 percent of
power generation in this country last year was solar panels and
windmills. Does that make this country more competitive or less
competitive?
Anybody can jump into that one that wants to answer that.
Mr. Smith. Let me say two things. One, you are right, we
need more electricity. I think our industry, it is worth
remembering, is only going to account for 15 percent of the
total additional electricity the country is going to need.
We are going to need electricity from a variety of sources.
Today in the United States 56 percent of our electricity comes
from carbon. Forty-four percent comes from carbon-free energy,
meaning nuclear, wind, or solar. We need a broad-based approach
and we need a diversity of sources.
Senator Moreno. And, again, 90 percent was energy that is
not affordable, it is not abundant, and it is not reliable.
Let me just shift gears. Mr. Altman, thank you for, first
of all, creating your platform in an open basis and agreeing to
stick to the principles of nonprofit status. I think that is
very important.
Do you think that the Internet age did a good job between
the beginning of the 1990s through the 2000s of protecting
children?
Mr. Altman. I would say not particularly.
Senator Moreno. Yes. And you are a new father, correct?
Mr. Altman. Yes.
Senator Moreno. Congratulations.
Mr. Altman. Thank you very much.
Senator Moreno. He is doing well?
Mr. Altman. He is. It is the most amazing thing ever.
Senator Moreno. Yes. I do not think you want your best--
your child's best friend to be an AI bot.
Mr. Altman. I do not.
Senator Moreno. So what can we do? How can we work together
to protect children?
Mr. Altman. We have talked a lot about some of the things
we are doing here. We are trying to learn the lessons of the
previous generation and, you know, that is kind of the way it
goes. People make mistakes and you do it better next time.
One thing we say a lot internally is we want to treat our
adult users like adults. We want to give them a lot of
flexibility.
We want to let them use the service with a lot of freedom,
and for children there needs to be a much higher level of
protection, which means the service will not do things that
they might want.
Now, we are still early so sometimes people say, oh, you
are being too strict on the rules and it is just we cannot
perfectly, like, tell this.
But if we could draw a line and if we knew for sure when a
user was a child or an adult we would allow adults to be much
more permissive and we would have tighter rules for children.
Senator Moreno. So what I would ask is if you could have
your team commit to working with our teams to make certain that
we put together the right framework early on. I think that is
the best way we can move forward, because we do not want to
overregulate but we cannot repeat the mistakes of the Internet
and social media era where children got harmed.
Mr. Altman. We would be delighted to work with you all. I
think it is super important.
Senator Moreno. Thank you.
Mr. Altman. Can I say one more thing about what you said?
Senator Moreno. Of course.
Mr. Altman. This idea of AI and social relationships I
think this is a new thing that we need to pay a lot of
attention to.
People are relying on AI more and more for life advice,
sort of, emotional support, that kind of thing. It is a newer
thing in recent months, but I--and I do not think it is all
bad, but I think we have to, like, understand it and watch it
very carefully.
Senator Moreno. Thank you, and thank you for that
commitment. It is very appreciated. I have talked to your team
already. Good people.
Mr. Altman. Great.
Senator Moreno. Mr. Intrator, real quickly, can you talk
about the intersection between the importance of a robust
stablecoin ecosystem here in America and how that has a future
with payments and how AI will factor into that? Because I do
not think people see how this fits into the broader puzzle.
Mr. Intrator. So thank you for the question.
And we did start out as a crypto-based company--a hobby that
kind of got away from us a little bit.
Look, I think that stablecoins, crypto, AI, they share
certain DNA in common which is that they are attempts to build
into a future where new technology will make things better for
society and there is a huge potential for us to use
stablecoins, crypto, and AI in a combination for better
outcomes.
Senator Moreno. All right. Thank you.
And that was the quickest coup since 1959.
[Laughter.]
The Chairman [presiding]. Senator Klobuchar.
STATEMENT OF HON. AMY KLOBUCHAR,
U.S. SENATOR FROM MINNESOTA
Senator Klobuchar. Thank you very much, Senator Cruz.
A lot of exciting things with AI, especially from a state
like mine that is home to the Mayo Clinic, with the potential
to unleash scientific research now that we have mapped the
human genome and have rare diseases that can be solved.
So there is a lot of positive but we all know, as you have
all expressed, there are challenges that we need to get at with
permitting reform--I am a big believer in that--and energy
development.
Thank you, Mr. Smith, for mentioning this with wind and
solar and the potential for more fusion and nuclear, but wind
and solar have seen the price go down dramatically in the last
few years, and to get there we are going to have to do a lot
better.
I think David Brooks put it the best when he said, ``I
found it incredibly hard to write about AI because it is
literally unknowable whether this technology is leading us to
heaven or hell''.
We want it to lead us to heaven and I think we do that by
making sure we have some rules of the road in place so it does
not get stymied or set backward because of scams or because of
use by people who want to do us harm.
As mentioned by Senator Cantwell, Senator Thune and I have
teamed up on legislation to set up basic guardrails for the
riskiest nondefense applications of AI.
Mr. Altman, do you agree that a risk-based approach to
regulation is the best way to place necessary guardrails for AI
without stifling innovation?
Mr. Altman. I do. That makes a lot of sense to me.
Senator Klobuchar. OK, thanks. And did you figure that out
in your attic?
Mr. Altman. No, that was a more recent discovery.
Senator Klobuchar. Thank you. Very good. Just want to make
sure.
Our bill directs, Mr. Smith, the Commerce Department to
develop ways of educating consumers on how to safely use AI
systems. Do you agree that consumers need to be more educated?
This was one of your answers to your five words so I assume you
do.
Mr. Smith. Yes, and I think it is incumbent upon us as
companies and across the business community to contribute to
that education as well.
Senator Klobuchar. OK, very good.
Back to Mr. Altman. Americans rely on AI, as we know,
increasingly on some high impact problems. To be able to trust
it, we need to make sure that we can trust the model output.
The New York Times reported earlier this week that AI
hallucinations--a new word to me--where models generate
incorrect or misleading results, are getting worse. Those are
their words.
What standards or metrics does OpenAI use to evaluate the
quality of its training data and model outputs for correctness?
Mr. Altman. On the whole, AI hallucinations are getting
much better. We have not solved the problem entirely yet, but
we have made pretty remarkable progress over the last few
years.
When we first launched ChatGPT it would hallucinate things
all the time. This idea of robustness, being sure you can trust
the information, we have made huge progress there. We cite
sources.
The models have gotten much smarter. A lot of people use
these systems all the time and we were worried that if it was
not 100.0 percent accurate, which is still a challenge with
these systems, it would cause a bunch of problems.
But users are smart. People understand, you know, what
these systems are good at, when to use them, when not, and as
that robustness increases, which it will continue to do, people
will use it for more and more things.
But we have made--as an industry we have made pretty
remarkable progress in that direction over the last couple of
years.
Senator Klobuchar. I know we will be watching that. Another
challenge that has been--we have seen, and Senator Cruz worked
and I worked on a bill together for quite a while and that is
the Take It Down Act, and that is that we are increasingly
seeing Internet activity where kids looking for a boyfriend or
a girlfriend maybe put out a real picture of themselves.
It ends up being distributed at their school, or somehow
someone tries to scam them for financial gain, or it is AI, as
we have increasingly seen, where it is not even someone's
photos but someone puts a fake body on there, and we have had
over 20 suicides in one year of young people because they felt
like their life was ruined, because they were going to be
exposed in this way.
So this bill we passed through the Senate and the
House. The First Lady supported it and it is headed to the
President's desk. Could you talk about how we can build models
that can better detect harmful deep fakes, Mr. Smith?
Mr. Smith. Yes. I mean, we are doing that. OpenAI is doing
that. A number of us are and I think the goal is to first
identify content that is generated by AI and then often it is
to identify what kind of content is harmful, and I think we
have made a lot of strides in our ability to do both of those
things.
There is a lot of work that is going on across the private
sector and in partnership with groups like NCMEC to then
collaboratively identify that kind of content so it can be
taken down.
We have been doing this in some ways for 25 years since the
Internet and we are going to need to do more of it.
Senator Klobuchar. And on the issue of newspapers--last
question, Mr. Chair; since the last one was about your bill I
figured it is OK. You testified before the Senate Judiciary
Committee, Mr. Smith, about the bill.
Senator Kennedy and I still think that there is an issue
here about negotiating content rates. We have seen some action
recently in Canada and other places.
Can you talk about those evolving--the dynamics with AI
developers and what is happening here to make sure that content
providers and journalists get paid for their work?
Mr. Smith. Yes, it is a complicated topic but I will just
say a couple of things.
First, I think we should all want to see newspapers in some
form flourish across the country including, say, rural counties
that increasingly have become news deserts. Newspapers have
disappeared.
Second, and it has been the issue that we discussed in the
Judiciary Committee, there should be an opportunity for
newspapers to get together and negotiate collectively. We have
supported that. That will enable them to basically do better.
Third, every time there is new technology there is a new
generation of a copyright debate. That is taking place now.
Some of it will probably be decided by Congress, some by the
courts.
A lot of it is also being addressed through collaborative
action, and we should hope for all of these things to, I will
just say, strike a balance. We want people to make a living
creating content and we want AI to advance by having access to
data.
Senator Klobuchar. OK, thanks. I will ask other questions
on the record. Thank you, Mr. Chair.
The Chairman. Thank you.
You know, Senator Klobuchar asked whether AI will lead us
to heaven or hell. It reminded me of a famous observation by
Yale Law Professor Grant Gilmore that in heaven there is no law
and the lion will lie down with the lamb. In hell there is
nothing but law and due process is meticulously observed.
Let me ask you this, and this is to each of the four
witnesses. In the race for AI who is winning, America or China?
If the answer is America how close is China to us, and what do
we do to make sure the answer remains America will win?
Mr. Altman, we will start with you.
Mr. Altman. It is our belief that the American models,
including some models from our company OpenAI and Google and
others are the best models in the world.
It is very hard to say how far ahead we are but I would say
not a huge amount of time, and I think to continue that
leadership position and the influence that comes with that and
all of the incredible benefits of the world using American
technology products and services, the things that my colleagues
have spoken about here, the need to win in infrastructure,
sensible regulation that does not slow us down, the sort of
spirit of innovation and entrepreneurship that I think is a
uniquely American thing in the world, none of this is rocket
science.
We just need to keep doing the things that have worked for
so long and not make a silly mistake.
The Chairman. Dr. Su.
Ms. Su. I will answer in the realm of chips. I would say
America is ahead in chips today. We have the best AI
accelerators in the world.
I think China, although they have restrictions on their
ability to use advanced technologies--the one thing that is
very important for us all to remember is there are multiple
ways to do things. You know, having the best chips is great,
but even if you do not have the best chips you can get a lot
done.
So in this conversation about how far behind China is, they
are certainly catching up, because there are many ways to do
things.
I think, relative to what we can do, I will continue to say
we should really ensure that our spirit of innovation is
allowed to work, and that means having very supportive
government policies, having very consistent policies, and
allowing us to do what we do best, which is innovate at every
The Chairman. Mr. Intrator.
Mr. Intrator. So I will speak to it from the physical
infrastructure and software stack to deliver that.
America is ahead, but it is the Achilles heel from the
perspective of the ability, as I started to--better? Sorry
about that.
So the ability to build very large solutions to the
computing infrastructure component of this is an area that we
are going to struggle with, from permitting to building large
projects, to be able to deliver the power to allow those
building artificial intelligence to continue to move as fast as
they can in the race that we are in.
The Chairman. Mr. Smith.
Mr. Smith. I think the United States has a lead today in
what is a close race and a race that will likely remain close.
The number-one factor that will define whether the United
States or China wins this race is whose technology is most
broadly adopted in the rest of the world.
This is a global market and it will be defined as
technology markets typically are by network effects. Eighteen
percent of the people of the world live in China. Four percent
live in the United States. Seventy-eight percent live somewhere
else.
The lesson from Huawei and 5G is whoever gets there first
will be difficult to supplant. We need to export with the right
kinds of controls. We need to win the trust of the rest of the
world.
We need to have the financial architecture that gets not
only to the countries that are industrialized but the nations,
say, across Africa where typically China and Huawei have done
so well.
The Chairman. So some of my colleagues have made reference
to standards as something that is desirable, and I will say
standards is often a code word for regulations and, indeed, the
EU stifling standards concerning the Internet is what killed
tech in Europe.
We are seeing now state legislatures mimicking the EU such
as California's S.B. 1047 which, thankfully, was overwhelmingly
defeated but would have created essentially a California DMV
for AI model registration.
How harmful would it be to winning the race for AI if
America goes down the road of the EU and creates a heavy-handed
prior approval government regulatory process for AI?
Mr. Altman. I think that would be disastrous. To give a
more specific answer to your previous question, which I think
touches on why it would be so bad, there are three key inputs
to these AI systems.
There is compute, all the infrastructure we are talking
about, there are the algorithms that we all do research on, and
there is data.
If you do not have any one of those you cannot succeed in
making the best models and, as Brad said, the way for America
to influence the world here is to have the technology that
people most want to use and most adopt.
The world uses iPhones and Google and Microsoft products,
and that is wonderful. Like, that is how we have our influence.
We do not want that to stop happening. So systems that stop us
on any of these areas, you know, if we have rules about what
data we can train on that are not competitive with the rest of
the world then things can fall apart.
If we are not able to build the infrastructure and
particularly if we are not able to manufacture the chips in
this country the rules can fall apart. If we cannot build the
products that people want that naturally win in the market--and
I think people do want to use American products.
We can make them the best. But if we are prevented from
doing that people will use a better product made from somebody
else that does not have the sort of--you know, that is not
stymied in the same way.
So it is--I am nervous about standards being set too early.
I am totally fine, you know, with the position that some of my
colleagues took that standards, once the industry figures out
what they should be, it is fine for them to be adopted by a
government body and sort of made more official.
But I believe the industry is moving quickly toward
figuring out the right protocols and standards here and we need
the space to innovate and to move quickly.
The Chairman. So if each of you could briefly answer that
question because my time has expired. So I want to be
respectful of that.
Ms. Su. I agree with the comments that Sam put up.
Mr. Smith. I agree, and I would just say and I think the
point you are making is we have to be very careful not to have
these preapproval requirements including at state levels
because that would really slow innovation in the country.
Mr. Intrator. I think that a patchwork of regulatory
overlays will cause friction in the ability to build and extend
what we are doing.
The Chairman. Thank you.
Senator Curtis? Schatz?
Senator Schatz? Apologies.
STATEMENT OF HON. BRIAN SCHATZ,
U.S. SENATOR FROM HAWAII
Senator Schatz. No problem, Chairman.
Thank you for being here. I just want to follow up on the
Chairman's question and a sort of--maybe an emerging consensus
on the Committee.
OK. I do not think there is anybody even on this side of
the dais that is proposing a sort of European-style
preapproval.
I think there are some people who would like to do nothing
at all in the regulatory space but I think most people
understand that some guardrails--those are the words that you
use, Mr. Altman--rules and guardrails are necessary.
Are you saying that self-regulation is sufficient at the
current moment?
Mr. Altman. No. I think some policy is good. I think it is
easy for it to go too far, and as I have learned more about how
the world works I am more afraid that it could go too far and
have really bad consequences.
But people want to use products that are generally safe.
You know, when you get on an airplane you kind of do not think
about doing the safety testing yourself.
You are, like, this is--well, maybe this is a bad time to
use the airplane example but you kind of like want to just
trust that you can get on an----
Senator Schatz. It is an excellent time to use the airplane
example. But I think your point is exactly right is that, look,
there is a race but we need to understand what we are racing
for, right?
And it also has to do with American values. It is not just
a sort of commercial race so we can edge out our near peer
competitor both in the public sector and the private sector. We
are trying to win a race so that American values prevail
internationally.
Mr. Smith, I want to move on to another topic. It seems to
me that on the consumer side that one of the most basic rights
of a user on the Internet is to understand what they are
looking at or listening to and whether or not it was created
solely by a person, a person using an AI, or automatically
generated using AI.
Do you think a labeling regime--not a prohibition on the
use of AI but just the disclosure, especially as it relates to
images, music, creativity--do you think a label would be
helpful for consumers?
Mr. Smith. Generally, yes, and I think that is what we in
the industry have been working to create. I think you are right
to make the distinction and focus especially on, say, images,
video, audio files.
There is a standard called C2PA that we and a number of
companies now have been advancing. It has content credentials.
It enables people to know where something was created, who
created it, and I think--you are right--to know whether it was
created by a person, by AI, or a person with the help of, say,
AI.
Senator Schatz. I just want to use sort of common language,
not the language that all of you use or that we have all
learned to use.
When you talk about the data as one of the three elements
that makes a model work, data really is intellectual property.
It is human innovation, human creativity, and I do think we may
have a disagreement--and I agree with Senator Klobuchar about
the need to understand that these models have been trained on
data but what we are really talking about is human achievement
all the way up to now.
And I have a deep worry--look, I am actually an optimist in
the energy space and the public service space, certainly in
health innovation. There are a lot of really exciting
opportunities here.
But we got to pay people for their knowledge and I am
concerned that these models are going to be so successful in
spitting out what appears to be knowledge that we are going to,
on the back end, not pay people for all of the inputs and we
will have a sort of stalling out of these models.
And you talked about a tension but I am trying to figure
out what the tension really is other than you would like to pay
as little as possible for these inputs.
Go ahead, Mr. Smith.
Mr. Smith. Well, you had me until the last sentence.
Senator Schatz. I know.
Mr. Smith. Look, we create intellectual property. We
respect intellectual property. So we are emphatically of the
view that intellectual property and the creation of it should
be rewarded.
Ultimately, intellectual property laws are always about
drawing the line. It is really the line that you refer to. In
copyright, there is expression that is protected. If you write
a book and somebody copies it then you are entitled to be paid.
But there are ideas. If someone reads your book, if someone
remembers that Shakespeare wrote a story about two teenagers
who fell in love----
Senator Schatz. Sure. Then that is fair use.
Mr. Smith [continuing]. Then that is fair use. That is why
this country and Congress created it.
Senator Schatz. OK. That is where the tension is.
Mr. Smith. That is what we need to focus on.
Senator Schatz. With your permission, Chairman, I want to
ask one final question.
The Chairman. Proceed.
Senator Schatz. Thank you. I am actually quite excited
about the prospect that in 20 years people are going to say,
remember when you had to wait on the phone to talk to Kaiser
Permanente or the VA?
So I just--maybe Mr. Altman and Mr. Smith, I want you to--
you know, a buddy of mine used to say, paint a picture and
paint me in it.
OK. For the government actually delivering services I want
you to describe what an AI agent or AI can do to kind of reduce
those pain points that we accept as a fact of life in
interacting with the government.
It seems to me so much of what makes us irritated with the
government is the lack of sorting data that exists somewhere
but we cannot get access to it.
So just very quickly, you have 15 seconds each for some
cheerleading.
Mr. Altman. I can imagine a future where the U.S.
Government offers an AI-powered service that makes it really
easy to use all government services to get great health care,
to get great education.
You have this thing in your pocket, and if you have any
medical problem you get an answer. If you need to, you know,
like, appeal something on some process you are having with the
government, or file your taxes, or whatever, you just do it
instantly. You have an agent in your pocket fully integrated
with the U.S. Government and life is easy.
Senator Schatz. Anything to add?
Mr. Smith. Remember when you had to stand in line to renew
your driver's license? Remember when you did not know how to
report a pothole that needed to be repaired on your street?
Remember when you had a fender bender in a car and you had
to fill out all these forms and talk to all these people to get
insurance coverage?
Now you can do it all with one AI system. You can use your
phone and, by the way, you can do this today in Abu Dhabi. We
need to bring it to America.
Senator Schatz. Thank you.
Senator Young [presiding]. Senator Budd.
STATEMENT OF HON. TED BUDD,
U.S. SENATOR FROM NORTH CAROLINA
Senator Budd. Thank you, Chairman.
Again, thank you all for being here. I have enjoyed various
conversations with each of you.
The ability of the U.S. to deploy new energy generation
capacity and upgrade its grid is in many ways the key to the
race against China.
Energy is how we can win and it is also how we can lose.
Permitting in this country takes too long. China's command and
control system means that they will not fail to deploy the
energy needed to achieve the scale necessary to develop the
most advanced models which will drive all the benefit of AI.
So I am glad to be working with Senator Lummis on the FREE
Act, which would set up a permit-by-rule structure which would
let large projects meet comprehensive standards at the front
end instead of being dragged out in a case-by-case process.
We all want to protect the environment and we all want to
maintain U.S. economic and technological leadership.
So, Mr. Intrator, what has CoreWeave's experience been in
contracting power and are you concerned that the current
permitting system can make it harder for the U.S. to achieve
capital investment in the scale needed to win this AI race?
Mr. Intrator. So, as you said, access to power, access to
scale power is certainly one of the keys to our ability to win
this race. There are others but it is one that I spend a lot of
time thinking about.
I separated the comment into access to power and access to
scale power because I do think that we are moving toward a
period of this race where the size, the magnitude of the
infrastructure that is being required to move our artificial
intelligence--the labs that are building it, the companies that
are building it--forward at the velocity that is necessary is
going to be a specific challenge that really requires a lot of
thought.
We have a huge part of our organization focused on not just
getting access to power but getting access to the size and
scale of power that is going to be able to build the
infrastructure, you know, at the scale of Abilene or close to
it in order to, you know, allow this to move forward.
It is tough, right, and it will get harder as we move
through time because the existing infrastructure that does have
opportunities--it has some level of elasticity--is going to be
consumed, and once that is consumed you are going to get down
to kind of a first principle of how do we get power online now,
and that is really going to be challenging within the
regulatory environment as it is currently configured.
Senator Budd. Thank you.
Mr. Smith, a similar question. How is Microsoft trying to
secure power for its data centers? I mean, we read about that
in the news recently but what does Federal policy need to focus
on to make sure that we do not lose this race because we cannot
get enough energy?
Mr. Smith. Well, we invest to bring more electricity
generation onto the grid and then to bring it through the grid
to our data centers. We probably have more permitting
applications in more countries than quite possibly any company
on the planet.
Last time I looked at it, it was 872 applications in more
than 40 countries. The number-one challenge in the United
States when it comes to permitting, interestingly enough, is
not local. It is not state. It is the Federal wetlands permit
that is administered by the Army Corps of Engineers.
We can typically get our local and state permits done in
about six to nine months. The national--the wetlands permit is
taking often 18 to 24 months.
Both the outgoing Biden administration and the incoming
Trump administration have focused on this, but if we could just
solve that we could accelerate a lot here in this country.
Senator Budd. Very helpful. Thank you.
Mr. Altman, much has been made about the Chinese open
source models like DeepSeek. We spoke about that a month or two
ago.
A concern that I have is that accessible Chinese models
promoted by the Chinese Communist Party might be an attractive
option for AI application developers to build on top of,
particularly in developing world economies.
So how important is U.S. leadership in either open source
or closed AI models?
Mr. Altman. I think it is quite important to lead in both.
We realize that OpenAI can do more to help here so we are going
to release an open source model that we believe will be the
leading model this summer because we want people to build on
the U.S. stack. In terms of closed source models, a lot of the
world uses our technology and the technology of our colleagues.
We think we are in good shape there.
Senator Budd. So how could Federal policy further help
encourage the AI ecosystem to be developed right here in the
U.S.?
Mr. Altman. Well, you touched on a great point with energy.
I think it is hard to overstate how important energy is to the
future here. You know, eventually chips, network gear, that
will be made by robots and we will make that very efficient and
we will make that cheaper and cheaper.
But an electron is an electron. Eventually, the cost of
intelligence, the cost of AI, will converge to the cost of
energy, and how much you can have--the abundance of it--will be
limited by the abundance of energy.
So in terms of long-term strategic investments for the U.S.
to make I cannot think of anything more important than energy.
You know, chips and all the other infrastructure also but
energy is where this--I think this ends up.
Senator Budd. Thank you. Chairman?
Senator Young. Senator Kim.
STATEMENT OF HON. ANDY KIM,
U.S. SENATOR FROM NEW JERSEY
Senator Kim. Thank you.
Mr. Smith, I think I would like to start with you because I
thought your point about what exactly is the race, right--you
know, we keep talking about the race, and you framed it in a
particular way saying that it is about adoption in the rest of
the world, the 78 percent.
I guess I just wanted to ask you to tease that out some
more in terms of understanding what role we could play in
Congress, in government, in terms of trying to accelerate and
champion that AI adoption internationally?
Mr. Smith. I think there are two things. The first is it
just shines light on the importance of getting it right for
export controls, which is the AI diffusion rule that is being
discussed right now, and I think what it shows is we want to
have, I believe, as a country the kinds of national security
controls that ensure that, say, chips do not get diverted to
China or get accessed by the wrong users, say, in China for the
wrong reasons.
And that is something that people have drafted in the
Department of Commerce. At the same time, we need, I believe,
to say get rid of the quantitative caps that were created for
all of these tier two countries because what they did was send
a message to 120 nations that they could not necessarily count
on us to provide the AI they want and need.
And just think about it. I mean, if this is a critical part
of your country's infrastructure how can you make a bet on
suppliers if you are not confident that they will be able to
fulfill your needs?
So I think you in Congress and the Senate can help the
White House and the Department of Commerce get this right.
Senator Kim. Mr. Altman, I wanted your thoughts on this. Is
that the right framing of the race? Is it about the adoption
internationally in terms of other countries? I guess I am
trying to think through it.
Like, part of what you just said in your previous response
was that we want other nations to be able to build upon the
U.S. AI stack. Is that the right framework? Is that what we are
thinking about?
Or is it more about the consumer? Is it more about getting
the rest of the world and the 78 percent of the population to
adopt AI applications that are U.S.? Or is it interrelated?
Mr. Altman. I think it is heavily interrelated. To me, the
stack is, you know, from the chips at the bottom to the
applications on the top, and we want the whole world on the
U.S. stack. We want them to use U.S. chips. We want them to use
services like ChatGPT.
Senator Kim. Does having other nations build on the
infrastructure component of the stack--does that more or less
guarantee, or at least make it highly likely, that the
consumers in that country will then be using our products and
applications? Is that the sort of theory of the case?
Mr. Altman. It probably does make it marginally more
likely. But I also think the--if someone is using a stack that
we do not trust to train models, like, who knows what it is
going to do?
Who knows what sort of back doors would be possible? Who
knows what sort of, you know, data corruption issues could be
possible?
I think the AI stack is increasingly going to be a jointly
designed system from the chip all the way up to the end
consumer product and, you know, lots of stuff in between.
I think separating that will not work that well in practice
and we should not want to. Like, again, I think this point--
this is a very critical point that the leverage and the power
the U.S. gets from having iPhones be the mobile device people
most want and, you know, Google being the search engine that
people most want around the world is huge.
We talk maybe less about how much people want to use chips
and other infrastructure developed here but I think it is no
less important, and we should aim to have the entire U.S. stack
be adopted by as much of the world as possible.
Senator Kim. Yes. I mean, when we are looking at--you know,
you are talking about our investment into models and building
of that nature.
How are we doing in terms of development of the
application--the AI tools and applications, though, that are
trying to embed in people's lives?
You know, not necessarily just the overarching models but
do you feel like we are putting the level of intensity that we
need to in terms of that type of development?
Mr. Altman. ChatGPT is the most adopted AI service in the
world--not just in the United States but in the world--by a
quite significant margin.
We are very proud that people like it and we need to keep
pushing on that. I think it is important for all the reasons
you just discussed.
There are many other U.S. companies building incredible
products and services that are also getting globally adopted.
This is what the U.S. does best.
Senator Kim. Dr. Su, I want to just ask one last point to
you. Over and over again each of you is talking about talent as
this incredible power but also could be a bottleneck to us.
How are we doing when it comes to development of talent in
this country? If you were to give us a grade what would you
grade us at in terms of our development right now?
Ms. Su. Thank you, Senator, for the question. Look, I think
the smartest engineers are in the United States. We have a
great base of talent.
But what I will say is we need more. We need more hardware
developers, software developers, application developers.
Senator Kim. How wide is that delta? If we are talking
about this as a race, as you did, you know, is that a space
where we have a larger amount of delta or is that a place where
it is closing rapidly, too?
Ms. Su. Well, I think we do have a very talented overall
talent base but we also have the desire to have the best and
that includes not only, you know, U.S. nationals but also
having the best international students.
Senator Kim. Drawing the talent from----
Ms. Su. That is right. I think high-skilled immigration is
one of those areas where we want the best people in the world
to be doing their work in the United States.
And, Senator, if I can just add something to your previous
point about the cycle and what race we are trying to win.
You know, technology is one of those things where you can
have a very virtuous positive cycle. So, in other words, when we
lead and more people adopt, that means more developers that make
our technology better.
That increases our lead. So what we want is to have
our leadership just increase over time.
Senator Young. Senator Schmitt.
STATEMENT OF HON. ERIC SCHMITT,
U.S. SENATOR FROM MISSOURI
Senator Schmitt. Thank you, Mr. Chairman.
Mr. Altman, I will start with you. I really enjoyed and was
inspired by your story with the light on in the home you grew
up in in St. Louis and you talked about the spirit of
innovation. That is the Spirit of St. Louis.
As a fellow St. Louis native that is a good story to hear,
and we just look forward to more investment in St. Louis from
your company. That would be great, too. So I will put a plug in
for that.
But I do want to ask you specifically, there is a lot made
of sort of the comparison between the United States and the
regulatory environment and what exists in Europe.
What specifically--and I will open this up too--what
specifically has gone wrong in Europe that we can draw some
conclusions from?
Mr. Altman. First of all, we would love to figure out how
to invest more in St. Louis. I would love an excuse to get to
go home more often.
I will point out one example that I think is just very
painful to users. When we launch a new feature or a major new
model we have what is now considered a little bit of like an in
joke where we say we have this great new thing not available in
the EU and a handful of other countries because they have this
long process before a model can go out.
And there will be, I believe, great models and services
that are quite safe and robust that we will be unable to offer
in other regulatory regimes, and if you are trying to be
competitive in this new world and if you are consistently some
number of months behind what other people in other countries
get access to, that is an example that is extremely painful to
users.
Senator Schmitt. And you mentioned sort of your observation
that the AI stack may make it more vertically integrated. So
how does that work then?
Because right now the best estimate, I suppose, right, is
that--I do not know--China is 2 months to 6 months behind,
maybe, on large language models. Hopefully, some of the advances
we are seeing in the U.S. maybe there is a degree of
separation. It is hard to know exactly, right, with DeepSeek.
But then you get down to the chips and that advantage is
more like a couple of years probably, something like that.
So if that is where we are headed does that increase the
U.S.' advantage, in your view, or does that sort of allow China
to catch up quicker as we get more vertically integrated?
Mr. Altman. I think there are a lot of things that can
increase U.S. leadership. But we touched on this earlier--I
think it is so important. There will be great chips made around
the world. There will be great models trained around the world.
If the United States companies can win on products and
the--sort of all of the positive feedback loops that come from
how you can improve this once, you know, real users are using
your products in their daily lives for their hardest tasks,
that is something special that is not so easy to catch up with
just by doing good chips and good models.
So making sure that the U.S. can win at the product level
here--obviously, I am, like, talking my book a little bit, but
I really do believe it--is quite important, and that is in
addition to all of the chips, algorithms, infrastructure, and
data. I think this is a new area where the U.S. is really
winning and has a very strong compounding effect.
Senator Schmitt. Mr. Intrator--did I pronounce that
correctly, by the way?
Mr. Intrator. Yes.
Senator Schmitt. OK, thank you.
I want to turn a little bit, sort of staying on this
regulatory environment, one of the things I think that is most
concerning that is coming out of Europe is this sort of
censorship regime that exists not just online but in real life.
But, certainly, it is happening online. I mean, people are
being arrested for things that they say online, and one of the
concerns I have with AI, I suppose, is that if we end up with a
place where it is somehow policing, quote/unquote,
``misinformation'' and, you know, I think even in NIST's most
recent voluntary standards one of the risks to be on the
lookout for was the spread of misinformation.
So the point of the question is how do we make sure that--I
think part of what is going wrong in Europe is it is a sort of
a--it is funneling information and, in my view, whether I agree
with the point of view or not it ought to be out there. People
can make their own decisions. You combat speech you do not
agree with not by censoring it but by--with more speech.
What are some lessons to be learned there and make sure
that does not happen here?
Mr. Intrator. So Europe is moving forward with its
regulatory regime in a European way, and from our seat where we
have to make these enormous capital investments one of the
things about the approach that Europe is taking that we are
deeply concerned about every day is the balkanization--to use
that term--of how they go about allowing information to flow and how they
go about regulating it, how they go about with each component
of their union having its own set of rules, which will be
tremendously challenging in Europe as time goes on because it
is really hard to make the magnitude of investments that we--
where we are----
Senator Schmitt. Beyond that, though, jurisdictionally I am
talking about content now.
Mr. Intrator. So we are not--the role of our company is
really kind of below that, you know--and, you know, Sam and,
you know, Microsoft you are going to get a lot more attention
paid to the content level because of the role that they play in
the stack. It is not really where we are primarily focused. We
are really focused on the investment side of it.
Senator Schmitt. Yes. If any of you would like to--Sam, if
you--or, Mr. Altman, if you would like to respond to that I
would like to get some answer.
Mr. Altman. I think--well, first of all, I strongly agree
that people getting, you know, like, put in jail for stuff they
say online is very--not American and not what we should be
doing.
AI is quite different than social media, at least in its
current evolution. People are using these tools in this sort of
one-on-one way instead of this massive thing online.
So I think it is easy to make too many analogies but it is
a little bit dangerous to try to talk about AI and the things
we are going to face here in the same way that we did for
social media. But our stance is that we need to give adult
users a lot of freedom to use AI in the way that they want to
use it and to trust them to be responsible with the tool.
And I know there is increasing pressure in other places
around the world and some in the U.S. to not do that but I
think this is, like--this is a tool and we need to make it a
powerful and capable tool.
We will, of course, put some guardrails and very wide
bounds but I think we need to give a lot of freedom here.
Senator Schmitt. Yes. I am out of time but there is a lot
more questions there that we will follow up with.
Thank you, Madam Chair.
Senator Cantwell [presiding]. Thank you. Thank you.
Senator Hickenlooper.
STATEMENT OF HON. JOHN HICKENLOOPER,
U.S. SENATOR FROM COLORADO
Senator Hickenlooper. I appreciate that line of
questioning. I was ready for you to continue as well. I could
have given you a minute or two.
Mr. Smith, Microsoft has a long and deep history in
transforming workplaces all over the world through software,
from Windows operating system to its office applications like
PowerPoint, Excel, and now the AI-powered Copilot application.
In software development, life cycles seem to be becoming
increasingly shorter, updates becoming more frequent.
What are the internal processes that Microsoft follows to
evaluate Copilot's accuracy and performance before it is
released and what kind of independent review teams other than
Microsoft's own product developers are involved in that? Who do
you bring in to help with that?
Mr. Smith. Well, first of all, since most of what we are
talking about here when you are talking about our Copilots
starts with models that are developed at OpenAI, I would say
OpenAI has its internal process.
There is then a joint--what is called the DSB, a Deployment
Safety Board, where we decide together whether something is
safe to deploy, as the name implies.
We then at the applications level have our own internal
Deployment Safety Board. We have a variety of engineering tools
that we use to assess these features. We test these features.
We have red teams, meaning sort of competing teams that
often go to work to sort of attack the features, and then
ultimately the product is released when those tests are
completed and the results are satisfactory.
Senator Hickenlooper. I like that. Well, let me go over to
Mr. Altman. Obviously, you all have a natural incentive to
ensure that the products are high quality and safe.
But the field is so competitive and, you know, in applied
research and with rigorous testing these constant improvements
really are fundamental steps to the performance of a model.
So risk assessments are that key tool, and I am a big
believer in evidence-based technical standards. I have been
accused of being the only real scientist who has published
peer-reviewed papers in the Senate.
So, Mr. Altman, do you believe that under appropriate
circumstances independent evaluations based on standards
performed by qualified evaluators and done voluntarily could
help validate the testing that you are performing internally
and in conjunction with peer companies?
Mr. Altman. Thank you, Senator, and I think it is awesome
that you are--have published peer-reviewed papers and would
love to see more of that.
Senator Hickenlooper. I was--on the Maslow's triangle of
science I was near the bottom. I was a geologist. So that is
not high up in that----
Mr. Altman. Geology is great.
Yes, I think what you say is very important. It is an
important part of our process today. External testing helps us
find things that we may have missed internally and--we are very
proud of our safety record on the whole, not that we--you know,
we have not been perfect and we are continuing to learn new
things, but I think we do have a process that is leading toward
models that the public generally thinks are safe and robust to
use, and we have developed a lot of techniques to be able to
continue to deliver that.
But external testers and red teamers are a critical part of
that process and I think they have helped us find many things
in the models to improve.
Senator Hickenlooper. Mr. Smith, would you add anything to
that?
Mr. Smith. No.
Senator Hickenlooper. OK. Got it. Someone giving testimony
who does not have something to add--it is a moment of
scientific reflection.
Dr. Su, the bipartisan CHIPS and Science Act was an historic
effort to try and maintain U.S. leadership in emerging
technologies like semiconductors but others as well. As the
technology arms race continues globally--and you were talking
about this--AMD plays a key role in delivering state-of-the-art
designs, the best for the new chips that are going to power our
electronics and the devices that are going to allow AI to
become global.
As scientists work around the clock to develop new
breakthroughs and to try to increase and improve performance
but at the same time shorten R&D timelines, what do you see as
the next frontier of chip technology in terms of energy
efficiency and how can--and that is not just based on the
Chinese competitors but how can we work together to improve
direct to chip cooling for high-performance computing?
Ms. Su. Well, thank you for the question, Senator.
I would say, look, there is a tremendous amount of
innovation that is going on in the semiconductor sector today.
The CHIPS and Science Act was certainly helpful in raising the
profile of chips in the United States.
Relative to, you know, what are we doing to go faster and
build better and more power efficient chips, frankly, we are
using AI extensively through our chip development cycles and it
does allow us to augment what are typically very long cycles,
many years--you know, several years for us to develop chips.
We can shorten the time and also improve the efficiency,
and there are lots and lots of great new technologies in terms
of cooling technologies that are super important for us to
build the large-scale systems that we talked about earlier
today. So thank you for the question.
Senator Hickenlooper. All right. I am out of time. I will
yield back to the Chair. Thank you all.
The Chairman [presiding]. And, Senator Hickenlooper, I will
say as a Texan whose parents were in the oil and gas business I
think geologists are awesome.
Senator Hickenlooper. We have a consensus.
The Chairman. Senator Curtis.
STATEMENT OF HON. JOHN CURTIS,
U.S. SENATOR FROM UTAH
Senator Curtis. Thank you, Mr. Chairman. It is a delight to
be here.
Mr. Altman, you started kind of a one-upmanship on
computers and I will just tell you in 1985 the month you were
born I was attending a class at Brigham Young University and
carried in a laptop and was almost kicked out.
Mr. Altman. What laptop?
Senator Curtis. It was a TRS 80----
Mr. Altman. Oh, awesome.
Senator Curtis.--made by Radio Shack. I upgraded the memory
from 40K to 80K. Ran on four AA batteries and----
Mr. Altman. That is incredible.
Senator Curtis. Yes. So I am very envious of your
generation. Let me start with you, if I would.
I think, you know, Utah would aspire to lead out with data
centers and advanced technologies. Could you just address for
states and Utah specifically what it is that makes them
attractive to projects like Stargate?
Mr. Altman. Yes, and I know that we are having productive
discussions about some potential sites in Utah. Power cooling,
fast permitting process, labor force that can build these
things--the electricians, the construction workers, the entire
stack.
A state that wants to, like, partner with us to move
quickly. Texas really has been unbelievable on this. I think
that would be a good thing for other states to study but we
would be excited to try to figure something out.
Senator Curtis. Thank you. I think I could speak for our
state leaders. We would be excited as well. But as you know,
this also brings challenges and one of those challenges are the
demands for energy, and what are your thoughts on how we
protect rate payers and kind of put a little bit of a firewall
between them?
Mr. Altman. I mean, I think the best way is just much more
supply, more generation. You know, like, I think if you make it
easy to reasonably profitably create a lot of additional
generative capacity the market will do that.
That will not only not drive up rates because of the AI
workload--hopefully it will drive them down for everything.
And we have talked a lot about the importance of energy to
AI. Energy is just really important to quality of life. One of
the things that seems to me the most consistent throughout
history is every time the cost of energy falls the quality of
life goes up, and so doing a lot to make energy cheaper in the
short term--I think this probably looks like more natural gas,
although there are some applications where I think solar can
really help in the medium term.
I hope it is advanced nuclear fission and fusion. More
energy is important well beyond AI. You know, in some sense we
have these dual revolutions going of AI and energy, the ability
to have new ideas and the ability to get them done, to make
them happen in the physical world where we all live. Like,
these are kind of the limiting reagents of prosperity and let
us have a lot more.
Senator Curtis. Thank you.
Mr. Smith, we have talked about how significant power was--
is to the success here. What role do you think Microsoft and
other tech leaders have in developing energy and particularly
the right type of energy?
Mr. Smith. I think we have a tremendous responsibility to
contribute to the solution and I think Sam helped with his
list.
I would highlight two things, and I just would, I guess,
illustrate it with what we do everywhere but most recently with
a major site in southeastern Wisconsin. We went from zero,
basically, to becoming the largest industrial user of
electricity in the state--roughly, 400 megawatts.
And so we worked with the local utility. We made the
investment to help and really enable them to expand their
electricity generation.
Now, that electricity then needed to be delivered from
their power plant through the grid to our data center. We went
to the Public Utilities Commission and we proposed a rate
increase on ourselves because we thought it was important that
we pay for that improvement to the grid so that the neighbors,
so to speak, would not have to.
And I think what it really illustrates is the collaborative
partnerships that are needed to provide the capital, to do the
construction, to improve the grid, and to be, I think, very
sensitive to the community as a whole.
Senator Curtis. Thank you.
Mr. Altman, let me come back to you. I was a small business
owner. I have a special spot in my heart for small business
owners.
Can we talk a little bit about ChatGPT and how that might
assist small business owners? And let me paint a little broader
picture. We have heard a lot about other tools that are,
perhaps, out of favor, particularly with the U.S. Government,
that are very helpful for small businesses.
But I do not know if small businesses are fully
understanding the platform that you have and how they might use
it for marketing, for data research, and ways to help their
small business be successful.
Mr. Altman. One of--there were all these moments as ChatGPT
was beginning to take off where we would be, like, oh, we may
have, like, a hit on our hands.
There is, like, that is--someone is using it for this and
this and that, you know, strangers talking about it. You see
someone using it in a coffee shop.
But one of the ones that really sticks out for me is pretty
quickly after ChatGPT launched, like, in the first six months,
say, I was in an Uber and the driver was making conversation.
He is, like, have you heard of this thing called ChatGPT?
It is amazing. And I was, like, yes. Like, what do you think
about it?
And he was using it to run basically his entire small
business. He was, like, I had--he ran a Laundromat and he was,
like, I had all these problems, you know, like, could not find
good people to write my ads, could not get, like, legal
documents reviewed, could not, like, answer customer support e-
mails.
And he was, like, a mega early adopter but he was one of
these people that was using AI to, like, make a small business
work and that was--we talked about that story a lot at the time
but it is nice to reflect on it again now.
We have now heard that at scale from a lot of people, but
that was one of those moments early on we were, like, oh, this
is maybe going to work.
Senator Curtis. So--and I am out of time--but just to mark,
this is more than just something that helps proofread e-mails,
right? And you do not need to comment because I am out of time,
but I think we would all agree with that.
Mr. Altman. It is.
Senator Curtis. And look forward to seeing these
applications move forward.
Mr. Chairman, I yield my time.
The Chairman. Senator Duckworth.
STATEMENT OF HON. TAMMY DUCKWORTH,
U.S. SENATOR FROM ILLINOIS
Senator Duckworth. Thank you, Mr. Chairman. I thank you,
the panel, for all of you being here today.
I want to begin by talking about the importance of
partnerships between the private sector and our National
Laboratories in maintaining United States leadership in AI.
Illinois is the proud home of two crown jewels of the
National Laboratories, Fermilab, America's premier particle
physics and accelerator laboratory, and Argonne National
Laboratory, home to the Aurora supercomputer that will
accelerate breakthroughs in AI, cancer research, and
fundamental physics.
There is nothing more important than sustaining and
amplifying investments in our Nation's incredible network of
National Labs.
Yet, Donald Trump and Elon Musk, with the support of some
Republicans in Congress, are plotting to take a chainsaw to the
vital research initiatives being carried out across our
country.
This is a self-sabotaging attack, plain and simple, and if
allowed to proceed Trump and Musk will inflict lasting harm on
our innovative capabilities and capacity that our enemies could
only dream of achieving.
Does anyone truly have confidence that had DOGE been around
decades ago they would not have cut the project that created
the Internet as an example of wasteful publicly funded research
and development?
So my question to any member of the panel is the following.
Can you explain the importance of the National Labs system to
maintaining our research edge and discuss any partnerships you
have established or are currently pursuing, especially those
threatened by massive cuts to the National Labs' research?
Mr. Altman. We partner with the National Labs so maybe I
could take a first cut of this.
Senator Duckworth. Please.
Mr. Altman. Also, Senator, I would love to get to visit
Fermilab someday. That would be, like, unreal.
Senator Duckworth. That was my next question. You are
welcome.
Mr. Altman. That would be a real life highlight. That would
be very cool.
There are many wonderful things that AI is going to do for
the world but the one that I am personally most excited about
is the impact AI will have on scientific discovery. I believe
that new scientific discovery is the most important input to
the world getting better and people's quality of life getting
better over time.
It is hard to overstate where we would be if--where we are
because of scientific advancement and where we would be without
it. So we are thrilled to get to partner with the National Labs
on this.
I think science has not been as efficient as it can be, and
we are also thrilled to hear from scientists that they are, you
know, multiples more effective than they used to be and I think
that AI tools will mean we can accomplish at some point a
decade's worth of scientific progress in a year for the same cost
or even less.
This will be one of the most important contributions, in my
opinion, that AI makes to the world. And it is no longer
theoretical. Like, the National Labs are a great example.
It is the only partnership where we have given a copy of
our model weights to another organization. It is a very deep
and important partnership to us and I expect that that will
really bear fruit.
Senator Duckworth. Thank you. Anybody else on the panel?
Mr. Smith. Yes, I think you highlight a very important
issue. This country has 17 National Labs administered by the
Department of Energy and about 85 to 90 research universities,
and together they are the fabric of much of scientific
discovery and have been since the Manhattan Project in World
War II.
We in the tech sector, we at Microsoft, work with most,
almost all of them, and there is a particular cycle of
innovation that the United States has mastered. You have
curiosity-driven research in these institutions and then the
advances move out of those institutions into startups and into
larger companies.
And what I always find interesting, as I meet with
officials around the world they have studied this. They seek to
emulate it, and I always worry that in the United States we run
the risk of taking it for granted.
We should never take this for granted. It is the foundation
for the country's technological leadership.
Senator Duckworth. Very much so.
Dr. Su.
Ms. Su. I just wanted to add to that. We are also very
large supporters of the public-private partnerships with the
National Labs.
I think the National Labs have, you know, in a way always
tried to look ahead of the curve and, you know, that is a great
place for us to invest.
We think they are a key piece. We have partnered with all
of the National Labs as well, you know, over the last decade
and that continues to be a place where I think there can be
significant public-private partnership.
Senator Duckworth. Thank you. Mr. Intrator.
Mr. Intrator. I just think it would be really interesting
to come to these AI factories and to walk or travel through
these institutions and identify all the different pieces of the
science that leads back and was ultimately driven and founded
on something that came out of those institutions. It is
amazing, actually.
Senator Duckworth. Thank you. And would any of the
remaining three of you like to come to a lab in Illinois,
either Fermi or Argonne? I will give you personal tours.
[Laughter.]
Senator Duckworth. All right. All four of you. It is done.
Thank you, Mr. Chairman.
The Chairman. Thank you.
Senator Young.
STATEMENT OF HON. TODD YOUNG,
U.S. SENATOR FROM INDIANA
Senator Young. Thank you, Mr. Chairman, for holding this
important hearing on winning the AI race. It is good to see our
panelists here.
One of the things that I like to underscore whenever I talk
about this issue is we are not just discussing a race to create
jobs, not just discussing a race to figure out how to eke out
more growth from our economy, although that is important.
Not just trying to identify how humans can flourish more,
especially Americans, through application of AI solutions to
our daily lives in various ways.
But this is an issue of national and economic security. I
want folks at home to get that. I know all our panelists are
highly conversant and knowledgeable about that.
In my discussions with you and many others I have heard we
need to work with like-minded partners and allies to win this
race, and it is only going to be done collectively.
I have heard here today from a number of you that this race
is in part about getting market share, diffusion of our AI
models and solutions into other countries.
It is through that means, perhaps to elaborate on
your thoughts, that we can see that our own values are advanced.
These models presumably they will be embedded with our
values related to privacy and transparency and property rights
and freedom of speech and religion, not the values of the
Chinese Communist Party on each of those various fronts.
And then if we can establish digital trade rules, digital
cross-border agreements on digital trade with these other
countries, we could conceivably erect higher barriers to entry
for models that do not come embedded with our standards, models
that, say, the Chinese Communist Party has given sanction to.
So there is a geopolitical national security overlay to
this entire conversation, which is why I think the Chairman's
emphasis on not overly constraining innovation or deployment is
very important.
But it is also why I think it is important that we be
thinking about how to work with other countries in their
standards development.
And so that is where I want to begin asking
questions. I will start with Mr. Smith.
If the United States does not adopt some standards through
some entity, whether it is NIST or another Federal entity or
federally sanctioned entity, then won't other nations go ahead
and feel the need to adopt their standards without any
consultation with the United States?
Mr. Smith. I think it is a really important point you make
and it is the lesson from the evolution of privacy law. The
United States did not adopt a national privacy law.
Europe did twice, and most American companies of any size
today apply, across the United States, work that complies with
European privacy law. It is just more efficient.
So I think the United States needs to be in the game
internationally to influence the rest of the world, and you
cannot be in the game if you do nothing. You must do something.
So you take Senator Cruz's idea--a lightweight approach----
Senator Young. Yes.
Mr. Smith.--and then you build support around it.
Senator Young. So just to unpack that--and I will stick
with Mr. Smith with apologies to everyone else because my time
is limited--would it be easier to shape the standards of other
large economy countries that share most of our values if we
already have a set of standards adopted?
Mr. Smith. Generally, yes. I think we always have to be
careful because if you go too soon you go before the standards
have really come together. But you have got to have some kind
of model that you can show the rest of the world and win
support for.
Senator Young. And then presumably standards could be
harmonized, right? They are not set in--and chiseled onto a
tablet, so to speak, right?
Mr. Smith. That is indispensable. I mean, if our technology
is going to go around the world we need a set of laws or
regulations that, in effect, create that basis for reciprocity
and interoperability.
Senator Young. OK. I only have 25 seconds left. Are there
any violent objections to Mr. Smith's position? Because that
seems eminently reasonable to me.
Seems consistent with the light touch approach but it also
shows a certain sense of urgency that the United States needs
to act.
The last thing I will say in my remaining 10 seconds is
that I am planning on introducing legislation today called the
AI Public Awareness and Education Campaign Act with several of
my colleagues and our aim is to have a whole of government
approach to foster greater awareness of AI literacy and grow
STEM opportunities to create the next generation of our
workforce, and looking forward to moving that forward.
So it will be available for public review, critique, even
accolades and, Mr. Chairman, I yield back.
The Chairman. Thank you.
Senator Blunt Rochester.
STATEMENT OF HON. LISA BLUNT ROCHESTER,
U.S. SENATOR FROM DELAWARE
Senator Blunt Rochester. Thank you, Chairman Cruz, and
thank you so much to the witnesses. This is such an important
hearing. Five minutes will not suffice for me. I will be
submitting some questions for the record.
I notice that for Mr. Altman and Mr. Smith when the
question of paint me a picture of the future came up there
was--you actually leaned up in your chair. There was a level of
excitement, and that is how I am about the future.
When I came into the House of Representatives in 2017 I
started a Future of Work bipartisan caucus because I had a
concern that, number one, there were certain groups of people
that were going to be left behind but there--also as a country
that we could be left behind.
And I started--I had an event where we had everyone walk
into the room and use a word cloud and tell me what you think
of when you hear the future of work. The biggest word coming in
the door was fear. The biggest word walking out the door was
opportunity.
And so, to me, this conversation is so vital to think about
the opportunities but also making sure that we are watching out
for ethics, watching out for scams, watching out that
technology does not take over the human.
And so I am just grateful for the conversation and, Mr.
Altman, I listened to an interview about--that you gave with
Lester Holt maybe a year or so ago and you talked in that
interview about how OpenAI--it was not initially even about
making a product.
It was not about the money. And so I know you are
incorporated in Delaware and I understand you have been working
with our attorney general during the previously proposed
legislation to transition to a for-profit--not legislation but
to transition to for-profit--and this Monday, OpenAI decided to
apply to become a public benefit corporation instead and to
have the PBC govern your nonprofit arm.
What went into this decision and what considerations
influenced the timing of the organizational change?
Mr. Altman. So we never--thank you for the question,
Senator, and the chance to explain this. It is a complicated
thing that I think has gotten misrepresented. So this is a
wonderful forum to talk about it.
We never planned to have the nonprofit convert into
anything. The nonprofit was always going to be the nonprofit,
and we also planned for a PBC from the very beginning.
There were a bunch of other considerations about whether it is
the PBC board that would control the nonprofit somehow or, you
know, how our capital structure was going to work--there was a
lot of speculation, most of it inaccurate, in the press.
But our plan has always been to have a robust nonprofit. We
hope our nonprofit will be one of the best, maybe someday the
best resourced nonprofit in the world, and a PBC with the same
mission that would make it possible for us to raise the capital
needed to deliver these tools and services at the quality level
and availability level that people want to use them at but
still stick to our mission, which we have been proud over the
last almost decade of our progress toward.
So we had a lot of productive conversations with a lot of
stakeholders and a lot of lawyers and a lot of regulators about
the best way to do this.
It took longer than we thought it was going to. You know, I
would have guessed that we would have been talking about this
last year. But now we have a proposal that people seem pretty
excited about and we are trying to now advance.
Senator Blunt Rochester. And, Dr. Su, your company
primarily operates in the physical hardware portion of the AI
stack.
I have a bill with Senators Cantwell and Blackburn called
the ``Promoting Resilient Supply Chains Act'', which authorizes
the Department of Commerce to strengthen American supply chains
for critical industries and emerging technologies.
Dr. Su and others, semiconductor and chip manufacturing is
critical to the advancement of AI but we are facing these
global supply chain constraints.
What specific policies--and I know you mentioned policies
as well for supply chains--would we need to adopt to help
American companies overcome the supply chain issues and compete
internationally with our rivals?
Ms. Su. Thank you, Senator, for the question.
There is no question the semiconductor supply chain and
overall supply chains are really critical for us to win the AI
race. I think from a semiconductor standpoint the efforts that
have been made to move manufacturing back to the United States
have been positive.
I think they are a start. There is a lot more that we can
do, and one of the most important aspects of it is really to
think about it end to end.
There are so many steps to go from beginning to end in a
semiconductor supply chain including advanced wafers, including
packaging, including the back ends and system tests.
All of those avenues need to have a footprint in the United
States, and then we have many allies around the world which
are, you know, very excellent partners as part of the global
resiliency in the supply chain and we would like to see those
partnerships continue to flourish.
Senator Blunt Rochester. Last question, if I can.
Mr. Smith, how do you see the interdependence between the
AI stack sections creating either vulnerabilities or
opportunities in the AI supply chain?
Mr. Smith. I think they create more opportunities than
vulnerabilities because it enables companies to do what they do
best and that we can work together.
And the world today has an integrated supply chain for
anything that you buy. We just do not think about it when we go
to the grocery store.
I think one of the strengths of the tech sector is that we
have--I will call it a string of pearls, great companies in
very--in every layer of the stack and we are going to need,
frankly, more great companies, especially at the applications
layer, and that is how we work together.
Senator Blunt Rochester. Thank you so much. I am out of
time but we will be following up with questions for the record
as well as individually. Thank you, and I yield back.
Senator Lummis [presiding]. Mr. Moran.
STATEMENT OF HON. JERRY MORAN,
U.S. SENATOR FROM KANSAS
Senator Moran. Chairman Lummis, thank you very much.
Mr. Smith mentioned data privacy, which has been a topic of
mine for a long time, and we have been unsuccessful in
legislation being adopted. But I still have the goal of making
certain that consumers have control over their own data.
And I was going to ask you, Mr. Altman, how can we provide
consumers with more control over how their data is used by AI
companies while preserving the utility of the AI system? So how
do you get more privacy and still get the benefits?
Mr. Altman. So there is all of the standard privacy
controls that companies like ours and others build and should,
but there is a new area that I would love to flag for your
consideration, which is people are sharing more information
with AI systems than I think they have with previous
generations of technology, and the maximum utility of these
systems happens when the model can get very personalized to
you.
So this is a wonderful thing and we should find a way to
enable it, but the fact that these AI systems will get to know
you over the course of your life so well I think presents a new
challenge and level of importance for how we think about
privacy in the world of AI--how we are going to think about
guaranteeing people privacy when they talk to an AI system
about whatever is happening in their lives--how we make sure
that when one system connects to another it shares the
appropriate information and does not share other information
and that users are in control of that.
I believe this will become one of the most important issues
with AI in the coming years as people come to integrate this
technology more into their lives, and I think it is a great
area for you all to think about and take quite seriously.
Senator Moran. We do. We just do not have any success in
finding the conclusions. But thank you for the encouragement.
I chair a Commerce, Justice, Science Appropriations
Subcommittee that funds the Department of Justice and it plays
a significant role in cybersecurity of our country.
I just came in from a budget hearing with the FBI Director
Dr. Patel in which we covered cybersecurity threats.
AI can--and I think this is true--AI can be used on both
sides of a cybersecurity attack and it can be used to automate
phishing, malware creation. But machine learning can also
increase our ability to detect and respond to cyber threats.
What should Congress think about allocation of Federal
resources for cybersecurity and what should we consider when it
comes to AI?
Mr. Smith. I would say that AI, as you said, is both an
offensive weapon and a defensive shield when it comes to
cybersecurity, and as with many other things the front line of
this the last few years has been in Ukraine because Russia has
such a sophisticated cyberattack capability.
And, you know, what we have found as a company that has
been involved in supporting Ukraine since literally the moment
that war began is that AI is a game changer.
We have intercepted attacks against Ukraine faster than a
human could detect them and we block those attacks from taking
place.
So you deploy AI into--call it the front line of the
products themselves. We have to recognize that it is ultimately
the people who defend not just countries but companies and
governments, the chief information security officers, or the
CISOs.
So we have created what is called a cybersecurity copilot
that basically automates for those individuals much of the
workflow that takes their time so that they can be more
effective and efficient.
When it comes to Federal appropriations I think that, to
put it simply, the U.S. Government must remain at the forefront
of having for itself the cybersecurity capabilities that it
needs to defend the government and every day--I mean, we are in
government agencies today during this hearing, you know,
pushing Chinese out of agencies and the like, and this will
happen every day of every year from now to probably eternity.
So we must keep the U.S. Government well funded in this
space and I think we also need our intelligence agencies and
especially the NSA to be well funded so they can remain at the
forefront when it comes to global leadership in this field.
Senator Moran. Thank you for your observations and
encouragement.
My final question--rural areas, a place I come from, often
lack high-speed broadband, and since many AI tools rely upon
connectivity I am concerned that many parts of the country and
many parts of Kansas may not be able to access the benefits
that AI will bring to business, schools, health care, et
cetera.
What can the Federal Government do to be supportive of
development and availability of on-device or low-bandwidth AI
systems that do not rely on constant connectivity?
Mr. Altman. I am generally pretty excited about what AI
will do here because you can offload so much of the processing
to the cloud and then ship a relatively small amount of data.
If you think about, you know, ChatGPT as text comes in
there is, like, a brain that thinks about it really hard and
some text comes back, we can support people in low connectivity
areas quite well with the same quality of service.
Separately to that, I think getting great connectivity
everywhere is important but in the specific area of AI I think
we can actually address that gap quite well.
Senator Moran. That is good to know.
Thank you very much.
Senator Lummis. Mr. Lujan.
STATEMENT OF HON. BEN RAY LUJAN,
U.S. SENATOR FROM NEW MEXICO
Senator Lujan. Thank you, Madam Chair.
And, first, I want to begin by recognizing and thanking Mr.
Altman and Mr. Smith for your organizations' ongoing
involvement in the NIST U.S. AI Safety Institute, as well as Dr.
Su and Mr. Altman for your ongoing partnerships with our
National Laboratories.
Now, Dr. Su and Mr. Altman, can you explain how your
partnerships with the National Labs support scientific research?
You explained this to a question that was asked by Senator
Duckworth as well but if you could just touch on that quickly.
Mr. Altman. Our latest models, like o3, are good at
scientific reasoning and so scientists are able to use these to
help them review literature, come up with new ideas, propose
experiments, analyze data in a way that the previous
generations of models just could not.
We have had the National Labs and other scientists spend
time with previous models and they say, oh, this is, you know,
kind of cool. It is interesting. It is not transforming things.
These new models are the first time we are hearing from
scientists at the National Labs and elsewhere that this is a
legitimate game changer to their research output.
Senator Lujan. I appreciate that. Dr. Su?
Ms. Su. Yes, I would add the same. I think our partnerships
with the National Labs have seen just tremendous opportunity.
We have large-scale compute across the National Labs and the
ability to really develop new applications that take advantage
of, let us call it traditional high-performance computing,
together with the new AI model capability that we just talked
about is, I think, a great opportunity to substantially move
forward the ability for scientific discovery.
Senator Lujan. To both of you, again, can you explain why
Federal investment in foundational research and standards
bodies are crucial to your companies?
Mr. Altman. I think standards can help increase the rate of
innovation, but it is important that the industry figure out
what they should be first. I think a bad standard can really
set things back and we have seen many examples of that in
history.
I do think there is a new protocol to discover here at the
level of importance of HTTP. This is just one example. There are
many other things, too.
I believe the industry will figure that out through some
fits and starts and then I think officially adopting that can
be helpful.
Senator Lujan. Dr. Su.
Ms. Su. I believe public-private partnerships really enable
us to think, let us call it, ahead of the curve. So there are
lots of things that we do in industry and we do them very, very
well.
However, the beauty of the National Labs and Federal
research is it does allow, let us call it, a bit more blue-sky
research, and I think that is a very, you know, positive add.
So I think the key is how we can make sure that, you know,
one Federal dollar goes much, much further than that with a
private investment on top of that.
Senator Lujan. Yesterday I reintroduced a piece of
legislation called the ``Test AI Act'', which has bipartisan
support, which would simply improve the Federal Government's
capacity to test and evaluate in this area as well. So very
much appreciate both your responses.
But this is just one of many steps I would argue that is
needed to ensure that the United States stays ahead. Now,
despite strong support across the country including from
industry leaders here today President Trump is annihilating
budgets for basic research, and questions abound from so many.
I will argue that this will destroy our Nation's
competitive advantage. I simply just call on all my colleagues
that we look at the investments to the National Science
Foundation, National Institutes of Health, and Department of
Energy, Office of Science.
Let us work together. If there are questions that we have,
let us find ways to address those. But let us ensure that these
investments are making a positive difference so that we have
more successes and more hearings celebrating what we are
celebrating today.
Now, beyond your partnership with the Federal Government I
would like to know more about how you partner with local
communities when building out centers.
Data centers put a strain on energy and water resources.
However, unlike other businesses they do not introduce many
long-term jobs and economic benefits necessarily.
So, Mr. Smith, how many engineers do you have dedicated to
model or hardware optimization to reduce energy use, and when
you build a Center what initiatives do you have in place to
reduce water use?
Mr. Smith. I do not know off the top of my head the number
of engineers we have working on optimization but I would be
happy to track down an answer and get it to you.
Water use is a huge priority especially, you know, in data
centers, for example, in the southwestern United States and
other countries around the world where water is in short
supply.
If you look at our data centers today they run on liquid
cooling. It is a closed loop system. The liquid is a
combination of, frankly, water and other chemicals but
basically once it starts running almost all of the water is
recycled. So the amount of water that we consume is typically
far, far smaller than what most people would estimate.
We also have a commitment to water replenishment. Our goal
is to be water positive, meaning that we are providing more
water to the community than we are consuming.
So, for example, across the United States today we have
more than 90 water replenishment projects including one that
focuses on the San Juan River in your state of New Mexico,
which focuses on water security for the river.
So I think it is a good example of how we can play a
responsible role in addressing an issue that is of growing
importance.
Senator Lujan. I appreciate it.
Mr. Intrator, same question.
Mr. Intrator. Yes, I cannot answer the question of how many
engineers we have focused on it but I will say that the ability
to extract more computational power out of a given megawatt is
of paramount importance to my company, to all of us in this
room, and we spend an enormous amount of time integrating the
most bleeding-edge technology, which is a step function more
efficient in terms of its computational output than the legacy
technology has historically done.
You know, so moving to liquid cooling has just been an
incredible improvement in efficiency and, ultimately, we face
this problem from, you know, within a given data center, within
a given power envelope. How much can we move the computational
resources forward, and that is really an important part of what
we do.
Senator Lujan. I appreciate it.
Mr. Chairman, I have other questions I will submit into the
record.
Mr. Moran did ask one question. Mr. Altman, you responded
to it. But can you all just answer yes or no: is it important
to ensure that, in order for AI to reach its full prominence,
people across the country should be able to connect to
fast, affordable internet?
Dr. Su.
Ms. Su. Yes.
Mr. Intrator. Yes.
Mr. Smith. Yes.
Senator Lujan. Thank you. Appreciate it. I yield back.
Thank you.
The Chairman [presiding]. Thank you.
Senator Lummis.
STATEMENT OF HON. CYNTHIA LUMMIS,
U.S. SENATOR FROM WYOMING
Senator Lummis. Thank you, Mr. Chairman, and thank you all
for coming today.
I really have been amazed at the outstanding progress that
continues to be made in this field and I am already seeing
people in Wyoming that are using ChatGPT or Claude to improve
their businesses, whether it is health care or mining or oil
and gas or education, ranching, even. I am just really excited
about what this opportunity brings to America.
Now, as I see it, the world has presented us with two
paths. On one hand, the EU has chosen to regulate first and ask
questions later. The GDPR is already limiting European access
to the most capable AI models.
On the other hand, China appears to be fast tracking AI
development, standing up large amounts of energy very quickly
in an attempt to outcompete America.
So, I would like to ask a few questions about how we can
make sure we get the full benefit of this technology and
accelerate its development.
So first question, over the past year we have seen many
states including California and Texas consider their own AI
frameworks, each one significantly burdensome in their own
right. At the same time, our lead against China is shrinking to
about only 6 months.
So, first of all, Mr. Altman, could you please sketch out
what the world could look like if the U.S. were to have a
patchwork regulatory framework and how that could impact our
competitiveness?
Mr. Altman. I think it would be quite bad. I think it is
very difficult to imagine us figuring out how to comply with 50
different sets of regulation and in many of these states there
have been dozens of different bills proposed that I understand
several of which could be passed. That will slow us down at the
time where I do not think it is in anyone's interest for us to
slow down.
One Federal framework that is light touch that we can
understand and that lets us, you know, move with the speed that
this moment calls for seems important and fine, but the sort of
every state takes a different approach here I think would be
quite burdensome and significantly impair our ability to do
what we need to do and, hopefully, you all want us to do, too.
Senator Lummis. Does anyone disagree with Mr. Altman's
assessment of a patchwork?
Thank you. I have some questions about the infrastructure
that is going to be necessary to lead and compete in AI so my
next questions are for our infrastructure providers, Mr. Smith
and Mr.--is it Intrator?
Mr. Intrator. That is correct.
Senator Lummis. Intrator. Thank you. Could you elaborate on
how current permitting processes have impacted your ability to
rapidly deploy AI infrastructure? The more specific you can be
the better.
Mr. Intrator. So a quick comment on the patchwork and then
I will dive in here. The investment that we are making on the
infrastructure side is enormous, and the idea that you can make
an investment that could then become trapped in a jurisdiction
that has a particular type of regulation that would not allow
you to make full use of it is really very, very suboptimal and
makes the decisionmaking around infrastructure challenging.
As far as the permitting goes, whenever this topic comes up
the discussion around permitting is excruciating, and it is
excruciating in terms of the ability to quickly build and to build
large, and I think that is kind of from the data center forward
without even beginning the discussion from the data center back
through the energy infrastructure that is necessary to be able
to power these large investments at the scale that make them of
relevance to moving artificial intelligence forward. I am happy
to spend more time kind of digging into more details but
probably do that directly.
Senator Lummis. OK. And I will look forward to that
conversation because I am worried about Wyoming's very clean
natural gas being something your industry is concerned about
because President Trump likes natural gas but President Biden
did not.
And if you build huge data centers and another President
comes along who is anti-natural gas that is a concern for you
as you are deciding how to deploy capital.
Mr. Smith, do you agree?
Mr. Smith. Generally, I do. I mean, I would say we need
consistency across administrations in this country. We need to
find more opportunities for bipartisan agreement, and I will
just highlight that in Cheyenne where we have long had a data
center complex, you know, we do have backup generators that run
on natural gas. So there are a variety of ways for us to put
different energy supplies to good use.
Senator Lummis. Are you exploring small modular nuclear?
Mr. Smith. Yes, including with people in Wyoming.
Senator Lummis. Thank you.
Mr. Altman, I am pleased to hear you are releasing an
open--oh, my time is up. Excuse me. It goes so fast.
Mr. Altman. I would love to talk to you about it another
time. We are very excited about it, too.
Senator Lummis. Yes, thank you. I yield back.
The Chairman. Thank you.
Senator Rosen.
STATEMENT OF HON. JACKY ROSEN,
U.S. SENATOR FROM NEVADA
Senator Rosen. Thank you, Chairman Cruz. So I am ready to
push the button and, anyway, time does go by very fast. Thank
you for having this hearing.
I really believe in the promise of AI. So exciting, and we
have to ask the right questions in order to promote its growth
on one hand, and how we can explore and create these new
possibilities and pathways, and also how we protect ourselves
from bad actors or outcomes as best as we can know at the time.
And, Mr. Altman, thank you for spending some time with me
yesterday. I look forward to continuing to work with you on
this.
So I want to start a little bit today at DeepSeek, an
adversarial AI, because in February I introduced bipartisan
legislation with Senator Husted to prohibit using DeepSeek on
government devices, and earlier this week Senator Cassidy and I
introduced a bill that would expand those prohibitions to
include Federal contractors.
So, Mr. Smith, what should our approach be to AI models
that are developed in or by adversarial countries like the PRC?
Should we be concerned about our adversaries co-opting AI
to promote a particular ideology, collect sensitive U.S. data,
and how are you combating this threat?
Mr. Smith. Well, I think you can take the DeepSeek example
and it illustrates it well, and I think it is just worth
thinking about the fact that DeepSeek produced two things. They
have a model that is an open source model and they have an
application, the DeepSeek app.
At Microsoft we do not allow our employees to use the
DeepSeek app. We did not put the DeepSeek app in our app store
because of the kinds of concerns that you mentioned, namely,
data going back to China and the app creating the kinds of
content that I think people would say were associated with
Chinese propaganda.
At the same time, because the model itself is an open
source model it was possible for us to go in it, analyze it,
and change the code in the model, which we and other people
have the permission to do to remove the harmful side effects.
And so I think we have to always think about the different
aspects of the technology. I will say put security first and
then go forward from there.
Senator Rosen. Thank you. I think we all know that data is
the real power in our current world. He or she or whomever owns
the data really can control a lot of what we do.
But I want to move on and speak with you, Mr. Altman, about
AI and anti-Semitism a little bit because earlier this year ADL
released a report showing that several major generative AI
models have perpetuated dangerous anti-Semitic stereotypes
and, sadly, conspiracy theories.
So, Mr. Altman, what steps is industry taking to ensure
that AI models do not perpetuate anti-Semitism? Will you
consider collaborating with civil society to create kind of a
standard benchmark for AI related to anti-Semitism, use it as a
form of evaluation, and then maybe we could use those for other
forms of hate as well?
Mr. Altman. Of course, we do collaborate with civil society
on this topic and we are excited to continue to do so.
We want our users to have freedom to use models in the way
they want, but we also do not want them to be damaging to sort
of the fabric of society or particular groups.
There will always be some debate and the question of free
speech in the context of AI is novel and I think it is
different than what we faced before.
We really do view these as tools for users one-on-one but,
of course, we are not here to, you know, make horrible anti-
Semitic products.
Senator Rosen. Thank you. I want to move on to--Senator
Lujan talked about data center energy use, water use, something
we are all really concerned about. I want to put on top of that
a little bit about data center security, add that to the mix.
So last Congress I actually got a bill passed into law, my
bipartisan ``Federal Data Center Enhancement Act''. It
establishes cybersecurity and resiliency standards for Federal
data centers.
And so to Mr. Smith, or--I am sorry, Dr. Su. Thank you.
Dr. Su, I want to ask you a little bit about hardware. Are
there ways the hardware like the chips AMD designs, new chips
that we are hoping to think about--I know in my career as a
software developer we just know things have gotten smarter,
faster, and they just--the cooler they can be the better we can
compute.
So how can we make our chips cooler? How can we make our
data centers, our computing power, more secure? And I know
interoperability is sometimes a factor. But can you talk about
this a little bit?
Ms. Su. Sure. Thank you for the question, Senator.
Look, I think all of those things are extremely important,
as you said. So in our part of the energy efficiency, you know,
power constraints that we have from a chip standpoint, you
know, our job is to continue to make our chips more and more
efficient every year.
We have seen, you know, 30 times improvement over the last
few years and we will continue to focus, you know, in that
area.
And then to your comments about, you know, security and
ensuring that our chips are secure and people are not somehow
breaking into them, those are also very high priorities in our
overall development cycle for future generation chips as well.
Senator Rosen. Thank you. I look forward to working with
all of you again on these important issues.
Mr. Chairman.
The Chairman. Thank you.
Senator Sullivan.
STATEMENT OF HON. DAN SULLIVAN,
U.S. SENATOR FROM ALASKA
Senator Sullivan. Thank you, Mr. Chairman. I want to thank
the witnesses for the testimony today. I appreciate the
Chairman calling this hearing, and I agree with Senator Cruz's
opening statement about this is a matter of national economic
and national security in terms of our race, however you want to
call it--competition with China.
So I know this topic has been pressed but I want to just
get--I want to dig down a little bit deeper. Do you agree with
that, all of you?
I am just going to ask some quick questions, that this is a
huge issue of national security, economic security, relative to
China, and we as America need to win in that regard. Very
important.
Everybody nodding their head. And then I know that it has
been touched on, but is the consensus among the witnesses that
we are ahead right now, but with a kind of tentative lead?
What would be--very quickly we will start with you, Mr.
Altman. What is your assessment on that? I know you have
already talked about it. I just want to set the context for
some of the questions.
Mr. Altman. Yes, I believe we are leading the world right
now. I believe we will continue to do so. We want to make AI in
the United States and we want the whole world to get the
benefit from that.
I think that is the strongest thing for the United States.
I think it is also the right thing to do for all the people of
the world.
And I really appreciate you all being with here--with us
here today because I think we will need your help, and
everything you are saying or almost everything you are saying
sounds great.
Senator Sullivan. So as I ask this question I will ask if
you guys think we are ahead, but then the key things when you
say we need your help what would--very succinctly, sometimes we
are not so smart up here--what would the key things be that you
would need from the U.S. Government to help us maintain that
lead and dominate this space, which is what I think we need to
do?
Mr. Altman, again, to you real quick on that.
Mr. Altman. We have talked a little about infrastructure
but I think we cannot overstate how important that is, and the
ability to have that whole supply chain or as much of it as
possible in the United States.
The previous technological revolutions have also been about
infrastructure and the supply chain, but AI is different in
terms of the magnitude of resources that we need.
So projects like Stargate that we are doing in the U.S.,
things like bringing chip manufacturing, certainly, chip design
to the U.S., permitting power quickly. Like, these are
critical. If we do not get this right I do not think anything
else we do can help.
On the model creation side, we have talked about the need
for certainty in our ability to train and to have fair footing
with the rest of the world to make sure we can remain
competitive.
The ability to offer products under a reasonable, fair,
light touch regulatory framework where we can go win in the
market, because the products will be so key to the sort of
feedback loops and making them better and better, and the
ability to deploy them quickly and win at the product level in
addition to the model and infrastructure and data area is
really quite important.
The ability to bring the most talented people in the world
here, the most talented researchers. We have a ton in the
United States. There are more out in the world. We should try
to get them all here, improving models here. I think those are
some of the specifics.
Senator Sullivan. Good. That is very helpful.
Let me ask, Mr. Smith, two other ones that I want to touch
on. I agree fully with Senator Lummis.
I am sure Senator Cruz has the same view. One of our
comparative advantages over China, in my view, has to be
energy--all of the above energy.
Hopefully, you have seen in Alaska we have a very large-
scale LNG project that I think we are going to get off the
ground here we have been working on for a long time.
We will have a hundred years supply of natural gas. So we
want you guys all to come up to Alaska with your data centers.
We have got cold weather. We got a lot of cold weather. We got
gas. We got land. We got water. We got it all.
Mr. Altman. That is very compelling.
Senator Sullivan. So, yes, come on up. When this project is
done, 100-year gas supply. A little colder than Texas.
So two questions that relate to our comparative advantage,
Mr. Smith, and then any others who want to jump in.
Energy--do we think that is one? I think it is. And then the
second is, I think, somewhat of a disadvantage. It frustrates me.
Maybe you guys do not see this.
We have had American finance companies, venture capital
firms, banks, others, that, remarkably, all the opportunities
we have in America are helping fund some of these projects in
China.
I have been a real staunch opponent of Americans who have
opportunities to invest in other places investing in Chinese
AI, Chinese quantum, because we all know they are going to use
that to help make their military more lethal. I mean, that is
what they do.
I was reading recently about this Benchmark Capital. I do
not know these guys but they evidently did a $75 million round
for some--an AI company in China. Is that another problem as
well, Mr. Smith?
Advantage energy problem--American companies financing our
competition?
Mr. Smith. I would connect three things: energy, people and
access to capital. The U.S. has huge resources in energy, but
never underestimate the ability of China to build a lot of
electrical power plants, maybe more and faster than any other
country.
So we are better off going into that with the mindset that
we have to keep up and not take anything for granted. But then
I would say the number-one comparative advantage of the United
States throughout the 50 years that have defined digital
technology has been bringing the world's best people to our
country and giving them access to venture capital, and we
should continue to burnish both of those.
And I think you are right to ask where else is venture
capital going. I will just say this. If we can keep bringing
the best people to the United States and if we can keep
educating the best people in the United States, I believe the
money will be here to enable them to succeed.
But let us make sure we are continuing to bring the best
people in the world and giving them the opportunity to build
great companies here in the United States.
Senator Sullivan. And American venture capital funds
funding Chinese AI, is that in our national interest?
Mr. Smith. I think there is a really good question about
whether it is and I recognize that you all are quite rightly
focused on that.
I will just keep saying, bring the people here. They will
have access to the money and we will outcompete the world.
Senator Sullivan. Great. Thank you. Thank you, Mr.
Chairman.
The Chairman. Thank you.
Senator Markey.
STATEMENT OF HON. EDWARD MARKEY,
U.S. SENATOR FROM MASSACHUSETTS
Senator Markey. Thank you, Mr. Chairman, very much.
I would like to talk about the environmental impact of
artificial intelligence. Artificial intelligence can help us
combat climate change by improving weather forecasts and
enabling us to better predict power supply and demand. But
designing and training and deploying AI models also poses real
risks for our environment.
The massive data centers that are critical for AI
development require substantial amounts of electricity,
putting stress on the grid and potentially raising costs for
consumers.
These data centers also generate significant heat. Cooling
them requires huge volumes of water, often in regions already
facing droughts because of climate change, and some data
centers have onsite backup diesel generators, which can cause
respiratory and cardiovascular issues and can increase the risk
of cancer for the surrounding community.
The truth is, we know too little about both the
environmental costs and benefits of AI.
Mr. Smith, do you agree that it would be helpful for the
government to conduct a comprehensive study on environmental
impact of artificial intelligence?
Mr. Smith. Generally, yes. One study was just completed
last December and I think it is worth updating periodically.
Senator Markey. Do you think it would be helpful for the
government to convene stakeholders including from industry and
academia to help better measure AI's environmental impact?
Mr. Smith. I think as well as many other things that need
to be measured. Yes, I think there is a role to be played.
Senator Markey. Mr. Altman, do you agree that the Federal
Government should help with studying and measuring the
environmental impact of AI?
Mr. Altman. I think studying and measuring is usually a
good thing. I do think that the conversation about the
environmental impact on--of AI and the relative challenges and
benefits has gotten somewhat out of whack.
I am hopeful that AI--you know, we have been trying to
address climate environmental challenges unsuccessfully or not
successfully enough for a long time. I think we need help.
I think AI can help us do that. We have proposed or we are
in the process of building a 10-gigawatt facility and we have
got another----
Senator Markey. My question is should the Federal
Government be on an ongoing basis studying the impact of AI?
Mr. Altman. Sure, and I think you should use AI to help.
Senator Markey. So that is why this Congress I introduced
the ``Artificial Intelligence Environmental Impact Act'' to study
both the positive and negative consequences of AI.
As the technology continues to develop, as models become
more efficient, and as we build out the infrastructure, we need
to do it.
Yes, AI might find--may find a cure for cancer. It may, but
AI also could help to contribute to a climate disaster. That is
also equally true.
So we need to just keep both of those things right on the
table, especially as the Trump administration is ignoring the
fact that last year 94 percent of all new installed electrical
generation capacity in the United States was wind, solar, and
battery, and Trump has said he is going to destroy all
incentives for continuation of that.
That is something you have to weigh in on to make sure he
does not do that. So I look forward to working with you on
that.
Now I want to turn to AI's impact on disadvantaged
communities. After all, we are not just talking about using
artificial intelligence to write e-mails or plan grocery lists.
We are talking about technology used to calculate a
family's mortgage, screen an individual's job application, and
determine a senior's medical care.
When used in these situations it is absolutely essential
that AI-powered algorithms are free from bias and
discrimination. So let us start with a simple question.
Mr. Smith, can algorithms be biased and cause
discrimination?
Mr. Smith. They can, which is why we test to avoid that
outcome.
Senator Markey. Same question, Mr. Altman. Can algorithms
be biased and cause discrimination?
Mr. Altman. Of course.
Senator Markey. Of course. Mr. Altman, does OpenAI work to
guard against such bias and discrimination in ChatGPT?
Mr. Altman. Of course.
Senator Markey. Of course. So I am glad to hear that
because you recently stated that the government should not
implement privacy regulations on AI but instead, quote,
``respond very quickly as the problems emerge,'' and I am very
deeply worried about that approach.
We do not need to wait and see if poorly tested and trained
algorithms will harm marginalized communities. Artificial
intelligence is already supercharging the bias and
discrimination prevalent in our society. Biased and
discriminatory algorithms mean black and brown families are
less likely to obtain a mortgage.
It means people with disabilities are less likely to be
recommended for a job opening and it means women are less
likely to receive scholarships for higher education.
These are real harms that are happening right now. It is
Congress' job to address these existing problems that come with
the rapid development and deployment of AI and it is why I am
the proud author of the ``AI Civil Rights Act'' which would
ensure that companies review and eliminate bias and
discrimination in their algorithms before developing and
deploying them.
It has to happen simultaneously, and it will hold companies
accountable when their algorithms cause harms against
marginalized populations.
I will be fighting to ensure AI does not stand for
accelerating inequality in our Nation. All of the protections
we have in the real world should be moved to the virtual world
because the same discrimination--again, women, black, brown,
communities with disabilities, LGBTQ community--are going to
move online and we have to build in the protections against
that bias right up front, because otherwise those same
discriminatory practices will just migrate immediately and the
responsibility of the industry will be to work with Congress to
make sure we put those protections on the books.
Thank you, Mr. Chairman.
The Chairman. Thank you.
Senator Peters.
STATEMENT OF HON. GARY PETERS,
U.S. SENATOR FROM MICHIGAN
Senator Peters. Thank you, Mr. Chairman, and thanks to all
our witnesses. Thank you for being here.
It is an incredibly important topic and we appreciate your
expertise.
As we are looking at making sure that the United States is
the world leader in AI, certainly, we have been talking about
supply chains and infrastructure and all of those aspects.
But one area that I want to particularly focus on is
workforce and people to make sure that we have the talent
there. That is why I authored the ``AI Scholarship for Service
Act'' and the ``AI Training Act''.
Both of those were signed into law in 2022. Earlier this
year I introduced my ``AI and Critical Technology Workforce
Framework Act'' to continue the effort along those lines, and
love to work with each of you as we look at other legislation
necessary to make sure we have got the workforce trained to
take advantage of this amazing technology.
I do want to do a shout out to the University of Michigan
that actually became the first university in the world to
provide generative AI tools for their entire student body to
prepare them for the workforce of tomorrow. So I want to talk a
little bit about the workforce.
Mr. Altman, when we met last year in my office and had a great
conversation, you said that upwards of 70 percent of jobs could
be eliminated by AI, and you acknowledged the possible social
disruption of this.
If that is happening we have to prepare for it. We are not
going to stand in the way of the incredible opportunities here
but if this is, indeed, going to occur, we have got to be
thinking pretty deeply about how that will be managed and make
sure that everybody can benefit from AI, not just a select few
that benefit.
So talk to me about how you believe leaders in your
industry can help mitigate job losses or deal with what could--
as you described it last year, major social disruption?
Mr. Altman. The thing that I think is different this time
than previous technological revolutions is the potential speed.
Technological revolutions have impacted jobs and the economy
for a long time.
Some jobs go away. Some new jobs get created. Many jobs
just get more efficient and people are able to do more and earn
more money and create more and that is great.
Over some period of time society can adapt to a huge amount
of job change, and you can look at the last couple of centuries
and see how much that has happened.
I do not know. I do not think anyone knows exactly how fast
this is going to go, but it feels like it could be pretty fast.
The most important thing or one of the most important
things, I think, we can do is to put tools in the hands of
people early. We have a principle that we call ``iterative
deployment''.
We want people to be getting used to this technology as it
is developed. We have been doing this now for almost five years
since our first product launch.
As society and this technology co-evolve putting great,
capable tools in the hands of a lot of people and letting them
figure out the new things that they are going to do and create
for each other and come up with and provide sort of value back
to the world on top of this new building block we have and the
sort of scaffolding of society that is, I think, the best thing
we can do as OpenAI and as our industry to be--sort of help
smooth this transition.
Senator Peters. The idea we want to get to the point where
AI is not displacing work but actually enhancing work, that
people are more productive and doing things that we probably
cannot even imagine what people will do. If we would look a
hundred years ago we have jobs that no one----
Mr. Altman. You cannot imagine, and I do not think we can
imagine the jobs on the other side of this. But even if you
look today at what is happening with programming, which I will
pick because it is sort of my background and near and dear to
my heart, what it means to be a programmer and an effective
programmer in May 2025 is very different than what it meant
last time I was here in May 2023.
These tools have really changed what a programmer is
capable of, the amount of code and software that the world is
going to get.
And it is not like people do not hire software engineers
anymore. They work in a different way and they are way more
productive.
Senator Peters. Right. Right.
Dr. Su, we certainly talk a lot about open source AI but
most of the conversation has been about software. However,
making technology open and able to work together matters at
every level, as you know, from chips that power the devices to
the servers that are running behind the scenes.
So my question for you is, what are the benefits of open
standards and system interoperability at the hardware level,
not the software level, and what are the implications for
innovation, national security, as well as resilience in the
supply chain?
Ms. Su. Thank you for the question, Senator.
I think there are an incredible number of advantages to
having an open ecosystem at the hardware and the software and
the application level.
The idea is, you know, there is no one organization or one
group that has all the good ideas and so enabling the ecosystem
to work together so that you can choose the best solution at
every level and then also optimization across a broad set of
constituents is a good thing.
I think it is also very good from a security standpoint to
ensure that, you know, again, there are many choices so that we
are not dependent on a single ecosystem. So, you know, we
continue to be very forward thinking in open standards as well
as open ecosystems.
Senator Peters. So your model is an open model. I
understand Nvidia is a closed model. Is there--what are the
advantages and disadvantages? What should we be thinking about?
Ms. Su. I think the major advantage in an open model, and
that is something that we very much support, is the idea that
we can have innovation come from many different parties and,
you know, whether that is hardware innovation so on the
different chips or that is system innovation on putting all
these things together.
And, you know, our goal is to make sure that we always have
the best of the best and there are many different ways--many
different parties that can contribute to that and that is why
we are very forward leaning in terms of open ecosystems.
Senator Peters. Great. Thank you. Thank you, Mr. Chairman.
The Chairman. Thank you.
Senator Fetterman.
STATEMENT OF HON. JOHN FETTERMAN,
U.S. SENATOR FROM PENNSYLVANIA
Senator Fetterman. Thank you, Mr. Chairman. Hello.
Mr. Smith, I am a big supporter of energy. For me energy
security is national security and, of course, you know,
renewables is about that. But, of course, other things as well,
too--fossil. But also that also includes nuclear, of course.
Nuclear is important.
And now then there is that kind of energy transition. My
focus is also that I want to make sure that ratepayers in
Pennsylvania really are not hit too hard throughout all of
this.
Now, the Washington Post reported that increasing
electricity demand for the data centers is going to raise
residential power bills by, perhaps, as much as 20 percent.
Now, to me, that is really a concern for me and certainly
for Pennsylvania families. Now, the data center, you know, has
important jobs during construction and doing those things and
that is a great thing, of course.
But they are not, I guess, long term. But the rate--those
rates might last longer for that.
And now, I have been very--tracking the plan to reopen TMI.
I mean, I had my own personal story is I had to grab my hamster
and evacuate, you know, in that--during the meltdown in 1979.
You might consume--you might assume that I was anti-nuclear
and that is not--it is a--I actually am very supportive of
nuclear because that is an important part of the stack. If you
really want to have--address climate change you cannot turn
your back on nuclear, in my opinion.
But I know that is the power--nuclear--for Microsoft's data
center, and I really appreciate that. But what I am asking is,
if you are able to commit that the power purchase agreement,
you know, is not going to raise electricity prices for
Pennsylvania families.
Mr. Smith. No, I think you raise a critical point. We have
two principles that we follow when we are constructing these
data centers.
Number one, we will invest to bring onto the grid an amount
of electricity that equals the amount of electricity that we
will use so that we are not tapping a constricted supply.
Number two, we will manage all of this in a way that
ensures that our activity does not raise the price of
electricity to the community.
And so I was describing earlier how if there are
improvements that need to be made to the grid, as there often
are, we will go to the utility commission. We will propose a
change in the rate that we are charged so that we can pay for
that improvement.
I just think it is a fact of life because I think you
highlight something critical. There are a lot of jobs when the
construction takes place. There are jobs afterwards but they
are not as many.
One will wear out the welcome quickly if we tax, in effect,
the neighborhood by asking everyone to pay more for their
electricity because we have arrived. We get it. We know we have
to be a good and responsible member of the neighborhood.
Senator Fetterman. Now, you know, one of the perks of being
a senator is that--for me, anyway, I get an opportunity to meet
people that have much more impressive kinds of jobs or careers
that I have led.
And, now, Mr. Altman, now, this is going to--I am going to
count this as a highlight.
Recently, like, I know the work that you have done you are
really one of the people that are moving AI and now it is an
opportunity. I was excited to meet you.
And now, people--you know, people ask me it is, like, if
you are going to talk about AI and now I get to ask you, I
mean, like, the literal--the expert.
You know, some people are worried about AI or whatever and
I am, like, you know, what about the singularity so, you know,
the people like that.
If you would address that, please.
Mr. Altman. Thank you, Senator, for the kind words and for
normalizing hoodies in more spaces. I love to see that.
I am incredibly excited about the rate of progress but I
also am cautious, and I would say, like, I do not know--I feel
small next to it or something. I think this is beyond something
that we all fully yet understand where it is going to go.
This is, I believe, among the biggest--maybe still trying
to be the biggest technological revolutions humanity will have
ever produced and I feel privileged to be here. I feel curious
and interested in what is going to happen.
But I do think things are going to change quite
substantially. I think humans have a wonderful ability to adapt
and things that seem amazing will become the new normal very
quickly. We will figure out how to use these tools to just do
things we could never do before and I think it will be quite
extraordinary.
But these are going to be tools that are capable of things
that we cannot quite wrap our heads around, and some people
call that--you know, as these tools start helping us to create
next and future iterations some people call that singularity.
Some people call that the take off. Whatever it is, it
feels like a sort of new era of human history and I think it is
tremendously exciting that we get to live through that and we
can make it a wonderful thing. But we have got to approach it
with humility and some caution.
Senator Fetterman. I mean, I just did--for me, it has been--I
get a chance to ask questions to a lot of Edisons as well, too.
The kinds of things that you are all collectively involved
in are going to transform our society, and people will look
back in 50, 60 years and see what has happened. So to me, over
to the Chairman. Thank you.
The Chairman. Thank you, Senator Fetterman.
Senator Klobuchar.
Senator Klobuchar. Thank you. Good thought, Senator
Fetterman. Thank you.
So you guys have been sitting here so long that the Pope
has been chosen.
[Laughter.]
Senator Klobuchar. We do not know who.
The Chairman. Congratulations, Amy.
Senator Klobuchar. The white smoke has come up.
The Chairman. Congratulations.
[Laughter.]
Senator Klobuchar. You are welcome. Probably would not
work.
But in any case, it was--I left for some other things, came
back because I had one more question that I wanted to ask and
it is related to just the whole deep fake issue just because
Senator Blackburn and Senator Coons and Senator Tillis and I
worked on this really hard, and they are--Blackburn and Coons
are in the lead of the bill.
But we have recently seen deep fake videos of Al Roker
promoting a cure for high blood pressure, a deep fake of Brad
Pitt asking for money from a hospital bed. Sony Music has
worked with platforms to remove more than 75,000 songs with
unauthorized deep fakes including voices of Harry Styles and
Beyonce.
I recently--I mean, it is not just famous people. There is
a Grammy-nominated artist from Minnesota. Talked to him about
what is going on with digital replicas. So there is a real
concern and it kind of gets at what Senator Schatz and I were
talking about earlier with the news bill.
But I just wanted to make you all aware of this legislation
because there were some differences on this and now we have
gotten a coalition, including YouTube, supporting it as well as
the Recording Industry Association, Motion Picture Association,
SAG-AFTRA. So it is a big deal and I am hoping it is something
that you will all look at.
But could you just comment? I would go to you, Mr. Smith,
first about protecting people from having their likenesses
replicated through AI without permission, and even if you all
pledge to do it, our obvious concern is that there may be
other companies that would not, and that is why I think as we
look at what these guardrails are the protection of digital--
people's digital rights should be part of this.
Mr. Smith.
Mr. Smith. Yes. No, I think you are right to point to it.
It has become a growing area of concern. You know, during the
Presidential election last year both campaigns, both political
parties, were concerned about the potential for deep fakes to
be created.
We worked with both campaigns and both parties to address
that. We see it being used in, really, ways that I would call
abusive including of celebrities and the like.
I think it starts with an ability to identify when
something has been created by AI and is not a genuine, say,
photographic or video image, and we do find that AI is much
more capable at doing that than, say, the human eye and human
judgment.
I think it is right that there be certain guardrails and
some of these we can apply voluntarily. We have been doing that
across the industry.
OpenAI and Microsoft were both part of that last year, and
there are certain uses that probably should be considered
across the line and, therefore, should be unlawful, and I think
that is where the kinds of initiatives that you are describing
have a particularly important role to play.
Senator Klobuchar. And could you look at that legislation?
Mr. Smith. Absolutely.
Senator Klobuchar. Appreciate it. Mr. Altman, just the same
question, same thing.
Mr. Altman. Of course, we would be happy to look at the
legislation. I think this is a big issue and it is one coming
quickly.
I do not believe--I think there are a few areas to attack
it. You can talk about AI that generates content, platforms
that distribute it, how takedowns work, how we educate society,
and how we build in robustness to expect this is going to
happen.
I do not believe it will be possible to stop the generation
of the content. I think open source, open weight models are a
great thing on the whole and something we need to pursue. But
it does mean that there is going to be just a lot of these
models floating around that can do this.
The mass distribution, I think it is possible to put some
more guardrails in place and that seems important but I do not
want to neglect the sort of societal education piece.
I think with every new technology there is some sort of--
almost always some sort of new scams that come. The sooner we
can get people to understand these, be on the lookout for them,
talk about this as a thing that is coming and then a thing that
is happening I think the better.
People are very quickly understanding that content can be
AI generated and building new kinds of defenses in their own
minds about it.
But still, you know, if you get a call and it sounds
exactly like someone you know and they are panicked and they
need help, or if you see a video that--like the videos you
talked about, this, like, gets at us in a very deep
psychological way and I think we need to build societal
resilience because this is coming.
Senator Klobuchar. Mmm-hmm. It is coming, but we can
there--there has got to be some ways to protect people's
privacy rights----
Mr. Altman. We should do everything--for sure.
Senator Klobuchar.--and you have got to have some way to
either enforce it--damages, whatever. There is just not going
to be any consequences in that----
Mr. Altman. Absolutely. We should have all of that. Bad
actors still do not always follow the laws and so I think we
need additional shields wherever we can have them. But
yes, we should absolutely protect that.
Senator Klobuchar. All right. Look forward to working with
you on it. Thank you.
The Chairman. So I have to say Senator Klobuchar's question
about fakes and AI fakes made me feel guilty because I did, in
fact, tweet out an AI-generated picture of Senator Fetterman as
the Pope of Greenland. So I am guilty of doing so, although it
may not be a fake. It may be a real thing.
Senator Klobuchar. OK. Oh, whoa, parody is allowed under
the law. Parody is allowed. That is different than what I am
talking about but Senator Fetterman should respond.
Senator Fetterman. Or if it was AI.
Senator Klobuchar. I know.
The Chairman. It may be--it is a good shot, actually.
All right. I have a few more questions and then we will
wrap up.
Mr. Altman, what has been the most surprising use for
ChatGPT you have seen? What are applications that you are
seeing that are surprising?
Mr. Altman. People message ChatGPT billions of times per
day so they use it for all sorts of incredibly creative things.
I will tell one personal story, which as mentioned earlier I
recently had a newborn.
Clearly, people did it but I do not know how people figured
out how to take care of newborns without ChatGPT. That has been
a real lifesaver.
The Chairman. So I will tell you a story that I have told
you before but my teenage daughter several months ago sent me
this long, detailed text, and it was emotional and it was
really well written and I actually commented. I am, like,
wow, this is really well written.
She said, oh, I used ChatGPT to write it. Like, wait, you
are texting your dad and you do not--it is something about the
new generation that it is so seamlessly integrated into life
that she is sending an e-mail, she is doing whatever, and she
does not even--does not even hesitate to think about going to
ChatGPT to capture her thoughts.
Mr. Altman. I have complicated feelings about that.
[Laughter.]
The Chairman. Well, use the app and then tell me what your
feelings are.
Mr. Altman. OK.
The Chairman. Google just revealed that their search
traffic on Safari declined for the first time ever.
Mr. Altman. It did not send me a Christmas card.
The Chairman. Will ChatGPT replace Google as the primary
search engine, and if so, when?
Mr. Altman. Probably not. I mean, I think some use cases
that people use search engines for today are definitely better
done on a service like ChatGPT, but Google is like a ferocious
competitor.
They have a very strong AI team, a lot of infrastructure, a
very well-protected business, and they are making great
progress putting AI into their search.
The Chairman. All right. So a question that I have spent a
lot of time talking to business leaders, CEOs in the tech
space, AI, and one question that I have asked that I get
different answers on--and I am curious what the four of you
say--how big a deal was DeepSeek?
Is it a major, seismic, shocking development from China? Is
it not that big a deal? Is it somewhere in between and what is
coming next?
And let us hear from each of the four of you.
Mr. Altman. Not a huge deal. There are two things about
DeepSeek. One is that they made a good open source model and
the other is that they made a consumer app that, for the first
time, briefly surpassed ChatGPT as the most downloaded AI tool,
maybe the most downloaded app overall.
There are going to be a lot of good open source models and,
clearly, there are incredibly talented people working at
DeepSeek doing great research.
So I would expect more great models to come. Hopefully,
also us and some of our colleagues will put out great models
too.
On the consumer app, I think if a--if the DeepSeek consumer
app looked like it was going to beat ChatGPT and our American
colleagues' apps as sort of the default AI systems that people
use that would be bad. But that does not currently look to us
like what is happening.
Ms. Su. I would say it is somewhere in between, Chairman
Cruz. When you think about what we have learned, what we
learned is, you know, there are different ways of doing things.
So we have lots of incredibly innovative people in the
United States. American models are, clearly, the best by far.
However, when you have constraints that are placed there are
other ways of doing things and I think we learned a few things
in the process.
I think the open source nature of DeepSeek was one of the
things that probably was most impactful just in terms of how
much can be done in an open source type of model and open
ecosystem.
But, clearly, the United States is leading and we need to
continue, as we have said, to accelerate innovation and
adoption as you started this hearing with.
Mr. Intrator. I think DeepSeek did a lot of things. One of
the things that it did was it sort of raised the specter of
China's AI capability to a much broader audience than was
perhaps focused on it prior to that, right, and so you saw that
kind of reverberate through the financial markets.
You saw, like, a broad-based reaction and suddenly everyone
knows what DeepSeek is and the fact that China is not
theoretically in the race for AI dominance but actually is very
much a formidable competitor.
And so, you know, it was a starting gun in some ways for
the broader population and kind of maybe the broader
consciousness of the fact that this is not a fait accompli
and that we are going to have to work as America together to
kind of propel our solutions forward. And so I think that was
one of the lasting impacts that we will see from that.
Mr. Smith. I would say like Lisa that it was somewhere in
between. It was not shocking. I mean, it was one of a number of
startups that we were following in China that we saw as having
the potential to be innovative in this space.
I do think there is a really interesting and important
point that constraints encourage innovation in other ways and I
just think one of the interesting facts about DeepSeek is that
of their, say, 200 or more employees--that was their size
when they released these models--almost all of their employees
by design were 4 years or less out of university.
They wanted to hire people that would not bring to their
work traditional ways of doing things.
The Chairman. So the kids are taking over the world?
Mr. Smith. They do every generation.
[Laughter.]
The Chairman. Related to that--were you finished with that,
Mr. Smith?
Related to that, we talked at the outset about the AI
diffusion rule being rescinded, which I am glad. I think it was
a bad rule. I think it was overly complex. I think it put on a
number of our trading partners unfair restrictions and so I am
glad the President is rescinding it.
That does not necessarily mean that there should be no
restrictions and there are a variety of views on whether--what
the rules should be concerning AI diffusion.
Nvidia has argued that we want American chips everywhere,
even in China. Others have argued that we want to restrict at
least the most advanced processors.
I am curious--each of the four of you what do you think the
rule should be if anything is to replace the AI diffusion rule?
And, Mr. Altman, we will start with you.
Mr. Altman. I also was glad to see that rescinded. I agree
there will need to be some constraints. But I think if our--if
the sort of mental model is winning diffusion instead of
stopping diffusion that directionally seems right.
That does not mean there is no guardrails. It does not mean
we say, like, we are going to go build a bigger data center in
some other country than the U.S. Our intention is to build our
biggest and best data centers in the U.S. Do training in the
U.S. Build models here. Have our core research here.
But then we do want to build inference centers with our
partners around the world and we have been working with the
U.S. Government on that. I think that will be good.
To this point that influence comes from people adopting
U.S. products and services up and down the stack, maybe most
obviously if they are using ChatGPT versus DeepSeek but also if
they are using U.S. chips and U.S. data center technology and
all of the amazing stuff Microsoft does, that is a win for us,
and I think we should embrace that but make sure that, you
know, the most critical stuff--the creation of these models
that will be so impactful--that should still happen here.
The Chairman. Dr. Su.
Ms. Su. I think we would totally agree with the concept
that some restrictions are necessary. This is a matter of
national security as much as it is about AI diffusion.
That being the case, we were happy to see the rescinding as
well and we view this as an opportunity to really simplify,
right.
At the end of the day, you know, we have talked about the
need to drive widespread adoption of our technology and our
ecosystem. You know, simple rules that can be easily applied
that really allow our allies to protect our technology while
still utilizing the best that the United States has to offer I
think is a good start in terms of where we are going and, you
know, again, this is an area where I think the devil is in the
details and it requires a lot of balance.
And so from an industry standpoint, you know, it is our job
to put on the broader hat and work hand-in-hand with the
administration and Congress to, you know, make our best
recommendations so that it is a policy that has some stability
as we go forward as well.
The Chairman. Mr. Intrator.
Mr. Intrator. So I will echo what Sam and Lisa said. But,
you know, national security is paramount, and then once you
have addressed the limitations around national security the
opportunity to work with regulators to put together a
regulatory framework beyond that makes a lot of sense, and the
diffusion rule did not allow us that opportunity to participate
fully enough to feel like we were going to come away with what
would be an optimal outcome at this point.
The Chairman. Mr. Smith.
Mr. Smith. I think we have all discussed the right recipe.
Simplify, eliminate these tier two quantitative restrictions
that undermine confidence and access to American technology,
but enable even the most advanced GPUs the country has to be
exported to data centers that are run by a trusted provider,
that meet certain security standards.
That means both physical and cybersecurity standards. That
there is protection against diversion of the chips and there
are precautions against certain uses, and that means two
things.
One is that there are controls in place to ensure that,
say, the PLA--the Chinese military--is not accessing and using
these advanced models or advanced chips in a data center
regardless of the country that it is in, and there are certain
harmful uses that one should want to prohibit and preclude like
using a model to create the next pandemic, a biological weapon,
a nuclear weapon.
And I think that there is an approach that is coming
together that can be retained and can move forward and that
strikes the right balance.
The Chairman. OK. Final question for each of you. Would you
support a 10-year learning period on states issuing
comprehensive AI regulation or some form of Federal preemption
to create an even playing field for AI developers and
deployers?
Mr. Altman. I am not sure what a 10-year learning period
means, but I think having one Federal approach focused on light
touch on an even playing field sounds great to me.
Ms. Su. Aligned Federal approach with, you know, really
thoughtful regulation would be very, very much appreciated.
Mr. Intrator. I agree with both my colleagues.
Mr. Smith. Yes, I think that builds, obviously, on the op-
ed that you and Senator Graham published last year and I think
giving the country time--your analogy, your example, was this
worked for the internet.
There is a lot of details that need to be hammered out, but
giving the Federal Government the ability to lead, especially
in the areas around product safety and pre-release reviews and
the like, would help this industry grow.
The Chairman. Well, I want to thank each of the witnesses.
This was a very interesting hearing. It was informative. These
issues matter.
You saw a great deal of interest on both sides of the aisle
in this topic and so I appreciate--each of you are very busy
and doing a lot of things and I appreciate your being here
today.
Senators will have until the close of business on Thursday,
May 15, to submit questions for the record and the witnesses
will have until the end of the day on Thursday, May 29 to
respond to those questions.
And with that, that concludes today's hearing. The
Committee stands adjourned.
[Whereupon, at 1:13 p.m., the hearing was adjourned.]
A P P E N D I X
Response to Written Question Submitted by Hon. Roger Wicker to
Sam Altman
Political and Ethical Decisions by AI Technology
Background: xAI was founded by Elon Musk on March 9, 2023, and
develops technology similar to OpenAI, which Musk helped found
alongside Sam Altman and others in 2015. Grok is xAI's flagship
product, which runs on X (formerly Twitter), the social media platform.
Musk, along with xAI's developers, has expressed an intent to measure and
potentially modify the political and ethical preferences embedded in AI
systems. Studies have shown that popular AI models like OpenAI's
ChatGPT tend to exhibit specific ideological leanings, particularly
favoring environmental protection and expressing left-leaning,
libertarian viewpoints.
Question. Mr. Altman, President Trump has said that to maintain
U.S. leadership in AI, we must develop systems that are free from
ideological bias or engineered social agendas. You have said that you
expect AI to be capable of superhuman persuasion well before it is
superhuman at general intelligence. According to research recently
published by xAI and Scale AI advisor Dan Hendrycks, AI systems exhibit
significant left-wing biases in their value systems. What should be
done to prevent superhuman persuasion by AI? Should superhuman
persuasion by AI be banned? What are you doing to prevent superhuman
persuasion in OpenAI's systems?
Answer. Our tools enable the freedom to learn, the freedom to
create, and the freedom to innovate. We are accelerating knowledge,
creativity, and free expression. Our systems have robust guardrails. We
are transparent about how those guardrails work and we work hard to
make sure we are enabling creativity and protecting everyone's freedom
to use AI. Our models are specifically designed not to ``have an
agenda'' which is outlined in our public documentation, such as the
Model Spec, describing how our models work; the goal of an AI assistant
is to assist humanity, not to shape it.
______
Response to Written Questions Submitted by Hon. Marsha Blackburn to
Sam Altman
Question. The strength of American businesses has long depended on
the enforcement of intellectual property (IP) rights. Recently, OpenAI
has publicly argued that unless AI companies in the U.S. are permitted
to broadly claim fair use of copyrighted content, the Nation will lose
its competitive advantage to China in the AI sector.[1] Specifically, a
recent comment letter by OpenAI stated ``there is little doubt that the
PRC's AI developers will enjoy unfettered access to data--including
copyrighted data--that will improve their models.'' This assertion
stands in stark contrast to the long-standing American principles that
prioritize IP protection as a driver of innovation and a safeguard
against foreign competition. The United States' commitment to upholding
property rights and the rule of law has been central to its leadership
in technological development. Would you suggest that the U.S. adopt an
approach to intellectual property rights more akin to that of China?
Answer. America's intellectual property laws have underpinned
generations of American technology leadership, from the personal
computer to the commercial internet. America's creative professionals
have benefitted from the strong protections our laws give to creators,
and America's technology innovators have benefitted from existing and
longstanding IP doctrines such as fair use, which permits new
technologies to interact with copyrighted works in transformative ways.
This balanced framework has enabled the success of American AI, and the
United States should stand by the existing American legal framework
that has served our country so well.
______
Response to Written Questions Submitted by Hon. Maria Cantwell to
Sam Altman
AI Standards
The U.S. driving development of AI standards alongside the most
advanced democracies in the world offers us an opportunity to set the
``rules of the road'' for AI on the global stage.
Question 1. In response to my question regarding NIST standards,
you stated that NIST standards would not be necessary, but that they
could be helpful in improving our competitiveness. Can you explain how
you view NIST standards as helping the United States' competitiveness?
Answer. We know that in the race for 5G China took an active role
in subsidizing and supporting companies in standard setting bodies.
Their influence in voting bodies helped set the rules of the road for
5G. Similar dynamics are occurring with AI standards and NIST can play
a role in supporting American companies to navigate these processes,
especially for non-legacy technology companies.
In order for AI to benefit the world, technical standards would
help countries build on U.S. technology and promote democratic AI. This
is what our initiative, OpenAI for Countries, aims to achieve.
Question 2. What standards would you like to see NIST develop and
promote to improve U.S. competitiveness?
Answer. As models become more capable, it will be important to
further the science of evaluations and metrics for measurement of such
capabilities. NIST's measurement science initiatives (benchmarks, test
beds, cryptographic validation, AI evaluations) can help further the
development of these methods of evaluation and give industry a common
vocabulary around safety and security.
Separately, various countries are proposing frameworks for risk
management of frontier systems. NIST could play a role in harmonizing
those across jurisdictions to help American products get to new
markets.
Finally, agents will present a new challenge to how tasks and
communication are conducted on the internet. NIST could help drive and
establish consensus-based industry technical standards to address this
challenge.
AI Safety
We are seeing a proliferation of deepfakes and other AI content
that threatens the average person's ability to discern truth in media.
And that's just one area in the field of AI that presents complicated
safety questions. The U.S. AI Safety Institute plays a critical role in
ensuring that AI systems are developed responsibly and that the most
advanced models are fully tested. This is crucial for building trust
and promoting wider adoption.
Question 3. Do you support the work of the U.S. AI Safety
Institute?
Answer. OpenAI has had a constructive partnership with the U.S. AI
Safety Institute, focused on national security risks posed by dual-use
AI capabilities. In our view, this is a good model for how a voluntary
partnership between the Federal government and private sector can
protect American national security and strengthen our economic
competitiveness.
Public Investment in Science
Government investment in fundamental science has been the backbone
of American success in technology and innovation. If the United States
wants to outcompete foreign adversaries, it cannot defund the National
Science Foundation, National Institute of Standards and Technology
(NIST), Department of Energy labs, or STEM education programs that
power the AI workforce and ecosystem. Leadership in AI requires
sustained public investment, not ill-conceived cuts that are not
data-driven.
Question 4. How has your company benefited from or collaborated
with the National Science Foundation, NIST or the Department of Energy
Labs in artificial intelligence development?
Answer. OpenAI has greatly valued the opportunity to collaborate
with U.S. public institutions, particularly the Department of Energy
National Laboratories, including Los Alamos National Laboratory, in
advancing the safe and beneficial use of artificial intelligence for
scientific discovery.
Highlights of our collaboration include:
Secure deployment of OpenAI models at Los Alamos National
Laboratory (LANL): In a first-of-its-kind partnership, we
enabled the deployment of our models on the Venado
supercomputing cluster, supporting high-assurance scientific
research within secure government environments.
Wider engagement across the national lab ecosystem: We have
partnered with scientists from across the DOE laboratory
network, including hosting a ``1,000+ Scientists Jam Session''
where over 1,500 researchers at nine national labs explored how
AI can accelerate research. These collaborations provide mutual
learning: our models improve with real scientific feedback, and
lab scientists gain hands-on experience with frontier tools.
Ongoing discussions on future-focused projects: We continue
to engage with the labs on mission-aligned areas such as energy
research, bioscience, and materials discovery, with the shared
goal of responsibly harnessing AI to support U.S. scientific
and technological leadership.
Question 5. How will cuts to NSF funding impact your workforce and
search for talent?
Answer. OpenAI benefits from and deeply values the strong
scientific ecosystem fostered by U.S. institutions such as the National
Science Foundation (NSF). NSF-funded programs play an important role in
supporting the researchers, students, and discoveries that shape the
future of artificial intelligence.
A healthy academic research environment:
Expands the pool of AI-ready talent, including many of the
researchers we are proud to have hired.
Strengthens foundational science, much of which underpins
progress in machine learning and adjacent fields.
Supports broader innovation, ensuring that developments in
AI benefit from and contribute to the wider scientific
enterprise.
We are committed to working with government and academic partners
to ensure the U.S. remains a global leader in both talent and
innovation.
Question 6. What impact will cuts to Federal funding for science
and research at universities have on U.S. competitiveness in AI?
Answer. The strength of America's research universities has long
been a key driver of national competitiveness in advanced technologies,
including AI. Public investment in university research plays a unique
role in enabling both fundamental discovery and talent development.
In the context of AI:
Many core innovations have emerged from university labs
supported by Federal grants.
AI models themselves are increasingly being used to support
scientific discovery, amplifying the value of research
investment.
Continued support for university-based research ensures that the
U.S. remains at the forefront of scientific and technological progress.
We are enthusiastic about the opportunity to continue contributing to
this shared mission alongside academic and Federal partners.
Energy Needs and R&D for Fusion Energy
The growing demand for electricity to power AI data centers is
staggering. By some estimates, global electricity demand from data
centers is projected to more than double by 2030, exceeding 945
terawatt-hours (TWh). It will strain electric grids and energy
providers.
Question 7. What plan does your company have to meet energy needs
for AI, and what investments are you making into non-fossil fuel
sources of energy such as fusion?
Answer. We anticipate that AI's energy needs will incentivize
substantial new investment in grid infrastructure and drive innovation
in energy technologies. Advances in AI, including reasoning models,
hold significant promise for scientific discovery, including in the
field of abundant, affordable energy solutions. Indeed, our existing
partnership to deploy reasoning models for use by the National Labs
includes Lawrence Livermore, whose scientists were first in the world
to demonstrate fusion ignition. Alongside others within the industry,
we also will continue our work to find new ways to ensure our
technology is as efficient as possible, including when it comes to
energy consumption. Even as we continue to see promising research and
innovation, we also remain focused on optimal use of available
computing power, both in research and deployment.
Question 8. With respect to fusion energy, how can the government
partner with the private sector to scale fusion technology as it
continues to develop?
Answer. The scaling laws are clear. American AI leadership is a
function of energy, data, and chips. Government support for fusion
research and pilots can be crucial, as companies look to identify
viable paths to raising the capital they need for continued scientific
progress and ultimately development at scale. As the government
continues to support fusion research by private entities and at our
national labs, we hope that our work with the National Labs as well as
early explorations with a range of fission and fusion companies
provides early indication of the role that AI can play in advancing
energy abundance.
______
Response to Written Question Submitted by Hon. Amy Klobuchar to
Sam Altman
Topic: Workforce Development of Engineers
In your testimony you said that by the end of this year OpenAI
``will release AI powered tools that can handle sophisticated software
engineering.'' I'm concerned we won't be able to grow senior engineers
if AI replaces junior engineers.
Question. How will you ensure OpenAI grows the talent needed for
future success?
Answer. We are very focused on fostering and training software
engineering talent--including through the role our technology can play
in bolstering American education at all levels and expanding computer
science capabilities across our schools and all sectors of the economy.
Our AI tools are highly complementary to existing software know-how and
can dramatically increase the capabilities of small and large
businesses. One example is how our tools can reduce a software
engineer's manual work and allow those engineers to focus on more
complex tasks, thereby strengthening their skills in important areas
like critical thinking, creativity, and problem-solving.
______
Response to Written Questions Submitted by Hon. Brian Schatz to
Sam Altman
Future of Work
Question 1. How is your company taking advantage of the automation
you're empowering to scale productivity without leaving workers behind?
Answer. Our AI tools are used in a wide range of ways at OpenAI,
such as helping people with tasks like developing code, interacting
with customers, and analyzing data. We see advanced AI as a way to help
people and businesses gain new capabilities, increase innovation, and
uplift productivity. As AI tools become increasingly capable, we think
it's important to bolster education at all levels, from elementary
schools through mid-career workers. Everyone should have the
opportunity to benefit from AI.
Question 2. Once OpenAI has ``generated orders of magnitude'' of
returns on investments, do you believe the Federal government has any
responsibility to make sure those benefits are distributed equitably
across all Americans?
Answer. OpenAI is a relatively young company and we continue to
invest at scale to develop new and more capable AI models, and we
invest heavily in the infrastructure required to achieve our mission.
We are not a profitable company at this stage, unlike more mature
technology firms that have large, sustained profit margins. We believe
the Federal government should ensure Americans have the freedom to
access and benefit from AI as it advances.
Question 3. OpenAI's capped profit structure was originally
designed, in part, to mitigate the harms of workforce automation by
using excess profits from AI to support those who lost their jobs.
However, per your corporate restructuring announcement, you now intend
to remove that cap. Do you still intend to support potential displaced
workers under this new structure?
Answer. Earlier this month we reaffirmed our commitment to the
OpenAI nonprofit having control over the organization. The previously
existing for-profit subsidiary--originally structured as a ``capped-
profit'' LLC--will be converted into a Public Benefit Corporation
(PBC), and this new entity will remain under the control of the
nonprofit. The PBC's mission will be the same as the nonprofit's, which
is to ensure AGI benefits all of humanity. The new structure will allow
us to strengthen our ability to attract capital, talent, and resources,
while preserving our founding mission to make our services broadly
available to all of humanity. We believe our mission of achieving safe
and beneficial advanced AI will help people and businesses gain new
capabilities, increase innovation, and uplift productivity.
Question 4. Will OpenAI commit to developing clear standards not
only for data quality, but for labor protections and responsible
practices across the AI training data supply chain?
Answer. OpenAI has a strong track record of transparency around how
our models are built and how we ensure they're safe and designed to
prevent a wide range of potential harms. We will continue to work with
the public and private sectors to provide insights and understanding as
our technology advances. We also have made public our supplier code of
conduct.
Corporate Restructuring
In 2017, you said ``That's why we're a nonprofit: we don't ever
want to be making decisions to benefit shareholders. The only people we
want to be accountable to is humanity as a whole.'' In your previous
testimony before the Senate, you explained the specific safeguards in
OpenAI's structure that ensure it remains true to its charitable
mission. On May 5, 2025, OpenAI announced that it would transition its
for-profit operations to a Public Benefit Corporation (PBC), but that
the nonprofit would retain control.
Question 5. What mechanisms are in place to prevent mission drift
and ensure that the PBC's actions align with OpenAI's foundational
goals?
Answer. The nonprofit will control and be a large shareholder of
the PBC. Both the nonprofit and the PBC will have the same mission--to
ensure that AGI benefits all of humanity. We have described our plans,
including our ongoing commitment to our mission, in this recent
statement. Other AI labs, like Anthropic and xAI, are also PBCs, as are
other purpose-driven companies like Patagonia.
Question 6. What criteria, metrics, or benchmarks will OpenAI use
to evaluate whether its actions serve the public interest?
Answer. We will continue to be transparent about how our models
work, including publishing a ``model spec'' that outlines how our
models are designed and the safety guardrails incorporated into
training. We also maintain a public preparedness framework, which
details how we evaluate and mitigate potential AI harms.
In addition, our corporate structure is fundamentally designed to
serve the public interest. As a Public Benefit Corporation (PBC), the
PBC board of directors will hold a fiduciary duty to uphold the public
benefit objectives outlined in our charter. These objectives are
aligned with--and in fact identical to--those of the OpenAI nonprofit.
This means that serving the public interest is not just a guiding
principle, but a legal obligation embedded in our governance model.
By combining technical transparency, safety-focused metrics, and a
mission-aligned corporate structure, we ensure that our actions remain
squarely focused on advancing the public good.
Question 7. Will OpenAI commit to regular public disclosures about
its operations, decision-making processes, and AI developments?
Answer. We have a long track record of transparency and engagement
on these issues, including published, in-depth research, system cards,
safety specifications and testing information, ongoing research
programs and academic partnerships, and disclosures about how our
models and safety work are developed and implemented.
Question 8. Do you still agree that the interests of your
shareholders are not the same as the interests of the public, and might
not always be aligned with America's security interests?
Answer. Our mission is to ensure that AGI benefits all of humanity,
and that will not change. We believe that advancing democratic AI, led
by the U.S. and like-minded nations and anchored by a commitment to
freedom and democratic principles, is the best way to ensure both our
mission and America's security interests. As we announced earlier in
May, OpenAI was founded as a nonprofit, and is today overseen and
controlled by that nonprofit. Going forward, it will continue to be
controlled by that nonprofit. Our for-profit LLC, which has been under
the nonprofit since 2019, will transition to a Public Benefit
Corporation (PBC)--a purpose-driven company structure that has to
consider the interests of both shareholders and the mission. The
nonprofit will control and also be a large shareholder of the PBC,
giving the nonprofit better resources to support many benefits. Our
mission remains the same, and the PBC will have the same mission.
Question 9. How will OpenAI manage potential tensions between
profit-driven investor expectations and its nonprofit mission?
Answer. Our mission remains the same--to ensure that AGI benefits
all of humanity. The nonprofit and PBC share the same mission.
Safety and Security
Question 10. Do you agree that one of the board's core
responsibilities is to ensure OpenAI's models are thoroughly tested
before their release to ensure they won't harm the public?
Answer. We conduct extensive safety testing and outline both how
our models are designed and the work we do to safeguard against
potential risks. This process is outlined on our website and is
discussed and detailed in extensive public documentation. Last year,
the Board formed the Safety and Security Committee, an independent
oversight committee focused on model safety and security. The Committee
is briefed by company leadership on safety evaluations and exercises
oversight over major model releases.
We also maintain a productive partnership with the U.S. AI Safety
Institute that enables AI research, testing, and evaluation focused on
national security risks.
Question 11. Do you agree that employees should be encouraged to
raise concerns about threats to the security of the United States?
Answer. OpenAI believes that open communication is essential to a
successful work environment and that all employees should feel free to
raise issues of concern without fear of reprisal. We have worked to
foster a culture where people feel a responsibility to raise potential
safety concerns and work to address them. This is encompassed in our
Raising Concerns policy, which is made available to all OpenAI
employees.
Question 12. Does OpenAI maintain the commitments it made under the
Biden Administration Voluntary AI Commitments?
Answer. OpenAI continues to prioritize the safe, secure, and
transparent development and use of AI technology.
Question 12a. If so, will OpenAI continue to maintain these
commitments under its new PBC structure?
Answer. Regardless of our corporate structure, OpenAI will always
be committed to the safe, secure, and transparent development and use
of AI technology.
Question 13. Does OpenAI maintain the commitments it made under the
Frontier AI Safety Commitments?
Answer. OpenAI remains committed to fulfilling the voluntary
Frontier AI Safety Commitments made at the Seoul AI Summit in May 2024.
In advance of the Paris AI Action Summit in February 2025, we published
an update showing our progress on the voluntary commitments made at
Seoul and at previous AI summits, available here. [https://
cdn.openai.com/global-affairs/paris-summit-update-on-voluntary-
commitments-20250207.pdf]
Question 13a. If so, will OpenAI continue to maintain these
commitments under its PBC structure?
Answer. We will continue to prioritize the safe, secure, and
transparent development and use of AI technology.
Accelerating Scientific Research
Question 14. How is OpenAI expanding partnerships or tool
development to accelerate breakthroughs in biotechnology and other
scientific research, and what additional support do you need from
Federal agencies to scale these efforts responsibly?
Answer. We are incredibly excited about how our AI models can
accelerate scientific progress. We have established a strong
partnership with the National Laboratories to further these efforts. We
would continue to encourage public sector uptake of AI tools across
institutions that work on science and health. And we strongly encourage
efforts to incorporate AI into education at all levels, which will
equip many more people to harness AI for scientific breakthroughs.
Question 15. OpenAI models increasingly generate content used in
scientific inference. What evaluation protocols or benchmarks do you
believe are necessary to ensure that AI-generated results in scientific
fields are robust, reproducible, up-to-date, and trustworthy?
Answer. Different fields have different approaches to this, but
generally speaking, robust science should be reproducible and peer
reviewed. This helps ensure the validity, significance, and originality
of scientific work and helps improve the overall quality of research
across various fields.
United Arab Emirates (UAE)
Question 16. Please share why you believe the UAE is a trustworthy
partner for increased collaboration on AI.
Answer. The UAE is aiming to be a world leader in integrating AI
into their industry and society. They have spent years developing plans
that bring together government, investors, the private sector, and
educational institutions toward this end. We believe it's important
that American companies don't sit on the sidelines as they proceed with
their plans.
As we partner with countries under our OpenAI for Countries
initiative we are ensuring that democratic values shape the future of
AI. To do this securely, we need smart export controls that balance
innovation and safety, while aligning nations around rights like free
expression and safeguards against surveillance. We are working with the
Federal government to ensure our international partnerships meet the
highest standards of security and compliance.
Question 17. What actions by the UAE would cause you to reconsider
support for cooperation and collaboration on AI?
Answer. Our partnerships with foreign governments will only succeed
by following the standards of security and compliance set forth by the
United States government. Foreign governments will also be required to
comply with our usage policies.
Energy Consumption and Cost of Winning
Question 18. Do you support President Trump's efforts to expand
coal power for AI data centers?
Answer. We support efforts to ensure AI infrastructure is powered
by energy that is affordable, reliable, and scalable--goals that are
critical to U.S. economic competitiveness and job creation. How best to
achieve that, across a mix of energy sources, is a matter for
policymakers and the broader energy sector.
Question 19. How are you addressing the costs associated with new
infrastructure development in the short-term, including in terms of
water consumption, pollution, and climate impacts?
Answer. We take very seriously the impacts associated with AI
infrastructure projects. We intend to actively engage with federal,
state, and local officials, as well as communities, to responsibly
manage water usage and climate impacts. Our aim is to ensure that our
AI infrastructure projects not only support technological advancement
but also contribute positively to sustainability and quality of life.
Question 20. What plans do you have to source clean energy and to
publicly report your companies' emissions?
Answer. We expect our near-term infrastructure projects to
incorporate significant clean energy components, including nuclear
energy, solar generation, wind and large-scale battery storage, as well
as to comply with all applicable environmental and reporting
requirements. We are also actively exploring how we can support
accelerated timelines for advanced clean energy technologies, including
advanced nuclear and fusion.
______
Response to Written Questions Submitted by Hon. Edward Markey to
Sam Altman
Comprehensive Impacts of Data Center Construction
Question 1. When planning for data center construction, does your
company conduct a cradle-to-grave infrastructure study that includes
wildlife, community, and pollution impacts during and beyond the
operational lifespan of a data center? If yes, what have you learned
from those studies? If no, why not?
Answer. In our ongoing site selection process, we are assessing
sites in accordance with all applicable Federal and state environmental
laws, as well as utilizing industry experiences, consultants, and best
practices. We work closely with our infrastructure partners, such as
Oracle and Microsoft, which have been responsible for these activities
to date.
Backup Energy Generation
Question 2. Does your company use backup diesel generators at any
facilities?
Question 2a. If yes, please provide a list of each facility where
diesel generators are being used, along with the location, quantity,
and type of generators.
Question 2b. If yes, did your company consider the use of battery
storage technology as an alternative to diesel generators? Please
explain your decision process.
Answer. We are not currently utilizing backup diesel generators. In
our site selection and design process, we will consider alternatives to
diesel generators, including battery energy storage system (BESS)
technologies. We have battery backup power.
Energy Mix
Question 3. Does your company utilize any on-site or colocated
energy generation to power your data centers?
Question 3a. If yes, please provide detail how much power comes
from on-site and colocated energy generation.
Question 3b. If yes, please list all on-site and colocated energy
sources (e.g., renewable, nuclear, hydropower, gas-powered turbines,
etc.) that are being utilized to power your data centers.
Answer. In developing our AI infrastructure, we will consider both
on-site and utility power, including co-located energy sources.
Question 4. How does your company ensure local ratepayers are not
responsible for paying the cost of new energy infrastructure, such as
transmission lines, needed to meet the data center's energy demand?
Answer. We intend to actively collaborate with utilities,
regulators, and local governments to ensure that infrastructure costs
associated with our data centers do not unfairly burden local
ratepayers, maintaining fairness and affordability for all energy
consumers.
Energy Consumption in AI Model Training
Question 5. How many GWh of energy do you estimate was used to
train GPT-4.1 and any new AI models?
Answer. We are actively working to reduce energy consumption across
all stages of AI development, from initial training to practical
inference. We invest continuously in improvements to both hardware and
software efficiency, collaborating closely with researchers,
policymakers, and industry partners to establish stronger industry
standards and credible, open benchmarks, such as those developed by
the ML Energy Initiative and Epoch AI.
As we explore new ways to enhance efficiency, we remain committed
to thoughtful and responsible use of computing resources. Innovations
like model distillation, which allows us to train large, comprehensive
models that can subsequently be refined into smaller, highly efficient
versions for targeted applications, are a key part of this strategy. As
algorithms and AI technologies continue evolving, we anticipate ongoing
improvements in both energy efficiency and resource conservation,
including reductions in water usage.
Question 6. Data centers require vast amounts of water for cooling.
When water is at critically low levels, does your company continue to
pull water for building cooling? Does it have a contingency for
operating so as not to put further stress on the water supply and
potentially take limited resources from households, agriculture, or
small businesses?
Answer. We are currently evaluating strategies to ensure
sustainable water use, including in site selection, through advanced
cooling technologies, and via operational adjustments and contingency
plans that prioritize community water needs, ensuring responsible
resource management and minimizing potential impacts on local
agriculture, households, and small businesses.
Government Partnerships
Question 7. Your company offers AI products specifically for the
public sector, which are now used across Federal agencies and state and
local governments. Given the especially heightened risks related to
governments' use of AI--including the denial of rights or access to
services and false or incorrect information about government benefits
and programs--what additional steps have you taken to ensure that these
tools are safe and effective to use in the context of government?
Answer. We provide advanced AI tools to consumers, businesses, and
governments. Each of these use cases is different, but each requires
safe and beneficial AI. We have outlined our safety and security work
on our website and have published a ``model spec'' which provides
extensive details about how we engineer safety in model development.
Question 8. What protocols do you have in place to work with
government agencies to rectify any harms or errors when they occur?
Answer. We solicit feedback from customers and partners, including
government agencies. We don't decide how our models will be used by
particular customers but work to ensure these models are highly capable
and safe to deploy. We also provide a secure way to report harms
through our Trust & Transparency reporting resources.
Business Partnerships
Question 9. When you make your AI systems available to other users/
deployers, what are the types of issues you agree on?
Answer. We have usage policies in place for our various products
and tools, including our API, which businesses typically use as a base
to build new products and services. Our usage policies are publicly
available.
Question 10. What information do you provide to those other
parties/deployers?
Answer. We share publicly available information about how the
models are trained, how they are designed to be safe, and our usage
policies.
Question 11. What steps does your company take to reduce the
likelihood that, for instance, downstream users use your services in
ways that could harm people or that violate your terms of use?
Answer. We use a combination of automated systems, human review,
and user reports to find and assess uses that potentially violate our
policies. Violations can lead to actions against the content or a
customer account, such as warnings, sharing restrictions, or
ineligibility for inclusion in our services.
AI Hallucinations
Recent reporting suggests that generative AI hallucinations are
getting worse as the technologies become more powerful. Hallucinations
can lead to great harm in certain scenarios, such as when assessing job
applications, or even more dangerously in the national security
context.
Question 12. Do you agree that we need guardrails to ensure that AI
tools are not used or misused in ways that could cause harm to people?
Answer. Yes, and we have usage policies that are designed to protect
against potential harms.
Question 13. What steps is your company taking to address this
issue?
Answer. We have usage policies that are designed to protect against
potential harms.
______
Response to Written Question Submitted by Hon. Tammy Baldwin to
Sam Altman
Question. You have spoken publicly on how you believe AI can be
used to advance medical research and improve patient outcomes. However,
I am concerned about sensitive personal information being compromised
in the development of such models, and how systems may entrench bias.
Can you share what role you envision AI having in health care and what
protections need to be in place to safeguard patient privacy?
Answer. OpenAI has developed and implemented a range of privacy
safeguards across the AI lifecycle, from model development to model
deployment. We undertake extensive efforts at the training stages to
limit the presence of personal information and decrease the likelihood
that our models output sensitive personal information, and we provide
users information and tools to address their privacy concerns. We are
strongly committed to keeping secure any information we obtain from
users or about users. These safeguards are outlined in our privacy
policy.
We are already seeing our tools being used to accelerate scientific
research and help fight disease. We have partnerships across medical
research and clinical care. We have outlined our approach to working
with healthcare providers, including privacy and security settings for
patients. This is critical to ensuring AI tools are used safely in
healthcare, where AI is a game changer.
______
Response to Written Questions Submitted by Hon. Jacky Rosen to
Sam Altman
Model Security
Question 1. It's essential we ensure the AI models we use do not
become another cybersecurity vulnerability. Would voluntary
cybersecurity standards for large AI models or high-risk models and the
infrastructure they were trained on be helpful in establishing trust?
Answer. We are committed to continuously upleveling our
cybersecurity practices, both from a security perspective (protecting
our users, systems, and intellectual property) and from a safety lens
(preparing for dual-use frontier capabilities, such as cyber, that have
the potential to cause harm).
To ensure the security of our models and systems, we are leveraging
our own AI technology to scale our cyber defenses and protect our
users, systems, and intellectual property; partnering with third-party
cybersecurity experts to rigorously test our cyber defenses through
realistic red-teaming; and working to adopt industry-leading security
practices such as zero-trust architectures and hardware-backed security
solutions, together with our partners.
To ensure the safety of our models, we rigorously evaluate model
capabilities consistent with our Preparedness Framework, and publish
extensive documentation about the safeguards we have built into our
models and the approaches we use to ensure safety and security. We work
closely with cybersecurity experts to conduct rigorous third-party
assessments of our models, including with government agencies that are
focused on combatting cyber risks. Lastly, we continuously monitor and
disrupt attempts by malicious actors to exploit our technology.
AI T&E Workforce
A key factor in ensuring the U.S. continues to lead the world in
the AI race is by ensuring the AI we develop is the best and therefore
the most trustworthy. Validating model outputs is an important step in
establishing trust. Right now, however, the U.S. has neither the
standards nor the trained workforce to evaluate AI models to establish
that we can trust model outputs.
Question 2. What should Congress consider to incentivize and grow
the AI test and evaluation workforce?
Answer. We support congressional action to further educational
opportunities to grow the AI workforce of the future. Additionally,
OpenAI maintains a productive partnership with the U.S. AI Safety
Institute that enables AI research, testing, and evaluation focused on
national security risks. This partnership relies on skilled technical
experts on the government side. More broadly, as AI tools are adopted
across government, it will be important to promote AI skills and
literacy across the Federal workforce, in order to ensure a capable
workforce for AI development and implementation, and for the type of AI
safety engineering and testing that you describe.
Question 3. How can Congress support more interdisciplinary
approaches to testing and evaluating AI? For example, how do we ensure
a model being used in a healthcare setting has been evaluated both by
experts in the model technology, but also experts in the healthcare
setting in which it will be deployed?
Answer. Different AI use cases will require different approaches.
We have worked extensively with regulated industries like financial
services and healthcare to ensure our tools comply with their
regulatory requirements. This is very important to ensuring AI is
adopted across these sectors and that safety and security requirements
are met.
______
Response to Written Question Submitted by Hon. Lisa Blunt Rochester to
Sam Altman
Mr. Altman, AI is becoming integrated into our critical economic
and societal infrastructure, with McKinsey stating that the long-term
AI opportunity could be about $4.4 trillion in added productivity
potential from corporate use cases.
But vendor lock-in could be a real issue, where an AI vendor
dramatically falls behind the competition and leaves its client with a
vastly inferior product, which could threaten key industries the AI
product operates in.
Question. Do you have any plans or strategies regarding mitigating
lock-in for your AI products operating in critical sectors, like the
financial and medical sectors, to prevent potential lock-in effects
that might harm these critical sectors and the folks therein?
Answer. AI is a highly competitive space, with lots of companies
developing advanced model capabilities and new products. The state of
the art is advancing rapidly. Firms typically work with multiple models
and multiple developers, reducing the risk of lock-in to a particular
provider. We think it's important that consumers and businesses are
able to choose the best AI models and the best cloud infrastructure on
which to run those models.
______
Response to Written Question Submitted by Hon. Todd Young to
Dr. Lisa Su
AI Public Awareness and Education
Winning the diffusion race not only requires providing a pathway
for greater adoption of the technology and its applications into the
general stream of commerce but also bolstering our public awareness and
education of AI.
Question. Dr. Su, would you like to comment on this, especially as
it relates to building a workforce capable of solving more advanced
scientific R&D challenges? If the workforce isn't here at home, where
will it go and what are the consequences of that?
Answer. U.S. leadership in artificial intelligence is ultimately
based on talent. The single greatest determinant of our long-term
competitiveness is whether we can attract, train, and retain the people
who will design, build, and govern these systems. Today, there are
critical gaps across three main categories:
Advanced AI research and systems engineering. There is a
significant shortfall of PhD-level talent capable of designing
and scaling frontier AI models. This includes expertise in
machine learning, algorithm optimization, and systems co-
design. Many of the leading researchers in this field are being
aggressively recruited--and retained--by overseas institutions,
often with the backing of state-directed strategies.
Applied AI and deployment talent. We also lack enough
engineers who can safely and responsibly integrate AI into
critical sectors like healthcare, manufacturing, national
security, and logistics. These are cross-disciplinary roles
that require fluency in both AI and the domain in which it's
being applied. Bridging that gap is essential if we want AI
innovation to translate into broad-based economic benefit.
Technical infrastructure and hardware specialization. AI
leadership requires leading in the underlying hardware stack--
everything from chip architecture and interconnect design to
advanced packaging and power management. The U.S. has a chronic
shortage of semiconductor engineers, firmware developers, and
skilled technicians. These roles are vital for making AI models
performant, efficient, and scalable.
Addressing these gaps requires more than just graduate fellowships
or university funding. It requires a coordinated, national approach--
from STEM education in K-12 to community college pathways, visa policy
for high-skill immigrants, and public-private partnerships that give
students and workers real exposure to cutting-edge AI development.
If the workforce is not in the United States, it will be in other
countries, including strategic competitors, to their benefit and our
detriment. This includes being able to attract and retain properly
vetted foreign talent, many of whom seek to study, work and live here
to contribute to our technology leadership.
If we want to lead the world in AI, we must lead the world in
talent. That is the foundation--and right now, we are playing catch-up
in too many areas.
______
Response to Written Questions Submitted by Hon. Maria Cantwell to
Dr. Lisa Su
AI Standards
The U.S. driving development of AI standards alongside the most
advanced democracies in the world offers us an opportunity to set the
``rules of the road'' for AI on the global stage.
Question 1. How do NIST standards help the United States'
competitiveness?
Answer. NIST's Risk Management Framework, including its comparisons
with other AI frameworks and international standards, supports U.S.
competitiveness because it was developed through a transparent public
process, which improves customer confidence. NIST has successfully
applied this framework in areas such as cybersecurity and AI, and AMD
has relied upon that work.
Question 2. What standards would you like to see NIST develop or
promote to improve U.S. competitiveness?
Answer. While some standards, like chip design, are best driven by
industry consortia, standards that protect national security, such as
cybersecurity and silicon provenance, are best developed when NIST
partners with industry to promote U.S. competitiveness.
Question 3. How has your company benefited from or collaborated
with the National Science Foundation, NIST or the Department of Energy
Labs in artificial intelligence development?
Answer. Through more than a decade of partnership with the
Department of Energy, AMD now powers the world's two fastest
supercomputers: Frontier, which went into operation at Oak Ridge
National Labs in 2021, and El Capitan, which went into operation at
Lawrence Livermore National Labs late last year. These systems are
critical infrastructure for U.S. national security and scientific
leadership, including the latest advances in drug discovery, medical
research, climate research, hypersonic flight, and even training future
generations of more capable AI models.
Question 4. How will cuts to NSF funding impact your workforce and
search for talent?
Answer. NSF funding for science and engineering research and
education at U.S. colleges and universities provides a formative
training ground for the American semiconductor workforce pipeline. We
support continued funding for these purposes to maintain U.S. AI
leadership.
Question 5. What impact will cuts to Federal funding for science
and research at universities have on U.S. competitiveness in AI?
Answer. U.S. competitiveness in AI is based on five priorities: (1)
accelerating U.S. chip and system innovations to keep our leadership in
AI compute infrastructure, where U.S. universities have played a key
role; (2) open ecosystems that enable universities to directly
contribute innovation; (3) U.S. research funding for advanced
semiconductor manufacturing and packaging; (4) research investment to
drive U.S. AI talent development; and (5) research in cyber techniques
to strengthen U.S. defense capabilities. Federal funding for research
related to these priorities at U.S. universities has been critical to
maintaining our lead. To the extent less Federal funding is available
for research related to these priorities at U.S. universities, the
private sector will need to provide additional funding and support.
Question 6. What are you most concerned with when it comes to your
supply chains?
Answer. As a fabless company, AMD focuses on ensuring that the
products it designs can be readily produced and delivered to customers.
AMD consequently supports trade policies that allow those efforts to
continue, including policies that do not impair the work underway to
enhance semiconductor manufacturing capabilities in the United States.
We must build a robust domestic supply chain for advanced semiconductor
manufacturing and packaging. AI leadership depends on the ability to
build complete, integrated systems. That means ensuring we have
domestic capabilities in both wafer manufacturing at the most advanced
nodes and next-generation packaging technologies, as well as the
advanced system capabilities needed to bring it all together. This is
an area where strong public-private partnerships are critical. The
entire semiconductor industry is aligned on the need to work together
and partner with the government to significantly scale U.S. chip
production and advanced packaging capabilities here at home.
Question 7. How will the higher costs from tariffs and potential
supply chain disruptions impact your plans for building AI
infrastructure?
Answer. AMD supports efforts to bring leading-edge manufacturing
back to the U.S. We should be clear-eyed about the fact that building
leading-edge semiconductor manufacturing will be a years-long project.
AMD is encouraged by TSMC's recent announcement of an additional $100
billion investment in the U.S. It is not viable to have 100
percent of semiconductor manufacturing in one location. As an industry,
we need to find ways to invest in the U.S. while building resiliency in
our supply chain.
Question 8. Do you anticipate any delays in construction or other
work on AI infrastructure around the country? If so, where might these
impacts hit the hardest?
Answer. This is a question best answered by parties building AI
infrastructure in the U.S.
______
Response to Written Questions Submitted by Hon. Amy Klobuchar to
Dr. Lisa Su
Topic: Chips Supply Chain
The bipartisan CHIPS and Science Act was a landmark investment in
domestic semiconductor R&D and manufacturing that I strongly support.
Yet design and manufacturing are only part of the supply chain. For
example, according to the Council on Foreign Relations, chips are
typically sent to Southeast Asia for assembly, testing, and packaging.
Question 1. What are additional key points in the chip supply chain
that we need to focus on to bolster U.S. economic resiliency?
Answer. We appreciate the continued focus on strengthening
America's semiconductor capabilities. Looking ahead, any future
legislation offers an opportunity to address areas that are essential
to our national and economic security but may have been underweighted
in prior legislation. I'd offer three key priorities.
First, U.S. leadership in chip design must be treated as a
national priority. Design is where technological
differentiation happens--where performance, efficiency, and
capability are defined. Other nations are investing
aggressively in this space, not just in manufacturing. A
forward-looking policy framework should include targeted tax
incentives and refundable R&D credits to ensure the U.S.
becomes a more attractive environment for advanced
semiconductor architecture, software tooling, and AI
accelerator innovation.
Second, advanced packaging deserves far greater strategic
emphasis. The future of computing increasingly depends on
chiplet architectures and 3D integration. Yet today, as you
note, most of the high-end advanced packaging capacity resides
in Asia. This is a growing risk for both commercial and
national security applications. Focused investments in domestic
packaging R&D hubs and supply chain readiness can create
meaningful capability in an area where the U.S. still has time
to lead--but the window is closing.
Third, talent must remain at the center of any long-term
strategy. The current shortage of semiconductor engineers,
software-hardware co-design specialists, and packaging
technicians is a structural challenge. We encourage continued
and expanded Federal support for workforce development
programs--from apprenticeship-style models and community
colleges to advanced university research partnerships that
align directly with industry needs.
In short, we believe future action should be calibrated toward
areas where strategic leverage is highest--design, packaging, and
talent--and should be structured in a way that encourages broad
private-sector investment without distorting the market.
We thank the Committee for its leadership and look forward to
working together on policies that secure U.S. semiconductor leadership
for the decades ahead.
______
Response to Written Questions Submitted by Hon. Brian Schatz to
Dr. Lisa Su
Securing American AI Systems
Question 1. Could you please detail what physical security and
cybersecurity standards you have adopted, or will commit to adopting,
to prevent IP theft and cyber disruptions by foreign adversaries?
    Answer. U.S. companies have long-standing partnerships with
manufacturers who treat chip technology IP with the utmost sensitivity
so that customers continue entrusting them with manufacturing.
For the physical security of AMD's products after manufacturing, we
take our obligations under export control laws very seriously. As chip
manufacturers, we implement strict Know Your Customer procedures and
work closely with government partners to ensure compliance.
Effective enforcement ultimately depends on a strong partnership
between industry and government, clear regulations, and rigorous
oversight across the supply chain.
Question 2. What investments, including know-your-customer
processes, are you making in hardware security research and development
to prevent your products from falling into the wrong hands?
Answer. AMD maintains a rigorous export control compliance program,
which was developed in accordance with guidance from the U.S. Bureau of
Industry and Security (``BIS''). Every single order must pass an export
compliance review. We have well-established management policies on
global trade compliance, which we regularly review and update, and
distribute to all our employees. Each element of our program addresses
specific requirements associated with U.S. and international
regulations to make sure we export our products consistent with our
obligations. Our compliance system also leverages software controls to
block prohibited sales, such as sales to parties on government
sanctions lists, as well as controls reflecting bans on exports to
certain locations and for specified end-uses.
Additionally, we leverage the latest technology and take a risk-
based approach to our compliance efforts. Our goal is to know our
customers, and we also gather information on indirect customers with
point-of-sale records. Before we consider doing business with any
party, we screen them for sanctions issues. We have deployed leading
third-party resources--including the same ones trusted by the U.S.
government--to scrutinize potential customers. These platforms
continually study open-source data to see if it yields new
intelligence, and we refresh our systems with their up-to-date
findings. If red flags arise, our team conducts enhanced due diligence
before taking any further steps.
At AMD, compliance is an enterprise-wide effort. All of our
employees undertake mandatory compliance training on a yearly basis in
order to stay current with their export control responsibilities. We
also have a core GTC team of specialists around the world who are
responsible for our export control policies, processes, and procedures.
This team is located in six countries and collectively has decades of
experience navigating export compliance regulations--and managing and
executing export compliance policies--on a global scale.
Question 3. Are these processes industry standards? How do we
ensure that your competitors also exercise them?
Answer. AMD strives to have a best-in-class compliance program, as
described above. Encouraging all companies to do the same and ensuring
the government has sufficient resources to monitor compliance and apply
the regulations in a consistent and transparent manner will enhance
the effectiveness of the current controls.
PRC Deployment of AI
Question 4. What do you believe is the greatest national security
threat posed by the People's Republic of China's deployment of AI
systems?
    Answer. AMD has been a proud American company for over five decades. We
want the U.S. to thrive, innovate and lead. That is good for us as a
company and good for our employees, not just in the U.S. but around the
world. As a company, AMD is not in a position to identify or prioritize
national security threats.
Question 5. What are your recommendations for addressing these
threats?
Answer. The export control regulations describe the threats and
impose restrictions, accordingly. As noted above, AMD's extensive
compliance program is designed to comply with those restrictions,
address the identified threats and prevent diversion, working closely
with government partners to ensure compliance.
Accelerating Scientific Research
Question 6. How can AMD's collaboration with U.S. national labs be
deepened to ensure scientists across disciplines can access next-
generation compute?
Answer. AMD's work in supercomputing illustrates the benefits of
collaboration with the national labs. As mentioned above, through more
than a decade of partnership with the Department of Energy, AMD now
powers the world's two fastest supercomputers: Frontier, which went
into operation at Oak Ridge National Laboratory in 2021, and El Capitan,
which went into operation at Lawrence Livermore National Laboratory late
last year. These systems are critical infrastructure for U.S. national
security and scientific leadership, including the latest advances in
drug discovery, medical research, climate research, hypersonic flight,
and even training future generations of more capable AI models. In a
similar vein, AMD stands ready to continue and enhance its
collaboration with U.S. national labs in next-generation compute.
______
Response to Written Questions Submitted by Hon. Jacky Rosen to
Dr. Lisa Su
Competition Across the AI Ecosystem
It's essential that as we build a strong AI industry in the U.S.,
we also focus on establishing a competitive one.
Question 1. What should Congress consider to promote competition in
the AI ecosystem and across the tech stack, such as interoperability
requirements or securing access to computing power for researchers?
Answer. At AMD, we strongly believe that open ecosystems and
interoperability are foundational to driving innovation, enhancing
security, and fostering robust competition--especially in the fast-
evolving AI sector.
Open ecosystems allow developers, researchers, and companies of all
sizes to collaborate, build upon shared standards, and accelerate
breakthroughs. By designing our platforms to support open-source
software and industry-standard interfaces, we lower the barriers to
entry for innovators, enabling a broader and more diverse set of
participants to contribute. This democratization of access leads to
faster iteration cycles and a richer innovation pipeline, because no one
company, however large, has a monopoly on good ideas.
Interoperability ensures that technologies from different vendors
can work together seamlessly. This not only gives customers more
choice, but it also prevents vendor lock-in and encourages healthy
market competition. From a national competitiveness standpoint, it
strengthens the entire AI supply chain by creating resilience,
redundancy, and flexibility.
Critically, openness also promotes transparency and security. Open
standards and community-driven software allow vulnerabilities to be
identified and resolved more quickly. This collaborative scrutiny
results in more secure systems than those built in closed silos.
In short, interoperability and open ecosystems are not just good
engineering practices--they are strategic imperatives. They catalyze
innovation, enhance trust, and ensure that the U.S. remains at the
forefront of global technology leadership in AI.
As we consider how to foster a more inclusive and competitive AI
ecosystem, it's essential to emphasize that this is not just an
innovation issue--it's also a national security imperative.
Open ecosystems and interoperability are particularly useful to
help ensure that we are not concentrating risk, capability, or
decision-making power in the hands of a few companies or platforms. In
a world where AI systems will increasingly be used to support defense,
critical infrastructure, and intelligence operations, resilience and
diversity in the ecosystem are just as important as raw capability.
AI Skills
Question 2. Both of you mentioned the importance of digital skills
in your testimony. Can you discuss how it might hurt the U.S.'s ability
to compete with China if we don't leverage congressionally-mandated
Federal programs like those created under the Digital Equity Act, which
were explicitly designed to help Americans build digital skills, like
teaching seniors, small businesses, and veterans how to use AI?
Answer. AMD recognizes the importance of digital skills for the
U.S. to compete and lead on the international stage. Talent must remain
at the center of any long-term strategy to succeed. The current
shortage of semiconductor engineers, software-hardware co-design
specialists, and packaging technicians is a structural challenge. We
encourage continued and expanded Federal support for workforce
development programs--from apprenticeship-style models and community
colleges to advanced university research partnerships that align
directly with industry needs.
    We must invest in talent and strengthen our national strategy for STEM
education and workforce training. The private sector can certainly do
more, including expanding university partnerships, investing in
reskilling programs, and developing the cross-disciplinary talent
required for success. We should incentivize companies to increase their
most critical AI R&D efforts. We should make America the absolute best
place for AI talent in the world.
To that end, we recognize education holds immense power in shaping
the leaders and innovators of tomorrow, and we are passionate about
enhancing digital literacy and preparing students for the demands of
the 21st century workforce. This is why we partner with schools,
educators, and local nonprofit organizations to outfit AMD Learning
Labs with AMD processor-based equipment, helping empower teachers and
inspire students to pursue STEM education.
As additional examples of AMD's commitment to skills development
and training the leaders of tomorrow, AMD supports the Ann Richards
School for Young Women Leaders, a unique public school in Austin,
Texas, which provides out-of-the-box education strategies and
enrichment opportunities that incorporate real-world, hands-on projects
that prepare and equip students to tackle big problems with big ideas.
High school students complete a college-to-career pathway in STEM
fields where women are historically underrepresented. In the university
context, the AMD University Program offers professors and lecturers free
software licenses, hardware donations, and educational resources to
support classroom teaching in digital design, embedded systems,
computer science and AI.
Beyond the classroom, AMD also welcomed our first cohort of
veterans in late 2023 through the Hiring Our Heroes (HOH) Corporate
Fellowship Program, strengthening our workforce. The HOH Program,
developed by the U.S. Chamber of Commerce, immerses transitioning
service members, veterans, military spouses, and military caregivers in
the civilian workforce, creating economic opportunity by building their
experience and increasing the possibility of a job in the industry.
    Through private sector initiatives, such as those at AMD, and the
support of Federal programs, we can build the digital skills literacy
the U.S. needs to compete and succeed on the international stage.
______
Response to Written Questions Submitted by Hon. John Fetterman to
Dr. Lisa Su
Question 1. Dr. Su, data centers use far too much energy--U.S.
government data from the Lawrence Berkeley National Laboratory projects that
they'll use up to 12 percent of all U.S. electricity by 2028.\1\ While
my Republican colleagues have been busy overturning important energy
efficiency standards, it's more critical than ever for American
industry to lead on energy efficiency and renewable energy. What role
can hardware manufacturers play in improving data center energy
efficiency?
---------------------------------------------------------------------------
\1\ https://newscenter.lbl.gov/2025/01/15/berkeley-lab-report-
evaluates-increase-in-electricity-demand-from-data-centers/
---------------------------------------------------------------------------
Answer. We agree that, as AI systems grow more capable, the demand
for compute--and the energy to power them--is rising quickly. Without
thoughtful planning, we risk that pressure falling unfairly on everyday
Americans who share the grid.
One of the most important ways that we can address this energy
concern is at the chip level. Improving compute per watt--that is, how
much useful work we get out of each unit of energy--is essential. As a
chip design company, we see this as a central part of our mission.
Every generation of AI hardware must be significantly more energy
efficient than the last. That's not just good engineering--it's good
economics and good energy policy.
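    To illustrate the compute-per-watt metric described above: it is simply
useful work delivered divided by power drawn. The following toy
calculation uses hypothetical figures, not AMD data, to show why
efficiency is judged by the ratio rather than by raw throughput alone:

```python
# Toy illustration of compute per watt (performance per watt).
# All figures below are hypothetical, for illustration only.

def compute_per_watt(ops_per_second: float, watts: float) -> float:
    """Useful work delivered per unit of power (ops/s per watt)."""
    return ops_per_second / watts

# Two hypothetical accelerator generations: the newer part draws more
# power but delivers proportionally more throughput, so efficiency is
# measured by the ratio, not by raw speed or raw power alone.
gen_a = compute_per_watt(1.0e15, 500.0)   # 1.0 PFLOP/s at 500 W
gen_b = compute_per_watt(2.5e15, 750.0)   # 2.5 PFLOP/s at 750 W

print(f"Generation B delivers {gen_b / gen_a:.2f}x more work per watt")
```

In this hypothetical, the newer generation draws 50 percent more power
but delivers two and a half times the throughput, so each watt does
substantially more useful work.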
We also need to be smarter about where and how AI workloads are
run. More efficient chips, paired with intelligent system design, can
dramatically reduce the power needed to train and deploy these models--
especially at scale.
At the same time, industry needs to partner with utilities and
policymakers to ensure the costs of growth are not passed on to
households. That means investing in clean energy, modernizing grid
infrastructure, and being part of the solution--financially and
operationally.
AI leadership shouldn't come at the cost of affordability or
sustainability. If we lead on efficiency--from the chip level all the
way up--we can deliver more innovation with less energy, and make sure
the benefits of AI are shared broadly.
Question 2. I appreciate that you referenced technological and
process efficiency in your testimony. Specifically, what is AMD doing
to improve the energy efficiency of its chips for data center use?
Answer. AMD is highly focused on improving the energy efficiency of
its chips for data center use. AMD takes a holistic approach to energy
efficient design, balancing advancements across the many complex
architectural levers that make up chip design, incorporating tight
integration of compute and memory with chiplet architectures, advanced
packaging, software partitions, and new interconnects. One of our
primary goals across all of our products is to extract as much
performance as possible while balancing energy use.
In addition, AMD is not only working on improving the efficiency of
its own solutions, but working with partners and the larger ecosystem
to optimize virtually every aspect of the AI pipeline. Optimizing its
processing units and the myriad connectivity technologies that link
chips, systems, and racks will help enhance efficiency, along with
quantizing models, improving software, and tweaking algorithms.
AMD's holistic approach to optimizing power efficiency means
continually addressing every link in the virtual AI chain to maximize
performance-per-watt.
    This is an important consideration because it means the power and
energy requirements of a product when it initially hits the market
typically improve over the lifetime of the product. AMD has made
significant efficiency gains year over year, and supercomputers built
using AMD technologies have earned top rankings on the GREEN500. At one
point, the AMD-powered Frontier TDS (test and development system) at
Oak Ridge National Laboratory topped the GREEN500 list, which ranks
supercomputers from the TOP500 in terms of energy efficiency.
One of the key areas where significant efficiency gains are
possible relates to data movement. The largest AI models require huge
amounts of data. As bits move from the tiny register files inside GPUs
or accelerator chips, to cache memory, out to High Bandwidth Memory,
and to the CPU, and so on, energy consumption grows exponentially.
Keeping as much data as close to the accelerator as possible is
paramount to maximizing energy efficiency. It's why, for example, AMD
continues to increase the amount of cache and memory on its Instinct
accelerators generation-on-generation, and why the company continually
explores ways to optimize how the data is actually processed.
    If we look at a typical large-scale AI system today, roughly half
the total power required to run the system is consumed by the GPUs'
high bandwidth memory (HBM), while the other half goes to the CPU,
scale-up and scale-out networking, and items such as cooling and other
data center facility overhead. AMD's goal is to maximize system-level
performance while also minimizing total power consumption, not just
from its chips, but from everything around them in the data center as
well.
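    The rough split described above can be sketched as a simple power
budget. The HBM share follows the testimony's "roughly half" figure;
the individual non-HBM shares and the total draw are assumptions for
illustration, not measured values:

```python
# Illustrative power budget for a large-scale AI system, following the
# rough split in the testimony: about half of total power in the GPUs'
# high bandwidth memory (HBM), the rest in CPU, networking, and
# facility overhead. The individual non-HBM shares are assumptions.

TOTAL_KW = 1000.0  # hypothetical total system draw

budget = {
    "gpu_hbm": 0.50,              # ~half: high bandwidth memory
    "cpu": 0.15,                  # assumed portion of the other half
    "networking": 0.15,           # scale-up and scale-out fabric
    "cooling_and_overhead": 0.20, # facility-level costs
}

# Sanity check: the shares must account for all of the power.
assert abs(sum(budget.values()) - 1.0) < 1e-9

for component, share in budget.items():
    print(f"{component:22s} {share * TOTAL_KW:7.1f} kW")
```

A budget like this makes the system-level point concrete: even a chip
that draws zero power would leave half the bill untouched, which is why
optimization must extend beyond the accelerator itself.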
While the significant amounts of compute resources required for AI
today are a concern, AMD is working hard to maximize the efficiency of
its platforms and meet its related efficiency goals.
______
Response to Written Questions Submitted by Hon. Maria Cantwell to
Michael Intrator
AI Standards
The U.S. driving development of AI standards alongside the most
advanced democracies in the world offers us an opportunity to set the
``rules of the road'' for AI on the global stage.
Question 1. How do NIST standards help the United States'
competitiveness?
Answer. NIST special publications, developed through broad
stakeholder consultation, offer American companies specific and
adaptable guidance for creating frameworks to develop and deploy AI
technologies. For example, the NIST Cybersecurity Framework assists
U.S. firms by providing a common vocabulary and voluntary guidelines
for information security and cyber risk management.
Question 2. What standards would you like to see NIST develop and
promote to improve U.S. competitiveness?
Answer. By providing the technical foundation and establishing
common vocabulary for industry-led standards development, NIST can help
ensure that U.S. security approaches shape global benchmarks.
Public Investment in Science
Government investment in basic research has contributed
significantly to U.S. advances in scientific discovery, innovation and
U.S. competitiveness. Continued investment in the U.S. research complex
is essential to maintaining U.S. economic strength and national
security. This is also the case with AI given the need to continue to
develop our high performance computing capabilities and next generation
energy technologies required to power the data centers of the future.
Question 3. How has your company benefited from or collaborated
with the National Science Foundation, NIST or the Department of Energy
Labs in artificial intelligence development?
    Answer. As of May 2025, to the best of its knowledge, CoreWeave has
not formally collaborated with the Department of Energy Labs, the
National Science Foundation, or NIST. However, we do look forward to
opportunities to collaborate with the Department of Energy on its
efforts regarding potential data center development on Federal lands.
Question 4. How will cuts to NSF funding impact your workforce and
search for talent?
Answer. CoreWeave supports NSF's important role in workforce
development by training U.S. workers in emerging technologies. For
example, NSF's Experiential Learning in Emerging and Novel Technologies
program aims to provide experiential learning opportunities for
individuals interested in career pathways in key technologies such as
artificial intelligence, semiconductors, advanced manufacturing, and
more. These types of programs bolster U.S. competitiveness and ensure
that the U.S. has a pipeline of talent to support AI infrastructure and
AI development.
A skilled and trained workforce is vital for the stability and
expansion of AI data centers--which rely on specialized data center
technicians, network and electrical engineers, cybersecurity
professionals, and project managers. CoreWeave supports efforts to
train the domestic workforce comprised of the skilled workers required
to meet the growing AI demand and to accelerate AI innovation.
Question 5. What impact will cuts to Federal funding for science
and research at universities have on U.S. competitiveness in AI?
Answer. Science and research universities play an important role in
maintaining global leadership in AI. Researchers and students
collaborate with the technology industry on cutting-edge projects to
shape the future landscape of AI innovation. A unique American
advantage in the AI race is our ability to support our researchers and
students at universities, as well as to create effective partnerships
between academia, industry, and government.
CoreWeave is deeply committed to supporting AI research, science,
and education, and public-private collaborative partnerships.
For example, CoreWeave is proud to be a founding partner of the New
Jersey AI Hub, along with Microsoft, Princeton University, and the New
Jersey Economic Development Authority. The founding partners will
collectively invest over $72 million to support the long-term success
of the NJ AI Hub, which focuses on research and development efforts,
applications of AI in several industry sectors, and AI workforce
development and education.
Tariff Impacts
    High tariffs and overreaching export controls, especially those that
are not well coordinated with the private sector and U.S. allies, have
potential to disrupt supply chains and raise costs for U.S. companies.
That makes building AI infrastructure like data centers, chips, power
plants, and grid modernization more expensive.
Question 6. What are you most concerned with when it comes to your
supply chains?
Answer. CoreWeave invests billions of dollars in the equipment
which powers AI. Like any business which purchases long-lived capital
assets, CoreWeave relies on cost and policy certainty to plan its
business and make critical investments in the U.S. This enables us to
provide predictable pricing to our customers.
One of our top concerns is ensuring strategic investment stability
and predictable prices in the supply chain. We believe it's important
to continue to maintain robust supply chain relationships with like-
minded and reliable international strategic partners, including
chipmakers, original equipment manufacturers, and software providers,
to continue to scale AI data center operations in the U.S. and lead in
the global AI race.
Question 7. How will the higher costs from tariffs and potential
supply chain disruptions impact your plans for building AI
infrastructure?
Answer. Volatility in the global supply chain for critical
components, such as advanced semiconductors and networking equipment,
can raise costs or disrupt deployment timelines, adversely affecting
American companies' ability to rapidly scale the AI capabilities
necessary to meet the requirements of leading enterprise companies and
AI labs.
Acquiring the necessary high-performance components to power AI
workloads requires managing a complex global supply chain and
maintaining robust supply chain relationships. Continued engagement
with leading global suppliers and strategic partners is vital to
ensuring the continued operation, expansion, and rapid deployment of
U.S. AI infrastructure and to uphold U.S. competitiveness.
We are focused on a supply chain strategy that maintains robust,
resilient access to the critical components we need to continue to
develop AI infrastructure. We hope the ongoing discussions with trading
partners will provide the clarity and certainty American companies
require to make the large-scale capital investments necessary to build
the AI infrastructure at the scale and urgency this moment requires.
Question 8. Do you anticipate any delays in construction or other
work on AI infrastructure around the country? If so, where might these
impacts hit the hardest?
Answer. No, we do not anticipate delays in construction or machine
(e.g., GPU, storage, networking) work on AI infrastructure. CoreWeave
does encounter challenges that we factor into our planning and
construction processes. Acquiring utility power can involve long lead
times for switch gear and large transformers due to high demand and the
limited availability of critical components. Access to both data center
equipment and underlying materials, such as steel or concrete, requires
close collaboration with data center operators and supply chain
vendors.
Labor constraints can pose another challenge, given the high demand
for data center infrastructure and the limited pool of skilled
tradesmen and technicians. Even in markets experienced with data center
zoning and permitting, government administrative capacity to manage
these processes can be a limiting factor.
______
Response to Written Question Submitted by Hon. Amy Klobuchar to
Michael Intrator
Topic: Competition in Cloud Infrastructure
The Federal Trade Commission has raised concerns that startups face
significant barriers to entry into artificial intelligence (AI) markets
because large technology firms have more access to troves of data,
individuals with the necessary expertise, and computational resources.
Question. Can you describe the importance of ensuring that
innovative AI startups are able to thrive in markets alongside large,
established tech companies? As a relatively new entrant into the AI
infrastructure market, what factors are important to ensure a
competitive ecosystem?
Answer. CoreWeave's infrastructure is the only cloud platform that
was purpose-built for AI and Machine Learning (ML) workloads with
maximum performance and efficiency. CoreWeave's success to date
demonstrates that cutting-edge innovation, rapid execution, and
unparalleled performance have enabled the company to compete with
incumbent platforms.
Our recent growth shows that the market can reward companies that
differentiate based on performance, delivering specialized solutions to
customers' specific needs. Through our proprietary software
capabilities, we enable our customers to achieve substantially higher
total system performance and more favorable uptime relative to other AI
offerings within existing infrastructure cloud environments and unlock
speed at scale.
Policies that ensure a level playing field for all industry
stakeholders help ensure that technical merit and innovation determine
success.
______
Response to Written Questions Submitted by Hon. Brian Schatz to
Michael Intrator
Energy Consumption and Cost of Winning
Question 1. Do you support President Trump's efforts to expand coal
power for AI data centers?
Answer. At CoreWeave, we believe that there are abundant solutions
for powering AI data centers. Our company is committed to advancing AI
infrastructure powered by modern, efficient energy technologies. While
we understand the challenges of meeting the rapidly growing electricity
demands of AI data centers, we believe the future lies in a diversified
energy mix that prioritizes renewables, nuclear power, and fossil
energy sources using the most efficient, low-emitting technologies.
Question 2. How are you addressing the costs associated with new
infrastructure development in the short-term, including in terms of
water consumption, pollution, and climate impacts?
    Answer. CoreWeave prioritizes balancing growth with being efficient
and responsible with resources. Regarding water consumption, CoreWeave
often deploys closed-loop, air-cooled infrastructure and liquid cooling
technologies that drastically reduce or eliminate water use compared to
traditional data centers. Many of the data centers where we operate
provide non- and low-emitting energy sources for our operations.
Question 3. What plans do you have to source clean energy and to
publicly report your companies' emissions, if you are not already?
Answer. CoreWeave is currently in several data centers that use
clean energy, have on-site renewables and/or buy Renewable Energy
Certificates. As we expand into new data centers, we are actively
engaged in procuring clean energy to power our services.
To meet its compliance obligations under California law SB 253,
CoreWeave will report its Scope 1 and Scope 2 greenhouse gas emissions
in 2026.
Question 4. Do you believe current incentive structures drive or
ignore the opportunity for energy innovation?
Answer. Past and current incentive structures have helped to drive
down the cost of all energy sources during the 21st century. These
include research and development funding and tax policies.
    To maintain and strengthen the U.S. position in AI, continued
policies are required to ensure adequate supplies of clean, reliable,
and affordable energy. CoreWeave supports policies which will
enable a balanced diverse portfolio of energy technologies including
advanced nuclear, carbon capture and sequestration, and continued
advances in renewable energy. The nation also needs to modernize the
grid and improve energy efficiency.
PRC Deployment of AI
Question 5. What do you believe is the greatest national security
threat posed by the People's Republic of China's deployment of AI
systems?
Answer. The national security concerns surrounding the People's
Republic of China's (PRC) deployment of AI systems are interconnected.
These include the integration of AI into their long-standing civil-
military fusion strategy, resulting in dual-use capabilities that
further their economic and defense goals. At the same time, the PRC
aims to expand its global AI influence by establishing its AI ecosystem
as a foundational element in numerous countries, increasing its
capacity to shape the rules governing AI development and deployment
worldwide.
Question 6. What are your recommendations for addressing these
threats?
Answer. To maintain its leadership in AI and effectively address
national security threats, the U.S. must scale its AI ecosystem and
foster innovation across all layers of the AI stack. Carefully
calibrating export controls and trade agreements to address security
concerns will limit the PRC's access to vital AI technologies and
diminish its global AI influence. Enabling the diffusion of the
American AI ecosystem to allies and like-minded nations that commit to
security and technology frameworks will also address this threat.
Strengthening American AI leadership will ensure that democratic values
and security interests shape the rules governing global AI development.
Accelerating Scientific Research
Question 7. What barriers do you see to making advanced cloud
computing capacity available to academic and national lab researchers?
Answer. We are well-positioned to serve academic and national lab
researchers because we offer products designed with those users in
mind. We may face challenges in finding available capacity due to
increasingly large demand from the private sector and AI labs.
Question 7a. How could Congress help to responsibly reduce those
barriers?
Answer. Congress could consider establishing opportunities and
resources to enable partnerships between the private sector and Federal
research centers. A dedicated resource or consortium that aggregates
computing demand across research institutions could create more
procurement power and reduce uncertainty that currently acts as a
constraint.
Relatedly, implementing streamlined, standardized procurement
vehicles for advanced cloud computing could reduce administrative
barriers and better align procurement with both research timelines
and industry business models.
Question 8. Should there be dedicated Federal channels or
procurement pathways to ensure U.S. Federal researchers can access AI
compute at the pace of scientific need?
Answer. Compute allocation is presently dictated through large
procurement contracts with private sector companies. We seek out and
prioritize these contracts because their size and length help us
de-risk our financial operations. This leads us to deprioritize
shorter-term and smaller-value contracts. The Federal government
could consider mechanisms to aggregate demand across academic and
national lab institutions to ensure they can access the compute
needed to maintain U.S. leadership in AI.
______
Response to Written Questions Submitted by Hon. Edward Markey to
Michael Intrator
Comprehensive Impacts of Data Center Construction
Question 1. When planning for data center construction, does your
company conduct a cradle-to-grave infrastructure study that includes
wildlife, community, and pollution impacts during and beyond the
operational lifespan of a data center? If yes, what have you learned
from those studies? If no, why not?
Answer. CoreWeave is developing a more formal site selection
process that will ensure we are tracking climate risk and resiliency,
as well as examining broader ecosystem risks. While we do not currently
conduct formal cradle-to-grave lifecycle analyses for every facility,
our approach integrates rigorous environmental due diligence,
stakeholder engagement, and adaptive best practices to minimize
ecological and community impacts.
We select sites with adaptive reuse as the focus, often
retrofitting existing structures rather than developing greenfield
sites, which reduces our impact on land use. In addition, we focus on
using our existing data centers as efficiently as possible so that we
can fit more servers into less space, reducing the overall square
footage needed.
Backup Energy Generation
Question 2. Does your company use backup diesel generators at any
facilities?
Answer. Our data center providers use backup diesel generators in
the data centers in which we operate. These backup generators are
assets that belong to our data center providers. Their service to us
includes redundancy to the utility power source for the data center.
Question 2a. If yes, please provide a list of each facility where
diesel generators are being used, along with the location, quantity,
and type of generators.
Answer. Our data center providers use backup generators, and these
assets belong to the providers.
Question 2b. If yes, did your company consider the use of battery
storage technology as an alternative to diesel generators? Please
explain your decision process.
Answer. We do this currently. Our data centers have uninterruptible
power supply infrastructure in place that would cover the immediate 4-6
minutes of a power outage to the data center.
CoreWeave is exploring ways to incorporate battery storage into our
back-up generation as part of maintaining our competitive edge with a
focus on network resiliency. Backup battery storage would allow us to
have greater resiliency for CoreWeave's operations and the grid
overall.
We see battery storage as a complementary technology that can
further strengthen our microgrid capabilities, manage renewable
intermittency, and provide fast-responding backup for mission-critical
workloads.
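The 4-6 minute bridge described above maps directly to an energy
requirement when sizing battery storage against a facility load. As a
rough, hypothetical sketch (the 20 MW load figure is an illustrative
assumption, not a CoreWeave number):

```python
# Back-of-envelope sizing for a UPS/battery bridge.
# All figures are hypothetical illustrations: a 20 MW facility
# carried for 6 minutes until backup generation comes online.

def bridge_energy_mwh(load_mw: float, bridge_minutes: float) -> float:
    """Energy (MWh) the battery/UPS must deliver to carry the load
    for the bridge interval before generators pick up."""
    return load_mw * (bridge_minutes / 60.0)

load_mw = 20.0     # hypothetical IT + cooling load
bridge_min = 6.0   # upper end of the 4-6 minute window cited above

energy = bridge_energy_mwh(load_mw, bridge_min)
print(f"{energy:.1f} MWh")  # 20 MW for 6 min -> 2.0 MWh
```

At this assumed load, a 6-minute bridge requires about 2 MWh of
deliverable energy, before derating for inverter efficiency and
depth-of-discharge limits.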
Energy Mix
Question 3. Does your company utilize any on-site or colocated
energy generation to power your data centers?
Answer. No.
Question 3a. If yes, please provide detail how much power comes
from on-site and colocated energy generation.
Question 3b. If yes, please list all on-site and colocated energy
sources (e.g., renewable, nuclear, hydropower, gas-powered turbines,
etc.) that are being utilized to power your data centers.
Question 4. How does your company ensure local ratepayers are not
responsible for paying the cost of new energy infrastructure, such as
transmission lines, needed to meet the data center's energy demand?
Answer. As we review new sites, we are assessing the cost structure
of our energy. For at least one of our large expansions coming later
this year, the rate CoreWeave pays will include additional surcharges
in order to fund rate protection for local ratepayers. We understand
that there is a concern that incumbent ratepayers will bear the cost of
increased data center power demand. CoreWeave is committed to paying
its fair share of the costs required to meet its power demand.
Energy Consumption in AI Model Training
Question 5. Data centers require vast amounts of water for cooling.
When water is at critically low levels, does your company continue to
pull water for building cooling? Does it have a contingency for
operating as to not put further stress on the water supply and
potentially take limited resources from households?
Answer. CoreWeave aims to reduce the amount of water it uses and
takes a different approach to cooling data centers. Whenever
possible, we set up air-cooled chillers, which are a closed-loop
system. Once the system is filled, only a limited amount of water is
required. Each data center is evaluated on the environmental and
climate cost-benefit of power usage versus water usage.
______
Response to Written Questions Submitted by Hon. Jacky Rosen to
Michael Intrator
Competition Across the AI Ecosystem
It's essential that as we build a strong AI industry in the U.S.,
we also focus on establishing a competitive one.
Question 1. What should Congress consider to promote competition in
the AI ecosystem and across the tech stack, such as interoperability
requirements or securing access to computing power for researchers?
Answer. Competitive dynamism is essential for continued American AI
leadership. Policies that ensure a level playing field for all industry
stakeholders will benefit American dynamism and innovation more
broadly. In this vein, it is important to streamline government
certification processes, ensure the efficient administration of
existing requirements like export license reviews, and ensure that
newer entrants can effectively bid for government contracts. The key is
ensuring that technical merit and innovation determine success.
In addition, open standards are an important factor for operating
our cloud at scale. CoreWeave currently participates in industry groups
that define open standards and hardware interoperability. Without these
industry groups defining hardware interoperability, CoreWeave would not
be able to deploy the heterogeneous mix of hardware that our customers
demand at the scale and speed that they need. Open standards can
enhance supply chain resiliency and accelerate innovation, while also
reducing barriers for different and new stakeholders to participate.
Model Security
Question 2. It's essential we ensure the AI models we use do not
become another cybersecurity vulnerability. Would voluntary
cybersecurity standards for large AI models or high-risk models and the
infrastructure they were trained on be helpful in establishing trust?
Answer. Yes, voluntary cybersecurity standards for AI
infrastructure can be helpful in establishing trust and a common
vocabulary for stakeholders, particularly through industry-led
initiatives.
______
Response to Written Question Submitted by Hon. Roger Wicker to
Brad Smith
Fiber Optic Cable Supply Chain
Background: Despite some domestic manufacturing expansion, the U.S.
remains heavily dependent on foreign imports of fiber optic cable from
Thailand, with a particular dependency on components from China. In
2024, China had 300 million km of excess fiber capacity, which has
depressed global prices.
Question. Mr. Smith, when we talk about AI infrastructure, we often
focus on compute needs and the gaps in sufficient energy to power AI
data centers. But there's also a critical need for infrastructure. This
committee often considers fiber infrastructure in terms of its
importance to broadband, but it is also essential for AI and our
ability to stay ahead in AI development. Can you describe the
importance of connecting AI data centers to one another and why this
matters?
Answer. AI workloads require significant computational power and
fiber networks play a crucial role in delivering high-speed, low-
latency connections for real-time data transfer in and out of
datacenters. Hyperscalers often have significant fiber optic transport
infrastructure, some of which they own and some of which they contract
with telecom service providers to use. For example, Microsoft has
constructed a fiber-optic AI wide-area network connecting its data
center footprint with a 400 terabyte per second fiber-optic backbone,
capacity that is ten times what was enabled for traditional data
centers. This private network is critical to the functioning of the
Microsoft cloud as it moves data between datacenters.
Connecting AI data centers to one another is vital for enhancing
performance, reliability, and scalability for AI operations.
Interconnected centers enable distributed processing and optimize
resource utilization while facilitating load balancing to ensure no
single datacenter is overwhelmed. Enabling redundancy and failover
mechanisms also enhances disaster recovery and ensures continuous
operation. Public networks play a crucial role too and must be
sufficiently robust to reliably carry data to and from the data center
network.
______
Response to Written Questions Submitted by Hon. Todd Young to
Brad Smith
Federal Investment in Science Research
America's global leadership in technology didn't happen by
accident--it's been built on decades of strong, sustained Federal
investment in basic research.
Question 1. Mr. Smith, can you speak to why that kind of
foundational support from the Federal government remains essential
today, and how it has helped position the United States as a global
engine of innovation, particularly in emerging fields like AI and
quantum?
Answer. For the last 80 years, the United States has led the world
with its scientific and technological prowess, resulting in
transformative products and capabilities. To outcompete nations like
China, we must significantly boost our R&D investments and ensure we
have skilled researchers and scientists focused on emerging
technologies. Experts predict China will continue to invest substantial
resources into next-generation technologies such as AI, advanced
manufacturing, clean energy, quantum computing, and semiconductors over
the next decade. The United States needs the same level of
intentionality today that we had in the early 20th century--where a
deliberate effort brought together industry, government and academia to
propel scientific advancement.
AI technology is at an inflection point. It is precisely at this
stage--when scientific breakthroughs are on the cusp of scaling--that
public investment matters most. Cuts to basic research, particularly in
AI, risk delaying or even derailing the U.S. trajectory in this modern
AI moment. Without predictable, long-term investment, the United States
will fall behind in both scientific leadership and the
commercialization of critical technologies.
AI Public Awareness and Education
Winning the diffusion race not only requires providing a pathway
for greater adoption of the technology and its applications into the
general stream of commerce but also bolstering our public awareness and
education of AI.
Question 2. Mr. Smith, can you speak to how the Secretary of
Commerce can have a whole of government approach to fostering greater
awareness for AI literacy and growing STEM opportunities to create the
next generation of our workforce?
Answer. AI, like all new technologies, will disrupt the economy and
displace some jobs. However, we believe AI will help lower the barriers
to entry for many professions, replace rote tasks, and create a
foundation for human creativity that builds on AI tools. AI will create
new economic opportunities, allowing entrepreneurs to start new
businesses and create new jobs. We are already seeing some of these
benefits both at Microsoft and across the economy. A recent LinkedIn
report highlighted research on how businesses using AI are seeing the
benefits in innovation and creativity, and even in expanding their
workforce.
Americans of all ages and backgrounds need AI skills to compete in
this new world of work. A key opportunity for most people will be to
develop an AI fluency that will enable them to use AI in their jobs,
much as they use laptops, smartphones, software applications, and the
Internet today.
The U.S. Department of Commerce includes key agencies like NIST,
NTIA, MBDA, and the Census Bureau, each playing a unique role in
economic development, data governance, and innovation. AI literacy can
be integrated across these departments by aligning with their
missions--for example, NIST can support standards-based AI education,
and NTIA can embed AI training in their digital programs. Recent
legislation and national strategies also call on the Department to lead
public awareness campaigns and fund AI learning initiatives through
libraries, nonprofits, and workforce programs.
AI and Quantum
We know AI and quantum computing are both strategic frontiers, but
what's becoming clearer is that the breakthroughs we'll need most may
come from how these technologies interact.
Question 3. Mr. Smith, from your vantage point, how should the
United States be thinking about the convergence of AI and quantum--not
just as two separate priorities, but as part of a unified strategy to
outpace China in foundational technologies?
Answer. Artificial Intelligence and quantum computing are two
strategic frontiers of technology--and their convergence is poised to
unlock unprecedented breakthroughs. In our view, both AI and quantum
computing are ``foundational technologies'' that will drive innovation
across industries, from healthcare to defense. We believe there is a
powerful synergy in combining AI with quantum computing. For example,
quantum computing can vastly accelerate AI. Certain computations that
choke classical computers can run exponentially faster on quantum
machines. This hints at quantum's potential to turbocharge AI tasks
like machine learning optimization, large-scale data analysis, and
complex simulations. In practical terms, a future quantum co-processor
might, for example, dramatically speed up a clustering algorithm
working on high-dimensional data or enable AI to analyze combinatorial
scenarios that are infeasible for classical computers. Conversely, AI
can help quantum progress. AI algorithms assist in designing better
quantum circuits and error-correction methods, essentially using AI to
overcome quantum engineering challenges. AI can also generate synthetic
data or heuristics to guide quantum algorithm development--essentially,
using AI to explore which problems a quantum computer should tackle and
how. By co-developing these technologies, we unlock new capabilities--
from more accurate drug discovery (e.g., AI identifying candidates,
quantum evaluating molecular interactions) to climate modeling and
materials science breakthroughs.
Harnessing this dual momentum is not just a scientific imperative,
it is a strategic necessity. Given China's coordinated, state-led
advancements in both artificial intelligence and quantum technologies,
we believe the United States must pursue bold, comprehensive investment
across all dimensions: research, policy, industry, and workforce. This
includes targeted support for their convergence, cross-disciplinary
collaboration, and enabling private-sector innovation.
______
Response to Written Questions Submitted by Hon. Maria Cantwell to
Brad Smith
AI Standards
The U.S. driving development of AI standards alongside the most
advanced democracies in the world offers us an opportunity to set the
``rules of the road'' for AI on the global stage.
Question 1. How do NIST standards help the United States'
competitiveness?
Answer. Industry-led voluntary standards are instrumental in
driving U.S. competitiveness by establishing globally recognized
benchmarks for quality, safety, and interoperability in AI systems.
These standards ensure that American AI technologies are seen as
reliable, trustworthy, and high performing, which is critical for
adoption both domestically and internationally. NIST's convening
function across government, industry, and civil society encourages
active participation by U.S. entities in the development of pre-
standardization materials--which will ultimately influence the global
AI landscape. NIST's priorities--as a reflection of the
Administration's priorities--also provide an important signal to
industry, encouraging strategic alignment on shared values and global
trade.
Question 2. What standards would you like to see NIST develop or
promote to improve U.S. competitiveness?
Answer. Today, there is a dearth of consensus reference points--
even across industry--that identify the most likely AI risks and how to
evaluate and mitigate them reliably. Microsoft is working with industry
peers to develop consensus best practices, but there are opportunities
to expand and accelerate these efforts. NIST's expertise in test,
evaluation, validation, and verification (TEVV) and measurement science
should be leveraged to provide structure and guidance to developers and
deployers on how to best develop and use evaluations to accelerate
adoption while minimizing risks. For example, NIST could develop
practices for evaluating evaluations to ensure they are scientifically
valid and could assemble panels of evaluations for use by the
government and industry to evaluate specific risks or capabilities.
AI Exports and Export Controls
The U.S. needs a strong national export strategy for technology and
other U.S. exports. Alliances create markets and engaging our allies is
essential to effective coordination and implementation of export
controls.
Question 3. What is the best way for the U.S. Government--
specifically, the U.S. Department of Commerce--to support U.S.
companies in exporting AI technologies? Where should we target to make
sure our foreign adversaries do not get there first?
Answer. Congress and the Administration both have key roles in
protecting our national security by preventing adversaries from
obtaining advanced technology components. Policies must be balanced so
that American companies can thrive and set the global standard for the
technology.
The proposed ``AI Diffusion'' rule sets forth a robust set of
security standards expected of companies building AI datacenters. We
believe these qualitative guardrails point in a sensible direction
for U.S. policy, especially as it relates to national security
concerns. However, we disagree with the ``tiering'' of countries as
well as certain quantitative restrictions--absent a universal
license--as we think arbitrary restrictions undermine international
confidence in access to critical American AI technology and therefore
restrict the ability of U.S. firms to compete globally.
Question 4. You mentioned that countries around the world will only
use American AI if they can trust it. How can the U.S. government
partner with the private sector to build that trust?
Answer. Business planning and investment decisions rely heavily on
predictable and trusted relationships on both sides. By the same token,
uncertainty complicates and slows decision making and execution. Clear
and consistent policies enable companies like Microsoft to make long-
term commitments to innovation, workforce development, and
infrastructure investment both abroad and domestically. The Federal
government has an opportunity to lead in their own procurement rules
and the adoption and utilization of AI products and services.
Privacy
Data is a foundational element in the tech stack for any AI system.
Advances in AI will spur an increase in demand for data, both to train
and ground AI models. This enhances the need for bright lines related
to consumer data collection and usage. The best way to set these bright
lines is through a strong, comprehensive Federal data privacy law.
Question 5. Do you agree that bright lines around consumer privacy
will spur innovation in artificial intelligence?
Answer. Yes, raising privacy protections will spur innovation and
benefit the U.S. economy by bolstering consumer confidence and ensuring
that data will be appropriately protected while still being available
for beneficial uses. Both components are critical to AI development.
Consumers remain deeply concerned about the collection and processing
of their personal information.\1\ They report confusion about where it
is going and cite concern over a loss of privacy from social media, AI,
and impact on children.\2\ They also report little faith in
government's ability to solve the problem, with 71 percent saying that
they do not expect social media companies to be reined in even as 77
percent say they would like more government action.\3\ Concern is
shared across the political spectrum with 68 percent of Republicans and
78 percent of Democrats saying there should be more regulation over
personal information.\4\ By some measures the problem is actually
getting worse.
---------------------------------------------------------------------------
\1\ How Americans View Data Privacy, Colleen McClain, Michelle
Faverio, Monica Anderson and Eugenie Park, Pew Research Center, October
18, 2023. See: https://www.pewresearch.org/internet/2023/10/18/how-
americans-view-data-privacy/
\2\ Id.
\3\ Id.
\4\ Id.
---------------------------------------------------------------------------
The share of adults reporting that they understand privacy laws
very little/not at all has risen from 63 percent in 2019 to 72 percent
in 2023.\5\ None of this should be a surprise. Consumers are deluged by
the ways that lost or misused personal information can harm them.
Whether it's through data breach notifications and fears over
increasingly sophisticated identity theft scams,\6\ misuse of personal
information by employers,\7\ or unexpected negative outcomes such as a
fitness app revealing the location of secret U.S. military bases,\8\
the use and misuse of personal information can spill into every facet
of a consumer's life.
---------------------------------------------------------------------------
\5\ Id.
\6\ Verizon, 2024 Data Breach Investigations Report.
\7\ Michelle Boorstein, Marisa Iati, and Annys Shin, ``Top U.S.
Catholic Church official resigns after cellphone data used to track him
on Grindr and to gay bars,'' Wash. Post (July 21, 2021), see: https://
www.washingtonpost.com/religion/2021/07/20/bishop-misconduct-resign-
burrill/ .
\8\ Jeremy Hsu, ``The Strava Heat Map and the End of Secrets,''
Wired (Jan. 29, 2018).
---------------------------------------------------------------------------
This lack of consumer confidence is directly impacting economic
growth. A recent report by the World Trade Organization and the OECD
found that if all nations adopted privacy safeguards, global exports
would increase by 3.6 percent and global GDP would grow by 1.77
percent.\9\ An absence of privacy protections would cause global GDP to
fall by almost 1 percent and global imports by over 2 percent with the
largest impact on high-income economies.\10\ A fragmented U.S. privacy
landscape--such as the state patchwork currently in place in the US--
also risks negatively impacting the U.S. economy. One industry estimate
puts the cost of a patchwork of state laws at over $1 trillion over 10
years, with $200 billion of that burden falling on small
businesses.\11\ Given these important benefits and looming potential
costs, passage of Federal comprehensive privacy legislation is critical
for spurring AI innovation.
---------------------------------------------------------------------------
\9\ OECD/WTO (2025), Economic Implications of Data Regulation:
Balancing Openness and Trust, OECD Publishing, Paris, https://doi.org/
10.1787/aa285504-en.
\10\ Id pg. 39.
\11\ ITIF, ``The Looming Cost of a Patchwork of State Privacy
Laws,'' (January 2022) see: https://itif.org/publications/2022/01/24/
50-state-patchwork-privacy-laws-could-cost-1-trillion-more-single-
federal/
---------------------------------------------------------------------------
Question 6. Will Microsoft work with me on Federal comprehensive
privacy legislation to set those bright lines?
Answer. Yes, Microsoft will work with you and any other Members of
Congress on Federal comprehensive privacy legislation. We've been
advocating for it since 2005 because we believe it is critical to
building consumer trust and driving economic growth in the United
States.\12\
---------------------------------------------------------------------------
\12\ ``Microsoft Advocates Comprehensive Federal Privacy
Legislation,'' Microsoft News, November 3, 2005, see: https://
news.microsoft.com/2005/11/03/microsoft-advocates-comprehensive-
federal-privacy-legislation/.
---------------------------------------------------------------------------
Energy Needs and R&D for Fusion Energy
The growing demand for electricity to power AI data centers is
staggering. By some estimates, global electricity demand from data
centers is projected to more than double by 2030, exceeding 945
terawatt-hours (TWh), straining electric grids and energy providers.
Question 7. What plan does your company have to meet energy needs
for AI, and what investments are you making into non-fossil fuel
sources of energy such as fusion?
Answer. As we consider any power sources, we look for reliable,
scalable and cost-effective options that can be developed in rapid
timelines aligned to our datacenter growth. To match our carbon-
emitting electricity use, Microsoft has entered into power purchase
agreements to add 34 GW of renewable energy capacity globally. In 2024,
we entered into a power purchase agreement to add 835 MW of carbon-
free, nuclear energy from Constellation's Crane Clean Energy Center
that is anticipated to be put into service by 2028, which will provide
additional capacity to the PJM interconnect.
In 2023, we announced an agreement with Helion Energy to procure
power from its first fusion power plant that is under development. We
are optimistic that fusion energy can play an important part of the
grid mix in the future.
Question 8. With respect to fusion energy, how can the government
partner with the private sector to scale fusion technology as it
continues to develop?
Answer. To achieve a fusion breakthrough and scale fusion energy,
government and private sector partnerships are essential to
reduce risk, accelerate innovation, and build the foundation for fusion
commercialization. The government can support cost-sharing programs and
adopt clear, adaptive regulatory frameworks specific to fusion to
accelerate deployment. Aligning public resources with private
innovation can help bring fusion energy to market faster and more
affordably.
AI Safety
We are seeing a proliferation of deepfakes and other AI content
that threatens the average person's ability to discern truth in media.
And that's just one area in the field of AI that presents complicated
safety questions. The U.S. AI Safety Institute plays a critical role in
ensuring that AI systems are developed responsibly and that the most
advanced models are fully tested. This is crucial for building trust
and promoting wider adoption.
Question 9. Do you support the work of the U.S. AI Safety
Institute?
Answer. Yes, we support the work of the U.S. Center for AI
Standards and Innovation, formerly known as the U.S. AI Safety
Institute. Their efforts to promote reliable development of artificial
intelligence, establish rigorous safety standards, foster cross-sector
collaboration, and address potential risks are crucial for responsible
innovation. We share their commitment to ensuring AI benefits society
through principles of reliability, transparency, and accountability.
AI Workforce
The U.S. needs a skilled workforce to build and maintain AI
infrastructure, including electricians, pipefitters, carpenters, and
engineers. Labor shortages are already slowing down data center
construction, yet there is no national strategy to train or retain this
talent.
Question 10. What do you think Congress and the administration
should be doing to support AI education, training, and workforce
development?
Answer. AI, like all new technologies, will disrupt the economy
and displace some jobs, which we know causes concern for many people
and the workforce at large. We believe AI will create new opportunities
that will outweigh many of the challenges ahead. AI will help lower the
barriers to entry for many professions, replace rote tasks, and create
a foundation for human creativity that builds on AI tools. AI will
create new economic opportunities, allowing entrepreneurs to start new
businesses and create new jobs. We are already seeing some of these
benefits both at Microsoft and across the economy. A recent LinkedIn
report highlighted research on how businesses using AI are seeing the
benefits in innovation and creativity, and even in expanding their
workforce. We encourage Congress to invest in Federal apprenticeships
focused on major AI infrastructure initiatives, establish Federal
programs for on-the-job training, and support the reauthorization of
the Workforce Innovation and Opportunity Act.
Question 11. What challenges do your companies and others face from
the administration's immigration policies, and what concerns do you
have about impacts on high-skilled immigration?
Answer. The United States faces a critical talent bottleneck. Our
universities educate some of the world's most talented engineers,
scientists, and entrepreneurs, but outdated immigration policies often
force them to leave the country shortly after graduation. Companies
like ours face long delays and constant uncertainty in the visa and
green card process, which makes it difficult to hire and retain the
talent we need. In high-demand fields like artificial intelligence,
advanced engineering, and quantum computing, the need for specialized
expertise far exceeds the available domestic supply.
This isn't about replacing American workers. It's about enabling
innovation by complementing the U.S. workforce with global talent. To
build the technologies of the future, we need to work with the best
minds from around the world. American companies serve customers across
the globe, and having smart people from around the world work on our
teams helps us create products that truly meet the needs of a global
audience.
To remain globally competitive, the United States needs an
immigration system that attracts and retains top talent. That means
creating fast, reliable pathways for highly skilled individuals--
especially graduates of U.S. universities--and addressing the green
card backlog for workers from countries facing extreme wait times. If
we want to lead the world in AI and other emerging technologies, we
must continue to be a magnet for the world's best and brightest--and
ensure they have the opportunity to build a future in the United
States.
______
Response to Written Questions Submitted by Hon. Brian Schatz to
Brad Smith
Future of Work
Question 1. What is your vision of the future of work and what are
the valuable jobs of the future, in the near-term and long-term?
Answer. We believe that AI is going to impact the future of work
tremendously--largely for the better. Additionally, this is an
important moment for the government to be engaged in these
conversations. At the beginning of the year, I published a blog post
connected to this question. LinkedIn also recently published a report,
AI and the Global Economy, examining how AI is already impacting the
workforce and the economy at large.
Question 2. How is your company taking advantage of the automation
you're empowering to scale productivity without leaving workers behind?
Answer. We design our technologies to augment human capability--
empowering people to achieve greater impact with each hour they spend
at work. That means investing in broad-based skilling initiatives,
including apprenticeships and workforce partnerships, so every worker,
not just those in tech, can benefit from AI. We encourage employees and
partners to experiment with tools like Copilot to find the best use
cases for their work. And we've built feedback mechanisms to ensure
continuous improvement and keep worker experience at the center of our
innovation.
PRC Deployment of AI
Question 3. What do you believe is the greatest national security
threat posed by the People's Republic of China's deployment of AI
systems?
Answer. The People's Republic of China's deployment of AI systems
presents a range of national security risks. For example, the spread of
Chinese technology to third countries may undermine global
cybersecurity and information integrity. Through advanced AI-driven
surveillance, cyber-espionage, and disinformation campaigns, China
could exploit vulnerabilities in critical infrastructure, manipulate
public opinion, and erode trust in democratic institutions.
Question 4. What are your recommendations for addressing these
threats?
Answer. Microsoft advocates a comprehensive, multi-layered approach
to counter the national security risks posed by China. To address AI-
driven surveillance and cyber-espionage, Microsoft recommends deploying
zero-trust architectures, enhancing endpoint detection and response
capabilities, and investing in AI-powered threat intelligence to detect
and mitigate advanced persistent threats. Strengthening public-private
partnerships is essential to ensure real-time information sharing and
coordinated responses to cyber intrusions.
Microsoft also recognizes the risks posed by nation-state actors,
including China, employing cyber-enabled influence operations targeting
critical institutions within America. Microsoft harnessed the data
science and technical capabilities of our AI for Good Lab and Microsoft
Threat Analysis Center (MTAC) teams to assess these risks, including
whether these actors were utilizing AI in these operations. When
appropriate, the team calls on the expertise of Microsoft's Digital
Crimes Unit to invest in and operationalize the early detection of AI-
powered criminal activity and respond fittingly, through the filing of
affirmative civil actions to disrupt and deter that activity and
through threat intelligence programs and data sharing.
In addition, Microsoft is committed to advancing information
integrity and believes that including content credentials is an
important driver for this. We are a founding member of the Coalition
for Content Provenance and Authenticity (C2PA). To achieve
transparency, support information integrity, and empower our users, we
are leveraging C2PA's ``content credentials'' open standard across
several products. For example, content containing the ``Content
Integrity'' technology has been automatically labeled on LinkedIn,
with users beginning to see the ``Cr'' icon on images and videos that
contain C2PA metadata.
Beyond technological solutions that improve defenses, Microsoft
also stresses the importance of political solutions to these threats--
governments taking a more proactive and coordinated role in
attributing, exposing, and deterring Chinese malicious cyber activity.
This includes timely public attribution, diplomatic pressure, and legal
action where appropriate. Clear consequences for state-sponsored
cyberattacks are essential to shift the cost-benefit calculus and
reinforce international norms against digital aggression.
This is especially true for cyber intrusions which target critical
infrastructure in order to ``preposition'' for disruptive or
destructive attacks in a future contingency. Such operations put
civilians at significant risk and are meaningfully different from
traditional espionage. Prepositioning cyberattacks should be recognized
as a ``threat'' of force prohibited by international law that must be
deterred via sufficient consequences across domains.
Finally, ensuring that U.S. AI technology retains its global
leadership and that regulations and policies do not unnecessarily
hinder the global diffusion of U.S. AI is a key component of countering
China.
Accelerating Scientific Research
Question 5. Microsoft's AI for Health and AI for Earth programs
have supported hundreds of academic and nonprofit research projects.
How would you suggest the government can better structure
collaborations to accelerate scientific discovery using AI?
Answer. *See answer below.
Question 6. You've led efforts to aggregate and standardize
environmental and biomedical datasets. What further steps should
industry and government take together to ensure researchers have access
to well-structured, interoperable, and annotated datasets for AI-driven
discovery?
Answer. The government can play a pivotal role in increasing access
to datasets and accelerating scientific discovery by supporting
initiatives that make data more discoverable, accessible, and usable.
Launching a national open data campaign would empower institutions such
as the National Archives, the Library of Congress, the Smithsonian, and
other government agencies to digitize, organize, and share non-
classified and non-sensitive data for AI training. Additionally, the
creation and expansion of open data commons within non-profit,
academic, and cultural institutions could further democratize access to
valuable datasets. Addressing paywalls that restrict access to
scientific research and establishing dedicated funds to unlock closed-
access journals would also contribute significantly to the availability
of critical knowledge for innovation.
The government should also prioritize the development and
enforcement of clear guidelines for data sharing and annotation.
Stronger incentives and monitoring mechanisms are needed to ensure
timely and secure sharing of federally funded research data, in
alignment with funding agreements. Supporting the adoption of metadata
and provenance standards for datasets can enhance their utility and
reliability, particularly in the era of synthetic data generated by AI
models. By taking these steps, the government can foster a robust
ecosystem where data serves as the backbone for scientific advancement
and technological progress.
Energy Consumption and Cost of Winning
Question 7. Do you support President Trump's efforts to expand coal
power for AI data centers?
Answer. The availability of reliable, resilient, and cost-effective
electricity is essential for economic growth in the United States. The
recent Executive Orders from the President on coal generation highlight
the urgency of meeting energy sector demand growth, including the use of
coal as a bridge to future carbon-free generation as new energy options
are built out.
At Microsoft, when we consider any potential dedicated power source, we
look for generation options that align with our need for reliable,
cost-effective, and sustainable electricity. We also invest in the next
generation of energy supply technologies to support accelerated
innovation and cost-declines. In practice, this has meant that we use
and invest in a diverse mix of electricity generation technologies that
align with our needs for reliable and cost-effective electricity. These
technologies also must align with our commitment to be carbon negative
by 2030. Across all of these priorities--reliability, cost-
effectiveness, and sustainability--we do not see a business case for
expanded use of coal to meet the energy needs of our data centers.
Rather, we see increasingly strong arguments for integrating more
renewables, nuclear, net-zero, and net-negative electricity generation
technologies into the system in addition to a diverse set of energy
storage technologies. While many of these new technologies present
promising options in the future, we also recognize that there will
sometimes be a need to build other options in the near term as we grow
our Nation's energy infrastructure.
Question 8. How are you addressing the costs associated with new
infrastructure development in the short-term, including in terms of
water consumption, pollution, and climate impacts?
Answer. When improvements to the grid are required to serve our
load, we work with our local utility to ensure that we pay for the
improvements required to serve our site. We are thoughtful about the
resource intensity of AI from the moment that we decide to build a
datacenter. We're working to advance low-carbon materials and create
global markets to help advance sustainability across industries. When
operating our datacenters, we optimize energy and water efficiency,
including announcing a new datacenter design that consumes zero water
for cooling. We also are increasingly adopting a circular approach to
reach our target of zero waste by 2030. We target preventing waste
first, then focus on reusing, and recovering materials. This includes
reusing and recycling construction and demolition waste, diverting
operational waste and advancing circular cloud hardware and packaging.
______
Response to Written Questions Submitted by Hon. Edward Markey to
Brad Smith
Comprehensive Impacts of Data Center Construction
Question 1. When planning for data center construction, does your
company conduct a cradle-to-grave infrastructure study that includes
wildlife, community, and pollution impacts during and beyond the
operational lifespan of a data center? If yes, what have you learned
from those studies? If no, why not?
Answer. Datacenters are long-term investments, and planning is a
multi-year, capital-intensive program that requires alignment across a
range of factors to be successful. These factors include energy, water,
fiber, land suitability, environmental considerations including
wildlife impacts, and an available, trained workforce to ensure that we
deploy the data center on the timelines our customers expect.
We are thoughtful about resource intensity of AI from the moment
that we decide to build a datacenter. We're working to advance low-
carbon materials and create global markets to help advance
sustainability across industries. When operating our datacenters, we
optimize energy and water efficiency, including announcing a new
datacenter design that consumes zero water for cooling.
We also are increasingly adopting a circular approach to reach our
target of zero waste by 2030. We target preventing waste first, then
focus on reusing and recovering materials. This includes reusing and
recycling construction and demolition waste, diverting operational
waste, and advancing circular cloud hardware and packaging.
Backup Energy Generation
Question 2. Does your company use backup diesel generators at any
facilities?
Answer. Yes. Generators at datacenters, most often powered by
diesel fuel, play a key role in delivering reliable backup power so we
can meet the needs of the many customers that rely on our services,
including hospitals, first responders, and educational institutions.
Each of these generators runs for no more than a few hours a year at
our datacenter sites, most often for routine maintenance or for backup
power during a grid outage.
Question 2a. If yes, please provide a list of each facility where
diesel generators are being used, along with the location, quantity,
and type of generators.
Answer. We use generators at all of our datacenter sites. A
complete list of our datacenter sites can be found here. [https://
datacenters.microsoft.com/globe/explore/] We operate our backup
generators sparingly.
Question 2b. If yes, did your company consider the use of battery
storage technology as an alternative to diesel generators? Please
explain your decision process.
Answer. As we consider any back-up power sources, we look for
reliable, scalable, and cost-effective options--including batteries and
other technologies (e.g., alternative fuels)--that can be developed and
deployed on rapid timelines aligned to our datacenter growth and that
operate within the constraints of our datacenter facilities. The primary
requirements for such backup generation are that it can quickly ramp up
to meet the emergency event and can store sufficient energy to supply
the datacenter through an outage in case site access is limited.
Energy Mix
Question 3. Does your company utilize any on-site or colocated
energy generation to power your data centers?
Answer. No.
Question 3a. If yes, please provide detail how much power comes
from on-site and colocated energy generation.
Answer. N/A
Question 3b. If yes, please list all on-site and colocated energy
sources (e.g., renewable, nuclear, hydropower, gas-powered turbines,
etc.) that are being utilized to power your data centers.
Answer. N/A
Question 4. How does your company ensure local ratepayers are not
responsible for paying the cost of new energy infrastructure, such as
transmission lines, needed to meet the data center's energy demand?
Answer. When improvements to the grid are required to serve our
load, we work with our local utility to ensure that we pay for the
improvements required to serve our site.
Energy Consumption in AI Model Training
Question 5. In the past year, how many GWh of energy do you
estimate was used to train new AI models?
Answer. To date, most of our infrastructure sites have been serving
both AI and traditional cloud services for our customers and the
critical business and communications requirements they rely on us for.
AI has shared space and resources in those locations, so we are not yet
able to specifically separate out AI energy use for prior years. We
continue to drive efficiency into every part of this infrastructure as
we deploy this new technology at scale. We are working on further
analysis of energy use.
Question 6. Data centers require vast amounts of water for cooling.
When water is at critically low levels, does your company continue to
pull water for building cooling? Does it have a contingency for
operating as to not put further stress on the water supply and
potentially take limited resources from households, agriculture, or
small businesses?
Answer. For the datacenter, water requirements will vary depending
on the cooling technology used. Microsoft is moving to solutions that
utilize zero water for cooling in water-stressed areas. However, water
and sewer connections are typically required for basic safety and
administrative functions, like restrooms and breakrooms.
Government Partnerships
Question 7. Your company offers AI products specifically for the
public sector, which are now used across Federal agencies and state and
local governments. Given the especially heightened risks related to
governments' use of AI--including the denial of rights or access to
services and false or incorrect information about government benefits
and programs--what additional steps have you taken to ensure that these
tools are safe and effective to use in the context of government?
Answer. The Federal government's leadership in AI adoption is vital
for setting standards and keeping the U.S. at the forefront of
innovation. By integrating AI into its operations and using existing AI
applications, it can speed up public service delivery, drive widespread
adoption, improve services, and boost industry confidence. To do this
effectively, we've taken additional steps to ensure the tools used in
government are safe and effective. For example:
Microsoft has cloud instances designed specifically for the public
sector, meaning specialized versions of its cloud services designed to
meet unique security, compliance, and operational needs of government
agencies. For example, Microsoft Azure Government and Microsoft 365
Government are built to handle sensitive data and regulated workloads
and are certified to meet the standards of FedRAMP High. While the
underlying technology is often the same as in our commercial offerings,
these instances are configured and governed to align with public sector
requirements. Our AI for regulated customers is in most cases exactly
the same as what we offer our commercial customers. Our commercial
Azure cloud is also FedRAMP High authorized and hosts an array of
State, Local, and Federal agencies that utilize it as their primary
cloud offering today.
Question 8. What protocols do you have in place to work with
government agencies to rectify any harms or errors when they occur?
Answer. Microsoft offers a layered framework that combines
technical, procedural, and ethical safeguards. We recognize the
heightened responsibility that comes with deploying AI in the public
sector, especially where consequential decisions can impact access to
rights, services, and benefits. Our approach to Responsible AI is
grounded in principles such as accountability (including human
oversight and control), transparency, fairness, and reliability &
safety.
For example, we utilize the following techniques:
Human-in-the-Loop Oversight: We ensure that critical
decisions involving eligibility or access to services include
human review and override mechanisms.
Incident Response Framework: We work closely with agencies
to establish clear escalation paths and remediation protocols
for identifying, reporting, and correcting AI-related errors or
harms.
Bias and Fairness Audits: Our models undergo rigorous pre-
deployment and ongoing audits to detect and mitigate bias,
especially in sensitive use cases.
Transparent Documentation: We provide model cards, data
sheets, and decision traceability to support explainability and
accountability.
Microsoft also provides access to our responsible AI dashboard,
which enables agencies to monitor fairness, accuracy, and error rates
across demographic groups. We are committed to continuous collaboration
with government partners to ensure AI systems are safe, equitable, and
aligned with public values. We welcome the opportunity to co-develop
governance frameworks tailored to your mission needs.
Business Partnerships
Question 9. When you make your AI systems available to other users/
deployers, what are the types of issues you agree on?
Answer. Microsoft works closely with its customers, the deployers
of its AI systems, to ensure that AI technologies are used responsibly,
safely, and in line with legal requirements. When Microsoft makes an AI
system available to a customer, both parties agree on key issues of
responsible AI use, Microsoft provides extensive information and
guidance to the customer, and Microsoft takes proactive steps to
prevent misuse of the AI services. These are codified in Microsoft's
Terms of Use, Acceptable Use Policy, and the Microsoft AI Services Code
of Conduct, to which customers must adhere. In addition, some services,
which have a higher risk of misuse, are available only through a
Limited Access Program.
Question 10. What information do you provide to those other
parties/deployers?
Answer. Microsoft provides extensive information, tools, and
guidance to customers deploying its AI systems, so they can understand
how to use the technology responsibly and effectively. This includes
detailed documentation and transparency notes that explain how AI
technology works, its capabilities and limitations, and how to achieve
the best results. We also provide usage guidelines, best practices, and
responsible AI resources to guide deployers in safe implementation
(e.g., recommendations for human oversight, testing, and fairness
checks).
Additionally, we build in safety features like content filtering
and abuse detection models directly into our AI services and offer
ongoing support, including technical support channels and updates, to
help customers deploy AI correctly. This helps ensure deployers
understand the system and have the tools to uphold the agreed
principles.
Question 11. What steps does your company take to ensure that, for
instance, those downstream users don't use your services in ways that
could harm people or that violate your terms of use?
Answer. We take active steps to reduce the risk of downstream
misuse of our AI services. Our services have built-in safety controls
at multiple levels (the model, the API service, and the application) to
automatically filter or block harmful content. We enforce our terms of
use, including by pursuing legal action against actors who try to
bypass safety measures, such as cybercriminals who intentionally
develop tools specifically designed to circumvent the safety guardrails
to create offensive and harmful content. Additionally, over a year
ago, we implemented security policies providing that, if we observe a
nation-state actor, cybercriminal, or other malicious actor using our
AI tools or services, we will disrupt and disable them immediately,
notify any service providers they may be using, and share our learnings
with the public and stakeholders to improve the AI ecosystem. (See
``Staying ahead of threat actors in the age of AI,'' Microsoft Security
Blog.) Furthermore,
we continually strengthen our guardrails by learning from new threats
and updating our safety systems, and we work with partners and provide
channels for users to report abuses. These measures collectively help
ensure that even after deployment, our AI services are used in ways
that do not harm people or violate the agreed-upon terms.
AI Hallucinations
Recent reporting suggests that generative AI hallucinations are
getting worse as the technologies become more powerful. Hallucinations
can lead to great harm in certain scenarios, such as when assessing job
applications, or even more dangerously in the national security
context.
Question 12. Do you agree that we need guardrails to ensure that AI
tools are not used or misused in ways that could cause harm to people?
Answer. Yes. We agree that thoughtful, risk-based guardrails are
necessary for advancing safe, trustworthy, and responsible AI.
Question 13. What steps is your company taking to address this
issue?
Answer. Over several years, Microsoft has developed a structured
approach to responsibly releasing generative AI applications, guided by
a ``map, measure, and manage'' framework. At each stage of this
process, we've embedded best practices, guidelines, and tools informed
by real-world experience. A key focus has been addressing the risk of
hallucinations, also known as ungroundedness, where AI models generate
plausible but unsupported content. As part of this comprehensive
approach, product teams are equipped with centralized tools to evaluate
the likelihood of ungrounded outputs and are supported with design
patterns and mitigation strategies tailored to their specific
applications.
A key example of how these risk mitigation practices work is the
2023 release of Copilot Studio, which harnesses generative AI to enable
customers without programming or AI skills to build their own copilots.
One of the key risks for this product is groundedness, and, as with all
generative applications, the Copilot Studio engineering team mapped,
measured, and managed risks according to our governance framework prior
to deployment. By improving groundedness mitigations through metaprompt
adjustments, the Copilot Studio team significantly enhanced in-domain
query responses, increasing the in-domain pass rate from 88.6 percent
to 95.7 percent. This means that when a user submits a question that is
in-domain--or topically appropriate--copilots built with Copilot Studio
are able to respond more accurately. This change also resulted in a
notable 6 percent increase in answer rate within just one week of
implementation. In other words, the improved groundedness filtering
also reduced the number of queries that copilots declined to respond
to, improving the overall user experience.
The team also introduced citations for outputs, so copilot users
have more information about the source of information included in AI-
generated outputs. By amending the safety system message and utilizing
content filters, the Copilot Studio team improved citation accuracy
from 85 percent to 90 percent. Following the map, measure, and manage
framework and supported by robust governance processes, the Copilot
Studio team launched an experience where customers can build safer and
more trustworthy copilots.
Just as we measure and manage AI risks across the platform and
application layers of our generative products, we empower our customers
to do the same. We offer features to our customers that detect
ungrounded statements within generative AI outputs in applications
using grounded documents, such as Q&A Copilots and document
summarization applications. Groundedness detection finds ungrounded
statements in AI-generated outputs and allows the customer to implement
mitigations such as triggering rewrites of ungrounded statements.
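The groundedness detection described above can be illustrated with a
minimal sketch. The following toy example is not Microsoft's production
service; the sentence splitting, word-overlap heuristic, and threshold
are assumptions chosen purely for illustration. It flags generated
sentences that share little vocabulary with the grounding document, the
same kind of signal a customer could use to trigger a rewrite:

```python
def ungrounded_sentences(answer, source, threshold=0.5):
    """Toy groundedness check: flag answer sentences whose word overlap
    with the grounding source falls below a threshold. A production
    system would use semantic models, not lexical overlap."""
    source_words = set(source.lower().split())
    flagged = []
    for sentence in answer.split(". "):
        words = set(sentence.lower().split())
        if not words:
            continue
        # Fraction of the sentence's words that appear in the source.
        overlap = len(words & source_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged
```

In practice, each flagged sentence would be routed to a mitigation such
as a rewrite prompt or a refusal, rather than shown to the user as-is.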
______
Response to Written Questions Submitted by Hon. Tammy Baldwin to
Brad Smith
Question 1. I serve as Ranking Member of the LHHS subcommittee of
the Appropriations Committee where we are tasked with ensuring we are
investing in the education of the next generation of American workers.
Part of Microsoft's investment in the Mount Pleasant data center
includes partnering with Gateway Technical College to build a Data
Center Academy. Can you share why Microsoft has decided to invest in
local STEM education?
Answer. Microsoft is committed to local workforce development. This
initiative is designed to train over 1,000 students in 5 years,
equipping them with the skills needed for careers in IT and data center
operations. By aligning infrastructure investment with education,
Microsoft aims to ensure that local communities benefit directly from
the economic opportunities created. It also supports the company's
broader goal of building a diverse, future-ready workforce in regions
poised for tech growth.
Question 2. At the end of 2023, Microsoft entered into a first of
its kind partnership with AFL-CIO to ensure workers are at the table in
the development and implementation of artificial intelligence. I
believe it is critically important that technology advances in a way
that enhances our workforce instead of eliminating jobs. Can you share
why Microsoft decided to pursue this partnership and what you have
gained from it?
Answer. Microsoft partnered with the AFL-CIO to ensure that workers
have a voice in how AI is developed and deployed. The partnership
focuses on expanding access to AI education, incorporating labor
feedback into product design, and shaping public policy that supports
inclusive economic opportunity. It reflects Microsoft's broader
commitment to responsible AI and to building a future of work that
benefits everyone. Microsoft has learned the value of embedding worker
voice directly into the deployment of AI technologies, and the
partnership has reinforced the need for inclusive AI skilling
strategies.
______
Response to Written Questions Submitted by Hon. Jacky Rosen to
Brad Smith
Adversarial AI
The reason cybersecurity experts were able to identify gaps in
security in DeepSeek's platform is that it is an open-source model.
Question 1. How can Congress protect Americans from future models
developed by entities affiliated with foreign adversaries that may put
users' data at risk, whether they're open source or not?
Answer. *See the answer below
Question 2. Are there ways users--whether Federal or commercial--
could safely use AI models developed by companies in adversarial
nations?
Answer. There is a clear demand for this technology. As more and
more businesses and individuals look to innovate and explore new
markets, they'll want to consider a wide range of AI model types that
best fit their needs. Access to the latest AI technologies enables
innovation, but these technologies should be used in a trusted and safe
environment.
When handling sensitive data or used in high-risk use cases, additional
measures, such as red teaming, automated assessments, and in-depth
security reviews, can be used to discover and mitigate potential risks
in models and the systems they're used in. Consistent with shared
responsibility practices, we encourage customers to carefully review
model and system documentation and transparency notes and adopt
platform safeguards, such as content safety filters, and to conduct
their own security and safety evaluations tailored for their specific
use case.
Model Security
Question 3. It's essential we ensure the AI models we use do not
become another cybersecurity vulnerability. Would voluntary
cybersecurity standards for large AI models or high-risk models and the
infrastructure they were trained on be helpful in establishing trust?
Answer. Voluntary risk-based cybersecurity standards for AI models--
particularly cutting-edge models and the infrastructure they're trained
on--would be a valuable step toward building trust.
Previous waves of technology have highlighted that to drive
innovation and adoption of new technologies, users must both trust in
how technology itself performs and trust that the technology can be
used successfully, safely and securely. Cybersecurity frameworks and
standards provide a foundation for that trust. For the past decade,
risk-based cybersecurity standards developed by organizations such as
NIST and ISO/IEC have helped public and private sector stakeholders
manage cybersecurity risks effectively, regardless of their size or
maturity. These standards also support compliance and assurance schemes
that deliver both economic and security benefits.
In our public comments to the White House Office of Science and
Technology Policy (OSTP) in its Request for Information (RFI) for the
Development of an AI Action Plan, Microsoft encouraged the Federal
government to avoid developing duplicative policies by leveraging
existing risk-based and outcome-focused cybersecurity standards, such
as the NIST Cybersecurity Framework and the Secure Software Development
Framework (SSDF). These standards offer the flexibility needed to
streamline regulation, promote consistency, and foster innovation and
cross-border collaboration.
Unlike prescriptive cybersecurity requirements, risk-based
standards offer flexibility and agility to streamline regulations,
drive consistency, and incentivize innovation and cross-border
commerce. Developing new standards from scratch is a resource-intensive
process, and the Federal government can save substantial resources by
using and building on existing risk-based standards while still ensuring
robust cybersecurity measures are in place. In our comments, we also
encouraged the administration to invest in NIST, as it plays a critical
role in informing the global AI conversation, bringing U.S. perspectives
to international standardization bodies like ISO/IEC.
Alignment and interoperability between U.S. cybersecurity and
international standardization approaches are important for the growth of
cross-border commerce and trust in American AI.
AI Skills
Question 4. You mentioned the importance of digital skills in your
testimony. Can you discuss how it might hurt the U.S.'s ability to
compete with China if we don't leverage congressionally-mandated
Federal programs like those created under the Digital Equity Act, which
were explicitly designed to help Americans build digital skills, like
teaching seniors, small businesses, and veterans how to use AI?
Answer. Americans of all ages and backgrounds will need AI skills
to compete in this new world of work. A key opportunity for most people
will be to develop an AI fluency that will enable them to use AI in
their jobs, much like they use laptops, smartphones, software
applications, and the Internet today. China is home to some of the
world's most talented computer science researchers. They also lead the
world in graduating STEM students. Therefore, it is critical that the
United States, including government, the private sector, and
non-profits, invest in AI and digital skilling. For example, we
launched the AI for Community
Colleges program, in collaboration with the American Association of
Community Colleges, designed to empower both students and educators by
providing valuable resources and support. The program offers AI
training for faculty and staff tailored to all skill levels, ensuring a
comprehensive understanding of AI concepts. It will deliver an
AI-focused curriculum that equips students with in-demand skills to
meet regional
workforce needs, as well as AI technology to enhance various
departmental functions at community colleges. By helping educators
integrate AI skilling directly into their classrooms, the program
ensures students are well-prepared to enter the workforce as AI-ready
professionals, addressing the growing demand for AI expertise across
industries.
AI For Spectrum
Artificial Intelligence has the potential to dramatically improve
how we manage and utilize spectrum, particularly unlicensed bands like
those used for Wi-Fi.
Question 5. What are some of the most promising ways AI can be
applied to enhance dynamic spectrum access, reduce interference, and
optimize performance in congested environments--and what should
Congress do to support those efforts?
Answer. There are companies that are applying machine learning and
AI to create automated tools to better manage spectrum utilization in
various frequency bands through shared use. There are also universities
examining some of the more fundamental questions associated with
dynamic spectrum sharing. Presumably, some of this research is
exploring how AI tools can be applied. However, Microsoft is not
involved in these efforts.
Question 6. The Citizens Broadband Radio Service (CBRS) model has
demonstrated how dynamic sharing between Federal and non-federal users
can unlock valuable spectrum for innovation. How can AI enhance and
expand these types of spectrum sharing frameworks by enabling more
agile, real-time spectrum coordination, and what steps should the
Federal government take to accelerate the development of AI-powered
spectrum management tools?
Answer. Although Microsoft participated in multiple CBRS
proceedings dating back over a decade, the company has not been
engaged in these efforts in recent years.
Need for Broadband and Wi-Fi
Question 7. Wi-Fi is the foundation of connectivity in our homes,
schools, hospitals, and workplaces. As AI becomes more embedded in
applications across sectors--from diagnostics and patient monitoring to
smart factories and personalized education--how critical is it that we
continue to invest in broadband infrastructure across the U.S. and
robust, high-capacity Wi-Fi networks to realize AI's full economic and
social potential?
Answer. Microsoft agrees that Wi-Fi-enabled devices play an
essential role in today's communications networks and will continue to
do so in the future. The Federal Communications Commission's (FCC)
decisions to authorize unlicensed Low Power Indoor (LPI) and Very Low
Power (VLP) devices across the entire 5925-7125 Megahertz (6 GHz) band,
and higher-power Standard Power (SP) devices under control of an
Automated Frequency Coordination (AFC) system over portions of the 6
GHz band, allow for multiple large-bandwidth channels that serve as
broad on-ramps for individual and enterprise users to access broadband
services, a prerequisite for high-capacity, low-latency Wi-Fi networks.
FCC authorization of LPI, VLP, and SP (under control of an
AFC) devices has set the stage for significant private sector
investments. With respect to AI-enabled devices, several of
Microsoft's more recently released Copilot+ PC models incorporate
Wi-Fi 7 radios that feature 320 Megahertz ultra-wide channels. Along
with our customers, we are learning in real time how AI-embedded
systems and services can best leverage high-capacity, low-latency
Wi-Fi connections.
AI T&E Workforce
A key factor in ensuring the U.S. continues to lead the world in
the AI race is by ensuring the AI we develop is the best and therefore
the most trustworthy. Validating model outputs is an important step in
establishing trust. Right now, however, the U.S. has neither the
standards nor the trained workforce to evaluate AI models to establish
that we can trust model outputs.
Question 8. What should Congress consider to incentivize and grow
the AI test and evaluation workforce?
Answer. Congress should consider creating AI upskilling sector-
based collaboratives, where governments fund efforts that bring
together companies that use the same or similar AI technologies to help
train workers more efficiently and at scale, which would be especially
beneficial to subject matter experts who may not otherwise have the
infrastructure to establish AI training programs. Additionally, we
recommend expanding, streamlining, and promoting Section 127 of the
Internal Revenue Code, which allows employers to provide tax-free
educational assistance
to employees pursuing education while working. As part of any effort to
expand or streamline Section 127, legislation could also highlight or
further incentivize employers using this benefit for AI upskilling.
There may be an opportunity to further promote Section 127 benefits,
which are often underutilized, by highlighting the ways in which it can
address AI skilling needs.
Question 9. How can Congress support more interdisciplinary
approaches to testing and evaluating AI? For example, how do we ensure
a model being used in a healthcare setting has been evaluated both by
experts in the model technology, but also experts in the healthcare
setting in which it will be deployed?
Answer. Microsoft has consistently emphasized that cross-
disciplinary testing of AI--specifically, testing tailored to the
deployment setting--is essential to secure and trustworthy AI use.
During our own product development, we conduct stress testing, or red
teaming if necessary, at both the model and the application layer. Red
teaming the model helps to identify how a model can be misused, scope
its capabilities, and understand its limitations. These insights not
only guide the development of platform-level evaluations and
mitigations for use of the model in applications but can also be used
to inform future versions of the model. Application-level AI red
teaming takes a system view, of which the base model is one part. This
helps to identify failures beyond just the model, by including the
application specific mitigations and safety system. Red teaming
throughout AI product development, when appropriate, can surface
previously unknown risks, confirm whether potential risks materialize
in an application, and inform measurement and risk management. The
practice also helps clarify the scope of an AI application's
capabilities and limitations, identify potential for misuse, and
surface areas to investigate further.
Congress can assist these efforts by supporting and funding NIST,
which is currently considering scoping for workstreams to develop
methods and metrics for AI testing, evaluation, verification, and
validation.
______
Response to Written Question Submitted by Hon. John Fetterman to
Brad Smith
Mr. Smith, I'm a big supporter of renewable energy, and that
includes nuclear. Whenever we talk about the energy transition, as we
discussed at this hearing, my focus has been clear: making sure
ratepayers in Pennsylvania aren't hurt. The Washington Post has
reported that increasing electricity demand from data centers is
jacking up residential power bills by 20 percent.\13\ That's
unacceptable. These data centers don't even offer long lasting, stable
jobs: the new jobs are in the construction phase, but the higher
utility rates last forever. I've been tracking the plan to reopen Three
Mile Island to power Microsoft's data center energy needs. I appreciate
innovative plans, but Pennsylvanians come first.
---------------------------------------------------------------------------
\13\ https://www.washingtonpost.com/business/2024/11/01/ai-data-
centers-electricity-bills-google-amazon/
Question. Will you commit to me--as you did at the hearing--that
your power purchase agreement with Constellation will not raise
electricity rates for PA households?
Answer. The power purchase agreement that Microsoft entered into
with Constellation Energy will add 835 MW of electricity to the PJM
region. By entering into this power purchase agreement, Microsoft
guarantees a customer for the power produced by the nuclear unit at
the Crane Clean Energy Center and is fully responsible for the costs
of the energy and capacity from this facility.
______
Response to Written Questions Submitted by Hon. Lisa Blunt Rochester to
Brad Smith
Cybersecurity and AI
Mr. Smith, I know that many tech companies like yours see AI agents
as a big part of AI advancement.
But AI agents can also contain sensitive data about their users,
like a flight-booking agent holding payment information and location
data. This data needs to be secure from cyberattacks.
Also, there is potential for AI agents to be used by cybercriminals
to orchestrate attacks more quickly and at a far larger scale than
humans could.
Question 1. I know that Microsoft has a good amount of visibility
into global cybersecurity threats. How would you assess the current
state of AI cybersecurity, and what concrete steps should this
committee consider to strengthen the cyber-protection of AI agents?
Answer. Earlier this month at Microsoft Build, our annual developer
conference, we outlined our vision for a world where AI agents make
decisions and perform tasks across users, teams, or organizations. To
realize our vision, AI agents must be both capable and secure. Today's
threat landscape--from the unprecedented volume of ransomware attacks
by cybercriminals to the sustained cyberespionage and attacks from
state-sponsored adversaries--already demands a strong, coordinated
response. AI agents, especially those interacting with sensitive data,
introduce new risks that must be addressed proactively and through
trusted multistakeholder partnerships.
AI agents can operate autonomously, make real-time decisions,
interact with external tools and data, and even collaborate with other
agents. This has the potential to transform industries, from optimizing
energy grids to coordinating fleets of autonomous vehicles. More than
230,000 organizations--including 90 percent of the Fortune 500--have
already used Copilot Studio to build AI agents and automations.
These powerful capabilities also introduce new risks. AI agents can
be manipulated through prompts or data sources to perform harmful
actions--like profiling employees, crafting targeted phishing e-mails,
or leaking sensitive information. In multi-agent systems, a single
compromised agent can trigger cascading failures. For example, a hacked
warehouse robot could disrupt an entire supply chain. An attacker could
hide secret instructions in a document used to train, or accessed by, a
public-facing agent. When a secure AI agent later interacts with that
public-facing agent, the embedded instructions can trick it into
bypassing its safeguards and leaking sensitive data. These risks are
amplified because these systems often operate without direct human
oversight, raising important questions about accountability.
We would welcome the opportunity to engage with you further on this
topic, and hope to see the Committee's support for the following:
Emphasize robust cybersecurity foundations, such as secure-
by-design and secure-by-default practices and Zero Trust
Architecture for all AI systems--including agents.
Support voluntary, risk-based AI cybersecurity standards
that are adaptable, technically grounded, and internationally
aligned.
Remove barriers to cloud adoption, recognizing the security,
scalability and efficiency benefits cloud services provide for
AI agent deployment.
Encourage open, secure protocols like Agent2Agent (A2A) and
Model Context Protocol (MCP) to enable safe and interoperable
agent-to-agent and agent-to-tool collaboration.
Invest in public-private collaboration to share threat
intelligence, simulate adversarial scenarios, and evolve best
practices.
AI and Competition
Mr. Smith, AI is becoming integrated into our critical economic and
societal infrastructure, with McKinsey stating that long-term AI
opportunity could be about $4.4 trillion in added productivity growth
potential from corporate use cases.
But vendor lock in could be a real issue, where an AI vendor
dramatically falls behind the competition and leaves its client with a
vastly inferior product, which could threaten the key industries in
which the AI product operates.
Question 2. Do you have any plans or strategies regarding
mitigating lock-in for your AI products operating in critical sectors,
like the financial and medical sectors, to prevent potential lock-in
effects that might harm these critical sectors and the folks therein?
Answer. Today's AI ecosystem is open, evolving, and increasingly
decentralized. Microsoft is working to ensure it stays that way.
We are seeing a surge of innovation across sectors, with new
models, tools, and platforms emerging regularly. This is especially
true in financial services and public health, where AI's potential is
still being discovered and applied. From fraud detection to disease
surveillance, the applications are expanding rapidly. And so are the
choices available to customers.
At Microsoft, we are committed to supporting that diversity. Azure
hosts a wide range of AI models, including those from OpenAI, Meta,
Mistral, and open-source communities; we recently added xAI's Grok 3
and Grok 3 Mini into the Azure AI Foundry. Customers can also bring
their own models, fine-tune them, or use pre-trained ones. They can run
models in Azure, on-premises, or across multiple clouds. We support
that flexibility because we believe it's essential to trust and
innovation.
We also recognize that many of our customers operate in regulated
environments. That's why we invest in interoperability, portability,
and compliance--so they can move workloads as needed, with minimal
friction and constraint.