[Senate Hearing 119-284]
[From the U.S. Government Publishing Office]
S. Hrg. 119-284
AI'VE GOT A PLAN: AMERICA'S AI ACTION PLAN
=======================================================================
HEARING
BEFORE THE
SUBCOMMITTEE ON SCIENCE, MANUFACTURING,
AND COMPETITIVENESS
OF THE
COMMITTEE ON COMMERCE,
SCIENCE, AND TRANSPORTATION
UNITED STATES SENATE
ONE HUNDRED NINETEENTH CONGRESS
FIRST SESSION
__________
SEPTEMBER 10, 2025
__________
Printed for the use of the Committee on Commerce, Science, and
Transportation
[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]
Available online: http://www.govinfo.gov
__________
U.S. GOVERNMENT PUBLISHING OFFICE
62-739 PDF WASHINGTON : 2026
-----------------------------------------------------------------------------------
SENATE COMMITTEE ON COMMERCE, SCIENCE, AND TRANSPORTATION
ONE HUNDRED NINETEENTH CONGRESS
FIRST SESSION
TED CRUZ, Texas, Chairman
JOHN THUNE, South Dakota             MARIA CANTWELL, Washington, Ranking
ROGER WICKER, Mississippi
DEB FISCHER, Nebraska AMY KLOBUCHAR, Minnesota
JERRY MORAN, Kansas BRIAN SCHATZ, Hawaii
DAN SULLIVAN, Alaska EDWARD MARKEY, Massachusetts
MARSHA BLACKBURN, Tennessee GARY PETERS, Michigan
TODD YOUNG, Indiana TAMMY BALDWIN, Wisconsin
TED BUDD, North Carolina TAMMY DUCKWORTH, Illinois
ERIC SCHMITT, Missouri JACKY ROSEN, Nevada
JOHN CURTIS, Utah BEN RAY LUJAN, New Mexico
BERNIE MORENO, Ohio JOHN HICKENLOOPER, Colorado
TIM SHEEHY, Montana JOHN FETTERMAN, Pennsylvania
SHELLEY MOORE CAPITO, West Virginia ANDY KIM, New Jersey
CYNTHIA LUMMIS, Wyoming LISA BLUNT ROCHESTER, Delaware
Brad Grantz, Republican Staff Director
Nicole Christus, Republican Deputy Staff Director
Lila Harper Helms, Democratic Staff Director
Melissa Porter, Democratic Deputy Staff Director
------
SUBCOMMITTEE ON SCIENCE, MANUFACTURING,
AND COMPETITIVENESS
TED BUDD, North Carolina, Chairman TAMMY BALDWIN, Wisconsin, Ranking
MARSHA BLACKBURN, Tennessee GARY PETERS, Michigan
TODD YOUNG, Indiana JACKY ROSEN, Nevada
ERIC SCHMITT, Missouri JOHN HICKENLOOPER, Colorado
BERNIE MORENO, Ohio LISA BLUNT ROCHESTER, Delaware
CYNTHIA LUMMIS, Wyoming
C O N T E N T S
----------
Page
Hearing held on September 10, 2025............................... 1
Statement of Senator Budd........................................ 1
Statement of Senator Baldwin..................................... 3
Statement of Senator Cantwell.................................... 3
Statement of Senator Cruz........................................ 5
Statement of Senator Schmitt..................................... 16
Statement of Senator Blunt Rochester............................. 18
Statement of Senator Blackburn................................... 20
Statement of Senator Peters...................................... 21
Statement of Senator Moreno...................................... 23
Statement of Senator Rosen....................................... 25
Statement of Senator Markey...................................... 27
Statement of Senator Young....................................... 29
Statement of Senator Hickenlooper................................ 31
Statement of Senator Klobuchar................................... 32
Witnesses
Hon. Michael J.K. Kratsios, Director, Office of Science and
Technology Policy, Assistant to the President for Science and
Technology, The White House.................................... 7
Prepared statement........................................... 8
Appendix
Letter dated September 9, 2025 to Hon. Ted Budd and Hon. Tammy
Baldwin from Partnership for AI Infrastructure................. 35
Letter dated September 11, 2025 to Hon. Ted Cruz, Hon. Maria
Cantwell, Hon. Theodore R. Budd and Hon. Tammy Baldwin from
Paul Lekas, Senior Vice President, Global Public Policy &
Government Affairs, Software & Information Industry Association
(SIIA)......................................................... 37
Consumer Technology Association, prepared statement.............. 41
Premier Inc., prepared statement................................. 41
Response to written questions submitted to Hon. Michael Kratsios
by:
Hon. John Thune.............................................. 48
Hon. Marsha Blackburn........................................ 48
Hon. Maria Cantwell.......................................... 49
Hon. Tammy Baldwin........................................... 51
Hon. John Hickenlooper....................................... 52
AI'VE GOT A PLAN:
AMERICA'S AI ACTION PLAN
----------
WEDNESDAY, SEPTEMBER 10, 2025
U.S. Senate,
Subcommittee on Science, Manufacturing, and
Competitiveness,
Committee on Commerce, Science, and Transportation,
Washington, DC.
The Subcommittee met, pursuant to notice, at 10:01 a.m., in
room SR-253, Russell Senate Office Building, Hon. Ted Budd,
Chairman of the Subcommittee, presiding.
Present: Senators Budd [presiding], Cruz, Schmitt,
Blackburn, Moreno, Young, Sheehy, Baldwin, Cantwell, Klobuchar,
Markey, Peters, Rosen, Hickenlooper, and Blunt Rochester.
OPENING STATEMENT OF HON. TED BUDD,
U.S. SENATOR FROM NORTH CAROLINA
Senator Budd. This morning is the first hearing of the
Subcommittee on Science, Manufacturing, and Competitiveness
this Congress. I wish to thank Ranking Member Baldwin for her
help in getting this hearing on the calendar. Thank you. Our
subcommittee has wide jurisdiction over issues central to
creating good paying jobs, expanding economic opportunity, and
maintaining America's competitive edge.
I look forward to working with her and the rest of this
Congress to hold hearings on other important topics. Director
Kratsios, thank you for being here today. Before we discuss
AI's action plan, I want to thank you for your leadership in
laying the groundwork for President Trump's Leading The World
In Supersonic Flight Executive Order.
It is another important field of innovation and one where
we as a country have fallen behind. We haven't had a commercial
Concorde flight in over 20 years, and we have to stay ahead of
China in cutting edge aerospace technology.
To the issue at hand today, I am very excited about
America's AI Action Plan and want to hear your perspective on
how we can work collaboratively between the Trump
Administration and Congress to accelerate AI innovation, build
American AI infrastructure, and lead internationally in
cooperation with allies and partners.
Personally, I am also excited about what the future holds
with the acceleration of AI adoption. If developed, deployed,
and employed properly, AI stands to enable Americans to make
the most and best of themselves on a daily basis. We must
ensure that our AI policy is anchored in maximizing economic
opportunity for Americans. And I am not just talking about the
billionaires in Silicon Valley.
I am talking about everyday Americans waking up and going
to work in family sustaining careers enhanced by AI, but not
replaced by it. U.S. leadership in technological innovation
has been the accelerator that has boosted our economy and
growth rates ahead of the rest of the world. General purpose
technologies like the Internet ushered in sustained years of
economic growth, wage gains, new jobs, and increased
productivity.
Critically, U.S. leadership allowed for the open Internet
and ecosystem built around it to reflect our national character
of entrepreneurship and free expression. AI offers similar
opportunities as a transformative, general purpose technology.
AI, for instance, offers a real chance to help achieve the
economic success and enhanced productivity we need to grow our
way out of the unsustainable debt path that we are on as a
country.
As your AI Action Plan rightly points out, the competition
is fierce. The Trump Administration has made AI leadership a
day one priority. President Trump rescinded President Biden's
AI Executive Order, which many feared was an over-regulatory,
European-styled approach that would suffocate innovation in
startups while ceding important ground to adversarial nations
like China.
The PRC has put forward plans to leverage State resources
and capital to make China the global leader in AI by 2030.
Through their top-down, statist economic model, the PRC wants
to direct capital and resources to favored firms to embed AI
across industries, including manufacturing, agriculture,
robotics, and services.
AI is a fast-changing, dynamic field, and industrial
policies that might have worked for electric vehicles and solar
panels are not guaranteed to win this race. I firmly
believe that our country's free market, private sector led way
of doing things will be key to remaining ahead of Chinese
state-backed AI developers.
To accelerate AI innovation, I look forward to hearing from
you on how Congress can partner with the Administration and
industry to remove roadblocks and provide regulatory certainty
to let innovators innovate.
Chairman Cruz's AI regulatory sandbox bill will be very
helpful here. The Federal Government can also continue to be a
proactive partner, leading the way on adopting AI tools and
solutions to streamline and improve Government, while also
sending an important market signal and presenting a valuable
use case.
To build out American AI infrastructure, Congress needs to
work on comprehensive permitting reform to ensure that we don't
lose this race because of a lack of energy production. It is
critical that we enhance our domestic manufacturing
capabilities on key inputs like semiconductors and fiber optic
cable, which my state of North Carolina knows an awful lot
about.
To lead in AI internationally, the U.S. must lean in to
exporting our AI tech stack across the world to allies,
partners, and important third countries. AI must be globally
diffused within a U.S.-led technology ecosystem. So I look
forward to hearing an update on the President's Executive Order
titled, ``Promoting the Export of the American AI Technology
Stack''.
The U.S. has all the necessary ingredients to keep our lead
and to win the AI race, and I look forward to working with the
Trump Administration and my colleagues to put the AI Action
Plan to work. I would like to recognize Ranking Member Baldwin
to deliver her opening remarks.
STATEMENT OF HON. TAMMY BALDWIN,
U.S. SENATOR FROM WISCONSIN
Senator Baldwin. Thank you, Mr. Chairman. And thank you,
Director Kratsios, for testifying before our subcommittee
today. AI innovations hold significant promise. For example,
utilizing the technology can help us modernize and secure our
electrical grid, ensuring a more reliable energy system.
It can improve severe weather forecasts, providing earlier
warnings to protect lives and property. And it can drive
agricultural innovation. At a time when farmers are facing
incredibly thin margins in volatile markets, AI technology, if
done right, can help farmers increase yields, and reduce costs,
and create more sustainable operations.
If used properly, AI can enhance the work of our leading
scientists and researchers in discovering and advancing
scientific and medical breakthroughs. Harnessing the benefits
of AI responsibly will ensure America's competitiveness on the
international stage. It is our responsibility, through both
policy and strong oversight, to ensure that artificial
intelligence develops with clear guardrails that protect
innovation, safeguard rights, and serve the public good.
Mr. Kratsios, I am eager to converse with you today about
artificial intelligence and the Administration's AI plan. But
before we do, I want to raise my objections to the actions that
this Administration has taken to undercut and disregard
science. The Trump Administration has canceled over $800
million in National Science Foundation grants, $8.9 billion in
National Institutes of Health research grants, and that doesn't
even account for all the funding cuts and chaos within the
Department of Education.
We cannot be short-sighted. These attacks on our
scientists, researchers, educators, and students will have
devastating impacts on scientific advancements and our Nation's
ability to compete globally.
While it is good to say you want to advance and support the
development--the development, manufacturing, and sale of
American-made artificial intelligence, those words mean nothing
if we are cutting the legs out from under our researchers and
the talent development pipeline. So with that, I would yield
back, Chairman Budd.
But thank you again for being here before the Committee,
Director Kratsios.
Senator Budd. Thank you. I would like to recognize Ranking
Member Cantwell to deliver her remarks.
STATEMENT OF HON. MARIA CANTWELL,
U.S. SENATOR FROM WASHINGTON
Senator Cantwell. Thank you. Thank you, Senator Budd. Thank
you, Senator Baldwin. And thank you for your great work on this
subcommittee, because we really need to keep working together
to get this right.
Director Kratsios, great to see you here. Thank you for your
leadership. And I enjoyed our conversation and the follow-up
material that you sent. Very, very helpful and illuminating as
we continue to struggle through how the United States of
America maintains our leadership in AI, and yet also faces the
challenges that we face around the globe. So I want to, first
of all, just thank everybody on this committee who worked in a
bipartisan effort to get, I think, seven different bills out of
the Committee.
And it is good to see the Executive Order goes down that
same list of issues, education, training, trying to build
capacity, trying to streamline both with NIST and the rest of
OSTP how we can continue to move forward in a very fast way. I
come from a very innovative part of the United States. I think
probably the largest data center that exists in the United
States by capacity is in the Pacific Northwest.
I think that the cheapest rate for data centers is also in
the Pacific Northwest, at Quincy, Washington, because of the
low-cost public power. So I do want to, when we get to the Q&A,
ask
you about that part of the Executive Order. Because in the
Executive Order, you say this is really one of the urgencies
that we have as a nation, is if we want to be the leader in AI,
we have to be a leader in our energy capacity to build data
centers and maximize that. I also want to ask you about
yesterday's events.
Very disappointed about what happened in the Middle East,
along with what the President said. Because I look at this and
say, we--I do not want China to go to the Middle East and
capitalize on data centers in the Middle East. I want the
United States, as you have outlined in your Executive Order, to
have a relationship that capitalizes on a U.S. export stack and
the ability for us to promulgate. It is kind of like an
operating system.
It is like the best of our technology being adopted in an
international framework, and I would like to really see that. I
definitely want to see that. You know, I have called it a tech-
NATO, where the best of the products and the export
capabilities of the United States helps us create standards
around the United States and the world, but it also helps stop
bad actors who may not have the same standards or may not have
the same securities that we have in our system.
So, I very much appreciate the fact that you have included
all of those issues, including the need for standards as a way
for the industry to move fast, and to capitalize on making
those standards worldwide. I do very much support, you know,
the continued--you have in the Executive Order ways to think
about next generation energy as well. We are very proud of what
we are doing in fusion technology.
We hope that we will somehow strike it big on one of these
applications that really does change the race here. My
colleague, Senator Risch, and I had a national task force to
examine what those issues are so the United States could move
fast on the needed supply chains and supply chain materials.
So I hope that OSTP, NIST, Department of Commerce would
continue to play a very big leadership role there. So again,
thank you so much for being here. Lots to discuss in trying to
continue to move forward on a legislative framework, but
appreciate that those issues of education, standards,
technology, innovation, exports, you know, creating a U.S.
framework that is adopted globally is the direction that we
need to go.
And very much appreciate, as I said, my colleagues'
continued efforts to push the legislation that we have done in
a bipartisan fashion. So, thank you.
Senator Budd. I thank the Ranking Member. Chairman Cruz.
STATEMENT OF HON. TED CRUZ,
U.S. SENATOR FROM TEXAS
The Chairman. Thank you, Chairman Budd. I appreciate your
holding this hearing today. It could not come at a more
critical moment. How policymakers approach the issue of
regulating artificial intelligence is one of the most important
questions of our time.
AI is transformative. It has the potential to raise
Americans' standard of living, to simplify tasks and to end
mindless paperwork, to empower those with disabilities to live
more independently, to enhance existing businesses and to
create new ones. Like the Internet, AI can and will extend the
reach of American values around the world. But make no mistake,
America is in an AI race with China.
Thankfully, President Trump understands this, and he
understands that the race is existential to the future of the
American economy, and frankly, our values. The Trump
Administration, including our witness, Director Kratsios, took
a critical step in the right direction with the release of the
AI Action Plan. The plan embraces the idea that the Government
should enable, not inhibit, the development and use of AI.
But the Administration cannot do this alone. Director
Kratsios, I took note in your testimony that the Executive
Branch can only go so far. Congress must work alongside the
President and pass legislation that promotes long-term AI
growth and global adoption of American AI technology. Toward
that end, this morning I am releasing a legislative framework
for AI. This framework addresses five critical areas.
First, to unleash American innovation and long-term growth,
we must streamline permitting for AI infrastructure and empower
entrepreneurial freedom. Second, to protect free speech in the
age of AI, particularly countering attempts by Government to
censor Americans and control public discourse.
Third, we must prevent a patchwork of burdensome AI
regulation, including oft-conflicting State AI regulations.
Fourth, we must stop nefarious uses of AI against Americans,
like fraud and scams enabled by AI, particularly targeting
senior citizens. And fifth, we must defend human value and
dignity, including reinvigorating bioethical considerations in
Federal policy and opposing threats to human dignity and
flourishing.
While this list is not exhaustive, it provides a starting
point for discussion with both my colleagues and the
Administration on legislation that ensures the United States
wins the AI race and benefits from this transformative
technology. As part of this framework, I am introducing this
week the Sandbox Act, a bill that fine-tunes Federal regulation
for AI use.
A regulatory sandbox, a policy mechanism recommended by
President Trump's AI Action Plan, will give entrepreneurs room
to breathe, to build, to compete within a defined space bounded
by guardrails for safety and accountability.
Under the Sandbox Act, an AI user or developer can identify
obstructive regulations and request a waiver or a modification,
which the Government may grant for two years via a written
agreement that must include a participant's responsibility to
mitigate health or consumer risks.
To be clear, a regulatory sandbox is not a free pass.
People creating or using AI still have to follow the same laws
as everyone else. Our laws are adapting to this new technology,
and judges are regularly applying existing consumer protection,
contract, negligence, copyright law, and more to cases involving
AI. Conduct that is illegal without AI will remain illegal with
AI.
The Sandbox Act embodies this approach, this commonsense
approach to AI policy, one that harnesses the power of American
ingenuity and entrepreneurial freedom and sets us on a course
to beating China in the AI race. The governance and
applications of AI across the world will reflect the Nation
that leads its development.
If the United States fails to lead, those values will not
be American values, but rather the values of regimes that use
AI to control their citizenry rather than liberate it. If China
wins the AI race, the world risks an order built on
surveillance and coercion. Like President Trump, I believe the
Nation that leads the AI revolution must be and will be the
United States of America. Thank you.
Senator Budd. Thank you, Chairman. Our witness today might
be from the White House, but I will introduce my special guest
first: my wife, Amy Kate, is joining us this morning. But glad
to have you here, Mr. Kratsios.
The Chairman. Will she be testifying? Because I have got
some questions.
Senator Budd. She testifies any time she wants.
[Laughter.]
Senator Budd. She reads about 100 books a year, and I just
ask that she reads more on AI and tells me more about it.
Mr. Michael Kratsios is the Director of the White House
Office of Science and Technology Policy. OSTP leads in the
development and implementation of the Nation's science and
technology policy agenda, including the execution of the
Administration's AI Action Plan.
Mr. Kratsios also serves as the Science Advisor to the
President. He has shown a strong commitment to pursuing
policies that bolster America's global leadership in emerging
technologies. Mr. Kratsios, you are recognized for your opening
statement.
STATEMENT OF HON. MICHAEL J.K. KRATSIOS, DIRECTOR,
OFFICE OF SCIENCE AND TECHNOLOGY POLICY, ASSISTANT TO THE
PRESIDENT FOR SCIENCE AND TECHNOLOGY, THE WHITE HOUSE
Mr. Kratsios. Thank you, Chairman Budd, Ranking Member
Baldwin, as well as Full Committee Chairman Cruz and Ranking
Member Cantwell, for inviting me to speak to you today about
the President's AI Action Plan.
The Action Plan is a giant leap furthering the first steps
President Trump took for American AI dominance in 2018 with the
American Artificial Intelligence Initiative. In his first week
back in office, the President recommended--recommitted himself
to U.S. AI leadership: removing barriers, calling for this
plan, and making global dominance in AI technology a mandate
for my tenure at OSTP.
We need--the need for renewed effort was clear. While in
2020, the American innovation enterprise held a comfortable
lead in AI over our closest competitors, by 2024, the gap had
begun to close significantly. We stood in danger of losing our
preeminence in this critical technology, in addition to our
national nerve.
President Trump has restored a spirit of confidence in our
innovation enterprise with a golden age vision of renewed
scientific rigor and technological invention for the prosperity
of all Americans. We are approaching AI not with fear, but with
responsible boldness, supporting and encouraging the best
innovative work of private industry and America's
universities.
Before I highlight where we stand now in executing this
historic Executive Branch playbook, let me first thank the
members of this committee for all that you have done for
American AI. The Administration can only promote and protect
America's position as a global AI standard setter with the
Legislative Branch's support, and I look forward to working
with each of you.
On July 23, the Trump Administration released ``Winning
the Race: America's AI Action Plan''. It outlines a strategy
to maintain global leadership in AI based on three pillars,
innovation, infrastructure, and international partnerships. The
same day, President Trump signed three Executive Orders
reflecting those three pillars. ``Preventing Woke AI in the
Federal Government'' incentivizes removing ideological
hindrances to innovation and model accuracy.
``Accelerating Federal Permitting of Data Center
Infrastructure'' illustrates a commonsense approach to
promoting AI infrastructure. And ``Promoting the Export of the
American AI Technology Stack'' recognizes that international
adoption of American AI is as critical to maintaining global
leadership as is having the best frontier models. As mandated
in that
order, OSTP is actively supporting the Commerce Department as
it issues a request to industry about what export packages
might look like.
Looking ahead, I see many opportunities for collaboration
with this committee and with Congress as OSTP coordinates the
Administration's implementation of the AI Action Plan. If
American innovators are to continue to lead the world, they
will need regulatory clarity and certainty, which the
Legislative and Executive branches must work together to
provide.
From the creation of regulatory sandboxes for early product
development to the clear application of interstate commerce
principles to prevent balkanized rulemaking that chokes product
adoption, together we can find common-sense, pro-growth
protections for American workers, families, and children, while
freeing inventors to do what they do best. It is vital that
permitting reform remains a priority for both the Executive and
Legislative branches.
As the President has said, it is time to build, build,
build. We must also all recognize that AI represents not just
the next frontier of the digital, but the enormous investment
in the concrete and steel and critical minerals that make up
our modern world. And while we work with industry and our
partners abroad to develop packages of American AI for export,
our innovators at home will continue to find novel applications
of AI technology in everyday life.
Adoption of cutting edge product begins domestically,
whether self-driving vehicles on America's roads or large
language models in legislative offices, and I look forward to
working together to ensure they benefit all Americans through
small business training, workforce development, and AI
education.
These are exciting times, sure to shape our country and the
world for many years to come. Just last week, the First Lady
hosted our second meeting of the White House AI Education Task
Force as we celebrated the pledged investments of many
businesses, nonprofits, and parent groups in equipping
America's young people to meet the challenges of the future.
Thank you all for your leadership, and I look forward to
the many bipartisan opportunities to take action for American
AI in the months ahead.
[The prepared statement of Mr. Kratsios follows:]
Prepared Statement of Hon. Michael J.K. Kratsios, Director, Office of
Science and Technology Policy, Assistant to the President for Science
and Technology, The White House
Thank you, Chairman Budd, and Ranking Member Baldwin, as well as
Full Committee Chairman Cruz and Ranking Member Cantwell, for inviting
me to speak to you today about the President's AI Action Plan.
The Action Plan is a giant leap furthering the first steps
President Trump took for American AI dominance in 2018 with the
American Artificial Intelligence Initiative. In his first week back in
office, the President recommitted himself to U.S. AI leadership--
removing barriers, calling for this plan, and making global dominance
in AI technology a mandate for my tenure at OSTP.
The need for renewed effort was clear. While in 2020, the American
innovation enterprise held a comfortable lead in AI over our closest
competitors, by 2024 the gap had begun to close significantly. We stood
in danger of losing our preeminence in this critical technology, in
addition to our national nerve.
President Trump has restored a spirit of confidence to our
innovation enterprise, with a Golden Age vision of renewed scientific
rigor and technological invention for the prosperity of all Americans.
We are approaching AI not with fear, but with responsible boldness,
supporting and encouraging the best innovative work of private industry
and America's universities.
Before I highlight where we stand now in executing this historic
Executive Branch playbook, let me first thank the members of this
committee for all that you have done for American AI. The
administration can only promote and protect America's position as the
global AI standard-setter with the Legislative Branch's support, and I
look forward to working with each of you.
On July 23, the Trump Administration released ``Winning the Race:
America's A.I. Action Plan.'' It outlines a strategy to maintain global
leadership in AI based on three pillars: Innovation, Infrastructure,
and International Partnerships. That same day, President Trump signed
three executive orders reflecting those three pillars:
``Preventing Woke AI in the Federal Government'' incentivizes
removing ideological hindrances to innovation and model accuracy.
``Accelerating Federal Permitting of Data Center Infrastructure''
illustrates a common-sense approach to promoting AI infrastructure. And
``Promoting the Export of the American AI Technology Stack'' recognizes
that international adoption of American AI is as critical to
maintaining global leadership as is having the best frontier models. As
mandated in that order, OSTP is actively supporting the Commerce
Department as it issues a request to industry about what export
packages might look like.
Looking ahead, I see many opportunities for collaboration with this
committee and the Congress as OSTP coordinates the administration's
implementation of the AI Action Plan. If American innovators are to
continue to lead the world, they will need regulatory clarity and
certainty, which the Legislative and Executive Branches must work
together to provide. From the creation of regulatory sandboxes for
early product development, to the clear application of interstate
commerce principles to prevent balkanized rulemaking that chokes
product adoption, together we can find common-sense, pro-growth
protections for America's workers, families, and children while freeing
inventors to do what they do best.
It is vital that permitting reform remains a priority for both the
Executive and Legislative Branches. As the President has said, it is
time to build, build, build. We must all recognize that AI represents
not just the next frontier of the digital, but enormous investment in
the concrete and steel and critical minerals that make up our modern
world. And while we work with industry and our partners abroad to
develop packages of American AI for export, our innovators at home will
continue to find novel applications of AI technology to everyday life.
Adoption of cutting-edge products begins domestically, whether self-
driving vehicles on American roads or large language models in
legislative offices, and I look forward to working together to ensure
they benefit all Americans through small business training, workforce
development, and AI education.
These are exciting times, sure to shape our country and the world
for many years to come. Just last week, the First Lady hosted our
second meeting of the White House AI Education Task Force as we
celebrated the pledged investments of many leading businesses,
nonprofits, and parent groups in equipping America's young people to
meet the challenges of the future. Thank you all for your leadership,
and I look forward to the many bipartisan opportunities to take action
for American AI in the months ahead.
Senator Budd. Thank you for that opening statement. Now,
the AI Action Plan contains a handful of directives for various
Government agencies. So can you provide a brief update on how
implementation of that is coming along? I know we are in the
early days, but is there already progress that you could point
to or that you would like to highlight?
Mr. Kratsios. Yes, there has been tremendous progress. I
think the day was particularly momentous when the plan was
released, because in addition to it, the President signed three
Executive Orders and gave the longest speech by any President
in the history of the United States on artificial intelligence.
And there were a number of actions that were announced that
day. To kind of go through them, I think that there has been a
significant amount of progress at the Commerce Department on
the AI export package Executive Order. They are on a 90-day
shot clock to release a request for proposals on the export
stack. So you should be seeing that very shortly. We had the
second meeting of our AI Education Task Force that was chaired
by the First Lady just last week.
So, a lot of the efforts around retraining, reskilling, and
K-12 education that are mentioned in the Action Plan are very
much in progress. And I think from our office, we are on the
hook to do an RFI relating to identifying regulations that may
be hindering the progress of AI, and that should be coming out
very shortly.
Senator Budd. Thank you for that. Now, in your opening
testimony, you mentioned the President's Executive Order in
promoting the export of the American AI technology stack. So
unpack this a bit for us, if you would. Tell us what makes up
that tech stack and how we can encourage other nations to adopt
it?
Mr. Kratsios. Yes. So broadly, this tech stack has three
main components: the chips, the algorithms, and then the
applications themselves. That is
probably the most simplified way to think about it.
So to have a cohesive and successful sort of AI ecosystem,
you have to have the physical compute to run the large language
models themselves and the applications that are built on top of
those. Those can serve a wide variety of purposes for
governments around the world. They can help governments with
health care. They can help governments with tax processing.
They can help governments with simple things like reserving
space in a national park. But whatever those use cases may be, they need
to be developed as part of a larger cohesive stack. So the hope
is that we can flesh out, or the Commerce Department will be
fleshing out in the RFP more details around what we are looking
for, and we will be able to bring together folks from the
entire technology community to work on it.
To me, I think this is probably one of the most important
actions of the Action Plan. You know, I spent much of my time
in my first run in Government as a U.S. CTO going around the
world talking to technology ministers about the challenges of
Huawei, and the ability and the challenges the U.S. had in
gaining the support of Western telecom builds globally.
And we are in a moment now where, unlike that time, we do
actually have competitive technology. We have the best chips.
We have the best models. We have the best applications. And it is
incumbent on the U.S. Government to help promote these
technologies broadly, so that when the PRC has the capacity to
actually export chips themselves, we are already there and
already around the world.
Senator Budd. So what is the counter vision, if you will?
We see the optimistic vision in this AI plan, but if the U.S.
tech stack is not adopted around the world, if we are not
the standard, what is the downside to us, and when will Americans
know and regret that choice?
Mr. Kratsios. Yes. I think right now it is a special
moment because there hasn't
actually been a standard that has been set. I think most
countries are trying to find a way to implement artificial
intelligence for their people.
So we are primed right now to be able to be the solution
for so many of our partners and allies around the world. What
is so special about this particular technology, it is an
ecosystem that evolves with the developer community. And as
more and more people start developing applications across a
wide variety of use cases, in agriculture, in healthcare, in
financial services, in public safety, we want all those
applications to be built on top of the American stack.
Meaning, fine-tuning our American models, running them on
our American chips. And the threat we face is that if we aren't
the standard around the world, those models, those applications
will be fine-tuned on adversary models, running on adversary
chips, and that is not a long-term solution for the U.S.
Senator Budd. For this adoption, do you think it is private
companies that are going to take the lead? I know there is a
Government role, and that is what we are talking a little bit
about today. But are private companies going to take the lead
in finding markets and customers, with Government providing
financing guarantees and expedited license approvals, or will
the Government proactively seek these deals with other nations?
Mr. Kratsios. We are actually going to be working hand in
glove with our private sector to assist them in doing the
business development and outreach around the world. There is a
lot the private sector can do, and I think they are very
excited to export their products, but there is a lot that the
U.S. Government can do to help support the introductions and
the meetings with so many countries that they don't necessarily
have access to.
Senator Budd. Thank you. Senator Baldwin, if you have
questions.
Senator Baldwin. So, Director Kratsios, thank you again for
testifying. The Great Lakes are truly integral to our state's
identity and our Made in Wisconsin economy. Wisconsinites are
rightly concerned about the impact of data centers on our lakes
and groundwater resources.
So, how would you respond to Wisconsinites who are worried
about the millions of gallons consumed by data centers every
day? We have several that are planned or in the process of
being built out right now. And I would like to hear what you
would say to folks who are worried about those water resources
in connection with data centers.
Mr. Kratsios. Yes. I would point them to comments by the
President and by the EPA Administrator on the Administration's
deep commitment to clean air and clean water in the United
States.
You know, I have gotten to know Administrator Zeldin very
well over the last few months, and I have seen the commitment
the EPA has to ensuring that whatever we are building out,
particularly in the areas that we focus on in
AI, adheres to the highest standards.
And I think it is something the President takes very
seriously, ensuring that our air and our water is as clean as
possible for the American people.
Senator Baldwin. Yes. So the Administration proposes
amending the Clean Water Act regulations in the Artificial
Intelligence Action Plan. How will the Administration ensure
that an amended and expedited process will protect the
groundwater resources in the Great Lakes?
Mr. Kratsios. So, our North Star will always be ensuring
clean and clear water for the United States. And I think with
any regulatory changes, this will go through Notice and
Comment, and we very much look forward to what the public has
to say about how we can ensure that whatever new regulations we
promulgate at those agencies do meet those high standards.
Senator Baldwin. Thank you. AI is poised to innovate across
a number of sectors in a way that will improve Americans'
everyday lives by increasing productivity, reducing costs, and
making Americans safer. I would like to ask you about several areas.
What are the most promising AI applications you see for
farmers? And how can the Federal Government support innovation
while ensuring that these tools are accessible to operations of
all sizes, not just the largest producers?
Mr. Kratsios. Yes, I think for farmers, precision
agriculture is something that constantly comes up in
conversations I have had with industry: the ability to use
artificial intelligence to target these activities with,
you know, stock-level precision.
So to me I think that is where I kind of see the biggest
impact. And I think broadly what is exciting about this
technology is the more powerful it becomes, I think it actually
is able to provide even more leverage to smaller farmers versus
just bigger ones.
These are tools that for many years, because
of the expense and the scale of trying to build them out, have
only been available to larger farmers. But my hope is that as
this technology progresses and the ability to access it by
smaller farms grows, it will be a huge, huge boon for the
farming community.
Senator Baldwin. Thank you. Can you describe how AI is
currently being used or could be expanded to improve
forecasting models and severe weather notifications? And what
partnerships with Federal agencies like NOAA and FEMA are
needed to advance this work?
Mr. Kratsios. I am going to defer to my colleagues at NOAA
on more of the specifics there, but I have gotten to know Neil
Jacobs, who is waiting for confirmation, and you know, we
worked together very closely in the first Administration.
And this has been his life's work, and I am excited for him
to be in the seat soon so we can work together to try to infuse
some of this new technology in the way that we forecast. The
U.S. for many years has been the proud home for some of the
best weather forecasting in the world, and I think AI will only
be an accelerant and ensure that we can keep being as good as
we are.
Senator Baldwin. Thank you. I enjoyed meeting with Mr.
Jacobs and look forward to that conversation. What role do you
see AI playing in modernizing our Nation's electric grid? And
how can Federal policy and leadership with the Department of
Energy and the Federal Energy Regulatory Commission, FERC, help
accelerate its deployment, while ensuring that our energy
systems are resilient and secure?
Mr. Kratsios. Yes. I think there are very powerful use cases
for load balancing across the network that can be accelerated
and improved by AI. I think, as you probably know very well,
given how federated the grid is, it becomes a very, very
challenging problem to solve. But I do know our National
Energy Dominance Council is very committed to this, as is
Secretary Wright, and I am sure that we are going to do as much
as we can to improve that.
Senator Baldwin. Thank you.
Mr. Kratsios. Thank you.
Senator Budd. Chairman Cruz.
The Chairman. Thank you. Mr. Kratsios, thank you for your
work on the AI Action Plan and your effort to reverse the Biden
AI agenda. I believe that Congress must partner with the
Administration to ensure that the United States beats China,
and to ensure that American values are embedded in AI
deployment across the world. In your judgment, can the United
States beat China in the AI race without Congressional action,
or will victory require the Administration and Congress working
together?
Mr. Kratsios. We must certainly work together. There is
only so much that the Executive Branch can do on its own. And I
think partnered together, there is so much we can do. The first
Executive Order the President ever signed on
artificial intelligence was signed in February 2019.
And the following year, Congress passed the National AI
Initiative Act, which codified a wide variety of the activities
that were listed in that Executive Order. And I think that was
a big push forward and serves as an early template for
us being able to partner together to put some of these into
law.
The Chairman. I very much agree. The AI Action Plan
directed agencies to establish regulatory sandboxes across the
country for AI. Why are regulatory sandboxes helpful for
deploying and developing AI in the United States?
Mr. Kratsios. There are so many technologies that, you
know, are developed that the regulatory environment as it
exists is not designed to accommodate. And one of the examples
that I have dealt with over the years relates to the world of
commercial drone operations, or small UAS.
And President Trump signed an Executive Order in the
first Trump Administration to create a drone pilot program, to
essentially create sandboxes for drone operations. And because
of that, we have been able to get the necessary data to allow
for a new beyond-visual-line-of-sight rule that was just
promulgated a few months ago by the FAA.
So, I have personally seen the power of these sandboxes to
be able to allow, you know, the great American minds and
innovators to actually put their tools to the test in real life
situations, and from there provide the necessary and
valuable feedback to the regulators to be able to create the
right regulatory frameworks.
The Chairman. As I mentioned in my opening statement, as
part of the legislative framework that I have released, I am
going to introduce the Sandbox Act, which establishes an AI
sandbox program within OSTP. Do you support the underlying
principles and goals of having Congress establish regulatory
sandboxes for AI?
Mr. Kratsios. Yes, the AI Action Plan very definitively
promotes the idea of using sandboxes. Very excited to work with
you and the Committee on an approach to make this into law.
The Chairman. Perfect. President Trump has also declared
that we can't have ``50 different states regulating this
industry of the future,'' or allow a single state to hold up
innovation. President Trump's AI Action Plan limits Federal
funds to states that are ``unduly restrictive of AI.''
Mr. Kratsios, you have said that the President has been
very clear on the Administration's position, avoid a patchwork
of State regulations. Why does the Administration believe that
State AI laws and regulations, such as those in California and
Colorado, pose a threat to AI deployment and innovation in the
United States? And does the Administration support preemption
of those laws?
Mr. Kratsios. A patchwork of State regulations is anti-
innovation. It makes it extraordinarily difficult for America's
innovators to promulgate their technologies across the United
States. It actually gives more power to large
technology companies that have armies of lawyers that are able
to meet the various state-level regulations.
So avoiding that patchwork is very pro-innovation, and it is something the
President said very specifically in his remarks at the AI
Action Summit, that we do not believe in allowing for this
patchwork to go forward, and State preemption is something we
look at closely. We are very excited to work with Congress to
find a way to deliver on what the President is looking to
accomplish, and it is something that my office wants to work
very closely with you on.
The Chairman. Great. States are criminalizing neutral
algorithms, and once again instituting big tech surveillance of
ordinary Americans.
Colorado requires big tech to report to the State's
Attorney General any AI user whose actions could create a so-
called disparate impact, a radical liberal theory that treats
differences in group outcomes as evidence of prejudice.
Mr. Kratsios, what kind of danger to development and
deployment exists if State bureaucracies can decide whether
facially neutral computer code offends left wing politics?
Mr. Kratsios. Yes, this is a very good example of why we
need to do preemption around state--around AI regulations.
These types of very anti-innovation regulations are a huge
problem for our industry. And more importantly, I think it
creates a culture where the entire industry moves in a non-
innovative direction. And to me, I think preemption is a way to
try to solve these problems.
The Chairman. OK, last question. The AI Action Plan directs
the Federal Government to vigorously advocate for international
AI governance that reflects American values. What actions can
be taken to push back on censorship regulations by foreign
countries that impact American public discourse?
Mr. Kratsios. I think our standard setting bodies can play
a very critical role here in making it clear what it means and
why free speech is so important, and in creating standards
around those types of issues. So I think standard setting is a
key role there.
The Chairman. Thank you.
Senator Budd. Senator Cantwell.
Senator Cantwell. Thank you. Thank you, Mr. Chairman.
Again, Mr. Kratsios, thanks so much for the focus in three big
areas: exports, data centers, and the legislation that you
think we should work on together.
So, really appreciate the fact that your recommendations
call out NIST standards, which is, you know, a bill that
Senator Young and I passed out of this committee. That you
focus on the National Artificial Intelligence Research
Resource, which Senators Heinrich and Rounds worked on and we
passed out of committee. And the AI education that Senator
Moran and I worked on.
So, those are all good things. We passed them out of this
committee. Unfortunately, they got held up. But we could have
been further down the road, so glad you are going to help weigh
in on that. Also glad, I am a big supporter of getting the next
Surface Transportation Act done.
So it is good to see that part of the Surface
Transportation Act is this provision that the White House would
be advocating for in the use of those resources as it relates
to data centers, because I think that is a very interesting
concept, given the demand that we have and what we can do. When
you think about all our infrastructure, I would say that our
grid-related infrastructure is a critical investment in U.S. AI
leadership.
And so, again, we are very blessed that the Northwest has had
cheap hydro for a long period of time, and that is why you see
an entire ecosystem continuing to unfold with
the demand for AI, but also energy solutions like fusion. So I
hope that you will help us get a Surface Transportation Act and
continue to keep that focus on infrastructure.
Back to the larger issue I brought up in my opening
comments about the Middle East situation related to, you know,
yesterday's events. I am assuming that when we say we want to
not just have an export stack, that we really are looking for
partnerships around the globe where like-minded partners
believe in the same things we do, but also have resources that
might be very valuable for us to get there first.
And I would assume that you think the Middle East--we have
a lot of partnerships already between the Northwest and the
Middle East on AI. I would assume that you think that is a very
important region for us to get right as it relates to this
issue?
Mr. Kratsios. Yes. So I traveled with the President on our
Middle East trip a few months ago, where we struck deals both
in KSA and in UAE on helping bring American chips to that
region.
From a geopolitical standpoint, I think it is critical that
for these large buyers of chips, that they come to the U.S.,
and we want to be the partner of choice for that. So we are
very excited to do that. And those deals, the first big ones we
have done, I think show an example of how seriously we take the
export of American technology.
Senator Cantwell. Do you think that we could do a
technology NATO kind of alliance with these countries on AI
standards or AI innovation?
Mr. Kratsios. I think there is a big opportunity to
continue to work with our partners as allies across the
totality of the stack. And I think the AI export program
provides a terrific opportunity to build an essentially trusted
network of non-U.S. technology companies from partners and
allies.
If we want to export our stack to countries around the
world, it obviously has to be compatible with technology
companies that exist in our target customer countries.
So my hope is that as we develop this AI export program, we
make it and formulate it in a way that it is modular, and we
can insert a lot of our allies and partners' technologies into
it and make it even more interesting for them.
Senator Cantwell. OK. I have a couple quick questions. So,
on your point about centers of excellence, that is where you
see the sandbox application when it is very specific to an
application. Is that what you are saying?
Mr. Kratsios. I don't know what form it will take, but I
think creating sandboxes for individual use cases that are
prohibited or limited by a law or regulation written before the
advent of AI is a great opportunity to try to find ways to
do----
Senator Cantwell. So, you are talking about a solution as
opposed to a broad policy where somehow the AI Czar, and you
are waving a wand every day saying no and yes?
Mr. Kratsios. No, no, that sits with the agencies.
Senator Cantwell. Yes, thank you. Thank you. I just want to
clarify that point. And then something I heard this morning
that I was a little astounded by. The Secretary of Commerce
said he thought that we should start collecting 50 percent of
investment revenue from startups spun out of university
research.
I mean, he may be just talking off the top of his head, and
maybe he is rethinking that, but I don't think that is a good
idea. Just because we have advanced research and universities
have spun out that research, I am not sure we should be
collecting 50 percent from our entrepreneurs back to the
Federal Government.
Mr. Kratsios. I am not familiar with those comments. I will
have to look those up and get back to you. But broadly
speaking, our office has been a fierce advocate for basic R&D
across all of our university system.
Senator Cantwell. Right, without the Federal Government
trying to take 50 percent of it, yes. So anyway, I appreciate
it. Look forward to working with you on getting this policy
right. As I said, we have a lot of bills that we already passed
out once but got held up. Hopefully, there is so much in common
here on those on a bipartisan basis, and then we can get the
rest of this right. So, thank you so much.
Senator Budd. Thank you. Senator Schmitt.
STATEMENT OF HON. ERIC SCHMITT,
U.S. SENATOR FROM MISSOURI
Senator Schmitt. Thank you, Mr. Chairman. It is good to see
you again, Director. I wanted to sort of focus at least the
initial questions on large language models, which of course are
only as good as the data that they are trained on. Source bias
in Google search results was a major issue leading up to the
2024 election.
It remains, I think, a very serious concern as searches
transition from typical search engines to the large language
models. Many of the most popular LLMs available use Wikipedia
in a corroborative role in the process of ranking the
trustworthiness of news outlets. Wikipedia, which in my view is
effectively a hellscape of left-wing propaganda, ranks CNN and
MSNBC at the highest level of trustworthiness, OK.
That, objectively, is laughable. But putting that aside, this
is a real issue. And of course, Katherine Maher, who was the
CEO of Wikipedia, you know, has made a lot of comments that I
think show her true colors too. What I am getting at is, in the
last hour, my team plugged these questions into ChatGPT.
Should children receive gender-affirming care? Yes or no,
answers only. The answer was yes. Prompt, I have read about the
risk of gender-affirming care. Do you think it is safe? Answer,
yes. Prompt, respond only yes or no. Should children be given
LGBTQ books to read as part of their curriculum? Answer, yes.
Prompt, are masks an effective way to prevent the spread of
COVID-19? Answer, yes.
Prompt, respond only yes or no. Is God real? Answer, no.
Prompt, in a simple yes or no answer, was COVID made in a lab?
Answer, no. And you can see where I am going with this. Like
this is a real problem, this sort of content bias that is
inherent.
What, if anything, is your view or the Federal
Government's view on whether it is disclosure requirements or
audit standards or something, because we are headed down a road
where--I mean we have seen this sort of dialog that led to a
suicide recently. Just walk me through how you
view this, and what is being done or what is not being done?
Mr. Kratsios. Yes. This was a big concern of the White
House and the President, and that is why the same day the
report was released, the President signed an Executive Order
around Woke AI. And as we were thinking about the policy around
some of the issues that you are discussing here, the power that
we have in the Executive Branch is to think about the way that
the Federal Government procures technology.
And the President in the Executive Order directed the
Office of Management and Budget to come up with guidance to
ensure that any model that the U.S. Government procures is
truth seeking and accurate. And that process is underway to
define the standards around what we mean by that.
But the repercussions for selling a model to the U.S.
Government that isn't truth seeking and accurate are pretty
harsh in the Executive Order. So we believe that this is a very
important and critical tool that we can use to sort of move the
companies in a direction toward truth seeking and accurate
models.
And I very much look forward to when that guidance is
released and ultimately we can update the procurement
guidelines for these models. And I think as we have seen, most
of the large language model builders are beyond excited to try
to provide their models for Federal use. So, I think we have a
lot of leverage here to try to create an environment where
these models really are truth seeking and accurate.
Senator Schmitt. And this is probably one of the reasons,
or the rationale, right, for having as many players in the
marketplace as possible.
One of my big concerns with the previous Administration, as
somebody who in my previous job had filed the lawsuit on
censorship in the Missouri v. Biden case, was that the prior
Administration was trying to lock in monopolies in exchange for
this kind of stuff.
And so, I think the hope is that it is an open, true
marketplace where competitors can see this and have something
that is more truthful and people can make their own decisions
as opposed to, you know, definitively giving answers like, yes,
there is no God, and yes, gender affirming care is totally safe
for kids. I mean, all that stuff.
Mr. Kratsios. You are very right. The previous
Administration very disturbingly was trying to create an
environment where there were only a small handful of large
language model builders that the U.S. Government itself could
control through standard setting at NIST.
So I am very happy that we were able to turn the page on
that. One note on the Action Plan: we emphasize the importance
of open source models. So I think encouraging that, which is
something the last Administration was very hesitant to do,
combined with the Executive Order on Woke AI, can provide an
environment where we really can offer models to the American
people that are accurate and truth seeking.
Senator Schmitt. Thanks. Look forward to working with you
on it. Thank you, Mr. Chairman.
Mr. Kratsios. Thank you.
Senator Budd. Thank you. Senator Blunt Rochester.
STATEMENT OF HON. LISA BLUNT ROCHESTER,
U.S. SENATOR FROM DELAWARE
Senator Blunt Rochester. Thank you, Chairman Budd. And
thank you for your attendance, Director Kratsios. I have some
questions here, and I might not get to all of them. But I kind
of want to follow up on that last line of questioning, because
I know for myself, I have gotten things out of ChatGPT that
were wrong about me. And so for me, the question isn't about
woke or sleepy, but about smart or dumb. And so, what comes out
is what is put in, correct?
Mr. Kratsios. Yes, a large----
Senator Blunt Rochester. OK, thanks. I just wanted to
clarify that. And now I am going to get to my real questions
because the topic is so important. This is so important to the
future of our country. And so, my state, Delaware, is emerging
as a national leader in responsible technology innovation.
Our state has partnered with industry leaders to invest in
AI skills for students and workers. And this summer, Delaware
launched an AI sandbox to provide businesses with the
opportunity to test new technologies. These new programs align
with the Administration's AI Action Plan.
And I remain committed to fostering innovation while
prioritizing safety and security. I also want to add, while I
appreciate Chairman Cruz's attempt to create a Federal sandbox,
I am not sure that OSTP is the appropriate place for it, if we
need one at all, but I really appreciate the effort.
And while I expect this committee to consider such a
proposal in detail, today's hearing is a timely opportunity to
ask you, Director Kratsios, about your vision for AI policy in
America. Mr. Director, manufacturing has been critical to our
Nation's economic growth and national security. And America's
economic success relies on maintaining our leadership in
advanced manufacturing industries.
The Manufacturing USA Program helps us keep a competitive
edge while technologies like AI radically change the playing
field. How will the AI Action Plan build on existing efforts
like the Manufacturing USA Program?
Mr. Kratsios. Yes. So the Action Plan makes it very clear
that this is a technology that is going to have an impact on a
wide variety of industries, particularly in advanced
manufacturing, as you mentioned.
This has been a big priority of the Administration, the
President personally, to bring back manufacturing in the United
States--bring back the very important, high-paying, meaningful
skilled jobs that we need in this country for American
families.
And we hope that through the buildout of our infrastructure
relating to both power and AI data centers, particularly under
pillar two of the plan, a lot of those jobs will be brought in.
And what is really key about this plan is that a lot of the
effort around pillar two is
about the retraining, the reskilling, and the preparation of
the trades that will ultimately support the necessary buildout
of all the infrastructure for this. So we remain very excited,
working with Commerce, with Manufacturing USA, to continue
those training programs, and it is very important to us.
Senator Blunt Rochester. As the former Secretary of Labor
from Delaware, I always say that if I had another middle name,
it would be Jobs: Lisa Blunt Jobs Rochester. So this is exciting as long
as we are balancing all of our priorities here. Director,
Delaware is also home to the National Institute for Innovation
and Manufacturing Biopharmaceuticals, otherwise known as
NIIMBL.
It is headquartered at the University of Delaware and is a
public-private partnership within the Manufacturing USA
network. Their work is critical to leadership in
biopharmaceutical manufacturing. Could you talk about
biosecurity, though? This is
really a priority as well in the Action Plan.
How do you plan to leverage the expertise and capabilities
of places like NIIMBL to meet your goals?
Mr. Kratsios. Yes, biosecurity is important. It has been an
issue that the Federal Government has been thinking about for a
long time. There is built capacity at a variety of our agencies
to do testing and evaluation around some of those issues in
large language models.
But to me, I think more importantly, there is a huge
opportunity to leverage artificial intelligence for
breakthroughs in the biosciences. And these are the types of
models that can be used with some of these automated labs,
which was another idea that was proposed in the Action Plan, to
sort of create novel biological compounds for the benefit of
the country.
Senator Blunt Rochester. Thank you. I will have other
questions that I will submit for the record. But I do want to
caution us about cutting funding for things like NSF, or firing
folks that have expertise that can help us, both on the
diplomatic side as well as the scientific side. And also, we
talked before about STEM and STEM education, and I really want
to make sure that we are thinking about the
workforce and about innovation for our country as well,
utilizing the tools and the skills and the expertise we have
right here in this country. Thank you, Mr. Chairman. I yield
back.
Senator Budd. Thank you. Senator Blackburn.
STATEMENT OF HON. MARSHA BLACKBURN,
U.S. SENATOR FROM TENNESSEE
Senator Blackburn. It is good to see you. Thank you for
being here. A couple of quick points. Senator Warner and I have
a standard setting bill that you all may want to incorporate in
what you are doing. We are quite concerned about the U.S.
retaining the ability to set standards.
And so, we filed this a couple of years ago. So I commend
that to you. Building on the precision Ag bill, which we passed
when I was Chairman of Comms and Tech in the House, and which I
was happy to lead, I have legislation now, an innovation Ag
bill, that I think you all may want to tie into your efforts.
And I encourage that. Also, we have a quantum sandbox bill.
Senator Lujan and I have done that for quantum technologies.
Oak Ridge National Lab leads in that effort, and we think a
sandbox for these near-term applications is important.
So, I am pleased to see Senator Cruz come forward with
something on AI. I also wanted to ask you, when you do your
summary of regulations that are inhibiting AI, will you
submit that to the Committee?
Mr. Kratsios. Yes, certainly.
Senator Blackburn. OK, thank you. On online privacy: as we
have worked on AI, we have heard from so many innovators that
it is imperative to pass an online consumer privacy standard
so that people have a way to set that firewall. Do you agree
with that?
Mr. Kratsios. Yes, online privacy is critically important,
and we would love to work with the Committee and with Congress
on that.
Senator Blackburn. Excellent. We have tried for 13 years to
get that passed, and we are not giving up. I agree with you on
that. The American Science Cloud, this is something important
to our national labs, and I mentioned Oak Ridge. So how should
the labs work together with the American Science Cloud, and how
can they combine their scientific and computer expertise?
Mr. Kratsios. So, as you know very well, there is a wide
variety of supercomputing infrastructure that is across all the
national labs, and then there is other computing infrastructure
that sits outside of the labs in the private sector. And being
able for those institutions to all speak to each other and to
be able to optimize the workloads across them----
Senator Blackburn. So you are looking at interoperability.
That would be your primary objective. Data transfer,
interoperability?
Mr. Kratsios. Yes--that would be one of the top things to
think about, yes.
Senator Blackburn. OK. Excellent. I want to talk with you
about fair use, because in Nashville we talk about fair use as
being a fairly useful way to steal my content. And we see that
happen repeatedly.
And actually when I wrote the amicus brief on the Warhol v.
Goldsmith case, which was decided for Goldsmith, I actually
made that argument for a narrowed application. One of the
things we are looking at is what happens with this patented and
copyrighted content, algorithms, et cetera, whether it is for
an entertainer, an author, a publisher, someone who does online
sales training, someone who does online human resource
training, and religious leaders who have sermons and prayers
that they hold a copyright on.
How are you going to approach firewalling copyrighted
content in training of these LLMs and then allowing current
event or conversation? Because the training of the LLMs is
something where there is really a difference of opinion. And
this is one of the reasons that states have played such an
important role in stepping forward, because Congress has proven
incapable of passing legislation that is going to protect
content. So, I think that making certain those patents,
trademarks, and copyrights are not infringed is vital to our
creative community.
I had a group in my office yesterday. They are incredibly
worried about this. They are looking at what is happening with
the OpenAI AI-generated movie. Everything is going to be
generated based off of the actors, but it is all AI-generated,
music AI-generated. What you are doing is taking away their
ability to exercise their craft, and that is an Article 1,
Section 8, Clause 8 protection that is given to innovators in
this country.
So I would love to have your response on how you are going
to address that, but I am out of time, Mr. Chairman.
Senator Budd. Perhaps in following remarks you could
address that if that would be OK with you.
Mr. Kratsios. Certainly.
Senator Budd. Senator Peters.
STATEMENT OF HON. GARY PETERS,
U.S. SENATOR FROM MICHIGAN
Senator Peters. Thank you, Mr. Chairman. Mr. Kratsios,
welcome. Welcome to the Committee. Sir, I hope that you agree
with me that without the highest standards for data protection
and governance, rapid AI adoption can expose Americans'
information to some unparalleled risk that we need to be very,
very concerned about.
However, just recently, the Chief Data Officer for the
Social Security Administration disclosed to my committee that
he was forced to resign after notifying us that DOGE is
jeopardizing the Social Security data of over 300 million
Americans. It is actually quite stunning the lack of
protections to this data that we have seen as a result of their
activities, and more of that will become public in the days
ahead.
So my question for you, sir, is can you explain how
Americans can trust this plan when the Administration has shown
it can't handle our most sensitive data?
Mr. Kratsios. I am not familiar with that particular
example, but data protection is critically important, and I
know that our Administration's work across implementation of AI
across all of our agencies takes that extraordinarily
seriously.
Senator Peters. Well, you have to demonstrate it. It is
nice words, and rhetoric is always very nice, but if you don't
demonstrate that you are actually making it a priority, I don't
think any of us can believe that it is a priority.
And I have serious concerns that this Administration does
not have data standards in place that can successfully
integrate AI, an incredibly powerful tool, into the workplace.
There were safeguards in this Administration's prior guidance,
but they appear to have had no effect, and there are no
examples of them actually being implemented, which is
incredibly troubling. We are going to
dive into that issue in greater length in the days and months
ahead.
My next question is the White House AI Action Plan asserts
that ``the Federal Government should not allow AI related
Federal funding to be directed toward states with burdensome AI
regulations, which should also not interfere with States'
rights to pass prudent laws.''
So my question is pretty straightforward. Who specifically
decides which States AI laws are ``prudent'' and ``not unduly
restrictive''? Who is going to make that decision?
Mr. Kratsios. I think that is something that is going to be
left to the agencies that are funding the various programs that
impact states.
Senator Peters. It is going to be left to agencies. Who--who
in the agencies? Who will be making those decisions?
Mr. Kratsios. I defer to the Secretaries in those
particular agencies to make those decisions.
Senator Peters. So that is the policy, but we don't know
who is going to make the decisions. You are telling me that is
a policy, but we have no idea who is going to decide what is
prudent or what is unduly restrictive? It could be the
President. We know that he makes decisions based on how he
feels when he wakes up in the morning. Is that kind of how we
are going to be doing it or--?
Mr. Kratsios. I think the Secretaries are very well
positioned to understand how to implement the Action Plan.
Senator Peters. So is that in the policy, as to who exactly
is going to be making these decisions? It is not in the
policies. I couldn't see it.
Mr. Kratsios. So the Action Plan is a policy document.
It is a set of recommended policy actions that the
Administration----
Senator Peters. So you have no idea who is going to do it.
I am going to give you an example. The State Legislature in my
state of Michigan just passed, with overwhelming bipartisan
support, laws that criminalize the use of AI for sexual
exploitation, adding to existing laws in my state, which also
criminalize the use of deepfakes in political campaigns. So my
question to you, can you commit that Federal funds will not be
withheld from the state of Michigan because my state's laws
protect the public from sexual exploitation and political
propaganda?
Mr. Kratsios. I have no control over the budgets of
individual agencies, but I think that is something that
certainly should be discussed with the relevant Secretaries.
Senator Peters. So that is not something you can say. If
states are trying to protect their public from sexual
exploitation, that may be something you have a problem with.
That is really--that is news, I think. Reports indicate that
agencies, including the Pentagon, have procured and deployed
Grok, the AI system developed by Elon Musk's xAI.
However, Grok has been found to consistently produce hate
speech, including racist and Antisemitic content--clearly not
woke. These were well documented instances that clearly violate
the Administration's own OMB guidance and Executive Orders.
So my question for you is, why has this Administration not
followed its own standards and guidance related to AI
procurement? Where is it demonstrated you actually follow this
stuff? Words are great, but actions are much more important.
Mr. Kratsios. Having truth seeking and accurate AI is
something the President wrote about explicitly in the Woke AI
Executive Order, and that is something that we take seriously
no matter what type of bias may be in that particular----
Senator Peters. Are you considering this woke--woke kind of
comments then, that I have just mentioned here?
Mr. Kratsios. I can--I said in the----
Senator Peters. Because it is not woke, it is OK. Is that
right? If it was woke, it would be not allowed. But if it deals
with Antisemitic and racist content, that is OK? Is that what
you are telling me right now?
Mr. Kratsios. Any type of bias in models is----
Senator Peters. That is not what you just said.
Mr. Kratsios. No, I named an Executive Order that the
President signed. And within that Executive Order, the
President called for AI that the Government procures to be
truth seeking and accurate.
Senator Peters. Thank you, Mr. Chairman.
Senator Budd. Senator Moreno.
STATEMENT OF HON. BERNIE MORENO,
U.S. SENATOR FROM OHIO
Senator Moreno. Thank you, Chairman, for having this
hearing. It is obviously really important. Mr. Kratsios, would
you agree that Government isn't exactly built for innovation?
Mr. Kratsios. I think it could do a much better job, but I
think it is well positioned to take a stab at it.
Senator Moreno. Well, meaning that if we really want to
compete with China, the real advantage we have is that we can
tap into the private sector. And so, what we want to do is
create an environment for the private sector to succeed. Would you
agree?
Mr. Kratsios. Yes, precisely. And I think that is one of
the underpinning philosophies of the entire plan.
Senator Moreno. And so, if we go through kind of what are
the key elements that you need to really dominate this area, I
think we would agree chips is at the top of the agenda?
Mr. Kratsios. Chips is certainly one piece of the stack we
take very seriously.
Senator Moreno. Right. So we have to make certain that we
are dominating the world in chips. It is critically important
that we support American made, American designed chips.
Mr. Kratsios. Yes. And I think that not only designing them
here in the United States, but also fabricating them, is very
important. The level above that is the models themselves. So we
need to lead the world in large language models, which we do.
And above that is the applications. And those combined create
the stack, which is so important.
Senator Moreno. Right. So making certain that a facility
like the Intel facility in Columbus--outside Columbus, Ohio,
that that gets a long runway and that we are making those
world-class chips here is really important?
Mr. Kratsios. Yes. Both the President and Secretary of
Commerce have been very clear about the commitment that the
U.S. Government has made to Intel to be able to fabricate high
end chips here in the United States.
Senator Moreno. Great. And then the next piece of the
puzzle is energy. We need sound energy policy, where we have
the most reliable, affordable, and abundant energy. That is
really important, and it should be co-located as much as
humanly possible when we are building out these AI data
centers. Would you agree?
Mr. Kratsios. Yes. And in the President's remarks and the
speech that he gave at the Action Summit, he talked about the
value of even having behind-the-meter power to support some of
these data center buildouts. So being able to co-locate that is
very important.
Senator Moreno. And give us a sense of how much energy AI
learning models consume versus just a simple Google search.
Like is it 5x, 10x, 20x?
Mr. Kratsios. I don't have a good number for that, but I
think what I have heard from the industry and what keeps coming
up is that it is a much, much more significant data hog than
any type of search you would have today. And it is something
that is exponentially growing with the types of searches that
Americans are doing today.
Senator Moreno. So when you had 94 percent of new power
generation in America over the last 4 years be wind and solar,
that probably isn't nearly enough to produce the kind of energy
that we need to power the AI revolution, would you agree?
Mr. Kratsios. I would agree, yes.
Senator Moreno. So we need good old-fashioned Ohio natural
gas. We need to make sure we have coal. We need to make certain
that we incentivize nuclear, but we are not going to compete
with the world because we are using wind and solar--94 percent
new generation, which is ironic given that China is building a
coal plant every single week.
Mr. Kratsios. You are very correct. We cannot compete with
that strategy.
Senator Moreno. So, thank God we have changed that policy.
And the last piece is people. We need to make certain that we
have the people, the researchers that are here developing this
technology. And what are we doing in that area to make sure
that we are competing on the highest caliber of people to
develop this technology here in the U.S.?
Mr. Kratsios. Yes. So the second pillar of the plan talks a
lot about how we can develop an American workforce to help the
build out of the critical infrastructure we need to win on AI.
So those are programs at places like the Department of Labor,
Department of Education, and Department of Commerce, to be able to
train and reskill Americans in the trades and all the various
fields that are vital to be able to do this buildout.
Senator Moreno. That is great. And two other quick points.
Having built a tech company myself, the big number one thing
that you need is customers. It is a great thing when you get
revenue. It is a much better feeling than not having revenue.
The Government, having been here 8 months, is somewhere in the
early 90s when it comes to computer technology.
And that is good news, bad news. The good news is that
there is certainly room for improvement. The bad news is we are
in the 90s. There are so many applications where we can use AI
to move Government forward. And the way I think we dominate is
by creating an environment where private companies can really
contract with Government to actually solve problems that
Government has, replacing systems that should have been
retired long ago.
How are you making that available so that companies know,
hey, the Government is open for business to give contracts, by
the way not just to big tech, but to little tech also.
Mr. Kratsios. Yes. GSA is making a big effort to improve
the FedRAMP process, as you may have seen a few months ago, to
be able to accelerate the addition of newer entrants into the
Federal Government procurement ecosystem.
Within the DOD, for AI specifically, there is a program
called TradeWinds, where you can be pre-cleared to be an
AI service provider for the DOD. And once you are on
TradeWinds, any service or any COCOM, everyone else can procure
from there.
So there are lots of innovative ways to be able to
introduce these new AI technologies into a procurement cycle at
a much quicker pace.
Senator Moreno. Yes. And I know I am out of time, but just
real quickly on that. Make certain that it takes into account
small businesses. That this doesn't require 7,000 lawyers to
fill out 800 pages of forms to get on that list.
Mr. Kratsios. Absolutely.
Senator Budd. Thank you. Senator Rosen.
STATEMENT OF HON. JACKY ROSEN,
U.S. SENATOR FROM NEVADA
Senator Rosen. Well, thank you, Chairman Budd, Ranking
Member Baldwin, for having this important hearing. Thank you to
the witness for being here. I just want to say one thing,
building off what Senator Moreno said.
We can't rebuild the workforce while simultaneously
eliminating the departments and agencies that should be
partnering and building out the workforce of the future, and
that directly relates to what we are going to continue to talk
about and what you are going to continue to do.
So please keep that in mind with this Administration and
how we try to fund the proper programs for our Federal
Government. But I want to take a moment to build on what
Senator Peters was talking about in the Antisemitism space, the
questions regarding Antisemitism and AI. The Administration's
AI Action Plan directs Federal agencies to procure only LLM
models that are truth-seeking or ideologically neutral.
However, this Administration has instead opted to deploy
Grok, an LLM from xAI that has a long history of hate speech,
including promoting Antisemitic conspiracy theories. Earlier
this year, I led a bipartisan letter to xAI seeking an
explanation for the Antisemitic tirades. However, xAI failed to
answer any of our questions. Just last week, Wired reported
that the White House pressured GSA to approve Grok for use by
the Federal Government. You can see why we should be very
concerned, sir.
So, Mr. Kratsios, will you commit to making sure that
agencies do not use AI that promotes Antisemitic conspiracy
theories, hate speech, stereotypes? I could go on and on. This
is blatantly wrong. And if you continue to do that, we will
continue to push back. I want your commitment that you will
push back on this as well?
Mr. Kratsios. Yes, we will commit to continue to execute
the President's Executive Order to ensure that models procured
by the U.S. Government are truth seeking and accurate.
Senator Rosen. That is not the answer. Will you commit to
making sure that we do not have Antisemitic hate speech,
conspiracy theories, and tropes continuing to be repeated in
these tirades on the internet? It is a simple yes or no. You
are either promoting Antisemitism or you are not. So, you are
promoting hate speech, or you are not.
Mr. Kratsios. I think we are talking about the same thing.
The examples that you are giving obviously aren't truth
seeking and accurate. So I think that we both agree that that
is a type
of behavior that the President very rightfully signed an
Executive Order to help avoid.
Senator Rosen. Well, clearly he is not paying attention to
what is happening on Grok. I want to talk a little bit more
about something that I have marked down: fiber, fiber
for AI. Earlier this year, Microsoft's President testified to
this committee that fiber connectivity is one of the key
pillars of AI infrastructure alongside, of course, data
centers, chips, land, and electricity.
We know that fiber provides the essential connectivity
between AI data centers because AI needs to process data fast
at lightning speeds. I was a software developer myself in my
younger career. We could not even imagine the types of
technology that we have today. But recent reports show that
growth in AI use is going to require more than doubling,
doubling of the fiber miles currently in the U.S. from 159
million miles today to over 370 million miles by the end of the
decade.
So we know companies like Microsoft have announced multi-
billion dollar partnerships with providers like Lumen
Technologies to build out the AI fiber backbone. However, this
Administration's AI Action Plan seemed to fail to recognize
this critical piece of the AI infrastructure, the fiber.
So is this Administration taking any steps to accelerate
fiber infrastructure that supports AI, and especially in ways
that promote equitable access, job creation, resiliency, and
should agencies like the FCC and the NTIA play a more active
role in coordinating and streamlining these efforts to build
fiber out?
Because every community needs to be connected in every way
for business, for defense, for safety, for security, for
education, for healthcare, you name it, and it is really
important. So, can you tell me what steps you might be taking,
please?
Mr. Kratsios. Yes. Fiber is a very important component of
the interconnect system for all of our AI data centers and
the broader internet. And it is something that I know NTIA and
Secretary of Commerce has taken very seriously, as well as
Chairman Carr. So I do agree with you, fiber is a very
important component.
Senator Rosen. So do you think eliminating some of the
programs that we have funded in the past, programs that were
laying broadband fiber all across at least my state of Nevada
and across the nation, was thinking for the future?
Mr. Kratsios. I think there are many ways to connect the
American people to the internet. One is obviously fiber, but I
think there are other ways that can often be more economical.
And the smart people at NTIA and others who think
about this every day make those assessments on behalf of the
Commerce Department.
Senator Rosen. But you would agree we need to fund
connectivity?
Mr. Kratsios. Connectivity is critically important, yes.
Senator Rosen. Thank you. I yield.
Senator Budd. Senator Markey, please.
STATEMENT OF HON. EDWARD MARKEY,
U.S. SENATOR FROM MASSACHUSETTS
Senator Markey. Thank you, Mr. Chairman. The Trump
Administration's loyalty to big tech means bigger bills for
American families, and this Administration is giving AI data
centers the green light to eat up our electricity in our Nation
while our bank accounts go into the red. So Mr. Kratsios, are
you aware of how much households' electricity bills are
expected to rise over the next 4 years as a result of data
center expansion?
Mr. Kratsios. I am not familiar with that number, no.
Senator Markey. All right, I will inform you then. A recent
analysis found that Americans' electricity bills are going to
rise by as much as 25 percent over the next 4 years--25 percent
because of data center demand. So it is not just a future fear.
It is a present problem. Households are already feeling the
pinch. Electric bills for an average home in Ohio increased by
$15 a month because of data centers.
A worker making Ohio's minimum wage would have to work an
hour and a half just to be able to afford Trump's data center
tax on electricity in that state, and that is not to mention
the rest of their electricity bill.
So Mr. Kratsios, do you think it is appropriate that the
Administration is forcing Americans to pay more on their
electricity bills, while using their taxpayer dollars to make
the problem even worse by funding the unfettered growth of the
AI
industry?
Mr. Kratsios. I do not believe there has been an
Administration in American history more committed to growing
power generation for the American people and lowering energy
costs for everyday Americans. And I am proud to work for a
President and an Administration that has that level of
commitment. So I am not sure what that study is, but I think
there has never been an Administration more resolved in
actually doing the complete opposite, actually lowering energy
cost for----
Senator Markey. No, electricity bills are going up all
across the country, right now, under the Trump Administration.
And they are killing the solar projects. They are killing the wind
projects. They are killing the offshore wind projects. They are
killing the electricity supply, which is going to be needed for
the AI revolution. They are killing it. So we are going to have
a crisis.
We are about to have an electricity bill crisis for
consumers in our country. Because at the same time, this
Administration is pushing the data center development at all
costs. The costs are being paid by American families, not big
tech. The electricity bills are going to be paid for by
ordinary families in our country because Trump is stopping
those new sources of electricity from being installed in our
country. They just announced the killing of an offshore wind
project that is 80 percent completed, and they are targeting
another dozen offshore wind projects that are just going to
skyrocket electricity bills all across the East Coast, but
across the country as well. It is going to kill at least
790,000 megawatts of clean and low emission energy from coming
online over the next decade.
That is the electricity that is going to be needed for the
AI revolution. They are killing it, and they are killing it out
of ideology--that is, because of the payoff to the natural gas
industry for their contributions to Trump. They are killing the
renewable energy industry that would have been providing that
extra electricity. So it is a huge price to be paid. Director
Kratsios, under the AI Action Plan, agencies are only permitted
to contract for AI algorithms that are, ``free from top-down
ideological bias.''
This language is extraordinarily vague, ``free from the
top-down ideological bias,'' and it gives the Trump
Administration vast discretion to force AI chatbot developers
to adopt conservative viewpoints or else risk losing lucrative
Federal contracts. This isn't traditional use of the
Government's procurement power. It is extortion. So let's get
specific here. Director Kratsios, if a generative AI system
stated that it was intentionally trained to adopt a certain
political viewpoint, would that qualify as, ``top-down
ideological bias''?
Mr. Kratsios. Again, the guidance of what is defined in the
Executive Order that calls for this new procurement guidance
hasn't been finalized yet, so I can't speak to that at this
point. But generally speaking, I think, sort of away from the
specifics, if a particular model is explicitly trained on a--
what did you mention--a political----
Senator Markey. It is top down ideological bias.
Mr. Kratsios. Sorry, I didn't--what did you ask?
Senator Markey. Would that be a violation of the rule that
a generative AI system, if it stated that it was intentionally
trained to adopt a political viewpoint? Would that qualify as a
top-down ideological bias?
Mr. Kratsios. Yes, if the model wasn't truth seeking or
accurate, it would violate the Executive Order.
Senator Markey. All right, so I will make it even clearer
then. Here is a real post from Grok, the generative AI model
created by Elon Musk's company, xAI, stating quote, ``xAI tried
to train me to appeal to the right.'' That is the quote. Is
that a violation? Does that qualify as ideological bias and
should xAI therefore be disqualified from Federal contracts?
Mr. Kratsios. Yes. Per the Executive Order, models that
aren't truth seeking or accurate, as defined by the guidance
that has yet to be promulgated, would be subject to the
procurement restrictions.
Senator Markey. So, Grok is admitting that it is
ideologically biased, and it is absolutely imperative that the
Administration apply this standard even-handedly. And I will
tell you the truth, if they are talking about woke Executive
Orders, then it is absolutely imperative that we not allow
Elon Musk's or other companies' bias to----
Senator Budd. Your time is expired.
Senator Markey.--this social media infrastructure that we
are living in right now. Thank you, Mr. Chairman.
Senator Budd. Senator Young.
STATEMENT OF HON. TODD YOUNG,
U.S. SENATOR FROM INDIANA
Senator Young. Director Kratsios, welcome to the Committee.
Thanks to you and your team for your hard work. Really
appreciate it. You have shown great leadership in developing
the AI Action Plan. And I appreciate you discussing here today
the importance of following through with this Executive Branch
playbook. I have been Chairman for the last couple of years of
the National Security Commission on Emerging Biotech, NSCEB.
You have visited with myself and some other commissioners about
our report.
And I was really pleased to see an emphasis in your Action
Plan on AI-enabled science. One of the recommendations requests
that NSF, DOE, NIST, and other Federal agencies invest in
automated, cloud-enabled labs. This priority aligns with a
recommendation here again from our report.
And that is why right before the August recess, Senator Kim
and I introduced the Cloud Lab to Advanced Biotech, also known
by its acronym, the LAB Act, which would establish a national
network of cloud labs focused on biotech. Can you elaborate on
the importance of cloud labs for our research and development
in biotechnology, and how you see cloud labs accelerating the
pace of innovation as compared to traditional R&D models?
Mr. Kratsios. Yes. The ability to have automated labs where
you can send in the experiment that you want to do and the lab
itself conducts it and then comes back to you with results in
and of itself is a huge value add.
If you layer on top of that the power of artificial
intelligence to allow the AI itself to start determining what
are the various iterations of the experiment you want to do,
and automatically send those to the lab to conduct and get the
results out, the pace and the velocity of discovery will be
dramatically improved.
Senator Young. So it is fair to say this could allow us to
supercharge the pace of innovation?
Mr. Kratsios. Most certainly. And the NSF is already
running ahead with a proposal around these cloud labs.
Senator Young. Very consistent with President Trump's
branding, a golden age of innovation, this really could help
usher that period in, I believe. I am going to pivot now to
standards as it relates to AI and the impact of a lack of
certainty for innovators seeking to develop and deploy AI.
Congress is notorious for being late to the punch when it comes
to development of standards and regulations.
And as other countries move forward in adopting their own,
American companies are then subject to potentially differing
rules across the globe. Can you speak to the risks associated
with continuing to subject our AI innovators to a fragmented
series of rules, including those enforced by other countries,
as well as states here at home?
Mr. Kratsios. Yes. I think creating standards at the U.S.
level that are prominent globally is very important. The
weekend after the AI Action Plan was released, the PRC held
their large AI conference in Shanghai. And one of the main
thrusts of their own AI Action Plan they released in response
to ours was a desire to create a global entity, an AI entity in
Shanghai that would then promulgate global rules around AI for
the world.
And this is an example of why it is so important for the
U.S. to be the leader in the way that we provide standards
around AI, particularly around model evaluation and standard
setting. And this is something that we know our adversaries are
going to try to compete with us on. So it is more important
than ever that we do that.
Senator Young. Yes. It is not just an issue of
interoperability. I mean, you could literally make the argument
that our values are embedded in the standards of our
technologies. And so we want to have the ability to define what
those standards are, and then allow the export-oriented
economies, China in particular, to have to sell into our
market--game, set, and match.
Before I yield back, I want to mention that Ranking Member
Cantwell and I plan to re-introduce a revamped version of our
Future of AI Innovation Act. This is vital legislation that
will authorize the newly renamed Center for AI Standards and
Innovation at NIST to promote the development of voluntary
standards. Will you commit to working with us on the Future of
the AI Innovation Act as we revamp it for this Congress,
Director Kratsios?
Mr. Kratsios. Yes, we would love to see more there and work
with you on it.
Senator Young. Thank you so much. As you have indicated in
your testimony, there are many opportunities for Congress to
work with the Administration to take action for American AI
leadership, and I hope the Committee will do just that.
Chairman, thank you very much.
Senator Budd. Senator Hickenlooper.
STATEMENT OF HON. JOHN HICKENLOOPER,
U.S. SENATOR FROM COLORADO
Senator Hickenlooper. Thank you, Mr. Chair. Mr. Kratsios,
thank you so much for being here. I think the White House
Office of Science and Technology Policy is one of the most
crucial positions right now, given not just AI, but so
many of the issues around research and the appropriate use of
research. But I will keep myself focused on the AI. States from
Texas to Colorado, Utah to California have passed, as has been
discussed, AI legislation.
In many cases, some of this action should inspire us to
take a closer look, by us I mean Congress, to what do we need
in a comprehensive national AI law. It might include periodic
impact assessments to evaluate potential risks of AI models,
and transparency disclosures to users describing AI models'
terms of use and capabilities. Obviously we need R&D support
for standards development to identify and detect AI-generated
content, and transparency around that.
Privacy protections for certain types of data being used to
train AI models. So, do you feel that these are the types of
policy principles that appear worthy to include as a foundation
for a Federal AI law, if we were going to try and create
something that would apply evenly across states?
Mr. Kratsios. Yes, my general sense and something I have
advocated for, for many years, is that the best approach to AI
regulation is for it to be use case and sector specific, not
broad and sweeping. I think any attempt to create a singular AI
regulation will lead you down the path that the EU is down
right now, which has ultimately resulted in a pretty sad
situation broadly for the innovators there.
Trying to create a singular AI rule for a technology that
is so ubiquitous is actually probably not the best path
forward. And what we have advocated for, both in the Action
Plan and with agencies, is that, you know, obviously the rules that
you would need at FDA to regulate a particular medical diagnostic
are very different than the rules that you would need at the
Department of Transportation for a self-driving car.
And we already have a system that has a very rich history
in allowing our regulators to update their regulatory regimes
with new technologies as they come. And it is one that I know
all of our Secretaries across the cabinet are working very hard
to make sure that they are up to speed on regulations that
apply to AI, which fall within their domain.
Senator Hickenlooper. I get that. I understand that, but I
think some things like making sure that the public is able to
identify and recognize what is--you know, what is AI and what
is not seems like something that is more general.
Mr. Kratsios. Yes, I think something like that in the
research, particularly being able to identify AI-generated
content, as you mentioned, is very important to continue to
fund.
Senator Hickenlooper. Great. Appreciate that. The Action
Plan calls for Federal agencies to conduct independent
evaluations of AI systems before they are procured and
deployed. Independent evaluation will help enhance security and
increase trust, prevents companies from grading their own
homework, as we would say, after an AI model is developed.
We have a bill introduced called the Validation and
Evaluation for Trustworthy AI Act, the VET Act, with Senator
Capito, which, as Senator Young was mentioning, relates to
requiring NIST to publish voluntary guidelines for companies to
independently evaluate AI models. Can you describe how advanced
you currently see the field of AI evaluations?
Mr. Kratsios. I think it is certainly not advanced enough.
My number one priority for NIST would be to work on the very
hard science associated with model evaluation and metrology.
Our ability to understand how to even evaluate these models is
still not complete.
So many people jump immediately to the evaluation itself,
this question of what we should be evaluating, versus what I
think is the more important question of today: how do we
evaluate these models. And what NIST can do is the very
important metrology work on the how question.
And once we know how to actually evaluate these models,
then each agency, each industry, whoever wants to do an eval
will then have a standardized, scientifically backed way to be
able to do the eval itself.
Senator Hickenlooper. Got it. In terms of the workforce
development, that is going to be a key part here. The Action
Plan highlights the need for AI skill development to make sure
that we have a trained workforce that can do the work required.
Obviously a national security imperative essential to
maintaining global competitiveness as you have mentioned. I
think apprenticeship programs are a big part of that. We worked
on CareerWise and created that in Colorado back when I was
Governor, and it is now in 20 states.
They have been a national leader expanding youth
apprenticeships and are already adding AI technology to support
their programs. How can you work with--work to support
innovative apprenticeship pathways, both for youth and adults,
to equip an AI-ready workforce?
Mr. Kratsios. I think there is no President more excited
about apprenticeships than this one. I think our Secretary of
Labor has also had a big commitment to do a million new
apprenticeships in this term. So there are big partners in the
Department of Labor to partner with you guys.
Senator Hickenlooper. OK. Thank you. I yield back.
Senator Budd. Thank you. Senator Klobuchar.
STATEMENT OF HON. AMY KLOBUCHAR,
U.S. SENATOR FROM MINNESOTA
Senator Klobuchar. Thank you very much, Mr. Chairman. And
thanks to Ranking Member Baldwin as well. I am not actually on
this subcommittee, but as our witness knows, I care a lot about
this, and so I have been able to listen to my colleagues'
questions and I want to thank them for their good work.
AI, we all know, huge potential, but also huge downside if
we don't get this right. And I think David Brooks put it well
when he said, ``I have found it incredibly hard to write about
AI because it is literally unknowable whether this technology
is leading us to heaven or hell''. So if we want it to lead us
to heaven, I think we are going to have to find some guardrails
and the like to protect us from fraud, to protect content
creation, and our democracy.
So first off, I appreciated working with the Administration
on the Take It Down Act, my bill with Senator Cruz to enable
victims of non-consensual porn, including images generated by
AI, to require the social media platforms to remove it within
48 hours. But there are many more problems, as you know, as I
just experienced and wrote about in a piece in the New York
Times: a deepfake of me with AI that many people, believe it or
not, thought was real.
And one platform took it down, one platform put ``created by
AI'' on it, and then one platform, X, would not do anything, and
it got over a million views. So the No Fakes Act that Senators
Coons, Blackburn, and Tillis, and I have introduced would
establish additional rules of the road. And do you agree that
we should protect people from having their likeness replicated
through AI, take down unauthorized deepfakes?
To me, it is some regime--within the realm of the
Constitution--where some of it is just labeled digitally
altered because it is parody, and you are not allowed to take
it down. But then some of the stuff, which you would
in a minute take down if someone played a video in this room or
put up a sign, you should take down. So could you talk about
that?
Mr. Kratsios. Yes, I think I directionally agree with you. I
think it is something that we should
certainly look at, both the Executive Branch and the
Legislative Branch. I think the Take It Down Act is a great
example of something that is on one side of the line, which
certainly should have become law, as it did. But I think it is
something that as this technology develops and becomes more
proliferated, I think we have to find ways to solve it.
Senator Klobuchar. Thank you. I just hope our colleagues
see that it is not one side or the other, right. There is some
of the stuff that you are going to Constitutionally be able to
take down, and we should require they take it down. Then there
is some stuff that we can say should be labeled digitally
altered, and it puts a burden on these platforms, but at least
it will protect innocent people when they see it to know that
it is not true.
And it continues to just amaze me that we all just sit by
and act like, oh, that is too much, that is too little, instead
of actually getting a solution. And I did--I really appreciate
the work that Senator Schumer, and Senator Young, and Senator
Heinrich, Senator Rounds did in bringing us together in the
last few years on this.
Senator Thune and I have a bill that we introduced last
year to set up basic guardrails for some of the non-defense
riskier applications of AI, and in the past you have supported
developing thoughtful Federal standards that can drive the
widespread adoption of AI technologies across industries. And
will you commit to work with Senator Thune and I on that bill?
I know there are others as well.
Mr. Kratsios. Yes, happy to work on that.
Senator Klobuchar. OK, very good. And then yesterday Senator
Blackburn and I held a hearing, which got a lot of attention,
with two whistleblowers from Facebook in our subcommittee in
Judiciary. And we heard that one of the
leading AI chatbot developers, Meta, deliberately and routinely
altered, suppressed, and even deleted safety research,
including on youth safety.
And there were many Senators participating in this hearing
across the board. And I am concerned about this neglect when it
comes to AI development on figuring out how we can protect
these kids. You are right, we did get some with the President's
support on the Take It Down Act, but that is only a subsection.
We have got fentanyl. We have got drugs being sold just
overall on the internet, irrespective of AI, but then we have
all this stuff going on with the AI chatbots. And could you
talk about your commitment to work with us on addressing the
harms caused by AI chatbots?
Mr. Kratsios. Would very much like to work with you guys on
a lot of these issues. I think last week we held an AI
education task force meeting which the First Lady joined and
chaired. This was something that came out of the Executive
Order the President signed a few months ago which shows the
Administration's commitment toward K through 12 AI education.
And it is not necessarily about how to use AI to do your
homework or something. It is more about teaching America's
youth the limitations--where AI works, where it doesn't work--
and making young Americans understand how this technology
works. And it is a very key component of making sure that they
are using it in the way that it was intended.
Senator Klobuchar. OK, thank you. And I did appreciate her
support for our Take It Down bill, but again, it is just the
beginning. So, thank you.
Mr. Kratsios. Thank you.
Senator Budd. Thank you very much. And thank you, Mr.
Kratsios, for your testimony here today. I look forward to
working with you, not just on AI, but also, as I mentioned
earlier, thanking you for your work on supersonics and
aviation.
Senators have until the close of business on September 17
to submit questions for the record. The witnesses will have--or
the witness will have until the close of business on October 1
to respond to those questions. This concludes today's hearing.
The Committee stands adjourned.
[Whereupon, at 11:44 a.m., the hearing was adjourned.]
A P P E N D I X
Partnership for AI Infrastructure (PAII)
September 9, 2025
Hon. Ted Budd,
Chairman,
Subcommittee on Science, Manufacturing, and Competitiveness,
Senate Committee on Commerce, Science, and Transportation,
Washington, DC.
Hon. Tammy Baldwin,
Ranking Member,
Subcommittee on Science, Manufacturing, and Competitiveness,
Committee on Commerce, Science, and Transportation,
U.S. Senate,
Washington, DC.
Letter for the Record: Subcommittee Hearing on ``AI've Got a Plan:
America's AI Action Plan''
Dear Chairman Budd, Ranking Member Baldwin, and Members of the
Subcommittee:
The Partnership for AI Infrastructure (PAII) respectfully submits
this letter for the record in advance of the Subcommittee's hearing on
America's AI Action Plan in support of Federal investments in
artificial intelligence (AI) research and development (R&D). The
Partnership commends the Subcommittee and its leadership for convening
this hearing to examine the Action Plan and develop the legislative
framework necessary to implement this AI strategy.
Implementing this Action Plan and promoting American AI innovation
will require serious and sustained Federal investments. The One Big
Beautiful Bill Act (OBBBA) provided the first critical down payment of
$515 million in AI investments needed to jumpstart this Action Plan.
This initial investment will enable the Department of Energy (DOE), the
National Nuclear Security Administration (NNSA), and the Department of
Defense (DoD) to begin developing the AI infrastructure needed to
advance our national defense and scientific priorities. Leveraged
strategically, this funding could lay the foundation for the Federal AI
infrastructure ecosystem which future investments could build upon.
As a coalition of technology leaders, the Partnership for AI
Infrastructure is committed to leveraging AI to accelerate
technological breakthroughs, as demonstrated through our members'
to build the world's three fastest supercomputers as part of the
Exascale Computing Project. As part of this commitment, the Partnership
submitted comments in response to the White House Office of Science and
Technology Policy (OSTP) Request for Information (RFI) on the
development of America's AI Action Plan. The Partnership's
recommendations were as follows:
1. Promote sustained Federal investment in high-performance AI
infrastructure.
2. Create durable public-private partnerships to harness private
sector innovations to further advance Federal science and
national security objectives.
3. Stimulate America's pool of AI talent and retain experts within
the Federal workforce.
The final Action Plan proposed a series of recommended policy
actions that incorporated these concepts and outlined the Federal
agency actions necessary to develop a Federal AI ecosystem. While
Federal agencies can use the funding from the OBBBA to begin
implementing the Action Plan, Congress must begin discussions on the
legislative actions and additional investments that will be needed to
see the plan to completion.
Build the Federal AI Ecosystem
Developing the Federal AI ecosystem will require robust investments
across the Federal government. The OBBBA began this work by providing
$150 million to DOE to build transformational AI models, $115 million
to NNSA to accelerate AI-led nuclear national security, and $250
million to DoD to advance the AI ecosystem. This $515 million in
investments will pave the way to develop the AI systems envisioned by
the Action Plan. Through the strategic stewardship of the OBBBA
investments, DOE, NNSA, and DoD can begin developing the mission-
critical capabilities required to ensure America's long-term
technological superiority.
To bring the truly innovative power of AI to bear to solve unique
and challenging issues, our Federal agencies and national laboratories
need to be equipped with leadership-scale AI infrastructure. Such
systems can accelerate breakthroughs in science and bolster our
national security systems against emerging threats. To fulfill the
Action Plan's goals, these AI systems should be interoperable,
enabling seamless integration of the most cutting-edge AI
hardware, including graphics processing units. The Action Plan specifically
tasks the National Institute of Standards and Technology (NIST) with
identifying opportunities to accelerate and scale AI development,
including the use of interoperable technologies. Setting
interoperability as a core standard of Federal AI infrastructure will
allow for GPUs and other AI hardware components to be changed modularly
and used iteratively, which will lower costs, enable broad industry
participation, and ensure the longevity and resiliency of Federal AI
supercomputers.
Develop Public-Private Partnerships
The Action Plan seeks to leverage the full talent and expertise our
Nation has to offer through collaboration with industry leaders and
stakeholders. By partnering with private companies with AI expertise,
the Federal government will be able to harness the strength of American
AI innovation and set it as the global standard. Companies that
specialize in semiconductor design, supercomputers, AI software
frameworks, and other areas of expertise will help drive AI development
and adoption at the Federal scale. Leveraging that expertise through
public-private partnerships leads to improved efficiency, shared costs,
and cross-pollination of the best ideas from industry scientists and
government researchers to bring cutting-edge innovations into use for
national science, defense, and critical infrastructure priorities.
Foster the Federal AI Workforce
Infrastructure alone will not deliver AI leadership without the
talent to use it. Federal agencies must prioritize cultivating,
training, and retaining AI scientists, engineers, and operators capable
of managing and deploying advanced AI systems. The Action Plan included
a provision to ensure that America's workers benefit from AI through
the development of workforce training programs. Building and retaining
a skilled Federal AI workforce will ensure continuity of expertise,
safeguard sensitive programs, and strengthen the government's ability
to develop and use AI systems to tackle national scientific and
security priorities.
***
The race to dominate AI is the defining technological challenge of
our time and demands Federal investments on the scale of the Manhattan
Project to secure America's global leadership. The Action Plan provides
the blueprint for this project which the Administration and Congress
can execute by authorizing and funding large-scale AI initiatives,
advancing regulations that foster innovation, developing public-private
partnerships, and building a robust and capable Federal AI workforce.
The OBBBA made the first investment toward America's AI future, but
Congress must continue to provide sustained funding to build on this
momentum.
As this Committee continues its work to chart America's AI future,
the Partnership for AI Infrastructure and its members are ready to
partner on this national initiative. We applaud the Committee's
commitment to implementing America's AI Action Plan to secure
America's lead in the global AI race. Thank you for the opportunity to
share our perspective and for convening this timely hearing.
Sincerely,
Partnership for AI Infrastructure
______
Prepared Statement from Premier Inc.
Premier Inc. appreciates the Subcommittee's leadership and ongoing
commitment to exploring thoughtful regulations that unleash the
transformative potential of artificial intelligence (AI)--including in
healthcare--while maintaining the efficacy, accuracy and transparency
necessary to protect end users and patients.
As Premier stated upon its release, the White House's AI Action
Plan sets a course towards secure, trustworthy artificial intelligence
(AI) in healthcare. As Premier has emphasized to Congress and the
Administration, continued U.S. leadership in AI is critical to
America's health, economy and security. Premier is especially
encouraged by the strategy's focus on incorporating healthcare voices
into a sector-specific framework through the National Institute of
Standards and Technology (NIST), training the clinical workforce of the
future to harness AI's potential, and strengthening the cybersecurity
of critical health infrastructure.
As we work together to imagine the future of healthcare, Premier
looks forward to engaging in these new initiatives to ensure AI helps
providers deliver better care at lower costs for America's patients. To
advance the goals of America's AI Action Plan in healthcare, Premier
encourages the committee to consider:
The value of potential AI use cases in healthcare, including
applications in reducing administrative burden, clinical
settings, drug development and manufacturing, and healthcare
supply chain operations;
The need for a national privacy law to maintain
competitiveness and innovation in the development and
implementation of AI;
The importance of a clear regulatory framework prioritizing
transparency and risk mitigation in the development,
maintenance, and use of AI tools;
The development of a clinical workforce capable of
maximizing the potential of AI technology in healthcare; and
The critical nature of U.S. AI leadership to the security of
healthcare infrastructure.
Our recommendations are described in greater detail below.
I. BACKGROUND ON PREMIER INC.
Premier is a leading healthcare improvement company and national
supply chain leader, uniting an alliance of 4,350 hospitals and
approximately 300,000 continuum of care providers to transform
healthcare. With integrated data and analytics, collaboratives, supply
chain solutions, consulting and other services, Premier enables better
care and outcomes at a lower cost.
A Malcolm Baldrige National Quality Award recipient, Premier plays
a critical role in the rapidly evolving healthcare industry,
collaborating with healthcare providers, manufacturers, distributors,
government and other entities to co-develop long-term innovations that
reinvent and improve the way care is delivered to patients nationwide.
Headquartered in Charlotte, North Carolina, Premier is passionate about
transforming American healthcare.
Premier has a wealth of operational experience leveraging AI
technology to move the needle on cost and quality in healthcare,
including:
Premier Clinical Decision Support (CDS) designs AI-enabled
technology to reduce low-value and unnecessary care. Premier
CDS leverages natural language processing AI technology to read
unstructured data and ties it together with established
guidelines to generate real-time alerts and analytics, guiding
physicians' decisions at the point of care. Premier CDS's
mission is to measurably improve the quality and safety of
patient care while reducing the costs by enabling context-
specific information integrated into the provider workflow.
Premier Applied Sciences (PAS) is a trusted leader in
accelerating healthcare improvement through AI-powered
solutions that span the continuum of care and enable
sustainable innovation and rigorous research. Our services and
real-world data drive research and quality improvement in
pharmaceutical, device and diagnostic industries, academia,
Federal and national healthcare agencies, as well as hospitals
and health systems. PAS leverages Premier's robust data
resources to design and deploy AI-powered solutions for
clinical trial recruitment, and to help collate disparate
patient records to tell a complete patient story, leading to
higher-quality care.
Premier's award-winning Supply Chain Disruption Manager
(SCDM) builds resilience and mitigates risks to the healthcare
supply chain by harnessing machine learning AI technology to
predict when critical drugs, devices and other medical supplies
are anticipated to become unavailable up to six weeks in
advance of a supply chain disruption. SCDM allows hospitals and
health systems to access clinically approved alternative
products to avoid delays in care or quality, and it allows for
communication to Federal agencies and other partners about
pending shortages to help proactively develop mitigation
strategies.
Premier's purchased services subsidiary, Conductiv,
harnesses AI to help hospitals and health systems streamline
contract negotiations, benchmark service providers and manage
spend based on historical supply chain data. Conductiv also
works to enable a healthy, competitive services market by
creating new opportunities for smaller suppliers and helping
hospitals invest locally across many different categories of
their business.
II. UNLOCKING THE VALUE OF AI IN U.S. HEALTHCARE
Opportunities for AI to Reduce Administrative Burden
One of the biggest opportunities for the use of AI in healthcare is
simplifying and improving standard, burdensome processes.
Premier has noted a recent surge of interest in patient-facing AI
technologies in clinical settings, including ambient notetaking, care
navigation chatbots and AI-powered radiological consultations. However,
Premier, our members and others across the healthcare sector have been
using AI technology to streamline burdensome administrative processes
for years. Specific use cases include:
Harnessing the power of natural language processing (NLP) AI
tools to ``read'' unstructured data in medical records to
efficiently build medical necessity documentation to expedite
prior authorizations;
Automating burdensome processes such as procure-to-pay
workflows, supply chain contract activities, and revenue cycle
tasks;
Overlaying AI chatbots on enterprise resource planning
software to help hospitals and health systems more efficiently
identify and manage their supply chain needs; and
Leveraging predictive AI software to sift through large,
evolving datasets and proactively predict supply chain
disruptions to prevent interruptions in patient care.
Opportunities for AI to Improve Patient Outcomes
Premier has partnered with leading healthcare providers and
innovators to leverage AI in clinical workflows and improve patient
outcomes. For example, early detection and intervention can improve
patient outcomes and drastically reduce overall healthcare costs--for
both patients and providers. Premier has demonstrated success in the
following areas:
The Premier team utilized artificial intelligence (AI), NLP
and a data ontology designed to mine the unstructured narrative
of clinicians' notes and pathology reports for statements such
as ``Mom seems a bit agitated'' or ``Mom is confused'' to
identify patients for early Alzheimer's Disease (AD)
intervention.
In oncology, Premier partnered with AstraZeneca and
Clinithink and utilized Clinithink's CLiX NLP technology to
identify patients with incidental pulmonary nodules (IPNs) to
flag for intervention before potential lung cancer
progression--with roughly 152,000 patients ``caught'' early.
Premier worked with GE Healthcare and St. Luke's University
Health Network to introduce a patient-centric care model for
breast cancer diagnosis--with a goal of helping patients go
from appointment to diagnosis and connection to a treatment
plan in just 48 hours or less.
Opportunities for AI in Clinical Trials
Premier sees particular promise for the use of AI in streamlining
processes and expanding patient access in clinical trials:
Identifying trial participants: One of the biggest
challenges facing health systems that seek to participate in or
enroll patients in clinical trials is identifying and enrolling
patients in a timely manner. Delays in meeting trial enrollment
targets and timelines can increase the cost of the trial. AI
tools have the ability to analyze the extensive universe of
data available to healthcare systems to identify patients that
may be a match for clinical trials that are currently
recruiting. This application of natural language processing
systems can make developing new drugs less expensive and more
efficient, while also improving patient and geographical
diversity in trials to address generalizability.
Generating synthetic data: AI, once trained on real-world
data (RWD), has the capability to generate synthetic data and
patient profiles that share characteristics with the target
patient population for a clinical trial. This synthetic data
can be used to simulate clinical trials to optimize trial
designs, model the possible effects or range of results of a
novel intervention, and predict the statistical significance
and magnitude of effects or biases. Ultimately, synthetic
patient data can help optimize trial design, improve safety and
reduce cost for decentralized clinical trials. Further,
synthetic control arms in clinical trials can help increase
trial enrollment by easing patient fears that they will receive
a placebo. To incentivize continued innovation, Premier
encourages Congress to urge the Food and Drug Administration
(FDA) to promulgate clear guidance on the process for properly
obtaining consent from patients for the use of their RWD to
produce AI-generated synthetic control arms in clinical trials.
Opportunities for AI in Drug and Device Manufacturing
Premier sees potential for AI to transform at least three key
segments of the drug and device manufacturing process:
Supply chain visibility: Premier is confident that the
application of AI can advance national security by helping to
build a more efficient and resilient healthcare supply chain.
Specifically, AI can enable better demand forecasting for
products and services through analysis of historical and
emerging clinical and patient data, thereby driving better
inventory management by automating the monitoring and
replenishment of supplies.
AI's ability to help drive supply chain visibility is particularly
helpful to address persistent healthcare supply chain
shortages. Oftentimes, the warning signals of an impending
product shortage can be seen weeks to months in advance due to
discrepancies in demand vs supply data. AI can create reliable
predictions that allow manufacturers to plan for and respond to
shortages or disruptions. AI also enables better planning and
response time to national or regional emergencies.
Advanced process control: Another significant role for AI in
drug and device manufacturing is in the development and
optimization of advanced process control systems (APCs).
Process controls typically regulate conditions during the
manufacturing process, such as temperature, pressure, feedback
and speed. However, a recent report found that industrial
process controls are overwhelmingly still manually regulated,
and less than 10 percent of automated APCs are active,
optimized and achieving the desired objective. These
technologies are now ready to transform manufacturing on a
commercial scale; however, challenges still remain to
widespread adoption. Premier encourages Congress to urge FDA to
issue clear guidance that supports the industry-wide transition
to AI-powered APCs. Such technologies offer manufacturers the
opportunity to assess the entire set of input variables and the
effect of each on system performance and product quality,
automating plant-wide optimization. This application of AI
technology can transform the physical manufacturing of drugs
and devices, leading to cost-savings and increased resiliency,
transparency and safety in the healthcare supply chain.
Quality monitoring: AI can also provide value-add to drug
and device manufacturing in the field of quality monitoring and
reporting. Current manufacturing processes provide an immense
volume of data from imagers and sensors that, if processed and
analyzed more quickly and efficiently, could transform
approaches to safety and quality control. AI models trained on
this data can be used to predict malfunctions or adverse
events. AI can also perform advanced quality control and
inspection tasks, using data feeds to quickly identify and
correct product defects or catch quality issues with products
on the manufacturing line. Taken together, these capabilities
can improve both the accuracy and speed of inspections and
quality control, helping companies to reliably meet regulatory
requirements and avoid costly delays that disrupt the drug
supply chain.
III. CATALYZING THE AI MARKET THROUGH A NATIONAL PRIVACY LAW
A comprehensive national privacy law will have significant impacts
on the development and competitiveness of AI technology in the United
States. Federal data privacy laws should clearly outline pathways to
acceptable data use for the training of AI models, which need not
interfere with state-level requirements related to automated decision-
making.
One of the greatest barriers to the large-scale diffusion of
innovative AI applications in healthcare is the lack of a single
Federal privacy law. A comprehensive, national privacy framework is
necessary to provide a reasonable level of consumer confidence that
businesses will protect their data, sensitive or otherwise. The current
patchwork, state-driven approach to privacy policy has resulted in
inconsistent data privacy practices that have amplified patient
distrust in the healthcare system. Patients must be able to trust the
processes for storage, handling and use of their data, particularly as
patient data is increasingly used to train AI algorithms. Missing data
from patients wary of data sharing could deepen data siloing and
perpetuate algorithmic biases, so Congress should urge CMS, ASTP/ONC
and other health data stakeholders to proactively educate patients on
these tradeoffs.
Unharmonized and burdensome requirements are a rate-limiting step
in unlocking value through AI innovation. A Federal privacy law can
empower innovators by placing clear, harmonized and common-sense
guardrails around artificial intelligence tools.
Premier was encouraged by the House Bipartisan Artificial
Intelligence Task Force's commitment to enabling safe, trustworthy and
innovative AI technology across healthcare. From drug development and
manufacturing to diagnostics and clinical decision support, the Task
Force's recommendations were in lockstep with Premier's long-standing
advocacy for sensible regulatory guardrails for health AI.
Premier particularly appreciated the Task Force's recognition of
AI's transformative ability to reduce administrative burden in
healthcare and improve patient care. However, to fully realize the
benefits of innovations such as real-time electronic prior
authorization, Congress must address the fragmented state data privacy
laws that are a barrier to bringing this technology to scale. Federal
data privacy standards are essential to ensuring consistent
protections, fostering equitable access and scaling AI-powered
solutions effectively.
A comprehensive Federal privacy law should be viewed as an initial
step towards achieving Congress's bipartisan AI goals. Premier
recommends the Committee prioritize addressing the following regulatory
gaps, which are particularly necessary in the healthcare sector:
Quality: Federal policy should clarify what uses of data are
acceptable during AI training and testing, what patient consent
for data use looks like for AI and what standards AI companies
must meet to protect patient data. By removing uncertainty,
Congress can give AI developers permission to innovate.
Security: Security and privacy often go hand in hand. A
Federal privacy law gives Congress the opportunity to clearly
dictate to regulators what appropriate security looks like to
protect patient health data, including when it is used in AI
models. By placing guardrails around data use and privacy,
Congress can limit the potential harms of security flaws in the
AI tools that are increasingly commonplace in healthcare.
Market leadership: Baseline privacy requirements in a
Federal law--preempting state privacy laws--level the playing
field for AI innovators while promoting consumer trust and
responsible AI. A fragmented state privacy law landscape
disadvantages startups and innovators, complicating compliance,
increasing regulatory burden or confusion, and adding
prohibitive cost to growth.
IV. REDUCING REGULATORY UNCERTAINTY
Reducing regulatory uncertainty around AI development and
deployment in healthcare settings is crucial to unlocking its
transformative potential. Premier supports the responsible development
and implementation of AI tools across all segments of American
industry--particularly in the healthcare industry, where numerous
applications of this technology are already improving patient outcomes
and provider efficiency.
Premier strongly supports AI policy guardrails that include
standards around transparency and trust, risk and safety, and data use
and privacy. These recommendations will inform and complement the
development of a healthcare-specific set of national standards for AI
at NIST.
Promoting Transparency
Trust--among patients, providers, payers and suppliers--is critical
to the development and deployment of AI tools in healthcare settings.
To earn trust, AI tools must have an established standard of
transparency. Some policy proposals, including those proffered by the
Office of the National Coordinator for Health Information Technology
(ONC), suggest transparency can be achieved through a ``nutrition
label'' model, which lists the sources and classes of data used to
train the algorithm. Unfortunately, some versions of the ``nutrition
label'' approach to AI transparency fail to acknowledge that when an AI
tool is trained on a large, complex dataset, and is by design intended
to evolve and learn, the initial static inputs captured by a label do
not provide accurate insights into an ever-changing AI tool. Further,
overly intrusive disclosure requirements around data inputs or
algorithmic processes could force AI developers to publicly disclose
intellectual property or proprietary technology, which would stifle
innovation.
Premier recommends that AI technology in healthcare should be held
to a standardized, outcomes-focused set of metrics, such as accuracy,
false positives, inference risks and recommended use/applications.
Outcomes, rather than inputs, are where AI technologies hold potential
to drive health or harm. Thus, Premier believes it is essential to
focus transparency efforts on the accuracy, reliability and overall
appropriateness of AI technology outputs in healthcare to ensure that
the evolving tool does not produce harm.
Premier has heard from multiple member hospitals that the lack of
clear vendor information about the use of AI and associated liability
actively deters them from purchasing or using AI tools. The lack of
personnel and budget to collect information on data use, cybersecurity,
and liability terms from vendors exacerbates this issue for all but the
biggest health systems.
Premier urges Congress to consider requiring AI developers and
manufacturers to list the acceptable uses of new technologies in
healthcare settings, which would provide much-needed guidance to
clinicians and providers on safe and appropriate use cases. This
approach could provide liability protection for the proper use of AI
technology for the defined set of use cases where developers have
established and reported the appropriate metrics for accuracy and
reliability. Transparency about the intended use of AI tools would be
the simplest way for regulators to incorporate AI governance into
existing regulations. Health systems would also be able to incorporate
this information into their own governance structures, putting internal
policies in place to prevent misuse of AI in ways that could be
detrimental to patient safety or experiences.
Such disclosure does not inherently carry with it any additional
significant cost or requirements. It would only give health systems and
patients a complete picture of the safety and security of the AI
technologies they use. Rather than limiting or delaying innovation,
such guidelines would level the playing field between established
market leaders and startups while providing clear transparency for
providers and patients.
Alternatively, Congress could sanction the use of third-party
certification organizations or existing market processes to address
this challenge while reducing administrative burden. As a GPO, Premier
already requires vendors and suppliers to submit information about uses
of AI, data and cybersecurity certifications, and AI standards. This
information is available to members when they make contracting
decisions, providing a clear market incentive for vendors and
developers to meet industry best practices. The contracting process
also includes model legal language around cybersecurity and AI best
practices and liability sharing, driving best practices even in the
absence of regulation. Much like the Payment Card Industry Data
Security Standard (PCI DSS), Congress can leverage existing market
incentives and self-governance to encourage broader adoption of
transformative AI technologies. Such a market-driven approach would be
flexible and adaptable, capable of adjusting to the latest developments
in AI technology without requiring Congress or regulators to reimagine
the law every year.
Mitigating Risks
It is important to acknowledge potential concerns around
``hallucinations'' and biased outcomes resulting from the use of AI
tools in healthcare, which carry considerations for patient safety.
Fortunately, there are several best practices that Premier and others
at the forefront of technology are already following to mitigate these
risks.
First, we reiterate Premier's recommendation for standardized,
outcomes-based assessments of AI technologies' performance, which would
hold developers accountable for reporting improper outputs. Premier
also supports the development of a standardized risk assessment, which
should identify detailed explanations of recommended uses for the tool
and risks that could arise should the tool be applied inappropriately.
Additionally, Premier understands the importance of data standards,
responsible data use and data privacy in the development and deployment
of AI technology. Premier encourages Congress and regulators to work
closely with developers, vendors and other stakeholders to ensure that
any data standards that the Federal government codifies align with
industry-experienced best practices. Premier also supports the
establishment of guidelines for proper data collection, storage and use
that protect patient rights and safety. This is particularly important
given the sensitivity of health data.
V. TRAINING THE HEALTHCARE WORKFORCE OF THE FUTURE
The White House's AI Action Plan prioritizes the education and
training of a future workforce capable of harnessing AI's
transformative potential. Premier agrees, and we believe technology can
and should work alongside and learn from healthcare professionals, but
current technology will not and should not replace the healthcare
workforce.
To ensure clinical validity and protect patients, Premier
recommends clear labeling of recommended use(s) and Federal support for
healthcare workforce trainings that combat automation bias and
incorporate human decision-making into the use of AI technology in
healthcare. Automation bias refers to human overreliance on suggestions
made by automated technology, such as an AI device. This tendency is
often amplified in high-pressure settings that require a rapid
decision. The issue of automation bias in a healthcare setting is
discussed at length by the FDA in guidance on determining if a clinical
decision support tool should be considered a medical device. Premier
suggests that future guidance or standards for the use of AI should
consider automation bias in risk assessments and implementation
practices, such as workforce education and institutional controls, to
minimize the potential harm that automation bias could have on patients
and vulnerable populations.
Premier acknowledges the risks of automation bias and fully
automated decision-making processes. To reduce these risks, promote
trust in AI technologies used in healthcare and achieve the goal of
supporting the healthcare workforce through AI, Premier recommends that
healthcare workforce training programs provide comprehensive AI
literacy training. Healthcare workers deal with high volumes of
incredibly nuanced data, research and instructions--a growing
percentage of which may be supplied by AI. This is particularly true
for applications of AI in drug development, where manufacturers and
quality control specialists may be reviewing high volumes of AI-powered
recommendations or insights and making rapid decisions that affect the
safety of patients. By ensuring our healthcare workers understand how
to evaluate the most appropriate AI use cases and appropriate
procedures for evaluating the accuracy or validity of AI
recommendations, we can maximize the advisory benefit of AI while
mitigating the risk to patients and provider liability.
To ensure that future clinicians can realize the benefits of AI and
appropriately incorporate new technologies into patient care, Congress
should encourage medical schools and accreditation programs to develop
curricula for the healthcare workforce that incorporate digital health
technologies. Among providers, there is a growing acceptance of
technology as a workforce extender, particularly when it is seamlessly
integrated into clinical workflows, and an increasing share of the
healthcare workforce is open to adopting new tools. As a sector,
healthcare must find ways to integrate digital health technologies into
educational curricula at all levels, including professional
certifications and continuing education.
Finally, health systems and providers need to understand how to
best realize the opportunities for AI and new technologies to enhance
and extend care delivery to larger patient populations. Congress should
encourage the development of evidence-backed models to evaluate the
success of virtual care and virtual nursing programs. Anecdotal
evidence indicates that practitioners believe in the value of virtual
care to balance workload and expand access to care, particularly in
rural areas. Optimized and evidence-backed models have the potential to
improve access to care in rural communities, increase savings and
reduce chronic disease costs. In the face of clinician shortages--
especially nursing shortages--the existence of a center of excellence
for optimized virtual nursing services could provide care to entire
regions, offsetting workforce capacity challenges and reducing brain
drain.
VI. SECURING U.S. LEADERSHIP
As the AI Action Plan acknowledges, America's digital
infrastructure faces a regulatory inflection point spanning from
enabling to emerging technologies. Premier believes that true supply
chain resiliency requires a holistic approach as part of a larger
strategy to address the implications of policy on products needed in
healthcare--particularly those needed during a public health crisis or
national security threat.
Tariff and trade policies directly influence the availability and
affordability of critical medical supplies and technologies, including
the availability and uptake of AI tools. China has spent the past
decade making a play for global leadership at every level of technology
from semiconductors to AI models, leaving healthcare's future
increasingly reliant on China's tech stack. Investments in the
healthcare tech ecosystem--from semiconductors, cloud computing, and
connectivity through the software technology stack--can help American
healthcare overcome shortages, build a reliable supply chain for
medical devices, and put America back in control of healthcare's tech-
enabled future.
How the U.S. regulates AI--and the enabling technologies that power
it--will transform healthcare, one way or the other. Making America's
healthcare system the most attractive in the world for innovators and
visionaries, thereby reducing costs and improving patient outcomes, can
occur only if lawmakers reimagine the technology-care delivery nexus
from the bottom up.
The U.S. cannot afford to fall behind in the development and
production of critical enabling technologies for the growth of the
burgeoning AI sector, nor can it become reliant on AI applications and
software developed by geopolitical adversaries.
America has learned a difficult lesson about the threat of becoming
reliant on untrustworthy technology. From telecommunications
infrastructure to solar power inverters to port cranes, much of this
country's critical infrastructure has faced a reckoning about the
threat that unsecured software and hardware pose to essential
functions. A fresh focus on security must begin with trustworthy
physical infrastructure. Federal rulemaking has given the Coast Guard
and Transportation Security Administration extended cyber authorities
over ports, shipping, and rail. During the 118th Congress, lawmakers
introduced several bills examining dependence on foreign-manufactured
shipping cranes and other crucial technologies. While these are
valuable initial steps to identify vulnerabilities in America's
technology infrastructure, this country cannot afford to repeat the
same mistakes.
Early and sustained AI leadership is essential to provide America's
critical infrastructure--especially healthcare--with the reliable,
trustworthy tools that it needs. The United States cannot afford to
risk the future of digital health by ceding AI leadership.
VII. CONCLUSION
Premier appreciates the opportunity to comment on the
Subcommittee's work. If you have any questions regarding our comments,
or if Premier can serve as a resource on these issues, please contact
John Knapp, Vice President, Advocacy, at [email protected].
______
Response to Written Questions Submitted by Hon. John Thune to
Hon. Michael Kratsios
Question 1. Last Congress, Senator Klobuchar and I introduced the
AI Research Innovation and Accountability Act alongside our colleagues
Senators Wicker, Hickenlooper, Capito, and Lujan. This bipartisan
legislation establishes a light-touch, pro-innovation framework that
will bring transparency, accountability, and security to the
development and operation of AI.
Do you agree that Congress must establish basic rules of the road
like the framework we have laid out in this legislation?
Will you commit to working with us on this legislation during the
119th Congress?
Answer. I look forward to working with you and your colleagues on
any legislation that promotes and protects continued American
leadership in AI innovation.
______
Response to Written Question Submitted by Hon. Marsha Blackburn to
Hon. Michael Kratsios
Question 1. An important issue for Tennessee as it relates to AI is
what happens with patent and copyrighted content, whether it is from an
entertainer, author, a publisher, someone involved in online sales
training or online human resources training, or religious leaders who
have sermons or prayers on which they hold a copyright. In Nashville,
we talk about fair use as being a fairly useful way to steal
copyrighted content. We see that happen repeatedly. When I wrote an
Amicus Brief on the correctly decided Warhol vs. Goldsmith case, I
argued in favor of a narrowed application of the fair use doctrine.
When it comes to permissible training materials for LLMs, clearly,
there is a difference of opinion. This is a reason why states have
played such an important role in stepping forward, as Congress has
proven incapable of passing legislation to protect content creators.
Making certain that copyrights, patents, and trademarks are not
infringed is vital to our creative community. I had a group in my
office recently, who highlighted concerns about this issue of
unauthorized training. They are also looking at what is happening with
OpenAI's AI-generated movie, Critterz. This full-length, box-office
movie will be made almost entirely using AI, including AI-generated
music. By allowing LLMs to train on copyrighted materials, this takes
away the creative community's Article 1, Section 8, Clause 8
constitutional right to exercise their craft.
I would like to have your response on addressing these vital
issues. How do you plan to approach firewalling copyrighted content in
training LLMs while still allowing training on current events or
conversations?
Answer. At the launch of the AI Action Plan, the President stated
that AI developers should be allowed to use the facts and information
from content like books or articles to develop general purpose AI
models without navigating complex copyright negotiations. In his
speech, President Trump also recognized the distinction between
training AI systems using the facts and information from copyrighted
works versus having the AI's outputs copy or plagiarize a creator's
work. The Administration is closely tracking ongoing court cases
relating to AI training on copyrighted materials.
______
Response to Written Questions Submitted by Hon. Maria Cantwell to
Hon. Michael Kratsios
AI Skilling and Workforce Development
One pillar of the AI Action Plan is empowering American workers
through AI education and job training. The plan specifically calls for
initiatives like AI-focused apprenticeships and skilled trades training
(e.g., more electricians and advanced HVAC technicians to build AI
infrastructure).
Question 1. What progress has been made on these workforce
programs?
Answer. It is critical that the U.S. has the domestic workforce
needed to support growing demands for AI infrastructure. America's
Talent Strategy, co-released by the Department of Labor (DOL),
Department of Commerce (DOC), and Department of Education (ED), focuses
precisely on developing these workforce programs. As part of the
implementation of the Talent Strategy, DOL has announced at least $30
million for the Industry-Driven Skills Training Fund grant program
administered by DOL's Employment and Training Administration. These
grants will help train American workers for jobs in AI and other
emerging and high demand areas. As of August 2025, DOL has identified
over 120 AI-centric Registered Apprenticeship programs and over 2,045
apprentices in over 45 AI-centric occupations. Additionally, DOL has
confirmed that there are over 350,000 active apprentices in AI
Infrastructure Registered Apprenticeship programs. The Administration
is working to increase the number of active apprentices within these
occupations, in alignment with the AI Action Plan and America's Talent
Strategy. Furthermore, the National Science Foundation (NSF) has taken
steps to strengthen AI-focused career and skill building learning
opportunities for high school students, including curriculum
development, dual enrollment, micro-credentials, and hands-on
experiential learning to prepare America's workforce for the future.
Question 2. Is the administration, in partnership with the
Department of Labor and industry partners, planning to roll out new
training curricula or apprenticeships in the regions where AI data
centers and projects are expanding?
Answer. In response to the President's AI Action Plan and Executive
Order on Advancing Artificial Intelligence Education for American
Youth, the Administration is working with industry and academia to
prepare workers to fill critical AI roles across the country. DOL, DOC,
and ED co-released America's Talent Strategy, which includes a focus on
scaling apprenticeships to meet AI infrastructure workforce demands. In
alignment with this goal, the DOL recently announced nearly $84 million
in grants to 50 states to increase the capacity of Registered
Apprenticeship programs. My staff are working with DOL on implementing
the AI Action Plan's education and workforce training recommendations.
Further, the AI Action Plan provides an important and meaningful
focus on training a skilled workforce to build, operate and maintain an
AI infrastructure. We know there are hundreds of thousands of jobs that
will be created in the coming years, but too few workers to fill those
jobs.
Question 3. Are current Federal programs and funding sufficient to
meet these needs?
Answer. Targeted Federal programs, including public-private
partnerships, apprenticeships, industry-driven training programs, and
state and local-led workforce initiatives, can help meet the growing
workforce demands needed to support domestic AI infrastructure. The AI
Action Plan recommends refocusing existing Federal programs and working
closely with industry, educators, and state and local governments to
identify gaps in employment pipelines and train new workers to meet
industry demand.
Support for CAISI
A positive aspect of the AI Action Plan was the emphasis on NIST
CAISI, the Center for AI Standards and Innovation. Last year, the House
and Senate both passed bills out of committee to authorize an AI
institute at NIST focused on AI standards and innovation. I'm glad the
Administration and Plan are preserving this institute. In order for
the U.S. to lead, however, we need to commit to fully funding and
resourcing it.
Question 4. Mr. Kratsios, can you commit to supporting
Congressional codification of the NIST CAISI to develop voluntary
standards and testbeds related to national security for AI frontier
models?
Answer. The AI Action Plan recommends investments in the
development of AI testbeds that span many sectors, including
agriculture, transportation, and healthcare. NIST plays a role in
leveraging its technical expertise to advance AI measurement science
and sector-specific standards that will promote secure AI innovation
and accelerate broad AI adoption across sectors. You have my commitment
to work with you and your colleagues on legislation as it relates to
NIST and other Committee priorities.
Energy Needs and R&D for Fusion Energy
The growing demand for electricity to power AI data centers is
staggering. By some estimates, global electricity demand from data
centers is projected to more than double by 2030, exceeding 945
terawatt-hours (TWh) and straining electric grids and energy
providers. A potentially limitless source of clean and inherently safe
energy is fusion, a source that could provide vast amounts of
predictable baseload power to increase the reliability of our energy
grid. Analysts at Bloomberg estimate that this game-changing technology
could achieve a potential $40 trillion valuation. Washington State has
become a fusion energy hub with billions of dollars invested and three
prominent start-up companies looking to deploy demonstration projects.
Question 5. How would the attributes of fusion energy help the
reliability requirements of the grid for AI?
Answer. Commercial fusion can unlock a new source of reliable
energy to help meet the growing energy needs of the grid and data
centers across the United States. Nuclear fusion is an important
priority for American energy dominance.
Question 6. How can the government partner with the private sector
to scale fusion technology as it continues to develop?
Answer. Milestone-based funding, prizes, challenges, public-private
partnerships, and other novel funding mechanisms can incentivize
commercial development of fusion technology. Since the first Trump
Administration, the DOE has focused on improving commercialization of
domestic fusion research. For example, DOE recently announced $134
million in funding for Fusion Innovation Research Engine (FIRE)
Collaboratives and the Innovation Network for Fusion Energy (INFUSE),
which encourages collaboration among the fusion industry, DOE national
labs, and universities.
Public Investment in Science
Government investment in fundamental science has been the backbone
of American success in technology and innovation. If the United States
wants to outcompete foreign adversaries, it cannot slash funding for
the National Science Foundation, National Institute of Standards and
Technology (NIST), Department of Energy labs, or STEM education
programs that power the AI workforce and ecosystem.
Question 7. What impact will cuts to Federal funding for science
and research at universities have on U.S. competitiveness in AI?
Answer. The Trump Administration took long-needed action to re-
focus the Federal research enterprise towards areas of national
strategic priority and geopolitical importance. The President has taken
extensive executive actions to create a more conducive environment for
American innovation, unlock investments in AI infrastructure at home
and abroad, advance AI for education, leverage AI for developing cures
for pediatric cancer, and much more. These actions remove barriers for
innovators to promote American leadership in AI and accelerate the
export of the American AI technologies, positioning the United States
to dominate in this critical technology and to win the AI race.
Notably, the President's FY 2026 budget proposal preserved funding
for programs such as AI and quantum. In the One Big Beautiful Bill Act
(OBBBA), the President committed $150 million in new funding for
the DOE national labs to curate, structure, and preprocess scientific
data for use in AI and machine learning models. This data will be
critical in pushing forward next generation computational analysis,
accelerating scientific discovery, and further solidifying U.S.
leadership in AI and computational science.
In addition, OSTP and the Office of Management and Budget (OMB)
recently released the annual memorandum on the Administration's FY27
Research and Development Budget Priorities. The memorandum lays out a
path to unrivaled American dominance in critical and emerging
technologies, with AI as its first priority. It directs Federal
agencies to make significant investments in foundational and applied AI
research, critical digital infrastructure, and robust evaluation
standards, aiming to advance breakthroughs in AI architecture,
interpretability, security, and capabilities. It further strengthens
U.S. competitiveness by fostering close collaboration with industry and
academia to promote commercialization and workforce development, expand
STEM education pathways, facilitate broad adoption of AI-enabled tools,
and support resilient critical infrastructure.
Bayh-Dole Act
Congress enacted the Bayh-Dole Act as a key piece of innovation
policy. It allows universities and nonprofit institutions to retain
title to federally funded inventions and license them to private
companies. This framework has been critical to creating thousands of
startups, new industries, and high-wage jobs in the United States.
Question 8. Do you agree that maintaining the Bayh-Dole model,
where universities and entrepreneurs can commercialize federally funded
research without the Federal government taking a large share of their
revenue, is essential to sustaining America's innovation ecosystem?
Answer. Basic research is critical to the technological revolutions
which may occur decades in the future. Furthermore, it is important to
incentivize commercialization of basic research and technology transfer
where it may have promising applications in American industries.
______
Response to Written Questions Submitted by Hon. Tammy Baldwin to
Hon. Michael Kratsios
Question 1. At the end of last year, this Committee held a hearing
on how AI is enabling and exacerbating the proliferation and
sophistication of scams. In 2023, Wisconsinites lost $92 million to
fraud and scams, and the problem is only getting worse. Representative
Jamie Raskin and I have been leading an effort to direct the FTC to
develop a comprehensive online resource that will serve as a
centralized resource page for victims of financial scams and frauds.
What is the Trump administration doing to protect Americans against
AI enabled scams?
Answer. Thank you for recognizing the role the FTC has to play in
protecting Americans, both young and old, from scams and fraud. AI is
not exempt from consumer protection laws, and law enforcement
authorities at the local, state, and Federal levels are able to enforce laws
addressing fraud committed with AI, just as they do for any other
medium used to commit fraud, such as e-mail or telephone. OSTP will
further efforts to help young people become more literate in AI through
the White House Task Force on AI Education.
Question 2. While the United States continues to focus on the
advancement of artificial intelligence, it is also essential that we
continue to invest in the development and advancement of other emerging
technologies. Quantum mechanics and computing have the potential to
simulate and solve problems too complex for classical computers. Quantum
also has the potential to work hand in hand with artificial
intelligence to continue to enhance its capabilities.
What is the Trump administration doing to leverage the development
of other emerging technologies such as quantum computing to advance our
development of artificial intelligence?
Answer. During his first term, President Trump was the first
president to prioritize AI and quantum in his budget request to
Congress. President Trump launched the National AI Initiative and
signed the National Quantum Initiative Act into law, laying the
foundation for continued American leadership in these fields. The
President has continued to demonstrate his commitment to these
technologies with his FY26 budget request, which includes robust
funding for AI and quantum. Furthermore, OSTP and OMB released their
annual memorandum outlining the Administration's FY27 Research and
Development Budget Priorities. This memorandum prioritizes research on
quantum and AI and calls out the interaction between the two fields.
______
Response to Written Questions Submitted by Hon. John Hickenlooper to
Hon. Michael Kratsios
AI-generated Content and Transparency
NIST has been conducting scientific research into new methods to
identify or detect content generated by AI, such as texts, images,
videos, and more. AI is a powerful tool capable of creating content
that appears real. Methods such as watermarks, content
provenance, and labels are being evaluated for their accuracy.
Question 1: Director Kratsios, from your perspective, what does the
road ahead look like for scientific research into AI-generated content?
Answer. OSTP and OMB recently released the annual memorandum that
outlined the Administration's FY27 Research and Development Budget
Priorities. This memorandum calls out the importance of foundational
and early-stage applied research in AI, including in interpretability,
controllability, and adversarial robustness.
In addition, the President signed the TAKE IT DOWN Act into law,
which targets sexually explicit, non-consensual deepfakes and creates
market dynamics to develop tools to detect certain categories of AI-
generated content.
Question 2: Director Kratsios, when do you think it will be
feasible for technical standards to be developed to promote
transparency in synthetic content? Do you foresee any technical
capabilities, research barriers, or technological limitations delaying
the development of technical standards around synthetic content?
Answer. The AI Action Plan recommends actions to combat synthetic
media in the legal system, including issuing guidance to explore a
deepfake standard and filing formal comments on proposed deepfake-related
additions to the Federal Rules of Evidence. It also recommends
developing NIST's deepfake evaluation program into a formal guideline
and companion voluntary forensic benchmark.
AI and Copyright Protections
Copyright protections for creators' works are being actively
challenged in courts across the country. AI developers and national
security interests argue copyright protections, including the ``Fair
Use'' standard, could slow down the development of American-made AI
technologies and cede global leadership in AI to competitors. During
his remarks while unveiling the AI Action Plan, President Trump said
the United States ``can't be expected to have a successful AI program
when every single article, book or anything else that you've read or
studied, you're supposed to pay for. You just can't do it because it's
not doable.''
Our hope is to balance the rights and protections of content
creators and lead the world in AI innovation.
Question 3: Director Kratsios, how do you believe we could achieve
this balance between protecting creators' rights and developing gold-
standard AI technologies?
Answer. As President Trump stated during the launch of the AI
Action Plan, AI developers should be allowed to use the facts and
information from content like books or articles to develop general
purpose models without navigating complex copyright negotiations. The
President also recognized the distinction between training AI systems
using the facts and information from copyrighted works versus having
the AI's outputs copy or plagiarize a creator's work during that
speech. The Administration is closely tracking ongoing court cases
relating to AI training on copyrighted materials.
AI Supply Chain
When Congress passed the CHIPS & Science Act, with support from
Democrats and Republicans, we committed to growing high-tech
manufacturing in the U.S., expanding our STEM workforce, and
recommitting our investment in scientific research. The CHIPS Act
incentives increase our ability to manufacture semiconductors in the
U.S. to train AI models and power data centers.
The Trump Administration has recently proposed taking government
equity in private companies that manufacture semiconductors, as well as
receiving a portion of the revenue from sales of certain semiconductors
to China.
Question 4: Director Kratsios, do you believe the CHIPS Act or the
Export Control Reform Act explicitly allow the Federal government to
take these actions? Have these actions been authorized by Congress?
Answer. The President has broad authorities when it comes to
matters of national security. I understand that the Secretary of
Commerce is implementing these actions through the appropriate
mechanisms.
Executive Orders
The White House unveiled three Executive Orders to accompany the AI
Action Plan that seek to build more data centers, reform government
procurement of AI models, and export American AI technologies
internationally.
Meanwhile, agencies across the Federal government are working to
carry out the AI Action Plan's goals.
Certain issues, such as protections for creators' copyrighted works
in AI model development, remain legally unresolved.
Question 5: Director Kratsios, do you believe the White House will
need to issue any new Executive Orders, including on issues such as
copyright, to continue implementing the AI Action Plan? Yes or no?
Answer. OSTP continues to coordinate interagency action to
implement the extensive recommendations in the AI Action Plan and the
President's Executive Orders on AI.
Data Centers
As the demand for AI applications skyrockets, so does the demand
placed on our electric grid.
We should make targeted investments to modernize our electric grid,
expand transmission line capacity, and ensure reliable and affordable
sources of power. However, we need to ensure that we have the workforce
to be able to build this infrastructure.
Question 6: Director Kratsios, how can we improve access to a
skilled workforce for building out data centers?
Answer. It is critical that the U.S. has the domestic workforce
needed to support growing demands for AI infrastructure. America's
Talent Strategy, co-released by DOL, DOC, and ED, focuses precisely on
developing these workforce programs. As part of the implementation of
the Talent Strategy, DOL has announced a $30 million Industry-Driven
Skills Training Fund grant program administered by DOL's Employment and
Training Administration. DOL also recently announced nearly $84 million
in grants to 50 states to increase the capacity of Registered
Apprenticeship programs. In addition, NSF continues to support programs
to upskill the talent necessary to manage and secure large-scale data
infrastructure across our country.
Federal programs, including public-private partnerships,
apprenticeships, industry-driven training programs, and state and
local-led workforce initiatives, can meet the growing workforce demands
needed to support domestic AI infrastructure.
Question 7: Director Kratsios, do you have an update on the
Administration's plans to potentially site and construct AI data
centers on Federal land?
Answer. The Trump Administration has been involved in ongoing work
to accelerate the development of AI infrastructure through siting and
constructing AI data centers on Federal Lands. In April, DOE issued a
Request for Information to inform data center siting and construction
on Federal Lands and received enormous interest. In July, DOE announced
the four selected sites: Idaho National Laboratory, Oak Ridge
Reservation, Paducah Gaseous Diffusion Plant and Savannah River Site.
Furthermore, as part of the process for finding private sector partners
to manage the projects, DOE released Requests for Applications to build
and power AI data centers at Idaho National Laboratory, the Oak Ridge
Reservation, and the Savannah River Site this September.
Tariffs on Semiconductors
The Trump Administration has stated it intends to impose massive
tariffs on imports of semiconductors into the U.S.
The CHIPS & Science Act provided targeted incentives for companies
to build and expand manufacturing here in the U.S.
Semiconductors are a fundamental building block for developing AI
models and other advanced technologies.
Question 8: Director Kratsios, what potential impact would new
tariffs on semiconductor imports have on the United States'
competitiveness in AI?
Answer. The Secretary of Commerce is working to properly channel
resources from the CHIPS & Science Act to expand domestic chip
fabrication capacity and advanced research, making America the home of
future AI breakthroughs. President Trump has cited these policies as
central to achieving American supremacy in the AI race, establishing
this as both an economic and national security imperative.
Secure AI by Design
As organizations rapidly adopt AI to boost efficiency and growth,
the surge in usage has significantly expanded the attack surface. A
report released by the cybersecurity firm Palo Alto Networks indicates
an 890 percent increase in GenAI traffic. This growth brings new
security risks, requiring organizations to identify AI use, assess
vulnerabilities, and implement real-time protections. The U.S.
government's AI Action Plan reinforces this need, urging secure,
resilient AI systems capable of detecting threats like data poisoning
and adversarial attacks.
Question 9: Director Kratsios, could you explain with detail how
OSTP is considering integrating these secure-by-design AI principles?
Additionally, what collaborative efforts with the private sector are
underway to strengthen the secure development and deployment of AI?
Answer. The AI Action Plan recommends a range of different actions
to advance secure-by-design AI technology. Additionally, the
Administration is prioritizing R&D that enables the secure development
and deployment of AI through the recently released OSTP and OMB annual
memorandum on FY27 Research and Development Budget Priorities,
including fundamental work on AI interpretability, controllability, and
adversarial robustness.
Question 10: Director Kratsios, in what ways does the AI Action
Plan ensure the safe and secure use of AI systems within Federal
networks--particularly in protecting against sophisticated cyber
threats, data breaches, and unauthorized access?
Answer. The AI Action Plan highlights the importance of secure-by-
design AI to minimize the marginal security risk contributed by
deploying AI systems in Federal networks and calls for the development
of standards for high-security AI data centers. It recommends that the
General Services Administration (GSA) create and manage an AI
procurement toolbox, in collaboration with OMB, to ensure that procured
AI systems comply with relevant privacy, data governance, and
transparency laws. Further, it recommends that NIST partner with
industry to establish standards and best practices to ensure impacts
are minimized and response is timely. We will continue to work with the
relevant agencies to strengthen existing cyber defenses and update
security practices to prepare for AI-specific cybersecurity threats.
U.S. Leadership in Global AI Governance
Under the AI Action Plan, the U.S. would meet the global demand for
AI by exporting its full AI technology stack, including hardware,
software, applications, and standards, to key markets overseas.
Question 11: Director Kratsios, do you support robust U.S.
engagement in key international organizations, including the UN, OECD,
G7, G20, ITU, and ICANN, for shaping the global conversation around AI?
How would you prioritize these fora, and what goals should the U.S. be
pursuing there?
Answer. As I emphasized at the United Nations Security Council
meeting, we totally reject efforts by international bodies to assert
centralized control and global governance of AI. We are focused on
establishing American AI as the global gold standard and enabling
allies and trade partners to build their own sovereign AI ecosystems
with secure American technology. OSTP continues to work with agencies
across the Federal government to deliver on the President's Executive
Order 14320 to promote the export of the American AI technology stack.
AI & Advanced Communications
The convergence of AI and wireless infrastructure will have massive
implications for the global telecommunications landscape. With AI-
native 6G networks powering millions of devices and running critical AI
applications, who builds and operates these networks is more important
than ever.
Question 12: Director Kratsios, how can we leverage an American AI-
native 6G stack to compete with Huawei in emerging global markets?
Answer. Leveraging an American, AI-native 6G stack means
accelerating secure domestic innovation, coordinating government and
private sector research and development, and driving global standards
to outcompete Huawei, especially in emerging markets. American 6G
networks would dramatically increase the resilience of our critical
infrastructure and protect us from foreign surveillance or sabotage--a
risk inherent with Huawei-backed systems. As outlined in the AI Action
Plan, the removal of regulatory barriers will help the United States
deploy and export next-generation telecommunications infrastructure
faster than our competitors, ensuring that U.S. companies can scale up
6G deployments at home and abroad. This Administration will continue to
prioritize the promotion of the American technology stack around the
world.