[Joint House and Senate Hearing, 118th Congress]
[From the U.S. Government Publishing Office]
S. Hrg. 118-365
ARTIFICIAL INTELLIGENCE AND ITS POTENTIAL
TO FUEL ECONOMIC GROWTH AND IMPROVE
GOVERNANCE
=======================================================================
HEARING
BEFORE THE
JOINT ECONOMIC COMMITTEE
OF THE
CONGRESS OF THE UNITED STATES
ONE HUNDRED EIGHTEENTH CONGRESS
SECOND SESSION
__________
JUNE 4, 2024
__________
Printed for the use of the Joint Economic Committee
Available via www.govinfo.gov
__________
U.S. GOVERNMENT PUBLISHING OFFICE
56-240 WASHINGTON : 2024
-----------------------------------------------------------------------------------
JOINT ECONOMIC COMMITTEE
[Created pursuant to Sec. 5(a) of Public Law 304, 79th Congress]
SENATE
Martin Heinrich, New Mexico, Chairman
Amy Klobuchar, Minnesota
Margaret Wood Hassan, New Hampshire
Mark Kelly, Arizona
Peter Welch, Vermont
John Fetterman, Pennsylvania
Mike Lee, Utah
Tom Cotton, Arkansas
Eric Schmitt, Missouri
J.D. Vance, Ohio

HOUSE OF REPRESENTATIVES
David Schweikert, Arizona, Vice Chairman
Jodey C. Arrington, Texas
Ron Estes, Kansas
A. Drew Ferguson IV, Georgia
Lloyd K. Smucker, Pennsylvania
Nicole Malliotakis, New York
Donald S. Beyer Jr., Virginia
David Trone, Maryland
Gwen Moore, Wisconsin
Katie Porter, California
Jessica Martinez, Executive Director
Ron Donado, Republican Staff Director
C O N T E N T S
----------
Opening Statements of Members
Page
Hon. Martin Heinrich, Chairman, a U.S. Senator from New Mexico... 1
Witnesses
Brian J. Miller, M.D., American Enterprise Institute, Washington,
DC............................................................. 2
Mr. Adam Thierer, Resident Senior Fellow, Technology and
Innovation, R Street Institute, Washington, DC................. 4
Dr. Ayanna Howard, Dean of Engineering, The Ohio State
University, Columbus, OH....................................... 6
Dr. Jennifer Gaudioso, Director, Center for Computing Research,
Sandia National Laboratory, Albuquerque, NM.................... 7
Submissions for the Record
Prepared Statement of Hon. Martin Heinrich, a U.S. Senator from
New Mexico..................................................... 26
Prepared Statement of Brian J. Miller, M.D., American Enterprise
Institute, Washington, DC...................................... 29
Prepared Statement of Mr. Adam Thierer, Resident Senior Fellow,
Technology and Innovation, R Street Institute, Washington, DC.. 39
Prepared Statement of Dr. Ayanna Howard, Dean of Engineering, The
Ohio State University, Columbus, OH............................ 65
Prepared Statement of Dr. Jennifer Gaudioso, Director, Center for
Computing Research, Sandia National Laboratory, Albuquerque, NM 69
Questions for the Record Submitted to Brian Miller, M.D. from
Vice Chairman David Schweikert and Dr. Miller's response....... 78
Questions for the Record Submitted to Adam Thierer from Vice
Chairman David Schweikert...................................... 89
Questions for the Record Submitted to Dr. Howard from Senator
Mark Kelly and Dr. Howard's response........................... 90
Questions for the Record Submitted to Dr. Gaudioso from Senator
Mark Kelly and Dr. Gaudioso's response......................... 94
ARTIFICIAL INTELLIGENCE AND ITS POTENTIAL TO FUEL ECONOMIC GROWTH AND
IMPROVE GOVERNANCE
----------
TUESDAY, JUNE 4, 2024
United States Congress,
Joint Economic Committee,
Washington, DC.
The hearing was convened, pursuant to notice, at 2:30 p.m.,
in 216 Hart Senate Office Building, before the Joint Economic
Committee, Vice Chairman David Schweikert, presiding.
Senators: Heinrich, Klobuchar, Schmitt, Hassan.
Representatives: Schweikert, Beyer.
Staff: Alexander Schunk, Kole Nichols, Tess Carter, Lia
Stefanovich, Ron Donado, Colleen Healy, Jeremy Johnson, and
Jessica Martinez.
Vice Chairman Schweikert. (off mic)--out there as we sort
of build the record of how do we do policy in the future. All
right.
I would like to introduce our four distinguished witnesses.
Dr. Brian J. Miller is a practicing hospitalist and Professor
of Medicine and Business at Johns Hopkins University. Dr.
Miller is also a non-resident fellow at the American Enterprise
Institute, where his research focuses on health care
competition, FDA public policy, health policy and the
integration of AI in the health care sector.
Then there is Mr. Adams. Mr. Adams here? Thierer. Mr.
Thierer is a senior fellow for the Technology and Innovation
Team at the R Street Institute. Mr. Thierer also serves as a
commissioner on the U.S. Chamber of Commerce's Artificial
Intelligence Commission on Competitiveness, Inclusion and
Innovation, where he advises on a variety of issues, including
Internet governance, telecommunication policy and AI. Senator
Heinrich.
Chairman Heinrich. Thank you for pulling this hearing
together. It should be really interesting. A number of folks
know that I have been heavily involved in these conversations,
and we have been able to really put together a surprising
amount of sort of bipartisan interest in where we think we need
to, you know, where we really think the benefits are going to
accrue from artificial intelligence and where are the places
where we have to be careful and minimize some of the risks.
So I am very much looking forward to continuing that
conversation today, and I am going to introduce our other two
distinguished witnesses. Dr. Ayanna Howard is the Dean of
Engineering at Ohio State University. Previously, she was chair
of the Georgia Institute of Technology School of Interactive
Computing in the College of Computing, as well as the founder
and director of the Human Automation Systems Lab.
Her career spans higher education, NASA's Jet Propulsion
Laboratory and the private sector. Dr. Howard is the founder
and president of the Board of Directors of Zyrobotics, a
Georgia Tech spinoff company that develops mobile therapy and
educational products for children with special needs.
She is also a fellow of the American Association for the
Advancement of Science and the National Academy of Inventors,
and was appointed to the National Artificial Intelligence
Advisory Committee.
Dr. Jennifer Gaudioso is Director of the Center for
Computing Research at Sandia National Laboratories, where she
stewards the Center's portfolio of research from fundamental
science to state-of-the-art applications. She is also the
program executive for the National Nuclear Security
Administration's Advanced Simulation and Computing Program
there at Sandia.
Previously, she served as the director of the Center for
Computation and Analysis for National Security, where she
oversaw the use of systems analysis, cybersecurity and data
science capabilities to tackle complex national security
challenges.
[The prepared statement of Chairman Heinrich appears in the
Submissions for the Record.]
Vice Chairman Schweikert. Thank you, Senator Heinrich. Let
us go ahead and hear from our witnesses and Dr. Miller,
everyone gets five minutes and then hopefully we can follow up
with questions. Dr. Miller.
STATEMENT OF BRIAN J. MILLER, MD, AMERICAN ENTERPRISE
INSTITUTE, WASHINGTON, D.C.
Dr. Miller. Thank you, Chairman Heinrich, Vice Chairman
Schweikert and distinguished members of the Committee for
allowing me to share my views on AI and its potential to fuel
economic growth and governance. I am a pragmatist, so I am
going to focus on pragmatic applications and policy questions
for the fifth of the economy that comprises health care.
As mentioned, I'm a practicing hospitalist at Hopkins, non-
resident fellow at AEI. I actually work for four regulatory
agencies, including the FDA and CMS and the FTC and FCC, and I
also serve on MEDPAC. I should note that today I am here in my
personal capacity, and my views are my own and do not represent
those of Johns Hopkins, AEI or MEDPAC.
So I just actually finished a week working in the hospital
on the night shift. It is an interesting experience. It is
seven days in a row of flying a 747 with analog controls and no
autopilot. It is not a good thing for us to have systems focused
this way across the country.
I would say actually since I first rounded in the hospitals
as a medical student 15 years ago, things have not really
changed. I do not really see a lot of change in clinical
operations and what we do, and the broader economic data
support this assertion. The Bureau of Labor Statistics tells us
that for around 25 years, the hospital industry has had flat or
declining labor productivity most years.
And demand is going up, right? People are getting sicker.
We have more elderly patients, and we have a labor shortage as
a consequence. So we are missing 78,000 registered nurses,
68,000 primary care physicians, amongst others, and also the
spending is breaking the budget, right?
So Medicare and Medicaid are $1.7 trillion or more
annually, and that crowds out other sort of transformative
investments that we want to make in things like transportation,
education, my personal favorite, space exploration.
So we have got to think differently. And so AI and
automation can help solve our productivity problem in my
industry and let us clinicians do what clinicians do best,
which is focus on the patients instead of paperwork. Patients
today face delays in diagnosis, clinical errors and tired and
fatigued clinical staff who are focused on admin tasks.
So AI is not really Terminator 3. It is also not really
Star Trek. It is an inherently practical and technical issue
for implementing it in health care. We can use it to automate
mundane administrative tasks like physician charting with
ambient AI, coding and billing. Imagine if AI were summarizing
your clinic visit as you were actually talking with the
physician, instead of them staring at the computer.
And imagine if that physician could save time from the six
hours a day spent in charting. This is actually being tested
today and my colleagues at other hospitals are part of these
pilots. It can also augment clinical labor. It could assist
with mammography interpretation, melanoma diagnosis, improving
efficiency and accuracy, identifying areas of concern in
advance of physician review.
It can automate other elements of clinical practice,
reading pathology slides, looking at EEGs to check for seizures
and other neurologic problems. And then a lot of folks are
really worried about the labor impact, and I have to say that
with the average day for a primary care physician estimated at
26.7 hours if they complete all the tasks they are supposed to,
there is plenty of room for us to have software and automation
pick this up.
For consumers, the win is huge. So if you are a consumer
and you have a chronic disease, the burden is significant.
Being a diabetic, you have to check your sugars, you have to
give yourself a bunch of shots, you have to count your carbs,
watch what you eat.
It is not easy. Imagine if we could create integrated
systems with glucometers to check glucose, insulin pumps and we
could take that burden away from the patient, so they could
just focus on going about their life? From a policy
perspective, we have to be careful not to over-regulate. So
right now, this is--and I am a car guy. This is like putting
airbags in cars in 1920 if we go too far.
We should be practical and use existing authorities that we
have at agencies like the FDA and the Office of the National
Coordinator for Health IT, and we want to facilitate
permissive bottom-up innovation from clinicians, nurses,
engineers and others, and we want that to come from the
bedside.
We should also aim to pay for and drive competition amongst
new and old care models, between humans and technology, and we
want rapid cycles of stacked, incremental innovation to transform
health care. We cannot tax and spend our way out of this, so we
must innovate and instead remember why America is great. Thank
you.
[The prepared statement of Dr. Miller appears in the
Submissions for the Record.]
STATEMENT OF ADAM THIERER, RESIDENT SENIOR FELLOW, TECHNOLOGY
AND INNOVATION, R STREET INSTITUTE, WASHINGTON, D.C.
Mr. Thierer. Chairman Heinrich, Vice Chairman Schweikert,
members of the Committee, thank you for the invitation to
participate in this important hearing on artificial
intelligence and its potential to fuel economic growth and
improve governance.
My name is Adam Thierer, and I'm a senior fellow at the R
Street Institute, where I focus on emerging technology issues.
I also recently served as a commissioner on the U.S. Chamber of
Commerce Commission on Artificial Intelligence,
Competitiveness, Inclusion and Innovation.
Today I will discuss three points relevant to this hearing.
First, AI and advanced computational technologies can help fuel
broad-based economic growth and sectoral productivity, while
also improving consumer health and welfare in important ways.
Second, to unlock these benefits, the United States needs
to pursue a pro-innovation AI policy vision that can help
bolster global competitive advantage and geopolitical security.
Third, we can advance these goals through an AI opportunity
agenda that includes a learning period moratorium on burdensome
new forms of AI regulations. I will address each point briefly,
but I have included three appendices to my testimony for more
details.
AI is set to become the most important general purpose
technology of our era, and AI could revolutionize every segment
of the economy in some fashion. The potential exists for AI to
drive explosive economic growth and productivity enhancements.
While predictions vary, analysts forecast that AI could
deliver trillions in additional global economic activity, and
significantly boost annual GDP growth. This would be over and
above the $4 trillion of gross output that the U.S. Bureau of
Economic Analysis says that the digital economy already
accounted for in 2022.
But what really matters is what AI means to every American
personally. AI is poised to revolutionize health outcomes in
particular. AI is already helping with early detection and
treatment of cancers, strokes, heart disease, brain disease,
sepsis and other ailments. AI is also helping address organ
failure, paralysis, vision impairments and much more. The age
of personalized medicine will be driven by AI advancements.
AI can help make government more efficient as well. Ohio
Lieutenant Governor Jon Husted recently used an AI tool to
help sift through the state's Code of Regulations and eliminate
2.2 million words of unnecessary and outdated regulations.
California Governor Gavin Newsom just announced an effort to
use generative AI tools to improve public services and cut
eight percent from the state's government operations budget.
And regulators are already using AI to facilitate
compliance with existing policies, such as post-market medical
device surveillance. AI also holds the potential to achieve
administrative savings for federal health insurance programs,
or better yet, reduce the number of people dependent on them by
identifying and treating ailments earlier.
There is an important connection as well between AI and
broader national objectives. A strong technology base is a key
source of strength and prosperity, so it is essential we do not
undermine innovation and investment as the next great
technology race gets underway with China and the rest of the
world.
Luckily, U.S. innovators are still in the lead. Had a
Chinese operator launched a major generative AI model first, it
would have been a veritable Sputnik moment for America. Still,
China has made its ambitions clear to become a global leader
in advanced computation by
2030, and it has considerable talent, data and resources to
power those innovations.
Experts argue that China's whole of society approach is
challenging America's traditional advantages in advanced
technology. We therefore need an innovation policy for AI that
will not only strengthen our economy and provide better
products and jobs, but also bolster national security and allow
our values of pluralism, personal liberty, individual rights
and free speech to shape global information markets and
platforms. If by contrast fear-based policies impede America's
AI developments, then China wins.
To achieve these benefits that AI offers and meet the
rising global competition, America needs what I call an AI
Opportunity Agenda. An AI Opportunity Agenda begins with
reiterating the freedom to innovate as a cornerstone of
American technology policy, and the key to unlocking the
enormous potential of our nation's entrepreneurs and workers.
As part of this agenda, Congress should craft a learning
period moratorium on new AI proposals, such as AI-specific
bureaucracies, licensing systems or liability schemes, all of
which would be counterproductive and undermine our nation's
computational capabilities.
In addition, this moratorium should consider preempting
burdensome state and local regulatory enactments that conflict
with our National AI Policy Framework. Next, Congress should
require our government's existing 439 federal departments and
sub-departments to evaluate their current policies towards AI
systems, with two purposes in mind. First, to ensure that they
are not overburdening algorithmic systems with outdated
policies and second, to determine how existing rules and
regulations are capable of addressing the concerns that some
have raised about AI.
Taking inventory of existing rules and regulations can then
allow policymakers to identify any gaps that Congress should
address using targeted remedies. Finally, an AI Opportunity
Agenda requires openness to new talent and competition. Experts
warn that with a talent war brewing between the U.S. and
China, China is moving ahead in some important ways, and we
must take steps to attract and retain the world's best and
brightest.
In sum, America's AI policy should be rooted in patience
and humility, instead of a rush to over-regulate based on
hypothetical worst-case thinking. We are still very early in
the life cycle. There is still no consensus on even how to
define the term, let alone legislate beyond establishing
definitions.
I thank you for holding this hearing and for your
consideration of my views. I look forward to any questions you
may have.
[The prepared statement of Mr. Thierer appears in the
Submissions for the Record.]
STATEMENT OF DR. AYANNA HOWARD, DEAN OF ENGINEERING, THE OHIO
STATE UNIVERSITY, COLUMBUS, OHIO
Dr. Howard. Chairman Heinrich, Vice Chairman Schweikert and
members of the Joint Economic Committee, thank you for this
opportunity to participate in today's hearing on artificial
intelligence, and its potential for job growth and improved
governance. It is an honor to be with you today.
My comments in this testimony are focused on the national
importance of AI literacy, and its role in augmenting the
current and future workforce talent pool, as well as the
government's role in enabling this to happen. While the demographics
of the U.S. are changing, these changes are not reflected in
the diversity of students pursuing degrees related to AI,
engineering and computer science.
According to the 2023 World Economic Forum Future of Jobs
report, AI continues to shift the skills that are needed within
the workforce, in some cases creating new jobs, augmenting old
jobs and eliminating other jobs. The AI talent shortage is thus
not just a U.S. problem, and buying outside talent is no longer a
viable option to solve this issue.
Too often though, we disregard our untapped talent pools.
Organizations tend to over-index on hiring new talent with
needed skills versus upskilling their current workforce. As an
educator, I have witnessed bright students who, because of gaps
in their high school curricula, leave the engineering major
because they struggle when they take their first discipline-
specific engineering course.
Yet when we have instituted enrichment programs such as
Preface and Accelerate in the College of Engineering at Ohio
State, we have seen quantifiable growth in student retention
and graduation rates in engineering. There is thus no reason
beyond intentionality and resources why organizations,
government agencies and educational institutions cannot
institute similar AI training and literacy programs within
their own organizational borders.
There has been some movement in Congress to expand the
Digital Equity Act into an AI Literacy Act, but there needs to
be more. As a technology researcher and college dean, I also
dabble a bit in policy with respect to AI and regulations. I
think policy would be critical to building trust.
Policies and regulations allow for equal footing by
establishing expectations and ramifications if companies or
other governments violate them. Now some companies will
disregard the policies and just pay the fines, but there is
still some concept of a consequence.
Right now, there is a lot of activity around AI
regulations. There is the European Union AI Act, which
Parliament just adopted in March 2024. There are draft AI
guidelines that were released by the Japanese government, and
slightly different proposals in the U.S., including President
Biden's AI executive order.
There is state-specific activity too. Over the past five
years, it has been documented that 17 states have enacted 29
bills that focus on some aspect of AI regulations. In fact, on
June 11th of this month, I will be participating in an AI symposium
at the Ohio State House, which brings academic leaders,
policymakers and industry experts to talk about the challenges
and opportunities that AI poses for Ohio's universities.
But this practice of each state coming up with their own
rules for regulating AI, it will continue to happen if AI bills
are not being passed at a federal level, and that is a problem.
I believe we have a lot of room for improvement in making sure
that people not only understand technology and the
opportunities it provides, but also the risks that it creates.
With new federal regulations, more accurate systems and
increased AI literacy training and upskilling for the untapped
labor market, this can happen. The intersection of the
country's growing dependence on advanced AI technologies,
coupled with the clear shortage of AI talent, is fast becoming
a national security issue that must be addressed urgently. In
2021, Secretary of Defense Lloyd Austin emphasized in a speech
that sophisticated information technologies, including
artificial intelligence, will be key differentiators in future
conflicts.
In the U.S. though, we are at risk: we do not have
enough talent trained with the AI literacy that is
needed for advancing emerging technologies, critical to
maintaining American leadership. If we are not careful, we
might be living another 1957 Sputnik moment.
Today, with nearly every aspect of life evolving to being
coupled to AI, the U.S. cannot afford to sit back and wait for
an AI-based crisis to hit. We are at a crossroads. The U.S.
must make an equivalently bold investment in growing the AI
talent pool, to help protect democracy, citizens' quality of
life and the overall health of the nation.
I want to thank you for this opportunity to participate in
this important hearing, and I appreciate the Committee's
attention to this topic, and look forward to answering your
questions. Thank you.
[The prepared statement of Dr. Howard appears in the
Submissions for the Record.]
STATEMENT OF DR. JENNIFER GAUDIOSO, DIRECTOR, CENTER FOR
COMPUTING RESEARCH, SANDIA NATIONAL LABORATORY, ALBUQUERQUE,
NEW MEXICO
Dr. Gaudioso. Chairman Heinrich, Vice Chairman Schweikert
and distinguished members of the Committee, thank you for the
opportunity to testify today on the crucial role of the
national labs in driving AI innovations.
Doing AI at the frontier and at scale is crucial for
maintaining competitiveness and solving complex global
challenges. Today, I want to emphasize two key points about the
national labs can and should contribute to Frontier AI at
scale.
First, the role of the national labs in accelerating
computing innovations through partnerships, and second, the role
of the national labs in critical AI advances aligned with our
national interest to date and going forward. But first, let me
provide a brief overview of Sandia National Labs to provide
context for the rest of my testimony.
Sandia is one of three research and development labs of the
U.S. Department of Energy, National Nuclear Security
Administration. Our roots go back to World War II and the
Manhattan Project. Throughout its 75-year history as a multi-
disciplinary national security engineering laboratory, Sandia's
primary mission has been to ensure the U.S. nuclear arsenal is
safe, secure and reliable, and can fully support our nuclear
deterrence policy.
Importantly, there is strategic synergy and interdependence
between Sandia's core mission and its capabilities-based
science and engineering foundations, because breakthroughs in
one area beget discoveries in others in a cycle that pushes
breakthroughs and fuels advancements.
For decades, the Department of Energy National Labs have
been pioneering breakthroughs in high performance computing
through strong public-private partnerships. This collaborative
approach has greatly enhanced America's overall
competitiveness.
As Mike Schulte from AMD Research said, ``One of the key
take-aways is how impactful the Forward programs were on our
overall high performance computing, plus AI competitiveness. We
not only created great systems for the Department of Energy,
but in general it greatly enhanced U.S. overall competitiveness
in high performance computing AI, and energy efficient
computing.''
Another powerful example is our recent tri-lab partnership
with Cerebras Systems that I discussed in my written testimony.
Let me expand upon the impact of that partnership by sharing
the latest results.
Funded by NNSA, the team achieved a major breakthrough
using the Cerebras wafer-scale engine to run molecular dynamics
simulations 179 times faster than the world's leading
supercomputer. This required innovations in both hardware and
software. This remarkable advancement has the potential to
revolutionize material science and drive scientific discoveries
across various domains.
For example, renewable energy experts will now be able to
optimize catalytic reactions and design more efficient energy
storage systems by simulating atomic scale processes over
extended durations. This partnership exemplifies how to open up
new frontiers in scientific research, potentially transform
industries and address critical global challenges while pushing
the boundaries of AI and computing technologies.
The DOE National Labs have also researched AI for decades,
with a focus on addressing critical challenges for the nation.
Recently, ten of these laboratories, including Sandia,
showcased their work at the AI Expo for National
Competitiveness in Washington, D.C. At the Expo, the labs
highlighted their contributions to AI research and their
ability to contribute to the frontiers of science and solve
national energy and security challenges.
The labs are developing reliable and trustworthy AI-based
solutions for critical areas such as nuclear deterrence
engineering, national security programs, non-proliferation,
energy and homeland security needs and advanced science and
technology. Pushing AI to the frontier and scaling it through
the Department of Energy's Frontiers of AI for Science,
Security and Technology initiative known as FASST, will
maintain U.S. competitiveness and solve global challenges.
The national labs' long history driving computing
innovations, coupled with our strategic AI research focused on
key applications, makes DOE and the labs invaluable partners
for realizing AI's full potential through secure, trustworthy
and high performance systems.
In New Mexico, we are working with our premier institutions
and industrial partners in the state to finalize the New Mexico
AI Consortium. This consortium seeks to transform the landscape
of AI research, cultivate a skilled workforce, and build a
robust infrastructure to support cutting edge AI research,
education and commercialization in the state.
By harnessing the lab's capabilities through academic and
industry partnerships, we can lead the world in AI while
safeguarding our national interests. I welcome the discussions
on how we can work together on this critical imperative. Thank
you for convening the hearing, and I look forward to your
questions.
[The prepared statement of Dr. Gaudioso appears in the
Submissions for the Record.]
Vice Chairman Schweikert. (off mic)
Chairman Heinrich. Thank you, Vice Chairman Schweikert. Dr.
Gaudioso, as you talk in your testimony, national labs like
Sandia have historically played an important role in innovation
and technology development. How has that prepared them to
steward AI development?
Dr. Gaudioso. The national labs, when it comes to AI
development--one, we have a history of working in AI, in the
algorithms. Our work in advancing computing technologies has
been focused on supporting the simulation missions and the
science the labs have, but we have also been using that
computing power to start pushing large-scale AI.
We also in the national labs actually have the world's
largest--the free world's largest scientific workforce, and the
unique data sets that science has. So for instance, ChatGPT and
other types of large language models are built on the corpus of
knowledge that is in the Internet.
We know that we can build much more exquisite and impactful
models if we train them on the exquisite science data that we
have in the Department of Energy, and we look forward to using
that data to build models that can transform how we do science
to solve our challenges.
Chairman Heinrich. Can you explain a little bit of that,
because you know, there is a tendency among some of our
colleagues to think of AI now just as a really elegant chatbot,
you know, something that can respond back with, you know, with
language that you would be hard-pressed to know whether it was
a human or not on the other side.
But when you take a large language model and you put it on
top of some of these foundational science models, so that you
can use language as the--basically to coax new science, new
alloys, new molecules, new pharmaceuticals, out of these
foundational models, you get really powerful combinations.
Can you talk about the opportunities there a little bit?
Dr. Gaudioso. I would be happy to discuss those
opportunities, because I think, you know, we have the large
language models that are trained on language, visual arts,
other popular media. We now need to train physics models. We
need to train them on chemistry data and these models will help
us be able to make connections in the science data that today,
you know, I am a chemist by training.
I was trained to read the scientific literature, comb
through the data, spend years trying to make sense of the world
around me, make a hypothesis, design experiments to test my
hypothesis and iterate. Well, if we can train a chemistry AI
model, I have my own student intern right there with all of the
world's chemistry knowledge, or at least the trusted chemistry
knowledge included in it, and we can use that to make science
go much faster and to make connections that no human is ever
going to make, right?
And so we're already seeing this in materials discovery.
Chairman Heinrich. Yeah, material science in particular is
just an incredibly slow, painful, long-term endeavor in the
normal course of how we do science. I think it is really going
to change that dramatically. We heard a little bit about the
importance of labor and workforce in maintaining our
advantages in AI.
But you mentioned something else, which is data. Talk a
little bit about the unique data sets that we have at
places like our national labs, within our agencies, and how
some of that--and for that matter data curation, the importance
of data curation, how that gives us a leg up over some of our
competitors as well.
Dr. Gaudioso. Yeah. The data is really at the heart of AI,
right, and we have data, both open science data--the Office of
Science laboratories and the national labs broadly do science to
advance the public interest, so most of the science data we
have is public.
But we as the scientists that discover and produce that
data know how to interpret it and how to curate it to make it
AI-ready, and to be able to use it to build these models. But
we also have access--as federally funded research and
development centers, we have trusted partnerships with the U.S.
government, and we have access to national security science
data that we use, as Sandia does, in designing hypersonic
reentry bodies or nuclear weapons.
And that data, which of course we do not want to make
public, can be used to train closed foundation models that will
help us change the design life cycles and respond to--at the
speed of the national security threats we are facing today.
Chairman Heinrich. Great. I am going to yield back the rest of
my time, Vice Chairman.
Vice Chairman Schweikert. Thank you, Chairman Heinrich. Dr.
Miller, first, you already know I am a bit of a fan of what you do
and the way you think. Can you play a game with me instead of
just reading a written question here? I come to you, you get to
use the full power of what you believe exists today and is
going to exist over the next year.
How could you revolutionize medicine? How could you
revolutionize the cost? How could you revolutionize making
people well and the morality of ending and providing cures?
Dr. Miller. A couple of answers. One, if you had high blood
pressure, we have software that could titrate the medications
for you. You could do that at home, you could send me a message. I
could talk with you about exercise, and in fact software in
theory could titrate lots of medications for lots of common
conditions.
You would not even have to necessarily leave your house to
see me. In fact, a lot of the time you might not even need to
see me, and then see me for acute concerns. You could
automatically have your clinical preventive services ordered,
right? You could have your colonoscopy, if relevant a PSA to
check for prostate cancer.
So a lot of care could occur not just outside the walls of
the clinic, but also even outside needing to see a physician.
And then let us say you had a condition and you had to do a
prior authorization, which my colleagues and I do not
particularly enjoy doing.
Imagine if the first layer of review and then approval
were automated and done in near-real time?
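The kind of rules-based titration software Dr. Miller describes can be sketched in a few lines. This is purely an illustration: the function name, thresholds, and dose steps are hypothetical placeholders, not clinical guidance or any actual product.

```python
# Illustrative sketch only: a rules-based medication-titration check of the
# kind described in the testimony. All thresholds here are hypothetical
# placeholders, not clinical guidance.
def titration_suggestion(systolic_readings, target=130):
    """Suggest an adjustment from a week of home blood-pressure readings."""
    avg = sum(systolic_readings) / len(systolic_readings)
    if avg > target + 10:
        return "flag for clinician review: consider dose increase"
    if avg < target - 20:
        return "flag for clinician review: consider dose decrease"
    return "hold current dose"

# A week of elevated home readings would be flagged for the physician:
suggestion = titration_suggestion([142, 150, 147, 151, 145, 149, 148])
```

The point of the sketch is the division of labor: software handles the routine monitoring and flags exceptions, so the patient need not leave the house except for acute concerns.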
Vice Chairman Schweikert. You know, we have that piece of
legislation. So Doctor, within that scope, you have the data of
my wearables, my breath biopsy, whatever it may be. Do you see
a world at least at the basic level, the AI and then the
algorithm that's attached to it could write the scrip?
Dr. Miller. Absolutely.
Vice Chairman Schweikert. Okay. That was clean without a
whole lot of struggle. Dr. Howard, this is a little bit
different, but in--and you need to correct me, because I was
listening to your discussion about okay, we need more people, a
variety who are writing AI and code. But in some ways, maybe I
have the utopian vision of it provides access for more people
to be able to do technology.
Most people have no idea of how to write an app, but they
can use the app to do technical jobs. Is there some ways that
yes, there may be this hierarchy of here is over here, my
people writing code, doing those things. But over here, is not
this an empowerment for almost every American to do things that
are much more complex?
Dr. Howard. Yeah, it is. So when I define AI literacy, it
is not about creating computer scientists or coders. It is
about making every citizen understand how to interact with AI
to do their jobs better. So it is allowing doctors to basically
talk into their phone and then transcribe it into the actual
records that can then be shared with other doctors. So that is
really about it.
Vice Chairman Schweikert. Okay. That is a much more elegant
way to phrase it. Mr. Thierer, what is my GDP growth? What is
my--I have a personal fixation on where we are demographically
as a country. We are getting old very fast. We often do not
want to talk about it.
We have to be brutally honest: 100 percent of calculated
future debt for the next 30 years--interest, health care costs,
and, a decade from now, if we backfill Social Security--it is
demographics. What is your vision of AI, the growth, the labor
substitution? Does it save us?
Mr. Thierer. Yeah. Well, nothing can save us, but it can
certainly make a major contribution towards the betterment of
our government processes and potentially our debt. There have
been various estimates, Congressman, on exactly how much AI
could contribute to overall gross domestic product, the low end
being somewhere like at least 1.2 percent annually, but it goes
up from there, with one forecast for 15----
Vice Chairman Schweikert. I beg of you to be slightly
louder.
Mr. Thierer. A 1.2 percent annual GDP boost and a $15.7
trillion potential contribution to the global economy by 2030,
according to another report. I have all this data in a
supplement to my testimony. And again, the estimates vary
widely.
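As a rough illustration of what the low-end estimate cited in the testimony would compound to, the following sketch applies a constant annual boost; the $27 trillion U.S. GDP baseline is an assumption for illustration only, not a figure from the record.

```python
# Illustrative compounding of the low-end AI growth estimate cited in the
# testimony. The $27 trillion GDP baseline is an assumption for
# illustration, not a figure from the record.
def compound_boost(base_gdp_trillions, annual_boost_pct, years):
    """Apply a constant annual percentage boost for the given number of years."""
    gdp = base_gdp_trillions
    for _ in range(years):
        gdp *= 1 + annual_boost_pct / 100
    return gdp

# A 1.2 percent annual boost on a $27 trillion base over a decade:
after_10 = compound_boost(27.0, 1.2, 10)  # roughly $30.4 trillion
```

Even the low-end figure, sustained for a decade, compounds to several trillion dollars of additional output on these assumptions, which is the sense in which the estimates "go up from there."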
But the bottom line is almost all economists, political
scientists and consultancies realize that this is a great, you
know, opportunity for the United States to once again build on
the success of our past technological, you know, success story
of the Internet and digital economy, you know.
We look at the data that our government has put out, the
Bureau of Economic Analysis. I mentioned one data point in my
testimony. $4 trillion in gross output from digital economy in
2022. Nine million jobs, a huge amount of compensation. 18 of
the 25 largest digital technology companies in the world by
market capitalization are U.S.-headquartered companies. Fully
50 percent of the largest digital technology employers in the
world are American technology companies. That happened because
we got policy right.
Vice Chairman Schweikert. Thank you, Mr. Thierer. All
right. To our true AI expert, Mr. Beyer.
Congressman Beyer. First of all Mr. Vice Chairman, thank
you very much for convening this, and I am very excited to be
here. Thank you very much for coming. I am a huge AI optimist,
especially on the health care side.
So Dr. Miller, in fact I just got off a Zoom a couple of
minutes ago with Dr. George Church at Harvard, who was
explaining to me that he and his colleagues have built new
microorganisms with DNA completely different from all the other
DNA on the planet.
And because of that, viruses do not work. They are
completely, completely immune to viruses. Within this is the
idea of making replacement organs that will not be rejected,
because there will be nothing to reject. They will be
unrecognizable. Just extraordinarily exciting.
So Dr. Miller, you talked about how agencies like ARC have
been at the forefront, but we have seen in the past that
introducing new technologies to medicine has not necessarily
improved things. He specifically talked about the absence of
labor productivity growth in health care. The best example I
can think of is EHR, electronic health records, and the lack
of interoperability.
Veterans Affairs and DoD have been fighting for years about
how to bring them together. How do we take--how do we
acknowledge the 17 to 19 percent of GDP on health care, like
double the highest of any other place in the world, and use AI
to bring down those costs and bring labor productivity into the deal?
Dr. Miller. Thank you. I think a lot of this is practical,
right? So one, one of the many things that gets in the way of
us actually using it in a productive and proactive fashion is
state and federal regulation. There is a role for state and
federal regulation, but we do not want to go to town to prevent
people from innovating at the bedside and getting it into
practice.
Think about a radiologist, right, reading CT scans,
mammograms. Imagine if software automatically went through all
the images and pre-identified the areas of concern. That
could massively speed up the efficiency at which that
radiologist reads those CT scans. Instead of reading ten an
hour, maybe they read 12 or 14.
So if we direct payment and FDA policy to support this, for
example, if a tech company is providing a service, why not let
them bill, right? If they can provide that service cheaper than
I as a physician or a nurse practitioner or a pharmacist, they
should have the opportunity to bill for that and compete.
And if you have that competition within a population-based
payment system like Medicare Advantage or Medicaid Managed
Care, you can potentially drive service delivery and innovation
for consumers to then have a choice.
They could have a choice of whether they want human in-
person service; they could have remote human service, maybe
with a Bluetooth exam; they could have remote service like
audio video only; they could have automated service, right,
from software, or they could even have a phone visit or maybe
an email visit.
And so if we drive policy to give consumers that choice,
then that will improve labor productivity, because the
consumers will choose.
Congressman Beyer. Thank you, Dr. Miller, very much. Mr.
Thierer, your ten principles to guide AI policy, you said
``It's equally important that lawmakers not demand that all AI
systems be perfectly explainable in how they operated.'' We had
Secretary Becerra in here recently over at Ways and Means. I
asked him about that, and he said that HHS does not have enough
authority to see behind the curtain. But we also, every doctor
I talk to, is worried about prior authorization decisions being
made by AI.
What are the limits of explainability? What can we as
lawmakers really demand in terms of explainability?
Mr. Thierer. Yeah, well transparency is a good principle,
but the question about how to mandate it by law is always
tricky. And when you get specifically into algorithmic
explainability, the question of exactly how do you explain all
the inner workings of a model before it gets to market, right?
That is very difficult, and what I articulated in the
ten principles that I sent to the AI Task Force was
basically the need to, on the back end, look at how we can
regulate the outputs or outcomes associated with algorithms, as
opposed to trying to micromanage all the inputs and figure out
how ``explainable'' they are, quote-unquote.
Because I think that is a fool's errand. I do not think
that can be done efficiently without stopping a lot of that
innovation from happening altogether. That does not mean again
we do not regulate; we just regulate it as we look at the
outputs or outcomes to see did it actually work as billed,
right? That is the most important thing. Did it actually hurt
anyone? Is there any actual consumer harm? And then we
address it with targeted policies.
Congressman Beyer. Great, thanks. We do have a wonderful AI
Foundation Model Transparency Act, bipartisan, two Dems, two
Republicans on the House side, I think many on the Senate side,
trying to find that right balance. But thank you for the
principles, and Mr. Chairman, I will yield back.
Vice Chairman Schweikert. Mr. Schmitt.
Senator Schmitt. Thank you. Just a few comments, then I
have a couple of questions. America's poised to enter the next
decades of the 21st century hand in hand with the technology
that could possibly define it: artificial intelligence.
Decades of innovation and entrepreneurship have led to this
point, from industry titans of NVIDIA to innovation centers
like St. Louis' own geospatial hub. America is ahead in the AI
race and has the resources to double down on its unique
advantages.
Yet America's position in AI is under constant pressure.
China is investing billions and billions into its own AI
industry. Some of this investment is for AI surveillance
technology, to export their malignant surveillance state
abroad. There is no telling what could happen if China became
the dominant player in the 21st century. I am sure China is
watching us; Europe is too, hoping that we bury our burgeoning
AI industry in unnecessary regulation and lose sight of what
got us in this position in the first place.
The worst thing we could do in this race towards AI is
stifle innovation by unleashing the bureaucrats and putting
crippling regulations onto innovators. The EU has done this,
and Europe will now most likely be watching this race from the
sidelines.
Yet there have been rumblings here on Capitol Hill and at
fancy summits all over the world that the U.S. should over-
regulate this industry. This would only serve to hamstring our
innovation and give China the keys to this amazing
technology.
I want to zero in on this because we--I think this is a
common theme that we hear about as far as over-regulating, and
I think the American way here is a--we are concerned about
this. But I want to drill down on that a little bit, and maybe
Mr. Thierer I will start with you.
What do we mean by that? Like how would you define that?
Colorado has passed some regulations that even their governor
has questioned. I am just using that as one example. What is it
that we should be concerned about in this framework?
Mr. Thierer. Certainly. Thank you for the question. So
first of all, as of noon today, there are 754 AI bills pending
across the United States of America. 642 of those bills are at
the state level. That does not include all the city-based
bills.
Probably the most important AI bill that has passed so far
is New York City. Not New York state, New York City. And so
there is patchworks and then there is patchworks, right? And so
the cumbersome nature of all those compliance rules added on
top of each other, even if well-intentioned, can be enormously
burdensome to AI innovators and entrepreneurs. So that is just
one thing to note.
The other thing to note is that there have been discussions
about the idea of, like, overarching new bureaucracies or, you
know, certain types of licensing schemes. I have no problem
with existing license schemes as applied in the narrow focused
areas where AI might be applied, whether it is medicine, you
know, drones, driver-less cars.
But an overarching new licensing regime for all things AI
is going to be incredibly burdensome. That is a European
approach. We do not want that. And sir, let me just say
something about your China point, because this is really
important.
You know, we are here on June 4th. This is the 35th
anniversary of the Tiananmen Square Massacre. When we talk
about like, you know, the importance of getting this right for
America and our global competitiveness, it is important for
exactly the reason you pointed out. Because if we do not and
China succeeds, then they are exporting their values, their
surveillance systems, their censorship.
The very fact that I just uttered the term ``Tiananmen
Square'' at this hearing means it will not--this hearing will
not be seen in China. I apologize for that to everyone else
here. But the bottom line is that what is at stake is
geopolitical competitiveness and security and our values as a
nation. So this is why we have to get it right.
Senator Schmitt. So it is interesting, because when I was
going to school the idea was that sort of the more literate a
society became, the more educated it became, the more open it
became, the more likely they were to become a democracy, right,
and China was kind of always an example of maybe if there are
fewer poor people there and they are more literate, that
ultimately they will demand more.
But interestingly, AI--and even very low-tech AI as it
relates to surveillance--has uniquely empowered Communist regimes,
right? It empowers the totalitarian level of control that 30
years ago I am not sure anybody could really foresee, and that
is certainly what they have capitalized on, to your point.
If people think that that is a way to maintain power, which
has been the way of the world in many places, you are right,
you know--they become the dominant player in this. I do want to
just shift with a little bit of the time I have left, and
anybody please chime in on this point, but I will start with
you, Mr. Thierer, again.
Big tech versus little tech here. I think there is a--there
is a concern, at least that I have, that a regulatory scheme or
we are doing something that sort of protects the big players,
but ultimately leaves out the innovation, again that got us to
this point now.
How would--how do you view this and what can we do to guard
against that, because I do think there are some folks that want
a more, sort of a protectionist view of the big players here,
and they have all the answers. They are very important players,
but not the only players. How do you guard against this
shutting out little tech in this process?
Mr. Thierer. Amen to that. So, let us take a look at
Europe. I mean, one of the things that I always ask my students,
or crowds that I talk to about AI policy or technology policy,
is: name the leading digital technology innovator
headquartered in the European Union today. Silence, right?
That has everything to do with getting policy wrong, and
what the European Union--the only thing they are exporting now
is regulation. And basically that is all they have got left,
and they are trying to regulate mostly large American tech
companies.
And so what is ironic is that it was meant to sort of, like,
keep things more in check and competitive, but there is only a
handful of large technology companies that can comply with
those rules and regulations. We do not want that to happen in
the United States. We have thousands upon thousands of small
entrepreneurial companies starting up in the AI space right
now, and this is the hope for the future, especially open
source technology.
You know, right here in America that is happening on the
ground. We have got to preserve that entrepreneurial, you know,
freedom to innovate kind of model for the United States, so we
do not become the innovation backwater that is the European
Union.
Senator Schmitt. Thank you. Thank you, Mr. Chairman.
Vice Chairman Schweikert. Senator Klobuchar.
Senator Klobuchar. Thank you very much. Thanks for doing
this important hearing, and thank you to our witnesses. I come
from a state that believes in innovation. We brought the world
everything from the pacemaker to the post-it note, and I also
think that we have to get ahead of this in a good way.
We have to put guardrails in place. That is something that
we really did not do with tech policy, and now there are all
kinds of issues with privacy. I am not going to go into
everything that we need to do, that I hope we can do
differently with AI.
I think David Brooks, a columnist, put it best when he said
``The people in AI seem to be experiencing radically different
brain states all at once. I found it incredibly hard to write
about because it is literally unknowable whether this
technology is leading us to heaven or hell.''
We need guardrails that acknowledge that both are possible.
So, to start: Senator Thune and I serve on the Commerce
Committee, and we have introduced legislation that has gotten
some positive feedback, the AI Research, Innovation and
Accountability Act to increase transparency and accountability
for non-defense applications, and sort of differentiating
between some of the riskier applications like electric grids
and then others, and directing NIST, at the Commerce
Department, to issue standards for critical impact systems.
So I guess I will start with you, Mr. Thierer. The bill
that I just mentioned takes a risk-based approach that
recognizes different levels of regulation are appropriate for
different uses of AI. Do you agree that risk-based approach to
regulation is a good way to put in place some guardrails?
Mr. Thierer. Yeah, absolutely. I wrote a paper about your
bill, Senator, and I----
Senator Klobuchar. Maybe I know that. It gets kind of a
softball beginning.
Mr. Thierer. Well, I love building on the NIST framework,
right, because that exists and it was a multi-stakeholder,
widely agreed to set of principles for AI risk management. And
so it is really good to utilize the sort of existing
regulatory infrastructure we already have, and build on that
first.
Senator Klobuchar. Uh-huh, very good. Do you want to add
something, Dr. Howard? I also noticed that your testimony
emphasized the importance of AI literacy training, and we
actually in that bill direct the Commerce Department to develop
ways of educating consumers--this has got to be part of
anything, including the work that Senator Heinrich, our leader
here, as well as Senator Schumer and Senators Rounds and Young
have done for the bigger base bill, and that we hope to be part
of. Do you want to talk about literacy a bit?
Dr. Howard. Yeah. I think even if you think about doing
policy right, you have to have individuals understand that
definition of right. If you do not understand AI and both the
opportunities and the risks, there is no way that you can think
about great policy.
And so when I think about this, it is not just computer
scientists and engineers; it is everyone that is touching any
type of technology, to understand how to define it, understand
data, understand parameters, understand outcomes, understand
what the impacts are on different markets, different
populations. So that is really important.
Senator Klobuchar. Do you want to add anything, Dr.
Gaudioso?
Dr. Gaudioso. You know, I think that there is the
importance of the risk framework. There is also research that
needs to be done to give us the technical underpinning, right?
Trust is something that a human conveys, but we are still in
the early stages of doing research to understand what makes a
model trustworthy.
When does it respond within the bounds of our data, what--
where is it reliable, where is it not? And so I think, you
know, policy just needs to keep in mind where we are heading
and what the technical basis is at any given point in time,
because the technology to understand the trustworthiness, the
mathematical underpinnings is something the national labs have
researched for a long time and is moving quickly.
Senator Klobuchar. Uh-huh, very good. One of the things
that I am, like, hair on fire about at the moment, just because I
chair the Rules Committee, is the democracy piece of this, and
I guess I will ask you. This is not the subject really. We are
talking about innovation.
talking about innovation.
But if our democracy is unstable because people do not know
if it is the candidate they love or the candidate they do not
like that is speaking, because you cannot tell, it is just
something that we have to think about in terms of going forward
as a nation. Something like over 15 states now have required
bans or disclosures on deep fake ads.
Senator Hawley and I, as well as Senator Collins and Coons
and many others have put together a bill on actually banning
deep fakes with exceptions for satire and the like. Senator
Murkowski and I have the bill that we lead on disclaimers. And
I am just really worried with federal elections, that while
states are doing things, which is good, we do not preempt them
on state ads, that we have to guardrail our democracy here so
people know who they are hearing from.
And I often get worried that some little disclaimer at the
end, no one is going to really know. Do you want to answer
that?
Dr. Howard. That is true. It is just like with consent
forms. Nobody actually reads them, and so one of the things is
how do we provide individuals or how do we provide some
transparency and trust on the information they are hearing,
because we know it is very easy to manipulate individuals with
advertisement and media.
And so if those advertisements and media are very, very
real or associated with a candidate that people resonate with
or do not, that will influence them, guaranteed 100 percent.
Senator Klobuchar. Uh-huh. And Dr. Miller, I think I am out
of time, but I will put a question in writing to you on tech
hubs. I know that you know a lot about this kind of--your
testimony is on the importance of policies that promote
development of new science and new innovation, and we have a
lot of medical device companies in Minnesota, and they have
served our country well.
I just want to talk a little bit about that and tech hubs,
and you can do it in writing, unless you want to add something
and the Chair will let me ask you that. Is that okay? Do you
want to add anything on that?
Dr. Miller. Yeah. I guess one thought, I think, with tech
hubs and also just tech innovation, is we often do not realize
that the current status of purely human-driven care is actually
frequently low quality and often highly unsafe.
And so promoting innovation at universities, at small
companies that change that and automate components of care
delivery or assist nurses, doctors, pharmacists, whomever in
making decisions, will actually massively raise the quality and
safety and efficiency of care.
I would add, I would say my greatest fear is actually that
we do not take advantage of this opportunity, because the care
delivery system is a mess.
Senator Klobuchar. That is where you go to heaven or hell.
We have got to make sure we have got it right. All right. Thank
you very much, Dr. Miller. Thank you all.
Chairman Heinrich. And Senator, that was a terrific
question. It is sort of the--we sometimes are emotionally
tied, and sometimes the disruption of the technology makes us
nervous. But the math is the math.
You know, we have seen a number of papers that talk about
some of the ability for the AI to read the data coming off my
watch or the wearable or the glucose meter or the thing you
blow into, and being able to analyze that data actually is
remarkably good and statistically much more accurate than, you
know, someone that went to postgraduate school for what, nine years?
And I feel crappy saying that, because I cannot imagine
what your student debt is.
Senator Klobuchar. On that note----
Vice Chairman Schweikert. Yeah, on that note. Thank you,
Senator. And Congressman Beyer was actually--and he and I were
sort of channeling each other. Where I am trying to get is a
model where AI makes traffic better, where AI helps me attach
an air quality monitor to these things, and we crowd source our
environmental data, where AI is--and I accept some of that
becomes technically an algorithm underlying. It is actually
not, you know, crawling through a stack.
But even where Congressman Beyer was, the ability to
revolutionize the cost and delivery and efficacy of health
care, of--what was it, about three weeks ago, a month ago, we
had one of the first drugs solely designed by AI, a new
molecule that looks like it has remarkable efficacy.
How do I get this to move fast, because I believe cures are
moral? And it is an interesting--is the solution an environment
as you and I think about policy, is it taking a look at the
outcomes and making sure those outcomes are effective and in
some ways moral, efficient?
Because if we do not do something fairly dramatically on
the cost of delivering services, I mean yesterday we borrowed
$101,000 per second over the last 366 days. It is a leap year.
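The per-second figure cited above can be sanity-checked with quick arithmetic over the 366 days of a leap year:

```python
# Back-of-the-envelope check on the borrowing rate cited above,
# over the 366 days of a leap year.
per_second = 101_000                      # dollars borrowed per second
seconds_per_leap_year = 60 * 60 * 24 * 366
annual = per_second * seconds_per_leap_year
# annual is about $3.19 trillion at that rate
```

At $101,000 per second, a leap year of borrowing works out to roughly $3.19 trillion.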
You know, if I had come to you a few years ago and said we
are going to be over $100,000 a second in borrowing, and almost
all the growth of borrowing is interest. Interest now will be
number two in our spending stack, and the growth of health
care. Am I channeling you appropriately?
Congressman Beyer. Totally, very much so. It is terrifying
to think that interest on the debt is greater than Medicare,
greater than Medicaid, greater than the Defense budget. It only
has to catch up now with discretionary non-defense spending.
Chairman Heinrich. Yeah, just Social Security.
Congressman Beyer. And Social Security.
Chairman Heinrich. So as I come to all of you, you have the
ginormous computers and lots of technical data that is not
public. You have the next generation students. You have the
policy and you have the case of how we could revolutionize
health care. How do I deal with the fact that when he and I
have actually had conversations about telehealth, you know,
digital health.
The fact of the matter is in many ways you know this
because you sat and we talked about it. If the pandemic had not
happened, I do not know if I would have ever gotten our
telehealth bill a single hearing. It only moved forward--
because apparently grandma would not know how to work FaceTime.
Turns out she is really good at it.
I do not believe the next generation is talking to someone
on the phone. I think it is reading the data off my body. How
do I sell this story, Dr. Miller? How do we sell the morality
of doing it better, faster, cheaper and much more accurately?
Dr. Miller. I think it is immoral not to do that, right? So
if we do not give patients the choice of having cheaper, more
efficient, more accessible, more personalized care, I think
that we would be making a massive moral error. You mentioned
telehealth. 20 years ago if we talked about telehealth, people
would say that we were cuckoo for Cocoa Puffs, right, because
no one is going to call their doctor, do Skype or FaceTime, and
now it is the standard.
It took a global pandemic where a million Americans died,
for us to have telehealth. So I think the answer is one,
hopefully we do not have another global pandemic, but we do not
want to wait until there is some catastrophic event until we
offer automated or autonomous care, right?
If you are a poor American with chronic disease, autonomous
and automated care or AI-assisted care is basically the best
thing ever, because you will get more access, you will get
higher quality and it is going to be cheaper. So I personally
think that we have to do it. It is not a choice.
Chairman Heinrich. Mr. Thierer, and if you--I know you are
going to respond to that. Does it make a difference in our
world that, what was it, three weeks ago, Apple finally got its
next generation watch, for cardiac arrhythmias, those things,
essentially certified as a medical device?
Is that what you were talking about, that the next
generation disruption is coming?
Mr. Thierer. Yeah, absolutely. And to answer your question,
Congressman, about how we essentially sell these benefits, we
talk about it in terms of opportunity cost. Like, what would we
be losing--what kind of foregone innovation will we lose if
we do not get this right?
Well, we can put numbers on this. Let us talk about
some of the biggest killers in America today. 800,000 people
lose their lives to heart disease. 600,000 people lose their
lives to cancers every year now. I mean how about--how about
cars? Let us talk about public health and vehicles.
I mean every single day there are 6,500 people injured on
the roads in America, 100 of them die. 94 percent of those are
attributable to human error behind the wheel. I have to believe
that if we had more autonomy in the automobiles sector, we
could actually make a dent, excuse the pun, in that death toll.
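Annualized, the daily road figures Mr. Thierer cites work out as follows; this is a back-of-the-envelope calculation from the numbers in the testimony, not independent data:

```python
# Annualizing the daily road-safety figures cited in the testimony.
daily_injuries = 6_500
daily_deaths = 100
human_error_share = 0.94   # share of crashes attributed to human error

annual_injuries = daily_injuries * 365        # about 2.37 million injured
annual_deaths = daily_deaths * 365            # 36,500 deaths
human_error_deaths = annual_deaths * human_error_share  # about 34,300
```

On these figures, tens of thousands of annual deaths are attributed to human error behind the wheel, which is the share vehicle autonomy would be addressing.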
Yeah, and so I mean this is where we can talk to the public
about, like, the real-world trade-offs at work if we get this
wrong, right? I mean we have had a 50-year war on cancer that
goes back to the time when Richard Nixon was in office and, you
know, we have made some strides, but we could make a lot more
if we had serious, robust technological change to bring to bear
on this through the form of computation and algorithmic
learning. I mean this is where we can make the most efforts.
Chairman Heinrich. Mr. Vice Chairman, if I can wander for
just 30 seconds?
Vice Chairman Schweikert. (off mic)
Chairman Heinrich. Okay, well I'm just--I just wanted to
help you stay on message. But if I can go off message for a
minute. I wanted to respond to one of the things that Senator
Schmitt said about licensing. My dear friend Tom Wheeler, who
chaired the FCC, a Democrat and clearly a left-of-center
Democrat, called to tell me how important it was not to use
licensing in AI.
That when we did that, all we were doing was essentially
embracing anti-competitiveness, and locking in the advantage of
the incumbents. We need to be very careful about that. Senator
Schmitt also started with two minutes on China. I also want to
quote Martin Wolf, who is the editor-in-chief of Financial
Times, saying please do not give up.
That 20 years of liberalization is too soon to tell, that
sooner or later, the state motto of Virginia is sic semper
tyrannis, that sooner or later the Chinese people are going to
rise up. And we need to be worried about the Chinese Communist
Party, not the Chinese people, and that they will be demanding
freedom, hopefully sooner rather than later.
Dr. Howard, I have two Brunonian children. So it is
wonderful to have you here, and I really appreciate your
service on the National AI Advisory Committee. I mean you really
set the stage for the big executive order and all that.
And I specifically understood your emphasis on digital
literacy. We have been looking at what Finland has done with
the multi-hour training in digital literacy. As we struggle
with deep fakes, which are now coming more and more, that you
start with the notion that we need to be teaching people what
to be suspicious of, and let their own instincts kick in.
But how--how can we develop digital literacy in a much more
robust way than we have done so far?
Dr. Howard. Well, I think this is an area where you have to
bring in academics, industry, organizations, non-profits and
government. I think about it as very similar to cybersecurity.
Nowadays, people actually check to make sure: is this really
spam? I'm not going to click the link.
But I will tell you five years ago, everyone was clicking.
And so how do you get people to be aware that this is an issue?
Half of Americans have no clue that, you know, there might
be a fake. It might be manipulation. An advertisement might be
via chatbot. I mean so what it really is, is ensuring that we
have this conglomeration of everyone thinking about how do we
train within the organization, outside the organization, from
K-12 to gray.
Chairman Heinrich. David, also before yielding back to you,
because you did not shorten my----
Vice Chairman Schweikert. This is a conversation. We are
doing almost a colloquy question model.
Chairman Heinrich. Well, in a colloquy thing, I want to
thank you for bringing together----
Vice Chairman Schweikert. And we are actually also
stalling, because I have another member coming.
Chairman Heinrich. Oh okay.
Vice Chairman Schweikert. So keep going.
Chairman Heinrich. Thank you for getting the Joint Economic
Committee to focus on the challenges of diabetes, and end stage
renal disease. We had the same type of hearing a few months ago,
and we have both been worried about the cost of dialysis. It
took Mitch Daniels, former OMB Director, etcetera, to do the
math while we were sitting here and say 31 percent of our
Medicare budget right now is just dialysis.
Vice Chairman Schweikert. Think about what he just said: 31
percent of Medicare; 33 percent of all health care. It's
functionally diabetes.
Chairman Heinrich. $260 billion a year, and now we have
GLP-1 agonists. We have solutions. Not inexpensive, but so
far----
Vice Chairman Schweikert. Can you help me do some things on
the farm bill?
Chairman Heinrich. Oh, absolutely. Everything we can. But
this, when we look at how to deal with the $100,000 a second
and how we trim down the 18 to 19 percent of GDP we spend on
health care, it is not just GLP-1 but many other ways that we
think of using technology and AI and better management to manage
health care in America.
Vice Chairman Schweikert. Dr. Howard, just to stick this in
the back of your head, and it is a slight non sequitur, as you
were talking about teaching people technology literacy. What is our
only success functionally in the last decade of getting
Americans to actually exercise?
We have spent hundreds of billions. This is somewhat of a
trick question, and he may--he already knows the answer. It was
gamification. It was Pokemon Go. I know that sounds absurd, but
if you actually look at the data, Pokemon Go did more to get
people out chasing their little--and we have often had this
running discussion.
What would happen if we used that type of technology, saying here
is how I train you to understand how to work ChatGPT? The
gamification even down to health care and maintaining--if
drug adherence is 16 percent of all U.S. health care, when I
forget to take my statin, when I do not do those things.
How do I make it so my pill bottle cap beeps at me, and
those sorts of things? There are solutions that are genuinely
ahead of us, and we are actually struggling, saying is there a
unified theory of the ability to use this technology disruption
when I call the IRS? The person I am talking to is actually
ChatGPT.
But it stays on the phone with me, and it helps me fill out
my forms and then maybe texts me the form I need, instead of
someone who has been dealing with crazy for seven hours and
does not really want to be on the phone with me. That is
actually going on right now, and so far the early data from the
IRS experiment of using a chatbot have been apparently
good.
That is human, so whether it be from the cures to the education
to the, you know, miracles of producing new materials. We are
trying--help us sort of build the argument that, you know,
many of us are not that bright, but we get to sit here and read
things that smart people write for us.
But how do we create a unified theory of let the technology
run, because God forbid, none of us truly know what it is going
to look like a few years from now. I mean am I being fair?
Congressman Beyer. Mr. Chairman, will you yield for
questions?
Chairman Heinrich. I thought you were going to tell us it
was pickleball, rather than----
Vice Chairman Schweikert. You know I do not like you
anymore. I tried pickleball once and my eight-year-old beat me.
I mean----
Mr. Thierer. Could I just wholeheartedly endorse what Dr.
Howard had to say about digital literacy, AI literacy, because
this is really important. First of all, Representative
Rochester has a really nice bill on digital AI literacy that I
think we should take a look at. It is really good stuff.
And when we talk about this, you know, AI literacy/digital
literacy, we are talking about, you know, learning for life.
You know, no matter what kind of punches come at us, if we can
roll with those punches and figure out how to adapt as we
learn more about the technology.
It is about building resiliency, societal and individual
resiliency. And you know, people sometimes laugh at this. I was
on a--I was a co-chair of an Obama administration Online Safety
and Security Task Force, where like the only thing anybody in
the room could agree on was the importance of digital etiquette
and literacy.
So there is a lot of agreement on this. This is a good place
to start. It is a good foundation for building that resiliency.
And some people will say well, that is not enough. Okay fine.
We will find other remedies. But it can go a long way.
You know, I am old enough to remember the problems we had
in this country with littering and forest fires back in the
60's and 70's, and I remember well, I am sure some of you up
there too as well, that you know ``give a hoot, don't
pollute.'' We addressed that, right? We did it with Woodsy, you
know, Woodsy the Owl and things like that, with Smokey the Bear
and forest fires.
We made a huge difference just with societal education
about the problems of littering and forest fires, right?
That was not a law that passed. That was actual societal
learning that it was wrong to throw things out the window of
your car, right?
So you apply that mentality to the world of, like, digital
and AI policy, and we talk about again AI etiquette,
netiquette if you will, like proper behavior using algorithmic
services and technologies, using LLMs, using, you know, these
systems.
Vice Chairman Schweikert. I want to go on, and actually I also
want Mr. Beyer to come into this. And you know, you teach
students. You already have--you have to deal with lots of
freaky, smart people. Most of them bathe, I assume, because it
is actually really funny if you know some of your scientists.
How do I deal with my brothers and sisters here who are not
Don Beyer, who are almost fearful of technology? I mean you
know, what do we do to take away--I mean I swear they instantly
think of a Terminator movie. I mean what do you do--I mean in
health care.
I cannot tell you the--and forgive the inelegance of my
language, the crap I take when I basically say the same things
you have at forums of here's my health care cost, here's things
we could do to disrupt it using technology.
And I will get administrators and this and that to come and
say ``well, we can't do that. It might be against our state
law.''
Dr. Miller. Technology allows us to operate at a higher
level. I have a terrible sense of direction, right? So I use
Google Maps and Uber and Lyft to get places. I do not pick up a
rotary phone and call my friends to ask for directions and
write them down on a note pad, right?
Vice Chairman Schweikert. Is that after you look it up in
the phone book?
Dr. Miller. Right, yeah. I do not even have a--do not even
have a phone book in the house anymore, and you know my iPhone
organizes my calendar and email and tells me where to go and
what to do, because I am a little absent-minded. And that is
the standard. Like that is the standard of my day.
And I think if we make that an analogy over to health care,
where right now we have the rotary phone and we actually
single-handedly keep the fax machine lobby employed, we have an
opportunity to totally transform that, so that the clinical
example is like if your blood pressure is really low and you
have septic shock and you are going to the ICU and you are
getting pressors, they have to stick some big IV in your neck; 30
years ago if they did that they would just look at, you know,
the topical landmarks and put the IV in and hope that, you
know, they didn't hit your carotid artery, which would be bad.
Now, you use ultrasound. You do it ultrasound-guided. You have
a little probe and you take a look, and if you tried to do it
the other way, the nurse would run screaming into the room,
telling you that you are about to be negligent and doing
something bad.
And the answer here is that technology will allow us to do
a safer, more effective job. It will become the standard and at
some point to actually not use technology will be negligent.
Vice Chairman Schweikert. You get the last.
Congressman Beyer. Well first of all, on your comment on
gamification, I wanted to show you, David, that I'm on Day 641
on Duolingo.
Vice Chairman Schweikert. I am so proud of you.
Congressman Beyer. And that is only because of gamification
and----
Vice Chairman Schweikert. But it makes my point----
Congressman Beyer. And it will ring at 11:30 at night if I
forgot to do it.
Chairman Heinrich. So that is what I want from pill bottle
caps when you do not take your statin.
Congressman Beyer. And Dr. Gaudioso, I was very impressed
with all of your testimony, but especially the notion of
scientific machine learning, Sandia's fusing of machine learning
with scientific principles to solve scientific and engineering
problems.
For me, that is maybe the most exciting part of AI. Not
ChatGPT-4, 5 or 6 or 7, but the notion that everything from
fusion energy to how our biology works, etcetera, etcetera,
that you can use machine learning, the predictive parts of AI
to figure things out. Can you expand on that as a scientist?
Dr. Gaudioso. I would love to. Thank you for the question.
You know, I think--to me, this is--this is the really exciting
potential, right? I mean ChatGPT has shown us how it can change
our daily interactions and, you know, I was able to put my
written testimony into our internal chat engine and ask it to,
you know, help me make it a little less technical and more
general, and it was great for providing me with a first draft
and editing.
But that has just been trained on the corpus of knowledge
that is on the Internet. I think what I get really excited
about is the transformative potential of training models on
science data, so that I have my chemist intern with me that can
help me discover new science properties, that can then help me
think through the physics in thermal and mechanical stresses to
design a part that can be manufactured today, right?
We can just go from a new material to something that can be
in our hands and usable, and transform not just how we do
medicine and how we interact with patients, but how we make
things in the country. And so AI has that potential if we
can train it with science, so that these concepts of
hallucination and statistically guessing what the next answer
should be based on what it has learned, we can constrain
with physics and chemistry and science data.
We can then do new manufacturing. We can make digital twins
of the human body to take drug discovery from decades down
to months, maybe 100 days for the next vaccine.
Vice Chairman Schweikert. Mr. Beyer, anything to follow up?
Congressman Beyer. No, but I am so glad that you are doing
that and I--one of the things we do not talk about much is as
somebody who ran a small business for many, many years, the
notion that one of the most important technologies is
management.
We do not tend to think of it that way, but we can explore
the use of artificial intelligence to make our management
decisions much better. Once again, to the issue of making our
world much more efficient, dealing with the $100,000 per second
that we borrow.
Vice Chairman Schweikert. And if we are lucky we will
replace members of Congress with something intelligent. Never
mind.
Congressman Beyer. Or raise our pay.
Vice Chairman Schweikert. And they have called votes for us
on the House side.
Congressman Beyer. Oh no. Can I ask one more question?
Vice Chairman Schweikert. Will it be short?
Congressman Beyer. Yeah, yeah.
Vice Chairman Schweikert. You sure?
Congressman Beyer. I am positive.
Vice Chairman Schweikert. Okay.
Congressman Beyer. Dr. Howard, you started Zyrobotics, and
you also made, what does it say, STEM tools and learning games
for children with diverse learning needs.
Dr. Howard. Yes.
Congressman Beyer. I would love--you know, the chair of our
AI Task Force, Jay Obernolte, Dr. Obernolte, machine learning,
master's from Caltech, so sort of a smart guy, and he made his
fortune in video games. I would love to get your insight into
how we use gaming to help educate people, on not just
artificial intelligence but on everything else in the science
world?
Dr. Howard. Well, with Zyrobotics, I could get
five-year-olds to learn how to code through gamification. And
so--and it really is, how do you provide small nuggets based on
someone's knowledge, engaging with them, and bringing them
along, scaffolding them along until at the end they are like,
oh, I am actually putting code together to do simple things for
a five-year-old. I think that could be done with adults as well.
Congressman Beyer. Yeah, I would love to work with you. I
have a couple of ideas which we could go offline with. But
David, thank you so much, Mr. Chairman.
Vice Chairman Schweikert. And he knows that is actually one
of my fixations. So you are--there is a reason I like you.
Thank you for engaging in this hearing with us. You be
prepared. You have--we are going to--for three days we may ask
you questions.
I am going to ask you also to do something a little bit
different for the public record. If you have articles that you
think would be appropriate for us to try to absorb, in reality
we are going to make our staff read them and then give us the
highlighted copy, please send them our direction. And with that
we are off to votes. This hearing is adjourned.
[Whereupon at 3:40 p.m., the hearing was adjourned.]
[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]