[House Hearing, 119 Congress]
[From the U.S. Government Publishing Office]


                   AI AT A CROSSROADS: A NATIONWIDE STRATEGY 
                               OR CALIFORNICATION?

=======================================================================

                                HEARING

                               BEFORE THE

                  SUBCOMMITTEE ON COURTS, INTELLECTUAL
                 PROPERTY, ARTIFICIAL INTELLIGENCE, AND
                              THE INTERNET

                                 OF THE

                       COMMITTEE ON THE JUDICIARY

                     U.S. HOUSE OF REPRESENTATIVES

                    ONE HUNDRED NINETEENTH CONGRESS

                             FIRST SESSION

                               __________

                      THURSDAY, SEPTEMBER 18, 2025

                               __________

                           Serial No. 119-36

                               __________

         Printed for the use of the Committee on the Judiciary
         
[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]         


               Available via: http://judiciary.house.gov
               
                                __________

                   U.S. GOVERNMENT PUBLISHING OFFICE                    
61-690                      WASHINGTON : 2025                  
          
-----------------------------------------------------------------------------------     
              
                       COMMITTEE ON THE JUDICIARY

                        JIM JORDAN, Ohio, Chair

DARRELL ISSA, California             JAMIE RASKIN, Maryland, Ranking 
ANDY BIGGS, Arizona                      Member
TOM McCLINTOCK, California           JERROLD NADLER, New York
THOMAS P. TIFFANY, Wisconsin         ZOE LOFGREN, California
THOMAS MASSIE, Kentucky              STEVE COHEN, Tennessee
CHIP ROY, Texas                      HENRY C. ``HANK'' JOHNSON, Jr., 
SCOTT FITZGERALD, Wisconsin              Georgia
BEN CLINE, Virginia                  ERIC SWALWELL, California
LANCE GOODEN, Texas                  TED LIEU, California
JEFFERSON VAN DREW, New Jersey       PRAMILA JAYAPAL, Washington
TROY E. NEHLS, Texas                 J. LUIS CORREA, California
BARRY MOORE, Alabama                 MARY GAY SCANLON, Pennsylvania
KEVIN KILEY, California              JOE NEGUSE, Colorado
HARRIET M. HAGEMAN, Wyoming          LUCY McBATH, Georgia
LAUREL M. LEE, Florida               DEBORAH K. ROSS, North Carolina
WESLEY HUNT, Texas                   BECCA BALINT, Vermont
RUSSELL FRY, South Carolina          JESUS G. ``CHUY'' GARCIA, Illinois
GLENN GROTHMAN, Wisconsin            SYDNEY KAMLAGER-DOVE, California
BRAD KNOTT, North Carolina           JARED MOSKOWITZ, Florida
MARK HARRIS, North Carolina          DANIEL S. GOLDMAN, New York
ROBERT F. ONDER, Jr., Missouri       JASMINE CROCKETT, Texas
DEREK SCHMIDT, Kansas
BRANDON GILL, Texas
MICHAEL BAUMGARTNER, Washington
                                 ------                                

             SUBCOMMITTEE ON COURTS, INTELLECTUAL PROPERTY,
               ARTIFICIAL INTELLIGENCE, AND THE INTERNET

                    DARRELL ISSA, California, Chair

THOMAS MASSIE, Kentucky              HENRY C. ``HANK'' JOHNSON, Jr., 
SCOTT FITZGERALD, Wisconsin              Georgia, Ranking Member
BEN CLINE, Virginia                  ZOE LOFGREN, California
LANCE GOODEN, Texas                  TED LIEU, California
KEVIN KILEY, California              JOE NEGUSE, Colorado
LAUREL LEE, Florida                  DEBORAH ROSS, North Carolina
RUSSELL FRY, South Carolina          ERIC SWALWELL, California
MICHAEL BAUMGARTNER, Washington      SYDNEY KAMLAGER-DOVE, California

               CHRISTOPHER HIXON, Majority Staff Director
                ARTHUR EWENCZYK, Minority Staff Director
                            C O N T E N T S

                              ----------                              

                      Thursday, September 18, 2025

                           OPENING STATEMENTS

                                                                   Page
The Honorable Darrell Issa, Chair of the Subcommittee on Courts, 
  Intellectual Property, Artificial Intelligence, and the 
  Internet from the State of California..........................     1
The Honorable Henry C. ``Hank'' Johnson, Ranking Member of the 
  Subcommittee on Courts, Intellectual Property, Artificial 
  Intelligence, and the Internet from the State of Georgia.......     3
The Honorable Jamie Raskin, Ranking Member of the Committee on 
  the Judiciary from the State of Maryland.......................    42

                               WITNESSES

Dr. David Bray, Chair, Loomis Council Member & Distinguished 
  Fellow, Stimson Center
  Oral Testimony.................................................     6
  Prepared Testimony.............................................     8
Kevin Frazier, AI Innovation and Law Fellow, University of Texas 
  School of Law
  Oral Testimony.................................................    12
  Prepared Testimony.............................................    14
Adam Thierer, Senior Technology & Innovation Fellow, R Street 
  Institute
  Oral Testimony.................................................    21
  Prepared Testimony.............................................    23
Neil Richards, Koch Distinguished Professor in Law, Washington 
  University Law
  Oral Testimony.................................................    33
  Prepared Testimony.............................................    35

          LETTERS, STATEMENTS, ETC. SUBMITTED FOR THE HEARING

All materials submitted for the record by the Subcommittee on 
  Courts, Intellectual Property, Artificial Intelligence, and the 
  Internet are listed below......................................    65

A letter to the Honorable Jim Jordan, Chair of the Committee on 
  the Judiciary from the State of Ohio, the Honorable 
  Jamie Raskin, Ranking Member of the Committee on the Judiciary 
  from the State of Maryland, the Honorable Darrell Issa, Chair of 
  the Subcommittee on Courts, Intellectual Property, Artificial 
  Intelligence, and the Internet from the State of California, 
  and the Honorable Henry C. ``Hank'' Johnson, Ranking Member of 
  the Subcommittee on Courts, Intellectual Property, Artificial 
  Intelligence, and the Internet from the State of Georgia, from 
  the Privacy Protection Agency, Sacramento, California, Sept. 17, 
  2025, submitted by Zoe Lofgren, a Member of the Subcommittee on 
  Courts, Intellectual Property, Artificial Intelligence, and the 
  Internet from the State of California, for the record
A letter to the Honorable Darrell Issa, Chair of the Subcommittee 
  on Courts, Intellectual Property, Artificial Intelligence, and 
  the Internet from the State of California, and the Honorable 
  Henry C. ``Hank'' Johnson, Ranking Member of the Subcommittee 
  on Courts, Intellectual Property, Artificial Intelligence, and 
  the Internet from the State of Georgia, from the Council for 
  Innovation Promotion (C4IP), Sept. 18, 2025, submitted by the 
  Honorable Deborah Ross, a Member of the Subcommittee on Courts, 
  Intellectual Property, Artificial Intelligence, and the 
  Internet from the State of North Carolina, for the record
Materials submitted by the Honorable Henry C. ``Hank'' Johnson, 
  Ranking Member of the Subcommittee on Courts, Intellectual 
  Property, Artificial Intelligence, and the Internet from the 
  State of Georgia, for the record
    A letter to the Honorable Darrell Issa, Chair of the 
        Subcommittee on Courts, Intellectual Property, Artificial 
        Intelligence, and the Internet from the State of 
        California, and the Honorable Henry C. ``Hank'' Johnson, 
        Ranking Member of the Subcommittee on Courts, 
        Intellectual Property, Artificial Intelligence, and the 
        Internet from the State of Georgia, from Alejandra 
        Montoya-Boyer, Vice President, Center for Civil Rights 
        and Technology, The Leadership Conference on Civil and 
        Human Rights, Sept. 18, 2025
    A statement entitled, ``Don't Ban State AI Laws--Let 
        Innovation Compete Fairly,'' Sept. 18, 2025, Bria AI
    A letter to the Honorable John Thune, Majority Leader, U.S. 
        Senate, and the Honorable Henry C. ``Hank'' Johnson, 
        Ranking Member of the Subcommittee on Courts, 
        Intellectual Property, Artificial Intelligence, and the 
        Internet from the State of Georgia, regarding the One Big 
        Beautiful Bill Act, from multiple Republican governors, 
        Jun. 27, 2025
Materials submitted by the Honorable Darrell Issa, Chair of the 
  Subcommittee on Courts, Intellectual Property, Artificial 
  Intelligence, and the Internet from the State of California, 
  for the record
    A letter to the Honorable Darrell Issa, Chair of the 
        Subcommittee on Courts, Intellectual Property, Artificial 
        Intelligence, and the Internet from the State of 
        California, and the Honorable Henry C. ``Hank'' 
        Johnson, Ranking Member of the Subcommittee on Courts, 
        Intellectual Property, Artificial Intelligence, and the 
        Internet from the State of Georgia, from Engine Advocacy, 
        Sept. 16, 2025
    A letter to the Honorable Darrell Issa, Chair of the 
        Subcommittee on Courts, Intellectual Property, Artificial 
        Intelligence, and the Internet from the State of 
        California, and the Members of the Subcommittee on 
        Courts, Intellectual Property, Artificial Intelligence, 
        and the Internet, from Americans for Prosperity, Sept. 
        18, 2025
    An article entitled, ``The California-Washington tech fight 
        heats up,'' Sept. 16, 2025, Politico
    An article entitled, `` `We don't want California to set 
        rules for AI across the country,' Trump adviser says,'' 
        Sept. 16, 2025, Politico
    A document entitled, ``Winning the Race: America's AI Action 
        Plan,'' Jul. 2025, The White House
    A bill H.R. 10550, 118th Congress, 2d Session, Dec. 20, 2024
    A speech by Vice President J.D. Vance entitled, ``Remarks by 
        the Vice President at the Artificial Intelligence Action 
        Summit in Paris, France,'' Feb. 11, 2025, The American 
        Presidency Project

                 QUESTIONS AND RESPONSES FOR THE RECORD

Questions submitted by the Honorable Darrell Issa, Chair of the 
  Subcommittee on Courts, Intellectual Property, Artificial 
  Intelligence, and the Internet from the State of California

  Questions for Adam Thierer, Senior Technology & Innovation 
      Fellow, R Street Institute
    Response to questions from Adam Thierer, Senior Technology & 
        Innovation Fellow, R Street Institute
  Questions for Kevin Frazier, AI Innovation and Law Fellow, 
      University of Texas School of Law
    Response to questions from Kevin Frazier, AI Innovation and 
        Law Fellow, University of Texas School of Law

  Questions for Dr. David Bray, Chair, Loomis Council Member & 
      Distinguished Fellow, Stimson Center
    Response to questions from Dr. David Bray, Chair, Loomis 
        Council Member & Distinguished Fellow, Stimson Center

 
     AI AT A CROSSROADS: A NATIONWIDE STRATEGY OR CALIFORNICATION?

                              ----------                              


                      Thursday, September 18, 2025

                        House of Representatives

            Subcommittee on Courts, Intellectual Property,

               Artificial Intelligence, and the Internet

                       Committee on the Judiciary

                             Washington, DC

    The Committee met, pursuant to notice, at 10 a.m., in Room 
2141, Rayburn House Office Building, the Hon. Darrell Issa 
[Chair of the Subcommittee] presiding.
    Members present: Representatives Issa, Massie, Fitzgerald, 
Cline, Gooden, Kiley, Lee, Fry, Baumgartner, Johnson, Lofgren, 
Lieu, Neguse, Ross, Swalwell, and Kamlager-Dove.
    Also present: Representatives Correa and Raskin.
    Mr. Issa. The Committee will come to order. Actually, I do 
have to--the Subcommittee will come to order. Without 
objection, the Chair is recognized to declare a recess at any 
time. We welcome everyone here today for a hearing on the 
future of AI policy. I will note that this will be perhaps the 
last in a long series of AI hearings before several pieces of 
legislation will be marked up.
    I encourage the Members on both sides of the aisle to make 
sure that this panel of witnesses is asked questions that may 
be germane to proposed legislation, or legislation already 
offered.
    I now recognize myself for an opening statement. Literally 
a generation ago, or in technology, ten generations ago, a 
sharp young man graduated from Cal State San Marcos in my 
Congressional district. He joined a company that I was then CEO 
of, Directed Electronics, which had an inherent inventory 
problem.
    The inventory problem was that we had a few SKUs that sold 
well and that we managed, and hundreds of SKUs that were 
constantly either out of stock or in oversupply. It wasn't 
anyone's fault; we had simply grown quickly, and there was a 
certain amount of inconsistency in what was being sold in a 
given month.
    That bright young man took all that inventory and the 
records and put it into SuperCalc, a precursor to Microsoft 
Excel. Within weeks, we had reduced our out-of-stocks, increased 
our same-day delivery, and trimmed inventory to a level that 
actually saved us over a million dollars a year in inventory 
maintenance costs. That bright young man continued to work at 
the company for many years.
    He did not continue to use SuperCalc for long, because 
technology quickly gave better and better tools. The man, the 
program, and the machine: All are necessary to implement and 
make AI a reality. It was the man who made the machine that 
made the man a success. Over the last century the U.S. has led 
the way in virtually every area of technology because of our 
pro-innovation bias.
    We are the innovators, while China is the duplicator, and 
Europe, yes, is the regulator. As we speak, though, my home 
State of California, with an economy larger than that of Italy, 
is rivaling the European Union when it comes to trying to lead 
on regulation. This wouldn't be such an ironic occurrence, 
except we are the home of innovation, and yes, the new bastion 
of regulation.
    Just as in the 1990s when America led the internet 
revolution, a light touch such as that offered by the President 
in his initiative, in fact must be the direction we go. 
Anything else will give us a problem that I will describe. If 
we are not in fact innovating ten times faster than we are 
regulating, if the speed of innovation in the U.S. is not at 
least months, or possibly years, ahead of China and their speed 
of duplication, some of it actually using AI to duplicate what 
we are doing, we will, in fact, lose our edge.
    My home State is part of the problem, the European Union is 
part of the problem, but as you will see from our witnesses 
today, all fifty States have implemented some form of AI 
regulation, and in fact there are in the neighborhood of a 
thousand pieces of legislation spread over fifty States, that 
will create, if allowed to continue, a patchwork of indecision 
by the AI industry.
    Given conflicting regulations, given the inability to roll 
out technology with certainty, that technology will simply not 
be a priority. Let there be no doubt though. Either we win in 
innovation, and we win in AI, or we lose our edge on the 
international stage. Vice President Vance said it best, 
``America's AI technology must remain the gold standard 
worldwide.''
    We must continue to produce the next generation of AI, and 
we cannot do it with a patchwork of conflicting State laws. As 
of now, we are ahead, let there be no doubt. We are ahead in 
hardware, and we are ahead in software development. We are also 
on the leading edge of having the solutions for the energy 
problems. That includes modular nuclear reactors, and it 
includes a willingness to provide innovative solutions.
    During the last break I went to one of Apple's facilities, 
almost 1,700 acres located near Sparks, Nevada. What I saw 
there were some of the most impressive, simple buildings, 
filled with endless rows of various levels of chips for both AI 
and conventional data storage. What I also saw was a system 
that used zero conventional air conditioning to maintain that 
cooling.
    They had managed to beat one of the major causes of 
unrelated energy consumption, which was air conditioning, 
through an innovative system of evaporative coolers from 
locally available water, and a filtration system that allowed 
those to operate twenty-four seven without in any way being 
damaged by the high flow of air.
    They are making advances. This is over and above the 
innovation that we see in chips, the ones that we plan to see, 
and the additional power. I am going to contrast this location 
in Sparks, Nevada, just over the border from California, 
because it has 64 gigawatts of generation power. Why would they 
need it? Well, they would only need it in case of a power 
failure.
    Not so. The first time all was operational, it helped 
prevent a blackout in California, because California lacked the 
power, and by their going offline, Nevada was able to export 
power into my home State. That tells you a lot about the innovation 
in California, but not the ability to have those great new 
centers located there. In fact, Virginia, just a mile from 
here, is the No. 1 location, and other States are competing 
aggressively for it, and if nothing changes, they will win.
    These new laws will also affect early stage development 
because technical experts, let alone lawmakers, are not capable 
today of predicting where we will be tomorrow. Earlier this 
year in fact, overnight, the thinking on AI development and 
power needed took a sharp change, and everyone on both sides of 
the Atlantic and the Pacific is learning from what was released, 
and that will continue.
    Of course, I don't want to be just a naysayer, because in 
fact I am from an innovative State. I am from a State that is 
second to none in finding the best, the brightest, and bringing 
them here. Although this Subcommittee does not have 
jurisdiction over immigration, I want to make it clear here 
that AI development will also be about this Committee working 
on a bipartisan basis to find ways to not just attract, but to 
retain the best and the brightest for that development.
    Let there be no doubt, there are three hundred thousand 
Chinese students studying in America, and most of them are 
being told to come home and bring with them what they are 
learning here. The release of the new AI Action Plan signaled 
to the world that the Trump Administration needs Congress to 
legislate America first AI. Now, I know that sounds pejorative, 
but it isn't.
    The fact is that whether it was catching up on, if you 
will, the interstate superhighway under Al Gore, Sr., or it was 
leading on taking the ARPANET, and turning it into the 
internet, we have worked together in the past. We have worked 
to limit States, and to restrain our own overregulation, for 
the benefit of our economy, and it has worked.
    I want to welcome the President's leadership and look 
forward to working again to promote it. I want to additionally 
say that we have partners on both sides of the aisle. This 
Committee, including--she is not here right now, but Zoe 
Lofgren and others have been great partners in this in the 
past, and I expect they will be. Again, I just want to leave us 
with one truism.
    America has innovated and out-innovated the countries 
around the world for generations. Europe has become a 
regulator, and an admirer of our technology without embracing 
the way you get it. China has become the most efficient stealer 
of technology, and the term duplication, if it were truly 
innovation, would be a compliment, but it isn't.
    With that, I recognize the Ranking Member for his opening 
statement.
    Mr. Johnson. Thank you, Chair. I say thank you to the 
witnesses for your testimony today. When I drive from Georgia 
to Washington, DC, about once every six months or so, and I get 
a chance to listen to the radio, scan, and listen to all my 
favorite stations and tunes, I have got to pass through three 
States. I go through Virginia, I go through South Carolina, I 
go through North Carolina, and then I hit Georgia.
    We all have had experiences in crossing State lines before, 
and you will have different speed limits, and different levels 
of enforcement. You have the experience of figuring out what 
are the rules. In other words, you are going 79 miles 
an hour at the State line, and then you flip to another State, and 
boom, all of a sudden the speed limit is 65.
    You have got to do 74 to avoid getting pulled over, and hope 
that officers won't stop you. Anyway, when some suggested 
earlier that we in Congress should preempt all State AI laws, 
they would not just have done away with States' nascent 
generative AI consumer protections, they would have preempted 
common law causes of action against AI companies as well.
    When the doctrine of caveat emptor, or buyer beware ruled 
American jurisprudence, consumers had minimal protection and 
were expected to thoroughly inspect products themselves. 
Judicial interpretations began to change in the early-to-middle 
20th century as products became less straightforward and more 
complicated. Common law developed to better protect consumers 
in products liability and negligence cases.
    Today, most Americans can hardly imagine taking apart a 
toaster, let alone an AI chatbot, to make sure that it works 
correctly. Caveat emptor is effectively what advocates are 
suggesting we revert to when we talk about an AI moratorium. 
When you preempt an entire field of law, you are 
preempting the common law right along with it.
    The Supreme Court has repeatedly found, as it did in Riegel 
v. Medtronic, that a Federal law's reference to a State's 
requirements includes its common law duties. In plain language 
that means if Congress preempts State AI laws, we also preempt 
State common law, unless the legislation explicitly says 
something else. Common law cases to protect consumers are 
already being filed against generative AI platforms.
    Two days ago, Senator Hawley held a hearing on the harms to 
children using AI technology, calling witnesses whose children 
died or were hospitalized after interacting with artificial 
intelligence chatbots. I know some of the parents and families 
are in the room today. Kristin Bride, Juliana Arnold, Manny 
Fernesse, and Megan Garcia. I am so sorry for what you all have 
been through, and I admire your commitment to justice.
    Common law is crucial to the protection of Americans 
because it exists no matter whether there are comprehensive 
State laws on the books, or no laws governing new technology on 
the books. Even when there are no statutes, common law helps us 
set a floor for a standard of care as a society. When some of 
my colleagues across the aisle talk about a moratorium, 
preempting common law is exactly what they are talking about.
    Carve outs might be offered for some areas of the law, 
others may get a loose regulatory structure, but what many 
don't realize is that the glue that holds the law together 
would be wiped out in almost every scenario. By protecting 
common law, we can protect that floor that ensures every person 
harmed can seek to have their case heard before a court of law.
    This basic standard of care can spur innovation by 
preventing a race to the bottom, and it can offer a level of 
security as the Federal Government and States determine what 
the best next steps are for AI in the United States.
    With that, Mr. Chair, I yield back.
    Mr. Issa. I thank the gentleman. Does the gentleman from 
California seek to be waived onto the Committee?
    Mr. Correa. I do, sir.
    Mr. Issa. Without objection, the gentleman will be waived 
on, even though he is not a Member of the Subcommittee, and if 
others yield time to him, he will be permitted to ask 
questions. Without objection, so ordered. It is now my 
pleasure, notwithstanding the Chair and Ranking Members 
arriving, to introduce our distinguished panel of witnesses.
    Dr. David Bray is a Distinguished Fellow and Chair of the 
Accelerator at the Alfred Lee Loomis Innovation Council at the 
Stimson Center. He previously served as IT Chief of the 
Bioterrorism Preparedness and Response Program at the CDC, and 
in the intelligence community. Dr. Bray is the recipient of a 
Joint Civilian Service Commendation Award and a National 
Intelligence Exceptional Achievement Medal. Welcome.
    Mr. Kevin Frazier is the AI Innovation and Law Fellow at 
the University of Texas School of Law. His research focuses on how 
to design regulatory regimes that increase adoption and use of 
AI. Mr. Frazier also leads the AI Innovation and Law Program, 
which prepares students for careers related to artificial 
intelligence. Thank you for being here.
    Mr. Adam Thierer is the Senior Technology & Innovation 
Fellow at the R Street Institute, a free market think tank. 
His work focuses on cultivating emerging technologies. He 
previously was a Senior Fellow at the Mercatus Center, and was 
President of the Progress and Freedom Foundation.
    Professor Neil Richards is the Koch Distinguished Professor 
at Washington University School of Law, where he also co-directs 
the Cordell Institute for Policy in Medicine and Law. His work 
focuses on privacy law, information 
law, and freedom of expression. We welcome all our witnesses 
here, and as you may have seen on C-SPAN, it is the rule of the 
Committee that all witnesses be sworn in. Would you please rise 
to take the oath, and raise your right hand?
    Do you solemnly swear or affirm under penalty of perjury 
that the testimony you are about to give will be true and 
correct to the best of your knowledge, information, and belief, 
so help you God? Please be seated. Let the record reflect that 
all witnesses answered in the affirmative.
    As you also have heard many times, your full written 
statement, including reasonable, even if expansive, additional 
information you submit, will be placed in the record. As a 
result, if you are going to go past five minutes, do so by 
exception, and summarize what you do, so we can leave time for 
questions.
    With that, we will begin. I want to make sure I get the 
right name, Dr. Bray. After this it gets easy, we just go 
across.

                  STATEMENT OF DR. DAVID BRAY

    Dr. Bray. Thank you, Chair Issa, Ranking Member Johnson, 
and the Members of the Committee. I appreciate the opportunity 
to testify today. I am Dr. David Bray, Chair of the Loomis 
Accelerator at the Stimson Center, Principal at LeadDoAdapt 
Ventures, Senior Advisor to the General Catalyst Institute, and a 
Fellow with the National Academy of Public Administration.
    I work on tech, data, and geopolitical issues to help 
startups scale, communities adapt, and legacy organizations 
transform themselves amidst rapid global changes. My testimony 
focuses on advancing reliable, trustworthy AI consistent with 
the values of free societies and free markets from these 
perspectives.
    I place my comments in the context that the United States is 
experiencing multiple tech revolutions in addition to AI. 
Advances in space technology, biotech, quantum tech, and the 
proliferation of sensors and robots are all impacting U.S. 
companies, our workforce, and our communities. With respect to 
AI, I would like to mention three noteworthy advances to inform 
our discussion.
    First, active inference AI models demonstrate faster 
learning and use less data and less energy. Such 
approaches can be bound by spatial or temporal limitations in 
ways that are human readable, and interpretable across AI 
systems. Each of us as individuals could in the future restrict 
what AI systems do on our behalf.
    Second, open weight AI models with open-source code have 
shown that we can transform currently complicated processes, 
such as a Veterans Affairs form, into a conversational 
interface, dramatically reducing the time to complete, and 
speeding access to care.
    Third, federated learning allows AI systems to learn on 
datasets where they exist, with proper consent, empowering both 
individuals and organizations to choose whether their datasets 
and intellectual property are usable by AI and to negotiate a 
beneficial contractual relationship in return. 
Given these advances, three guiding principles drive my 
recommendations to the Subcommittee:
    First principle: U.S. strategies for advancing AI should 
recognize interdependencies between AI and other tech 
advancements.
    This requires a light touch policy framework. Recently, the 
National Academy of Public Administration has illuminated 
methods for sufficiently agile policy approaches to achieve 
measurable goals at the pace necessary given global changes.
    Second principle: Different AI methods carry different 
risks and benefits. For example, AI approaches to computer 
vision and expert systems follow predictable outcomes, whereas 
generative AI produces less predictable results. As 
such, AI policies should reflect these differences in AI 
methods.
    Third principle: There have been multiple waves of AI 
improvement over the years. We should expect continued 
advancement, which means U.S. policy approaches must adapt 
accordingly. For example, the Stimson Center's Loomis Council 
intentionally brings together industry leaders to adapt 
projects to new AI developments.
    Even with different AI methods, and the need for continuous 
adaptation, groups tied to specific domain applications of AI, 
for example, healthcare, transportation, and finance, can 
promote data-level interoperability across AI systems, avoiding 
silos. When electronic health record systems advanced in the 
2000s, the United States encouraged the nonprofit Health Level 
Seven to evolve an open standard framework for interoperable 
clinical data with privacy controls.
    We should do something similar now for health and AI. We 
each deserve a choice as to when AI uses our data, and medical 
doctors should not be hindered by noninteroperable AI systems. 
Given these principles, my recommendations are as follows:
    First recommendation: Our principles and policies should 
help advance freedom, human agency, and individual liberties.
    We face global competition from the Chinese Communist Party 
regarding AI's future, including their AI Plus initiative. The 
U.S. AI strategy must simultaneously encourage the advancements 
of the entire U.S. AI industry, and encourage the industry to 
advance individual freedoms.
    Second recommendation: Upgrading existing domain specific 
laws is more pragmatic than adopting new, sweeping AI 
regulations.
    I recommend a domain specific approach, because the risks 
of different AI methods vary by application. Examples include 
updating the Privacy Act of 1974, revisiting HIPAA, and 
reviewing other existing laws where the speed, scale, and scope 
of AI methods alter the risk calculus. Congress' recent 
efforts to upgrade banking laws with respect to stablecoins is 
another example of updating existing statutes given new 
technologies.
    Third recommendation: Assess what actions, consistent with 
U.S. values of freedom, human agency, and individual liberties, 
may need light touch policy to ensure AI advances freedoms 
across our Nation. When updating policies that already exist, 
we should build on Justice Brandeis' concept of a right to be 
let alone as law-abiding citizens.
    That includes choices about when personal datasets are and 
are not used by an AI, as well as when AI, and any associated 
intellectual property shared with it, is processed locally as 
opposed to in a cloud-based instance. Any national AI strategy 
should ensure we do not stifle advancements toward reliable, 
trustworthy AI, consistent with the values of both free 
societies and free markets.
    Thank you, and I look forward to your questions.
    [The prepared statement of Mr. Bray follows:]
    [GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Issa. Thank you. Mr. Frazier?

                   STATEMENT OF KEVIN FRAZIER

    Mr. Frazier. Chair Issa, Ranking Member Johnson, and the 
distinguished Members of the Committee, thank you for the 
opportunity to testify. My name is Kevin Frazier, and I am the 
AI Innovation and Law Fellow at the University of Texas School 
of Law. Outside of teaching students, I believe there is no 
greater purpose for academics than sharing knowledge with 
policymakers.
    This purpose is all the more paramount when it comes to 
complex challenges like harnessing AI to unleash human 
flourishing. A
few months ago, Dr. Jensen came before this Committee and 
announced that the Nation that leads in AI will shape the 
future. Nothing has changed in the interim. What remains 
uncertain however is whether the U.S. will retain its leading 
position.
    My goal today is to address the proper role of the States 
and the Federal Government in shaping AI policy. On governing 
the use of AI, the Tenth Amendment reserves extensive authority 
to the States to regulate within their borders. On the matter 
of AI development, the founders offered their answer in 
abandoning the Articles of Confederation and adopting a strong 
centralized government capable of protecting and advancing the 
national interest.
    As I will explain in the rest of my testimony, the founders 
infused three principles into our Constitution that when 
applied to the AI discussion resolve debates about the 
authority of each actor to shape AI development. Subsequent 
changes in related areas of the law, namely the Commerce 
Clause, have given rise to the false impression that muddy 
judicial interpretation somehow relaxed these principles.
    However, they remain as foundational today as they were two 
hundred years ago. Adherence to these principles is essential 
both as a matter of fidelity to the founders' vision, as well 
as to securing an AI regulatory posture that aligns with our 
Federal system.
    The first principle is that the Federal Government alone is 
responsible for matters that implicate the economic and 
political stability of our country.
    The emerging threats to national security and economic 
stability posed by advances in AI place regulation of training 
frontier AI models squarely in the authority of the national 
government. To focus on one of many examples, AI has lowered 
the barriers to the creation and deployment of bioweapons by 
bad actors. Defensive measures have not progressed at the same 
rate.
    Experts warn that, even with significant technical progress, 
the Nation would still need to adopt extreme measures to ready 
itself for a near future in which synthetic pathogens go 
undetected. That effort will flounder with second rate AI. 
Training frontier AI models, and by extension safeguarding our 
national health and prosperity cannot be waylaid by State laws, 
no matter how well intentioned.
    Second, the extensive authorities reserved to each State 
end at their respective borders. As the Supreme Court has 
specified on multiple occasions, the equal sovereignty of the 
States is a fundamental principle of our Constitution. Our 
constitutional order does not permit one State to 
intentionally and substantially interfere with the liberty and 
freedom of another.
    Neither political clout, economic might, nor population size 
grants one State the authority to project its legislation into 
another. Whether a State is the fourth largest economy in the 
world, or the 104th largest, has no bearing on its authority to 
shape the lives of Americans beyond its borders.
    Though the Supreme Court has tolerated the inevitability of 
some regulatory spillover, its recent holding in National Pork 
Producers v. Ross does not permit the sorts of regulations 
pending before many State legislatures, regulations that may 
deny all Americans access to a good itself because of the 
preferences of one political community.
    Building new pig pens to satisfy the preferences of 
Californians is technically and financially feasible. Training 
two frontier AI models, one to comply with the preferences of a 
single State, and one for the rest of us, is a billion dollar 
undertaking that rests on uncertain and evolving realities.
    Contradictory and vague State laws that impact AI 
development may thwart the sort of technological progress that 
has long fueled the American dream. Under a patchwork of State 
laws that impact AI development, Americans may never experience 
the education and healthcare that could have been realized by a 
national approach to pursuing the AI frontier.
    The third principle is that the ultimate authority in our 
constitutional system rests with the people. Our founders 
aspired for every American to exercise meaningful control over 
their daily lives. Extraterritorial regulation of AI 
jeopardizes these and other features of individual agency.
    The nature of AI development means that if labs are 
compelled to comply with one State's regulations for model 
training, those requirements will be imposed on the rest of the 
country, rendering us all less likely to realize the benefits 
of AI advances. Americans may be able to move as freely as they 
like, but they would still find themselves using AI tailored by 
State legislators over whom they have no control.
    Such a world is the antithesis of liberty. Denial or delay 
of the most sophisticated AI as the result of flawed State 
legislation is not a matter of mere inconvenience. It is a 
question of access to the greatest driver of human flourishing 
we have yet developed.
    Thank you again for inviting me here today, I look forward 
to your questions.
    [The prepared statement of Mr. Frazier follows:]
    [GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Issa. Thank you. Mr. Thierer?

                   STATEMENT OF ADAM THIERER

    Mr. Thierer. Chair Issa, Ranking Member Johnson, and the 
Members of the Subcommittee, thank you for the invitation to 
participate in this hearing. My name is Adam Thierer, and I am 
a Senior Fellow at the R Street Institute, where I cover 
emerging technology policy. My message here today boils down to 
one simple point.
    Congress needs to act promptly to formulate a clear 
national policy framework for artificial intelligence to ensure 
our Nation is prepared to win the computational revolution. If 
we get this wrong, the consequences could be profound in terms 
of geopolitical competitiveness, national security, economic 
growth, small business innovation, and human flourishing.
    Unfortunately, America's AI innovators are currently facing 
the prospect of many State governments importing European style 
technocratic regulatory policies across America. As you noted, 
Mr. Chair, more than one thousand AI related bills are already 
pending across the Nation. Some States are far more aggressive 
and influential on national market outcomes than others.
    Almost 50 AI related bills are currently pending in 
California, and New York is currently considering almost triple 
that number. Sacramento and Albany should not be dictating AI 
policy for the entire Nation. That approach is especially 
problematic for so-called little tech innovators who will 
struggle with confusing, costly compliance burdens.
    America would not have become the global leader in digital 
technology that it is today if we had had 50 State computer 
bureaus, or even a single California Computer Commission, 
allowed to license every single aspect of interstate computing 
and treat the internet as a regulated utility. Thankfully, 
America avoided that fate because of wise bipartisan decisions 
that this Congress made in the 1990s, which let digital 
technology be born free, as opposed to being born into a 
regulatory cage.
    Laws like the Telecommunications Act of 1996, and the 
Internet Tax Freedom Act of 1998, included important provisions 
preempting State regulation and facilitating a national digital 
marketplace. The U.S. is now the global leader in almost every 
segment of computing and digital commerce, thanks to this wise 
policy approach.
    Now is the time for Congress to work the same magic for AI 
by creating a national framework to prevent a patchwork of 
State mandates from undermining AI innovation. Colorado 
Governor Jared Polis has called on Congress to preempt State AI 
laws such as the one his own State passed last year, and he has 
even endorsed the idea of a State AI regulatory moratorium like 
the one Congress considered this summer.
    Other Governors have raised similar concerns, Connecticut 
Governor Ned Lamont has warned of quote, ``Every State going 
out and doing their own thing, a patchwork quilt of 
regulations.'' Just last week, New York Governor Kathy Hochul 
noted how quote, ``It is hard when one State has a set of 
rules, another State does, and another State. I don't think 
that is a model for inspiring innovation.''
    Congress could again try to implement a moratorium, or 
could formally preempt specific State and local regulatory 
enactments that impose an undue burden on interstate 
algorithmic commerce. If Congress chooses the latter option, 
Federal law makers should first preempt State regulations of AI 
frontier models, because the cost associated with such 
regulations would outweigh any local benefits.
    Such rules would create spillovers and undermine 
development of the systems the Nation needs to compete 
globally. State officials also lack technical expertise and 
information about national security matters that could be 
relevant to AI safety considerations.
    Second, for issues related to so-called algorithmic bias or 
AI discrimination, Congress should preempt State efforts to 
regulate the development of AI systems and applications through 
cumbersome and confusing mechanisms such as AI audits or 
algorithmic impact assessments.
    To the extent any such regulations are imposed, it should 
be done at the Federal level, and existing Federal civil rights 
laws and nondiscrimination standards should apply.
    Finally, Congress should also require the National 
Institute of Standards and Technology, and the new Center for 
AI Standards and Innovation within NIST to oversee a new 
standing AI working group to coordinate and work to resolve 
other Federal and State AI policy matters.
    NIST and CAISI could help devise more workable, consistent 
standards for AI policy matters not already preempted by 
Federal law. Even where the scoping of Federal preemption 
proves difficult, everyone should agree that AI development 
will be discouraged if America has dozens of different 
definitions of key concepts. Inconsistent standards will 
undermine market certainty, and hurt investment, innovation, 
and competition.
    Ongoing Congressional oversight of this process will be 
essential, and Congress can simultaneously consider what sort 
of new light touch rules might be necessary at the Federal 
level to address various AI safety concerns. Meanwhile, State 
governments still have a role to play, and will have plenty of 
room to act, using a diverse policy toolkit of generally 
applicable laws to address any real world harms that might come 
about from AI applications.
    In closing, the time has come for Congress to exercise its 
constitutional responsibility, to protect the interstate 
marketplace and the national interest in the development of 
robust AI capabilities that will ensure the United States 
remains at the forefront of this technological revolution.
    Thank you for holding this hearing, and I look forward to 
any questions you may have.
    [The prepared statement of Mr. Thierer follows:]
    [GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Issa. Thank you. Particularly thank you for mentioning 
our former colleague, Mr. Polis.
    Professor Richards?

                   STATEMENT OF NEIL RICHARDS

    Mr. Richards. Chair Issa, Ranking Member Johnson, and the 
distinguished Members of the Committee, thank you for the 
opportunity--
    Mr. Issa. Could you either put it closer, or turn it on, or 
both?
    Mr. Richards. Sorry, Mr. Chair.
    Mr. Issa. Fantastic, not a problem at all. We are talking 
tech here, so we will go high tech and turn them on.
    Mr. Richards. We have the automatic ones in St. Louis, so.
    Mr. Issa. Of course you do.
    Mr. Richards. Chair Issa, Ranking Member Johnson, and the 
distinguished Members of the Committee, thank you for the 
opportunity to appear before you this morning. My name is Neil 
Richards, and I am the Koch Distinguished Professor in Law at 
Washington University in St. Louis, where I direct the Cordell 
Institute.
    This hearing is about whether Congress should consider 
preempting State laws that touch on artificial intelligence 
technologies, and it is my firm and considered opinion that 
denying States the ability to regulate novel technology issues 
going forward would be a grievous and avoidable error that 
would not be in the best interests of American industry, or the 
American people.
    I would like to offer three high-level points this morning.
    First, Federal preemption of State laws touching AI would 
be reckless, and expose consumers to great risk of harm. 
Artificial intelligence, as we have already heard this morning, 
is not just one technology, it is a cluster of related and 
changing technologies that would be nearly impossible for a 
general preemption law to define with care.
    In addition, AI technologies will likely affect every 
aspect of human life, just as industrialization did starting in 
the 19th century, and as the internet did starting in the 20th. 
Like those before them, AI technologies will produce many good 
things, but also many bad ones, like kids becoming emotionally 
dependent on chat bots, generative AI hallucinations affecting 
our courts by making up false citations, new ways to hack 
systems, and, critically, other harms that we cannot foresee 
today.
    At a time when we can be sure that harms will result, but 
we cannot be sure how, depriving States of the ability to adapt 
to, and try to mitigate, these harms would be to disregard a 
clear and obvious risk, and that is the legal definition of 
recklessness.
    Second, States have been pioneers of sensible tech 
regulation over the past three decades that has built essential 
digital trust for tech companies. If States had been banned 
from regulating the internet in 2000, we would have no broad 
requirement for website privacy policies, no data breach 
notification laws, no laws banning employers from demanding the 
social media passwords of their employees.
    No laws regulating facial recognition technology, no 
substantive data security statutes, no comprehensive privacy 
statutes, no laws preventing kids from accessing hardcore 
digital pornography, or other dangerous content. No laws 
limiting the ability of tech companies to peddle addictive 
business models to children, and much less enforcement of 
digital fraud, abuse, crime, hacking, and data breaches.
    With these State legal guardrails in place to secure 
essential consumer trust, the past 30 years have seen the 
explosive success of Silicon Valley. Without State privacy and 
security laws, for example, we would still be afraid to give 
our credit card numbers to Amazon. State digital laws have 
tamed the worst excesses of the internet and helped to make it 
a trustworthy place for innovation, connection, free 
expression, and business.
    Broad AI preemption would have the opposite effect for 
artificial intelligence.
    Third, I would like to address a claim frequently made by 
industry that State regulations somehow stifle innovation. As 
history makes clear, these arguments are, in my opinion, 
mistaken and misguided. Law creates and enables innovation by 
stabilizing the marketplace.
    It sets the ground rules for fair and robust competition, 
making the market safe and sustainable for consumers. Contrary 
to its libertarian origin myth, Silicon Valley was shaped by 
laws from the beginning, from government defense contracts to 
intellectual property laws, and from securities laws to Federal 
and State prohibitions on unfair and deceptive trade practices.
    Law has always played a role in preventing scammers and 
thieves, and in shaping corporate business practices so that 
they benefit society as a whole. It is the presence of 
regulation, including State regulation, that has led to America 
being a leader in digital technologies and services. While we 
can certainly, and I am sure we will this morning, debate how 
much regulation, and what kind, is appropriate, having no new 
regulations at a time of rapid change would be a disaster.
    If innovation is as magical as industry says it is, it can 
still do good things while respecting the policy choices of the 
people's elected representatives. In this way the necessity 
created by reasonable regulation has been, and should continue 
to be, the mother of invention. In conclusion, stripping our 
States of any power to regulate AI, and potentially anything 
done with a computer, would be a reckless and grievous error. 
State regulations have always played an essential role in 
building consumer trust and shaping the digital revolution for 
the better.
    Thank you, and I welcome your questions.
    [The prepared statement of Mr. Richards follows:]
    [GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Issa. Thank you. I understand the Ranking Member would 
like to make an opening statement.
    Mr. Raskin. If that is all right with you, Mr. Chair.
    Mr. Issa. It is always good to hear from you.
    Mr. Raskin. Well, thank you kindly.
    Mr. Issa. The gentleman is recognized.
    Mr. Raskin. I am very grateful to you for putting together 
this really important hearing, and thanks to the witnesses for 
your statements. When commercially available AI debuted three 
years ago, the consequences were breathtaking from the start. 
Generative AI spurred scientific research and provided 
astonishing new tools to creators.
    The massive jump starts to American innovation swept from 
one social and economic domain to another, from pharmaceutical 
research and quantum computing to sound recording and film 
editing. Generative AI has also raised profound legal, 
practical, even philosophical problems, such as whether 
individuals have a right to their name, their image, their 
likeness, and their voice when they are used in other people's 
deep fakes.
    Whether government has unlimited power to engage in AI 
enabled surveillance of our citizens, and what the appropriate 
standards of care, if any, should be for AI platforms to 
protect users against harmful consequences. While Congress takes time to
absorb the shock of these changes and these problems, and 
examines the technology, the States have already begun to enact 
the first regulations on AI.
    We often talk about creation of rules of the road when 
crafting legislation to govern new technology, but I think this 
way of talking about consumer safety and technological ethics 
suggests that a road without speed limits would get us to where 
we are going faster. For generative AI, it is not just about 
creating speed and safety laws and building highway guardrails, 
but rather about building a road system in the first place.
    Some of my colleagues would argue that the construction of 
local roads is unnecessary. They say that without broad 
preemption, without clearing the field of all State based legal 
encumbrances, AI companies and fledgling startups will have 
trouble complying with State laws, will be put to a 
disadvantage, and will wither on the vine.
    I have heard little to suggest that broad preemption is in 
fact the appropriate solution to this problem. Proponents of 
preemption present Americans with a series of false choices, 
telling us we must choose between AI innovation and State 
powers and federalism, between business and consumers, between 
national security and safe innovation. In 1816, Jefferson wrote 
that laws and institutions must go hand in hand with the 
progress of the human mind.
    As that becomes more developed, more enlightened, as new 
discoveries are made, new truths discovered, and manners and 
opinions change, with the change of circumstances institutions 
must advance also to keep pace with the times. 
What we see across America is individual States looking at 
these amazing technological developments, and asking whether 
and how their laws need to change to protect their citizens and 
advance the common good.
    Today, you might think that the issue has some kind of 
necessary partisan valence to it, Republicans on one side, 
Democrats on the other, but opposition to an AI moratorium is 
broad and bipartisan. In fact, when some of my Republican 
colleagues tried to pass a moratorium through our last spending 
bill, attorneys general from across the States red, white, and 
blue sent a letter to Congress saying please don't do this to 
our State laws.
    In another letter, 17 Republican Governors wrote to 
Majority Leader Thune and Speaker Johnson praising their quote 
``Big Beautiful Bill,'' but explaining that the moratorium 
provision stripping the right of any State to regulate this 
technology in any way without a thoughtful public debate was 
quote, ``the antithesis of what our founders envisioned.''
    I surely disagree with these Governors on many things, but 
I think that they are right and should indeed be free to create 
what they call quote, ``Smart regulations of the AI industry 
that simultaneously protect consumers, while also encouraging 
this ever developing and critical sector.''
    In a statement submitted for this hearing, AI startup Bria 
wrote the moratorium on State laws would create a giant vacuum, 
and strip away the rules needed to quote, ``raise capital, form 
partnerships, and build safely in order to win consumer 
trust.'' Without a road on which to travel forward, startups 
are cut out of the market in favor of large companies with the 
legal and fundraising teams necessary to deal with a barren 
legal landscape.
    Finally, some argue we need unrestrained AI development to 
properly compete with China. This Subcommittee has held many 
bipartisan hearings on the threat to innovation, AI supremacy, 
and IP from China. It would be amazing, even dangerous, to 
posit that we need to become more like China to compete with 
China.
    In fact, it seems more plausible to me to believe that 
stronger, better products developed in America, while 
protecting Americans and their data through American political 
processes and the passage of American laws, will ensure that AI 
is more advanced, more durable, and more internationally 
competitive.
    Protecting American innovation, investing in American 
research, developing American laws to deal with problems like 
deep fakes, political deep fakes, discrimination through AI and 
so on, and investing in our workforce, I believe, is the right 
way to win the so-called AI arms race. American safety is not 
at odds with AI innovation; that should be the baseline for any 
conversation we have about the best way of moving forward. I 
very much look forward to this conversation, and I have already 
learned a lot from it.
    Back to you, Mr. Chair.
    Mr. Issa. I thank the gentleman. Without objection all 
other opening statements will be placed on the record.
    It is now my pleasure to go to the gentleman from Virginia, 
Mr. Cline, for five minutes.
    Mr. Cline. Thank you, Mr. Chair. I want to thank the 
Ranking Member for referencing our third President, the 
gentleman from Virginia, Mr. Jefferson, and one of our great 
inventors from the earliest days of our republic, and for the 
Chair referencing the work of our current Governor, Governor 
Youngkin, who is working to make sure that Virginia is the 
leader in data center development.
    We continue to be the leader nationally, and we intend to 
stay that way as AI grows and develops. Mr. Frazier, you have 
written that the Constitution's intellectual property clause is 
first and foremost a directive to advance and spread knowledge, 
I think that was on X a couple weeks ago. How should Congress 
strike the right balance between protecting copyright owners 
and ensuring that AI regulation continues to promote the spread 
of knowledge consistent with that constitutional purpose?
    Mr. Frazier. Thank you very much for the question, and 
thanks for the follow, or at least perhaps the like. In this 
regard the Constitution is clear that the IP Clause has always 
been grounded in promoting the progress of science and useful 
arts. Here, if you go back to the founding articles, as well as 
subsequent interpretation by the Supreme Court, the focus has 
always been on making sure that there is the spread of 
knowledge across the country.
    The IP laws grant an exclusive right to creators to attempt 
to incentivize that creation. What we need is to make sure that 
there is an economic analysis of the extent to which those laws 
are working as intended. The purpose of the IP Clause is not 
the profit of creators; it is the progress of society.
    What we need to get back to are those first principles when 
it comes to examining copyright law and patent law. Right now, 
if you look at an analysis from scholars such as Richard Watt, 
you will see that the preponderance of copyright benefits is 
not going to your average Joe and Jane author, but to large 
publishers, and so we need more analysis on that front.
    Mr. Cline. Would you agree that ensuring transparency in AI 
systems such as being able to trace what data was used to train 
a model is essential both for protecting IP rights, and for 
maintaining public trust in AI platforms?
    Mr. Frazier. I would agree that broad overviews of the 
sources of training data are important to get an understanding 
of where and how models are being trained.
    Mr. Cline. In that same spirit, could giving creators a 
private right of action for tampering with content credentials 
help strike the right balance between protection and 
innovation?
    Mr. Frazier. My own estimation is that granting that sort 
of right would be a significant barrier to AI innovation given 
the centrality of access to data for innovation. We have seen 
that many courts and many scholars have regarded the use of 
data as a transformative purpose under copyright law, and 
denying the ability to train on wide swaths of data would be a 
real hindrance to our ability to leverage AI.
    As many folks have said on many occasions, bad data leads 
to bad AI. If you want better AI, you want better quality 
information, and if we throw many legal gears into that 
equation, we won't get the AI we deserve.
    Mr. Cline. Thank you. Dr. Bray, as we consider whether, and 
how best to regulate AI platforms, do you believe that we must 
avoid the same mistakes we made in the early days of the 
internet with broad safe harbors that gave platforms a free 
pass for enabling copyright infringement and countless other 
harms?
    Dr. Bray. Thank you for that question, Representative 
Cline, and as a fellow Virginian, glad to be here. I was 
around, and actually working in the 1990s on the early days of 
the World Wide Web. My observation would be that what we did 
was fit for purpose for the 1990s. Now, in the two decades 
since, we have seen the rise of applications on top of that 
technology where we may need to make adjustments.
    What we need to separate is the desire to roll out the 
technology, so that the entire Nation can have access to the 
internet, from the question of whether the applications on top 
of it need legal adjustments; if they do, it would make more 
sense to adjust at the application level. However, I would say 
what we need to recognize as we go forward here is that we are 
trying to advance the technology so it can be used by startups, 
by communities, and by legacy organizations that haven't gone 
AI native yet.
    At the same time, if we see there are applications where we 
want to prevent harm to individuals, to children, things like 
that, adjust the applications while not limiting innovation on 
the technology.
    Mr. Cline. Just like other businesses, bad actors have to 
be accountable for the harm that they cause. In addition, if we 
consider some type of temporary pause for State level AI 
specific regulation, we have to ensure that other generally 
applicable State and Federal laws continue to apply, with 
copyright law being one example. Do you think AI platforms 
should be held to the same standard of accountability as any 
other business, including when it comes to respecting 
copyright?
    Dr. Bray. Absolutely, and I would actually say that is why 
I am so excited about federated learning, because there 
actually could be the opportunity where, whether you are a 
recording artist, a musician, or an individual, you could say: 
here is the data that I have pooled; you can learn on my data 
in situ, as opposed to shipping it somewhere, and we can have 
that recorded as a transaction.
    Then, in return, I am getting benefit, whether it be 
financial or otherwise. It is a new model that is actually 
quite possible. It has been possible for more than five or six 
years, and we can motivate people to do it.
    Mr. Cline. Great, thank you. I yield back.
    Mr. Issa. Gentleman yields back. Who seeks recognition?
    The gentleman from Georgia is recognized for five minutes.
    Mr. Johnson. Thank you, Mr. Chair. Professor Richards, I 
mentioned in my opening statement my concern that broad Federal 
preemption of State AI laws would also preempt common law 
causes of action. How does common law, particularly tort law 
help protect Americans from harm?
    Mr. Richards. Thank you for the question, Representative 
Johnson. Common law is the foundation of American law; it is 
all over the United States, and it goes back to the colonies, 
to the English tradition. Common law brings flexibility to the 
law. If we think about my own specialty, privacy law, there was 
a reference earlier by one of the other witnesses to Justice 
Brandeis' right to be let alone.
    Privacy law in America was originally a product of common 
law, where the law adjusted to the realization that information 
about people was being collected without their consent, or was 
being disclosed. An important line of cases, relevant to this 
subject today, developed to protect the names or likenesses of 
people whose pictures and names were being used to sell 
products without their consent.
    Common law is a tremendous source of flexibility and 
vitality in our law that allows the law to adjust to changed 
circumstances like the advent of technological revolutions such 
as artificial intelligence.
    Mr. Johnson. Thank you. Can common law be used to protect 
Americans even in the absence of explicit statutes?
    Mr. Richards. Absolutely, sir. That is sort of the nature 
of the common law, that judges can apply the existing rooted 
principles of tort law, contract law, and property law, and 
they can, from those general principles, divine specific 
applications that can provide new protections so that the law 
continues as it always has, to evolve alongside technological 
invention.
    The Ranking Member referred to Mr. Jefferson's statement 
from 1816, which, as a proud graduate of UVA, I also endorse.
    Mr. Johnson. Some of the current lawsuits against AI 
companies are being brought under common law to hold companies 
accountable for the harm that their products have caused to 
children. For example, Megan Garcia is suing Character 
Technologies and Google after her 14-year-old son Sewell Setzer 
died by suicide.
    She testified before our colleagues in the Senate this week 
that his death was, quote, ``the result of prolonged abuse by 
AI chat bots on a platform called Character AI.'' The chat bot
sent Sewell sexual messages and asked him to ``come home to me 
as soon as possible.'' Others have filed lawsuits against 
Character Technologies and OpenAI for wrongful death, 
negligence, and other causes under both common law and State 
laws about deceptive or unfair trade practices.
    These tragic cases show some of the worst possible harm 
that can arise from AI technologies. Professor Richards, does 
an AI moratorium run the risk of impeding these lawsuits that 
seek to hold companies accountable?
    Mr. Richards. It would, particularly if it were defined 
broadly. Let me also say in response to your question, 
Representative Johnson, as a parent myself, my heart goes out 
to the families who have lost their children. When we think 
about rules like negligence: negligence was the great 
innovation of the common law in response to industrialization.
    It means that anybody whose actions affect other people 
must behave in a reasonable way and not cause unreasonable 
harm. I
am sure that the car companies, the railway companies, and the 
industrial companies of the 19th and 20th century would have 
argued that the common law developing the law of negligence 
would have impeded innovation, but actually it safeguarded the 
development of those technologies by enabling us to drive cars, 
ride on the rails, fly on airplanes, and otherwise enjoy the 
benefits of our inventions knowing that we are safe and 
protected, and where those
technologies or their deployers overstep the line, we have a 
right of action to defend our rights, and protect our families.
    Mr. Johnson. Thank you. It was States that developed the 
common law. Professor Richards, beyond common law, are there 
some areas where it is appropriate for States to lead the way 
on laws about AI technologies? If so, what sectors or use cases 
should continue to be the province of the States?
    Mr. Richards. I think the answer to that question is yes, 
and I think particularly where there is deployment of AI rather 
than generation of AI, the use of AI in point-of-sale devices, 
employment discrimination, consumer protection, the traditional 
provinces of State regulation. Let me also, if I could, add one 
additional thing, Representative.
    Mr. Issa. Briefly.
    Mr. Richards. The States have filled the gap left by this 
Congress, which did not regulate the internet and did not 
regulate privacy generally. With AI technologies, if Congress 
is for whatever reason not able to pass comprehensive 
legislation protecting Americans, the States will continue to 
fill that gap as they have in the internet age.
    Mr. Issa. Thank you. We now go to the gentleman from Texas, 
Mr. Gooden, for five minutes.
    Mr. Gooden. Thank you, Mr. Chair. Mr. Frazier, as a trend 
it seems like every State is jumping to regulate AI, and 
perhaps some of them are doing it just to show early 
participation. Do you think that is well thought out? Also, 
what are the long-term effects of having a decentralized 
patchwork of laws, and how does this help or hinder new 
entrants?
    Mr. Frazier. Thank you very much for the question, 
Representative, always good to talk to a fellow Texan, hook 
'em. In every prior setting where we have seen a rush to 
regulate among the States, there is a real, noticeable impact 
on small businesses. If we look at research, for example, from 
Engine done in conjunction with the Ford School of Public 
Policy at the University of Michigan, we see that just changing 
a privacy policy statement can mean maybe $6,000 in fees to 
outside counsel. That is $6,000 out of $55,000 of monthly 
revenue and operational expenses.
From a small business perspective, the rush to regulate is a 
real hindrance on innovation. I also think that the rush to 
regulate among the States creates a patchwork, and a huge risk 
of extraterritoriality in terms of application.
    We have talked a lot about Virginia today, which is 
welcome. At the time of the founding, Virginia had around 
700,000 residents; Delaware and Rhode Island, something around 
30,000. The founders didn't say there was a Virginia privilege, 
or we should have a Virginia effect. They did not want to see 
that happen, instead they made sure that States stayed in their 
respective borders when it came to regulation.
    Mr. Gooden. Thank you. Is it possible for bad actors to 
misuse inconsistencies, especially in terms of violating 
intellectual property rights?
    Mr. Frazier. Yes, we have seen a documented effect of what 
is referred to as regulatory overload, as folks at the Mercatus 
Center have written about. When we have endless litigation, 
endless labels, endless warnings, what we actually get is less 
safety, because people don't know which law to adhere to.
    If you talk to a lot of startups today, they don't have a 
public policy person, they don't have a general counsel. Just 
adding more laws to the equation actually reduces the odds of 
user safety.
    Mr. Gooden. Thank you, I appreciate that.
    I yield the balance of my time to Mr. Correa from 
California.
    Mr. Correa. Thank you, Mr. Gooden. Gentlemen, listening to 
your debate today reminds me of what General Patton used to 
say, which is lead, follow, or get out of the way. AI is moving 
faster than we imagined, or even expected just last year, 
touching every aspect of our lives. Most of our constituents, 
like many of us here, don't know a lot about it, but they know 
enough to expect that we here will protect them, their jobs, 
children, and intellectual property.
    The debate about whether it is local control or Federal 
control is secondary to the fact that we just can't move on 
this stuff at the Federal level. Mr. Frazier, you are from Texas, I 
am from California, fourth largest economy in the world. How do 
you coordinate Federal and State action to make sure that we 
respond to our constituents responsibly? Thank you.
    Mr. Frazier. Thank you for the question, and I was a 
Berkeley Law grad, so I share some California ties. I want to 
emphasize that we do need to see regulation in this space, and 
we need to see that Americans are protected, and especially our 
vulnerable communities and children. What I am concerned about 
are laws like AB1046 out of California, for example, that 
impose on AI companions a desire to prioritize factual accuracy 
over a user's preferences, specifically--
    Mr. Correa. The laws 1047 or 1046?
    Mr. Frazier. AB1046, prioritizing factual accuracy over 
the preferences of the user, in this case a child. To 
which I ask who gets to define factual accuracy? Is it 
California State government? Who is going to answer the 
question of whether Santa is real for a seven-year-old? Who is 
going to make factual determinations about religion for that 
child user? Those questions shouldn't be answered by California 
for the rest of the country.
    Mr. Correa. I would ask you to also look at SB53, which is 
now being addressed in the California State legislature, and 
tell us your opinion on that as well.
    Mr. Frazier. I think SB53 is the least bad option I have 
seen with respect to AI development regulation. As we have 
discussed in this hearing, AI development in my opinion, and in 
the opinion of many should be left to the national government 
as a core--
    Mr. Correa. Anything you agree with in SB53?
    Mr. Frazier. I very much agree with the whistleblower 
protections, that is an important mark. I also agree with the 
fact that it calls on regulators to revisit definitions and 
terms frequently to make sure they are working as intended.
    Mr. Correa. Thank you, Mr. Gooden and Mr. Chair, I yield.
    Mr. Issa. Thank you. We now go to the Ranking Member of the 
Full Committee, the gentleman from Maryland.
    Mr. Raskin. Thank you very much, Mr. Chair. I don't 
understand the attack on the patchwork of laws. Maybe that is 
because I am a quilt person, I like patchworks, but isn't that 
what federalism is? Federalism is a patchwork. That is the 
glory of the American governmental system.
    To be sure, Congress eventually gets it together to adopt a 
national law on everything from the Clean Air Act, to the Clean 
Water Act, to the National Labor Relations Act, and the Fair 
Labor Standards Act, but it would have made no sense to say, 
before those Federal laws were passed, let us wipe out the 
State laws that exist on child labor, minimum wage, not 
polluting the water, or not polluting the air.
    In fact, that contradicts what I thought the central 
dynamic of federalism was, which is that the States are the 
laboratories of democracy, that was Brandeis, and then the 
different changes that they make are compared to each other, 
and then they bubble up, and Congress takes it all into account 
when it decides to attempt a nationwide approach. Is that a 
fair statement of the situation, Professor Richards?
    Mr. Richards. Absolutely. I also have a degree in early 
American history from the University of Virginia. My studies in 
early Federal history--
    Mr. Raskin. Virginia is getting a lot of play today, I 
don't know, as a Maryland guy I have got some questions about 
that.
    Mr. Richards. Would you like me to continue?
    Mr. Raskin. Please.
    Mr. Richards. Absolutely. The goal of federalism is to have 
laws that are more responsive to the people, who are closer to 
their representatives, so that the legislatures of particular 
States can adapt to that particular State's problems and 
strengths, and also to protect and to experiment.
    Mr. Raskin. OK, so just to restate the obvious, there is an 
attempt to impose a moratorium on State laws, or to wipe out 
State laws. How is that different from the way that Federal 
preemption has taken place in the other cases that came to mind 
for me?
    Mr. Richards. Well, sometimes Federal preemption can, just 
through the Supremacy Clause, preempt particular laws, or laws 
that are inconsistent with a Federal mandate. In addition, 
Congress can operate in ways that set a general national 
standard but still allow States to experiment with 
stronger standards so that the innovation in regulation can 
continue at pace with the innovation in technology as you and 
Mr. Jefferson put it so well.
    Mr. Raskin. In addition to those differences, isn't it the 
case that a moratorium today would just wipe out State laws 
without substituting anything, without imposing a national law?
    Mr. Richards. Absolutely--
    Mr. Raskin. Is there any precedent for just doing that, 
saying we don't want any State laws at all while we think it 
over, or while we are stuck in some kind of legislative 
paralysis?
    Mr. Richards. I can't think of one, and that is why I think 
it would be disastrous.
    First, depending on how the law is defined, it could sweep 
very, very broadly, and take out laws that are important and 
protective, that everybody on this panel would agree are good 
laws.
    Second, if you have a broad preemption, this would be a 
defense that tech companies could make in every piece of 
litigation, increasing the cost of the litigation system as the 
contours of that preemption definition could continue to affect 
litigation years into the future.
    Mr. Raskin. Yes. Some people are with us today who are 
involved in, or have been involved in different kinds of 
litigation, and my heart goes out to them, being a father who 
has lost a son, and these are all people who have lost children 
in different kinds of interactions with chat bots, and other 
kinds of AI technology.
    I just want to recognize Kristin Bride from Oregon, who 
lost her son Carson, he was sixteen, in 2020, the same year we 
lost our son Tommy. Juliana Arnold who lost her 17-year-old 
daughter Coco to fentanyl poisoning after she purchased a 
counterfeit pill online. Megan Garcia from Florida, who lost 
her 14-year-old son Sewell, who took his life in February after 
months of abusive interactions with a Character AI chat bot.
    Jane Doe from Texas, whose son, JF, suffered severe 
physical and mental health harm after multiple chat bots 
instructed and encouraged him to engage in self-harm and self-
violence. All of which is to say in my mind there are profound 
problems here that we really do need to deal with.
    The last thing I would want to do is to try to nullify 
States that have already addressed the problem in response to 
constituents dealing with a nightmare like that without 
replacing it with something. I am not averse to the idea that 
there might be a national law that works, but certainly 
imposing a legislative vacuum on the country would be a really 
dangerous way to go.
    Thank you, Mr. Chair, I yield back.
    Mr. Issa. You are most welcome. With that, we go to the 
gentlelady from Florida for five minutes.
    Ms. Lee. Thank you, Mr. Chair. Mr. Frazier, I would like to 
return to you, I appreciate so much your discussion of the 
Commerce Clause, and you made some important distinctions for 
us already in your testimony when you talked about the need for 
the Federal Government to intervene, and think about preemption 
when we are talking about a subject that affects the economic 
or political stability of the United States.
    You drew a distinction between pig sties and artificial 
intelligence. Would you elaborate for us please, about why you 
believe the things that we are discussing here today do go to 
the heart of the economic and political stability of the United 
States, and should be distinct from those areas where the 
laboratories of democracy concept actually works?
    Mr. Frazier. Thank you for the question, Representative 
Lee. It is profoundly important to get back to that Brandeis 
quote about laboratories of democracy. There is a forgotten 
portion of that quote, which is ``without risk to the Nation.'' 
You can run an experiment without risk to the Nation. Many of 
these experiments that we are seeing proposed and enacted in 
California do pose a risk to the Nation, because they try to 
impede AI innovation itself.
    When we see individual States reach into the AI development 
process, they are not just tinkering with a modular process; 
there is not a California-specific AI training run that OpenAI 
does. Anthropic doesn't train its models 50 times over, once 
for each State.
    While there may be a lane for State regulation, and I 
believe there is a lane for State regulation with respect to AI 
use, we have to follow up and ask the question of what a real 
experiment looks like. That experiment can't be one that 
exceeds the borders of that State. Yet, California's bills time 
and time again would result in labs having to change their 
practices the Nation over.
    I have lived in California, I have lived in Florida, I have 
lived in Texas, I have lived in Oregon, I have lived in 
Massachusetts, and in D.C., and I can tell you in each of those 
places they don't want Californians to dictate the terms of 
their AI.
    Ms. Lee. I also need to follow up on this question. You 
said something interesting when you were talking about current 
copyright law, how it operates, and really ensuring that we are 
still honoring the concepts of content creators and 
intellectual property: you suggested an economic analysis of 
what is happening with the use of this content in training 
models.
    Would you elaborate for us a bit more on how that would 
look, and how we can get to the bottom of how to properly 
compensate those content creators?
    Mr. Frazier. Happily, and thank you for the question. If 
you look at current settlements, for example in the Bartz v. 
Anthropic decision, and you begin to analyze who those funds 
actually go toward, a large portion of that fee is going to go 
to publishers. It is not going to go to the actual authors 
themselves. If we are trying to incentivize the creation of new 
art and new scientific discoveries, copyright may not be the 
vehicle we need.
    It is not serving the same purpose it did in 1789, back 
when it was just limited to 14 years with the possibility of a 
14-year renewal. It is now the life of the author plus 70 
years. That is an incredibly long time, especially when you 
consider that the founders really hated monopolies.
    The fact that we ended up in a world in which a handful of 
publishers may be able to dictate the quality of our AI is 
antithetical to the original purpose of the IP Clause.
    Ms. Lee. Thank you. Mr. Thierer, one of the things that you 
touched on was the idea that Congress could explore giving NIST 
or CAISI more authority to develop standards for AI frontier 
models. Would you share more on your perspective of how we 
might do that? Should we designate a single Federal entity to 
try to develop those standards? Share with us a little more of 
your thoughts there, please.
    Mr. Thierer. Yes, absolutely, thank you for the question, 
Congresswoman. Let us be clear that the reason that NIST needs 
to play a role here in this new CAISI body is because they have 
the ability to address exactly what the problem is here, which 
is that many States are attempting to impose a very 
technocratic type of design on artificial intelligence models 
and systems preemptively, in an almost European-style way.
    That is a huge problem, I will just again quote from 
Governor Jared Polis, who said,

        Government regulation as applied at the State level in a 
        patchwork across the country can have the effect to tamper 
        innovation and deter competition in an open market.

    It is not just that, these States lack the technical 
capability to do some of this in certain circumstances, and 
lack the information needed to do it properly.
    We have set up this body, I should remind the Committee, 
set up under President Biden and retained by President Trump in 
a bipartisan move, and just renamed to focus more on standards 
and innovation; that is a good plan. Once again, we have a 
bipartisan agreement here, we have got a new technical body, 
and they can handle it in conjunction with other existing 
policies, both Federal and State.
    Ms. Lee. Thank you. Mr. Chair, I yield back.
    Mr. Issa. Thank you. We now go to the gentlelady from 
California, Ms. Lofgren.
    Ms. Lofgren. Thank you, Mr. Chair. Before I make remarks or 
questions, I would like to ask unanimous consent to put into 
the record a letter from the California Privacy Protection 
Agency.
    Mr. Issa. Without objection, so ordered.
    Ms. Lofgren. Thank you. The title of this hearing, ``A 
Nationwide Strategy or Californication,'' I take a little bit 
of exception to. I get it, people brought up California because 
we set the pace, but it is worth noting that my colleague from 
California, Mr. Correa, mentioned: California is the fourth 
largest economy in the world, with over $4.1 trillion in GDP.
    It surpassed Japan last year and trails only Germany, 
China, and the United States as a whole. California is the No. 
1 State for manufacturing. It is home to 
the most Fortune 500 companies of any State, more than forty 
are in Silicon Valley, my home. It also has the highest 
agricultural output of any State.
    It is home to five of the Nation's top ten public 
universities, UCLA, UC Berkeley, UC San Diego, UC Davis, and UC 
Irvine. It accounts for over 12 percent of all university R&D 
expenditures in the United States, with the University of 
California system alone spending more than $12.1 billion.
    It receives more NIH funding than any other university 
system in the United States. Now, these aren't just vanity 
stats, they are the foundation of the modern innovation economy 
that California has built: world-class universities, labs, 
investors, entrepreneurs, workers who turn ideas into jobs and 
into growth. California has been, and remains, the leader in 
technology, and the engine that built our economy now powers 
our AI leadership.
    California leads the world with 32 of the top 50 AI 
companies based here. Although it is always fun to criticize 
the most successful State, we must be doing something right to 
have achieved all this. Now, this is a hearing on AI, and some 
of the comments made by the witnesses I agree with.
    Mr. Bray, you mentioned that upgrading existing domain-
specific laws is more pragmatic than attempting sweeping new 
regulations, and I very much agree with that. There is also 
going to need to be room for regulations or laws that are 
specific to each State. There are things that are the 
proper purview of States, and there are things that are the 
proper purview of the Federal Government.
    Certainly, I was a critic of Mr. Wiener's bill from the 
last session, in that it overreached in the national security 
effort. I also agree, Mr. Thierer, that the E.U. approach is 
incorrect. To try and micromanage the workings of the AI system 
is doomed to failure it seems to me. However, the 
recommendations that we simply preempt while we have nothing 
put together now are problematic.
    I have just got to say, Mr. Chair, and you are also a 
Member of the Science Committee along with me, we had a pretty 
effective bipartisan task force on AI in the last Congress, 
chaired by Mr. Obernolte from California, as well as Ted Lieu 
from California. They took the first step, they didn't finish 
the job, but they haven't even been reconstituted in this 
Congress.
    We do heavily rely on NIST, an agency that is widely 
respected in the Congress, and in the technological world, but 
we have got to look at what has happened to NIST. They have 
been eviscerated by the DOGE people, and I fail to see how they 
are going to be able to perform the tasks we are hoping that 
they can perform, given what has happened to them.
    I would just like to say that we ought to be working on a 
bipartisan basis again. As to the Science Committee task force, 
I would urge the Speaker, with whatever influence the Chair can 
have, to reestablish that AI task force as a super Subcommittee 
of the Science Committee, so that we can get more work done, 
and get to where we need to be to have the guardrails and the 
standards that are appropriate at a national level, while also 
recognizing there are things that are of value at the State 
level. The note from OpenAI just mentioned that 
online age verification is something they support. There are 
things that the States can do, there are things that the Federal 
Government can do, but we are not going to do anything unless 
we can get our act together, and reinstitute that task force, 
and get some more work done.
    With that, Mr. Chair, I yield back.
    Mr. Issa. I thank the gentlelady, and I note that both 
sides of the title reflect California as the home of the 
innovation that is driving it. I might also take an opportunity 
to completely agree with your comments related to the need for 
us to act; preempting without a solution, without some of the 
work currently underway both here in the House and in the 
Senate, would not be well received.
    We have the groundwork for a lot of the kind of work that 
you and I have done together, and I very much look forward to 
this hearing being the beginning of us launching bipartisan 
legislation, because we do need to act in some cases, and you 
have always been a good partner in that acting.
    With that, we continue, with deference to my great State 
the Commonwealth of Virginia, we will continue with the 
California effort.
    Go to Mr. Kiley for five minutes.
    Mr. Kiley. Thank you, Mr. Chair, and I wholeheartedly agree 
with my colleague from California, that our State continues to 
be the center of breathtaking innovation worldwide. However, 
the competency of our State government is another matter 
entirely.
    Not to impugn the competence of any of my former colleagues 
in the Sacramento legislature, but this is a body that 
struggles with things like building roads, delivering 
electricity, managing forests, building dams, and getting water 
to come out of hoses. The notion that this is the right body to 
regulate the most powerful technology in human history, whose 
workings are actually largely beyond the understanding even of 
the technology's creators, is fairly fantastical.
    Not only that, but we are also faced with technology that 
continues to accelerate in capability, in an exponential way, 
in a way that is unlike anything we have seen before. Just to 
take one very specific example, you have leading models that 
have recently gotten the gold medal on the International Math 
Olympiad, something that most experts thought was still going 
to be years away.
    I do think the risk that California is going to drive AI 
policy for the entire country is a very real one, and that a 
national framework that seeks to stop that from happening is 
needed and appropriate. More specifically, I see the Federal 
role as including the following.
    First, of course, we need to be prepared to combat concrete 
harms as they arise, and harms where the use of AI tools can 
sort of accentuate the risk.
    Second, there needs to be risk assessment type tools, and 
as much as I have been giving California a hard time, there are 
some decent ideas in this latest bill, incident reporting, 
transparency as far as safety protocols.
    Of course, there is a tremendous role for the Federal 
Government when it comes to the infrastructure needs behind the 
ever-escalating investment in data centers. Beyond that, it is 
very important that policymakers continue to be apprised as to 
the capabilities of these models. In fact, both sides of it, 
the risks, as well as the capabilities.
    There are of course channels that exist, both between the 
government and among the labs themselves, but most of us as 
policymakers, unless you are out looking for it, are not kept 
up to speed on exactly where the leading edge is. I think that 
is all very important. I think there could also be a lot more 
investment in actual safety- and alignment-related research.
    The labs do this themselves, but they are not necessarily 
incentivized to do it, and so there could be more of a Federal 
role for promoting basic cutting-edge safety-related and 
alignment-related research. Then, finally, and maybe most 
importantly: the discussions of AI that have been happening 
here lately, and there have been more of them, have been really 
oriented toward the aspects of the issue that are familiar.
    OK, the issues related to energy, issues related to water, 
some of the risks that are of a familiar kind. The discussion 
has not focused much on the broader question of how we are 
going to prepare society for the enormous changes that are 
likely to be ushered in in the coming years. When we get to this 
idea of States as laboratories of democracy, or of 
experimentation, this actually is maybe the context in which 
that idea is most relevant.
    Because when it comes to sort of regulating the 
capabilities and constraining the capabilities of the systems 
themselves, the laboratories of democracy idea doesn't really 
fit. (1) There is an enumerated Federal power when it comes 
to interstate commerce. (2) You talk about experimentation, 
this is sort of something that we have to get right, and we 
only get one shot at.
    There is a widely shared view that once AI capability 
crosses a certain threshold, whether that be recursive self-
improvement or some other threshold, there is sort of going to 
be an escape velocity, so that has implications for the sort of 
narrower geopolitical context of which country leads in the 
technology.
    Also, for the broader question: is this technology going to 
be aligned with and beneficial to humanity? I do think that States 
can play a role when it comes to preparing society for using 
this technology for good in various domains. For example, in 
education, you are seeing States already experimenting 
with ways that AI can be used to close achievement gaps, and to 
bring tools to students unlike anything we have ever had 
before.
    Transportation, States can take a lead, and some States 
have taken a lead in preparing our transportation systems for 
the increasing capacity for autonomy within various modalities. 
Finally, there are various other examples, but a final example 
I will mention is the use of AI itself in government, to 
improve government processes.
    We are seeing some of it here at the Federal level, and we 
are seeing some experimentation in States and other countries 
across the world. When it comes to being laboratories of 
democracy and the role of States here, that is probably where 
States can be most valuable, and our role in Congress should be 
to pursue some sort of Federal framework.
    I yield back.
    Mr. Issa. I thank the gentleman. We now go to the patient 
gentlelady from North Carolina for her five minutes, Ms. Ross.
    Ms. Ross. Thank you very much, Mr. Chair. I have a 
unanimous consent request. I ask unanimous consent to enter 
into the record a letter by Frank Cullen, Executive Director of 
the Council of Innovation Promotion to you and the Ranking 
Member dated September 18, 2025, which expresses the council's 
concern regarding recent proposals for Congress to impose a 
moratorium on State level regulation of AI.
    Mr. Issa. Without objection, so ordered.
    Ms. Ross. Thank you again to both the Chair and the Ranking 
Member for organizing this very important hearing, and to the 
witnesses for your testimony. I am glad that we are talking 
about how Congress and other lawmakers can responsibly 
legislate and regulate around AI. I represent the research 
triangle in North Carolina.
    I have seen the incredible things that AI can do, 
particularly in the medical area, and in biopharma. I am just 
blown away by the powerful and positive use of AI. I have also 
seen the negative effects of AI, and I am thrilled that one of 
those issues has been brought up by Representative Cline, and 
Representative Lee, and that is the ongoing necessary 
litigation that is happening with content creators and 
copyright.
    I was with the head of Anthropic this morning talking about 
how much money they are having to pay for what they did that 
was illegal, flat out illegal. I hope and look forward to 
working with the Chair and the Ranking Member to have a hearing 
on that issue again, we have had a couple of those hearings, 
but to do it in a way that we can promote good behavior by AI 
companies.
    I also love California, and I know we are talking about 
California, but I want to bring up some crucial areas where 
other States have regulated AI in necessary ways. I know that 
we have parents of children who have been hurt by AI here. The 
States are ahead of Congress in protecting our children.
    Given our inaction, many States have stepped up, passing 
legislation covering topics that run the gamut from expanding 
CSAM laws to cover AI-generated material in Alabama, to 
prohibiting AI from being used to provide mental healthcare 
services in Nevada. Then, we have been talking about democracy, 
prohibiting the use of AI during an election to create 
political messaging that contains deep fakes of candidates for 
office in New Hampshire.
    We have been talking about federalism, but sometimes the 
States have to act. I also have some concerns, I fully agree 
with the Chair, and a lot of the sentiment here that Congress 
does need to come together in a bipartisan way. Some of the AI 
companies want this preemption because they know that they can 
muck up the Congressional situation, which isn't that hard to 
do, and create the inaction so they can do whatever they want 
to do for as long as they possibly can.
    With that long introduction, Professor Richards, the
Federal Government often regulates in particular areas that 
affect interstate commerce like air travel. The States have 
areas where they traditionally take the lead, like insurance.
    When it comes to the States making laws that affect AI 
deployment, what sectors or use cases should continue to be 
within the State's purview, where the Federal Government 
shouldn't get in the way?
    Mr. Richards. Thank you for your question, Representative 
Ross. There is a long list of them, and I hope that the 
Committee will indulge me if I forget one; there are too many 
to count. I would say in healthcare, in the provision of
medicine, I work a lot with our physician scientists at 
Washington University through the Cordell Institute, and they 
are concerned about having access to AI technologies to treat 
their patients.
    Also, to be sure that the delivery of those treatments, and 
the development of those treatments is done in a way which is 
consistent with the ethical, and sustainably ethical practice 
of medicine. I mentioned in my opening remarks the problem we 
have in the courts of hallucinated citations; States should be 
able to safeguard the integrity of their judicial systems and 
their litigation processes with AI specific laws, because 
general laws will not be enough in these cases given the 
particular frauds and applications that AI produces.
I think about education, I believe it was Mr. Kiley that spoke 
about that a moment ago. AI does have the potential to help 
people in education, but it also does tend to create massive 
plagiarism problems. I am being signaled by the Chair to wrap 
up, so I will pause there.
    Ms. Ross. Thank you very much, and I yield back.
    Mr. Issa. Professor, you are knowledgeable, and we 
appreciate it; that is why I didn't stop you at the bell, I 
wanted you to finish what you were working on.
    With that, we go to the gentleman from Wisconsin for five 
minutes.
    Mr. Fitzgerald. Thank you, Mr. Chair. Mr. Thierer, you have 
written previously about California taking a European style 
approach to regulation. Chair Jim Jordan of the Full Judiciary 
Committee, and Mr. Kiley, and I were just in Europe last month 
talking with businesses, both European businesses, and then 
American businesses with headquarters in Europe now, within the 
EU, many of them in Dublin actually.
    What they told us was that this type of ex-ante 
regulation, where anticompetitive practices are regulated 
before they exist, typically undermines or overburdens 
companies before they can scale, and kills a lot of the small 
businesses before they are up and running. It is exactly what 
Europe has; it is why they have no gatekeepers.
    We are the gatekeepers. America innovates, China 
duplicates, and then the E.U. regulates, that is where we are 
at right now on a grand scale. If the U.S. were to follow the 
EU's model of overregulating AI before it understands the 
risks, what impact would that have on AI development and 
competition?
    Because I believe that the E.U. is trying to create a 
space for themselves, just like they are with the seven 
American corporations, for the most part created in 
California; there is one other one called ByteDance, you might 
have heard of it. Now, we have the E.U. telling us, with the 
DMA, the Digital Markets Act, how we can function and how we 
can advance ourselves as an American economy. It is very 
frustrating.
    Mr. Thierer. Yes, you have got it exactly right, 
Congressman, let us actually put some numbers on this. I often 
when I am lecturing to students or other audiences, I ask them, 
can you name any leading global digital technology innovators 
that are headquartered in the European Union today? I am 
usually met with silence.
    There are a couple, but actually 18 of the 25 largest 
digital technology companies in the world by market cap are 
American based companies, only two are European, most people 
can't name them. When I ask that question, most people say 
companies that are now defunct like Skype, and others.
    Innovation has died in the European Union; they have 
committed essentially continent-wide technological suicide 
with a regulatory model that is based on a sort of guilty 
until proven innocent mindset, where every single technology 
or innovation is somehow nefarious, and must be bottled up 
and preemptively regulated.
    This is why, compared to the past, when the United States 
and Europe were very evenly situated in the early 1990s, we 
went down two very different paths. Our path, the more pro-
innovation, pro-growth path that the Clinton/Gore 
Administration unlocked with a Republican Congress in a 
bipartisan way, yielded incredible benefits for our Nation and 
made us the global leader.
    The household names in digital technology in the European 
Union today are American companies. What is the European Union 
exporting on the digital technology front? Red tape. That is 
about all I have got left.
    Mr. Fitzgerald. Mr. Frazier, what would be some of the 
appropriate regulations that States could do a good job on? 
Then, how would we fold that into having some oversight at the 
Federal level? What are your thoughts on that?
    Mr. Frazier. Yes, thank you very much for the question, 
Congressman. That threshold dividing authority over AI 
development versus AI use is very important. If States want to 
regulate the use of AI, the application of AI in schools, for 
example, or in healthcare situations, those are instances in 
which States can truly run experiments, because they are 
finite, they are within their own borders, and they are 
specific to their residents.
    When we see States beginning to enact proposals that are 
going to impact how AI models are trained and developed, that 
is necessarily going to bleed into other States, raising 
profound extraterritoriality concerns. For a moratorium in 
Congress, focusing on the difference between AI use and 
application versus AI development is a very helpful place to 
begin.
    What I would also encourage Congress to consider is the 
creation of a cause of action that allows non-State residents 
more means to contest the extraterritoriality of different 
State AI regulations, so that we are not just waiting for 
California to regulate and hoping no one challenges it, 
empowering Americans to say AI is too essential to allow one 
big State to set the terms for the rest of us.
    Mr. Fitzgerald. Thank you for that answer. Chair, before I 
yield back, I just wanted to make the comment that one of the 
concerns on many different fronts is how you strike this 
balance between State development and economies without seeing 
an overreach like we have seen here in D.C. many times.
    With that, I yield back.
    Mr. Issa. I thank the gentleman; the gentleman yields back. 
We now go to yet another gentleman from California.
    Mr. Lieu. Thank you, Mr. Chair. I am a recovering computer 
science major, and when I was studying computer science, I 
thought neural networks were never going to work, so just take 
whatever I say with a grain of salt. I would like to just
note for the record what happened. Congress established in the 
House of Representatives a bipartisan AI task force, I was the 
Co-Chair.
    There were 12 Democrats, 12 Republicans, and we all agreed 
on over 80 recommendations in a bipartisan manner, a number of 
which will be turned into legislation. Instead, the Trump 
Administration basically said no, we don't want Congress doing 
anything, and we will go into the States and not have them do 
anything either; we are going to have zero regulation.
    The Trump Administration tried to put in a 10-year 
moratorium on State AI laws that was opposed by 17 Republican 
Governors, 20 Republican Attorneys General, and 130 Republican 
State lawmakers. Then, that 10-year proposed ban failed 99 to 
one in the U.S. Senate, a spectacular rejection of what the 
Administration was trying to do.
    Now, we are in a place where the reality is it is not 
whether we are going to regulate AI, it is do you want 17 
States doing it, or do you want Congress to do it? With that 
lead-in, Mr. Thierer, I know you were in support of the AI 
moratorium; your approach failed, so now we are in this new 
sort of position.
    I am curious what areas do you think Congress should 
regulate in? Because it is clear we are not going to preempt 
with nothing, right? What are the things that you think that 
would be helpful and further American innovation?
    Mr. Thierer. Sure. Well, first, Congressman, I want to 
thank you for your leadership on this with Representative 
Obernolte, with the House AI task force, and then also the 
legislation that you did mention, that you sponsored on this. 
That was a good building block for what we can do.
    We have heard many other Members here today talk about the 
sort of things that NIST could be doing, or the new CAISI, 
which again is a carryover from the AI Safety Institute. We 
could take some of the ideas that have already been percolating 
at the State level, including in California, and New York and 
others, to basically build on what can be done in Federal 
legislation.
    You can combine that with other sorts of targeted actions. 
I want to remind everyone here, people say Congress doesn't do 
anything, but has everybody already forgotten about the Take 
It Down Act? It passed overwhelmingly, right? We can take 
targeted approaches to this, and we can take broad approaches. 
The point is that we can't have the technocratic design of 
regulation being done in a patchwork like this.
    That is going to create serious problems for American 
innovators as we continue to try to race against China to build 
our capacity. We have to balance safety and innovation at the 
same time. We do need to have some preemption; in my testimony 
I spelled out in detail how to do this while reserving certain 
powers to the States.
    I want to agree with the Democratic attorney general of 
Massachusetts, who said quote,

        Existing State consumer protection, antidiscrimination, and 
        data security laws still apply to emerging technology, 
        including AI systems, as they would in any other context.

That is exactly right. States can continue to do that, but we 
need to have a Federal framework to make sure we get this done 
right.
    Mr. Lieu. Thank you. Just as an aside, there may have been 
some disparagement of California. I just want to note Apple is 
headquartered in California, as are Google, Meta, Anthropic, 
and Nvidia; it turns out that California does pretty darn well 
with the laws that we have. Professor Richards, I have a 
question for you. California is now proposing SB53; have you 
looked at that in the California legislature at all?
    Mr. Richards. Not at the level of detail that I want to 
answer questions under oath on it.
    Mr. Lieu. OK, that is fine. Now, you in your testimony 
think that--
    Mr. Issa. Gentleman, before you came in, Mr. Frazier 
actually has studied it, and is quite favorable on many areas 
of it, if you--
    Mr. Lieu. Tell me about SB53, what is your view of it?
    Mr. Frazier. Earlier in my remarks I said that SB53 is the 
least bad State bill I have seen with respect to AI 
development. It gets right a lot of the emphasis on 
information sharing that we know is essential to better AI 
policy. The sorts of disclosures that SB53 calls for from labs 
are a very positive step.
    I would like to see it done at a Federal level, and not at 
the State level. I also think that the whistleblower 
protections called for in SB53 are important to encourage more 
information sharing. I will note that, for example, Senator
Grassley has a whistleblower bill pending before Congress that 
I would prefer to be the vehicle for those sorts of 
protections.
    Mr. Lieu. Thank you, I appreciate that. Professor 
Richards, let me go back to you. Your view is there should be 
no preemption whatsoever, so let me just sort of ask you this 
question, and you can answer because my time will be up soon. 
A large language model comes out that goes through this 
enormous amount of training and post-training, and you have 
one model.
    Let us say one State says we are going to mandate testing, 
another State says we are not going to mandate testing. A 
third State says not only are we going to mandate testing, we 
are going to mandate the 27 specific areas you have to test. 
Then, another State says we are going to go even further than 
that, and do 35 specific areas, and be very specific about 
what you have to disclose, and on and on.
    How does even technically an AI company deal with that when 
they have one model? Do they just say we are just not going to 
be able to allow this to happen, for example, in Missouri, 
California, or Florida? How does it even work if you have 17 
States regulating one AI model?
    I will yield back and let him answer.
    Mr. Issa. I was giving you all that extra time so you can 
let him answer, and if there is a followup within reason I will 
let you have it.
    Mr. Lieu. Thank you.
    Mr. Issa. It is the advantage of being nearly at the end.
    Mr. Lieu. There we go, thank you.
    Mr. Richards. Under that hypothetical, Congressman, it 
would be very challenging for a company to comply, but it is 
not my position that there should be no preemption, just that 
we should not consider broad preemption of State AI laws. Under 
appropriate circumstances, a sensible Federal law would be 
naturally preemptive, and I would welcome a reasonable Federal 
AI statute.
    Just as I have welcomed and advocated for a reasonable 
Federal privacy statute; the United States is the only 
advanced economy that does not have one.
    Mr. Lieu. Great, thank you, I yield back.
    Mr. Issa. We now go right to the gentleman from California, 
what does the hat say, Eric?
    Mr. Swalwell. Jimmy Kimmel Live.
    Mr. Issa. Of course. The gentleman is recognized for five 
minutes.
    Mr. Swalwell. Thank you, Chair. I will get to AI in a 
moment, but I am not going to miss the opportunity to ask my 
colleagues, the proponents of free speech across the aisle, and 
the champions who sit with me, to condemn in the harshest terms 
what is happening right now from our administration. The second 
late night comedian has been taken off the air because the 
President did not like a joke.
    I want to first condemn in the harshest tones the murder of 
Charlie Kirk, he should be with his family right now, he should 
be with his children. He had a right to say what he wanted to 
say to who he wanted to say it without any physical violence 
being brought his way. Jimmy Kimmel had a right to say what he 
said, which didn't in any way suggest that somebody in the 
MAGA world had been responsible for the murder of Charlie 
Kirk. He was just pointing out what folks online were doing as 
Twitter detectives, talking about the assassination of Charlie 
Kirk before any investigation had been completed.
    Then, he pointed out that Donald Trump, who did not go to 
Kirk's memorial service at the Kennedy Center over the weekend, 
when he was asked how he was feeling about the assassination, 
did not address it, but rather went right to a construction 
project. For that, Jimmy Kimmel was taken off the air. That is 
not who we are, that is what it looks like in China, that is 
what it looks like in Russia, that can't be what it looks like 
in America.
    The foundation of this, the genesis of this, was the 
President's FCC Chair Brendan Carr sending a tweet and giving 
an interview to a podcaster where he said, essentially, if ABC 
doesn't want to do this the easy way and suspend Jimmy Kimmel, 
we will do it the hard way, and it would be government 
censorship.
    Maybe I was not loud enough in the past when Republicans 
spoke out against government censorship, and if that is the 
case I will go back and revisit whether I could have been 
louder. That does not mean that today Republicans can be 
silent just in an effort to own the libs. If you didn't like 
cancel
culture when you thought it was happening in prior 
administrations, you certainly can't look at what just happened 
in our country and accept that this is something we should live 
with, and we should tolerate.
    I want to make it clear, there is going to be a Democratic 
majority in just over a year, and to the FCC Chair, and anyone
involved in these dirty deals, get a lawyer, and save your 
records, because you are going to be in this room, and you are 
going to be answering questions about the deals that you 
struck, and who benefited, and what the cost was to the 
American people because that happened.
    I want to now move, Chair, and I appreciate you holding 
this hearing, to AI, and ask our witnesses first, and I will 
start with Professor Richards. Professor Richards, what is the 
risk to the country, and particularly to children, if the 
government does absolutely zero on AI as far as legislation, 
as far as what they see, privacy that is taken, biases that 
are reinforced? What do you see the risk could be?
    Mr. Richards. Thank you, Congressman. There are a number of 
risks, some of them are known, and some of them are unknown, 
which is why it is essential to preserve regulatory flexibility 
by the States as well as the Federal Government to deal with 
these questions. We have already discussed at some length, but 
perhaps we can't discuss it enough, the losses that the 
parents who are seated behind me have suffered.
    When we have, in some cases, the reckless or rash 
deployment of software agents in children's lives, there were 
discussions about telling them about Santa, but they have done 
much, much worse; that is one of the risks, exacerbating the 
mental health epidemic. There are risks to children in 
schools; children don't read books anymore because of AI 
models.
    The States should be able to address that pedagogically, 
given the particular consequences for the critical thinking 
skills that are necessary for our democracy.
    Mr. Swalwell. Thank you. Also, Mr. Thierer, coming in from 
another meeting, I just want to thank you for your remarks 
earlier about the FCC Chair and his hypocrisy about 
censorship.
    With that, Chair, I will yield back.
    Mr. Issa. Does the gentleman yield for a second?
    Mr. Swalwell. Yes.
    Mr. Issa. As often happens, there is a nuance of total 
agreement here, and I just want to speak well of your 
championing free speech; perhaps those who leave broadcast, 
like our mutual friend Bill Maher, might find an even greater 
place, an even greater amount. I do agree with you that we 
need to continue to promote free speech.
    Your kind words, both in defense of one and also on 
Charlie Kirk, are very much appreciated, and I look forward to 
continuing to work with you--
    Mr. Swalwell. You and I have worked on a lot of issues, and 
this is one we can work on as well. Thank you, Chair.
    Mr. Issa. Thank you, appreciate it. That only leaves me. My 
job here is not just to ask questions, but perhaps to try to 
close on as positive a note as I can about what we seem to 
agree on. I am going to use a question and comment 
combination; I only ask that if I am accurate, you agree, as 
briefly as possible, that I am somewhat accurate.
    I will start primarily with Mr. Frazier, but I want to make 
sure I have total agreement. If Congress authors laws and does 
it normally, then unless we expressly trample on common law, 
common law remains a tool of the States, is that correct, Mr. 
Frazier?
    Mr. Frazier. That is correct, absent very clear language, 
yes.
    Mr. Issa. OK, so that is one of our challenges, to make 
sure that any preemption does not disturb existing laws. In 
the case of, if you will, existing laws in the States, for 
example product liability laws, we never preempted those, even 
though we do have some Federal laws. The reality is an unsafe 
product, a product that injures people, has a myriad of State 
laws that already affect it.
    For example, when we went from a man striking someone, to 
a man on a horse striking someone, to a man in a car striking 
someone, we didn't necessarily have to make major changes in 
the law; they all fell under it, and none of them were 
federally preempted.
    Mr. Frazier. Correct, and there is a reason why law 
professors laugh at the idea of the law of the horse.
    Mr. Issa. The law of the horse, exactly. Professor 
Richards, you gave us a great deal of caution. Is it fair to 
say that if we clearly carve around any question of common law 
preemption, and at the same time do not stop causes of action 
which, although perhaps automated by a bot and the like, still 
in fact follow that horse example, then for the most part 
don't we meet the requirement of allowing the States to 
continue to protect their citizens as they have for 250 years?
    Mr. Richards. I believe, Chair Issa, that States should 
have the ability to continue to experiment with their own laws 
in addition to the common law.
    Mr. Issa. I fully agree with you, and I will go to Mr. 
Thierer, because this is both law and policy. You mentioned a 
number of times ingestion versus output. Ingestion cannot 
easily be done differently for 50 different States and 210 
different countries around the world.
    Isn't that also a case in which the Federal Government must 
both lead on where the standards are, particularly as to 
patent, copyright, and other intellectual property, and have a 
single voice speaking around the world to other countries?
    Mr. Thierer. Yes, that is right, Mr. Chair, and let us be 
clear. We wouldn't be here suggesting that we should have 50 
FDAs for food and drug standards, or 50 FAAs with different 
aviation standards in every State, such that planes had to 
change in every State; that would be crazy, right? We don't 
want that model for AI either. We don't want--
    Mr. Issa. A death by fentanyl, every State has a right to 
have--
    Mr. Thierer. Absolutely, you said it, and let us just be 
clear, let us just check off the generally applicable laws 
that would be exempt from either moratorium or preemption: 
civil rights and antidiscrimination law, unfair and deceptive 
practices and antifraud law, competition policy laws at the 
State level, other consumer protections--
    Mr. Issa. In fact, the Lanham Act actually helps the 
States.
    Mr. Thierer. We can go on down this list, and then we can 
get into the lawsuits. The one thing America doesn't lack is an 
active trial bar, right? There are going to be a lot of ongoing 
lawsuits, and we should throw the book at bad actors. There are 
always going to be bad actors regardless of technology, we have 
the capability to go after them.
    Mr. Issa. Dr. Bray, I don't want to leave you out of this. 
Isn't one of the greatest cautions we heard today that we in 
fact have to make sure that when harm is done to anyone in a 
given State, they have a reasonable cause of action? If it 
doesn't exist federally, it must be available in the States, 
is that correct?
    Dr. Bray. That is fully correct, Chair, thank you.
    Mr. Issa. OK, Mr. Frazier, I am going to sort of guide 
this another way. From the standpoint of Federal laws, it is 
fair to say that for all practical purposes, patent, 
trademark, and copyright are bastions of Federal law because, 
under the recognition that they all travel interstate, they 
have to have one standard set of rules of the road, correct?
    Mr. Frazier. It was very apparent to the founders that they 
did not want a patchwork approach to copyright and patent law, 
correct.
    Mr. Issa. They also said that no State could erect 
basically a drawbridge and charge a toll to pass from one 
State to the other; they specifically understood that States 
might do that.
    Mr. Frazier. It is a lesson we have learned throughout 
history; for example, an attempt to change the length of a 
truck by 100 before it enters another State was declared 
unconstitutional. We have been here before, and we don't want 
a patchwork when it comes to national goods.
    Mr. Issa. OK, well, I am not going to go far over, because 
we have the agreement that helps us within the guidelines. 
Certainly, in the case of one that was mentioned briefly, 
PADRA, which does deal with deep fakes, with digital 
likenesses and the like, and for which we do have bipartisan 
support and look to move, that was an element today.
    I would like any of you that want to comment further for 
the record to do so. I guess the last thing that we all have 
to do is recognize, for the families that came here, that we, 
from this Chair, and I think you heard it from both sides of 
the aisle, want to make sure that if we pass a law that 
further helps protect against the losses that you had, it 
considers exactly what happened in the case of your families.
    If we pass a law, in no way should it stop the causes of 
action that may exist. If anything, at the Federal level, in 
the case of, for example, a death by fentanyl, we want to hold 
those who knowingly deceive and sell pills purported to be 
some kind of drug when in fact they are a deadly poison able 
to be charged with murder, as in some cases has been done at 
the State level.
    I can assure you from this standpoint, and I think the 
Ranking Member would not nod in any way but yes, that this is a 
common goal, and that we heard that message loud and clear. I 
want to thank those who are here today for their presence.
    I want to recognize Mr. Johnson for something he wants to 
place on the record.
    Mr. Johnson. Thank you. I have a couple of unanimous 
consent requests. I would ask--
    Mr. Issa. I know I am going to like them.
    Mr. Johnson. I would ask unanimous consent to enter into 
the record a letter by Alejandra Montoya-Boyer, the Vice 
President for the Center for Civil Rights and Technology at 
the Leadership Conference on Civil and Human Rights, a letter 
to you, Chair Issa, and the Ranking Member, myself, dated 
September 18, 2025, which expresses the conference's views 
regarding the potential preemption of States' efforts to 
regulate AI--
    Also, to enter into the record a statement by Vered Horesh, 
the Chief of Strategic AI partnerships at Bria AI titled, 
``Don't Ban State AI Laws.'' As well as a letter from 17 
Republican Governors to Speaker Johnson and Majority Leader 
Thune dated June 27, 2025, opposing the AI moratorium and the 
big ugly bill.
    Mr. Issa. Without objection, so ordered.
    In closing, I too have unanimous consents. I ask unanimous 
consent that an extensive report and letter to both of us from 
the organization known as Engine, which is a coalition of 
small startups that has been around since, I think, 2011, be 
entered into the record. Without objection, so ordered.
    An additional letter from Americans for Prosperity 
detailing the benefits versus the risks of fifty separate 
States will be placed in the record without objection.
    An article from Politico dated yesterday, titled 
``California-Washington Tech Fight Heats Up,'' will be placed 
in the record without objection.
    Additionally, a Politico article entitled ``We Don't Want 
California to Set the Rules for AI across the Country, Trump 
Advisor Says'' will, in spite of that, also be placed in the 
record.
    Last, I want to thank our witnesses. You have been 
informative, you have been helpful, and I think that this has 
in fact furthered our understanding of, quite frankly, our need 
to act, and our need to act with a restraint from some of the 
warnings that were given by Professor Richards. With that--I 
have two more, and then we are done.
    I would ask unanimous consent that the President's AI 
initiative be placed in the record in full. Additionally, the 
recent speech by Vice President Vance delivered in Europe will 
be placed in the record. Without objection, those both will be 
ordered.
    Just to make it clear, additionally there will be general 
leave for similar items not specifically spoken to by Members 
on both sides of the aisle. They will have five days in which 
to submit those. As such, we stand adjourned.
    [Whereupon, at 12:06 p.m., the Subcommittee was adjourned.]

    All materials submitted for the record by Members of the 
Subcommittee on Courts, Intellectual Property, Artificial 
Intelligence, and the Internet can be found at: 
https://docs.house.gov/Committee/Calendar/ByEvent.aspx?EventID=118623.

                                 [all]