[Senate Hearing 118-130]
[From the U.S. Government Publishing Office]




                                                        S. Hrg. 118-130

                   AI AND THE FUTURE OF OUR ELECTIONS

=======================================================================

                                HEARING

                               before the

                 COMMITTEE ON RULES AND ADMINISTRATION
                          UNITED STATES SENATE

                    ONE HUNDRED EIGHTEENTH CONGRESS

                             FIRST SESSION

                               __________

                     WEDNESDAY, SEPTEMBER 27, 2023

                               __________

    Printed for the use of the Committee on Rules and Administration





                [GRAPHIC NOT AVAILABLE IN TIFF FORMAT]





                  Available on http://www.govinfo.gov


                               ______
                                 

                 U.S. GOVERNMENT PUBLISHING OFFICE

53-678                    WASHINGTON : 2023











                 COMMITTEE ON RULES AND ADMINISTRATION

                             FIRST SESSION

                  AMY KLOBUCHAR, Minnesota, Chairwoman

DIANNE FEINSTEIN, California         DEB FISCHER, Nebraska
CHARLES E. SCHUMER, New York         MITCH McCONNELL, Kentucky
MARK R. WARNER, Virginia             TED CRUZ, Texas
JEFF MERKLEY, Oregon                 SHELLEY MOORE CAPITO, West Virginia
ALEX PADILLA, California             ROGER WICKER, Mississippi
JON OSSOFF, Georgia                  CINDY HYDE-SMITH, Mississippi
MICHAEL F. BENNET, Colorado          BILL HAGERTY, Tennessee
PETER WELCH, Vermont                 KATIE BOYD BRITT, Alabama

                    Elizabeth Farrar, Staff Director
                Jackie Barber, Republican Staff Director









                         C  O  N  T  E  N  T  S

                              ----------                              
                                                                  Pages

                         Opening Statement of:

Hon. Amy Klobuchar, Chairwoman, a United States Senator from the 
  State of Minnesota.............................................     1
Hon. Deb Fischer, a United States Senator from the State of 
  Nebraska.......................................................     3
Hon. Steve Simon, Secretary of State, State of Minnesota, St. 
  Paul, Minnesota................................................     5
Hon. Trevor Potter, Former Commissioner and Chairman of the 
  Federal Election Commission, Founder and President, Campaign 
  Legal Center, Washington, DC...................................     7
Maya Wiley, President and CEO, The Leadership Conference on Civil 
  and Human Rights, Washington, DC...............................    10
Neil Chilson, Senior Research Fellow, Center for Growth and 
  Opportunity at Utah State University, Logan, Utah..............    12
Ari Cohn, Free Speech Counsel, TechFreedom, Washington, DC.......    13

                         Prepared Statement of:

Hon. Steve Simon, Secretary of State, State of Minnesota, St. 
  Paul, Minnesota................................................    35
Hon. Trevor Potter, Former Commissioner and Chairman of the 
  Federal Election Commission, Founder and President, Campaign 
  Legal Center, Washington, DC...................................    38
Maya Wiley, President and CEO, The Leadership Conference on Civil 
  and Human Rights, Washington, DC...............................    47
Neil Chilson, Senior Research Fellow, Center for Growth and 
  Opportunity at Utah State University, Logan, Utah..............    59
Ari Cohn, Free Speech Counsel, TechFreedom, Washington, DC.......    66

                            For the Record:

Center for AI and Digital Policy--Statement for the Record.......    88
Open Source Election Technology Institute, Inc.--Statement for 
  the Record.....................................................    96
Public Citizen--Statement for the Record.........................   106
As a Matter of Fact--The Harms Caused by Election Disinformation.   108
Common Cause Education Fund--Under the Microscope, Election 
  Disinformation in 2022 and What We Learned for 2024............   192
Statement of Jennifer Huddleston--Research Fellow, Cato Institute   216
Townhall Article--Senator Hagerty................................   222
TechFreedom--Statement for the Record............................   230

                  Questions Submitted for the Record:

Hon. Amy Klobuchar, Chairwoman, a United States Senator from the 
  State of Minnesota to Hon. Steve Simon, Secretary of State, 
  State of Minnesota, St. Paul, Minnesota........................   295
Hon. Deb Fischer, a United States Senator from the State of 
  Nebraska to Hon. Steve Simon, Secretary of State, State of 
  Minnesota, St. Paul, Minnesota.................................   296
Hon. Amy Klobuchar, Chairwoman, a United States Senator from the 
  State of Minnesota to Hon. Trevor Potter, Former Commissioner 
  and Chairman of the Federal Election Commission, Founder and 
  President, Campaign Legal Center, Washington, DC...............   297
Hon. Amy Klobuchar, Chairwoman, a United States Senator from the 
  State of Minnesota to Maya Wiley, President and CEO, The 
  Leadership Conference on Civil and Human Rights, Washington, DC   299
Hon. Deb Fischer, a United States Senator from the State of 
  Nebraska to Neil Chilson, Senior Research Fellow, Center for 
  Growth and Opportunity at Utah State University, Logan, Utah...   301
Hon. Amy Klobuchar, Chairwoman, a United States Senator from the 
  State of Minnesota to Ari Cohn, Free Speech Counsel, 
  TechFreedom, Washington, DC....................................   304









 
                   AI AND THE FUTURE OF OUR ELECTIONS

                              ----------                              


                     WEDNESDAY, SEPTEMBER 27, 2023

                               United States Senate
                      Committee on Rules and Administration
                                                    Washington, DC.
    The Committee met, pursuant to notice, at 3:31 p.m., in 
Room 301, Russell Senate Office Building, Hon. Amy Klobuchar, 
Chairwoman of the Committee, presiding.
    Present: Senators Klobuchar, Fischer, Schumer, Warner, 
Merkley, Padilla, Ossoff, Bennet, Welch, Hagerty, and Britt.

  OPENING STATEMENT OF HONORABLE AMY KLOBUCHAR, CHAIRWOMAN, A 
       UNITED STATES SENATOR FROM THE STATE OF MINNESOTA

    Chairwoman Klobuchar. Okay. Good afternoon, everyone. I am 
honored to call this hearing to order. I am pleased to be here 
with my colleague, Senator Fischer, wearing her pin with the 
ruby red slippers, which symbolizes there is no place like 
home.
    Senator Fischer. On top of my heels.
    Chairwoman Klobuchar. Yes, this week in Washington, it is 
kind of on our minds. Thank you as well, Senator Merkley, for 
being here. I know we have other Members attending as well. I 
want to thank Ranking Member Fischer and her staff for working 
with us on this hearing on Artificial Intelligence and the 
Future of our Elections.
    I want to introduce--I will introduce our witnesses 
shortly, but we are joined by Minnesota's Secretary of State, 
Steve Simon, who has vast experience running elections and is 
well respected in our state and nationally.
    Trevor Potter, the President of the Campaign Legal Center, 
and former FEC Commissioner and Chair. Thank you for being 
here. Maya Wiley, President and CEO of the Leadership 
Conference on Civil and Human Rights. We are also going to 
hear, I know that Ranking Member Fischer will be introducing 
our two remaining witnesses. We thank you for being here, Neil 
Chilson, Senior Research Fellow at the Center for Growth and 
Opportunity, and Ari Cohn, Free Speech Counsel at TechFreedom.
    Like any emerging technology, AI comes with significant 
risks, and our laws need to keep up. Some of the risks are 
already clear, starting with security, which includes 
protecting our critical infrastructure, guarding against cyber-
attacks, and staying ahead of foreign adversaries. We must also 
protect our innovation economy, including the people who 
produce content, and counter the alarming rise in criminals 
using AI to scam people.


    Confronting these issues is a major bipartisan focus here 
in the Senate, where two weeks ago we convened the first in a 
series of forums organized by Leader Schumer, and Senators 
Rounds and Young, and Senator Heinrich to discuss this 
technology with experts of all backgrounds, industry, union, 
nonprofit, across the spectrum in their views.
    Today, we are here to home in on a particular risk of AI. 
That is the risk that it poses for our elections and how we 
address it. Given the stakes for our democracy, we cannot 
afford to wait. The hope is we can move on some of this by year 
end with some of the legislation which already has bipartisan 
support, to be able to get it done with some larger 
legislation.
    As I noted, we are already seeing this technology being 
used to generate viral, misleading content, to spread 
disinformation, and deceive voters. There was an AI-generated 
video, for instance, posted on Twitter of one of my colleagues, 
Senator Warren, in which a fake Senator Warren said that people 
from the opposing party should not be able to vote. She never 
said that, but it looked like her.
    The video was seen by nearly 200,000 users in a week, and 
AI-generated content has already begun to appear in political 
ads. There was one AI-generated image of former President Trump 
hugging Dr. Fauci that was actually a fake.
    The problem for voters is that people are not going to be 
able to distinguish if it is the opposing candidate or their 
own candidate, if it is them talking or not. That is untenable 
in a democracy. Plus, new services like Banter AI have hit the 
market, which can create voice recordings that sound like, say, 
President Biden or other elected officials from either party.
    This means that anyone with a computer can put words in the 
mouth of a leader. That would pose a problem during an 
emergency situation like a natural disaster, and it is not hard 
to imagine it being used to confuse people. We also must 
remember that the risks posed by AI are not just about 
candidates. It is also about people being able to vote. In the 
Judiciary hearing, I actually just simply asked ChatGPT to 
write me a tweet about a polling location in Bloomington, 
Minnesota. I noted that sometimes there were lines at that 
location, what should voters do? It just quickly spit out, go 
to 1234 Elm Street. There is no such location in Bloomington, 
Minnesota.
    You have the problem of that too, more likely to occur as 
we get closer to an election. With AI, the rampant 
disinformation we have seen in recent years will quickly grow 
in quantity and quality.
    We need guardrails to protect our elections. What do we do? 
I hope that will be some of the subject, in addition to 
admiring the problem that we can discuss today. Senator Hawley 
and I worked over the last two months on a bill together that 
we are leading together--hold your beer, that is correct. On a 
bill that we are leading together to get at deepfake videos, 
like the ones I just talked about used against former President 
Trump, and against Elizabeth Warren. Those are ads that are not 
really the people. Senator Collins and Senator Coons, Senator 
Bennet, Senator Ricketts have joined us already on that bill.


    We just introduced it. It creates a framework that is 
Constitutionally all right, based on past and recent precedent, 
with exceptions for things like parody and satire, that allows 
those to be banned. Another key part of transparency when it 
comes to this technology is disclaimers for other types of ads.
    That is another bill, Congresswoman Yvette Clarke is 
leading it in the House, which would require a disclaimer on 
ads that include AI-generated images so at least voters know 
that AI is being used in the campaign ads.
    Finally, I see Commissioner Dickerson out there. Finally--
are you happy about that, Mr. Cohn? There you go. Finally, it 
is important that the Federal Election Commission be doing 
their part in taking on these threats.
    While the FEC is now accepting public comments on whether 
it can regulate the deceptive AI-generated campaign ads after 
deadlocking on the issue earlier this summer, we must remain 
focused on taking action in time for the next election. Whether 
or not you agree that the FEC currently has the power to do 
that, there is nothing wrong with spelling it out if that is 
the barrier.
    We are working with Republicans on that issue as well. I 
kind of look at it three-pronged. The most egregious that must 
be banned under the--with the Constitutional limitations, the 
disclaimers, and then giving the FEC the power that they need, 
as well as a host of state laws, one of which I am sure we will 
hear about from Steve Simon.
    With bipartisan cooperation put in place, we will get 
the guardrails that we need. We can harness the potential of 
AI, the great opportunities, while controlling the threats we 
now see emerging and safeguard our democracy from those who 
would use this technology to spread disinformation and upend 
our elections, whether it is abroad, whether it is domestic.
    I believe strongly in the power of elections. I also 
believe in innovation, and we have got to be able to draw that 
line to allow voters to vote and make good decisions, while at 
least putting the guardrails in place. With that, I turn it 
over to my friend, Senator Fischer. Thank you.

  OPENING STATEMENT OF HONORABLE DEB FISCHER, A UNITED STATES 
               SENATOR FROM THE STATE OF NEBRASKA

    Senator Fischer. Thank you, Chairwoman Klobuchar. Thank you 
to our witnesses today for being here. I do look forward to 
hearing your testimony.
    Congress often examines issues that affect Americans on a 
daily basis. Artificial intelligence has become one of those 
issues. AI is not new, but significant increases in computing 
power have revolutionized its capabilities. It has quickly 
moved from the stuff of science fiction to being a part of our 
daily lives.
    There is no question that AI is transformative and is 
poised to evolve rapidly. This makes understanding AI all the 
more important. In considering whether legislation is 
necessary, Congress should weigh the benefits and the risks of 
AI.
    We should look at how innovative uses of AI could improve 
the lives of our constituents, and also the dangers that AI 
could pose. We should consider the possible economic advantages 
and pitfalls. We should thoughtfully examine existing laws and 
regulations, and how they might apply to AI.
    Lately, AI has been a hot topic here in Washington. I know 
many of my colleagues and Committees in both chambers are 
exploring this issue. The Rules Committee's jurisdiction 
includes federal laws governing elections and campaign finance, 
and we are here today to talk about how AI impacts campaigns, 
politics, and elections.
    The issues surrounding the use of AI in campaigns and 
elections are complicated. On one hand, there are concerns 
about the use of AI to create deceptive or fraudulent campaign 
ads. On the other hand, AI can allow campaigns to more 
efficiently and effectively reach voters. AI-driven technology 
can also be used to check images, video, and audio for 
authenticity.
    As we learn more about this technology, we must also keep 
in mind the important protections our Constitution provides for 
free speech in this country. Those protections are vital to 
preserving our democracy.
    For a long time, we did not have many reasons to consider 
the sources of speech, or if it mattered whether AI was helping 
to craft it. Our First Amendment prohibits the Government from 
policing protected speech, so we must carefully scrutinize any 
policy proposals that would restrict that speech.
    As Congress examines this issue, we need to strike a 
careful balance between protecting the public, protecting 
innovation, and protecting speech. Well-intentioned regulations 
rushed into law can stifle both innovation and our 
Constitutional responsibilities.
    Again, I am grateful that we have the opportunity to 
discuss these issues today and to hear from our expert 
witnesses. Thank you.
    Chairwoman Klobuchar. Thank you very much, Senator Fischer. 
I am going to introduce our witnesses. Our first witness is 
Minnesota Secretary of State Steve Simon. Secretary Simon has 
served as Minnesota's Chief Elections Administrator since 2015.
    He previously served in the Minnesota House of 
Representatives and was an Assistant Attorney General. He 
earned his law degree from the University of Minnesota and his 
bachelor's degree from Tufts.
    Our second witness is Trevor Potter, President of the 
Campaign Legal Center, which he founded in 2002, and former 
Republican Chairman of the Federal Election Commission, 
following his appointment by President George H.W. Bush.
    He appeared before this Committee last in March of 2021 and 
did not screw up, so we invited him back again. Mr. Potter also 
served as General Counsel to my friend and former colleague, 
Senator John McCain's 2000 and 2008 Presidential campaigns, and 
has taught campaign finance at the University of Virginia and 
at Oxford. He earned his law degree from The University of 
Virginia, and bachelor's degree from Harvard.
    Our third witness is Maya Wiley, President and CEO of The 
Leadership Conference on Civil and Human Rights. Ms. Wiley is 
also a Professor of Public and Urban Policy at The New School. 
Previously, she served as Counsel to the Mayor of New York City 
and was the Founder and President of the Center for Social 
Inclusion. She earned her law degree from Columbia Law School and 
her bachelor's degree from Dartmouth. With that, I will have 
Senator Fischer introduce our remaining two witnesses.
    Senator Fischer. Thank you, Senator Klobuchar. Again, I 
thank our witnesses for all being here today. We have with us 
also Neil Chilson, who serves as a Senior Research Fellow at 
the Center for Growth and Opportunity, a nonpartisan think tank 
at Utah State University that focuses on technology and 
innovation.
    Mr. Chilson has previously served as Acting Chief 
Technologist at the Federal Trade Commission.
    We also have Ari Cohn, who serves as Free Speech Counsel at 
TechFreedom, a nonpartisan nonprofit devoted to technology, 
law, and policy, and the preservation of civil liberties. Mr. 
Cohn is a nationally recognized expert in First Amendment law 
and defamation law, and co-authored amicus briefs to state and 
federal courts across the country on vital First Amendment 
issues. Welcome to all of you.
    Chairwoman Klobuchar. Very good. If the witnesses could 
please stand.
    Chairwoman Klobuchar. Do you swear the testimony you are 
going to give before the Committee shall be the truth, the 
whole truth, and nothing but the truth, so help you God?
    Mr. Simon. I do.
    Mr. Potter. I do.
    Ms. Wiley. I do.
    Mr. Chilson. I do.
    Mr. Cohn. I do.
    Chairwoman Klobuchar. Thank you. Please be seated. We are 
going to proceed.

OPENING STATEMENT OF HONORABLE STEVE SIMON, SECRETARY OF STATE, 
            STATE OF MINNESOTA, ST. PAUL, MINNESOTA

    Mr. Simon. Thank you, Madam Chair, Ranking Member Fischer, 
and Members of the Committee. Thank you for this opportunity. I 
am Steve Simon. I have the privilege of serving as Minnesota's 
Secretary of State. I am grateful for your willingness to 
engage on this important topic, and I really am honored to be 
here.
    Artificial intelligence is not a threat to American 
democracy in and of itself, but it is an emerging and powerful 
amplifier of existing threats. All of us who touch the election 
process must be watchful and proactive, especially as the 2024 
Presidential contest approaches.
    A year ago, we were not talking so much about generative 
AI. The release of the newly accessible tools such as ChatGPT 
challenged all that. In the hands of those who want to mislead, 
AI is a new and improved tool. Instead of stilted 
communications with poor grammar, generative AI can provide 
apparent precision and clarity. The potential threat to the 
administration of elections is real.
    We are talking about an old problem, namely election 
misinformation and disinformation, that can now more easily be 
amplified. One possible danger could come from an innocent 
circumstance. AI software simply might fail to grasp the nuances 
of our state-by-state election system.
    A prominent computer scientist in Minnesota named Max 
Hailperin made this point in an article several months ago. He 
asked ChatGPT questions about Minnesota election law, much as 
Senator Klobuchar said that she did, and the program gave the 
wrong answers to several questions. Now, was that intentional 
misdirection? Probably not. Still, it is a danger to voters who 
may get bad information about critical election rules.
    In the wrong hands, AI could be used to misdirect 
intentionally and in ways that are far more advanced than ever. 
I remember seeing a paper leaflet from an election about 20 or 
more years ago, distributed in a particular neighborhood, that 
told residents that in the coming election voting would occur 
on Tuesday for those whose last names begin with the letters A 
through L, while everyone else would vote on Wednesday.
    Now, that was a paper leaflet from a couple or more decades 
ago. Now imagine a convincing-seeming email or deepfake 
conveying that kind of disinformation in 2024. The perpetrators 
could be domestic or foreign. In fact, the Department of 
Homeland Security has warned recently that our foreign 
adversaries may use AI to sharpen their attacks on our 
democracy.
    One last point on potential consequences. The Brennan 
Center recently identified a so-called liar's dividend from the 
very use of AI. Simply put, the mere existence of AI can lead 
to undeserved suspicion of messages that are actually true. A 
video, for example, that contradicts a person's preconceived 
ideas may now be simply dismissed as a deepfake. The bottom 
line is that misdirection in elections can cause disruption.
    If AI misdirects, it could become an instrument of that 
disruption. What can be done about it? Well, in our office, we 
are trying to be proactive. First, we are leading with the 
truth. That means pushing out reliable and accurate information 
while also standing up to mis- and disinformation quickly.
    Second, we have been working with local and federal 
partners to monitor and respond to inaccuracies that could 
morph into conspiracy theories on election related topics.
    Third, we have emphasized media literacy. The National 
Association of Secretaries of State has helped with its Trusted 
Sources Initiative, urging Americans to seek out sources of 
election information from Secretaries of State and local 
election administrators.
    Fourth, our cyber defenses are strong. We have invested 
time and resources in guarding against intrusions that could 
introduce misleading information to voters.
    As for possible legislation, I do believe that a federal 
approach would be helpful. The impact of AI will be felt at a 
national level. I applaud bipartisan efforts such as the 
Protect Elections from Deceptive AI Act and the REAL Political 
Advertisements Act.
    Recently, the Minnesota Legislature enacted similar 
legislation with broad bipartisan support. There is a critical 
role for the private sector, too. Companies have a 
responsibility to the public to make sure their AI products are 
secure and trustworthy. I support the efforts already underway 
to encourage adherence to basic standards. But let me end on a 
note of some cautious optimism.

    AI is definitely a challenge. It is a big challenge. But in 
some ways, we have confronted similar challenges before with 
each technological leap. We have generally been able to manage 
the potential disruptions to the way we receive and respond to 
information.
    The move to computerization, the arrival of the internet, 
the emergence of social media all threatened to destabilize 
information pathways. But in short order, the American people 
got smart about those things.
    They adapted, and Congress helped. AI may be qualitatively 
different from those other advances, but if we get better at 
identifying false information and if we continue to rely on 
trusted sources for election information, and if Congress can 
help, we can overcome many of the threats that AI poses, while 
harnessing its benefits to efficiency and productivity.
    Thank you for inviting me to testify today. I look forward 
to our continued partnership.
    [The prepared statement of Mr. Simon was submitted for the 
record.]
    Chairwoman Klobuchar. Thank you very much. Appreciate it. 
Mr. Potter.

     OPENING STATEMENT OF HONORABLE TREVOR POTTER, FORMER 
 COMMISSIONER AND CHAIRMAN OF THE FEDERAL ELECTION COMMISSION, 
  FOUNDER AND PRESIDENT, CAMPAIGN LEGAL CENTER, WASHINGTON, DC

    Mr. Potter. Good afternoon and thank you for the honor of 
appearing before you today to testify about artificial 
intelligence and elections. My testimony will focus on how 
political communications generated through AI relate to the 
conduct of campaigns and why federal regulation is urgently 
needed to address the impact of some aspects of this technology 
on our democracy.
    To summarize the overarching concern, AI tools can 
increasingly be used to design and spread fraudulent or 
deceptive political communications that infringe on voters' 
fundamental right to make informed decisions at the ballot box.
    Every election cycle, billions of dollars are spent to 
create and distribute political communications. Before voters 
cast their ballots they must parse through these many messages 
and decide what to believe. Our campaign laws are intended to 
protect and assist voters by requiring transparency about who 
is paying to influence their election choices and who is 
speaking to them.
    However, AI could make voters' task much more difficult 
because of its unprecedented ability to easily create realistic 
false content. Unchecked, the deceptive use of AI could make it 
virtually impossible to determine who is truly speaking in a 
political communication, whether the message being communicated 
is authentic, or even whether something being depicted actually 
happened.
    This could leave voters unable to meaningfully evaluate 
candidates, and candidates unable to convey their desired 
message to voters, undermining our democracy. It opens the door 
to malign--even foreign--actors to manipulate our elections 
with false information. Foreign adversaries may not favor 
specific candidates, they may just seek to create chaos and sow 
distrust in our elections, thereby harming both parties and the 
whole country.


    I believe there are three concurrent paths to proactively 
addressing these risks, three paths flagged by the Chair in her 
opening remarks.
    First, Congress could strengthen the FEC's power to protect 
elections against fraud. Under existing law, the FEC 
can stop federal candidates and their campaigns from 
fraudulently misrepresenting themselves as speaking for another 
candidate or party on a matter which is damaging to that 
candidate or party.
    I believe the FEC should explicitly clarify, through the 
rulemaking process, that the use of AI is included in this 
prohibition. Then Congress should expand this provision to 
prohibit any person, not just a candidate, from fraudulently 
misrepresenting themselves as speaking for a candidate.
    Second, Congress should pass a new law specifically 
prohibiting the use of AI to engage in electoral fraud or 
manipulation. This would help protect voters from the most 
pernicious uses of AI. While any regulation of campaign speech 
raises First Amendment concerns that must be addressed, let me 
also say this, the Government has a clear, compelling interest 
in protecting the integrity of the electoral process.
    In addition, voters have a well-recognized First Amendment 
right to meaningfully participate in elections, including being 
able to assess the political messages they see and know who the 
actual speaker is. There is no countervailing First Amendment 
right to intentionally defraud voters in elections, so a narrow 
law prohibiting the use of AI to deceptively undermine our 
elections through fake speech would rest on firm Constitutional 
footing.
    Third, and finally, Congress should also expand existing 
disclosure requirements to ensure voters know when electoral 
content has been materially altered or falsified by AI. This 
would at least ensure voters can treat such content with 
appropriate skepticism.
    These proposals are not mutually exclusive or exhaustive. 
Congress could decide to use a combination of tools, while a 
single solution is unlikely to remain relevant for long. 
Congress should carefully consider how each policy could be 
most effectively enforced, with options including overhauling 
the often gridlocked and slow FEC enforcement process, new 
criminal penalties enforceable by the Justice Department, and a 
private right of action, allowing candidates targeted by 
deceptive AI to seek rapid relief in federal court.
    Thank you for the opportunity to testify today. I look 
forward to your questions.
    [The prepared statement of Mr. Potter was submitted for the 
record.]
    Chairwoman Klobuchar. Thank you very much, Mr. Potter. The 
Rules Committee, as Senator Fischer knows, is the only 
Committee on which both Senator Schumer and Senator McConnell 
serve.
    This makes our jobs very important. We are pleased that 
Senator Schumer is here, and we are going to give him the 
opportunity to say a few words. Thank you.
    Senator Schumer. Well, thank you, Senator Klobuchar. 
Whatever Committee you Chair will always be important. Same 
with Senator Fischer. I would like to congratulate you, Mr. 
Potter. You made it as a witness without being from Minnesota.


    [Laughter.]
    Senator Schumer. Anyway, thank you. I want to thank my 
colleagues for being here. As you all know, AI, artificial 
intelligence is already reshaping life on earth in dramatic 
ways. It is transforming how we fight diseases, tackle hunger, 
manage our lives, enrich our minds, ensure peace, and very much 
more.
    But we cannot ignore AI's dangers, workforce disruptions, 
misinformation, bias, new weapons. Today, I am pleased to talk 
to you about a more immediate problem, how AI could be used to 
jaundice, even totally discredit, our elections as early as 
next year. Make no mistake, the risks AI poses to our elections 
are not just an issue for Democrats, nor just Republicans. Every 
one of us will be impacted. No voter will be spared.
    No election will be unaffected. It will spread to all 
corners of democracy, and thus it demands a response from all 
of us. That is why I firmly believe that any effort by Congress 
to address AI must be bipartisan, and I can think of few issues 
that should both--unite both parties faster than safeguarding 
our democracy.
    We do not need to look very hard to see how AI can warp our 
democratic systems this year. We have already seen instances of 
AI-generated deepfakes and misinformation reach the voters. 
Political ads have been released this year, right now, using 
AI-generated images and text-to-voice converters to depict 
certain candidates in a negative light.
    Uncensored chat bots can already be deployed at a massive 
scale to target millions of individual voters for political 
persuasion. Once damaging information is sent to 100 million 
homes, it is hard, oftentimes impossible, to put that genie 
back in the bottle.
    Everyone has experienced these rampant rumors that once 
they get out there, no matter how many times you refute them, 
still stick around. If we do not act, we could soon live in a 
world where political campaigns regularly deploy totally 
fabricated but also totally believable images and footage of 
Democratic or Republican candidates, distorting their 
statements and greatly harming their election chances.
    What then is to stop foreign adversaries from taking 
advantage of this technology to interfere with our elections? 
This is the problem we now face. If left unchecked, AI's use in 
our elections could erode our democracy from within and from 
abroad, and the damage, unfortunately, could be irreversible.
    As Americans prepare to go to the polls in 2024, we have to 
move quickly to establish safeguards to protect voters from AI 
related misinformation. It will not be easy. For Congress to 
legislate on AI is for us to engage in perhaps the most complex 
subject this body has ever faced. I am proud of the Rules 
Committee and its leadership on this issue.
    Thank you, Chairwoman Klobuchar, for your continuing work 
on important legislative efforts to protect our elections from 
the potential harms of AI. Thank you again for organizing this 
hearing. Holding this hearing on AI and our elections is 
essential for drawing attention to the need for action, and I 
commend you and Ranking Member Fischer for doing just that.
    In the meantime, I will continue working with Senators 
Rounds, Heinrich, and Young to host AI Insight Forums that 
focus on issues like AI and democracy, to supplement the work 
of the Rules Committee and our other Committees. I look forward 
to working with both Senators Klobuchar and Fischer, and all of 
the Rules Committee Members--thank you for being here, Senators 
Welch, Merkley, and Britt--to develop bipartisan legislation 
that maximizes AI's benefits and minimizes the risks.
    Finally, the responsibility for protecting our elections 
will not be Congress's alone. The Administration 
should continue leveraging the tools we have already provided 
them, and private companies must do their part to issue their 
own safeguards for how AI systems are used in the political 
arena.
    It will take all of us, the Administration, the private 
sector, Congress working together to protect our democracy, 
ensure robust transparency and safeguards, and ultimately keep 
the vision of our founders alive in the 21st century.
    Thank you again to the Members of this Committee. Thank you 
to Chairwoman Klobuchar, Ranking Member Fischer, for convening 
the hearing. I look forward to working with all of you on 
comprehensive AI legislation and learning from your ongoing 
work. Thank you.
    Chairwoman Klobuchar. Thank you very much, Senator Schumer. 
I will note it was this Committee, with your and Senator 
McConnell's support, that was able to pass the electoral reform 
bill, Electoral Count Reform Act, with near unanimous support 
and got it over the finish line on the floor.
    We hope to do the same with some of these proposals. Thank 
you for your leadership and your willingness to work across the 
aisle to take on this important issue.
    With that, Ms. Wiley, you are up next. Thanks.

    OPENING STATEMENT OF MAYA WILEY, PRESIDENT AND CEO, THE 
LEADERSHIP CONFERENCE ON CIVIL AND HUMAN RIGHTS, WASHINGTON, DC

    Ms. Wiley. Good afternoon, Chairwoman Klobuchar, Ranking 
Member Fischer, my own Senator, Majority Leader Schumer, 
Brooklyn, to be specific, and all the Members of this esteemed 
Committee. It is a great honor to be before you.
    I do just want to correct the record, because I am no 
longer on the faculty at The New School, although I have joined 
the University of the District of Columbia School of Law as the 
Joseph Rao Professor.
    I am going to be brief because so much of what has been 
said I agree with, but really to elevate three primary points 
that I think are critical to remember and that I hope we will 
discuss more deeply today and in the future.
    One is that we know disinformation and misinformation are 
not new; they predate artificial intelligence. That is exactly 
why we should deepen our concern and why we need government 
action, because as has already been said, and we at the 
Leadership Conference have witnessed this growth already even 
in the last two election cycles, artificial intelligence is 
already expanding the opportunity for, and the depth of, 
disinformation--elevating falsehoods about where people vote, 
whether they can vote, how to vote.
    That goes directly to the ability of voters to select 
candidates of their choice and exercise their franchise 
lawfully. We have seen that it disproportionately targets 
communities of color.
    I mean, even the Senate Intelligence Committee noted, when 
it was looking at Russian interference in the 2016 election, 
that the African American community was really 
disproportionately targeted by that disinformation.
    The tools of artificial intelligence--what we are already 
seeing in the generative sense, the deepfakes--are already 
being utilized by some political action committees and 
political parties.
    That tells us it is already in our election cycle and that 
we must pay attention to whether or not people have clear 
information about what is and is not accurate, what a candidate 
did or did not say, in addition to the other things that we 
have talked about.
    But I also want to talk about the conditions in which we 
have to consider this conversation about generative artificial 
intelligence and our election integrity. You know, we only have 
a democracy if we have trust in the integrity of our election 
systems.
    A big part of the narrative we have been seeing driving 
disinformation in the last two cycles has been the narrative 
that our elections, in fact, are not trustworthy. This is 
something we are continuing to see increase.
    We have also watched as social media platforms have turned 
back from policies and have gutted the staffing needed to 
ensure that the public squares they essentially maintain as 
private companies adhere to their user agreements and policies 
in ways that keep everyone online safe from hatred, safe from 
harassment, and also make clear what is and is not factual 
information.
    I say that because we cannot rely on social media companies 
to do that on their own. We have been spending much of our time 
over the past few years focused on trying to get social media 
companies both to improve their policies, as well as to ensure 
that they are policing them fairly and equally.
    With regard to communities that are particularly targeted 
for mis and disinformation, I can tell you what you have seen 
in many news reports. In many instances we have seen a gutting 
of the staffing that has produced the ability to do some of 
that oversight. Even when they had that staffing, it was 
inadequate.
    We as a civil rights community, as a coalition of over 240 
national organizations, are very, very, very much in favor, 
obviously, of the bipartisan processes that we are able to 
participate in. But I also want to say that we must start to 
recognize how people are targeted, who is targeted, and the 
increase in violence in our election cycles--it is not just 
theoretical, it is practical, it is documented, and we are 
seeing an increase.
    FBI data shows that we are at risk, but that we can take 
action both in regulating artificial intelligence and ensuring 
the public knows what is artificially produced, and also 
ensuring that we have oversight of what social media companies 
are doing, and whether they are complying with their own 
policies and ensuring that they are helping to keep us safe.
    Thank you.
    [The prepared statement of Ms. Wiley was submitted for the 
record.]
    Chairwoman Klobuchar. Very good. Thank you very much, Ms. 
Wiley. Mr. Chilson.

  OPENING STATEMENT OF NEIL CHILSON, SENIOR RESEARCH FELLOW, 
   CENTER FOR GROWTH AND OPPORTUNITY, UTAH STATE UNIVERSITY, 
                          LOGAN, UTAH

    Mr. Chilson. Good afternoon, Chairwoman Klobuchar, Ranking 
Member Fischer, esteemed Committee Members.
    Thank you for inviting me to discuss the influence of 
artificial intelligence on elections.
    Imagine a world where our most valuable resource, 
intelligence, is abundant to a degree we have never seen. A 
world where education, art, and scientific innovations are 
supercharged by tools that augment our cognitive abilities. 
Where high fidelity political speech can be created by voices 
that lack deep pockets. Where real time fact checking and 
inexpensive voter education are the norm. Where AI fortifies 
our democracy.
    That is a promise of AI's future, and it seems plausible to 
me. But if you take one message from my comments, it should be 
this: artificial intelligence and political speech is not 
emerging, it is here and it has been for years. AI technologies 
are entangled in modern content creation. This is not just 
about futuristic tech or deepfakes. It is about the 
foundational technologies that we use to craft our political 
discourse today.
    Let's follow a political ad from inception to distribution. 
Today, an ad campaign director does not just brainstorm ideas 
over coffee. She taps tools like ChatGPT to rapidly prototype 
variations on her core message.
    When her media team gathers assets, automatic computer 
vision tagging makes it a breeze to sift through vast image 
databases. Her photographers' cameras use AI. The camera 
sensors adjust to capture images based on the lens attached or 
the lighting conditions. AI powered facial and eye detection 
ensures that subjects remain in focus.
    Apple's newly announced iPhone takes this to the next 
level, with dedicated neural nets powering its computational 
photography. It is no exaggeration to say that every photo 
taken on an iPhone will be generated in part by AI.
    AI also powers post-production. Speech recognition tools 
make it easy to do text based video edits. Sophisticated 
software automatically joins multiple raw video streams into a 
polished final product. Blemishes disappear and backgrounds are 
beautified because of AI, and tools like HeyGen make it 
possible to adapt the audio and video of a final ad into an 
entirely different language seamlessly. These are just some of 
the AI tools that are involved in creating content today. Some 
are new, but many others have been in use for years.
    AI is so intricately woven into the fabric of modern 
content creation that determining whether a particular ad 
contains AI-generated content is very difficult. I suspect 
each Senator here has used AI content in their ad campaigns, 
knowingly or not.
    Here is why this matters: because AI is so pervasive in ad 
creation, requiring AI content disclosures could affect all 
campaign ads. Check-the-box disclosures will not aid 
transparency, they will only clutter everyone's political 
messages.
    And to address what unique problems? AI will facilitate 
more political speech, but there is no reason to think that it 
will shift the ratio of truth to deception. Historically 
malicious actors do not use cutting edge tech. Cheap fakes, 
selective editing, overseas content farms, and plain old 
Photoshop are inexpensive and effective enough.
    Distribution, not content generation, is the bottleneck for 
misinformation campaigns. Money and time spent creating content 
is money and time that they cannot spend spreading it.
    This Committee should continue to investigate what new 
problems AI raises. It could review AI's effects on past 
elections and should obviously closely monitor its use and 
effects on the coming election cycle. More broadly, Congress 
should establish a permanent central hub of technical expertise 
on AI to advise the many federal agencies dealing with AI 
related issues.
    Remember, AI is here now, already affecting and improving 
how we communicate, persuade, and engage. Imprecise legislative 
approaches could burden political speech today and prevent the 
promise of a better informed, more engaging political dialog 
tomorrow.
    Thank you for your attention. I am eager to address any 
questions that you have.
    [The prepared statement of Mr. Chilson was submitted for 
the record.]
    Chairwoman Klobuchar. Thank you, Mr. Chilson. Mr. Cohn.

      OPENING STATEMENT OF ARI COHN, FREE SPEECH COUNSEL, 
                  TECHFREEDOM, WASHINGTON, DC

    Mr. Cohn. Chair Klobuchar, Ranking Member Fischer, Members 
of the Committee, thank you for inviting me to testify today. 
It is truly an honor. The preservation of our democratic 
processes is paramount. That word processes, I think, 
highlights a measure of agreement between all of us here.
    False speech that misleads people on the electoral 
process--the mechanics of voting, where to vote, how to 
register to vote--those statements are particularly damaging, 
and I think that preventing those specific process harms is 
where the Government's interest is at its most compelling.
    But a fundamental prerequisite to our prized democratic 
self-governance is free and unfettered discourse, especially in 
political affairs. First Amendment protection is at its zenith 
for core political speech and has its fullest and most urgent 
application to speech uttered during a campaign for political 
office.
    Even false speech is protected by the First Amendment. 
Indeed, the determination of truth and falsity in politics is 
properly the domain of the voters, and to avoid unjustified 
intrusion into that core civic right and duty, any restriction 
on political speech must satisfy the most rigorous 
Constitutional scrutiny, which requires us to ask a few 
questions.
    First, is the restriction actually necessary to serve a 
compelling government interest? We are not standing here today 
on the precipice of calamity brought on by a seismic shift. AI 
presents an incremental change in the way we communicate, much 
of it for the better, and a corresponding incremental change in 
human behavior that predates the concept of elections itself.
    Surely deceptively edited media has played a role in 
political campaigns since well before the advent of modern AI 
technology. There is simply no evidence that AI poses a unique 
threat to our political discussion and conversation.
    Despite breathless warnings, deepfakes appear to have 
played little, if any, role in the 2020 Presidential election. 
While the technology has become marginally better and more 
available in the intervening years, there is no indication that 
deepfakes pose a serious risk of materially misleading voters 
and changing their actual voting behavior.
    In fact, one study of the effect of political deepfakes 
found that they are not uniquely credible or more emotionally 
manipulative relative to non-AI manipulated media. The few 
instances of AI use in the current election cycle appear to back 
that up.
    Even where not labeled, AI-generated media that has been 
used recently has been promptly identified and subject to 
immense scrutiny, even ridicule.
    The second question is whether the law is narrowly 
tailored. It would be difficult to draft a narrowly tailored 
regulation aimed specifically at AI. Such a law would be 
inherently under inclusive, failing to regulate deceptively 
edited media that does not utilize AI--media that not only 
poses the same purported threat, but also has a long and 
demonstrable history of use compared to the relatively 
speculative fears about AI.
    A law prohibiting AI-generated political speech would also 
sweep an enormous amount of protected and even valuable 
political discourse under its ambit. Much like media manually 
spliced to create the impression of speech that did not in fact 
occur, AI-generated media can serve to characterize a 
candidate's position or highlight differences between two 
candidates' beliefs.
    In fact, the ultimate gist of a message conveyed through 
technical falsity may even turn out to be true. To prohibit 
such expression, particularly in the political context, steps 
beyond what the First Amendment allows.
    But even more obviously, prohibiting the use of political 
AI-generated media broadly by anyone, in any place, at any 
time, no matter how intimate the audience or how low the risk 
of harm, clearly is not narrowly tailored to protect against 
any harm the Government might claim it has the right to 
prevent.
    The third question is whether there is a less restrictive 
alternative. When regulating speech on the basis of content, 
the Government must choose the least restrictive means by which 
to do so. Helpfully, the same study revealing that AI does not 
pose a unique risk also points to a less restrictive 
alternative. Digital literacy and political knowledge were 
factors that uniformly increased viewers' discernment when it 
comes to deepfakes.
    Congress could focus on bolstering those traits in the 
polity instead of enacting broad prophylactics. Another more 
fundamental alternative is also available, more speech. In 
over a decade as a First Amendment lawyer, I have rarely 
encountered a scenario where the exposition of truth could 
not serve as an effective countermeasure to falsity, and I 
do not think I find myself in such a position today.
    Nowhere are the potential and efficacy of counter speech 
more important than in the context of political campaigns. 
That is the fundamental basis of our democracy, and 
we have already seen its effectiveness in rebutting deepfakes. 
We can expect more of that.
    Campaign related speech is put under the most powerful 
microscope we have, and we should not presume that voters will 
be asleep at the wheel. Reflexive legislation, prompted by fear 
of the next technological boogeyman, will not safeguard us.
    Free and unfettered discourse has been the lifeblood of our 
democracy, and it has kept us free. If we sacrifice that 
fundamental liberty and discard that tried and true wisdom--
that the best remedy for false or bad speech is true or better 
speech--no law will save our democratic institutions; they 
will already have been lost.
    More detail on these issues can be found in my written 
testimony and thank you again for the opportunity to testify 
today. I look forward to your questions.
    [The prepared statement of Mr. Cohn was submitted for the 
record.]
    Chairwoman Klobuchar. Thank you, Mr. Cohn. I am going to 
turn it over to Senator Merkley in the interest of our schedule 
here, but I wanted to just ask one question, then I will come 
back--a twofold question.
    I want to make sure you all agree that there is a risk 
posed by the use of AI to deceive voters and undermine our 
elections. Do you all agree with that? There is at least a 
risk?
    [Nods in the affirmative.]
    Chairwoman Klobuchar. Okay, great. Then secondly, last, do 
you believe that we should work, and I know we vary on how to 
do this, but do you believe that we should work to ensure 
guardrails are in place that protect voters from this threat?
    [Nods in the affirmative.]
    Chairwoman Klobuchar. Okay, great. Well, that is a good way 
to begin. I am going to turn it over to Senator Merkley, and 
then we will go to Senator Fischer, and then I think Senator 
Warner, who just so kindly joined us--has a scheduling crunch 
as well. Senator Merkley.
    Senator Merkley. I thank you so much, Madam Chairwoman. 
Really, this is such an important issue. I am struck by a 
conversation I had with a group of my wife's friends who said, 
``how do we know what is real in political discourse? Because 
we hear one thing from one cable television, another from 
another.''
    I said, well, one thing you can do is go to trusted sources 
and listen to the candidates themselves. But now we are talking 
about deepfakes where the candidates themselves might be 
profoundly misrepresented.
    I wanted to start by turning to you, Mr. Potter, in your 
role as a former Chair of the Federal Election Commission. 
Currently, it is not uncommon in ads to distort a picture of an 
opponent. They get warped, they get blurred. They are kind of
maybe tweaked a little bit to look evil.
    Is there anything about that right now that is a violation 
of federal election law?
    Mr. Potter. No, it is not.
    Senator Merkley. Okay. Thank you. You have got your 
microphone on there. Okay. He said, no, it is not. What if, in 
an ad, an individual quotes their opponent, and the quote is 
false? Is that a violation?
    Mr. Potter. No, it is not a violation of law--well, wait a 
minute. If you had a candidate misrepresenting what their 
opponent had said, under the current FEC rules, if the 
candidate did it themselves and they were misrepresenting the 
speaker, then it possibly could be.
    Senator Merkley. An advertisement in which one candidate 
says, hey, my opponent took this position and said such and 
such, and that is not true. That is not true. That is a 
violation?
    Mr. Potter. If you are characterizing what your opponent 
said, I think that would not be a violation. It would be 
perhaps a mischaracterization.
    If you create a quote and put it in the mouth of your 
opponent, and those words are inaccurate, then the FEC would 
look at it and say, is that a misrepresentation of the other 
candidate?
    But it would have to be a deliberate creation of something 
that the opponent had not said, quoting it, as opposed to the 
candidate's opinion of what they had said.
    Senator Merkley. Would a candidate's use of a completely 
falsified digital image of the opponent saying something that 
the person had never said, would that be illegal under current 
election law?
    Mr. Potter. I think it would. That is what I have urged the 
FEC in my testimony to make clear. That if they use--if a 
candidate creates a completely false image and statement by an 
opponent through this artificial intelligence, which is what 
could be done, that would violate existing law.
    Senator Merkley. Okay, great. Secretary Simon, you talked 
about a leaflet that told people, if their name ends in, I 
think, M through Z, to vote on Wednesday.
    I picture now with modern technology, having that message 
come from a trusted source, a community leader in the voice of 
or the sound of, you know, if they were not identified as 
whomever.
    Suddenly Barack Obama is on the line telling you, you are 
supposed to vote on Wednesday. Is such a presentation today a 
violation of election law?
    Mr. Simon. Boy, that is a tough one, Senator. Thanks for 
the question. I am hung up on a couple of details of Minnesota 
law. I do not know if it came up in the federal context. I 
think Mr. Potter might have the answer to that one. But, you 
know, not--I would say arguably, yes, it would be. Maybe not 
election law, but other forms of law. I mean, it is 
perpetrating a fraud.
    Senator Merkley. Okay. I recognize there is some 
uncertainty about exactly where the line is, and that is part 
of why this hearing is so important as we think about this 
elaboration. Mr. Cohn, you said that deepfakes are not 
credible.
    There was a 2020 study in which 85 percent of the folks who 
saw the deepfakes said, oh, these are credible. The technology 
has much improved since then. I am not sure why you feel that a 
deepfake, you know, a well done one, is somehow not credible 
when studies have shown that the vast majority of people who 
see them go, wow, I cannot believe that person said that. They 
believe the fake.
    Mr. Cohn. Thank you for the question, Senator. A study in 
2021 that actually examined a deepfake of Senator Warren--and 
was designed so it could also test whether or not misogyny 
played a role--found that someone is not really more likely to 
be moved by a deepfake than by another piece of non 
AI-generated, manipulated media.
    Senator Merkley. Okay. Thank you. My time is up. I just 
want to summarize by saying my overall impression is that the 
use of deepfakes in campaigns, whether by a candidate or by a 
third party, can be powerful and can have people saying, can 
you believe what so-and-so said or what position they took, 
because our eyes see what appears to be the real person. I am 
really pleased that we are holding this hearing and wrestling 
with this challenge. I appreciate all of your testimony.
    Chairwoman Klobuchar. Very good. Thank you very much, 
Senator Merkley. Senator Fischer.
    Senator Fischer. Thank you, Madam Chair. Mr. Chilson, you 
mentioned that AI tools are already common in the creation and 
distribution of digital ads. Can you please talk about the 
practical implications of a law that would ban or severely 
restrict the use of AI, or that would require broad disclosure?
    Mr. Chilson. Thank you for the question. Laws like 
this--requiring disclosures, for example--would sweep in a lot 
of advertising content.
    Imagine you are a lawyer advising a candidate on an ad that 
they want to run. If having AI-generated content in the ad 
means that ad cannot be run or that it has to have a 
disclosure, the lawyer is going to try to figure out whether or 
not there is AI-generated content in the ad. As I pointed out 
in my testimony, AI-generated content is a very broad category 
of content.
    I know we all use the term deepfake, but the line between 
deepfake and tweaks to make somebody look slightly younger in 
their ad is pretty blurry and drawing that line in legislation 
is very difficult.
    I think that in ad campaigns, as a lawyer advising a 
candidate, one will tend to be conservative, especially if the 
penalty is a potential private defamation lawsuit, with 
damages, where the defamation is per se.
    I think that if the consequences are high that lawyers will 
be conservative, and it will chill a lot of speech.
    Senator Fischer. It could add to increased cost of 
elections, couldn't it, because of the increased cost in ads 
where you would have to meet all those requirements in an ad 
for the time you are spending there?
    Mr. Chilson. Absolutely. Increased costs. Also, less 
effective ads in conveying your content. It crowds out the 
message you want to get across. It could raise a barrier, too, 
for smaller campaigns.
    Senator Fischer. Right. You also advocated an approach to 
preventing potential election interference that judges outcomes 
instead of regulating tools. What would that look like in 
practice?
    Mr. Chilson. I am hearing a lot of concern about deceptive 
content in ads and in campaigns overall. The question is, if 
that is the concern, why are we limiting restrictions to only 
AI-generated content?
    When I say an outcome neutral test, I mean a test based on 
the content that we are worried about, not the tool that is 
used to create it. If the concern is with a certain type of 
outcome, let us focus on that outcome and not the tools used to 
create it.
    Senator Fischer. Okay. Mr. Cohn, I understand that while 
all paid political advertisements already require at least one 
disclaimer, the Supreme Court has long recognized that 
compelled disclaimers could infringe on First Amendment rights. 
In your view, would an additional AI specific disclaimer in 
political advertisements violate political speakers' First 
Amendment rights?
    Mr. Cohn. Thank you for the question, Senator. I think 
there are two things to be concerned about. First, the 
Government still has to have a Constitutionally sufficient 
interest.
    When it comes to the kinds of disclaimers and disclosures 
that we see presently, the informational interest that we are 
protecting is the identification of the speaker, who is talking 
to us, who is giving us this ad, which helps us determine 
whether we credit that ad or view it with some kind of 
skepticism.
    Now, it is one thing to further that informational 
interest, and certainly it can make a difference in how someone 
sees a message. But that ties into the second problem, which is 
that pretty much as Mr. Chilson said, everything uses AI these 
days. If the interest is in making people a little more 
circumspect about what they believe, that actually creates the 
same liar's dividend problem that Secretary Simon described.
    If everything has a disclosure, nothing has a disclosure, 
and it gives cover for bad actors to put these advertisements 
out, and the deceptive ones are going to be viewed just as 
skeptically as the non-deceptive ones because everything has to 
have a disclosure on it.
    I am not sure that the, you know, proposed disclosure would 
actually further the Government interest, unless it is much 
more narrowly drawn.
    Senator Fischer. Some people have proposed using a 
reasonable person standard to determine whether an AI-generated 
image is deceptive. You have used that word here. Can you tell 
us how this type of standard has been used to regulate speech 
and other content?
    Mr. Cohn. Well, that is a great question, because who knows 
what the reasonable person is. But, you know, generally 
speaking, I think that is a harder standard to impose when you 
are talking about something like political speech.
    It ties in closely, I think, with materiality. What is 
material to any particular voter? What is material to a group 
of voters? How does the reasonable person standard correspond 
with the digital literacy of a particular person?
    A reasonable person of a high education level may be much 
less likely to take away a fundamentally different view of what 
a piece of edited material says than the original version, 
whereas a person with a lower education level might be more 
susceptible to it.
    It really defies a reasonable person standard, particularly 
with such sensitive and important speech.
    Senator Fischer. Thank you. Thank you, Madam Chair.
    Chairwoman Klobuchar. I have returned. Senator Warner, the 
Chair of the Intel Committee, and one of the esteemed Members 
of the Rules Committee.
    Senator Warner. Thank you, Madam Chairwoman. I was actually 
just at a hearing on the PRC's use of a lot of these 
disinformation and misinformation tools. Candidly, I am not 
going to debate with the panel. I completely disagree with them 
on a number of topics, and I would love them to get some of the 
classified briefings we receive.
    I really appreciate the fact that you have taken, Madam 
Chair, a lead on AI regulations around elections. As I think 
about the exponentially greater power of AI in misinformation, 
disinformation, and the level of bot usage, what happened in 
terms of Russia's 2016 interference is child's play compared to 
the tools that exist now.
    I think it would be naive to underestimate that we are 
dealing with a threat of a different magnitude. I applaud what 
you are doing. I actually think we should look at where our 
existing AI tools right now, with very little increase in 
power, could have the most immediate effect and huge negative 
consequences--and that does not necessarily have to be 
generated by a potential adversary, a nation like China, but 
just generally.
    I would say those are areas where public trust is the key 
glue that keeps an institution stuck together. You have 
identified one in the question of public elections, and we have 
seen how public trust has been eroded, again, using somewhat 
cruder tools than we have now, in 2016.
    While, thank goodness, the FEC has finally required that a 
political ad on Facebook has to have some level of 
disclosure--as you know, that was your legislation--we still 
have not passed a single law to equalize disclosure 
requirements on social media with those for traditional TV and 
broadcast. I think that is a mistake.
    The other area I would offer for consideration by the 
panel, maybe for a later time, is the other institution that is 
as reliant on public faith as public elections, where AI tools 
could immediately have the same kind of devastating effect: 
faith in our public markets.
    You know, there has been one example so far where an AI 
tool produced a false depiction of the Pentagon burning, which 
caused a disruption in the market. That is child's play, 
frankly, compared to the level of what could take place, maybe 
not in Fortune 50 companies, but in Fortune 100 to 500 
companies.

    There is the ability not just simply to use deepfakes, but 
to generate tools that would spread massive false information 
about products, across a whole series of other ways where the 
imagination can run pretty wild.
    Again, I would welcome my colleagues to come for a 
classified briefing on the tools that are already being 
deployed by our adversaries using AI. Somehow there is this 
notion that, you know, well, if it is already against the law, 
why do we need anything else?
    Well, there are plenty of examples, and I will cite two, 
where, because the harm is potentially so great, we have 
decided on either a higher penalty level, or at certain times a 
lower threshold of proof, or in more extreme cases even a 
prohibition--if the harm is so great that we have to think 
twice as a society. I mean, murder is murder.
    But if that murder is committed by a terrorist, there is a 
higher and differential level of--society has implied a 
different level of heinousness to that. We have lots of rules--
or tools of war, but we have decided that, you know, there may 
be some tools of war, chemical weapons, atomic weapons, that go 
beyond the pale.
    I think it would be naive, given the potential that AI has, 
to assume at this point that we should not at least consider 
what happens if these tools are unleashed, and I again applaud 
the fact that we are starting to drill down on this issue 
around public elections.
    Obviously, there are First Amendment rights that have to be 
respected. It might even be easier on public markets because I 
could very easily see massive AI disruption tools being used to 
disrupt public markets that could have hugely catastrophic 
effects, and we might then overreact.
    But I do want to make sure I get in a question. I will go 
to Ms. Wiley. You know, one of the things we found in the 2016 
elections was that Russia disproportionately targeted the black 
community in this country with misinformation and 
disinformation.
    We just came from the hearing I was referencing, where 
Freedom House indicated that the PRC's current influence 
operations, some using AI tools, some not, are once again 
targeting the black communities in our country.
    You know, don't you think if the tools that were used in 
2016 are now 100x, 1,000x, 1 million-x because of the enormous 
power of large language models and generative AI, don't we need 
to take some precautions in this space?
    Ms. Wiley. Thank you, Senator. We absolutely must. What you 
are quoting is extremely important. It is also important to 
note that, when we look at the research, the RAND study that 
came out just last year showed that a minimum of 33 to 50 
percent of all people in their subject pool of over 2,500 
people took the deepfake to be accurate.
    What they found is increased exposure actually deepened the 
problem. You know, the notion that you see it over and over 
again from different sources actually can deepen the impact, 
the belief in the deepfake.
    I am saying that because part of what we have seen--and it 
is not only foreign governments, though it certainly includes 
them--is domestic hate groups utilizing social media and 
utilizing the opportunity.


    We are starting to have a lot of concerns about some of the 
ways the technology, particularly with chat bots and text 
messages, actually can vastly and exponentially increase the 
reach of efforts targeting communities that are more easily 
made afraid or given false information about where and how to 
vote.
    But also, I want to make this clear, too: we are seeing it 
a lot with people who are lawfully allowed to vote but for whom 
English is not their first language. They have also been 
targeted--particularly Spanish speakers, but also the Asian 
community. We know, and a lot of social science shows, that 
there is real targeting of communities of color.
    It does go to the way that we see, even with political 
parties and political advertising, attacks on the integrity of 
our election systems, and even on whether voters are voting 
lawfully or fraudulently, in ways that have made people more 
vulnerable to violence.
    Chairwoman Klobuchar. Very good. Thank you, Senator Warner. 
I know Senator Britt was here earlier, and we thank her for 
being here. Senator Hagerty.
    Senator Hagerty. Thank you, Senator Klobuchar, Ranking 
Member Fischer. Good to be with you both. Mr. Chilson, I would 
like to start with you. If I could, just engage in a thought 
experiment with you for a few minutes. Let's go back to early 
2020 when the COVID-19 pandemic hit.
    Many policymakers and experts were advocating for things 
like mask mandates, shutting down schools, and mandatory remote 
learning.
    In many states, many localities adopted mandates of that 
nature at the outset. I think we know the result: those 
mandates did great economic damage, particularly to small 
businesses; children's learning was set back considerably; and 
there was a loss of liberty. What I am concerned about is that 
Congress and 
the Biden Administration may be finding themselves right at the 
same place again when we are looking at artificial 
intelligence, and I do not want to see us make the same set of 
mistakes.
    I would like to start with a very basic question, if I 
might, and that is, is artificial intelligence a term with an 
agreed upon legal definition?
    Mr. Chilson. It is not. It does not even have an agreed 
upon technical definition. If you read one of the leading 
treatises that many computer scientists are trained on, the 
Russell and Norvig book, they describe four different 
categories of definitions, and underneath those, there are many 
different individual definitions.
    Then, if you run through the list of things that have been 
considered AI in the past and which nobody really calls AI now, 
you have everything from edge detection, which is in 
everybody's cameras, to letter detection, to playing chess, to 
playing checkers--things that, once they work, we kind of stop 
calling AI.
    That paraphrases computer scientist John McCarthy who 
actually coined the term AI. There is not an agreed upon legal 
definition, and it is quite difficult actually to define.
    Senator Hagerty. Yes. Thinking broadly about how we use AI 
and AI tools, do political candidates and others that engage in 
political speech use AI today for routine functions like taking 
and editing pictures like you just mentioned, or for speech 
recognition, or for processing audio and video content?

    Mr. Chilson. Absolutely. Ads and all content are created 
using many different algorithms. My cell 
phone here has many, many different AI algorithms on it that 
are used to create content.
    Senator Hagerty. I would like to use this scenario to 
illustrate my concern. Madam Chair, I would like to introduce 
this article for the record. It is one of many that cites this 
particular----
    Chairwoman Klobuchar. You have it in the record.
    [The information referred to was submitted for the record.]
    Senator Hagerty [continuing]. that I will come back to. One 
of the proposals that is under consideration now would prohibit 
entities from using, ``deceptive AI-generated audio or video 
visual media in election related speech.''
    This would include altering an image in a way that makes it 
inauthentic or inaccurate. That is a pretty vague concept. For 
example, age may be a very relevant factor in the upcoming 2024 
elections. You may recall recent media reports, again, this is 
one of them right here, describing how President Biden's 
appearance is being digitally altered in photographs to make 
him look younger.
    My next question for you, Mr. Chilson: if the Biden 
campaign were to use photo editing software that utilizes AI to 
make Joe Biden look younger in pictures on his website, could 
that use of artificial intelligence software potentially 
violate such a law against inaccurate or inauthentic images?
    Mr. Chilson. Potentially, I believe it could. The question 
should be, why does the use of those tools violate it but not 
the use of makeup and use of lighting in order to make somebody 
look younger?
    Senator Hagerty. Is there a risk then, in your view, that 
hastily regulating a very uncertain and rapidly growing 
concept like AI might actually chill political speech?
    Mr. Chilson. Absolutely.
    Senator Hagerty. That is my concern too. My point is that 
Congress and the Biden Administration should not engage in 
heavy handed regulation with uncertain impacts that I believe 
pose a great risk to limiting political speech.
    We should not immediately indulge the impulse for Government to 
just do something, as they say, before we fully understand the 
impacts of the emerging technology, especially when that 
something encroaches on political speech.
    That is not to say there are not a significant number of 
issues with this new technology. But my concern is that the 
solution needs to be thoughtful and not be hastily implemented. 
Thank you.
    Chairwoman Klobuchar. Thank you very much, Senator Hagerty. 
I will start with you, Senator Simon, and get at some of--I am 
sorry, Secretary of State Simon, and get at some of the 
questions that Senator Hagerty was raising. Just first, just 
for now, because all my colleagues are here, and I have not 
asked questions yet. Which state has consistently had the 
highest voter turnout of all the States in America?
    Mr. Simon. Senator, that would be----
    Chairwoman Klobuchar. Okay. Thank you very much.
    Mr. Simon. Yes, that would be Minnesota.

    Chairwoman Klobuchar. Especially because Senator Bennet is 
here, and he is always in a close race with me for Colorado. I 
thought I would put that on the record. Okay.
    Senator Hagerty has raised some issues, and I wanted to get 
at what we are doing here with a bill that Senator Hawley, 
certainly not a Member of the Biden Administration, that 
Senator Hawley and I have introduced with Senator Collins and 
Senator Ricketts, Senator Bennet, who has been such a leader on 
this, Senator Coons, and others will be getting on it as well.
    This bill gets at not just any cosmetic changes to how 
someone--this gets at materially deceptive ads. This gets at 
the fake ad showing Donald Trump hugging Dr. Fauci, which was a 
lie. That is what it gets at.
    It gets at the person that looks like Elizabeth Warren but 
isn't Elizabeth Warren claiming that Republicans should not be 
allowed to vote. It is of grave concern to people on both sides 
of the aisle. Can you talk about how this kind of materially 
deceptive content has no place in our elections?
    Mr. Simon. Thank you, Senator, for the question. I think 
that is the key: the materiality test. Courts, it seems, are 
well equipped to use that test in terms of drawing lines.
    I do not pretend to say--and I think Senator Hagerty is 
correct and right to point out that this is difficult, and that 
Congress and any legislative body needs to get it right. But 
though the line drawing exercise might be difficult, courts are 
equipped under something like a materiality standard to draw 
that line.
    I think that materiality in the realm of elections really 
is not so different from other realms of our national life. It 
is true, as Mr. Cohn and others have said, that the 
political speech, the bar for political speech is rightly high. 
It is, and it should be.
    But in some senses, it is no different than if someone were 
to say something false in the healthcare field. If someone said 
something just totally false, a false positive or negative 
attribute--if someone said that breath mints cure cancer or 
breath mints cause cancer or something like that, I do not 
think we have quite the same hesitation.
    Political speech, of course there is a high bar, but 
courts, given the right language such as a materiality test, 
could navigate through that.
    Chairwoman Klobuchar. Right. I am going to turn to Mr. 
Potter, but I note that even in a recent 7 to 2 Supreme Court 
decision, written by Justice Barrett and joined by Justices 
Roberts, Thomas, Alito, Kagan, Gorsuch, and Kavanaugh, the 
Court stated that the First Amendment does not shield fraud.
    The point is that we are getting at a very specific subset, 
not what Mr. Cohn was talking about with the broad use of some 
of the technology that we have on political ads. Mr. Potter, 
you would be a good person to talk to.
    You were a Republican appointee, Chair of the FEC. Can you 
expand on how prohibiting materially deceptive AI-generated 
content in our election falls squarely within the framework of 
the Constitution?

    Mr. Potter. Thank you, Madam Chair. The court has 
repeatedly said that it is Constitutional to require certain 
disclosure so that voters have information about who is 
speaking. There, I think Justice Kennedy in Citizens United was 
very clear in saying that voters need to know who is speaking, 
to put it in context.
    Who the speaker is informs the voters' decisions as to 
whether to believe them or not. In those circumstances where we 
are talking about disclosure, it seems to me particularly 
urgent to have voters know that the person who is allegedly 
speaking is fake. That the person who they think is speaking to 
them or doing an act is actually not that person.
    There, it is the negative of, yes, who is paying for the 
ad, but is the speaker actually the speaker. That would fit 
within the disclosure framework. In terms of the prevention of 
fraud, I think that goes to the fact that the court has always 
recognized that the integrity of our election system and 
citizen faith in that system is what makes this democracy work.
    To have a circumstance where we could have the deepfake and 
somebody is alleged to say something they never said or engage 
in an act they never did, is highly likely to 
create distrust. Where you have a situation where that occurs, 
the comment has been made, well, the solution is just more 
speech.
    But I think we all know, and there is research showing 
this, but we intuitively know that, you know, I saw it with my 
own eyes is a very strong perspective. To see somebody, hear 
them engaging in surreptitiously recorded racist and misogynist 
comments, and then have the candidate whose words and image 
have been portrayed say, that is not me, I did not say that, 
that is all fake.
    Are you going to believe what you saw, or are you going to 
believe a candidate who says that is not me? I think that is 
your first inherent problem.
    Chairwoman Klobuchar. Thank you for doing that, and also in 
neutral terms, because I think we know it could happen on 
either side, which is why we are working so hard to try to get 
this done.
    I would also add, on the disclosure point, Justice Scalia, 
who said in a 2010 concurrence: ``For my part, I do not look 
forward to a society which, thanks to the Supreme Court, 
campaigns anonymously, hidden from public scrutiny and 
protected from the accountability of criticism. This does not 
resemble the home of the brave.''
    There has been a clear indication of why Senator Hawley, 
Senator Collins, Senator Bennet, and a number of the rest of us 
drafted a bill that looks at this in a very narrow fashion, but 
also allows for satire and the like.
    I did find your points interesting, Mr. Cohn--I went over 
and told Senator Warner some of them; I might have to turn it 
over here. When we get beyond the ads that would be banned to 
the question of which ones the disclaimer applies to, we may, 
you know, want to look at that in a careful light so that we do 
not have it on every ad--it becomes meaningless, as you said. I 
really did appreciate those comments.
    With that, I am going to--I think it is Senator--I think 
our order is Senator Ossoff, because he has to leave. Is this 
correct? Then we go to Senator Welch, who has been dutifully 
here for quite a while.

    Then Senator Bennet and then Senator Padilla, even though 
he does represent the largest state in our Nation and is a 
former Secretary of State. Hopefully that order will work out. 
If you need to trade among each other, please do. Thank you.
    Senator Ossoff. Thank you, Madam Chair. I think you just 
got to the root of the matter very efficiently and elegantly. 
You know, Mr. Cohn, I appreciate your comments, but I think 
that the matter being discussed here is not one of subjective, 
complex judgments about subtle mischaracterization in public 
discourse.
    We are talking about, for example, Senator Fischer, one of 
your political adversaries, willfully, knowingly, and with 
extreme realism, falsely depicting you or any of us, or a 
candidate challenging us, making statements that we never made 
in a way that is indistinguishable to the consumer of the media 
from a realistic documentation of our speech.
    That is the most significant threat that I think we are 
talking about here. Mr. Potter, in your opinion, isn't there a 
compelling public interest in ensuring that that kind of 
knowing--knowingly and willfully deceptive content whose 
purpose, again, is not to express an opinion, it is not to 
caricature, but it is to deceive the public about statements 
made by candidates for office--isn't there a compelling public 
interest in regulating that?
    Mr. Potter. I think absolutely there is and that the court 
would recognize that compelling interest. I also--I mean, there 
is no question that there is a compelling interest in 
preventing fraudulent speech, as the Chair noted.
    I think what you would find here is that in a circumstance 
where we are talking about this sort of deepfake, as opposed to 
the conversations about did you use a computer to create the 
text, but where you are creating a completely false image, I 
think we would have a compelling public interest and no 
countervailing private interest.
    Because the First Amendment goes to my right, our right, to 
say what we think, even about the Government and in campaigns, 
without being penalized. But the whole point of this 
conversation is that you are falsifying the speaker. It is not 
me saying what I think--my First Amendment right.
    It is creating this fake speech where the speaker never 
actually said it. That, I think, is where the court would come 
down and say, creating that is not a First Amendment right.
    Senator Ossoff. Indeed, as you point out, there is 
substantial jurisprudence that would support the regulation of 
speech in this extreme case, with the knowing and willfully 
deceptive fabrication of statements attributed to candidates 
for office or public figures.
    Mr. Potter. Yes. I think the distinction I draw is that the 
court has protected a candidate saying I think this even if it 
is false, or my opponent supports or opposes abortion rights. 
That may be a mischaracterization. It may be deceptive.
    But if it is what I am saying, engaging in my First 
Amendment speech, mischaracterizing an opponent's position, 
that is in the political give and take. But I think that is 
completely different from what we are talking about here, where 
you have an image, or a voice being created that is saying 
something it never said.

    It is not me characterizing it. It is putting it in the 
image of this candidate.
    Senator Ossoff. Thank you. Mr. Cohn, since I invoked your 
name earlier, I will give you the chance to respond. But is it 
your position that broadcast advertisements, which knowingly 
and willfully mischaracterize a candidate for office, and I do 
not mean mischaracterize as in mischaracterizing their position 
or giving shaded opinions about what they believe, stand for, 
or may have said in the past, but depict them saying things they 
never said for the purpose of misleading the public about what 
they said, is it your position that that should be protected 
speech?
    Mr. Cohn. Well, thank you for the question, Senator. I 
think there are two things.
    First of all, you know, it is one thing to say the word 
fraud, but fraud generally requires reliance and damages. 
Stripping those requirements out here and effectively presuming 
them takes us well outside of the conceptualization of fraud 
that we know.
    I think there are circumstances in which I would probably 
agree with you that things cross the line. But take, for 
example, two--just two examples. First, in 2012, the Romney 
campaign cut some infamous lines out of President Obama's 
speech in the ``you did not build that'' campaign ad.
    They made it seem like he was denigrating the hard work of 
business owners, but instead he was actually referring to the 
infrastructure that supported those businesses. Just in this 
last election, the Biden campaign was accused of cutting out 
about 19 sentences or so from a President Trump campaign rally 
that made it sound like he was calling COVID-19 a hoax.
    My point is not that these are good or valuable and that we 
need people to say these things. It is that this is already a 
problem, and by trying to legislate it for AI specifically, 
instead of addressing the broader effect, as Mr. Chilson said, 
it causes a Constitutional concern that the government interest 
is not actually being advanced.
    Senator Ossoff. I see. If I understand correctly, and do 
not let me put words in your mouth, but you agree, broadly 
speaking, with the premise that certain forms of deceptive 
advertising in the political arena are subject to regulation on 
the basis there is a compelling public interest in preventing 
outright, willful, knowing deception, such as putting words 
in Senator Fischer's mouth that she never said, in a highly 
realistic way.
    Your argument is that the question is not the technology 
used to do so, the question is the materiality, the nature of 
the speech itself. Is that your position?
    Mr. Cohn. Yes. I think that drawing the statute narrowly 
enough is an exceedingly difficult task. I think, in principle, 
it is a, you know, pie in the sky concept. I think I agree with 
you, I just am not sure how to get from point A to point B in a 
manner that will satisfy strict scrutiny.
    Senator Ossoff. Forgive me, Senator Fischer, for invoking 
your example in that hypothetical. Thank you all for your 
testimony.
    Chairwoman Klobuchar. Okay, very good. Thank you. I will 
point out that while network TV stations have some requirements and 
they take ads down when they find them highly deceptive, that 
is not going to happen online.

    That is one of our problems here, why we feel we have to 
act and why we have to make clear that the FEC has the power to 
act as well, because otherwise we are going to have the Wild 
West right now on the platforms where a lot of people, as we 
know, are getting their news and there are no rules. Senator 
Welch.
    Senator Welch. Yes and thank you. Kind of following up on 
Senator Ossoff and Senator Klobuchar, nobody wants to be 
censoring, so I get that. What that line is, is very porous. 
But the example that Senator Ossoff just gave was not about 
political speech; it was flat out fraud, right?
    Whether it was AI-generated or it was used with older 
technologies in broadcast, would you guys agree that there 
should be a remedy for that?
    Mr. Cohn. Well, thank you, Senator. I am not entirely sure 
that we can define it exclusively as----
    Senator Welch. All right. Let me stop for a second, because 
what I am hearing you say is, it is really, really difficult to 
define, which I think it is, but your conclusion is we cannot 
do anything. I mean, the issue with AI is not just AI, it is 
just the amplification of the deception.
    You know, something that happened to Senator Fischer is so 
toxic to trust in the political system, and that is getting out 
of control as it is. You know, I will ask you, Mr. Potter, how 
do we define that line between where you are doing something 
that is totally false versus the very broad definition of 
political speech.
    Then one other thing I want to ask, there has to be some 
expectation that the platforms like, say, Google, take some 
responsibility for what is on the platform. They have been 
laying off the folks whose job it is to monitor this and make a 
judgment about what is a flat out deception.
    How do we deal with this? Then second, what is your 
observation about the platforms like Twitter, now X, Google, 
Facebook, essentially laying off all the folks whose job it was 
within those organizations to be reviewing this material that 
is so dangerous for democracy?
    Mr. Potter. Yes. Let me start with the first one, which is 
I think what you are hearing from all the panelists. It is 
important to have a carefully crafted, narrow statute to 
withstand Supreme Court scrutiny, but also to work. The 
language that gets used is going to be the key question.
    Senator Welch. All right. We all agree on that, but there 
is a real apprehension, understandably so, that this is going 
to be censoring speech. I do not know who is going to draft the 
statute.
    We will let all of you do that. But it is a real problem. 
But what about the platforms laying people off so that we do 
not even get real time information? It gets out--the false, the 
deceitful advertising is out there, and we do not even know it, 
and cannot verify that it is false.
    Mr. Potter. Right. If I could, one more line on your first 
question and then I will jump to your second.
    Senator Welch. Okay.
    Mr. Potter. On the first one, I think the comment, the 
examples cited by Mr. Cohn in terms of snippets being taken 
from a Romney speech or snippets from a Trump speech and then
mischaracterized, that to me falls on the line of that is 
defensible, permissible political speech that falls into the 
arena where we argue with each other over whether it was right 
or wrong, because in his example, those people actually said 
that and it was their words, and you are interpreting them or 
misinterpreting them, but they said it.
    That is where I draw the line. Say where you are creating 
words they did not say, the technology we have heard about, 
where my testimony today, because I have been talking enough, 
can be put into a computer and my voice pattern can be used, 
and it can create an entirely different thing, where I sat here 
and said, this is ridiculous.
    You should not be holding this hearing, and you should not 
regulate any of this. That could be created and be false.
    Senator Welch. Would there be any problem banning that? I 
mean, why would that be legitimate in any campaign? I will ask 
you, Mr. Chilson or Mr. Potter.
    Mr. Chilson. Rearranging something somebody truthfully 
said, even if it is a misrepresentation--I do not think you 
could ban that. If, you know, if I had this, your recording of 
this speech----
    Senator Welch. No, we are talking about using the--using 
whatever technology to have somebody, me, saying something I 
never said, at a place I never went. Yes, sorry. Thank you.
    Mr. Chilson. I think that it would really depend. If you 
have AI video of somebody saying something that they did not 
say in a place that they did not go, but it makes them look 
good, right? It is not defamatory in any way. It is truthful 
and it is positive on you. It would be hard to draw a line that 
would ban one of those and not the other.
    Chairwoman Klobuchar. Okay. Mister--Senator Bennet.
    Senator Bennet. Thank you, Madam Chair. Thank you very much 
for holding this hearing and thank you for the bill that you 
have allowed me to co-sponsor as well. I think it is a good 
start in this area. Thank you, the witnesses, for being here.
    You know, everybody up here, and I think everybody on 
this panel, is grappling with the newness of AI. Disinformation 
itself, of course, is not something that is new. Ms. Wiley, 
this is going to be a question for you once I get through it.
    It was common in the 20th century for observers and 
journalists or maybe journalists themselves to say that if it 
bleeds, it leads. Digital platforms, which have in many cases, 
I think tragically replaced traditional news media, have turned 
this maxim into the center of their business model, creating 
algorithms that are stoked by outrage to addict humans, 
children in particular, but others to their platforms to sell 
advertising to generate profit.
    That has then found its way into our political system, and 
not just our political system. In 2016, foreign autocrats 
exploited the platforms' algorithms to undermine Americans' 
trust in our institutions, our elections, and each other.
    I remember as a Member of the Intelligence Committee just 
being horrified by not just the Russian attack on our 
elections, but also the fact that it took Facebook forever to 
even admit that it had happened--that they had sold ads to 
Russians that were then used to anonymously attack our elections
and spread falsehoods--in our democracy.
    In 2017, you know, it was Facebook--now Meta--whose 
algorithms played what the United Nations described as a 
determining role in the Myanmar genocide. Facebook said that 
they ``lose some sleep over this.'' That was their response. 
Clearly not enough sleep, in my view.
    Thousands of Rohingya were killed, tortured, raped, and 
displaced as a result of what happened on their platform, with 
no oversight and no attempt even to try to deal with it. 
In 2018, false stories went viral on WhatsApp, warning about 
gangs of child abductors in India.
    At least two dozen innocent people were killed, including a 
65 year old woman who was stripped naked and beaten with iron 
rods, wooden sticks, bare hands and feet. Just last night, The 
Washington Post reported--by the way, these are not 
hypotheticals. Like this is actually happening in our world 
today.
    Just last night, The Washington Post reported how Indian 
political parties have built a propaganda machine on WhatsApp 
with tens of thousands of activists spreading disinformation 
and inflammatory religious content. Last month, when the Maui 
wildfires hit, Chinese operatives capitalized on the death of 
our neighbors and the destruction of their homes, claiming that 
this was the result of a secret weather weapon being tested by 
the United States.
    To bolster their claims, their posts included what appeared 
to be AI-generated photographs. Big tech has allowed this false 
content to course through our platforms for almost a decade. We 
have allowed it to course through these platforms.
    I mean, I am in meetings on this every single day--it is 
the subject almost every day at home. I did, literally, on 
Monday, with educators in the Cherry Creek School District, 
listening to them talk about the mental health effects of these 
algorithms. I know that is not the subject of today's hearing, 
but let me tell you something, our inability to deal with this 
is enormously costly.
    I am a lawyer. I believe strongly in the First Amendment. It 
is a critical part of our democracy and a critical part of 
journalism and politics, and we have to find a way to protect it. 
But it cannot be an excuse for not acting. The list of things I 
read today, to begin with, involves foreign actors undermining 
our elections.
    The idea that somehow we are going to throw up the First 
Amendment in their defense cannot be the answer. We have to 
have a debate about the First Amendment to be sure. We need to 
write legislation here that does not compromise or 
unconstitutionally impinge on the First Amendment.
    I totally agree with that. We cannot go through another 
decade like the last decade. Ms. Wiley, I am almost out of time, 
but in the last seconds that I have left, could you discuss the 
harm disinformation has done in our elections and the need for 
new regulation to grapple with traditional social media 
platforms, as well as the new AI models that we are talking about 
here today? I am sorry to leave you so little time.

    Ms. Wiley. No, thank you. Just to be very brief and very 
explicit, we have been working as a civil rights community on 
these issues for a decade as well, Senator Bennet.
    What we have seen, sadly, is that even when the social media 
platforms have policies in place prohibiting conduct--which they 
are constitutionally allowed to do: to say you cannot come on and 
spew hate speech and disinformation without us demoting it, 
labeling it, or, for the worst offenders, potentially removing 
you from the platform--we have not seen consistent enforcement of 
those policies. Most recently, we have actually seen a pulling 
back from some of those policies that enable a safe space for 
people to interact. We should also acknowledge that we have seen 
double the rate of 8-year-olds and under on YouTube since 2017. 
Double.
    It really is significant what we have seen, both in terms of 
telling people they cannot vote or sending them to the wrong 
place. But it is even worse, because, as we saw with YouTube, a 
video that went viral out of Georgia gets to Arizona, and then we 
have elected officials who call out armed vigilantes to go to 
mail drop boxes, which essentially intimidates voters from 
dropping off their ballots.
    Senator Bennet. My colleague from California has waited. I 
apologize.
    Chairwoman Klobuchar. Yes, I think we are going to let him 
go.
    Senator Bennet. One observation: that, Ms. Wiley, is such an 
important point. In 2016, the Russian Government was telling the 
American people that they could not go someplace to vote. It is 
the point you are making. They do not have a First Amendment 
right to do that, and we need to stop it.
    Chairwoman Klobuchar. Okay. Thank you for your patience and 
your great leadership on elections. Senator Padilla.
    Senator Padilla. Thank you, Madam Chair. I want to associate 
myself with a lot of the concerns that have been raised by 
various Members of the Committee today. But as the Senate as a 
whole is having a more complete, comprehensive conversation about 
AI, I think Leader Schumer and others have encouraged us to 
consider balanced thinking.
    We want to minimize the risk, the negative impact of AI, but 
at the same time be mindful of the potential upside and benefits 
of AI, not just in elections but across the board. While I share 
some of the concerns, I have a question relative to the potential 
benefits of AI. One example of the potential benefits is the 
identification of disinformation super spreaders.
    We are all concerned about disinformation. There are some 
small players and big players. I am talking about super 
spreaders who are influencers, accounts, web pages, and other 
actors that are responsible for wide dissemination of 
disinformation.
    AI can, if properly implemented, help scrape for these 
actors and identify them so that platforms and government 
entities can respond accordingly. I see some heads nodding, so 
I think the experts are familiar with what I am talking about.

    Another example is in the enforcement of AI rules and 
regulations. For instance, Google just announced that it will 
require political ads that use synthetic content to include a 
disclosure to that effect.
    Using AI to identify synthetic content will be an important 
tool for enforcing this rule and others like it. A question for 
Mr. Chilson: can you think of one or two other examples of 
benefits of AI in the election space?
    Mr. Chilson. Absolutely. As I said in my statement, AI is 
already integrated deeply into how we create content, and it has 
made it much easier to produce content. One of the things that 
comes to mind immediately is a relatively recent tool that lets 
you upload a video and then pick a language to translate it into.
    It translates not just the audio; it also translates the 
image, so that it looks like the person is speaking in that 
language. That type of tool, which lets a campaign quickly reach 
an audience that may have been harder to reach before, especially 
for campaigns that do not have deep resources--I think that is a 
powerful potential tool.
    Senator Padilla. Thank you. Question for a former 
colleague, Secretary Simon.
    I think that one short term tool that could benefit both 
voters and election workers is the development of media 
literacy and disinformation toolkits that could then be branded 
and disseminated by state and local offices.
    Do you think it would be helpful to have additional 
resources like this from the federal level to boost media 
literacy and counter disinformation?
    Mr. Simon. Thank you, Senator, and good to see you. We in 
the Secretary of State community miss you, but we are glad you 
are here as well. Thank you for the question. Yes, I think the 
answer to that is yes.
    When it comes to disinformation or misinformation, I think 
you put your finger on it, media literacy really does matter. I 
mean, I know you are aware, and I alluded to earlier in my 
testimony, the Trusted Sources Initiative of the National 
Association of Secretaries of State.
    The more we can do to channel people to trusted sources, 
however they may define that--I would like to think it is the 
Secretary of State's Office, but someone may instead think it is 
a county or a city or someone else--the more helpful I think that 
would be.
    Senator Padilla. Thank you. We cannot combat disinformation, 
whether it is AI disinformation or any other form, without fully 
understanding where disinformation comes from and how it impacts 
our elections.
    We know there are numerous large nonpartisan organizations--I 
would emphasize that, nonpartisan groups--that are dedicated to 
studying and tracking disinformation in order to help our 
democratic institutions combat it. But these organizations are 
now facing a calculated legal campaign from the far right, under 
the guise of fighting censorship, to halt their research into and 
work to highlight disinformation. Just one example: The Election 
Integrity Partnership, led jointly by the Stanford Internet 
Observatory and the University of Washington Center for an 
Informed Public, tracks and analyzes disinformation in the 
election space and studies how bad actors can manipulate the 
information environment and distort outcomes.
    In the face of this legal campaign by the far right, this 
work is now being chilled, and the researchers are being 
silenced. This is happening even as some platforms are gutting 
their own trust and safety teams that previously helped guard 
against election hoaxes and disinformation on their platforms.
    Ms. Wiley, what impact does the right wing campaign to 
chill disinformation researchers have on the health of our 
information ecosystems?
    Ms. Wiley. Well, quite sadly and disturbingly, we are seeing 
the chilling effect take hold: we are seeing research 
institutions change what they are researching and how.
    I think one thing I really appreciate about this panel is I 
think our shared belief, not just in the First Amendment, but 
in the importance of information and learning, and the 
importance of making sure we are disseminating it broadly.
    There is nothing more important right now than 
understanding disinformation, its flow, and how better to 
identify it in the way I think everyone on the panel has named. 
I think we have to acknowledge that.
    Certainly, there are enough indications from higher education 
in particular that it has had a devastating impact on our ability 
to understand what we desperately have to keep researching and 
learning about.
    Senator Padilla. Thank you. Thank you, Madam Chair.
    Chairwoman Klobuchar. Well, thank you very much. Thank you 
for your patience and that of your staffs. I want to thank 
everyone. We could not have had a more thorough hearing. I want 
to thank Senator Fischer and the Members of the Committee for 
the hearing.
    I also want to thank the witnesses for sharing their 
testimony, the range of risks with this emerging technology, 
and going in deep with us about potential solutions and what 
would work. I appreciated that every witness acknowledged that 
this is a risk to our democracy, and every witness acknowledged 
that we need to put on some guardrails.
    While we know we have to be thoughtful about it, I would 
emphasize that the election is upon us. These things are 
happening now. I would just ask people who are watching this 
hearing, who are part of this, who are with the different 
candidates or on different sides, that we simply put some 
guardrails in place.
    I personally think giving the FEC some clear authority is 
going to be helpful. Then, of course, doing some kind of ban on 
the most extreme fraud is going to be really, really important, 
and I am so glad to have a number of Senators joining me on this, 
including conservatives on the Republican side. Then there is 
figuring out disclaimer provisions that work. That has been the 
most eye opening part of today's hearing: which things we should 
have them cover and how we should do that. That is where I am on 
this. I do not want that to replace the ability--and this is what 
I am very concerned about--to actually take some of this stuff 
down that is just all out fraud, using the candidates' voices and 
pretending to be the candidate.
    Clearly, the testimony underscored the importance of 
congressional action, and I look forward to working with my 
colleagues on this Committee in a bipartisan manner, as we did in 
the hardest of circumstances in the last Congress, including, by 
the way, not just the Electoral Count Reform Act bill that we 
passed with leadership in this Committee, but also the work we 
did investigating security changes that were needed at the 
Capitol, along with Senators Peters and Portman, at the time over 
in the Homeland Security Committee--the list of recommendations 
that Senator Blunt, the Ranking Member at the time, and I, and 
those two leaders came up with, most of which have been 
implemented with bipartisan support.
    We have a history of trying to do things on a bipartisan 
basis, and this moment cries out for the Senate to take the lead, 
hopefully before the end of the year. We look forward to working 
on this as we approach the elections, and certainly as soon as 
possible.
    The hearing record will remain open for a week, only a 
week, because, like I said, we are trying to be speedy, and 
hope the Senate is not shut down at that time. We will find a 
way to get your stuff, even if it is.
    But we are hopeful, given that nearly 80 percent of the 
Senate--actually 80 percent--supported the bill last night that 
Senator McConnell and Senator Schumer put together to avoid a 
Government shutdown.
    We go from there in that spirit, and this Committee is 
adjourned. Thank you.
    [Whereupon, at 5:22 p.m., the hearing was adjourned.]





                      APPENDIX MATERIAL SUBMITTED

                              ----------                              




[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]




  

                                [all]