[Senate Hearing 118-432]
[From the U.S. Government Publishing Office]



                                                        S. Hrg. 118-432

               OPEN HEARING: FOREIGN THREATS TO ELECTIONS
                   IN 2024: ROLES AND RESPONSIBILITIES
                      OF U.S. TECHNOLOGY PROVIDERS

=======================================================================

                                HEARING

                               before the

                    SELECT COMMITTEE ON INTELLIGENCE

                                 OF THE

                          UNITED STATES SENATE

                    ONE HUNDRED EIGHTEENTH CONGRESS

                             SECOND SESSION

                               __________


                           SEPTEMBER 18, 2024

                               __________


      Printed for the use of the Select Committee on Intelligence










        Available via the World Wide Web: http://www.govinfo.gov

                               ______
                                 

                 U.S. GOVERNMENT PUBLISHING OFFICE

57-025                    WASHINGTON : 2025












                    SELECT COMMITTEE ON INTELLIGENCE

           (Established by S. Res. 400, 94th Cong. 2d Sess.)

                   MARK R. WARNER, Virginia, Chairman
                  MARCO RUBIO, Florida, Vice Chairman

RON WYDEN, Oregon                    JAMES E. RISCH, Idaho
MARTIN HEINRICH, New Mexico          SUSAN M. COLLINS, Maine
ANGUS S. KING, Jr., Maine            TOM COTTON, Arkansas
MICHAEL F. BENNET, Colorado          JOHN CORNYN, Texas
ROBERT P. CASEY, Jr., Pennsylvania   JERRY MORAN, Kansas
KIRSTEN E. GILLIBRAND, New York      JAMES LANKFORD, Oklahoma
JON OSSOFF, Georgia                  MIKE ROUNDS, South Dakota
MARK KELLY, Arizona

                CHARLES E. SCHUMER, New York, Ex Officio
                 MITCH McCONNELL, Kentucky, Ex Officio
                  JACK REED, Rhode Island, Ex Officio
                ROGER F. WICKER, Mississippi, Ex Officio

                              ----------                              

                       William Wu, Staff Director
                  Brian Walsh, Minority Staff Director
                   Kelsey Stroud Bailey, Chief Clerk









                                CONTENTS

                              ----------                              

                           SEPTEMBER 18, 2024

                           OPENING STATEMENTS

Mark R. Warner, U.S. Senator from Virginia
Marco Rubio, U.S. Senator from Florida

                               WITNESSES

Kent Walker, President, Global Affairs and Chief Legal Officer, 
  Alphabet
    Prepared Statement for the Record
Brad Smith, Vice Chair and President, Microsoft
    Prepared Statement for the Record
Nick Clegg, President, Global Affairs, Meta
    Prepared Statement for the Record

                         SUPPLEMENTAL MATERIAL

Slides submitted by Senator Warner
Slide submitted by Senator Kelly

                        QUESTIONS FOR THE RECORD

Questions for the Record and Responses Received from Fred 
  Humphries, Microsoft Corporate Vice President, US Government 
  Affairs
Question for the Record and Responses Received from Kent Walker, 
  President of Global Affairs, Google and Alphabet
Questions for the Record and Responses Received from Meta 
  Platforms, Inc.









 
               OPEN HEARING: FOREIGN THREATS TO ELECTIONS
                   IN 2024: ROLES AND RESPONSIBILITIES
                      OF U.S. TECHNOLOGY PROVIDERS

                              ----------                              


                     WEDNESDAY, SEPTEMBER 18, 2024

                                       U.S. Senate,
                          Select Committee on Intelligence,
                                                    Washington, DC.
    The Committee met, pursuant to notice, at 2:35 p.m., in 
Room SH-216 in the Hart Senate Office Building, Hon. Mark R. 
Warner, Chairman of the Committee, presiding.
    Present: Senators Warner (presiding), Rubio, Heinrich, 
King, Bennet, Gillibrand, Ossoff, Kelly, Risch, Collins, 
Cotton, Cornyn, Lankford.

 OPENING STATEMENT OF HON. MARK R. WARNER, A U.S. SENATOR FROM 
                            VIRGINIA

    Chairman Warner. I am going to call this hearing to order. 
And I want to welcome today's witnesses: Mr. Kent Walker, 
President of Global Affairs and Chief Legal Officer, Alphabet; 
Mr. Nick Clegg, President, Global Affairs, Meta; and Mr. Brad 
Smith, Vice Chair and President of Microsoft.
    Today's hearing builds on this Committee's longstanding 
practice of educating the public about the intentions and 
practices of foreign adversaries seeking to manipulate our 
country's electoral process. I do know we have all come a long 
way since 2017, and as many folks may remember, there was a lot 
of skepticism that our adversaries might have utilized 
America's social media platforms for intelligence activities.
    It was almost seven years ago that, in response to 
inquiries from this Committee, Facebook shared the first 
evidence of what would become an expansive discovery 
documenting Russia's use of tens of thousands of inauthentic 
accounts across Facebook, Instagram, YouTube, Twitter, Reddit, 
LinkedIn, and even smaller platforms like Gab, Tumblr, Medium, 
and Pinterest, all to try to divide Americans and influence 
their votes.
    Through this Committee's bipartisan investigation into the 
Russian interference in the 2016 election, we learned that 
Russia had devoted millions to wide-ranging influence campaigns 
that literally generated hundreds of millions of online 
impressions which sowed political division, racial division, 
and impersonated social, political, and faith groups of all 
stripes across all ends of the political spectrum to infiltrate 
and manipulate our debate.
    Our committee's bipartisan efforts also resulted in a set 
of recommendations for government, for the private sector, and 
for political campaigns--recommendations for which I hope 
today's hearing will serve as a status check.
    These recommendations included greater information sharing 
between the U.S. Government and the private sector about 
foreign malicious activity--not domestic--foreign malicious 
activity; greater transparency measures by platforms to inform 
users about that malicious activity; as well as more 
information on the origin and authenticity of information that 
was presented to them.
    This is something that didn't get a lot of attention: the 
facilitation of open-source research by academics and civil 
society organizations to better assist platforms and the public 
in identifying malicious use of social media, again, by foreign 
actors.
    On the government side we have also seen some significant 
progress. Let me state right now that the 2020 election I think 
was the most secure in the United States' history, and that is 
verified by election security experts, and I want to commend 
the Trump administration for helping that come about.
    Now, that progress came about through a combination of 
things: the bipartisan appropriation of funding for election 
upgrades that folks on both sides have been calling for for a 
long time; paper records and audits to verify results; a 
better-postured national security community, which we oversee, 
to track, expose, and disrupt foreign adversarial election 
threats; and, I think, a pretty successful effort to share 
threat information about foreign influence activity with the 
private sector.
    U.S. tech companies have also made progress since 2016, 
although, as I warned all of our witnesses, it has been uneven. 
I want to cite one effort, because many of you were present: 
the three companies in front of us and 24 other companies--
including companies where, unfortunately, a lot of this is 
taking place right now, such as X, formerly known as Twitter, 
which wouldn't even send a representative today--27 companies 
in all signed in Munich what was called the Tech Accord to 
Combat Deceptive Use of AI in 2024 Elections, not just in 
America, but around the world.
    While I appreciate the voluntary commitments that were made 
there, I think the follow-through has been uneven, and the 
question remains ``where is the beef?''--how much has actually 
been done.
    Recently, I sent letters to all 27 of those companies. 
Some, including some of you, came back with specificity. 
Unfortunately, others simply ignored even responding. And the 
reason we are doing this, I think on a bipartisan basis, is 
that there are four new factors that have raised my concerns 
dramatically.
    The first is, I'm certain our adversaries realize this is 
effective and cheap. Putin clearly understands that if he wants 
to undermine American support for Ukraine, weighing in and, 
frankly, putting up fake information can help him in that 
effort.
    Similarly, we have seen since the conflict between Israel 
and Hamas post-October 7, this has also been a ripe area for 
foreign misinformation and disinformation. Again, we have seen 
Iran dramatically increase their efforts to stoke social 
discord in the U.S. while, again, potentially seeking to shape 
elections.
    We have seen less from China, but there have been some 
efforts by China, not at the national level, but in down-ballot 
races where candidates may not be taking a pro-CCP position.
    Recently, literally in the last eight weeks, we have seen a 
covert influence project led by RT to bankroll unwitting U.S. 
political influencers on YouTube. We have seen a wide-ranging 
Russian campaign that frankly has not gotten much media 
attention, because I think the coverage focused on the guys in 
Tennessee and not on some of the slides that we are going to 
put up later in our questioning, where the bad guys have 
basically put out false information under the banners of major 
institutions like the Washington Post and Fox News, with the 
goal of spreading credible-sounding narratives to shape 
American voters' perceptions of candidates and campaigns.
    And we have seen--and this Committee has called this out--
efforts to infiltrate American protests over the conflict in 
Gaza by Iranian influence operatives, who, again, seek to stoke 
division and, in many of these efforts, to denigrate former 
President Trump.
    I do want to acknowledge in these recent efforts you all 
have played a positive role. I want to thank Meta, and I hope 
our committee's interest in this subject helped move you 
yesterday when you guys decided to take down RT and related 
Russian influence operations.
    I want to thank Microsoft for being forward-leaning and 
publicly sharing information on, again, some of the Russian 
activity. And I want to thank Alphabet--and I want to call you 
guys Facebook and Google, for the less informed--when you were 
one of the first ones to come forward on the sources of the 
Iranian hacks. So, compliments to all of you on that.
    On an overall basis, we have also seen the scale and 
sophistication of these kinds of attacks escalate. When we 
think about AI tools, we all know about that. I think we 
originally thought this would come in the form of deep fakes, 
video and audio alteration. But you are going to see AI-type 
tools being used to create what appears to be, and what 
virtually any American voter would think is, a real Fox News or 
Washington Post site when, in reality, it isn't.
    And unfortunately, Congress has not been able to take on 
this issue. But I would point out that a pretty broad swath of 
individual States, ranging across the political spectrum, have 
put some pretty significant guardrails in place, at least in 
terms of deep fake manipulation in their States' elections: 
Alabama, Texas, Michigan, Florida, and California. I wish we 
could take some of the best ideas of those States and bring 
them to the national level.
    Most of you have indicated that you have not seen--and I 
think the good news is that, so far, we have not seen--the kind 
of massive AI interference that we might have expected, 
particularly in the British or French elections. But as we know 
from past cycles, the real time this will gear up will be 
closer to the election.
    And third, the truth is, way back in 2016, Russia had to 
create fake personas to spin wild stories. Unfortunately, we 
now have a case where too many Americans, frankly, don't trust 
key U.S. institutions from Federal agencies to local law 
enforcement to traditional media. There is an increased 
reliance on the internet. I think most of us would try to tell 
our kids, ``Just because you saw it on the internet doesn't 
mean it's true.'' But the job of the adversary to amplify 
things that are stated by Americans goes up--goes up 
dramatically.
    Finally, we have seen a concerted litigation campaign that 
has sought to undermine the Federal Government's ability to 
share this vital threat information between you guys and the 
government and vice versa. And frankly, a lot of those 
independent academic third-party checkers have really been 
bullied in some cases or litigated into silence. For instance, 
we have seen the shuttering of the election disinformation work 
at Stanford's Internet Observatory, as well as the termination 
of a key research project at Harvard's Shorenstein Center. We
need those academic researchers in the game as that independent 
source.
    And again, this is a question that really bothers me--and I 
know we may litigate this a bit--but too many of the companies 
have dramatically cut back on their own efforts to prohibit 
false information. Again, we are talking about foreign sources. 
And we have seen the rise--and Senator Rubio and I have been in 
the lead on this--of a foreign-owned platform with huge reach, 
TikTok, that raises huge national security concerns. I'm very, 
very glad that over 80 percent of both the House and the Senate 
voted to say that a creative platform shouldn't be ultimately 
controlled by the CCP.
    Now, in the last open hearing we had on this topic, we 
heard about what the Federal Government is doing to disrupt 
these operations. We are going to continue to engage with law 
enforcement and the IC before election day. But this is really 
our effort to urge you to do more to alert the public that this 
problem has not gone away. Lord knows, we have enough 
differences between Americans; those differences don't need to 
be exacerbated by our foreign adversaries.
    Again, we are not cherry-picking these adversaries. These 
are nation-states that are designated in the law of our country 
as foreign adversaries: China, Russia, Iran, North Korea, and 
others.
    The truth is, we are 48 days away from the election. And 
the final point I want to make clear is that we need to do all 
we can before the election; but it is not as though this will 
be over at the end of election night, particularly given how 
close this election is likely to be. One of my greatest 
concerns is that the level of misinformation and disinformation 
that may come from our adversaries after the polls close could 
be as significant as anything that happens up to the closing of 
the polls on election night.
    With that, I appreciate that you are here.
    Let me just note, before I go to Senator Rubio: when we do 
open hearings--and I appreciate Senators Cornyn, Cotton, and a 
lot of our colleagues getting here early--we are going to go by 
seniority rather than at the gavel.
    With that, Senator Rubio.

  OPENING STATEMENT OF HON. MARCO RUBIO, A U.S. SENATOR FROM 
                            FLORIDA

    Vice Chairman Rubio. Thank you for holding this hearing. 
Thank you all for agreeing to be here. This is important. It is 
actually a tricky and difficult topic, because I think there 
are two kinds of things we are trying to address.
    The first is generated disinformation. And I think you are 
going to describe some of those efforts today. But that is some 
foreign adversary--Iran, China, Russia--they create or make 
something up and then they amplify it. They make it up, they 
push it out there and they hope people believe it.
    It is actually something--I remember giving a speech back 
in 2018 or 2019, warning about AI-generated videos that were 
going to be a wave of the future in terms of trying to 
influence what people think and see, and we have seen some of 
that already play out. That is pretty straightforward.
    Let me tell you where it gets complicated. Where it gets 
complicated is, there is a preexisting view that people have in 
American politics. I use this as an example, not because I 
generally agree with it, but because this is an important 
example. There are people in the United States who believe that 
perhaps we shouldn't have gotten involved with Ukraine or 
shouldn't have gotten involved in the conflict in Europe. 
Vladimir Putin also happens to believe, and hope, that view 
will prevail.
    Now there is someone out there saying something that 
whether you agree with them or not is a legitimate political 
view that is preexisting, and now some Russian bot decides to 
amplify the views of an American citizen who happens to hold 
those views. And the question becomes: Is that disinformation, 
is that misinformation, is that an influence operation because 
an existing view is being amplified?
    Now, it is easy to say, well, just take down the 
amplifiers. But the problem is it stigmatizes the person whose 
view it is. Now the accusation is that that person isn't simply 
holding a view, they are holding the same view that Vladimir 
Putin has on the same topic or something similar to what he 
has, and as a result, they themselves must be an asset. That is 
problematic and it is complicated. And as we try to manage all 
this, we recall what happened in 2020--and this is now well 
known and has been well discussed. There was a laptop--Hunter 
Biden's laptop. There was a story in the New York Post, and 51 
former intelligence officials--and I say ``former'' because I 
have people comment all the time saying ``intelligence 
officers''--went out and said: This has all the attributes of a 
Russian disinformation campaign. And as a result, the New York 
Post, which posted the original story, had the story censored 
and taken down and its account locked. There was a concerted 
effort, on the basis of that letter, to silence a media outlet 
in the United States over something that actually turned out 
not to be Russian disinformation. Even though, I imagine, maybe 
the Russians wanted to spread that story--they might have 
amplified it--it also happened to be factual.
    We know, based on the letter from the CEO of Meta, that the 
government pressured him during the COVID pandemic to censor 
certain views, and he expressed regret about agreeing to some 
of that.
    So, there are people in this country who had their accounts 
locked, or in some cases canceled, because they questioned the 
efficacy of masks--something that, we now know, Dr. Fauci 
agreed were not a solution to all the problems.
    There was the question of whether there was a lab leak--the 
lab leak theory that at one time was considered a conspiracy 
and a flat-out lie, and now our own intelligence agencies are 
saying is 50 percent likely, just as likely as a natural 
origin.
    So, this is a tricky minefield. And it is even trickier now 
because Russia is still doing it more than anybody else. But 
the others--you don't need to have a big expensive operation to 
pursue some of this. I think we should anticipate that in years 
to come--and it is happening already--the Iranians are going to 
get into this business. They already are. The Chinese are going 
to get into this business. They already are. And you see them 
using that in other countries to sow discord and division. It 
is coming. It is also North Korea and multiple others--maybe 
even friendly States that have a preference on how American 
public opinion turns.
    So, I do think it is important to understand what our 
policies are today in terms of identifying what is 
disinformation, what is actually generated by a foreign 
adversary versus the amplification of a preexisting belief in 
America which has left a lot of people in a position of being 
labeled collaborators when, in fact, they just hold views that 
on that one issue happen to align with what some other country 
hopes we believe as well.
    I am very interested to learn what our internal policies 
are in these companies, because I think it is a minefield that 
we may end up sowing in an effort to prevent discord. I don't 
want to sow discord, and that is one of the dangers that we are 
now flirting with.
    Thank you for being here. I look forward to hearing your 
testimony.
    Chairman Warner. And before I go on, I just want to 
reemphasize that I agree with Senator Rubio: Americans have the 
right to say whatever they want; that is their First Amendment 
right, whether we agree or disagree, no matter how crazy.
    I do think there is a difference when foreign intelligence 
services cherry-pick information and amplify it in ways that 
stoke division. That, again, is where the core of this debate 
is, and we are anxious to hear your testimony.
    I am not sure who drew the short straw to go first.

                OPENING STATEMENT OF KENT WALKER

  PRESIDENT, GLOBAL AFFAIRS, AND CHIEF LEGAL OFFICER, ALPHABET

    Mr. Walker. Happy to launch.
    Chair Warner, Vice Chair Rubio, Members of the Committee: 
Thank you all for the opportunity to be with you today.
    Google, Alphabet, is in the business of earning the trust 
of our users. We take seriously the importance of protecting 
free expression and access to a range of viewpoints while also 
maintaining and enforcing responsible policy frameworks. A 
critical aspect of that responsibility is doing our part to 
protect the integrity of democratic processes around the world. 
That is why we have long invested in significant new 
capabilities, updated our policies, and introduced new tools to 
address threats to election integrity.
    We recognize the importance of enabling people who use our 
services in America and abroad to speak freely about the 
political issues that are most important to them. At the same 
time, we continually take steps to prevent the misuse of our 
tools and our platforms, particularly attempts by foreign state 
actors to undermine democratic elections.
    To help advance this work, we created the Google Threat 
Intelligence Group which combines our Threat Analysis Group or 
TAG and Mandiant intelligence. Google Threat Intelligence 
identifies, monitors, and tackles threats, including 
coordinated influence operations and cyber espionage campaigns. 
We disrupt activity on a regular basis, and we publish our 
findings, and we provide expert analysis on threats originating 
from the kinds of countries we are talking about: Russia, 
China, Iran, and North Korea, as well as from the criminal 
underground.
    This year alone, we have seen a variety of malicious 
activity, including cyberattacks, efforts to compromise 
personal email accounts of high-profile political actors, and 
influence operations both on and off our platforms that are 
seeking to sow discord among Americans the way you were both 
discussing.
    We remain on the lookout for new tactics and techniques in 
both cybersecurity and disinformation campaigns. We are seeing 
some foreign state actors experimenting with generative AI to 
improve existing cyberattacks like probing for vulnerabilities 
or creating spear phishing emails. Similarly, we see generative 
AI being used to more efficiently create fake websites, 
misleading news articles, and robotic social media posts.
    We have not yet seen AI bring about a sea change in these 
attacks, but we do remain alert to new attack vectors.
    To help us all stay ahead, we continue to invest in state-
of-the-art capabilities to identify AI generated content.
    We have launched SynthID, an industry-leading tool that 
watermarks and identifies AI-generated content in text, in 
audio, in images, and in video. We were also the first tech 
company to require election advertisers to prominently disclose 
ads that include realistic-looking content that is synthetic or 
digitally altered.
    On YouTube, when creators upload content, we now require 
them to indicate whether it contains realistic-looking material 
that is synthetic or altered, which we then label 
appropriately. And we will soon begin to use content 
credentials, a new form of tamper-evident metadata coming out 
of the C2PA program that I am sure we will discuss, to identify 
the provenance of content across ads, search, and YouTube, and 
to help our users identify AI-generated material.
    We, our users, industry, law enforcement, and civil society 
all play important roles in safeguarding election integrity. We 
encourage our high-risk users, including elected officials and 
candidates, to protect their personal and official email 
accounts, and we offer them our strongest cyber protections, 
our Advanced Protection Program.
    We also work across the tech industry, including through 
the Tech Accord that you mentioned, Chair Warner, and the 
Coalition for Content Provenance and Authenticity, the C2PA 
group I mentioned, to identify emerging challenges and to 
counter abuse.
    We are committed to doing our part to keep the digital 
ecosystem safe, reliable, and open to free expression.
    We appreciate the Committee convening this important 
hearing, and we look forward to your questions.
    [The prepared statement of the witness follows:]


[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]    
    

   OPENING STATEMENT OF BRAD SMITH, VICE CHAIR AND PRESIDENT, 
                            MICROSOFT

    Mr. Smith. Thank you, Chairman Warner, and thank you, Vice 
Chairman Rubio. It is a pleasure to be here.
    I first want to say, many days, we are competitors, but I 
think when it comes to protecting the American public, all 
three of us and all of us across the tech sector are and need 
to be colleagues committed to a common cause of protecting our 
elections.
    I think we have to start by recognizing that there are real 
and serious threats, including in this election. We have all 
been reporting on them, we have been seeing them, and you have 
talked about them.
    Every day, we know that there is a Presidential race 
between Donald Trump and Kamala Harris; but this has also 
become an election of Iran versus Trump and Russia versus 
Harris. It is an election where Russia, Iran, and China are 
united with a common interest in discrediting democracy in the 
eyes of our own voters and even more so in the eyes of the 
world.
    So, what do we do?
    What is the role and responsibility of the tech sector? 
That is the fundamental question you have put to us.
    First, I think we should always adhere to two principles. 
The first is to preserve the fundamental right to free 
expression that is enshrined in our Constitution that Vice 
Chairman Rubio spoke about. That is and needs to be our North 
Star.
    The second is to defend the American electorate from 
foreign nation states who are seeking to deceive the American 
public.
    How do we do this?
    I think we have three roles. The first is really to prevent 
foreign nation state adversaries from exploiting American 
products and platforms to deceive our public. We do that with 
guardrails, especially around AI-generated content; but we also 
do it by identifying and addressing content on our platform--
especially AI-generated content created by foreign States.
    I think our second role is to protect candidates, the 
people who are putting themselves out there to run for office, 
their campaign staffs, the political parties, and the county 
and State election officials on whom we all rely. And we do 
that in part by providing them with technology and knowhow. 
Google, Microsoft, we all do that, and we do it by getting out 
there and working with them.
    At Microsoft, we have now worked across 23 countries this 
year. We have had more than 150 training sessions reaching more 
than 4,700 people. And we do it by responding immediately, in 
real time, when incidents arise, working with campaigns to help 
protect them.
    And the third role we play, quite possibly the most 
important, is to build on your leadership in having this 
hearing to prepare the American public for the risks ahead.
    We do that by informing them, encouraging them to check 
what they see and to recheck it before they vote. And we do it, 
I think, by recognizing that there is a potential moment of 
peril ahead.
    Today, we are 48 days away from this election, as you said, 
Chairman Warner. The most perilous moment will come I think 48 
hours before the election. That is the lesson to be learned 
from, say, the Slovakian election last fall and other races we 
have seen.
    I think above all else, even in a country that has so many 
divisions, I do hope we can all remember one thing: If Google 
and Microsoft and Meta can get together, if Republicans and 
Democrats and Independents can work together, then I think we 
have an opportunity as a country to stand together to ensure 
that we, the people of the United States, will choose the 
people who lead us and we will protect ourselves from foreign 
interference and deception.
    Thanks very much.
    [The prepared statement of the witness follows:]

[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]


 OPENING STATEMENT OF NICK CLEGG, PRESIDENT OF GLOBAL AFFAIRS, 
                              META

    Mr. Clegg. Chairman Warner, Vice Chairman Rubio, 
distinguished Members of the Committee:
    Thank you for the opportunity to appear before you today. 
At Meta we are committed to free expression. Each day, more 
than 3 billion people around the world use our apps to make 
their voices heard. By the end of this year, more than two 
billion people will vote in elections around the world, and we 
are proud that our apps help people participate in the civic 
process.
    No tech company does more or invests more to protect 
elections online than Meta, not just during peak election 
seasons, but at all times. We have around 40,000 people overall 
working on safety and security, and we have invested more than 
$20 billion in safety and security since 2016.
    Meta has developed a comprehensive approach to protect the 
integrity of elections based on several key principles. First, 
we have strong policies designed to prevent voter interference 
and intimidation. Second, we connect people to reliable voting 
information. Third, we work tirelessly to combat foreign 
interference and the spread of misinformation. And finally, we 
lead the industry in transparency for political advertisements.
    Our approach reflects the knowledge gained from prior 
elections and we continue to adapt to stay ahead of emerging 
challenges. One of the most pressing challenges for the 
industry is people seeking to interfere with elections to 
undermine the democratic process. We constantly work to find 
and stop these campaigns across our platforms. This is an 
adversarial space, and we are often responding to urgent 
situations with imperfect information. We may not always get it 
right, so we need to be cautious, and in each case, we need to 
conduct our own independent investigation to identify what is 
and is not interference.
    Where we identify coordinated inauthentic behavior, we 
remove the networks at issue. In fact, we have removed over 200 
such networks since 2017, including networks from Russia, Iran, 
and China. We remain committed to stopping these threats and we 
are constantly improving and evolving our defenses to stay 
ahead of our adversaries.
    I am pleased to appear beside other industry leaders today, 
and it underscores an important point. People trying to 
interfere in elections rarely target a single platform. Cross-
industry collaboration and transparency in reporting are 
essential to tackle these networks across the internet. That is 
why we publicize our takedowns for all to see and share the 
relevant information we learn with researchers, academics, and 
others including, of course, Congress.
    This year elections are also taking place as more people 
are using AI tools. To date, we have not seen generative AI-
enabled tactics used to subvert elections in ways that have 
impeded our ability to disrupt them. However, we remain vigilant 
and will continue to adapt as the technology does.
    We know that AI progress and responsibility can and must go 
hand in hand. That is why we are working internally and 
externally to address the risks of AI. We have implemented 
industry-leading efforts to label AI-generated content, giving 
people greater context about what they are seeing. And of course 
we are working across the industry to develop common AI standards.
    We are proud to have signed on to the White House's 
voluntary AI commitments and the Tech Accord to combat 
deceptive use of AI in 2024 elections, both of which will help 
guide the industry towards safer, more secure, and more 
transparent development of AI.
    Every election brings its own challenges and complexities. 
We are confident our comprehensive approach can help protect not 
only this year's elections in the United States, but elections 
everywhere.
    Thank you. I look forward to your questions.
    [The prepared statement of the witness follows:]

[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Warner. Thank you, gentlemen.
    I am going to put up the first two presentations.
    Let me add to what Mr. Smith said. I concur about the 48 
hours before the election, but I would argue the 48 hours after 
the polls close, particularly if we have as close an election 
as we anticipate, could be equally if not more significant in 
terms of spreading false information, disinformation, and 
literally undermining the tenets of our democracy.
    Now there was a lot of press attention recently on the 
Department of Justice indictments of the Canadians in Tennessee 
who were using--paying off influencers, knowingly or 
unknowingly.
    What didn't get much attention is the first slide here, 
where under the banner of Fox News and the Washington Post--
These look exactly like the Washington Post and Fox News. As a 
matter of fact, it may not be what we thought of as AI, but 
these are kind of AI techniques to make this so real.
    As a matter of fact, they have even got real authors' 
bylines, and the balance of the ads and other things are 
totally reflective. This came out of this DOJ indictment. I 
guess the question in these are--Nick, you mentioned 
``comprehensive.'' They appeared on your site. They also 
appeared on Twitter's site--X's site.
    I think it is a real shame that in the previous 
investigations Twitter was a very collaborative entity. Under X 
they are absent and some of the most egregious activities are 
taking place.
    But I am not sure any American, even a technology savvy 
American, is going to figure out that these are fake. So where 
does that responsibility lie?
    Shouldn't your efforts have been able to spot that, and how 
do we make sure--because only after the fact in 2016, we didn't 
have real-time numbers of how many Americans were viewing the 
fake sites, and they literally ended up with hundreds of 
millions.
    I still remember both the Tennessee Republican Party and 
the Black Lives Matter site. The real sites had less viewership 
than did the Russian based sites.
    How does this get through? How do we know how extensive 
this is?
    And we have many, many more of these.
    What are we going do about these in the next 48 hours to 
make sure Americans are informed to be aware?
    Mr. Clegg.
    Mr. Clegg. Well, firstly, Senator, you are absolutely right 
that it is a hallmark of Russian foreign interference in the 
democratic process to generate AI stories resembling real 
media. As it happens, since those appeared on our site, we have 
just over the last 48 hours banned the organization that 
spawned a lot of this activity, the disinformation.
    Rosia Sovodnia [sic], not least after the editor-in-chief 
gave an interview where she said publicly--and this is in 
effect a media organization owned and run out of the Kremlin--
that she, and I quote, at least this is the translation, is 
conducting--her and her team are conducting what she calls 
``guerrilla projects'' in half of American democracy, and the 
panel behind you is a manifestation of that. That is one of the 
reasons why--
    Chairman Warner. All right. I want to make sure I get this 
in. I need to know how many Americans viewed this and other 
Russian-generated Facebook sites that appear to be on your 
sites. I hope you get that information as soon as possible.
    I also want to indicate, there is still an effort, and this 
is more over here in terms of targeting by the Russians, 
towards specific groups. In this case it was the Doppelganger 
gang, and it was both Jewish Americans and then it was targeted 
towards the Latino community. They are very sophisticated. I 
guess it is not--wouldn't be jaw dropping that they have 
focused most of their efforts on the same six States that 
everybody else is focused on. This again goes more to both Mr. 
Clegg and Mr. Walker. You know, they are still targeting paid 
advertising.
    We remember in 2016, when we didn't have controls when 
Russians were paying with rubles for paid advertising on sites. 
I would have thought 8 years later, we would be better in at 
least screening the advertising. Again, in the case of YouTube 
and in the case of Facebook, what are we doing to stop the paid 
advertising targeting by these adversaries?
    Mr. Walker. You took the last one. I can start on this one.
    We have an extensive series of checks and balances in our 
advertising networks that are designed to identify problematic 
accounts, particularly around election ads. We require election 
ads to have registration, effectively.
    In the 2016 situation, I remember we did an extensive 
forensic review of our systems and found that less than $4,000 
had been spent on those.
    Chairman Warner. Respectfully, sir.
    Mr. Walker. Senator.
    Chairman Warner. As recently as January, I note the 
Treasury Department has said that both of your companies have 
still repeatedly allowed Russian influence actors including 
sanctioned entities to use your ad tools. We will get that 
specific information to you.
    We are going to really need as soon as possible the 
content, the bad actors, how much content have they purchased 
on both of your sites and, frankly, others, and we are going to 
need that extraordinarily fast because I think they are getting 
through in many, many more ways than has been represented up 
here.
    Mr. Walker. I certainly appreciate the concern. And we have 
taken down, as we indicated earlier, something like 11,000 
different efforts by Russian associated entities to post 
content on YouTube and the like.
    Chairman Warner. We are just going to need this as quickly 
as possible.
    Mr. Walker. Happy to provide that.
    Chairman Warner. The number of Americans viewing Fox News--
what they think is Fox News or Washington Post, or 
advertisements. We need that data to make sure, again, that we 
inform the public.
    OK. Thank you.
    Vice Chairman Rubio. The area I want to focus on is where 
political speech is involved, and it is sort of the area I 
talked about in my opening statement, which is and really in 
particular, I want to understand what the current policies and 
practices are as we speak regarding content moderation. I am now 
reading from the opening statement from Meta:
    We are constantly working to stop the spread of 
misinformation and disinformation. We have built the largest 
independent fact-checking network of any platform with nearly 
100 partners from around the world, to review and rate viral 
misinformation in more than 60 languages. Stories that they--
this platform or this group of people--rate as false are shown 
lower in feed, and if some page repeatedly creates or shares 
misinformation, we significantly reduce their distribution and 
remove their advertising rights.
    Let me explain. We are not talking about the stuff that was 
up here. That is fake content. That is just purely fake 
content. It is generated to look like Fox News or Wall Street 
Journal or New York Times. No one is arguing that. That is 
fake. That should be taken down. Those companies should want 
them taken down. That is their copyright and their logo and 
their letterhead.
    I'm talking about this. So, you have got a group of people 
that I think are your fact checkers from all over the world to 
determine whether something is true or not.
    So let me take you back to the real-world scenario which 
ties into what the CEO of the company said, and that is, there 
were people at one point saying, ``maybe I believe that the 
pandemic began in a lab. I believe there was an accident in a 
lab, and it leaked out.''
    And at one time that was considered not factual. In fact, 
there was pressure from government officials on companies not 
to report on that.
    How would that work today, a story like that? Who 
determines whether that is true or not, because it wasn't true 
then but all of a sudden now it is 50 percent maybe likely. How 
would something like that--because there were people that were 
caught up in that. I would imagine that under the policies that 
you described, if I was out there or someone was out there 
raising the specter of a potential lab leak, it would run 
through these fact-checkers, from 100 partners all over the 
world. They would decide whether it is true or not, and you 
could have your page diminished, potentially de-platformed if I 
write too much about it.
    So how does this policy deal with that problem that I just 
described, which is a real world one?
    Mr. Clegg. Senator, yes, indeed it is. And as I said in my 
opening statement, we all--obviously, we all inhabit a world of 
imperfect information. And crucially, the pandemic was a very 
good example of that information which changes. And obviously, 
with the benefit of hindsight, we now understand the 
epidemiology of the pandemic which we didn't at the time.
    When we were in the middle of the pandemic, prior to the 
vaccines being rolled out, when people were dying, when really 
no one knew what the trajectory was of this global pandemic. We 
as an engineering tech firm, of course, we are not specialists 
in epidemiology.
    Vice Chairman Rubio. Yes, but I'm not asking what happened. 
I understand what happened. I want to know how this policy 
today would prevent that from happening, because if the 
government is telling you this is a lie, ``We have proof that 
it's a lie. Take it down'' and your fact-checkers say it's a 
lie, then my account gets blocked, gets diminished.
    Today, is that happening today, right now?
    Mr. Clegg. So, two things. Firstly, we do continue to rely 
on these independent fact-checkers. We don't employ them. They 
are not part of Meta. They are independently vetted by a third-
party organization.
    Vice Chairman Rubio. Who are they?
    Mr. Clegg. Oh, there is a variety of organizations which--
which specialize in examining what they think is a reliable way 
of asserting whether something is mis----
    Vice Chairman Rubio. Is there a way to know who those 
vetters are?
    Mr. Clegg. Oh, yes, absolutely.
    Vice Chairman Rubio. Is there a list somewhere, a roster?
    Mr. Clegg. Yes, we have a full list. Absolutely. We can 
provide them to you, and they obviously work in multiple 
languages, in fact, including the United States. I think there 
are 11 fact-checkers in the United States, and we can provide 
you with all the information about them. That is the first thing.
    And the second thing is, and Mark Zuckerberg did indeed 
explain this in his recent letter to the House Judiciary 
Committee. I think we learned our lesson, certainly as Meta is 
concerned, that in the heat of the moment when governments, and 
it is governments around the world, exert particular pressure 
on us on particular classes of content which they are 
particularly focused on, we need to act always--and we strive 
to do this, but, of course, we make mistakes--we need to act 
independently; and we need to be resistant to the sort of 
passing moods and passions around particular bits of content, 
which was particularly the case in the pandemic. People were, 
in effect, in a panic.
    Vice Chairman Rubio. Well, let me give you a different 
context, the exact same system. A laptop appears and 51 people 
sign a letter saying: We used to work in the intelligence 
community. This is Russian disinformation. And your fact 
checkers say: We got to listen to the experts. They would know.
    Does anybody--does the New York Post get their account 
taken down again?
    Mr. Clegg. To be very clear, we did not take down the 
account or the content. I think X--they are not here but they 
did.
    Vice Chairman Rubio. But under this policy, if you deem it 
to not be true because it is disinformation because some guys 
signed a letter saying that it was, it will lower them in the 
feed and potentially reduce their distribution, and if they 
post that story too many times, you may actually lock them out. 
So that is policy.
    Mr. Clegg. So, in this instance, Senator, you are correct 
that that story was demoted. I mean, it was always available. 
Millions of people saw it. But its prominence on our services 
was temporarily reduced. And we used to do that to allow the 
fact-checkers to give them the space and time to choose to 
examine that content.
    In this instance, the Hunter Biden story, they didn't do 
so. So that temporary demotion of a few days was then released, 
and it was circulated back to normal.
    Vice Chairman Rubio. Did the fact-checkers reduce or demote 
the 51 people who signed the letter or the letter they signed, 
because that turned out to be not true?
    Mr. Clegg. I don't believe they did so at the time, no.
    Vice Chairman Rubio. All right. Thank you.
    Chairman Warner. Senator Heinrich.
    Senator Heinrich. All right. So, I want to stay on this 
same topic of the sort of fraudulent news sites that look like 
something people would recognize from their own news 
preferences.
    Do each of your companies have a policy of removal once you 
become aware of something that is clearly a fraudulent version 
of a legitimate site?
    Mr. Smith. I think the answer is yes, and Vice Chairman 
Rubio, I thought, captured it very well.
    It actually, in my view, does not depend on whether the 
topic had anything to do with politics. Those are counterfeit 
sites. Those are people using the trademarks of Fox News and 
the Washington Post without their permission and in a way that 
deceives the public and diminishes the value of those 
companies.
    And so, yes. And I think you see pretty universally across 
the industry in the terms of use that prohibit that.
    (Vice Chairman Rubio is now Presiding.)
    Senator Heinrich. Why does it seem to take as long as it 
does for those sites to be identified and removed?
    I think they remain up sometimes longer than I think most 
of us would hope or expect. And then, have you been able to use 
AI proactively to identify some of those fake news outlets?
    Mr. Smith. I think increasingly we are using AI to detect 
these kinds of problems, and I think AI is especially good at 
detecting the use of AI to create content. That is one of the 
things we do, and we are able to see things faster. You always 
have to be in a race.
    For example, just this morning, we saw a Russian group put 
online an AI-enhanced video putting into Vice President 
Harris's words, at a rally, words she never spoke. So, I think 
that is one of the goals for all of us to keep pursuing to 
identify these things faster and then where appropriate take 
action.
    Senator Heinrich. I am encouraged, because obviously AI is 
being used offensively and we need to be on our game and 
responding with those same tools to be able to identify and 
appropriately deal with these things at a much faster rate.
    At a hearing of the U.S. House Committee on House 
Administration last week, the New Mexico Secretary of State 
testified that, quote:
    ``[Y]ears of false election claims and ideological attempts 
to discredit our voting systems and processes . . . [have] led 
to . . . increased threats and harassment to election 
workers.''
    How have you sought to improve your platform's ability to 
detect and remove content that actually threatens or harasses 
people who are part of the democratic process and apparatus for 
fairly administering elections?
    Mr. Walker. I will take that, and I suspect the same is 
true for all of us. There are two elements of that. One is 
making sure that we are trying to safeguard our election 
officials against threats that may be posted online. And I am 
confident that all of our companies have policies against 
incitements to violence, direct threats, bullying, 
cyberattacks, et cetera. So that kind of material would come 
down.
    The second half is helping our election officials be more 
protected themselves through the use of some of the tools that 
we have spoken about like the Advanced Protection Program, so 
their information is not being hacked or doxed, et cetera--
their personal information is not being made public and the 
like.
    So, between the various companies here, including, I know, 
our Mandiant Group has worked with a number of election 
officials and agencies to make them more cyber resilient, if 
you will--more robust against cyberattack.
    Senator Heinrich. Mr. Clegg.
    Mr. Clegg. Senator, again, I'm sure this is incumbent on 
all of us represented here, but we also encourage local 
election officials to use our platforms to communicate with 
voters. So, we established a system called voting alerts. I 
think since we established that program in 2020, around 650 
million voting alerts have been issued by local and State 
officials on Facebook's apps and services so that voters are 
properly informed about where and when to vote.
    Senator Heinrich. I am going to give the rest of my time back, 
very uncharacteristic for this body, but nonetheless.
    Senator Collins. I will take it.
    Vice Chairman Rubio. Senator Collins.
    Senator Collins. Thank you, Mr. Chairman.
    Mr. Clegg, we have received briefings from the intelligence 
community that indicate that China is not focused on the 
Presidential election race but rather on down ballot races at 
the State level, county level, local level. That concerns me 
because officials at those levels are far less likely to 
receive the kinds of briefings that we receive or to get 
information from Homeland Security or the FBI on how to be on 
alert.
    In addition, China is attempting to build relationships 
with State and local officials. We see the sister city 
programs. We see the Confucius Institutes at educational 
institutions.
    So how are your platforms attempting to help safeguard the 
down ballot races? The presidential race, I think, everybody is 
aware of the risk there, but the down ballot is what really 
concerns me.
    Mr. Clegg. Senator, I think you are right to be concerned, 
and that is why our vigilance needs to be constant. It can't 
just sort of peak at the time of the Presidential elections. It 
is something in which we need to deploy policies and 
enforcement around the world and around the clock.
    And also, you are right, Senator, to point out that what we 
have seen--what we have at least seen, I know my colleagues 
have witnessed what we call the coordinated inauthentic 
behavior networks conducted by China. Some are quite 
specifically targeted at particular communities.
    So, for instance, quite recently we disabled dozens of 
Facebook and Instagram accounts which were targeting the Sikh 
community in the United States. That is one of the reasons why 
the central signals that we look for aren't related to the 
content or even the person, but the behavioral patterns that we 
see. And the telltale patterns are most especially the use of a 
network of fake accounts. And that of course manifests itself 
in lots of different ways and is targeted at different 
communities, but the underlying analysis that our team has 
conducted is about the behavior rather than the individual bit 
of content. Because as Vice Chairman Rubio said, sometimes the 
content can be actually consistent with things that are 
circulated by kind of ordinary folk in kind of, you know, the 
normal, day-to-day business.
    Senator Collins. Thank you.
    Mr. Smith, you talked about the need for the American 
people to be prepared and to be on the alert. Why isn't part of 
the answer so that we are not getting into suppressing 
dissenting views or criticism of public officials, for example, 
why isn't the answer to watermark posts to indicate not whether 
they are AI generated, but rather where they originate?
    Like, why couldn't you do an ``R'' if it came from Russia? 
Then the person who is looking at the post can make his or her 
own determination, but they would be on alert that this isn't 
Joe, down the street, who has posted this. This is someone who 
is in Russia.
    Mr. Smith. I do think that is a really interesting idea and 
it is one that across the industry people have been giving a 
lot of thought to.
    I would say a couple things. First, I think actually it 
starts with also picking up on the idea you just described and 
putting Americans and American organizations in a position to 
put what is called metadata, in effect, to put the credentials 
in place so it is clear, where their content has come from.
    We worked, for example, with the Republican National 
Convention, and they used that on more than 4,000 images that 
were released in Milwaukee so that it would protect their 
content from being distorted.
    I do think one can then go farther, and it is an important 
question, as you raised, if we find something that is coming 
from somewhere else, how and when should we identify it.
    I frankly think the most important thing is that we address 
content where that kind of protection has been removed. And 
that has been the subject of legislation being proposed, 
including from Members of this Committee, to protect against 
tampering. And then we can think about other forms of 
identification for the public.
    Senator Collins. Thank you.
    Vice Chairman Rubio. Senator Kelly.
    Senator Kelly. Thank you, Mr. Chairman.
    Thank you all for being here for this very important 
hearing.
    I just got back from visiting our allies in the Baltics who 
all border Russia, also to Finland. And they have been targeted 
by disinformation attacks at a pretty high level and come 
pretty quickly. And they have efforts in place to try to equip 
their citizens and their institutions to counter disinformation 
campaigns, they feel, somewhat successfully, though it is a big 
problem for them. But I do think we can learn something from 
our partners in the Baltics.
    Malicious actors, as you know, use social media and 
internet platforms as a key vector for these campaigns that 
they have against us and are increasingly employing tools we've 
talked about: bots, generative AI. So, it is my hope 
that we can also count on the partnership of the American tech 
industry to aggressively counter these threats.
    I want to turn to a specific problem that is of great 
concern to me, and as my constituents learn about this, I am 
sure it will be to them as well.
    Behind me you can see a screen capture of Russian made web 
pages designed to look like major American outlets Fox News and 
the Washington Post but showing fabricated headlines. I went 
through these the other day.
    I think the Chairman showed something very similar, so 
apologies for being a little bit redundant here.
    But these pages were created by Russians or Russian cyber 
operatives to distribute Russian messages by co-opting the 
brand of a real news website that Americans trust, both Fox 
News and the Washington Post, but there are others, as well.
    These are really well done. I mean, it would be hard, 
unless you were looking specifically at the URL and noticed 
that something was not exactly right, where there was no dot-
com, there was a dot-pm or dot something else at the end you 
wouldn't otherwise know and you would think this was a 
legitimate news source.
    They've also spoofed the official NATO website as well. And 
they use these sites to push messages that cast doubt on 
Russian atrocities that we know are real. They lie about NATO 
suppressing peaceful protests. They stoke controversies or even 
invent them where they don't exist.
    So, an additional concern is that they specifically 
targeted swing State voters--so, my constituents in Arizona and 
others--and they seek to influence the outcome of these 
elections. This is absolutely beyond the pale. We have got to 
do something about it.
    So, I am curious from each of you, and I have about two 
minutes here. Just what are you doing about it? And 
specifically with these websites, if we were to go and look for 
them now, have they been taken down--the Fox News website, the 
Washington Post? Would we still--is there a way to--
    Let's say we start with you, Mr. Walker. If we search on 
Google and tried to find this through a Google search engine or 
search for the Washington Post, could we navigate from your 
website to these fake websites?
    Mr. Walker. We are obviously concerned about the larger 
problem. I haven't searched for these specific sites, but I can 
tell you, we have launched tools called ``about this image'' 
and ``about this result'' which tells you the first time we saw 
an image appear on the internet. So, in many cases 
disinformation may not be AI-generated, it may be a repurposed 
photo.
    Most of the disinformation we see coming out of Gaza is not 
AI-generated, it is pictures from a different war.
    So, providing that kind of context is valuable. Then just 
quickly, to say that if content is AI generated, increasingly 
the ability to watermark it or understand its provenance 
through the C2PA cross industry group that I mentioned before 
will help all of us do a better job of identifying and removing 
this type of content.
    Senator Kelly. Once you find the content and you know it is 
fake, at that point, can you take action to make sure that your 
customers cannot navigate to that content?
    Mr. Walker. The search context is somewhat different than 
other contexts where we are hosting information. So, let's say 
YouTube, which would be our hosted content example here. If 
something is demonstrably false and harmful, we will remove it, 
in addition to all of our other policies. And that has been 
consistent for many years.
    We also have a general manipulated media policy, whether it 
is AI manipulation, or you may remember the cheap fakes that 
went around some time ago, which were slowing down videos to 
make a politician look as though they were intoxicated. We will 
remove that kind of content, yes.
    Senator Kelly. You said if it is false or harmful. How 
about if it is just them co-opting somebody else's website like 
Fox News or Washington Post?
    Mr. Walker. I go back to Brad's earlier comments with 
regard to the notion of trademark infringement, copyright 
infringement. As we get complaints about that, we will remove 
that content, yes.
    Senator Kelly. All right. Thank you.
    Chairman Warner. I would quickly note, I think most of your 
companies do a pretty good job on trademark protection. I just 
feel like Fox News and Washington Post should have gotten that 
same level of protection, and, frankly, they should be weighing 
in as well.
    Senator Cotton.
    Senator Cotton. Thank you. Gentlemen, thanks for appearing. 
I mean, I want to bring a little perspective to the topic 
today.
    I think this committee's own report of more than 1,000 
pages said that Twitter users alone produced more election-
related content in about three hours in 2016 than all of the 
Russian agents working together.
    I have no doubt that Russia and China and Iran and North 
Korea are all doing these things, up to no good. And if you 
don't know what they are doing, it is probably no good. And 
there is a lot of things they could do that are very bad to 
influence American politics.
    You know, Russian intelligence spent millions of dollars in 
the early 1980s to promote the nuclear freeze movement which 
Joe Biden bought hook, line and sinker. And Russian 
intelligence under Vladimir Putin has spent millions of dollars 
to oppose fracking which Kamala Harris has bought hook, line 
and sinker, trying to ban fracking.
    And there is plenty of things they could do in our election 
infrastructure as well. They could hack into campaigns, leak 
their strategy, or steal their voter contact information. Even 
worse, they could hack into county clerk's offices or Secretary 
of State's offices and delete voter registration files or try 
to manipulate votes.
    They don't even have to get into the election machinery. 
They can turn off the electricity in a major American city on 
election day and wreak havoc there.
    So, there is a lot of threats that our adversaries could 
pose to us in our elections. I just don't think that memes and 
YouTube videos are among the top, especially when we have an 
example of election interference here in America that was so 
egregious.
    Some of your companies' efforts, in collusion with Joe 
Biden's campaign, led by the current Secretary of State to 
suppress the factual reporting about Hunter Biden's laptop.
    Mr. Clegg, you acknowledged earlier that Facebook demoted 
that story after it was published by the New York Post, is that 
right?
    Mr. Clegg. Correct, but I should clarify we don't do that 
anymore.
    Senator Cotton. I know Mr. Zuckerberg has said that you 
demoted it, and he expressed regret. And I assume you share 
that regret with your boss?
    Mr. Clegg. Yes.
    Senator Cotton. And you share what he said that you are not 
going to do it anymore, right?
    Mr. Clegg. Correct. So that demotion does not take place 
today.
    Senator Cotton. Mr. Walker, what about Google?
    Did Google suppress results about the Hunter Biden laptop?
    Mr. Walker. We did not, sir. We had an independent 
investigation, and it did not meet our standards for taking any 
action, so it remained up on our services.
    Senator Cotton. OK. And Twitter under the old regime there was, I think someone said, even more egregious than Facebook or other platforms. And again, this is domestic 
information operations, if you would like to say--far more 
influential on elections than some memes or YouTube videos or 
articles that Russian intelligence agents or Chinese 
intelligence agents posted, which no doubt they do.
    And just look today. The New York Times the other day had a 
fit that social media was awash--``awash,'' it said--in AI-generated memes of Donald Trump saving ducks and geese.
    Are AI-generated memes of Donald Trump saving ducks and 
geese really all that dangerous to our election?
    Mr. Smith, you laughed, for the record.
    Do you want to answer my question? Are you worried about--
--
    Mr. Smith. I think it's to your point.
    Senator Cotton [continuing]. Ducks and geese memes of 
Donald Trump saving them from predators?
    Mr. Smith. When I create a list of the greatest worries for 
this election, they do not involve ducks or geese.
    Senator Cotton. I wouldn't think so, either. It didn't seem 
like that to me, either.
    Mr. Walker, Google famously did not autofill results for people searching for Donald Trump's--the assassination attempt of Donald Trump a few weeks ago. What happened there? Why was that the result of your company's----
    Mr. Walker. We have had a longstanding policy, Senator, of 
not associating terms of violence associated with political 
officials unless they have become an historical event. So, the 
assassination of Abraham Lincoln would have been allowed.
    Up until the weeks prior to the assassination attempt, it 
would have been deeply problematic, I think, to auto-complete 
``assassination'' after a search for Donald Trump.
    Those terms are periodically updated. The assassination 
attempt occurred in between one of those periodic updates. It 
has subsequently been updated and now auto-completes 
appropriately.
    Senator Cotton. Let me ask both of your companies. This is primarily Mr. Walker for Google and Mr. Clegg for Facebook.
    Gavin Newsom just signed a law--three laws, actually--into effect in California that will criminalize the use of so-called ``deep fakes'' before an election.
    How do you plan to comply with that law?
    Are you going to go arrest people who are making AI-
generated memes of Donald Trump running away with ducks and 
geese?
    Mr. Walker. Senator, it is early for us to understand. We 
are just receiving the laws which were signed very recently, 
and we are looking at how we might best comply with a number of 
laws. There were quite a few.
    Senator Cotton. Mr. Clegg, a lot of ducks and geese memes 
on your website.
    Mr. Smith thinks you are funny. He is laughing again.
    It's fine. People laugh at them. Satire and political humor 
are as old as our country. It's fine.
    I am glad that you are not going to do again what you did 
in 2020, but I don't envy either of your companies dealing with 
what Gavin Newsom has done in California or what many in this 
Congress propose to do, criminalizing and censoring core 
political speech.
    Mr. Clegg, do you have any idea of how you are going to 
comply with California's law?
    Mr. Clegg. Well, it's only just been signed, so, again, we 
would need probably to look at it more closely.
    But I think, Senator, your central point is that there is a lot of playful and innocent and innocuous use of AI, and then there is duplicitous and egregious and dangerous use of AI. That is exactly why I think Governor Newsom----
    Senator Cotton. And I have to ask. My time has expired, but 
I have to ask: Who is going to draw that line?
    Who is going to decide what is playful and innocuous and 
harmless and what is misinformation and disinformation?
    And I got to say some of the people you go to, PolitiFact 
and Southern Poverty Law Center don't strike me as quite 
neutral sources and I don't think you are going to find neutral 
sources in the government of California or in this 
administration, either.
    Chairman Warner. And just as we look at the California law, I would like your analysis as well of the laws on deep fakes used in political advertising that were passed and signed in Alabama, Texas, and Florida.
    Senator King.
    Senator King. Thank you, Mr. Chairman.
    I think the bright line here should be foreign--the word 
``foreign,'' as has been pointed out.
    As the vice chairman pointed out in his opening remarks, it 
becomes problematic when you are talking about domestic content 
and then it is being amplified by foreign content. That should 
be the line.
    I mean, I don't want you all or the government certainly to 
be the arbiters of truth, because one man's truth is another 
man's propaganda. I mean, I think we should have that kind of 
flexibility.
    It seems to me what is happening here is that foreign 
governments are engaged in a kind of geopolitical judo, where 
they are using our own strength against us. Our strength is our 
democracy and our regular elections plus freedom of expression 
and that is what they are taking advantage of in order to try 
to manipulate our fundamental way of making decisions, which is 
through elections. But the issue should always be is there a 
foreign nexus, is there a foreign influence in this matter?
    I guess the question is, in this day and age, can you 
determine that given the fact that we have got very 
sophisticated adversaries in St. Petersburg or Moscow or 
wherever, or in Tehran, who may be coming in via a server in 
Georgia.
    Can you technically tell when something is of foreign 
origin?
    Mr. Walker or Mr. Smith.
    Mr. Smith. I would say the answer is not always, but often, 
yes. And I do think that there are some threats that we take 
seriously, and we should start with the word ``foreign.''
    But if you want to see the risks that we should be thinking 
about, I would go back to Slovakia. Their Parliamentary 
election was last year, September 30. Two days before, on 
September 28, a Russian group released a deep fake audio. It 
purported to be an audio of a conversation between a mainstream 
journalist and the leader of the pro-European Union political 
party, one of the two largest political parties in that race.
    That reflected what we see in Russia, No. 1, a good content 
creation strategy.
    The second thing they did on that same day, is they 
released it on Telegram which tends to be the Russians' favored 
distribution channel to get things going. They did it from what 
was the private account of the spouse of a major official in 
Slovakia.
    The third thing they did is they pursued a content 
amplification strategy where then one of the most senior 
officials in the Russian Government, as they tend to do, came 
out the very same day and accused the United States of doing 
what that audio recording purported to capture in Slovakia; 
namely, a plot to buy votes and steal the election.
    Senator King. In other words, it was a very sophisticated 
operation.
    Mr. Smith. It is, and this is what we need to remember. You 
can't have a great play without a great playwright. The Russian 
government is very capable, very sophisticated, not just in 
technology, but in social science.
    Senator King. And very determined. Very determined, are 
they not?
    Mr. Smith. Yes, absolutely. And that is, that is what we--
There are many things.
    It is right, I think, to focus on the things that should 
unite us and say let's not worry about what we are seeing over 
in one direction, but let's not close our eyes to what we could see in the other as well.
    Senator King. The question is, No. 1, it's happening. You have all testified to that. It is happening and it is not a minor project on the part of Iran, Russia, and to some extent China.
    So, the question is then, what do we do? I know Senator Collins asked about watermarking, some kind of way to determine the source of the information--attribution.
    But I had a formative experience about eight or nine years 
ago in this building before any election, before 2016, meeting 
with a group of people, politicians, political officials in 
Estonia who are under bombardment all the time from Russian 
propaganda and Russian disinformation. I asked: How do you deal 
with it? You can't cut off the internet or cut off your TV 
stations. Their interesting answer was: We deal with it by 
educating the public that it is happening. And they say, ``Oh, 
hell, it is just the Russians again.''
    And that is why I think what we are doing today is so 
important and your testimony is so important, so the American 
people can be alerted to the fact that they may be being misled 
and they should check. Is that a reasonable approach?
    Mr. Smith. Absolutely, and what I hope we can take away from this--because, first of all, there is something very important in what Senator Cotton said: not everything is a threat; 
and, as Senator Rubio said, we should honor our citizens to say 
what is on their minds. But Senator Kelly captured something 
that is critical, and you are pointing to the same thing. When 
you go to Estonia, when you go to Finland, when you go to 
Sweden, when you meet people who have lived their entire lives 
in the shadow of Russia, they are on the alert. They know, as 
we have discovered, that not everything on the internet is 
true. They just remember that when they read something that is 
new.
    Senator King. My wife and I have a sign in our kitchen that 
says: ``The difficulty with quotes on the internet is 
determining their authenticity--Abraham Lincoln.''
    [Laughter.]
    Senator King. Mr. Clegg, you were going to respond? I'm 
sorry, Mr. Walker.
    Mr. Walker. Yes, Senator, thank you.
    Just very briefly. In addition to those very good points, with which I agree, I do think we are increasingly able to use AI to detect some of these patterns.
    As we discussed previously, YouTube has gone from having one view in 100 violating our policies to one view in 1,000. 
That is in large part because we are using AI to detect some of 
these patterns of misinformation and disinformation that are 
out there and take action against them.
    Senator King. You can either take action or you can alert 
your customers that this has been manipulated in some way.
    Mr. Walker. Agreed and also provide high quality, 
authoritative information. The old line, ``the best remedy for 
bad information is good information.''
    So, the more we can promote accurate information about when 
the polls are going to be open, people's eligibility to vote, 
whatever else it might be, that is an important part of the 
democratic process.
    Senator King. Thank you, Mr. Chairman.
    Chairman Warner. And I would just remind, I agree with the comment around memes, but I recall that this committee exposed in 2016 the effort by the Russians to incite violence between a pro-Muslim group in Texas and a kind of pro-Texas-separatist group that, but for law enforcement, would have resulted in American harm.
    And echoing that, I don't know, when these slides are up, how a normal American consumer, even a relatively sophisticated one, would have the expertise to read the URL that closely when everything else looks so much like Fox or the Washington Post.
    Senator Cornyn.
    Senator Cornyn. I would like to ask each of you to respond 
to this question: Do you believe that ByteDance should be 
required to divest TikTok in order for TikTok to operate in the 
United States?
    Mr. Walker.
    Mr. Walker. Senator, I would defer to Congress. I know you 
have legislated on this very question.
    Senator Cornyn. Do you think social media companies owned 
by foreign governments that are adversaries of the United 
States that are known to use information warfare against the 
United States, do you believe they should be able to operate 
freely in the United States?
    Mr. Walker. As a technology company, our area of expertise 
is making sure that they are not distributing malware. We have 
found situations where such companies were distributing 
malware, at which point we removed them from our services.
    But on the broader question of accessibility, I think that 
is a question for Congress.
    Senator Cornyn. I will put you down as undecided.
    Mr. Smith.
    Mr. Smith. You can put me down as I think you all have 
already decided. The Congress has passed a law. The President 
has signed it. The courts will adjudicate it, but assuming it 
is upheld, then clearly it needs to be followed. And I am not 
going to try to substitute my judgment for the judgment you all 
have already brought to bear.
    Senator Cornyn. Mr. Clegg?
    Mr. Clegg. In addition to that, I will just point out that 
there isn't a level playing field globally. Our services, for 
instance, are not available to people in China. So, Chinese 
social media apps are available here, but American social media 
apps are not available in China. That has been the state of 
affairs for some time.
    Senator Cornyn. What I am looking for is the guiding 
principles here, and Mr. Clegg, it sounds like reciprocity 
should be perhaps one of those principles.
    Mr. Clegg. I think the First Amendment principle of voice 
for the maximum amount of people for the maximum amount of time 
wherever they reside around the world is a good principle.
    Senator Cornyn. Well, the problem I think we are having is trying to figure out what the appropriate framework is to think about what you all do day in and day out, because it has 
presented a bunch of novel and difficult questions. But before 
social media companies existed, it seems to me we had 
doctrines, laws that governed the way that we dealt with the 
subject matter that we are talking about here today.
    Of course, what is so different today is you are private 
entities so presumably the Constitution, the First Amendment, 
can't be directly applied. I know the Supreme Court is 
wrestling with how to figure out what the right way to view 
social media companies is.
    You have your terms of use which strike me as a pretty 
powerful tool to be able to regulate what is on your site, but 
there are also legitimate concerns about censorship of views. 
And of course, Mr. Clegg, you talked a little bit about Mr. 
Zuckerberg's letter and the fact that he regrets that Meta was 
being influenced and cooperating with the Federal Government.
    And then we have regulations that usually help us in this 
area, or as a last resort, litigation.
    So, I am wondering, is there anything about the way that we 
operated and the legal framework we operated under before your 
companies existed that should inform the way that we view your 
operations today?
    It strikes me as we are dealing with adversaries often that 
view information warfare as a legitimate tool. Obviously, the 
Russians and their active measures campaign existed long before 
your companies existed.
    But we are an open society, and we believe in freedom of 
exchange and free speech; but is there anything about the way 
that we regulated or the way the framework under which we 
understood that newspapers, radio, movies, other means of 
communication were handled pre-social media companies that 
should guide us here or are we just trying to make this up from 
scratch?
    Mr. Smith. The one thing I would say, without getting into I think your very important question about sort of the history of regulation of communications in the country--and everyone could have a vibrant debate about section 230 and the like--is this: It is easy to spend all our time on issues where we disagree.
    I think the most important thing is we identify where we 
actually do agree across the political aisle and across the 
industry, because if we can act based on common consensus to 
address the foreign adversaries, emphasizing again that word 
``foreign'' and nation states, we can do the most important 
thing I think we need to do this year. I think that can build a 
foundation for the future and then we will deal with the rest 
and your very important question among that.
    Senator Cornyn. My time is up.
    Chairman Warner. Again, I want to commend Senator Cornyn 
for raising this. We did actually do that on the question about 
CCP control of a platform that candidly is even more popular at 
this point than your platforms, and 80 percent of the Congress 
in both political parties said that is not in our national 
security interest, and I appreciate you raising it.
    Senator Bennet.
    Senator Bennet. Thank you, Mr. Chairman.
    I appreciate you having this hearing and appreciate you 
coming to testify. I'm very grateful for that.
    I think what we are struggling with a little bit, in terms 
of answering the question Senator Cornyn just posed is the 
sheer scale of the enterprises you represent. That presents 
something new to us.
    And as I sit here listening to this conversation, I am 
thinking about the people who are going to be sitting in your 
chairs 30 years from now and the people who are going to be 
sitting in our chairs 30 years from now, and whether the incentives that are leading us to have the conversation we are having right now, and the answers we are giving in this minute for all the right reasons, are the ones we would have wished for 30 years into the future.
    I really wish on behalf of the American people that the 
American people would have had a negotiation with Mark 
Zuckerberg, just to pick him as an example, around our privacy 
and around our data and around our economics. I don't believe 
we have had that negotiation. I don't think we have with any of 
these social media platforms--different, Mr. Smith, than your 
company--with any of these platforms about our privacy, our 
data, our economics, the way we want our children's bedrooms 
invaded or not invaded. And for better or for worse, they are 
looking to us to try to begin to have that conversation.
    So, first, we haven't had it, and here we sit having to 
deal with the very, very severe consequences across our 
society. I say that partly as a capitalist but also as a former 
school superintendent who has seen the effects of mental health 
on our kids, and as Members of the Intelligence Committee who 
are trying to protect the country from an invasion of our 
democracy across your social media platforms and tech 
platforms.
    When I read your Capex numbers it staggers my mind. I can't 
even get my head around the idea that you are going to spend 
$170 billion over 18 months on AI investments. I mean, that 
annual expenditure for your three companies is more than we had 
for roads and bridges in the first infrastructure bill we 
passed since Eisenhower was President. And for all the telecom 
or broadband infrastructure across the entire United States of 
America. Those things together are dwarfed by your annual Capex 
expenditure on AI. And I feel like we are being asked to sort 
of hope for the best.
    I think it is an amazing testament to American capitalism 
that you have those resources to invest in the future, but you 
better be making the right decisions. And part of that I think 
is a question of whether the commitment--you really made the 
commitment on the front end to safeguard America's democracy, 
to make sure our elections are protected, to not say that it is 
up to our citizens to try to figure it out in the hailstorm of 
propaganda that has almost been perfected by our adversaries 
and every day is being used by them to divide one American from 
the next, from the next, from the next, because they see that 
division as a potential benefit to them and a huge detriment to 
us.
    How much money are you investing to make sure that you are 
protecting our elections?
    Is that your responsibility, or is this just, you know, an 
approach that says let a thousand flowers bloom?
    I am a strong believer in the First Amendment, but I don't 
think there is anything about the First Amendment that obviates 
your need to be able to say to the American people: We believe 
we have a responsibility to you because we are creatures--among 
other things because we are creatures of this unique society 
and this unique democracy and we have an obligation here.
    So, I don't know if anybody would like to respond.
    Mr. Clegg? Please.
    Mr. Clegg. Yes. So, to answer your specific question, we 
have around 40,000 people working on the security and integrity of our 
services. In fact, that number is slightly up from what it was.
    Senator Bennet. I am deeply, deeply skeptical of the 
numbers, because the numbers don't tell you what the investment 
is. We know they go up and we know they go down. And as Mr. 
Walker said earlier, maybe the AI tools themselves are better--
and I don't doubt that. That may be true. I am more interested 
in what the total capital expenditures are.
    Mr. Clegg. Capital expenditures, about $20 billion over the 
last several years. Around $5 billion over the last year.
    To your wider point, Senator, I strongly agree with you 
that the scale that one is dealing with, whether it is from the 
tech company's point of view, from legislatures and governments 
around the world, it is clearly unprecedented because the 
network effects are created by the internet.
    On our services alone, you have 100 billion messages around the world on WhatsApp every day. You have now about three and a half billion reshares of short-form videos, Reels, every single day.
    And much as cooperation between companies at this table and 
between companies that are not represented at this table is 
crucial to deal with the scale of all of that, I would also 
suggest that cooperation between different jurisdictions in the 
democratic world globally is important as well, particularly 
between the United States, Europe, India, and so on, because I 
think one of the greatest risks is a fragmentation of different regulatory approaches around the world for technologies which are by definition borderless.
    Mr. Smith. I would just note very quickly----
    Mr. Walker. Go ahead.
    Chairman Warner. Go ahead. We have got a couple more 
minutes.
    Mr. Smith. First of all, I believe that the American tech 
sector is the engine of growth and frankly is the envy of the 
world, and we should at least remember that.
    No. 2, we do have a very high responsibility to protect 
elections, to think about the impact on others, on our societal 
responsibility in so many areas.
    No. 3, if there is a foundational principle for this 
country, I believe it is straightforward. No one should be 
above the law--no individual, no company, no leader, no 
government.
    But then, No. 4, let's recognize the obvious. We need laws.
    I would just say, I put it slightly differently. We haven't 
had a shortage of debate in this country about an issue like 
privacy. We have had a shortage of decision making. So instead 
of always worrying about where we can't reach agreement, why 
don't we get something done by taking more action, by calling 
on us to be maybe more supportive, as we could and should on 
certain days and helping you all so this Congress can pass the 
laws we need. I think that is the recipe that we need for the 
future.
    Chairman Warner. I can't. I will bite my tongue.
    Senator Lankford.
    Senator Lankford. Mr. Chairman, thank you. Thank you too 
for showing up.
    We invited several more tech companies, and they chose to just decline, not to be here in the national conversation. So, I do appreciate you taking the chance to be able to be here.
    Let me just outline some of the challenges we face on this 
that do become obvious to all of us when we get a chance to be 
able to look at it.
    This is not picking on Meta, but it is going to be a side-by-side with TikTok, who is not here. But this is just an example, side by side, of content delivery from a company, when a comparison was done of content delivery to individuals that were 35 and younger, from Instagram to TikTok.
    On Uyghur content, it was 11 to 1 Instagram. So TikTok hardly delivered it; 11 to 1, that if someone was talking about Uyghurs, Instagram was talking about it, TikTok wouldn't.
    In a conversation about Tibet, 41 to 1 Instagram to TikTok. 
TikTok just screened it out.
    On the Tiananmen Square, 80 to 1 content on Tiananmen 
Square. This is among Americans, by the way.
    Hong Kong protests, 180 to 1. That seemed to be a 
conversation that was discussed on Instagram that just didn't 
show up on TikTok for whatever reason.
    Ukraine, 12 to 1.
    And this one was interesting to me. There is 50 times more 
pro-Palestinian content on TikTok than pro-Israel content.
    Now, I say that to you to say, there is a sense of an 
outside foreign influence, in this case owned by a foreign 
entity trying to be able to deliver content to the United 
States to affect the national conversation. That is the 
challenge that we have because there is not a challenge on what 
Americans want to be able to talk about. The challenge is a 
foreign entity reaching into the United States and saying: Hey, 
I want to try and influence you by delivering content to your 
box that may try to sway opinions on this.
    So, two things I would say on this: First of those is, the concern is not just a TikTok or a foreign entity, a Russia, an Iran, trying to be able to put bad content in, misinformation, disinformation, but it is also the feeding of 
the quantity of the algorithm. This is an area where Americans 
have got to be able to rebuild trust. I would say there is a 
lot of suspicion, because the delivery of what content is 
actually coming to your feed is an area of skepticism, whether 
it is in a Google search or whether that is in whatever they 
are getting from a social media network on it.
    How do we actually set in front of the American people 
enough transparency that there is a trust that is neutral in 
what is delivered, yet your task is to keep people looking at 
the screen all day, so you are trying to feed them information 
they want to see more of?
    How do we hit that rhythm on it, because that will be 
important for Americans, period, in their own dialogue?
    Anybody want to try that one?
    Mr. Clegg. I will try, Senator.
    I think, Senator, you pinpoint a very important issue, 
which is algorithms in a sense deal with a practical problem 
which is there is an infinite amount of content that you can 
show people, but of course, people have a limited amount of 
time they are scrolling on their feed, so you have to somehow 
rank and funnel it.
    And I believe the way to square that circle that, Senator, 
you quite rightly allude to is giving people confidence that 
these algorithms are working for them and not against their 
interests. It is firstly to give people real control.
    So, for instance on our services you can just turn the 
algorithm off. You can just have it chronologically delivered 
instead. You can click on to the three dots and you see exactly 
why you are seeing a post. You can say you don't want to see 
certain ads. You can prioritize certain content or not. I think 
user controls are crucial.
    And secondly, we need to be transparent. We need to be 
transparent about what are the signals that we use in the 
algorithms. We publish, alongside our financial results every 12 weeks, for instance, a full transparency report showing how we act on content that violates our policies. We have it audited by EY, so we are not marking our own homework, if I can put it like that.
    So, I think user agency and sort of control and a maximum 
amount of transparency for the company are the key ingredients 
here.
    Senator Lankford. Mr. Walker.
    Mr. Walker. Just to follow up on that. We seriously take 
the point about maintaining and building trust in the services. 
So, some of the ways we do that are anchoring our results in 
raters who are located throughout the United States, in rural 
and urban areas, 49 States at last count. That is the ground 
truth for many of our services. But beyond that, we do things 
like, for example, on YouTube, not just promoting the most 
popular videos, but the videos that users have found the most 
valuable.
    We will survey our users the day after: Did you have a good 
experience in the service? Did you find this a valuable use of 
your time?
    And then making sure that we are consistently and clearly 
enforcing--transparently enforcing our policies, which we also 
publish.
    It is a responsibility we take very seriously.
    Senator Lankford. It is. And it is something that is 
incredibly important. And it is also consistent with law on 
this as well.
    Mr. Smith, you made a comment earlier that Iran is 
fighting against Trump, Russia is fighting against Harris, and 
we see the noise that is out there on this and the awareness of 
it. I do think it is important that we have this conversation 
to make Americans well aware that not everything they see is 
accurate and correct and there are things that are very deliberate. But one of the challenges that we have got to figure out, both as a committee and from you, is attribution, that 
when something shows up, how to be able to designate that: 
``Here is where that originated'' because by the time it is 
shared 50 times to 50 places, people don't know where it 
originated anymore.
    So, one challenge is taking off content that is Russian 
content, Iranian content, that is deliberately a means to 
attack and to disturb Americans in whatever way that may be, 
but another one is to be able to make sure that when it gets 
out there that people are well aware of it. We can't tell the story that this is disinformation, misinformation, unless we get 
fast attribution on that. And that has to be something we have 
to work out.
    Chairman Warner. And again, I have got critiques of all 
three of these companies. I will come back to some of those, 
but on this one, they have been more forward leaning, because 
if they don't share that by the time the IC or law enforcement 
picks it up, it may be too late.
    Senator Ossoff.
    Senator Ossoff. Thank you, Mr. Chairman, and thank you all 
for joining us.
    On the point about attribution and identification of 
foreign covert influence, Mr. Walker, give us a sense of your 
independent capacity absent case by case warning or 
notification from the U.S. Government of content on your 
platforms that is foreign covert influence?
    Mr. Walker. It is challenging as was talked about earlier. 
Russia has moved beyond paying for things in rubles and only 
working between 9 and 5 Moscow time. So, they are increasingly 
making it more difficult to identify things.
    That said, we have 500 analysts and researchers working on 
the Mandiant team, Google threat intelligence, who are tracking 
between 270 and 300 different foreign cyberattack groups at any 
given point, tracking activities, metadata, et cetera, through 
our services and sharing it with the security teams that are 
represented here and elsewhere in the industry and also working 
with the FBI's foreign influence task force.
    Senator Ossoff. Let me put it this way: Do you think you 
are mostly across it and playing Whac-A-Mole or do you think 
you fundamentally lack the ability to know how much you don't 
know?
    Mr. Walker. I think the humble and probably accurate 
statement would be the latter, because the adversaries are 
always moving forward and it's a constant cat and mouse game.
    Senator Ossoff. You mentioned earlier using machine 
learning or algorithmic tools to try to identify it. Is that on 
the basis of network activity and posting tactics as opposed to 
content where there is a risk of collateral damage, you might 
suppress bona fide American speech because oftentimes what the 
foreign actors are amplifying resembles perhaps extreme or 
polarizing speech that is happening organically in the country?
    Mr. Walker. It is a deep and important question, and the 
answer differs to some degree across the different platforms; 
because a pure social network, as Mr. Clegg was referring to, 
will have more behavioral information. We may have more 
content-related or metadata style information.
    We do try and share across the different platforms where we 
can, but inherently there is some sort of an assessment of the 
nature of the content. We talked a little bit about provenance 
in AI or metadata in AI. That's going to be a component of it. 
Network activity is a component of it, and then behavior 
signals will also be a component of it.
    Senator Ossoff. OK. In addition to attribution, let's talk 
about authentication.
    Mr. Smith, you mentioned the Slovakian example, I believe. 
Let's game it out, all right? I think we need to be able to 
discuss this out in the open how this might unfold in the 
United States and who bears responsibility for handling it. 
There might be some very compelling, seemingly authentic, deep 
fake audio clip which is, in fact, fake and defamatory 
implicating a candidate for office in the United States in the 
hours or days or weeks before an election. How confident are 
you that either you or another private sector actor or somebody 
else has the capacity to identify this fake, particularly where 
we can't rely on one campaign or the other necessarily to in 
good faith acknowledge that something which is useful to them 
because it deliberately defames and mischaracterizes the 
statements or conduct of their political opponent, isn't real?
    Mr. Smith. Well, I'd say first, I think you have offered a 
word of wisdom in saying we have to always act with a sense of 
humility, and hence I think we should require of ourselves an 
extraordinarily high level of confidence, approaching 
certainty, before we take action.
    Having said that, I do think especially given our ability 
to use AI to identify the creation of a fake and just the good 
old human judgment that comes from crowd sourcing, especially 
for video, we can identify a great deal. And I then think what 
it translates into is another part of your question. Great, 
what do we do about it?
    There will be days--or it could be hours--when the most 
important thing we will need to do is alert the public so that 
there is a well-informed conversation.
    But I also think this points more broadly to what is a 
systemic strategy to try to address the problem that we are 
worried about here.
    Senator Ossoff. Well, because time is short, let me try 
this question, and ask it of each of you.
    What will you do? What is your policy if, in that critical 
time period before an election, there is deep fake content 
attacking a candidate for office which can be demonstrated to 
be inauthentic but cannot be decisively attributed to a foreign 
actor, how would you handle it?
    Mr. Clegg. We would label it. We would label it so that 
users would see that the veracity or truth of it is under real 
question. So, we would label it.
    Senator Ossoff. What about how it is handled in the 
algorithm in its amplification or suppression?
    Mr. Clegg. We would also use our ability to demote the 
circulation of it.
    Senator Ossoff. Mr. Smith.
    Mr. Smith. We don't have the same issue in terms of a 
consumer platform, but I think that the notification to the 
public, the labeling, I do think that is the essence of what we 
all need to be prepared to do very quickly.
    Mr. Walker. And I would add to that that we would notify 
the foreign influence task force so that there was government 
awareness of the situation.
    Senator Ossoff. Thank you.
    Chairman Warner. Thank you, gentlemen. I've got a few more 
comments.
    I guess where I would start is, I remember all three of 
you in Munich, when companies like TikTok and X signed on to 
that agreement. Again, I am amazed and disappointed, 
particularly with X's failure to participate and its failure 
in any way to adhere to that document.
    But if what you have just said is--I want to make sure we 
didn't get off just on Fox and Washington Post, but moving this 
publication forward, another example.
    If we got a watermarking system, the fact that this is 
content that didn't originate with you but was placed on your 
platform--these are not watermarked. I'm not sure there is a 
way that anyone that is a normal consumer--because you've got a 
byline, you've got authentic ads on the other side--is going 
to find that. And again, since they ended up on yours, I'm 
going to--you know, you want to protect your brands. These are 
brand clients. Why didn't we catch this?
    Mr. Clegg. So, I think the key challenge here is to disrupt 
and remove the underlying networks of fake accounts that 
generate this content.
    Chairman Warner. We appreciate what you did yesterday.
    Mr. Clegg. That is the only foolproof way that we can deal 
with this, because otherwise, as you quite rightly say, 
Senator, we are just playing Whac-A-Mole on individual pieces 
of content.
    The companies at this table, and other companies besides, 
have I think made real material progress since we assembled 
together in Munich--for instance, to agree on interoperable 
standards of not only visible watermarking but also so-called 
metadata and invisible watermarking. So, as social media 
platforms, when we ingest content from elsewhere, we can 
detect those invisible signals and alert our users to them. 
But of course, bad actors--in this case, foreign actors, 
Russian networks--are not going to introduce those.
    Chairman Warner. They are not going to put the watermark 
on.
    Mr. Clegg. Correct, which is why for us the overriding 
objective is always to disrupt the wider network.
    Chairman Warner. But again, at the end of the day, what I 
don't understand is--whether this was on Facebook or appeared 
on Google or appeared on YouTube or appeared on X--the URL is 
the distinguishing characteristic. The consumer is not going to 
get that. Should that be simply the government's responsibility 
to spot that? Don't we need you leaning in on that issue?
    Mr. Clegg. Yes, of course, absolutely.
    Chairman Warner. So, one of the things, because we are--we 
keep coming back with we are 48 days away.
    You know, I'm going to ask you, Mr. Smith, as well, but let 
me start with Mr. Walker and Mr. Clegg.
    I need to know--starting with this kind, and we will 
share all the ones that came out of the Justice Department 
report--how many Russian-manipulated images that are completely 
false, that sow dissension, that undermine campaigns--how many 
Americans have seen those? Because clearly your whole metrics 
model is based on how many eyeballs you get. We have got to 
have that information.
    I also believe that there are a series of ads--and we will 
share those again with the companies in more detail--that are 
getting through the protections at this point. We need to know 
how many of those ads got through, because--my concern is when 
people undermine this and say: This is only memes, or this is 
not a serious issue. Again, Americans have the right to say 
anything no matter how ``out there'' it is. But back to what 
Senator Cornyn 
said, you know, the notion even around reciprocity, the idea 
that Russia or China would allow this kind of manipulation on 
their social media is beyond the pale. Of course, they 
wouldn't.
    So, we need that because the one thing we do know, most all 
of us will agree, in the next 49, 48 days, it is only going to 
get worse. And having that data now, not to embarrass what 
happened at least on Facebook, to say: Hey, you know, x-
millions of Americans saw this kind of fake content. Just be 
aware, because chances are no matter what we do, we are not 
going to stop all of this from coming down, but that measure 
would help identify.
    I also think on the ads. I mean, I know it has gotten 
better. Mr. Walker, you mentioned the fact that you don't take 
payment in rubles anymore from 9 to 5 Moscow time, but there is 
still a ton of this getting through, and we need better data at 
this point. So, I will expect that very shortly.
    If you still have colleagues or friends at X, I sure as 
heck invite them to be a part of the solution, as opposed to 
sometimes exacerbating the problem.
    And we have those who don't play. I mean, X, TikTok, this 
whole set of Discords, the Telegrams. There are others. They 
almost in some cases pride themselves on giving the proverbial 
middle finger to governments all around the world, which I 
think raises huge issues as well. So, I'd like to have that 
information--I think Senator Ossoff has one more--and as soon 
as possible--and I will have one last closing comment.
    Senator Ossoff.
    Senator Ossoff. I will be brief, Mr. Chairman.
    Just to note--and the committee has made public some of the 
underlying information that was contained in the charging 
documents related to the specific recent Russian effort for 
which there were 32 domain seizures, Doppelganger, whose 
planning documents specifically identified ``swing states whose 
voting results impact the outcomes of the elections more than 
other states,'' and named in particular Georgia as a 
destination for this covert Russian influence.
    We talked about attribution. We talked about 
authentication. I think we have also been discussing the 
importance of having a society that is resilient, that takes a 
skeptical and critical approach to information.
    One of the challenges we have is for some avid consumers of 
political content anything which seems to affirm one's partisan 
perspective is deemed credible without that kind of critical 
scrutiny.
    For my constituents in Georgia, who have recently been 
targeted by this foreign covert influence campaign, but for the 
whole nation: how do you think about your role--and I invite 
you to comment on the role of public leaders, elected leaders 
as well. How do we build that kind of resilience across society 
such that we don't just accept anything that seems to affirm 
our world view or denounce our enemies, but recognize that, 
foreign and domestic, there are a lot of folks telling lies and 
a lot of folks taking an interest in manipulating us?
    Mr. Clegg, want to take a shot at that?
    Mr. Clegg. Well, the first thing, I think, as has been 
mentioned by a number of Senators already: we can learn a lot 
from countries like the Baltics. Moldova, I think, is a country 
right now on the frontline, facing a lot of Russian 
interference. Taiwan--the Taiwanese election recently--all 
these countries in different situations are dealing with major 
adversaries who are trying to interfere in their elections.
public skepticism, voter skepticism, is probably the greatest 
antidote to a lot of this. And I do think political leadership 
can play a role in fostering that.
    The other thing which I think is crucial, and that is on 
us, is every time we find networks like that, we need to share 
that as widely as possible with researchers, with our 
colleagues in the tech industry, with governments. For 
instance, we now publish an adversarial threat report every 12 
weeks, and we have done so for the last few years.
    And Doppelganger--Senator, you mentioned Doppelganger. It 
was our threat intelligence team that identified Doppelganger 
first, 2 years ago. We blocked around 5,000 accounts and pages 
in a 3-month period this year.
    We have placed a lot of the signals we were able to detect 
on GitHub so that everybody can look at that and everyone could 
learn from that experience, and we've got people that come in, 
scrutinize it, tell us what we got right and what we got wrong. 
I think that interchange of research and data is crucial to 
develop public and societal resilience in the long run.
    Senator Ossoff. And education plays a role as well. Thanks.
    Let me ask this final question.
    Oh, Mr. Walker, go ahead.
    Mr. Walker. Just very briefly, I want to give one example 
because it is obviously a deep democratic question at a time 
when trust in institutions of all kinds is going down. But in 
one specific case study that might be helpful, YouTube has 
launched a program called ``Hit Pause.'' It is a series of 
short videos designed to remind people not to believe 
everything they see; that if facts are one sided, if it is an 
overly emotional kind of pitch, et cetera, there are a series 
of ways of framing things that are often used by people pushing 
false information. We found in independent research that the 
effect of some of those short exposures can actually last for 
months. People become more resistant to fake news.
    Mr. Smith. I would just underscore that. That is an 
excellent initiative. We have been doing similar work at 
Microsoft. We really sharpened our ability in the European 
Union Parliamentary elections. We ran a paid media advertising 
campaign around checking and rechecking before people make up 
their mind and vote. It reached 350 million people outside the 
United States.
    That is why we are bringing that to the United States. 
Certainly, the swing States are critical. And it is not just 
advertising. It is getting out on drive time radio, local 
press, to help bring this message so that the American public 
has the information it needs.
    Senator Ossoff. Thank you. Final question.
    Mr. Clegg, putting aside law and regulation, when you think 
about, for example, your employer's social obligations and how 
you meet those social obligations in the decisions that you 
make about how content is labeled or how your algorithms treat 
content, in a society where sharp-elbowed political debate is 
part of the process and free speech is cherished as a value in 
addition to being a constitutional right, what is the 
distinction between the role that your teams are fulfilling in 
making those calls and the traditional editorial judgment that 
a traditional news organization would make?
    Mr. Clegg. The fundamental difference is that we don't 
generate the content. So, it is user-generated content that 
circulates on our apps and services. It is almost an inversion 
of 
the top-down way in which information is selected and 
handpicked by editors sitting in editorial suites for 
newspapers.
    Senator Ossoff. But you decide what is on the page.
    Mr. Clegg. We decide, as I said earlier--or rather, we 
have systems that seek to ensure that every person's feed is 
in a 
sense unique to them. It reflects their interests. It reflects 
what they enjoy spending time on. As it happens, the vast 
majority of people don't use Facebook and Instagram, for 
instance, to argue about politics. So, news and news links 
constitute around 3 percent of the total content on Facebook.
    Most people use our services for much more playful, 
innocent--you know, connecting with family and friends, family 
holidays, family birthdays, bar mitzvahs, barbecues, you name 
it. And that is reflected in the overwhelming majority of 
content of our services.
    Senator Ossoff. Thank you.
    Chairman Warner. That sounds to me like a backhanded 
description of the protections around Section 230, which I 
fundamentally disagree with you on.
    Again, I don't accept that characterization. That was the 
same characterization that initially people made about TikTok. 
``What could be so wrong about people sharing cat videos?'' 
Although cat videos may take on a political stripe right now. 
Yet now, the number is 30, 40, 50 percent of 18- to 24-year-
olds get all of their news or a vast majority of news from 
TikTok?
    Again, I do not accept the notion of ``we are just 
independent creators.'' There are algorithms that shift what 
you see, how much you see. Tech colleagues that we both know 
have said there has never been a more creative, addictive, 
crack-like tool than TikTok in terms of tracking and keeping 
users.
    Again, the point that Senator Cornyn raised--and that the 
vast majority of us here share--is that the ultimate dials can 
be turned by CCP leadership in terms of what content you 
receive. I believe that is a huge national security concern.
    I also just want to point out that the independent 
reviewers, I agree, that's good. And I do think there is a role 
for the academic reviewers.
    I think we are less safe today because many of those 
independent academic reviewers have been litigated, bullied, or 
chased out of the marketplace. That concerns me.
    I also would like to see not just one-off answers, but 
something from all three of you to the committee that Senator 
Rubio and I will review and share with our colleagues.
    I think this point about the 48 hours, Brad, that you 
raised--I think we have put attention on that, but I think the 
post-election 48 hours is going to be equally important. And I 
would like to hear with specifics what kind of surge capacity 
each of your institutions is going to have as we get closer, 
because I am not going to litigate here whether or not you 
have cut back your content moderation. And again, not content 
moderation on a political bent, but content moderation in 
terms of whether your users actually adhere to your own terms 
of service.
    I would simply state for the record: the overwhelming 
majority of outside observers, I think across political 
stripes, have said most of you have cut back. But you made 
your points. We don't have to relitigate.
    So again, I want to know how many folks have seen this 
content, especially in these targeted States; how many ads 
have gotten through; and what we are going to move forward on.
    I would also--I bit my tongue earlier before Senator 
Lankford got on--and I do think I have worked with each of you 
and each of your companies. There are places where we agree. 
There are places where we disagree. And I do believe, you know, 
Congress's batting record on social media platforms and on AI 
is virtually zero in terms of laws being passed, maybe with the 
exception of TikTok.
    I would point out that when we had the largest AI dog and 
pony show, at the emergence of AI, when your CEOs, colleagues, 
and everybody else were there, Senator Schumer at that point 
asked: How many of you think we need regulation?
    Everybody raised their hand.
    And you know, I have got a half a dozen bipartisan AI laws 
or bills, some of them addressing things like how we avoid 
those entities that circumvent the watermarks that you and 
others may put in. But for the most part--and since I get the 
last word, I will leave this without contradiction--Everybody 
is for it in theory until you see words on the page. And there 
is always a reason why ``we can't really do that'' or ``oh, 
gosh, if we do that, we are going to slow down innovation'' 
or ``if we do that China is going to leap ahead.''
    And this is not the topic for today, but I think there is a 
whole lot of us--virtually every parent in America today would 
say that had there been a few guardrails on social media back 
in 2014, we might have a heck of a lot healthier kids in this 
country in terms of mental health issues. Not the subject for 
today, but something that the vast majority of Americans 
believe, including me.
    So, we have made--you know, as I go through my statement, 
we have made some progress.
    I do worry that this is not going to lead the news tonight. 
The fact that Russia and Iran--we don't have the kind of 
visuals yet; I hope we will get the visuals on what Iran has 
done--but that Russia, using brands that most Americans on 
either end of the political spectrum respect, FOX News and the 
Washington Post, Americans are seeing things that look like 
their content but are not. It is coming from Moscow. And 
anyone who thinks that is appropriate--I just don't think that 
reflects where we are in this democracy.
    I will end where I started. We have more than enough 
differences amongst Americans. We have a God-given, or 
constitutionally given, First Amendment right that allows us to 
say anything, no matter how stupid, unless it is the equivalent 
of ``Fire!'' in a crowded theater.
    We should have those debates, but we sure as heck should 
be concerned about foreign government services. This is not 
some one-off entity. These are foreign spy services who by 
definition want to undermine our country. When they are trying 
to sway an already very close election, we all should be 
concerned about that.
    I appreciate you all being here. I wish more of your 
colleagues in the sector would be as engaged.
    I think I have given you all some to-do work, and my hope 
is we will have some of that information because the clock is 
ticking, as you all have said.
    I would hope we would get some preliminary information back 
even by the middle of next week. Let's see if we can get this 
as we 
go into October.
    With that, I did promise Senator Rubio I wouldn't go off on 
some other tangent, so I will respect that right now and say we 
are adjourned.
    (Whereupon, at 4:35 p.m., the hearing was adjourned.)

[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]


                            [all]