[Senate Hearing 117-681]
[From the U.S. Government Publishing Office]




                                                        S. Hrg. 117-681

               SOCIAL MEDIA'S IMPACT ON HOMELAND SECURITY

=======================================================================

                                HEARING

                               before the

                              COMMITTEE ON
               HOMELAND SECURITY AND GOVERNMENTAL AFFAIRS
                          UNITED STATES SENATE

                    ONE HUNDRED SEVENTEENTH CONGRESS


                             SECOND SESSION

                               ----------                              

                           SEPTEMBER 14, 2022

                               ----------                              

          Available via the World Wide Web: http://govinfo.gov

                       Printed for the use of the
        Committee on Homeland Security and Governmental Affairs
        
        
        
        
 [GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]       
        



                                 ______

                 U.S. GOVERNMENT PUBLISHING OFFICE
 51-542                   WASHINGTON : 2023


        COMMITTEE ON HOMELAND SECURITY AND GOVERNMENTAL AFFAIRS

                   GARY C. PETERS, Michigan, Chairman
THOMAS R. CARPER, Delaware           ROB PORTMAN, Ohio
MAGGIE HASSAN, New Hampshire         RON JOHNSON, Wisconsin
KYRSTEN SINEMA, Arizona              RAND PAUL, Kentucky
JACKY ROSEN, Nevada                  JAMES LANKFORD, Oklahoma
ALEX PADILLA, California             MITT ROMNEY, Utah
JON OSSOFF, Georgia                  RICK SCOTT, Florida
                                     JOSH HAWLEY, Missouri

                   David M. Weinberg, Staff Director
                    Zachary I. Schram, Chief Counsel
         Christopher J. Mulkins, Director of Homeland Security
                 Alan Kahn, Chief Investigative Counsel
             Moran Banai, Senior Professional Staff Member
                Pamela Thiessen, Minority Staff Director
            Sam J. Mulopulos, Minority Deputy Staff Director
      Clyde E. Hicks, Jr., Minority Director of Homeland Security
        Margaret E. Frankel, Minority Professional Staff Member
                     Laura W. Kilbride, Chief Clerk
                   Ashley A. Gonzalez, Hearing Clerk
                   

                            C O N T E N T S

                                 ------                                
Opening statements:
                                                                   Page
    Senator Peters...............................................     1
    Senator Portman
    Senator Johnson
    Senator Lankford
    Senator Romney
    Senator Hawley
    Senator Rosen................................................    26
    Senator Hassan...............................................    29
    Senator Carper...............................................    53
    Senator Sinema...............................................    58
    Senator Padilla..............................................    61
    Senator Ossoff...............................................    68
    Senator Scott................................................    86
Prepared statements:
    Senator Peters...............................................    95
    Senator Portman..............................................    99


                               WITNESSES
                     Wednesday, September 14, 2022

Alex Roetter, Former Senior Vice President for Engineering (2014-
  2016), Twitter.................................................     5
Brian Boland, Former Vice President (2018-2020), Partnerships 
  Product Marketing, Partner Engineering, Marketing, Strategic 
  Operations & Analytics, Facebook...............................     7
Geoffrey Cain, Senior Fellow for Critical Emerging Technologies, 
  Lincoln Network................................................     9
Chris Cox, Chief Product Officer, Meta...........................    41
Neal Mohan, Chief Product Officer, YouTube.......................    42
Vanessa Pappas, Chief Operating Officer, TikTok..................    44
Jay Sullivan, General Manager of Bluebird, Twitter...............    46

                     Alphabetical List of Witnesses

Boland, Brian:
    Testimony....................................................     7
    Prepared statement...........................................   110
Cain, Geoffrey:
    Testimony....................................................     9
    Prepared statement...........................................   117
Cox, Chris:
    Testimony....................................................    41
    Prepared statement...........................................   125
Mohan, Neal:
    Testimony....................................................    42
    Prepared statement...........................................   129
Pappas, Vanessa:
    Testimony....................................................    44
    Prepared statement...........................................   136
Roetter, Alex:
    Testimony....................................................     5
    Prepared statement...........................................   104
Sullivan, Jay:
    Testimony....................................................    46
    Prepared statement...........................................   154

                                APPENDIX

Senator Peters Washington Post article...........................   161
Senator Peters Facebook Auto Generates Pages for Extremist Groups   162
Verge Article....................................................   163
Senator Johnson quote from Rochelle Walensky.....................   192
Senator Johnson censored chart...................................   193
Senator Johnson Drug Adverse Event Comparison Chart..............   194
Senator Johnson chart from Public Health England.................   195
Senator Scott chart..............................................   196
Google's response letter to Senator Scott........................   197
Response to post-hearing questions submitted for the Record
    Mr. Cain.....................................................   200
    Mr. Cox......................................................   203
    Mr. Mohan....................................................   248
    Ms. Pappas...................................................   290
    Mr. Sullivan.................................................   339

               SOCIAL MEDIA'S IMPACT ON HOMELAND SECURITY

                              ----------                              


                     WEDNESDAY, SEPTEMBER 14, 2022

                                     U.S. Senate,  
                           Committee on Homeland Security  
                                  and Governmental Affairs,
                                                    Washington, DC.
    The Committee met, pursuant to notice, at 10 a.m., in room 
SD-342, Dirksen Senate Office Building, Hon. Gary Peters, 
Chairman of the Committee, presiding.
    Present: Senators Peters, Hassan, Rosen, Ossoff, Portman, 
Johnson, Lankford, Romney, Scott, and Hawley.

            OPENING STATEMENT OF CHAIRMAN PETERS\1\

    Chairman Peters. The Committee will come to order.
---------------------------------------------------------------------------
    \1\ The prepared statement of Senator Peters appears in the 
Appendix on page 95.
---------------------------------------------------------------------------
    In recent years, domestic terrorism, and specifically white 
supremacist, conspiracy related, and anti-government violence, 
has become one of our nation's greatest homeland security 
threats.
    Last October, the Committee held a hearing to examine the 
role social media platforms play in the amplification of 
domestic extremist content and how that content can translate 
into real-world violence. We heard from expert witnesses who 
discussed how recommendation algorithms, ad targeting, and 
other amplification tools end up pushing increasingly extreme 
content to users because that type of content is what keeps 
people active on the platforms.
    Unfortunately, because these platforms are designed to push 
the most engaging posts to more users, they end up amplifying 
extremist, dangerous, and radicalizing content. This includes 
QAnon, Stop the Steal, and other conspiracy theories, as well 
as white supremacist and anti-Semitic rhetoric.
    In some cases, this content may not necessarily violate a 
company's community guidelines. In other cases, even content 
that is in clear violation of company policies remains on the 
platforms, and is often only removed after public pressure. In 
both cases, this content does significant harm to our society 
and stokes real-world violence.
    We have seen this happen time and time again. From the 2017 
neo-Nazi ``Unite the Right'' rally in Charlottesville, Virginia 
that was organized using a Facebook event page, to the violent 
January 6, 2021, attack on the U.S. Capitol spurred to action 
in part by Stop the Steal content that repeatedly surfaced 
online, to the shooter who livestreamed as he massacred Black 
shoppers at a Buffalo supermarket, there is a clear connection 
between online content and offline violence.
    Over the years, we have heard many explanations from social 
media companies about their content moderation policies, 
efforts to boost trust and safety, and actions taken to remove 
harmful accounts.
    There is no question that those efforts are certainly 
important, but there is a question of whether those actions are 
enough to effectively address the spread of dangerous content 
online and the resulting threats it poses to our homeland 
security.
    The central question is not just what content the platforms 
can take down once it is posted, but how they design their 
products in a way that boosts this content in the first place, 
and whether they build those products with safety in mind to 
effectively address how harmful content spreads.
    That is the focus of today's hearing where we will have the 
opportunity to hear from two panels of witnesses, outside 
experts, including former Facebook and Twitter executives, as 
well as current senior executives from Meta, YouTube, TikTok, 
and Twitter, who are charged with designing social media 
products used by billions of people around the world.
    The overwhelming majority of social media users have very 
little information about why they see certain recommended 
content in their feed, and there is very limited transparency 
into how social media companies balance their business 
decisions with the need for online safety, including what 
resources they invest into limiting the spread of harmful 
content.
    Our goal is to better understand how company business 
models and incentive structures, including revenue generation, 
growth, and employee compensation, determine how social media 
products are built and the extent to which current incentives 
contribute to the amplification of content that threatens 
homeland security.
    For nearly a year, I have been pressing Meta, YouTube, 
TikTok, and Twitter for more information on their policies to 
monitor and remove extremist and conspiracy content that 
advocates violence, as well as the relationship between their 
recommendation algorithms and targeted advertising tools that 
generate much of the companies' revenues, and the amplification 
of extremist content.
    The companies' responses to those inquiries have been 
incomplete and insufficient so far.
    This morning, we will hear from two former executives and a 
technology journalist with social media expertise about the 
internal product development process and the business decisions 
these companies make, including tradeoffs between revenues and 
growth and their trust and safety efforts, as well as how they 
interact with foreign governments.
    Later this afternoon we will hear directly from the Chief 
Product Officers (CPO) of Meta, YouTube, and Twitter and the 
Chief Operating Officer (COO) of TikTok, the Executives who are 
charged with making these business decisions and driving the 
strategic vision of the companies.
    I certainly look forward to a productive discussion with 
both panels. Welcome to this Committee here today. We look 
forward to your testimony.
    Ranking Member Portman, you are now recognized for your 
opening comments.

            OPENING STATEMENT OF SENATOR PORTMAN\1\

    Senator Portman. Thank you, Mr. Chairman, and I thank the 
experts for being here. We look forward to hearing from you. 
This is going to be an interesting hearing.
---------------------------------------------------------------------------
    \1\ The prepared statement of Senator Portman appears in the 
Appendix on page 99.
---------------------------------------------------------------------------
    This past Sunday marked the 21st anniversary of the 
tragic September 11, 2001 (9/11) terrorist attacks, and over 
these past couple of decades our country has adapted to combat 
the most pressing threats to our nation's security, and that is 
good. But the advent of social media has added a new dimension 
to the ever-evolving threat landscape and created new 
considerations for combating terrorism, human trafficking, and 
many other threats.
    During last October's hearing on how algorithms promote 
harmful content I focused on how social media business models 
contribute to the amplification of terrorism and other 
dangerous activities. Since then, the Committee has identified 
ways in which social media companies' product development 
processes tend to conflict with user safety. Whistleblower 
testimony has revealed that on numerous occasions the leaders 
at social media companies were aware that certain platform 
features increased threats to user safety and chose not to 
mitigate such concerns. We will hear about that today.
    It is unfortunate that the American public must wait for 
whistleblower disclosures to find out about ways in which 
platforms are knowingly and unknowingly harming their users. 
The lack of transparency in the product development process, 
the obscurity of algorithms, and misleading content moderation 
statistics create an asymmetric information environment in 
which the platforms know all, yet the users and policymakers 
and the public actually know very little.
    One consequence of this lack of transparency is related to 
China. I have serious concerns about the opportunities that the 
Chinese Communist Party (CCP) has to access TikTok's data on 
American users. There are now over 100 million Americans who 
use TikTok, including 40 million under the age of 19. This 
TikTok data remains vulnerable to the Communist Party of China, 
both as the CCP tries to exploit its access to U.S. data and as 
it seeks to exert influence over the content that U.S. users see.
    For example, despite the movement of U.S. user data to 
servers in the United States, TikTok and ByteDance employees in 
China retain the ability to access it. If that is not true we would like 
to hear about that today.
    Also we learned yesterday, from Senator Grassley's opening 
statement in a Senate Judiciary Committee hearing with the 
Twitter whistleblower that Twitter failed to prevent Americans' 
data from being accessed by foreign governments. In fact, 
Senator Grassley spoke about how several Twitter employees were 
actually foreign agents of India, China, and Saudi Arabia, 
which is concerning and speaks to why Congress needs more 
information from platforms on how they secure user data.
    Another consequence of poor transparency relates to content 
moderation. While I recognize that content moderation is a key 
component to creating safe platforms for users, it cannot be 
the only thing. Transparency reports released by companies 
often detail the amount of content that has been removed for 
violating company policy. However, these reports do not account 
for violating content that is left up on the platform and yet 
goes undetected.
    These reports also do not account for content that is incorrectly 
censored, as we often see with many conservative voices on 
social media. I, like many of my colleagues, have been critical 
of the political biases held by big tech platforms, which have 
resulted in systematic takedowns of accounts that hold 
ideologies with which the left and liberal media disagree.
    We will hear about that today, but these takedowns are 
often done under the guise of combating misinformation when, 
in fact, they are just combating conservative viewpoints that 
conflict with their own. Any steps taken to address the impact 
of social media on homeland security must account for First 
Amendment protections, of course, and safeguard free speech.
    For us to have a responsible conversation about the impact 
of harmful content on American users and homeland security we 
need to talk about how current transparency efforts have worked 
or not worked. Congress must enact legislation that will 
require tech companies to share necessary data so that research 
can be done to evaluate the true extent of how harms from 
social media impact Americans.
    As some of you know, I have been working on legislation 
along those lines with Senator Coons to establish bipartisan 
legislation to do just that. The Platform Accountability and 
Transparency Act (PATA) would require the largest tech 
platforms to share data with vetted independent researchers and 
other investigators so that we can all increase our 
understanding of the inner workings of social media companies 
and regulate the industry based on good information that we 
simply do not have now, that we can learn through this process.
    Again, I thank the witnesses for being here and I look 
forward to having your expertise help to guide us in these 
complicated issues, and thank you, Mr. Chairman, for holding 
this hearing.
    Chairman Peters. Thank you, Ranking Member Portman.
    It is the practice of this Homeland Security and 
Governmental Affairs Committee (HSGAC) to swear in witnesses, 
so if each of you would please stand and raise your right 
hands.
    Do you swear that the testimony you will give before this 
Committee will be the truth, the whole truth, and nothing but 
the truth, so help you, God?
    Mr. Roetter. I do.
    Mr. Boland. I do.
    Mr. Cain. I do.
    Chairman Peters. You may be seated.
    Today's first witness is Alex Roetter, the former Senior 
Vice President of Engineering at Twitter. In his previous role, 
Mr. Roetter helped grow Twitter's monthly active users to over 
300 million and build the ad network from near zero revenue to 
$2.5 billion a year.
    Mr. Roetter also spent six years at Google on a variety of 
projects including building the world's largest computational 
advertising platform. He was in the room for major decisions 
about products at Twitter and is familiar with the priorities 
that were weighed as products were created, as well as how 
those products are then built.
    Mr. Roetter, welcome to the Committee. You may proceed with 
your opening remarks.

TESTIMONY OF ALEX ROETTER,\1\ FORMER SENIOR VICE PRESIDENT FOR 
                ENGINEERING (2014-2016), TWITTER

    Mr. Roetter. Good morning, Mr. Chairman and members of the 
Committee. Thank you for inviting me here today.
---------------------------------------------------------------------------
    \1\ The prepared statement of Mr. Roetter appears in the Appendix 
on page 104.
---------------------------------------------------------------------------
    We live in a world where an unprecedented number of people 
consume information from social networks. Viral content and 
misinformation can propagate on these platforms on a scale that 
is unseen in human history. Regulators must understand 
companies' incentives, culture, and processes to appreciate how 
unlikely voluntary reform is.
    In over 20 years of working in Silicon Valley as an 
engineer and as an Executive, I have seen firsthand how several 
of these companies work. Today I will talk about how these 
companies operate and actionable ways to demand transparency.
    The product development lifecycle works as follows. First, 
teams of product managers, engineers, and designers are 
assigned specific metrics to maximize. These metrics carefully 
track user engagement and growth as well as revenue and 
financial indicators. Other metrics, such as user safety, are 
either not present or much less important.
    Second, teams use an experimental system to launch changes 
to small percentages of users. The effect of every experiment 
on key metrics is measured extremely accurately. Absent are 
detailed metrics tracking impacts on user safety. For example, 
I never once saw a measurement such as did a given experiment 
increase or decrease the spread of content later identified as 
hate speech.
    Third, executives review these experimental dashboards 
regularly and make decisions on which experiments to launch. 
These reviews are run by product and engineering. Other 
functions like legal or trust and safety are absent or do not 
play a substantial role.
    Culturally, these companies are informal hierarchies with 
the ``builders,'' by which I mean engineers, product managers, 
and designers, held in the highest regard. Other functions are 
viewed much more skeptically. The strong bias is to make sure 
that corporate bureaucracy does not slow down product 
development.
    These companies conduct regular performance evaluations and 
promotions, and these drive peer recognition, career 
advancement, and cash and stock awards. The main data collected 
is what impact an individual's work has on key metric families. 
Only a minority of builders get promoted based on impact to 
trust and safety metrics, as those impacts are not valued as 
highly.
    What data has been shared to date is mostly selective, 
non-illuminating statistics designed to create the appearance 
that these companies are taking the problem seriously. When one of the 
largest companies in the world says it is spending what seems 
like a large, absolute number, that number must be put in 
context and compared to the size of other initiatives, for 
example, product efforts or how much they spent on stock 
buybacks. Large investment amounts are not sufficient. We must 
demand transparency based on measuring actual results.
    Similarly, when a company points to how much content it has 
taken down, that has to be understood in terms of its reach in 
the network. Removing a billboard in Wyoming is very different 
than removing a billboard in Times Square.
    For real transparency I recommend assembling an independent 
group of researchers and data scientists. Task them with 
enumerating the right questions to ask and the set of data they 
need to answer them. Fund them to continually do this work and 
refine their questions and data requests.
    The government is able to demand transparency in 
technically demanding fields. For example, third-party auditors 
of public company financial statements are able to balance the 
public's need for reliable financial statements with a 
company's need to keep information confidential.
    Until such transparency exists, every assurance by any of 
these companies has to be taken on faith. Transparency is 
necessary but not sufficient. Until we change the fact that 
user attention and profits are what companies care about above 
all else, all the data-sharing in the world will not address 
the problem.
    Policy and legal experts have previously testified before 
the Committee on ways that incentives could be changed. 
Incentives matter. Companies behave differently when they care 
about the quality of content. For example, having inappropriate 
ads could materially harm financial performance, so most 
advertising systems place ad copy review as a step that has to 
occur before the new ad ever makes its way to users. On the 
other hand, user-generated content is allowed to go live 
instantly.
    Incentives also shape companies' recommendation algorithms. 
For example, TikTok and ByteDance feed young people in China a 
diet of educational science and math content via their 
recommendation algorithms. The Chinese version of the app even 
enforces a daily usage limit. Contrast this to how U.S. 
companies target content to young Americans, optimizing for 
engagement and revenue at any cost.
    Any suggestion for more useful transparency will be met 
with many objections. The status quo is simply too lucrative. 
Do not underestimate these companies' ability to fight requests 
for information. After all, the legal team at Google alone has 
the same number of lawyers as all the employees of the Federal 
Trade Commission (FTC).
    Given what we know about companies' incentives, processes, 
and culture, we should not expect meaningful progress 
voluntarily, and we should view their commitments extremely 
skeptically, however, with the proper transparency and 
regulatory environment I believe we can change their incentives 
and start to see real, measurable progress against these 
problems. Thank you.
    Chairman Peters. Thank you.
    Our next witness is Brian Boland, a former Vice President 
of Partnerships Product Marketing, Partner Engineering, 
Marketing, Strategic Operations, and Analytics at Facebook. Mr. 
Boland worked at Facebook for 11 years. He worked in several 
roles including leading a 500-person multifunction team focused 
on product strategy, market strategy, partner engineering, 
operations, analytics, and marketing. These high-impact teams 
worked across Facebook products and features including watch, 
video, news, group admins, developers, payments, and audience 
network. Before joining Facebook, he worked at Microsoft and 
other tech companies.
    Mr. Boland, welcome to the Committee. You may proceed with 
your opening remarks.

   TESTIMONY OF BRIAN BOLAND,\1\ FORMER VICE PRESIDENT (2018-
  2020), PARTNERSHIPS PRODUCT MARKETING, PARTNER ENGINEERING, 
     MARKETING, STRATEGIC OPERATIONS, & ANALYTICS, FACEBOOK

    Mr. Boland. Good morning, Mr. Chairman, and Members of the 
Committee. Thank you for holding these hearings that cover such 
important issues for our nation and the world, and thank you 
for inviting me here today to provide testimony on my 
experiences as a senior executive at Facebook, now known as Meta.
---------------------------------------------------------------------------
    \1\ The prepared statement of Mr. Boland appears in the Appendix on 
page 110.
---------------------------------------------------------------------------
    For the last few years I have grown increasingly concerned 
about the roles that Facebook, Instagram, YouTube, Twitter, and 
TikTok play in driving the growth of misinformation, extremism, 
and generally harmful content. I worked at Facebook for 11 
years in a variety of leadership roles, helping to shape 
product and market strategies for a broad array of products, 
including advertising news, video media, and more. During my 
tenure at the company I worked for the most senior executive 
and was deeply embedded in the product development process.
    In the last two years of my time at the company, the 
CrowdTangle team and product were a part of my organization. 
CrowdTangle is a tool that provides limited, albeit industry-
leading, transparency into the public news feed content on 
Facebook. What finally convinced me that it was time to leave 
was that despite growing evidence that the news feed may be 
causing harm globally, the focus on and investments in safety 
remained small and siloed.
    The documents released by Frances Haugen, the Facebook 
whistleblower who last fall testified here, highlight issues 
around polarization globally and the power of Facebook to lead 
people down a path to more extreme beliefs. These papers 
demonstrate thoughtful, well-researched documentation of the 
harms that concerned me. The research was done by highly 
skilled Facebook employees who are experts in their field, and 
was extensive.
    Rather than address the serious issues raised by its own 
research, Meta leadership chooses growing the company over 
keeping people safe. While the company has made investments in 
safety, these investments are small and are routinely abandoned if 
they do not impact company growth. My experience at Facebook 
was that rather than seeking to research and discover issues on 
the platform before others found them, they would rather 
reactively work to mitigate the public relations (PR) damage 
for issues that came to light.
    I have come to believe that several circumstances have put 
Americans at risk from the content on these platforms. The 
first is the growth over safety incentive structure that leads 
to products that are designed and built without a primary focus 
on safety. The next is the unprecedented lack of transparency 
available from these platforms so that we can analyze content 
and understand the impact from these tools. Finally, the lack 
of clear oversight for the business practices of these 
companies.
    We have faced challenges like this before with new 
technologies. In the 1960s, Congress addressed the dramatic 
rise in fatalities caused by the rapid increase in automobile 
use in the United States. That industry experienced explosive 
growth, and its companies focused on growth in sales; it 
turns out safety did not sell. The creation of the National 
Highway Traffic Safety Administration (NHTSA), at the time the 
National Highway Safety Bureau (NHSB), empowered an agency to 
study the available data, and in partnership with researchers 
and other agencies to take steps to make driving in America 
rapidly and significantly safer. Today, automobile 
manufacturers portray safety as a selling point such that they 
welcome verification of these efforts.
    The problem with these social media platforms today is that 
we lack public data to understand the current issues, and there 
is extremely limited ability to research these platforms and 
almost no ability to protect our future by creating a version 
of crash testing for these platforms. Imagine if, in the 1960s, 
we had no way of knowing the deaths that were happening from 
cars, and no way of knowing that the toll was increasing so 
rapidly. That lack 
of data is where we are today with social media platforms.
    The reality is that for all the debate about whether social 
media is predominantly good or bad, the truth is that we do not 
really know. If anyone tells you they know, they do not know. I 
believe that we have a right to know. The good news is that 
with the right incentives in place and rules around 
transparency we can develop a better understanding of these 
issues and take steps to mitigate the harms.
    If we take these steps we can do now what we did with the 
automobile. We can empower agencies and researchers to deeply 
understand the issues, and through changes in incentives, 
public education, and better development, build a path to a 
future where we still get the amazing benefits from these 
products while mitigating the harms that we barely understand 
today.
    Today I hope to shed light on the product development process, 
internal and external incentive structures for these 
organizations, and the critical importance of transparency. I 
appreciate your work to better understand these issues and 
deliver real-world solutions to the American people. Thank you.
    Chairman Peters. Thank you.
    Our final witness of our first panel is Geoffrey Cain, 
Senior Fellow for Critical Emerging Technologies at Lincoln 
Network. Mr. Cain is an award-winning foreign correspondent and 
author. His work has taken him to the world's most 
authoritarian and remote places, from inside North Korea to the 
Trans-Siberian Railway across Russia, from investigations into 
genocide in Cambodia to experiments in technological 
surveillance in China.
    Mr. Cain has served as a TechCongress fellow with the 
House Foreign Affairs Committee minority and supported a range 
of issues, including China, tech sanctions, and investigative 
work.
    Mr. Cain, welcome to our Committee. You may proceed with 
your opening comments.

   TESTIMONY OF GEOFFREY CAIN,\1\ SENIOR FELLOW FOR CRITICAL 
             EMERGING TECHNOLOGIES, LINCOLN NETWORK

    Mr. Cain. Good morning, Chairman Peters, Ranking Member 
Portman, and Members of the Committee. It is an honor to be 
invited to testify here today on social media's impact on 
national security. Today I will talk about one of the greatest 
technological threats facing our homeland security and 
democracy: TikTok, the social media app that reports to a 
nefarious Chinese company called ByteDance.
---------------------------------------------------------------------------
    \1\ The prepared statement of Mr. Cain appears in the Appendix on 
page 117.
---------------------------------------------------------------------------
    As an investigative journalist in China and East Asia for 
13 years, I have been detained, harassed, and threatened for my 
reporting on Chinese technology companies. Today I will show 
you how TikTok has orchestrated a campaign of distraction and 
deflection to mask the alarming truth.
    Americans face the grave unprecedented threat of software 
in our pockets that contains powerful surveillance and data-
gathering capabilities, owned by private companies that must 
comply with the dictates of a foreign authoritarian government 
ruled under the Chinese Communist Party.
    The CCP has signaled its ambitions to assert global 
jurisdiction over private companies everywhere as a condition 
for doing business in China. TikTok, therefore, is a disaster 
waiting to happen for our security and the privacy of our 
citizens.
    We will have a TikTok executive here later today. According 
to their internal public relations guidelines, leaked to the 
media, they are required to, ``Downplay the parent company, 
ByteDance, downplay the China association, and downplay 
artificial intelligence (AI).''
    The public relations guideline states that if you ask them 
about the influence of the Chinese company, ByteDance, and its 
influence over its American product, TikTok, which is used by 
many Generation Z teenagers, its executives must deceptively 
tell you that ByteDance is a separate company in China and that 
you should talk to ByteDance instead.
    They will attempt to confuse you, claiming that TikTok 
takes a localized approach, hiring local moderators, 
implementing local policies, and showing local content. They 
will not tell you about an individual who is unnamed so far, 
called the ``Master Admin'' in Beijing--this has been leaked to 
the media, to Buzzfeed--who has had access to all Americans' 
data. They also will not tell you that they, at TikTok, report 
to ByteDance executives in China, and ByteDance reports to the 
Chinese Communist Party.
    TikTok's fast expansion into the American market was only 
possible because China has rigged the market. The Chinese 
government offered ByteDance vast market protection in China, 
all while banning competing American social media apps, 
Facebook, Instagram, Twitter, and Google.
    Like all Chinese companies, ByteDance runs an in-house 
Communist Party Committee that enforces the political loyalty 
of its employees. In 2018, ByteDance and TikTok's founder and 
previous Chief Executive Officer (CEO), a man named Zhang 
Yiming, wrote a public letter promising Chinese regulators that 
his company would follow ``core socialist values,'' would 
introduce these ``correct values into technology and products'' 
and would ensure his products promoted the Chinese Communist 
Party's agenda.
    These values, he wrote, included ``strengthening the work 
of the party construction,'' ``deepening cooperation with 
official party media,'' and strengthening ``content review in 
line with these party values.''
    ByteDance's public statement in China should be cause for 
alarm, considering American government employees, military 
personnel, and workers in sensitive and strategic industries 
use TikTok.
    When TikTok began growing its presence in the United States 
in 2016 and 2017, I was an investigative journalist in China's 
western region of Xinjiang, where I was writing my second book, 
The Perfect Police State, which is an investigation into the 
Chinese surveillance dystopia.
    I learned that ByteDance and TikTok were expanding into 
America, and I knew that this was ominous because I had been 
speaking to a former worker for the Ministry of State Security, 
a major and extremely powerful intelligence body, 
who had told me that he had worked with numerous companies, 
including ByteDance, to expose the data of ethnic minorities in 
China. It was not hard. It simply happened.
    ByteDance has also had an active role in suppressing news 
about the atrocities, which included physical and psychological 
torture, internment in concentration camps, forced 
sterilizations, and the wholesale destruction of mosques and 
other cultural artifacts. So this is very serious.
    I am aware of time so I do have much more in the written 
testimony if you would like to ask. But thank you for your time 
today, and I look forward to answering your questions.
    Chairman Peters. Thank you, Mr. Cain.
    Extremist groups, including QAnon followers, Islamic State 
of Iraq and Syria (ISIS), and white supremacists certainly have 
expanded their ranks by recruiting individuals on major social 
media platforms. The Christchurch shooter, who killed 51 people 
and inspired the Poway and El Paso shooters, was radicalized on 
YouTube and livestreamed his attacks on Facebook to rally 
others to his cause.
    Three years later, a shooter in Buffalo, New York, streamed 
his attack on Twitch, which acted quickly to take it down but 
the video was soon circulating widely on Facebook.
    Mr. Roetter, would you tell this Committee why these 
platforms' recommendation algorithms spread this extremist 
content so rapidly?
    Mr. Roetter. Thank you for the question. The way to 
understand these recommendations is that they do not have 
intentionality about specific types of content. But the way 
they work is they assemble a massive amount of information, 
they model everything about your usage, your interests, your 
geography, who you are connected to, what you have engaged with 
historically--and then they model the content and they try to 
match those optimally.
    What makes this so dangerous is there is a positive 
feedback loop, and if you pick something not controversial, 
just pick a hobby--knitting, for example--if I think that you 
are somewhat into knitting I might recommend some knitting 
content to you because I believe you will engage with it. You 
do engage with it, and that both makes you more interested in 
knitting because you are doing the hobby more, but also it 
feeds back to the algorithm, which then has signaled that you 
do like more of this content.
    The next day, or the next session, it is more confident 
that you will engage in this content and you will go further 
down the rabbit hole. Obviously, with knitting that is fine, 
but this is true for all sorts of content. Because of this 
feedback loop, if you have some proclivity or some interest in 
some topic, you will be fed more of that. That generally feeds 
your interest, and you are fed more and more.
    That is why we see people that start off with more things 
in common than differences sort of splitting and fracturing as 
they each go into their worlds that are more and more different 
and have less in common with other people. This is all an 
inevitable consequence of this optimization of driving 
engagement.
    Chairman Peters. In terms of content that takes you down 
the rabbit hole, are companies able to change some of those 
algorithms to prevent that from occurring, at least with that 
kind of content, and how would that work?
    Mr. Roetter. You certainly could, in theory. It will never 
happen, given the current incentive structure. These are for-
profit companies. They are incented to maximize profit, and 
before they have realizable profits they are incented to show 
massive user growth to convince investors that they will be 
massively profitable in the future.
    The way they do that is getting people to come back to 
their platform over and over. The way they do that is for 
optimizing engagement. As long as the algorithms are optimizing 
for showing you things that you will engage with we will always 
have this positive feedback loop property. I show you something 
you are interested in, you get more interested in it, you are more 
likely to keep engaging.
    You can build an AI to train for anything, but you pick an 
incentive based on the overarching incentive of the environment 
you find yourself in, in this case a public company that is 
reporting to shareholders. Until those incentives change we 
should not expect the AI to optimize for anything other than 
engagement and profit maximization.
    Chairman Peters. We are going to hear later this afternoon 
from chief product officers at some of these major companies. 
Is it possible for them to set different priorities for product 
development to address the spread of extremist content? Is that 
within their purview, and is that something they should be able 
to talk about?
    Mr. Roetter. In practice it is not possible, and the reason 
is these are just individuals. This is not a matter of a few 
bad eggs running companies. This is a system that these people 
find themselves in. They are in a system where they have to 
report user growth, engagement, increasing attention from the 
users, and profit.
    I should add, this attention game is a race to the bottom. 
If I build a product that is less addictive than a competitor's 
product, by definition user eyeballs and attention will go over 
to the competitor. Then I have to, in turn, make my product 
more addictive to pull people back or I will quickly be 
abandoned by investors.
    Given that structure there is no way that a product leader 
or any other executive at a company could optimize for anything 
other than those core metrics, engagement and revenue, because 
that is the system they find themselves in.
    Chairman Peters. They cannot do it by themselves. It has to 
be broader than that. But they are at the front end of that, or 
at the beginning of that, to understand exactly how that 
incentive structure, how those priorities shape the work that 
they do? They can talk about that, I suspect.
    Mr. Roetter. They could talk about that. They are doing 
exactly what you would expect them to do, given the environment 
they find themselves in. As long as the incentives of those 
companies are what they are, they will continue to behave the 
way they are behaving, and if they did not, it would hurt the 
trajectory of the company.
    Chairman Peters. Very good. Mr. Boland, why are the actions 
taken by trust and safety teams at these platforms just not 
enough to deal with this problem? We are going to hear a lot 
about these teams, I think, this afternoon. Why is that not 
enough?
    Mr. Boland. Yes, I imagine that you will hear that there 
are significant investments from the companies in trust and 
safety, and it is true they make some investments.
    The important thing to think about with a trust and safety 
effort is that if it is siloed from the rest of the process, if 
it is a last-check safeguard or a group that is small and not a 
core part of the way that people build products, it will always 
be an afterthought, and it will always be a team that has to 
fight in the battle of tradeoffs between their ways that they 
would like to improve trust and safety and impacts to growth. 
Impacts to growth translate to impacts to revenue.
    That dual tension of not receiving enough resources and 
being at odds with the product development teams and the 
product development process makes it so that team has to fight 
for any sort of interventions they want to put in place.
    You can change the way that product teams could work with 
those organizations. A good example would be the efforts that 
Facebook is now putting in place around privacy. For a number 
of years we know that Facebook and Meta had not been at the top 
of its game on privacy. After the last sort of issues with the 
FTC, the company has invested significantly in efforts around 
privacy, and has made that something that the product teams and 
the product managers actually have to care about.
    As long as that team in trust and safety is off to the 
side, if there are not the incentives in place that say to the 
company, ``You really have to make sure that when people are 
making day-to-day decisions they are prioritizing these 
efforts,'' that team is fighting a losing battle, both in 
resources, because they are battling against a number of 
people, and incentives at the company because they have to 
justify every single change that they want to make. It is not a 
core part of how the teams think.
    Chairman Peters. At best, there is going to be follow-up. 
The products are going to get launched. They are going to 
potentially cause problems that team may have even recognized 
but they were not able to interject that effectively during the 
product development phase. Then later they may be engaged but 
at that point the genie is out of the bottle, so to speak, 
before they can get engaged. Is that correct?
    Mr. Boland. That is correct. You can kind of understand 
this if you think about where these companies started and the 
incredibly short period of time that they have grown to be as 
successful as they are. They still feel like startups, in the 
way that the leaders think, even though these are some of the 
biggest companies in the world.
    At the beginning of their lifecycle it was about trying to 
figure out products that could grow, and grow effectively in 
the world, and has gotten to a point where they have not 
matured out of that stage.
    I think we can get to a point where these companies could 
do more in that space. We just have not seen them make that 
transition into a more responsible set of activities.
    Chairman Peters. All right. Thank you.
    Ranking Member Portman, you are recognized for your 
questions.
    Senator Portman. Thank you, Mr. Chairman. We have a good 
group of Members here so I will keep within the time, and I 
want to focus on TikTok first. We talked about TikTok being the 
most popular social media app in the United States. I also 
think it poses a risk to our national security, and I want to 
dig deeper into that today with both of these panels, this one 
and then when the TikTok representatives are here later.
    My understanding is that under Chinese law the Chinese 
Communist Party can access data of tech companies that are run 
out of China or have parent companies that are run out of 
China. Both ByteDance and TikTok have offices and they have 
employees in Beijing.
    Mr. Cain, under Chinese law does TikTok have a legal 
obligation to give U.S. user data to the Chinese Communist 
Party?
    Mr. Cain. Oh yes, absolutely. TikTok executives will, under 
Chinese law, face a minimum of 20 days' detention if they 
refuse to turn over data on anyone in the world, and this could 
be anybody in China, anybody who is traveling through China, 
through Hong Kong. This is a documented legal situation, and it 
is not something that TikTok, despite claiming to be an 
American company, can avoid.
    I would also like to point out that TikTok does dodge this 
question frequently by trying to point out that it is run by a 
Cayman Islands holding company, a shell company essentially. 
This is a red herring to distract from the issue at hand.
    The American company, TikTok and the Chinese company, 
ByteDance, both report to this Cayman Islands shell company. 
The company has never said how many people actually work for 
the shell company, the holding company, but we do know that the 
CEO of ByteDance and the CEO of TikTok are the same person. 
This is listed on the Cayman Islands registry. The CEO is the 
same person running the ByteDance company in China, according 
to their website.
    Senator Portman. Let me delve a little deeper here because 
we are going to hear from TikTok later, and based on the 
testimony we have received from them in advance I think they 
are going to say they have not provided data to the Chinese 
Communist Party. Even if CCP requested data, they said they 
would not share it with them. Again, does China need to make a 
request to access this data, or does China have the capability 
to access it at will?
    Mr. Cain. I am not aware of the Chinese government having 
the ability to simply open a computer and access it at will. It 
would usually happen through somebody in ByteDance or in 
TikTok. This has already been demonstrated and documented. 
There was a Buzzfeed report that came out a few months ago 
which contained 20 leaked audio files from internal meetings at 
TikTok in which TikTok employees said that they had to go 
through Chinese employees to understand how American data was 
being shared. It also pointed out that employees were saying 
that there is an individual in Beijing who is called the 
``Master Administrator.'' We do not know who that is yet. But 
this person, according to them, had access to all data in the 
TikTok system.
    When they say that this data is being kept separate, it is 
simply a point that has been disproven already, because we have 
documentation that shows that the data has been shared 
extensively.
    Senator Portman. OK. We will get a chance to talk to TikTok 
about that, but I appreciate your work on this and your 
testimony today. There is cause for us to legislate more in this 
area, generally. We talked about that earlier, regulations, 
legislation.
    My concern is that we really do not know what is behind the 
curtain, the black box, so to speak. We proposed this 
legislation called the Platform Accountability and Transparency 
Act to require the largest tech platforms to share data, again 
with vetted, independent researchers and other investigators. 
We know what is happening with regard to user privacy, content 
moderation, product development. We talked about the bias that 
I believe is out there in social media today, in many of the 
companies, and other practices.
    Mr. Boland, you talked a little bit about this in your 
testimony. I see in your written testimony you said, ``To solve 
the problem of transparency we must require platforms to move 
beyond the black box with legislation like the Platform 
Transparency and Accountability Act.'' Can you explain why that 
legislation is needed and how it would be used?
    Mr. Boland. Thank you, Senator. Yes, I believe the Platform 
Accountability and Transparency Act is one of the most 
important pieces of legislation that is before you all. It is 
not sufficient because we have to address the incentives that 
we have been talking about.
    But to begin with, we are at a point where we are supposed 
to trust what the companies are telling us, and the companies 
are telling us very little. I think Facebook, to their credit, 
is telling us the most, but it is kind of like a grade of D on 
an A-through-F grading system. They are not telling us much, 
but they are telling us more than everybody else, especially 
YouTube and TikTok.
    In order to understand the issues that we are concerned 
about with hate speech and the way that these algorithms can 
influence people, we need to have a public understanding and a 
public accountability of what happens on these platforms.
    There are two parts of transparency that are very 
important. One is understanding what happens with moderation, 
so what are the active decisions that companies are taking to 
remove content or make decisions around content. There is 
another critically important part that is around what are the 
decisions that the algorithms built by these companies are 
taking to distribute content to people.
    If you have companies reporting to you what they would like 
to--and I am sure you will hear from them this afternoon a lot 
of averages, a lot of numbers that kind of gloss over the 
concerns--if you look at averages across these large 
populations you miss the story.
    If you think about 220-some-odd million Americans who are 
on Facebook, if one percent of them is receiving an extremely 
hate-filled feed or radicalizing feed, that is over two million 
people who are receiving really problematic content. In the 
types of data that you are hearing today, that you are 
receiving today, you get an average, which is incredibly 
unhelpful.
    By empowering researchers to help us understand the problem 
we can do a couple of things. One, we can help the platforms, 
because today they are making the decisions on their own, and I 
believe that these are decisions that should be influenced by 
the public. Two, then you can bring additional accountability 
through an organization that has clear oversight over these 
platforms. Whether that be through new rules or new fines that 
you levy against the companies, you have the ability to 
understand how to direct them.
    Today, you do not know what is happening on the platforms. 
You have to trust the companies. I lost my trust in the 
companies, in what they were doing and what Meta was doing. I 
think we should move beyond trust to helping our researchers and 
journalists understand the platforms better.
    Senator Portman. OK. To the other two quickly, Mr. Cain and 
Mr. Roetter, do you disagree with anything that was said about 
the need for more transparency? Just quickly. I have very 
little time.
    Mr. Cain. I 100 percent agree.
    Mr. Roetter. I 100 percent agree, and I think that this 
Committee is uniquely poised, given its subpoena powers, to 
enforce transparency.
    Senator Portman. OK. I will have additional questions 
later. Again, we have so many Members here I want to respect 
the time, but I appreciate your testimony.
    Chairman Peters. Thank you, Ranking Member Portman.
    Senator Johnson, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR JOHNSON

    Senator Johnson. Thank you, Mr. Chairman.
    Listen, I think we all agree this is a big problem. It fits 
my definition of a problem: there is no easy solution. As 
Chairman of this Committee I would meet with Facebook, and I 
appreciated what they were trying to do to hold down Islamic 
terror type of content. I think we all agree that we do not 
want to be disseminating extremist, violence-inducing type of 
behavior through these platforms, but we also need to protect 
free speech as well. It is a real tension. It is a real 
balancing act.
    Mr. Boland, I think you talked about the term extremism and 
harmful content, but I guess that is all in the eye of the 
beholder, is it not? It is difficult to define.
    I guess what I want to focus on a little bit is, where do 
we draw the line in terms of taking down content that we all 
would agree is extreme and could induce violence, versus 
censoring legitimate political debate?
    Mr. Roetter, do you have any idea of what percentage of 
Twitter employees are conservative versus liberal?
    Mr. Roetter. I have no idea.
    Senator Johnson. You think it is probably pretty heavily 
tilted to the left. Correct?
    Mr. Roetter. I do not know.
    Senator Johnson. I think you do.
    Mr. Boland, would you want to answer that question?
    Mr. Boland. I do not know.
    Senator Johnson. OK. Mr. Cain.
    Mr. Cain. It is obvious, but my only knowledge is China 
tech and TikTok. I am not as familiar with that area.
    Senator Johnson. OK. Let me move on. Let me use an example 
that I think we are all aware of, the 800-pound gorilla in the 
room. Let us talk about the Hunter Biden laptop. Mr. Roetter, 
do you believe, like The Washington Post, that there is 
authentic information on that laptop?
    Mr. Roetter. I am not sure. I will say that these are 
massive platforms. There are billions of people.
    Senator Johnson. I have very little time.
    Mr. Roetter. I do not know.
    Senator Johnson. Mr. Boland, do you assume that there is 
authentic information on the laptop?
    Mr. Boland. I do not have an opinion on the laptop.
    Senator Johnson. OK. Twitter was actually very effective 
when they blocked the New York Post articles on the Hunter 
Biden laptop. We had Jack Dorsey in front of the Senate 
Commerce Committee back in, I think, October 2020, and Senator 
Cruz and I asked him, because we were talking about Russians 
using the platforms to impact our elections, and everybody 
agrees that could happen.
    We asked Mr. Dorsey, ``Do you believe Twitter could impact 
the election?'' Mr. Dorsey said, ``No.'' Mr. Roetter, do you 
believe Twitter has the capability of impacting an election?
    Mr. Roetter. I think all of these social platforms are so 
massive that it is hard to believe they are not having an 
impact.
    Senator Johnson. Mr. Boland, do you believe that as well?
    Mr. Boland. Yes, these platforms absolutely have influence.
    Senator Johnson. Mr. Cain.
    Mr. Cain. Absolutely.
    Senator Johnson. OK. There is a problem right there, OK, 
and I appreciate you acknowledging that fact. We had 51 former 
intelligence officials; I have no idea on what basis they wrote 
this letter, which came out immediately. I think it might be 
because the Federal Bureau of Investigation (FBI) had a scheme 
in August 2020, to downplay the derogatory information on the 
Hunter Biden laptop. But they came out and said that the laptop 
had all the earmarks of a Russian information operation. It 
seems to me like that letter itself was an information 
operation.
    We had the platforms censor that, and Facebook throttled it 
back. We actually took a poll on this. I did not, but a 
company called Media Research Center polled--this was after the 
election--1,750 voters in seven swing States, Biden voters 
who were unaware of the emails, texts, testimony, and banking 
transactions on the laptop, as well as Senator Grassley's and 
my report, which was based on interviews with U.S. persons and 
U.S. documents.
    Seventy-nine percent of those Biden voters said they would 
still vote for him, but 17 percent said they would not: four 
percent said they would switch their vote to President Trump, 
four percent would vote for a third party, four percent would 
skip voting altogether, and five percent would not have voted at 
all. That is pretty strong evidence that what Facebook and 
Twitter did impacted the 2020 election to a far greater extent 
than anything Russia ever could have hoped to do in 2016 or 2020.
    I want to talk about other disinformation coming out of 
this Committee. A day or two after Senator Grassley and I 
issued our report, based on U.S. documents and interviews with 
U.S. persons, our now Committee Chairman, who was then 
Committee Ranking Member, issued a press release. It said, 
``Peters response to a Republican effort to amplify Russian 
disinformation.'' He said that I generated a partisan political 
report that is rooted in Russian disinformation.
    Mr. Chairman, do you want to retract that false allegation 
now, now that we know that the Hunter Biden laptop is accurate 
and that not one scintilla of information provided in Senator 
Grassley's and my report has ever been refuted? It was 100 
percent accurate. Yet you, as Ranking Member of the Committee, 
accused me repeatedly of soliciting and disseminating Russian 
disinformation. Do you want to retract the false allegation 
that you issued in your press release on September 23rd?
    Chairman Peters. No. Let us focus on what we are trying 
to----
    Senator Johnson. I am focusing on this because this is 
exactly the type of harm we can do to our political process 
when you have these big tech companies engaging in political 
debate, censoring one side of the political spectrum and 
amplifying the false allegations of another side. Do any of you 
want to dispute that?
    Mr. Boland. Senator, I think it is important that we get 
the data to know. This is why the Platform Accountability and 
Transparency Act is so critical to our globe and our Nation: 
if you were able to look at the data, to understand what had 
happened from content moderation, and you were able to see the 
distribution, you could compare that data across the platforms 
and see what sort of impact it had.
    Senator Johnson. One part of the transparency would be to 
have people who work or used to work for these platforms at 
least acknowledge the highly political nature of the 
individuals who work in them. Just acknowledge 
it. It is obvious to everybody. Mr. Zuckerberg spent, what, 
about a half a billion dollars impacting the 2020 election? 
Took over the Green Bay election system, in a highly partisan 
fashion. About 95 percent of the money he spent was in 
Democratic strongholds in Wisconsin.
    Can we at least acknowledge that there is enormous 
political activity going on, partisan activity going on, within 
these social media companies, rather than just trying to bury 
it? Let us be honest. Let me be transparent. But let us be 
honest in our transparency.
    Mr. Boland. I agree with you on the request for 
transparency. In my experience, whether or not someone had a 
certain political leaning, I did not see political leanings 
shape the decisions that were made inside the company, from 
what I saw.
    Senator Johnson. OK. I saw it in their censorship of the New 
York Post article prior to the 2020 election, and I think it is 
pretty obvious.
    Thank you, Mr. Chairman.
    Chairman Peters. Senator Lankford, you are recognized for 
your questions.

             OPENING STATEMENT OF SENATOR LANKFORD

    Senator Lankford. Thank you, Mr. Chairman. Mr. Cain, you 
have spent a lot of time studying authoritarian regimes and how 
they use social media to literally control their own 
populations. The Chinese government is obviously doing this 
with the Uyghurs. You have spent a lot of time studying and 
going through that case.
    One of those features that is in TikTok, for instance, and 
in several platforms, is the permissions. When you join, you 
use this free platform, and the user gives ByteDance, TikTok, 
whoever it may be, the opportunity to open their microphone, 
to use facial recognition, and to store data on that. How is 
that information used in an authoritarian regime?
    Mr. Cain. An authoritarian regime such as China will 
attempt to get access to that data and use it to build 
artificial intelligence capabilities, capabilities that might 
involve espionage, spying on military officials, government 
officials. This is a major Trojan Horse that needs to be dealt 
with, and the Chinese government has made clear, in its 
National Artificial Intelligence Strategies, that it does need 
data, that data is its biggest target.
    Senator Lankford. Right. One of the things that I have seen 
from TikTok even recently is the ability to keep up with 
keystrokes. If you use their app to then go to other websites, 
they can track your keystrokes, and that would include credit 
card numbers, passwords, user IDs, all of those things as well. 
Factual or not factual?
    Mr. Cain. Factual. You are absolutely correct, Senator.
    Senator Lankford. They have made the statement publicly, 
``We do not use that for any other purposes. We just maintain 
that.'' That is now owned by the Chinese government. At that 
point if it is going through TikTok they have access through 
that to be able to get user names, passwords, facial 
recognition, everything else, on this. That is the building of 
a database system.
    This is not some hypothetical, possible thing. This is 
actually occurring.
    Mr. Cain. Precisely, Senator. This is occurring. TikTok 
does gather large amounts of data, and there was one recent 
study by Citizen Lab, which does work on this, that found 
TikTok gathers unusually large amounts of data from its users.
    The keylogging software that was found recently and reported 
on--TikTok has said that they do not use it. But it is there, 
and if the Chinese Communist Party wants to get access to it, 
they have the power to do it.
    Senator Lankford. OK. Thank you.
    Mr. Roetter, I want to ask you a little bit about value 
systems. You have a unique perspective, coming from Twitter and 
then standing outside of it, to be able to look backwards on 
this. It is unusual to me that Twitter is blocked in many 
authoritarian countries, and yet the leaders of those countries 
are allowed to be on Twitter and to put out authoritarian 
propaganda, basically. They are still allowed to do 
that.
    Twitter's value system seems to shift from country to 
country, based on that country. Even if a country blocks the 
platform, its leaders are still allowed to put out propaganda 
on that platform. Am I wrong on this? What have you seen?
    Mr. Roetter. Twitter certainly is obligated to follow the 
laws of all the countries where it operates. What you are 
seeing is a consequence of that.
    Senator Lankford. But it also seems to be a patchwork of 
values in these countries, where in our country they will say, 
``We really stand strong for this principle,'' but in another 
country they do not.
    Mr. Roetter. I think that is a fair characterization.
    Senator Lankford. Is that a problem long-term, giving 
authoritarian regimes a platform just because there are 
customers there, even if those countries actually block the use 
of Twitter in their country?
    Mr. Roetter. I think the bigger problem is the consequences 
of their overwhelming incentive model, not the specifics of 
that example of per-country variation. That variation is a 
consequence of trying to get everyone to use the platform and 
then being subject to some constraints, whether from local 
governments or elsewhere.
    I think the broader problem is the consequences of who sees 
what content and what that does to people in the real world as 
a function of the incentive structure that is created for these 
companies.
    Senator Lankford. Recent testimony on Twitter has come out 
that they have had on their team Chinese spies, individuals 
that work for the Indian government, and individuals that work 
for the Saudi government, who were on the staff and were 
funneling information back to those authoritarian regimes.
    I would assume Twitter has a process of actually going 
through and vetting their employees. I am making the assumption 
that while you were there you saw some of how this vetting 
actually happens. How are they vetting their employees to 
evaluate individuals so they do not end up with Chinese spies, 
employees placed by the Saudi government, or employees required 
by the Indian government, for instance?
    Mr. Roetter. It may have changed since the time that I was 
there. There are background checks and other things you go 
through when you get hired, but there was nothing I saw that 
made me think that process was designed to counteract a threat 
model of governments inserting spies. It was a much more 
pedestrian process than that.
    Senator Lankford. It seems to be a different issue when you 
talk about the Indian government saying, ``We are requiring our 
individuals to actually be on your staff, to be in the 
process,'' or allowing individuals, as was alleged of the 
Saudi government, to be there on the staff. That does not seem 
to be a vetting issue. That seems to be a requirement: if you 
are in our country, we also require backdoor access, basically.
    Mr. Roetter. I am not sure I am familiar with the specific 
rules from India and Saudi Arabia in terms of operating in the 
country.
    Senator Lankford. Fair enough. But this is an issue we will 
have an opportunity in the second panel to talk about.
    Mr. Boland, you have spoken out often on the algorithms 
that are out there and on how the platforms seem to reward 
angry comments. The angrier that you become, the more it helps 
the algorithm to drive engagement and to place content.
    I have made a recommendation to Facebook for years: why 
could not the page owner be able to take the comments--if 
people want to make comments, those comments come to the page 
itself, to that individual, but other individuals cannot see 
them. There would basically be an option you could create to 
turn off the public viewing of all the comments. If you want 
to make comments to me, and we want to have dialog, you can do 
that. But it prevents people who make comments on my page from 
attacking other people who make comments on the page.
    Basically what Facebook has created is a place for people 
to scream at each other in a lot of the political dialog, and 
it is pretty hostile, and it reinforces the angry comments that 
continue to drive that.
    What I am describing to you, giving the user, the owner of 
a page, the option to keep the comments between myself and 
those making them so they cannot attack each other, is that 
technically possible to do?
    Mr. Boland. That is technically possible to do. I think an 
important step for a lot of the product development work goes 
back to this transparency point: if we could have researchers 
and academics involved in evaluating the different types of 
scenarios here and the tradeoffs, I think that could move us 
forward a lot faster.
    Senator Lankford. Yes. The value system of Facebook seems 
to be that they want that engagement, that anger, and those 
attacks on each other, because that keeps people engaged. 
Instead of trying to lower the temperature and saying on this 
page the temperature is going to be lower, it seems to be high 
temperature in as many places as possible, and the anger emoji 
or anger responses seem to feed the algorithm to keep 
accelerating people's return to that page over and over again.
    Mr. Boland. Yes, and I think an important point that Alex 
has made is that the algorithm knows no temperature. It does 
not know if something is charged or not charged. It just knows 
whether it gets a result. It knows if it gets engagement.
    Without there being a qualitative view over the kinds of 
content, the algorithms will just chase what they are told to 
chase, and they are really good at it. They are really good at 
going after the metrics that they are given, and as machine 
learning techniques improve you will see more and more of that. 
If that kind of content is what gets engagement and still gets 
reaction, that is what is going to grow in the system.
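
    As a rough sketch of the dynamic Mr. Boland describes, and 
assuming hypothetical field names rather than any platform's 
actual code, a feed ranker that is scored only on engagement 
will surface whatever maximizes that number, with no term for 
how charged the content is:

        from dataclasses import dataclass

        @dataclass
        class Post:
            text: str
            predicted_clicks: float
            predicted_reactions: float  # anger counts the same as any reaction

        def engagement_score(post: Post) -> float:
            # The only metric the system is given: raw engagement.
            return post.predicted_clicks + post.predicted_reactions

        def rank_feed(posts: list[Post]) -> list[Post]:
            # Nothing here measures harm, polarization, or "temperature";
            # the ranker simply chases the metric it is told to chase.
            return sorted(posts, key=engagement_score, reverse=True)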
    Senator Lankford. OK. Mr. Chairman, thank you.
    Chairman Peters. Thank you, Senator Lankford.
    Senator Romney, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR ROMNEY

    Senator Romney. Thank you, Mr. Chairman. Mr. Roetter, your 
comments about the incentives of a corporation are accurate, 
which is they are trying to make as much money as they can, and 
to do so they are trying to get as many eyeballs as they can. 
Every newspaper, every magazine, every TV show, every 
broadcaster, every radio station does the same thing, which is 
how can we get more eyeballs? What gets more attention?
    What we are seeing with social media is not entirely 
unprecedented. I was not around in the early days of broadcast, 
but I presume it was the Wild West initially, and then there 
was the threat of government intervention to tell you who could 
advertise and what they could advertise and what words could be 
said and how much sex and violence could be allowed.
    The industry came together and said, OK, we are going to 
start grading things and establishing rules. Ultimately, from 
what I understand, the government also established an entity to 
establish rules for broadcasting, saying what you could show at 
certain hours of the day, how much sex and violence, and so 
forth, could be on broadcast networks.
    We have not done that with regard to social media. Social 
media is far more engaging and captivating of our young people, 
as well as many adults, than broadcast was. I wonder whether we 
need to do this, one, whether the industry should not come 
together and talk about its own decisionmaking, the rules, 
where they draw the lines, and say, yes, these are things we 
are all going to agree to. If the industry does not do that, 
then the question is whether government should, whether we 
should establish an agency to say, hey, these are the rules, 
and you are all going to have to follow them.
    Is the industry willing to take action of that nature? 
Could it? Should it? If not, should government? I will ask you, 
Mr. Roetter, first, and then Mr. Boland, and Mr. Cain.
    Mr. Roetter. I think probably the best predictor here is 
just past behavior. My observation is the industry will share 
information, but it is not the information you would share if 
you were genuinely interested in providing transparency. Brian 
has talked somewhat about sharing an average when you have a 
distribution so non-uniform that an average is not a useful 
statistic. We see that. We see exact numbers being shared.
    I view this as a two-step process. The first is that we 
actually do not know what is happening on these networks today, 
and that is why a lot of the conversation about networks 
devolves into cherry-picking examples to prove whatever one 
believes these networks are doing. You can also prove that they 
have a bias in the other direction, depending on which example 
you cherry-pick.
    The first step is we need statistically representative, 
unbiased, raw data that can be processed so that we understand 
what they are actually doing. I think if we had a better view 
of what they are actually doing in a representative way we 
could then talk about do we believe their incentives are going 
to create the right outcome, what is the true impact of that. 
Then, faced with that shared understanding of what the networks 
are doing, it is possible, to your point, that maybe the 
companies would come together and decide to self-regulate 
because they realize the specter of someone else regulating 
them is worse. I am not sure.
    But I think really the first step is demanding the 
transparency so we have a shared view on what is actually 
happening on these things.
    Senator Romney. Yes. Thank you. Mr. Boland.
    Mr. Boland. Yes, I think that usually you see industries 
take self-regulatory steps when they feel like external 
pressure or legislation is impending, and I do not believe that 
they feel that in the United States today. I think that drives 
a decision from them to let the status quo continue.
    I think what is particularly terrifying is that we do not 
know what is happening on these platforms today. The nice thing 
about broadcast is that you know what was broadcast. Everyone 
could see what was being broadcast. With the way that our feeds 
work today you have such a distribution of content that it can 
look, on an average basis, as though things are getting better. 
industry can tell you, ``We are making improvements. Here is 
the average that shows you what we are doing.'' But for the 
person who has the 99th percentile most hate-filled feed or the 
group of people that have that 99th percentile most extremist 
feed, they may be seeing an increase in the types of harmful 
content that we will never know about.
    Meta provided a lot more transparency three years ago than 
they provide today; that transparency is decreasing. None of 
the other platforms are taking steps to increase transparency, 
and you have an increasing TikTok-ization of media, 
in that Facebook is now moving toward a TikTok model, where it 
is not just friends and family content. It is unconnected, 
algorithmically driven content. These kinds of fringes, these 
pockets, are going to grow. We will never see them unless we 
mandate that we get to see them.
    Senator Romney. Thank you. Mr. Cain.
    Mr. Cain. Thank you, Senator Romney, for the question. 
TikTok has seen an alarming number of leaks get out in the 
press, which does suggest problems, and I have spoken 
personally with former TikTok employees, disgruntled people, 
who say that the company is not transparent, that it is not 
doing what it promises. Because it has this connection to the 
Chinese Communist Party, it will not be transparent about what 
it is doing, and as for this algorithm, we do not know what is 
behind the curtain.
    China is not a place that values transparency. It is a one-
party, authoritarian state, and the most sophisticated police 
state in the world. I do not think we can count on a company 
such as TikTok or its parent, ByteDance, to do anything that 
will actually address the problems at hand.
    I think that, to be honest, this Committee is uniquely 
placed to address this problem of transparency because the 
subpoena power that can be used here I think would require 
TikTok to open up its emails, show us what is really going on, 
and show us what the China-based executives are saying to the 
American executives.
    Senator Romney. Yes, I must admit I share your view in that 
regard, although I am probably even more alarmist than you, 
which is I question whether we should allow an authoritarian 
regime to have a social media capability of the scale they have 
in our country, gathering the data they have. I think that is a 
huge risk for us.
    I have a lot of kids and grandkids, particularly grandkids 
these days. I am very concerned about their exposure to social 
media. Have other countries figured a better way to try and 
reduce the draw and the compelling nature of social media? I 
understand China, for instance, between various TikTok segments 
they have a five-second gap where the screen just goes blank or 
something. Are we not doing even what other nations are doing 
to try and protect our kids? I will let anyone that wants to 
respond to that. Maybe Brian, if you want to take that.
    Mr. Boland. Yes, it is a tricky one in that you are dealing 
with incentives you would mandate versus steps you would like 
companies to take on their own. There are steps. You are 
describing friction, right, that slows down----
    Senator Romney. Right.
    Mr. Boland [continuing]. The process. There are known steps 
you can take to introduce friction in the products.
    Senator Romney. Have some other nations done some of those 
things?
    Mr. Boland. I do not know of mandated friction. I think 
Europe has done a very good job of starting to move toward 
required transparency, so we will see how that moves. But I 
have not seen that prescriptive type of product requirement.
    Senator Romney. Thank you.
    Chairman Peters. Thank you, Senator Romney.
    Senator Hawley, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR HAWLEY

    Senator Hawley. Thank you, Mr. Chair. Thanks to the 
witnesses for being here.
    Mr. Boland, let me start with you. Can you tell me, when 
were you at Facebook?
    Mr. Boland. I was there 2009 through November 2020.
    Senator Hawley. November 2020. Was it normal, in the time 
you were at Facebook, for executives or team members at 
Facebook, and they do not even have to be executives, to 
coordinate closely with the United States government?
    Mr. Boland. I am not aware of that.
    Senator Hawley. You were never in any such meetings?
    Mr. Boland. No.
    Senator Hawley. You never had any contact with U.S. 
Government employees in your time at Facebook?
    Mr. Boland. Not that I can recall.
    Senator Hawley. Would you be surprised to know that on July 
16, 2021, an employee at Facebook wrote to the Department of 
Health and Human Services (HHS) saying, ``I know our teams met 
today to better understand the scope of what the White House 
expects from us on the misinformation going forward.''
    On July 23, 2021, a Facebook employee thanked HHS for 
``taking the time to meet earlier today. I wanted to make sure 
you saw the steps we just took this past week to adjust 
policies in what we are removing with respect to 
misinformation.'' That included, and I am quoting, ``increasing 
the strength of our demotions for Coronavirus Disease 2019 
(COVID-19) and vaccine-related content.''
    On April 16, 2021, Rob Flaherty at the White House 
circulated a Zoom meeting invitation stating, ``White House 
staff will be briefed on vaccine misinfo.''
    On April 7, 2021, a Facebook employee thanked the Centers 
for Disease Control and Prevention (CDC) for responding to 
misinformation queries. ``We will get moving now to be able to 
remove all but that one claim as soon as the announcement and 
authorization happens.''
    On July 28, 2022, this year, a Facebook employee reached 
out to CDC about ``doing a monthly misinfo/debunking meeting.'' 
CDC responded, ``Yes, we would love to do that.''
    On May 11, 2021, Facebook employees organized a be-on-the-
lookout meeting with CDC officials.
    On July 20, 2021, Clark Humphrey at the White House emailed 
Dave Sumner and others at Facebook asking, ``Any way we can get 
this pulled down,'' and cited an Instagram account. Within 46 
seconds, Facebook replied, ``Yep. We are on it,'' and down the 
account went.
    Is that normal? Is that normal in your time at Facebook?
    Mr. Boland. I do not have experience around that.
    Senator Hawley. You have no knowledge of anything like 
this. Nothing like this ever happened, and then, presto, it 
started happening just suddenly in 2020, as soon as you left?
    Mr. Boland. I did not have personal experience with that, 
or I did not hear about it.
    Senator Hawley. You do not know anything about it at all? 
You have never heard of anything like this happening, ever?
    Mr. Boland. I do not.
    Senator Hawley. That is remarkable. I thought that you were 
the former Vice President of Partnerships Product Marketing, 
Partner Engineering, Marketing, Strategic Operations and 
Analytics at Facebook.
    Mr. Boland. That is true.
    Senator Hawley. None of this ever happened. Why did it 
start happening, do you think, as soon as you left? What do you 
think drove this kind of collaboration, where you have Facebook 
becoming an arm of the United States government, more 
specifically the White House, to censor private information, 
personal speech, at the behest of government officials?
    Mr. Boland. It is hard for me to comment on the specific 
context of the content we are talking about, whether it was 
public content or whether it was personal content. I do know 
that from what I have read, and probably the same documents 
that you have access to, that there were a lot of steps taken 
around COVID response and COVID misinformation that may have 
presented a unique scenario and a unique situation where the 
company took steps to coordinate that way.
    Senator Hawley. Took steps to coordinate, by which you mean 
to censor the speech of ordinary Americans at the behest of the 
President of the United States and his Administration.
    I commend to everyone who is interested these emails which 
were discovered as part of litigation led by the State of 
Missouri and other States as they are suing these tech 
companies, including your former employer, Mr. Boland, which, 
for the record, is one of the worst companies in America. They 
have discovered this trove of information, extensive 
coordination, extensive, between Facebook and the Biden 
administration, targeting the speech of ordinary Americans. By 
the way, for standards that are ever-shifting.
    Early on, if you questioned whether COVID had anything to 
do with a lab, you were marked as disinformation, you were 
censored, only to have the President of the United States later 
admit that the possibility that COVID has some lab nexus is, in 
fact, a very distinct possibility, one that our intelligence 
community (IC) thinks is actually quite a viable theory. We 
have seen the same thing with people who have questions about 
masks, who have questions about vaccine efficacy. It is really 
quite remarkable.
    Let me ask you this. What safeguards, when you were at 
Facebook, were in place to protect Americans from having their 
speech censored or having government censors like this access 
personal information?
    Mr. Boland. During my time there my experience was the 
company was more reluctant to take down speech and very careful 
about trying to remove content. I also do not think the company 
studied the content on the platform as heavily as you would 
like.
    Senator Hawley. They did not do things like what they were 
doing later, when they were looking at particular private 
Instagram accounts and removing them at the behest of the White 
House? You are saying that did not happen while you were there?
    Mr. Boland. That is not a scenario that I ran across. It is 
hard for me to comment on the COVID pandemic response, during 
which I think a lot of things were outside of the norm.
    Senator Hawley. I will just say this. I find it hard to 
believe that suddenly Facebook became an entirely different 
entity and was interfacing with the United States government in 
an entirely different way only when COVID happened. I mean 
maybe, but I doubt it.
    Let me ask you this, Mr. Roetter? You were an engineer at 
Twitter. Is that right?
    Mr. Roetter. Correct, yes.
    Senator Hawley. You were the Senior Vice President for 
Engineering?
    Mr. Roetter. Yes.
    Senator Hawley. Yesterday Mr. Zatko testified to another 
Committee I sit on that 4,000 engineers at Twitter had access 
to all of the personal information, user data, geolocations, of 
Twitter users. Is that accurate?
    Mr. Roetter. I have never met him, and he joined the 
company after I left, so I do not know if that particular claim 
is accurate.
    Senator Hawley. But he said all the engineers. You were an 
engineer. Did you have access to user data?
    Mr. Roetter. When I was there, I do not know if it was all 
the engineers.
    Senator Hawley. Did you have access to user data?
    Mr. Roetter. I was head of engineering for the whole 
company.
    Senator Hawley. Did you have access to user data? I am 
looking for a yes or a no.
    I will just remind you.
    Mr. Roetter. No.
    Senator Hawley. You did not have access to user data?
    Mr. Roetter. I think I could have gotten it.
    Senator Hawley. I am sorry?
    Mr. Roetter. I think I could have gotten it.
    Senator Hawley. OK. If you can get it, that is what we call 
access. You did have access to user data. Is that a yes?
    Mr. Roetter. When I was there I probably could have. Yes, 
so that is probably right. Yes, I probably could have.
    Senator Hawley. OK. You did. Did you ever access any user 
data?
    Mr. Roetter. No.
    Senator Hawley. Were you aware of Twitter engineers ever 
doxing members or users?
    Mr. Roetter. No.
    Senator Hawley. Were you aware of Twitter engineers ever 
taking over an account and tweeting out or altering the content 
of that account? Mr. Zatko said he thought that had happened.
    Mr. Roetter. I am not aware of that.
    Senator Hawley. OK. Lots to unpack there. Thank you, Mr. 
Chair.
    Chairman Peters. Thank you, Senator Hawley.
    Senator Rosen, you are recognized for your questions.

               OPENING STATEMENT OF SENATOR ROSEN

    Senator Rosen. Thank you, Chairman Peters, Ranking Member 
Portman. Thank you to the witnesses for being here today.
    Transparency and accountability, I guess those are the 
words of the day, because we know, of course, what social media 
companies do. I am a former computer programmer. Data is power. 
How you analyze the data matters; the data tells a story if you 
are smart enough to listen to it.
    You collect demographic and behavioral data from consumers 
in order to enhance the predictive engagement algorithms that 
target consumers with ads and recommendations based on other 
content, perceived interests, or even vulnerabilities. This is 
really great when you are shopping for a new outfit or some new 
furniture. Maybe not so great when you are on an extremist or 
violent website or viewing harmful or illegal content.
    When it comes to that harmful or illegal content there has 
to be greater transparency into the platform promotion 
mechanisms, and how the content ultimately spreads from 
platform to platform. We have small businesses, hospitals, and 
schools; everyone is on these platforms in some form or 
fashion.
    When we say ``consumer,'' we can go from the individual 
right up to our full national security, so it matters that we 
understand better the algorithms that amplify the content and 
how these things reach people's feeds. Some social media 
platforms, for example, have standards in place for removing 
content that promotes Holocaust denial or distortion. They are 
often inconsistent in implementing the policies, and the 
content flourishes.
    I am going to cut right to the chase. Mr. Roetter and then 
Mr. Boland, is there a difference now in how predictive user 
engagement algorithms behave for harmful, illegal, or extremist 
content versus other content, and how might we modify or 
regulate an otherwise agnostic algorithm--it is math; it is 
agnostic, to your point--to identify illegal, certainly 
illegal, or extremist content? How do we take the agnostic out 
of the math? Mr. Roetter.
    Mr. Roetter. Today the algorithms are doing exactly what 
they are incented to do, which is maximize attention on the 
platform. If you changed what those companies were accountable 
for, these companies are very smart. They have a lot of 
engineers and a lot of money and a lot of computational power. 
They would change what the algorithms do.
    For example, if companies were penalized for sharing 
certain types of content, these algorithms would no longer 
promote that content because it would be not optimal for them 
to do so. The extra benefit they get from the attention and the 
usage would be outweighed by whatever the penalty was.
    This is all possible. I think the two takeaways are, one, 
without transparency we are not going to know what it is doing 
today, and two, they will behave optimally in the face of any 
incentive structure they have. Today it is just maximizing 
attention, and they are doing exactly what you would expect, 
but you could change that.
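
    The change Mr. Roetter describes can be sketched as a 
one-line adjustment to the objective; the penalty value here is 
a hypothetical illustration, not a proposed figure. Once the 
score subtracts a cost for flagged content, promoting that 
content stops being optimal whenever the penalty outweighs the 
extra engagement:

        def promotion_score(engagement: float, flagged: bool,
                            penalty: float) -> float:
            # engagement: expected attention gained by promoting the item
            # penalty: cost imposed when flagged content is promoted
            return engagement - (penalty if flagged else 0.0)

        # With no penalty, flagged content wins on engagement alone.
        print(promotion_score(10.0, flagged=True, penalty=0.0))   # 10.0
        # With a large enough penalty, it is no longer worth promoting.
        print(promotion_score(10.0, flagged=True, penalty=25.0))  # -15.0
        print(promotion_score(4.0, flagged=False, penalty=25.0))  # 4.0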
    Senator Rosen. Thank you.
    Mr. Boland. I think it is important to note that not only 
do we not know what is happening on the platforms, the 
platforms do not know what is generally happening on their 
platforms. The turning point for me, to go from having concerns 
to being publicly vocal about my concerns, was when Facebook 
said, ``Nothing on January 6th happened on our platform.'' Then 
it turned out, after the fact, that there was a lot of Stop the 
Steal content on the platform, and there were internal concerns 
around it.
    In order to change these algorithms part of it is 
understanding what is happening, and as a society having 
conversations about what do we think the right distribution is. 
Facebook has proven that with things like QAnon, after the 
fact, after the fire was lit and burned through, they could 
then adjust it and actually manage the distribution of that 
type of content. It is possible when we know what we are 
managing toward.
    The problem is that it is all after the fact. It is all 
after the damage has been done that you then go back and say, 
OK, there has been this set of articles or conversations, and 
finally we go back to address them, rather than saying we have 
a whole community of researchers and people who can quickly 
spot things, raise the issues, and then adjust them. It is 
doable. It is an incredibly hard problem. I am very 
sympathetic to the fact that human speech is very complicated 
and very nuanced.
    Senator Rosen. But the platforms have an unwillingness. 
They actually want to have this lack of understanding so they 
have some deniability on the back end, if that is what you are 
saying is true. We do not know it is happening. Oh my gosh, it 
happened after the fact. Their lack of wanting to do the 
analysis ahead of time and understand their own platform, they 
are setting themselves up for deniability, in my estimation.
    But we are going to move on to cybersecurity because I have 
a few minutes left. We know the whistleblower complaint from 
Twitter's former head of security depicted that the company was 
unable to protect its 238 million users, government agencies, 
influential figures, and heads of State from spam and security 
breaches. The complaint alleged that the company's servers were 
running out-of-date and vulnerable software and that the 
company withheld dire facts about the breaches and the lack of 
protection for user data.
    I am really concerned about cybersecurity. Companies are 
laser-focused on growth, not laser-focused on protection, in my 
estimation, so individuals, and again, small businesses, 
hospitals, schools, critical infrastructure, all of those 
things we are responsible for here are at potential risk.
    Again, from both of your experiences working at Facebook 
and Twitter, is cybersecurity a high enough priority for the 
large social media platforms, and do the social media platform 
security teams work alongside product development and 
application development to protect against cyberattacks? Do you 
have a hunt forward? Are you looking for these breaches? How 
are you working that, and how does this threaten our own 
security, even our national security?
    Mr. Roetter. The teams, they do work alongside engineering, 
but it is not a primary driver the way product and growth and 
revenue is. You need to build something that drives usage and 
revenue and then make it secure enough.
    In terms of your question, is it a high enough priority, 
the answer to that can only be known if you know the nature of 
the threat and if the bad actors trying to break in are being 
successful.
    Senator Rosen. There are no hunt-forward operations built 
into these things for people trying to breach the data. There 
is no kind of hunt forward. There is no way that people are 
really actively looking for data breaches. You are finding it 
after the fact, in many cases.
    Mr. Roetter. No, there are in some cases penetration 
testing and people trying to simulate breaking into something 
to learn. That happens.
    Senator Rosen. Can you speak to that, Mr. Boland?
    Mr. Boland. My sense from Meta's standpoint is that they 
are quite good and quite invested in protecting people's data 
from a cybersecurity standpoint, which goes to show you that 
where there is a will and a desire to make progress on issues, 
I believe they can. This is an area where, in my experience, 
they were quite strong.
    Senator Rosen. Thank you. I see my time is up so I will 
yield back. Thank you.
    Chairman Peters. Thank you, Senator Rosen.
    Senator Hassan, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR HASSAN

    Senator Hassan. Thank you, Mr. Chair and Ranking Member 
Portman for this hearing, and thank you to our witnesses for 
being before the Committee today. I really appreciate it.
    I want to start with a question that builds a little bit on 
what Senator Rosen was discussing, and this is to Mr. Boland 
and Mr. Roetter. Terrorists have horrifically livestreamed 
their attacks on social media. These livestreamed attacks, in 
turn, inspire other individuals to commit future attacks. Are 
there ways for social media companies to quickly identify 
terrorist content so that it is not shared in real time?
    Mr. Boland, we will start with you.
    Mr. Boland. I know that, particularly for livestreamed 
videos, Meta has put considerable resources into AI to try to 
spot these types of attacks and take them down quickly. I think 
they have gotten a lot better, obviously, than they were with 
Christchurch years ago. I think it is an incredibly hard 
problem. I am not an expert on the extent of what is possible 
there, but I do think they have made strides.
    Senator Hassan. Mr. Roetter.
    Mr. Roetter. It is certainly possible. It is a hard 
technical challenge but you can build algorithms to figure out, 
in real time or near real time, the content of videos. They 
will not be perfect, like any sort of classification or 
segmentation algorithm, but you could certainly do so.
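
    In outline, and assuming a hypothetical frame-level 
classifier rather than any company's actual model, near-real-time 
screening of a livestream might look like the following sketch. 
Requiring several consecutive high-scoring frames trades a little 
latency for fewer false positives:

        def classify_frame(frame) -> float:
            # Placeholder for a trained model that returns the probability
            # that a frame depicts violence. A constant stands in here so
            # the sketch runs; a real system would call the model.
            return 0.0

        def screen_stream(frames, threshold=0.9, consecutive=5) -> bool:
            # Escalate once several consecutive frames score above the
            # threshold. No classifier is perfect, so a human review queue
            # would sit behind this rather than automatic takedown alone.
            hits = 0
            for frame in frames:
                hits = hits + 1 if classify_frame(frame) >= threshold else 0
                if hits >= consecutive:
                    return True  # interrupt the stream / send to review
            return False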
    Senator Hassan. Yes. I mean, this is an ongoing issue 
because, of course, we are seeing the acceleration from idea to 
action happening much more quickly in part because of the 
influence of social media too. I thank you for that and I would 
look forward to following up with you both on it.
    Another question for the two of you. Facebook is currently 
running an advertising campaign which is touting the thousands 
of employees and billions of dollars that the company says it 
spends on safety and security. These numbers, however, are 
pretty meaningless without proper context, right?
    What specific information or metrics should these companies 
provide this Committee to help us fully understand their actual 
commitment to safety and security? Mr. Boland, again I will 
start with you.
    Mr. Boland. Yes. You are 100 percent correct that the 
context of the numbers matters. When they first announced their 
$13 billion over five years safety and security number, it was 
in the context of $50 billion in stock buybacks, so a massive 
imbalance of investment. They also will give you numbers of 
employees. Numbers of engineers matter. If you think about 
these issues, you can have employees who are non-technical who 
can be in what would be a review queue or a process to look at 
content. But the really important thing is engineering 
resources and how many engineers are put on these problems.
    I would really try to get from the companies an 
understanding of where they allocate their engineers for these 
types of problems. They do not need to show you their entire 
organization chart so that you get to know how many are working 
on the Metaverse and whatnot, but these are the numbers working 
on these safety and security issues, and this is how they are 
allocated by country, by topic, et cetera. I think that is 
justifiable to understand, and to feel like we have a sense of 
whether that is adequate relative to the total number of 
engineering employees.
    Senator Hassan. OK. Thank you. Mr. Roetter.
    Mr. Roetter. I think what is important is that we get 
metrics of a form that shows what results they are getting, not 
metrics that basically equate to, ``We are trying really hard. 
Give us a break.'' That would never work on Wall Street. You 
cannot just tell them you tried really hard to make a profit 
this quarter. You have to actually show what the results are.
    If you have transparency over the content, how the content 
is shared, and the engagements on that content, we will be able 
to study it. Independent people can look and see that certain 
content spreads very widely and other content does not, and 
then ask whether, after this investment that they have made, 
this has changed or not.
    We need metrics where we can measure the actual result, not 
just, ``Oh, I tried really hard, so please be happy.''
    Mr. Boland. I am sorry. One more quick thought there: I 
worry that a lot of times, because it is so painful, we focus 
on these extreme examples of content, like the livestreamed 
shootings. There is a broad swath of content that influences 
people that does not feel as scary. That is the stuff that 
terrifies me, and that is the stuff we do not get to see 
without transparency.
    Senator Hassan. OK. I thank you both. I thank the whole 
panel for your testimony, and I am very grateful for this 
hearing, and I will yield back.
    Chairman Peters. Thank you, Senator Hassan.
    During your opening statements I think each of you 
discussed the product development process at these companies, 
and we have talked at length about that process through the 
hearing.
    Mr. Boland, you discussed how Facebook does not incentivize 
limiting the spread of harmful content but, of course, 
prioritizes growth and revenue. Could you tell the Committee 
generally what metrics inform employee compensation at the 
company? What goes into that?
    Mr. Boland. For employees at Facebook, it is about rewards, 
right, so the rewards that you receive are your cash 
compensation, your stock compensation, and promotions. 
Generally, if you are building products you are rewarded on the 
success of that product, and that product success is defined by 
some set of metrics around whether that product is being used 
more.
    Let us pretend that you are building a video product. The 
things that you will care about are the metrics around what are 
the total watch hours, how many hours are being spent watching 
videos, what is the user growth, how many people are using that 
video product, where is that spread geographically, et cetera. 
You are incentivized on those hard metrics, and then you are 
not incentivized around, what kind of content are you growing 
your video with? What is the stuff underneath the hood that is 
showing up, that is driving this growth? That is not your 
problem. That is somebody else's problem.
    The problem is that company goals do not drive individual 
behavior. They are kind of there; you do not think about them. 
You think about what you, individually, and your team are 
goaled to deliver. That is always metrics: product growth 
metrics and success metrics of the product, not success in 
whether we are keeping people safe.
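
    A toy scorecard makes that incentive problem visible; the 
weights and field names below are hypothetical, not any 
company's actual formula. Every input is a usage metric, and 
nothing about the content that produced the usage ever enters 
the function, so it never enters the incentive:

        def product_success(total_watch_hours: float,
                            monthly_active_users: int,
                            user_growth_rate: float) -> float:
            # A weighted blend of the "hard metrics" a team is goaled on.
            return (0.5 * total_watch_hours / 1e6
                    + 0.3 * monthly_active_users / 1e6
                    + 0.2 * user_growth_rate * 100)

        # What the videos are of is not an argument, so it cannot
        # affect anyone's rating, bonus, or promotion.
        print(product_success(12e6, 40_000_000, 0.08))  # 19.6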
    Chairman Peters. There is not a trust and safety metric?
    Mr. Boland. For the trust and safety team, actually my 
understanding is they have been disbanded and moved into a 
central team. I did not experience products like video or 
others carrying a metric that was incentivizing trust or 
safety.
    Chairman Peters. That is not there. Mr. Roetter, is that 
the case as well?
    Mr. Roetter. Yes. I agree with all that. There is that 
promotion system, compensation system, review system. The 
problem with trust and safety metrics is that, typically, 
companies may have five top-level goals, let us say, and maybe 
one of them is trust and safety. That goal is at odds with the 
other metrics, and the other metrics always win.
    If I am an engineer building, say, a new livestreaming 
video service, if I launch that product and it gets some usage, 
that is a feather in my cap. That is something I can say that I 
did. I can point to its effect. That will help me with 
promotions, compensation, career advancement.
    If, at the last minute, I decide not to launch that product 
because I realize I cannot control some of the safety aspects 
and we should not do it if we cannot do it without certain 
safeguards, I get zero credit for that. It is as if I have done 
nothing for the company over the last X months.
    Chairman Peters. So you are, in effect, punished, and your 
future advancement will probably be questionable as well.
    Mr. Roetter. A product that I build and then I do not 
launch because it might not be safe is no different than if I 
just did not show up to work, in terms of the future credit 
that I get.
    Chairman Peters. Not a good place for an employee to be.
    Mr. Roetter. Correct.
    Mr. Boland. You can change incentives and you can change 
the way that people show up, not just through goals but through 
process. There was an example where, when Facebook 
started as a desktop site and moved to mobile, Mark Zuckerberg 
required that all products that were demoed to him showed 
mobile in their demonstration, that they had designs around 
that. The first team came in without that design, and he kicked 
them out; suddenly everybody was thinking about mobile designs.
    If, in your process, you create an incentive where you say 
that part of every product design discussion is what are all 
the harmful ways this product can drive hate or drive extremism 
or drive polarization, you would have a radical change in the 
way that people showed up to those meetings and, in the 
process, thought about the negative impacts of their product.
    Chairman Peters. All right. Mr. Boland, before you left it 
is my understanding--and correct me if I am wrong--that you 
voiced objections about how Facebook recommendation algorithms 
were actually promoting extreme, hateful, and racist content. 
Is that correct?
    Mr. Boland. That is correct. My concern was specifically 
around racist content.
    Chairman Peters. What was the reaction from your senior 
leadership within Facebook when you expressed these concerns?
    Mr. Boland. It was disappointing. I raised issues, 
particularly around the distribution of racist content that I 
was seeing in the CrowdTangle tool, and my concerns that we did 
not understand it, and brought forward three steps that I felt 
would be very good internal steps to actually help mitigate the 
problem: one, more internal researchers, two, more data to 
external researchers, and three, beefing up CrowdTangle to 
share more information.
    I had a range of responses from, ``You are wrong and that 
is not the case that this is driving this,'' with no evidence, 
mind you. Just, ``I believe you are wrong,'' and no counter-
evidence. Two, some said, ``Yes, this might be a problem, but 
it is not something that we are working on right now.''
    Chairman Peters. But when you say ``no evidence'' and those 
statements that say you are wrong, you worked for a company 
that looks at a lot of data and makes decisions based on data, 
but this is something they wanted to ignore, basically.
    Mr. Boland. Yes. A particularly concerning moment for me 
came when I really came to terms with believing that the 
product could be causing harm. I started to 
look at a variety of things that research teams were doing 
internally, to understand what they were seeing.
    The internal dialog and the internal documents, many of 
which Frances Haugen has shared, were troubling. There was a 
particular document that was an overview of polarization 
research from, I think, June 2020, and that talked about 
political polarization. One of the lines said, ``We have not 
researched and have very little understanding of racial, 
ethnic, or religious polarization.'' That underinvestment was 
significantly concerning to me.
    Chairman Peters. Yes. Mr. Roetter, one of the documents 
submitted by a Twitter whistleblower to the U.S. Securities and 
Exchange Commission (SEC) last month was a 2021 study that he 
commissioned of the site integrity team's capabilities. The 
study found that Twitter planned to launch a new product, 
Fleets, just weeks before the 2020 elections.
    The integrity team, according to that document--and I am 
quoting the document--said, ``Had to beg the product team not 
to launch before the election because they did not have the 
resources or capabilities to take action on misinformation or 
disinformation on the new product.'' The report also found, 
``While product teams do elicit feedback for new product 
launches, product managers are incentivized to ship products as 
quickly as possible and thus are willing to accept security 
risk.''
    Are these findings consistent with the pattern of 
decisionmaking that you saw?
    Mr. Roetter. With the caveat that that specific example 
happened after I was there and I cannot speak to it, that is 
absolutely consistent. In fact, I would be surprised, given the 
incentives at play, if the product team had done anything else.
    One way we used to talk about product managers is that they 
are the ``mini CEOs'' of their product, and they get consultation 
from other teams--trust and safety, legal, finance, 
compliance--but it is their decision to launch or not. Again, 
there is no possible credit or reward for not launching, 
whereas there is possibly a credit or a reward from launching. 
Because they probably had more confidence that it would at 
least get some usage and potentially drive revenue, there is 
every reason to launch and not worry about the other issues.
    Chairman Peters. Thank you.
    Ranking Member Portman, any remaining questions?
    Senator Portman. Let me follow up on that particular issue. 
Twitter Spaces, an audio function newer to the platform, was 
allegedly rolled out in such a rush, to your point, that it had 
not been fully tested for safety. Twitter lacked real-time 
audio content moderation capabilities when they launched it.
    We are told that in the wake of our withdrawal from 
Afghanistan it was exploited by the Taliban, and Taliban 
supporters used this platform to discuss how cryptocurrency can 
be used to fund terrorism.
    First of all, is that accurate? Mr. Roetter, maybe I will 
start with you. Second, is that common for Twitter to launch 
products that lack content moderation capabilities? You said 
that sometimes they are under pressure to ship products as soon 
as possible. Was that why this happened?
    Mr. Roetter. It is accurate that they are under pressure to 
ship products as soon as possible, and Twitter, in particular, 
has a history of being very worried about user growth and 
revenue growth. It is not the runaway success that Facebook or 
Google are, and so there was often very extreme pressure to 
launch things.
    A saying we had is that if you walk around and ask enough 
people if you can do something, eventually you will find 
someone who says no. The point of that was really to emphasize 
that you just need to get out and do something.
    Again, the overwhelming metrics are usage, and you would 
never get credit or be held up as an example or promoted or get 
more compensation if you did not do something because of 
potential negative consequences on the safety side or 
otherwise. In fact, you would be viewed probably as someone 
that just says no or has a reason not to take action. There is 
a huge bias toward taking action and launching things at these 
companies.
    Senator Portman. Yes. Are you aware of this Twitter Spaces 
issue and the Taliban having exploited it?
    Mr. Roetter. That specific example I am not.
    Senator Portman. OK. Do you think, assuming my example is 
correct, which I believe it is, that PATA would have been 
helpful there, to at least get behind the curtain and figure 
out why the decisions are being made?
    Mr. Roetter. I have not read the draft of that, but if my 
understanding is correct, yes, having more understanding of 
what these products do and what sort of content is promoted and 
what the internal algorithms are that drive both decisionmaking 
and usage of the products, I think that would be extremely 
valuable. Without any of that I would expect examples such as 
this to keep happening.
    Senator Portman. On this trust and safety issue, and 
specifically the product development and business 
decisionmaking processes--Mr. Boland, I will maybe direct this 
to you--Meta disbanded its responsible innovation team, it was 
announced just last week. Did you see that?
    Mr. Boland. I did. It was extremely disappointing.
    Senator Portman. My understanding is they had been tasked 
with addressing the harmful effects of product and development 
processes. You are saying it was concerning to you. Why are you 
concerned about it, and tell us how you interacted with 
integrity teams while you were at Facebook.
    Mr. Boland. I know the people who led that team. Very high 
integrity, very intentional about responsible design of 
products, as the team was named. Without that kind of center of 
excellence that is helping to shape other teams I fear that 
Meta is not going to continue to have that as a part of their 
conversations.
    You can think about that group as influencing and 
indoctrinating, if you will, the engineers that come to the 
company on how to start to think about some of these issues. It 
is less hard-coded into the incentive structure, which I think 
is a missing element, but would have driven really important 
conversations on how to ethically design products.
    I do not believe them when they say that they are making it 
a part of everything, that they are going to interweave it into 
the company. That is a very convenient way to dodge the 
question in my view. I do not believe that they are going to 
continue investing in it if it is not a team.
    This comes at a time when Meta is building the Metaverse. We 
do not know how the Metaverse is going to play out. I am 
extremely concerned because the paradigms we have seen in the 
past, that we have started to understand, around content and 
content distribution, are very different in the Metaverse. That 
is an area that if I were this Committee I would spend a lot of 
time really trying to understand the risks of the Metaverse. It 
feels very risky to me. It feels like the next space where 
there will be underinvestment, and without a team of 
responsible innovation helping to guide some of that thinking 
that is concerning.
    Senator Portman. Again, same question to Mr. Roetter, with 
regard to how to evaluate these trust and safety efforts in 
general, and specifically something like the responsible 
innovation team and what impact it is having. Do you think that 
it would be helpful to have this legislation called the 
Platform Accountability and Transparency Act?
    Mr. Roetter. I think so. If we get from that more 
information to illuminate what these algorithms are doing and 
what the incentive structures are, that would be extremely 
helpful.
    I think today we are operating in a vacuum, and what we 
see, a lot of the public conversation about this is people will 
cherry-pick one example and use it as evidence of whatever 
their theory is of what these companies are doing, that, of 
course, it must be true because here is one example.
    The fact of the matter is these companies are so massive 
and there is so much content you can cherry-pick examples to 
prove almost anything you want about these companies. Without 
broad-scale representative data from which we can compute what 
is being promoted and then reverse-engineer what the incentives 
must be, we are never going to see a change in the things 
they are optimizing for.
    Senator Portman. What are your thoughts on that, Mr. 
Boland?
    Mr. Boland. Yes, I think the issue that we face today is 
that we have to trust the companies. Having a robust set of 
data to understand what is happening, and making these public 
conversations, not company conversations, is critical.
    Meta would like to tell you that they do not want to put 
their thumb on the scale when it comes to algorithmic 
distribution. The challenge is that these algorithms were built 
in a certain way that you are kind of leaning on the scale 
already. You just do not realize you are leaning on it. These 
algorithms today are already doing a lot to shape discourse and 
to shape what people experience. We do not get to see it, and 
we have to trust the companies to share with us information 
that we know that they are not sharing.
    As I said earlier, I think the Platform Accountability and 
Transparency Act is a critical first step. We need to do it 
quickly, because these things are accelerating, to understand 
what is actually happening on these platforms.
    Senator Portman. Mr. Cain, you have the last word.
    Mr. Cain. I do believe that a number of issues that were 
addressed here today will have significance not only for our 
democracy within America but for the position of America in the 
world. There have been just major changes that I have seen 
personally, having been in China and Russia and 
recently Ukraine, in the world of technology, in the world of 
social media. My greatest concern is that we are ceding too 
much ground to authoritarian regimes that seek to undermine and 
malign us in whatever way they can.
    The software that we are using, the AI, the apps, these are 
ubiquitous. This is not the Cold War where we had hardware, we 
had missiles pointed at each other. Now we have smartphones, 
and it is entirely possible and quite probable that the Chinese 
Communist Party has launched major incursions into our data 
within America to try to undermine our liberal democracy.
    Senator Portman. That is a sobering conclusion, and I do 
not disagree with you, and I appreciate your testimony and our 
other experts. Thank you all.
    Chairman Peters. Thank you, Ranking Member Portman.
    Let me just follow up on a brief question on transparency. 
It is pretty clear we need transparency. We need to have, 
though, the active involvement of researchers that use that 
data, whether academics or civil rights groups or rights 
organizations, journalists, everybody has to be 
engaged.
    One pushback we could get, and I would like your response, 
is do you think there are ways that we can protect user data 
and still provide the kind of data that is necessary for these 
researchers? Is that possible?
    Mr. Roetter and Mr. Boland?
    Mr. Roetter. Yes, you will get that pushback, as well as a 
bunch of other pushbacks, I am sure. One, it is possible. We 
can obfuscate the data. We can generate random IDs so that you 
can hide the personally identifiable information (PII).
    Second, there are examples when third-party reviewers have 
access to confidential information, and because they operate in 
a professional manner and are well trusted, it does not leak 
out publicly. One of the examples I gave was auditors, 
in the course of certifying financial statements, see a bunch 
of internal financial performance that if it leaked out would 
be extremely valuable to competitors. The reason we have third-
party auditors is they are allowed to balance the public's need 
for information with the company's need to keep information 
private.
    We could do the exact same thing. There are secure 
computing environments. For example, there is a bunch of health 
care data in the world, which has a bunch of personally 
identifiable information and very strict legislated privacy 
requirements around that, and that is managed in a way such 
that people can extract insights from the data without 
violating individual privacy. We could do the same thing here.
    Chairman Peters. Mr. Boland.
    Mr. Boland. It is absolutely possible and doable. There are 
some hard aspects to it but it can be done. There are two 
components that I think are favorable there. One, with the 
increased TikTok-ization of services, more and more content is 
public content, so you are really not dealing with issues 
around privacy and private data.
    For private data, Meta was able to solve it with their ads 
measurement system. We built a system where we could connect 
the ads that people saw on Facebook with the purchases that 
they made in a physical store. We were able to do that in a 
privacy-safe way. If we can do it for ads, you can do it for 
these other areas as well.
    Chairman Peters. Great. Thank you, and I would like to 
thank the three of you for your testimony here today. You 
certainly provided some very insightful contributions to what is 
a very important conversation. We appreciate your availability 
to be part of the first panel for this hearing.
    This hearing is going to resume this afternoon when we will 
welcome our second panel of witnesses, the chief product 
officers of Meta, YouTube, TikTok, and Twitter.
    The Committee will now go into recess, and then we will 
reconvene at 2:30 p.m.
    [Whereupon, at 11:47 a.m., the hearing was recessed, to 
reconvene at 2:30 p.m. this same day.]
    The Committee reconvened at 2:31 p.m., in room SD-342, 
Dirksen Senate Office Building, Hon. Gary Peters, Chairman of 
the Committee, presiding.

            OPENING STATEMENT OF CHAIRMAN PETERS\1\

    Chairman Peters. This morning the Committee heard testimony 
from experts and former executives at Facebook and Twitter that 
provided important transparency and context for how many of the 
largest social media companies operate. Independent and 
accurate information about how companies balance competing 
priorities or how they do not, who within the companies make 
those decisions, and how they build their products is 
incredibly difficult to find.
---------------------------------------------------------------------------
    \1\ The prepared statement of Senator Peters appears in the 
Appendix on page 97.
---------------------------------------------------------------------------
    This morning's testimony shed some light on many of the 
areas that this Committee and the public have questions about. 
I look forward to building on that testimony with our second 
panel of witnesses, who can speak directly to what steps Meta, 
YouTube, TikTok, and Twitter are taking to stop the spread of 
extremist content on their platforms, and I want to sincerely 
thank each and every one of you for being here today before us.
    As we heard from our panel this morning, as Chief Product 
and Operating Officers, you play key roles in your company's 
decisionmaking process. You set the agendas for the product 
teams who are constantly updating the apps and developing new 
features. You play a prominent role in setting priorities and 
determining what tradeoffs to make among those priorities, as 
product teams launch new features or make changes to the apps.
    This is the first time executives in your positions have 
appeared before Congress, and I really do appreciate you 
joining us for this opportunity to hear directly about your 
role at these very powerful companies.
    The platforms you are representing today reach billions of 
people around the world. Meta's platforms reach more than 3.6 
billion people a month, TikTok has more than one billion users 
a month, YouTube reaches almost two billion people a month, and 
Twitter has more than 200 million monthly users. The reach is 
massive and so is the influence that your platforms wield.
    Whether users are fully aware of it or not, the content 
they see on your platforms shapes their reality, and the 
business decisions you make are one of the main driving forces 
of that phenomenon. This amount of influence may have a minimal 
impact on the average user of your platform, but we have seen 
firsthand how quickly dangerous and extremist content can 
proliferate online, especially to vulnerable communities or 
users already on the fringe, and alter how people view the 
world.
    Conspiracies like QAnon and Stop the Steal, hateful 
ideologies like white supremacy and anti-Semitism, and so many 
more examples of harmful content pollute your platforms. This 
extremist content can spread like wildfire, amplified by the 
recommendation algorithms and other tools your teams build to 
increase your companies' audiences and profits. Extremists use 
the products you designed to recruit and to radicalize 
followers, and plot attacks, including the January 6th attack 
on the Capitol, our democracy, and our Nation.
    There is no question that there is a relationship between 
social media amplification of this extremist content and the 
rise we have seen in hate crimes and domestic terrorist attacks 
that mark one of the gravest threats to our homeland security. 
Despite these serious threats, I am concerned that your 
companies have still not taken the necessary steps to limit the 
spread of the hateful, dangerous, and extremist content that 
has motivated real-world violence.
    So that we all understand exactly the type of extremist 
content we are discussing today and how challenging this 
problem is to tackle--it is clearly a challenge--I would like 
to take a moment to show a few examples, if you would check the 
screen.
    [Video plays.]
    This morning we heard from former executives that your 
companies have no incentive to effectively address the problem 
this content creates or prioritize the safety of your users as 
you build and introduce new social media products. Instead, 
like any for-profit company, your incentives are to prioritize 
user engagement, grow your platforms, and generate revenue.
    I have asked you to appear before the Committee today to 
answer questions about your companies' incentives and 
priorities, how those incentives are reflected in how you 
compensate and promote your product development engineers, 
managers, and other employees, and to provide important insight 
into your decisionmaking processes.
    I want to thank you again for joining us today. I am 
looking forward to this conversation, so that our Committee and 
the public can better understand this serious problem and how 
it threatens the safety and security of our Nation.
    Ranking Member Portman, you are recognized for your opening 
comment.

            OPENING STATEMENT OF SENATOR PORTMAN\1\

    Senator Portman. Thank you, Mr. Chairman. We had a very 
productive hearing this morning with experts on the impact of 
social media on homeland security, and I look forward to our 
discussion this afternoon, and I want to thank the 
representatives here from Meta, YouTube, TikTok, and Twitter. 
Thank you all for being here, and in anticipation of another 
good hearing I appreciate you being very frank with us today 
and providing us information we need to be able to move 
forward.
---------------------------------------------------------------------------
    \1\ The prepared statement of Senator Portman appears in the 
Appendix on page 101.
---------------------------------------------------------------------------
    About 300 million Americans now use social media. We know 
that social media has offered unprecedented connectivity, and 
that is often very positive, but we also know it has raised 
serious concerns for our children, our civic culture, and our 
national security. Terrorists and violent extremists, drug 
cartels, criminals, authoritarian regimes, and other dangerous 
forces have used social media in furtherance of their goals. 
They have exploited your platforms.
    Perhaps the most concerning consequence of social media is 
the ability for our adversaries to exploit platforms to harm 
Americans for their own geopolitical gain. As an example, in 
this second panel I hope we will discuss China's influence over 
TikTok, which is a social media app that at least one-third of 
Americans use, and a lot of young people.
    As the lead Republican and former Chairman of the Permanent 
Subcommittee on Investigations (PSI), I have been focused on 
China's malign activities for many years, and in 2019 I led a 
year-long bipartisan investigation which found that China 
recruits U.S.-based researchers to steal taxpayer-funded 
intellectual property and research for its own military and 
economic gain.
    Following this report, I introduced bipartisan legislation, 
the Safeguarding American Innovation Act, which seeks to stop 
U.S. taxpayer-funded research and intellectual property (IP) from 
falling into the hands of the Communist Party of China (CCP).
    Two months ago I issued a new report detailing China's 
efforts to target, influence, and undermine the United States 
Federal Reserve. China has a pattern of economic and cyber 
espionage, and social media for them is just another 
opportunity. I am highly concerned about TikTok and how China 
may be leveraging their influence to access the platform's data 
on Americans.
    Chinese law requires all companies operating under its 
jurisdiction to, in essence, allow the Chinese Communist Party 
to access every piece of data collected. Any company that 
refuses to comply with the CCP's demand is subject to severe 
consequences, as are individuals. Therefore, since both TikTok 
and its parent company, ByteDance, have a presence in Mainland 
China, an expert witness this morning told us that TikTok's user 
data could be accessed by the Chinese Communist Party. We want 
to talk more about that today.
    This means that the CCP may have access to 100 million 
Americans' personal and proprietary information. As the U.S. 
Government has warned, China's access to user data will allow 
it to extend its malign agenda and build dossiers on American 
citizens. The overwhelming popularity of this app with 
America's youth will allow China to collect never-before 
accessed troves of data on our children, the future generations 
of Americans.
    But the challenges that social media poses to our children 
are not limited to TikTok. We continue to see the proliferation 
of child sexual abuse material online. I have been at the 
forefront of this for years. I am proud that the Stop Enabling 
Sex Traffickers Act was signed into law in 2018. This was the 
first bill to reform Section 230, by removing barriers to both 
criminal prosecution and civil suits against websites that 
knowingly facilitate online sex trafficking.
    Because of this change in law, courts are beginning to 
affirm that Section 230 cannot shield internet companies when 
they fail to respond to images of child exploitation and 
continue to profit from exploitation on their platforms. A 
specific case against Twitter is now being considered by the 
Ninth Circuit Court of Appeals, for example, and will show if 
the law needs to be expanded in order to properly protect 
children.
    But it is not just Twitter. The fight continues on other 
platforms that are used to exploit children. Meta announced 
earlier this year that they would not report all explicit 
images of children and would instead, and I quote, ``err on the 
side of an adult,'' end quote, when moderating explicit images 
of could-be children. In other words, when the age of an 
individual in a sexual image is uncertain, content moderators 
are told to put their thumbs on the scale of that individual 
being an adult.
    To me this is shocking. Let us be clear what we are talking 
about. This is child sexual abuse material, images of a minor's 
rape, exploitation. Somehow, at least what we have been told, 
is that Meta has decided that these should not be referred to 
law enforcement. The National Center for Missing and Exploited 
Children (NCMEC) has made it clear that images must be reported 
if they appear to involve a child so that law enforcement can 
intervene and stop the abuse and prosecute perpetrators.
    I worked with colleagues across the aisle to draft this 
legislation, Stop Enabling Sex Traffickers Act (SESTA), and we 
crafted it narrowly so that it would be focused on ending 
trafficking and exploitation online. But it may, in fact, be 
too narrow if companies continue to turn away from keeping the 
exploitation of children off of their platforms. I hope my 
colleagues will take up the challenge of revisiting SESTA and 
tightening the standard so that entities showing a reckless 
disregard for the sexual exploitation of children are held 
accountable. I am ready to be an ally in this fight, even after 
I leave the Senate this term.
    I look forward to discussing these matters, especially 
regarding how product development processes appear to be at 
odds with user safety as well as the need for more detailed 
transparency from companies, and again, I look forward to the 
testimony.
    Thank you, Mr. Chairman.
    Chairman Peters. Thank you, Ranking Member Portman.
    Our first witness is Chris Cox, Chief Product Officer at 
Meta. Mr. Cox joined Meta in 2004, as a software engineer, and 
has helped build the first versions of signature Facebook 
features, including the News Feed. He later served as Director 
of Human Resources (HR), leading the direction and tone of 
Meta's company culture.
    In 2008, he began serving as Vice President of Product, and 
in this role Mr. Cox built out the initial product management 
and design teams before being promoted to Chief Product Officer 
in 2014, and began his role overseeing the family of apps in 
2016.
    Before I ask you to have your opening comments, we skipped 
over an important part of the Committee, and that is that it is 
the practice of this Committee to swear in witnesses. If the 
four of you would please stand up and raise your right hands.
    Do you swear that the testimony that you will give before 
this Committee will be the truth, the whole truth, and nothing 
but the truth, so help you, God?
    Mr. Cox. I do.
    Mr. Mohan. I do.
    Ms. Pappas. I do.
    Mr. Sullivan. Yes.
    Chairman Peters. All four answered in the affirmative. 
Thank you. You may be seated.
    Mr. Cox, I have already given your introduction, so please 
proceed with your opening comments.

     TESTIMONY OF CHRIS COX,\1\ CHIEF PRODUCT OFFICER, META

    Mr. Cox. Thank you, Chairman Peters, Ranking Member 
Portman, distinguished Members of the Committee. Thank you for 
the opportunity to appear here before you today. My name is 
Chris Cox. I am Meta's Chief Product Officer, overseeing our 
apps and privacy teams.
---------------------------------------------------------------------------
    \1\ The prepared statement of Mr. Cox appears in the Appendix on 
page 125.
---------------------------------------------------------------------------
    I first joined the company in 2005, as one of our first 15 
software engineers. I care deeply about the work we do to help 
people connect with things and the people they care the most 
about. It is important to us that we help people feel safe on 
our apps, and we stand firmly against the exploitation of 
social media by those committed to inciting violence and hate. 
That is why we prohibit hate speech, terrorism, and other 
harmful content.
    To enforce these rules, we employ tens of thousands of 
people and we use industry-leading technology. We regularly 
publish transparency reports so people can see how we are doing 
over time and how we compare to other internet platforms.
    I am proud that we have invested around $5 billion last 
year alone and have over 40,000 people working on safety and 
security, more than any other tech company, even adjusted for 
scale. Our efforts are making a difference. For example, we 
reduced by more than half the amount of hate speech people see 
on Facebook over the last 18 months.
    People often talk about these types of issues as safety 
issues, but at Meta we also refer to them as integrity issues. 
Integrity is our way of referring to the work we do to prevent 
bad actors from abusing our platforms. This includes working to 
stop terrorists and violent extremists and also bullying and 
harassment, scams, and other types of harm.
    As the Chief Product Officer I am proud that safety and 
integrity are key to the product experience. We build products 
and continually update them with safety and integrity in mind. 
It is a core part of our ethos, that as we develop products we 
constantly think about how people are going to use them and 
work to make sure they can do so safely.
    I know you have questions about our algorithms. Like most 
platforms, Facebook and Instagram use different algorithms for 
various features. For example, we use algorithms to help keep 
our community safe by identifying and removing content that 
violates our policies, including hate speech, incitement, and 
terrorism. This work often happens before anyone reports 
content to us, sometimes even at the point of creation. We use 
algorithms to rank the content that appears in people's feed 
and search results, to help deliver relevant advertising, and a 
whole lot more.
    I also want to stress that our goal is to help people see 
what they find most valuable. It is not to keep people on the 
service for a particular length of time, and it is certainly 
not to give people the most provocative or enraging content. In 
fact, key parts of those systems are designed to do just the 
opposite. We reduce the distribution of many types of content, 
including content that may be misleading or is found to be 
false by independent fact-checking partners.
    At the end of the day, our job is to build the best product 
for people, and that is a product that is reliable, fast, safe, 
secure, and relevant, a product that connects people to content 
relevant to their interests and connects them to their family 
and friends. That is the product that people want, and that is 
the product we wake up every day trying to build.
    We appreciate your attention to these important issues and 
look forward to continuing to work with you to find ways we can 
continue to improve our products, our processes, and our 
partnerships.
    Thank you, and I look forward to your questions.
    Chairman Peters. Thank you, Mr. Cox.
    Our next witness is Neal Mohan, Chief Product Officer at 
YouTube. In his role, Mr. Mohan is responsible for YouTube 
products and user experience on all platforms and devices 
globally, including YouTube's core mobile applications, 
YouTube, YouTube Kids and Music, YouTube Red, and YouTube TV, 
as well as other designs, policies, and services.
    Previously Mr. Mohan was Senior Vice President of Display 
and Video Ads at Google, and prior to joining Google Mr. Mohan 
served as Senior Vice President of Strategy and Product 
Development at DoubleClick, an advertisement company that 
developed and provided internet and ad-serving services before 
its acquisition by Google. In that role, he built the company's 
strategic plan, led the product management team, and grew the 
business rapidly.
    Mr. Mohan, welcome to the Committee. You may proceed with 
your opening remarks.

   TESTIMONY OF NEAL MOHAN,\1\ CHIEF PRODUCT OFFICER, YOUTUBE

    Mr. Mohan. Thank you, Chairman Peters, Ranking Member 
Portman, and distinguished Members of the Committee. Thank you 
for the opportunity to appear before you here today. As the 
Chairman mentioned, my name is Neal Mohan, and I am the Chief 
Product Officer of YouTube. In my role I am responsible for all 
of YouTube's products, our user experience, and trust and 
safety globally.
---------------------------------------------------------------------------
    \1\ The prepared statement of Mr. Mohan appears in the Appendix on 
page 129.
---------------------------------------------------------------------------
    YouTube's mission is to give everyone a voice and show them 
the world. Our openness is core to that mission and enables us 
to help billions of people around the world to learn new 
skills, discover emerging music and artists, and enjoy videos 
from their favorite creators.
    We are also proud to be a place where creative 
entrepreneurs can build thriving small businesses. Last year, 
YouTube's creative ecosystem contributed over $25 billion to 
the U.S. gross domestic product (GDP), and we supported more 
than the full-time equivalent of 425,000 jobs across the 
country.
    Our commitment to openness works hand-in-hand with our 
responsibility to protect our community from harmful content. 
Responsibility is central to every product and policy decision 
we make, and is our No. 1 priority.
    To that end, I want to make clear that there is no place on 
YouTube for violent extremist content. Our policies prohibit 
content that promotes terrorism, violence, extremism, and hate 
speech. Not only is this type of content harmful to our 
community, the overwhelming majority of creators, viewers, and 
advertisers do not want to be associated with it, meaning it is 
also bad for our business.
    In my testimony today I will provide more information on 
our approach to responsibility as well as our policies and 
technology that enable our skilled enforcement efforts to 
combat terrorist content online.
    We have four pillars of responsibility. We call them the 
Four R's: we Remove content that violates our policies as 
quickly as possible, we Raise up authoritative sources, we 
Reduce the spread of content that does not violate our policies 
but brushes up against our lines, and we Reward trusted 
creators and artists. My written submission explains each of 
these Four R's in much more detail.
    This framework enables us to uphold our responsibility to 
the YouTube community and society while preserving the 
opportunities of an open platform. For us, safety and growth 
are intertwined. Violative content undermines user trust and 
satisfaction, deters advertisers from investing in ads on 
YouTube, and harms the creators that have built businesses on 
our service.
    Let me discuss YouTube's policies prohibiting terrorist, 
violent, and extremist content. Our Community Guidelines set 
the rules of the road for content on YouTube. These policies 
explicitly prohibit terrorist organizations from using our 
services 
for any purpose, and we routinely remove such material. We rely 
on a combination of people and technology to enforce these 
policies. In fact, machine learning is a critical tool in our 
effort to remove violative content at scale before it is widely 
viewed.
    As a result of our ongoing investments in teams and 
technology, in the first six months of 2022 we removed close to 
8.4 million videos for violating our policies. Over 90 percent 
of this violative content was first detected by machines, the 
majority of it removed before receiving ten views.
    Our policies are complemented by our work to raise up 
authoritative content and reduce the spread of content that 
comes close to but does not quite violate our policies. For 
news and information topics our systems elevate authoritative 
sources such as news outlets and public health authorities in 
search results and Watch Next panels, and what we call 
borderline content is not widely recommended.
    We also share best practices on counterterrorism with our 
industry peers through the Global Internet Forum to Counter 
Terrorism (GIFCT), which is dedicated to disrupting terrorist 
abuse of digital platforms. Responsibility is and will continue 
to be YouTube's No. 1 priority. Our business literally depends 
on it.
    Thank you, Mr. Chairman, for convening this important 
hearing. We look forward to continuing to work with you to 
address these challenges. Thank you.
    Chairman Peters. Thank you, Mr. Mohan.
    Our next witness today is Vanessa Pappas, Chief Operating 
Officer at TikTok. In her role she is responsible for 
overseeing content, operations, marketing, and product teams. 
She previously served as interim head of TikTok globally. She 
also has experience serving as Global Head of Creative Insights 
at YouTube, where she oversaw YouTube's global creative 
research and trends, audience development, creative strategy, 
and growth teams.
    Before joining YouTube, Ms. Pappas served as Vice President 
of Programming and Audience Development at nextnewnetwork, 
later acquired by YouTube and Google, where she spearheaded 
business partnerships and audience development efforts.
    Ms. Pappas, welcome to the Committee. You may proceed with 
your opening comments.

TESTIMONY OF VANESSA PAPPAS,\1\ CHIEF OPERATING OFFICER, TIKTOK

    Ms. Pappas. Great. Thank you for having me.
---------------------------------------------------------------------------
    \1\ The prepared statement of Ms. Pappas appears in the Appendix on 
page 136.
---------------------------------------------------------------------------
    Chairman Peters, Ranking Member Portman, and Members of the 
Committee, thank you for the opportunity to appear before you 
today to discuss how TikTok is delivering on our commitment to 
provide a safe and welcoming experience for our community while 
also combating some of the real-world harms that are the focus 
of this Committee's important work.
    My name is Vanessa Pappas, and I am the Chief Operating 
Officer for TikTok. I live in Los Angeles with my family and I 
have been in the United States for 20 years, and have spent my 
career in entertainment and media. Prior to joining TikTok, I 
was an executive at YouTube. I am passionate about creating 
safe online communities where people can express themselves 
creatively and discover entertaining and useful content.
    At TikTok our focus on safety starts at the top, with a 
leadership team goal to strengthen safety and build trust, and 
this focus on safety and security flows through our product 
decisions.
    As a person who believes in the potential of online 
platforms to create amazing opportunities for individuals, for 
businesses, and for society, I am personally invested in this 
goal, and as an executive there is no responsibility greater 
than protecting the people on our platform.
    TikTok's mission is to inspire creativity and bring joy, 
and more than 1 billion people around the globe enjoy the 
authentic, entertaining content that TikTok is known for.
    We know that with success and growth comes responsibility. 
We are committed to being an industry leader in safety and 
security, and earning trust through the transparency of our 
actions. Let me talk first about safety and security.
    At TikTok, creating a safe environment means we make 
decisions that prioritize the well-being of our community and 
limit the potential of online polarization or real-world harm, 
even if those choices come at the expense of short-term 
commercial success. Our trust and safety teams have an active 
seat throughout the product development roadmap and before 
launch.
    Our terms of service and community guidelines are built to 
help ensure our vision of a safe and authentic experience. Our 
policies have zero tolerance for disinformation, violent 
extremism, and hateful behavior. Enforcement of these policies 
in the United States is led by our U.S. safety team in Los 
Angeles, which reports directly to me. TikTok has thousands of 
people working across safety, privacy, and security, and we 
invest heavily in technology to detect potential violations or 
suspicious accounts at scale.
    We also work to prevent the spread of harmful content. For 
instance, with the help of partners, including the U.S. 
intelligence agencies, we identify groups and individuals in 
the United States and abroad who promote violent extremism and 
hateful ideologies, and we work to eliminate the content 
associated with them. Examples include foreign terrorist 
organizations, drug cartels, and groups such as Three 
Percenters and Oath Keepers. Anyone who searches for this 
content or related hashtags or keywords will instead be 
redirected to our community guidelines.
    Notably, TikTok was not the platform of choice for those 
who organized the violence at the Capitol on January 6, 2021. 
Of 686 references in the Department of Justice (DOJ) charging 
documents, TikTok was mentioned in only 18.
    I will next talk about trust and transparency. Trust is a 
huge component of safety, and it is hard to earn but easy to 
lose. We hold ourselves to a high standard when it comes to 
being transparent about our work on safety and security in 
order to build trust.
    You may be familiar with Project Texas, a critical and 
industry-leading initiative we have been pursuing for over a 
year. We are making progress toward a final agreement with the 
U.S. Government to further safeguard U.S. user data and fully 
address U.S. national security interests. We look forward to 
finalizing this arrangement and sharing more when we are able.
    Since 2019, we have released community guideline 
enforcement reports which detail the type and volume of the 
content we remove. For instance, in the first quarter of 2022, 
more than 95 percent of the time we discovered and removed 
problematic content before receiving any reports. We also 
disclose the data on the requests we receive from law 
enforcement and/or governments.
    Finally, our Transparency and Accountability Centers show 
how we moderate content and recommend content. We would be 
happy to arrange for a tour for Members and Committee staff as 
we have for others at your convenience. Last month, we 
confirmed that our content moderation and recommendation models 
will be vetted and validated by Oracle. We recognize your 
questions and concerns and strive to lead the industry in 
meaningful transparency.
    Thank you again for inviting me today. We know that our 
work in safety and security is never done. These issues are of 
the utmost importance to TikTok, to our community, and to our 
industry. We are glad to be a part of a forward-looking 
conversation such as this one so that we can better work 
together to address these critical challenges.
    I look forward to answering your questions. Thank you.
    Chairman Peters. Thank you, Ms. Pappas.
    Our final witness today is Jay Sullivan, General Manager of 
Bluebird, Twitter's Consumer Products. He concurrently serves 
as the interim General Manager of Goldbird, Twitter's Revenue 
Products, and previously served as Vice President of Consumer 
Product at Twitter.
    Prior to joining Twitter, Mr. Sullivan worked at Facebook 
where he led the development of Reality Labs' AI Assistant, and 
then led the privacy, integrity, and systems product teams for 
Messenger and Instagram Direct, launching many user-focused 
features and improvements.
    Mr. Sullivan, welcome to the Committee. You may proceed 
with your opening comments.

  TESTIMONY OF JAY SULLIVAN,\1\ GENERAL MANAGER OF BLUEBIRD, 
                            TWITTER

    Mr. Sullivan. Chairman Peters, Ranking Member Portman, and 
distinguished Members of the Committee, thank you for the 
opportunity to speak with you today about this important issue. 
My name is Jay Sullivan. I joined Twitter in November 2021. In 
April of this year I became General Manager (GM) of Twitter's 
Consumer Product team. This team is responsible for the main 
features that people use on Twitter's mobile apps and website. 
I am also the General Manager of Twitter's Revenue Products 
team.
---------------------------------------------------------------------------
    \1\ The prepared statement of Mr. Sullivan appears in the Appendix 
on page 154.
---------------------------------------------------------------------------
    Twitter's purpose as a company is to serve the public 
conversation. The open nature of our service gives a voice to a 
world of diverse people, perspectives, ideas, and information. 
We believe that Twitter is a force for good in the world.
    In the past year we have seen people come to Twitter to get 
on-the-ground information about the conflict in Ukraine, to 
access lifesaving information during natural disasters, and to 
exchange ideas about diverse topics ranging from news to 
culture to sports. The goal of the Consumer Product team is to 
increase healthy participation in the public conversation. We 
measure our success by how many people use Twitter and the 
health and safety of the platform. These two priorities go hand 
in hand. If people do not feel protected from hate, abuse, and 
harassment they will simply leave the service.
    By the same token, advertisers do not want their brands, 
products, or services to appear anywhere near harmful content. 
They will simply pull their ads. This is why it is not in our 
interest to have harmful content on our platform, and this is 
why we set out to build features that promote a healthy 
conversation, and why we will pause, delay, or stop a product 
rollout if we have health or safety concerns.
    My written testimony outlines many steps that Twitter has 
taken to safeguard our service and improve health on our 
platform. I would like to use this time to explain my team's 
overall approach to health, which is built on three main 
pillars.
    The first pillar is integrating health and safety 
considerations into the product design and development process. 
We proactively and methodically assess risk and potential 
unintended consequences before we begin development of a new 
feature and through the development process.
    We also develop new features that incentivize healthy 
discourse. Some recent examples are the development of prompts 
that encourage people to read articles before sharing them; 
interstitial labels that provide context; Birdwatch, a 
community-powered annotation feature; Twitter Circle; and many 
more health and safety features.
    But we cannot always prevent bad behavior so we also build 
tools that enable us to identify and take action on harmful 
content, including machine learning software to help detect it. 
These tools enhance the customer experience by decreasing the 
burden on individuals to report content for review, and they 
improve the platform overall.
    The second pillar is developing and enforcing policies 
designed around health and safety. We have a team responsible 
for developing the Twitter Rules, the policies and governance 
frameworks that prevent and mitigate harm to the people who use 
Twitter. This team does not report to me but we work closely 
together.
    Twitter's policies prohibit terrorist and other violent 
organizations on our platform, inciting violence, harassment 
targeted at individuals or groups, and hateful conduct. Our 
platform integrity and authenticity policies address efforts to 
spread misinformation relating to civic integrity, moments of 
crisis, COVID, and synthetic and manipulated media.
    The third pillar is transparency and accountability. We 
directly engage with outside experts, formal public feedback 
processes, and research. For example, in 2018, we were the 
first in the industry to release an archive of potential 
foreign influence operations identified on Twitter, enabling a 
host of research on this important issue. This effort has now 
evolved into the Twitter Moderation Research Consortium. We 
also provide industry-leading research access to the Twitter 
application programming interface (API).
    Underpinning all of this work are our culture, our 
processes, and our technology. I, and other senior leaders at 
the company, work to encourage and empower every employee to 
contribute to this shared goal.
    I look forward to the discussion today. Thank you.
    Chairman Peters. Thank you, Mr. Sullivan.
    Mr. Cox and Mr. Mohan, the question is going to be directed 
to you. The dangerous QAnon conspiracy that started in 2017 
spread unchecked on your platforms for years before you started 
to downrank and then ban it on your platforms. A February 
2022 poll found that 16 percent of Americans now believe in 
this conspiracy theory.
    We heard this morning that your algorithms push sensational 
content. Mark Zuckerberg, in fact, said in 2018, that this is, 
quote, ``a basic incentive problem,'' end of quote, and he goes 
on to say that, ``when left unchecked people will engage 
disproportionately with more sensationalist and provocative 
content,'' from Mr. Zuckerberg.
    My question is, if your recommendation algorithms are 
focusing on engagement--we understand the business reasons for 
doing that--and provocative content that increases that 
engagement--because that is certainly the most engaging--is it 
inevitable that they will promote extreme content that you have 
not yet labeled violative, like QAnon?
    Mr. Cox, we will start with you, and then Mr. Mohan.
    Mr. Cox. Thank you, Senator. To start with QAnon, this is 
an organization that today is labeled as a violence-inducing 
network and so is not allowed on our platform. In general, we 
believe there is no place for terrorism, for violence-inducing 
content, for extremism across the network. We work hard to take 
that content down, and as we have talked about, we publish our 
results and work with law enforcement to make sure that we can 
do so quickly.
    Chairman Peters. Mr. Mohan.
    Mr. Mohan. Senator, thank you for the question. QAnon on 
YouTube is deemed to be a harmful, criminal conspiracy. We do 
not allow that content on our platform. We have been removing 
that type of content from our platform for years, given the 
nature of potential incitement to violence.
    But we do not just stop there. Not only do we remove the 
content because there is no place for hate, harassment, violent 
extremism, or graphic violence of any kind on YouTube, but we 
also make sure that when users are looking for information on 
our platform around a particular topic, news topic, what have 
you, we raise up content that comes from authoritative sources, 
that includes typically mainstream media outlets, et cetera, 
that can put that particular news story in context.
    We have a combination of a number of tools, those Four R's 
that I alluded to before, that work comprehensively to make 
sure that this type of content has no home on YouTube.
    Chairman Peters. I appreciate that and I wanted to hear 
your response from both of you. I appreciate it has no home and 
you are aware of that now. The intro to the question was that 
it started to spread in 2017. This stuff was on your platforms 
for years. It took you a long time to come to the conclusion 
that both of you have just come to.
    But I want to get back to, really, the question that I 
have, is that we have a quote from Mr. Zuckerberg. We 
understand the business model, although I think Mr. Cox said 
that it is not necessarily to keep people on platforms. It is 
to engage. The more people that are engaged in your platforms, 
the better from a business perspective. You will be able to 
serve up more ads for folks, generate more revenue. As Mr. 
Zuckerberg said, people engage disproportionately in 
sensational and provocative content.
    That content is actually good for your business. If more 
sensational, provocative content is put forward, getting people 
to 
stay on the platform longer, you are going to be able to show 
them more ads.
    My question is, is it not inevitable that that is going 
to happen when you continue to put out this content? I 
appreciate 
after the fact, and we are going to talk about your model up 
front to try to prevent some of this stuff from happening at 
the front end. I appreciate at the back end that you are going 
to take some action. How many people saw the false QAnon 
conspiracy theory? I said 16 percent of the American people now 
think this conspiracy theory is real. You caught it, but not 
until 16 percent of the American people are part of this 
insidious theory.
    Tell me about the up front. Why are you not engaged up 
front before you launch products, to understand and perhaps 
anticipate how things could be misused?
    I will start with you, Mr. Mohan, this time, and then Mr. 
Cox.
    Mr. Mohan. Yes, Senator. I appreciate the opportunity to 
clarify what I think might be a potential misconception in 
terms of our business model and what our incentives are.
    To be very clear, we have no incentive to post this 
content, to promote it in any manner, and that goes to the 
fundamental nature of what YouTube's business actually is. We 
are fundamentally an advertising platform. We generate revenue 
through advertising partners. We share that revenue, the 
majority of that revenue, with our creators.
    When we talk about the creator economy, all those over 
400,000 jobs that we have created in this country, it is 
through that business model that generates revenue on behalf of 
our creators. Our advertisers have told us, in no uncertain 
terms, that they do not want to be associated with content that 
promotes hate, violent extremism of any sort, terrorism, or 
what have you. I have firsthand experience myself, over the 
years. When they feel that sort of content is on our platform, 
they walk away.
    We have not just a moral imperative--that is my top 
priority, living up to our responsibility--but it also aligns 
with our business goals.
    Chairman Peters. I appreciate that, and my question is how 
long it takes to be able to identify. This was up for two-plus 
years before the changes there.
    Mr. Cox, Facebook currently has an advertising campaign 
touting that you have spent $16 billion over the last six years 
on safety and security. I do not think it is the amount of 
money that is spent. It is about the results that are most 
important. You are a very large company, and, in fact, I think 
over the last six years you have had revenue close to $450 
billion, a massive amount of money, so what you have spent is a 
relatively small amount relative to the total revenues of your 
company. In fact, I think what you have spent is basically 
equal to $1 per user per year for the entire globe. In 
comparison, Meta spent over $85 
billion on stock buybacks over the last six years, considerably 
more than you spend on trust and safety.
    Why is your company willing to spend so much more per year 
to drive up a stock price but not willing to spend the money 
necessary to be able to pull down this dangerous content a 
whole lot quicker and perhaps actually be forward-leaning and 
design products from the get-go to eliminate the abuse of these 
platforms?
    Mr. Cox. Thank you, Senator. This is an issue for the whole 
company, not just for the safety teams or for the specific 
investments that we talked about there. I expect every 
engineer, product manager, designer, researcher, data 
scientist, whether it is building a new product or whether it is 
iterating upon an existing product, to pay attention to safety. 
That is something that is built into the deoxyribonucleic acid 
(DNA) of the company. It is something I personally care very 
deeply about. It is something that we expect folks who are 
designing products to think about as a part of their work.
    In addition to the specific investment of 40,000 folks who 
work directly on safety and security for the company, it is 
part of the culture of how everybody at the company thinks 
about their work.
    Chairman Peters. I am going to turn it over to the Ranking 
Member because we have a number of Members here, but I am going 
to drill down a little further on that comment in a further 
round.
    Ranking Member Portman, you are recognized for your 
questions.
    Senator Portman. Thank you, Mr. Chairman. I look forward to 
getting into this issue of the balance between free speech and 
the hate speech that leads to violence because it is a line 
that has to be drawn. I know it is not easy, but I am going to 
talk about one that I think is easier, and that is child sexual 
exploitation. I talked about it in my opening statement a little 
bit.
    We all know that the threat of this sexual abuse material 
is a persistent threat. In fact, we note that last year over 29 
million reports came in of child sexual exploitation. That was 
a 35 percent increase from just 2020, so this is an increasing 
problem across the board, but particularly with regard to our 
kids.
    That is why I thought it was so unfortunate, Mr. Cox, when 
I learned about the Meta policy directing content moderators, 
and I quoted this earlier, but it is to ``err on the side of 
the person involved in sexual exploitation being an adult'' 
when they are unsure about the age of the person. Let me give 
you a chance to respond to that. This has been in the public 
media. It does not mean that it is true, I suppose, but is that 
truly what you directed your content moderators to do?
    Mr. Cox. Senator, I know that, first of all, this is an 
incredibly serious issue and I appreciate your work on this 
issue. As the father of two kids, this is something I 
personally care about, making sure that we pay attention to as 
well.
    The work that we do here is in consultation with NCMEC. We 
have been the most aggressive of the tech companies there. We 
have referred more content to them, I believe, than all the 
other tech platforms combined. That is both through the work we 
do on WhatsApp and Messenger as well as across the family of 
apps.
    My understanding on this specific question is that we 
received direction from NCMEC to prioritize known child sexual 
abuse material (CSAM) content, which was the 
nudge that they gave us and where they wanted us to focus our 
time. I have not been focused on that specific conversation and 
I would be happy to have the team follow up.
    Senator Portman. Yes. Let me make sure I understand this. 
You are blaming the National Center for Missing and Exploited 
Children for changing your approach of moderators saying that 
we are going to assume that kids are adults if we do not know? 
NCMEC has said you have a responsibility, all of you do, to 
report all images that appear to involve a child so that law 
enforcement can intervene to stop the abuse and prosecute the 
perpetrators, period. I cannot believe that you are saying that 
NCMEC would want you guys to send out instructions to your 
moderators saying err on the side of this being an adult if you 
are not sure.
    Did I misunderstand what you said?
    Mr. Cox. Senator, I have not been in that specific 
conversation with NCMEC, but I would be happy to follow up on 
the details. I agree it is a very important issue.
    Senator Portman. Given your role, would you commit to, one, 
getting back to me on it, and two, ensuring that if that is 
true that you change that policy?
    Mr. Cox. Senator, I could commit to getting into the 
details of the policy and make sure we follow up with the team 
to work on it.
    Senator Portman. OK. You are the Chief Product Officer. I 
would hope that this is one that you would follow up on and 
ensure it is not the direction you are giving your moderators, 
because that is what has been publicly reported.
    With TikTok, we talked about this earlier, again in the 
opening statement, nearly half of American kids use TikTok, as 
you know. That is your audience. There are a lot of risks there 
to privacy and national security, in my view. Ms. Pappas, I 
understand that TikTok is subject to the laws of the United 
States but it is also subject to the laws of other countries in 
which it operates--United Kingdom, Germany.
    But with regard to China, is it true, yes or no, does 
TikTok have an office and employees in Beijing?
    Ms. Pappas. I think this is another one for clarification--
--
    Senator Portman. Just yes or no.
    Ms. Pappas [continuing]. Of which TikTok does not operate 
in China. You are right in saying that TikTok is subject to the 
laws in the United States, as we are incorporated in the United 
States and California.
    Senator Portman. Do you have employees in Beijing?
    Ms. Pappas. Yes, we do, as do many global tech companies, 
including those----
    Senator Portman. I was asking you, do you have an office in 
Beijing?
    Ms. Pappas. Yes.
    Senator Portman. OK. Is your parent company ByteDance 
headquartered in China?
    Ms. Pappas. No, they are not.
    Senator Portman. ByteDance is not headquartered in China?
    Ms. Pappas. No. ByteDance was founded in China but we do not 
have an official headquarters. It is a global company.
    Senator Portman. Where is the headquarters of ByteDance?
    Ms. Pappas. We are a distributed company. We have offices 
around the world. Our leadership team is largely in Singapore, 
but we do not have an official headquarters.
    Senator Portman. You have to be headquartered somewhere, 
and I think it is in the Cayman Islands. Is that correct?
    Ms. Pappas. The parent company was incorporated in the 
Cayman Islands. That is correct.
    Senator Portman. OK. You are headquartered somewhere, and 
it is the Cayman Islands, but you have a presence in China, and 
of course, you comply with Chinese law with regard to your 
presence in China. Correct?
    Ms. Pappas. That is not correct. Again, TikTok does not 
operate in China. The app is not available. As it relates to 
our compliance with law, given we are incorporated in the 
United States we comply with local law.
    Senator Portman. Yes. Do you believe that the Chinese 
Communist Party has the right to access data collected by your 
company because you have a presence in China?
    Ms. Pappas. Sorry, again, Senator Portman. TikTok, the app, 
is not available in China.
    Senator Portman. No. You said you have an office in Beijing 
and you have employees in Beijing. That is a presence.
    Ms. Pappas. Yes, so as we have said on the record, we do 
have employees based in China. We also have very strict access 
controls around the type of data that they can access and where 
that data is stored, which is here in the United States. We 
have also said under no circumstances would we give that data 
to China.
    Senator Portman. Yes. I am glad that you say that. It does 
not seem to square with what we know about the Chinese national 
security law, but I appreciate that approach. The U.S. military 
banned its own servicemembers from using TikTok for this 
reason, as you know, and last month the House of 
Representatives warned lawmakers of the risk of using TikTok. 
Members of Congress were told not to use it. Our military was 
told not to use it out of concern for users' privacy and 
national security.
    Do you think those decisions were wrong?
    Ms. Pappas. I would not opine on the needs for an 
entertainment platform on Federal devices, but I would say that 
TikTok is an entertainment platform first and foremost, and 
this is part of the joy that we bring to millions of people 
around the world. We are very much committed to the security of 
our U.S. users and citizens, which is why we are investing so 
heavily in this area.
    Senator Portman. According to leaked audio obtained by 
BuzzFeed News, which I am sure you saw, there are TikTok and 
ByteDance employees in China who can gain access to U.S. user 
data, so this Committee will now be looking into the assurance 
of what you said, that TikTok would not give U.S. data to 
China. Do you have any response to the BuzzFeed News story?
    Ms. Pappas. Yes. Those allegations were unfounded. There 
was talk of a master account, which does not exist at our 
company, period.
    Senator Portman. Yes. Will TikTok commit to cutting off all 
data and metadata flows to China, Chinese-based TikTok 
employees, ByteDance employees, or any other party located in 
China that might have the capability to access information on 
U.S. users?
    Ms. Pappas. Again, we take this incredibly seriously in 
terms of upholding the trust with U.S. citizens and ensuring 
the safety of U.S. user data. As it relates to access and 
controls, we are going to be going above and beyond in leading 
initiative efforts with our partner, Oracle, and also to the 
satisfaction of the U.S. Government through our work with the 
Committee on Foreign Investment in the United States (CFIUS), 
which we do hope to share more information on.
    Senator Portman. Can you make the commitment, though, that 
I just asked you to make, that you will commit to cutting off 
all data and metadata flows to China, Chinese-based TikTok 
employees, ByteDance employees, or any other party located in 
China?
    Ms. Pappas. What I can commit to is that our final 
agreement with the U.S. Government will satisfy all national 
security concerns, yes.
    Senator Portman. But you will not make a commitment to 
agree to what I have now twice asked you about?
    Ms. Pappas. Sorry. Given the confidentiality of CFIUS I am 
not able to talk specifically about that agreement.
    Senator Portman. Forget CFIUS. I am not talking about 
CFIUS.
    Ms. Pappas. I am happy to share more----
    Senator Portman. I am asking whether you would make the 
commitment today. Will you make that commitment?
    Ms. Pappas. I am committing to what I have stated, which is 
we are working with the United States government on a 
resolution through the CFIUS process in which we will continue 
to minimize that data, as well as working with Oracle to 
protect that data in the United States.
    Senator Portman. This is part of the United States 
government too. This is our oversight function, and----
    Ms. Pappas. I appreciate that.
    Senator Portman [continuing]. I am concerned that you are 
not able to answer the question except to say that you will not 
make the commitment to cutting off this data to China. We think 
that all data collected relating to Americans and then accessed 
in China is a problem. We think it should be safe from 
exploitation by the Chinese Communist Party. If the data is 
accessible in China, as you have testified, then it could be 
exploited. That concerns us.
    I have gone over my time. I apologize, Mr. Chairman, but I 
thought it was important to get the answers.
    Chairman Peters. Thank you, Ranking Member Portman.
    Senator Carper, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR CARPER

    Senator Carper. Thanks very much. A warm welcome to our 
witnesses. Mr. Mohan, thank you very much for spending time 
with my staff and me earlier this week. We appreciated that.
    I am a former Governor of Delaware, and the last Vietnam 
veteran serving in the U.S. Senate. Not everybody knows this 
but the first State to ratify the Constitution was Delaware. 
Before anybody else did, we did, and for one whole week 
Delaware was the entire United States of America. Those times 
were a little less complex than they are today.
    But throughout the many years I have now lived in Delaware 
I have thought a lot about our democracy, the formation of our 
country and the formation of our government, and how we have 
rolled with the punches over many years. I never imagined how 
fragile our democracy could really be.
    Right after the Founding Fathers--they used to work on the 
Constitution up in Philadelphia--Ben Franklin was leaving 
Independence Hall, as I recall, and he was asked by a passer-
by, ``What have you wrought?'' He responded, ``A republic, if 
we can keep it.''
    Churchill had his own sense of it. He described democracy 
as ``the worst form of government devised by the wit of man 
except for all the rest.'' It is certainly a hard way to 
govern, and we live 
it and feel it every day, and see it every day.
    Jefferson, who wrote the Declaration of Independence, 
largely, as you know, also wrote these words, and you already 
said this, ``If people know the truth they will not make a 
mistake.''
    I think one of our challenges today, as we try hard to 
preserve our democracy, is people do not really know what the 
truth is, and they are not sure. I am not sure how to put that 
genie back in the bottle, but we need to certainly try.
    I have a question if I could. I would like to start off 
with Mr. Cox. Mr. Cox, again we thank you for joining us today. 
In your testimony you mentioned that Meta has identified more 
than, I think, 1,000 militarized social movements and 270 white 
supremacist organizations on your platform, and removed some 
2.3 million pieces of content from Facebook that are tied to 
organized hate. These statistics, which are deeply disturbing, 
are only from the second quarter of this year, 2022.
    While I am glad that you all are tracking and removing 
harmful content, these statistics are indicative of a troubling 
trend of bad actors using social media to organize and mobilize 
their followers.
    To that end, Mr. Cox, what has Meta done to address the 
larger threat of these various groups or organizations and the 
content that they share? What more can and should be done?
    Mr. Cox. Thank you, Senator. As I mentioned, we have 
community standards which outline that there is no place for 
terrorism, for militarized social movements, for violence-
inducing conspiracy networks across our family of apps. We have 
350 experts, folks who work with law enforcement to identify 
terrorist organizations, to identify violence-inducing 
conspiracy networks (VICNs), to make sure that we have up-to-
date information that we share with law enforcement in order to 
understand, on a real-time basis, which of these networks to 
prioritize and pay attention to.
    We publish our results quarterly. We publish a transparency 
report that outlines the prevalence of various categories of 
bad content. In case it is useful, around 2 in 10,000 pieces of 
content, 0.02 percent, is the prevalence of hate speech on the 
platform as of the most recent report, from this last quarter. 
That is down 50 percent from 18 months ago. In each quarter 
that we have released the report over the last several years, 
we have been able to improve on prevalence, which is a sign 
that our AI 
systems, our processes, our human systems, et cetera, are 
improving. We believe that is the most important thing, in 
addition to having a transparent report on how we are actually 
doing on these numbers, so that outside experts, so that law 
enforcement, et cetera, can evaluate along with us.
    Senator Carper. Thank you for that. I like to say, if it is 
not perfect, make it better, so keep working on it.
    A question, if I could, for Mr. Sullivan, on misinformation 
on Twitter. You stated, I believe, that Twitter makes it clear 
in its guidelines that the promotion of disinformation is 
against your platform's policies. We have seen numerous 
examples, instances if you will, of users sharing this 
disinformation during the coronavirus pandemic, previous U.S. 
elections, and the January 6th insurrection right here at this 
Capitol, just to name a few.
    Could you take a moment please and explain for us what 
policy changes Twitter has made in light of the rapid spread of 
false information and how it has been effective?
    Mr. Sullivan. Yes. Thank you for the question, Senator, and 
I appreciate the historical context. It adds gravity to what we 
are talking about today.
    Senator Carper. Harry Truman once said, ``The only thing 
new in the world is the history we forgot or never learned.'' 
It is pretty good. It is timely.
    Mr. Sullivan. These are serious matters. We have policies 
against COVID misinformation that have been evolving as we 
learn from what is happening in the world, and the same for the 
spread of misinformation. Our election work has been ongoing, 
but most recently we have been beefing up all of our policies 
against the spread of misinformation.
    Then we have tried to be more proactive as well. For 
example, with what we call interstitials and prompts that give 
positive, valid, truthful information about things like voting, 
where you can vote, when election ballots will be counted, and 
things like that, so that we not only debunk false information 
but people can receive vetted information in a way that 
feels more authoritative, so they know what is real and what is 
not real.
    There is always more to do, but our policies are always 
evolving, and our software is always evolving to catch these 
things earlier so that people do not have to report them. We 
want to catch them before a person needs to even see it. We 
are also adding these prompts and other user interface 
elements to prevent the spread of harmful information. For 
example, not 
being able to retweet something or tweet something that is up 
against that line.
    We are continuing to evolve the product every day. We 
acknowledge these are critical, societal issues, as you have 
said. Thank you.
    Senator Carper. If it is not perfect, make it better.
    My time has expired. I just want to say, Mr. Chairman and 
Ranking Member, this is a valuable, indeed invaluable, 
hearing, a timely hearing, and I applaud you for holding it 
and express our thanks to our witnesses for being here. Thank 
you.
    Chairman Peters. Thank you, Senator Carper.
    Senator Johnson, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR JOHNSON

    Senator Johnson. Thank you, Mr. Chairman. Again, I think 
based on the morning panel and this panel as well we are 
talking about a problem that is my true definition of a 
problem, one that does not have easy solutions.
    Again, what is harmful content is all in the eye of the 
beholder. We all abhor violence. I certainly condemn white 
supremacists. I condemn immediately, forcefully, repeatedly, 
the violence on January 6th.
    But I am concerned about the bias of your platforms. The 
earlier panel would not give me any indication whatsoever, the 
percentage of liberals versus conservative in your 
organizations. I do not expect you will be any more forthright 
in that.
    But let me ask you this question. I know the Chairman likes 
to talk about the white supremacists and January 6th. Democrats 
love talking about that. What about the 570-plus riots that 
occurred in the summer of 2020, 2,000 law enforcement officers 
injured, between $1 and $2 billion worth of property damage, a 
couple 
of people killed in Kenosha, Wisconsin, dozens of buildings 
burned, a couple dozen people also lost their lives during 
those riots.
    Mr. Cox, have you de-platformed, have you throttled back or 
censored anybody that was involved in the organization of the 
summer riots?
    Mr. Cox. Senator, these were----
    Senator Johnson. Give me a pretty quick answer. I have a 
lot of territory to cover.
    Mr. Cox. Senator, domestic terrorism and extremism and 
calls for violence are against our community----
    Senator Johnson. Did you censor anybody that organized the 
summer riots, 570 of them?
    Mr. Cox. Senator, I can look at the specifics----
    Senator Johnson. Mr. Mohan, did YouTube throttle back 
or censor any of the rioters in the summer of 2020?
    Mr. Mohan. Senator, our policies apply equally regardless 
of who the----
    Senator Johnson. Did you throttle back? Did you censor some 
of the organizations that were responsible for the summer 
riots?
    Mr. Mohan. We would have applied our policies equally, 
regardless of where the riots were, if it was a violation of--
--
    Senator Johnson. Can you provide me with the names of 
people you throttled back that were responsible for the summer 
riots?
    Mr. Mohan. Senator, I am happy to follow up on that.
    Senator Johnson. Good. I appreciate that.
    Mr. Sullivan, what about Twitter?
    Mr. Sullivan. Twitter would have removed any incitement to 
violence that was on our platform, regardless of----
    Senator Johnson. Did you throttle back on organizers of the 
summer riots?
    Mr. Sullivan. For any case where we see incitement to 
violence we would remove that content and take action.
    Senator Johnson. Your CEO, Mr. Dorsey, was before the 
Commerce Committee, I think it was in October 2020, and both 
Senator Cruz and I asked him whether your platform, Twitter, 
could impact our elections, and he denied it. Our three 
witnesses earlier this morning completely disagreed with Mr. 
Dorsey. They said absolutely Twitter and these platforms can.
    Do you believe that Twitter can influence, in fact, our 
elections?
    Mr. Sullivan. I think Twitter plays an important role in 
the public conversation.
    Senator Johnson. It is really kind of a yes-or-no answer. 
Can they impact our elections?
    Mr. Sullivan. We are taking the----
    Senator Johnson. Really, yes or no. I have a lot to cover.
    Mr. Sullivan. As I described to the previous question, we 
have put in place many actions and mitigations relating to 
elections.
    Senator Johnson. Can you impact the elections?
    Mr. Sullivan. People try to use our platform to get 
messages out regarding elections, and we are doing our best 
to----
    Senator Johnson. This morning I simply talked about the 
fact that you censored the New York Post article about Hunter 
Biden. We have polls that said that had the American public 
known, we would not be in the ditch that we are in right now.
    Mr. Mohan, do you believe YouTube can impact the elections?
    Mr. Mohan. Senator, there is an open public debate that 
happens on YouTube every day, whether it is----
    Senator Johnson. It is really a pretty simple yes-or-no 
answer.
    Mr. Mohan. Senator, YouTube is an open platform where there 
is debate----
    Senator Johnson. Mr. Cox, do you believe that Facebook can 
impact the elections?
    Mr. Cox. Senator, I think the public discussions that 
happen on our platform are a part of the public discourse.
    Senator Johnson. You do not think by censoring material 
that specifically your management can impact the elections? Not 
what is occurring on your platform, I mean, but your 
management, your decisions, your censoring of information. Can 
you impact the elections?
    Mr. Cox. Senator, respectfully, with respect to the New 
York Post story you reference we did not----
    Senator Johnson. I know you throttled it back. But, again, 
I am not asking about that specifically. Do you have the power? 
Our earlier witness, Mr. Roetter, said, ``A small group of 
people run these companies and have substantial power over 
shaping the reality for billions of people.'' I mean, can you 
just be honest?
    By the way, our earlier panel basically, to paraphrase, 
said do not believe a word you guys are going to tell us. But 
can you at least be honest with the American public and say, 
yes, you had that power. You can impact the elections. Can you 
be honest with them or are you going to sit there and say, 
``People talk about things on our platforms.''
    Mr. Cox. Senator, we do think transparency about our 
decisionmaking around content, not just terrorism but also 
misinformation, is important. We take all of these areas of 
content seriously and we publish our work. We do think the 
public deserves to know what our policies are and how we 
enforce them.
    Senator Johnson. We are going to have another round, it 
sounds like. I have a lot more ground to cover. But let me just 
start, and I think Senator Hawley did a great job of talking 
about how Federal health agencies were in direct communication 
with, in particular, I think it was Facebook, possibly Twitter. 
We have also heard that, of course, Mr. Zuckerberg, or 
Facebook, was contacted by the FBI as it relates to Russian 
disinformation, and we covered that in the morning.
    When it comes to how miserably we failed handling COVID, I 
think one of the problems was the lack of robust information 
using this marvelous device we call the internet. I mean, 
doctors could have been testing out different theories of the 
case and sharing that experience, but they were shut down. They 
were censored.
    I want to ask each one of you. I am 67 years old. As long 
as I have been alive I have always been told if you have a 
serious medical condition you really ought to seek a second 
opinion because nobody has perfect information. I wish that 
there would have been some modesty exhibited by our Federal 
health authorities and, quite honestly, your platforms, in 
acknowledging the fact that we do not have perfect information; 
maybe we ought to let some information flourish. I mean people 
were censored, eminently qualified doctors who had the courage 
and compassion to treat COVID patients with cheap, generic, 
widely available drugs.
    I just want to ask the question, Mr. Cox, do you believe 
that people ought to get a second opinion when it comes to 
complex medical conditions?
    Mr. Cox. Senator, we absolutely believe that building a 
product where people have the ability to express their point of 
view is critical to what we do. It is critical to what people 
expect from the product.
    Senator Johnson [continuing]. Can you just answer the 
question?
    Mr. Mohan, do you believe you ought to get a second opinion 
when it comes to complex medical conditions?
    Mr. Mohan. Senator, as we all recall, when the pandemic 
started it was an unprecedented event in history where science 
was being created----
    Senator Johnson. Maybe you ought to seek a second opinion. 
I mean, do you think it is a good idea to get a second opinion, 
or do you only go to one authority and put all your faith in 
one authority? No other opinion is going to be valid. Is that 
your belief?
    Mr. Mohan. Senator, we worked with a wide variety of health 
authorities in this country and all over the world.
    Senator Johnson. OK. Mr. Sullivan, do you believe you ought 
to go get a second opinion? If you get diagnosed with cancer 
today are you going to rely on just one authority?
    Mr. Sullivan. As a patient that sounds like common sense. 
Our COVID information----
    Senator Johnson. Are we not 330 million patients here?
    But we were not allowed a second opinion, were we?
    We were not allowed by your platforms to have that second 
opinion, and I think hundreds of thousands of people lost their 
lives because you did not allow a second opinion to be 
published on your platforms.
    Mr. Sullivan. Our COVID misinformation policy only----
    Senator Johnson. It was highly flawed, and I will point 
that out in the second round of questions. Thank you.
    Mr. Sullivan [continuing]. It only looked at information 
that was demonstrably and widely believed to be untrue.
    Senator Johnson. I need a second round of questions and I 
can point that out, that it was not demonstrably false.
    Chairman Peters. There will be a second round, Senator 
Johnson.
    Senator Sinema, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR SINEMA

    Senator Sinema. Thank you, Mr. Chairman, and thank you to 
our witnesses for joining us today.
    Every day cartels post on social media platforms and 
recruit teenagers in Arizona to act as drivers for illegal 
operations. Lured by the promise of easy cash, these teens, 
some as young as 14 years old, take their parents' cars to the 
border and participate in smuggling and trafficking. Innocent 
bystanders and migrants have even died while these teens, 
recruited by cartels on social media, flee law enforcement at 
high speeds.
    The Department of Homeland Security (DHS) must do more to 
crack down on dangerous cartels' use of social media, secure 
the border, and keep Arizona families safe.
    My first question is for Ms. Pappas. According to my 
office's conversations with Border Patrol leadership in 
Arizona, TikTok is the platform that cartels use most 
frequently to recruit Arizona teens. What steps does TikTok 
currently take to ensure its algorithms do not promote cartel-
sponsored content, and can you tell me why those efforts have 
not been more effective?
    Ms. Pappas. Certainly, Senator. It starts for us with our 
policies. Obviously that type of content, any illicit activity, 
organized crime, including drug cartels, is strictly prohibited 
from our platform.
    In that regard we work with our trust and safety moderation 
teams to ensure that we are detecting that content through our 
technologies and also through human moderation, to remove that 
content when found.
    Furthermore, as a platform, we do not have the same product 
features that other platforms do in terms of being able to have 
that type of organized behavior. For example, we do not allow 
links through direct messages or images to be sent through 
direct messages. We also do not have group chat available.
    Those types of behavior that help organized crime are 
limited on TikTok's platform. Any of the content that gets 
posted on TikTok has to go through our content moderation. 
Obviously, our work is never done there, but we are constantly 
working to identify that content at scale and remove it when 
found, and all of those numbers are also available through our 
transparency reports.
    Senator Sinema. A follow-up question then. If you are doing 
that content moderation and reviewing each of those posts, how 
is it that there are so many efforts that are successful on 
TikTok to recruit young teens to assist cartels?
    Ms. Pappas. We are striving to get that number to zero. 
Obviously, this is a challenging area for everybody in the 
industry in terms of being able to moderate our platforms, but 
this is something that we heavily invest in from a technology 
perspective as well as a people perspective. I am happy to look 
into any of those cases. But I do know that when reports have 
been sent, that content is immediately taken down.
    Senator Sinema. Thank you. Mr. Cox, my next question is for 
you. What is Meta doing, both on Facebook and on Instagram, to 
prevent cartels from using your platforms to recruit teens 
along our Southwestern Border?
    Mr. Cox. Thank you, Senator. This is an important issue. It 
is sad. I really appreciate your leadership on this issue. We 
prohibit human trafficking. We prohibit these cartels. We work 
with law enforcement to identify the names of the cartels, and 
then we train our systems to help us find instances of them 
across our platforms and take them down right away.
    Senator Sinema. My next question is for Ms. Pappas, Mr. 
Cox, Mr. Mohan, and Mr. Sullivan, the whole panel. As Chair of 
the Border Subcommittee in this Committee I believe it is 
critical that each of your platforms work with the Department 
of Homeland Security to identify cartel content and prevent 
Arizona teens from being targeted for recruitment. I would ask 
you to answer yes or no. When you discover that cartels are 
using your platforms to recruit, are you willing to commit to 
sharing that information with the Department of Homeland 
Security as quickly as possible?
    Mr. Sullivan.
    Mr. Sullivan. Yes, with the appropriate privacy and 
oversight I believe we could do that.
    Ms. Pappas. Yes. Similarly, following legal process and 
privacy policy.
    Mr. Mohan. Yes, Senator. We would cooperate as long as 
there is a due legal process with the DHS and other law 
enforcement as well.
    Mr. Cox. Similarly, we would commit to that provided 
privacy and legal concerns were addressed.
    Senator Sinema. Thank you. Back to you, Ms. Pappas. Today's 
hearing is about product development, and in the case of TikTok 
there is no product more important than the ``For You'' 
algorithm that offers content recommendations to users. There is 
a real risk that TikTok could alter its algorithm to promote or 
censor content on Beijing's behalf, whether that means 
silencing voices that are critical of China or promoting 
conspiracies or extremist content.
    Has TikTok ever altered its algorithm or promoted or 
downranked content based on the actual or perceived wishes of 
the Chinese government?
    Ms. Pappas. No.
    Senator Sinema. In your privacy policy it says that TikTok, 
``may collect biometric identifiers such as faceprints and 
voiceprints.'' Has the biometric data of an American ever been 
accessed by or provided to any person located in China, and if 
not, is biometric data able to be accessed by anyone in China?
    Ms. Pappas. Let me clarify, because biometrics is a topic 
that is hard to define and everybody has their own definition 
of what biometrics means. I will be clear in how TikTok sees 
this.
    We do not use any sort of facial, voice, or body 
recognition that would identify an individual. There is no way 
that we would be able to do so. The way that we use facial 
recognition, for example, would be if we are putting an effect 
on the creator's video. If you are uploading a video and you 
want to put sunglasses or dog ears on it, that is when we do 
facial recognition. All of that information is stored only on 
your device, and as soon as that filter is applied and the 
video is posted, that data is deleted, so we do not have that 
data.
    Senator Sinema. You are assured that during the time 
between the use of the faceprint or voiceprint and its 
deletion, there is no ability for anyone other than that 
device to access or capture that information?
    Ms. Pappas. That is my understanding, yes. I know it is a 
technical area, so to the best of my knowledge the data is 
stored on the devices and deleted immediately once you post 
your video.
    Senator Sinema. I would like a follow-up. Neither of us is 
an expert on this technological issue, but I would like to get 
some follow-up from those who are.
    Ms. Pappas. Happy to, yes.
    Senator Sinema. Thank you. Mr. Chair, I see that my time 
has expired. I have a few more questions that I will submit for 
the record. Thank you.
    Chairman Peters. Thank you, Senator Sinema.
    Senator Padilla, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR PADILLA

    Senator Padilla. Thank you, Mr. Chair, and I want to thank 
you for holding this important hearing today. The companies 
testifying today offer users an unprecedented ability to 
access, consume, and distribute information. Mr. Chair, you 
are right to focus our attention on how corporate product 
design and investment choices influence the content that is 
produced and distributed.
    My first question for a couple of you is relative to 
content moderation, and we have been talking about that 
throughout the hearing here. Last year, Frances Haugen 
disclosed that at Facebook 87 percent of all spending combating 
misinformation on Facebook was spent on English language 
content, despite the fact that only nine percent of Facebook's 
users are English speakers. She also disclosed that trust and 
safety investments for users in countries other than the United 
States were abysmal.
    An audit of Twitter's disinformation and misinformation 
work, disclosed by Peiter Zatko, who testified just yesterday 
before the Senate Judiciary Committee, found that Twitter's 
integrity team lacked language expertise in the countries it 
was serving, even though 80 percent of Twitter users are 
outside the United States.
    Your companies make commitments to all of your users who 
are not just linguistically diverse but culturally diverse as 
well.
    A question first for Mr. Cox. In your testimony you state 
that you have over 40,000 people working on trust and safety 
issues. How many of those people focus on non-English language 
content and how many of them focus on non-U.S. users?
    Mr. Cox. Sure, Senator. I am happy to take your question. 
Our safety and security teams are deployed to help our users 
all around the world. Specifically on the question of 
misinformation, which you mentioned, we have 80 fact-checkers 
operating in 60 countries around the world. Those fact-checkers 
are certified by independent fact-checking organizations.
    In the United States we have 11 fact-checkers, six of whom 
support Spanish language content. We also have partnerships 
with Univision and Telemundo to connect people with Spanish 
language authentic information around elections. We also offer 
an election voting center in Spanish language to Americans to 
help folks get authoritative information about where to vote, 
that is tailored to their specific ZIP code.
    Senator Padilla. I appreciate the information you are 
sharing. Some of it has been in your testimony. I welcome more. 
More is better. Do you have any idea of the breakdown of the 
40,000 people I referenced? First of all, is that number 
roughly accurate? If it is significantly higher, let me know. 
If it is significantly lower, let us know. What I am looking at 
is the ratio of English versus non-English. Do you have that 
data?
    Mr. Cox. Senator, I would be happy to follow up on the 
specifics of how those 40,000 folks are broken down.
    Senator Padilla. Thank you. I would greatly appreciate 
that.
    Mr. Sullivan, how many members of your trust and safety 
team have non-English language expertise and focus on issues 
outside the United States?
    Mr. Sullivan. Yes, thank you for the question, Senator. As 
a global company this is important to us. We have about 2,200 
people working on content moderation globally. I do not have 
the exact breakdown but we can take that back to our team.
    Senator Padilla. Please. Ms. Pappas, how large is your 
trust and safety team and how much does TikTok invest in your 
non-English users, and I guess non-Western users?
    Ms. Pappas. I do not have those numbers at hand but I am 
happy to get back to you on those as well.
    Senator Padilla. OK. Mr. Mohan.
    Mr. Mohan. Senator, we have over 20,000 people that work on 
content moderation all over the world. We are a global 
platform, as you know, supporting a couple billion users all 
over the world, and we endeavor to enforce our policies as well 
as make sure that our recommendation algorithms work equally 
well for all speakers, all over the world. We support dozens 
of languages on our platform in all the countries in which we 
operate.
    To give you a couple of more concrete examples, here in the 
United States our support across all those Four R's I described 
in my initial testimony is not just about English but other 
languages as well. For example, in Spanish our policies are 
enforced. We serve up information panels not just in English 
but in Spanish. Those relate to topics like elections, how to 
vote, where to vote, et cetera, and COVID-related information, 
because families in this country are looking for that content 
not just in English but, we recognize, in a number of 
different languages, including Spanish.
    Senator Padilla. Thank you. I would appreciate more detail 
and data from all of you.
    Speaking of data, as some of you may or may not know, my 
background is in engineering, so I am a big believer in data-
informed and data-driven policymaking. In reviewing your 
testimony--and I appreciate some of the data that you did 
provide, especially around dangerous content found on your 
platforms--where it is incomplete or there is a desire for 
additional data, let me just jump into a couple more questions.
    Mr. Cox, in your testimony you say that Meta found and 
removed 95 percent of hate speech content before it was ever 
reported. Of the remaining five percent, how many users were 
recommended the content in their news feed? Do you have data 
along those lines?
    Mr. Cox. Senator, I can offer data on prevalence, which 
would be the amount of content that appears across the averages 
of the content on our platform. For hate speech the prevalence 
in our last report is 0.02 percent, or two out of every 10,000 
pieces of content.
    Senator Padilla. But you see where I am going, right? 
Ninety-five percent is a good number. The five percent that you 
did not catch before it was reported, if those were 
recommended 1, 2, 3, 5 times, that is one thing. If it is 
recommended tens of thousands of times or more, that is a 
different dynamic. That is 
what we are trying to get at. If you do not have the data at 
your fingertips, a follow-up would be welcome.
    Mr. Cox. Senator, we would be happy to follow up on that.
    Senator Padilla. Great. Ms. Pappas, in your testimony you 
say that 88.4 percent of removals under TikTok's violent 
extremism policy occurred within 24 hours of being posted. 
Again, a good number but it is not 100. For the other 11.6 
percent, do we have a gauge of how long it took to find and 
resolve those items?
    Ms. Pappas. No. I would have to get back to you on that, 
but similarly we look at the prevalence of content that would 
be violative, and for violent extremism it is 0.01 percent.
    Senator Padilla. OK. Thank you.
    Mr. Chairman, my time is up. Similar to Senator Sinema I 
will have some additional questions I will submit for the 
record.
    Chairman Peters. Very good. Thank you, Senator Padilla.
    Senator Hawley, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR HAWLEY

    Senator Hawley. Thank you very much, Mr. Chairman. Thanks 
to all the witnesses for being here. Ms. Pappas, let me start 
with you.
    I have to say it is great to see you here today. I have 
repeatedly invited your company to testify before Congress. I 
invited them to testify to the Judiciary Subcommittee on Crime 
and Terrorism in November 2019. I invited them to testify again 
in September of the following year. Both times we were stiffed. 
TikTok told me that they would set up a meeting with the CEO. 
They did not want to testify in public but they set up a 
meeting with the CEO after November 2019. They then canceled 
that meeting.
    It is nice to see TikTok be willing to answer questions in 
public. It is a pleasant change. Let us dig into a few things, 
if we could, specifically about TikTok's links to the Chinese 
Communist Party.
    In response to a letter from some of my colleagues, TikTok 
claimed earlier this year that the company has never shared 
data with the Chinese government. Is that correct?
    Ms. Pappas. That is correct, yes.
    Senator Hawley. And has never shared data with the Chinese 
Communist Party. Is that correct?
    Ms. Pappas. We will never share data, period.
    Senator Hawley. My question was in the past tense. Has 
TikTok ever shared data with the Chinese Communist Party?
    Ms. Pappas. We have never shared data with the Chinese 
government. Correct.
    Senator Hawley. With the Chinese Communist Party.
    Ms. Pappas. Yes, correct.
    Senator Hawley. Have you ever shared it with members, to 
members of the Chinese Communist Party?
    Ms. Pappas. We have said many times, Senator, that we do 
have Chinese engineers based in China. I do not think there 
is any platform up here that would be able to speak to what you 
are talking about as it relates to the political affiliation of 
an individual. But I am happy to assure you that we are 
ensuring the access controls around our data as well as the 
storage of that data in the United States.
    Senator Hawley. I think you are telling me that there are 
TikTok employees or ByteDance employees who are members of the 
Chinese Communist Party. Is that what you are saying?
    Ms. Pappas. No. I am saying I would not be able to verify 
that.
    Senator Hawley. Let me ask you affirmatively. Are there 
TikTok employees or ByteDance employees who are members of the 
Chinese Communist Party?
    Ms. Pappas. Senator, I am saying that nobody that is 
sitting on this panel could tell you a political affiliation--
--
    Senator Hawley. I am not interested in anybody's opinion. I 
am asking you a factual question. Are there members of the 
Chinese Communist Party employed by TikTok and ByteDance? Yes 
or no.
    Ms. Pappas. I would not be able to tell you the political 
affiliation----
    Senator Hawley. You do not know?
    Ms. Pappas [continuing]. Of any individual. What I can tell 
you is how much we are investing in the----
    Senator Hawley. No. Membership in the Chinese Communist 
Party is not exactly like membership in the Democratic Party. I 
am looking for an answer. You are telling me you do not know? 
TikTok does not know.
    Ms. Pappas. Here is what I can tell you. I can tell you 
that in our United States and Singapore leadership, there are 
no CCP members.
    Senator Hawley. You do know that. But you are telling me 
that you do not know if there are any members who are employed 
by TikTok or ByteDance, members of the Chinese Communist Party?
    Ms. Pappas. Senator, I am happy to share that we are 
putting access controls----
    Senator Hawley. That is not my question.
    Ms. Pappas [continuing]. As well as----
    Senator Hawley. That is not my question. My question is are 
there any TikTok employees or ByteDance employees, members of 
the Chinese Communist Party? Yes or no.
    Ms. Pappas. Senator, I am saying nobody could sit up here 
and give you that.
    Senator Hawley. You are saying you do not know? But you do 
know your leadership is not but you do not know about your 
employees. Is that your testimony?
    Ms. Pappas. I know that everyone who makes a strategic 
decision at this platform----
    Senator Hawley. Yes.
    Ms. Pappas [continuing]. Is not a member of the CCP.
    Senator Hawley. A strategic decision. OK. It is 
interesting. It is interesting to me that you are quite 
confident that anyone who could make a strategic decision--how 
many people is that?
    Ms. Pappas. It is our leadership team.
    Senator Hawley. The number?
    Ms. Pappas. Again, the leadership team is based in the 
United States and Singapore. Our CEO is based in Singapore. He 
is not Chinese. I am happy to go into the efforts that we----
    Senator Hawley. Would it surprise you to learn that Forbes 
magazine recently reported that at least 300 current TikTok or 
ByteDance employees were members of Chinese State media and 
affiliated with the Chinese Communist Party?
    Ms. Pappas. Again, we do not look at the political 
affiliations of, and cannot speak to, individuals, but what I 
can tell you is that we are protecting the data in the United 
States.
    Senator Hawley. But apparently, though, you do look at 
political affiliation because you are quite willing to sit here 
and tell me that no one who has strategic input or makes 
strategic decisions is a member of the Chinese Communist Party. 
You do know very well, as a matter of fact. You just do not 
want to answer my other question.
    Ms. Pappas. We have thousands of people that work at the 
company, so I am not going to vouch for the political 
affiliation of any particular individual. What I can vouch 
for----
    Senator Hawley. Have you seen the videos of Chinese 
Communist Party members conducting training for TikTok and 
ByteDance employees?
    Ms. Pappas. No.
    Senator Hawley. That is fake?
    Ms. Pappas. I do not know what you are referring to. But 
what I can tell you----
    Senator Hawley. Has that happened?
    Ms. Pappas [continuing]. Is any decision----
    Senator Hawley. Has that happened?
    Ms. Pappas. Sir, I just said that I would not be able to 
tell you. I have not seen it. I am not sure what you are 
referring to, but I am happy to follow up. But what I can tell 
you----
    Senator Hawley. Wait. I am sorry. Let us go back. Let us 
see if we can cut through the mumbo-jumbo. I am asking you if 
the Chinese Communist Party has conducted training sessions 
ever for employees of ByteDance or TikTok. Yes or no.
    Ms. Pappas. Not for TikTok. TikTok, the app, does not 
operate in China.
    Senator Hawley. You have employees in China and ByteDance 
has employees in China. Listen, we have been through this song 
and dance.
    Ms. Pappas. We have.
    Senator Hawley. Let us just skip that. I have heard it all 
before.
    Ms. Pappas. Senator, I appreciate----
    Senator Hawley. Answer my question. Yes or no. Have they 
conducted training for ByteDance employees or TikTok employees.
    Ms. Pappas. I can speak on behalf of TikTok, and the answer 
is no.
    Senator Hawley. No. That is interesting. Do any TikTok 
employees based in China have access to U.S. user data?
    Ms. Pappas. As we have publicly said, yes, we have 
engineers in China, and we are working on the access controls--
--
    Senator Hawley. None of them are members of the Chinese 
Communist Party?
    Ms. Pappas [continuing]. We are working on the access 
controls to minimize that data access----
    Senator Hawley. I have heard that, and frankly I do not 
believe it.
    Ms. Pappas [continuing]. Working with the United States and 
through the CFIUS----
    Senator Hawley. Wait. So your testimony is that you do have 
TikTok employees based in China who do have access to U.S. user 
data, but you are confident that none of them are members of 
the Chinese Communist Party and they never accessed it? Is that 
your testimony?
    Ms. Pappas. Anyone who has access to U.S. user data does 
so to perform daily duties, such as the performance of site 
management or bug handling. But we have strict controls in 
terms of who accesses our data and how.
    Senator Hawley. None of that is accessible to any member of 
the Chinese Communist Party. Is that your testimony?
    Ms. Pappas. We believe we have the strictest controls out 
there----
    Senator Hawley. That is not my question.
    Ms. Pappas [continuing]. Actually we are working with 
Oracle----
    Senator Hawley. My question is does anyone who has access 
to user data, are they members of the Chinese Communist Party?
    Ms. Pappas. I feel like I have answered your question.
    Senator Hawley. You have not, and I feel like you are 
avoiding it----
    Ms. Pappas. No.
    Senator Hawley [continuing]. At every opportunity.
    Let me give you another one, since you are on the record 
and under oath.
    Ms. Pappas. Can I be as clear as----
    Senator Hawley. I would welcome you being clear.
    Ms. Pappas. Thank you.
    Senator Hawley. Does any person who has access to U.S. user 
data, are they members of the Chinese Communist Party? Yes or 
no.
    Ms. Pappas. Let me be clear again.
    Senator Hawley. Yes or no.
    Ms. Pappas. For our U.S. users, the data is stored and 
housed in the United States. We have access controls in place--
--
    Senator Hawley. You are not answering my point. Let the 
record reflect you will not answer my question. Why not?
    Ms. Pappas. Any of that data is overseen by our U.S.-led 
security team.
    Senator Hawley. That is not my question.
    Ms. Pappas [continuing]. And monitored daily.
    Senator Hawley. That is not my question.
    Ms. Pappas. Furthermore----
    Senator Hawley. My question is does any employee who has 
access to U.S. user data, are they members of the Chinese 
Communist Party? You will not answer that.
    Ms. Pappas. Again, as a global technology platform there is 
no other company that could make that assertion either.
    Senator Hawley. That sounds like a yes to me. I think that 
is news.
    You are familiar, I know, with this BuzzFeed article that 
says that according to leaked audio from more than 80 internal 
TikTok meetings, China-based employees at ByteDance have 
repeatedly accessed non-public data about U.S. TikTok users. 
``Everything is seen in China,'' said a member of China's Trust 
and Safety Department in a September 2021 meeting. In another 
September meeting a director referred to one Beijing-based 
engineer as a ``Master Admin who has access to everything.'' 
These reports show data was accessed far more frequently and 
recently than previously reported. Your testimony is that this 
is false?
    Ms. Pappas. Correct.
    Senator Hawley. All of this is false.
    Ms. Pappas. That is correct. Everything that you just 
stated, there is no such thing as a Master Account.
    Senator Hawley. That is not what it says. It says that 
someone is referred to as ``Master Admin.''
    But you are telling me that China-based employees have 
never accessed non-public data of U.S. TikTok users.
    Ms. Pappas. No. I have already said on the record that we 
have Chinese employees who have accessed data.
    Senator Hawley. That is what this is saying. So you agree?
    Ms. Pappas. If you want to clarify on each individual 
statement. I am saying that there are strict access controls 
around the data that is accessed in the United States. That is 
overseen by our U.S.-led security team. We are working with 
Oracle.
    Senator Hawley. That is not what this article says.
    Ms. Pappas. We disagree with the characterization in that 
article, wholeheartedly.
    Senator Hawley. Here is the point. I know there are other 
Senators who want to ask questions. I think we are going to 
have a second round. The truth appears to be, besides the fact 
that we cannot get a straight answer on any of these questions, 
is that you have hundreds of employees with, it appears, access 
to U.S. user data, that may very well be members of the Chinese 
Communist Party. You have no way to assure me that they do not 
have access to our citizens' data. You will not answer my 
question in a straightforward way about whether a CCP member 
has ever gained access or not.
    I think, for my own point of view, that is a huge security 
problem.
    Ms. Pappas. Senator, if I may. We are one of the most 
highly scrutinized platforms. There have been many 
cybersecurity experts who have researched our platforms, 
including Citizen Lab, which is a leading academic research 
unit based at the University of Toronto, which has said, and I 
am happy to submit this for the record for the Committee, that, 
``Our research shows that there is no overt data transmission 
to the Chinese government by TikTok.''
    Senator Hawley. Overt.
    Ms. Pappas. ``TikTok's features and codes do not pose a 
threat to national security.''
    Senator Hawley. Wait a minute. Overt data transmission?
    Ms. Pappas. There are also----
    Senator Hawley. Ms. Pappas, this is not a hearing for you 
to testify at will. You are here to answer questions.
    Ms. Pappas. I am providing you with information.
    Senator Hawley. No, you are not. You are talking over me, 
and you are submitting information from--who knows who funds 
this entity, who knows who is behind it, who knows what it 
contains? I do not know.
    What I do know is you will not give me straight answers to 
my questions, and the reason, I think, is pretty clear, because 
your company has a lot to hide. You are a walking security 
nightmare. For every American who uses this app, I am 
concerned.
    Chairman Peters. Senator Hawley, thank you.
    Senator Ossoff, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR OSSOFF

    Senator Ossoff. Thank you, Mr. Chairman, and thank you to 
our witnesses today.
    Mr. Sullivan, in disclosures he has made publicly and to 
the Congress and in his testimony yesterday, former Twitter 
employee, Mr. Zatko, alleged that Twitter has made willful 
misrepresentations to the Federal Trade Commission with respect 
to its compliance with past regulatory action. Is that true?
    Mr. Sullivan. I am familiar with the allegations. I would 
point you to the statements that we made as a company; the 
company disagrees with many of the allegations. Now it is 
connected to an ongoing lawsuit, so I am not able to----
    Senator Ossoff. My question to you, Mr. Sullivan, is has 
Twitter willfully misrepresented facts to the Federal Trade 
Commission?
    Mr. Sullivan. I can tell you that Twitter disputes the 
allegations. That is all I can tell you about those particular 
allegations.
    Senator Ossoff. You cannot tell me definitively, Mr. 
Sullivan, that Twitter has not willfully misrepresented facts 
to the FTC.
    Mr. Sullivan. I would point you to what I just said.
    Senator Ossoff. Noted. You do not deny that Twitter has 
willfully misrepresented facts to the FTC. Understood.
    I want to ask you about the logging of access to user data 
and the extent of privileged access to user data for Twitter 
personnel. Does Twitter, Mr. Sullivan, have in place a system 
by which you can determine definitively which Twitter employees 
have accessed private user data, for example, to include 
history of use of the platform, browsing history, direct 
messages, geolocation data, IP addresses?
    Mr. Sullivan. Thank you for the question. I can tell you 
what I have observed. I have been in my role since April of 
this year. Our current leadership for infosec, privacy, and 
access controls has a robust process for access to data.
    For example, people have to have a business need to access 
certain datasets; to operate the service, some number of 
people need access to certain datasets. Our goal, which is 
aligned with our privacy objective, is to minimize that access 
to what is necessary to do your job function.
    We have access controls, monitoring, and logging. I 
receive, for example, new employee approval requests: this 
person needs to be able to run this report.
    Senator Ossoff. Mr. Sullivan, I appreciate the overview, 
but the specific question to which I am seeking an answer is, 
is there a log event any time a Twitter employee accesses the 
private user data of a specific user? Can Twitter determine 
every time one of your employees has accessed such private user 
data? Do you have that functionality? It is really a yes-or-no 
question.
    Mr. Sullivan. We have monitoring and logging and access 
control. It is always evolving and improving. But what I can 
tell you is I have observed it in action. I cannot speak to 
every single system. We have a team that can.
    Senator Ossoff. Mr. Sullivan, I will look for that in the 
follow-up. I want to say, respectfully, this. You are here 
before the U.S. Senate. Serious allegations were made yesterday 
by one of your former employees, and I am open-minded. I am 
here pursuing the facts. Certainly in your responses for the 
record it is going to help you to be clear, definitive, and 
precise responding to yes-or-no questions like that one. Can 
you commit that in your written responses we are not just going 
to get talking points and generalities, we are going to get 
precision and yes-or-no answers to yes-or-no questions?
    Can I get a yes-or-no answer to that question?
    Mr. Sullivan. Yes. I am trying to explain----
    Senator Ossoff. Thank you. No. I just need a yes to that 
question.
    Mr. Sullivan. Yes, I understand. Thank you.
    Senator Ossoff. So yes?
    Mr. Sullivan. Yes.
    Senator Ossoff. Great. Thank you.
    Let me ask you, please, Mr. Cox. There has been substantial 
public reporting, controversy, and concern about the Meta 
Pixel product and the possibility that its deployment on 
various hospital system websites, for example, has enabled 
Meta to collect private health care data from U.S. persons, 
some of it potentially of the kind that would typically be 
Health Insurance Portability and Accountability Act of 1996 
(HIPAA) protected.
    Does Meta possess or collect any health care or medical 
data related to its users or to U.S. persons?
    Mr. Cox. Senator, not to my knowledge, but I would be happy 
to follow up on that specific issue.
    Senator Ossoff. OK. I would like you to follow up, and 
please, Mr. Cox, submit to this Committee a comprehensive and 
precise answer to that question, which I will recharacterize in 
writing. We need to understand, as the U.S. Congress, whether 
or not Meta is collecting, has collected, has access to, or is 
storing medical or health data for U.S. persons or your users. 
Will you get me a comprehensive and precise answer to that 
question?
    Mr. Cox. Senator, yes, we would be happy to follow up.
    Senator Ossoff. OK. Thank you very much.
    Ms. Pappas, I heard some of the responses to Senator 
Hawley's questions. I would like you to answer a question. There 
has been a significant topical focus on this throughout this 
hearing. In what ways does the government of the People's 
Republic of China, if at all, exercise influence over TikTok's 
corporate behavior or corporate policies? I am going to ask the 
Chairman's indulgence and follow up for as much precision as I 
can get, so I am going to humbly and respectfully ask you not 
to give me the immediate topline talking points but to give me 
a precise, particularized answer to that question.
    Ms. Pappas. In no way, shape, or form, period.
    Senator Ossoff. In no way, shape, or form, period, does the 
government of China exercise any influence over TikTok's 
corporate practices or policies.
    Ms. Pappas. Correct.
    Senator Ossoff. For example, if you receive a request from 
the government of China to take down certain content for 
reasons that they state are related to their national security, 
do you comply with such requests?
    Ms. Pappas. No.
    Senator Ossoff. Do you comply with such requests if you 
receive them from the U.S. government?
    Ms. Pappas. If it follows due legal process, yes. We 
actually include all government requests for takedown in our 
transparency reports, in which you can see that China has made 
no such requests.
    Senator Ossoff. Thank you, Ms. Pappas. There will be some 
follow-up questions for you there for the record. I appreciate 
all of your testimony. Thank you for answering questions, for 
those which were answered, and Mr. Chairman, I yield back.
    Chairman Peters. Thank you, Senator Ossoff.
    Senator Lankford, you are recognized for your questions.

             OPENING STATEMENT OF SENATOR LANKFORD

    Senator Lankford. Mr. Chairman, thank you. Thank you to all 
of you and your testimony. You have been here a long time. 
There are a lot of questions. You have gone through a lot of 
different issues. I apologize that I had to run in and out 
real quick.
    Ms. Pappas, I want to be able to follow up on a couple of 
things real quick. You have answered a lot on China. Obviously 
it has been a big issue. You know that. It is not like you went 
to TikTok and were shocked there were issues with China and the 
possibility there.
    There are a couple of questions that have come up recently 
on this. One of them is the ability for TikTok to be able to 
track keystrokes after you leave the app, to be able to be on 
the app, click a link to be able to go to another site, and be 
able to track keystrokes. Is that a part of the app's design 
that you can do that?
    Ms. Pappas. No, it is not.
    Senator Lankford. That is not used? Because it has been 
widely reported that it is part of the app currently and its 
structure. Has it been part of the app, and has it recently 
been taken off?
    Ms. Pappas. The keystrokes one, to my knowledge, was 
basically an anti-spam measure, and so that was never 
collecting the content of what was being typed.
    Senator Lankford. Was there an ability, though, to be able 
to track keystrokes on it as you are on the app, click on a 
link to be able to go to another page, to be able to track?
    Ms. Pappas. I do not believe so, no.
    Senator Lankford. OK. We will follow up on that ``do not 
believe so.''
    The other one is you have offices all over the world. As 
you mentioned, a lot of your offices are in Singapore. The 
original development of TikTok, it is my understanding it came 
from ByteDance. It was a Chinese development originally and 
then it spread all over the world. Correct?
    Ms. Pappas. Yes, it was originally developed by the parent 
company, ByteDance, but also by Musical.ly; the two apps were 
combined. But currently, and for a while now, there have been 
separate apps, separate code, separate servers.
    Senator Lankford. Are any of the developers that work on 
the design still based in China, in your Chinese office?
    Ms. Pappas. Yes. We have said that we have engineers in 
China. Correct.
    Senator Lankford. That will be one of the conversations we 
will have in the days ahead to be able to follow up on, what 
those access points are. There is an obvious consideration here 
with this Committee and with others on national security 
issues. It is just well known that, as a part of Chinese law, 
the government gets access to anything involving technology. 
For China to have the possibility of access to 100 million
Americans, including most of our young people, that is an issue 
for us, and it is the reason we ask hard questions.
    Ms. Pappas. We understand that concern and I appreciate 
your question, Senator, which is why we are investing heavily 
in ensuring strict access controls, and we are working with 
Oracle. We recently announced that 100 percent of our user data 
is now stored in Oracle's cloud infrastructure, and we have 
further said that they will be vetting and validating our 
content moderation and recommendation systems. We really are 
committed to transparency and security on these topline issues, 
and we are happy to provide further information.
    Senator Lankford. Great. We will continue to be able to 
follow up.
    Mr. Cox, thank you, as well, for being here, as for all of 
you in this conversation. I have a couple of questions here. 
One is dealing with the experts, as you mentioned in your 
testimony as well, that are actually helping with the fact-
checking process. We did a little bit of digging in some of 
this, and obviously you have a diverse group of nonprofits and 
think tanks and other folks that help some of the experts in 
fact-checking. But there are also some that make us scratch our 
head a little bit on it.
    One of the groups that was dealing with coronavirus and 
some of the fact-checking early on was actually a group of 
journalists. As we went through and looked at some of the 
credentials, all of which were public for all these 
individuals--thanks for the transparency on that--none of them 
were medical professionals.
    Not to be pejorative on journalists, but I do not run into 
a lot of conservative journalists. There are a few out there. 
The consistent fear is that conservative voices are silenced, 
and when I look at some of the groups that actually do the 
fact-checking I do not find a lot of conservative groups that 
do this.
    Ms. Pappas, on the same kind of issue, as I go through for 
TikTok they list as one of the fact-checking groups, or the 
experts that are out there, the Southern Poverty Law Center as 
one of the places they go. The Southern Poverty Law Center 
considers the Family Research Council and the Alliance 
Defending Freedom, which are just pro-family groups and 
religious freedom groups, to be hate groups. If TikTok is 
dependent on the Southern Poverty Law Center to be able to find 
what is a hate group, then the Family Policy Council is a hate 
group, suddenly, on TikTok.
    The question is, how do you develop your expert groups? How 
do you make sure that they are actually balanced and that the 
advice you are getting on what that looks like is actually 
fair?
    Mr. Cox, do you want to jump in first on the Meta side?
    Mr. Cox. Yes, I would be happy to, Senator. On the issue of 
misinformation, we know that people do not want misinformation 
on the platform, and that is why we have developed a program to 
work with independent fact-checkers that are certified by the 
International Fact-Checking Network (IFCN).
    Senator Lankford. How do you make sure it is a balanced 
perspective, philosophically?
    Mr. Cox. I know that the IFCN has specific policies around 
looking for balance. I also know that there are folks on both 
sides of the aisle who are members of that network.
    Senator Lankford. I would only say, how do you make sure it 
is balanced, not how does that organization make sure it is 
balanced, because again, there is a perception--and I would 
tell you, I understand their perception because I have a lot of 
conservative organizations--churches, faith-based nonprofits, 
all kinds of entities--that reach out to my office, at home and 
here, and say, ``I just got blocked from Facebook. We are 
trying to figure out why.'' They
are not terrorist organizations. They are not violent. They are 
not anything else. They just got blocked, and they are trying 
to figure out if conservative ideology is the reason why.
    What I am trying to figure out is who fact-checks the fact-
checkers for you to be able to make sure that you are getting a 
fair perspective on this? You have millions or billions of 
pages that IFCN needs to be able to track. When someone
gives you counsel, how do you take advantage of checking that 
first to make sure it does not have a bias?
    Mr. Cox. Senator, on the question of checks and balances 
among the fact-checker network, the system that we have set up 
allows fact-checkers to check each other and resolve disputed
claims in that way. We believe that that helps the system be 
more fair.
    Senator Lankford. But I guess I am asking, so the same 
question. How do you make sure that that perspective is 
balanced, that it is not all fact-checkers that think alike?
    Mr. Cox. Senator, ultimately we believe that our platform 
is best for people when it can be a place for all voices and 
for all political points of view.
    Senator Lankford. Great.
    Mr. Cox. That is in our interests and that is in the 
interests, we believe, of the Nation.
    Senator Lankford. I 100 percent agree. I am trying to say 
to you that it is not, that there are entities that really 
believe their voices are being blocked out, and that the 
individuals that are fact-checking them have a bias against 
them politically, not necessarily for violence or something 
else. That is part of the challenge here, and I would challenge 
you on that, and all of us, to make sure it is going to be fair 
and balanced.
    Let me move on to a couple of other issues. Mr. Chairman, 
do I have an extra minute here I can go on it? Thank you for 
that. I have two quick other things that I want to be able to 
address. In Meta's terms of service you state, in terms of 
service, 3.2.1, ``You may not use our products to do or share 
anything that is unlawful, misleading, discriminatory, or 
fraudulent, or assists someone else in using our products in 
such a way.''
    But you also have stated, as Meta, ``We prohibit content 
that offers to provide or facilitate human smuggling, which 
includes advertising a human smuggling service, but we do allow 
people to share information about how to enter the country 
illegally or request information about how to be smuggled.''
    Now I am trying to align those two, where you say you 
cannot use our platform for any illegal activity, promoting 
illegal activity, or facilitating that, unless you are 
illegally crossing our border. Then we are going to facilitate 
the use of our platform, which in fact has been used to 
facilitate connecting with the cartels and the traffickers to 
facilitate illegally crossing our Southern Border.
    Help me understand between those two. Which one is correct?
    Mr. Cox. Senator, we have been working with law enforcement 
for a while now on the very serious issue of human trafficking 
across our borders. We have folks at the company who 
specifically speak to law enforcement and border officials to 
make sure we have an up-to-date list of the cartels, which we 
can use in order to fan out across our systems and take
them down. We have policies against human trafficking, and we 
have policies against those cartels, to make sure that we are 
able to remove them as soon as they pop up.
    Senator Lankford. But this is a Meta statement that you 
made: ``We allow people to share information about how to enter 
the country illegally or request information about how to be 
smuggled.'' That is allowed already, based on Meta's policy, 
but you also say, ``We do not allow you to use this for illegal 
activity.'' That is what I am trying to figure out: you either 
allow illegal activity or you do not allow illegal activity, 
but it looks like you are trying to do both. We do not like 
smuggling, but we are facilitating people who are illegally 
coming into the country.
    Mr. Cox. Senator, the policy here, as I understand it, is 
specifically about human trafficking and cartel networks that 
are facilitating illegal trafficking of people. I would be 
happy to follow up with you on this.
    Senator Lankford. Let us do this. There is not a person 
that crosses our Southern Border into the United States that 
does not pay the cartel. As our Border Patrol will tell you, 
the border is secure. It is secure on the south side.
    When I was in McAllen, Texas, a couple of months ago they 
said in that area the cartels, just in that area, make $153 
million a week trafficking people across the border. Many of 
those migrants meet up with the people who are moving them 
across the border illegally through a Facebook platform. That 
is a big
issue to me, and it seems like Meta is being inconsistent in 
their terms of service about illegal activity.
    My last comment on this, and I really will make it my last 
comment, and I really do appreciate the time on it. I have, for 
years, gone back to Facebook and said, ``I have all kinds of
constituents at home that say to me, `I would comment to you on 
your page except when I comment I get just ruthlessly attacked 
by people that politically disagree.' '' They click the angry 
button, they yell at them, they say all kinds of mean things to 
them when they comment on my page. So they just do not comment.
    What has happened is, political Facebook pages--and that is 
for everyone here, both sides of the aisle--have become places 
for anger and aggression. When you disagree, you go and attack 
the people who comment, who like someone politically; you go 
attack them instead.
    What I have asked Facebook for years is, allow those of us 
who know our pages are places where there is wide 
disagreement to have the option to say, ``You can
comment to me but you cannot attack the people that comment to 
me.'' We can have dialog and interaction but you cannot have 
this angry interaction with each other on the page. Give us the 
option to turn that off so we see comments, we can respond back 
to people and have that dialog and interaction, but you turn 
down the volume.
    What I have heard, year after year, from Facebook is, 
``That is not really what we do. What we do is interaction.'' 
But everyone knows the interaction there is angry, bitter, 
aggressive interaction. That is not healthy.
    My request again to Facebook, which we have made in writing 
and in follow-up and in conversation, is this: you have the 
ability to turn down the volume, to have fewer angry emojis 
flying at people, by giving us that option. Please do.
    Chairman Peters. Thank you, Senator Lankford, for the 
Senate one minute. I appreciate it. [Laughter.]
    We will do a second round, and because of the late nature, 
and all four of you have been here a long time, if everyone can 
hold within--I was very generous in the first round. We will 
try to make sure that we do the seven minutes in this round, if 
you would.
    I want to get back to where we left off which was on the 
actual design of these products up front, not dealing with 
problems after they have already arisen and sometimes waiting 
years before you fix the problems. How do we put it in the 
initial design?
    My question, Mr. Cox, is that the Wall Street Journal (WSJ) 
reported last week that Meta shut down its Responsible 
Innovation Unit. These two dozen employees were charged with 
identifying potential harms at the conceptual stage of new 
product design and with changing the design culture. Why was 
that team eliminated?
    Mr. Cox. Senator, thanks. Respectfully, on this team--
because I saw this reporting as well--the work here was not 
eliminated. The specific team named here was a small team of
about 20 people that was overlapping with our much broader 
integrity, safety, civil rights efforts across the company. 
This was a case of just moving that work into the teams where 
it was best----
    Chairman Peters. You just moved them into a different part. 
They are still there. Is it safe to say that this is happening 
across your platform, with your design team? Is everybody on 
the design team compensated based on the trust and safety of 
the products that you are putting out? Is that part of the 
metric?
    Mr. Cox. Senator, when we look at the health of any product 
we will look at trust and safety as a part of that. We will 
look at security. We will look at relevance. We will look at a 
holistic set of metrics, both quantitative and qualitative.
    Chairman Peters. No, I know. Let me interrupt because of 
time, because I am going to hold everybody to seven minutes 
here. Within the metrics used to determine compensation, does 
an individual get compensated based on something related to 
safety and trust? When they get their bonus at the end of the 
year, are safety and trust part of that compensation?
    Mr. Cox. Senator, so for bonuses we would have----
    Chairman Peters. Just say yes or no.
    Mr. Cox. Is it a part of how we look at the health of the 
product, which is related----
    Chairman Peters. It is a part. It is an actual line.
    Mr. Cox. Excuse me, Senator?
    Chairman Peters. It is an actual line, related to safety 
and trust.
    Mr. Cox. Trust and safety metrics are part of----
    Chairman Peters. If a product goes out that causes a lot of 
problems, they are going to be penalized for that, financially?
    Mr. Cox. Senator, we would not launch a product if we 
believe it was about to be unsafe.
    Chairman Peters. You would not.
    Mr. Cox. Once we do launch products we evaluate things like 
prevalence, things like reports, and a whole host of metrics in 
order to understand the health of a product from a safety 
perspective.
    Chairman Peters. But do people get compensated related to 
safety and trust? Just yes or no. You said yes, they do. I will 
go down. Mr. Mohan.
    Mr. Mohan. Senator, building trust and safety into our 
products is not just an integral part of our goals and topline 
metrics; it is our No. 1 priority. But it is also built into 
the product development process.
    Chairman Peters. I know it is in the process. I just want 
to know the compensation. Are they compensated specifically 
because they are working on trust and safety? Is every employee 
in your product team doing that?
    Yes or no.
    Mr. Mohan. If an employee builds a product that does not 
take trust and safety into account, we simply would not launch 
that product for our users.
    Chairman Peters. We heard today that people had questions 
about the launching of products, and they still got launched; 
the trust and safety folks were told, thank you for your 
opinion, but we have to launch this product. Second question: 
people are compensated based on growth and profitability, like 
at other companies. You are not the only company that does 
that, but that is really the main driver.
    Mr. Mohan. Senator, as the Chief Product Officer of YouTube 
I look after both our product development process and our trust 
and safety operations, and I can tell you, unequivocally, that 
we would not launch a product or grow a product that was to the 
detriment of our users' trust and safety.
    Chairman Peters. If you launched a product and it turned 
out it was not like you thought, and it was not trustworthy or 
safe, would the product designer lose their bonus, or would 
they still be compensated?
    Mr. Mohan. It would factor into their performance reviews.
    Chairman Peters. OK. Ms. Pappas. Yes or no.
    Ms. Pappas. Safety and trust is a core priority for us.
    Chairman Peters. I understand. Is it a part of 
compensation?
    Ms. Pappas. Every feature has trust and safety with a seat 
at the table. As we do our product development and launch 
process, we have actively delayed launches that did not meet 
our standards for safety. It is a top priority for us. We 
invest heavily in this to ensure the safety of our products at 
launch. In regard to performance, it is one of the factors.
    Chairman Peters. Mr. Sullivan? Be brief, please. Yes or no.
    Mr. Sullivan. Health and safety is a topline metric for the 
Consumer Products organization, so it will affect how people's 
performance is graded.
    Chairman Peters. You have mentioned examples of where you 
may not have launched or have not launched. I assume that you 
all have examples where you have not launched because some 
issues were raised. I would certainly like to have that 
information. Would each of you commit to giving us an example 
so we have a sense of what actually is caught before it is 
actually released?
    Mr. Cox, not now, but would you provide an example for us 
on that?
    Mr. Cox. Senator, we would be happy to.
    Chairman Peters. Thank you. Mr. Mohan.
    Mr. Mohan. Yes, Senator, I am happy to follow up.
    Chairman Peters. Ms. Pappas.
    Ms. Pappas. Yes.
    Chairman Peters. Mr. Sullivan.
    Mr. Sullivan. Yes. I also have examples.
    Chairman Peters. Great. Thank you. The other thing is how 
you deploy resources, and we have heard a lot of numbers here. 
I think the most valuable resource is just the number of 
engineers. I am going to ask you three questions, for each of 
you to answer.
    We sent this to you last week. We have been trying to get 
this information for a long time. We said we were going to ask 
you this question today, so I am sure you are prepared for the 
question, because we asked it on Friday.
    Each of you, what is the total number of full-time 
engineers you have in your company? How many of those engineers 
work full-time on ensuring trust, safety, or integrity of your 
platforms? Three, how many engineers work full-time on product 
development?
    Mr. Cox.
    Mr. Cox. Senator, the total number of engineers at the 
company is on the order of tens of thousands.
    Chairman Peters. No. That is not what I asked. We asked 
very specific questions on Friday. We have been trying to get 
this information for a long time. We said we are going to ask 
you this question in the hearing, and you are saying you did 
not get it. You do not have it for me? OK. Mr. Mohan.
    Mr. Mohan. We have thousands of engineers that work at 
YouTube.
    Chairman Peters. OK. You do not have a specific answer for 
me either. Ms. Pappas.
    Ms. Pappas. I do not have the engineer numbers, but trust 
and safety represents our largest labor expense at TikTok.
    Chairman Peters. OK. You do not have numbers. Mr. Sullivan, 
you do not have numbers as well, or do you? I hope you do. 
Please, one of you do. We have been trying for months to get 
these answers. This is why we get so frustrated.
    Mr. Sullivan. We have about 2,200 people working on trust 
and safety across Twitter.
    Chairman Peters. What is the total number of full-time 
engineers?
    Mr. Sullivan. I am sorry. That was not an engineer number. 
Those are the people who build and enforce the Twitter rules. 
We have
several thousand engineers at Twitter.
    Chairman Peters. So the same thing. You do not have 
specific numbers, as we asked. OK.
    Would you commit to get me those numbers, Mr. Cox?
    Mr. Cox. Senator, I am happy to have the teams follow up.
    Chairman Peters. That is a yes. Mr. Mohan.
    Mr. Mohan. Senator, I will have my teams follow up as well.
    Chairman Peters. Thank you. Ms. Pappas.
    Ms. Pappas. We are actively working to get you those 
numbers. We will follow up as appropriate, yes.
    Chairman Peters. Thank you. We are trying to work together. 
This is really a complex problem. We get it. I understand the 
complexity of the problems you have to deal with each and every 
day. We want to work with you, but we need to be able to have 
this kind of dialog to get a better sense of what is going on 
as we go forward, so please do that.
    Ranking Member Portman, you are recognized for your 
questions.
    Senator Portman. Thank you, Mr. Chairman. Not to leave 
Twitter out, I wanted to ask a question regarding the sexual 
material online that we talked about earlier. As I said, this 
Committee has been a leader in stopping human trafficking, and 
specifically sex trafficking underage kids, and we have passed 
some legislation that is making a difference.
    According to a website called Bark, which advises parents 
on how to keep their kids safe, Twitter was among the top five 
sites for severe sexual content. This year it was widely reported
that Twitter considered monetizing sexual content, meaning, as 
I understand it, people could actually get paid for 
pornography, basically, for putting sexual content online. My 
understanding is this project has now been put on ice because a 
group of Twitter employees found that the platform could not 
effectively separate out child exploitation content, and I 
appreciate you did not go forward with this plan.
    According to The Verge, Twitter employees have said that 
despite executives knowing about the child sexual exploitation 
problems on the platform, they have not committed sufficient
resources to detect, remove, and prevent this harmful content. 
This is a news story that I would like to ask be made part of 
the record.\1\
---------------------------------------------------------------------------
    \1\ The information referenced by Senator Portman appears in the 
Appendix on page 163.
---------------------------------------------------------------------------
    There are lots of issues here. One is you made the right 
decision not to monetize this explicit content at this time, 
which is really a pornography scheme as I see it. But
I wonder if you can give us a commitment today to halting this 
program indefinitely so as to prevent the platform and bad 
actors from making money off of child sexual material?
    Mr. Sullivan. First may I say that we abhor CSAM, the 
sharing of child sexual abuse material. I appreciate your work 
there. I worked on this here and also at Meta, so I have been 
working on this for years.
    I made that decision to pause this idea. It was not a 
product. It was a set of people who had an idea that they 
thought they might want to pursue. I said I want to look at all 
the
information here and learn about where we stand, what the risks 
could be. I think this is how the system should work. We looked 
at a product in its very early ideation and did the analysis 
and got the perspectives, and said this is not appropriate for 
us to be doing. So that is how the process went.
    Senator Portman. OK. So you made a commitment today not to 
pursue it?
    Mr. Sullivan. We are not pursuing that.
    Senator Portman. You have made a commitment not to pursue 
it in the future?
    Mr. Sullivan. We have no plans to pursue monetization of 
adult content. That is correct.
    Senator Portman. You have no plans to do it. Can you just 
tell us you are not going to do it?
    Mr. Sullivan. I am not planning to do it, no.
    Senator Portman. You are not planning to.
    Mr. Sullivan. I am not doing this.
    Senator Portman. Just say you are not going to do it.
    Mr. Sullivan. We are not planning to do it, no.
    Senator Portman. Cannot get a ``planning'' out of there.
    Not to, again, leave anybody out, Mr. Mohan, we have not 
had a chance to talk yet. I want to ask you about something 
that is important to this Committee, and I hope a way forward 
in terms of legislating and regulating platforms. Your 
platform's algorithms have been described as a ``black box,'' 
according to experts and researchers, meaning there is little 
to no transparency in the algorithms. I am sure you have heard 
that before.
    The question is, is there a way to come up with a 
transparency approach that makes sense as calls grow for 
Congress to pass legislation? I like the idea of having much 
better information than we have, getting behind the curtain and 
getting into that black box.
    That is why, along with Senator Chris Coons, I drafted 
legislation called the Platform Accountability and Transparency 
Act (PATA). It would require the largest tech platforms to 
share data
with vetted, independent researchers and other investigators so 
that we can know exactly what is happening with regard to the 
privacy issues we talked about today, or content moderation, 
product development, sexual exploitation issues, key industry 
practices.
    My question for you, Mr. Mohan, would you be supportive of 
legislation like PATA to get at this need for transparency and 
for us to be able to legislate with better information?
    Mr. Mohan. Yes, Senator, I would be supportive of the 
spirit behind that regulation. The reason is that I agree with 
you. I do think that transparency around our
practices, how we go about them, is an important thing. It is 
the reason why we have invested so heavily in our quarterly 
transparency report, which you may be familiar with.
    It is also the reason why we, just a few weeks ago, 
launched the YouTube research program, which is similar, in my 
understanding, to what the act that you are referring to is 
trying to get at, which is giving academic researchers access 
to our raw data, obviously in a user privacy-sensitive way, 
where they can derive metrics or derive insights of their own 
based on that data. We have taken it a step further where we 
will also provide technical support that these researchers 
might need to get at the insights that they are looking for.
    I am very bullish about that transparency program, and 
based on the feedback that we hope to get from researchers, we 
look forward to enhancing it in the future as well.
    Senator Portman. We are following your YouTube research 
program carefully. We are glad you created it. We want to see 
what the results are and we want to be sure these are 
independent individuals who will give actual information about 
what the algorithms are--again, what is in the black box--so
that citizens can understand it better, and as legislators we 
can legislate better. I think that is a positive step.
    With regard to PATA, can I hear from the other members of 
the panel how you feel about this legislation? We have shared 
it with all of you. We hope to introduce it soon. Again, it 
would be bipartisan, and it would be one that would, I hope, 
give us a way forward as a first step. Mr. Cox.
    Mr. Cox. Senator, thanks. I know our teams have been in 
contact with yours on this. We are aligned that more 
transparency about content on our platform is a good thing. It 
is a good thing for the public. It is a good thing for the 
company.
We also have an academic research program called FORT,
where we have designed privacy-protected ways of sharing 
information with outside academics and researchers. We have 
also released a widely viewed content report which helps folks 
get access to which content is seen the most times on the 
platform. We also publish a quarterly community standards 
enforcement report, which gets into categories of content by 
region and shows the work we are doing every day.
    We are committed to working with you on this.
    Senator Portman. Yes. We talked about regulatory needs.
    Ms. Pappas, yes or no?
    Ms. Pappas. Senator, transparency builds trust. We were the 
first platform to open our own Transparency and Accountability 
Center for that specific reason, so people could take a look at 
our content moderation systems and recommendation systems as 
well. Last month we announced that we will be
opening our API to researchers as well, so we would be happy to 
support that legislation.
    Senator Portman. OK. Thank you. Mr. Sullivan.
    Mr. Sullivan. Yes. We have been publishing data to 
researchers for years, and we are very open to anything that 
improves transparency. Especially as AI moves forward, it is 
going to be very important.
    Senator Portman. It is important. It is needed. Thank you, 
Mr. Chairman. Thank you all.
    Chairman Peters. Thank you, Ranking Member Portman.
    Senator Johnson, you are recognized for your questions.
    Senator Johnson. Thank you, Mr. Chairman. Mr. Cox, just a 
quick little housekeeping here. Are you aware of a letter 
Senator Grassley and I sent to Mr. Zuckerberg on August 29th? 
We did get a reply on September 12th from Mr. Kevin Martin, 
just saying they are going to respond. Are you aware of that 
letter asking for information, contact between yourself, FBI, 
Department of Justice, documents, names, that type of thing?
    Mr. Cox. Senator, yes, I am aware of that letter and I know 
the team is working on following up as quickly as they can.
    Senator Johnson. You will commit to full response on that?
    Mr. Cox. I know the team is committed to a response, yes.
    Senator Johnson. OK. Let us put up my first chart\1\ here. 
Back in November 2021, CDC Director Rochelle Walensky stated in 
front of
the Health Committee, ``We have the most robust, safe vaccine 
safety system we have ever had in this country.''
---------------------------------------------------------------------------
    \1\ The chart referenced by Senator Johnson appears in the Appendix 
on page 192.
---------------------------------------------------------------------------
    In October 2020, before the vaccine was approved, CDC's Dr. 
Tom Shimabukuro stated in a web seminar--the Vaccine Adverse 
Event Reporting System (VAERS) is obviously something that Ms. 
Walensky was talking about--``VAERS traditionally has provided 
the initial data on the safety profile of new vaccines when 
they are introduced. For COVID, vaccine reports will be 
processed within one to five days. Depending on the seriousness 
of the report, CDC and FDA receive updated datasets daily, and 
data-mining runs are planned to be conducted every one to two 
weeks.''
    This is an example of the timeliness and responsiveness of 
VAERS, going back to H1N1. It kind of sounds like they are 
really going to rely on VAERS. I remember part of that 
discussion when they said, ``Listen, we are going to take 
vaccine safety so seriously, if we get a report of a couple of 
days of lost time because of an injury we are going to be 
calling that individual up and we are going to be checking on 
it.'' It really sounded like they had this all covered, right?
    Let us see what they actually did. I produced this chart\2\ 
because I took VAERS and FAERS seriously, and I started 
tracking this, and I started putting together this chart. I 
want to quickly describe what this is. The first five lines, 
the first five drugs, four of them are in the FDA Adverse Event 
Reporting System (FAERS): ivermectin, hydroxychloroquine, 
dexamethasone, and
Tylenol. You have the flu vaccines in there. That comes off the 
VAERS system. You have remdesivir, which comes off of FAERS, 
and COVID vaccine that comes off of the VAERS system.
---------------------------------------------------------------------------
    \2\ The chart referenced by Senator Johnson appears in the Appendix 
on page 193.
---------------------------------------------------------------------------
    Now I can see why the government really did not like the 
way I put this. This is their data. I did not make these 
numbers up. This is off the VAERS and the FAERS systems. But 
for whatever reason Twitter censored this chart.
    Now just quickly--and I will show you the current version, 
the one you censored--it showed that ivermectin, on average 
over 26 years, had 15 deaths reported on the FAERS system. 
Hydroxychloroquine had 69 deaths. Flu vaccines had 77.
Dexamethasone had 618. Tylenol had over 1,000. Remdesivir, 
since it was approved, had 1,612, and the vaccines had 21,000 
deaths. OK, these are just the facts, and Twitter censored it. 
Do you have any idea why?
    Mr. Sullivan. Senator, I was not at the company at the 
time, but what I can tell you is that we want robust discussion 
on the platform, of any issue. A COVID misinformation policy 
was developed that--and again, I did not develop it--seemed 
quite narrow to me.
    Senator Johnson. You censored government information. Here 
are the current numbers, by the way: over 30,000 deaths 
reported worldwide, 27 percent of which have occurred on Days 
zero, one, or two. You not only censored this chart, you 
censored, for example, radio shows that
interviewed me, talking about FDA-CDC data.
    YouTube took down a video of this Committee's hearing 
featuring an eminently qualified critical care specialist who 
saved
thousands of lives treating people, using what seems to me 
pretty safe drugs. After eight million views, YouTube pulled 
that video down. What would be the justification for YouTube 
pulling down a hearing of the U.S. Senate with a highly 
qualified doctor just giving a second opinion on how to save 
lives during COVID? Why would YouTube do that? On what 
authority, whose authority, are you censoring that information 
so the American public could not receive a second opinion, and 
access drugs that might have saved their lives? Why would 
YouTube do that?
    Mr. Mohan. Senator, respectfully, as I was mentioning 
earlier, we did not decide those policies on our own. We worked 
with third-party health authorities in this country. That did 
include the CDC and the FDA.
    Senator Johnson. I will be sending you a letter, and I want 
to know who those health authorities were, and I want to see 
the communications between them. Will you commit to providing 
me that information, for transparency's sake?
    Mr. Mohan. Senator, I am happy to follow up on your request 
on how we developed that policy.
    Senator Johnson. In July 2021--talk about misinformation; 
this should have been the 2021 lie of the year--President Biden
said, ``You are not going to get COVID if you have these 
vaccines. If you are vaccinated you are not going to be 
hospitalized, you are not going to be in an ICU unit, and you 
are not going to die.'' That is the President of the United 
States.
    It just so happens we could not rely on the CDC and the FDA 
because they were not honest, they were not transparent, they 
were not giving us data, so we had to go to Public Health 
England. This is a chart\1\ published from their Technical 
Briefing Number 23, which covered the period from February 1 to 
September 12, 2021. It shows 593 cases of mainly Delta and 
2,542 deaths, of which 1,613 occurred among the fully 
vaccinated.
---------------------------------------------------------------------------
    \1\ The chart referenced by Senator Johnson appears in the Appendix 
on page 195.
---------------------------------------------------------------------------
    Obviously, this was published, and they were publishing 
other similar information during that time period when 
President Biden lied to the American public that this was a 
pandemic of the unvaccinated, and that if you got vaccinated 
you were not going to go to the hospital, you were not going to 
be in an ICU unit, and you were not going to die. Well, 63.5 
percent of the people dying in England at the exact same time 
were fully vaccinated.
    Why did you not pull this? Have you ever labeled the 
President of the United States' comment as misinformation? Have 
you ever done that? Any of you? I will take that as a no.
    Again, I just wonder, who are the authorities, who do you 
think you are to censor information from eminently qualified 
doctors who had the courage and compassion to treat COVID 
patients when the National Institutes of Health (NIH) guideline
was basically if you test positive for COVID, go home, be 
afraid, isolate yourself, do not do anything until you are so 
sick, we will send you to the hospital, we will give you 
remdesivir, where we have 1,600 deaths so far, we will put you 
on a vent, and we will watch you die.
    You guys bear a fair amount of responsibility for hundreds 
of thousands of people not being treated, and I would say 
probably dying that did not have to die. I hope you are proud 
of yourselves.
    Chairman Peters. Senator Lankford. Here now in the second 
round: Senator Lankford, Senator Hawley, and then you.
    Senator Lankford. I will give back part of my magic minute 
here.
    Chairman Peters. Yes, please do that.
    Senator Lankford. I will go short on this. I do want to 
follow through on a couple of things there on illegal activity. 
You have all been very outspoken on dealing with sexual child 
predators, with different issues, drug trafficking. Those were 
all good things to be able to engage on.
    But it is fascinating to me that the platforms have chosen 
to say there are some illegal activities we are OK with, and, 
in fact, we are going to facilitate. One of those is illegally 
crossing our Southern Border. It is not hard for me to go to 
YouTube, and I just type in ``how to cross the border 
illegally'' and I get a video that says, ``How to illegally 
cross the Mexico-U.S. border.'' It has 1.7 million views, and 
it has been there for two years.
    Yes, I watched it, and it showed where to cross, what 
highways to avoid, where the Border Patrol typically puts up 
stations, and what to look for. In detail, the video shows how 
to illegally cross the Mexico-U.S. border, where to cross, and 
how to avoid the Border Patrol. This has been up for two years, 
and it has had 1.7 million views.
    As I mentioned, on Facebook, Facebook has ads that I can 
actually show you that are human smugglers placing ads in 
Central America so people will know how to be able to connect 
with them, to be able to travel through Mexico, to be able to 
pay the cartels, which are a ruthless drug organization, to be 
able to get in the United States.
    My confusion on this is I do not understand why the 
platforms look at illegally crossing the border as ``we are 
going to look the other way'' when your user agreements say 
``we do not promote illegal activity except for this one.'' 
Help me understand why that is different.
    Mr. Mohan. Senator, I do not know about that specific 
video. I am happy to follow up.
    Senator Lankford. It is not just one. It is a bunch. That 
is just the first of many.
    Mr. Mohan. I am happy to follow up on those.
    But just in general, we do have very clear policies where 
content that encourages dangerous behavior, not just illegal 
behavior but dangerous, harmful behavior is removed from our 
platform. We have the Four R's approach that I described in my 
opening testimony, where it is not just about removal of 
content but also reduction of content and raising up 
authoritative sources--in the context of people searching for 
that type of information, making news stories from mainstream 
news outlets prominent. We do try to have a holistic approach 
to dealing
with this type of content on our platform.
    We are not perfect. We continue to improve both our 
policies as well as our enforcement. In this specific case I am 
happy to follow up. But we do have very clear policies against 
cartels, harmful criminal conspiracies, other types of 
organizations whose type of activity is not allowed on our 
platform.
    Senator Lankford. I would only say this particular video, 
which, by the way, this one is in English, this particular 
video even talks about how to be able to connect with a cartel 
and how much the cost is going to be when you get to the 
Southern Border.
    Mr. Mohan. Senator, I am happy to follow up, but we do take 
our enforcement----
    Senator Lankford. I get it. This part is not being 
enforced. That is what I am trying to say to you, is that I do 
see all the platforms trying to deal with drug trafficking, but 
human smuggling and illegally crossing the border is not being 
enforced. I am not asking you to solve it today. I am raising 
it as an issue to say somehow we treat cartels differently than
terrorist organizations.
    Cartels are transnational criminal organizations that are 
making money off of moving people illegally into our country 
and making money off of illegally moving drugs into our 
country. I would like for our social media platforms to engage 
with criminal organizations and with criminal activity in a 
manner consistent with your own terms of service.
    That is it. I yield back my time.
    Chairman Peters. Thank you, Senator Lankford. Senator 
Hawley.
    Senator Hawley. Thank you, Mr. Chairman.
    Mr. Cox, I know that Facebook has said in the past that it 
is their position, as a private company, you are not subject to 
the First Amendment. I assume that has not changed. Is that 
right?
    Mr. Cox. That is correct, Senator.
    Senator Hawley. But the United States government is subject 
to the First Amendment. I think we can probably all agree on 
that. Hopefully we can. Hopefully that is still true in this 
country.
    Is it appropriate for Facebook to work with the United 
States government to avoid the First Amendment, help the U.S. 
Government avoid the First Amendment?
    Mr. Cox. Senator, we do think it is sometimes appropriate 
to be in contact with government and with government 
organizations.
    Senator Hawley. To help them avoid the First Amendment?
    Mr. Cox. Senator, I am not sure what specifically you are 
referring to.
    Senator Hawley. Let me ask you this. Do you think it is 
appropriate to work with the United States government to target 
private individual speech that is constitutionally protected?
    Mr. Cox. Senator, I am not aware of that.
    Senator Hawley. Let me educate you. On July 16, 2021, an 
employee at Facebook wrote to the Department of Health and 
Human Services, saying, ``I know our teams met today to better 
understand the scope of what the White House expects from us on 
misinformation going forward.''
    On July 23, 2021, a Facebook employee thanked HHS, quote 
``for taking the time to meet earlier today, and wanted to make 
sure you saw the steps we just took this past week to adjust 
policies on what we are removing with respect to 
misinformation. This included''--and I am still quoting--
``increasing the strength of our demotions for COVID and 
vaccine-related content.''
    On April 7, 2021, a Facebook employee thanked the CDC for 
responding to misinformation queries, and I quote, ``We will 
get moving now to be able to remove all but that one claim as 
soon as the announcement and authorization happens.''
    On July 28th of this year, a Facebook employee reached out 
to CDC about, ``doing a monthly misinfo/debunking meeting.'' 
The CDC responded, ``Yes, we would love to do that.'' I am sure 
they would.
    On July 20, 2021, Clark Humphrey at the White House, who 
was digital director of the COVID-19 response team, emailed 
David Sumner at your company, among others, asking, ``Any way 
we can get this pulled down,'' and cited a specific Instagram 
account. Within 46 seconds, your company replied and said, 
``Yep. On it.'' That sounds like what, in the law, we call a 
pattern and practice of meeting, coordinating, and colluding 
with the United States government to target particular speech--
speech that no one in any of these emails alleges is 
incitement, which would not be constitutionally protected, and 
that no one in any of these emails alleges directly encourages 
violence, which would not be constitutionally protected.
    It appears to all be constitutionally protected speech on, 
I might add, very politically sensitive topics, that Facebook 
is directly working with the U.S. Government to target and 
remove. Is that your company policy to do this kind of thing?
    Mr. Cox. Senator, we were quite public about our 
cooperation with health organizations during the unprecedented 
time of COVID. We knew that people expected and wanted accurate 
information on our platform. We had conversations with CDC, 
with the World Health Organization (WHO), and with other public 
health organizations, not just in the United States but abroad, 
in order to understand how to help make sure that folks were 
not getting information that could cause them any harm.
    Senator Hawley. Fair enough. You are saying that this was, 
in fact, company policy to have these kinds of meetings with 
HHS, with the CDC, with the White House directly, that you did 
engage in this behavior, and you think that it was entirely 
fine. Is that your testimony?
    Mr. Cox. Senator, I do believe it is appropriate for 
companies like ours to be in consultation with public health 
organizations and with government.
    Senator Hawley. You can confirm that things like taking 
down a private Instagram account and adjusting your policies at 
the behest of the White House, and putting into place 
misinformation policies at the behest of CDC, that those 
things, you think, are appropriate, that this was company 
policy to do so. Is that fair to say?
    Mr. Cox. Senator, I am not familiar with the Instagram 
account specifically that you are referencing, but we do know 
that people expected and hoped from the platforms that we would 
help them get accurate information about COVID during the 
unprecedented time, especially at the beginning.
    Senator Hawley. Is there not a difference between you, as a 
platform, putting forward information, and censoring your users 
at the behest of the White House, the Administration more 
broadly, and the CDC? Is there not a distinction there?
    Mr. Cox. We specifically wanted to work with public health 
experts to understand the relationship between information and 
behavior, and so we did consult with the CDC, the World Health 
Organization, and others to understand how the platform 
policies we built were affecting public health.
    Senator Hawley. You did not just consult them to understand 
how they affected public health, you actually censored on their 
behalf. I mean, you took these emails--I am just quoting from a 
sample of them--which, by the way, have been disclosed in 
litigation--these emails show that you took censorship steps. 
You took down accounts. You planned misinformation policies. 
You adjusted your policies at the behest of the United States 
government. That is not just some theoretical thing. That is 
actually targeting your users' speech.
    I appreciate your forthrightness, by the way.
    But you think that is fine, and that was your policy.
    Mr. Cox. Senator, we have been public about our policies, 
on COVID misinformation specifically, as well as on 
misinformation generally.
    Senator Hawley. You are not concerned about any of this. 
Nothing that I just read to you, you are not concerned about it 
at all.
    Mr. Cox. Respectfully, Senator, I think the balance of how 
to protect free expression as well as public safety is a 
difficult issue, but it is one where we are committed to 
working with outside experts and to publishing our work.
    Senator Hawley. I appreciate you being so forthright. As I 
said, this is actually from litigation between the State of 
Missouri and the State of Louisiana and the Federal Government. 
I anticipate that your remarks under oath today are going to be 
very interesting and helpful to that litigation.
    I will just say this. My view is that the United States 
government is bound by the First Amendment. They cannot 
encourage or coerce or incite or collude with a private party 
to get around the First Amendment, that you have just said to 
me today that that is basically what they did, that you 
coordinated with them repeatedly, over a pattern of months and 
years, to adjust and target your speech policies for protected 
speech at the behest of the United States government.
    I have to tell you, I have a big problem with that, and I 
think all your users should too.
    Thank you, Mr. Chairman.
    Chairman Peters. Thank you, Senator.
    To our panelists, it is 5 p.m. I know there was a
suggestion for a break. We are right down to the end here. 
Rather than break and come back and keep you here longer we are 
going to power right through it with Senator Scott, and then I 
will wrap it up after that. Senator Scott.

               OPENING STATEMENT OF SENATOR SCOTT

    Senator Scott. All right. Thank you, Chair Peters. Thanks 
to each of you for being here.
    It is critical for employees and officers of the FBI and 
DOJ to continue to have a mechanism for reporting concerns of 
fraud, waste, and abuse within their respective agencies to 
Congress without fear of reprisal from DOJ/FBI leadership.
    Whistleblowers from within the DOJ/FBI have come forward 
with concerns about the Department of Justice's alleged 
political bias in the FBI's raid on the former President's home 
in Florida last month. FBI agents have reported similar 
concerns to individual Senate offices as well. We all need to 
ensure safeguards are in place so Attorney General Garland does 
not retaliate against or intimidate FBI agents and DOJ 
employees who come to Congress as whistleblowers.
    Mark Zuckerberg recently disclosed that Facebook's 
restriction of a story about Hunter Biden during the 2020 
election was based on the FBI's ``misinformation warnings.''
Additionally, emails and internal communications obtained by 
the journalist Alex Berenson, in his lawsuit against Twitter, 
have shown his removal from the social media platform was a 
result of pressure from Biden White House officials to silence 
his criticism of the Administration's COVID-19 policies.
    These instances and several others show a clear and 
alarming pattern of speech suppression carried out at the 
direction of agencies and officials in the Federal Government. 
In other words, the Federal Government used private businesses 
to violate the First Amendment rights of our citizens. This 
also confirms that the Federal Government used officials at the 
FBI to interfere in the 2020 election by manipulating the 
normal flow of public discourse and information-sharing with 
false warnings about foreign interference and disinformation.
    I am going to ask you a couple of questions, if you can 
just show by hands yes or no. By a show of hands, how many of 
you and your companies have been contacted by a Federal agency, 
an agency official, or a member of the Biden White House with a 
request to remove, censor, or restrict access to a post or an 
individual user on your platform? If you have, would you raise 
your hand and say yes.
    Mr. Sullivan. I am not aware of it.
    Senator Scott. So Meta, YouTube, TikTok, and Twitter all 
said never.
    Mr. Cox. Not to my knowledge, Senator.
    Senator Scott. You would know, would you not?
    Mr. Cox. I have not been in conversations with the FBI, 
Senator.
    Senator Scott. OK. So no conversations. OK.
    By a show of hands, how many of you and your companies have 
felt pressure to remove, censor, or restrict access to a post 
or individual user on your platform based on that contact with 
a Federal agency or official?
    Is the answer no from all four of you? So all four of you 
say no.
    Mr. Cox. Not to my knowledge, Senator.
    Senator Scott. OK. By a show of hands, how many of you and 
your companies have received a misinformation warning issued by 
the FBI? Every one of you is saying no?
    Mr. Cox. Senator, I know that we have received warnings 
from the FBI and other experts about electoral misinformation, 
in general, and foreign interference, in general.
    Senator Scott. By a show of hands--but the rest of you have 
said no.
    Mr. Mohan. Senator, we receive information from every 
administration about things like foreign interference and 
election results, et cetera. But one thing is very clear. We 
enforce our guidelines based on our community guidelines, so we 
are the ones making decisions about the content that is 
removed, based on the guidelines that we publish transparently, 
not based on what a particular administration asks us to do or 
not do.
    Senator Scott. So your answer is you have never received a 
misinformation warning issued by the FBI?
    Mr. Mohan. Senator, no. What I am saying is that we do 
receive information from the FBI in terms of imminent threats, 
foreign actors trying to interfere with our free and fair 
elections here in the United States, and we take that into 
account in terms of the enforcement of our policies. But those 
decisions about the enforcement of our policies are made solely 
based on our community guidelines that we publish on our 
website.
    Senator Scott. Mr. Cox, what I said about Mark Zuckerberg--
that he disclosed that Facebook's restriction of a story about 
Hunter Biden during the 2020 election was based on the FBI's 
misinformation warnings--is that untrue?
    Mr. Cox. Senator, I was not in conversations with the FBI 
so I cannot speak to exactly what the conversation was. What I 
do know is that we were in contact with a number of 
organizations who warned us in the time leading up to the 2020 
election, to be on the lookout for foreign interference in 
elections, and that is an issue that we take incredibly 
seriously.
    Senator Scott. But you should know if Mark Zuckerberg, 
right, would you not know----
    Mr. Cox. Sorry, Senator.
    Senator Scott [continuing]. Would you not know if Mark 
Zuckerberg--he said that Facebook made that decision. You would 
know that, right?
    Mr. Cox. Senator, I do know that, if you are talking about 
the New York Post story, I do know that consistent with our 
policies we made the decision to submit that story to be 
reviewed by independent fact-checkers. It was never removed 
from our service, and we never blocked anybody from sharing 
that story.
    Senator Scott. I know people in Florida that were kicked 
off for putting a story up.
    Last one. By a show of hands, how many of you and your 
companies have felt pressured to act upon an FBI misinformation 
warning you received, such as by removing, censoring, or 
restricting access to a particular user or post as a subject of 
that misinformation warning?
    So the answer is no for all of you.
    Mr. Cox. Not that I am aware of.
    Senator Scott. No for all of you. So every one of you said 
no. All right.
    Mr. Cox, in 2013, DOJ shut down Silk Road, the illegal 
online marketplace on the dark web which featured over 13,000 
illegal drug postings. In comparison, according to the 2020 
Facebook transparency report, Facebook found 5.9 million 
illegal drug sale postings on Facebook and Instagram. That is 
453 times more drug postings than Silk Road. If you found 
almost six million postings, how many posts are you not 
finding? If that is true, if Silk Road got shut down, what in 
the living daylights are you guys still doing in business?
    Mr. Cox. Senator, we release quarterly reports on the 
specific answers to your question around each category of bad 
content on the platform. We do not believe the sale of illegal 
drugs has a place on any of our platforms. We work hard to 
fight against that. We publish quarterly the updates on exactly 
how many pieces of content we take down, as well as how much we 
are able to take down proactively.
    Senator Scott. All right. I have one more question, and 
this has to do with what has happened with my sheriffs. Do you 
collect stats on the average turnaround time for responding and 
resolving customer complaints like hijacked accounts or 
products that violate your terms of service?
    Mr. Sullivan, you say yes. No one else does?
    Mr. Sullivan. Yes. We have a goal of meeting a service 
level agreement to turn those around as quickly as possible.
    Mr. Mohan. Senator, we do look at how quickly we respond to 
requests from our creators who upload content to our platform 
as well as viewers, and we are constantly looking to continue 
to improve our processes around that sort of request handling.
    Senator Scott. TikTok?
    Ms. Pappas. We do as well, yes.
    Senator Scott. And you do it also?
    Mr. Sullivan. Are you asking whether we look at turnaround 
times?
    Senator Scott. Yes.
    Mr. Sullivan. Yes we do, Senator.
    Senator Scott. OK. Do you collect stats on the average 
turnaround time for responding to subpoenas issued by law 
enforcement agencies?
    Is that a yes for everybody?
    Mr. Mohan. Yes, Senator. We have a group that works 24/7 to
evaluate and respond to subpoenas.
    Senator Scott. OK. I wrote to all of your companies and I 
asked, and none of you responded. I do not know if you all 
realize that. Every one of you, I wrote and asked for 
information. None of you responded to either of those 
questions.\1\
---------------------------------------------------------------------------
    \1\ The Google letter in response to Senator Scott appears in the 
Appendix on page 197.
---------------------------------------------------------------------------
    Let me just tell you what sheriffs are saying. One of the 
Florida Sheriff's Departments mentions, ``There is no point of 
contact to send subpoenas. They are slow to respond. There is 
no sense of urgency on how they respond to something that is 
even time sensitive.''
    I can tell you, I have talked to sheriffs all around
Florida, that even with time-sensitive information that would
impact a law enforcement investigation or a crime, you guys do not
respond. How do you respond to that?
    Mr. Sullivan. I work in the product organization. We would 
not be the ones to receive that, but I can have our team get 
back to you.
    Senator Scott. Anybody else?
    Mr. Mohan. Senator, I am happy to follow up. We do have a 
team that responds to those requests. We balance the needs of 
law enforcement as well as our user privacy when we are 
responding to those, as you would imagine. We do take into 
account time sensitivity in terms of trying to respond to those 
requests. I will ask our team to follow up with you as well.
    Senator Scott. The questions I asked before that I sent you 
all, that none of you responded to, you will respond to?
    Mr. Cox. Senator, I would be happy to have the team follow 
up. We take law enforcement requests very seriously.
    Senator Scott. OK. Thank you. Thank you, Chair.
    Chairman Peters. Thank you, Senator Scott.
    I have a couple of follow-up questions and we will wrap up 
and you will be on your way, and again, thank you so much for 
taking so much time. You have been in the seat a long time. I 
have been here with you a long time. I am ready to get out of 
the seat as well, with you.
    You have all spoken about how essential trust and safety 
is, how it is part of your culture and something that you are 
focused on. But I want to reference a report from the spring of
2021 that was commissioned by Twitter and disclosed by a
whistleblower, focused on the site integrity team and
misinformation. It
found that, ``Project managers are incentivized to ship 
products as quickly as possible, and thus are willing to accept 
security risk.''
    Mr. Sullivan, as head of Consumer Products at Twitter, 
would you agree with this finding in that report, commissioned 
by Twitter?
    Mr. Sullivan. The dates of that report would have been 
before I was in the role, but what I can tell you is that I 
have been in multiple product reviews where I push hard, and 
our other leaders push very hard, and work with our teams to 
strike a balance of safety in all that we do. I cannot speak to 
that report but what I can tell you is how we operate now, and 
that is how we operate now.
    Chairman Peters. Obviously this report has a completely 
different conclusion than you have. I just have to ask you, how 
can you say that trust and safety are important to your 
development process when Twitter launched its Spaces product,
despite your predecessor publicly stating that Twitter would 
not be able to moderate all of its Spaces?
    It is my understanding that since the launch it is 
documented that Spaces has been used by white supremacists as 
well as ISIS to spread misinformation, as shown in this poster. 
In fact, the internal report shown on this poster basically 
says, if I can read it here, ``We did not prioritize 
identifying and mitigating against health and safety risks 
before launching Spaces.'' Do you disagree with this 
characterization? I have heard you say on the record, ``We 
never, ever send anything out that we have concerns about.'' 
This is obviously very different.
    Mr. Sullivan. Yes, I understand what you are saying. Since 
I have started in my role, we have been looking at health and 
safety across the board and working to improve it.
    In Spaces, for example, we have been continuing to beef up 
all of our reporting, our automatic detection, our language 
support. We are working very hard to further improve the health 
of Spaces. I think that is just one example of many that I 
could give you for how we are operating now.
    Chairman Peters. After the fact. After some of these things 
are released.
    Mr. Cox, you also have talked quite a bit about trust and 
safety as being central to Meta's development process. My 
question to you is, why, after several years of warnings by 
external organizations such as the Tech Transparency Project, 
does Facebook continue to automatically generate home pages for 
white supremacists and other extremist and terrorist groups such
as ISIS, as shown in this poster right here for the Aryan
Brotherhood, a page that was created? I guess it was taken down
just recently, but it was on Facebook for 12 years, for 12 years,
the Aryan Brotherhood.
    Does not this feature allow extremist groups to basically 
recruit members more easily because you are putting this up?
    Mr. Cox. Senator, we believe there is no place for 
terrorism or violence-inciting networks, for militarized social 
movements. We believe there is no place for these on our 
platform. We use automated tools to find and take them down as 
well as teams of experts dedicated to these specific problems.
    Chairman Peters. Your automated tools and teams, they were 
successful after 12 years. They were able to bring it down 
after 12 years. Do you think that is an acceptable performance?
    Mr. Cox. Senator, I know that for 97 percent of terrorist 
content we are able to get to it before anybody reports it, and 
also that we have been able to improve that number, quarter over
quarter, and I will continue to make sure that we aspire to
improve it.
    Chairman Peters. But certainly any content that gets through
and is disseminated very broadly can have catastrophic
consequences and lead to violent actions, particularly with groups
like this, where pages that are being put up by your company are
there for 12 years. I hope you would agree that that is
unacceptable.
    Mr. Cox. Senator, respectfully, we would not have put this 
page up ourselves, but we do work hard to make sure that 
extremist and terrorist networks are not allowed----
    Chairman Peters. Yes, this is auto-generated. This was an 
auto-generated page.
    Mr. Cox. Senator, I have not seen this specific example.
    Chairman Peters. I would love to have you comment on this. 
If you could look at this example, and if we could have your
written comments on it, we would appreciate that afterwards.
    My final question, and then we are going to let you go, 
seriously. When your product teams are testing new products or 
features I know that you track engagement and growth, a pretty 
fundamental part of the work that you do.
    My question to you, and this is to each of you, do you 
consistently measure the impacts of these new features on 
societal harms like misinformation, disinformation, hate 
speech, and terrorism? If you could just give me a yes-or-no 
answer. We will start with you, Mr. Sullivan, and work that 
way. Yes or no, please.
    Mr. Sullivan. Yes, depending on the feature and which of 
those harms might apply, we go deep into those and analyze 
those, yes.
    Chairman Peters. So it is yes. If you could go a little
deeper, then, I am going to ask you, how do you characterize and
measure this data, and which metrics are used? If you can be very
specific, it would be very helpful for us to have that
information. I will ask all three of you to do the same, please.
    Mr. Sullivan. I will give you one topline metric we use for
many of these, which is what we call ``harmful impressions.'' For
the roughly 0.1 percent of tweets that turn out to be violative,
we want to limit the number of people who see them. Harmful
impressions is how many people may have seen something before we
identified it as violative. Those are some of the metrics that are
important to combating this harm on the platform. Thank you,
Senator.
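
    [For illustration, a minimal sketch of the ``harmful
impressions'' idea as described above. The names, data structures,
and figures below are hypothetical stand-ins, not Twitter's actual
systems:]

    # Hypothetical sketch: count impressions served on tweets that
    # later turned out to be violative, before they were caught.
    from dataclasses import dataclass

    @dataclass
    class Tweet:
        views_before_flagged: int  # impressions logged before detection
        violative: bool            # did the tweet violate policy?

    def harmful_impressions(tweets):
        # Total impressions on violative tweets prior to detection.
        return sum(t.views_before_flagged for t in tweets if t.violative)

    # Example: one violative tweet out of three, seen 40 times
    # before removal.
    sample = [Tweet(100, False), Tweet(40, True), Tweet(250, False)]
    print(harmful_impressions(sample))  # 40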
    Chairman Peters. Thank you. Ms. Pappas.
    Ms. Pappas. Similarly, the way we measure it is by looking at
our community guidelines violation rate. Essentially, we take a
view-weighted sample of our corpus of videos, and then we look at
whether there was any violative content, and then we look at how
we minimize that exposure and drive it down to zero.
    As I mentioned earlier, we look at that on a per-policy basis,
so things like hate speech, violent extremism, and mis- and
disinformation, and we are able to measure our improvement on
that, quarter over quarter, week over week. We look at those
metrics and reports, and we do so with regard to our features,
like our For You feed.
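
    [Again for illustration only, a view-weighted violation rate
of the kind just described might be estimated as below; the
sampling scheme, names, and data are assumptions, not TikTok's
actual methodology:]

    # Hypothetical sketch: sample videos with probability
    # proportional to views, label each sampled video, and report
    # the share found violative.
    import random

    def view_weighted_violation_rate(videos, is_violative,
                                     sample_size=1000):
        # videos: list of (video_id, view_count) pairs.
        ids = [vid for vid, _ in videos]
        weights = [views for _, views in videos]
        sampled = random.choices(ids, weights=weights, k=sample_size)
        return sum(is_violative(vid) for vid in sampled) / sample_size

    # Example: video "b" violates policy and draws 1 percent of all
    # views, so the estimate should land near 0.01.
    catalog = [("a", 9_900), ("b", 100)]
    print(view_weighted_violation_rate(catalog, lambda vid: vid == "b"))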
    Chairman Peters. You do this before every launch, this is 
the study you will do?
    Ms. Pappas. We would not have the metrics before launch, 
but in terms of setting our baselines and knowing----
    Chairman Peters. Right. Mr. Mohan.
    Mr. Mohan. Senator, I can say very clearly that our 
responsibility as a global platform comes before any of our 
growth metrics around engagement, revenue, et cetera. It is the 
top line of the company, and we are constantly reviewing our 
products.
    Chairman Peters. How do you characterize the data?
    Mr. Mohan. When we are reviewing our products on a constant 
basis one of the core metrics that I look at, and that the rest 
of the company looks at, is something called our ``violative 
view rate (VVR).'' That is the metric that we have also started 
to publish on a quarterly basis so that you have access to it. 
In fact, our most recent transparency report was just published 
a couple of weeks ago. The violative view rate is basically a 
metric that calculates how much content is up on our platform 
that would have violated any of our policies, across hate 
speech, harassment, et cetera. That number is something on the 
order of 9 to 11 impressions out of 10,000. It is a small 
number that we aim every single quarter to continue to drive 
down.
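
    [As a worked example of the arithmetic: 9 to 11 violative
impressions out of 10,000 corresponds to a VVR of roughly 0.09 to
0.11 percent. A minimal, hypothetical calculation:]

    # Hypothetical sketch of a violative view rate (VVR): the share
    # of all views that land on policy-violating content. The
    # figures simply restate the ``9 to 11 out of 10,000'' above.
    def violative_view_rate(violative_views, total_views):
        return violative_views / total_views

    for v in (9, 11):
        print(f"{violative_view_rate(v, 10_000):.2%}")  # 0.09%, 0.11%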
    Chairman Peters. I want to be clear. You do this with your 
A/B testing (split testing). This is testing that you do before
you launch a product?
    Mr. Mohan. We measure this metric on a constant basis.
    Chairman Peters. But do you do it before you launch a 
project?
    Mr. Mohan. We would not have the metric----
    Chairman Peters. You do testing. You do not launch without
doing some testing. You do A/B testing. I have not heard about A/B
testing here from you today. You do not test the product before
you launch? You just say, ``Hey, let's launch and see what
happens''? I do not think you do that. Is that what you do? You
launch products without testing whether or not they make sense?
    Mr. Mohan. Senator, I did not say that.
    Chairman Peters. OK.
    Mr. Mohan. We test our products extensively before 
launching, in terms of usability of the products, but also the 
trust and safety and the impact those products will have on our 
users.
    Chairman Peters. OK. You do those tests during the testing.
    Mr. Mohan. Correct.
    Chairman Peters. Ms. Pappas, you said afterwards. Do you do
that A/B testing before you launch, and do you also test safety
and trust? Because you are testing your product before you send it
out--unless you do not test it before you send it out to the
world.
    Ms. Pappas. We do do testing before launch, and we will 
delay products, or rather features, if they do not meet our 
safety standards.
    Chairman Peters. Mr. Cox.
    Mr. Cox. Senator, to your question, the primary way we
measure and understand this is prevalence, and we publish reports
on the specific categories of content that violate our policies,
as well as across regions around the world.
    Chairman Peters. You do that in your A/B testing before you
launch?
    Mr. Cox. Senator, for many of those metrics you need a 
specific study in order to understand that metric, but we look 
at lots of other metrics associated----
    Chairman Peters. Does that mean you do not do it before you 
launch?
    Mr. Cox. What we would do before the launch of any product 
where we had any reason to be concerned about safety is put it 
through a review with our integrity teams, whose job is to 
understand safety concerns. We would not launch a product if we 
believed that there was a safety issue.
    Chairman Peters. You have referenced several times the 
statistic that hate speech on your platform represents 0.02 
percent of all views. Is that accurate?
    Mr. Cox. That is correct.
    Chairman Peters. Certainly that sounds like a small number. I
can appreciate that. But you also have a lot of views. You
are a massive platform. I am concerned that this could mask the 
total amount of hate speech that could be out there and viewed 
by an awful lot of folks. My question is, what is the total 
number of views that hate speech actually gets on your 
platform, not a percentage, but how many views, last year, for 
example, or last month, yesterday, whatever you may have? Do 
you have those numbers?
    Mr. Cox. I do not have those numbers right now, Senator, 
but I would be happy to have our teams follow up.
    Chairman Peters. Would you provide those numbers to the 
Committee as to the total number of views, not as a percentage?
    Mr. Cox. Yes, I would be happy----
    Chairman Peters. You can do the math. You have a massive
amount of views on your platform.
    Mr. Cox. Yes, Senator. I would be happy to have our teams 
follow up on that.
    Chairman Peters. I appreciate it. Thank you.
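
    [The Chairman's point here is arithmetic: a prevalence
percentage becomes an absolute count only once total view volume
is known. An illustration with a made-up total, not Meta data:]

    # Hypothetical arithmetic: converting a 0.02 percent prevalence
    # rate into an absolute view count requires the platform's total
    # view volume, which was not provided at the hearing.
    prevalence = 0.0002            # 0.02 percent of views
    total_views = 1_000_000_000    # made-up placeholder volume
    print(int(prevalence * total_views))  # 200,000 hate-speech views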
    Thank you. I want to thank again our witnesses for joining 
us today. I am certainly grateful for your contributions to 
what is a very serious and a very important discussion, and I 
want to certainly thank Ranking Member Portman for holding this 
hearing with me here today.
    I think today's hearing shed some new light on the serious
problems of rising domestic extremism and violence and their
relationship to amplified content on platforms.
    We heard from our first panel earlier today about how user 
engagement and revenue generation are the primary incentives 
that drive product development and decisionmaking at your 
companies, and that the overall goals of growth and profit are 
always prioritized over the safety of users. That tradeoff, 
revenue over safety, has contributed to, unfortunately, some 
real-world harms, from horrific attacks and acts of violence
motivated by extreme ideologies to challenges to our fundamental
democratic process.
    I will be honest. I am frustrated that the Chief Product
Officers, all of whom have a prominent seat at the table when
these business decisions are made, were not more prepared to speak
to specifics about your product development processes, even when
you were specifically asked to bring specific numbers to us here
today, and that your companies continue to avoid sharing some very
important information with us. We have been working on this for
quite some time and continue to be frustrated at the slow
response, or no response, that we receive from you.
    The testimony we heard today, from experts and former
executives as well as from the four of you, has made clear that
the important work of the current trust and safety teams is simply
not enough to address the problem. This problem continues to be
with us today.
    Although we heard plenty of testimony about your companies'
content moderation policies, what content gets removed and why,
and even how much you spend on safety measures, it is clear
that those actions cannot effectively address this problem as 
long as the product development process and the revenue-based 
incentives do not change to make safety a higher priority in 
those structures.
    We need to continue this important conversation. This will be
the first of, I am sure, many conversations, where we discuss
possible regulatory measures and changes to the incentive
structures within your companies, to build better practices and to
limit the spread of harmful and extreme content before it actually
spreads to users. Certainly we appreciate actions that are taken
after the fact, but at that point much harm may already have been
released into society, with potentially catastrophic consequences.
We all want to be ahead of the problem, not reacting to a problem
that already exists in our society.
    As Chairman of this Committee I will continue to work 
alongside Ranking Member Portman and Members of the Committee 
to find effective solutions to this growing homeland security 
threat. I certainly hope that each and every one of you will be 
part of that process to find that solution. We all need to be 
working together on this. It is very clear, the more we talk 
about this issue, the more we realize how complex it is, and it 
is going to take all of us putting our heads together and 
figuring out a path forward.
    The record for this hearing will remain open for 15 days, 
until 5 p.m. on September 29, 2022, for the submission of 
statements and questions for the record.
    With that, this hearing is now adjourned.
    [Whereupon, at 5:25 p.m., the hearing was adjourned.]

                            A P P E N D I X

                              ----------                              

[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]