[Senate Hearing 117-514]
[From the U.S. Government Publishing Office]


                                                        S. Hrg. 117-514

                         SOCIAL MEDIA PLATFORMS
                    AND THE AMPLIFICATION OF DOMESTIC 
                    EXTREMISM AND OTHER HARMFUL CONTENT

=======================================================================

                                HEARING

                               BEFORE THE

                              COMMITTEE ON
               HOMELAND SECURITY AND GOVERNMENTAL AFFAIRS
                          UNITED STATES SENATE

                    ONE HUNDRED SEVENTEENTH CONGRESS


                             FIRST SESSION

                               __________

                            OCTOBER 28, 2021

                               __________

        Available via the World Wide Web: http://www.govinfo.gov

                       Printed for the use of the
        Committee on Homeland Security and Governmental Affairs
        
[GRAPHIC NOT AVAIABLE IN TIFF FORMAT]


                    U.S. GOVERNMENT PUBLISHING OFFICE                    
47-981 PDF                WASHINGTON : 2022                     
          
----------------------------------------------------------------------------------- 

        COMMITTEE ON HOMELAND SECURITY AND GOVERNMENTAL AFFAIRS

                   GARY C. PETERS, Michigan, Chairman
THOMAS R. CARPER, Delaware           ROB PORTMAN, Ohio
MAGGIE HASSAN, New Hampshire         RON JOHNSON, Wisconsin
KYRSTEN SINEMA, Arizona              RAND PAUL, Kentucky
JACKY ROSEN, Nevada                  JAMES LANKFORD, Oklahoma
ALEX PADILLA, California             MITT ROMNEY, Utah
JON OSSOFF, Georgia                  RICK SCOTT, Florida
                                     JOSH HAWLEY, Missouri

                   David M. Weinberg, Staff Director
                    Zachary I. Schram, Chief Counsel
          Christoper J. Milkins, Director of Homeland Security
             Moran Banai, Senior Professional Staff Member
                  Kelsey N. Smith, Research Assistant
                Pamela Thiessen, Minority Staff Director
    Andrew Dockham, Minority Chief Counsel and Deputy Staff Director
       Kirsten D. Madison, Minority Director of Homeland Security
       Maggie Frankel, Minority Senior Professional Staff Member
          Sam J. Mulopulos, Minority Professional Staff Member
                     Laura W. Kilbride, Chief Clerk
                     Thomas J. Spino, Hearing Clerk

                            C O N T E N T S

                                 ------                                
Opening statements:
                                                                   Page
    Senator Peters...............................................     1
    Senator Portman..............................................     2
    Senator Hassan...............................................    19
    Senator Johnson..............................................    21
    Senator Ossoff...............................................    24
    Senator Rosen................................................    26
    Senator Lankford.............................................    28
    Senator Romney...............................................    31
    Senator Padilla..............................................    34
Prepared statements:
    Senator Peters...............................................    39

                               WITNESSES
                       Thursday, October 28, 2021

Hon. Karen Kornbluh, Director, Digital Innovation and Democracy 
  Initiative, and Senior Fellow, The German Marshall Fund of the 
  United States..................................................     4
David L. Sifry, Vice President, Center for Technology and 
  Society, Anti-Defamation League................................     7
Cathy O'Neil, Ph.D., Chief Executive Officer, O'Neil Risk 
  Consulting and Algorithmic Auditing............................     8
Nathaniel Persily, Ph.D., Co-Director, Stanford Cyber Policy 
  center, and James B. McClatchy Professor of Law, Stanford Law 
  Center.........................................................    10
Mary Anne Franks, D.Phil., Professor of Law and Michael R. Klein 
  Distinguished Scholar Chair, University of Miami...............    13

                     Alphabetical List of Witnesses

Franks, Mary Anne D. Phil.:
    Testimony....................................................    13
    Prepared statement...........................................   104
Kornbluh, Hon. Karen:
    Testimony....................................................     4
    Prepared statement...........................................    41
O'Neil, Cathy Ph.D.:
    Testimony....................................................     8
    Prepared statement...........................................    75
Persily, Nathaniel Ph.D.:
    Testimony....................................................    10
    Prepared statement...........................................    80
Sifry, David L.:
    Testimony....................................................     7
    Prepared statement...........................................    46

                                APPENDIX

Southern Poverty Law Center statement submitted for the Record...   115
Responses to post-hearing questions for the Record:
    Ms. Kornbluh.................................................   123
    Mr. Sifry....................................................   125
    Ms. O'Neil...................................................   130
    Mr. Persily..................................................   131
    Ms. Franks...................................................   151

 
                         SOCIAL MEDIA PLATFORMS
 AND THE AMPLIFICATION OF DOMESTIC EXTREMISM AND OTHER HARMFUL CONTENT

                              ----------                              


                       THURSDAY, OCTOBER 28, 2021

                                     U.S. Senate,  
                           Committee on Homeland Security  
                                  and Governmental Affairs,
                                                    Washington, DC.
    The Committee met, pursuant to notice, at 10:15 a.m., via 
Webex and in room 342, Dirksen Senate Office Building, Hon. 
Gary C. Peters, Chairman of the Committee, presiding.
    Present: Senators Peters, Hassan, Sinema, Rosen, Padilla, 
Ossoff, Portman, Johnson, Lankford, Romney, Scott, and Hawley.

            OPENING STATEMENT OF CHAIRMAN PETERS\1\

    Chairman Peters. The Committee will come to order. I would 
like to thank our witnesses for joining us here today.
---------------------------------------------------------------------------
    \1\ The prepared statement of Senator Peters appear in the Appendix 
on page 39.
---------------------------------------------------------------------------
    Our Committee has held several hearings this year, 
examining the rise of domestic terrorism, and today's hearing 
is going to focus on the role of social media platforms and the 
role that they play in the amplification of domestic extremist 
content and how that content can translate into, unfortunately, 
real-world violence.
    Yesterday marked three years since a white supremacist 
gunman opened fire in a Pittsburgh synagogue, killing 11 
innocent people in the deadliest attack on the Jewish community 
in the United States. The attacker used the fringe social media 
platform Gab prior to the attack to connect with like-minded 
extremists and spread his own hateful and anti-Semitic views 
online. While bad, violent, and hateful ideology has long 
terrorized communities for many Americans, the shocking attack 
was the first glimpse at how quickly increased exposure to 
extremist content and users with similar beliefs can radicalize 
domestic terrorists and drive them to act on their violent 
intentions.
    Less than a year after the Tree of Life attack, we saw a 
white nationalist open fire in an El Paso shopping center. This 
attacker was one of many who viewed video of the Christchurch 
mosque massacres that widely circulated on social media just a 
few months earlier, a video he reportedly cited as inspiration 
for his deadly attack in a 2,300-word racist manifesto that he 
also posted online.
    On January 6, 2021, we saw a stark example of how 
individuals went beyond seeing and sharing extreme content 
across numerous social media platforms. They were spurred to 
action by what they repeatedly saw online, and ultimately, a 
mob violently attacked Capitol Police and breached the Capitol 
Building.
    In attack after attack, there are signs that social media 
platforms played a role in exposing people to increasingly 
extreme content and even amplifying dangerous content to even 
more users. Yet, there are still many unanswered questions 
about what role social media platforms play in amplifying 
extremist content. We need a better understanding of the 
algorithms that drive what users see on social media platforms, 
how companies target ads, and how these companies balance 
content moderation with generating revenue.
    For the majority of social media users who want to connect 
with distant family and friends or stay up to date on their 
favorite topics, there is very little transparency about why 
they see the content, recommendations or ads that populate 
their feeds. While social media companies have promoted how 
they are providing more curated content for their users, we 
have seen how users can be shown increasingly polarizing 
content. In worst case scenarios, users are reportedly 
recommending more and more extreme content, nudging them down a 
dark and dangerous ``rabbit hole.''
    Recent reporting and congressional testimony and 
revelations in the Facebook Papers have shed some light on 
business models that appear to have prioritized profits over 
safety and decisions that appear to disregard the platforms' 
effect on our homeland security. It is simply not enough for 
companies to pledge that they will get tougher on harmful 
content. Those pledges have gone largely unfulfilled for 
several years now. Americans deserve answers on how the 
platforms themselves are designed to funnel specific content to 
certain users and how that might distort users' views and shape 
their behavior, both online and offline.
    As part of my efforts to investigate rising domestic 
terrorism, including the January 6th attack, I have requested 
information from major social media companies about their 
practices and policies to address extremist content so that we 
can better understand how they are working to tackle this 
serious threat. While we are continuing to work with companies 
to get answers and examine relevant data, I am looking forward 
to hearing from our experts today about how these platforms 
balance safety and business decisions and, specifically, how 
these decisions relate to rising domestic extremism.
    Ranking Member Portman, you are welcome to start with your 
opening remarks.

              OPENING STATEMENT OF SENATOR PORTMAN

    Senator Portman. Thank you, Mr. Chairman. I appreciate your 
holding this hearing. It is a very important topic. I look 
forward to hearing from our experts today and then, I think in 
a future hearing, hearing from some of the companies 
themselves.
    The role that social media plays in directing content that 
can lead to online and offline harm has taken on new 
significance in the past several weeks as we have learned more 
about this from whistleblowers and others, and news has emerged 
about malfeasance by some of the largest internet platforms. 
This exploitation of social media, of course, is not new. In 
2016, as Chair of the Permanent Subcommittee on Investigations 
(PSI) of this Committee, I held a hearing which examined the 
Islamic State of Iraq and Syria (ISISs), use of online 
platforms in furtherance of their violent goals. We learned 
from testimony that social media accelerates, in this case, 
ISIS's radicalization and recruitment efforts and can also 
speed up their mobilization to violence.
    Today, foreign terrorist actors continue to try to 
weaponize social media to inspire radicalization and attacks 
against Americans and American interests. This use of social 
media for nefarious purposes is not limited to terrorists, of 
course. Drug traffickers, foreign adversaries, and a host of 
other threat actors exploit online platforms, particularly 
social media. China and Russia use these platforms to conduct 
influence campaigns targeting Americans, including being 
involved in our elections. Drug cartels and gangs use these 
platforms to traffic narcotics. Traffickers use these platforms 
to exploit children and other vulnerable people, and domestic 
violent extremists (DVE) across the ideological spectrum use 
social media to spread propaganda and recruit members to their 
cause. So it is a broad problem.
    Social media platforms acknowledge that the threats exist, 
and they talk about what they are doing to prevent bad actors 
from exploiting their sites, including artificial intelligence 
(AI) to help moderate content, networks of cross-industry 
partnerships, and on-staff experts--some of you may have been 
in that position--all to prevent or remove dangerous content. 
However, there is still a persistent threat of harmful content 
despite what the platforms say they are doing.
    These social media companies are businesses, so it is not 
surprising that Congress and others have trouble getting more 
information on their algorithms and how they operate, how they 
are designed to amplify content. That is proprietary 
information; I understand that. But Congress has heard from 
whistleblowers, like Frances Haugen recently, that Facebook has 
not addressed troubling aspects of its algorithms, which have 
promoted of variety of concerning and alarming posts.
    This raises important questions about whether or not it is 
time to revisit the immunity provided by Section 230. In 2017, 
during my time as PSI Chair, I introduced legislation which 
would remove Section 230 immunity from platforms that knowingly 
facilitated sex trafficking. So we have dealt with this issue. 
That legislation, called the Stop Enabling Sex Traffickers Act 
(SESTA), was actually signed into law back in 2018. We have 
figured out how to deal successfully with Section 230 at least 
in this one narrow area but very important area.
    I take advocates, researchers, and even platforms at their 
word when they call for regulation. A regulation can take many 
forms which puts a premium on having sound information and 
analysis as we consider legislation to solve these problems. In 
other words, we need to know more. We need to be able to look 
under the hood and figure out what the issues are to be able to 
regulate properly.
    These experts are in front of us here today to help 
evaluate the extent of the problem and also discuss what we 
should be doing about it. So far, we have found out a lot about 
social media business models from third-party researchers and 
from whistleblowers. The findings are largely based on 
anecdotal evidence. If they want to ensure that Congress 
pursues evidence-based policy solutions, I think it is 
incumbent upon the platforms to provide quality data.
    I am already working with Senator Coons, who is Chair of 
the Senate Judiciary Subcommittee on Privacy, on legislation 
that would require the largest tech platforms to share data 
with legitimate researchers and scholars so that we can all 
work together on solutions to these problems that all of us 
have identified. Dr. Persily has been an important partner in 
this work. I look forward to his testimony today.
    Importantly, as we look at these issues, we must take care 
that our efforts hold these platforms accountable, but it is 
done in a manner that balances First Amendment protections, 
which I understand Professor Franks is going to discuss in her 
testimony.
    Mr. Chairman, again thanks for having this hearing, and I 
look forward to hearing from our witnesses, and I thank you for 
giving us a chance to have real experts in front of us.
    Chairman Peters. Thank you, Ranking Member Portman.
    It is the practice of this Committee to swear in witnesses, 
so if each of you will please stand and raise your right hand, 
including those who are joining us by video.
    Do you swear the testimony you will give before this 
Committee will be the truth, the whole truth, and nothing but 
the truth, so help you, God?
    Ms. Kornbluh. Yes.
    Mr. Sifry. Yes.
    Ms. O'Neil. Yes.
    Mr. Persily. Yes.
    Ms. Franks. Yes.
    Chairman Peters. You may be seated.
    Our first witness today is the Honorable Karen Kornbluh, 
former Ambassador to the Organization for Economic Cooperation 
and Development (OECD) and who currently serves as a senior 
fellow at the German Marshall Fund of the United States, where 
she leads its Digital Innovation and Democracy Initiative to 
ensure technology supports democracies across the globe. Prior 
to her role with the German Marshall Fund (GMF), Ms. Kornbluh 
served in previous administrations as Chief of Staff (CoS) at 
the U.S. Treasury Department and of the Office of Legislative 
and Intergovernmental Affairs at the Federal Communication 
Commission (FCC), where she negotiated early internet policies.
    Welcome to the Committee. You are now recognized for your 
5-minute opening remarks.

TESTIMONY OF THE HONORABLE KAREN KORNBLUH,\1\ DIRECTOR, DIGITAL 
  INNOVATION AND DEMOCRACY INITIATIVE, AND SENIOR FELLOW, THE 
           GERMAN MARSHALL FUND OF THE UNITED STATES

    Ms. Kornbluh. Thank you, Chairman Peters, Ranking Member 
Portman, and Committee Members for the opportunity to testify 
on this critical issue.
---------------------------------------------------------------------------
    \1\ The prepared statement of Ms. Kornbluh appears in the Appendix 
on page 41.
---------------------------------------------------------------------------
    To underscore the points that you both made, the National 
Strategy for Countering Domestic Terror States clearly that the 
widespread availability of domestic terrorist recruitment 
material online is a national security threat. I would like to 
stress three points today. First, the design of the platform 
and its algorithms can promote radicalization. Second, this 
cannot be addressed by after-the-fact, whack-a-mole content 
moderation. Third, we need to change the platforms' incentives 
so that they fix these dangerous design elements.
    As part of a test, Facebook researchers created an account 
for a fictional Carol Smith, a 41-year-old conservative mother 
from North Carolina. Within days, Carol was recommended pages 
related to QAnon, and within only three weeks the platform 
showed her an account associated with the militia group, Three 
Percenters. She did not ask to be shown this content, she had 
no idea why she got it, and she had no idea who was paying for 
it.
    Facebook groups can be manipulative as well. Internal 
research found that a full 70 percent of Facebook political 
groups in the United States were rife with hate, bullying, 
harassment, and misinformation. Facebook's own algorithms 
recommend extremist groups to users. For instance, Facebook 
directs users who like certain militia pages toward other 
militia groups.
    The platforms also provide tools that help extremists to 
organize. So-called ``super inviters'' can create invitation 
links to groups that can be shared on or off Facebook. The 
platform provides inviters recommendations of specific friends 
to invite, allowing them to recruit from other conspiracy and 
militia groups. As an example, Stop the Steal inviters sent 
these kinds of automated invitations to members of other 
groups, resulting in high membership overlap with Proud Boy and 
militia groups and fueling Stop the Steal group's meteoric 
growth rates.
    This is a national security vulnerability. It was recently 
revealed that 140 million Americans were targeted by troll 
farms operating out of Eastern Europe.
    Similar algorithm radicalization is evidence on other 
platforms. TikTok's algorithm, for instance, also promotes 
content from QAnon, the Patriot Party, Oath Keepers, and Three 
Percenters.
    YouTube has 290 extremist channels. When researchers showed 
an interest in militant movements, YouTube suggested videos to 
them with titles like ``Five Steps to Organizing a Successful 
Militia.'' The platform also recommended videos about weapons, 
ammunition, and tactical gear.
    Extremists find it all too easy to work across platforms. 
Diehards can organize on less moderated platforms, like 4Chan 
or Telegram, and then may retail the fringe content on more 
mainstreams with just a few clicks.
    Second, the whack-a-mole approach to catching content after 
it has gone viral cannot work. Facebook employees themselves 
admitted this problem. They said that the mechanics of the 
platform were behind the hateful speech and misinformation, but 
most of their ideas for changing these in order to limit 
algorithm radicalizations were rejected. The content moderation 
system cannot win against huge volumes of algorithmic 
recommendation, but that system is further undermined by 
exempting many users with large footprints. No wonder that at 
Facebook the researchers said they catch only 0.6 percent of 
content that depicts violence or could incite serious violence.
    Third, it is critical to change the platforms' incentives. 
While Congress works on more comprehensive legislation 
regarding privacy and antitrust, a digital code of conduct 
could help tackle algorithmic radicalization while protecting 
free expression. Congress or the Federal Trade Commission (FTC) 
could demand platforms commit to common-sense design changes 
and transparency, and the FCC would enforce the companies' 
commitments.
    Platforms should implement the kinds of design changes that 
research has already shown would enable more consistent 
application of their own terms of service. For example, a 
circuit breaker, like those used by Wall Street to prevent 
market crises, could prevent the viral spread of sensitive 
content in topics areas with high harm potential while human 
reviewers have time to determine whether or not it violates 
platform policies.
    Second, platforms should commit to transparent third-party 
audits, the equivalent of a black box flight data recorder, 
like the National Transportation Safety Board (NTSB) gets when 
a plane goes down or the data available to the Food and Drug 
Administration (FDA) or the Environmental Protection Agency 
(EPA), which should not need a whistleblower to access data.
    Third, the FTC should enforce commitments to this code 
under its Section 5 consumer protection authority. Of course, 
Section 230 reform, as contemplated in a number of current 
bills, would also allow users to sue in cases of terrorism or 
serious harm. Or, they could require a code as a condition of 
limited liability, but this would require legislation.
    Mr. Chairman, the whistleblowers documents are a look in 
the rearview mirror, but Web 3.0 is being built today. It is 
essential that we act now to set sensible rules of the road. I 
thank you for holding this hearing.
    Chairman Peters. Thank you, Ms. Kornbluh.
    Our next witness is Dr. David Sifry. He is the Vice 
President of the Center for Technology and Society (CTS) at the 
Anti Defamation League (ADL). Mr. Sifry leads a team of 
innovative technologists, researchers, and policy experts 
developing proactive solutions and producing cutting-edge 
research to protect vulnerable populations. Additionally, Mr. 
Sifry is an advisor and a mentor for companies and was selected 
as a technology pioneer at the World Economic Forum (WEF). He 
joined the ADL after a career as a technology entrepreneur and 
as an executive at companies including Lyft and Reddit. Mr. 
Sifry is also an advisor and mentor for companies and was 
selected as a technology pioneer at the World Economic Forum.
    Welcome, Mr. Sifry. You are recognized for your opening 
comments.

  TESTIMONY OF DAVID L. SIFRY,\1\ VICE PRESIDENT, CENTER FOR 
         TECHNOLOGY AND SOCIETY, ANTI-DEFAMATION LEAGUE

    Mr. Sifry. Mr. Chairman, Ranking Member Portman, Members of 
the Committee, good morning. It is an honor to appear before 
you today to address the ways social media platforms amplify 
hate and foment domestic terrorism.
---------------------------------------------------------------------------
    \1\ The prepared statement of Mr. Sifry appears in the Appendix on 
page 46.
---------------------------------------------------------------------------
    For over a century, ADL has been a leading voice in 
fighting hate in all forms, and we have been tracking online 
hate since the days of dialup. In 2017, ADL launched our Center 
for Technology and Society to respond to the threat of online 
hate. My team advocates for targets of online hate and 
harassment. We deeply engage with, and call out, tech platforms 
to hold them accountable for their actions and their deliberate 
inaction.
    Before joining ADL, I spent my career as an entrepreneur 
and executive in the tech sector. A trained computer scientist, 
I founded six technology companies and served as an Executive 
at Lyft and Reddit. I have been on the inside, and I know 
firsthand how big tech companies work and how business 
incentives drive product, policy, and strategy.
    These platforms maximize profits by providing hyper 
targeted ads to an audience that spends large parts of their 
life online. Core product mechanics like virality and 
recommendations are built around keeping you, your friends, and 
your family engaged. The problem is that misinformation, hate 
filled and polarizing content is highly engaging. So algorithms 
promote that content.
    As ADL's own research has long suggested and Facebook leaks 
confirm, these platforms exploit people's proclivity to 
interact more with incendiary content, and tech companies do so 
with full knowledge of the harms that result. Ultimately, these 
companies neglect our safety and security because it is good 
for the bottom line.
    With no accountability, no regulation, and no incentives 
beyond growth and increasing ad revenue, extremists find a 
haven to reach, recruit, and radicalize. Platform algorithms 
take advantage of these behaviors, especially to our attraction 
to controversial and extremist narratives. As a result, some 
users get trapped in a rabbit hole of toxic content, pushing 
them toward extremism. This has deadly consequences. ADL 
reports show that extremists on mainstream platforms push 
people into fringe communities that further normalize hate and 
violence. Extremist ecosystems inspire individuals to commit 
acts of domestic terrorism as we saw in Charlottesville, Poway, 
and El Paso.
    Senators, three years ago yesterday, in what was the most 
lethal anti-Semitic attack in American history, 11 congregants 
were massacred at the Tree of Life Synagogue in Pittsburgh. 
Before he attacked, the terrorist posted his anti-Semitic 
manifesto, which then spread online and was expressly cited as 
inspiration by the Poway and El Paso shooters. How many lives 
will be lost before big tech puts people over profit?
    The leaked Facebook documents revealed that company 
researchers flagged Facebook's key role in spreading conspiracy 
theories, inciting extremist violence, and contributing to the 
events of January 6th. Company executives were fully aware of 
the problem and chose not to act. Self-regulation is clearly 
not working. These billion and trillion-dollar companies have 
the resources to improve systems, hire additional staff, and 
provide real transparency. Yet, they claim it is too 
burdensome. Without regulation and reform, they will continue 
to focus on generating record profits at the expense of our 
safety and the security of our republic.
    The leaked Facebook documents, January 6th, and rising 
domestic terrorism all confirm what ADL has been stating for 
years; social media is a super spreader of the virus of online 
extremism. It is time for a whole-of-government, whole of 
industry, whole-of-society approach to fighting online hate. 
ADL built the PROTECT Plan to address the rise in domestic 
extremism and our REPAIR Plan to push back hate to the fringes 
of the digital world.
    Congress must establish an independent resource center to 
track online extremists and make appropriate referrals, 
regulate platforms including through targeted Section 230 
reform, ensure academic researchers access to data, and require 
regular and meaningful transparency records and independent 
third-party audits.
    It is well past time to hold social media platforms 
accountable. Thank you for your leadership in working to bring 
an end to this cycle of hate.
    Chairman Peters. Thank you, Mr. Sifry.
    Our next witness is Dr. Cathy O'Neil, the Chief Executive 
Officer (CEO) of O'Neil Risk Consulting and Algorithmic 
Auditing (ORCAA), an algorithmic auditing company that helps 
companies and organizations manage and audit algorithmic risk. 
As an independent data science consultant, Dr. O'Neil works for 
clients to audit the use of particular algorithms in context, 
identifying issues of fairness, bias, and discrimination, and 
recommending steps for remediation. Dr. O'Neil earned a Ph.D. 
in math from Harvard, was a post-doc at MIT Math Department and 
a professor at Barnard College. She has authored the books, 
Doing Data Science and Weapons of Math Destruction: How Big 
Data Increases Inequality and Threatens Democracy.
    Welcome, Dr. O'Neil. You may proceed with your opening 
comments.

 TESTIMONY OF CATHY O'NEIL, PH.D.,\1\ CHIEF EXECUTIVE OFFICER, 
        O'NEIL RISK CONSULTING AND ALGORITHMIC AUDITING

    Ms. O'Neil. Thank you so much for having me today. I am in 
the lucky position of just being sort of a background expert 
here. I am going to try to explain three things: first of all, 
what is an algorithm; second of all, what is a recommendation 
algorithm; and third, what is a filtering algorithm because 
those are the two types of algorithms that you see the most on 
social media.
---------------------------------------------------------------------------
    \1\ The prepared statement of Ms. O'Neil appears in the Appendix on 
page 75.
---------------------------------------------------------------------------
    But I am just going to start with what is an algorithm. I 
am going to do the opposite of what most people who talk about 
AI and big data will do. I am not going to try to make it 
complicated; I am going to try to explain it in simple terms 
because it is quite simple. It is predicting the future, 
predicting what will be successful in the future based on 
patterns in the past, what was successful in the past.
    It does not even have to be formal. It could be something 
you do in your head. Like for example, I cook dinner for my 
children every day. I look for patterns in the past. That is 
historical data. Well, historical data is just my memories. I 
know what my kid eats. He only eats carrots but not broccoli. 
That kid will eat raw carrots but not cooked carrots. I have a 
lot of information, and I can predict what will be a successful 
dinner.
    But here is the thing that is really important. Besides the 
historical data I am talking about is the definition of 
success. I have to be very precise when I make an algorithm, 
and I have to say exactly what I mean by success. In this case 
of making dinner for my kids, I am going to define success as 
my kids eat vegetables.
    The reason it is so important how you define success is 
because you actually optimize to success. So I am going to make 
meals that are likely to be successful. Time after time, I am 
going to learn from my past mistakes or successes, and I am 
going to make meals that will be successful in the future.
    Now I just want to make the point that a different choice 
of success changes everything. If my son were in charge--he is 
a fan of Nutella and not so much of vegetables--then we would 
have very different meals. We would be optimizing to Nutella 
rather than optimizing to vegetables. So just imagine what kind 
of meal that looks like. It is completely different.
    I want to make the point that algorithms depend a lot on 
patterns in the past, but they depend even more on what you 
define as success.
    I will make a last point about algorithms just in general, 
which is like whoever is in power, whoever owns the code, 
typically is the one that defines success. I would say, to the 
points I have already heard, success for social media platforms 
is about money. They are always going to optimize to money 
which is, of course, ads, ad clicks.
    Now we are going to talk about recommendation algorithms, 
which is how social media decides what content to show you or 
what groups to offer you, membership for.
    I want you to think about this. I want you to think about 
your behavior on these platforms, it is a sort of scoring 
system, like you have scores in multiple dimensions.
    Actually, let's start not with social media platforms, but 
let's start with like Netflix. Let's say you watch Diehard, the 
movie, twice a week, every week. Then you are going to be 
scored in multiple ways by the platform, and for example, you 
would be scored along the lines of: Do you like male characters 
in your movies or female characters in your movie? Do you like 
violent movies or nonviolent movies? Do you like suspenseful 
movies or nonsuspenseful movies?
    If you watch Diehard twice a week, your sort of male 
character, violent, suspenseful movie scores will go up every 
time you do it. Every time you watch a movie that is like a 
chick flick or a romantic comedy, those scores will go down a 
little bit and romantic comedy scores will go up. So you should 
think about your profile from the perspective of a 
recommendation algorithm as just a series of scores along these 
various dimensions that profile you, like what is your taste.
    Now for the definition of success of those algorithms, the 
point is that they want you to stay on the platform as long as 
possible. For social media algorithms, the definition of 
success is, again, staying on the platform as long as possible. 
They will profile you and score you in all sorts of ways to 
figure out how to keep you on the platform.
    I want to make the point that this is completely neutral so 
far. They would likely score me as very interested in crafting, 
and yarn in particular, and they would offer me yarn-type 
things, and every time I click on them my yarn profile score 
goes up. They would peg me more and more over time as somebody 
quite interested in yarn. In that sense, I would become an 
extremist with respect to yarn.
    Every single person is sort of nudged and profiled with 
respect to their interests. I would even add that it is not 
just what they are interested in initially, but they can become 
more interested in certain things over time because of the 
content that is offered them. Similarly, if I watched Diehard 
enough on Netflix, I would be offered more and more movies like 
Diehard, and that would actually inform my profile and my 
tastes in the future.
    I would spend time a little bit on filtering algorithms, 
but just suffice to say that whereas recommendation algorithms 
work quite well for the social media platforms and they make 
them very profitable because they do succeed in keeping people 
on the platforms, the opposite is true for filtering 
algorithms. The idea of getting rid of harmful content, they do 
not work well at all. They are very facile, sort of keyword 
search-based algorithms, and they are quite unsuccessful 
compared to recommendation algorithms.
    I will stop there for now. Thank you.
    Chairman Peters. Thank you, Dr. O'Neil.
    Our next witness is Dr. Nathaniel Persily, the Co-Director 
of Stanford Cyber Policy Center and the James B. McClatchy 
Professor of Law at Stanford Law School. Dr. Persily's 
scholarship and legal practice focuses on American election 
law. He is a co-author of the book, Law of Democracy. His 
current work has been honored as a Guggenheim Fellow, Andrew 
Carnegie Fellow, and a Fellow at the Center for Advanced Study 
in the Behavioral Sciences, examining the impact of changing 
technology on political communications, campaigns, and election 
administration.
    Dr. Persily, you may proceed with your opening comments.

TESTIMONY OF NATHANIEL PERSILY, PH.D.,\1\ CO-DIRECTOR, STANFORD 
 CYBER POLICY CENTER, AND JAMES B. MCCLATCHY PROFESSOR OF LAW, 
                      STANFORD LAW SCHOOL

    Mr. Persily. Thank you very much, Chairman Peters and 
Ranking Member Portman and Members of the Committee.
---------------------------------------------------------------------------
    \1\ The prepared statement of Mr. Persily appears in the Appendix 
on page 80.
---------------------------------------------------------------------------
    I am going to spend my time today talking about what we 
know, what we need to know, and then what to do about it. 
Before I do that, let me sort of admit to what we all know, 
which is why we are here, and that is that a courageous woman 
revealed thousands of pages of documents that previously no one 
had seen outside of the company. The revelations themselves, 
the content, were quite striking, but the fact that she did it 
was really momentous. It sort of brought front and center the 
fact that the internal researchers at these companies know an 
enormous amount about us and we know very little about then.
    That is an equilibrium which is not sustainable. Right? We 
cannot live in a world where all of human experience is taking 
place on these platforms and the only people who are able to 
analyze the data are the people who are tied to the profit 
maximizing missions of the firms.
    As Senator Portman mentioned earlier, I have been working 
both with his staff and others trying to figure out a way to 
open up these companies to outside research because they have 
lost their right to secrecy. All right? We are at a critical 
moment where we need to understand what is happening on these 
platforms.
    Let me talk about three areas of critical importance when 
it comes to research on online harms. The first is how much 
harmful content, appears on these platforms. The second is the 
role of the algorithm. The third has to do with advertising.
    First, how much harmful content is there on these 
platforms? The answer is a lot. Right? If you listen or read 
the transparency reports, there are millions of examples, 
whether it is hate speech or disinformation or inciting content 
and the like.
    But those numbers are really quite meaningless, so is the 
claim that, for example, Facebook takes down four billion 
accounts a year. I mean, that is interesting because it seems 
like a lot, but none of us really know what the denominator is. 
We do not know how much of the offending content is being 
shared and viewed by users of the platform and what the 
experience is both of the average user and certain subsets of 
users.
    For most people it is quite sort of important to understand 
their experience online is not filled with hate speech; it is 
not filled with disinformation and the like. But that is also 
the wrong question to ask.
    The question--and where I think research has progressed in 
the last few years--is to suggest that there is a sizable 
minority on these platforms who are experiencing and producing 
an enormous amount of this terrible content, whether we are 
talking about incitement, hate speech, disinformation and the 
like. Particularly because these are folks that it is going to 
be very hard to survey, it is going to be very hard to get them 
in sort of outside strategies for research, we need the 
internal data to figure out exactly what is going on.
    Second, on the role of the algorithm. This is an area where 
the platforms and outside observers seem to have diametrically 
opposed views. Whether it is the Haugen revelations or others 
that--from Facebook who have made this point or even 
conventional wisdom, sort of the argument is made that they are 
maximizing for engagement, and salacious disinforming and 
hateful content generates the most clicks, and therefore, that 
is what is favored in the algorithm.
    If you ask the companies, they say, no, that there are all 
kinds of measures that the firms are taking in order to take 
down this offensive content, that they take down, as I said, 
millions and millions of examples of hate speech and the like.
    I will say one other point that they make is that on 
encrypted platforms such as WhatsApp and the like that you see 
the same types of offending content, and if you go outside the 
United States, where those kinds of platforms are much more 
ubiquitous and they do not have algorithms, that you see, as 
much, if not more, of the hateful content.
    Finally, let me talk about advertising. These firms are 
advertising monopolies, and we need to treat them as such. That 
is what makes them distinctive. We do not really know a whole 
lot about how advertising is affecting sort of this information 
ecosystem problem that is the subject of this hearing. We know, 
of course, about the meddling in the elections, incendiary 
content, whether it is from Iran, whether it is China, whether 
it is Russia and the like. When outside researchers try to 
study advertising, as a group of New York University (NYU) 
researchers tried to do, they were kicked off the platform 
because they were trying to scrape and to try to find out 
exactly how people were targeted and the like.
    So now what to do about it? There are many areas of reform 
that I think are on the table. Karen Kornbluh mentioned some 
important ones. I want to focus on this question of researcher 
transparency. We cannot live in a world, as I said before, 
where the only people who understand what is happening on the 
platforms are the internal researchers to the firms. Whether it 
is immunizing outside researchers who want to scrape the 
platform as these NYU researchers do or to develop a secure 
pathway, maybe administered by the FTC, in order to vet 
researchers so that they can analyze privacy-protected data, 
that has to be our future. Right? We cannot live in this world 
where the platforms hide behind their right to secrecy. Only if 
we can get access to this data can we then regulate 
intelligently.
    Thank you.
    Chairman Peters. Thank you.
    Our final witness this morning is Dr. Mary Anne Franks, who 
is a professor of law and the Michael R. Klein Distinguished 
Scholar Chair at the University of Miami. Dr. Franks is also 
the President and Legislative and Tech Policy Director of the 
Cyber Civil Rights Initiative, a nonprofit organization 
dedicated to combating online abuse and discrimination. Her 
work is at the intersection of civil rights and technology. Dr. 
Franks authored the award-winning book, The Cult of the 
Constitution: Our Deadly Devotion to Guns and Free Speech and 
has been awarded a grant from the Knight Foundation to support 
research for a book titled Fearless Speech.
    Welcome, Dr. Franks. You may proceed with your opening 
comments.

TESTIMONY OF MARY ANNE FRANKS, D.PHIL.,\1\ PROFESSOR OF LAW AND 
  MICHAEL R. KLEIN DISTINGUISHED SCHOLAR CHAIR, UNIVERSITY OF 
                             MIAMI

    Ms. Franks. Thank you. On October 14, 2021, Facebook 
announced a new artificial intelligence project called Ego4D. 
The name derives from the project's focus on ego-centric, or 
first-person, perception. Among Facebook's plans for this data 
include equipping augmented reality glasses with the capacity 
to transcribe and recall recordings of what people say and do 
around the user.
---------------------------------------------------------------------------
    \1\ The prepared statement of Ms. Franks appear in the Appendix on 
page 104.
---------------------------------------------------------------------------
    Asked whether Facebook had implemented measures to address 
potential privacy and other abuses of these capabilities, a 
spokesperson replied that the company ``expected that privacy 
safeguards would be introduced further down the line.'' As 
underscored by multiple internal documents recently released by 
whistleblower Frances Haugen, this approach is characteristic 
of Facebook, aggressively pushing new untested and potentially 
dangerous products on the public and worrying about the 
consequences later, if at all.
    Documents shared with the FCC note the asymmetrical burden 
on employees to demonstrate the legitimacy and user value of 
harm mitigation tactics before implementation, a burden not 
required of new features or changes aimed at increasing 
engagement or profits. While it may no longer be an official 
motto, ``move fast and break things'' still seems to be 
Facebook's animating philosophy.
    It is notable that Facebook chose to announce such a highly 
controversial new project just as the company faces a storm of 
criticism and scrutiny over documented evidence that it 
knowingly allowed violent extremism, dangerous misinformation, 
and harassment to flourish on its platforms. One might have 
expected Facebook would be more circumspect about drastically 
increasing the capacity of individuals to record people around 
them without consent in light of the revelation, for example, 
that it allowed nude images of an alleged rape victim to be 
viewed 56 million times simply because the man she accused of 
raping her was a famous soccer star.
    Is it arrogance? Is it callousness? Or, is it merely 
confidence? Confidence that no matter what is revealed about 
Facebook's role in the disintegration of our shared reality or 
the dissolution of our democracy--not its acceleration of a 
conspiracy theories from QAnon to Stop the Steal, its 
amplification of deadly disinformation about Coronavirus 
Disease 2019 (COVID-19), its endangerment of teenage mental 
health, its preferential treatment of powerful elites, or its 
promotion of violently racist and sexist propaganda--that it 
will face no real consequences?
    After all, that seems to be the lesson that not only 
Facebook but other dominant tech companies have learned from 
previous scandals. Media attention will be intense for a while. 
They may be called before Congress to answer some uncomfortable 
questions. They may face some fines. But the companies will 
reassure the public that their purpose was never to cause harm. 
They will promise to do better in the future.
    It should be clear by now that debates over tech companies' 
intentions are a distraction and an obstacle to real reform. 
Moral and legal responsibility is not limited only to those who 
intend to cause harm. We hold entities accountable also when 
they know their actions will cause harm or when they are 
reckless about foreseeable harms and even sometimes when they 
are negligent.
    Facebook and other tech companies have known for years that 
a business model focused on what is euphemistically called 
``engagement'' is ripe for exploitation and abuse. These 
companies have, at a minimum, consciously disregarded 
substantial and unjustified risks to Americans' privacy, 
equality, and safety.
    These risks are not politically neutral. Contrary to oft 
repeated claims that social media is biased against 
conservatives, the algorithms of major social media sites 
disproportionately amplify right-wing content. Facebook allows 
right-wing news sites to skirt the company's fact-checking 
rules and changed its algorithm in 2017 to reduce the 
visibility of left-leaning news sites. The day after the 2020 
election, 10 percent of all views of political content on 
Facebook in the United States were of posts that falsely 
claimed that the vote was fraudulent.
    As one Facebook employee wrote, if a company takes a hands 
off stance for these problems, whether for technical or 
philosophical reasons, then the net result is that Facebook 
will be actively, if not necessarily consciously, promoting 
these types of activities.
    According to recently released internal research, Twitter's 
algorithms also disproportionately amplify right-wing content. 
Research on YouTube's algorithms show that they create a far 
more robust filter bubble for right-wing content than left-wing 
content and that Fox is, by far, its most recommended 
information channel, influence that illustrates how the 
ecosystem of extremism and disinformation is driven by forces 
beyond social media.
    Lopsided political amplification is all the more troubling 
given the disproportionate rate of right-wing offline violence. 
Since 2015, right-wing extremists have been involved in 267 
plots or attacks and 91 fatalities.
    To be clear, the object of concern here is not conservative 
content as such but content that encourages dehumanization, 
targets individuals for violence and harassment, traffics in 
dangerous disinformation, and promotes baseless conspiracy 
theories that undermine our democratic institutions. Social 
media, along with, in some cases, mainstream media, elected 
officials, and others with influential platforms amplify these 
anti-democratic forces.
    Structural reform, including reform to Section 230, that 
limits its protections to speech protected by the First 
Amendment and denies immunities to intermediaries who exhibit 
deliberate indifference to unlawful content, is necessary to 
ensure that no industry and no individual is above the law when 
it comes to the reckless endangerment of democracy.
    Thank you.
    Chairman Peters. Thank you, Dr. Franks.
    Recent reports based on the leaked Facebook Papers indicate 
that Facebook was certainly aware that changes that they made 
to their news feed configurations spread dangerous content more 
rapidly. Each of our witnesses have mentioned this in one form 
or another already. Yet, company leaders repeatedly argue that 
they are investing in trust, safety, and civic integrity 
efforts. I am certainly struck, as I think you are, by this 
fundamental conflict between efforts to take down hateful and 
violent content and company algorithms that seem to amplify 
extremist views simultaneously.
    Mr. Sifry, can you speak to this apparent conflict you have 
mentioned in your opening comments but a little more in depth 
if you would, please? How successful can these social media 
companies be at content moderation if they continue to design 
their own platforms to amplify extreme content?
    Mr. Sifry. Mr. Chairman, thank you so much for that 
excellent question. I think it really cuts to the heart of the 
conversation here, that at its core, what we are talking about 
is the incentive systems that drive this business model, a 
business model that is based around getting you engaged with, 
unfortunately, the natural human biases that we have toward 
engaging with controversial, with polarizing, with content that 
makes us afraid, with incendiary content.
    What happens? All of those indicators that were talked 
about earlier, right, likes, shares, retweets, you name it, 
that those indicators go up. And so these platforms, Senator, 
are working as designed. What ends up happening is that we end 
up spending more time on those platforms and they end up 
tracking us more and then they get to send us more hyper 
targeted ads.
    This will not change until there is a clear shift in the 
incentive systems that they use to be able to do this business, 
and that is where Congress must act. In creating systems that 
actually bring about a change in their incentive systems, that 
is how we end up getting them to behave rationally in this 
sense, right, and move toward those different incentive 
systems.
    Chairman Peters. Professor Kornbluh, just a follow-up 
basically on that question, in your opinion, can investments in 
trust and safety ever overcome a business model that 
prioritizes advertising revenues based on extremist content and 
the desire to keep people on a platform as long as possible?
    Ms. Kornbluh. I think that goes right to the heart of the 
question. The algorithm is this machine that is pushing 
misinformation or extremist content into a user's feed. It is 
recommending these extremist groups. It is targeting small 
groups of people with ads designed to agitate them. It is the 
mechanics of the system that are working.
    Then these poor human content moderators, or even 
outmatched AI systems as Dr. O'Neil was talking about, do not 
stand a chance. So their own data shows that less than 1 
percent of content that depicts violence or could incite 
violence is caught, and this is content that violates their own 
terms of service.
    This is true; there are new revelations today about even 
something that was a high priority from Mark Zuckerberg, which 
is catching COVID misinformation. Apparently, they are 
unmatched at the content moderation even there. An 
international vaccine expert that I talked to told me he was so 
frustrated because he felt that the disinformation purveyors 
were working with the motor of social media and that the public 
health officials were fighting against the engine of social 
media.
    I always think of that ``I Love Lucy'' skit when she and 
Ethel are on the candy conveyor belt and they are desperately 
trying to go as fast as the conveyor belt. These content 
moderators just do not have a chance.
    Chairman Peters. I want you to continue on this line of 
questioning, Ambassador. Earlier this year, a report found that 
Facebook was posting ads for body armor, gun holsters, and 
other military equipment right next to content promoting the 
2020 election misinformation and news about the attack on the 
Capitol on January 6, so getting this content and ads for 
military equipment. What do we know about how Facebook and the 
platform targets their advertising and this kind of link? Can 
you talk to me a little bit more about what you are finding?
    Ms. Kornbluh. Yes. There is so much here, and it is 
overwhelming. It is hard to get our heads around this 
advertisement of these kinds of weapons, but there are three 
ways in which ads in general drive extremism.
    First, it is very different than what we think of in terms 
of cable and broadcast ads, where they are micro-targeted 
specifically to the people who would be most moved by them and 
other users who might object, who might say, ``Oh, that is 
violent'' or ``Oh, that is wrong,'' they do not get to see 
them. The ad content is not reviewed by humans before it is 
placed as they are on broadcast and cable. Third, users cannot 
really find out who is paying for these ads. Even political ads 
can just list a dark money group instead of their true sponsor, 
which could be a foreign government. So in 2016, remember, 
there were ads paid for in rubles that got through because 
there is no human monitoring this and there is very little 
transparency.
    Then as we have talked about, the algorithms are designed 
to maximize this ad revenue. The ads are driving the whole 
thing. They are keeping you online to be fed the ads, and the 
incendiary content keeps users on the platforms longer.
    Then there is this third element that I think is really 
important to think about. The algorithms that are trying to 
keep you online to feed you these ads, they are creating a 
marketplace that values extremism. The more extremist the 
content the better the distribution will be, the cheaper the 
ads will be actually. It is a cheaper ad if it is more 
incendiary per unit because it gets wider spread distribution. 
So it is creating this marketplace.
    Political parties apparently came to Facebook and said, 
your algorithm is driving us to put out more incendiary content 
because it is the only way we can get distributed. If we just 
put out our 10-point plan, it does not get distributed online. 
So the ads are really the heart of this matter.
    Chairman Peters. Thank you. The buzzer you heard is a vote. 
So we are in a series of votes. You will see Members running in 
and out, and that is what I will do. I will go and vote. I will 
recognize Ranking Member Portman for his questions, and Senator 
Hassan will chair the hearing in my absence.
    Ranking Member Portman.
    Senator Portman. Thank you, Mr. Chairman, and thank you to 
the witnesses. This is a really complicated area, and I am glad 
that Dr. O'Neil gave us a little one-on-one on algorithms and 
how they work. Ultimately, she came down to the conclusion that 
this is about money. This is about what works, kind of along 
the lines of what Ambassador Kornbluh just talked about, that 
you know, what works is what sells more advertising and these 
algorithms are at the core of that. In other words, they are 
determining what we want to hear as online participants and 
amplifying that.
    I will say two things that I just want to try to stipulate 
here at the start. Some may not agree with me, but it seemed to 
me, Ambassador Kornbluh, in your comments in particular, you 
were focused on right-wing extremism as being the problem. It 
is not. It is everything.
    I hope we can agree to that because, again, the work that 
we did early in this Committee with regard to what ISIS was 
doing online, and in terms of recruiting and spreading 
violence, and in terms of what happens today even--I mean, 
there is, as you know, a lot of concern about these platforms 
not allowing what happened at the Wuhan lab to leak, to come 
out, or you know, concerns about what Hunter Biden is doing or 
not doing being blocked, or other things that lead me to 
believe that, whether it is Antifa on one side or whether it is 
White Supremacy on the other side, we need to look at this as a 
problem that is impacting ideologies across the spectrum.
    I am just going to stipulate that because I want to get to 
some questions. Some of you may not agree with me on that, but 
I think that is really important, for us not to make this a 
partisan exercise.
    Second is the First Amendment. This is impossible. How do 
you figure out what is speech that is peaceful expressions of 
points of view that we should be encouraging and what is 
content that ought to be filtered in some way? And there is 
lots of examples of this.
    Recently, parents at a school board meeting by, I guess it 
was, a National School Boards Association (NSBA) said these 
were domestic terrorists. They are not domestic terrorists. 
These are parents trying to--and I think they later apologized 
for saying that. But, parents expressing their strongly held 
views about their kids' education.
    You have to be sure that we are not taking content which is 
people expressing political views that are peaceful and somehow 
filtering those out. Any thoughts on that as we start, for any 
of the panelists, either with us or virtually?
    Ms. Kornbluh. I just want to agree with you. The algorithms 
are not partisan. The algorithms are economically driven, as 
Dr. O'Neil said, and they are trying to keep us all online. I 
think it is really important to keep that in mind.
    I think for the First Amendment, the freedom of speech, 
freedom of association, it is extremely important that the 
government not be in the business of deciding what is true and 
what is not true and that instead--that is why I think some of 
these revelations from the whistleblower are so important, 
because she focuses upstream of the content, at the mechanics 
of the platform, and how it is driving this content just to 
service itself and to service its ad revenues.
    If we focus on those design elements, and we focus on 
transparency especially, that furthers First Amendment 
concerns. It furthers freedom of speech and freedom of 
association. Transparency is such an important principle, if we 
let consumers know who is behind what they are seeing, what 
interest is it serving, that----
    Senator Portman. That leads me to my core question today 
really, again, just stipulating this is a hard area. We have 
talked about a couple of those issues.
    But it seems to me that, as I said earlier about getting 
under the hood and looking into what these design elements are, 
that you talk about the transparency issue. What you talk about 
is really important because--again, it is proprietary 
information. I understand that. These are private companies.
    Again, this is not an easy issue for government to be 
involved with, but everybody is talking about regulation right 
now. I mean, everybody. Facebook is talking about it. Google is 
talking about it. Twitter is talking about it. We are talking 
about it. Everybody has a different view what that regulation 
might be, but shouldn't it be based on better data because we 
really do not know what we are trying to regulate, if there is 
a lack of transparency, as to what that design is or how these 
algorithms are derived.
    So you know, we talked a little again in your testimony 
about this. Dr. Persily, in particular, you talked about your 
thoughts on how to give access to impartial researchers to be 
able, I assume, to publish about what is actually behind all 
this. What is the content-directing mechanism and how does it 
work. I am intrigued by that. I do not know if that is the 
answer. You mentioned that the FTC could play a role in this.
    But can you amplify that a little bit and talk about what 
you think could lead us to more transparency and better 
understanding?
    Mr. Persily. Thank you for that question. The model that I 
put out there is that the FTC, working with the National 
Science Foundation (NSF), would vet researchers who would not 
have the permission of the company, but the company would have 
to basically develop research portals for outside, independent 
research to study all of these societal problems that we are 
saying are caused by social media. The key features of this are 
simply that the firms have no choice in who has access to the 
data and we have some way of vetting the researchers to 
prevent, say, another Cambridge Analytica and the like. We need 
to have some process in place so that someone other than those 
who tied to the profit-maximizing mission of the firm get 
access to this data.
    As you mentioned, it is proprietary data, but it is data 
about us. Right? It is essentially data about everything that 
is happening in American politics and, frankly, around the 
world. We need to figure out some way for the firms to be 
opened up so that outsiders can see it.
    My view is that they should not turn the data over to a 
government agency, that that would pose real privacy and 
surveillance problems. We want to make sure that there is a 
vetted, independent third party that is able to do this kind of 
research, on all of these questions that have come up today, so 
that we can get to the answers to some of the questions that 
you asked earlier about the propensity on the left and the 
right to engage in hate speech or engage in violent content and 
the like, as well as potential bias in the content moderation 
platforms and the like. Only if we have access to the data can 
we really answer those questions.
    Senator Portman. Otherwise, very hard to come up with 
regulation, which is what, again, everybody is talking about. I 
know there are different views on what that means, but it seems 
to me that there should be a consensus that if we are going to 
try to regulate this we need to have better information as to 
what the actual design and what the intentions are and the 
impact is. So this hearing is helpful, I think, in that regard.
    I am at the end of my time. Hopefully, we will come back 
for a second round. I have so many other questions for this 
team. But again, I thank you for your expertise today.
    Senator Hassan [presiding]. Thank you, Senator Portman.

              OPENING STATEMENT OF SENATOR HASSAN

    it is now my turn for a round of questions. I want to thank 
Senators Peters and Portman for holding this hearing and to all 
of our witnesses, both in the room and virtually. I really 
appreciate your testimony. This is an excellent panel, and I 
appreciate your work very much.
    I want to start with a question to Dr. O'Neil and Dr. 
Persily. I am concerned that extremist groups, including ISIS, 
continue to develop and refine online radicalization techniques 
that make it easier and quicker to radicalize people. At this 
Committee's annual threats hearing in September, leaders from 
the Department of Homeland Security (DHS), the Federal Bureau 
of Investigation (FBI), and the intelligence community (IC) 
expressed similar concerns. Extremists take advantage of the 
speed of social media platforms but also their algorithms, 
which often facilitate the quick spread of extreme content.
    Dr. O'Neil and Dr. Persily, in your view, what are the 
weaknesses in the algorithms used by large social media 
platforms that can make it possible for anyone, including 
violent extremist groups, to expand and capture their audience? 
What steps can social media platforms take to curtail extremist 
efforts? Why aren't these companies taking these actions 
already? I will start with you, Dr. O'Neil.
    Ms. O'Neil. Thank you for the question, yes, and it is an 
important one. I want you to think of the filters, these things 
that are trying to catch content like that is hateful or 
otherwise not allowed, as sort of nets that fishermen use in 
the ocean. They sort of pull the net, and they see what they 
have got. They have some fish there, and they count the fish. 
They say, oh, we got a lot of fish. What they are not counting, 
of course, is the fish that got through the holes of the net.
    What I am talking about are the people who are paid, 
actually, to put hateful content on Facebook and elude the 
nets. The thing about is that they can tell when they have been 
caught, and then they will double, redouble their efforts to 
change their content somewhat so that it gets through the net.
    It is kind of like if you think about, the early spam on 
Viagra that would filter into your e-mail, and then you know, 
the spam filters got rid of the things that said Viagra. Then 
they started saying Viagra, but it was spelled--instead of with 
an ``I,'' it was spelled with a ``1.'' Those got through for a 
while until they did not get through.
    Spam filters work pretty well to remove Viagra ads in part 
because Viagra is the same word over many years. But in the 
case of social media, the stuff that they are putting on social 
media changes very quickly, and like the spam filters 
essentially cannot keep up.
    I think you should think of it as an arms race. It is the 
filters on the one hand owned by the social media, and of 
course, the people that get sent the high scored, like high 
risk content----
    Senator Hassan. Right.
    Ms. O'Neil. Then they have to decide whether it is in fact 
against policy, and then on the other hand, all the 
propagandists who are actively trying to evade the filters. The 
simple truth is that the propagandists are winning that war.
    To the extent that social media companies can combat it 
more, it would require much more expensive work, and they 
simply do not want to do it. So their policy has been, we are 
going to count how many fish we got, we are not going to count 
how many fish we did not get, and we are going to hope that it 
sounds good enough for you guys to stop asking questions.
    Senator Hassan. Thank you.
    Dr. Persily.
    Mr. Persily. So this is one of those areas where I wish I 
had the answer, and in order to get the answer we do need to 
have access to the data. I will say that, having talked to many 
of the integrity teams at these companies, I mean, these are 
pretty sophisticated operations----
    Senator Hassan. Right.
    Mr. Persily [continuing]. That they have stood up in the 
last five years, and there is more that they could do.
    As bad as things may be in the United States, by the way, 
they are a lot worse around the world, and that is something 
that I think naturally we are focusing on problems unique to 
the United States. But if they do not have the competencies in 
the local languages around the world, especially if you are 
dealing with terrorist content and the like, then they are 
going to be hindered in their ability to really attack these 
problems.
    Again, if you look at the work that we have done at 
Stanford, if you go to the Stanford Internet Observatory, we 
have been trying on the outside to flag this kind of content, 
the violent content, terrorist content, and foreign election 
meddling and the like. We have been trying to do on the outside 
what they do on the inside, but it is extremely difficult.
    Senator Hassan. OK. Thank you.
    Dr. Franks, the proliferation of nonconsensual intimate 
imagery, sometimes called ``revenge porn,'' is a pervasive 
problem on the Internet. There are a number of truly despicable 
sites dedicated to hosting that material, but often users also 
share these private images and videos on large social media 
platforms with absolutely devastating consequences for those 
whose images are shared without their consent.
    Congress and the States have taken notice of the tremendous 
harms from these situations, and there is some work going on to 
address the problem. But, what more should social media 
companies be doing to prevent this content from being hosted on 
their platforms and to more immediately remove it when found? 
What additional tools could these companies give to people to 
help ensure that their images are not shared on social media 
platforms?
    Ms. Franks. Thank you. Social media companies--I will say 
this about some of the more dominant companies--have been 
trying to tackle this issue, Facebook among them. There are 
teams at all of these various companies that are quite 
concerned about these issues and have worked with organizations 
like mine, the Cyber Civil Rights Initiative, to think about 
ways to impose their content policy restrictions and to 
encourage people not to participate in this kind of abuse and 
to try to empower victims to be able to remove the content.
    That being said, those kinds of measures are essentially 
putting victims at the mercy of these companies. They may 
choose to make this priority. They may choose to impose some 
sort of policies. But there is not any necessary reason for 
them to continue doing so, and tomorrow they could simply stop.
    I think it really is important for the work that is being 
done by the State legislatures and by Congress. The SHIELD Act 
is included in the Violence Against Women Reauthorization Act 
of 2021. That would federally criminalize nonconsensual 
pornography. That would be a real incentive, I think, for these 
companies to take it seriously.
    This is connected to what I have suggested about Section 
230, and it also goes to the broader question of incentives, 
about transparency or about regulation, about any of these 
questions. So long as these companies enjoy essentially blanket 
immunity for these harms, there is no real incentive for them 
to do anything, and therefore, it would be very important for 
there to be changes in Section 230 to take away some of that 
preemptive immunity.
    Senator Hassan. Thank you very much. Again, I appreciate 
the testimony of all of you.
    I am going to recognize Senator Johnson now for his round 
of questions.

              OPENING STATEMENT OF SENATOR JOHNSON

    Senator Johnson. Thank you, Madam Chair. First of all, let 
me say anybody who has watched the documentaries, The Social 
Dilemma and The Creepy Line, I think has to understand that 
this can do great harm, particularly to children. I think as a 
result I think our first line of defense is with parents. I 
think that they need to really do everything they can to make 
sure that their children are not affected by this in a negative 
fashion.
    Professor Franks, you made the comment that it seems like 
from your standpoint most of the bad content leans toward 
conservative and that type of misinformation. I would just ask 
you, what is your assessment in terms of the leadership, the 
people who work in these companies? How do they lean 
politically in your assessment?
    Ms. Franks. I am not sure that it is relevant. I think 
the----
    Senator Johnson. Pardon?
    Ms. Franks. I am not sure that that is relevant. The 
particular individual leaning----
    Senator Johnson. OK. I will just determine what is a 
relevant question or not, but let me just ask you again, what 
political affiliation do you think most people in leadership 
positions and that work in whether it is Facebook or Google or 
Twitter? Are they right-wing extremists? Are they conservative? 
Are they liberal? Are they radical leftists? I mean, what part 
of the political spectrum do you think they fall on?
    Ms. Franks. Perhaps you could explain to me the relevance, 
and then I could answer the question.
    Senator Johnson. Can you speak a little bit louder?
    Ms. Franks. Sure. Perhaps you could explain to me the 
relevance of political affiliations of individual employees.
    Senator Johnson. I was saying, just listening to the 
testimony here, it seems like the big concern here is about 
right-wing extremism, which I completely condemn, or right-wing 
misinformation. So again, I would just argue when you take a 
look at what Mark Zuckerberg has done through his Center for 
Tech and Civic Life, a couple hundred million dollars spent, 
pretty well took over the election system--I think in violation 
of Wisconsin law--of Green Bay and three or four other cities, 
it does not seem to me that the impact or the intent of their 
manipulation of data would tend to favor conservative groups or 
conservative thought. It seems to make more sense that they 
would tend more to push a liberal ideology.
    Ms. Franks. As other experts have testified, the main issue 
for most of these companies tends to be profit, and profit is 
usually going to be built around engagement. Engagement is 
usually going to be built around outreach, misinformation, 
half-truths, things that provoke people into thinking that they 
are under attack, that they are being victimized.
    Senator Johnson. OK. We heard that in testimony. Let me ask 
you, have you heard of the work of Dr. Robert Epstein?
    Ms. Franks. I am not sure.
    Senator Johnson. So I saw one of you shake your head. Is 
that Mr. Persily?
    Mr. Persily. Yes. I am familiar with it.
    Senator Johnson. You are familiar with it?
    Mr. Persily. Yes.
    Senator Johnson. Can you summarize it, or should I 
summarize it for you?
    Mr. Persily. Whichever you prefer, but, yes, that he makes 
the argument that the platforms favor the Democrats in his----
    Senator Johnson. He makes the argument that through 
manipulation of search, Google, as it ramps up toward 
elections, starts manipulating the search to push users of 
Google into the type of information that is going to tend to 
have you vote or decide to vote for a Democrat, delivering, 
according to him, millions of votes to Hillary Clinton, 
millions of votes to congressional Democrats in elections in 
2018.
    Mr. Persily. Yes, that is what he says.
    Senator Johnson. OK. I think the point I am trying to make 
here is I think manipulation is potentially going to swing both 
ways. Who is going to be the arbiters of truth? You know?
    Professor Franks, you said that the reckless endangerment 
of democracy. Now I would happen to think that if one of these 
platforms is utilizing their awesome power of manipulating a 
search to turn votes toward one political party versus the 
other, if one of these tech giants uses--spends hundreds of 
millions of dollars to turn out voters in Democrat-leaning 
areas and regions, that certainly impacts our democracy.
    I will give you another example in terms of Facebook. I 
provided a forum for people who were vaccine injured to just 
tell their story. Following that, there were about 2,000 people 
involved in groups supporting each other. Some of these women 
have such severe vibrations from vaccine injuries that they 
committed suicide. So this Facebook group was a support group. 
It was literally helping people prevent suicides. Within a 
week, their group grew from 2,000 to 5,000 people, at which 
point Facebook dismantled the groups. These individuals who 
were counseling and helping prevent suicides lost contact with 
the people who were suicidal.
    So who is going to regulate this? How is free speech 
different when it is on a platform versus when it is just 
spoken in the town hall?
    Ms. Franks. I would suggest----
    Senator Johnson. Who is going to be the unbiased arbiter of 
truth? I do not think it exists. I am certainly questioning the 
Section 230 liability protections when you have these platforms 
acting as publishers, which is what they are doing when they 
censor primarily conservative thought. I have been censored 
myself, repeatedly.
    So again, I am just pushing back and challenging the fact 
that this is something that is fomenting right-wing 
conspiracies and highly advantageous to the conservative 
movement. I would say, if anything, it is more likely it is, 
from a political realm, advantaging left-wing ideology.
    But again, I will come right back to we have a constitution 
that protects free speech. Who is going to regulate that 
fairly, in an unbiased fashion? It is just not possible. Along 
the way we are violating people's constitutional rights.
    Anybody want to just take a stab at that one?
    Ms. Franks. I would be happy to respond to that. Yes, we do 
have a First Amendment. We do have right to free speech.
    But we also know, of course, that private companies are not 
obligated under the First Amendment to take all comers. They 
are allowed to make their own decisions about what is 
considered to be high quality or low quality content. They can 
make any number of decisions, and I think we would applaud them 
in many cases to make those decisions. As we were talking about 
just before, in terms of nonconsensual pornography, I, for one, 
am very happy that Facebook has made the decision to say that 
that is not welcome on their platform.
    When it comes to the questions of conservative versus 
liberal bias, this is not a preconceived notion that I am 
suggesting here. This is not about intuitions or impressions 
although I know that those can go in many different directions. 
This is about what the data actually suggest. The data actually 
do indicate that right-wing content is more amplified on these 
social media platforms than left-wing content and that right 
wing content is more disproportionately associated with real-
world violence, not hurt feelings, not people being upset, but 
in fact actual violence, actual armed insurrections, actual 
notions of terrorism and anarchy.
    Senator Johnson. Thank you, Mr. Chairman.
    Chairman Peters. [presiding]. Thank you, Senator Johnson.
    Senator Ossoff, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR OSSOFF

    Senator Ossoff. Thank you, Mr. Chairman.
    Ms. O'Neil, based upon your experience reviewing the 
algorithms underpinning many of these platforms and similar 
products, can you please connect the dots for the Committee, 
the link between the scale that these companies manage to 
achieve and the algorithms that they use to feed content to 
targeted users?
    Ms. O'Neil. Thank you for your question. I am going to make 
a confession. I have not audited these algorithms because these 
companies have not welcomed me or invited me to--inside their 
systems. They would not want me there.
    For that matter, I do not think I would take that job for 
the very reason you are asking the question. As an algorithmic 
auditor, I want to consider who would be harmed, who are the 
stakeholders, and what kind of harm would come to those 
stakeholders or might come to those stakeholders. I do not go 
in assuming there is harm, but I do go in thinking about who 
are the stakeholders and what could harm them, and then making 
and developing tests and experiments to see the extent to which 
these harms are actually occurring.
    For example, the kind of research we learned about from the 
whistleblower around teenage girls and suicidal ideation would 
be the stakeholder, teenage girls, and the harm would be 
suicidal ideation caused by their experiences on Instagram. 
That would be an example of the kind of stakeholder and harm 
that I would examine.
    But if I were to actually be given the job of auditing the 
Facebook algorithm or the Instagram algorithm or any of the 
other algorithms, it would just be too large. There would be 
too many stakeholders. For example, it is very clear to me that 
if I had been given that job four years ago I never would have 
imagined the stakeholder that would be the Rohingya Muslims who 
were going to be there was going to be a genocide against them 
in Myanmar. That simply would not have occurred to me.
    So it is just too big a job to do that, and it is because 
of scale, because it is international and because even within 
the United States it is too large to imagine who are the 
stakeholders and what kind of harm could befall them.
    Having said all that, there are specific stakeholders that 
you can imagine right now that are interested--that are the 
focus of this particular Committee, that you could be saying, 
well, wait a second. Are these stakeholders having these harms 
that are actually illegal or a national security threat? That 
is a kind of algorithmic audit I would be happy to do. It would 
not be something that Facebook would invite me to do. You would 
have to somehow subpoena the data for me on my behalf, but that 
is something I could do.
    To summarize, auditing for me is a stakeholder and a harm 
and an experiment to see the extent to which that harm is 
falling on that stakeholder. I would be happy to do that, but 
you would have to choose a few of them because there is just 
almost an infinite number to consider a priori.
    Senator Ossoff. OK, Ms. O'Neil. Thank you.
    Dr. Franks, having observed this hearing, paying close 
attention to the broader discourse in politics, culture, 
society on these issues, what do you think is being missed by 
policymakers, and how is the nature of our debate perhaps 
overlooking key considerations or facts relevant to the policy 
discussion?
    Ms. Franks. Thank you. I think a lot of what is being 
missed is--or I should say the focus is sometimes on purposes, 
intentions, listening to the companies say, we are working on 
this; we wish we had caught that. I really think it is long 
past time that we look beyond what the companies are saying 
they care about and what they are intending to do and simply 
look at what is happening, that the question of intentions just 
be something we leave in the past. This is why I suggest, for 
instance, that Section 230 is ripe for reform because it gives 
too much deference to the idea that this industry will be able 
to regulate itself.
    I think the other important issue is to recognize that 
certain types of changes to Section 230 need not raise or 
settle all of the First Amendment concerns that have been 
brought up, that are, of course, legitimate to bring up and be 
concerned about, that modest reforms to Section 230, for 
instance, denying immunity when it comes to harm that is 
caused, that is foreseeable, and to which intermediaries have 
exhibited deliberate indifference.
    All that would do would be to allow people to be able to 
sue these companies if they had a theory. It does not mean that 
they would be vindicated. It does not mean that some of that 
speech would not ultimately found to be First Amendment 
protected.
    It means that the industry would not continue to have this 
kind of preferential treatment that rarely any other industry 
really has. They would be called to account. They would have to 
reveal documentation. They would give us some transparency 
about the inner workings of what these companies actually do. 
They would allow, at least in some cases, for people who have 
been harmed to prevail and to actually get some kind of 
compensation for their injuries.
    Senator Ossoff. Thank you, Dr. Franks. What information 
about the business practices of these firms that may not 
currently be public do you think would be of the greatest value 
to Congress as we weigh potential statutory revisions?
    Ms. Franks. There are any number of things that I would 
particularly be interested in hearing what these companies are 
doing, but just to take a few examples, when companies 
implement policies against certain types of harms and say that 
they now have removal policies, let's say, with nonconsensual 
intimate imagery: What is the data in terms of what kinds of 
reports they are getting? What is the data in terms of whether 
they are taking those requests seriously? How quickly are they 
responding to those requests? How often are they aware that 
those types of material are flourishing on the platform and 
increasing engagement? How many times are they willing to, 
nonetheless, hold to their principles and take it down as they 
have said that they would?
    In essence, are they actually fulfilling the promises that 
they are making to the public?
    Senator Ossoff. Thank you, Dr. Franks. Thanks to the panel.
    Mr. Chairman, I yield back.
    Chairman Peters. Thank you, Senator Ossoff.
    Senator Rosen, you are now recognized for your questions.

               OPENING STATEMENT OF SENATOR ROSEN

    Senator Rosen. Thank you, Chair Peters, of course, Ranking 
Member Portman, for holding this very timely hearing.
    To our witnesses, I appreciate you being here and sharing 
your expertise with us.
    I want to talk a little bit about algorithms because the 
tools of violent extremism, like conspiracy theories and 
disinformation, they frequently begin online as building blocks 
of hate. As we have seen time and time again, hateful online 
words can morph into deadly online--offline actions, excuse me, 
and then amplified once again online. This is a vicious cycle. 
This was the case three years ago yesterday at the Tree of Life 
Synagogue shooting, and it is true in far too many cases of 
hate.
    As we have been discussing today, what often enables 
extremist groups and individuals to disseminate hate messaging 
are social media algorithms. Platforms generate algorithms that 
promote content, even if it is harmful, to keep people engaged. 
And engagement is what drives advertising revenue.
    We have learned recently that social media platforms have 
known for some time--and I am going to quote here--``hate 
speech, divisive political speech, and misinformation,'' on 
their apps are having a negative impact on society and that--
again I am going to quote--the core product mechanics, such as 
virality, which means how viral someone goes or some post goes. 
Those recommendation algorithms optimizing for engagement, they 
are a significant part of why these types of speech flourish.
    So as a former computer programmer, I know that platforms 
have the capability to remove bigoted, hateful, and incendiary 
content that leads to violence, and I also know that they have 
an obligation to do so.
    So to Mr. Sifry and then Ms. O'Neil, when a platform 
announces a new policy banning hate content, do you know how 
often do they or should they adjust their algorithms to reduce 
this content and, more importantly, to reduce and remove the 
recommendation algorithm for hate content so they do not 
continue to spread like we see that?
    Mr. Sifry. Senator, that is an excellent question. Thank 
you so much. The core issue, right, that these policies that 
they are creating--and many of the large tech companies have, 
on the face of it, admirable policies against hate, against 
incitement to violence, and against harassment. However, the 
issue is enforcement at scale, and what we have seen over and 
over again is that the platforms are falling down at being able 
to enforce at scale.
    For example, for nine years, Facebook did not have an 
official policy on Holocaust denial. They allowed this content 
to stay up on their site with no policy. Last year, after years 
of advocacy by civil society groups, including ADL, they 
finally changed their policy and said, OK, this terrible, 
nefarious content needs to be off the platform.
    Yet, in January of this year, just 3 months later, when you 
would expect they had time to actually enforce said policy, we 
went and tested that policy. We were able to find groups with 
tens of thousands of members that were still advocating 
Holocaust denial, and we were able to find these kinds of 
nefarious content still being pushed to the tops of people's 
feeds.
    So you are so right in the sense that not only the policies 
are, of course, important, but it is how do you enforce these 
policies at scale and how these algorithms, because of these 
engagement mechanisms, will then push them to the top of our 
news feeds right next to content from our friends and family.
    Senator Rosen. Thank you. Mr. Sifry, what do you think we 
can do to reduce or remove these algorithms? I am sorry, I 
meant Ms. O'Neil.
    Ms. O'Neil. Yes. Thank you.
    Senator Rosen. I was looking at Mr. Sifry but seeing you on 
the screen. I knew what I meant. Sorry about that.
    Ms. O'Neil. Thank you, Senator Rosen. It is a really good 
question and an important question. I personally, and probably 
you as well--when I heard Mark Zuckerberg say a few years ago 
that AI is going to solve this problem, I knew that was a lie, 
and he knew that was a lie. AI does not have a notion of truth. 
It does not understand the English language or never mind other 
languages. It simply looks for keywords. It is like a little 
bit of a gussied-up version of a keyword search.
    So the point is that if we cannot decide what is true, 
right, AI can definitely not decide what is true. Its track 
record is so very weak. We have been hearing about less than 1 
percent of certain types of violent content being caught by 
this particular type of AI. I hesitate to call it AI because 
that is giving it too much credit. It is just an algorithm that 
is a filter.
    I also want to say that I do not have any confidence that 
this will work better in the future. The reason is that, 
whereas the propagandists are working with the recommendation 
engine, should get more attention because they want more 
attention--so you can say that it is like they are working with 
that algorithm--they are working directly against the filtering 
algorithm. They are trying to bypass it, and they are very good 
at that. The filter is just a very weak thing, and it will 
never work as far as I am concerned.
    Senator Rosen. I want you to go on. We think about 
enforcement at scale, some of the things you are talking about. 
What can we do here in Congress to really be sure that we can 
propose guardrails there for this misinformation, hate, things 
that promote violence in the real world and then amplify that 
violence and celebrate it after it happens? What can Congress 
do to help stop this?
    Ms. O'Neil. I think I am going to go back to the first 
person who spoke--I believe her name is Karen--that it is about 
incentives. Right? So the thing that I suggest should happen 
would be for you guys to have a specific definition of the kind 
of harm that you want them to keep an eye out for and that you 
have a way of making sure it is happening, a monitor, and every 
single time their monitor fails they get charged money because 
it is always going to be about money.
    So in other words, you have to make it more expensive for 
them to let it happen than the profit that they gain by letting 
it happen. You have to really counter the profit motive, and 
that is the only way you can do it.
    Of course, it will be very expensive for them to stop these 
kinds of things. It will not work with AI. They will have to 
put humans on it, and they do not want to do that.
    Senator Rosen. Right. Thank you.
    [Simultaneous discussion.]
    I was going to say, Mr. Sifry, do you have something to 
add? Because I really want to work to stop this escalation and 
celebration and this vicious cycle. So, please.
    Mr. Sifry. Absolutely. What is so critical here is it 
starts with transparency. So No. 1, what are the actual 
policies that these platforms actually have? Facebook had a 
policy that they never reported on, called Cross Check, where 
over five million people were essentially absolved from any 
kind of machine review, and this included a virtual whitelist 
where celebrities, politicians, and others could say whatever 
they wanted without review.
    Then for each of the policies they have, what are the 
enforcements that they have done? We expect these kinds of 
reports when companies go public and go to the FCC, and we 
expect it, and we penalize them when they lie. We should be 
doing the exact same things with these companies that are so 
vital to the future of our democracy.
    Senator Rosen. Thank you. I appreciate both.
    I have some other questions for the record. Mr. Chairman, I 
will be submitting those. Thank you for your time today.
    Chairman Peters. Thank you, Senator Rosen.
    Senator Lankford, you are recognized for your questions.

             OPENING STATEMENT OF SENATOR LANKFORD

    Senator Lankford. Mr. Chairman, thank you very much.
    Thanks for all the testimony. This is an ongoing national 
conversation, obviously. We do not have an anticipation we are 
going to solve it all today in this hearing, but we do have an 
anticipation we are going to narrow down what are the key 
things that we are going to engage in and that we have to be 
able to find a way to be able to solve. So it is helpful to be 
able to have your dialog in the conversation.
    Five years ago, I was with an executive from a Silicon 
Valley company, which I will leave out the name of the company, 
and as we were chit-chatting I made just some random comment 
about his social media page. he looked at me point-blank and 
said: Oh, I do not do social media at all, and my children do 
not do social media at all. I know better because I know what 
it is and what it is designed to do. I would never do that.
    It was stunning to me to hear someone in the middle of 
Silicon Valley to say: Oh, I know how toxic this is. There is 
no way I am going to allow my children or my own family to be 
affected by this.
    For some reason, that message has not gotten out to a lot 
of other folks, and for some reason there are folks that are in 
social media companies that have no personal challenge with 
being a part of that. The original sale of this seems to be we 
are going to swap pictures of our children and what we are 
eating for dinner, and it has moved to something very 
different.
    So saying all that, I want to talk a little bit about ways 
that it could get better, practical aspects. Some of them were 
just mentioned online, in some of the conversation just now. 
What are some practical things that could be done to make this 
platform, this type of platform, better to be able to engage? 
Anyone who wants to jump in with a practical idea, you are 
welcome to.
    Mr. Sifry. Senator, if I may, so we have talked a lot about 
changing the incentive systems that then get these companies to 
change their algorithms. I think, No. 1, we have been looking 
at Section 230 reform in particular targeted ways. But this is 
a complex issue. We need to be careful about how we do so and 
how to, in fact, incentivize companies to maintain--they will 
keep the 230 protections if they actually reform and behave 
more responsibly.
    Second, we have talked about transparency reporting, a 
number of us have, and I think that this is something that 
again has real First Amendment considerations here. This is 
something that we should be asking these companies to do tout 
de suite.
    Third, ADL has been bringing forward the idea of a resource 
center, an independent resource center much like the National 
Center for Missing and Exploited Children (NCMEC), where we 
have for the exploitation of a child and sexual imagery. It is 
a public-private partnership to help to track what is going on 
in the world of extremism as well.
    Most importantly as well, academic and other good faith, 
third-party researcher access to this data so that they can 
actually be looking at what is going on and report back on what 
is happening beyond what is just mandated.
    Senator Lankford. OK. Other specific issues?
    Ms. Franks. I do think that Section 230 reform is going to 
be one of the most important planks of this. Changing 
incentives for the industry is of the utmost priority. If 
companies are essentially given a blank check, told that they 
can engage as much as they want, as often as they want, and 
take profits from that, that they will face no repercussions 
for this, that they are immune sort of preemptively from suit, 
that is something that gives them a terrible incentive to do 
anything responsibly. So that would be one way.
    Senator Lankford. Is this 230 reform, or is it 230 
enforcement because the issue about not being an editor or 
publisher? Clearly, multiple of these entities are already 
trying to be able to alter content and to be able to engage in 
something that looks like a violation of 230 already. If there 
is a 230 issue, is it enforcement, or is it changing the way 
that it is written?
    Ms. Franks. The 230 issue I think can be thought of in 
either way. There are ways to read 230 more in a limited 
fashion, but I do think that the primary problem with Section 
230 currently mostly has to do with the (c)(1) provisions which 
provide immunity for things that the companies are leaving up.
    Senator Lankford. Right.
    Ms. Franks. I think if we were to open the door to that 
they would be subject to the same kinds of litigation that 
other industries are a part of, we would see more safety and 
better standards in addition to, I think, far more FTC 
interventions against what some of these companies are doing, 
recognizing that there is a fundamental misalignment of 
incentives here because these companies, their customers are 
not the users; their customers are advertisers.
    Therefore, we have a big problem when it comes to how much 
those incentives are being thought of as advertiser issues as 
opposed to user issues----
    Senator Lankford. Right.
    Ms. Franks [continuing]. How often users believe somehow 
that they are getting a free service. There needs to be more 
oversight from the FTC about what that means for consumers and 
what it means for their consent.
    Senator Lankford. With that, how do you balance out the 
issue of censorship? Because Focus on the Family and the 
Heritage Foundation and countless other conservative groups or 
faith based groups would tell you they put up content and 
immediately it gets pulled down and blocked or they get their 
account suspended. They are saying, ``we are trying to keep, 
hateful content away, and typically, hateful content is 
conservative content''. And so they are saying, ``we are trying 
to abide by that''. How do you balance that out?
    Ms. Franks. I think there again we have to underscore the 
fact that these companies do have a First Amendment right of 
their own to exclude content and not associate with content 
that they find objectionable for any reason.
    Senator Lankford. Right.
    Ms. Franks. Therefore, that is the part of Section 230, the 
extent to which it gives them procedural rights as far as that 
enforcement goes, that I think we probably should leave alone.
    Senator Lankford. We are, on the other side of this then, 
dealing with antitrust violations, where you have an issue 
where a company intentionally goes in and blocks out 
competition to be able to make sure there are no other voices 
than them in the marketplace and only their point of view gets 
out. That leaves the First Amendment issue, which I agree with, 
and moves into an antitrust issue that we still have to be able 
to resolve for several things.
    Mr. Persily, do you have something to add to that?
    Mr. Persily. Certainly. Thank you. Let me begin where you 
left off, which is so we were thinking about the antitrust 
problem in a very classic way, and we realize that these firms 
do not fit the normal model of antitrust. Let us talk about it 
in terms of competition law or the like. There are a lot of 
measures on interoperability and trying to break up their 
monopoly on data that I think Congress should explore.
    Also, there are sort of non-speech-related reforms that I 
think will have an impact on these speech-related issues, 
right, in a beneficent way. So privacy legislation, right, is 
tied to advertising regulation because the amount of data that 
they hoover up from all of us that is what enables certain 
types of targeting, certain types of messaging which a lot of 
us here today were complaining about.
    Then on transparency, everybody is in favor of 
transparency. What I want to emphasize--and Mr. Sifry mentioned 
it before--is researchers or just getting some third party in 
there to figure out what is happening, whether it is on the 
political bias, on content moderation or on the actual nature 
of the platforms, that this will change their behavior if they 
know that they are being watched. Right?
    It is not just about providing a subsidy to outside 
researchers to figure out something for their publications. It 
is about making sure someone is in the room to figure out sort 
of what is going on and that the researchers are not the only 
ones in the firms who are tied to the profit-maximizing mission 
of the firms, who then have access to that information.
    Senator Lankford. Yes. Mr. Chairman, it is interesting. I 
brought to Facebook several years ago now just an idea, to say, 
if you want to turn down the volume in some of these pages 
where there is people attacking other people on the page, allow 
the opportunity for the page owner, quote-unquote, to be able 
to just say, the comments just come to me, but they are not 
public. If you want to comment to me, you can comment to me. 
But then that discourages people from attacking each other back 
and forth.
    Their response was, well, we like the interaction.
    It is interesting. The preference for them was very clearly 
we like people hating on each other on this, even on pages 
where they know that is the dominant theme, because it helps 
with advertising dollars. It helps with the income side. These 
platforms are very aware of methods to be able to turn down 
volume, turn down hateful rhetoric, but they choose to be able 
to leave it up for advertising dollars.
    I do think that is something we need to continue to be able 
to engage in, to be able to show how this could be better, that 
you can get back to a point of sharing pictures with loved ones 
and what you had for dinner, though I have no idea why people 
care what picture you had for dinner. But there is a way to be 
able to turn down volume, and we expect platforms to be able to 
step up and do that.
    Mr. Chairman, thank you for holding this hearing.
    Chairman Peters. Thank you, Senator Lankford. Clearly, a 
lot of work ahead for us. I agree with you.
    Senator Romney, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR ROMNEY

    Senator Romney. Thank you, Mr. Chairman. I, like all 
Americans, are very concerned about what we are seeing in 
social media, the impact it is having on our democracy, the 
disinformation that we are seeing. I have grandkids whose 
parents are wisely telling their kids not to get on social 
media. They are not giving them smartphones until they get a 
good deal older.
    But frankly, as I have listened to the conversation this 
morning, it strikes me that this is a story we have heard 
before in a different context. The idea that there are 
companies that are trying to give people what they want to 
read, well, that is what newspapers do. There are companies 
that are trying to let people hear what they want to hear; that 
is what radio stations do. There are TV stations that have 
something known as Nielsen, which tells them what people watch, 
and they found more salacious things, I presume, get more 
eyeballs.
    I have heard on some of the nightly shows that they can 
look at which guest comes in, what that guest is wearing, and 
they can see their ratings going up based upon the gender of 
the guest and what they are wearing.
    I mean, this is not something that is just particular to 
social media. It is something that is part of our entire media 
system. The idea that media companies, like social media 
companies, are trying to maximize their profitability, so is 
TV, radio, newspapers, magazines. They are trying to stay in 
business and maximize their revenue.
    Disinformation. Have you ever read the National Enquirer or 
the Star? I mean, those are out there. They are at the 
newsstand. People can pick them up. There is all sorts of 
disinformation.
    How about political bias? How about Fox, MSNBC, New York 
Times, and Washington Examiner? They have a point of view. They 
have a bias.
    And we say, people only have access to those sites. 
Actually, there are a lot of social media sites out there, and 
more are being introduced, and there will be more introduced 
over time.
    The idea that saying, hey, we are going to subject them--we 
are going to break them up for antitrust reasons. Do not 
forget, data and social media is international. It is not like 
it is just a U.S. enterprise. TikTok is owned by the Chinese. 
We are not going to break them up, even though some of us might 
prefer that we could. We have to think about the competition 
coming from around the world.
    I must admit I have a hard time seeing how we are going to 
change the algorithms. If we are going to change the algorithms 
for social media, how about changing the algorithms for Fox, 
MSNBC, the New York Times, NBC, and CBS? I mean, each of them 
have their own bias. They have their own way of assessing how 
they get the most eyeballs, how they get the most clicks, if 
you will, even if it is just the channel changer. I do not see 
how that works.
    Likewise, to say we are going to set the standards for what 
is truth and not truth, I do not know how government does that 
consistent with the First Amendment.
    I understand why a CBS decides what they are going to show 
and why a New York Times decides what they are going to print. 
That is their standards. By the way, their people can respond 
to it, and people can boycott or stop subscribing as a result 
of what they are saying.
    This is an area, in my opinion, that is extraordinarily 
fraught for government action. What we can do that does not 
violate the freedom of expression, of which we are intent on 
having in our country, is something which I think we have to be 
careful in considering.
    I would note that what is unusual about social media, among 
many things, is the precision with which data is able to be 
gathered and people are able to be targeted. The New York Times 
may say, I want to get a left-of-center subscriber base, but 
they cannot get it literally down to the home to print the 
articles that you might want to watch although as they go 
online, and increasingly they are an online forum, they will be 
able to provide the articles that you want to watch and will 
compete on that basis.
    Anyway, I guess I raise a couple of things as possibilities 
in a topic I know very little about but care very deeply about 
because of my kids and grandkids. That is, one, to require 
social media companies to certify that a person who is posting 
or commenting is an actual person, so very simply saying you 
have a responsibility as a social media company to determine 
if, ``JAB123'' is actually a human being and also what country 
they are from perhaps but whether it is a government or a 
corporation but whether it is human being. That would be No. 1, 
and that is probably the easiest constitutionally.
    The other is something I would like to do. I would like to 
require actually people to be identified as to who they are, 
what their name is, if not their address, at least their name 
so people are responsible for what they post and what they 
comment on. I think that would fly in the face of the First 
Amendment and the right to be anonymous.
    Perhaps having more emphasis on ``blue checks,'' if you 
will, verifying people and really insisting that social media 
companies encourage more people to be identified and perhaps 
even having an alternative site, which is, if you will, there 
is Facebook where there is no blue check, but then there is 
another portion of Facebook which is all blue check. To comment 
or to post, you have to have a blue check on that particular 
wave. That would give people some confidence that there is a 
real human being that is willing to stand behind that comment.
    Those are the only two ideas I could come up with that I 
thought would be helpful here and not fly in the face of the 
same challenge we have with all of our media, that there is 
disinformation that we do not make illegal, that there is bias 
that we do not make illegal, that there is profit motive that 
we do not make illegal, that there are algorithms that draw 
more articles that people want to read that we do not make 
illegal.
    I do not see the course that you all are talking about as 
being consistent with what the rest of our media system is 
providing.
    Do any of you have any comments about those ideas that I 
suggest? I am concerned about the topic, but I just cannot seem 
to find a way to resolve some of the problems we describe.
    Yes, Dr. Persily?
    Mr. Persily. Yes.
    Ms. Kornbluh. Can I just try real quick? Just because I 
worked at Nielsen, so you are absolutely right, and I think it 
is great that you are pulling us back to this history of 
newspapers and broadcast. What is different, I think, is that 
in the newspaper, first of all, speakers did not have this much 
power. That is true in broadcast, too. We had ownership 
restrictions. You could not own a newspaper and a broadcast in 
the same market. You could not own----
    Senator Romney. But you could have the whole country, like 
Sinclair does.
    Ms. Kornbluh. But you could not in the past. We tried to 
deal with some of these issues but mostly through transparency. 
Right? So there is a masthead.
    Senator Romney. Yes.
    Ms. Kornbluh. There are codes and standards that you hold 
yourself to if you are a newspaper. If you are a broadcaster, 
we had the payola rules so that the DJ would have to reveal if 
he was being paid to play that record. The political ads that 
you run, you had to be transparent and say, I paid for these 
ads.
    What we are seeing on social media, I think, is we have not 
worked through those systems. We do not have that transparency. 
That buyer cannot beware because they just do not know who is 
pushing this at me, why are they pushing it at me. The codes 
and standards that they proclaim they are not really honoring. 
I have seen----
    Senator Romney. Yes, the transparency, that is the 
direction I am going is transparency. Yes.
    Ms. Kornbluh. I think what you are talking about is really 
interesting. I might tweak it a little bit and say, if you are 
Facebook and you say you have a real name policy, then you 
better have a real name policy. If you have a blue checkmark 
and it says it is a human, it better be a human. If it is a 
bot, it should be labeled as a bot.
    Senator Romney. Yes.
    Ms. Kornbluh. If it is a deepfake, you need to let people 
know it is a deepfake.
    Senator Romney. Totally agree.
    Ms. Kornbluh. That is really consistent, I think, with the 
way we approach--with the way the industry had norms and 
standards in the journalist industry and in broadcasting, what 
the public interest standard required. In fact, those kinds of 
user empowering----
    Senator Romney. That course makes sense.
    Ms. Kornbluh. Exactly.
    Senator Romney. Yes, sir?
    Mr. Persily. I agree with that. Just in terms of the 
differences with TV, which is that we know what Tucker Carlson 
or Rachel Maddow is saying every night. There is a record of 
it. We know who was able to see it. We do not know that with 
speakers on the internet. Right?
    That is where the transparency legislation would come in--
and I attached some to my testimony--because we need to figure 
out sort of who saw what, when, and how. Right? With television 
and other kinds of broadcast media and even some print media, 
we can figure that out, but not with social media platforms.
    Senator Romney. Yes. Thank you.
    Mr. Chairman, thank you.
    Chairman Peters. Thank you, Senator Romney.
    Senator Padilla, you are recognized for your questions.

              OPENING STATEMENT OF SENATOR PADILLA

    Senator Padilla. Thank you, Mr. Chair.
    Let us just jump right into it. When I was Secretary of 
State of California, prior to joining the U.S. Senate, I saw 
more than our fair share of bad actors seeking to discourage 
communities, particularly communities of color, from exercising 
their right to vote by gaming social media and exploiting gaps 
in trusted sources and data voids.
    Like many of you, I am alarmed by the recent Facebook 
Papers and, in particular, their absolute failure at Facebook 
to invest in integrity systems responsive to non-English 
languages and cultures around the world and right here in the 
United States as well.
    While the focus of today's hearing is on social media, I 
hope we can keep in mind the broader information ecosystem. We 
need to equip our kids and neighborhoods with media and 
information literacy skills. We need to address the collapse of 
local journalism--and I will be asking a question about that 
here in a minute--that are expanding news deserts across the 
country.
    Now this hearing is about platform design choices, and it 
would be an oversight not to reflect on user data. User 
personal data is indeed what fuels targeting on social media, 
informing the content users see and how ads are targeted to 
them. It is all driven by their data.
    Now strong data privacy protections may help address some 
of the unhealthy dynamics that we see online. We have been 
talking about a national privacy law for a long time, and I 
think it is time that Congress finally gets it done.
    Question for Dr. Persily: How can a strong privacy law 
reduce the risk of echo chambers, micro-targeting of 
disinformation or exploitative advertising which targets 
specific individuals or groups based on profiling?
    Mr. Persily. Thank you. Thank you, my Senator, for that 
question. It is good to see you. I hope to see you back in 
California.
    As I mentioned before, we tend to think of internet 
regulation in two domains: One is on speech, sort of 
explicitly, and Communications Decency Act (CDA) 230 reform is 
one of them. Then kind of structural or infrastructure design 
questions, and privacy would be one of those. But I actually 
think that we need to start recognizing that they sort of bleed 
over into each other.
    I think you are right to point out that through national 
privacy legislation and regulating the kind of data that the 
firms can collect, that we will be able to get at some of these 
problems because if you think that part of the problem here is 
the micro-targeting of messages that necessarily select out 
audiences for manipulation and persuasion and the like, that it 
is only enabled because of the amount of data that the firms 
have. If we had rules on what particularly the big platforms 
could do in terms of collecting data, I think it would go a 
long way in addressing some of these speech problems as well.
    Senator Padilla. Thank you. Now on an extended topic 
because I mentioned reporting of the Facebook Papers reveals an 
abdication of responsibility to meet the needs of non English 
speaking Facebook users around the world, and it is happening 
here in the United States as well. We are blessed with a very 
diverse population. Spanish-language disinformation about how 
to vote, where to vote, when to vote, et cetera, ran rampant on 
platforms in 2020 as compared to similar content in English.
    It is not limited to just election information, by the way. 
I saw it significantly when we were doing census outreach and 
assistance at that critical time in 2020. We continue to see it 
in regards to the COVID-19 pandemic, the safety of vaccines, et 
cetera. It is absolutely unacceptable.
    Question for Dr. O'Neil: Why do you think platforms are 
failing even more for non-English language speakers, and in 
what ways can Congress be helpful in this space?
    Ms. O'Neil. Thank you for the question. You are absolutely 
right that they are failing in non-English language spaces. In 
India, which is a huge problem as we have read about in the 
last few days, but for many months and years we have known 
this, there are just too many language dialects. Facebook just 
does not want to hire people to know those languages. It would 
be very expensive.
    It is a cost issue on top of the fact that I already 
mentioned, that the filters for hateful or extreme content are 
essentially keyword searches, so you need to understand what 
keywords to search for. You need a lot of experts working full-
time on this, and they just simply do not want to pay for that.
    It would be very expensive. So it is clear.
    But I want to make it also clear that there is no simple 
solution. I am not suggesting that they are avoiding doing 
something simple to solve these problems. This is actually 
really hard. Their mistake is not that they are not doing it 
well. It is that they are pretending they can do it. They 
simply cannot do it because of what I have said before, that AI 
does not understand truth so it is just simply looking for 
keywords.
    To the extent that Facebook cares about looking good, they 
care much more about looking good to English-speaking Americans 
and to people like you.
    I would say one quick story. I gave a talk in the Ukraine 
recently, and one of the audience members was a parliament 
member of the Ukraine. She said, what can we do here in the 
Ukraine about Russian propaganda that undermines people's trust 
in our elections?
    I was like, wow, I really do not know. I mean, you have 
even less power over Facebook than the Senators in the U.S. 
Senate. It is a really important question.
    Senator Padilla. I will just add to your commentary about 
looking good in front of Americans or looking good in front of 
people like us, members of the U.S. Senate. Sadly, looking good 
in front of investors and Wall Street seems to trump it all.
    My final question in the time remaining: We know today's 
information ecosystem is complex. In addition to facing 
organized propaganda campaigns, social media users encounter 
more content at higher speeds. Right? The innovation technology 
has a role to play here.
    I worry that efforts to help communities critically engage 
with information is not keeping pace. It is also not lost on me 
that we have seen an explosion of propaganda campaigns aimed at 
manipulating and intimidating communities online while we are 
in the midst of a collapse of local journalism and independent 
media.
    My final question is for Ambassador Kornbluh. I welcome any 
thoughts you may have on how the shuttering of local news 
outlets has impacted how users engage with content that they 
consume online.
    Ms. Kornbluh. That is such an important question. As you 
say, it creates this vacuum, and people are served things 
online. Again, I think we have to underscore that so much of 
what happens online is manipulation. People do not know what 
they are being served or who is behind it.
    There are these pretend local outlets that they see online 
that seem to have a name that suggests that they are local, but 
they in fact are often controlled centrally. The news stories 
can even be constructed by AI. They think they are getting 
local news, but they are actually being fed information that 
serves a political interest or a financial interest, and they 
are not aware of it. There is no alternative, so they do not 
have access to the civic information that they need to be a 
citizen.
    The Secretary of State of Colorado just made a really 
interesting point a couple of weeks ago. She said, if I am 
standing up at a podium and having a press conference, and the 
voters in my State are reading about, something completely 
different, SharpieGate or whatever it is, online, I am not in 
conversation with them because she is communicating to them 
over these social media platforms and that is sort of a 
funhouse mirror of what is going on.
    I think we really have to think about how is the civic 
information, public health information, election administration 
information, how is it going to get to citizens at a time when 
local news is so undermined.
    I should say part of the reason local news is undermined is 
because it was supported by advertising and all those 
advertising dollars have now gone to the platforms. There is no 
revenue base for local news. So this is a problem.
    It is really a fundamental democracy problem. The press is 
mentioned in the Constitution. It is something we really have 
to address.
    Senator Padilla. Thank you very much.
    Thank you, Mr. Chair.
    Chairman Peters. Thank you, Senator Padilla.
    I would like to take this opportunity to thank once again 
our witnesses for joining us here today. I think I speak for 
the entire Committee when we say we appreciate your very unique 
insight and expertise and helping us examine this critical 
issue and navigate the tough challenge today.
    Today's hearing provided an opportunity for us to learn 
about the role social media platforms play in the amplification 
of extremist content, including White Nationalist and anti 
government ideologists or ideologies. We heard expert testimony 
about how their algorithms and recommendation tools are driving 
extreme content users, how that exposure to harmful content can 
translate to real-world violence, and how their business models 
built on user engagement and targeted advertising appear to 
prioritize profits over safety.
    The connection between extremist content on social media 
platforms and domestic terrorist attacks in our communities is, 
without question, a national security threat and one this 
Committee will continue to examine. Our next steps will be to 
include a hearing from the social media companies themselves, 
and we will work to bring more transparency to this pressing 
issue. I am also seeking information from both the Department 
of Homeland Security and the FBI on this threat and will be 
looking for needed reforms.
    The record for this hearing will remain open for 15 days, 
until 5 p.m. on November 12th, 2021, for the submission of 
statements and questions for the record.
    This hearing is now adjourned.
    [Whereupon, at 12:12 p.m., the Committee was adjourned.]

                            A P P E N D I X

                              ----------                              


[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]

                                 [all]