[House Hearing, 118th Congress]
[From the U.S. Government Publishing Office]
PRESERVING FREE SPEECH AND REINING IN BIG TECH CENSORSHIP
=======================================================================
HEARING
BEFORE THE
SUBCOMMITTEE ON COMMUNICATIONS AND TECHNOLOGY
OF THE
COMMITTEE ON ENERGY AND COMMERCE
HOUSE OF REPRESENTATIVES
ONE HUNDRED EIGHTEENTH CONGRESS
FIRST SESSION
__________
MARCH 28, 2023
__________
Serial No. 118-15
[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]
Published for the use of the Committee on Energy and Commerce
govinfo.gov/committee/house-energy
energycommerce.house.gov
__________
U.S. GOVERNMENT PUBLISHING OFFICE
54-066 PDF WASHINGTON : 2023
COMMITTEE ON ENERGY AND COMMERCE
CATHY McMORRIS RODGERS, Washington
Chair
MICHAEL C. BURGESS, Texas FRANK PALLONE, Jr., New Jersey
ROBERT E. LATTA, Ohio Ranking Member
BRETT GUTHRIE, Kentucky ANNA G. ESHOO, California
H. MORGAN GRIFFITH, Virginia DIANA DeGETTE, Colorado
GUS M. BILIRAKIS, Florida JAN SCHAKOWSKY, Illinois
BILL JOHNSON, Ohio DORIS O. MATSUI, California
LARRY BUCSHON, Indiana KATHY CASTOR, Florida
RICHARD HUDSON, North Carolina JOHN P. SARBANES, Maryland
TIM WALBERG, Michigan PAUL TONKO, New York
EARL L. ``BUDDY'' CARTER, Georgia YVETTE D. CLARKE, New York
JEFF DUNCAN, South Carolina TONY CARDENAS, California
GARY J. PALMER, Alabama RAUL RUIZ, California
NEAL P. DUNN, Florida SCOTT H. PETERS, California
JOHN R. CURTIS, Utah DEBBIE DINGELL, Michigan
DEBBIE LESKO, Arizona MARC A. VEASEY, Texas
GREG PENCE, Indiana ANN M. KUSTER, New Hampshire
DAN CRENSHAW, Texas ROBIN L. KELLY, Illinois
JOHN JOYCE, Pennsylvania NANETTE DIAZ BARRAGAN, California
KELLY ARMSTRONG, North Dakota, Vice LISA BLUNT ROCHESTER, Delaware
Chair DARREN SOTO, Florida
RANDY K. WEBER, Sr., Texas ANGIE CRAIG, Minnesota
RICK W. ALLEN, Georgia KIM SCHRIER, Washington
TROY BALDERSON, Ohio LORI TRAHAN, Massachusetts
RUSS FULCHER, Idaho LIZZIE FLETCHER, Texas
AUGUST PFLUGER, Texas
DIANA HARSHBARGER, Tennessee
MARIANNETTE MILLER-MEEKS, Iowa
KAT CAMMACK, Florida
JAY OBERNOLTE, California
------
Professional Staff
NATE HODSON, Staff Director
SARAH BURKE, Deputy Staff Director
TIFFANY GUARASCIO, Minority Staff Director
Subcommittee on Communications and Technology
ROBERT E. LATTA, Ohio
Chairman
GUS M. BILIRAKIS, Florida DORIS O. MATSUI, California
TIM WALBERG, Michigan Ranking Member
EARL L. ``BUDDY'' CARTER, Georgia, YVETTE D. CLARKE, New York
Vice Chair MARC A. VEASEY, Texas
NEAL P. DUNN, Florida DARREN SOTO, Florida
JOHN R. CURTIS, Utah ANNA G. ESHOO, California
JOHN JOYCE, Pennsylvania TONY CARDENAS, California
RANDY K. WEBER, Sr., Texas ANGIE CRAIG, Minnesota
RICK W. ALLEN, Georgia LIZZIE FLETCHER, Texas
TROY BALDERSON, Ohio DEBBIE DINGELL, Michigan
RUSS FULCHER, Idaho ANN M. KUSTER, New Hampshire
AUGUST PFLUGER, Texas ROBIN L. KELLY, Illinois
DIANA HARSHBARGER, Tennessee FRANK PALLONE, Jr., New Jersey (ex
KAT CAMMACK, Florida officio)
JAY OBERNOLTE, California
CATHY McMORRIS RODGERS, Washington
(ex officio)
C O N T E N T S
----------
Page
Hon. Robert E. Latta, a Representative in Congress from the State
of Ohio, opening statement..................................... 1
Prepared statement........................................... 4
Hon. Doris O. Matsui, a Representative in Congress from the State
of California, opening statement............................... 9
Prepared statement........................................... 11
Hon. Cathy McMorris Rodgers, a Representative in Congress from
the State of Washington, opening statement..................... 15
Prepared statement........................................... 17
Hon. Frank Pallone, Jr., a Representative in Congress from the
State of New Jersey, opening statement......................... 21
Prepared statement........................................... 23
Witnesses
Seth Dillon, Chief Executive Officer, Babylon Bee................ 26
Prepared statement........................................... 29
Answers to submitted questions............................... 468
Jay Bhattacharya, M.D., Ph.D., Professor of Health Policy,
Stanford University............................................ 34
Prepared statement........................................... 36
Answers to submitted questions............................... 471
Spencer Overton, Patricia Roberts Harris Research Professor,
George Washington University Law School........................ 39
Prepared statement........................................... 41
Answers to submitted questions............................... 482
Michael Shellenberger, Founder and President, Environmental
Progress....................................................... 53
Prepared statement........................................... 55
Answers to submitted questions............................... 489
Submitted Material
Inclusion of the following was approved by unanimous consent.
Letter of March 11, 2022, from Mrs. Rodgers, et al., to Vivek H.
Murthy, U.S. Surgeon General................................... 105
Letter of July 22, 2021, from Mrs. Rodgers, et al., to President
Biden.......................................................... 110
Report of the Department of Computer Science, North Carolina
State University, ``A Peek into the Political Biases in Email
Spam Filtering Algorithms During US Election 2020,'' by Hassan
Iqbal, et al., March 31, 2022.................................. 127
Article of January 20, 2023, ``BIAS! Google Search Results for
`Pregnancy' Prop Up Planned Parenthood Day Before March for
Life,'' by Gabriela Pariseau, Media Research Center NewsBusters 137
Commentary of September 23, 2021, ``Big Tech's Conservative
Censorship Inescapable and Irrefutable,'' by Robert B. Bluey,
Heritage Foundation............................................ 140
Article of February 8, 2023, ``CENSORSHIP MILESTONE: CensorTrack
Hits 5,000 Documented Cases of Big Tech Censorship,'' by
Gabriela Pariseau, FreeSpeech America.......................... 145
Article of October 25, 2022, ``Google CAUGHT Manipulating Search,
Buries GOP Campaign Sites in 83% of Top Senate Races,''
FreeSpeech America............................................. 149
Report of the Department of Health and Human Services, COVID-19
Misinformation from Official Sources During the Pandemic, May
2, 2022........................................................ 159
Article of June 23, 2022, ``MRC CensorTrack Tallies 156 Times Big
Tech Censored Users Who Affirm Science on Gender,'' by Gabriela
Pariseau, FreeSpeech America................................... 165
Letter of September 13, 2022, from Mrs. Rodgers, et al., to
President Biden................................................ 170
Report by Napa Legal, ``De-Platforming: The Threat Facing Faith-
Based Organizations,'' November 4, 2023........................ 174
Statement of Michael Shellenberger before the House Select
Committee on the Weaponization of the Federal Government, March
9, 2023\1\
Article of November 2, 2022, ``STILL AT IT! Google Buries Big Tech
Critics' Campaign Websites Ahead of Midterms,'' by Brian
Bradley, Media Research Center NewsBusters..................... 180
Article of February 9, 2022, ``STUDY: CensorTrack Documents Over 800
Cases of Big Tech Censoring COVID-19 Debate,'' by Joseph
Vasquez and Gabriela Pariseau, FreeSpeech America.............. 187
Article of December 15, 2022, ``Twitter's Secret Blacklists,'' by
Bari Weiss, et al., Free Press................................. 198
Article of December 15, 2022, ``Why Twitter Really Banned
Trump,'' by Bari Weiss, et al., Free Press..................... 210
Commentary of January 8, 2023, ``The White House Covid Censorship
Machine,'' by Jenin Younes and Aaron Kheriaty, Wall Street
Journal........................................................ 221
Article of December 23, 2022, ``Top 6 Worst CensorTrack Cases of
2022: Big Tech Censored God, Veterans and So Much More,'' by
Gabriela Pariseau, Media Research Center NewsBusters........... 225
Proposed Findings of Fact, State of Louisiana, State of Missouri,
et al., v. Joseph R. Biden, Jr.\2\
Report of the Center for Technology & Society, Anti-Defamation
League, ``Hate Is No Game: Hate and Harassment in Online Games
2022'' December 2022\3\
Report of the Center for Technology & Society, Anti-Defamation
League, ``Online Hate and Harassment: The American Experience
2022''\4\
Report of the Center for Technology & Society, Anti-Defamation
League, ``Social Media Election Policies: The Good, the Bad and
the Misinformed,'' February 23, 2023\5\
Article of March 22, 2023, ``DeSantis to Expand `Don't Say Gay'
to all grades,'' by Anthony Izaguirre, Associated Press........ 230
Article of October 27, 2022, ``Section 230 shields TikTok in
child's `Blackout Challenge' Death Lawsuit,'' by Ashley
Belanger, Ars Technica......................................... 232
Article of March 23, 2023, ``Book ban attempts reach
`unparalleled' 20-year high in 2022,'' by Jacob Knutson, Axios. 234
Commentary of March 17, 2021, ``Back to the future for Section
230 reform,'' by Mark MacCarthy, Brookings Institution......... 235
Article of May 28, 2020, ``Trump says right-wing voices are being
censored. The data says something else,'' by Oliver Darcy, CNN
Business....................................................... 240
Article of February 1, 2021, ``Claim of anti-conservative bias by
social media firms is baseless, report finds,'' by Adam
Gabbatt, The Guardian.......................................... 242
Commentary of March 10, 2023, ``The drag show bans sweeping the
US are a chilling attack on free speech,'' by Suzanne Nossel,
The Guardian................................................... 243
Article of March 23, 2023, ``House Panel Targets Universities,
Scholars,'' by Katherine Knott, Inside Higher Ed............... 245
Commentary of August 14, 2019, ``Herrick v. Grindr: Why Section
230 of the Communications Decency Act Must be Fixed,'' by
Carrie Goldberg, Lawfare....................................... 248
Article of February 8, 2022, ``As YouTube and Google ban Dan
Bongino for misinformation, Facebook profits from helping him
promote the same false and sensational content'' by Kayla
Gogarty, Media Matters for America............................. 254
----------
\1\ The statement has been retained in committee files and is available
at https://docs.house.gov/meetings/IF/IF16/20230328/115561/HHRG-118-
IF16-20230328-SD012.pdf.
\2\ The court document has been retained in committee files and is
available at https://docs.house.gov/meetings/IF/IF16/20230328/115561/
HHRG-118-IF16-20230328-SD019.pdf.
\3\ The report has been retained in committee files and is available at
https://docs.house.gov/meetings/IF/IF16/20230328/115561/HHRG-118-IF16-
20230328-SD020.pdf.
\4\ The report has been retained in committee files and is available at
https://docs.house.gov/meetings/IF/IF16/20230328/115561/HHRG-118-IF16-
20230328-SD021.pdf.
\5\ The report has been retained in committee files and is available at
https://docs.house.gov/meetings/IF/IF16/20230328/115561/HHRG-118-IF16-
20230328-SD022.pdf.
Article of March 16, 2023, ``Beyond Andrew Tate: Meet the
misogynistic `manosphere' influencers proliferating across
social media,'' by Justin Horowitz, Media Matters for America.. 268
Article of December 20, 2021, ``Breitbart thrives on Facebook, in
part because the platform rewards sensational photos and
videos,'' by Kayla Gogarty, Media Matters for America.......... 283
Article of December 15, 2022, ``Elon Musk continues to cater to
far-right Twitter accounts promoting bigotry, extremism, and
misinformation,'' by Charis Hoard, et al., Media Matters for
America........................................................ 295
Article of November 22, 2022 (updated March 24, 2023), ``Elon
Musk is unilaterally reinstating banned Twitter accounts,
despite assuring civil rights groups and advertisers that he
wouldn't,'' by Kayla Gogarty and Ruby Seavey, Media Matters for
America\6\
Article of April 12, 2021, ``Facebook has a problem with
sensational and misleading content despite VP Nick Clegg's
claims,'' by Kayla Gogarty, Media Matters for America.......... 304
Article of October 8, 2021, ``Facebook tweaked its News Feed
algorithm, and right-leaning pages are reaping the benefits,''
by Kayla Gogarty, Media Matters for America.................... 315
Article of November 1, 2022, ``Misinformation about the midterm
elections is already flourishing on TikTok,'' by Olivia Little,
Media Matters for America...................................... 328
Article of December 15, 2022, ``Musk's `Twitter Files' repackage
debunked claims to falsely allege crime, collusion, and
conspiracy,'' by Natalie Mathes and Camden Carter, Media
Matters for America............................................ 331
Article of January 18, 2023, ``Report: January 6 investigators
confirm that social media platforms `bent their rules to avoid
penalizing conservatives' ahead of the insurrection,'' by
Spencer Silva, Media Matters for America....................... 340
Article of July 6, 2023 (updated July 8, 2023), ``TikTok
continues to allow videos of neo-Nazi to go viral,'' by Abbie
Richards, Media Matters for America............................ 345
Article of August 18, 2021, ``TikTok's algorithm is amplifying
COVID-19 and vaccine misinformation,'' by Olivia Little and
Abbie Richards, Media Matters for America...................... 348
Article of April 5, 2022, ``Viral Twitter account `Libs of
TikTok' calls for all openly LGBTQ teachers to be fired,'' by
Madeline Peltz, Media Matters for America...................... 356
Commentary of December 27, 2022, ``Tucker Carlson just
supercharged the Libs of TikTok anti-LGBTQ bigotry,'' by
Zeeshan Aleem, MSNBC........................................... 357
Article of June 17, 2022, ``Anti-LGBTQ threats, fueled by
internet's far right `machine,' shut down trans rights and drag
events,'' by Ben Collins and Doha Madani, NBC News............. 359
Article of November 21, 2022, ``Hours After Club Q Shooting,
Right-Wing Account Attacks Colorado Drag Group,'' by Shira Li
Bartov, Newsweek............................................... 362
Article of March 6, 2021, ``Far-Right Misinformation Is Thriving
On Facebook. A New Study Shows Just How Much,'' by Michel
Martin and Will Jarvis, ``All Things Considered,'' NPR......... 363
Article of October 22, 2021 (updated October 27, 2021), ``Eating
Disorders and Social Media Prove Difficult to Untangle,'' by
Kate Conger, et al., New York Times............................ 365
Article of December 2, 2022, ``Hate Speech's Rise on Twitter Is
Unprecedented, Researchers Find,'' by Sheera Frenkel and Kate
Conger, New York Times......................................... 369
Report of the NYU Stern Center for Business and Human Rights,
``False Accusation: The Unfounded Claim that Social Media
Companies Censor Conservatives,'' by Paul M. Barrett and J.
Grant Sims, February 2021\7\
----------
\6\ The article has been retained in committee files and is available
at https://docs.house.gov/meetings/IF/IF16/20230328/115561/HHRG-118-
IF16-20230328-SD036.pdf.
\7\ The report has been retained in committee files and is available at
https://docs.house.gov/meetings/IF/IF16/20230328/115561/HHRG-118-IF16-
20230328-SD051.pdf.
Report of PEN America, ``Banned in the USA: The Growing Movement
to Censor Books in Schools,'' by Jonathan Friedman and Nadine
Farid Johnson, September 19, 2022\8\
Link to Index of Educational Gag Orders, PEN America\9\.......... 374
Article of November 29, 2022, ``Twitter stops enforcing Covid-19
misinformation policy,'' by Rebecca Kern, Politico............. 375
Article of September 26, 2020, ``Why the right wing has a massive
advantage on Facebook,'' by Alex Thompson, Politico............ 377
Article of March 22, 2023, ``Republican Rep. Jim Jordan Issues
Sweeping Information Requests to Universities Researching
Disinformation,'' by Andrea Bernstein, ProPublica.............. 380
Article of February 8, 2023, ``Twitter Kept Entire `Database' of
Republican Requests to Censor Posts,'' by Adam Rawnsley and
Asawin Suebsaeng, Rolling Stone................................ 383
Article of January 23, 2023, ``Justices request federal
government's views on Texas and Florida social-media laws,'' by
Amy Howe, SCOTUSblog........................................... 386
Statement of Media Matters for America before the Subcommittee on
Communications and Technology, March 28, 2023.................. 388
Article of March 22, 2023, ``Matt Taibbi Can't Comprehend That
There Are Reasons To Study Propaganda Information Flows, So He
Insists It Must Be Nefarious,'' by Mike Masnick, TechDirt...... 393
Article of July 14, 2022, ``Omegle can be sued for matching child
with sexual predator, says court,'' by Adi Robertson, The Verge 397
Article of April 20, 2022, ``Unmasked Libs of TikTok Author Draws
Support From Conservatives, Makes Deal With Babylon Bee CEO,''
by Andi Ortiz, The Wrap........................................ 398
Article of September 28, 2022, ``Meta's Facebook Algorithms
`Proactively' Promoted Violence Against the Rohingya, New
Amnesty International Report Asserts,'' by Chad De Guzman, Time 400
Article of January 6, 2023, ```Urgent need' for more
accountability from social media giants to curb hate speech: UN
experts,'' UN News............................................. 404
Article of March 1, 2023, ``A Texas Republican Wants to Ban
People From Reading About How to Get an Abortion Online,'' by
Bess Levin, Vanity Fair........................................ 411
Article of March 1, 2023, ``The GOP is weaponizing LibsOfTikTok's
Anti-`Woke' Hate,'' by David Gilbert, Vice..................... 412
Article of October 18, 2022, ``An explosion of culture war laws
is changing schools. Here's how,'' by Hannah Natanson, et al.,
Washington Post................................................ 416
Article of March 20, 2023, ``Antisemitic tweets soared on Twitter
after Musk took over, study finds,'' by Cristiano Lima,
Washington Post................................................ 421
Article of April 19, 2022, ``Meet the woman behind Libs of
TikTok, secretly fueling the right's outrage machine,'' by
Taylor Lorenz, Washington Post................................. 423
Article of July 22, 2022, ``South Carolina bill outlaws websites
that tell how to get an abortion,'' by Cat Zakrzewski,
Washington Post................................................ 427
Article of September 2, 2022, ``Twitter account Libs of TikTok
blamed for harassment of children's hospitals,'' by Taylor
Lorenz, et al., Washington Post................................ 429
Article of August 4, 2021, ``Facebook Disables Access for NYU
Research Into Political-Ad Targeting,'' by Meghan Bobrowsky,
Wall Street Journal............................................ 433
Article of January 31, 2021, ``Facebook Knows Calls for Violence
Plagued `Groups,' Now Plans Overhaul,'' by Jeff Horwitz, Wall
Street Journal................................................. 437
----------
\8\ The report has been retained in committee files and is available at
https://docs.house.gov/meetings/IF/IF16/20230328/115561/HHRG-118-IF16-
20230328-SD052.pdf.
\9\ The index can be found at https://docs.google.com/spreadsheets/d/
1Tj5WQVBmB6SQg-zP--M8uZsQQGH09TxmBY73v23zpyr0/edit#gid=1505554870.
Summary, ``The Facebook Files: A Wall Street Journal
Investigation,'' Wall Street Journal........................... 445
Article of October 11, 2017, ``Trump suggests challenging TV
network licenses over `fake news,''' by David Shepardson,
Reuters........................................................ 450
Article of March 26, 2020, ``Trump campaign issues cease-and-
desist letters over ad highlighting Trump's coronavirus
response,'' by Peter Weber, Yahoo News......................... 454
Article of February 28, 2023, ``Trump asked Twitter to take down
`derogatory' tweet from Chrissy Teigen: whistleblower,'' by
Jared Gans, The Hill........................................... 456
Memo of March 17, 2023, ``Background on the SIO's Projects on
Social Media,'' Stanford Internet Observatory.................. 458
Statement of Yael Eisenstat, Vice President, Center for
Technology and Society, Anti-Defamation League, before the
Subcommittee on Communications and Technology, March 28, 2023.. 461
PRESERVING FREE SPEECH AND REINING IN BIG TECH CENSORSHIP
----------
TUESDAY, MARCH 28, 2023
House of Representatives,
Subcommittee on Communications and Technology,
Committee on Energy and Commerce,
Washington, DC.
The subcommittee met, pursuant to call, at 10:30 a.m., in
room 2322, Rayburn House Office Building, Hon. Robert Latta
(chairman of the subcommittee) presiding.
Members present: Representatives Latta, Bilirakis, Walberg,
Carter, Dunn, Curtis, Joyce, Weber, Allen, Balderson, Fulcher,
Pfluger, Harshbarger, Cammack, Obernolte, Rodgers (ex officio),
Matsui (subcommittee ranking member), Clarke, Veasey, Soto,
Eshoo, Cardenas, Craig, Kuster, and Pallone (ex officio).
Also present: Representative Johnson.
Staff present: Deep Buddharaju, Senior Counsel, Oversight
and Investigations; Slate Herman, Counsel, Communications and
Technology; Tara Hupman, Chief Counsel; Sean Kelly, Press
Secretary; Peter Kielty, General Counsel; Emily King, Member
Services Director; Giulia Leganski, Professional Staff Member,
Communications and Technology; Kate O'Connor, Chief Counsel,
Communications and Technology; Michael Taggart, Policy
Director; Dray Thorne, Director of Information Technology; Evan
Viau, Professional Staff Member, Communications and Technology;
Jennifer Epperson, Minority Chief Counsel, Communications and
Technology; Waverly Gordon, Minority Deputy Staff Director and
General Counsel; Tiffany Guarascio, Minority Staff Director;
Dan Miller, Minority Professional Staff Member; Joe Orlando,
Minority Senior Policy Analyst; Greg Pugh, Minority Staff
Assistant; Caroline Rinker, Minority Press Assistant; Michael
Scurato, Minority FCC Detailee; and Andrew Souvall, Minority
Director of Communications, Outreach, and Member Services.
Mr. Latta. Well, good morning. The subcommittee will come
to order. The Chair recognizes himself for an opening
statement.
OPENING STATEMENT OF HON. ROBERT E. LATTA, A REPRESENTATIVE IN
CONGRESS FROM THE STATE OF OHIO
Again, good morning, and welcome to today's hearing on
``Preserving Free Speech and Reining in Big Tech Censorship.''
I would like to begin this hearing with a simple statement:
Free speech is the cornerstone of democracy. In fact, it is
free speech that separates the United States from the
monarchies of yesterday and the authoritarian governments of
today.
When we discuss the importance of free speech in the 21st
century, it is impossible to ignore the large-scale online
platforms on which our ideas are shared and heard most
frequently: social media. For better or worse, social media has
fundamentally changed the way we communicate. It has allowed us
to connect with people all over the world and express our
thoughts to a wider audience than ever before. Its vast online
reach extends from coast to coast and across almost all
nations.
But as social media companies have grown over the years, so
has the influence of Big Tech. It is a scary truth, but these
companies have grown increasingly emboldened in their power to
influence public debate. In fact, Big Tech companies have the
ability to influence almost every part of our lives. They can
determine what a user sees, hears, or learns and can even
target what that user purchases online.
Now more than ever, we see online platforms engaging in the
wrong types of content moderation. This includes removing
content expressing opposing viewpoints that aids important
public discourse, and amplifying content that enables drug
trafficking, promotes self-harm, and endangers children.
In recent years, online platforms have had the capability
to remove duly elected officials and to block trusted news
stories from emerging. When this type of censorship is used to
silence dissenting voices, it can have a damaging effect on
democracy and public discourse. At the dawn of the internet,
Section 230 of the Communications Decency Act provided vital
protections for internet startups to engage in content
moderation and removal without fear of being sued for content
posted by their users.
Section 230 has been the foundation of the modern internet,
allowing the internet economy to bloom into what it has become
today. However, Section 230 is outdated. The law was enacted in
1996, when print newspapers were delivered to nearly every
household and before the creation of social media and the
explosion of online content. It has been interpreted by the
courts to provide a blanket liability shield to all online
platforms. As a result, it lacks the nuance needed to hold
today's digital world accountable, especially as the power of
AI-backed algorithms continues to evolve. Big Tech's role in
directing and amplifying the type of content that is served to
users is becoming increasingly apparent. While all tech
companies should strive to uphold American values in their
content moderation practices, not all tech companies face the
same challenges.
For instance, small businesses still need the protection of
Section 230 to grow into vibrant members of the e-commerce
community and to compete with Big Tech companies like Google
and Facebook. Small online businesses deserve the same
protections that Big Tech companies received when they first
started out. But as they grow, so does the responsibility to
protect our kids and all other users across America.
As this subcommittee continues to consider Section 230
reform legislation, we must strike a delicate balance. For too
long, Big Tech platforms have acted like publishers instead of
platforms for free speech and open dialogue, so they must be
treated as such. I look forward to hearing from our witnesses
and working with our colleagues to reform Section 230 so we can
hold Big Tech accountable and preserve Americans' freedom of
speech.
[The prepared statement of Mr. Latta follows:]
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Mr. Latta. I thank you all for being here today, and at
this time, I yield 5 minutes to the ranking member of the
subcommittee, the gentlelady from California.
OPENING STATEMENT OF HON. DORIS O. MATSUI, A REPRESENTATIVE IN
CONGRESS FROM THE STATE OF CALIFORNIA
Ms. Matsui. Thank you very much, Mr. Chairman. At last
week's TikTok hearing, there was bipartisan concern about the
rise in harmful content on the platform. While some of the
examples highlighted by Members were jarring, TikTok is by no
means unique. This hearing provides another chance to explore
those same concerns across the wider internet ecosystem.
The spread of misinformation, hate speech, and political
extremism online has been meteoric. During the early days of
the pandemic, hate speech targeting Chinese and other Asian
Americans boomed. One study from the AI company L1ght
documented a 900 percent increase in hate speech targeting
China and Chinese people. That same study showed that the
amount of traffic going to specific posts and hate sites
targeting Asians increased threefold over the same period.
But this increase wasn't limited to just racial
motivations. Young people of all backgrounds have been
subjected to some of the most appalling examples of
cyberbullying and hate speech. There was also a 70 percent
increase in instances of hate speech among teens and children
during the initial months of quarantine. But
that is not all. Political extremism and dangerous conspiracy
theories are also on the rise.
A study by DoubleVerify, a digital media and analytics
company, found that inflammatory and misleading news increased
83 percent year over year during the 2020 U.S. Presidential
election. And perhaps most disturbingly, hate speech tripled in
the 10 days following the Capitol insurrection compared with
the 10 days preceding that violence.
The week after the Capitol insurrection, the volume of
inflammatory politics and news content increased more than 20
percent week over week. So across all sectors, the amount of
online speech related to political extremism, race-based
violence, and the targeting of other protected classes is
growing.
The reason this increase is so concerning to me is that it
rarely stays online. A 2019 study by New York University
analyzed more than 530 million tweets published between 2011
and 2016 to investigate the connection between online hate
speech and real-world violence. Unsurprisingly, the study found
that more targeted discriminatory tweets posted in a city
correlated with a higher number of hate crimes. This backed
similar findings from studies in the U.K. and Europe.
This trend is backed up by the FBI's own real-world data on
hate crimes, which shows that their number has only increased.
This escalation isn't a one-way problem. Social media platforms
are taking daily steps to foment it and to see that it reaches
as many people as possible. The algorithms that pair harmful
content with the users it will resonate with most have
benefited from massive investments in R&D and personnel. In
many ways, these platforms are competing over the effectiveness
of their respective algorithms. They represent a conscious
choice by online platforms and one that I believe means they
must assume more responsibility and accountability for the
content they are actively choosing to promote.
In a 2020 academic article describing racial bias online,
Professor Overton notes that, through data collection and
algorithms that identify which users see suppressive ads,
social media companies make a material contribution to the
illegal racial targeting. This point is an important one.
Online platforms are making regular and conscious contributions
to the spread of harmful content.
This isn't about ideological preferences. It's about
profit. Simply put, online platforms amplify hateful and
misleading content because it makes them more money. And
without a meaningful reorganization of their priorities, their
behavior won't change. And that's where this subcommittee must
step in.
On a bipartisan basis, there is widespread agreement that
the protections outlined in Section 230 of the Communications
Decency Act need to be modernized, because continuing to accept
the status quo just isn't an option. Without bipartisan updates
to Section 230, it is naive to think large online platforms
will change their behavior. Their profit motive is too great,
and the structural oversight too weak. The discussion we will
have at today's hearing is an important one, and one that I
hope serves as a precursor to substantive bipartisan
legislation.
Section 230 needs to be reformed, and I am ready to get to
work.
With that, I yield the remainder of my time.
[The prepared statement of Ms. Matsui follows:]
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Mr. Latta. Thank you. The gentlelady yields back. The Chair
now recognizes the chair of the full committee, the gentlelady
from Washington, for 5 minutes for her opening statement.
OPENING STATEMENT OF HON. CATHY McMORRIS RODGERS, A
REPRESENTATIVE IN CONGRESS FROM THE STATE OF WASHINGTON
Mrs. Rodgers. Good morning, and thank you, Mr. Chairman. I
want to begin today by celebrating why Americans cherish our
most fundamental right of free speech. It is how we the people
innovate, create new things, make our own arguments stronger,
and engage in the battle of ideas to make our communities
better. Perhaps most importantly, it is the strongest tool
people have to hold the politically powerful accountable. It is
why regimes across the world shut down free speech, arrest
journalists, and limit people's rights to question authority.
Free speech is foundational to democracy. It is
foundational to America. Big Tech is shutting down free speech.
Its authoritarian actions violate Americans' most fundamental
rights, to engage in the battle of ideas and hold the
politically powerful accountable.
For the crime of posting content that doesn't fit the
narrative they want people to see, hear, or believe, Big Tech
is flagging, suppressing, and outright banning users from its
platforms. Today we are joined by several of these people who
have been silenced by Big Tech. They will have their voices
heard before this subcommittee.
Big Tech proactively amplifies its allies on the left while
weakening any dissent, creating a silo, an echo chamber, a
place where the only ``right'' ideas are those determined by a
faceless algorithm or a few corporate leaders. House Energy and
Commerce Republicans have repeatedly condemned these censorship
actions, even when the censored challenges to mainstream media
narratives turned out to be correct, as was the case with the
Hunter Biden laptop story.
What's worse is the Government collusion with Big Tech
companies to censor disfavored views and be the gatekeepers of
truth. Who deserves to be the arbiters of truth? Big Tech
companies and Government officials? That sounds like the
actions taken by the Chinese Communist Party. We had the CEO of
TikTok before this committee last week, where we exposed the
company's ties to the Chinese Communist Party and the
censorship TikTok does on its behalf. Let me be clear:
Government-sponsored censorship has no place in our country. It
never will. A healthy marketplace of ideas is
integral to everyday American life and a healthy democracy.
Social media is a place for us to connect with friends and
a place where we should be able to share our views and learn
from one another. Big Tech companies in America have benefitted
from the liability protections given to them by Congress in
1996 under Section 230 of the Communications Decency Act.
As a result, they should be a forum for public discourse and a
place for people to openly debate all ideas.
But instead, censorship on their platforms shuts down these
debates and risks a long-lasting stain on our society by
undermining the spirit of our First Amendment. At the same time
this censorship is happening, Big Tech is failing to invest in
tools to protect our kids. Snapchat, TikTok, Instagram, their
platforms are riddled with predators seeking to sell illicit
drugs laced with fentanyl and exploit our innocent children.
Over and over, I hear from parents who have lost a
child due to targeted content by a social media platform. And
yet instead of addressing this, Big Tech chooses to focus on
shutting down certain speech. As I've said before and I will
say it again, Big Tech remains my biggest fear as a parent, and
they need to be held accountable for their actions. President
Joe Biden and his administration are on a dangerous
authoritarian mission to institutionalize censorship of
American voices and control the narrative to benefit their
political agenda. They have admitted to flagging problematic
content for Big Tech companies to censor. The CDC, the Surgeon
General, the Department of Homeland Security, and--are any of
them working?
Mr. Latta. Mine is not.
Mrs. Rodgers. Well, we know that this administration sought
to establish a Disinformation Governance Board within the
Department of Homeland Security to monitor and censor Americans
online. This
hearing provides us an opportunity to hear from those that have
been silenced by Big Tech censorship. Americans must have their
voices heard, and I look forward to hearing from our witnesses.
Thank you, and I yield back.
[The prepared statement of Mrs. Rodgers follows:]
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Mr. Latta. Well, thank you very much. The gentlelady yields
back. And again, this is the Communications and Technology
Subcommittee, and we can't get our mics to work.
The Chair now recognizes the ranking member of the full
committee, the gentleman from New Jersey, for 5 minutes.
OPENING STATEMENT OF HON. FRANK PALLONE, Jr., A REPRESENTATIVE
IN CONGRESS FROM THE STATE OF NEW JERSEY
Mr. Pallone. Thank you, Chairman Latta. I have to say that
I am deeply disappointed with this hearing today. We could be
having a serious discussion about the need to reform Section
230 of the Communications Decency Act, but instead Republicans
have chosen to focus on so-called Big Tech censorship. This
hearing is nothing more than red meat for the extreme
conservative press, who will certainly eat it up. They will
share it on social media where studies show conservative voices
are dominant.
The voices of the Republican witnesses have been far from
silenced. They are incredibly popular on Big Tech platforms.
They are featured in countless videos on YouTube and TikTok.
They have books for sale on Amazon, websites and email
newsletters with paid subscribers. They are guests on popular
podcasts and regularly appear on right-wing cable and streaming
channels.
Say what you want about them, but they certainly aren't
censored. The Republican witnesses have engaged in
pseudoscience to minimize the worsening climate crisis and seed
dangerous ideas about COVID-19 and vaccines. One bankrolls
another social media personality whom he calls heroic for
spewing vile, anti-LGBTQ hate, resulting in harassment, threats
of violence, and intimidation across the country. And like the
Big Tech platforms themselves, I am sure they profit handsomely
from the controversy.
Now, that's not to say there isn't real censorship
happening across the country. But it's not the Democrats or the
tech platforms that are responsible. It's the Republicans. In
fact, the Republican Party is responsible for some of the most
egregious First Amendment violations and censorship that we
witnessed in years.
Republican-led States across the Nation have considered
bills that promote censorship and threaten free speech, giving
a vocal minority the power to impose their extreme beliefs on
everyone else in their community. They have banned books about
African-American history, suppressed information about safe
abortions, and demanded teachers don't say ``gay.'' Now, that
is real censorship, in my opinion.
What Republicans are trying to do here today is to force
private companies to carry content that is misinformation or
disinformation, dangerous or harmful. Companies have been
moderating content since the beginning of the internet. And
research has repeatedly refuted Republican claims of an
anticonservative bias in that moderation.
As I said, it is disappointing that we could not have had a
serious discussion about Section 230 reform. We all seem to
agree there is harmful content on these platforms that should
be taken down. Last week at the TikTok hearing, we were deeply
troubled when we saw an implied threat against the committee
with imagery of a gun. We also saw examples of disturbing
videos glorifying suicide and eating disorders, dangerous
challenges leading to death, merciless bullying and harassment,
graphic violence, and drug sales.
And this terrible content is harmful to all of us but
particularly our kids. There is no doubt that Republicans and
Democrats want social media platforms to better protect users
from harmful content. We want to hold platforms accountable and
bring about more transparency about how algorithms and content
moderation processes work. And of course the details matter
tremendously here. And that is why our inability to have a
serious conversation today is so frustrating to me. Every day,
we allow courts to interpret Section 230 to indiscriminately
shield platforms from liability for real-world harm. Every day
like that is a day that further endangers our young people, our
democracy, and our society as a whole.
Now, Democrats today are going to try to have a productive
conversation about these issues with our expert witnesses. But
it's a shame that, in my opinion, our colleagues on the other
side of the aisle are not joining us in this endeavor.
And with that, I yield back, Mr. Chairman.
[The prepared statement of Mr. Pallone follows:]
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Mr. Latta. Thank you very much. The gentleman yields back
the balance of his time. The Chair reminds Members that,
pursuant to the committee rules, all Members' opening
statements will be made part of the record. Are there any
Members wishing to make an opening statement? Seeing none, I
now would like to note for the witnesses that the timer light
in front of you will turn yellow when you have 1 minute
remaining of your 5 minutes. And we will--it will turn red when
your time has expired.
We will go down our list of witnesses. Our first witness
today is Seth Dillon, the CEO of the Babylon Bee, and I am
going to turn to the gentlelady from California's 16th District
for an introduction.
Ms. Eshoo. Thank you, Mr. Chairman. Let me get--well, I am
not going to go to my notes. My constituent, Doctor--how do you
pronounce your name?--Bhattacharya is a professor at Stanford,
a critic of mine, which is very fair. And I've never attempted
to censor anything he had to say about me. But I want to
welcome you and thank you for traveling across the country to
be with us. So thank you, Mr. Chairman.
Mr. Latta. Thank you very much. The gentlelady yields back.
Our next witness is Spencer Overton, who is the president of
the Joint Center for Political and Economic Studies and a
research professor at George Washington University Law School.
Thank you for being with us.
And our final witness is Michael Shellenberger, the founder
and president of Environmental Progress. And we appreciate you
being here.
And Mr. Dillon, you will start for our witnesses today, and
you have 5 minutes. So thank you very much for being with us
today. Hopefully the mic is working there on your end.
Mr. Dillon. Do I have to turn them----
Mr. Latta. There we go. Got it.
STATEMENTS OF SETH DILLON, CHIEF EXECUTIVE OFFICER, BABYLON
BEE; JAY BHATTACHARYA, M.D., Ph.D., PROFESSOR OF HEALTH POLICY,
STANFORD UNIVERSITY; SPENCER OVERTON, PATRICIA ROBERTS HARRIS
RESEARCH PROFESSOR, GEORGE WASHINGTON UNIVERSITY LAW SCHOOL;
AND MICHAEL SHELLENBERGER, FOUNDER AND PRESIDENT, ENVIRONMENTAL
PROGRESS
STATEMENT OF SETH DILLON
Mr. Dillon. I am being censored.
I want to start by thanking this committee for giving me
the opportunity to speak today and for the willingness of its
members to address this important issue of censorship. My name
is Seth Dillon. I am the CEO of the Babylon Bee, a popular
humor site that satirizes real-world events and public figures.
Our experience with Big Tech censorship dates back to 2018,
when Facebook started working with fact-checkers to crack down
on the spread of misinformation. We published a headline that
read, ``CNN Purchases Industrial-Sized Washing Machine to Spin
the News Before Publication.'' Snopes rated that story false,
prompting Facebook to threaten us with a permanent ban. Since
then, our jokes have been repeatedly fact-checked, flagged for
hate speech, and removed for incitement to violence, resulting
in a string of warnings and a drastic reduction in our reach.
Even our email service suspended us for spreading harmful
misinformation.
We found ourselves taking breaks from writing jokes to go
on TV and defend our right to tell them in the first place.
That's an awkward position to be in as humorists in a free
society. Last year, we made a joke about Rachel Levine, a
transgender health admiral in the Biden administration. USA
Today had named Levine Woman of the Year. So we fired back in
defense of women and sanity with this satirical headline: ``The
Babylon Bee's Man of the Year is Rachel Levine.''
Twitter was not amused. They locked our account for hateful
conduct, and we spent the next 8 months in Twitter jail. We
learned the hard way that censorship guards the narrative, not
the truth. In fact, it guards the narrative at the expense of
the truth. All the more outrageous was Twitter's lip-service
commitment to free expression. Twitter's mission, they write,
``is to give everyone the power to create and share ideas and
information, and to express their opinions and beliefs without
barriers.''
As promising as that sounds, it rings hollow when you
consider all the barriers that we and so many others have
encountered. The comedian's job is to poke holes in the popular
narrative. If the popular narrative is off-limits, then comedy
itself is off-limits. And that's basically where we find
ourselves today. Our speech is restricted to the point where we
can't even joke about the insane ideas that are being imposed
on us from the top down.
The only reason Twitter is now an exception is because the
world's richest man took matters into his own hands and
declared comedy legal again. We should all be thankful that he
did. The most offensive comedy is harmless when compared with
even the most well-intentioned censorship. I hope we can all
agree that we shouldn't have to depend on benevolent
billionaires to safeguard speech. That is a function of the
law. But the law only protects against Government censorship.
It hasn't caught up to the fact that the vast majority of
public discourse now takes place on privately owned platforms.
So where is the law that protects us from them? The lovers
of censorship will tell us that there can be no such law, that
the Constitution won't allow it. But they are wrong, and their
arguments fail. I only have time to deal with a few of them
very briefly.
One, they say private companies are free to do whatever
they want. That's nonsense, especially when applied to
companies that serve a critical public function. A
transportation service can't ban passengers based on their
viewpoints, nor can telecom providers. Under common carrier
doctrine, they are required to treat everyone equally.
That precedent applies comfortably to Big Tech. The
argument that only the Government can be guilty of censorship
falls short because it fails to make a distinction between the
way things are and the way they should be. If these platforms
are the modern public squares the Supreme Court has described
them as, then speech rights should be protected there even if
they presently are not. The current state of affairs being what
it is does not make a good argument for failing to take action
to improve it. But beyond that, these platforms have explicitly
promised us free expression without barriers. To give us
anything less than that is fraud.
Two, they say these platforms have a First Amendment right
to censor, as if censorship were a form of protected speech.
But it isn't. Censorship is a form of conduct. The state has
always been able to regulate conduct. The idea that censorship
is speech was forcefully rejected by the Fifth Circuit Court of
Appeals in their recent decision to uphold an
antidiscrimination law in Texas. The court mocked the idea
that, buried somewhere in the enumerated right to free speech,
lies a corporation's unenumerated right to muzzle speech.
No such right exists, and how could it? The claim that
censorship is speech is as nonsensical as saying war is peace
or freedom is slavery.
Three, they say these platforms are like newspapers: they
can't be forced to print anything they don't want to. But they
aren't like newspapers. They aren't curating every piece of
content they host. And they aren't expressing themselves when
they host it. They are merely conduits for the speech of
others. That's how they've repeatedly described themselves,
including in court proceedings. And that's how Section 230
defines them.
As a final point, I think it's important to acknowledge
that the call for an end to censorship is not a call for an end
to content moderation. Some will try to make that claim. But
Section 230 gives these platforms clearance to moderate lewd,
obscene, and unlawful speech, and antidiscrimination
legislation would respect that. The only thing it would prevent
is viewpoint discrimination. And such prevention would not be
unconstitutional because it would only regulate the platform's
conduct. It would neither compel nor curb their speech.
Thank you.
[The prepared statement of Mr. Dillon follows:]
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Mr. Latta. Thank you very much.
Mr. Bhattacharya, you are recognized for 5 minutes.
STATEMENT OF JAY BHATTACHARYA, M.D., Ph.D.
Dr. Bhattacharya. Thank you. Thank you for the opportunity
to present to this committee. I am a professor of health policy
at Stanford University School of Medicine. I hold an M.D. and a
Ph.D. from Stanford University and have been a professor for
some 20 years. Because of my views on the COVID-19
restrictions, I have been specifically targeted for censorship
by Federal Government officials.
On October 4, 2020, I and two colleagues--Dr. Martin
Kulldorff, a professor of medicine on leave now at Harvard
University, and Dr. Sunetra Gupta, an epidemiologist at the
University of Oxford--published the Great Barrington
Declaration. The declaration called for an end to economic
lockdowns, school shutdowns, and similar restrictive policies
on the grounds that they disproportionately harm the young and
economically disadvantaged while conferring limited benefits.
We know that the vulnerability to death from COVID-19 is
more than a thousandfold higher in the old and infirm than in
the young. The declaration endorsed a policy of focused
protection that called for strong measures to protect high-risk
populations while allowing the lower-risk individuals to return
to normal life, including specifically opening schools with
reasonable precautions.
Tens of thousands of doctors and scientists signed on to
the declaration. Because it contradicted the Government's
preferred narrative on COVID-19, the Great Barrington
Declaration was immediately targeted for suppression by Federal
officials.
Four days after we wrote the declaration, the then-Director
of the National Institutes of Health, Dr. Francis Collins,
emailed Dr. Tony Fauci about the declaration. I have that
email, which I found via FOIA and which I can enter into the
record. The email stated, ``Hi, Tony and Cliff. This proposal
from three fringe epidemiologists''--that's me, Martin
Kulldorff of Harvard, and Sunetra Gupta of Oxford--``who met
with the Secretary seems to be getting a lot of attention. And
even a cosignature from Nobel Prize winner Mike Leavitt at
Stanford. There needs to be a quick and devastating published
takedown of its premises. I don't see anything like that online
yet. Is it underway? Francis.''
This email was produced over a year later in response to a
FOIA request. It is possible to surmise from it that Collins
viewed the Great Barrington Declaration as a threat to the
illusion of a scientific consensus among those who agreed with
him about the necessity of lockdowns. In the following days, I
was subjected to what I can only describe as a propaganda
attack.
Though the Great Barrington Declaration called on public
health authorities to think more creatively about how to
protect vulnerable older people, reporters accused me of
wanting to let the virus rip. Another FOIA'd email, which I
would also like to introduce for the record, showed Tony Fauci
forwarding a Wired magazine article saying something along
those lines to Francis Collins only a couple of days after
Collins' call for a devastating takedown.
A key part of the Government's propaganda campaign
supporting lockdowns and other pandemic strategies has been the
censorship of discourse by scientists and regular people. I am
a party to a case brought by the Missouri and Louisiana
Attorneys General against the Biden administration. Through
this case, lawyers have had the opportunity to depose under
oath representatives from many Federal agencies involved in the
censorship efforts, including representatives of the Biden
administration and Tony Fauci himself.
What this case has revealed is that nearly a dozen Federal
agencies, including the CDC, the Office of the Surgeon General,
and the White House, pressured social media companies like
Google, Facebook, and Twitter to censor and de-boost even true
speech that contradicted Federal pandemic priorities,
especially inconvenient facts about COVID vaccines, such as
their inefficacy against COVID disease transmission.
I know for a fact that the Great Barrington Declaration
suffered censorship by many social media companies, including
Google, Reddit, and Twitter, which placed me on a trends
blacklist the moment I joined in August of 2021. In March 2021,
I was part of a roundtable with Governor DeSantis that was
filmed, where we discussed masking children. That video of the
Governor of the State of Florida talking to his scientific
advisors was censored off of YouTube.
The suppression of scientific discussion online clearly
violates the First Amendment. But perhaps even more
importantly, the censorship of scientific discussion permitted
a policy environment in which clear scientific truths were
muddled, and as a result, destructive and ineffective policies
persisted much longer than they would have otherwise.
Government censorship allowed false ideas to stand: for
instance, that the risk of death from COVID is not steeply
age-stratified, that recovery from COVID does not provide
substantial immunity against future infection or severe disease
upon future infection, that the COVID vaccines do stop disease
transmission, and that school closures were warranted. All of
these destructive ideas harmed the health and well-being of the
American people. And many people who are dead today would be
alive had those ideas been countered.
If there is anything we've learned from the pandemic, it
should be that the First Amendment is more important during a
pandemic, not less.
[The prepared statement of Dr. Bhattacharya follows:]
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Mr. Latta. Well, thank you very much, and Mr. Overton, you
are recognized for 5 minutes for your statement.
STATEMENT OF SPENCER OVERTON
Mr. Overton. Thank you. Chairs, ranking members, and
members of the committee, thanks for inviting me to testify. My
name is Spencer Overton. I am the president of the Joint Center
for Political and Economic Studies. We research the impact of
tech platforms on Black communities. I am also a professor at
GW and focus on democracy and tech platform accountability.
Now, while I favor tech platform accountability, this hearing's
framing, ``preserving free speech and reining in Big Tech
censorship,'' isn't accurate. This framing suggests that the
way that government preserves free speech is to prevent tech
companies from engaging in content moderation.
In fact, the First Amendment protects private-sector tech
companies in their right to determine what to leave up and what
to take down on their platform. That's the part of freedom of
association, freedom of speech. The censorship the First
Amendment prohibits is government attempting to restrict or
compel private actors to speak in a particular way. Congress
shall make no law that abridges the freedom of speech.
Now, if we were to accept this characterization that tech
platforms censor every time they remove a post, that's going to
mean that Fox News censors every time it selects hosts to lead
its primetime lineup. It means that the Wall Street Journal
censors every time it declines an op-ed. Now, some partisans
may want to tell Fox News and the Wall Street Journal how to
moderate their conduct. They may want government to silence
those institutions. But that's not in line with the First
Amendment. Because of the freedom of speech that private
platforms enjoy in terms of content moderation, Trip Advisor
has the right to take down comments that have nothing to do
with travel. Truth Social enjoys the right to take down posts
from users about the January 6th committee hearings or from
those who express prochoice opinions.
These institutions are not common carriers; I will discuss
that if we have time in our discussion. The 11th Circuit
explained it in detail in 2022.
Now, existing research suggests that large platforms like
Facebook, Instagram, and YouTube do not disfavor or target
conservatives for removal. In fact, they go out of their way to
favor conservatives, for fear of accusations of political bias
and because these users are an important and valuable
advertising base. But that's really beside the point. The real
point is that private companies have this First Amendment right
to engage in content moderation.
Now, also, if we were to treat these tech platforms as
state actors and require that they keep up all constitutionally
protected speech, the internet would be even worse,
particularly for teenagers and young children. We would
see more violence, more pornography, more graphic content. We
would see more instructions on self-mutilation and suicide and
more swastikas, more Holocaust denials, more White supremacist
organizing. All of this is constitutionally protected speech,
right? But right now, platforms can take it down because they
are not state actors.
We would see more deep fakes, more political
disinformation, more spam. Now, even though the First Amendment
protects private tech platforms, it doesn't demand that they
bear no responsibility for what they choose to amplify and the
harms that they create. That is not a part of the First Amendment. That's a part of courts' overinterpretation of Section 230 of the Communications Decency Act.
I think Republicans and Democrats can agree on several
issues, including the fact, as you said, Mr. Chair, that this
isn't 1996. The world has changed since 1996, when 230 was
enacted. Democrats and Republicans can act in a bipartisan way to ensure that tech companies don't impose harms on others through their algorithms and the other activities they use to profit.
Thank you so much.
[The prepared statement of Mr. Overton follows:]
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Mr. Latta. Thank you. And Mr. Shellenberger, you are
recognized for 5 minutes for your statement.
STATEMENT OF MICHAEL SHELLENBERGER
Mr. Shellenberger. Thank you, Chairman Latta, Ranking
Member Matsui, and members of the subcommittee for inviting me
to testify today.
Here are events that actually happened. Twitter suspended a
woman for saying, quote, ``women aren't men.'' Facebook
censored accurate information about COVID vaccine side effects.
Twitter censored a Harvard professor of epidemiology for
expressing his opinion that children did not need the COVID
vaccine.
Facebook censored speculation that the coronavirus came
from a lab. Facebook censored a journalist for saying
accurately that natural disasters were getting better, not
worse. Twitter permanently suspended a sitting President of the
United States even though Twitter censors themselves had
decided he had not violated its terms of service.
Now, maybe that kind of censorship doesn't bother you
because people were doing their best to prevent real-world harm
with the knowledge they had at the time. But what if the shoe
were on the other foot? Consider how you would feel if the
following occurred. Twitter suspended a woman for saying trans
women are women. Facebook censored accurate information about
COVID vaccine benefits. Twitter censored a Harvard professor
for saying children needed to be COVID vaxxed annually.
Facebook censored speculation that the coronavirus came
from nature. Facebook censored a Member of Congress for saying
the world is going to end in 12 years because of climate
change. Twitter permanently suspended President Biden even
though, according to Twitter's top censor, he had not violated
its terms of service.
Now, it's true that private media companies are allowed by
law to censor whoever they want. And it would violate the First
Amendment of the United States for the Government to try to
prevent them from doing so. But Internet platforms, including
Twitter, Facebook, and Google, only exist thanks to Section 230
of the Communications Decency Act, which exempts them from
legal liabilities that burden traditional media companies.
If Congress simply eliminated Section 230, internet search
and social media platforms would no longer exist. And maybe
that's what Congress should do. These platforms are obviously
far too powerful. They are making the American people, all of
us, dogmatic and intolerant. And the evidence is now
overwhelming that they have been a primary cause, if not the primary cause, of America's worsening mental health crisis.
We might be a healthier nation if we simply reverted to the
good old days of websites that have the same liability as
newspapers. But doing so would reduce, rather than increase,
freedom of speech and may not be necessary to protect American
citizens.
As such, I would propose an immediate and partial remedy,
which would also allow us to understand what else, if anything,
is needed to protect the free speech of citizens. And that
would be true transparency. By transparency, I do not mean what is being proposed by a Senate transparency bill, which would only allow National Science Foundation-certified researchers access to content moderation decisions.
That bill would increase the power of the censorship
industrial complex, which is actively undermining our free
speech. Rather, I mean immediate public transparency into all
content moderation decisions relating to matters of social and
political importance. We do not need to know how the platforms,
for example, are removing pornography or criminal activities.
Those things should be cracked down upon immediately.
But when Twitter, Facebook, and Google censor people for
expressing disfavored views on transgenderism, climate change,
energy, vaccines, and other plainly social and political
issues, they must immediately announce those content moderation decisions publicly and give the censored
individuals the right to respond.
And to protect free speech from Government, Congress could
require Government contractors and Government employees to
immediately report any content-related communications they make
to internet platforms.
What I am proposing is rather simple. If the White House is
going to demand that Facebook censor accurate information about
COVID vaccine side effects, which it did do, then it would need to immediately send an email--to be posted on a website, to be tweeted out, to be put on Facebook--saying that that's what it did.
And if Facebook is going to take down accurate information
about side effects of COVID vaccines, it should be required to
explain that it did that.
If it's going to censor Dr. Bhattacharya or Mr. Dillon,
then it should be required to explain why it did and how it did
that. And it should be required to give them a chance to
respond. Such a solution would not eliminate unfair censorship
and content moderation, since those things are always
subjective. But it would bring them out into the open. It would
restore the right of free citizens to have voice, and it would
open the possibility for better, freer content moderation in
the future.
Thank you very much.
[The prepared statement of Mr. Shellenberger follows:]
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Mr. Latta. Well, thank you very much to all of our
witnesses, and that will conclude our 5-minute openings with
our witnesses. And I will now recognize myself for 5 minutes
for questioning.
My first question is to all of our witnesses. And hopefully
just pretty much a simple yes-or-no answer will suffice. This
subcommittee has sole jurisdiction over legislation that would
amend Section 230 of the Communications Decency Act. Given the proven censorship actions taken by Big Tech against, but not limited to, satirical, scientific, and political viewpoints, do you agree that Section 230 must be reformed?
Mr. Dillon, would you like to start with a yes or no?
Mr. Dillon. Is it a simple yes or no? I think reform would
be helpful. Yes. I do think there is also room for legislation
that would address the issue of viewpoint discrimination
outside of reform to Section 230.
Mr. Latta. Thank you. Mr. Bhattacharya?
Dr. Bhattacharya. Yes, and I think that there should be
restrictions on the ability of Government officials to use
Section 230 and other mechanisms to try to censor
scientific debate online.
Mr. Latta. Thank you. Mr. Overton?
Mr. Overton. I do think reform to 230 is in order. I think
it's a question about what kind of reform.
Mr. Latta. Thank you. Mr. Shellenberger?
Mr. Shellenberger. Yes.
Mr. Latta. Thank you. Thank you very much. Mr.
Bhattacharya, in early 2021, you published a scientific article
that discussed age-based mortality risk and natural immunity to
COVID. Is that correct?
Dr. Bhattacharya. Yes. I've published several articles on
this.
Mr. Latta. OK. At the time it was published, were the findings in your article consistent with public health authorities' view on your topic?
Dr. Bhattacharya. So I think the main findings--if it's the article I think you are thinking of--were that the lockdown restrictions that ignored age-based risk from COVID had not been successful in actually restricting the spread of COVID. The other thing, from other people's findings, very clear in the scientific literature, is that those kinds of restrictions were very damaging, especially to young people.
Mr. Latta. Let me follow up now on what you were talking about with those findings. As a professor of medicine at
Stanford University over the course of your career, how often
is it that researchers disagree through the scientific process?
Dr. Bhattacharya. It happens all the time.
Mr. Latta. Thank you.
Dr. Bhattacharya. It's the norm.
Mr. Latta. You know, after you were banned on Twitter, you were unable to have an open discussion or provide medical research data on the most consequential public health decisions made in generations. How do you believe censoring that scientific content impacted the ability of America's
parents, small business owners, and others to make educated
decisions related to COVID-19 during the pandemic?
Dr. Bhattacharya. I think that the Government's actions to create an illusion of scientific consensus on those topics harmed the health and well-being of every single American. I think it closed small businesses. It meant that little children couldn't go to school. Minority kids specifically were harmed more, because minority kids' schools were closed more. And many people who were under the impression that the vaccine would stop transmission, when it didn't, were also harmed, because they were refused the ability to get the full set of facts about the vaccines when they were making those decisions whether to take----
Mr. Latta. And what recourse did you have with Twitter?
Dr. Bhattacharya. None until Elon Musk bought Twitter. After he did buy Twitter, he invited me to come visit Twitter headquarters. And I found out that I was placed on a blacklist the day that I joined Twitter.
Mr. Latta. Thank you. Mr. Shellenberger, according to the information recently uncovered through the Twitter Files, we know that Twitter censored specific conservative users through its visibility tools, tagging the accounts of conservative activists as ``do not amplify.'' This was after assurances from Twitter's head of legal, policy, and trust that Twitter does not shadow ban. Based on your reporting, what other tools have you uncovered that were used by Twitter or other platforms to censor conservative voices?
Mr. Shellenberger. Well, thank you for the question. I would just say we describe the censorship that occurred as occurring against disfavored voices, because we can't agree on what's liberal or conservative--and this is why I am very skeptical of these studies which claim to measure whether bias runs more liberal or conservative. The concerns that Dr. Bhattacharya just raised about the disproportionate impact of school lockdowns on students of color--I don't think those are necessarily conservative or liberal. I think those are just human rights concerns.
But there is a range of tools that were used, both to not amplify voices and to censor tweets. The Harvard professor Martin Kulldorff was censored by having a warning put on one of his tweets where he said that kids don't need to be given COVID vaccines. That, I think, is important to point out: it's a particular form of censorship that's also humiliating and discrediting. I mean, here we have the most powerful mass media communications organizations in human history basically accusing people of being liars or misleading or deniers--really toxic kind of labeling. So it's occurring both through removing tweets, through deplatforming people, and also attempting to----
Mr. Latta. Well, thank you very much. My time has expired. These examples of censorship by Big Tech companies underscore the need to reform Section 230. They are acting as bad Samaritans on their platforms and don't deserve that blanket liability protection. So I yield back.
And at this time, I recognize the ranking member of the
subcommittee for 5 minutes.
Ms. Matsui. Thank you very much, Mr. Chairman. I want to
focus on algorithms. Section 230 protections were initially
conceived to protect neutral platforms that passively host
information from third parties. While this approach allowed the
internet ecosystem to flourish, I believe the central tenet is
flawed. Modern platforms consciously promote some speech over
others through sophisticated algorithms and data collection
practices. Professor Overton, can you describe how algorithms
and data collection practices materially contribute to
discrimination online?
Mr. Overton. Yes. Thank you so much, Ranking Member. Essentially, platforms like Facebook or Twitter make money off of ad revenue and views and this type of thing. And so they try to use these algorithms to deliver content, ads, etc., to make money and to profit. Facebook, what they've done is a couple of things. One, they have had drop-downs--ethnic affinity drop-downs--that basically allowed people to target particular racial groups. And as a result, advertisers have been able to, for example, target employment or housing ads away from African-American communities or Latino communities.
Ms. Matsui. OK.
Mr. Overton. But then also, the algorithms, as you have
talked about, are also problematic. The advertisers may not
even know. And then the algorithms steer the ads away from
Black and Latino people.
Ms. Matsui. OK. So could I ask this?
Mr. Overton. Yes.
Ms. Matsui. Do you believe the use of algorithms to target
the distribution of certain content should alter our
understanding of the 230 framework?
Mr. Overton. I do think--yes, absolutely.
Ms. Matsui. OK. Now, in Gonzalez v. Google----
Mr. Overton. Right.
Ms. Matsui [continuing]. Court found that Google did not
act as an information content provider----
Mr. Overton. Mm-hmm.
Ms. Matsui [continuing]. When using algorithms to recommend
terrorist----
Mr. Overton. Right.
Ms. Matsui [continuing]. Content because Google used a
neutral algorithm----
Mr. Overton. Right.
Ms. Matsui [continuing]. That did not treat ISIS-created
content differently than any other third-party-created content.
And Google provided a neutral platform that did not encourage
the posting of unlawful material. So Professor, I often see the
phrase ``neutral''----
Mr. Overton. Mm-hmm.
Ms. Matsui [continuing]. Used to describe social media
algorithms. However, I have concerns that phrase glosses over the inherent biases in certain algorithms'----
Mr. Overton. Yes.
Ms. Matsui [continuing]. Construction and effect. Do you
believe algorithms can ever be truly neutral? And if not, how
should that fact inform our understanding of----
Mr. Overton. Yes.
Ms. Matsui [continuing]. Section 230?
Mr. Overton. Yes. I think it's wrong to have a broad rule here that all algorithms are neutral and mechanical. Certainly they can have harms in terms of particular communities.
Ms. Matsui. OK. Social media and online platforms have
shown consistent success in preventing many forms of
objectionable content, like obscenity and nudity. They have
also moved quickly in some cases to identify and label
misinformation around COVID-19 and vaccines. However, the same
efficiency does not extend to racial equity and voting rights.
Professor Overton, why do you believe online platforms
haven't had commensurate success in preventing harms to racial
equity and voting rights?
Mr. Overton. Yes. I think that some steps have been made by some companies, but it's not enough. And part of it is that profit is a big motive for these companies, so that's what they are focused on--the advertising dollars or whatever is going to drive the bottom line.
Ms. Matsui. OK. While Section 230 establishes broad
protections for online platforms, it doesn't extend to an
information content provider, which Section 230 defines as any
person responsible in whole or in part for the creation or
development of information. Courts have generally understood
development in this context to mean making information usable,
available, or visible.
Mr. Overton. Mm-hmm.
Ms. Matsui. Professor Overton, how has our understanding of
this phrase changed as technology has evolved, and where does
it fit in the broader Section 230 discussion?
Mr. Overton. Certainly. Roommates, a case in the Ninth Circuit, introduced the idea that there may be some material contributions for which platforms don't enjoy the protection. And the problem is that it has not been clear.
The difficulty about broad rules in this space is, on one
hand, algorithms are troubling and can be discriminatory. On
the other hand, they can be used for content moderation and
cleaning up the internet. So we want to be careful in terms of
flat, broad, straight rules here and be just very thoughtful
about this space.
Ms. Matsui. OK. Well, thank you very much. I realize we
have a lot of work to do to help reform this. So thank you. I
yield back.
Mr. Latta. Thank you. The gentlelady yields back. And at
this time, the Chair recognizes the gentleman from Florida for
5 minutes.
Mr. Bilirakis. Thank you, Mr. Chair. I appreciate it very
much. And I want to thank the witnesses for their testimony.
Two years ago, I put out a survey to my constituents on Big
Tech. I asked the citizens of my district the following
question: ``Do you trust Big Tech to be fair and responsible
stewards of their platforms?'' Over 2,700 constituents
responded, with 82 percent of them saying no. That's a terrible
performance, in my opinion, for Big Tech.
A year and a half later, I asked the same question to my
constituents. Maybe Big Tech got the hint and had worked to
gain public trust. This time, we had even more constituents
respond to the survey, over 3,200 participants in my district.
Same question: ``Do you trust Big Tech to be fair and
responsible stewards of their platforms?'' Once again, 82
percent of them said no. Zero improvement whatsoever.
In 2020, the documentary ``The Social Dilemma'' brought to light how social media platforms moderate their content. It showed the power that social media platforms have to polarize the views of their users based on the algorithms they use to
promote certain content and the incentives to do so to keep us
on their platforms longer.
Mr. Shellenberger, to what extent is Big Tech to blame for
the political polarization in America today?
Mr. Shellenberger. Thank you for the question. I think a very significant amount. Obviously, there were trends of polarization occurring before the rise of social media. But we
know that social media reinforces people's existing beliefs. It
creates a sense of certainty where there should be more
openness and uncertainty. I think it's clearly contributed to a
rising amount of intolerance and dogmatism that we've seen in
the survey research. So unfortunately, it has not played the
role of opening people up to wider perspectives that we had
hoped.
Mr. Bilirakis. Then, Mr. Dillon--thank you--has the censorship you experienced by social media impacted your livelihood? If so, can you explain how that has impacted your
family or business relationships, please?
Mr. Dillon. That was directed at me, right?
Mr. Bilirakis. Yes, to--to you, Mr. Dillon. Yes.
Mr. Dillon. Yes. Well, I mean, we were knocked off of
Twitter for 8 months, which is one of our primary traffic
sources. So it impacted the business performance in terms of how much traffic and revenue we were driving from that, yes.
Mr. Bilirakis. Very good. Question for you and Mr. Bhatta--
I am sorry--I butchered the name. Are there opinions or ideas
that you have wanted to post on social media which you
ultimately choose not to because of fear of retaliation by the
platforms?
We can start with Mr. Dillon and--and then Mr.
Bhattacharya. I did a little better that time. Please, sir.
Mr. Dillon. Can you repeat the question?
Mr. Bilirakis. Yes. Are there opinions or ideas that you
have wanted to post on social media----
Mr. Dillon. Yes.
Mr. Bilirakis [continuing]. Which you ultimately choose not
to because of a fear of retaliation by the platform?
Mr. Dillon. In my view, self-censorship is doing the tyrant's work for him. And so I refuse to censor myself. And I say what I think, come what may. That's why, when we got locked out of Twitter, we were asked to delete that tweet. We could get our account back if we deleted the tweet and admitted that we engaged in hateful conduct. And I refused to do that too. So, no, I don't censor myself. I refuse to delete tweets that they want me to delete for hateful conduct when I don't think they are hateful.
Mr. Bilirakis. OK. Well, I commend you for that. Mr.
Bhattacharya?
Dr. Bhattacharya. I have. I have self-censored because I
didn't want to get booted off of Twitter or social media. I
tried to figure out where the line was, and I think, as a
result, the public didn't hear everything I wanted to say. I would also say that there are a lot of younger faculty members and professors who have reached out to me and told me that they also self-censor--by not going on social media at all, by not making their views public at all--because of the environment created around the censorship.
Mr. Bilirakis. No, I understand that as well.
Mr. Shellenberger, in your experience, are there some
platforms that have a better track record at maintaining free speech principles than others, or have any improved over time?
If not, why do these companies continue to engage in biased
content moderation decisions? And what can Congress do to
better enable constitutionally protected speech?
Mr. Shellenberger. I am not sure of the answer to the first question. I will say that we have seen Twitter censoring some things that Facebook does not censor and Facebook censoring some things that Twitter does not. So I think some of it just depends. But I think the most important thing--and I am really trying to propose something here that I think both parties can agree to--is transparency. We can't agree on what a woman is as a society.
There is this famous saying that you are entitled to your own opinions but not your own facts. But we are entitled to our own facts too under the First Amendment, and that's just how we are. So we are not going to be able to legislate particular content moderation. And so we need to just move forward with transparency.
Mr. Bilirakis. Well, I appreciate it very much. Very
informative. Thank you for your answers.
Mr. Latta. Thank you. The gentleman's time has expired. The
Chair now recognizes the gentlelady from New York
for 5 minutes.
Ms. Clarke. Good morning, everyone. And let me start by
thanking our panelists for joining us today as well as our
chairman, Chairman Latta, and Ranking Member Matsui for
convening this hearing. I am extremely proud of much of the
work this committee has done in this space. While content
moderation policies and reining in the ever-increasing power of
Big Tech are certainly topics worth exploring in this venue, I
am concerned about the potential for this hearing to devolve
into another airing of partisan grievances and personal
anecdotes cherry-picked to spark outrage and push forth certain
narratives for personal or political gain.
It is widely understood that both online and here in the
real world, topics that spark controversy, outrage, fear, and
anger are highly effective tools for attracting attention. So I
urge my colleagues to be careful not to fall into that trap. We
have an opportunity to discuss substantive issues impacting all
Americans and must take care not to let those issues take
a backseat to the performative politics of outrage and
fearmongering.
Our current content moderation regulatory framework is a
product of decades-old legislation passed when the internet was
in its infancy as well as the court's overly broad
interpretation of Section 230 in the years that followed. What
began with the intent to incentivize the removal of certain
harmful, objectionable, or obscene content has seemingly
transformed into an all-encompassing shield protecting Big Tech
firms from accountability for the unintended harms caused by
their platforms and moderation policies.
There is certainly no shortage of issues Big Tech can and should be taking a more aggressive stance on: harassment, hate speech, White supremacist radicalization, deep fakes, organized disinformation campaigns, sexually explicit material involving children. And the list is almost endless.
While imperfect, Section 230, as it is currently understood, along with the First Amendment, does appear to provide Big Tech with the legal protections to tackle these issues. And yet this harmful content remains all too prevalent online. Unfortunately, the original intent of Section 230 has been lost as technology has developed, and all too often, vulnerable communities are paying the price.
So my first question is for Mr. Overton. In your testimony,
you noted that certain moderation regulations for major tech platforms differ from those of common carriers. Can you expound
on why, from a legal perspective, that distinction was made and
what it means for users of the platforms?
Mr. Overton. Sure. Thank you so much. The Eleventh Circuit just laid this out last year. So when people sign up, they sign user agreements which say that they'll comply with community standards. Also, it's not like broadcast, where there is scarcity in terms of airwaves. It's more like cable. And the court has found that cable is not a common carrier.
Also, the Telecommunications Act of 1996 explicitly says,
hey, these aren't common carriers. So, you know, a variety of
reasons. I really encourage folks to take a look at that
Eleventh Circuit opinion.
Ms. Clarke. Thank you. Studies have shown that not only are Black Americans subject to a disproportionate amount of online harassment due to their race, but they have also been purposely excluded from receiving certain online advertisements related to housing, education, and vocational opportunities. So Mr. Overton,
can you explain for us the role, intended or not, that
algorithms can play in this kind of online discrimination?
Mr. Overton. Thank you, and also thank you for the Civil
Rights Modernization Act that you introduced, which addresses
some of these issues. In short, Facebook, its algorithms and
drop-downs, they were steering housing and employment ads away
from Black and Latino folks and toward White folks. And users
didn't even know about it. It was a problem. And Facebook said they don't have to comply with Federal civil rights laws because of 230. Clearly, if the New York Times ran a housing ad for White folks only, there would be a civil rights problem. That's not a scenario where Facebook should get
a pass. It's not just there, though.
Entities like Airbnb and Vrbo, they account for about 20
percent of lodging in the United States in terms of revenues.
Hilton, Hyatt, they have got to comply with public
accommodations laws, but Airbnb and Vrbo basically claim they
don't have to comply.
Ms. Clarke. Thank you, Mr. Chairman. I yield back.
Mr. Latta. Thank you. The gentlelady yields back, and the
Chair now recognizes the gentleman from Michigan for 5 minutes.
Mr. Walberg. Thank you, Mr. Chairman, and thanks to the
panel for being here. And Mr. Dillon, thank you for not self-censoring in your line of work. I don't self-censor
either. I set priorities. I try to be sensitive. I try to be
proper. And my staff worries about me all the time. But I
believe in truth. And truth can be put out in various ways
without offense except for those who want to be offended.
Mr. Bhattacharya, in October 2020 you and two colleagues
from Stanford University published the Great Barrington
Declaration. It was a document outlining the need to implement
focused protection, your terminology--i.e., eliminating COVID
lockdowns and school closures for everyone except the elderly
and high-risk--which has proven to be right.
The document had a simple message, but it was immediately
targeted by Biden administration officials and, subsequently,
social media companies as misinformation and downgraded across their platforms.
Mr. Bhattacharya, how has the suppression of concerns about
school closures from Big Tech and the Biden administration
impacted our Nation's children? And secondly, can you speak to
both the effects on their well-being and their educational
success?
Dr. Bhattacharya. So there is a very simple data point to look at as far as what the impact of school closures is. And that is that children in Sweden have suffered zero learning loss through the pandemic. In the United States, we have created a generational divide in terms of the educational impact from these school closures and lockdowns.
In California, where I live, schools were closed for in-person instruction for almost a year and a half. And it's minority kids in particular that have been harmed by these school closures. We have created a huge deficit in learning, and that will have consequences through the entire lives of these children. The literature on human capital investment suggests that investments in schools are the best investment we make as a society. And the school closures, as a result, will lead to children leading shorter, less healthy lives.
Mr. Walberg. I appreciate that information being laid out. We have always been told to follow the science, and we didn't follow the science. And now we are starting to grudgingly accept the science. In Michigan, Governor Gretchen Whitmer closed all in-person learning starting in March of 2020. And it took until January 2021 for
Governor Whitmer to agree to plan a full reopening of schools
in March of that year. The consequences in Michigan: Like you
have said, Michigan's average math score dropped four points
for fourth-graders and eight points for eighth-graders since
2019. In reading, they dropped seven and four points,
respectively.
Dr. Bhattacharya, how did the prevailing narrative imposed by social media companies bolster efforts to keep schools closed?
Dr. Bhattacharya. Well, I think the social media companies promulgated voices that panicked people regarding the danger of COVID to children far outside of what the scientific evidence was saying at the time. As a result, panic spread in school board meetings and elsewhere, which allowed schools to stay closed far past the time when they should have been opened. From very early in the pandemic, there was evidence from Sweden and from Iceland and other places in Europe that school openings were safe, that closures were unnecessary to protect people from COVID, and that there were alternate policies possible that would have protected older people better than school closures and caused much less harm to our children. And yet we didn't follow those policies. And the voices that pushed the panic that led to school closures were amplified in social media settings.
Mr. Walberg. Appreciate that. A constituent from Carleton,
Michigan, in my district wrote to me about his attempts to post
an article from the NIH on his Facebook page. Facebook blocked
the article from being shared because it violated their policy
against misinformation, their policy. As a reminder, the
article, which was entitled ``Facemasks in the COVID-19 era: A
health hypothesis,'' was published by the NIH itself. But 6
months after its publication, the NIH retracted the article, I assume because it didn't align with the ongoing efforts to keep people wearing masks.
Mr. Shellenberger, it can't be a coincidence that an
article that the NIH retracted was also deemed misinformation
by Facebook. How did the two--the Biden administration and the Big Tech companies--work together to downgrade or suppress information that did not
support COVID goals?
Mr. Shellenberger. Emails released by the attorneys general
of Louisiana and Missouri show the Biden administration
repeatedly haranguing Facebook executives. And we also saw the
President threatening Section 230 status, demanding that they
censor information that they felt would contribute to vaccine
hesitancy. And Facebook went back to the White House and said
that they had been taking down accurate information about
vaccine side effects. We also now know the White House was
demanding censorship of private messaging through Facebook.
Mr. Latta. I am sorry to interrupt. The gentleman's time
has expired. Thank you.
Mr. Walberg. Thank you. I yield back.
Mr. Latta. The Chair now recognizes the gentleman from
Texas for 5 minutes.
Mr. Veasey. Mr. Chairman, thank you very much. Before I get
into my remarks, I also want to remind Mr. Bhattacharya and Mr.
Shellenberger in particular that something else that hurts
Black children, too, is when there is misinformation put on
social media about their parents and their grandparents
stuffing ballot boxes, cheating, and elections being stolen in
places like Atlanta and Milwaukee. And people know that that is
specifically meant to be targeted at Black people, that that
hurts Black children too.
And when misinformation like that is allowed to stay up, which it routinely is, that is bad for Black children also. And so when misinformation like that is on a social media platform, there needs to be some sort of way to deal with it. I hope that
no one is self-censoring. But, like I tell my 16-year-old every
day when he goes off to school and inappropriate things can
sometimes come out of his mouth--like anybody in here that has
had a teenager, Democrat or Republican, knows that--what I do
tell him is, ``Dude, use a filter. Use a filter, dude. You can
say that, but should you really say that?''
And so don't self-censor, but use a filter, dude. Use a
filter. At a time when public trust in government remains low,
as it has for much of the 21st century, I think that it is
disingenuous for the other side of the aisle to politicize free
speech in the digital age.
There is stuff on Facebook right now that I saw on Hannity
that's fake. And you can go on any of these social conservative
sites on Facebook right now and see tons of information. This
is my personal Facebook page. You can see all of this. And the
truth is that free speech in the digital age will continue to
dominate headlines because the internet, as it operates today,
really does afford Americans all of the opportunity to freely
express themselves in ways that were literally unimaginable 20
years ago.
I don't believe anyone in this room thinks that digital communication is going anywhere in the foreseeable future. Instead, we need to focus on a bipartisan basis to find a path forward so we can have commonsense policy reforms as they relate to Section 230. We all know that the internet is not the same phenomenon it was when Section 230 was enacted back in 1996.
And so let's just take a quick step back and think about the actual censorship that is going on today as it relates to something like voting. Right now in Texas, they are
trying to make it harder for people to vote on college
campuses. And to me, that's the ultimate censorship. And that's
bad. And so I would hope that we can seriously, again, have a
real discussion about how we can make some reforms in Section
230 and come up with some just commonsense language on some
filters.
Professor Overton, I want to thank you for being here today
and testifying once again before this subcommittee about how
disinformation is dangerous. In your 2020 testimony in front of
this subcommittee, you talked about how disinformation on
social media presents a real danger to racial equity, voting
rights, and democracy.
And I wanted to ask you, are social media platforms doing a
better job now than they were 3 years ago to curtail the spread
of general disinformation that you previously discussed in
front of this subcommittee?
Mr. Overton. Thank you. They are better in some ways, and in other areas they've fallen back. 2022 wasn't as bad as 2020 in terms of the aftermath, with disinformation about so-called stolen elections. We have got some new factors in terms of Elon Musk buying Twitter and laying off the content moderation staff. So things are different.
I also want to add--you talked about what's bad for Black children--the fact that death rates are higher in Black communities is also bad for Black children. The fact that kids don't have access to the internet and, as a result, had more learning loss during the pandemic as opposed to other communities is also bad for Black children.
Mr. Veasey. No, thank you very much. And as we continue to
talk through these things, I hope, particularly when it comes
to public health, we can try to find a consensus.
Mr. Overton. Yes.
Mr. Veasey. I know five people in one house who were dead in a month. Dead in a month from COVID.
Mr. Overton. Right.
Mr. Veasey. We need to try to find some consensus on these things and stop making them so divisive----
Mr. Overton. Right.
Mr. Veasey [continuing]. When people in our community had
so many stories that we knew like that. Of course we wanted our
kids in school.
Mr. Overton. Right.
Mr. Veasey. We know that it was not good for our kids to be out of school. But we also had bodies in places like Detroit that
were so stacked up that the morgue couldn't even handle them.
And that's the reality in Black America also.
Thank you very much, Mr. Chairman. I yield back.
Mr. Latta. Thank you very much. The gentleman yields back.
The Chair now recognizes the vice chair of the subcommittee,
the gentleman from Georgia, for 5 minutes.
Mr. Carter. Thank you, Mr. Chairman, and thank each of you
for being here. This is extremely important. Let me begin by
saying I agree with my colleague from Texas who just made the
comment that trust in the Federal Government is at a historic low. It's also low with the social media companies. So when these two collide, Americans are worried and concerned. And I think we are all concerned here.
You know, we had the former CEO of Twitter, Jack Dorsey,
who testified before this committee and made the statement that
Twitter does not use political ideology to make any decisions.
Well, we know that wasn't true. And it's clear that the Big
Tech platforms are no longer providing an open forum for all
points of view. And that's extremely important. We want that.
Mr. Shellenberger, I know that you have testified before Congress a number of times. Thank you for being here again, and I appreciate it. It's good to see you. But
2 weeks before the 2020 election, there was damning information
about the President's son, Hunter Biden, that was suppressed
but then later authenticated.
And once President Biden was in office, you were covering, as I understand, the Twitter Files. What was your
takeaway from how Twitter had made the decision to suppress
news articles related to the Hunter Biden laptop story?
Mr. Shellenberger. Yes. Thank you for the question. So it's
important to understand that on October 14th, the New York Post
published this article about emails from the Hunter Biden
laptop. Everything in the article was accurate, despite some people claiming it was not. It was an accurate article. Twitter's
internal staff evaluated it and found that it did not violate
their own policies.
Then the argument was made strenuously within Twitter by
the former chief counsel of the FBI, Jim Baker, that they
should reverse that decision and censor that New York Post
article on Twitter anyway. That appears to be part of a broader influence operation, most famously including former intelligence officials and others claiming that this was somehow the result of a Russian hack-and-leak operation.
Mr. Carter. Right.
Mr. Shellenberger. There was zero evidence that this was
hacked and leaked. They had the FBI subpoena of the laptop published in the New York Post. The FBI took the laptop in December
2019. So it appears to me like that was some sort of
coordinated influence operation to discredit what was
absolutely accurate information.
Mr. Carter. Well, let me ask you. The administration had proposed to establish a disinformation governance board
within the Department of Homeland Security. Thank goodness they
didn't go through with that. But what kind of danger do you
think there would have been with a disinformation governance
board?
Mr. Shellenberger. Well, unfortunately, that disinformation governance board was just the tip of the iceberg of the censorship industrial complex that my colleagues and I discovered. That includes an agency at the Department of Homeland Security. It includes various entities: the National Science Foundation is now funding 11 universities to create censorship predicates and tools, and that includes DARPA funding. That all needs to be defunded and dismantled and----
Mr. Carter. OK.
Mr. Shellenberger [continuing]. An investigation needs to
be done to figure out----
Mr. Carter. All right. I need to get on. Thank you for
those answers.
Mr. Dillon, you have been before Congress before as well, and thank you again for being here. The advanced algorithms that the Big Tech companies use give them inordinate power to amplify or suppress certain posts--and we all know that happens. If these companies were determined to be publishers of content when they amplify or suppress using an algorithm, what do you think the impact would be on content moderation practices? Would it be better, worse, or what?
Mr. Dillon. You are saying if they were treated as
publishers, would they moderate more aggressively?
Mr. Carter. Exactly. Or less.
Mr. Dillon. Yes. Well, under Section 230, even publisher
activity is not treated as publisher activity; right? They are
not treated as the speakers. They are treated as conduits for
the----
Mr. Carter. OK.
Mr. Dillon [continuing]. Speech of others. So--but if they
were to be treated as publishers, then I imagine they would be
much more mindful of what they allow to be amplified and what
they don't.
Mr. Carter. OK. Thank you. Thank you very much. Dr.
Bhattacharya--I am sorry. But anyway, look. I am a healthcare
professional. I am a pharmacist. And when the vaccine first
came out, I wanted to set a good example both as a healthcare
professional and as a Member of Congress. So I volunteered for
the clinical trials, and I did that.
However, I believe very strongly that people should have
the choice whether they want to do that or not. I encourage
them to. I thought it was safe. But that ought to be a personal
decision, in my opinion.
What are the consequences of suppressing legitimate scientific and medical studies that don't fit the mainstream media narrative?
Dr. Bhattacharya. People no longer trust public health. People no longer trust doctors. And as a consequence, people won't follow even true, good advice. I argued for older people to be vaccinated because that's what the evidence said, and I was glad when my mom was vaccinated in April 2021. What I have seen now is a huge uptick in vaccine hesitancy for really essential vaccines like measles, mumps, rubella as a consequence of that distrust. And it's a real disservice to the American people that we allowed this to happen.
Mr. Carter. Great. Thank you all very much for being here,
and thank you, Mr. Chairman, and I will yield back.
Mr. Latta. Thank you. The gentleman yields back, and the
Chair now recognizes the gentleman from California's 29th
District for 5 minutes.
Mr. Cardenas. Thank you very much, Mr. Chairman. There are
real abuses right now on the part of social media companies not
only in America but around the world. We talked about a lot of
them last week when the CEO of TikTok was before us. There is a
real need for accountability here, and reforming Section 230 in
a targeted and thoughtful way is going to be a big part of what
we should be doing in Congress. And hopefully we will get
around to doing that. Many bills have been introduced, but we
haven't been able to pass the legislation. Hopefully we will
have success this time.
But the conversation that the majority seems to be having
back and forth with some of the witnesses today is a bit
bizarre to me. Conservative censorship seems to be what a lot
of my colleagues are focusing on.
But there is a lot more going on, especially when it comes
to life-and-death issues for the American people, especially
American children. The idea seems to be that the big fix we need to Section 230 is preventing social media companies from taking down harmful content.
Like I said, we should definitely make sure that they are
taking down content that is harming especially our children.
That's not what I've been hearing from my colleagues last week.
And I am not shocked that we are hearing the same thing today.
So I am going to use my time to talk about very real mis- and
disinformation that targets vulnerable communities like the
predominantly Latino community I represent in the San Fernando
Valley.
I am glad we have an actual expert here, Mr. Overton, to
explore this. I have seen firsthand how social media misinformation and disinformation created vaccine hesitancy, which actually has cost human lives. I have told the story of
how my mother-in-law, whose primary language is Spanish, asked
me if it was true that there were microchips in vaccines.
That came from her Spanish-speaking colleagues who spend way too much time on social media--all of them, by the way, in their 60s and 70s; these are not children--who actually were convinced or led to believe that there are microchips in the vaccines. Other Spanish-language misinformation said that
vaccines would lead to sterilization or alter your DNA, et
cetera, et cetera, et cetera.
We know the companies do a terrible job taking down Spanish-language misinformation and also don't do a very good job of pulling down misinformation and disinformation in English. And we know that this lack of content moderation doesn't make social media better, like some of the witnesses today suggest; it makes it dangerous. So my first
question is to you, Professor Overton. If we follow some of the
proposals here today and alter Section 230 in a way that would
limit the ability of platforms to moderate content like mis-
and disinformation, what could be the potential consequences
for communities like the ones that I just mentioned a minute
ago?
Mr. Overton. Thank you very much, Congressman. Things could be worse--worse in terms of medical misinformation, political disinformation, economic scams. And you focused on content moderation being key. That was the original point here, in terms of Prodigy and a concern about platforms not taking down the bad stuff because they were afraid of being sued. That's the whole point of it.
Mr. Cardenas. Thank you. We also know that election mis-
and disinformation is a huge problem and another one that often
spreads unchecked on platforms when it's in Spanish. We saw in the run-up to the 2022 midterms that election misinformation in
Spanish was widespread on YouTube and other platforms.
Professor Overton, I know this is an issue of special interest to
you. Can you talk a bit about the special harms associated with
spreading information that misleads voters and why it's
important that social media platforms have the ability to
remove such content?
Mr. Overton. Well, this is incredibly important, because voting is preservative of all other rights. And we have seen polarization, in terms of us being pulled apart. We have seen foreign interference, in terms of Russia, Iran, and other entities dividing us. We have also seen voter suppression--for example, in 2016, particular communities targeted. And there have been some studies that found that this work is still happening: activities by operatives financed by Russia and Iran, but folks who were in places like Ghana and Nigeria, scamming and basically changing our political debate. It's a real danger.
Mr. Cardenas. One of the things that people don't realize:
Just because they see it in print doesn't mean it's news.
Mr. Overton. Right.
Mr. Cardenas. It is just opinion. And so thank you so much.
My time has expired. I yield back.
Mr. Latta. Thank you. The gentleman's time has expired, and
the Chair now recognizes the gentleman from Utah for 5 minutes.
Mr. Curtis. Thank you, Mr. Chairman. Before I begin, I
would like to give my home State a shout-out. Just last week,
they passed a law prohibiting social media companies from
allowing people under 18 to open an account. And I would like
to quote from the podcast The Daily from the New York Times: ``It
was as if the Governor of Utah was saying to Congress, `You
folks, while you're blathering away about the harms of TikTok,
here in Utah we are actually going to do something about it. We
are taking action while you are having a hearing.'''
Pivoting a little bit, Mr. Shellenberger, I don't know
about you, but I am having a little bit of a deja vu moment.
Yesterday, I boarded an airplane in California. And you were
sitting to my right. And the great Congresswoman from
California was sitting to my left. Unlike yesterday, I only
have 5 minutes, not 5 hours, to question you. So I am going to push you to go a little bit quick. But I'd
like to just explore this idea of--are we missing the mark
here? And let me tell you what I mean by that.
Somehow we are having this conversation about human beings--imperfect human beings--deciding what is acceptable for us to hear and see. I don't know about you, but I have spent my life
in the pursuit of truth, and I don't know anybody that can
define it. We've had a couple of examples that were obviously problematic. But if you go back to COVID, the science said no masks. Then the science said masks. Then it said double masks. It said kids shouldn't play on playgrounds because it was spread by surfaces. It got it wrong.
And so how is it that we are supposed to objectively decide
what people can see and what they can't see? I know from your
testimony, at least your written testimony, this concept of
objectionable--can you just take a second and describe how
maybe we are off track on this?
Mr. Shellenberger. Sure. Thank you for the question. I mean, I think it's important to remind ourselves just how radical the First Amendment is. The people that created this country were very clear that it wasn't a piece of paper that gave us the freedom of speech. It was an unalienable right. It was something that we were born with. It's a human right. It's a right to be able to express yourself, to make these noises, to make these scribbles.
Mr. Curtis. I am going to----
Mr. Shellenberger. That's fundamental to us.
Mr. Curtis. There is so much I want to ask you, so I am just going to short-change it there just a little bit. Do you think the Founders perceived a situation where there would be a jury appointed by Facebook that would make these decisions?
Mr. Shellenberger. Absolutely not. I mean, there was----
Mr. Curtis. Is there any way, even with good intent, they
can do that right?
Mr. Shellenberger. Absolutely not. I mean, we think that we are so much more advanced than we were 250 years ago. But 250 years ago, there was a very strong understanding that you needed people to be wrong. You needed people to the----
Mr. Curtis. Yesterday on the plane, I pointed out how, in
my district, Native Americans actually wrote on rocks. And some
people, quite frankly, would find some of the things they put
up there offensive. I am not sure I'd want my kids to fully see
them, yet----
Mr. Shellenberger. Right.
Mr. Curtis [continuing]. They put them up there, and that's
the way it is.
OK. Very quickly, because I am out of time. A couple of my colleagues have pooh-poohed this hearing with the notion that it's not that big a deal. Can you explain, as an individual, what it feels like to be censored?
Mr. Shellenberger. It's absolutely horrible. It's one of
the worst experiences you'll ever have. It's humiliating. It's
being told by one of the most powerful corporations in the world that you are wrong. In my case, it wasn't that the facts were wrong. It was the concern that it would be misleading, that people would get the wrong idea from it. It's dehumanizing. It's not what this country is about. It's grossly inappropriate. There is no appeals process. There is no--your voice is----
Mr. Curtis. No accountability.
Mr. Shellenberger [continuing]. Denied.
Mr. Curtis. Yes, no accountability.
Mr. Shellenberger. It's the Star Chamber----
Mr. Curtis. Yes.
Mr. Shellenberger [continuing]. Effectively.
Mr. Curtis. Thank you. Mr. Dillon, let me pivot to you for
just a second. Let me talk about Section 230 and algorithms. To try to put it simply, I think 230 sees two buckets. One bucket is published content: John Curtis can publish content on there. The other bucket is kind of a distributor of that content. I think Section 230 tries to protect the distributor of that content. But this assumes that social media platforms, as distributors, are nothing more than a large bulletin board--you used the word ``conduit''--where we can all place content for the world to see and, with the exception of some predefined bad behavior, we don't hold them liable for that.
But is it possible that, instead of black and white, there is actually a gray area between hosting a post and making decisions that hide or amplify that post, where somebody actually shifts from a bulletin board to ownership of that content and shouldn't be protected by 230?
Mr. Dillon. Yes. I definitely think so. When they get too hands-on with how the content is displayed, yes. I think the main issue is the viewpoint discrimination, when they start deciding who can speak and what they can say. They go far beyond what Section 230 had in mind, which was objectively, you know, unacceptable speech, like unlawful speech.
Mr. Curtis. Clearly defined. I wondered today if my
favorite sitcom, ``Seinfeld,'' would be taken down from some of
these social media platforms. I really am out of time. Mr.
Chairman, I yield back.
Mr. Latta. The gentleman yields back. The Chair now
recognizes the gentleman from Florida's Ninth District for 5
minutes.
Mr. Soto. Thank you, Chairman. Hailing from the great State of Florida, we see book banning, the elimination of AP African-American studies, the silencing of the LGBTQ community, and voices downplaying or denying the Holocaust, slavery, or the genocide of Native Americans. As someone of Puerto Rican descent, I find it particularly offensive that they are censoring Roberto Clemente's own biography and books about him--an amazing Puerto Rican, an amazing baseball player, and one who contributed so much to helping out children and families. And then I think
about the Borinqueneers, Puerto Ricans who were discriminated
against and fought for our country nonetheless in World War II and before, and who were honored in a bipartisan fashion.
These stories need to be told. We even see a new bill that's attempting to make it easier for politicians to sue news media because they use anonymous sources. I mean, if it's too hot, get out of the kitchen, right? This is part of our First
Amendment rights. These are all censorship efforts happening in
Florida under the grip of Governor DeSantis. The Republican majority last week joined the censorship crusade by continuing the book-banning efforts. So I think we all agree there is some need for a discussion of censorship, and so I appreciate us having
that here today. In the context of social media, the question
is what to do about it.
And Professor Overton, I appreciate you being here today. I
want to talk briefly about 230 reform since that's really a lot
of what we are talking about. I am empathetic to the
discussions that other witnesses have said here today about
being silenced. I think, Professor, first of all, it's great to
have a GW law professor here, as my--being an alumnus. And Dr.
Dunn is also an alumnus of the med school. We will give him
credit for purposes of this hearing.
I want to focus on two common-ground issues, Federal civil
rights violations that happen over social media and then
protecting our kids. So let's first start out with efforts that
would be clear violations if someone did it outside of social
media.
What are some ideas in what we could do to draft
legislation to ensure that civil rights are protected within
the social media sphere?
Mr. Overton. You know, one idea is a carve-out for civil
rights violations. So we have a carve-out for IP, for Federal
criminal law, and for a few other categories. And one would be
this carve-out for Federal civil rights violations.
Mr. Soto. Can you expound on that a little bit?
Mr. Overton. Sure----
Mr. Soto. How do you think----
Mr. Overton [continuing]. So----
Mr. Soto [continuing]. We should put it together?
Mr. Overton. So Airbnb, for example, they designed a
platform that shows somebody's face and, you know, their name.
And there is discrimination happening on their platforms. But
right now, they are saying they are not liable because of 230.
Facebook, basically their algorithms steer housing and
employment ads to White folks and away from Black and Latino folks. And
they've got drop-downs that allow for folks to target on
those--but they say, ``Hey, we are not liable because of
Section 230.'' So this carve-out would basically say, hey, 230
applies generally but not for Federal civil rights violations,
just like it does with IP, just like it does with Federal
criminal law.
Mr. Soto. And about protecting our kids, you know, we have
disagreements over books and things like that. But where there
is common ground, yes, we all believe parents should be able to
have a strong say in which books their kids are reading. They
should just not be able to ban what other kids and their
parents decide for--is best for their kids. In the case of the
Utah law, they are empowering parents to make decisions about
access to social media pre-18, which I think there is--there is
definitely some positivity there as far as where we could go
with something like this. What would you say we could be doing
to protect our kids better vis-a-vis 230 reforms?
Mr. Overton. Yes. I--I certainly think this concept of
requirements in terms of Utah is not a bad--a bad thing. I
think the big thing, though, is if we are chilling people for
moderating and platforms for moderating, we are going to see
more pornography, we are going to see more obscenity, sexual
solicitation. All of that comes with restraints on moderation.
So that--that's a big concern I have.
Mr. Soto. But if we established having some requirement
pre-18 for parental consent, do you think that would have a
substantial effect based upon your research in helping
protect----
Mr. Overton. Yes.
Mr. Soto [continuing]. Our kids from some of those things?
Mr. Overton. I think that that could, and I certainly would
love to talk to you more about it and study it in more detail.
Mr. Soto. Thanks, and I yield back.
Mr. Latta. Thank you. The gentleman yields back, and the
Chair now recognizes the gentleman from Georgia's 12th
District--oh, I am sorry. Mr. Joyce came in. I didn't see you.
I am sorry. The Chair recognizes the gentleman from
Pennsylvania for 5 minutes.
Mr. Joyce. Thank you, Chairman Latta and Ranking Member
Matsui, for holding today's hearing and for you, the witnesses,
for your time and your testimony. Last week, we held a hearing
regarding data privacy and the pervasive manner in which
nefarious actors can manipulate content and exploit user data
for financial gain.
Today, we are faced with another issue: Big Tech companies
that have inconsistently applied content moderation policies,
manipulated content on their platforms, and even gone so far as
to ban or blacklist users for exercising their right to free
speech. These companies claim to operate as politically neutral
public forums where speech, ideas and thoughts are supposed to
be shared equally and unabridged.
Unfortunately, this has not been the case, as evidenced by
the witnesses here today and your testimony. These companies
often silence opposing ideas that do not align with their
platforms' ideologies, all the while unabashedly using Section
230 as a vehicle to indemnify themselves. It goes without
saying that the internet, our use of the internet and how we
communicate, exchange ideas, and interact across the internet
has evolved. And Section 230 is long overdue to evolve and
reflect the reality of what we are facing today.
Dr. Bhattacharya, thank you for being here today. As a
physician myself, I understand that robust scientific
discussion and discourse, especially amidst an unprecedented
public health emergency, is critical to a healthy medical
community. But in your case, Twitter--and I am quoting here--
``trend blacklisted''--unquote--and stifled scientific
discussion. Is that correct?
Dr. Bhattacharya. Yes. I was on a trend blacklist.
Mr. Joyce. And can you please briefly describe what a trend
blacklist means?
Dr. Bhattacharya. It limits the visibility of my tweets so
that only my followers can see them, so they have no chance of
going outside of the set of people who happen to follow me.
Mr. Joyce. And do you find that by limiting your ability to
communicate that that is a healthy medical community?
Dr. Bhattacharya. It--it--no. I think it is a terrible
thing to limit the ability for scientists to discuss openly
with one another in public our disagreements.
Mr. Joyce. Thank you, Dr. Bhattacharya. The Great
Barrington Declaration offered a sensible alternative approach
to handling COVID-19, emphasizing more focused protection of
the elderly and other higher-risk groups. Tragically, this
approach was not followed in my home State of Pennsylvania,
where our former Governor ordered nursing homes to receive
COVID-19-positive patients. And that was to devastating
effect.
Dr. Bhattacharya, can you briefly describe what this
reaction was to the declaration by public health officials and
how our own Government tried to suppress that free flow of
ideas?
Dr. Bhattacharya. So Francis Collins, then head of the NIH,
labeled me a fringe epidemiologist. Then I started--I started
getting death threats. I started getting essentially questions
from reporters accusing me of wanting to let the virus rip
when I was calling for better protection of elderly people.
What happened in nursing homes in Pennsylvania and New York
where COVID-infected patients were sent back was a violation of
that principle of focused protection. Had we had that debate
openly, maybe that might have been avoided.
Mr. Joyce. So the medical community at large was restricted
from your ideas. Is that correct?
Dr. Bhattacharya. The social media companies and also the
Government, the Federal Government in the form of the head of
the National Institutes of Health, worked to essentially create
a propaganda campaign, to make this illusion of consensus that
their ideas--Francis Collins' ideas, Tony Fauci's ideas--were a
consensus of scientists when, in fact, that wasn't
true.
There were tens of thousands of scientists who signed on
who opposed the lockdowns, who were in favor of focused
protection of vulnerable older people. That debate should have
happened without suppression but didn't.
Mr. Joyce. Do you feel that this silencing of speech--and
particularly for an individual like you from the medical
community--do you feel that this has damaged the trust in the
public health apparatus?
Dr. Bhattacharya. It's as low as I've ever seen it in my
career. And it's tragic because public health is very
important. It is important that Americans trust public health.
And when public health doesn't earn that trust, very bad things
happen to the health of the American public.
Mr. Joyce. So take us to the next step: How do we earn back
that public trust?
Dr. Bhattacharya. Public health needs to apologize for the
errors that it made, embrace honestly and list them out and
say, ``We were wrong about the ability of the vaccine to stop
transmission, we were wrong about school closures, we were
wrong to suppress the idea of focused protection,'' and then
put in place reforms so that people can trust that, when public
health says something, it's actually the truth--and
allow dissenting voices to be heard all the time.
Mr. Joyce. Dr. Bhattacharya, thank you for your candor and
thank you for your expertise.
Mr. Chairman, I yield back.
Mr. Latta. Thank you. The gentleman yields back, and the
Chair now recognizes the gentlelady from California's 16th
District for 5 minutes.
Ms. Eshoo. Thank you, Mr. Chairman, and thank you to each
one of the witnesses.
This is a very important discussion today. You know, I have
always thought of the American flag as the symbol of our
country, but the Constitution is the soul of our Nation. And so
the discussion about First Amendment is a very, very important
one. It is a sacred one, in my view.
In listening to each one of the witnesses, I think that my
sensibilities move from one kind of--they swing from one
direction to another. Are these sensibilities of individuals,
professionals who have a great deal of pride about their
profession, what they write, what they say?
You know, in politics we say throw a punch, take a punch.
Is it someone's ego that is offended by the reaction to what
they have written?
Dr. Bhattacharya, you wrote the Great Barrington
Declaration. I think context is very important in this as well.
That was in October of 2020. There were 24,930 deaths due to
COVID in October of 2020, an average of 787 precious souls that
were dying every day.
We didn't have the vaccine yet. Now, your complaint about
being censored is with the platform, and I think that you also
have a beef with the then head of the NIH and Dr. Fauci because
they didn't agree with you and there was fierce opposition to
what you put out.
That is all part of the enormously important debate that
takes place in academia and in the medical community. That is
vibrant, it is a reflection of our democracy. But what I would
like to get to is what the definition of censorship is.
Mr. Overton, would an accurate definition of censorship be
the suppression or prohibition of speech by the Government?
Mr. Overton. It is this concept of the Government. And the
Government is key, in terms of censorship. The courts have come
up with a test for whether Government is being coercive, in
terms of social media.
So are they being coercive? Are they going to punish social
media for keeping things up? That is the state action. That is
the problem.
One other quick note here: There is some of this reality
that, frankly, I just think we have missed here. In Q4 2021
alone, YouTube removed 1.26 billion comments. 1.26 billion
comments.
So if we think that they have got to go through and give an
explanation for every comment that they have removed, that is
not going to happen. Basically, what's going to happen is they
are going to say, ``We are going to get out of this business.
We will just leave up the smut, the obscenity, the hate speech.
That is your internet, if that is what we have to do.''
Ms. Eshoo. Congress has a major responsibility in all of
these areas, whether it is the reforming of Section 230--let's
see what the Supreme Court does. My sense is they are going to
kick it back to Congress again.
Mr. Overton. Right.
Ms. Eshoo. National privacy law, I don't take a back seat
to anyone on that issue. Congresswoman Lofgren and I wrote
what academicians said was the most comprehensive privacy
legislation in the Congress. So we have a lot on our plate and
a lot of responsibilities to meet.
Would a State law that prohibits private-sector employers
or public university professors or students from discussing
diversity, racial equity, systemic racism, or sexual identity
be considered censorship?
Mr. Overton. It would be, and in fact it was. A couple of
courts last year--yes.
Ms. Eshoo. Let me get to another question because you said
yes.
Would a State law preventing public school teachers from
discussing their own sexual identity and requiring them to hide
it from their students be considered censorship? And Florida's
new law on banning books--would you consider that
censorship?
Mr. Overton. Certainly, as applied to universities and
private-sector employers, yes, and courts have agreed with me.
Ms. Eshoo. Well, it seems to me that some of us speak out
on what we consider censorship. There is a convenience in this,
that what we don't like, we consider censorship. But I think it
is very broad, under the First Amendment. And I think the steps
that Congress needs to take are to certainly address the reforms
in 230 and a very strong national privacy law.
I wish I had more time, but I thank you again for your
testimony and for your answers.
Mr. Latta. Thank you. The gentlelady yields back.
The Chair now recognizes the gentleman from Florida's 2nd
District for 5 minutes.
Mr. Dunn. Yes, thank you very much, Mr. Chairman. I have a
few questions for the panel, but I notice that we ran out of
time as Dr. Bhattacharya was trying to respond to Madame Eshoo.
I thought I would give you a brief moment first to do that.
Dr. Bhattacharya. Thank you, Congressman. I will be very
short.
A couple of things. One is, in October 2020, when we wrote
the Great Barrington Declaration, it was already clear from the
scientific evidence that school closures were a tremendous
mistake.
It was already clear that there was this huge age gradient,
that it was really older people that were really high risk. And
so a call for protecting vulnerable people was not a
controversial thing. It should not have been a controversial
thing, and yet it was censored and suppressed by social media.
Second thing: This was not simply a problem of ego. It is
fine to have scientific debate. In fact, I like
scientific debate. The problem here was that we had Federal
authorities with the ability to fund scientists putting
their thumb on the scale, and then the Federal Government using
its power to suppress that scientific discussion online and
other----
Mr. Dunn. I do agree with you, Dr. Bhattacharya, and I am
going to get back to you with a question here in a minute, but
thank you for that.
I think there is a clear pattern of censorship, and it
reveals the political leanings of those who were censored
versus those doing the censoring. I think it is self-evident
that the arbitrary censorship role of Big Tech has led to
partisan outcomes.
The same holds true with fact-checkers when they collude
with other interests. For instance, the company NewsGuard
defines itself as a journalism and technology tool that rates
the credibility of news information and tracks online
misinformation. However, they are partnered with Big Tech, Big
Pharma, the National Teachers Union, and even Government
agencies. In fact, $750,000 went from the Department of Defense
to NewsGuard in a Government contract.
Mr. Shellenberger, do you find this pattern of censorship
and political bias to be real?
Mr. Shellenberger. To be real?
Mr. Dunn. Yes.
Mr. Shellenberger. Yes, sir.
Mr. Dunn. I do too. It is also my understanding that the
vast majority of outlets targeted by NewsGuard, specifically,
are conservative-leaning outlets. Do you think that is true?
Mr. Overton. I think that NewsGuard--I mean, we know that
NewsGuard rated discussion of COVID origins as coming from a
lab as disinformation.
Mr. Dunn. That's right. Yes, I remember that.
Mr. Overton. One big----
Mr. Dunn. Well, thank you. I agree with you. I think fact-
checkers need to be fact-checked and removed from the
Government payroll.
As a medical professional, I find it extremely disturbing
to see medicine become partisan, enabling global institutions,
Big Pharma, and government to have the power to make sweeping
mandates and censor personal health freedoms.
This is an unequivocal departure from what we saw with those
platforms back in the days when Twitter was claiming that they
were the free speech wing of the free speech party. A lot has
changed in the 10 years since they made that claim.
Dr. Bhattacharya, in your testimony you mentioned the mass
censorship of the Great Barrington Declaration, and that was a
declaration where tens of thousands of doctors and public
health scientists signed onto a very straight-forward
declaration. In fact, I am one of those doctors. So thank you
very much for that.
As a medical doctor, do you consider the opinions of tens
of thousands of doctors endorsing a single medical opinion as a
consensus of sorts?
Dr. Bhattacharya. I mean, I don't think that there was a
consensus, but I also don't think that we were a fringe
position. I think that there was a legitimate discussion to be
had, and had we had it openly, we would have won the debate.
Mr. Dunn. I think that is true too, and I was going through
that in real time with my colleagues here and elsewhere.
Last year's Twitter Files revealed, Dr. Bhattacharya,
that you were placed on their trends blacklist, which prevented
your Tweets from trending on the site.
Were you ever contacted by Twitter regarding your placement
on that blacklist or did you have any idea that they were
targeting your account?
Dr. Bhattacharya. No, not until Elon Musk took over.
Mr. Dunn. That's excellent. So I have to say, thank you
very much, Dr. Bhattacharya. We have to do more about
transparency in medicine. We have to do more about censorship.
We need to get back to the times--I know you remember them,
I recall them--when we had free and open debate. In fact,
it was demanded of us, if you will, in postoperative M&M
conferences and whatnot, that we actually review the truth,
face our faults, our flaws, our mistakes.
I hope that we can get back to that in the future. Thank
you very much for coming.
Mr. Chairman, I yield back.
Mr. Latta. Thank you. The gentleman yields back. The Chair
recognizes the gentlelady from New Hampshire for 5 minutes.
Ms. Kuster. Great. Thank you very much, Mr. Chair. I want
to spend my time focusing on what I believe are real victims of
online harms and examining how Section 230 plays a role in those
harms.
As the founder and cochair of the Bipartisan Task Force to
End Sexual Violence, I am particularly concerned about reports
of online dating apps being used to commit sexual assaults and
how Section 230 has prevented the survivors from seeking
justice.
I recognize that Section 230 is the bedrock of our modern-
day internet, but Congress has a responsibility to ensure that
these legal protections are functioning as intended.
The protections that Section 230 provides online platforms
should not extend to bad actors and online predators. Dating
platform companies have defeated numerous lawsuits regarding
egregious and repeated cases of sexual assaults on the grounds
of Section 230. And I think this committee can agree that
Section 230 was not intended to protect dating apps when they
failed to address known flaws that facilitate sexual violence.
Mr. Overton, if you could, Congress has previously examined
and enacted changes to Section 230 to strengthen protections.
Can you speak to how additional reforms to Section 230 could
better protect the American public?
Mr. Overton. Thank you so much, Congresswoman. And there is
just this notion about platforms: you know, if you are a company
and you engage in the activity, you can be sued. But if you
basically set up a platform to facilitate the activity and get
paid for it? Hey, you are fine. You hide behind Section 230.
And this is the true problem. So this notion of requiring
that entities act in good faith and take reasonable steps in
order to enjoy the immunity is one reform that has been, you
know, held up. It is sufficiently flexible to deal with
different contexts, so that is a possibility, in terms of
dealing with this.
Professor Danielle Citron has put forth this proposal. She
is kind of tweaking it now. But certainly these folks know
that there is a problem, and they are profiting off of these
platforms--effectively profiting off of Section 230, which was
designed to make it easy for folks to take down this type of
activity and has been twisted by courts to basically allow for
a free-for-all.
So I agree with you. Exploitation, particularly of minors,
is a major issue that hopefully there's some bipartisan
agreement on addressing.
Ms. Kuster. And based upon your expertise, do you believe
that Congress should look to reform Section 230 in this way?
Mr. Overton. I definitely think that we need to think about
it in a nuanced way. We definitely need to reform. I think that
is one of the leading proposals, and I am very open and
supportive of it. There may be some other proposals. The SHIELD
Act, there are a few others that are out there that are
important.
Ms. Kuster. Well, thank you for sharing your expertise.
It remains clear to me that there are real opportunities to
make the internet a safer place for the American people.
Section 230 was enacted almost 30 years ago, and it is past
time for Congress to take a closer look at these legal
protections.
I ask that this committee refocus its Section 230 effort
on preventing real online harms and sexual violence in our
communities. And I yield back.
Mr. Latta. Thank you. The gentlelady yields back, and the
Chair recognizes the gentleman from Georgia's 12th District for
5 minutes.
Mr. Allen. Thank you, Chair Latta, and for convening this
hearing. And I want to thank our witnesses for being here. This
is a very important discussion we are having today.
Big Tech currently has unilateral control over the majority
of public debate in our culture, and it is concerning to most
Americans.
What is even more concerning is that, as a result of the
Twitter Files, it has been made clear that Big Tech is also
working in direct coordination with Government officials to
silence specific individuals whom unelected bureaucrats
disagree with.
This Orwellian scenario is un-American, and House
Republicans will not stand for it. Last year the Poynter
Institute, a self-appointed clearinghouse for fact-checkers,
made news when one of its fact-checkers, PolitiFact,
incorrectly labeled third-party content that challenged the
Biden administration's definition of a recession as false
information.
It is clear that PolitiFact was biased in the content it
was flagging as misinformation or false information to fit the
narrative it preferred, rather than reflecting the known facts.
Mr. Dillon, in your experience, are these fact-checkers
apolitical, neutral, fact-based researchers?
Mr. Dillon. No, that's a pretty good joke. They are not.
You know, in the whole fact-checking apparatus, there's
unbelievable hubris in the whole project. You know, this idea,
especially when we are talking about medical information too, I
often hear people going back to say, ``Well, it was based on
what we knew at the time that we were saying this was true or
that this was false.''
All that is is an admission that our knowledge changes over
time. It is a knockdown argument against censorship. If
knowledge changes over time, you should never try to say that
these are the facts, these are the only things that you can
say, everyone who says something opposing to that should be
silenced.
It is a knockdown argument against censorship in favor of
open debate, which is the fastest and best way to get to the
truth.
Mr. Allen. Dr. Bhattacharya, give me your experience with
these fact-checkers.
Dr. Bhattacharya. They have been tremendously inadequate
during the COVID debate and the pandemic at policing
scientific debate. They can't tell the difference between true
scientific facts and false scientific facts. They serve as
narrative enforcers more than as true referees of scientific
debate, which takes lots of years of experience that fact-
checkers don't have.
Mr. Allen. Mr. Shellenberger, do you know who funds these
fact-checkers?
Mr. Shellenberger. No, I do not.
Mr. Allen. As far as--well, obviously, somebody's paying
them to put out this information?
Mr. Dillon. Can I respond to that really quick?
Mr. Allen. Yes.
Mr. Dillon. We were fact-checked. We made a joke about how
the Ninth Circuit Court had overruled the death of Ruth Bader
Ginsburg, and USA Today fact-checked it, and that fact-check
was paid for by grants from Facebook, and then Facebook
threatened to demonetize us in response to the false rating on
that joke.
Mr. Allen. OK. Well, great. Thank you, Mr. Dillon.
As a followup, did the Twitter Files or any research that
you have done to expose the practices of Big Tech show if fact-
checkers coordinate with Federal agencies when they flag
information? Mr. Dillon?
Mr. Dillon. I am sorry. Can you repeat?
Mr. Allen. As far as the Twitter Files, is there any
research that you have done to expose the practices of Big Tech
that show if fact-checkers coordinated with Federal agencies
when they flagged information?
Mr. Dillon. The Twitter Files, I think, exposed a breadth
of coordination with state actors to control the flow of
information.
Mr. Allen. OK.
Mr. Dillon. It was ongoing discussion between the two.
Mr. Allen. Dr. Bhattacharya, what do you--do you have----
Dr. Bhattacharya. The Federal Government financed--funded
projects at universities that then reached out to two social
media companies and told them how to censor and whom to censor
during COVID.
Mr. Allen. Mr. Dillon, real quickly: What did Twitter's
censure of your company do to your revenue?
Mr. Dillon. Well, initially we did see a spike, because we
had a lot of people sign up in support of us, but being off of
Twitter for 8 months took its toll. Currently, it is where we
generate the most impressions and the most traffic.
I just posted the other day that we generated more
impressions on Twitter in the last week than we have on
Facebook, Instagram, and YouTube combined, partly because
Facebook has been throttling us so much that we would get more
views on a post if we stuck it on a telephone pole in a small
town than we get on Facebook lately.
Mr. Allen. Yes, so much of that is just to hide the truth,
to be honest with you.
I met with Dr. Caldwell, who is an associate professor at
the Medical College of Georgia, which is Augusta University,
and she gave me a page here, ``Protecting Young People
Online,'' and I would like to submit this for the record.
Mr. Latta. Without objection.
Mr. Allen. OK. Thank you very much, all of you, and I yield
back.
Mr. Latta. Thank you very much. The Chair now recognizes
the gentlelady from Tennessee for 5 minutes.
Mrs. Harshbarger. Thank you, Mr. Chairman, and thank you to
the witnesses for being here today.
And Mr. Dillon, I will start with you. Would you agree that
social media can't take a joke and that they can't handle the
truth? Yes or no?
Mr. Dillon. Yes, I think that there is actually an ongoing
outright war on the truth and reality, and a lot of the reason
why some of our jokes have been censored are because they carry
the truth.
You know, with every joke there is a grain of truth.
Mrs. Harshbarger. Absolutely.
Mr. Dillon. And the joke that we were censored for and
locked out for--the thing I say about it most frequently is
that the truth isn't hate speech. It included truth. And so
they were actually moderating--this is where the, you know, the
bias and censorship comes into play in a lot of different
areas.
In their terms of service, they have baked radical gender
ideology into them, so that you must either affirm it or remain
silent. If you say anything to criticize it or even joke about
it, you can get kicked off the platform.
So the bias is in the terms of service.
Mrs. Harshbarger. In the terms of service. You know, in
your statement you said censorship guards the narrative, not
the truth. It guards the narrative at the expense of the truth.
And you went on to say about Twitter--now, this is pre-Elon
Musk and pre--we know that freedom of speech costs $44 billion,
but ``instead of removing our joke themselves--they required us
to delete it and admit that we'd engaged in hateful conduct''
and, you know, it sounds to me like they forced you to make a
plea deal, basically, and say you committed fraud and all that
kind of stuff.
Just respond to that please, sir.
Mr. Dillon. Yes. My reaction to that, when I first saw that
they were requiring that we delete the joke, you know,
censorship would be them deleting the joke. That would be them
taking it down and saying that ``we don't want this on our
platform.''
It went beyond censorship to what I would refer to as
subjugation by telling us that we must delete it ourselves and
admit, in the process--there was red font over the delete
button that said we admitted that we engaged in hateful
conduct. So that is why we refused to delete the joke, because
we did not engage in hateful conduct. The truth is not hate
speech.
Mrs. Harshbarger. No, and it makes me want to put your
``Fulfilled Prophecies'' from the Babylon Bee and enter them
into the Congressional Record just for posterity's sake,
honestly, just to show that truth is stranger than fiction, and
it seems that satire can be a predictor of the truth, honestly.
Dr. Bhattacharya, I have been a pharmacist 37 years. I am
the other pharmacist in Congress. And, you know, we were
constantly being told to follow the science. And it sounds like
you agree with me that there was collaboration between Federal
Government agencies and social media platforms to suppress the
truth.
And that goes back to the origins of COVID, the lab leak
theory, vaccinations, masks, lockdowns, whole nine yards, and
you know, you state that the suppression of scientific
discussion online clearly violates the U.S. First Amendment,
and I agree with that.
So where do you go back to get your good credibility and to
get your good name? How do we restore that and how--we know
that 75 percent of Americans do not trust platforms, social
media platforms.
So when it comes to healthcare, how do we get that trust
factor back, and how do we go forward?
Dr. Bhattacharya. I think we need fundamental reform that
establishes the principle that scientific debate can happen
without this kind of thumb on the scale.
Very quick funny story from this Missouri v. Biden case
to which I am a party. We got to depose a whole bunch of
witnesses inside the Federal Government, including Tony Fauci
and some others in the White House. There is a huge volume of
emails from the White House to Facebook pressuring Facebook to
censor things.
One thing that happened, at one point the White House
noticed that its Facebook page wasn't growing very fast, and it
turned out the reason was the CDC had put this pause on the J&J
vaccine. The White House had put that on their page, and as a
result, the algorithms picked up the White House as an anti-vax
group, and so it suppressed the growth of the White House page.
This censorship regime affects everybody. Everyone should
have the opportunity to say honestly their scientific opinion
online. There should not be a thumb on the scale like there has
been.
Mrs. Harshbarger. Well, that is why I left the most trusted
profession to come to the least trusted profession. So I
understand.
In 40 seconds that I have left, Mr. Shellenberger, you say
in your statement ``the only guaranteed remedy to Big Tech
censorship is the elimination of Section 230 liability
protections,'' but you go on to say ``Congress could reduce
rather than eliminate liability protections in Section 230.''
Can you expound on that?
Mr. Shellenberger. Well, my argument is actually for
transparency. I think that is the right next step. I think
that's the step that could get bipartisan agreement, but I
think that if you don't take--something has to be done, and I
would think reducing the liability protections would be a
moderate step in between the chaos that we have now and----
Mrs. Harshbarger. OK.
Mr. Shellenberger [continuing]. The transparency that I
think is best.
Mrs. Harshbarger. I think so too. I agree. With that, Mr.
Chairman, I yield back.
Mr. Latta. Thank you. The gentlelady yields back, and the
Chair now recognizes the Chair of the full committee, the
gentlelady from Washington, for 5 minutes.
Mrs. Rodgers. Thank you, Mr. Chairman. Appreciate everyone
being here.
Mr. Shellenberger, I wanted to start with just a little bit
about the state of free speech in America online. And what has
certainly been illuminated through the Twitter Files, the
lawsuits from State attorneys general, and investigative
reporting like your own should concern every American.
Big Tech has used their platforms to censor Americans
without due process or sufficient recourse. We also know that
the Biden administration has worked with Big Tech to censor
specific people or content that cuts against political
narratives.
Throughout it all, the mainstream media has not only turned
a blind eye, but it oftentimes seems like they are a willing
partner in defending Big Tech's actions. Big Tech, we know,
plays a central role in controlling what people see and hear
and what they believe, and controlling thought and expression.
Their censorship actions are really a risk to our
democracy. I led the Protecting Speech from Government
Interference Act with Chairmen Comer and Jordan to prohibit
Federal employees from colluding with Big Tech to censor speech
online. This bill passed the House earlier this year, but I
don't think that we can take our foot off the pedal.
So I would like to ask you, what more can Congress do to
restore and preserve the battle of ideas online, and what is
the risk if we don't?
Mr. Shellenberger. Well, I think the risk is the loss of
this fundamental right, the loss of trust in our institutions.
We are in the middle of a mental health crisis. I think we need
to--I think we need more transparency, and we just need to see
what is going on and to be able to open up that debate more,
otherwise--these are the most powerful mass media
communications entities that have ever existed, and their power
is enormous.
And we have seen extraordinary abuses of power in that
situation. Sunlight remains the best disinfectant, and I would
recommend that as the next step.
Mrs. Rodgers. Thank you. Mr. Dillon, parody and humor have
often been used to facilitate tough conversations central to
public discourse, and since our Nation's founding, political
cartoons, especially those critical of government, have been
ingrained in our history, so much so that, in 1798, the
Government tried to silence Americans by passing the Sedition
Act, which prohibited American citizens from printing,
uttering, or publishing any scandalous writing of the
Government. But we have overcome every attempt to silence
American voices and return to the core principles of freedom of
expression.
The difference now is that it is Big Tech, not Big Brother,
that's doing the censoring. So what are the consequences for a
society if we continue to allow this censorship of satire?
Mr. Dillon. Well, I mean, it is just--it's that much more
egregious when you see it happening with comedy. You know,
because comedy is bringing levity and laughter, and to be
censoring that just seems so outrageous to me.
You know, and a lot of these things, like you said, we are
aiming up, we are punching up at the powers that be. You know,
the purpose and part of the project of comedy is to poke holes
in the popular narrative. Like I said in my statement, you
know, if we are restricted from doing that, then the narrative
goes unchallenged. And so it is extremely important that we
have the freedom to be able to do that.
And it is extremely notable too--we haven't had much
discussion about this, but you know, we mentioned that Big Tech
is the biggest threat to our speech right now, and we haven't
been able to do anything about it, legislatively, up to this
point.
And so our only recourse has been that a billionaire came
in and bought one of these platforms and said he was going to
make it a free speech platform? I think it is crazy that it's
gotten to this point where that is what we've had to depend on
to be able to speak freely.
And to the point of transparency, he said that he is going
to open up the algorithms and make them public and show you
what is going on behind the scenes, and there will be no shadow
banning because you will be able to see exactly how your
account is being impacted.
You know, so it is great that somebody stepped in and did
that, but there is a lot that can be done legislatively to
prevent discrimination without compelling or curbing the
platforms' own speech.
Mrs. Rodgers. Pretty fundamental. Thank you.
Dr. Bhattacharya, if we fail to stop Big Tech censorship of
satire or scientific thoughts, how do you think it will impact
our kids and future generations? Or do we already see the
impact?
Dr. Bhattacharya. Sometimes I have heard that the
availability of social media, the ability to communicate with
so many people, is a justification for censorship. You know,
this is the same debate that happened when the printing press
invented.
The printing press allowed the communication with
tremendous numbers of people much more easily. And it was the
decision to allow that to happen that led to the scientific
enlightenment.
We are going to go back to a dark age if we decide that,
just because we have a new printing press, that we should start
to suppress speech.
Mrs. Rodgers. Well, Mr. Overton, I saw Facebook label a
Bible verse as false information. What do you say about that?
Mr. Overton. Of the billions of posts that they have, they
can get some things wrong.
Mrs. Rodgers. They certainly got that one wrong. I yield
back, Mr. Chairman.
Mr. Latta. Thank you. The gentlelady yields back, and the
Chair now recognizes the ranking member of the full committee,
the gentleman from New Jersey, for 5 minutes.
Mr. Pallone. Thank you, Chairman Latta. It is well
documented that social media platforms have helped facilitate
drug sales, influenced teenagers to engage in dangerous and
deadly behavior, incited a violent mob, contributed to election
denialism and hate speech, and led to increased forms of
violence against individuals.
Unfortunately, the majority didn't call any experts today
to speak about these important issues, and instead their
witnesses are here on a mission of personal grievance and
expansion of their wealth and influence, in my opinion.
But let me turn to my questions, which are for our witness, Professor
Overton. Let me ask, what are the consequences now and into the
future of our failure to reform Section 230, particularly for
the health and well-being of our youth, our society, our
democracy? If you would.
Mr. Overton. We could see magnification of discrimination
and a variety of other harms where we see companies hiding
behind 230 to avoid liability here. I think, though, if we
prevent and discourage companies from taking down harmful
material, we could really be in a very bad place, in terms of
much more pornography, hate speech, swastikas, et cetera, just
throughout the internet.
Mr. Pallone. So the First Amendment is a key part of what
makes America exceptional and distinguishes us from so many
other countries around the world, especially our foreign
adversaries.
And we heard a lot today about when it is appropriate for
the Government to interact with tech companies about the
content broadcasted on their platforms, and that is an
important discussion to have. But I assume the witnesses today
were just as outraged when former President Trump called on the
FCC numerous times to review and revoke broadcast licenses and
asked Big Tech platforms to remove content.
So again, Professor Overton, isn't it true that the
Government has an interest in stopping misinformation and
disinformation on these platforms, including dangerous content
that leads to real-world harm, especially to our young
people?
Mr. Overton. Yes, absolutely. To prevent Dylann Roof from
shooting up a South Carolina church? Absolutely. Yes, the FBI
and other officials should be able to contact social media.
Mr. Pallone. So in fact we saw that last week when
committee members flagged and condemned TikTok content that
appeared to threaten violence to our Members, but the
conversation today seems to suggest that platforms should be
forced to carry all speech, or at least all lawful speech, but
I don't think that's how the First Amendment or Section 230
works, frankly.
So my last question--you can take your time since there's 2
minutes--is if (c)(2) is amended or if the law is amended, would
platforms be compelled to carry all lawful content?
Mr. Overton. Well, number one, they wouldn't necessarily
have to, because they would have a First Amendment right. They
have a First Amendment right to take things down, right?
The problem is that it opens the door for kind of other
lawsuits. This was the original problem, in terms of this
case called Prodigy.
If (c)(2) is restrained, companies might just say, ``We are not
going to be in the business of content moderation,'' and we
could see more instructions on self-harm and how to commit
suicide, White supremacy radicalization, and real harms in
terms of anxiety, depression, and eating disorders among young
folks, and real discrimination.
Again, it sounds good to focus on, you know, you are not
going to have a Stanford medical debate in a content moderation
room. That is not going to necessarily happen. They are not
going to always get it right, right?
But if we require that, these platforms are just going to say,
``We are not going to moderate, and here's your smut. It is on
you, you can take this pornography, this obscenity, this
solicitation of your children.'' It will be open season.
Mr. Pallone. All right. Thank you very much. Thank you, Mr.
Chairman.
Mr. Latta. Thank you. The gentleman yields back. And the
Chair now recognizes the gentleman from Ohio's 12th District
for 5 minutes.
Mr. Balderson. Thank you, Mr. Chairman. And thank you for
having this hearing today. And thank you, gentlemen, all for
being here.
My constituents have real concerns about the power and
influence of Big Tech. They are worried their views will be
censored or that they will be banned for sharing their beliefs.
We now know that they are right to be worried. It has been
reported that during the pandemic Facebook was in contact with
the CDC, asking them to vet claims related to the virus.
In addition to that, the Twitter Files revealed that
Twitter was taking requests from the FBI, DHS, and HHS to
remove content from its platform.
My first question is for Mr. Shellenberger. Mr.
Shellenberger, what type of Government interaction with social
media platforms did you learn about through the release of the
Twitter Files?
Mr. Shellenberger. There was extensive Government pressure
on Twitter to censor content and also censor users. It was
direct, it was specific, it was shocking, actually, to
discover that it was by many different Government agencies,
including the FBI.
Mr. Balderson. Are you aware of other social media
platforms engaging in censorship of nonillegal content on their
sites at the direction of Government agencies?
Mr. Shellenberger. Yes, absolutely. There was both the
Election Integrity Project and something called the Virality
Project in 2021, which was funded by the Federal Government and
which actually organized most of the social media platforms to
censor content, including accurate content that they
felt was contributing to narratives that they disfavored.
Mr. Balderson. All right. Thank you.
My next question is for Dr. Bhattacharya, and our cheat
sheet has left us. You took it. Thank you for being here.
In your testimony you note, ``If we learn anything from the
pandemic, it should be that the First Amendment is more important
during a pandemic, not less.'' I couldn't agree with you more.
Could you expand on some of the scientific theories you
promoted that were censored at the request of Government
officials?
Dr. Bhattacharya. Sure. So the Office of the Surgeon
General asked for, in 2021, a list of misinformation online
that people had found. So I sent a letter in with a list of
nine things that the Government got wrong during the pandemic
as a source of misinformation itself.
So overcounting COVID-19 cases; the distinction between
dying from COVID and with COVID is really important, and yet
the Government is systemically unaware of that; questioning
immunity after COVID recovery. That would have been very, very
important, especially when we were making decisions about who
should get the vaccine and what, you know, the benefits
and the harms are for people getting the vaccine.
That was a real--and the questions about vaccine mandates.
Whether the COVID vaccines prevent transmission; whether school
closures were effective and costless; whether everyone is
equally at risk of hospitalization and death from COVID-19;
whether there was any reasonable policy alternative to
lockdowns; whether mask mandates were effective in reducing the
spread of the virus; whether mass testing of asymptomatic
individuals and contact tracing of positive cases were effective
in reducing disease spread; whether the eradication or
suppression of COVID-19 is a feasible goal.
In each of these areas the Government was the primary
source of misinformation.
Mr. Balderson. Thank you. And has time shown these theories
and ideas you promoted to be misinformation? I mean, you just
said that.
Dr. Bhattacharya. Yes, I mean, science evolves that way:
Things we don't know now that are subject to debate later
become clear. If you suppress the debate, it takes longer for
the truth to emerge.
And that's why it is so important for the First Amendment
to play a role in scientific debate, especially in times of
crisis.
Mr. Balderson. All right. Thank you very much.
Change of direction a little bit. Mr. Dillon, you mentioned
in your testimony that once your jokes started to get flagged
and fact-checked, it resulted in a drastic reduction in your
reach. You have said that earlier also.
I am curious about the impact Big Tech can have on the
reach of accounts posting content it may not agree with. Can
you elaborate on how drastic the reduction in your reach was
and actions that you took to restore your account?
Mr. Dillon. Yes, so we are apparently subject to something
that is called a news quality score rating on Facebook, for
example, where when you get fact-checked a certain number of
times--well, we also have issues where we have been flagged for
incitement to violence with silly jokes--you end up getting
dinged repeatedly and getting flags on your account, which can
affect your reach.
With a low news quality score, you are deprioritized in the
feed. And so we found that we used to generate 80-plus percent
of our traffic from Facebook. It is now below 20 percent.
So Facebook has gone from by far the most dominant traffic
source for us to one of the lowest traffic sources.
Mr. Balderson. All right. Thank you very much. Mr.
Chairman, I yield back.
Mr. Dillon. Could I just say one more thing really quick?
Mr. Balderson. Yes, you may. You have 12 seconds.
Mr. Dillon. Some of the points that haven't been made here
with misinformation are that people have a right to be wrong.
That is one thing that no one's really discussing here, is that
we all have the right to be wrong, and whatever happened to
reputation? Why can't we engage in debate about these things
and try to refute each other, rather than silencing each other?
This idea that the Government needs to step in and shut
people up and kick them off these platforms, or these platforms
need to kick people off for saying the wrong thing. Why not
just refute them? What happened to reputation?
Mr. Latta. Thank you. The gentleman's time has expired, and
the Chair now recognizes the gentleman from California's 23rd
District for 5 minutes.
Mr. Obernolte. Well, thank you very much, Mr. Chairman.
Dr. Bhattacharya, I have a question for you. First of all,
thank you very much for the Great Barrington Declaration. I
remember vividly the first time I read it. I had
had my own doubts about the Government's reaction, but I read
it and I thought, thank goodness other people agree with me.
So it was a very courageous thing to have done. Here is the
question. So we have been having this discussion about the
censorship that you endured as a result of that and
particularly the fact that, at the time, multiple Government
agencies played an active role in suppressing that point of
view.
So here is the question: If you had asked those agencies at the
time why they were pushing back, their response would have
been, well, there is a public health interest in doing this,
right?
It is like, you know, if you had people advocating for
jumping off a cliff and young people were actually jumping off
a cliff, you know, many Government agencies would say, ``Whoa,
you can't say that,'' because people are following the advice,
and it is bad advice.
And you know, the Supreme Court, when we are talking about
this First Amendment right that we have and the debate over
free speech, the Supreme Court has said, you know, with a
famous example, you can't yell fire in a crowded theater.
So can you talk about why Government agencies pushing back
on your declaration was not the equivalent of yelling ``Fire!''
in a crowded theater?
Dr. Bhattacharya. Thank you for that, Congressman. So a
couple of things. One is that the declaration itself
represented a century of pandemic management. We were just
restating how we managed pandemics in the past, respiratory
virus pandemics in the past, successfully. So it wasn't, in
that sense, fringe at all.
Second--and this is probably more to the heart of your
question--if someone in public health--if I, a
Stanford professor, stand up and say smoking is good for you,
I am violating an ethical norm to accurately reflect what the
scientific evidence actually says. I am harming the public by
doing that.
If I stand up and say something that is part of an active
scientific discussion--how best to manage a pandemic--that is
what I am supposed to do as a professor. That is my job as a
person in public health, and then to have that suppressed?
Well, that itself was unethical. It was an abuse of power
by the Federal Government and in particular by Tony Fauci and
Francis Collins, who have the ability to fund scientists, who
make the careers of scientists, to put their fingers on the
scale. And that is why, I think, what you said is not--doesn't
actually apply in this case.
Mr. Obernolte. Sure. You made the point in your testimony
that lack of scientific consensus should have been a red flag,
which I agree with. In fact, I was working on my own doctorate
at the time, and I looked at the evidence that was produced and
thought that the lack of scientific rigor was just astonishing.
You know, but by the same token, if that is the bar, you
know, if consensus is the bar, we are never going to get there,
because even if you yelled ``Fire!'' in a public theater, there
would be some scientist somewhere saying, well, you know,
actually, technically it is not a fire, it is a chemically
induced combustion reaction. You know what I mean?
So I think if we are going to criticize the Government's
reaction, which I think is totally justifiable, we also need to
come up with constructive solutions to how this--how to handle
this in the future because, you know, certainly we all agree
that there is a public health interest that Government agencies
are supposed to promote, but I would love to continue the
discussion.
Mr. Overton, thank you very much for your testimony. You
had responded to a couple questions already on Section 230 and
the way that you think it needs to be reformed, because I think
we are all in agreement that reform is necessary.
You talked a little bit about algorithms and the way that
they factor into whether or not content is being moderated. Can
you talk about how that would--how you think that should be
folded into modifications to Section 230?
Mr. Overton. Sure. There is an algorithmic carveout
proposal here that would basically say that information
distributed via algorithms would not enjoy the Section 230
immunity.
That could be very attractive. I think one problem is that
algorithms are used for content moderation generally, and we
don't want to prevent these algorithms from taking down
pornography, obscenity, hate speech, a variety of other things.
Mr. Obernolte. Sure. You know, I agree that algorithms need
to factor into this. I think the devil's in the details,
though. You had raised an example, in your response to
Congresswoman Matsui's question about how an advertiser's
ability to set parameters on the target audience of their
advertisement----
Mr. Overton. Right.
Mr. Obernolte [continuing]. How that was algorithmic. And I
would just----
Mr. Overton. No, I would agree with you. That is not
algorithmic, that's platform design. Something that is separate
would be algorithms and data collection, in terms of the
advertiser doesn't even know there is discrimination.
Mr. Obernolte. Yes.
Mr. Overton. So those are two different methods.
Mr. Obernolte. OK.
Mr. Overton. So thanks for clarifying that. Yep, I agree
with you.
Mr. Obernolte. Well, I mean, I think we are in agreement
that----
Mr. Overton. Yep.
Mr. Obernolte [continuing]. An algorithm that looks at
content is the kind of algorithm that is actually monitoring.
An algorithm that doesn't look at content, you know, is one
that I think could be allowable.
Mr. Overton. But I do think there are issues in terms of
platform design, in terms of hey, you know, these design
features are being used to discriminate on your platform.
Mr. Obernolte. Sure. Well, it is a complex issue, and I am
glad we are having the discussion. I see my time's expired,
although I got a million more questions, but thanks to
everyone for being here and taking part. I yield back.
Mr. Overton. Thank you.
Ms. Cammack [presiding]. All right. At this time, the Chair
recognizes the gentleman from Idaho, Mr. Fulcher.
Mr. Fulcher. Thank you, Madame Chair.
And to the panelists, thank you for being here, and I want
to address my first question to Mr. Shellenberger, because you
talked about transparency, and that is a topic that I
personally have been interested in, and I think it is part of
this solution as well.
So I want to tee up my question this way: First of all, if
a platform is directed to modify or censor by an outside
entity, whether it be a Government entity or whatever, or if a
platform decides to do the same itself, any ideas, any
thoughts on how to properly enforce that?
Once we get the rule put in place, what is an efficient and
realistic enforcement mechanism?
Mr. Shellenberger. That is a really good question. I mean,
I am trying to propose a thin--I am trying--I would love to see
something done. And so I was deliberately not trying to get
into whether you needed to have that housed in an existing
agency or a new agency, or just allow citizen enforcement.
But certainly, I think the idea of government having to
report right away any content moderation communications, and of
social media platforms also having to immediately report them,
means that any whistleblower, either in the Government or at
the social media platforms, who discovered nonreporting or
nondisclosure would be in a position to leak that information.
I think it would reduce the need for some onerous new
enforcement body.
Mr. Fulcher. And I concur with that. That makes sense. My
thought process was actually to consider taking it one step
further, whereby there needs to be some kind of disclosure
anytime someone modifies--either magnifies or restricts--a
post.
And so that probably gets into the algorithm content and,
first of all, your thoughts on that? Some kind of notice: ``We
have opted to magnify this response,'' or ``We have opted to
restrict this response''? Your thoughts on the practicality and
reasonableness of that?
Mr. Shellenberger. Yes. I mean, obviously, I mean, 99
percent of this stuff is occurring through AI at this point. It
is all mechanized with algorithms, and so that just needs to be
disclosed.
So, you know, I think Mr. Overton raised this issue of you
have a lot of that content moderation occurring, so you would
have to do some amount of it en masse, you know, to describe,
say, YouTube taking down all discussions of COVID
vaccine side effects. You would need to make that public----
Mr. Fulcher. Right.
Mr. Shellenberger [continuing]. And disclosed right away.
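[A minimal sketch of the kind of disclosure being described: a
public record of each moderation decision--what was restricted or
magnified, at what scope, and why--without revealing any
underlying code. The record format, field names, and
``disclose'' function are assumptions made for illustration, not
any existing reporting system.]

    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class ModerationDisclosure:
        # One public record per decision: the decision, not the code.
        platform: str             # e.g., "YouTube"
        action: str               # "restrict", "magnify", "remove"
        scope: str                # one post, or a policy applied en masse
        reason: str               # the stated rationale
        government_request: bool  # prompted by a Government contact?
        timestamp: str

    def disclose(record: ModerationDisclosure) -> str:
        # Serializes the record for immediate publication to a
        # public log.
        return json.dumps(asdict(record))

    # An en masse disclosure of the kind discussed above (values
    # invented for illustration):
    print(disclose(ModerationDisclosure(
        platform="YouTube",
        action="remove",
        scope="all posts discussing COVID vaccine side effects",
        reason="medical misinformation policy",
        government_request=False,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )))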
Mr. Fulcher. Do you see any conflict there? In raising this
earlier in different settings, I have heard the comment that it
is not reasonable because it could create an intellectual
property problem: ``It is our algorithm, we can't disclose
that.''
Do you see that as a viable argument not to do it?
Mr. Shellenberger. It may be, but of course Big Tech is
naturally going to oppose all regulation, just instinctively. And
so I am skeptical of it, because you are not asking them to
reveal the code; you are just asking them to reveal the decision:
``We are restricting discussion of COVID vaccine side effects.''
You don't need to say what the actual code is or release the code
behind it.
Mr. Fulcher. Right.
Mr. Shellenberger. You just need to say what the decision
is.
Mr. Fulcher. Right. Thank you for that.
I am going to go quickly to Mr. Bhattacharya--or Dr.
Bhattacharya--and I am going to ask the same question very
quickly of Mr. Dillon: With the censorship you have experienced,
have you seen that censorship in the form of your messages being
either magnified or restricted, or have they simply been cut
off?
Dr. Bhattacharya. It is the former. A restriction on the
visibility of my messages is the form the censorship has taken.
And for some of my colleagues, inappropriate labels of misleading
content, even though they were posting true scientific
information. Those are the two major ones.
Mr. Fulcher. And do I assume correctly that the reason, in
your view, is that they simply disagree with your content?
Dr. Bhattacharya. Yes.
Mr. Fulcher. OK. Mr. Dillon, same question: Have you
experienced magnification or restriction, or has it simply been
cut off?
Mr. Dillon. A combination of the two. We have seen
throttling of our reach and also takedowns of our posts. As
well, with the Twitter situation, you know, obviously our
account was locked until we deleted a Tweet or a billionaire
bought the platform.
Mr. Fulcher. And once again, it is for the same reason,
just disagreement with your content?
Mr. Dillon. Yes. Yes, the content itself.
Dr. Bhattacharya. Can I amend my answer real fast?
Mr. Fulcher. Yes.
Dr. Bhattacharya. There was a lot of pressure by the
Federal Government on these platforms to make them disagree
with my content.
Mr. Fulcher. Which goes back to my first question to Mr.
Shellenberger.
Madame Chair, I yield back--or Mr. Chair, I yield back.
Mr. Latta [presiding]. I got in here without you seeing me.
Mr. Fulcher. You changed on me.
Mr. Latta. Well, thank you. The Chair now--the gentleman's
time has expired, and the Chair now recognizes the gentlelady
from Florida's 3rd District for 5 minutes.
Ms. Cammack. Thank you. I know we have been playing musical
chairs, so please forgive us, but thank you all for appearing
before the committee today. I know we have kind of circled
around this in a number of different ways.
Mr. Shellenberger, it is good to see you again. I feel like
we are coming full circle. I saw you in the Weaponization
committee and, Doc, you have been a frequent topic in a lot of
the testimony and line of questioning that I have had.
In fact, just over a month ago in our full committee
hearing, I produced some emails between Dr. Francis Collins and
Dr. Anthony Fauci referencing the Great Barrington Declaration,
and there it is, right there, saying that there needed to be,
quote, a ``quick and devastating takedown of scientific opinions
that differed from that of the CDC.''
And I think that kind of put you on a wild path to where we
are here today. Dr. Fauci at one point said that ``attacks on me
are attacks on science,'' and you have alluded a couple of times
in this hearing today to the fact that a fundamental component of
scientific inquiry is being critical of your colleagues' research
and findings. Is that correct?
Dr. Bhattacharya. Absolutely. Yes. I mean, it is scientific
debate--science does not advance without debate.
Ms. Cammack. So I am assuming--and I am guessing that this
is a ``yes''--but do you find Dr. Fauci's statement to be
hypocritical, given that he rejected criticism of scientific
research he believed in during the COVID-19 pandemic?
Dr. Bhattacharya. I mean, I think he is entitled to his
scientific opinion, and I respect his scientific authority, but
that only goes so far. You still have to discuss what the facts
actually say.
Ms. Cammack. Thank you. Mr. Shellenberger, in your
testimony you submitted earlier this month, you stated that,
quote, ``Government-funded censors frequently invoke the
prevention of real-world harm to justify their demands for
censorship, but the censors define harm far more extensively
than the Supreme Court does.''
Can you expand on what you mean by ``Government-funded
censors'' and harm being redefined expansively?
Mr. Shellenberger. Yes, absolutely. I mean, this invocation
of harm for speech has just gone really too far. The courts have
very narrowly limited harm to basically the immediate incitement
of violence, or to cases of things like fraud.
But this idea that, you know, speech would somehow indirectly
lead to COVID spreading? There is just no way that would ever be
considered incitement of immediate violence. And I see it used
all the time.
I find it somewhat disturbing, because it basically puts you
in the position of saying we have to censor this accurate
information because people might get the wrong idea and they
might do something that causes harm.
You see how many links in that chain are being constructed
there. So I think it involves a lot of predictions, a lot of
assumptions, and a lot of paternalism, frankly, of a kind that we
did not engage in when this country was founded.
It was not, ``We have to protect you from these ideas.'' The
idea was, we need to give people the freedom to express their
ideas, and we are not going to treat everybody like children--
other than actual children.
Ms. Cammack. Well, and you hit on something, Mr.
Shellenberger, talking about our Founding Fathers. One of my
favorite quotes is from James Madison: ``Our First Amendment
freedoms give us the right to think what we like and say what we
please, and if we the people are to govern ourselves, we must
have these rights, even if they are misused by a minority.''
So I think the topic of discussion today couldn't be more
important, certainly.
Mr. Dillon, love the site, checked it out, get a lot of good
laughs out of it. Kind of sad that over 100 of your fake news
stories have actually proven to be real. You hit on something a
little bit ago about Facebook having a news quality score. Is
that correct?
Mr. Dillon. Yes.
Ms. Cammack. How can you have a news quality score if you
are a satire site?
Mr. Dillon. Well, Facebook has defined news as anything that
shows up in the news feed. So anything that is in your feed is
news, everyone is publishing news, and everyone can have a news
quality score under that system.
Ms. Cammack. That is an interesting way to define it--anyone
posting anything. So if I post a picture of my vacation, that is
somehow news?
Mr. Dillon. Could be.
Ms. Cammack. There is a news quality score----
Mr. Dillon. Broadly construed, it could fall under that
category, yes.
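[A toy rendering, in Python, of the definition Mr. Dillon
describes, under which anything appearing in the feed counts as
``news'' and therefore carries a news quality score. The
functions, fields, and default score are hypothetical
illustrations, not Facebook's actual system.]

    def is_news(feed_item: dict) -> bool:
        # Under the definition described above, appearing in the
        # feed is sufficient to count as news.
        return feed_item.get("in_feed", False)

    def news_quality_score(feed_item: dict) -> float | None:
        # Every feed item--a satire article, a vacation photo--
        # gets a score.
        if not is_news(feed_item):
            return None
        return feed_item.get("publisher_rating", 0.5)  # invented default

    vacation_photo = {"in_feed": True, "publisher_rating": 0.5}
    print(is_news(vacation_photo))             # True: "news" by definition
    print(news_quality_score(vacation_photo))  # 0.5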
Ms. Cammack. Interesting. What do you think the difference
is between fake news, satire, misinformation, and
disinformation?
Mr. Dillon. Well, I mean, ultimately that really comes down
to intent. You know, if somebody believes that a Babylon Bee
article is true, there are a couple of reasons for that.
Potentially they are very gullible, or it is just believable
because the world is so insane. I can't really fault them for
that, but it wasn't our intent to mislead them.
And that is the key distinction: satire has satirical
intent. It is criticizing something. It is trying to evoke
laughter, or provoke thought, or criticize something in the
culture that deserves it. And so it is offering commentary; it is
not trying to breed confusion.
But there are misinformation sites, and obviously fake news
sites, that just publish a false headline like ``Denzel
Washington dies in car accident,'' which is not satire, it is
just a false headline. People are spreading things like that all
over the place, which is very, very different from satire.
Ms. Cammack. Thank you. My time has expired, but I
appreciate you all being here today. Thank you.
Mr. Latta. Thank you very much. And the gentlelady's time
has expired. The Chair now recognizes the gentleman from Ohio's
6th District for 5 minutes.
Mr. Johnson. Well, thank you, Mr. Chairman, for allowing me
to waive on for this very important hearing. And thank you to
our witnesses who are here today. It is important that we hear
from you about your experiences with Big Tech, specific
examples of how these companies are using their power to
silence free speech on their platforms.
I firmly believe that with great power comes great
responsibility, and nowhere does that apply more than with
these social media platforms. Big Tech has the responsibility
to uphold free speech, to return to being a forum for the free
and open exchange of ideas. That is what our country was
founded on. And I look forward to working with my colleagues on
this committee to implement much-needed reforms to Section 230
to get us back to that.
When Big Tech goes beyond serving as a platform to host
third-party ideas and instead abuses its role as a content
moderator--using algorithms to pick and choose what people see or
silencing opinions that run counter to its agenda--it should not
be granted the protections afforded by Section 230 and instead
should be held accountable for its actions.
I am an IT professional. Both of my degrees are in
information technology, and even after doing this now for over 12
years, I am still amazed at how many Americans buy into this
notion that, well, it is just the algorithm, it is the algorithm.
Algorithms are written by human beings. Computers, networks,
and platforms do what human beings tell them to do, and it is the
people writing those algorithms who are putting this stuff in
there.
Doctor--I am going to butcher your name, I am sorry. Dr.
Bhattacharya, is that good? You shared in your testimony how
you have been censored on social media because your opinions on
COVID-19 contradicted the Government's response to the pandemic
at that time.
Can you expand on how that censorship harmed the scientific
community and the general public?
Dr. Bhattacharya. The primary way is that, by putting a pall
over true scientific facts that would have come out had a true
scientific debate been allowed to happen, many, many people in
the scientific community censored themselves, because they were
afraid of being labeled as spreading misinformation, even though
they knew, for instance, that the harms of school closures were
tremendous.
Mr. Johnson. Right.
Dr. Bhattacharya. Many, many people censored themselves over
the idea of immunity after COVID recovery, censored themselves
about the inability of the vaccine to stop disease spread. All of
these ideas led to harmful policies that harmed actual people,
right?
People lost their jobs because of vaccine mandates and
vaccine passports. People were excluded from taking part in basic
civic life because of ideas that would have been overturned had
there really been an open debate about them.
Mr. Johnson. Gotcha. Well, thank you.
Mr. Dillon, your publication, the Babylon Bee, is based on
satire. You shared how you have been censored on social media
and your posts have been removed or flagged as misinformation.
In your view, in your opinion, how should social media
platforms handle posts that are intended to be humorous and not
published with the intent to spread misinformation? And a
follow-on to that--you can answer both at the same time: Given
the First Amendment, should they be flagging misinformation in
the first place?
Mr. Dillon. Well, flagging misinformation, I think, is
vastly different from taking it down or silencing the person who
uttered the misinformation--the so-called misinformation. I think
there is a big distinction to be made there.
I don't necessarily have much of a problem with a platform,
for example, exercising its own speech rights. Twitter, for
example, can tack a message onto whatever Tweet it wants. In
fact, they are doing it now with these community notes, where the
community will give a statement adding context to, or refuting, a
Tweet that was misleading.
That is more speech as an answer to speech that they think is
wrong, which is the proper solution to misinformation--not taking
it down or silencing the person who spoke it.
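[A minimal sketch of the ``more speech'' approach just
described: a community note is attached alongside a post rather
than the post being removed. The types and function below are
hypothetical illustrations, not Twitter's actual Community Notes
implementation.]

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        author: str
        text: str
        notes: list[str] = field(default_factory=list)  # added context

    def add_community_note(post: Post, note: str) -> None:
        # Answers speech with more speech: the original text stays
        # intact, and readers see the community's added context
        # beneath it.
        post.notes.append(note)

    post = Post(author="someone", text="A misleading claim.")
    add_community_note(post, "Readers added context: sources dispute this.")
    print(post.text)   # still visible, not taken down
    print(post.notes)  # the community's answer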
As far as satire goes, I would prefer that it not be labeled
at all, because that ruins the joke. Satire operates by kind of
drawing you in, making you think that this is a real story, and
then you get to the punch line and you realize it is a joke. That
is destroyed if you put a big label on it that says ``what you
are about to read is satire'' or ``what you just finished reading
was satire.'' You put disclaimers all over it, you ruin it.
Mr. Johnson. Yes. I remember when I was a kid--and you guys
probably do too--the first social media platform was that circle
you would get in at school, where somebody would whisper
something into their neighbor's ear and it would go around the
circle----
Mr. Dillon. Telephone. Yes.
Mr. Johnson [continuing]. And end up--and the last person
would say what they actually heard, right?
Mr. Dillon. Right.
Mr. Johnson. It was humorous.
Mr. Dillon. Right.
Mr. Johnson. We have gotten so far off the mark.
Mr. Dillon. Yes.
Mr. Johnson. Mr. Chairman, thanks again for letting me waive
on, and I yield back. Thank you, folks----
Mr. Latta. Well, thank you very much. The gentleman yields
back, and seeing that there are no further Members to ask
questions, that is going to conclude our Members' questioning
of our witnesses.
I ask unanimous consent to insert in the record the
documents included on the staff hearing documents list. And
without objection, so ordered.
I also want to thank our witnesses again for being with us
today. And also, sorry about what was happening here today: we
actually have three subcommittees running today, so we had one
downstairs while another one is starting right now, and we have
had Members going back and forth. I appreciate your indulgence on
that.
I remind Members that they have 10 business days to submit
questions for the record, and I ask the witnesses to respond to
the questions promptly. And Members should submit their questions
by the close of business--that is, within three business days, I
believe.
So without objection, the subcommittee is adjourned. Thank
you very much.
[Whereupon, at 1:14 p.m., the subcommittee was adjourned.]
[Material submitted for inclusion in the record follows:]
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
[all]