[Senate Hearing 118-33]
[From the U.S. Government Publishing Office]
S. Hrg. 118-33
PLATFORM ACCOUNTABILITY:
GONZALEZ AND REFORM
=======================================================================
HEARING
before the
SUBCOMMITTEE ON PRIVACY,
TECHNOLOGY, AND THE LAW
of the
COMMITTEE ON THE JUDICIARY
UNITED STATES SENATE
ONE HUNDRED EIGHTEENTH CONGRESS
FIRST SESSION
__________
MARCH 8, 2023
__________
Serial No. J-118-7
__________
Printed for the use of the Committee on the Judiciary
U.S. GOVERNMENT PUBLISHING OFFICE
52-654 PDF WASHINGTON : 2023
COMMITTEE ON THE JUDICIARY
RICHARD J. DURBIN, Illinois, Chair
DIANNE FEINSTEIN, California
SHELDON WHITEHOUSE, Rhode Island
AMY KLOBUCHAR, Minnesota
CHRISTOPHER A. COONS, Delaware
RICHARD BLUMENTHAL, Connecticut
MAZIE K. HIRONO, Hawaii
CORY A. BOOKER, New Jersey
ALEX PADILLA, California
JON OSSOFF, Georgia
PETER WELCH, Vermont
LINDSEY O. GRAHAM, South Carolina, Ranking Member
CHARLES E. GRASSLEY, Iowa
JOHN CORNYN, Texas
MICHAEL S. LEE, Utah
TED CRUZ, Texas
JOSH HAWLEY, Missouri
TOM COTTON, Arkansas
JOHN KENNEDY, Louisiana
THOM TILLIS, North Carolina
MARSHA BLACKBURN, Tennessee
Joseph Zogby, Chief Counsel and Staff Director
Katherine Nikas, Republican Chief Counsel and Staff Director
Subcommittee on Privacy, Technology, and the Law
RICHARD BLUMENTHAL, Connecticut, Chair
AMY KLOBUCHAR, Minnesota
CHRISTOPHER A. COONS, Delaware
MAZIE K. HIRONO, Hawaii
ALEX PADILLA, California
JON OSSOFF, Georgia
JOSH HAWLEY, Missouri, Ranking Member
JOHN KENNEDY, Louisiana
MARSHA BLACKBURN, Tennessee
MICHAEL S. LEE, Utah
JOHN CORNYN, Texas
David Stoopler, Democratic Chief Counsel
Michael Velchik, Republican Chief Counsel
C O N T E N T S
----------
MARCH 8, 2023, 2:05 P.M.
STATEMENTS OF COMMITTEE MEMBERS
Page
Blumenthal, Hon. Richard, a U.S. Senator from the State of
Connecticut.................................................... 1
Durbin, Hon. Richard J., a U.S. Senator from the State of
Illinois....................................................... 1
Hawley, Hon. Josh, a U.S. Senator from the State of Missouri..... 4
WITNESSES
Witness List..................................................... 35
Bennett, Jennifer, principal, Gupta Wessler PLLC, San Francisco,
California..................................................... 9
prepared statement........................................... 36
Farid, Hany, professor, School of Information and Electrical
Engineering and Computer Science, University of California,
Berkeley, Berkeley, California................................. 8
prepared statement........................................... 42
Franks, Mary Anne, professor of law and the Michael R. Klein
Distinguished Scholar Chair, University of Miami School of Law,
Miami, Florida................................................. 6
prepared statement........................................... 46
Schnapper, Eric, professor of law, University of Washington
School of Law, Seattle, Washington............................. 12
prepared statement........................................... 52
Sullivan, Andrew, president and chief executive officer, Internet
Society, Reston, Virginia...................................... 11
prepared statement........................................... 62
QUESTIONS
Questions submitted to Hany Farid by Chair Durbin................ 71
Questions submitted to Andrew Sullivan by Senator Padilla........ 72
ANSWERS
Responses of Hany Farid to questions submitted by Chair Durbin... 74
Responses of Andrew Sullivan to questions submitted by Senator
Padilla........................................................ 77
MISCELLANEOUS SUBMISSIONS FOR THE RECORD
Submitted by Chair Blumenthal:
Arora, Saanvi, and Ani Chaglasian, letter, February 10, 2023. 86
`` `Carol's Journey': What Facebook knew about how it
radicalized users,'' NBC News, October 26, 2021............ 88
``Deconstructing the terrorism discourse on social media,''
European Commission, April 17, 2019........................ 98
Email correspondence from Nitsana Darshan-Leitner to Eric
Schnapper et al., subject: ``Hearing,'' March 5, 2023...... 99
``Facebook recommended QAnon groups to a new user within 2
days of joining the platform, according to a new
whistleblower report,'' Insider, October 25, 2021.......... 101
``How ISIS Uses Social Media for Recruitment,'' Canadian
Forces College, 2020....................................... 106
``ISIS's Use of Social Media Still Poses a Threat to
Stability in the Middle East and Africa,'' The Rand Blog,
December 11, 2018.......................................... 132
``The Islamic State's Use of Online Social Media,'' Military
Cyber Affairs, January 2016................................ 133
``Media Warfare and the Discourse of Islamic Revival: The
Case of the Islamic State (IS),'' European Commission,
January 31, 2019........................................... 143
``A new group of TikTok-savvy Palestinian fighters tests
Israeli forces in the West Bank,'' NPR, October 26, 2022... 158
``Taking a TikTok journey straight to the Lions' Den,''
CTech, October 31, 2022.................................... 164
``The Use of Social Media by United States Extremists,''
National Consortium for the Study of Terrorism and
Responses to Terrorism, research brief..................... 167
``Use of social networks among the Palestinian public - Data
and insights,'' Information Center for Intelligence and
Terrorism named after General Meir Amit at the Intelligence
Heritage Center--MLM....................................... 177
``What is to blame for the involvement of Palestinian kids in
terror attacks?'', Jerusalem Post, February 16, 2023....... 184
Submitted by Senator Padilla:
Access Now, et al., letter, March 8, 2023.................... 188
PLATFORM ACCOUNTABILITY:
GONZALEZ AND REFORM
----------
WEDNESDAY, MARCH 8, 2023
United States Senate,
Subcommittee on Privacy, Technology,
and the Law,
Committee on the Judiciary,
Washington, DC.
The Subcommittee met, pursuant to notice at 2:05 p.m., in
Room 226, Dirksen Senate Office Building, Hon. Richard
Blumenthal, Chair of the Subcommittee, presiding.
Present: Senators Blumenthal [presiding], Klobuchar,
Hirono, Padilla, Hawley, and Blackburn.
Also present: Chair Durbin and Senator Whitehouse.
OPENING STATEMENT OF HON. RICHARD BLUMENTHAL,
A U.S. SENATOR FROM THE STATE OF CONNECTICUT
Chair Blumenthal. The Senate Subcommittee on Privacy,
Technology, and the Law is convened. We are a Subcommittee of the
Judiciary Committee and the Chairman of our Committee is with
us today. I want to thank all of our panel for being here, all
of the members of the audience who are attending, and my
Ranking Member, colleague, partner in this effort, Senator
Hawley. I'm going to turn first to the Chairman because he has
an obligation on the floor for some opening remarks. We're very
pleased that he's with us today.
OPENING STATEMENT OF HON. RICHARD J. DURBIN,
A U.S. SENATOR FROM THE STATE OF ILLINOIS
Chair Durbin. Senator Blumenthal and Senator Hawley, thank
you for holding this important meeting. We had a rather
historic meeting of the Senate Judiciary Committee just a few
weeks ago. I think everybody agreed on the subject matter of the
hearing. I don't know when that's ever happened, at least
recently. And it was encouraging.
The hearing considered the subject of protecting kids
online. One of the witnesses we heard from was Kristin Bride, a
mother whose son died by suicide after he was mercilessly
bullied on an anonymous messaging app. There were several other
mothers in attendance carrying color photos of their kids who
suffered similar heartbreak.
In addition to tragically losing children, these mothers
had something else in common. They couldn't hold the online
platform that played a role in their child's death accountable.
The reason, Section 230, well known to everyone who's taken a
look at this industry.
Coincidentally, after that hearing, I had a meeting with
the administrator of the Drug Enforcement Administration, Anne
Milgram. She described for me how illegal and counterfeit drugs
are sold over the internet to kids, often with devastating
results. When I asked her what online platforms were doing to
stop it, she said very little, and that they were refusing to
cooperate with her agency even to investigate.
I asked her, ``How do they deliver these drugs? By mail?''
Oh, no. By valet service. They bring boxes of these counterfeit
drugs, deadly drugs, leave them on the front porch of the homes
of these kids. Imagine this, we're talking about a medium that
is facilitating that to occur in America. These platforms know
these drug transactions are happening. What are they doing
about them? Almost nothing. Why? Section 230.
In our hearing last month, there seemed to be a consensus
emerging, Democrats and Republicans, that we've got to do
something to make Section 230 make sense. Something needs to
change so online platforms have an incentive to protect
children, and if they don't, they should be held liable in
civil actions.
I look forward to hearing from the witnesses today. I'm
sorry I can't stay because I have a major matter on the floor to
consider in a few minutes, but I will review your testimony,
and thank you for your input. Thank you, Mr. Chairman, Ranking
Member.
Chair Blumenthal. Thanks very much, Senator Durbin. I think
it is a mark of the importance and the imminence of reform that
Senator Durbin is here today. His leadership led to the hearing
that we had just a couple weeks ago, showing the harms, really
desperate, despicable harms that can result from some of the
content on the internet and the need to hold accountable the
people who put it there. And that's very simply why we are here
today.
I want to thank Senator Durbin for his leadership. Also,
Senator Coons who preceded me as head of this Subcommittee.
There are certainly challenging issues before us on this
Subcommittee from reining in Big Tech to protecting our civil
rights in an era of artificial intelligence.
And I am enormously encouraged and energized by the fact
that we have bipartisan consensus on this first hearing. Not
always the case in the Judiciary Committee, not always the case
in the United States Senate, but I'm really appreciative of
Senator Hawley's role, especially his amicus brief to the
United States Supreme Court in Gonzalez.
The comments by the Solicitor General in that case, some of
the comments by the Justices, we have no ruling yet, but I
think what we are seeing is, as Senator Durbin said, an
emerging consensus that something has to be done.
So here's a message to Big Tech: reform is coming. I can't
predict whether it'll be in the next couple of weeks or the next couple of
months, but if you listen, you will hear a mounting consensus
and a demand from the American public that we need to act in a
bipartisan way.
Section 230 dates from a time when the internet was a
young, nascent startup kind of venture that needed protection
if it tried to weed out the bad stuff. And now it's used to
defend keeping the bad stuff there. This so-called shield has
long been outdated as we enter an era of algorithms and
artificial intelligence, which were unknown and perhaps
unimaginable on the scale that they now operate when Section
230 was adopted.
And the caselaw--and I've read it, the Gonzalez Court
addressed it--simply doesn't provide the kind of remedy that we
need quickly enough and thoroughly enough. I think that the
time when the internet could be regarded as a kind of neutral
or a passive conduit has long since passed. Obviously, we need
to look at platform design, the business operations, the
personalization of algorithms, recommendations that drive
content.
And we've seen it particularly with children. Toxic content
driven by algorithms in a very addictive way toward children
with this overwhelming protection that is accorded by Section
230 to the tech platforms that are responsible and need to be
held accountable.
Section 230 actually was designed to promote a safer
internet. Plainly, it's doing the opposite right now. And what
we have heard graphically as Senator Durbin described it again
and again and again at hearings in the Commerce Committee, the
Subcommittee on Consumer Protection, which I chaired, hearing
from the whistleblower, Frances Haugen, documents that we've
seen from Facebook and the victims and survivors, Mrs. Bride,
who lost her son, Carson.
Two young women, Saanvi Arora and Ani Chaglasian, wrote to
me; they started a petition that received 30,000 signatures
from Americans across the Nation after they were victimized,
pictures of their sexual abuse repeatedly transmitted on
anonymous platforms. And I'm
going to put their letter to me in the record without
objection.
[The information appears as a submission for the record.]
Chair Blumenthal. But the point is, we've seen the harms.
We need to take action to address those harms. And we've also
seen other harms: Section 230 has shielded platforms like
Craigslist when they hosted housing ads that openly proclaimed
``no minorities.'' Section 230 has immunized Facebook when its own
advertising tools empowered and encouraged landlords to exclude
racial minorities and people with disabilities. For any other
company, these would be violations of the Fair Housing Act, but
Section 230 shut the door on accountability for them and in so
many other instances.
The case history on Section 230 is clear. When Big Tech
firms invoke it, those being denied justice are often women,
people of color, members of the LGBTQ community or children,
and the victims and survivors of sexual abuse. So this hearing
is very simply part of a broader effort to reform Section 230.
We've seen some of the models and the frameworks that are
possible for reform. I'm not taking sides right now, but by the
end of these hearings, I hope to do so, and this enterprise is
not new for me. Fifteen years ago when I was Attorney General
dealing with Myspace and Craigslist and many of the same issues
that we're confronting today, I said to my staff, ``We should
repeal Section 230.'' And they came down on me like a house of
bricks and said, ``Whoa, you can't repeal Section 230. That's
the Bible of the internet.''
Well, it's not the Bible of the internet. It's not the Ten
Commandments that have been handed down. It is a construct that
is now outdated and outmoded and needs reform. And I'm really
so thankful to have the leadership of Senator Hawley, who is
also a longstanding champion of survivors and victims of sexual
abuse and other harms. And to his great credit, a former State
attorney general. Senator Hawley.
OPENING STATEMENT OF HON. JOSH HAWLEY,
A U.S. SENATOR FROM THE STATE OF MISSOURI
Senator Hawley. Thank you very much, Senator Blumenthal.
Thank you, Mr. Chairman, for being here as well. Thanks to all
the witnesses for making, in some cases, the long trek here. I
just want to add a few remarks. I am delighted that the first
meeting of this Subcommittee is focusing on what is, I think,
maybe the critical issue in this space, and that is Section
230.
And I want to amplify something that Senator Blumenthal
just said, which is that Section 230, as we know it today, is
not only outmoded, it's not only outdated, it's really
completely unrecognizable from what Congress wrote in the
1990s. I mean, let's be honest, the Supreme Court heard
arguments to this effect just a few weeks ago, but Section
230, as we know it today, has been almost completely rewritten
by courts and other advocates, usually at the behest of Big
Tech, the biggest, most powerful corporations, not just now,
but in the history of this country.
They have systematically rewritten Section 230. And listen,
I hope that the United States Supreme Court will do something
about it because frankly, they share some of the blame for
this. And I hope in the Gonzalez case, they'll begin to remedy
that. But whatever the case may be there, it is incumbent upon
Congress to act. We wrote Section 230 originally. We should fix
it now. And I welcome these hearings to collect evidence, to
hear from experts such as those who are before us today about
the paths forward.
From my own view, I think that some of the common ground
that Senator Blumenthal mentioned and that the Chairman
mentioned that we've heard in our hearings recently really
boils down to this: It really is time to give victims their day
in court. What could be more American than that? Every American
should have the right, when they have been injured, to get into
court, to present their case, to be heard, and to try to be
made whole.
Section 230 has prevented that for too many years. And I
would hope that if we could agree on nothing else, we could
agree on that basic, fundamental, dare I say, fundamentally
American approach. And I hope that that's something that we'll
be able to explore together.
Now, I just note that progress on reforming Section 230 has
been very slow. As a Republican, I would love to blame that on
my Democrat colleagues, but the sad fact of the matter is
Republicans are just as much to blame, if not more. And my own
side of the aisle when it comes to vindicating the rights of
citizens to get into court, to have their day in court, has
often been very, very slow to endorse that approach and very,
very wary.
But I think that the time has come to say that we must give
individuals, we must give parents, we must give kids and
victims that most basic right. And I hope that this
Subcommittee and the Committee as a whole, the Judiciary
Committee as a whole, will prove in this Congress that real
bipartisan action with real teeth is possible. And we will see
real reform for America's families and children. Thank you, Mr.
Chairman.
Chair Blumenthal. Thanks, Senator Hawley. I'm going to
introduce the panel. And then, as is our custom, I will swear
you in and ask you for your opening remarks.
Dr. Mary Anne Franks is an internationally recognized
expert on the intersection of civil rights and technology.
She's a professor of law and the Michael Klein Distinguished
Scholar Chair at the University of Miami, and the president and
legislative and tech policy director of the Cyber Civil Rights
Initiative, a nonprofit organization dedicated to combating
online abuse and discrimination.
Professor Hany Farid is a professor of computer science at
UC Berkeley. He specializes in image and video analysis and
developing technologies to mitigate online harms, ranging from
child sexual abuse to terrorism and deepfakes.
Ms. Jennifer Bennett is a principal at Gupta Wessler, where
she focuses on appellate and Supreme Court advocacy on behalf
of workers, consumers, and civil rights plaintiffs. She
recently argued and won Henderson v. Public Data, a Section 230
appeal before the Fourth Circuit that established a framework
for interpreting the statute that has for the first time
garnered widespread support.
Andrew Sullivan is the president and CEO of the Internet
Society, a global nonprofit organization founded to build,
promote, and defend the internet. Mr. Sullivan has decades of
experience in the internet industry having worked to enhance
the internet's value as an open global platform throughout his
career.
Finally, Professor Eric Schnapper is professor of law at
the University of Washington School of Law in Seattle. He
recently argued the cases of Gonzalez v. Google and Twitter v.
Taamneh before the United States Supreme Court. Before joining
the University of Washington faculty, he spent 25 years as an
assistant counsel for the NAACP Legal Defense and Educational
Fund in New York City, and he also worked for Congressman Tom
Lantos. He is a member of the Washington Advisory Committee of
the United States Commission on Civil Rights. I assume that
your appearance today will not be as arduous as arguing two
Supreme Court cases back to back.
Would the witnesses please stand and raise your right hand?
[Witnesses are sworn in.]
Chair Blumenthal. Thank you.
Senator Whitehouse. Mr. Chairman, does this mean that for
the first time you're not the person in the room who's argued
the most Supreme Court decisions?
Chair Blumenthal. Well, I've done four, but I think Mr.
Schnapper may exceed my record in total. I'm not sure. Let's
begin with Dr. Franks.
STATEMENT OF MARY ANNE FRANKS, PROFESSOR OF LAW AND THE MICHAEL
R. KLEIN DISTINGUISHED SCHOLAR CHAIR, UNIVERSITY OF MIAMI
SCHOOL OF LAW, MIAMI, FLORIDA
Professor Franks. Thank you. In 2019, nude photos and
videos of an alleged rape victim were posted on Facebook by the
man accused of raping her. The posting of non-consensual
intimate imagery is prohibited by Facebook's terms of service.
The company's operational guidelines stipulate that such
imagery should be removed immediately and that the account of
the user who has posted it should be deleted.
However, Facebook moderators were blocked from removing the
imagery for more than 24 hours, which allowed the material--
which the company itself described in internal documents as
revenge porn--to be reposted 6,000 times and viewed by 56
million Facebook and Instagram users leading to abuse and
harassment of the woman.
The reason why, according to internal documents obtained by
the Wall Street Journal, was that the man who had posted the
non-consensual pornography was a famous soccer star. That is,
this was no mere oversight, but rather an intentional decision
by the company to make an exception for an elite user. This was
in accordance with a secret Facebook policy known as cross-
check, which grants politicians, celebrities, and popular
athletes special treatment for violation of platform rules.
The public only knows about this policy because of
whistleblowers and journalists who also revealed Meta's full
knowledge of Facebook's role in genocide and other violence in
developing countries, the harmful health effects of Facebook
and Instagram use on young users, and the corrosive and anti-
democratic impact of misinformation and disinformation
amplified through its platforms.
The law that is currently interpreted to allow Facebook and
other tech platforms to knowingly profit from harmful content
was passed by Congress in 1996 as a Good Samaritan law for the
internet. Good Samaritan laws provide immunity from civil
liability to incentivize people to help when they are not
legally obligated to do so.
The title of the operative provision of this law and the
text of Section 230(c)(2) reflect the 1996 House Committee
reports description of the law as providing, quote, ``Good
Samaritan protections from civil liability for providers or
users of an interactive computer service for actions to
restrict or to enable the restriction of access to
objectionable online material.''
How did a law that was intended as a shield for platforms
who restrict harmful content become a sword for platforms that
promote harmful content? By ignoring the legislative purpose,
history, and the statute's language as a whole to focus on a
single sentence that reads, ``No provider or user of an
interactive computer service shall be treated as the publisher
or speaker of any information provided by another information
content provider.''
The use of the words publisher or speaker, which are terms
of art from defamation law, makes clear that this provision bars
certain types of defamation and defamation-like claims that
attempt to impose liability on people simply for repeating or
providing access to unlawful content.
But many courts have instead interpreted this sentence to
grant unqualified immunity to platforms against virtually all
claims and for virtually all content. An interpretation that
not only destroys any incentive for platforms to voluntarily
restrict content, but in fact provides them with every
incentive to encourage and amplify it.
The Supreme Court, in taking up Gonzalez v. Google, has the
opportunity to undo more than 20 years of the preferential and
deferential treatment of the tech industry that has resulted
from the textually unsupported and unintelligible reading of
the statute. It was an encouraging sign during oral argument
that many Justices pushed back against the conflation of a lack
of immunity with the imposition of liability and seemed
unconvinced by claims that the loss of preemptive, unqualified
immunity would destroy the tech industry.
As Justice Kagan observed, ``Every other industry has to
internalize the costs of its conduct. Why is it that the tech
industry gets a pass?'' Supporters of the Section 230 status
quo respond that the tech industry is special because it is a
speech-focused industry. This claim is disingenuous for two
reasons.
First, Section 230 is invoked as a defense for a wide range
of conduct, not only speech. And second, other speech-focused
industries do not enjoy the supercharged immunity that the tech
industry claims is essential for its functioning.
Colleges and universities are very much in the business of
speech, but they can be sued. As can book publishers and book
distributors, radio stations, newspapers, and television
companies. Indeed, The New York Times and Fox News have both
recently been subjected to high-profile defamation lawsuits.
The newspaper and television industries have not collapsed
under the weight of potential liability, nor can it be
plausibly claimed that the potential for liability has
constrained them to publish and broadcast only anodyne, non-
controversial speech.
There's no guarantee that the Supreme Court will address
the Section 230 problem directly or in a way that would
meaningfully restrict its unjustifiably broad expansion. And
so, Congress should not hesitate to take up the responsibility
of amending Section 230 to clarify its purpose and foreclose
interpretations that render the statute internally incoherent
and allow the tech industry to inflict harm with impunity.
At a minimum, this would require amending the statute to
make clear that the law's protections only apply to speech, and
to make clear that platforms that knowingly promote harmful
content are ineligible for immunity. Thank you.
[The prepared statement of Professor Franks appears as a
submission for the record.]
Chair Blumenthal. Thank you. Thank you very much, Dr.
Franks. Professor Farid.
STATEMENT OF HANY FARID, PROFESSOR, SCHOOL OF INFORMATION AND
ELECTRICAL ENGINEERING AND COMPUTER SCIENCE, UNIVERSITY OF
CALIFORNIA, BERKELEY, BERKELEY, CALIFORNIA
Professor Farid. Chair Blumenthal, Ranking Member Hawley,
and Members of the Subcommittee, thank you. In the summer of
2017, three Wisconsin teenagers were killed in a high-speed car
crash. At the time of the crash, the boys were recording their
speed of 123 miles an hour on Snapchat's speed filter.
Following the tragedy, the parents of the passengers sued
Snapchat claiming that the product which awarded trophies,
streaks, and social recognition was negligently designed to
encourage dangerous high-speed driving. In 2021, the Ninth
Circuit ruled in favor of the parents, reversing a lower
court's ruling that had characterized the speed filter as
creating third-party content, and finding that Snapchat was not
deserving of 230 protection.
Section 230, of course, immunizes platforms in that they
cannot be treated as a publisher or speaker of third-party
content. In this case, however, the Ninth Circuit found the
plaintiff's claims did not seek to hold Snapchat liable for
content, but rather for a faulty product design that
predictably encouraged dangerous behavior. This landmark case,
Lemmon v. Snap, made a critical distinction between a product's
negligent design and the underlying user-generated content, and
this is going to be the theme of my opening statement here.
Frustratingly, over the past several years, most of the
discussion of 230, and most recently in Gonzalez v. Google,
this fundamental distinction between design and content has
been overlooked and muddled. At the heart of Gonzalez is
whether 230 immunizes YouTube when they not only host third-
party content, but make targeted recommendations of content.
Google's attorneys argued that fundamental to organizing
the world's information is the need to algorithmically sort and
prioritize content. In this argument, however, they
conveniently conflate a search feature with a recommendation
feature. In the former, the algorithmic order of content is
critical to the function of a Google or a Bing search.
In the latter, however, YouTube's ``watch next'' and
``recommended for you'' features, which lie at the core of
Gonzalez, are a fundamental design decision that materially
contributes to the product's safety. The core functionality of
YouTube as a video-sharing site is to allow users to upload a
video, allow other users to view the video, and possibly search
videos.
The basic functionality of recommending content--which
accounts for 70 percent of the videos watched on YouTube--is
done in order to increase user engagement and, in turn, ad
revenue. It is not a core functionality. YouTube has argued
that the recommendation algorithms are neutral and that they
operate the same way whether the content is a cat or an ISIS video.
This means then that because YouTube can't distinguish between
a cat and an ISIS video, they've negligently designed their
recommendation engine.
YouTube has also argued that with 500 hours of video
uploaded every minute, they must make decisions on how to
organize this massive amount of content. But again, searching
for a video based on a creator or a topic is distinct from
YouTube's design of a recommendation feature whose sole purpose
is to increase YouTube's profits by encouraging users to binge-
watch more videos.
In so doing, the recommendation feature prioritizes
increasingly bizarre and dangerous rabbit holes full of
extremism, conspiracies, and dubious alternate facts. Similar
to Snapchat's design decision to create a speed filter,
YouTube chose to create this recommendation feature, and they
either knew or should have known that it was leading to harm.
By focusing on 230 immunity from user-generated content, we
are overlooking product design decisions, which predictably
have allowed and even encouraged terror groups like ISIS to use
YouTube to radicalize, recruit, and glorify global terror
attacks.
While much of the debate around 230 has been highly
partisan--on this, Senator Hawley, we agree--it need not be.
The core issue is not one of over or under moderation, but
rather one of faulty and unsafe product design. As we
routinely do in the offline world, we can insist that the
technology in our pockets is safe.
So, for example, we've done a really good job of making
sure that the battery powering our device doesn't explode and
kill us, but we've been negligent in ensuring that the software
running on the device is safe. The core tenets of 230, limited
liability for hosting user-generated content, can be protected
while insisting, as in Lemmon v. Snap, that the technology that is
now an inextricable part of our lives be designed in a way that
is safe.
This can be accomplished by clarifying that 230 is intended
to protect platforms from liability based exclusively on their
hosting of user-generated content, and not, as it has been
expanded, to include a platform's design features, which we now
know are leading to many of the harms that Senator Blumenthal opened
with at the very beginning. Thank you.
[The prepared statement of Professor Farid appears as a
submission for the record.]
Chair Blumenthal. Thank you very much, Professor. Ms.
Bennett.
STATEMENT OF MS. JENNIFER BENNETT, PRINCIPAL,
GUPTA WESSLER PLLC, SAN FRANCISCO, CALIFORNIA
Ms. Bennett. Good afternoon. Thank you for the opportunity
to testify before you today. I'm going to focus on Henderson
v. Public Data, the case Senator Blumenthal mentioned. And the
reason for focusing on that case is that if you look at the
transcript of the oral argument in Gonzalez, what you'll see is
that the parties there
disagreed about virtually everything, the facts, the law,
whether the sky is blue and the grass is green, everything.
The one thing, the one place they found common ground was
that this case, Henderson, got Section 230 right. And so in
thinking about what Section 230 means, what it means, how it
might be reformed, I think Henderson might be a good starting
place. So what is this magical framework that gets Google and
the people suing Google and the United States Government all on
the same page?
This framework has two parts, and it mirrors the two parts
of 230 that people typically fight about. So part 1 addresses
what does it mean to treat someone as a publisher? Because
Section 230 says, ``We'll protect you from claims that treat
you as a publisher of third-party content.'' But it doesn't say
what that means.
And what Henderson says is, ``Well, we know that publisher
liability, what Section 230 is saying about publisher liability
comes from defamation law.'' And in defamation law, what
publisher liability means is holding someone liable for
disseminating to third parties content that's improper.
So, for example, someone goes on Facebook, they say,
``Jennifer Bennett is a murderer.'' I am not in fact a
murderer, so I sue Facebook for defamation. That claim treats
Facebook as a publisher. Because what it's saying is,
``Facebook, you're liable because you've disseminated to third
parties information that I think is improper.''
On the other hand, say I apply for a job and the employer
wants to find out some things about me so they go online and
they buy a background check report about me, and the online
background check company doesn't see if the employer got my
consent. And so, I sue that company. I say the Fair Credit
Reporting Act requires you to ask the employer if they have
consent. You didn't do that.
That claim, as Henderson holds, doesn't treat the company
as a publisher. And the reason for that is that the claim
doesn't depend on anything improper about the content. The
claim says, ``You, company, were supposed to do something and
you didn't do it.'' It's a claim based on the conduct of the
company, not on content.
So that's part 1 of the Henderson framework. A claim only
treats someone as a publisher if it imposes liability for
disseminating information to third parties, where the claim is
that the information is improper for some reason.
Part 2 of the Henderson framework is what it means to be
responsible for content. Because even if a claim treats someone
as a publisher, Section 230, as written, offers no protection
if they're responsible, even in part, for the creation or the
development of that content.
And what Henderson says, and this is what a lot of courts
have said actually, is that at the very least, if you
materially contribute to what makes the content unlawful, then
you're responsible and Section 230 should offer no protection
to you.
So to take a seminal example, say there's a housing
website, and to post a listing on the housing website, the
website requires you to pick certain races of people to which
you'll offer housing. And so, there's a listing that says
whites only. Someone sues the website and says, you're
discriminating. It violates the Fair Housing Act.
The website should have no protection in that case because
the website materially contributed to what's unlawful about the
posting. The website said you have to pick races of people who
the listing should be available to. So that's part 2 of the
Henderson framework, which is that you're responsible for
content, and you're outside the protection of Section
230, even as it currently exists, if you created that content
or materially contributed to what's unlawful about it.
And I just want to end by noting that both parts of this
framework depend on the same fundamental premise. And I think
that's what's driving people's, you know, even Google's
willingness to say this case is correct.
And that fundamental premise is that Section 230 protects
internet companies and internet users from liability when the
claim is based solely on improper content that someone else
chose to put on the internet. But what it doesn't protect, and
what it was never intended to protect, is platforms from
liability based on their own actions. Thank you, again. I look
forward to any questions.
[The prepared statement of Ms. Bennett appears as a
submission for the record.]
Chair Blumenthal. Thank you very much, Ms. Bennett. Mr.
Sullivan.
STATEMENT OF MR. ANDREW SULLIVAN, PRESIDENT AND CHIEF EXECUTIVE
OFFICER, INTERNET SOCIETY, RESTON, VIRGINIA
Mr. Sullivan. Good afternoon, Chair Blumenthal, Ranking
Member Hawley, and distinguished Members of this Subcommittee.
Thank you for this opportunity to appear before you today to
discuss platform accountability.
I work for the Internet Society. We are a U.S.-incorporated
public charity founded in 1992. Some of our founders were part
of the very invention of the internet. We have headquarters in
Reston, Virginia, and in Geneva. Our goal is to make sure that
the internet is for everyone. Making sure that is possible is
what brings me here before you today.
The internet is in part astonishing because it is about
people. Many communications technologies either allow
individuals only to speak to one another, or they allow one
central source, often corporate-controlled, to address large
numbers of people at one time. The internet, by contrast,
allows everyone to speak to anyone. That can sometimes be a
problem. I too am distressed by the serious harms that come
through the internet and that we have heard about today.
But I also know the benefits that the internet brings,
whether that be for isolated people in crisis who find the
help that they need online, or to those who learn a new
useful skill through freely shared resources, or to still
others who are led to new insights or devotions through their
interactions with others. People interact with one another on
the internet and Congress noted this important feature in
Section 230 with its emphasis on how the internet is an
interactive computer service.
Yet the internet is a peculiar technology because it is not
really a single system. Instead, it is made up of many separate
participating systems, all operating independently. The
independent participants, including ordinary people just using
the internet, all use common technical building blocks without
any central control. And when we put all these different
systems together, we get the internet.
Section 230 emerged just as the internet was ceasing to be
a research project and turning into the important communication
medium it is today. But even though Congress was facing
something strange and new, the legislators understood these two
central features. The interactive nature meant that people
could share in ways other technologies hadn't enabled. And the
sheer number of participants meant that each of them needed to
be protected from liability for things that other people said.
The internet has thrived as a result. And this is what concerns
me about proposals either to repeal Section 230 or to modify it
substantially.
Outright repeal would be a calamity as online speech would
quickly be restricted for fear of liability. Even trivial
things--retweeting a news article, sharing somebody else's
restaurant review--would incur too great a risk that somebody
would say something that makes you liable. So anyone operating
anything on the internet would rationally restrict such
behaviors.
Even something narrowly aimed at the largest corporate
players presents a risk to the internet. In a highly
distributed system like this, you can try something without
anyone else being involved, but if some players have special
rules, it is important that everyone else not be subject to
those rules by accident, because those others don't have the
financial resources of the special players.
It would be bad to create a rule that only the richest
companies could afford to meet. It would give them a permanent
advantage over potential new competitors. Issues of the sort
Americans are justly worried about naturally inspire a
response. It is entirely welcome for this Subcommittee to be
examining these issues today.
But because Section 230 protects the entire internet,
including the ability of individuals to participate in it,
it is a poor vehicle to address admittedly grave and insidious
problems that are nevertheless caused by a small subset of
those online. This is not to say that Congress is powerless to
address these important social problems.
Approaches that give rights to all Americans, such as
baseline privacy legislation, could start to address some of
the current lack of protections in the online sphere. Given the
concerns about platform size, competition policy is another
obvious avenue to explore.
We at the Internet Society stand ever willing to consult
and provide feedback on any proposals to address social
problems online. I thank you for the opportunity to speak to
you today. I look forward to answering any questions you have,
and of course, we would be delighted to engage with any of your
staff on specific proposals. Thank you.
[The prepared statement of Mr. Sullivan appears as a
submission for the record.]
Chair Blumenthal. Thanks, Mr. Sullivan.
Professor Schnapper.
STATEMENT OF ERIC SCHNAPPER, PROFESSOR OF LAW, UNIVERSITY OF
WASHINGTON SCHOOL OF LAW, SEATTLE, WASHINGTON
Professor Schnapper. Thank you. Senator Durbin and----
Chair Blumenthal. You might turn on your microphone.
Professor Schnapper. Senator Durbin and yourself, Senator
Blumenthal, you put your finger on the core problem here, which
is that Section 230 has removed the fundamental incentive that
the legal system ought to provide to avoid doing harm. And the
consequence of that statute has been precisely as Senator
Hawley described, that the right of Americans to obtain redress
if they've been harmed by knowing misconduct has been
eviscerated.
Now, part of the concern that led to the adoption of the
statute was that internet companies wouldn't know what was on
their websites, but we now have decades of experience with
the fact that they know exactly what's going on and they don't
do anything about it. And the presence of terrorist materials
on their websites, and the fact that those materials are being
recommended has long been known.
Federal officials have been raising this with the internet
companies for 18 years. In 2005, Senator Lieberman, whom you
know well, wrote a letter to these companies and asked them to
do something about terrorist materials on their websites.
Since then, Members of the other body and of the
administration have made that point publicly. There have been
dozens of published articles about the use of the websites by
terrorist organizations. I brought a sample today, a small
fraction. I mean, I'm happy to provide the staff with other
examples.
Chair Blumenthal. We'll ask that those materials be entered
in the record without objection.
[The information appears as submissions for the record.]
Professor Schnapper. You may want to see how many there are
before you put them all on the record.
[Laughter.]
Chair Blumenthal. We have a big record.
Professor Schnapper. The terrorist attacks were so rooted
in what was happening on the internet that when there was a rash
of terrorist attacks in the state of Israel, they were known as
the Facebook intifada. And complaints were made to the social
media companies without effect.
In January 2015, the problem was so serious that there was
a meeting with internet executives in which the representatives
of the Federal Government were the Attorney General, the
Director of the FBI, the Director of National Intelligence, and
the White House Chief of Staff. I urge the Committee to ask
for a readout of that meeting and what those companies were
told.
Most recently, in the Twitter litigation, a group of
retired generals filed a brief describing the critical role
that social media had played in the rise of ISIS. And again, I
commend that brief to you. I think it's extremely informative
of their informed military judgment about the consequences of
what's been happening.
The response of social media to this problem has often been
indifferent and sometimes deeply irresponsible. In August and
September 2014, two American journalists were murdered by ISIS.
They were brutally beheaded and the killings were videotaped.
When Twitter was called upon to stop publicizing those
types of events, an official commented, ``One man's terrorist
is another man's freedom fighter.'' That illustrates how
fundamentally wrong the status of the law is today. And there's
a good account of other comments like that from social media in
a brief that was filed by the Concerned Women for America,
which describes efforts and responses of that kind.
What we have learned from the past 25 years is that
absolute immunity can breed absolute irresponsibility. Now, we
understand that private corporations exist to make a profit,
but they also have obligations to the rest of the country and
to your constituents to be concerned about the harms they can
cause. Google and Meta have made billions of dollars since the
enactment of Section 230, and Twitter may yet turn a profit.
But those firms have a long way to go before they emerge from
moral bankruptcy. Thank you.
[The prepared statement of Professor Schnapper appears as a
submission for the record.]
Chair Blumenthal. Thank you, Professor Schnapper. You
argued before the United States Supreme Court. I think it's
pretty fair to say that the Court was struggling with many of
these issues. And Justice Kagan said, quote, ``Every other
industry has to internalize the costs of misconduct. Why is it
that the tech industry gets a pass? A little bit unclear,'' end
quote.
She went on to say, ``On the other hand, I mean, we're a
court. We really don't know about these things. You know, we
are not like the greatest experts on the internet.'' That
became clear, I think, in the course of the argument, but it
also emphasizes the importance of what we're doing here.
Because, ultimately, my guess is that the Court will turn to
Congress.
But I think it's also worth citing a remark by Chief
Justice Roberts when he said, ``The videos,'' I'm quoting,
``just don't appear out of thin air. They appear pursuant to
the algorithms,'' end quote. The Supreme Court understands that
these videos, the content very often is driven, it's
recommended, it's promoted, it's lifted up, sometimes in a very
addictive way to kids. And some of it absolutely abhorrent, to
which they have been, as you put it Professor Schnapper,
indifferent or downright irresponsible.
And let me just make clear, Mr. Sullivan, we are not
denying the benefits of the internet--important benefits
in interactive communication and the large number of
participants. But the cases that have begun to make a start
toward reining in Section 230--Henderson, described by Ms.
Bennett, and before it, Roommates and Lemmon, both cases that
carve out limits in their own ways, Henderson through the
material contribution test--show that we can establish limits
without breaking the internet and without denying those
benefits.
Let me ask you, Ms. Franks. You know well the material
contribution test. In your testimony, you distinguish--you make
another potential distinction or test involving information
versus speech. I wonder if you could comment on the material
contribution test, whether it is sufficient or whether we need
a different kind of standard incorporated into the statute.
Professor Franks. Thank you. As to the first question, I
think the material contribution test would be useful if we had
agreement about what it meant. And there seems to be a lot of
uncertainty about how to apply that test. And so, I would be
concerned that that test would be difficult to codify. What I
think on the other hand would be--a promising approach would be
to incorporate some standard along the lines of deliberate
indifference to unlawful content or conduct.
And to relate to the other part of your question, the
reason why I've advocated for a specific amendment that would
change the word ``information'' to ``speech'' is partly because
a lot of the rhetoric that surrounds much of the defense of the
status quo is that it's intended to defend free speech in some
sort of general sense.
The tech industry is able to leverage that halo of the
First Amendment to say, ``If it weren't for us, you wouldn't
get to have any free speech.'' And I think that is suspect for
many reasons, not least because the kind of speech that is
often encouraged by these platforms and amplified is speech
that silences and chills vulnerable groups.
But it is also troubling because a lot of what gets invoked
for Section 230's protections are not speech, or at least are
not uncontroversially speech. And what I mean by this is that
the Supreme Court has actually had to struggle over decades to
figure out whether or not, for instance, an armband is speech
or whether the displays of certain flags are speech. And,
ultimately, the Supreme Court has been quite protective of
certain types of conduct that they deem to be expressive.
But usually, that takes some sort of explicit consideration
and reflection as to, is this expressive enough conduct to get
the benefit of First Amendment protection? And by using the
word ``information'' and allowing it to be interpreted incredibly
widely, what companies are able to do is to short-circuit that
kind of debate over whether or not what they're actually doing
and what they're involved with is in fact speech.
And I think that the clarification that it has to be speech,
and that the burden should be on companies to show that what is
at issue is in fact speech, would be very helpful.
Chair Blumenthal. Thank you. I have many more questions.
I'm going to stay within the 5-minute limit so that as many of
my colleagues as possible can ask their questions. I turn now
to Senator Hawley.
Senator Hawley. Thank you very much, Mr. Chairman.
Professor Schnapper, let me start with you. Thinking about the
arguments that you made in both the Gonzalez case and then also
in the Twitter case recently, in both of those cases, just to
make sure that folks who are listening understand it, you were
arguing on behalf of victims' families that were challenging
the tech companies. Have I got that basically right?
Professor Schnapper. Yes, sir.
Senator Hawley. So the Court, of course, is deliberating on
this case, so we don't know exactly what they're going to do.
We'll have to wait to find out. But in both of these cases,
help us understand your argument and set the scene for us. You
are arguing that there is a difference. These tech companies
have moved beyond merely hosting user-generated content to
affirmatively recommending and promoting user-generated
content. Is that right? Is that----
Professor Schnapper. That's correct.
Senator Hawley. So explain to us the significance of that.
What's the difference between claiming immunity from not just
hosting user-generated content, but now claiming immunity from
promoting and affirmatively recommending and pushing user-
generated content?
Professor Schnapper. Well, I think that's a distinction
that derives from the wording of the statute. The statute seeks
to distinguish between conduct of a website itself and
materials that were simply created by others. And that
distinction's clear on the face of the statute and the
legislative history.
Representative Lofgren at one point said, ``Holding
internet companies responsible for defamatory material would be
like holding the mailman,'' that was the language used
at the time, ``responsible for delivering a plain, brown
envelope.'' What's happening today is far afield from merely
delivering plain, brown envelopes. Internet companies are
promoting this material, and they're doing it to make money.
At the end of the day, social media companies make money by
selling advertisements. The longer someone is online, the more
advertisements they sell. And they have developed an
extraordinarily effective and sophisticated system of
algorithms to promote material and keep people online. And it
sweeps up cat videos and it sweeps up terrorist materials and
it sweeps in depictions of tragically underweight young women
with dreadful consequences. So that's the distinction we were
drawing.
Senator Hawley. You mentioned algorithms and I think this
is so important. Tell us why you think these algorithms which
didn't generate themselves--the algorithms are designed by
humans, they're designed by the companies. In fact, the
companies regard them as very proprietary information. I mean,
they protect them with their lives, the essence of their
companies, their business model in many cases.
Tell us what legal difference under Section 230 you think
these algorithms and algorithmic promotion makes in these kind
of cases. Why is that such a key factor?
Professor Schnapper. Well, the algorithms are the method by
which the companies achieve their goal of trying to interest a
viewer in a particular video or text or whatever.
And it's done in a variety of ways. It's done with auto-
play so that you turn on one video and you start to see a
series of others that you never asked for. It's done through
little advertisements. They're known as thumbnails, which
appear on a YouTube page. It's done with feed and newsfeed,
where Facebook, in the hopes of keeping you online more,
proffers to you materials which they think you'll be interested
in.
Senator Hawley. So let me just ask you this. Does anything
in the text of Section 230, as it was originally written,
suggest, in your view, that platforms ought to get this form of
super immunity for taking other people's content, hosting it,
promoting it, and, in promoting it, making
money off of it? I mean, does the statute immunize them from
that? Does anything in the text support the super immunity in
that way?
Professor Schnapper. I spent a very long hour and a quarter
trying to answer that question a few weeks ago. We think the
text does draw that distinction. And that brings back so many
happy memories that you asked that.
[Laughter.]
Professor Schnapper. So yes, that's our view, but we're not
here to retry the case. But that is our view of the meaning of
the statute, but it doesn't--it would be entirely appropriate
for the Committee to clarify that.
Senator Hawley. Let me just get to that point and finish my
first round of questions with that. If Congress acts on this
issue, what would be your recommendations for the best way to
address this problem, from a policy legislative perspective?
The problem you've identified in this case is about affirmative
recommendations. How should we change the statute, reform the
statute to address this problem?
Professor Schnapper. I prefer not to try to frame
a legislative proposal as I sit here. It's complicated. And I'd
be happy to work with your staff and my colleagues here, all of
them, on that for you. But I think it would be inappropriate
for me to start tossing out language as I sit here.
Chair Blumenthal. Thanks, Professor Schnapper. Thanks,
Senator Hawley. Senator Padilla.
Senator Padilla. Thank you, Mr. Chair. I want to start out
by asking consent to enter into the record a letter from more
than three dozen public interest organizations, academics,
legal advocates, and members of industry. A letter that notes,
``In policy conversations, Section 230 is often portrayed by
critics as a protection for a handful of large companies. In
practice, it's a protection for the entire internet
ecosystem.''
Chair Blumenthal. Without objection, your letter is made a
part of the record.
[The information appears as a submission for the record.]
Senator Padilla. Thank you. As we heard from the Supreme
Court, this is a very thorny and nuanced issue, and we need to
make sure that we treat it as such. Because of Section 230, we
have an internet that is a democratizing force for speech,
creativity, and entrepreneurship.
Marginalized and underserved communities have been able to
break free of traditional media gatekeepers and communities
have leveraged platforms to organize for civil rights and for
human rights. But it's also important to recognize that there
is horrifying conduct and suffering that we can and must
address.
My first question is for Professor Franks. In your
testimony, you call for internet companies to more aggressively
police their sites for harassment, hate speech, and other
abhorrent conduct. And you recommend changes to Section 230 to
compel that conduct. I share your concerns about the prevalence
of this activity online.
Now, that said, I also know that many marginalized
communities rely on platforms to organize. Many of the same
communities fall prey to the automated and inaccurate tools
employed by companies to enforce their content moderation
policies at scale. Is it possible to amend Section 230 in a way
that does not encourage providers to over-remove lawful speech,
especially by users from marginalized groups?
Professor Franks. Thank you for this question. I'd first
like to state that the current status quo, where companies
essentially have no liability for their decisions, means that
they can make any decisions they would like, including ones
that disproportionately harm marginalized groups. And so, while
it is encouraging to see that some platforms have not done
so--some platforms have behaved responsibly, and some have even
made a commitment to in fact amplify marginalized voices--
these are all decisions that they are making essentially
according to their own bottom lines or their own motivations.
And they can't really be relied upon as a guide for how to run
businesses that are so influential throughout our entire
society.
So when I suggest that Section 230 should be changed, I do
want to, again, emphasize the distinction between immunity and
liability, which is to say, Section 230 presumably provides
immunity from certain types of actions. But being found not to
have immunity is not the same thing as being found responsible
for those actions.
So my suggestions are really directed towards asking the
industry the same question that Justice Kagan has asked, which
is, Why shouldn't this industry be just as subject to the
constraints of potential litigation as any other industry? So,
not that they should be treated worse, but that they should be
treated the same as many other industries.
And what that would hopefully do is incentivize these
platforms to at least take some care in the way that they
design their products and the way that they apply their
policies--not give them a sort of directive that says, this is
how you have to do it, because you don't need a directive like
that.
Essentially, what you need is to allow companies to act in
a certain way, and if they do so in a way that contributes to
harm and there is a plausible theory of liability, they should
have to account for that. Nothing should preemptively allow
them to say, ``We are excused from this conduct,'' or declare
that they are guilty of it; the point is simply to change the
incentives so that they sometimes have to worry about the
possibility of being held accountable for their contribution to
harm.
Senator Padilla. Thank you. My next question is for Mr.
Sullivan. Yesterday we had a Subcommittee hearing on
competition policy that focused on digital markets. I want to
make sure our legislative efforts to promote an open,
innovative, equitable, and competitive internet harmonize with
the platform accountability efforts here.
Notably, in response to questioning during oral arguments
in Gonzalez v. Google, Google's attorney acknowledged that
while Google might financially survive liability for some
conduct presented as a hypothetical, smaller players most
definitely could not. Can you speak to the role Section 230
plays in fostering a competitive digital ecosystem?
Mr. Sullivan. Yes. Thank you for the question, because this
is the core of why the Internet Society is so interested in
this. This is precisely what the issue is. If there are changes
to 230, it is almost certain that the very largest players will
survive it because they've amassed so much wealth. But a small
player is going to have a very difficult time getting into that
market, and that's one of the big worries that I have.
You know, the internet is designed with no permanent
favorites, and if we change the rules to make that favoritism
permanent, it's going to be harmful for all of us.
Senator Padilla. All right. Complex indeed. Thank you, Mr.
Chair.
Chair Blumenthal. Thanks, Senator Padilla. I'm going to
call now on Senator Blackburn, who has been, like Senator
Hawley, a real leader in this area. She and I have co-sponsored
the Kids Online Safety Act, which would provide real relief to
parents and children--tools and safeguards they can use to take
back control over their lives, and more transparency for the
algorithms.
And then we will turn to Senator Klobuchar, who has been
such a steadfast champion on reforming laws involving Big Tech,
her SAFE TECH bill, as well as the competition bills that you
mentioned, Mr. Sullivan, that I've been very privileged to help
her lead on. Senator Blackburn.
Senator Blackburn. Thank you, Mr. Chairman. And this is one
of those areas where we have bipartisan agreement. And as the
Chairman said, I've worked on this issue of online safety for
our children for quite a while, and on privacy and data
security for consumers as they've moved more of their
transactional life online.
And, Ms. Bennett, I think I want to come to you on this.
When I was in the House and Chairman of Comms and Tech there, I
passed FOSTA/SESTA, and that has been implemented. And we had
so much bipartisan support around that and finally got the
language right and finally got it passed and signed into law.
And some of the people that worked with us during that time
have come to me recently and have said, ``Hey, the courts are
trying to block some of the victims' cases based on 230
language.'' And, Professor, I see you nodding your head also. I
would like to hear from you what you see as the problem they
have identified, how we fix it if you think there is a fix, or
whether this is just an excuse that you think they're using not
to move these cases forward.
Ms. Bennett. Sure. So I actually don't litigate FOSTA/SESTA
cases. So if--was it Professor Franks who was nodding their
head? I unfortunately don't know the answer to that for you,
but I'd be happy to get it for you and could submit it
afterwards.
Senator Blackburn. I would appreciate that.
Ms. Bennett. I'd be very happy to get that.
Senator Blackburn. Go ahead, Professor.
Professor Farid. Yes. I'm not the lawyer in the room, I'm
the computer scientist, but I will say I've seen the same
arguments being made. I want to come back to something from
earlier, too, because I think it speaks to your question,
Senator, about small platforms. Small platforms have small
problems. They don't have big problems.
In fact, we have seen in Europe that when more aggressive
legislation is deployed, small companies comply quite easily.
So I don't actually buy this argument that somehow regulation
is going to squash the competition, because small platforms
don't have big problems.
Coming back to your question, Senator Blackburn, we also
saw--and I think this is important as we're talking about 230
reform--the same cries of, if you do this, you will destroy the
internet. And it wasn't true. And so we can have modest
regulation. We can put guardrails on the system and not destroy
the internet. I am seeing, by the way--and I don't know the
legal cases--some pushback on enforcing SESTA/FOSTA, and I
think that's something Congress has to take up.
Senator Blackburn. Well, I think you're right about that.
That's probably another thing that we'll need to revisit and
update as we look at children's online privacy in COPPA 2.0.
Senator Markey, when we were in the House, led on that effort.
And then, Senator Blumenthal and I have had the Kids Online
Safety Act. Recently, Senator Ossoff and I introduced the
REPORT Act, which would bolster NCMEC. And we think that's
important to do. It would allow CSAM evidence to be kept for a
longer period of time so that these cases can actually be
prosecuted.
And it's interesting that one of the things we've heard
from some of the platforms is that changes to Section 230 would
discourage the platforms from moderating for things like CSAM.
I would be interested to hear from the professor--really, from
each of you on the panel--whether you believe that reforming
230 would be a disadvantage, that it would make it more
difficult to stop CSAM and material like this. Because it's
amazing to me that they think changing the law--being more
explicit in the language, removing some of the ambiguous
language in 230--would be an incentive for the platforms to
allow more rather than a disincentive. Ms. Franks, I'll start
with you.
Professor Franks. Thank you. I think the clarity that we
need here, about Section 230 and about this criticism, is to
say which part of Section 230. If the objection is to changes
to (c)(1), which is really the part of the statute that is
being used so expansively--if the argument is that some of
those changes would make it harder and would disincentivize
companies from taking these kinds of steps--I'd say that's
absolutely false.
(c)(2) quite clearly and expressly says this is exactly how
you get immunity: by restricting access to objectionable
content. So what that means, of course, is that if it's a
Section 230(c)(1) revision, you still have (c)(2) to encourage
and to incentivize platforms to do the right thing.
That being said, potential attacks on (c)(2) could in fact
have an effect on whether or not companies are properly
incentivized to take down objectionable material. But there is,
of course, also the First Amendment that would come into play
here, too. Because as private companies, these companies have
the right to take down, to ignore, to simply not associate with
certain types of speech if they so choose.
Senator Blackburn. Okay. Professor, anything to add?
Professor Farid. I'll point out a couple of things here. I
was part of the team back in 2008 that developed technology
called PhotoDNA, which is now used to find and remove child
sexual abuse material, CSAM. That was in 2008, and that was
after 5 years of asking, begging, pleading with the tech
companies to do something about the most horrific content, and
they didn't. It defies credibility that changes to 230 are
going to make them less likely to do this. They came kicking
and screaming to do the absolute bare minimum, and they've been
dragging their feet for the last 10 years as well. So I agree
with Professor Franks. I don't think that this is what the
problem is. I think they just don't want to do it because it's
not profitable.
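PhotoDNA's actual algorithm is proprietary and is not shown
here. As a rough, hypothetical illustration of the general
pattern Professor Farid describes--hashing uploads and matching
them against a curated list of hashes of known illegal
images--a much cruder perceptual ``average hash'' can sketch
the idea in a few lines of Python:

    # Illustrative sketch only; PhotoDNA's real algorithm is
    # proprietary and far more robust. This "average hash" shows the
    # general hash-and-match pattern, not the production technique.
    from PIL import Image  # pip install Pillow

    def average_hash(path, size=8):
        # Shrink the image to a tiny grayscale grid and record which
        # pixels are brighter than the mean. Minor edits
        # (recompression, resizing) usually flip only a few bits.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    def hamming(a, b):
        # Count of differing bits between two hashes.
        return bin(a ^ b).count("1")

    def matches_known_image(candidate_hash, known_hashes, threshold=5):
        # Flag an upload whose hash is within a few bits of any hash
        # on a curated list of known illegal images.
        return any(hamming(candidate_hash, k) <= threshold
                   for k in known_hashes)

Unlike a cryptographic hash, a perceptual hash changes only
slightly when the image is altered slightly, so near-duplicates
of known material can still be caught.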
Senator Blackburn. Thank you. Ms. Bennett, anything to add?
Ms. Bennett. I will do what everybody should always do,
which is agree with Professor Farid and Professor Franks. To
the extent we're talking about (c)(1), it shouldn't have any
impact. If you're keeping the good-faith defense for removing
content, then that's still there, and no changes to (c)(1)
should impact that.
Senator Blackburn. Thank you. Mr. Sullivan.
Mr. Sullivan. While I agree with everything that has just
been said, the truth of the matter is that this illustrates why
this is such a complicated problem, because when you open up
the legislation, the chances that only one little piece of it
is going to get changed are not so high. And so, the problem
that we see is, you know, Section 230 is what gives the
platforms the ability to do that kind of moderation. It's what
protects them. And therefore, you know, we're concerned about
the potential for that to change as well.
Senator Blackburn. Okay. Professor?
Professor Schnapper. I can't quite agree with everybody.
It's gotten a little more complicated, but I think you can
reform (c)(1) without creating disincentives to remove
dangerous material. I think that's sort of a makeweight
argument. I think you have to be careful about changes to
(c)(2), although I understand that there are issues there.
But let me just bring home a point that I guess Professor
Farid made. Spending money to remove dangerous material from a
website is not a profit center, and I think Elon Musk has
explained that to the country in exquisite detail. There are no
financial incentives to avoid harm--you don't make money by
doing it--and you've got to change those incentives.
Senator Blackburn. I'm way over and I thank you for your
indulgence.
Chair Blumenthal. Thanks a lot, Senator Blackburn. Senator
Klobuchar.
Senator Klobuchar. Oh, thank you very much. And thank you
to both you, Chair Blumenthal, and Senator Hawley for holding
this hearing, Senator Blackburn for her good work in this area.
So I was thinking: Section 230 was enacted back in 1996.
There are probably just one or two remaining Members who were
involved in leading that bill, back when we had dial-up modems
accessing CompuServe. That's what we're dealing with here.
To say that the internet of 2023 is different from what
legislators contemplated in 1996 is a drastic understatement.
And yet, as I said at our Antitrust Subcommittee hearing
yesterday, the largest dominant digital platforms have stopped
everything that we have tried to do to update our laws to
respond to the issues we are seeing, from privacy to
competition.
And like Senator Blumenthal--with the exception of the
human trafficking work that I'd been involved in early on--I
was not crying for major changes to Section 230 either at the
beginning. And part of what's brought me to this moment is the
sheer opposition to every single thing we try to do.
Even when Lindsey Graham and I--and before that, Senator
McCain--did the Honest Ads Act, to put in place disclaimers and
disclosures, we got initial objection and then eventually some
support, but it still hasn't passed. The same with the
competition bills, and the work even on algorithms.
The simple idea is that we should do some reforms to the
app stores--this idea that they shouldn't be self-preferencing
their own products when they have a 90 percent or a 40 percent
market share, depending on which platform it is. And then
there's the hypocrisy: things that we were told would break the
internet, we now see them agreeing to do in Europe. That is the
final dagger as far as I'm concerned, and why you see shifting
positions on Section 230.
Obviously, this is also a cry for the companies to come
forward and actually propose some real reforms we can put into
law, because so far it's just been trying to buy it all off
with money--commercials, ads attacking those of us who have
been trying to make a difference.
So my question, I guess, to you, Professor Farid, first is
this. They've said, ``Trust us, we've got this,'' for so long.
And given the way the internet companies amplify content and,
as Senator Hawley was explaining, profit off of it, allowing
criminal activity to persist on their platforms, we clearly
need reforms.
And I always think of it like yelling ``Fire'' in a crowded
theater. The theater or multiplex, as long as they have nice
exits, they aren't going to be liable. But if they broadcast it
in all their theaters--that would be the algorithms--that would
be a different story.
You noted in your testimony that some legal arguments have
conflated search algorithms with recommendation algorithms. Can
you explain how these algorithms differ and their role in
amplifying content on platforms?
Professor Farid. Good. Thank you, Senator. So if you go to
Google or Bing and you search for whatever topic you want, your
interests and the company's interests are very well-aligned.
The company wants to deliver to you relevant content for your
search, and you want relevant content, and we are aligned and
they do a fairly good job of that. That is a search algorithm.
It is trying to find information when you proactively go and
search for something.
When you go to YouTube, however, to watch a video from a
link that I sent you, you didn't ask for them to queue up
another video. You didn't ask for the thumbnails down the
right-hand side. You didn't ask for any of that. And, in fact,
you can't really turn any of that off. That's a recommendation
algorithm.
And the difference is that while the search algorithm
aligns the company's interests and your interests, that is not
true of recommendation algorithms. Recommendation algorithms
are designed for one thing: to make the platform sticky, to
keep you coming back for more, because the more time you spend
on the platform, the more ads are delivered, and the more money
they make.
And if we're talking about harms, we've talked about
terrorism, we've talked about child sexual abuse, we've talked
about illegal drugs and illegal weapons, we should also talk
about things like body image issues. We should talk about
suicidal ideation.
Go to TikTok, go to Instagram. Start watching a few videos
on one topic, and you get inundated with them. Why? Because the
recommendation algorithm is vacuuming up all your personal data
and trying to figure out what it is that is going to bring you
back here over and over again.
Last thing on this issue, because it goes to knowledge: the
Facebooks of the world, the YouTubes of the world know that the
most conspiratorial, the most salacious, the most outrageous,
the most hateful content drives user engagement. Their own
internal studies have shown that as content moves from cats, to
lawful, to awful but lawful, and then across the violative line
to illegal, engagement goes up.
And so, the algorithms have learned to recommend exactly
the problematic content, because that is what drives user
engagement. We should have a conversation about what is wrong
with us--why do we keep clicking on this stuff?--but the
companies know that they are driving the most harmful content
because it maximizes profit.
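Both kinds of algorithm Professor Farid contrasts are, at
bottom, ranking functions; what differs is the objective being
optimized. The following is a minimal, hypothetical sketch of
that difference; the scoring rules are invented for
illustration and do not represent any platform's actual system:

    # Hypothetical sketch: both "algorithms" just sort a list of
    # items, but against different objectives. Invented for
    # illustration; not any platform's real ranking system.

    def search_rank(items, query):
        # A search algorithm orders items by relevance to what the
        # user explicitly asked for, so user and company interests
        # align.
        terms = set(query.lower().split())
        return sorted(items,
                      key=lambda it: len(terms &
                                         set(it["title"].lower().split())),
                      reverse=True)

    def recommend_rank(items, watch_history_topics):
        # A recommendation algorithm orders items by predicted
        # engagement, inferred from personal data, with no explicit
        # request from the user at all.
        def predicted_engagement(it):
            affinity = len(set(it["topics"]) & set(watch_history_topics))
            # Toy proxy for the effect described above: content scored
            # as more provocative is predicted to hold attention longer.
            return affinity * it["provocativeness"]
        return sorted(items, key=predicted_engagement, reverse=True)

Nothing in the second function asks what the user wanted; it
asks only what is predicted to keep the user on the platform.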
Senator Klobuchar. Okay. Thank you. Professor Franks, kind
of along those lines, circuit courts have interpreted Section
230 differently, with some saying that social media and
internet companies are not liable for content that could only
have been created with the tools they designed.
However, unlike most other companies that make dangerous or
defective products, internet and social media companies are
often shielded by Section 230 from cases that involve design
defects. Should Congress consider reforming Section 230 to
allow for design defect cases to move forward when a site is
designed in a way that causes harm?
Professor Franks. Thank you. As I said with the other
suggestion about revisions to Section 230, the concern I would
have is about how exactly to codify that sort of standard. I
think that the impulse there is a good one, and I think that
the distinction between faulty design as opposed to simply
recommending or providing access to other content is a solid
distinction to make.
My concern is that the best and most efficient way to
reform Section 230 is to try not to think about discrete
categories of harmful content or conduct, but rather to address
the underlying fundamental problem with Section 230, which is
this idea that you should provide immunity in exchange for
basically doing nothing, or even for accelerating or promoting
harmful content.
Senator Klobuchar. Yes, I agree. I was just trying to, you
know, throw it out there. The last thing: how would reforms to
Section 230, Professor Franks, create a safer internet for
kids, and where should Congress focus its efforts? We have a
lot of things before us--talk a little bit about why that
should be a priority.
Professor Franks. Well, one of the reasons that's a high
priority is exactly for the reasons that Professor Farid has
been speaking to: those behavioral changes that we see are
essentially an intended consequence, an intended strategy on
the part of companies to keep people on their platforms longer
and to keep them engaging with those platforms. These are
dangerous for adults, but they're particularly pernicious for
children.
This is a kind of approach that is essentially trying to
encourage a form of addiction to these services. And it is
part, I think, of what explains some of the very heightened
rhetoric on the part of the tech industry and those who are
convinced that the status quo is the best way forward. People
identify so closely with their social media platforms at this
point that any changes that are suggested to Section 230 feel
like personal attacks.
And I think that is a testament to how much Google and
Facebook and TikTok, and every other company we can think of,
are striving, and succeeding, to make us feel that we cannot
live without these products--that they're not products that we
are using, but that they're using us. And so, I think it is of
particular importance and concern when this kind of effect
takes hold in younger and younger children, who have had really
no time to develop their own personalities and their own
principles.
Senator Klobuchar. Okay. Thank you.
Chair Blumenthal. Thanks, Senator Klobuchar. Senator
Hirono.
Senator Hirono. Thank you, Mr. Chairman. It's very clear
that we want to make changes to Section 230, but there are
always unintended consequences whenever we attempt to do that.
There has been a lot of discussion of unintended consequences
arising out of SESTA/FOSTA.
Sex workers have raised legitimate concerns about the
consequences of that legislation and its effect on their
safety. But that does not mean that we should shy away from
reforming Section 230 to protect other marginalized
groups--just that we need to be very intentional about doing so
and pay attention to the potential unintended consequences.
This is for Professor Franks. Can you explain how the
experience of SESTA/FOSTA should inform the types of reforms we
should pursue?
Professor Franks. Thank you. I do think that SESTA/FOSTA is
a good and instructive example of what can go wrong when
Section 230 is amended. It, of course, had the very best of
intentions. There were concerns, however, throughout the
process that were coming from some of the individuals and
groups who were saying, this kind of change is going to affect
us most, and please listen to our concerns about how it should
be done.
So I think that one lesson there is definitely to identify
and to bring into the conversation the individuals who are most
likely to be impacted by any form of reform. That's lesson one.
I think the other lesson is that this shows the dangers of
attempting to single out a certain category of bad behavior and
carve it out in the statute, as opposed to, as I said before,
identifying the fundamentally flawed nature of Section 230 as
it stands right now, as it's interpreted by the courts, and
trying to fix it on a more generalized level. Because I think
the more particularized we get with this, the more likely it is
that we are going to make mistakes and have unintended
consequences.
Senator Hirono. Well, when you talk about not focusing on
certain types of bad behaviors but looking at the general
problem with Section 230, how would you make the kind of
changes that you're talking about to protect vulnerable
communities?
Professor Franks. The two forms of amendment that I
particularly suggest are changing the word ``information'' in
(c)(1) to refer to ``speech'' instead, and limiting (c)(1)'s
protections to those who are not engaged in the knowing
promotion of, or contribution to, unlawful content. I've
suggested that the language there should be a deliberate
indifference standard, because that is a standard that is used
in other forms of third-party liability cases and areas.
And so, what I think would be useful about that approach is
that it is not an approach that's going to try to take one type
of harm and say that it is more harmful than something else,
but rather to say, this is really how this form of liability
tends to work in other industries and in other places.
And to be clear, not just industries that have very little
to do with what the tech industry supposedly does, namely
sometimes speech, but actually the industries that are very
much about speech, including newspapers and television
broadcasters and universities, all of whom have to be
responsible at a certain level if they are deliberately
indifferent to unlawful conduct.
Senator Hirono. I think you were asked this question
related to the SAFE TECH Act, which does talk about protecting
speech rather than information. So are the other panelists
aware of the provisions of the SAFE TECH Act? And if so, would
any of you agree that protecting speech is okay, but, you know,
protecting information is not where we want to go? That may be
one of the approaches that we should take to reforming Section
230. Would any one of the other panelists like to weigh in?
Professor Schnapper. This may be too complicated to solve
quite that way. Turning to Senator Klobuchar's point, it's not
difficult to imagine lawyers putting back into the word
``speech'' everything that the Committee thought it was taking
out.
Senator Hirono. Darn those lawyers. Okay. Yes, I realize
that if we're going to make that kind of change, I think we
need to provide more guidance as to what we mean by what we
want to protect.
Again, for Professor Franks: one of the concerns I've had,
particularly after the Supreme Court struck down 50 years of
precedent and the right to abortion, is that reproductive
health data collected by these tech platforms may be used to
target individuals seeking these services.
So these apps and websites are collecting location data,
search histories, and other reproductive health information.
And last Congress, I introduced the My Body, My Data Act to
help individuals protect private sexual health data. However,
what I'm understanding is that though that Act creates a
private right of action to allow individuals to hold regulated
entities accountable for violations, these tech platforms can
currently just hide behind Section 230, even when put on notice
that this information is being used for nefarious, unintended
purposes.
Based on your extensive legal experience, is there a way to
hold accountable tech companies that disseminate reproductive
health information from behind this shield of protection?
Professor Franks. I think it's possible. I think that it
would require moving away from this dominant interpretation of
Section 230 as it currently stands.
Because that view of Section 230--that reading of (c)(1) as
providing some sort of unqualified immunity to these
platforms--really makes it difficult for any individual who is
harmed in this way to even get their foot in the courtroom
door.
And so, I think what we would need at this point is either
a very wise decision from the Supreme Court about how to
properly interpret (c)(1), or we would need Congress to
clarify, once again, that (c)(1) can be modified to make clear
that these companies can in fact be sued if there is a
plausible theory of liability and a causal connection between
what those platforms did and the ultimate harm resulting to a
plaintiff.
Senator Hirono. It's not that easy for the plaintiff to
show that, but she should have that opportunity, I would say.
Professor Franks. Exactly.
Senator Hirono. Thank you. Thank you, Mr. Chair.
Chair Blumenthal. Thanks, Senator Hirono. We may have other
Members of the Subcommittee or our Committee come, but why
don't we begin the second round of questions now and we can
interrupt to accommodate them when they come here.
Let me just say to Professor Franks, I appreciate your
comments about SESTA, as one of its principal authors and
co-sponsors. We endeavored to listen, and we will change the
statute if it has unintended consequences, and we will listen
in the course of ongoing Section 230 reform, whether it's the
EARN IT Act that Senator Graham and I are co-sponsoring or
others. A number of us have proposals.
As I mentioned, Senator Klobuchar has the SAFE TECH Act,
and Senator Hirono is a co-sponsor, as I am. Senator Hawley has
a number of very promising proposals. But I think we should be
very clear about what is really going on here. And, Professor
Schnapper, I think you made reference to the money involved.
The fact of the matter is that Big Tech is making big bucks
by driving content to people while knowing of the harms that
result. We saw that in the documents that were before the
Commerce Subcommittee on Consumer Protection, which helped
support the Kids Online Safety Act. More eyeballs for longer
periods of time mean more money. And Big Tech may be neutral or
indifferent on the topic of eating disorders, suicide,
bullying, or other harms. They may not want to take out an ad
saying, engage in these activities, but they know that
repeating it, amplifying it, and in fact addicting kids to this
kind of content has certain consequences. And as Justice Kagan
said, ``Why should Big Tech be given a pass?''
An aircraft maker like Boeing that has a faulty device that
causes the plane to nose-dive, a car company like GM that has a
defective ignition switch that causes the car to stop and go
off the road--they're held responsible. Why shouldn't Big Tech
be held responsible, whether the standard is deliberate
indifference or some other standard? It may be difficult, it
may be complicated, but it's hardly impossible to impose a
standard.
So let me ask you, Mr. Sullivan, what's your solution here?
I'm asking Big Tech to be part of the solution, not just the
problem.
Mr. Sullivan. Well, let me be clear that I can't speak for
Big Tech because I work for a nonprofit.
Chair Blumenthal. Well, I'm asking then.
Mr. Sullivan. But, you know, our concern is really what
users need. Our concern is really what people need. And what we
are trying to point out is that 230 is the thing that allows
the internet to exist. So I am not here to say that the
behavior that we see--the behavior of various large tech
corporations, the behavior of some platforms, you know, perhaps
ones outside of the United States for that matter--is all
unproblematic. There are definitely problems there.
What I'm suggesting is that attacking this narrow piece of
legislation is going to harm the internet in various ways. And
so, if you want to do something about large corporations, for
instance, then you've got an issue having to do with industrial
policy. It's not actually an issue to do with the internet. And
it seems to me that, you know, these concerns are legitimate
ones, but I think we're reaching for the wrong tool.
Chair Blumenthal. Well, the tool can't be just more
competition. The tool can't be more privacy, as you've
suggested in your opening comments. It has to be something
dealing with this harm. And as I have said before, I'll say
again, the carve-outs, the limits that have been imposed so
far, whether it's Henderson or Lemmon or other caselaw, haven't
broken the internet. I don't think you can argue that Section
230 as it exists right now is essential to continuing the
internet. That's not your position. Is it?
Mr. Sullivan. I think that Section 230 is a critical part
of keeping the internet that we have built. And the reason I
think that is because it protects people in those interactions.
It protects from that kind of third-party liability. I am not,
you know, I'm not here to suggest that it is logically
impossible to find a particular carve-out from 230 that will
help solve some of these problems.
I haven't seen one yet, and so I'm very skeptical that
we're going to get one. But I am not here to suggest that it's
logically impossible. I'm just very concerned that we
understand the potential to do a lot of harm to the internet.
When people say ``destroy the internet,'' that sounds like an
on-off switch, but that's not how the internet works.
We can instead drift in the direction of losing the
advantages of the internet--losing the interactivity, losing
the ability of users to have the experience that they need
online--in favor of a centrally controlled system. And that is
the thing that I'm mostly concerned about.
Chair Blumenthal. So take the Kids Online Safety Act, which
simply requires these tech platforms to enable and inform
parents and children so they can disconnect from the
algorithms. And if the platforms do something--you know, let's
use a non-legal term--something really outrageous, and they
violate a basic duty of care, which under our common law is
centuries old, they can be held liable.
And as Senator Hawley said so well, people get a day in
court. That's fundamental to our system. I don't understand why
there would inevitably be harms as a result of that kind of
change.
Mr. Sullivan. The concern that I have is that, you know, in
the United States, it's easy to initiate a lawsuit and it's
expensive and complicated to defend against one. So the very,
very large players, the incumbents that we have today--you
know, the richest corporations in the history of
capitalism--have the resources to do this.
But if you are, say, a community website or a church
website, and you allow discussions on there, and somebody comes
on and starts doing terrible things, you're going to end up
with the exact same liability, and that will gradually erode
the ability of the internet to connect people to one another.
That's what I am concerned about.
You know, I mean, I'm not carrying any water for a giant
tech corporation. I don't work for one, and I can't really, you
know, influence their direction. But my point is that the way
230 works right now, it protects all of the interaction on the
internet. And if we lose that, we will most certainly lose the
internet. We'll still have something we call the internet for
sure, but it will not be the thing that allows people to reach
out and connect to one another. For all of these terrible
harms, all of these terrible things that happen online, there
are corresponding examples of people getting help online.
Chair Blumenthal. I appreciate your concern about the
community websites, but they're not the ones driving suicidal
ideation or bullying or eating disorders to kids. And I
understand that Section 230 dates from a time when the internet
was young and small. Nobody's forever young, and these
companies are no longer small.
They're among the most resourced of any companies in the
history of capitalism. And for them of all companies to have
this free pass, as Justice Kagan called it, seems to me not
only ironic, but unacceptable. And again, what's at stake here
ultimately are the dollars and cents that these companies are
able to make by elevating content.
And I sort of am reminded of Big Tobacco, which said, ``Oh,
we're not interested in kids. We don't advertise to them. We
don't promote our products to children.'' And of course, their
files, like Facebook's files, showed just the opposite. They
knew what they were doing in order to raise their profits, and
they put profits over those children.
So I think, again, I'm hoping, not talking to you
personally, but to the internet world out there and the tech
companies that have the resources and responsibility, they will
be a constructive part of this conversation. Thank you.
Senator Hawley. You know, isn't the best--let me just pose
this question to the panel. Maybe I should start with you,
Professor Franks. Isn't the best way to address the many abuses
that we're seeing by the Big Tech companies--and we've talked
about some of them today; with CSAM, Professor Farid, you
mentioned CSAM, that's got to be one of the leading ones. I'm a
father of three children, all of them very small--my oldest is
10--and I worry about this every day as they get old enough to
want to be on the internet.
There are other abuses: the videos that promote suicide,
the videos that promote violence, and we could go on and on.
Isn't the best way to deal with that just to allow people to
get into court and hold these companies accountable? Here's
what I've learned in my short time in the Senate: we can write
regulations, and we can give the various regulatory agencies,
whether it's the FTC or others, the power to enforce them.
But my experience, my observation, is that the Big Tech
companies tend to own the regulators at the end of the day. I
mean no offense to any of the regulators who are watching this,
but, you know, you know I'm right. At the end of the day, it's
a revolving door. They go to work for the Big Tech companies;
they come out of that employment and go into the Government.
And it's just amazing how the regulators always seem to end up
on the side of tech.
And for that matter, even when they do fine tech, even if
it's a big fine--Meta got fined, I think, a billion dollars a
couple of years ago--they didn't care. It didn't change
anything. That's nothing to them. Their revenues are massive.
Their profits are massive. But what strikes fear into their
hearts is if you say, ``Oh, but we'll allow plaintiffs to go to
court.''
Take the Big Tobacco example. What finally changed the
behavior of Big Tobacco? Lawsuits. Normal people got into
court; class action suits. So isn't that really what we're--
just to simplify this, isn't that what we're really talking
about today?
I mean, the thing that we ought to be doing is figuring out
a way, and you proposed a way, Professor Franks--I was just
reviewing your written testimony here a second ago, with the
changes you would make to the statute. But the gist of that is
to create a system and a standard that is equitable in the
sense that it's the same across the board for everybody. You
don't single out one particular piece of conduct; you would
just change the standard. But the point of it is, people would
be able to use this standard to get into court, to have their
day in court, and to hold these companies accountable. Is that
fair to say? Is that too simplified?
Professor Franks. I think that is fair to say. As you're
pointing out, litigation is one of the most powerful ways to
change an industry, and it's not just because of the ultimate
outcome of those cases, but also because of the discovery in
those cases. Instead of having to wait for whistleblowers or
wait for journalists, what we get to see is the documents
themselves--internal documents about what you knew, when you
knew it, and what you were doing.
And so, I think that exactly for this reason, we have to
interpret Section 230 as not providing some kind of
supercharged immunity that no other industry gets, but
actually, yes, allow people who have been harmed to get into
court and make their claim. They may not prevail; they might.
But in any event, there will also be some public service in the
discovery process, which shows us what these companies are
doing and to the tune of how much money.
Because a lot is being said about the distinctions between
the big corporations and the little ones--but how did the big
corporations get so big? Because they didn't get sued. And so,
if we care about that kind of monopolization, if we care about
that kind of disproportionate influence, what will benefit the
entire market is actually letting those companies be sued if
they have caused harm.
Senator Hawley. Yes, I couldn't agree more. And with all
due respect to you, Mr. Sullivan, I know you have an obligation
to represent the people for whom you work, but I would just say
it's pretty hard to argue that the social media landscape, the
social media industry right now, for example, is a good example
of a competitive industry.
It's not particularly competitive at all. It's controlled
overwhelmingly by one or two players. It is the very
quintessence of monopoly. I mean, you have to go back over a
century in this country's history to find similar
monopolization. And to Professor Franks' point, I think you can
make a very strong argument that Section 230 and the incredible
immunity that it has provided for a handful of players has
contributed to this monopolization. It is in effect a massive
Government subsidy to the tune of billions of dollars a year.
So I'll just say that, listen, as a conservative
Republican--I mean, I want to be clear about this: I am a
conservative Republican--I believe in markets. I'm skeptical of
massive regulatory agencies, but one of the reasons I'm
skeptical is I just see them get captured time after time after
time. But I believe in the right of people to be heard and to
be vindicated and to have their day in court.
And I think the best way you protect the little guy, and
give him or her the power to take on the big guy, is to allow
them into court, let them get discovery, let them hire a tort
lawyer, let them bring their suits. And you're right, Professor
Franks, maybe they win, maybe they don't, but that's justice,
right? It'll be a fair, even-handed standard.
The last thing I would say is that I think that's actually
much closer to what Section 230, when Congress wrote it, was
meant to be. If you look at the language of 230, you know, it's
been interpreted by courts to provide this super immunity, as
you were saying, Professor Franks. I think it's very
arguable--and this is the argument I made in my amicus brief in
the Google case--that, listen, what it really was meant to do
is preserve distributor liability.
There's a baseline at the common law of distributor
liability that says a distributor who doesn't originate the
speech, but merely hosts and distributes it, can't be liable
for somebody else's speech and shouldn't be. I think we all
agree on that. Only if they promote speech that they know is
unlawful, or should have known is unlawful, regardless of the
nature of the speech--which gets to your point, Professor
Franks, you know, whatever category you want.
If it's unlawful and they know it, or they should have
known it, then, under traditional distributor liability, then
and only then can they be liable. But what has happened is, the
courts have obviously swept that away completely now too, and
230 bars even that form of liability. Surely we can
agree--whether it's the standard you propose, Professor Franks,
which I think is actually pretty close to traditional
distributor liability, or another--that we could find a way to
allow people to have their basic claims vindicated, to hold
these companies accountable when they are actively promoting
harmful content that they know or should know is harmful.
And I would just submit that's the best way forward here,
and I look forward to working with the Chairman as we continue
to gather information and try to put forward proposals that
will do that in a meaningful way. Thank you, Mr. Chairman.
Chair Blumenthal. Thanks a lot, Senator Hawley. I have
another question or two relating to AI. We haven't really
talked about it specifically in much detail, but obviously
Americans are learning for the first time, with growing
fascination and some dread, about ChatGPT and Microsoft Bing
passing law school exams and making threats to users--both
fascinating and pretty unsettling in some instances.
And some of what's unsettling involves potential
discrimination. There are studies from Northeastern University,
Harvard, and the nonprofit Upturn finding that some of
Facebook's advertising discriminates on the basis of gender,
race, and other categories. Maybe I could ask the panel,
beginning with you, Professor Franks, whether those threats are
distinct or whether they're part of this algorithmic threat
that we see generally involving some of these tech platforms.
Professor Franks. With the caveat that I'm not an AI expert
by any means, I would reiterate my position: my concern is with
approaches that try to parse whether something is an algorithm,
or whether something is artificial intelligence, or whether
something is troubling from a different perspective. I would
rather the conversation be about, again, the fundamental flaws
in the incentive structure that Section 230 promotes, rather
than trying to figure out whether one particular category or
another is presenting a different kind of harm.
I think the better approach is to look at that fundamental
incentive structure and ensure that these companies are not
getting unqualified immunity.
Chair Blumenthal. Professor Farid.
Professor Farid. There's a lot that can be said on this
topic, Senator Blumenthal. I'll say just two things here. One
is that we are surely, and quickly, turning over important
decision-making to algorithms, whether they are traditional AI
or machine learning or whatever it is.
So, for example, the courts are using algorithms to
determine if somebody is likely to commit a crime in the future
and perhaps deny them bail. The financial institutions have,
for decades, used algorithms to determine whether you get a
mortgage or a small business loan.
Medical institutions, insurance companies, and employers
use them too: more likely than not, the young people sitting
behind you, when they go to apply for a job, will sit in front
of a camera and have an AI system determine whether they should
even get an interview. And I don't think it's going to surprise
you to learn that these algorithms have some problems. They are
biased. They are biased against women. They are biased against
people of color.
And we are unleashing these black-box systems that we don't
understand--we don't understand their accuracy, we don't
understand their false alarm rates--and that should alarm all
of us. And I haven't gotten to the ChatGPTs of the world yet,
or the deepfakes yet.
So the second thing I want to say is that there's going to
be something interesting here around 230 and generative AI, as
we call it. Generative AI is ChatGPT, OpenAI's DALL-E image
synthesis, and deepfakes.
Let's say we concede the point that platforms get immunity
for third-party content. But if the platforms start generating
content with AI systems, as Microsoft is doing, as Google is
doing, and as other platforms are doing, there's no immunity.
This is not third-party content.
If your ChatGPT convinces somebody to go off and harm
themselves or somebody else, you don't have immunity. This is
your content. And so I think the platforms have to think very
carefully here. More broadly, we need to think very carefully
about how we are deploying AI very fast and very aggressively.
The Europeans have been moving quite aggressively on this.
There is legislation being worked on in Brussels to try to
think about how we can regulate AI while encouraging
innovation, but also mitigating some of the harms that we know
are coming and have already come to our citizens.
Chair Blumenthal. Thank you very much. Important points.
Ms. Bennett.
Ms. Bennett. So your question is whether there is anything
fundamentally different, with respect to 230, about these
different kinds of technologies. With the same caveat Professor
Franks gave--I'm not an AI expert--you know, I think the answer
is no. And the fundamental distinction here is: are we trying
to impose liability on these companies for something someone
else said--so, because Facebook allowed somebody to post
something?
Or is the harm really caused by something the company
itself is doing? You know, there have been claims against
Facebook, for example, that it differentially delivers ads on
insurance and housing and things like that, by race or by
gender or by age.
And the problem there isn't the housing ad. The housing ad
is fine. The problem is the distribution by race or by age or
by gender. The harm is being caused by what the platform is
doing, not by the content.
And I think that's true with ChatGPT, too. You know,
similar principles. The harm isn't what people are putting into
ChatGPT; it's what ChatGPT might spit out. And there again,
it's the conduct of the platform itself. And so, I think the
principles apply no matter what the technology is: this
distinction between content that somebody else put on the
internet and what the platform itself has done.
Chair Blumenthal. Mr. Sullivan.
Mr. Sullivan. I think that's broadly right. More
importantly, I don't really think that AI is fundamentally a
part of the internet. It's just a thing that happens to use it
a lot of the time. But the reality under those circumstances is
that it's another piece of content. Somebody else has made it.
And so, for 230 purposes, I don't think it's part of the
conversation.
Chair Blumenthal. Professor.
Professor Schnapper. I would just add that it's my
understanding that, to some degree, AI is in place now. That
is, these algorithms are constantly being tweaked to be more
effective, and some of that is done by software engineers, but
some of it is machine learning. As the software discovers what
works and what doesn't, it changes what it does. That's been
going on for some time.
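A minimal, hypothetical sketch of the self-tweaking Professor
Schnapper describes--not any platform's real system, just the
general pattern of an engagement signal updating scores with no
engineer in the loop:

    # Toy illustration of machine-learning "self-tweaking";
    # invented for illustration, not any platform's actual system.
    from collections import defaultdict

    scores = defaultdict(float)  # learned per-topic engagement scores

    def record_feedback(topic, clicked, learning_rate=0.1):
        # Nudge the topic's score toward 1 if the user engaged,
        # toward 0 if not. Over many impressions the system
        # "discovers what works" without an engineer changing a
        # line of code.
        target = 1.0 if clicked else 0.0
        scores[topic] += learning_rate * (target - scores[topic])

    def next_recommendation(candidates):
        # Serve whichever candidate topic has the highest learned
        # score.
        return max(candidates, key=lambda t: scores[t])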
Chair Blumenthal. Thank you. Well, I think that last
question shows some of the complexity here. I'm not sure we all
agree that AI is totally distinguishable. I guess it depends on
how you define AI and algorithms. But I do think that we can
make a start on reforming Section 230 without waiting for a
comprehensive or precise definition of AI.
And I want to thank this panel. It's been very, very
informative and enlightening and very, very helpful. We've had
a good turnout and many of you have come from across the
country. Really appreciate it. And the record is going to be
held open for 1 week in case there are any written statements
or questions from Members of the Subcommittee.
I really do thank you and we are going to be back in touch
with you, I'm sure, as we proceed, but have no doubt we are
moving forward. I think the bipartisan showing here and the
bipartisan unanimity that we need change is probably the
biggest takeaway. And I think we are finally at a point where
we could well see action.
Can't predict it with certainty. Some of it will depend on
the cooperation from the tech platforms and social media
companies that have a stake in these issues. But I'm hoping
they will be constructive and helpful. And you have certainly
been all of that today. Thank you so much. This hearing is now
adjourned.
[Whereupon, at 3:51 p.m., the hearing was adjourned.]
[Additional material submitted for the record follows.]
A P P E N D I X
Additional Material Submitted for the Record
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
[all]