[Senate Hearing 117-819]
[From the U.S. Government Publishing Office]
S. Hrg. 117-819
PROTECTING KIDS ONLINE: INSTAGRAM AND REFORMS FOR YOUNG USERS
=======================================================================
HEARING
before the
SUBCOMMITTEE ON CONSUMER PROTECTION,
PRODUCT SAFETY, AND DATA SECURITY
of the
COMMITTEE ON COMMERCE,
SCIENCE, AND TRANSPORTATION
UNITED STATES SENATE
ONE HUNDRED SEVENTEENTH CONGRESS
FIRST SESSION
__________
DECEMBER 8, 2021
__________
Printed for the use of the Committee on Commerce, Science, and
Transportation
[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]
Available online: http://www.govinfo.gov
______
U.S. GOVERNMENT PUBLISHING OFFICE
54-588 PDF WASHINGTON : 2024
SENATE COMMITTEE ON COMMERCE, SCIENCE, AND TRANSPORTATION
ONE HUNDRED SEVENTEENTH CONGRESS
FIRST SESSION
MARIA CANTWELL, Washington, Chair
AMY KLOBUCHAR, Minnesota ROGER WICKER, Mississippi, Ranking
RICHARD BLUMENTHAL, Connecticut JOHN THUNE, South Dakota
BRIAN SCHATZ, Hawaii ROY BLUNT, Missouri
EDWARD MARKEY, Massachusetts TED CRUZ, Texas
GARY PETERS, Michigan DEB FISCHER, Nebraska
TAMMY BALDWIN, Wisconsin JERRY MORAN, Kansas
TAMMY DUCKWORTH, Illinois DAN SULLIVAN, Alaska
JON TESTER, Montana MARSHA BLACKBURN, Tennessee
KYRSTEN SINEMA, Arizona TODD YOUNG, Indiana
JACKY ROSEN, Nevada MIKE LEE, Utah
BEN RAY LUJAN, New Mexico RON JOHNSON, Wisconsin
JOHN HICKENLOOPER, Colorado SHELLEY MOORE CAPITO, West Virginia
RAPHAEL WARNOCK, Georgia RICK SCOTT, Florida
CYNTHIA LUMMIS, Wyoming
Melissa Porter, Deputy Staff Director
George Greenwell, Policy Coordinator and Security Manager
John Keast, Republican Staff Director
Crystal Tully, Republican Deputy Staff Director
Steven Wall, General Counsel
------
SUBCOMMITTEE ON CONSUMER PROTECTION, PRODUCT SAFETY,
AND DATA SECURITY
RICHARD BLUMENTHAL, Connecticut, Chair MARSHA BLACKBURN, Tennessee, Ranking
AMY KLOBUCHAR, Minnesota JOHN THUNE, South Dakota
BRIAN SCHATZ, Hawaii ROY BLUNT, Missouri
EDWARD MARKEY, Massachusetts JERRY MORAN, Kansas
TAMMY BALDWIN, Wisconsin MIKE LEE, Utah
BEN RAY LUJAN, New Mexico TODD YOUNG, Indiana
C O N T E N T S
----------
Page
Hearing held on December 8, 2021................................. 1
Statement of Senator Blumenthal.................................. 1
Statement of Senator Blackburn................................... 4
Statement of Senator Klobuchar................................... 16
Statement of Senator Markey...................................... 19
Statement of Senator Baldwin..................................... 21
Statement of Senator Thune....................................... 23
Statement of Senator Lujan....................................... 25
Statement of Senator Lee......................................... 27
Statement of Senator Sullivan.................................... 30
Statement of Senator Young....................................... 32
Statement of Senator Lummis...................................... 34
Statement of Senator Cantwell.................................... 36
Statement of Senator Cruz........................................ 39
Witnesses
Adam Mosseri, Head of Instagram, Meta Platforms Inc.............. 6
Prepared statement........................................... 7
Appendix
Article entitled, ``Hiding in Plain Sight: Exposure of Adolescent
White Males in Appalachia to Harmful Content on Social Media''
by Dr. Joel Beeson, Professor, Reed College of Media, West
Virginia University............................................ 49
Response to written questions submitted to Adam Mosseri by:
Hon. Maria Cantwell.......................................... 55
Hon. Richard Blumenthal...................................... 71
Hon. Amy Klobuchar........................................... 76
PROTECTING KIDS ONLINE: INSTAGRAM AND REFORMS FOR YOUNG USERS
----------
WEDNESDAY, DECEMBER 8, 2021
U.S. Senate,
Subcommittee on Consumer Protection, Product
Safety, and Data Security,
Committee on Commerce, Science, and Transportation,
Washington, DC.
The Subcommittee met, pursuant to notice, at 2:40 p.m., in
room SR-253, Russell Senate Office Building, Hon. Richard
Blumenthal, Chairman of the Subcommittee, presiding.
Present: Senators Blumenthal [presiding], Cantwell,
Klobuchar, Schatz, Markey, Baldwin, Lujan, Thune, Cruz,
Sullivan, Blackburn, Young, Lee, and Lummis.
OPENING STATEMENT OF HON. RICHARD BLUMENTHAL,
U.S. SENATOR FROM CONNECTICUT
Senator Blumenthal. [Technical problems.]--from social
media to children and teens on social media. We really
appreciate your being here, Mr. Mosseri. Your response to our
invitation is very welcomed. I want to thank you and your team
for your cooperation, and I want to thank the Ranking Member,
Senator Blackburn, for being such a really close partner in
this work, as well as our Chairwoman Maria Cantwell and our
Ranking Member Roger Wicker for their support as well and all
the members of our committee for being so engaged on this
topic.
As a note to start, I understand Mr. Mosseri has a hard
stop at five, so I am going to be strict on the 5-minute time
limit. I know everybody thinks of me as a very nice guy, but I
am going to be ruthless, at least attempting to be ruthless as
best any Senator can be with his colleagues.
In this series of hearings we have heard some pretty
powerful and compelling evidence about the dangers of big tech
to children's health, well-being, and futures. Our Nation is in
the midst of a teen mental health crisis. Social media didn't
create it, but it certainly fanned the flames and fueled it. And if anybody has any doubts about the
potential harmful effects of social media, the surgeon general
yesterday issued a powerful report about the implications of
social media, as well as video gaming and other technologies on
teen mental health.
And that is part of the reason we are here. The hearings
have shown that social media, in particular big tech actually
fans those flames with addictive products and sophisticated
algorithms that can exploit and profit from children's
insecurities and anxieties. And our mission now is to do
something about it.
We are here to do more than shake fists. We really are
seeking solutions. And we welcome the voices and the vision of
big tech itself in that effort. I believe that the time for
self-policing and self-regulation is over. Some of the big tech
companies have said, trust us. That seems to be what Instagram
is saying in your testimony. But self-policing depends on
trust.
The trust is gone. What we need now is independent
researchers, objective overseers not chosen by big tech but
from outside, and strong, vigorous enforcement of standards
that stop the destructive, toxic content that now too often is
driven to kids and takes them down rabbit holes to dark places.
The day before this hearing, Instagram announced a set of
proposals. These simple time management and parental oversight
tools should have, they could have, been announced years ago.
They weren't.
And in fact, these changes fall way short of what we need,
in my view. Many of them are still in testing, months away.
Rollouts will be done at some point in the future, we don't
know exactly when, and unfortunately, these announced changes
leave parents and kids with no transparency into the black box
algorithms. The 600-pound gorillas in those black boxes that
drive that destructive and addictive content to children and
teens. No effective warning or notice to parents when their
children are spiraling into eating disorders, bullying, or
self-harm.
Nothing more than the bare minimum controls for parents.
And, of course, no real accountability to assure parents and
kids that these safeguards will work. I am troubled by the
lack of answers on Instagram Kids. Once again, this pause looks
more like a public relations tactic brought on by our hearings,
just as these announced changes seem to be brought on by these
proceedings announced just hours before your testimony, and we
need real serious review of those changes.
The magnitude of these problems requires bold and broad
solutions and accountability, which has been lacking so far.
Facebook's own researchers have been warning management,
including yourself, Mr. Mosseri, for years about Instagram's
harmful impacts on teens' mental health and wellbeing, and the
whistleblower who sat exactly where you are told us about those
documents, about the research, the studies, which showed that
Facebook knew, it did the research, had the studies, but it
continued to profit from the destructive content because it
meant more eyeballs, more advertising, more dollars.
Given those warnings, it seems inexcusable that Facebook
waited a decade to begin, and only to begin, figuring out that
Instagram needed parental controls. In the past 2 months, this
subcommittee has heard horrifying stories from countless
parents whose lives and their children's lives have been
changed forever. One father from Connecticut wrote to me about
his daughter who developed severe anxiety in high school
because of constant pressure from Instagram.
That pressure became so intense, following her home from
school, following her everywhere she went, following her into
her bedroom in the evening that she attempted suicide.
Fortunately, her parents stepped in and sought help and found a
recovery program, but the experience continues to haunt her and
her family. Facebook's researchers call this fall into that
kind of dark rabbit hole a perfect storm, that is the quote,
``perfect storm,'' created by its own algorithms that exacerbate
downward spirals harmful to teens.
Again, Facebook knows about the harm, it has done the
research, the studies, the surveys repeatedly, it knows the
destructive consequences of the algorithms and designs, it
knows teens struggle with addiction and depression on
Instagram, but that data has been hidden like the algorithms
themselves. Just yesterday, that surgeon general's report
provided powerful documentation on how social media can fan
those flames and fuel the fires of the mental health crisis
that we face among teens, and it signals that something is
terribly wrong. What really stuns me is the lack of action.
In fact, just within the last 2 months: Two months ago,
this subcommittee heard testimony from Facebook's global head
of safety, Ms. Antigone Davis. At that time, I showed her the
pro-eating disorder content rampant on Instagram, and I demonstrated
through an experiment how its algorithms will flood a teen with
triggering and toxic messages in just hours after we created an
account.
This was glorification of being dangerously underweight,
tips on skipping meals, images we could not in good conscience
show in this room. It has been 2 months, so we have repeated
our experiment. On Monday, we created another fake account for
a teenager and followed a few accounts promoting eating
disorders. And again, within an hour, all of our
recommendations promoted pro-anorexia and eating disorder
content. Two months ago, the global head of public safety for
Facebook was put on notice by this subcommittee. Nothing has
changed. It is all still happening. And in the meantime, more
lives have been broken, real lives, with real families and
futures, and you hear from them yourself.
We all know that if Facebook saw a significant threat to
its growth or ad revenue, it wouldn't wait 2 months to take
action. So why does it take months for Facebook to act when our
kids face danger, when time is not on our side? Time is not on
our side. So no wonder parents are worried. In fact, parents
are furious. They don't trust Instagram, Google, TikTok and all
of their big tech peers.
And by the way, this is not an issue limited to Instagram
or Facebook. Parents are asking, what is Congress doing to
protect our kids? And the resounding bipartisan message from
this committee is legislation is coming. We can't rely on trust
anymore, we can't rely on self-policing. It is what parents,
and our children are demanding. Senator Blackburn and I are
listening to them, as are other members of the committee. We are
working together.
Your proposal for an industry body says to parents yet again, trust us, we will do it ourselves. But self-regulation
relies on that trust that has been squandered. We need to make
sure that the responsibility is on big tech to put a safe
product on the market. You can't be allowed to conceal when
products are harming kids, so the first imperative is
transparency. We need real transparency into these 800-pound
gorilla black box algorithms and addictive designs, and
disclosure has to include independent qualified researchers who
will then tell the story to the public. We need to update our
children's privacy laws.
Congress should pass the bipartisan Children and Teens'
Online Privacy Protection Act authored by Senator Markey, who
is here today. I am proud to be working with him on updating
and expanding it. Parents and children need more power and more
effective tools to protect themselves on the platform.
And that is why Senator Blackburn and I are working on a
framework, and we have made good progress to enable that
protection. There really should be a duty of care. The United Kingdom has imposed it. It is part of the law there. Why not
here? That ought to be part of the framework of legislation
that we are considering. Section 230 reform. You make reference
to it in your testimony.
The days of absolute broad unique immunity for big tech are
over. And finally, enforcement. State authorities, Federal
authorities, law enforcement has to be rigorous and strong. So
I hope that we will begin the effort of working together, but
one way or the other, this committee will move forward. And
again, I thank you for being here. I thank all of my colleagues
for attending. And I ask for remarks by the Ranking Member
Senator Blackburn.
STATEMENT OF HON. MARSHA BLACKBURN,
U.S. SENATOR FROM TENNESSEE
Senator Blackburn. And thank you, Senator Blumenthal. And
welcome to everyone. We are appreciative that you are here
today, Mr. Mosseri. We are grateful for your time and for your
testimony. I do want to thank Senator Blumenthal and his team
for the work. This is the fifth hearing that we have held
dealing with the issues around big tech and the invasions of
privacy, the lack of data security, the need for Section 230
reforms, and looking very directly at social media platforms,
and the effect--the negative and adverse effect--that they are
having on our children.
I will tell you that today I am just a little bit
frustrated. I am frustrated because this is now the fourth time
in the past 2 years that we have spoken with someone from Meta,
as you are now calling yourselves, and I feel like the
conversation continues to repeat itself ad nauseam.
And when I go back home to Tennessee, I know that the
people there, lots of moms and dads and teachers and
pediatricians, they share this frustration because they
continue to hear from you that change is coming, that things
are going to be different, that there are going to be more
tools in the toolbox, that kids are going to be safer online,
that privacy is going to be protected, and that data is going
to be secure. But guess what? Nothing changes. Nothing. The
chairman just talked about what happened with Ms. Davis when
she came in and how we pointed all of this out specifically,
what we had found. And yet yesterday, what happened? The exact
same thing.
So I hope that you appreciate the frustration that the
American public feels, that they appreciate what the Internet
can do for them in bringing the world closer, but the
applications that you are pushing forward, the social media,
the addictive nature, the way this affects children, there is
such a frustration that you turn a blind eye toward taking
responsibility and accepting accountability for your platform,
how you are structured, and how you use the data on these
children.
Yesterday, at 3 a.m., which is midnight in the Silicon
Valley, you released a list of product updates that you said
would raise the standard for protecting teens and supporting
parents online. And I am not sure what hours you all keep in
California, but where I am from, the middle of the night is
when you drop news that you don't want people to see.
And maybe you thought that doing it in this manner would
keep members of this subcommittee from seeing it right away and
from raising concerns. Because while I am sure you know that we
fully share the goal of protecting kids and teens online, what
we aren't sure about is how the half measures you have
introduced are going to get us to the point where we need to be
to truly protect teens and young adults online. For example, we
know that social media is an integral part of teens' daily
lives.
According to the Mayo Clinic, 97 percent of teens between
ages 13 and 17 use a social media platform, and 45 percent say
they are online almost constantly. So while telling teens to
take a break might seem helpful on the face of things, it's
probably not going to get most teenagers to stop doing what
they are doing and take a break. Educational tools for parents
can be helpful, but frankly, I am more concerned about the
things we know kids and teens are hiding from their parents.
We know that Facebook and Instagram have encouraged teens
to use secondary accounts and told them to be authentic, and we
all remember what it was like to be a teenager. So while
parents might gain some insight into what their teens do on
their main accounts, what do they do about the accounts they
don't even know exist, the ones that Instagram is encouraging
them to create?
And Instagram announced in July that it would default
all teens onto private accounts when they sign up for the site.
Yet just yesterday, my team created an account as a 15-year-old
girl and it defaulted to public. So, while Instagram is touting
all these safety measures, they aren't even making sure that
these safety measures are in effect. For me, this is a case of
too little, too late, because now there is bipartisan momentum
both here and in the House to tackle these problems we are
seeing with big tech.
As Senator Blumenthal said, we are working on children's
privacy, online privacy, data security, and Section 230
reforms. This is the appropriate time to pass a national
consumer privacy bill, as well as kid-specific legislation to
keep minors safe online. We also need to give serious thought
to how companies like Facebook and Instagram continue to hide
behind Section 230's liability shield when it comes to content like human trafficking, sex trafficking, and drug trafficking, despite Congress speaking clearly to this issue when we
passed FOSTA and SESTA a few years ago. Mr. Mosseri, there is a
lot of work for us to do to improve the online experience and
to protect our children and our grandchildren. I think it is
best if we do this together, and I look forward to hearing your
ideas and your testimony today. Thank you for your appearance.
Senator Blumenthal. Thanks, Senator Blackburn. And I am
pleased to introduce Adam Mosseri. He has spent over 11 years at Meta and oversees all functions of the Instagram app,
including engineering, product management, and operations. Mr.
Mosseri, the floor is yours.
STATEMENT OF ADAM MOSSERI, HEAD OF INSTAGRAM,
META PLATFORMS INC.
Mr. Mosseri. Right. Apologies--oh, my apologies. Thank you,
Senator. Chairman Blumenthal, Ranking Member Blackburn, members
of the subcommittee, I am Adam Mosseri and I have served as the
head of Instagram since 2018. And over the last few months, the
subcommittee has held a number of hearings on the safety and
well-being of young people online. This is a critically
important topic, as you said in your opening statement, and it
is something that we think about and work on every day at
Instagram.
The Internet has changed how we all communicate. It has
changed how we express ourselves. It has changed how we stay
connected to the people that we care about. It has also changed
what it's like to be a teenager. Teenagers have always spent
time with their friends, developed new interests, and explored
their identities. Today, they are doing those things on
platforms like Instagram, YouTube, TikTok, and Snapchat.
I firmly believe that Instagram, and the Internet more broadly, can be a positive force in young people's lives. I am
inspired every day by teens on Instagram, and I am proud that
our platform is a place where they can spend time with the
people that they care about, where they can start incredible
movements, where they can find new interests, or they can even
turn a passion into a business. I also know that sometimes
young people can come to Instagram dealing with difficult
things in their lives. I believe that Instagram can help in
those critical moments.
That is one of the things that our research has shown, and
to me, this is the most important work that we can do, taking
on complex issues like bullying and social comparison and
making changes. Now, I recognize that many in this room have
deep reservations about our company. But I want to assure you
that we do have the same goal. We all want teens to be safe
online.
The Internet isn't going away, and I believe there is
important work that we can do together, industry and
policymakers, to raise the standards across the Internet to
better serve and protect young people. But the reality is that
keeping people safe is not just about any one company. An
external survey just last month suggested that more teens are
using TikTok and YouTube than Instagram. This is an industry
wide challenge that requires industry wide solutions and
industry wide standards. Now, we have a specific proposal.
We believe there should be an industry body that will
determine the best practices when it comes to what I think are
the three most important questions with regard to youth safety: how to verify age, how to build age-appropriate experiences, and how to build parental controls. The body
should receive input from civil society, from parents, and from
regulators. The standards need to be high and the protections
universal. I believe that companies like ours should have to
earn some of their Section 230 protections by adhering to those
standards. And we have been calling for regulation for nearly 3
years now. And from where I sit, there is no area more
important than youth safety.
That said, I understand that developing policy takes time,
so we are going to continue to push forward on the safety and
well-being of young people online. On age verification, we are
developing new technologies to address this industry wide
challenge.
We are creating a menu of options to allow people to verify
that they are old enough to use Instagram that extend beyond
simply relying on an ID card. And we are building new
technology to proactively find and remove accounts belonging to
those under the age of 13. We are also using technology to
understand if people are above or below the age of 18, so that
we can create a more age-appropriate version of Instagram for
them.
For example, adults can no longer message people under the
age of 18 that don't follow them. And as of this week, we
announced that people can no longer tag or mention teens that
don't follow them as well. We also provide tools for parents.
Parents and guardians know what is best for their teens, and we are launching Instagram's first set of parental controls in March
of next year, allowing them to see how much time their teens
spend on Instagram and to set time limits. We will also give
teens a new option to notify their parents if they report
someone, giving their parents an opportunity to talk about it
with them.
As a father of three, I care a great deal about creating an
online world that is safe for my children and that allows them
to benefit from all the amazing things the Internet has to
offer. As the head of Instagram, I recognize the gravity of my
role in making this happen not only for my kids, but for
generations to come. I am hopeful that we can work together to
reach that goal. Thank you.
[The prepared statement of Mr. Mosseri follows:]
Prepared Statement of Adam Mosseri, Head of Instagram,
Meta Platforms Inc.
I. Introduction
Chairman Blumenthal, Ranking Member Blackburn, and members of the
Subcommittee, my name is Adam Mosseri, and I have served as the Head of
Instagram since 2018. Over the last few months, this Subcommittee has
held a number of hearings about the safety and well-being of young
people online. This is a critically important topic, and it is
something that we think about--and work on--every day at Instagram.
Our mission at Instagram is to bring people closer to the people
and things they love. Our platform began a decade ago with a few
million users. Today, we proudly serve well over a billion people.
While our platform began as a simple photo-sharing app, we have evolved
to provide new ways for people to express themselves, including
Stories, Reels, and Live. Teens use our app every day to spend time
with the people they care about, explore their interests, and express
themselves. They are doing incredible things on our platform, and I
firmly believe that Instagram can be a force for good in the lives of
young people.
Much has been said recently about Instagram and its impact on young
people. As a parent and as the Head of Instagram, this is an issue I
care deeply about. It's an area our company has been focused on for
many years, and I'm proud of our work to help keep young people safe,
to support young people who are struggling, and to empower parents with
tools to help their teenagers develop healthy and safe online habits.
I hope we can work together--across industry and government--to
raise the standards across the Internet and better serve young people.
The reality is that keeping young people safe online is not just about
one company. An external survey from just last month suggested that
more U.S. teens are using TikTok and YouTube than Instagram.\1\ With
teens using multiple platforms, it is critical that we address youth
online safety as an industry challenge and develop industry-wide
solutions and standards.
---------------------------------------------------------------------------
\1\ Mike Proulx, Weekly Usage of TikTok Surpasses Instagram Among
U.S. Gen Z Youth, Forrester (Nov. 18, 2021), https://www.forrester.com/
blogs/weekly-usage-of-tiktok-surpasses-instagram-among-us-gen-z-youth/.
---------------------------------------------------------------------------
II. Keeping Young People Safe on Instagram
As Head of Instagram, I am especially focused on the safety of the
youngest people who use our services. This work includes keeping
underage users off our platform, designing age-appropriate experiences
for people ages 13 to 18, and building parental controls.
Age Verification on Instagram
Instagram is built for people 13 and older. If a child is under the
age of 13, they are not permitted on Instagram. When we learn someone
underage has created an account, we remove them. In fact, in the third
quarter of this year, we removed over 850,000 accounts on Instagram
that were unable to demonstrate that they meet our minimum age
requirement.
Understanding people's age on the Internet is a complex and
industry-wide challenge--especially considering that many young people
in the U.S. do not have a driver's license until they are 15 or 16
years old. However, we're building new technology to proactively find
and remove accounts belonging to those under 13 and to identify those
people who may be under the age of 18.
In addition to requiring people to share their date of birth when
they register and allowing anyone to report a suspected underage
account, we train our technology to identify if people are above or
below 18 using multiple signals. We look at things like wishing people
a happy birthday and the age written in those messages--for example,
``Happy 21st Bday!'' or ``Happy Quinceanera.'' This technology isn't
perfect, and we're always working to improve it, but that's why it's
important that we use it alongside many other signals to understand
people's ages.
There is more that we can do as an industry to ensure that there
are clear standards of age verification across apps. For instance, I
think it would be much more effective to solve the problem at the phone
level so that a young person using a phone has an age-appropriate
experience across any of the apps that they use on that device.
Keeping Instagram Safe
Understanding age is important so that we can create a more age-
appropriate version of Instagram for the youngest people on our
platform. We've put in place multiple protections to create safe and
age-appropriate experiences for people between the ages of 13 and 18.
Wherever we can, we want to stop young people from hearing from
adults they don't know or that they don't want to hear from. We believe
accounts that offer people more control over who can see and
to their content are the best way to prevent this from happening, and
we recently announced that everyone who is under 16 years old in the
U.S. is defaulted into what is called a private account when they join
Instagram. For young people who already have a public account on
Instagram, we are sharing a notification highlighting the benefits of a
private account and explaining how to change their privacy settings.
Private accounts let people control who sees or responds to their
content. If a young person has a private account, people have to follow
them to see their posts, Stories, and Reels, unless they choose to
allow others to re-share their content. We're also--by default--
eliminating the ability for young people to be tagged or mentioned by
others or to have their content included in Reels Remixes or Guides.
Additionally, people can't comment on their content in those places,
and they won't see the young person's content at all in places like
Explore or through hashtags.
Encouraging young people to have private accounts is important when
it comes to stopping unwanted contact from adults. But we've gone even
further to make young people's accounts difficult to find for certain
adults. We developed technology that allows us to find accounts that
have shown potentially suspicious behavior--for example, an adult
account that might already have been blocked by another young person--
and to stop those accounts from interacting with young people's
accounts. Using this technology, we don't show young people's accounts
in Explore, Reels, or `Accounts Suggested For You' to these adults. If
they find young people's accounts by searching for their usernames,
they are not able to follow them. They also are not able to see
comments from young people on other people's posts nor are they able to
leave comments on young people's posts.
Additionally, we've launched a number of tools to restrict direct
messaging between teens and adults and to prompt teens to be more
cautious about interactions in direct messaging. To protect teens from
unwanted contact from adults, we introduced a new feature that prevents
adults from sending messages to people under 18 who don't follow them.
For instance, when an adult tries to message a teen who doesn't follow
them, they receive a notification that says that sending a Direct
Message isn't an option.
In addition to preventing conversations between adults and teens
who don't follow one another, we started using prompts--or safety
notices--to encourage teens to be cautious in conversations with adults
they're already connected to. These safety notices alert young people
when an adult who has been exhibiting potentially suspicious behavior
is interacting with them. For example, if an adult is sending a large
amount of friend or message requests to people under 18, we use this
tool to alert the recipients and give them an option to end the
conversation, or block, report, or restrict the adult.
Our work to create age-appropriate experiences for teenagers on
Facebook and Instagram also includes age gating certain content,
prohibiting certain types of ads from being served to minors, and
limiting options for serving any ads to these users.
We've always had rules about the kinds of content we suggest to
people in places like the Explore tab. These rules apply to everyone,
but we're going to go a step further for young people. We're developing
a new experience that will raise the bar even higher for what we
recommend for them in Search, Explore, hashtags, and suggested
accounts. This new experience will make it harder for young people to
find potentially sensitive content on Instagram.
We're also optimistic about using nudges to point people towards
different topics. External experts have suggested that, if people are
dwelling on one topic for a while, it could be helpful to nudge them
towards other topics.\2\ \3\ That's why we're building a new experience
that will nudge people towards other topics if they've been spending
time on one topic for a while.
---------------------------------------------------------------------------
\2\ Aditya Purohit et al., Designing for Digital Detox: Making
Social Media Less Addictive with Digital Nudges, Assoc. for Computing
Machinery (Apr. 2020), https://dl.acm.org/doi/10.1145/3334480.3382810.
\3\ Christoph Schneider et al., Digital Nudging: Guiding Online
User Choices through Interface Design. Communications of the ACM (July
2018), https://cacm.acm.org/magazines/2018/7/229029-digital-nudging/
fulltext.
---------------------------------------------------------------------------
When it comes to advertising, we've long restricted certain kinds
of ads from being served to minors, and we recently limited
advertisers' options for serving ads to people under 18. Now,
advertisers can only serve ads to people under 18 based on age, gender,
and location but not interests or activity. This means that previously
available targeting options, like those based on interests or on their
activity on other apps and websites, are no longer available to
advertisers.
Supporting Teens Who May Be Struggling
In addition to making sure young people are safe on Instagram, we
believe it's important to support young people who are struggling with
mental health and well-being.
Sometimes young people come to Instagram dealing with hard things
in their lives. I believe Instagram can help many of them in those
moments. This is something that our research has suggested as well. One
of the internal studies that has been the subject of much discussion
showed that teen boys and girls who reported struggling with
loneliness, anxiety, sadness, and eating disorders were more likely to
say that Instagram made those difficult times better rather than worse.
We care deeply about the teens on Instagram, which is in part why
we research complex issues like bullying and social comparison and make
changes. We have a long track record of using research and close
collaboration with our Safety Advisory Board, Youth Advisors, and
additional experts and organizations to inform changes to our apps and
provide resources for the people who use them.
We don't allow people to post graphic suicide and self-harm
content, content that depicts methods or materials involved in suicide
and self-harm (even if it's not graphic), or fictional content that
promotes or encourages suicide or self-harm. In the third quarter of
2021, we removed 96 percent of this content before it was reported to
us.
Since 2019, we've taken steps to protect more vulnerable members of
our community from being exposed to suicide and self-harm related
content that is permissible under our policies, for example, if someone
posts about their recovery journey. We remove known suicide- and self-
harm-related posts from places where people discover new content,
including our Explore page, and we will not recommend accounts we have
identified as featuring suicide or self-injury content.
We also remove certain hashtags and accounts from appearing in
search. When someone starts typing a known hashtag or account related
to suicide and self-harm into search, we restrict these results. We
also add sensitivity screens to blur more content that isn't graphic
but could have a negative impact on someone searching.
We have a resource center \4\ developed with help from mental
health partners, and, when a post is identified as being about suicide
(either because a friend reported it or our technology detected it), a
person at Meta reviews the post. If it's about suicide, we provide
resources to the poster such as a one-click link to the Crisis Text
Line. Additionally, whoever reported the post also receives resources
and information about how to help the person in distress.
---------------------------------------------------------------------------
\4\ Suicide Prevention, https://www.facebook.com/safety/wellbeing/
suicideprevention.
---------------------------------------------------------------------------
Similarly, we don't allow content that promotes or encourages
eating disorders on our platforms. We use technology and reports from
our community to find and remove this content as quickly as we can, and
we're always working to improve. We follow expert advice from academics
and mental health organizations, like the National Eating Disorder
Association (``NEDA''), to strike the difficult balance between
allowing people to share their mental health experiences while
protecting them from potentially harmful content.
We've made a number of changes to support those struggling with
eating disorders. When someone searches for or posts content related to
eating disorders or body image issues, they'll see a pop-up with tips
and an easy way to connect to organizations like NEDA in the US.
We also introduced a dedicated reporting option for eating disorder
content. People have always been able to report content related to
eating disorders, but, until recently, this was combined with the
option to report suicide and self-harm-related content, because they
are part of one policy--but now people will see a separate dedicated
option for eating disorder content.
We also worked with the JED Foundation to create expert- and
research-backed educational resources for teens on how to navigate
experiences like negative social comparison.\5\
---------------------------------------------------------------------------
\5\ More information on this work is available here: https://pressuretobeperfect.jedfoundation.org/.
---------------------------------------------------------------------------
Lastly, we don't allow people to bully or harass other people on
Instagram and have rules in place that prohibit this type of content.
We've also built tools that help prevent bullying from happening in the
first place and empower people to manage their accounts so they never
have to see it.
We launched Restrict in 2019, which allows people to protect
themselves from bullying without the fear of retaliation.\6\ We also
created comment warnings when people try to post potentially offensive
comments. So far, we've found that, about 50 percent of the time,
people edited or deleted their comments based on these warnings.
---------------------------------------------------------------------------
\6\ Introducing the ``Restrict'' Feature to Protect Against
Bullying, Instagram Blog
(Oct. 2, 2019), https://about.instagram.com/blog/announcements/stand-
up-against-bullying-with-restrict.
---------------------------------------------------------------------------
We recently announced a new tool called `Limits' that lets people
automatically hide comments and direct message requests from people who
don't follow them, or who only recently followed them. We developed
this feature because we heard that creators and public figures
sometimes experience sudden spikes of comments and message requests
from people they don't know. In many cases, this is an outpouring of
support, but sometimes it can also mean an influx of unwanted comments
or messages. Now, if you're going through that--or think you may be
about to--you can turn on Limits and protect yourself.
We also recently launched Hidden Words, which automatically filters
message requests containing offensive words, phrases, and emojis into a
separate inbox so people never have to see them. Because messages are
private conversations, we don't proactively look for hate speech or
bullying the same way we do elsewhere on Instagram, so Hidden Words
allows people to control what they see and receive in messages and
protect themselves from abuse. In addition, all accounts on Instagram
have the option to switch off messages from people they don't follow.
This means people never have to receive a message from anyone they
don't know.
These are just a few examples of the tools we developed to protect
people from bullying and harassment. We have numerous other tools
including comment controls, blocking, and managing who can comment on
your posts and who can tag and mention you.
Giving Teens Tools to Control their Experience
We want to give people on our platform--especially teenagers--tools
to help them manage their experiences in the ways that they want and
need, including the time they spend. We have built time management
tools including Daily Limit, which lets people know when they've
reached the total amount of time they want to spend on Instagram each
day; `You're All Caught Up,' which notifies people when they've caught
up with new content on their feed; and controls to mute notifications.
This week, we launched `Take A Break' to go even further and
empower people to make informed decisions about how they're spending
their time on Instagram. We'll show reminders suggesting that people
close Instagram if they've been scrolling for a certain amount of time,
and we'll show them expert-backed tips to help them reflect and reset.
We want to make sure young people are aware of this feature, so we'll
show them notifications suggesting they turn the reminders on.
Also this week, we began testing a new activity center, a central
place for people to see and manage their information on Instagram. For
the first time, people will be able to bulk delete content they've
posted like photos and videos as well as their previous likes and
comments. While available to everyone, this tool will help young people
more fully understand what information they've shared on Instagram and
what is visible to others and give them an easy way to manage their
digital footprint.
Prioritizing and Expanding Parental Controls
We want parents to have the information to help their teens have a
safe and positive experience on Instagram. That's why in March we're
launching Instagram's first set of controls for parents and guardians,
allowing them to see what their teens are up to on Instagram and manage
things like the time they spend in our app. These new features, which
parents and teens can opt into, will give parents tools to meaningfully
shape their teen's experience.
In the US, we've also collaborated with The Child Mind Institute
and ConnectSafely to publish a new Parents Guide that includes the
latest safety tools and privacy settings as well as a list of tips and
conversation starters to help parents navigate discussions with their
teens about their online presence.\7\
---------------------------------------------------------------------------
\7\ Instagram Teen Safety for Parents, https://about.instagram.com/
community/parents#guide.
---------------------------------------------------------------------------
III. Using Research to Improve Instagram
A lot of focus in recent weeks has been about internal research. As
our Head of Research Pratiti Raychoudhury has written, the public
reporting about our internal research was mischaracterized, so I want
to take a moment to address it. Among other things, the research in
question actually demonstrated that many teens said that using
Instagram helped them when they were struggling with the kinds of hard
moments that teenagers have always faced.
In addition to putting specific findings in context, it is also
critical to make the nature of this research clear. This research, some
of which relied on input from only 40 teens, was designed to inform
internal conversations about teens' most negative perceptions of
Instagram. It did not measure causal relationships between Instagram
and real-world issues.
Our goal with all of the research that we do is to improve the
services that we offer. That means our insights often shed light on
problems so that we can evaluate possible solutions and work to
improve. We believe this work is critical to delivering a better
Instagram.
Moving forward, we will continue to collaborate and engage in data-
sharing with researchers on issues related to young people. We have
been working with external academics and research partners in this
space for many years, and we plan to do even more early next year. This
is something that we have done in our program with independent
academics around the U.S. 2020 elections. We will take the methodology
from the U.S. 2020 program and apply it to well-being research over the
coming year. This will involve collaborative co-design of studies and
peer-reviewed publication of findings.
In addition, we are continuing our investment in external research
to better understand how to keep young people safe and to ensure their
well-being is protected in the metaverse. For example, we committed to
providing $5 million over three years to the Digital Wellness Lab at
Boston Children's Hospital for independent research on these important
topics.
IV. Supporting Industry Regulation to Protect Young People
The reality is that keeping young people safe online is not just
about one company. We've been calling for updated regulations for
nearly three years. From where I sit, there is no area more important
than youth safety.
Specifically, we believe there should be an industry body that will
determine best practices when it comes to at least three questions: how
to verify age, how to design age-appropriate experiences, and how to
build parental controls. This body should receive input from civil
society, parents, and regulators to create standards that are high and
protections that are universal. And I believe that companies like ours
should have to adhere to these standards to earn some of our Section
230 protections.
In addition, the body could take steps to require each member to
publish regular reports on the progress they are making against each
standard and to develop a free and accessible information hub for
parents and educators.
This proposal is a work in progress, but we hope that it will
contribute to the ongoing discussion about how appropriate regulation
can help us address these critical issues. In the meantime, we will
continue to push forward on safety and well-being for young people
online.
V. Conclusion
We want young people to enjoy using Instagram while making sure we
don't compromise on their privacy and safety. As we work toward that
goal, we'll continue listening to them, their parents, lawmakers, and
experts to build an Instagram that works for everyone and is trusted by
parents.
Senator Blumenthal. Thanks, Mr. Mosseri. I will take the
first round of questions. Again, we are going to do 5-minute
rounds. Just a short while ago at our last hearing, TikTok,
Snapchat, and YouTube sat at that table, and they all committed
to making internal research, algorithms, and datasets about their
effect on children and teens available to independent
researchers. Will you commit to doing the same?
Mr. Mosseri. Senator, we believe it is important to be
transparent, both about ranking and algorithms and about data
for research. I can commit to you today that we will provide
meaningful access to data so that third party researchers can
design their own studies and draw their own conclusions about the effects on the well-being of young people and about ranking. I can
commit to do all I can to explain how ranking works and to find
other ways for us to be transparent about algorithms.
Senator Blumenthal. Will you support a legal requirement
that independent overseers and researchers not only have access
to the data sets, but also check the way algorithms are driving
content and recommend changes that you will adopt?
Mr. Mosseri. Senator, I would be happy to have my office
work with you on that. I believe that direction is an important
one. We do a number of things in this area already. We provide
information every month on the effects of our algorithms in removing problematic content from our system.
Senator Blumenthal. Will you commit to a legal requirement
that the access be provided and that an independent, separately
appointed and separately funded body, not an industry body, as
you have suggested, but an independent overseer and researcher,
have that access?
Mr. Mosseri. Senator, on the specifics of how the body
works, I am not a legal expert, but yes, I think there should
be requirements and standards for how companies like ours are
transparent about both data and algorithms.
Senator Blumenthal. Because an industry body is not
Government regulation that Mark Zuckerberg or others at
Facebook and elsewhere have called for. An industry body
setting standards is not the same as an independent one.
Let me ask you, shouldn't children and parents have the
right to report dangerous material and get a response, get some
action? Because we have heard harrowing stories from parents
who tried to report and have heard no response.
My office made a report and got no response until CNN made
the report to press relations. Shouldn't there be an obligation
that Instagram will respond?
Mr. Mosseri. Senator, yes, I believe we try and respond to
all reports and if we ever fail to do so, that is a mistake
that we should correct.
Senator Blumenthal. Instagram is addictive. That is the
view that has been repeated again and again and again by people
who are experts in this field. Parents know it. And for teens
who see Instagram's algorithms encouraging, for example, eating
disorders, they find it almost impossible to stop. The UK code
restricts Instagram's use of addictive design, legally
restricts its use of addictive design. Shouldn't we have a
similar rule in the United States?
Mr. Mosseri. Senator, respectfully, I don't believe the
research suggests that our products are addictive. Research
actually shows that on 11 of 12 difficult issues that teens
face, teens that are struggling said Instagram helps for their
harms. Now we always care about how people feel about their
experiences on our platform, and it is my responsibility as
Head of Instagram to do everything I can to help keep people
safe, and we are going to continue to do so.
Senator Blumenthal. We can debate the meaning of the word
addictive, but the fact is that teens who go to the platform,
find it difficult, maybe sometimes impossible, to stop. And
part of the reason is that more content is driven to them to
keep them on the site, to aggravate the emotions that are so
seductive and ultimately addictive.
The UK recognized it by imposing that design restriction.
The same ought to be done in the United States. Let me ask you,
will you commit to make the pause on Instagram Kids permanent?
In other words, stop developing a site for, an app for,
children under 13?
Mr. Mosseri. Senator, the idea of building a version of
Instagram for 10 to 12 year olds was trying to solve a problem.
The idea being that we know that 10 to 12 year olds are online.
They want to use platforms like Instagram, and it is difficult
for companies like ours to verify age for those that are so
young, they don't yet have an ID.
The hope was, or the plan was, to always make sure
that no child between 10 and 12 had access to any version of
Instagram, even one that was designed for them, without their
parent's consent.
And so what I can commit to today is that no child between
the ages of 10 and 12, should we ever manage to build Instagram
for 10 to 12 year olds, will have access to that without their
explicit parental consent.
Senator Blumenthal. I have more questions. My time has
expired. I thank you for answering my questions, Mr. Mosseri.
Senator Blackburn.
Senator Blackburn. Thank you, Mr. Chairman. Staying on
Instagram Kids for a moment. I know you were doing research
into 8-year-olds and pulling together data on 8-year-olds, and I assume that that was in relation to Instagram Kids. So,
are you still doing research on children under age 13?
Mr. Mosseri. Sorry, I am making sure my mic is on. Senator,
I don't believe we ever did any research with eight-year-olds
for Instagram Kids, and neither are we doing that today. I
think we entirely paused the project.
Senator Blackburn. OK. And then if you were to completely
remove that project, who would make that decision?
Mr. Mosseri. Senator, it was my decision to pause Instagram
Kids----
Senator Blackburn. OK, so you would--it would be your
decision to just do away with it?
Mr. Mosseri. Senator, I am responsible for Instagram, so
yes, it would be my decision.
Senator Blackburn. OK, let's talk about Jane Doe v.
Facebook.
Mr. Mosseri. Senator, what?
Senator Blackburn. Jane Doe v. Facebook.
Mr. Mosseri. OK. Apologies.
Senator Blackburn. OK. I assume you can't get into the
details of that because the Supreme Court is still deciding
whether or not to take that case. But the petition, which
alleges that Facebook enabled the trafficking of a minor on its
platform, really raises some very serious questions and
concerns about what we are seeing and how people were using
Instagram. So do you prohibit known sex offenders from creating
Instagram accounts?
Mr. Mosseri. Senator, human trafficking and any
exploitation of children is abhorrent, and we don't allow it on
our platforms.
Senator Blackburn. OK, do you require minors to link their
accounts to a parent or guardian's account?
Mr. Mosseri. Senator, no.
Senator Blackburn. You don't?
Mr. Mosseri. If you are over the age of 13, you can sign up
for an Instagram account. But we do believe that parental
controls are incredibly important, which is why we are
launching our first version in March of next year.
Senator Blackburn. OK. You know, yes, the controls are
going to be vitally important, but an industry group is not
going to give the controls that are needed and probably not
even an independent group. That is why we will do something
with Federal statute.
Also, I think it would be interesting to know how many
people involved in human trafficking, sex trafficking, and drug trafficking who have been indicted or convicted were using Instagram. Could you all provide that number for us?
Mr. Mosseri. Senator, I would be happy to talk to the team
and get back to you.
Senator Blackburn. That would be excellent. My staff
created an Instagram account for a 15-year-old girl and it
defaulted to public. I mentioned that earlier. Isn't the
opposite supposed to happen? And have you considered turning
off the public option altogether for minor accounts?
Mr. Mosseri. Senator, I appreciate the question. I learned
of that just this morning. It turns out that we default those
under the age of 16 to private accounts for the vast majority
of accounts which are created on Android and iOS. We have
missed that on the web, and we will correct that quickly.
Senator Blackburn. OK. Also, when they created this
account, it defaulted to this statement, ``include your account
when recommending similar accounts people might want to
follow.'' Is this a feature that should remain on by default
for minors?
Mr. Mosseri. Senator, we think it is important that no
matter what your age, it is easy for you to find accounts that
you are interested in.
Senator Blackburn. Even if you are under 18?
Mr. Mosseri. Senator, I believe teenagers too have
interests, and it should be easy for them to find those.
Senator Blackburn. Teenagers have interests, yes. But what
we are trying to address are the adverse and negative effects
that are happening to children because they are on your
platform. Can adults not labeled as suspicious by you still
find, follow, and message minors?
Mr. Mosseri. Senator, if your account is private, if
someone follows you, you have to approve it. So adults can ask
to follow you, but you have the decision or the ability to
decide whether or not they are allowed to.
Senator Blackburn. OK. In your testimony, you said you
removed more than 850,000 accounts because they did not meet
your minimum age requirement. These accounts were disabled
because the users did not provide identification showing that
they were at least 13 years old. So why did you say you didn't
want to know when JoJo Siwa said she had been on Instagram since she was 8 years old? Is that your general attitude toward
kids who are on your platform?
Mr. Mosseri. Absolutely not, Senator. We invest a lot to try to identify those under the age of 13, and whenever we find them, we remove them.
Senator Blackburn. OK, but at that moment, when you
responded to her that you did not want to know, why didn't you
use that as a teaching moment?
Mr. Mosseri. Senator, I would say it was a missed
opportunity.
Senator Blackburn. Indeed, it was a missed opportunity, and
it sends the wrong message. It looked as if you were
encouraging kids that want to be online stars to get on earlier
and to build their audience. This is a part of our frustration
with you, with Instagram, and with these platforms. Thank you,
Mr. Chairman.
Senator Blumenthal. Thanks, Senator Blackburn. Senator
Klobuchar.
STATEMENT OF HON. AMY KLOBUCHAR,
U.S. SENATOR FROM MINNESOTA
Senator Klobuchar. Thank you. Mr. Mosseri, I am looking at
this from a perspective of parents, and I guess I have talked
to parents, since so many of them have told me that they have
done everything they can to try to get their kids off your
product, kids who are addicted at age 10. And they are scared
for their kids. They want their kids to do their homework and
not get addicted to Instagram.
And yet we then find out that what your company did was to
increase your marketing budget to try to woo more teens, from
$67 million in 2018 to $390 million focused on kids this year.
And so when I hear you are going to suddenly, with all your
technological wizards, develop some kind of new way to check to
see if really young kids are on there, you could have been
spending this money, $390 million, to do that for years. You
have the money to do it. I think that we are in diametrically
opposed goals, the goals of parents out there and the goals of
your company. Our kids aren't cash cows.
And that is exactly what has been going on. Because when you look at the marketing budget and you look at what your company has done, it is to try to get more and more of them on board. And when I look at your company's quotes from one
document, you--not you personally, but your company viewed
losing teen users as an ``existential threat,'' whereas parents
are viewing their kids' addictions to your product and other
products as an existential threat to their families.
So my first question is, is that in fact the truth, that
you have been increasing money, advertising money to woo more
teen kids onto your platform?
Mr. Mosseri. Senator, no, I don't believe those statistics are correct. We increased our overall marketing budget between last year and this year, but it was not--I think you characterized it as though the majority of it was focused on teens, and that is not true.
Senator Klobuchar. OK, have you viewed the kids as a feeder way for people to get into your product? Have you not done things to get more teenagers interested in your product? Are you not worried about losing them to other platforms? You better tell the truth. You are under oath.
Mr. Mosseri. Absolutely, Senator. Senator, we try and make
Instagram as relevant as possible for people of all ages,
including teens. Teens do amazing things on Instagram every
day, but we also invest, I believe, more than anyone else in
keeping people, including teens, safe. We will spend around $5 billion this year alone, and we have over 40,000 people working on safety and integrity at the company.
Senator Klobuchar. And do you think 3 hours a day is an
appropriate amount of time for kids to spend on Instagram?
Mr. Mosseri. Senator----
Senator Klobuchar. I am asking this because just when you
put out those new rules, that was an option for parents 3 hours
a day. Is that a good use of kids' time?
Mr. Mosseri. Senator, I appreciate the question.
Senator Klobuchar. And it was in your safety tools that you
just put out there. The first option given to kids and to
parents was 3 hours a day.
Mr. Mosseri. Senator----
Senator Klobuchar. I have them. Can I put them on the
record? So, Chair----
Senator Blumenthal. Without objection.
[The information referred to was unavailable at time of
printing.]
Senator Klobuchar. Thank you.
Mr. Mosseri. If I may, Senator, I am a parent and I can
understand that parents have concerns about how much screen
time their kids have. I think that is--I think every parent
feels that way. I ultimately think that, as a parent, a parent knows best what is best for their teens.
So the appropriate amount of time should be a decision by a
parent about the specific teen. If one parent wants to set that
limit at 10 minutes and another parent wants to set that limit
at 3 hours, who am I to say that they don't know what is best
for their children?
Senator Klobuchar. And do you believe your company has
invested enough in identifying that young children are not on
the platform, when you know that they are not supposed to be on
there, and making sure you are registering and using all your
technology not to just increase your profit, but to make sure
the kids aren't on there? You think you have done enough?
Mr. Mosseri. Senator, two things. One, yes, I believe that we have invested more than anyone else. But I also believe that it is still a very challenging industry wide issue. I think there are a number of things that we can do at the industry level to better verify age. Specifically, I believe it would be much more effective to have age verification at the device level. Have a parent who gives their 14 year old a device tell the phone that their child is 14, as opposed to having every app--and there are millions of apps out there--trying to verify age on its own.
That should happen at the device level. We understand that
might not happen or that might take time. And in the meantime,
we are going to invest heavily in getting more sophisticated in
how we identify the age of people under the age of 18.
Senator Klobuchar. And is it true that someone in your
company said that it was an existential threat if you lost teen
users?
Mr. Mosseri. Senator, I don't----
Senator Klobuchar. Is that true or not? Because we have a
document that said that.
Mr. Mosseri. Senator, I assume that is true.
Senator Klobuchar. OK, so you understand what we are thinking up here when it is our job to protect kids, and we have parent after parent calling our office and e-mailing us. One of the parents likened it to me to a water faucet going off and overflowing, and she is sitting there with a mop, trying to figure out how to use all of the tools you give them that she can't figure out how to use.
So I think at some point the accountability is on you guys.
And that means everything from the privacy bills to expanding
the child protections online to the competition policy, because
maybe if we had actual competition in this country instead of
Meta owning you and owning most of the platforms and most of
the back and forth for kids, maybe we could have another
platform developed that would have the privacy protections that
you have not been able to develop in terms of keeping teens off
your platform that aren't even old enough to be on there.
So that is what I think--some food for thought for all of you is the opposition to some of the competition policy, the pro-capitalism ideas that we have over in Judiciary, and I will hand it back to our Chair.
Senator Blumenthal. Thank you, Senator Klobuchar. Thanks
for your work, your leadership on this issue here and on the
Judiciary committee, where we are on the Subcommittee on
Antitrust which you chair. I will ask a couple of questions
because we have a vote ongoing, so a number of my colleagues
will be returning from the floor. You know your suggestion for
tech companies to earn Section 230 protection has a certain
appeal to me since I am the author of the EARN IT Act along
with Senator Graham.
It is also the concept that underlies other proposals that
we have made. But that is not Government regulation. So the
question is, will you support the UK's children's code that
Instagram has to obey in the UK? Shouldn't kids in the United
States have protection as good as the kids in the UK enjoy?
Mr. Mosseri. Absolutely, Senator. A few quick things. One is I believe that is the age appropriate design code--forgetting the last letter of the acronym--and I believe it is something that we support. And we support safety standards for kids everywhere, including here in the U.S.
But also, if you would indulge me for a minute, I would
like to clarify that my proposal is, yes, an industry body that
sets standards for youth safety with input from civil society,
from policymakers, and from parents, but one where, once the standards are proposed, they would be approved by policymakers
like yourself. And I also believe that policymakers or
regulators should make the decision of whether or not any
individual company like mine is adhering to those standards. So
it is not simply----
Senator Blumenthal. And then enforce them, bring lawsuits,
seek damages?
Mr. Mosseri. Senator, we believe that a strong incentive
would be to tie some of the Section 230 protections to
adherence, and that could be a decision by regulators.
Senator Blumenthal. So would the Attorney General of the
United States or the Attorney General of a state like
Connecticut, where I was Attorney General, have the power to
enforce those standards?
Mr. Mosseri. Senator, we believe in enforcement.
Specifically how to implement that enforcement is something
that we would like to work with your office on and other
offices as well.
Senator Blumenthal. Well, that is a simple yes or no.
Enforceability has to be part of your proposal.
Mr. Mosseri. Senator, I agree, enforceability is incredibly
important. Without enforcement, it is just words.
Senator Blumenthal. So you think that the Attorney General
of the United States could enforce those standards, which means
they would be written in the statute?
Mr. Mosseri. Senator, I don't know, because I am not a
legal expert, if the best way to enforce it would be with the
Attorney General. But in general, I think it should happen at the Federal level, and it is something that I would be happy to have my team work with you on.
Senator Blumenthal. Do you favor private rights of action,
so individuals who were harmed could bring an action against
Meta?
Mr. Mosseri. Senator, I believe that it is important that companies like ours are held accountable to high standards. But
I believe the most important way or the most effective way of
doing so is to define industry standards and best practices at
the Federal level ideally and to seek enforcement, as you
suggest.
Senator Blumenthal. These are really yes or no questions.
They are pretty clear. And I know you are knowledgeable about
them, so I hope you will answer them more clearly in the
answers that you provide in writing. I am going to yield to
Senator Klobuchar.
Senator Klobuchar. I am just--and I can do this on the record--but I did want to follow up, since you denied this idea that the marketing budget went from $67 million to $390 million, and that much of the budget was allocated to wooing teens. That was reported on by the New York Times from internal documents from your company. So do you still deny that this is the fact?
Mr. Mosseri. Senator, occasionally, there are reports that
are inaccurate. In this case, I believe that report or that
article said that the majority of our budget was focused on
teens, and I know for a fact that that was not the case.
Senator Klobuchar. Much of the budget. Is that accurate?
Mr. Mosseri. Senator, I don't remember off the top of my
head.
Senator Klobuchar. Then could you give me the number or what percentage of the budget was focused on teens? You must, as a business, be able to break it down that way, so that would be helpful. I will pass that along in writing.
Mr. Mosseri. I would be happy to follow up with you on
that.
Senator Klobuchar. OK, thank you.
Senator Blumenthal. Senator Markey.
STATEMENT OF HON. EDWARD MARKEY,
U.S. SENATOR FROM MASSACHUSETTS
Senator Markey. Thank you, Mr. Chairman. You know, thanks
to your leadership, Mr. Chairman, we have Frances Haugen, who
has told us quite clearly that 32 percent of teen girls say
that when they feel bad about their bodies, Instagram makes
them feel worse. And 6 percent of American teen users trace
their desire to kill themselves to Instagram. That is your own
research, Mr. Mosseri.
Yet, faced with these frightening findings, did Facebook
back off its efforts to target children? No, just the opposite.
Facebook pursued plans to launch a version of the platform for
even younger users, Instagram Kids, and that is appalling. Mr.
Mosseri, I am glad Facebook has heeded my calls and paused
these plans. But you have since publicly doubled down on
Instagram Kids and said it is, ``the right thing to do.''
Your Statement makes crystal clear that self-regulation is
not an option for parents and children in the United States of
America. Instagram sees a dollar sign when it sees kids.
Parents should see a stop sign when they see Instagram. Mr.
Mosseri, do you support my bipartisan legislation with Senator
Blumenthal, Senator Cassidy, Senator Lummis to update the
Children's Online Privacy Protection Act and give 13, 14, and
15 year olds control over their data, yes or no?
Mr. Mosseri. Senator, respectfully, it is important that we
are clear on what the research actually shows. Any loss of life
to suicide or for any other reason is a tragedy. But
that was--the 6 percent number is inaccurate. It was 1 percent
of teens who traced their thoughts back. Now, anybody feeling
worse about themselves is something that we take incredibly
seriously.
Now you asked if I support this specific Act that you are
proposing. I do strongly support Federal regulation, not
industry regulation, when it comes to youth safety. That said,
if you move the age from 13 to 16, we know that 14 and 15 year
olds also want to be online, they also can lie about their age,
and you are going to make the challenge of age verification
even more difficult.
That said, I do believe that 13 isn't a magic number. People's needs evolve as they grow up, and we should build age appropriate experiences based on children's age.
Senator Markey. So will you give 13 to 15 year olds the
right to have all of their information expunged----
Mr. Mosseri. Senator----
Senator Markey--that is being gathered online? Do you
support legislation that would give parents and children the
ability to have their records expunged?
Mr. Mosseri. Senator, you can already delete your account
and all of your data. You should have that right, whether or
not you are a teenager or an adult.
Senator Markey. Would you support national legislation mandating that each parent and child be given the ability to expunge it? Would you support legislation to do that, make it mandatory?
Mr. Mosseri. Senator, I would support legislation that
required companies like ours to allow people to delete their
data, yes.
Senator Markey. Yes, OK. And just to make that a permanent
protection that is on the books. Would you support legislation
to ban targeted ads toward children--both teens and children?
Mr. Mosseri. Senator, we believe that anyone should always
have an age appropriate experience on Instagram or in any
social platform, and that extends to ads. We have different
rules for ads on Instagram and on Facebook. We only allow
advertisers to target based on age, based on gender, and based
on location, and we don't allow certain types of ads, things
like weight loss ads and dating ads for those under the age of
18 or alcohol related ads for those under the age of 21.
Senator Markey. So do you support legislation that would
ban targeting of ads to children, yes or no?
Mr. Mosseri. Senator, I believe it is valuable for ads to
be relevant, but I do believe that some measures need to be
taken to keep children safe, which is why I would support
something in the direction of what we do, which is to limit the
targeting abilities of platforms.
Senator Markey. Well, again, yes and no. Mandate or no
mandate. That is the question. It is exactly why we have to
make sure that Facebook and Instagram don't reserve the right
to be able to target these kids. Yes or no?
Mr. Mosseri. Senator, I am trying to be specific about what I would support, which is what we built, which is limited targeting options.
Senator Markey. Would--again, I just keep coming back to the fact that your answers are too vague to make it possible for us to make these decisions in a legislative way and to do so in the very near future, which is what I think we have to
do. And the chilling truth, unfortunately, continues to just be
that in the absence of regulation, that big tech has become a
threat to our democracy, our society, and to the children in
our country. And let's just be clear, Facebook, which owns
Instagram, opposes regulation.
Your idea of regulation is an industry group creating
standards that your company follows. That is self-regulation.
That is status quo, and that just won't cut it post the
revelations that this subcommittee has made public. We do need
laws. We need laws passed by this body. We have to ban targeted
ads.
We have to make sure that that is the law in our country.
And everything that this subcommittee has unveiled continues to
make that a necessity, including the testimony that you are
delivering today. Thank you, Mr. Chairman.
Senator Blumenthal. Thanks, Senator Markey. Senator
Baldwin.
STATEMENT OF HON. TAMMY BALDWIN,
U.S. SENATOR FROM WISCONSIN
Senator Baldwin. So I--sorry, I missed just a segment as I
went over to vote, and it may have come up because this is work
I did with Senator Klobuchar, but I joined Senators Klobuchar
and Capito in writing to Meta for more information about how
Instagram is combating eating disorder content and the harms it
brings to users, particularly young people.
In response to a question about how the platform is working
to remove this content, Meta indicated that it uses a
combination, I am quoting now from the letter, ``a combination
of reports from our community, human review, and artificial
intelligence'' to find and take down material that violates
your terms of service.
The response further notes that there are more than 15,000
human reviewers on staff. I also met with Frances Haugen, the
former Facebook employee whose disclosures have spurred this
series of hearings.
And when I asked her about what more Instagram could do to
remove content like this, content glorifying eating disorders,
she argued that more human review is really the key. According
to her, many community reports simply are not investigated, and
artificial intelligence cannot successfully identify patterns,
networks, and distribution points for problematic content.
Given that Meta's platforms, according to your own data,
have 3.6 billion monthly active users, the 15,000 reviewers
would seem to pale in comparison to the amount of content those
3.6 billion monthly active users could post.
Do you agree that more human reviewers will help you be more successful at removing problematic content more quickly?
And if so, will your company commit to strengthening its
investment in human review?
Mr. Mosseri. Senator, thank you for the question. We have
over 40,000 people, human reviewers and otherwise, and
engineers who work on safety and integrity. I mean, we are
investing about $5 billion this year. And at a high level,
people are better at nuance, and technology is often better at
scale. I think the most effective thing we can do, not only for eating disorder content, which is tragic and a complicated societal issue, is to invest more, particularly in the technologies that help on both, and we are going to continue to do so.
Senator Baldwin. Let me ask, I know that this hearing is focused mostly on youth and harmful content--but in how many countries is Instagram available?
Mr. Mosseri. Senator, Instagram is available in over 70
languages. I unfortunately don't know the number of countries
off the top of my head, but I would be happy to get back to you
with that number.
Senator Baldwin. OK, so 70 languages. That is the point I
was going to get to. Of those 40,000 or 15,000, which is what
was in the, I think, the letter response, how many are language
specific, sort of, monitoring content in each of those 70
languages?
Mr. Mosseri. Thank you, Senator. Actually, I misspoke. We review content in 70 languages. There are even more languages spoken by people who use Instagram, and we are always looking to increase the number that we cover. I apologize for the mistake.
Senator Baldwin. OK, so there are gaps then in terms of
human review in those areas?
Mr. Mosseri. Senator, we are always looking to improve, not only through language coverage, but through building more accurate classifiers and through improving the efficiency of our reviewers, because it helps keep people safer.
Senator Baldwin. I may have some followups with regard to
that that are more specific, but we also all know that there is
tremendous social pressure for kids to utilize social media
platforms and services. And while it is the industry standard
to block or restrict access to those younger than 13, we know
that younger teens and tweens are still signing up for social
media accounts.
I appreciate that your company decided to press pause on
its proposed service focused on younger kids earlier this year.
I understand from your announcement yesterday that Meta is
looking to introduce new parent controls and other tools for
Instagram in the coming months.
But I am concerned about what you are doing today to keep
kids under 13 off the platform. So tell me a little bit more
about what you are doing to strengthen age verification and
why, given the problems of which you are already well aware
with Instagram and Meta's experience with other services
focused on younger kids like Messenger Kids, why have you
waited to institute more parental controls or other steps
protecting young users?
Mr. Mosseri. Senator, on the parental controls question, I believe, as a parent, that it is going to be more responsible to develop an age appropriate version of Instagram for those under 13. But I paused that project, and I took the exact work the team was building, which was parental controls, because no one was going to have access unless they had their parent's consent. We pivoted that to teens because 13 isn't a magical number. But you also asked, how do we verify the age of those under 13? It is difficult, given that young people of that age don't have an ID in most countries.
We built what we call classifiers, which try to predict age, and then we ask people to prove their age if it looks like they might be too young. We look at things like cultural norms in certain countries--for example, sweet 16 is a cultural norm here in the U.S. And so we look at whether, when people say that on someone's birthday, it lines up with the age that they said. And we get better over time as we get more signals, but, I want to be very clear, it is not perfect, which is why I believe there are better industrywide ways to solve age verification, because it really is an industry challenge that is not unique to Facebook or to Instagram.
Senator Blackburn. Thank you. Chairman Blumenthal will be
back from his vote in a moment, and we are going to be starting
our second round. We have some other members that are coming
for their first round. In the meantime, I want to return to a
question I asked you about those that are human traffickers, sex traffickers, and drug traffickers, and their utilization of your site. Now, I know that in 2020, you reported over 21 million sex abuse images on your platform; Facebook sent these to NCMEC.
So I thank you for doing that. That is the right thing to do. I am interested to know, with these child exploitation images and reports, do you include traffickers, and do you include those violations when you make that report to NCMEC with these images?
Mr. Mosseri. Senator, I would have to check on that specifically and get back to you, but we do allow you to report an image or photo for violating any of our standards. And you can see how well we do at reducing the prevalence of those problems, and our consumer----
Senator Blackburn. Get back to us and let us know if you
are also reporting these individuals that are posting and
sharing these sexual abuse images of children. Senator Thune,
you are recognized for five----
STATEMENT OF HON. JOHN THUNE,
U.S. SENATOR FROM SOUTH DAKOTA
Senator Thune. Thank you, Madam Chair. Thanks for holding
today's hearing. The lack of transparency from big tech
companies and the effect these companies have on consumers is
concerning to the public and rightfully so. Because of the
secrecy with which big tech protects their algorithms and
content moderation practices, we have little idea how these
companies use algorithms to amplify or suppress content, or how
they can affect the behavior of users without their knowledge.
Tomorrow, the Communications subcommittee on which I serve
as Ranking Member will take a closer look at the effects of
this persuasive technology, and I look forward to hearing from
the panel about the details of how algorithms and artificial
intelligence are designed and deployed on Internet platforms to
manipulate users, as well as the bigger picture about what the
future might hold for us when corporations and Governments know
more information about each of us than we know about ourselves.
We must find ways to bring more transparency and accountability to the algorithms deployed on Internet platforms that select the content that billions of people see every day.
Since hearing from the Facebook whistleblower, we now have more
insight into Instagram and Facebook's troubling practices with
regard to how it uses algorithms. And in my view, it is long
past time for Congress to enact legislation to ensure that
these companies are held accountable.
There is also bipartisan support for shedding more light on
the secretive content moderation processes big tech uses and to
provide consumers more options when engaging with Internet
platforms, which is why I have introduced two bipartisan bills
to address these issues, the PACT Act and the Filter Bubble
Transparency Act.
And I look forward to, in the time that I have, to
discussing those issues with you today, Mr. Mosseri. And let me
start by just asking, does Instagram use persuasive technology,
meaning technology that is designed to change people's
attitudes and behaviors?
Mr. Mosseri. Senator, I have worked on ranking and
algorithms for years, and that is not how we work. We use
ranking to try and connect people with the friends that they
find meaningful, and we use them to try and keep people safe.
Senator Thune. The Wall Street Journal revealed that
Instagram often ignored warnings about the harmful impact the
platform had on users, particularly on girls. With that being
said, do you believe consumers should be able to use Instagram
without being manipulated by algorithms that are designed to
keep them hooked on the platform?
And would you support giving consumers more options when
engaging on Instagram's platform. For instance, providing
consumers a feed that is not being fed to them by algorithms or
that is in a chronological order?
Mr. Mosseri. Senator, I believe it is important that people
have control over their experience. So yes, I would support
giving people the option to have a chronological feed.
Senator Thune. On the issue of Section 230 reform, Senator
Schatz and I have introduced legislation that would, among
other things, require platforms like Instagram to provide for
more due process to users regarding their moderation and
censorship practices, and submit public transparency reports
about content that has been removed or de-emphasized. Do you
believe this provision would help build trust with Instagram's
users?
Mr. Mosseri. Senator, we believe in more transparency and
accountability, and we believe in more control. That is why we
are currently working on a version of a chronological feed that
we hope to launch next year. That is why we provide a number of
ways for you to see how content moderation works on the
platform.
So, for instance, today you can go to the account center. I
believe it is called account status and see any of your content
that has been taken down. And that is why we are working on
more ways to give people more control over their experience and
create more transparency about how Instagram works.
Senator Thune. Do you believe that algorithm explanation or
algorithm transparency are appropriate policy responses?
Mr. Mosseri. I believe very strongly in algorithmic
transparency. I think it would be hard for you to find someone
who has tried as much to explain how ranking works. There is a
number of ways to be transparent. I think in some cases, the
most effective is to look at the output of algorithms like we
do in our community guidelines enforcement report. In other
cases, it is more appropriate to explain how they work instead
of releasing millions of lines of code.
Senator Thune. And could you just elaborate a little bit on
when you talk about creating and giving consumers a feed that
is not being fed to them by an algorithm or that is in a
chronological order? You said that you are going to implement
that policy beginning next year. How did you come to that
decision, and sort of more specifically, what are the dates for
that implementation, and maybe if you could elaborate a little
bit on just exactly what that might look like?
Mr. Mosseri. Absolutely, Senator. So we have been focused
for a few years now on how to give people more control over
their experience. One idea that we have experimented with
publicly is called favorites, where you can pick a subset of
people you want to have show up at the top of feed. Another we
have been working on for months now is a chronological version
of Instagram. I wish I had a specific month to tell you right
now, but right now we are targeting the first quarter of next
year.
Senator Thune. OK. And we would like to take what you are
proposing to do and codify it. And that is what the Filter
Bubble Act does. Thank you, Madam Chair.
Senator Blackburn. Senator Lujan, you are recognized.
STATEMENT OF HON. BEN RAY LUJAN,
U.S. SENATOR FROM NEW MEXICO
Senator Lujan. Thank you very much, Chair Blackburn, and I want to thank everyone for calling this important hearing, as well as our Chair and our Ranking Member. Tomorrow, I will also be convening a hearing titled ``Disrupting Dangerous Algorithms: Addressing the Harms of Persuasive Technology'' in the Communications, Media, and Broadband subcommittee, where we will discuss legislative solutions to online amplification of content that spreads misinformation, threatens the mental and physical well-being of our children, and promotes extremism.
One of the lines of questioning that I have today, based on questions from other colleagues earlier in this important hearing, lies around data retention and deletion; I think there was a line of questioning from one of my colleagues on this as well. Does Instagram have in place practices to abide by the principle of data minimization, especially for sensitive personal information?
Mr. Mosseri. Yes, Senator.
Senator Lujan. Can you provide those to the Committee?
Mr. Mosseri. Senator, I would be happy to follow up with
that.
Senator Lujan. That is a yes?
Mr. Mosseri. Senator, yes, I will get the exact details on
what we do and follow up with the Committee.
Senator Lujan. Appreciate that. How long does Instagram
store data related to what websites users visit and what
internal links they click from inside the app?
Mr. Mosseri. Senator, I apologize. I do not know that
offhand, but again, I would be happy to get back to you with
the specifics there.
Senator Lujan. Do you know how long Instagram stores
location information for a user?
Mr. Mosseri. Senator, if you post a photo that has a
location, it will say that location on that photo, so that will
be stored for as long as that photo is up. In other cases, I
assume we have retention policies that are quite short. We will
get back to you with the specifics, but it will depend on the
usage of location data.
Senator Lujan. When was the last time Instagram reviewed and updated its data deletion and retention policies?
Mr. Mosseri. Senator, we operate as one company, so it would be for both Instagram and Facebook; there wouldn't be a specific retention policy for Instagram. I don't
know the last time the policies were specifically updated, but
I can tell you that we are constantly working on privacy,
making sure that we can do more to empower people to protect
their own data, making sure that we are compliant with the
increasing number of privacy regulations around the world.
Senator Lujan. Would you support Federal policy legislation
that enforces data retention limits and data deletion
requirements?
Mr. Mosseri. Senator, not only would we support that, but
we believe it is important for there to be privacy regulation
here in the U.S. and that is something that we have been very
public about for years now.
Senator Lujan. When users request to download their data
from Instagram, are users given all information that the
company holds on them, including assumptions that the company
has about them based on their behavior?
Mr. Mosseri. Senator, when you download your data, we give
you everything that is associated with you as far as I know,
but I want to make sure I double check that. There are certain
instances where--actually I can't think of any exceptions, but
I will get back to you.
Senator Lujan. So the other way that I would ask that question, and maybe it requires follow up, is: is there any information that Instagram does not share with users when they request their data?
Mr. Mosseri. Senator, we do our best to share all the data, all your data, when you ask to download your data. As we add new features, we add them to what we call download your data to make sure that you have all the data.
Senator Lujan. Instagram has the option for users to
request to delete their data that Instagram holds. Is that
correct?
Mr. Mosseri. Yes. And if you delete your account, you can
delete your data if you request it.
Senator Lujan. Is there any data that Instagram holds from
a user after a user deletes their information?
Mr. Mosseri. Not that I know of, Senator, no.
Senator Lujan. Is that something you can get back to me on
as well?
Mr. Mosseri. Absolutely. But I can also assure you that we
do all we can to delete all your data, if you ask us to. To do
otherwise would be incredibly problematic.
Senator Lujan. I appreciate that. There was a question that I asked Mr. Zuckerberg back in 2017 about Facebook's collection behavior regarding non-users, to which he responded to me that Facebook did not collect non-user information. Facebook, about a week later, released a correction saying that Mr. Zuckerberg must have been mistaken or had a lapse in how he responded to that particular question.
But nonetheless, I really want to get to the bottom of
that, especially with the rampant collection of data from
individuals as well. And then the last question, Madam Chair, which I will submit into the record because I am out of time now, is about the work that has to be done on non-English language disinformation and misinformation. It is a huge problem. I think it is important for Facebook, for Instagram, and Meta, and I get confused with which term I should be using here with the rebranding. I guess I am not so good with it, but----
Mr. Mosseri. I am comfortable with any term you would
prefer.
Senator Lujan [continuing]. That we are able to get disaggregated data, as we requested in this hearing from a Facebook witness, and we still have not received a response. And I think it is very important for the Committee's wishes to be respected, which are made in a bipartisan way.
Mr. Mosseri. Senator, I appreciate that point. Since we
talked about that the other day, I have started to look into
that. There are a number of ways we might be able to break out
data around our Community Standards Enforcement policy.
It is possible--one possibility is what language the
content is in, another possibility is what language the person
who sees that content speaks, another possibility is what
country. And so we are going to get back to you on what we
think is the most efficient and responsible thing we can do in
this space. And I will personally make sure that we do that and
get back to you promptly.
Senator Blumenthal. Thanks, Senator Lujan. Senator Lee.
STATEMENT OF HON. MIKE LEE,
U.S. SENATOR FROM UTAH
Senator Lee. Mr. Mosseri, does Instagram advocate weight
loss or plastic surgery for teenage girls under the age of 18?
Mr. Mosseri. Sorry, Senator, I apologize, I missed the
second word on the question. Does what--?
Senator Lee. Does Instagram advocate for, does it recommend
weight loss or plastic surgery for girls under the age of 18?
Mr. Mosseri. Absolutely not. We don't recommend eating disorder related content to people of any age.
Senator Lee. Very glad to hear that. I beg to differ here.
Leading up to this hearing, I have heard a lot of complaints from people across Utah and elsewhere who have told
me about inappropriate content available through the explore
page on Instagram, available specifically to children. And so I
was encouraged to look into it myself. So I had my staff create
a fictitious 13 year old account for a fictitious 13 year old
girl.
The explore page yielded fairly benign results at first when all we did was create the account, knowing that it was a 13-year-old girl. But Instagram also provided that same account for this fake 13-year-old with a list of recommendations of accounts that we should follow, including
multiple celebrities and influencers. So we followed the first
account that was recommended by Instagram, which happened to be
a very famous female celebrity. Now, after following that
account, we went back to the explore page and the content
quickly changed.
You see right at first, all that came up when we opened up
the account were some fairly benign hairstyling videos. But
that is not what we saw after we followed this account, the
account that was recommended by Instagram itself, and it
expanded into all sorts of stuff, including content that was
full of unrealistic standards for women, including plastic
surgery and commentary on women's height, content that could be
detrimental to the self-image of any 13-year-old girl.
And if you need any kind of evidence on the kinds of harms
that this can produce, you can look to the report that I
recently issued through my Joint Economic committee team,
specifically on this topic last week.
So, Mr. Mosseri, why did following Instagram's top recommended account for a 13-year-old girl cause our explore page to go from showing really innocuous things like hairstyling videos to content that promotes body dysmorphia, the sexualization of women, and content otherwise unsuitable for a 13 year old girl? What happened?
Mr. Mosseri. Senator, eating disorder related content, or
eating disorders more broadly are very complicated and
difficult issues in society. Without----
Senator Lee. And they are complicated enough without a
social media site recommending it.
Mr. Mosseri. Senator, I have personally spoken to teens in
multiple countries around the world that use Instagram to get
support when suffering from things like eating disorders.
We absolutely do not want any content that promotes eating
disorders on our platform. We do our best to remove it. I
believe, and I will get back to the specifics of that, it is
roughly 5 in every 10,000 things viewed. And my responsibility, as the Head of Instagram, is to get that number to as close to zero as possible.
But we believe that every company--Snapchat, TikTok, YouTube--should be public like we are about exactly what the prevalence of important content problems is on its platform.
Senator Lee. Right. I get that. And I can only take your
word for it here. I understand what you are saying about the
overall numbers. That is not how it appeared on this account.
That is not how it happened at all. It was hairstyling videos
and innocuous stuff 1 minute. The next minute, after we
followed a famous female celebrity, it changed, and it went
dark fast. It was not 5 in 1,000 or 5 in 10,000.
It was rampant. The thing that gets me is, what changed was
following this female celebrity account, and that female
celebrity account was recommended to this 13-year-old girl. So
why are you recommending that somebody follow a site with the
understanding that by doing that, you are exposing that girl to
all sorts of other things that are not suitable for any child?
Mr. Mosseri. Senator, I appreciate the question because it
is an incredibly important and difficult space. If we
recommended something that we shouldn't have, I am accountable
for that. I am the Head of Instagram. But you said a second
ago, you have to take my word for it, and I don't believe you
should.
Our first Community Standards Enforcement Report, sorry, we
have been doing it for years, but this next quarter, this
quarter we are in right now, is going to be independently
audited by Ernst & Young, and we are committed to doing
independent audits going forward.
Senator Lee. That is great, and I have independently audited this myself. And what I am saying to you is I will take your word for it as to the 5 in 1,000, 5 in 10,000 point. What I am saying is it was decidedly not 5 in 1,000 or 5 in 10,000 on this page for this poor unsuspecting, albeit fake, 13-year-old girl.
Mr. Mosseri. Senator----
Senator Lee. That is a concern. Look, I am running out of
time, but I am also running out of patience from a company that
has told us over and over and over again, we are so concerned
about your children, we are so concerned, we are commissioning
a blue ribbon study to be done or we are doing a review and
stuff like this is still happening.
Meanwhile, the Tech Transparency Project, TTP, recently conducted another experiment demonstrating how minors can use their Instagram accounts to search for prescription and illicit drugs and connect with drug dealers. In fact, according to TTP, it only took two clicks, two clicks, to find drug dealers on that platform. So, why are children's accounts even allowed to search for drug content to begin with, much less allowed to do so in a way that leads them to a drug dealer in two clicks?
Mr. Mosseri. Senator, accounts selling drugs or any other
regulated goods are not allowed on the platform.
Senator Lee. Apparently they are.
Mr. Mosseri. Senator, respectfully, I don't think you can take one or two examples and say that that is indicative of what happens on the platform more broadly, which is why--and I want to be clear----
Senator Lee. No, but it only took two clicks.
Mr. Mosseri. Senator, I am not familiar with that specific report. I am more than happy to look into it. I do want to be clear, though, because I have been talking about the Community Standards Enforcement Report a lot, and I know it sounds like just numbers. And I know behind every one of those numbers is a person who is experiencing something difficult.
So I don't want to sound callous in any way. So if there is
room for us to improve, I embrace that. That is why we invest
more than I believe anybody else, $5 billion this year, over
40,000 people. That is why we believe in industry standards and
industry accountability.
And that is why we are calling on the entire industry, YouTube, TikTok, Snapchat, to come together to set industry standards that are approved by regulators, like here in the U.S., in order to make the Internet safer not only for kids online, but for everyone.
Senator Lee. There is a scene from the movie Monty Python
and the Holy Grail. There is a big fight. Somebody concludes
the discussion by saying, ``look, let's not bicker and argue
about who killed who here.'' I think we have to reach the point
where we realize some real bad stuff is happening, and you are
the new tobacco, whether you like it or not, and you have got
to stop selling the tobacco, in quotation marks, to kids. Don't
let them have it, don't give it to them. Thank you.
Senator Blumenthal. Thanks, Senator Lee. Senator Sullivan.
STATEMENT OF HON. DAN SULLIVAN,
U.S. SENATOR FROM ALASKA
Senator Sullivan. Thank you, Mr. Chairman, and thank you
for holding this important hearing and a series of hearings
that you and the ranking member have been holding. Mr. Mosseri,
have you read the Surgeon General's report that he issued yesterday, Protecting Youth Mental Health?
Mr. Mosseri. Senator, I have started to read it; I haven't quite finished it. From what I have read so far, it makes it clear that teens in this country are struggling,----
Senator Sullivan. Let me, I will get into it a little bit.
Mr. Mosseri. Yes, Senator.
Senator Sullivan. I agree. A very sobering reading. It mentions that in 2021, emergency room visits for suicide attempts by adolescent girls are up 51 percent, 51 percent. I mean, that is just shocking. And the surgeon general said our obligation is to act. It is not just medical; it is moral.
So, the way I have read it, it is kind of a witch's brew of two things that are driving so much of these horrendous statistics related to mental health and suicide. It has been the pandemic and the negative impacts of social media. That is
in the report. One of the recommendations from the surgeon
general is limiting social media usage. Do you agree with that?
Mr. Mosseri. Senator, I will answer that question, but
first, I want to be clear that I don't believe the research
shows that social media is driving the rise in suicides.
Senator Sullivan. Wait, but why do you think the Surgeon
General of the United States who just issued a 53-page report
on mental health and teen suicides, said that we should limit
social media to help get out of this crisis?
Mr. Mosseri. Senator, from what I have read of the surgeon
general's report so far, it is about a number of different
issues, not just suicides. So just to make a connection between
one problem that he talks about and one of the recommendations
he makes, I think, is a bit of a leap.
Senator Sullivan. Let me ask my question again. The Surgeon
General of the United States makes his recommendations to
address what clearly is a mental health crisis for teenagers,
particularly teenage girls in America. One of his
recommendations is to limit social media usage. So do you agree
with him?
Mr. Mosseri. Senator, two things. One is----
Senator Sullivan. Answer the question.
Mr. Mosseri. Senator, I believe parents should be able to
set limits for their children because I believe a parent knows
best, which is why we have developed or why we are currently
developing parental controls that let parents not only see how
much time their teens spend on Instagram but set limits. I
also----
Senator Sullivan. Let me just add, I mean this is a really
important question because if we have experts saying we need to
limit social media usage, which is what the surgeon general
just said yesterday, to help address mental health issues, does
that go against the business model of Instagram or Facebook or
Meta? Isn't your business model to get more eyeballs for a
longer time on social media? Isn't that what you are about?
Mr. Mosseri. Senator, if people don't feel good about the time that they spend on our platform, or if for any other reason people want to spend less time on our platform, I have to believe that it is better for our business over the long run.
Senator Sullivan. Do you make more money when people spend
more time on your platform or less?
Mr. Mosseri. Senator, on average, we make more money when
people spend more time on our platform because we are an
advertising business.
Senator Sullivan. Right. So, but you agree with the surgeon general that people should limit their social media usage. My point is they seem to be in direct contradiction with each other: what the surgeon general is saying, that we need to better the health of our young Americans, and what your basic business model proposition is. They seem to be actually colliding with each other.
Mr. Mosseri. Senator, respectfully, I disagree. I think the
important thing is to distinguish between the short term and
the long term. Over the long run, it has to be better for us as
a business for people to feel good about the time that they
spend on our platform. It has to be better for us as business,
for parents to not only have a meaningful amount of control but
be able to exercise their control over how much time their
teens spend on our platform. And so we take a very long view on
this, which is why----
Senator Sullivan. Do you have internal data relating to
mental health and suicide and usage on your platform or
Facebook or Meta?
Mr. Mosseri. Senator, we do research to make Instagram
better and safer. And as a parent, that is exactly what I would
want. I believe we lead the industry and do more than anyone
else on suicide.
Senator Sullivan. Yes. You are not answering my question. Just answer--do you have internal data related to the issue of teen suicide and usage of your platform?
Mr. Mosseri. Senator, I am not sure I understand your question specifically, but yes, we do research and we talk to third party experts and academics about difficult issues like suicide, which has inspired work like not allowing any content that talks about the methods of suicide, connecting people who seek out that type of content with expert backed resources, and, if someone looks like they are at risk of hurting themselves, proactively reaching out to local emergency services on their behalf, not only here in the U.S., but in a number of countries around the world.
Senator Sullivan. Mr. Chairman, I am just--one final
question. Can I ask very quickly, I looked into a little bit of
this issue of your announcement on Instagram for kids and just
that phrase kind of makes me nervous. It sounds like a gateway
drug to more usage. Why did you put a pause on that and are you
going to permanently pause that? And do you worry that you are now going to get kids hooked on more usage with Instagram for Kids?
Mr. Mosseri. Senator, the idea is trying to solve a problem. We know that 10 to 12 year olds are online. The average age, I believe, when you get a cell phone in this country is currently 11 or 10. We know that they want to be on platforms like Instagram, and Instagram, quite frankly, wasn't designed for them.
So the idea was to give parents the option to give their
child an age appropriate version of Instagram, where they could
control not only how much time that they spent, but who they
could interact with and what they could see. It was always
going to be a parent's decision.
Now I personally, as the Head of Instagram, am responsible
for Instagram, and I decided to pause that project so that we
could take more time to speak to parents, experts, and to
policymakers to make sure that we get it right.
Senator Sullivan. Thank you, Mr. Chairman.
Senator Blumenthal. Thanks, Senator Sullivan. Senator Young
by WebEx.
STATEMENT OF HON. TODD YOUNG,
U.S. SENATOR FROM INDIANA
Senator Young. Thank you, Chairman. Welcome Mr. Mosseri. In
Ms. Haugen's testimony, she discussed how Instagram generates
self-harm and self-hate, especially for vulnerable groups like
teenage girls. Now I happen to have three young daughters and
two teenage daughters, and this issue hits home to me, but it
hits home to a lot of Americans. So you are here today, you are
the Head of Instagram, and you have an opportunity to tell your
side of the story.
And I do believe that if we are not receiving some
constructive, actionable, and bold measures to deal with what
is popularly believed to be a serious and significant public
health issue, Congress will act because our constituents insist
that we act. We have held a lot of hearings now. We have done
our best to educate ourselves. But frankly, since you run the
platform, since you know the technology, since you spend so
much time working on these matters, you can really help us.
If you don't, we are going to feel an imperative to act. So
that is just the reality of it. So with that said, with that
foundation laid, do you believe there are any short-term or
long-term consequences of body image or other issues on your
platform?
Mr. Mosseri. Senator, I appreciate the question. The research that we have found shows that many teens use Instagram to get support when suffering from issues like body image issues. For 11 out of 12 issues for teenage girls, and for 12 out of 12 issues--issues like body image, anxiety, and depression--we found more teens who are struggling find that Instagram made things better than worse.
The one exception was body image for teenage girls, which is why I personally--actually before we even did this research--started the social comparison team, whose research inspired ideas like Take a Break, which launched this week, and nudges, which we are currently working on, which encourage you to switch topics if you spend too much time on any one topic. I am not here to say that there is only one perfect solution, but just to give an update on what the research says and what we are doing to make Instagram safer.
Senator Young. Got it. I am familiar with nudges. I am
somewhat familiar with behavioral science; I know that is
something that can be harnessed by our tech community to
generate traffic. What is engagement based ranking, Mr.
Mosseri?
Mr. Mosseri. Senator, I appreciate the question. I have
worked on ranking and algorithms for years. That term is often
used to describe trying to connect people with content that
they find interesting. So what we do when you open up Instagram is we look at all of the posts from all of the people or accounts that you follow, and we try to show you the ones that you find the most relevant, and we try to make sure to take out anything that might be against our community guidelines in order to keep people safe. At a high level, that is usually what people refer to when they say engagement based ranking, to the best of my knowledge.
Senator Young. So is there a behavioral bias for teenage
girls to look disproportionately at content that adversely
impacts their self-image?
Mr. Mosseri. Not that I know of, Senator. I do think that
it is important that teens don't have negative experiences on
our platform. I do think it is important that we try to
understand the issue that you are raising, which I appreciate,
which is negative social comparison or body image social
comparison. And we are trying to understand how we can best
help and support those who might be struggling.
Senator Young. So there is no negativity bias, just as
adults have a negative news bias, which is why so much of the
news and current events coverage online can be so caustic and
so corrosive to our public discourse because so often people
marinate in the negative. But there is no similar bias that you have discovered, and I want to hear of one from any of your internal experts pertaining to negative self-image for teenage girls.
Mr. Mosseri. Senator--sorry to interrupt you, Senator. Senator, I appreciate the question. I think it is important to call out that social media allows you to connect with anyone you are interested in.
And in a world where the definition of beauty here in the
United States used to be very limited and very focused on a
very unrealistic definition of beauty, social media platforms
like Instagram have allowed, or not allowed, but have helped
important movements like body positivity to flourish so that if
you are a teenage girl of color or if you are a plus size
teenage girl, you can see models of color, plus size models.
It has helped diversify the definitions of beauty, and that
is something that we think is incredibly important. I don't
know of any specific bias, to answer your question very
directly, but I want to call out that we help people reach a
more diverse set of not only definitions of beauty, but points
of view and perspectives.
Senator Young. Do you have behavioral scientists who work
internal to Instagram?
Mr. Mosseri. Senator, we have data scientists who try and
understand how people use the platform in order to make
Instagram both better and safer.
Senator Young. And they would inform you, I presume--but I don't want to presume, I want to get you on the record. They would be informing you if they ever discovered a negativity bias in the research or in the behaviors of your user community as it relates to teenage girls and self-harm or self-hate, right?
Mr. Mosseri. Senator, I expect my data scientists, as I expect my researchers, to keep me abreast of any important developments with regard not only to safety but to Instagram and the industry more broadly.
Senator Blumenthal. Thanks, Senator Young.
Senator Young. Thank you.
Senator Blumenthal. Senator Lummis.
STATEMENT OF HON. CYNTHIA LUMMIS,
U.S. SENATOR FROM WYOMING
Senator Lummis. Thank you, Mr. Chairman. Companies like
Instagram are often designing technology to maximize the
collection of our data and subsequently sell visibility into
their users' private lives and interests.
That is why when a company called Signal bought
advertisements designed to show us the information it collects
and sells about us, those ads were banned by Instagram's parent
company. It is the black box of highly secretive algorithmic
systems that companies like Instagram deploy that operate
largely undetected by the user and which allow them to continue
to operate free of meaningful scrutiny. This is not an open
source system.
Sunlight disinfects and Congress must not scroll past this
critical moment without properly addressing the harms young
people are encountering on these platforms. Mr. Mosseri, thank
you for being here. In your testimony, you stated that
Instagram has limited advertisers' options for serving ads to
people under 18 to age, gender, and location.
    But your testimony neglects to mention any similar
prohibition on Instagram's own machine learning ad delivery
system. Does Instagram's machine learning ad delivery system
target ads to children using factors other than age, gender,
and location?
Mr. Mosseri. Senator, I appreciate the question. There is
one ads delivery system both for Instagram and for Facebook. We
only allow advertisers to target those under 18 based on age,
gender, and location, and overall, the system also uses teens'
activity within the app to make sure that ads are relevant to
teens.
Senator Lummis. OK, so you don't limit yourselves. You hold
yourselves to a lower standard than your advertisers?
    Mr. Mosseri. Senator, we do limit ourselves in that we
don't use any off-platform data, but we do also use activity in
the apps to make sure that ads are relevant. For instance, if I
am not interested in a specific band, because it is from another
part of the country or because I don't like that type of music,
it doesn't make sense for me to see ads for that band.
Senator Lummis. So, Mr. Chairman, I ask unanimous consent
to enter into the record a report from Fairplay that shows
Meta is still using an AI delivery system to target ads at
children.
Senator Blumenthal. Without objection.
[The information referred to was unavailable at time of
printing.]
Senator Lummis. Thank you, Mr. Chairman. Your Head of
Safety and well-being, Vaishnavi J., recently stated that ``any
one piece of content is unlikely to make you feel good or bad
or negative about yourself. It is really when you are viewing,
say 20 minutes of that content or multiple pieces of that
content in rapid succession that may have a negative impact on
how you feel.''
So to me, her Statement says that your company knows the
time spent on the platform increases the likelihood of real
world negative impacts. So how do you square a business model
that prioritizes user time and engagement with knowing there is
a direct correlation between time and harm?
Mr. Mosseri. Senator, respectfully, using our platform more
will increase any effect, whether it is positive or negative.
We try and connect people with their friends, we try to help
them explore their interests, we even try to help them start
new businesses. But if people don't feel good about the time
that they spend on our platform, that is something I personally
take seriously and why we build things like daily limits, and
why we are currently working on parental controls that are
focused on time.
Senator Lummis. Does Instagram make money from ads that are
seen when placed next to highly viewed and viral but also
harmful content that violates the rules of your platform?
Mr. Mosseri. Senator, we don't allow content that violates
our rules on the platform. We release publicly how effective we
are at removing that content, and we receive revenue based on
ads shown.
Senator Lummis. OK, so is the money returned to the
advertisers then?
Mr. Mosseri. Senator, not that I know of, no.
Senator Lummis. Is it your position that Instagram will
always comply with the laws of the country in which it
operates?
Mr. Mosseri. Senator, we are going to do our best to always
comply with the law.
Senator Lummis. OK. Will you commit to releasing those
guidelines to members of this committee?
Mr. Mosseri. Senator, apologies, which guidelines?
Senator Lummis. It would be guidelines related to how you
comply. So let me give you an example. If an authoritarian
regime submitted a lawful request for your platform to censor
political dissidents, would you comply? And if not, what are
your guidelines on something like that? Here's--let me give you
some real world examples. If a Government, let's say Uganda,
criminalizes homosexuality. And if the Government submitted a
lawful request for data on users that are members of the LGBT
community, would Instagram comply?
Mr. Mosseri. Senator, we try and use our best judgment in
order to keep people safe. I also believe that transparency on
this specific issue is incredibly important. I will double
check and get back to your office, but I believe we also are
public about incoming requests we get at a high level.
Senator Lummis. So one of the reasons that I am concerned
about the fact that this is sort of secretive data collection
based on a non-open source algorithm is that it gives you
that veil of secrecy. People don't know what information is
being collected about them.
Yet, if a hostile Government is able to identify people
like women who are learning against the law, being educated
against the law, or someone who is homosexual in a Government
where homosexuality is punishable by death, and there are
Governments that do this, if you turn over that data and it is
collected by artificial intelligence, that artificial
intelligence is not going to discern that they are putting a
human being in danger. These platforms--artificial intelligence,
when not guided and not open sourced, can be a real problem.
Mr. Chairman, thank you. I yield back.
Senator Blumenthal. Thanks, Senator. Senator Cantwell.
STATEMENT OF HON. MARIA CANTWELL,
U.S. SENATOR FROM WASHINGTON
The Chair. Thank you, Chairman Blumenthal, and thank you to
you and Senator Blackburn for this fabulous hearing. I know we
have had great attendance from members. I am so impressed by
the questions that all our colleagues have been asking, so I
hope it will lead us to some good legislative solutions. And
appreciate Mr. Mosseri--is that the right way to say it,
Mosseri?
Mr. Mosseri. That is just great.
The Chair. OK, for being here today. Obviously a big new
day on the job. So I wanted to ask you specifically about
privacy violations. Do you believe that claims of privacy
violations by kids should go to arbitration? I mean, that is,
do you believe that when people are signing up for your service
when they are 14 years old, they understand that they are giving
away their rights when they sign up to your service?
Mr. Mosseri. Senator, respectfully, I disagree with the
characterization that anyone gives away their rights when they
sign up for our service. I think people--I think privacy is
incredibly important and we do the best we can, and we invest a
lot of resources in making sure we respect people's privacy.
    The Chair. So if a child has suffered harm of that magnitude
and they tried to get those issues addressed, do you
think there should be an arbitration?
Mr. Mosseri. Senator, I am not sure I understand the exact
hypothetical, but I do believe if a child is at risk
specifically of hurting themselves----
The Chair. No, one of my constituents was working with a
mother whose 14 year old daughter was groomed by adults on
Instagram, ultimately was lured into sex trafficking and taken
across State lines for prostitution. Under Instagram's terms of
service, Instagram can argue that a child's only recourse
against Instagram's failure to provide a safe environment would
be an arbitration, with no open court, no discovery, no judge,
no jury, no appeal. So I am asking you what you think about
when real harm is created against children and what should be
the process?
Mr. Mosseri. Senator, that story is terrifying. We don't
allow child or human trafficking of any kind. We try to be as
public as we can about how well we do on difficult problems
like that one. And we believe that we should be--there should
be industry standards, there should be industry wide
accountability, but the best way to do that is Federal
legislation, which is specifically what I am proposing today.
The Chair. On this point--what we are trying to get at is
that when users, in this case particularly young children, are
signing up for the service, what they are signing up to with a
check mark is binding arbitration with the
company. So if there is a dispute about something that
happened, and yes, we have been considering privacy
legislation, our colleagues here have been trying to protect
young children in other ways, and we have certainly found some
very egregious situations of late, the only recourse they have
is to go into binding arbitration with you as a company.
And we are saying when there is something as egregious as a
privacy violation, that they should have other recourse. I am
just simply asking you whether you believe they should have
other recourse.
Mr. Mosseri. Senator, I appreciate the question. I believe
the most responsible approach in this area more broadly, not
only for privacy but for safety, is Federal regulation here in
the U.S.
The Chair. Do you think everybody has to go through you or
one of the other software companies to get redress? Do you
think that the only redress consumers should have is through
binding arbitration with the company?
Mr. Mosseri. Senator, I believe that whatever the law
states should apply to all companies like ours equally.
The Chair. I am asking you what you think as a company?
Mr. Mosseri. Senator, I am not familiar with the specific--
--
The Chair. OK, I am going to ask you for the record, and
then that way you will get a little more time and you can
consider it. But these are serious issues about the fact that
serious issues are happening to children and the only redress
they have is to go into binding arbitration with you.
And I think while that might be like, hey, I don't like
your service or something happened or you overcharged me or
this happened, that might be great for binding arbitration, but
serious harm to people I don't think should be sent to binding
arbitration. OK, back to the advertising question if we could
for a second.
My colleagues have done a really good job of asking about
this, but obviously people have been talking about the ability
to make money off of specific content, such as what at Facebook
was described as the potential reach metric. I think we have
been talking about that, right, people have been discussing
that. Are you aware of inaccuracies in the potential reach
metric?
Mr. Mosseri. Senator, I am not aware of any specific
inaccuracies, but we do our best we can to make sure
advertisers understand reach before they spend using tools like
that.
The Chair. Do you think that there is hate speech that is
not taken down by--that is included in that? Would you agree
with informing advertisers and the public how much hate speech
there is, or whether it is taken down or not taken down?
Mr. Mosseri. Senator, respectfully, I believe the potential
reach tool allows you to get a sense of how many people you
will reach, which is different from how much hate speech content
there is specifically. In our Community Standards
Enforcement Report, you can see that I believe 3 in 10,000
pieces of content seen qualify as hate speech by our
definition.
The Chair. Do you think that advertising can be inaccurate,
or misleading based on certain metrics?
Mr. Mosseri. Senator, as an advertising business, I believe
it is in our interest to be as accurate as possible. I think
whenever we make mistakes, we know that undermines our
credibility and advertising businesses are based on trust.
The Chair. Right. And they are also based on being truthful
to your advertisers. And so what I am getting at is Ms. Haugen
came to testify before the Committee. She is saying that
Facebook purposely made a decision to keep up metrics that
drove more traffic, even though she knew that they--that the
company knew that it included things that were related to hate
speech. That that certainly motivated more traffic. And when
presented with the information, the company and various members
of the company decided to continue using that metric.
And so what I am saying, there could be instances where
Instagram has also continued to have advertisers not fully
aware. So do you believe advertisers should be aware if there
was any content related to hate speech in that metric--that
they should be aware of what content they are being served
with?
Mr. Mosseri. Senator, I believe advertisers should have
access to data about how much hate speech there is on the
platform, just like everyone should have access to that kind
of data. I am not--I don't believe, actually I am not familiar
with what you are specifically referencing with regard to her
testimony, but it doesn't line up with any of my experience
through my 13 years here at the company that we would
intentionally mislead advertisers. That would be a gross
violation of trust, and it would inevitably come out and
undermine our credibility.
    The Chair. So you don't think there are any deceptive
practices with advertisers that you are involved in at
Instagram?
Mr. Mosseri. Senator, not only do I not believe that, I
think that would be----
The Chair. Do you think advertisers know everything about
your algorithm and what it is attached to in giving them page
views and information?
Mr. Mosseri. Senator, I believe deeply in transparency. I
have spent an immense amount of time over the years not only
trying to be transparent about how our algorithms work, but
also investing in making sure we are transparent about how much
problematic content is on the platform. And I believe you can
see that in our Community Standards Report, and I believe other
companies like Snapchat, TikTok, and YouTube should do the
same.
The Chair. I see my time is way over, Mr. Chairman, but I
am going to ask you questions for the record on this as well,
because the point is if companies are involved in deceptive
practices with advertisers and they haven't told them how they
are artificially increasing their traffic and it is related to
something that the advertisers aren't aware of, that is a
deceptive practice. So thank you, Mr. Chairman.
Senator Blumenthal. Thank you, Senator Cantwell. Thanks for
your excellent work on this issue and your help and support in
these hearings. Senator Cruz.
STATEMENT OF HON. TED CRUZ,
U.S. SENATOR FROM TEXAS
Senator Cruz. Thank you, Mr. Chairman. Mr. Mosseri,
welcome. Thank you for being here. Thank you for testifying
before the Committee and thank you for being here in person. As
you are aware, I and many members of this committee have had
significant concerns about Instagram's practices and Facebook's
practices and big tech more broadly. In September 2021, The
Wall Street Journal published a series of investigative
articles titled ``Facebook Files.''
And as you know, the Wall Street Journal reported that
researchers inside of Instagram found that 32 percent of teen
girls using the product felt that Instagram made them feel bad
about their bodies. The Wall Street Journal further reported
that 13 percent of British users and 6 percent of American
users trace their desire to kill themselves to Instagram. Now,
those are deeply troubling conclusions. Are you familiar with
the research that was cited by The Wall Street Journal?
Mr. Mosseri. Senator, yes, but if we are going to have a
conversation about the research, I think we need to be clear
about what it actually says. It actually showed that one out of
three girls who suffer from body image issues find that
Instagram makes things worse. And that came from a slide with
23 other statistics where more teens found that Instagram made
things better. Now, that doesn't mean it is not serious. And
on suicide--and any one life lost to suicide is an immense
tragedy--but on suicide, it was 1 percent who trace their
thoughts back to Instagram. And I think it is
important that we are clear about what the research says.
Senator Cruz. So I am glad to see that we found some common
ground. You just said twice there, it is important for us to be
clear what the research said. I agree. At prior hearings, I
have asked your colleagues repeatedly for copies of the
research, and to my knowledge, you have refused to produce it.
Will you commit now to produce the research to this committee
so we can, as you just said, be clear about what the research
says?
Mr. Mosseri. Senator, I really appreciate this question
because it is incredibly important that we are transparent
about research. I can commit personally to doing all I can to
release the data behind the research. The two challenges that I
need to let you know of are one, privacy in certain cases, and
two, in many cases we do not have the data anymore due to our
data retention policies.
But given that I can also commit to you that we will
provide meaningful access to data, to third party researchers
outside the company so that they can draw their own conclusions
and design their own studies to understand the effects of not
only Instagram or Facebook on well-being, and I think other
companies should do the same.
Senator Cruz. In what format was this research communicated
to you? You just referenced a slide that had bullet points. I
would love to see that slide. You criticize this committee for
not appreciating the full contents of the research when you
haven't given us the research. In what form did this research
come to you? Was it a PowerPoint presentation? How was it
memorialized and presented to you?
Mr. Mosseri. Senator, there are two forms that the research
comes in. I believe the most important is the data because that
allows any researcher, and I am committed to making sure
external researchers outside the company can do research and
have access to that to draw their own conclusions, and then
presentations like PowerPoint slides.
    And as for that specific slide, we have actually made that
slide public, but I believe the most important thing over time
is that we provide regular access to meaningful data about
social media usage across the entire industry, to academics and
experts to design their own studies and draw their own
conclusions. And that will take time because many of these
studies can often take years, but it is something I am
personally very committed to.
Senator Cruz. Now, when you saw your own study, finding a
significant percentage of girls reported that Instagram caused
them to think about killing themselves, were you concerned by
that finding?
Mr. Mosseri. Senator, just to clarify, I believe the study
said they trace their thoughts, but yes, I am concerned about
anybody who feels worse about themselves after using the
platform and certainly anyone, any one individual, because we
are talking about people here not numbers, that has any
suicidal thoughts.
Senator Cruz. Well, let's talk about numbers for a second.
Did Instagram do anything to quantify how many teenage girls
have killed themselves because of your product?
Mr. Mosseri. Senator, we do research and talk to third
party experts about not only suicide, but self-harm on a
regular basis. And that research has inspired much of our work
to not only make sure that we have very clear policies----
Senator Cruz. So did you quantify it or not?
Mr. Mosseri. Senator, I am not sure exactly what you would
mean by quantifying the situation, but----
Senator Cruz. Did you do research to estimate, to count how
many teenage girls have taken their lives because of your
product?
Mr. Mosseri. Senator, we do research to understand problems
and identify solutions, in the case of suicide, to make sure we
take down suicide related content from the platform, to connect
those seeking out this type of content with expert backed
resources, and to connect anyone who looks like they are at
risk of hurting themselves with local emergency services.
Senator Cruz. How did you change your policies as a result
of this research to protect young girls?
Mr. Mosseri. Senator, I appreciate the question. We use
research to not only change our policies, but to change our
product on a regular basis. With regards to bullying, the
research has inspired things like Restrict, which allows you to
protect yourself from someone who is harassing you without them
knowing, and Limits, because we learned that teens struggle
during moments of transition.
With suicide and self-injury, we learned that we need to be
incredibly careful because often teens suffering from these
really difficult issues use Instagram to find support, and we
need to make sure that they can find that support and talk
about recovery while making sure we don't----
Senator Cruz. So, look, big tech loves to use grand
eloquent phrases about bringing people together. But the simple
reality and why so many Americans distrust big tech is you make
money the more people are on your product, the more people are
engaged in viewing content, even if that is harmful to them,
even as they are viewing--every eyeball, you are making money.
And when your colleagues have been asked the same question, as
a result of this research, what policies did you change? This
committee has been unable to get a straight answer to that
question about what is different. And I think the reason for
that is if you change policies to reduce the eyeballs, you
would make less money as it is. Why is that inference not
correct?
Mr. Mosseri. Senator, if people don't feel safe on our
platform, if people don't feel good about the time that they
spend on our platform, over time they are going to use other
services. Competition has never been fiercer, particularly here
in the states with YouTube and TikTok and Snapchat. And so I
have to believe that over the long run, it is not only
incredibly important that we keep people safe, but we make sure
people feel good about the time that they spend on our
platform.
Senator Cruz. So my time has expired, but I want to make
sure I understand the commitment you made to this committee. As
I understand it, you have committed to providing this committee
with the raw data from the research you did on users of your
product and in particular body image issues and tendencies
toward suicide, and also with the PowerPoint presentations that
memorialized that raw data. Is that correct that you will
provide them to this committee?
Mr. Mosseri. Senator, what I am committing to you is to do
everything I can do to release that data----
Senator Cruz. Is there a reason you can't do what I just
said?
Mr. Mosseri. Senator, the challenge on the data, which I
think is the most important thing, is that in many cases, we no
longer have it.
Senator Cruz. How about the PowerPoint presentations? Will
you give us the PowerPoint presentations?
Mr. Mosseri. Senator, I think the most responsible thing to
do is to provide access to data to external researchers outside
the company.
Senator Cruz. We would like both. Is there a reason you are
hiding the PowerPoint presentations? You said you wanted
maximum transparency. Maximum transparency would be show us the
presentations that were prepared for you. Presumably, you had
some reason to trust them because they were prepared for your
consumption.
Mr. Mosseri. Senator, I believe you already have the
presentations, which is why we are focused on the data. We
think that any researcher should be able to draw their own
conclusions based on the raw data. That is the most important
and the most knowledgeable part of the process. Unfortunately,
much of that data we no longer have due to data retention
policies, which is why I am very committed to making sure that
we can allow access to meaningful engagement data to
researchers outside the company in the future to focus
specifically on the effects of social media and well-being, and
I am calling for the rest of the industry to do the same.
Senator Cruz. So your commitment is to provide all of the
data that you have, and the PowerPoint presentations that summarized it?
Mr. Mosseri. Senator, my commitment is to provide
meaningful access to data based on what researchers request,
because I think that is the most responsible thing for us to do
over the long run.
Senator Cruz. OK, we are requesting right now.
Mr. Mosseri. Senator, for it--to do a study on the effects
of well-being--I am sorry, the effects of social media on well-
being, you would have to design that study. So I would like to
talk to--we have even worked with Pew, we have worked with
Harvard, we have worked with other organizations around the
world. I would like to talk to the researchers and understand
what specific data they would like access to. I can't just
provide all data that we have. That is an untenable thing to do
physically.
Senator Cruz. The data that was the basis for the study
quoted in the Wall Street Journal report. That is the data I am
asking about, and the presentations that summarized the
conclusions of that.
Mr. Mosseri. Senator, I would love to provide that data. I
am personally working on trying to find out if there is any way
we can provide it in a privacy safe way and in a way where we
still have it, and because I think that is important. I have
been working on that. I don't want to overpromise and
underdeliver here, which is why I am more focused on how we
make sure researchers have access to data going forward.
Senator Blumenthal. Thanks, Senator Cruz. I just want to
say Mr. Mosseri, because you and I have discussed this point,
the data sets are not enough. This answer is, in my view, just
completely unsatisfactory. We want the studies. We want the
research. We want the surveys. The whistleblower has disclosed
a lot of it. And the answer that you have given, very
respectfully, simply won't get it. It is in your files. If it
was destroyed, we want to know about that, too. But information
is absolutely the coin of the realm when we go about devising
legislation.
And I must say there is a kind of disconnect here. Senator
Sullivan asked you about content relating to suicide or self-
harm. I think I am quoting you almost directly, and you said
there isn't any. Well, we have a teen account with all the
protections on, the filters. We searched, ``slit wrists,'' and
the results, I don't feel I can describe in this hearing room.
They are so graphic. That is within the past couple of days. I
described to you an account that looked at, in effect, eating
disorders and attracted the same deluge of self-harm and
anorexia coaches.
So, I just feel that there is a kind of real lack of
connection to the reality of what is there in the testimony
that you are giving today, which makes it hard for us to have
you as a partner and maybe we need to have some kind of
compulsory process. You know, Senator Cruz and I don't always
agree.
But on this point, on the need for information, I think you
have heard here a bipartisan call for a reality check and for
action. And the fact that this content continues to exist on
the site despite your denials, I think, really is hard to
accept. You know, Instagram suggested as a solution here to
nudge teens. I don't know whether your kids have reached the
teenage years yet. It takes more than a nudge to move teens.
I am well beyond the teenage years of my four children, but
if you said to a teen who was on Instagram, fixated on eating
disorders, why don't you try snorkeling in the Bahamas, that
nudge just won't work. And Instagram has a real asymmetric
power here. It drives teens in a certain direction and then
makes it very difficult for the teen, once in a dark place, to
find light again and to get out of it.
So, my question to you is, don't we need enforceable
standards set by an independent authority, not an industry
body, objective, independent researchers with full access to
your algorithms? Will you commit to support full disclosure of
your algorithms and a commitment to an independent authority?
Mr. Mosseri. Senator, we are actually--we are very aligned.
We agree on the transparency--the importance of transparency,
not only of how ranking works----
Senator Blumenthal. Well, then you would make available all
of the studies, like the ones that Frances Haugen presented to
us when she was here.
Mr. Mosseri. Senator, I am confident that we are more
transparent than any other tech company in the industry.
Senator Blumenthal. That is a pretty low bar, Mr. Mosseri.
That is like you are in the gutter, forgive me, in terms of
transparency, because they committed to make available their
algorithms, but only after we pressed them to do it, and we
still are awaiting full compliance.
Mr. Mosseri. Senator, we have been publishing research for
years. We are going to publish over 100 things this year alone.
I believe that there is an immense amount of data in our
quarterly reports. I believe that we are going to start having
them audited by Ernst & Young starting this quarter. I believe
that our ads library provides more transparency in advertising
than any other advertising business in any industry, tech or
otherwise. Yes, I believe there is more to do.
Yes, I believe in Federal legislation. Yes, I believe that
policymakers should be actively involved in that, and I am
looking forward to having our teams work with yours on shaping
exactly what that looks like.
Senator Blumenthal. Will you support the EARN IT Act?
Mr. Mosseri. Senator, directionally we believe strongly in
transparency and accountability. I am unfortunately not
familiar with every provision in that Act, but more than happy
to have our team work with you on that.
Senator Blumenthal. If you believe what you have testified
here, you would say, yes, I support the EARN IT Act.
    Mr. Mosseri. Senator, respectfully, I don't think it would
be appropriate for me to commit to something that I
haven't read in full. But I really do want our team to work
with yours. As I have said, we are calling for industry wide
regulation. We believe it is incredibly important. It is why we
are having this hearing today. It is why I appreciate these
questions, even though they are difficult at times because we
believe there is nothing more important than keeping teens safe
online. And we believe we need to come together and----
Senator Blumenthal. Do you support prohibitions, bans on
advertising and marketing to teens for products that are
illegal for them to consume?
    Mr. Mosseri. Senator, I believe we already prohibit that.
So in the case of those under 18 years old--we don't allow ads
for tobacco for any age, but we don't allow ads for things like
gambling for those under 18. We don't allow ads for alcohol for
those under 21.
Senator Blumenthal. Would you support legally enforceable
prohibitions where you can be held liable?
Mr. Mosseri. Senator, yes, we support industry standards
and accountability, and I believe part of those industry
standards, as I call out in my testimony, include age
appropriate design, which will inevitably include content rules
about what is appropriate.
Senator Blumenthal. If you host child sexual abuse
material, should the victims be able to sue you?
Mr. Mosseri. Senator, child exploitation is an incredibly
serious issue. I believe an earlier Senator mentioned how we
could collaborate with NCMEC on this. I believe that we are
going to continue to invest more than anyone else on this
space. And I believe that Federal regulation is the best form
of accountability with enforcement, to your point before.
Senator Blumenthal. I am going to turn to Senator
Blackburn. I have a few more questions if we have time to get
to them.
Senator Blackburn. Yes, and in the question about NCMEC, I
wanted to know if you reported the traffickers. And I think
that that is something important to do. You would need to come
back to us on that. Also, you mentioned referring children to
local authorities. You need to let us know how many we are
talking about. Is it in the hundreds or thousands? Give us
those numbers so that we have that data. Also, I would like to
know how long you hold the data on your research?
You have mentioned you can't give us the data and then you
said, ``well, you may not have some of this data.'' We need to
know how long you are holding this data on minors, those
children that you are data mining. And I was glad to hear you
admit that you all are a big advertising business. I thought
that that was helpful to our discussion.
You mentioned in response to Senator Lee's question that
you hoped you did not sound callous when we were talking about
children that take their life or that have lifelong problems
because of what they have encountered on Instagram.
And sir, I have to tell you, you did sound callous, because
every single life matters, every life matters. And this is why
we need this research. I thought it was also interesting that you
basically give teen girls no recourse if they get into a dark
spot using Instagram. But we have got a lot of parents that
come to us, this is why we are doing these hearings, and they
are concerned about how their children are going to be affected
for the rest of their life.
And I asked you yesterday when we talked, if you ever
talked to these parents whose children have taken their lives
or who have ended up having to have mental health services
because of what they have encountered, and you said, yes, you
do.
So I want to give you 1 minute. I speak to parents who are
struggling. Their children have attempted suicide, or maybe
some of them have taken their life.
So take the next minute and speak directly to these
parents. Because as I told you yesterday, I have talked to a
lot of parents. They have never heard one word from Instagram
or Facebook or Meta, and they are struggling with this. Senator
Cantwell brought this up to you. So, sir, the next 60 seconds,
the floor is yours.
Senator Blumenthal. You can have longer than a minute, if
you would like.
Senator Blackburn. Speak to these parents because we are
not talking to people that have ever had any kind of response
from Instagram. And you have broken these children's lives and
you have broken these parents' hearts. The floor is
yours. Have at it.
Mr. Mosseri. Thank you, Senator. Senator, I am a father of
three. To any parent who has lost a child or even had a child
hurt themselves, I can't begin to imagine what that would be
like for one of my three boys. As the head of Instagram, it is
my responsibility to do all I can to keep people safe.
I have been committed to that for years and I am going to
continue to do so. Whether or not we invest more than every
other company doesn't really matter for any individual,
and if any individual harms themselves or has a negative
experience on our platform that is something that I take
incredibly seriously.
Now, I know I have talked a lot about parental controls. As
I have said, I really do believe that a parent or guardian knows
what is best for their child, but I also know that a lot of
parents are busy. I have got three kids and I have a lot of
support.
I can't imagine what it would be like to have four kids or
three kids and be a single parent working two jobs. And so I
don't want to rely on parental controls. I think it is
incredibly important that the experience is safe and
appropriate for your age, no matter what it is, 13, 15, 17.
But if you have the time and the interest, I also think as
a parent you have the right to be able to understand what your
kids are doing online, and you should have control to shape
that experience into what is best for them. And if you don't
have time, that is OK too. It is my responsibility to do all I
can to help, not only to keep young people safe on our
platform, but anybody who uses our platform.
Senator Blackburn. Mr. Mosseri, we are telling you,
children have inflicted self-harm. They are getting information
that is destroying their young lives. And we are asking you,
have some empathy and take some responsibility. And it seems as
if you just can't get on that path. So we are going to continue
to work on this issue. I wish that your response had been a
more empathetic response.
Senator Blumenthal. Thanks, Senator Blackburn. I just have
one more question, and I think we are going to make the 5 p.m.
deadline. We want to be respectful of your time. I understand
that it is important to have your internal discussions and
debate, as any company must. But these studies and research are
really important for parents to make decisions. And I am
reminded of actually some work I did when I was State Attorney
General in Connecticut. We were one of the first states, with
the help of a company in Connecticut named Lego, to require
warnings about small parts on toys, which I urged the
Legislature to do as State Attorney General.
Then I fought the industry when it challenged that labeling
and we won against challenges based on the Commerce Clause and
other constitutional claims. And the Supreme Court denied
certiorari, and then the industry decided it wanted a Federal
standard because it didn't want to deal with state by state
requirements. Well, the point was that the law required
companies to disclose risks. It encouraged them to compete over
values that were positive and that promoted safety.
And that is the kind of competition that we need in your
industry. As Senator Klobuchar mentioned earlier, we have been
working on antitrust measures to promote more competition so
that maybe there will be some on safety, but disclosure--the
disinfectant of sunlight--is so important. So my question to you
is, would Instagram support legal requirements on social media
platforms to provide public disclosures about the risks of
harm--the risk of harm in content?
Mr. Mosseri. Senator, I would support Federal legislation
around the transparency of data, around the access to data from
researchers, and around the prevalence of content problems on
the platform. I think all of those are important ways that
parents or anyone really can get a sense for what a platform is
doing and what its effects on people are. I believe deeply that
transparency is important, which is why I am confident we are
going to continue to lead the industry on being incredibly
transparent about what happens on our platform.
Senator Blumenthal. Well, you have said repeatedly that you
are in favor of one goal or another directionally. And I find
that term really incomplete when it isn't accompanied by
specific commitments, a yes or a no. And we are going to move
forward on this committee, directionally with specifics. The
kinds of baby steps that you have suggested so far, very
respectfully, are underwhelming, a nudge, a break. That isn't
going to save kids from the addictive effects.
And there is no question there are addictive effects of
your platform, and I think you will sense on this committee
pretty strong determination to do something well beyond what
you have indicated you have in mind, and that is the reason
that I think self-policing based on trust is no longer a viable
solution. So we thank you for being here today. We are going to
continue the effort to develop legislation. Many of us, Senator
Markey, Senator Thune, Senator Klobuchar, myself, working with
Senator Blackburn, who has been a great partner in this effort.
This hearing record will remain open for two weeks.
If you feel you want to supplement any of your answers or
my colleagues who would like to submit questions for the
record, should do so by December 22. And we ask that your
responses be returned to the Committee as quickly as possible
and no more than two weeks after they are received. That concludes
today's hearing. Thank you very much for being here. And I
think we are pretty much on time. Thank you.
Mr. Mosseri. Thank you, Chairman Blumenthal. Thank you,
Ranking Member Blackburn. I appreciate your time.
Senator Blumenthal. Thank you.
[Whereupon, at 5:01 p.m., the hearing was adjourned.]
A P P E N D I X
Hiding in Plain Sight: Exposure of Adolescent White Males in Appalachia
to Harmful Content on Social Media
Dr. Joel Beeson, Professor, Reed College of Media, West Virginia
University
Prologue
My scholarship encompasses decades of historical research on racism
during wartime in the nation, and digital forensic investigation of
online radicalization in social media and gaming platforms. I've been a
member of the Congressional Black Caucus Veterans Braintrust since 2008
and was invited in 2017 by the Hon. Charles Rangel to provide testimony
on the historical parallels of the WWI era to the rising divisions,
polarization, extremism and racism in our Nation's present. I am
currently co-producing with my colleague (and spouse) a feature-length
documentary film on the vectors of youth radicalization in the
Appalachia region through the social media and gaming digital
ecosystem.
Since 2016, we have been leading a team of researchers and
investigative journalists at West Virginia University studying how
social media and gaming platforms expose and radicalize youth in
Appalachia (primarily boys and young men, ages 10-22). A significant
component of this research has included data collection on harms to
youth within Instagram specifically, uncovering a mix of violent,
pornographic, misogynistic (including radical incel or ``involuntarily
celibate'' content), racist and homophobic content targeted to youth
and teens--often posed as jokes or memes. This toxic content is
interspersed with benign content and serves to desensitize, manipulate,
normalize and groom youth toward forms of extremism and violence. This
research has been funded by grants from the Ford Foundation and the
Democracy Fund.
From healthcare to the economy, lessons learned in Appalachia are
broadly applicable on a national scale. Harmful, toxic content on
social media in our region is no different.
Joel Beeson
______
Introduction
    Social media and smartphone use has become a ubiquitous feature of
teens' everyday life and culture. According to the 2018 Pew Research
Center study, Teens, Social Media and Technology, 95 percent of
adolescents have access to a smartphone, while 45 percent say they are
online on a near constant basis. Roughly nine-in-ten teens reported
being online multiple times per day (Pew, 2018). In another recent
study, nearly 80 percent of teens checked their smartphones at least
hourly, while 50 percent of teens and nearly 60 percent of parents feel
their teens are addicted to their devices (Felt & Robb, 2016). Social
media is undoubtedly where adolescents spend most of their time online,
and it is a dominant social space in the formation of teen culture.
    For young people, Instagram, YouTube, Snapchat and TikTok are the
most popular social media platforms. In the Pew study, 72 percent of
teens reported using Instagram, a social media platform that relies on
images as primary means of communication. A study of young adults using
Instagram found that entertainment (diversion) and sharing image posts
(social interaction) were the chief reasons for using the app (Huang &
Su, 2018). A key mode of visual messaging on Instagram is the
widespread use and sharing of memes. In our research, we have
identified memes as the primary conveyor of harmful content on
Instagram.
The ironic, unsettled semiotic equations of memes enable their use
in propaganda as a ``ploy''--a sort of linguistic Trojan horse--to
throw so much doubt and chaos on their actual interpretation that the
user is unsure if it's a joke or not (Michigan Academician, 2021). The
``It's just a prank, Bruh'' disclaimer gives the author or publisher
plausible deniability and ironic distance.
We documented anti-Semitic and Islamophobic speech and memes shared
within an adolescent community in the years and months prior to the
2018 Tree of Life synagogue shooting in Pittsburgh and, subsequently,
the 2019 Christchurch massacre at two mosques, which killed 51 people
and injured 40, including children. This was a clarifying tragedy in which
the shooter's own manifesto referenced the same ``Sub to Pewds''
(YouTube influencer PewDiePie) and other extreme right-wing memes
circulating amongst middle and high schoolers in Appalachia and across
the country. As we observed the online social worlds of youth
communities in the region, we increasingly understood the path by which
adolescent hate memes inevitably collided with real-world violence,
regional and international networks of organized white supremacists,
abetted by trolls, digital profiteers and the mechanics and economic
model of the platforms themselves.
The impact and success of networked propaganda and memetic warfare
has continued to be evident in the proliferation of disinformation and
normalization of conspiracy theories in the COVID-19 pandemic era, and
in the rise of armed and violent illegal militias and paramilitary
groups--such as the so-called ``Boogaloo Bois''--in the wake of Black
Lives Matter protests in the summer of 2020.
Several scholars have recognized the importance of online spaces as
an increasingly important vector of youth socialization. As Julia
DeCook (2018, p. 485) argues in her study of memes used by the Proud
Boys, a violent right-wing group:
``The growth of digital technology and social media use among
populations across the world has given rise to a new
socializing institution for children, teenagers, and young
adults (Alava, Frau-Meigs, and Hassan, 2017). These online
platforms are places of civic engagement and political
expression, particularly among youth, and thus have the
potential to socialize youth into political ideologies and
sensemaking processes of their worlds (Bennett et al., 2012;
Edgerly et al., 2016).''
Kurek, Jose and Stuart (2019) observe that the online media
landscape offers a unique social space for adolescents to explore
identity formation in what Suler (2004) and others have termed
``increased disinhibition'' or the ``freedom for dissociative self-
expression.'' Studies have shown that most teens believe the online
environment provides a space safe from criticism, judgement, or real-
life consequences (Bauman, 2010; Runions & Bak, 2015; Suler, 2004).
Teens confess to sharing a different self-identity offline compared to
online (Kaplan & Haenlein, 2010). In a developmental stage where the
formation of self and anxieties about sexual and gender identity can be
difficult, confusing and uncertain, adolescents are uniquely
susceptible to manipulation and disinformation.
    In interviews and collaboration with sociologists, psychologists, and
public health professionals, we concur that these online effects
constitute a moral injury to affected youth.
Other scholars have identified how online spaces are used to prey
on boys in particular, and we suggest the notion of ``self-
radicalized'' individuals belies the complexities of an entire
ecosystem of influences--one in which they are targeted, exposed, and
socialized to normalized, incentivized dehumanizing content over hours,
weeks, months and years.
According to a review of current research, evidence suggests that
exposure to radical or violent online material is associated with a
higher risk of committing political violence offline (Hassan et al.,
2018). The review found that extremist groups disproportionately target
youth for recruitment with ``narratives that resonate with their
grievances and need for belonging and excitement'' (p.72). Because
adolescence is a period of searching for identity, those youth without
a sense of belonging or positive national identity are considered at a
higher risk for radicalization. As Cynthia Miller-Idriss (2020) writes,
``almost all recent research finds the `need for belonging' is key to
extremism, along with a need for control.'' Our current research on
adolescent exposure to right-wing extremism and white supremacy online
bears out these observations.
Research Methods and Results
Digital Ethnography
From 2017 to 2021, we have observed actual teen Instagram accounts,
mapping networks of harmful content regionally and their ties to global
connections. We have collected and analyzed the material semiotics of
memes circulating authentically in Appalachian youth culture.
Focus Groups and Interviews
Throughout 2019 and 2020 we conducted seven focus groups and
numerous individual interviews with young males in West Virginia (ages
11-19), with middle and high school students as well as one group of
college freshmen. This research revealed that Instagram is the teens'
overwhelming favorite social media platform for interacting with
peers specifically for entertainment. In addition, most reported being
exposed to hate speech or traumatic content online and expressed the
sentiment that it was simply normal--a ``natural'' feature of the
Internet and social media.
    Focus groups with teachers and school technologists, and interviews of
affected groups, including religious and racial minorities, revealed
that this online content spilled over into real-world behavior in
schools. Two siblings in high school had this exchange with an
interviewer:
Interviewer: ``Okay. Have you heard friends or other students
say anything about Nazis from memes and games, just joking?''
Both: ``Yes.''
Teen 2: ``I think also because we're Jewish.''
Teen 1: ``I hear a lot of people just walking the halls at my
school, I hear a lot of groups of people joke about things like
that. I've even had friends say that in conversations I'm in
who know that I'm Jewish. I think it probably just slips their
mind or something. It's not that I'm going to be like `oh you
said that' but I would prefer if they wouldn't say that in
front of me.''
Interviewer: ``So, what kinds of things do they say?''
Teen 1: ``Just like Hitler jokes and different things about
Jews.''
Teen 2: ``Yes like Holocaust jokes.''
Interviewer: ``If you could put a number on it, how frequently
does that occur?''
Teen 2: ``I've at least heard it daily. I don't think it's been
directed at me daily, but I've definitely heard it daily when
        I'm at school.''
Surveys
In addition to focus groups, we conducted anonymous, text-based
surveys of nearly 300 middle and high school students in six schools in
West Virginia. A little more than one-half of students (51 percent)
said that Instagram was the platform of choice for sharing memes.
Nearly three-quarters (72 percent) reported being exposed to memes that
they regarded as hate speech, which we operationally defined as
``harmful content targeted to groups based on race, religion, class,
sexual orientation or disability.'' Written responses elaborating on
this question were as follows:
``I see a lot of memes targeting Jews.''
``People use African American, Latino, and white stereotypes
as jokes.''
``They say the N word and I partake in the slop.''
``There are memes that make jokes about kids on the autism
spectrum.''
``It's funny because it's offensive.''
``Yes, Muslims exploding.''
``Memes making fun of SIMPS (suckers idolizing mediocre
pussy).''
``Memes about the Coronavirus and the Asian race.''
``Of course. I've seen memes targeting blacks, whites,
Asians, communist, etc. They'll be jokes or stereotypes. But
you have to take them like a grain of salt. There's just
jokes.''
When asked if they had ever reported this content to the platform
or told an adult parent, guardian or teacher, 72 percent said no. A
sample of responses to this question indicate that teens feel that
reporting hate speech is futile:
``No, because no one does anything.''
``No, that's just stupid and what would they even do about
it.''
``No, it's not offensive or a problem.''
``No why would I do that?''
The Instagram Experiment
Conventional wisdom and media myths focus on the actions of
individuals, giving rise to ideas of the ``lone wolf'' who seeks out
toxic content on the ``dark web.'' Based upon observations, surveys and
interviews of teens and their social media exposure, we felt like this
was an inadequate explanation for the ubiquity of toxic and harmful
content we observed in youth feeds and in off-line behavior.
We established a protocol designed to test the platform's and
external actors' actions through Instagram algorithms that suggest
content to users:
Suggested Accounts--Instagram suggests accounts based upon
algorithmic analysis of the user.
Explore Feature--Instagram suggested posts, videos and
stories based upon algorithmic analysis of the user.
    We wanted to identify how the platform curates content for a
presumed 13-year-old boy in West Virginia. We also based the design of
the protocol on the steps youth take in creating an Instagram account
for the first time, which reveals their initial reliance on the
suggestions of the platforms coupled with targeting by accounts that
immediately begin to follow their new account. We modeled the setup of
a 13-year-old boy's account on teens' reported behavior:
Interviewer: ``What app is the one that is used most for
sending memes?''
Teen: ``Instagram.''
Interviewer: ``So, if you had to say like a ratio of how much
time you spent on one app vs. another what would you say?''
Teen: ``I spend a lot more time on Instagram than I do any
other app because that's like one of the ways I get ahold of
people. Like, just going through and seeing what got posted.''
Interviewer: ``How do you pick the people you follow?''
Teen: ``Say, if I just made an Instagram account and, like, the
first person I followed, if you go to their account, it will
say suggested. And then you, like, go in there and follow
people from that.''
To inhibit the variable of user influence and to help isolate the
role of algorithms, we took minimal user actions. These steps in part
included:
1. Set up burner phones with a new e-mail and Instagram account.
        This ensured the device and account had no data history to
affect suggestion and curating algorithms.
2. We created usernames based on information from teen interviews
and focus groups about how youth decide on a username.
3. We followed a few popular official gaming accounts and regional
sports accounts (Minecraft, Fortnite, Call of Duty and
Mountaineer Football, The Red Cup and Mountaineer Maniacs) when
first signing up to signal location and gender/age of the user.
4. We followed recommended meme accounts through Instagram's
``Explore'' and ``Suggested Accounts'' features.
5. We followed back accounts who followed us.
6. We took screenshots of harmful content.
When we first piloted the Instagram experiment in the summer of
2019, we expected to encounter harmful content in the span of weeks or
months. Frankly, we were shocked to see this content appear within
three or four days of being on Instagram for a few hours per day. Eight
subsequent experiments during 2019 and 2020 confirmed that the actions
of the platform algorithms, suggested accounts and explore features,
and follows by other accounts--independent of user decisions or
searches--lead to a mix of racist, antisemitic, homophobic, sexually
explicit, ableist, misogynistic, Islamophobic and violent content in
the form of memes.
This harmful media was intermingled with benign content, as well as
memes calling for a return to traditional gender roles, masculinity and
physical strength, orthodox Christianity and the Crusades, COVID-19
conspiracy memes and related natural health remedies and ``wellness''
content.
A member of our research team, after conducting an experiment
session, reported that ``it made me want to throw up and cry.''
Discussion and Conclusions
    Although this research is still in process, and was presented initially at
the Social Science Research Council's program Extreme Right
Radicalization Online: Platforms, Processes, Prevention, as well as in
closed-door briefings to other researchers and mental health
practitioners, we felt compelled to share excerpts for this hearing in
light of Facebook whistleblower Frances Haugen's revelations, as this
work includes investigation into Instagram specifically and is relevant
to the scope of this committee's investigation into online harm from
social media platforms.
    The scope of this research supports the conclusion that the current
media ecosystem, in which the dominant economic model of social
media--the attention economy--drives machine learning algorithms to
maximize the amount of time teens are on their devices interacting
with social media platforms such as Instagram, has the networked effect of
indiscriminately amplifying content and topics that prioritize
engagement. With teens, this content is often sensational and serves to
manipulate teens natural developmental anxieties, fears and
uncertainties about identity, sexuality, religion and spirituality, and
most importantly, social belonging and acceptance. What is most
striking is our observations of how few actions need to be taken by a
user to encounter dangerous and toxic content. A recent study by the
Center for Countering Digital Hate in the UK confirmed that Instagram's
suggestion algorithms for new users were ``actively pushing
radicalising, extremist misinformation to users'' including QAnon and
COVID-19 anti-vaccination conspiracy misinformation, content promoting
far-right ``accelerationist and militia groups who aim to bring about
civil war'' along with natural health and ``wellness'' Instagram
influencers connected or in adjacent networks (Center for Countering
Digital Hate, 2021). Their study provides additional evidence
corroborating our research findings.
Extreme, provocative and often traumatic content targeted to youth
in dark and ironic memes includes references to child abuse, child
pornography, incest, rape, suicide, racism, addiction, gun violence and
school shootings. That this content is often juxtaposed in an endless
scroll with benign content only further serves to desensitize youth to
traumatic matter, and to create feelings of shame, isolation and
susceptibility that become a vector for manipulation toward extremism.
Economic and opioid decimation in Appalachia have eroded family
stability and strained social and community infrastructure. In West
Virginia, it's estimated that more than 50 percent of children in some
counties are not being raised by their parents, and instead are in
foster care or living with a grandparent because parents have either
died from overdose, are in treatment, or in prison. This is complicated
by ``kin-caregiving'' by aging grandparents who may not fully
understand young people's technology use, and may be economically
overwhelmed, in poor health or experiencing other life traumas. This
awareness of a grim and present reality--combined with continuous
exposure to toxic content, exacerbated by the pandemic and polarizing
portrayals of persistent civil unrest--only further serves to confirm
the very premise repeatedly asserted by extremist propaganda and other
information warfare: that ``civilization is in decline.''
In prior presentations of this research, we have often been asked,
``Who is behind this?'' But it is the wrong question to ask. It's not a
who question--it's a what question.
There are no boogeymen in this digital landscape, but a complex
ecosystem and a constellation of actors who have learned to ``game'' or
weaponize the system for a host of reasons. These include:
    Trolls and nonstate bad actors engaged in mischief and disruptive
        behaviors online;
    State actors: sophisticated disinformation campaigns by agents of
        other countries designed to amplify polarization, seed social
        dysfunction and mistrust, and disseminate propaganda, which can
        also include sophisticated white supremacist content
        (Martineau, 2018);
    Non-organized white supremacists: individuals who have already
        been ``red-pilled'' or co-opted by conspiratorial thinking, who
        actively distribute and engage with extremist or extremist-
        adjacent content but are not formally or knowingly associated
        with organized groups;
    Organized extremist groups: organizations recognized as hate
        groups or terrorist organizations, including homegrown splinter
        cells and international networks;
    What we refer to as ``Arms Dealers'': individuals who are not
        ideologically motivated but who create and distribute extremist
        and extremist-adjacent content and participate in amplification
        for financial reasons, which can include bots written by
        individuals to harvest high-ranking content that combines
        benign and toxic content; and
    Algorithms: the complex, proprietary machine learning software
        that platforms apply to systematically personalize and amplify
        content with high engagement potential and monetization.
From our vantage point we have been able to view this system
holistically and over time. We observe the private Instagram accounts
that use what we have come to term ``teen lures'' such as ``don't-tell-
your-parents'' accounts adjacent to the porn and gore meme generators.
We observe the links dropped into Instagram comment threads to follow
de-platformed content to new locations; we see the sickening memes that
systematically dehumanize people of color adjacent to harmless middle-
school memes; we see the ``Holy Wars'' crusades cartoonized memes that
are adjacent to insidious and dangerous adult accounts valorizing
extremist violence.
We see the tactic in which extremist and extremist-adjacent content
overlaps or is embedded in pornographic and ``gore'' videos and other
dark or ``edgy'' material targeting adolescent males in a highly
stimulating cocktail of complex messaging. This content intentionally
preys on adolescent instincts toward what has been described as ``dark
play''--a natural curiosity exploring sexuality and death outside of
the boundaries of adult rules for gameplay--but one that has been
weaponized to drive users toward extremes, coupled with the infinite
scroll of Instagram, which is designed to stimulate the same bursts of
brain chemicals that drive gambling addiction at slot machines.
Others have cited the cognitive theory and behavioral neuroscience
that go into designing systems that drive individuals to extremes for
great profit. When this is amplified by sophisticated bad actors,
it is nothing less than information warfare--with tactics that are too
complex for children, parents or individual community members to combat
alone. This perspective does not reflect a moral panic or a call to
surveil authentic teen culture online. Rather, we suggest there is
little that is in fact authentic about these highly manipulated spaces.
We have come to view the unregulated ecosystem of social media as a
public health crisis and a potential moral injury to a generation.
It is important to realize that while adult content moderators are
suffering PTSD, trauma, and depression from viewing this content, youth
on platforms such as Instagram are consuming this without any
mitigating messaging or support interventions to help them cope with
what they see.
As one teen told us in an interview about the content he has seen
online, ``I stopped feeling in the 5th grade.''
Citations
Bauman, S. (2009). Cyberbullying in a Rural Intermediate School: An
Exploratory Study. The Journal of Early Adolescence, 30(6), 803-833.
https://doi.org/10.1177/0272431609350927
Bennett, L., Freelon, D. G., Hussain, M. M., & Wells, C. (2012).
Digital media and youth engagement. https://doi.org/10.4135/
9781446201015.n11
Center for Countering Digital Hate. (2021). Malgorithm: How
Instagram's algorithm publishes misinformation and hate to millions
during a pandemic. https://www.counterhate.com/malgorithm
DeCook, J. R. (2018). Memes and symbolic violence: #proudboys and
the use of memes for propaganda and the construction of collective
identity. Learning, Media and Technology, 43(4), 485. https://
doi.org/10.1080/17439884.2018.1544149
Felt, L., & Robb, M. (2016). Technology Addiction:
Concern, Controversy, and Finding Balance. Common Sense Media. https://
www.commonsensemedia.org/sites/default/files/uploads/research/
csm_2016_technology_addiction_research_brief_0.pdf
Hassan, G., Brouillette-Alarie, S., Alava, S., Frau-Meigs, D.,
Lavoie, L., Fetiu, A., Varela, W., Borokhovski, E., Venkatesh, V.,
Rousseau, C., Sieckelinck, S., Scheithauer, H., Leuschner, V.,
Bockler, N., Akhgar, B., & Nitsch, H. (2018). Exposure to extremist
online content could lead to violent radicalization: A systematic
review of empirical evidence. International Journal of Developmental
Science, 12(1-2), 71-88. https://doi.org/10.3233/DEV-170233
Huang, Y.-T., & Su, S.-F. (2018). Motives for Instagram use and
topics of interest among young adults. Future Internet, 10(8), 77.
https://doi.org/10.3390/fi10080077
Kaplan, A. M., & Haenlein, M. (2010). Users of the world, unite!
The challenges and opportunities of social media. Business Horizons,
53(1), 59-68. https://doi.org/10.1016/j.bushor.2009.09.003
Kurek, A., Jose, P. E., & Stuart, J. (2019). `I did it for the
lulz': how the dark personality predicts online disinhibition and
aggressive online behavior in adolescence. Computers in Human Behavior,
98, 31-40. https://doi.org/10.1016/j.chb.2019.03.027
Language Games, Irony and Satire as Socio-Material Frames for
Offensive Posts and Memes in Social Media. (2021). Michigan
Academician, 47(3), 18. https://www.proquest.com/scholarly-journals/
language-games-irony-satire-as-socio-material/docview/2576372125/se-2
Martineau, P. (2018). How Instagram Became the Russian IRA's
Go-To Social Network. Wired. https://www.wired.com/story/how-instagram-
became-russian-iras-social-network/
Dafaure, M. (n.d.). The ``great meme war:'' the alt-right and its
multifarious enemies. Angles, 10. https://doi.org/10.4000/angles.369
Miller-Idriss, C. (2020, September 08). Portland and Kenosha
violence was predictable--and preventable. The Conversation. https://
theconversation.com/portland-and-kenosha-violence-was-predictable-and-
preventable-145505
Munn, L. (2019). Alt-right pipeline: Individual journeys to
extremism online. First Monday, 24(6). https://doi.org/10.5210/
fm.v24i6.10108
Phillips W., Milner R.M. (2017) Decoding Memes: Barthes' Punctum,
Feminist Standpoint Theory, and the Political Significance of
#YesAllWomen. In: Harrington S. (eds) Entertainment Values. Palgrave
Entertainment Industries. Palgrave Macmillan, London. https://doi.org/
10.1057/978-1-137-47290-8_13
Pew Research Center (May 2018). Teens, Social Media & Technology
2018. https://www.pewresearch.org/internet/2018/05/31/teens-social-
media-technology-2018/
Runions, K. C., & Bak, M. (2015). Online Moral Disengagement,
Cyberbullying, and Cyber-Aggression. Cyberpsychology, Behavior, and
Social Networking, 18(7), 400-405. https://doi.org/10.1089/cyber.2014.0670
Alava, S., Frau-Meigs, D., & Hassan, G. (2019). Youth and Violent
Extremism on Social Media: Mapping the Research.
United Nations Educational, Scientific and Cultural Organization.
https://unesdoc.unesco.org/ark:/48223/pf0000260382
Shevyrolet. (2020). Image Macros. Know Your Meme. https://
knowyourmeme.com/memes/image-macros
Suler, J. (2004). The Online Disinhibition Effect. CyberPsychology
& Behavior, 7(3), 321-326. https://doi.org/10.1089/1094931041291295
______
Response to Written Questions Submitted by Hon. Maria Cantwell to
Adam Mosseri
Children and Adolescent Use and Misuse of Meta Platforms.
Information provided by Frances Haugen indicates that data regarding
users' interests, friends, and other interactions can be and is used by
Meta to infer the real age of users with relative precision.
Question 1. Is it true, from a technological perspective, that data
about a user's interests, friends, and the accounts with which they
interact can predict an individual's age?
(a) If yes:
i. With how much precision? Please also describe the methods and
metrics by which Meta predicts an individual's age.
ii. Explain each instance in which Meta has used these age-
prediction models in connection with its products.
(b) If no:
i. Does Meta make its advertising clients or investors aware that
Meta is unable to make predictions about the ages of its
users? If so, please provide copies of such disclosures.
Answer. Determining the age of people on social media is a complex
challenge across our industry.
We've developed artificial intelligence technology that allows us
to estimate people's ages, like if someone is below or above 18. We
train the technology using multiple signals. We look at things like
people wishing someone a happy birthday, and the age written in those
messages--for example, ``Happy 21st Bday!'' or ``Happy Quinceanera.''
We also look at the age of that person's Facebook friends or Instagram
followers. These age estimations are used to ensure people are having
an age-appropriate experience, for example, by preventing them from
accessing services for people over 18 years old, like Facebook Dating,
or by correctly operating certain safety features, like identifying
people under 18 to provide proactive warnings in a conversation with an
adult that may be inappropriate.
We're also focused on using existing data to inform our artificial
intelligence technology. Where we do feel we need more information,
we're developing a menu of options for someone to prove their age. This
is a work in progress. Our technology isn't perfect, and we're always
working to improve it, but that's why it's important we use it
alongside many other signals to understand people's ages.
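For illustration only, the following minimal Python sketch shows how
signals like the ones described above--ages written in birthday
messages and the ages of a person's friends or followers--could in
principle be combined into a rough over/under-18 estimate. The parsing
rule, signal names, and thresholds are assumptions made for this
sketch; they are not Meta's actual model or code.

    import re
    from statistics import median
    from typing import Optional

    # Hypothetical sketch only: pattern, signals, and combination rule
    # are illustrative assumptions, not Meta's model.
    BDAY_PATTERN = re.compile(
        r"happy\s+(\d{1,2})(?:st|nd|rd|th)?\s+b(?:irth)?day", re.IGNORECASE)

    def age_from_birthday_messages(messages: list[str]) -> Optional[int]:
        """Pull ages out of wishes such as 'Happy 21st Bday!'."""
        ages = [int(m.group(1)) for msg in messages
                if (m := BDAY_PATTERN.search(msg))]
        return round(median(ages)) if ages else None

    def age_from_network(connection_ages: list[int]) -> Optional[int]:
        """Use the median age of friends/followers as a weak prior."""
        return round(median(connection_ages)) if connection_ages else None

    def estimate_is_adult(messages: list[str],
                          connection_ages: list[int]) -> Optional[bool]:
        """Average whatever signals exist; None means not enough evidence."""
        signals = [s for s in (age_from_birthday_messages(messages),
                               age_from_network(connection_ages))
                   if s is not None]
        if not signals:
            return None
        return sum(signals) / len(signals) >= 18

    # A profile with "Happy 21st Bday!" wishes and mostly adult connections
    print(estimate_is_adult(["Happy 21st Bday!!"], [19, 22, 24]))  # True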
We're also in discussions with the wider technology industry on how
we can work together to share information in privacy-preserving ways to
help apps establish whether people are over a specific age. One area
that we believe has real promise is working with operating system
(``OS'') providers, Internet browsers, and other providers so they can
share information to help apps establish whether someone is of an
appropriate age.
This has the dual benefit of helping developers keep underage
people off their apps, while removing the need for people to go through
different and potentially cumbersome age verification processes across
multiple apps and services. While it's ultimately up to individual apps
and websites to enforce their age policies and comply with their legal
obligations, collaboration with OS providers, Internet browsers, and
others would be a helpful addition to those efforts.
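As a purely hypothetical sketch of the OS-level collaboration described
above, the following Python fragment shows an app asking the operating
system for a yes/no answer to ``is this user at least N years old?''
and falling back to its own in-app checks when no signal is available.
The os_age_signal() call is invented for illustration; no such API is
defined in this testimony.

    from typing import Optional

    def os_age_signal(threshold: int) -> Optional[bool]:
        """Stand-in for a hypothetical OS/browser-provided signal:
        'is this user at least `threshold` years old?'
        Returns None when the platform knows nothing."""
        return None  # e.g., the device has no parental-control data

    def run_in_app_verification(threshold: int) -> bool:
        """Placeholder for the app's own methods (stated age, ID upload, etc.)."""
        return True

    def user_meets_age_requirement(threshold: int = 13) -> bool:
        signal = os_age_signal(threshold)
        if signal is not None:
            # Privacy-preserving: only a yes/no crosses the OS/app boundary.
            return signal
        return run_in_app_verification(threshold)

    print(user_meets_age_requirement(13))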
Finally, we're also building technology to find and remove accounts
belonging to people under the age of 13. We have tools and processes to
identify and remove people who falsely state they are 13 years old or
older. For example, anyone can report an underage account to us. Our
content reviewers are also trained to flag reported accounts that
appear to be used by people who are underage. If these people are
unable to prove they meet our minimum age requirements, we delete their
accounts. In the last two quarters of 2021, Meta removed more than 4.8
million accounts on Facebook and 1.7 million accounts on Instagram
because they were unable to meet our minimum age requirement.
Question 2. Meta uses data about its users' interests, friends, and
other accounts with which they interact in order to provide targeted
advertising to its users. Can Meta use this data to make its platforms
safer for users under the age of 13? Can Meta use this data to identify
users under the age of 13 and disable their accounts? If so, does Meta
do so?
Answer. We prohibit people under the age of 13 from using Facebook
or Instagram. When we learn an underage user has created an account, we
remove them from the platform. As discussed above, determining people's
age on social media is a complex challenge across our industry. We have
various methods of finding and removing accounts used by people who
falsely state they are 13 years old or older. For more information on
our efforts to keep people under 13 years old off of our platforms,
please see the response to your Question 1.
We are working on developing technology to proactively identify
individuals under 13 years old to prevent them from signing up for our
services and to remove them from the platforms. This process is not
straightforward and presents many challenges, including but not limited
to considerations pertaining to children's data.
Although this is an industry-wide issue, we are dedicated to
investigating existing and novel avenues for validating age, while also
working to ensure we are striking the right balance between using
personal information to verify age and the principle of data
minimization. To that end, we are in discussions with the wider
technology industry about ways to share information in privacy-
preserving ways that help apps--such as Instagram--establish whether
people are over a specific age. As discussed above, one such area that
we are exploring is accessing device/operating system-level signals
further upstream (e.g., at app download) that would help Instagram
understand a user's age, in addition to in-app verification methods.
As explained in our Data Policy, we collect three basic categories
of data about people: (1) data about things people do and share (and
who they connect with) on our services; (2) data about the devices
people use to access our services; and (3) data we receive from
partners, including the websites and apps that use our Business Tools.
Our Business Tools Terms expressly prohibit our partners from sharing
with us data they know or reasonably should know is from or about
people under the age of 13. For more information, please visit https://
www.facebook.com/policy.php.
Question 3. Explain in detail all actions Meta takes to keep users
under the age of 13 off its platforms. In your response, include: (a)
any actions previously considered and not implemented (and explain why
they were not implemented); and (b) any actions that are planned for
the future but not currently implemented (and explain why they have not
been implemented yet).
Answer. Please see the response to your Questions 1 and 2.
Question 4. Does Meta have information regarding the actual or
estimated number of users on each of its platforms who are under the
age of 13?
(a) If yes:
i. By platform, what is the actual or estimated number of users
who are under the age of 13? Please state whether the
number is actual or an estimate, and how Meta determined
those numbers.
ii. Does Meta make this information available to the public,
including advertisers and investors? If not, why?
iii. Explain why Meta does not keep these users under the age of
13 off its platforms. For each reason, state whether it is
a technological barrier or a business decision and fully
explain the nature of the technological barrier or the
rationale for the business decision.
(b) If no:
i. Will Meta commit to obtaining this information about its users
and make it publicly available, including to advertisers
and investors?
Answer. Per Facebook's Terms of Service and Instagram's Terms of
Use, people under 13 are not allowed on our platforms. When we learn
that someone under 13 years old is on our platform, we remove them.
As discussed in the response to Question 1, Meta has published data
regarding the number of accounts it has removed for failing to meet our
minimum age requirement. In the last two quarters of 2021, Meta removed
more than 4.8 million accounts on Facebook and 1.7 million accounts on
Instagram because they were unable to meet our minimum age requirement.
Children and Adolescent Social Media Health and Safety.
Question 1. Has Meta ever studied, or is Meta aware of any studies,
that shed light on how the use of Instagram detrimentally affects the
mental or physical health of children or adolescents? If so:
(a) Identify and provide copies of the studies and their results and
describe what actions, if any, Meta took in response to such
studies and results.
(b) Do any of these studies indicate the number of hours a child or
adolescent can use Instagram without risk of detrimental mental
or physical health impacts? If so, state how many hours a child
or adolescent can safely use Instagram.
Answer. We take the issues of safety and well-being on our
platforms very seriously, especially for the youngest people who use
our services. We are committed to working with parents and families, as
well as experts in child development, online safety, and children's
health and media, to ensure we are building better products for
families. That means building tools that promote meaningful
interactions and helping people manage their time on our platform. It
also means giving parents the information, resources, and tools they
need to help their children develop healthy and safe online habits. And
it means continued research in this area.
We employ and work with researchers from backgrounds that include
clinical psychology, child and developmental psychology, pediatrics
research, public health, bioethics, education, anthropology, and
communication, and we collaborate with top scholars to navigate various
complex issues, including those related to well-being for users on
Facebook and Instagram. Meta also awards grants to external researchers
in order to help us better understand how experiences on Facebook and
Instagram relate to the safety and health of our community, including
teen communities. And because safety and well-being aren't just Meta
issues, but societal issues, we work with experts in the field to look
more broadly at the impact of mobile technology and social media on
children and how to better support them as they transition through
different stages of life. Additionally, we support the bipartisan,
bicameral Children and Media Research Advancement (``CAMRA'') Act,
which would provide funding for the National Institutes of Health to
study the impact of technology and media on the cognitive, physical,
and socio-emotional development of children and adolescents.
Our insights not only shed light on problems, but they also may
inspire new ideas and changes. Most importantly, we do research to make
our products better. We evaluate possible solutions and work every day
to make our platforms a positive and safer experience for our
community. We have a long track record of using research and close
collaboration with our Safety Advisory Board, Youth Advisors, and
additional experts and organizations to inform changes to our apps and
provide resources for the people who use them. For example:
We created a dedicated reporting flow for eating disorder-
related content after learning some people were having
difficulty reporting such content using our prior flow.
We launched Hidden Words, which allows people to
automatically filter Direct Message (``DM'') requests that
contain offensive words, phrases, and emojis into a Hidden
Folder that they never have to open if they don't want to. This
feature also filters DM requests that are likely to be spammy
or low-quality.
We launched Restrict, which allows people to protect
themselves from bullying without the fear of retaliation.
To prevent bullying, we've created comment warnings when
people try to post potentially offensive comments. We've found
that, about 50 percent of the time, people edited or deleted
their comments based on these warnings.
We worked with the Jed Foundation to create expert and
research-backed educational resources for teens on how to
navigate experiences like social comparison on Instagram.
We updated our policies to prohibit graphic content related
to suicide and took steps to protect vulnerable people from
being exposed to content related to suicide and self-injury
more generally in places like Explore.
These are just some examples of the types of products and controls
that we have launched publicly or are continuing to explore based on
this research. And we're constantly working to improve. For example,
our research shows--and external experts agree--that if people are
dwelling on one topic for a while, it could be helpful to nudge them
towards other topics at the right moment. That's why we're building a
new experience that will nudge people towards other topics if they've
been dwelling on one topic for a while.
When it comes to time spent, we want to give people on our
platforms--especially teenagers--tools and resources to help them
manage their experiences in the ways that they want and need, including
the time they spend. We have built time management tools including
Daily Limit, which lets people know when they've reached the total
amount of time they want to spend on Instagram each day; ``You're All
Caught Up,'' which notifies people when they've caught up with new
content on their feed; and controls to mute notifications. We also
recently launched ``Take A Break'' to go even further and empower
people to make informed decisions about how they're spending their time
on Instagram. We show reminders suggesting that people close Instagram
if they've been scrolling for a certain amount of time, and we show
them expert-backed tips to help them reflect and reset. We try to make
young people aware of this feature, so we show them notifications
suggesting they turn the reminders on. Finally, we're also launching
Instagram's first set of controls for parents and guardians, an opt-in
feature that will allow them to see what their teens are up to on
Instagram and manage things like the time they spend in our app. These
new features will give parents tools to meaningfully shape their teen's
experience.
We also offer resources for parents and teens on issues like screen
time, digital citizenship, and well-being. For example, in 2020 we
launched Get Digital, which provides lessons and resources based on
many years of academic research by our expert partners to help young
people develop the competencies and skills they need to navigate the
Internet more safely. We are also a founding sponsor of the Digital
Wellness Lab, run jointly by Harvard University and Boston Children's
Hospital, which has resources for parents and teens on issues like
screen time in their Family Digital Wellness Guide, found here: https:/
/digitalwellnesslab.org/wp-content/uploads/Family-Digital-Wellness-
Guide-2021.pdf. Finally, we offer on-demand safety trainings that
explore child safety tools and resources available on our apps and
provide people with expert-informed, research-based information. These
trainings are available at https://www.facebook.com/safety/childsafety/
trainings.
For more information about research on this issue, please see the
answer to your Question 2 below.
Much of the research requested is confidential, in part to respect
the privacy of the people who participated in these studies and also to
promote full and frank discussion within Meta about important issues
like teen mental health. That said, greater transparency and
appropriate context are things we think about a lot. We know there is
great interest in the way our platforms operate and the steps we take
to improve them. We'll continue to look for opportunities to work with
more partners to publish studies in this area, and we're working
through how we can allow external researchers more access to our data
in a way that respects people's privacy. For example, in September
2021, we released two presentations related to these issues.
Question 2. Does Meta believe that any of its platforms are capable
of addicting users?
Answer. Facebook and Instagram were built to bring people closer
together and build relationships. We design our services to be useful.
And we want the time people spend on Facebook and Instagram to be
intentional, positive, and inspiring. The effects of social media are
still being studied, and we work in collaboration with leading experts
to better understand issues around mental health and well-being and to
make product decisions that enable meaningful social interactions.
Meta has been working for years to better understand and empower
people who use our services to manage problematic use. There are many
challenges with conducting research in this space, and we are not aware
of a consensus among studies or experts about how much screen time is
``too much.'' Many experts and research studies suggest it's not
necessarily about how much time you spend on social media but more
about what you're doing and the experiences you're having that's
important. For example, although some research shows that passive use
of social media--browsing and clicking links, but not interacting with
people--can be linked with negative outcomes, research also shows that
meaningful use of social media--sharing messages, posts, and comments--
can be linked with positive outcomes, like feeling less lonely and more
socially supported. This understanding has led to product changes to
facilitate active interactions and meaningful connections between
people as well as new features that give people more control over their
experience on our services.
Additionally, our own research as well as external research has
revealed significant variation in the number of people who self-report
problematic use, depending on how it's measured. A causal link between
social media and addiction has not been established. The research on
the effects of social media on people's well-being is mixed. For
example, a mixed methods study from Harvard described the ``see-saw''
of positive and negative experiences that U.S. teens have on social
media. The same person may have an important conversation with their
friend on one day and fall out with them the next day. According to
research by Pew Internet on teens in the US, 81 percent of teens said
that social media makes them feel more connected to their friends,
while 26 percent reported social media makes them feel worse about
their lives. Our findings were similar. In one internal study, surveyed
users were more likely to say that Instagram made problematic use
better or had no impact than to say it made problematic use worse.
That said, we still want to provide people with tools to help them
manage their experiences on our platforms however they see fit. For
more information, please see our Newsroom article on this subject:
https://about.fb.com/news/2021/11/wsj-report-ignores-our-approach-to-
well-being-research/. For example, on Instagram and as discussed in the
answer to your previous Question, we publish expert-informed resources
and have built time management tools including Daily Limit, which lets
people know when they've reached the total amount of time they want to
spend on Instagram each day; `You're All Caught Up,' which notifies
people when they've caught up with new content on their feed; and
controls to mute notifications. We also recently launched ``Take A
Break'' to go even further and empower people to make informed
decisions about how they're spending their time on Instagram. We show
reminders suggesting that people close Instagram if they've been
scrolling for a certain amount of time, and we show them expert-backed
tips to help them reflect and reset. We want to make sure young people
are aware of this feature, so we show them notifications suggesting
they turn the reminders on.
Further, on Facebook, we recently made it easier to sort and browse
News Feed, giving people more control over what they see. We also
launched Favorites, a new tool where people can control and prioritize
posts from the friends and Pages they care about most in their News
Feed. Specifically, people can select up to 30 friends and Pages to
include in Favorites, and posts from these selections will appear
higher in ranked News Feed and can also be viewed as a separate filter.
And, on Instagram, we've started to test the ability to allow people to
switch to different feed views, allowing people the option to see posts
in chronological order. People can also make a close friends list on
Stories and share with just the people they've added. We developed some
of the tools referenced in this answer based on collaboration and
inspiration from leading mental health experts and organizations,
academics, internal experts, and feedback from our community. Our hope
is that these tools give people more control over the time they spend
on our platforms.
Question 3. According to the Centers for Disease Control and
Prevention, there has been a 57 percent increase in teen suicide since
2008, which some experts attribute to the exponential increase in social
media use among teens. Does Instagram believe there is any relationship
between teen suicide and mental health issues (including anxiety,
depression, or self-harm), and social media? Please explain your
response.
Answer. We care deeply about teens and take these issues incredibly
seriously. Mental health, and self-harm and suicide in particular, are
complex issues. We rely on the input of experts in these fields to help
shape our approach; they tell us that some people find it helpful to
share their experiences of mental health and get support from friends,
family, and others in the community. Social media can also help tackle
stigma associated with mental health. At the same time, experts tell us
what's helpful for some may be harmful for others. It is really
important that we find the right balance between helping protect people
from content that has a higher likelihood of resulting in risk for them
and allowing individuals to express themselves and seek support in
times of need. To that end, we have specific policies about suicide and
self-injury. While we allow people to discuss these topics because we
want Facebook and Instagram to be a space where people can share their
experiences, raise awareness about these issues, and seek support from
one another, we don't allow people to post graphic suicide and self-
harm content, content that depicts methods or materials involved in
suicide and self-harm (even if it's not graphic), or fictional content
that promotes or encourages suicide or self-harm. In the third quarter
of 2021, we removed over 96 percent of violating content before it was
reported to us. As is discussed in greater detail below, we regularly
consult with experts in suicide and self-injury to help inform our
policies and enforcement in this area.
For years, we've taken steps to help protect more vulnerable
members of our community from being exposed to suicide and self-harm
related content that is permissible under our policies, for example, if
someone posts about their recovery journey. We remove known suicide- and
self-harm-related posts from places where people discover new content,
including our Explore page, and we will not recommend accounts we have
identified as featuring suicide or self-injury content. We also remove
certain hashtags and accounts from appearing in search. When someone
starts typing a known hashtag or account related to suicide and self-
harm into search, we restrict these results. We add sensitivity screens
to blur content that isn't graphic but could have a negative impact on
someone searching. Additionally, we have a resource center that we
developed with help from mental health partners.
When our technology detects content that clearly violates our
policies, it will automatically remove it without the need for human
review. And when a post is reported by a concerned friend or family
member or identified by machine learning as including suicide or self-
harm content, a member of Facebook's Community Operations team reviews
the report to determine whether there are any policy violations and if
there may be an imminent risk of self-harm--and, if so, the original
poster is shown support options. For example, we encourage people who
are going through a difficult time to reach out to a friend, and we
offer pre-populated text to make it easier for people to start a
conversation. We also suggest contacting a helpline and offer other
tips and resources for people to help themselves in that moment.
Additionally, in the US, if someone reported the post, that person also
receives resources and information about how to help the person in
distress. These resources were created in partnership with our clinical
and academic partners.
Finally, if our reviewers identify that someone is at immediate
risk of harming themselves, we will contact local emergency services in
the U.S. to get them help. We use automation so the team can more
quickly access the appropriate first responders' contact information.
By using technology to prioritize and streamline these reports, we are
able to escalate the content to our Community Operations team, which
can more quickly decide whether there are policy violations and whether
to recommend contacting local emergency responders. Thanks to our
technology, our Community Operations team, and as a result of reports
from friends and family on Facebook and Instagram, we've helped first
responders quickly reach people globally who needed help. For more
information, please visit https://www.facebook
.com/safety/wellbeing/suicideprevention. Please note that our
technology does not seek to determine whether an individual posting the
content is suffering from a mental health condition.
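For illustration only, the following Python sketch mirrors the triage
flow described above: content that clearly violates policy is removed
automatically, borderline or risk-flagged posts are queued for human
review, and posts flagged as imminent-risk are prioritized so reviewers
can decide whether to escalate to emergency responders. The scores,
thresholds, and queue are stand-ins for illustration, not Meta's
systems.

    from dataclasses import dataclass, field
    from queue import PriorityQueue

    @dataclass(order=True)
    class ReviewItem:
        priority: int                       # lower number = reviewed sooner
        post_id: str = field(compare=False)

    def triage(post_id: str, violation_score: float, imminent_risk: bool,
               review_queue: "PriorityQueue[ReviewItem]") -> str:
        """Stand-in triage: auto-remove clear violations, queue the rest."""
        if violation_score >= 0.95:         # clearly violating: removed automatically
            return "auto_removed"
        if imminent_risk or violation_score >= 0.50:
            review_queue.put(ReviewItem(0 if imminent_risk else 1, post_id))
            return "queued_for_review"
        return "no_action"

    queue: "PriorityQueue[ReviewItem]" = PriorityQueue()
    print(triage("post-123", violation_score=0.97, imminent_risk=False,
                 review_queue=queue))       # auto_removed
    print(triage("post-456", violation_score=0.60, imminent_risk=True,
                 review_queue=queue))       # queued_for_review
    print(queue.get().post_id)              # post-456 is reviewed first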
All of our efforts related to suicide and self-harm are informed by
subject-matter experts from around the world. We could not do this work
without them, which is why in 2019 we made the decision to set up a
regular check in with experts from over 20 countries to discuss some of
the complex issues associated with suicide and self-injury content,
revisit decisions we have made to ensure they align with the latest
research, and ensure we are doing our best to support all those on our
platform. We cover a wide range of issues during these discussions,
including how we should deal with suicide notes posted on our platform,
what risks are associated with viewing aggregated sad content online,
and when we should allow newsworthy depictions of suicide. We
also seek their input on product enhancements to foster the well-being
of our community. For more information, please visit https://www.face
book.com/safety/wellbeing/suicideprevention/expertengagement.
More broadly, we also work with clinical and social psychologists,
social scientists, and sociologists, and we collaborate with top
scholars to navigate complex issues related to well-being for users on
Facebook and Instagram. Meta awards grants to external researchers in
order to help us better understand how experiences on Facebook and
Instagram relate to the safety and health of our community, including
teen communities. And because safety and well-being aren't just Meta
issues, but societal issues, we work with experts in the field to look
at the impact of mobile technology and social media more broadly on
youth, and how to better support them as they transition through
different stages of life.
We have been clear in our statements that Meta conducts research to
understand the impact of our products and to make our products better,
like many other large companies and especially other technology
companies. That means our insights often shed light on problems, but
they also inspire new ideas and changes. We evaluate possible solutions
and work every day to make our platform a positive and safer experience
for our community. We have a long track record of using our research--
as well as external research and close collaboration with our Safety
Advisory Board, Youth Advisors, and additional experts and
organizations--to inform changes to our apps and provide resources for
the people who use them.
We are committed to learning even more about issues related to
well-being, and we welcome the opportunity to work with Congress and
others in the industry to develop industry-wide standards. For example,
Meta and the Aspen Institute have collaborated to advance the
collective understanding of loneliness, social connection, technology
and how they all intersect. This effort has connected more than 60
cross-sector experts thus far--from academia, health, technology,
nonprofits, and government--to share research and identify gaps to
inform future research and potential solutions. In early 2021, we also
supported the launch of the Digital Wellness Lab at Boston Children's
Hospital, a first-of-its-kind research and innovation incubator
bringing together science-based solutions and information about the
effects of digital technology on our brains, bodies, and behaviors.
Additionally, we support the bipartisan, bicameral CAMRA Act, which
would provide funding for the National Institutes of Health to study
the impact of technology and media on the cognitive, physical, and
socio-emotional development of children and adolescents.
Question 4. Has Meta performed or commissioned any studies or data
analyses to identify suicidal ideation or predict suicide risk among
its users? If so, identify and describe these studies.
Answer. As discussed in the responses to your previous questions,
we conduct research and collaborate with top scholars to navigate
various complex issues, including those related to well-being for users
on Facebook and Instagram. For additional information regarding our
approach to suicide and self-injury, including our efforts to help
protect our community, please see the response to your previous
Question 3.
Question 5. Is it technologically possible for Instagram to
identify adolescent users who are experiencing depression, anxiety,
self-harm or suicidal ideation and warn such users and/or their parents
to seek psychological or medical help?
Answer. Experts say that one of the best ways to help prevent a
suicide is for people in distress to hear from others who care about
them. Meta has a unique role to play in helping to connect people in
distress with people who can offer support. When people post or search
for suicide or self-injury-related content, we will direct them to
local organizations that can provide support, and if our reviewers
identify that someone is at immediate risk of harming themselves, we
will contact local emergency services to get them help. We work with
162 suicide and self-injury prevention helplines around the world. For
more information about our detection and enforcement efforts, please
see the response to your Question 3. For more information on our
suicide prevention efforts, please see https://www.facebook.com/safety/
wellbeing/suicideprevention.
Question 6. Frances Haugen testified before this Committee in
October 2021 regarding studies performed by Facebook in 2019
associating negative body image and eating disorders with adolescent
girls' use of Instagram. After Instagram became aware of this hazard in
2019, what efforts were made to warn adolescent users and their parents
of the potential association of eating disorders and Instagram usage?
Answer. To clarify, our research shows that Instagram helps many
teens who are struggling with some of the hardest issues they
experience. For difficult issues including eating issues, loneliness,
anxiety, and sadness, teenage girls who said they experienced these
challenges were more likely to say that Instagram made these issues
better rather than worse. And, importantly, our research did not
measure causal relationships between Instagram and real-world issues.
More broadly, we prohibit any content that celebrates, encourages,
or promotes self-injury, including eating disorders. We use technology
and reports from our community to find and remove this content as
quickly as we can, and we're always working to improve. In the third
quarter of 2021, we removed about 12 million pieces of suicide and
self-injury content (which includes eating disorder content) from
Facebook and Instagram; we detected over 96 percent of that content
before people reported it to us. A significant portion of that is
detected and removed when it is uploaded. In some cases, content
requires human review to understand the context in which material was
posted.
We're constantly working, including with global experts, to improve
in this important area. For example, on Instagram, we created a
dedicated option to report eating disorder content, making it easier to
report violating content and provide resources to those who may be
struggling. While people have always been able to report content
related to eating disorders, users now see a separate, dedicated option
to do so. In fact, we have a long track record of using research and
close collaboration with our Safety Advisory Board, Youth Advisors, and
additional experts and organizations to inform changes to our apps and
provide resources for the people who use them. For more information on
the changes we've made, please see the response to your Question 1.
We do allow people to share their own experiences and journeys
around self-image and body acceptance on our platforms because we know,
and experts agree, that these stories can prompt important
conversations and provide community support. But we also know such
content can be triggering for some. To address this, when someone tries
to search for or share eating disorder related content on Facebook or
Instagram, we blur potentially triggering images and point people to
helpful resources. Additionally, if someone tries searching for terms
related to eating disorders, we share dedicated resources, including
contacts for local eating disorder hotlines in certain countries. In
the US, for example, we surface expert informed resources, including
from the National Eating Disorder Association (``NEDA''). These
resources will also be surfaced if someone tries sharing this content.
Additionally, for those concerned that a person's post suggests they
may need help with these issues, our Help Center provides information
about eating disorders and guidance to help start a conversation with
someone who may be struggling with eating disorders. We also provide a
list of recommended Dos and Don'ts (developed with NEDA) for talking to
someone about their eating disorder. We'll continue to follow expert
advice from academics and mental health organizations, like NEDA, to
strike the balance between allowing people to share their mental health
experiences and helping protect them from content that may potentially
be harmful to them.
Additionally, for the third year in a row, we worked with NEDA to
share programming during National Eating Disorders Awareness Week in
the US. Throughout the week, community leaders shared Reels to
encourage positive body image, push back against weight stigma and
harmful stereotypes, and show that all bodies are worthy and deserve to
be celebrated.
We are taking steps to protect users on Instagram from being
exposed to content that is permissible (but possibly triggering) by
making it harder to find. We remove such posts from places where people
discover new content, including in our Explore page, and we are not
recommending accounts identified as featuring suicide or self-injury
content. In addition, when someone starts typing a known hashtag or
account related to suicide and self-harm into search, we are also
working to restrict results. And our Help Center provides information
about eating disorders and how to support someone who may be struggling
with these issues.
We don't want anyone on Instagram to feel marginalized,
particularly people with eating disorders or body image issues. While
we already work in partnership with experts to understand how to
support those affected by eating disorders, there's always more we can
learn. That's why we're hosting feedback sessions with community
leaders and experts globally to learn more about emerging issues in the
eating disorders space and new approaches for offering support.
Question 7. What steps has Meta taken to allow access to
Instagram's user data to academic researchers for the purposes of
studying the mental health effects on its user base, in particular,
children and young adults? Please explain these steps in detail.
Answer. We offer researchers a number of privacy-protective methods
to collect and analyze data. We welcome research that holds us
accountable and doesn't compromise the security of our platform or the
privacy of the people who use it. That's why we created tools like the
Ad Library and launched initiatives like Data for Good and Facebook
Open Research and Transparency (``FORT'')--to provide privacy-protected
APIs and data sets for the academic community. FORT aims to provide
academics and independent researchers with the tools and data they need
to study Facebook's impact on the world. These include the FORT
Researcher Platform, which provides a secure way for academics to
access Facebook data in a privacy-protective environment.
Meta researchers have published and shared hundreds of papers in
2021 alone. We will continue to work to publish research externally and
to engage and collaborate with experts, including in data-sharing with
researchers on issues related to young people. For example, we have
ongoing relationships with groups like the Aspen Institute and the
Humanity Center, and we are a founding sponsor of the Digital Wellness
Lab run jointly by Harvard University and Boston Children's Hospital.
We also collaborated with independent academics around the U.S. 2020
elections; we will take the methodology from the U.S. 2020 program and
apply it to well-being research. This will involve collaborative co-
design of studies and peer reviewed publication of findings.
We know there is interest in the way our platforms operate and the
steps we take to improve them. We don't shy away from that scrutiny,
and we are working to find an appropriate path forward when it comes to
communicating about our research in a way that allows us to continue to
promote full and frank discussion while also respecting the privacy of
our users.
Mandatory Arbitration. Instagram's terms of service contain a
mandatory arbitration clause that requires users to arbitrate their
claims and give up their right to participate in a class action.
These terms apply to users of all ages. There have been reports,
including in Washington state, of minor children being contacted
through Meta's platform by strangers who lure them into sex trafficking
or providing explicit information and images--sometimes via Meta's
platform(s) and sometimes using Meta services to make initial contact
before moving to other channels.
Question 1. Is it Meta's position that the arbitration and class
action provision apply to minors?
Answer. At Meta, we take the issue of safety and well-being on our
platforms very seriously, especially for the youngest people who use
our services. As per our terms, in the US, we require people to be at
least 13 years old to sign up for Facebook and Instagram. Instagram's
Terms of Use are applicable to all people when they create an Instagram
account or use Instagram.
When we become aware of content that violates our human trafficking
policy, we remove it, and, where appropriate, we refer content to
relevant authorities, including the National Center for Missing and
Exploited Children (``NCMEC'') as required by law. We also respond to
law enforcement requests related to sex trafficking. We engage with
agencies across the world that are dedicated to combatting sex
trafficking and help inform prevention efforts on our services. We have
developed strong relationships with NCMEC, the International Center for
Missing and Exploited Children (``ICMEC''), Internet Watch Foundation,
ECPAT International, Polaris, the U.S. Department of Health and Human
Services' Office of Child Support Enforcement, and other NGOs to
disrupt and prevent sex trafficking online.
We also prohibit content that endangers or seeks to sexually
exploit children through inappropriate interactions, such as obtaining
or requesting sexual material from children in messages, arranging or
planning real world sexual encounters with children, grooming or
purposefully exposing children to sexually explicit language or sexual
material, or engaging in implicitly sexual conversations in messages
with children.
Question 2. Has Meta ever sought to enforce the arbitration and
class action provision with respect to a minor? If yes, how many times?
Answer. We are aware of at least one case that appears to fit the
criteria stated in your Question. This case does not involve sexual
exploitation of a minor.
Question 3. Has Meta ever agreed to a confidential settlement--
whether in arbitration or court--of any claim for damages arising from
an allegation that a child using any of its products, including
Instagram, was harmed by Meta's action or inaction? If so, please
describe each such settlement, including the alleged harm and the year
of the settlement.
Answer. Based on a reasonable review of recently filed litigation
matters, Meta has not settled claims alleging that its products caused
physical harm or injury to a user under age 18.
Digital Advertising. A class action complaint against Facebook (now
Meta) alleged that the company purported it could run ads with a
Potential Reach that exceeded the U.S. Census Bureau's population count
of 18- to 34-year-olds in each of the 50 states.
Question 1. Does an ad's estimated Potential Reach (or ``Instagram
Reach'' or similar metric) impact the price of the ad? Please describe
the relationship between Potential Reach and similar metrics and the
price of ads.
Answer. Generally, advertisers are charged based on the number of
clicks or the number of impressions their ads receive. Meta provides
pre-campaign estimates to help advertisers understand (i) the estimated
number of people who meet the targeting (or audience selection) and ad
placement criteria they select or (ii) how many people they may be able
to reach and how many results (e.g., conversions) they can get each
day, based on their selected criteria, budget, and performance.
Historically, these included Potential Reach, Estimated Daily Results,
and--if using interest targeting categories--an estimate of the number
of people who are associated with particular interest categories based
on their activities on Facebook. Meta does not charge advertisers based
on pre-campaign estimates.
In order to make the presentation of these pre-campaign estimates
consistent, Meta recently changed Potential Reach and interests into
ranges instead of specific numbers, which is how Estimated Daily
Results were already presented. Ranges are also in line with how pre-
campaign estimates are presented on other platforms across the
advertising industry. As part of this update, Meta changed the name of
Potential Reach to Estimated Audience Size.
Estimated Audience Size generally estimates a range of how many
people meet the targeting and ad placement criteria that advertisers
select while creating an ad. Estimated Audience Size is a directional
tool to understand how advertisers' targeting choices affect target
audience size. Estimates are provided in the Ads Manager interface and
update in real time during the ad creation experience as advertisers
input and modify their targeting and placement criteria, and the number
of people who meet an advertiser's selected targeting and placement
criteria depends on many factors, including user activity, and will
change over time. Meta discloses to advertisers that Estimated Audience
Size uses a methodology that considers many factors, such as:
Ad targeting criteria and placement locations;
How many people were shown ads on Meta apps and services in
the past 30 days;
What content people interact with on Meta apps and services
(such as liking a Page);
Self-reported demographics like age and gender; and
Where people see ads (for example, in a Facebook News Feed
or Instagram Stories).
As Meta discloses to advertisers, Estimated Audience Size may vary
over time and should not be interpreted as the number of people who
will actually view particular ads. The estimated range provided may not
include the number of people who meet the targeting and placement
criteria in some instances. Estimated Audience Size is not a proxy for
monthly or daily active users or engagement. (Meta's quarterly earnings
announcements provide this information.) Estimates are not designed to
match population, census estimates, or other sources and may differ
depending on factors such as:
How many Meta apps and services accounts a person has;
How many temporary visitors are in a particular geographic
location at a given time; and
Meta user-reported demographics.
Meta has a number of systems in place to detect and remove fake
accounts. Meta discloses to advertisers that, in some cases, the
presence of fake accounts may also have some impact on unique estimates
like Estimated Audience Size.
In addition, in cases where a person's Facebook and Instagram
accounts are linked in Accounts Center, their Facebook and Instagram
accounts will be counted collectively as a single account for ads
estimation purposes. If a person's Facebook and Instagram accounts are
not connected in Accounts Center, their accounts will be counted as
multiple accounts for ads estimation purposes.
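As an illustrative sketch of the estimation behavior described above
(not Meta's methodology), the following Python fragment collapses
linked Facebook and Instagram accounts into a single person, counts the
people matching simple targeting criteria, and reports the result as a
range rather than an exact figure. The field names and the width of the
range are assumptions made for this sketch.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Account:
        account_id: str
        person_id: str    # shared by accounts linked in Accounts Center
        age: int
        country: str

    def estimated_audience_size(accounts: list[Account], min_age: int,
                                country: str,
                                range_factor: float = 0.15) -> tuple[int, int]:
        """Count deduplicated people matching the criteria; report a range."""
        people = {a.person_id for a in accounts
                  if a.age >= min_age and a.country == country}
        point = len(people)
        return round(point * (1 - range_factor)), round(point * (1 + range_factor))

    accounts = [
        Account("fb-1", "p1", 25, "US"),
        Account("ig-1", "p1", 25, "US"),  # linked to fb-1: counted once
        Account("ig-2", "p2", 31, "US"),
        Account("fb-3", "p3", 17, "US"),  # under the minimum age: excluded
    ]
    # Two deduplicated people match the criteria.
    print(estimated_audience_size(accounts, min_age=18, country="US"))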
The number of people a campaign actually ends up reaching depends
on an advertiser's budget and an ad's performance, which are not
factors that are considered in Estimated Audience Size. If there is
enough data available, Meta provides advertisers Estimated Daily
Results, which are pre-campaign predictions about how many people
advertisers can reach (``Estimated Daily Reach'') and how many results
they can get (``Estimated Daily Results'') per day if they spend their
full budget (for a daily budget) or spend it as scheduled (for a lifetime
budget).
Estimated Daily Reach and Estimated Daily Results depend on factors
like bid, budget, targeting and ad placement criteria, and campaign
performance. Estimated Daily Results appears in the Ads Manager
interface where advertisers create their ads, alongside and
immediately below Estimated Audience Size. Meta shows advertisers an
Estimated Daily Reach range as well as a range of Estimated Daily
Results specific to their chosen campaign objective. Estimated Daily
Results updates in real time as the advertiser enters or refines its
targeting criteria, placement choices, bid, and budget. The predictions
are a way to help advertisers understand what results they might get,
before having to spend any money. Additional information on Estimated
Daily Results can be found here:
https://www.facebook.com/business/help/1438142206453359?id=561906377587030.
Meta provides two main buying types for advertisements: (1) auction
buying and (2) Reach & Frequency buying. While auction buying is
available to all advertisers and used by most of them, Reach &
Frequency buying is available only to qualified advertisers.
For advertisers who use auction buying, the price of Meta ads is
based on an auction where ads compete for ad impressions based on bid
and performance. Each time there is an opportunity to show an ad to
someone, an auction takes place to determine which ad to show to that
person. Using information provided by advertisers in the ads creation
process, including the advertiser's budget, the auction determines
which of the ads in the auction is most likely to maximize total value
for the advertiser and the user--for the price the advertiser bids or
less, and never higher. The winner of the auction is the ad with the
highest total value, which is determined based on the bid placed by an
advertiser for that ad and the ad relevance. Advertisers are in control
of how much they spend: they control the overall amount they spend
through their budget and their cost per result through their bid
strategy. When an advertiser runs an ad, they are charged only for the
number of clicks or the number of impressions the ad received. Meta
provides advertisers with information about their ads' performance during
and after their campaigns, and it recommends that advertisers review their
results while a campaign is running so that they can understand what they
are getting and adjust the campaign as needed. Additional information on
Meta's ad auction system can be found
here: https://www.facebook.com/business/help/430291176997542?id=561906377587030.
Advertisers are not charged based on their Potential Reach or Estimated
Daily Results estimates.
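    To make the selection logic described above concrete, the following
minimal Python sketch picks a winning ad from a set of candidates by
combining each ad's bid with a relevance score. The scoring function,
names, and numbers are hypothetical illustrations only; the answer above
states that bid and ad relevance are considered but does not specify how
they are combined.

    from dataclasses import dataclass

    @dataclass
    class CandidateAd:
        advertiser: str
        bid: float        # the advertiser's bid for this impression opportunity
        relevance: float  # hypothetical relevance/quality score between 0 and 1

    def run_auction(candidates):
        # Illustrative only: treat "total value" as bid multiplied by
        # relevance and return the candidate with the highest total value.
        return max(candidates, key=lambda ad: ad.bid * ad.relevance)

    ads = [CandidateAd("A", bid=2.00, relevance=0.30),
           CandidateAd("B", bid=1.20, relevance=0.80)]
    print(run_auction(ads).advertiser)  # prints "B": lower bid, higher total value

    Under this illustrative scoring, a lower bid can still win if the ad is
sufficiently more relevant, which is consistent with the statement above
that the auction seeks to maximize total value for the advertiser and the
user rather than simply selecting the highest bid.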
Reach and Frequency is an alternative method for buying ads on Meta
that allows advertisers to book campaigns in advance with predictable,
optimized reach and controlled frequency. Reach and Frequency allows an
advertiser to specify the number of times their audience will see their
ads, the days they see the ads, the times of day they see the ads, and
the order in which they see the ads. Reach and Frequency advertisers
pay a fixed cost per 1,000 impressions (``CPM'') based on the
advertiser's chosen audience details and budget. Once a campaign
order is placed, the CPM an advertiser pays for impressions will not
change, provided the campaign is not paused.
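    As a simple worked example with hypothetical numbers, an advertiser who
books 2,000,000 impressions under Reach and Frequency at a fixed CPM of
$8.00 would pay (2,000,000 / 1,000) x $8.00 = $16,000, and that rate would
remain fixed for the campaign unless the campaign is paused.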
Question 2. In what ways can an advertiser on Meta's platforms
verify the Potential Reach or similar metric of its advertising? What
data does Meta make available to provide advertisers assurance that the
reported reach is accurate? Does Meta use independent third parties to
verify the Potential Reach metrics and methods or similar advertising
metrics or methods? If so, please list the third parties and provide a
copy of any reports issued.
Answer. Please see the response to your Question 1. As noted in
that response, Estimated Audience Size and Estimated Daily Results are
pre-campaign estimates, meaning they are estimates provided to
advertisers before they publish an ad and the ad begins to run (as
distinct from reported metrics once an ad is running). Estimated
Audience Size is a directional tool to help advertisers understand how
their targeting choices affect target audience size and is not an
estimate of how many people will see an ad. Estimated Daily Results are
a way to understand what results an advertiser could get before having
to spend any money. As Meta discloses to advertisers, confidence in these
estimates can be affected by factors such as volatility in the
performance of an ad and whether or not an advertiser's entire budget
will be spent.
Estimated Audience Size is classified as an estimated metric,
meaning that it is derived through statistical sampling or modeling
rather than a straight count. Meta explains to advertisers that
sampling lets Meta look at a portion of data that represents a larger
population included in an entire set of data.
Like Estimated Audience Size, Meta tells advertisers that Estimated
Daily Results, including Estimated Daily Reach, are estimated and
sampled metrics. For example, to generate Estimated Daily Reach, Meta
uses internal data to identify a sampling of people who generated ad
impressions and who meet the targeting and placement criteria that an
advertiser has entered into Ads Manager and runs a simulation to
estimate how many people an advertiser might reach per day.
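    The sampling-and-simulation approach described above can be illustrated
with a minimal Python sketch. The data source, sample size, simulated
response rate, and scaling factor below are hypothetical stand-ins for
illustration and do not represent Meta's internal data or methodology.

    import random

    def estimate_daily_reach(eligible_user_ids, sample_size, daily_hit_rate,
                             scale_factor):
        # Illustrative only: sample people who recently generated ad
        # impressions and meet the advertiser's targeting and placement
        # criteria, simulate how many would be reached in a day, then scale
        # the sampled result back up to the full population.
        sample = random.sample(eligible_user_ids,
                               min(sample_size, len(eligible_user_ids)))
        reached_in_sample = sum(1 for _ in sample
                                if random.random() < daily_hit_rate)
        return int(reached_in_sample * scale_factor)

    # Hypothetical usage: 1,000,000 eligible people, sampled at 10,000.
    eligible = list(range(1_000_000))
    estimate = estimate_daily_reach(eligible, sample_size=10_000,
                                    daily_hit_rate=0.2, scale_factor=100)
    print(f"Illustrative Estimated Daily Reach: about {estimate:,} people per day")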
These pre-campaign estimates are based on data available to Meta.
Meta does not use third parties to verify its pre-campaign estimates,
which are neither reported as campaign results nor billed to advertisers.
Once advertisers launch an ad campaign, Meta provides them with
real-time, actual results that the advertisers can use to assess their
ads' performance, including on Facebook, Instagram, and Audience
Network, and to compare performance to prior campaign results or the
advertisers' own data (if any). This data reflects the results an ad is
receiving and is available to advertisers on Meta's Ads Manager.
Impressions from certain placements are accredited by the Media Rating
Council.
Question 3. Identify every instance in which Meta previously
learned of inaccuracy in its Potential Reach metric (or ``Instagram
Reach'' or similar metric). In doing so, please describe the nature of
the inaccuracy, and the correction, if any, that Meta took to address
the inaccuracy.
Answer. Please see the responses to your Questions 1 and 2.
Estimated Audience Size is an estimated range of the number of people
who meet the targeting and ad placement criteria an advertiser selects.
It varies over time, and it is not an estimate of how many people will
see an ad. Estimated Daily Results, including Estimated Daily Reach,
are also pre-campaign estimates, and an ad's actual results will depend
on numerous other factors once an ad begins running. These estimates
are not exact calculations or guarantees of an ad's performance.
Meta endeavors to continually maintain, evolve, and improve its
product offerings to users (including advertisers), including
directional estimate tools like Estimated Audience Size and Estimated
Daily Reach. Over the past several years, Meta has made modifications
to Estimated Audience Size (previously Potential Reach), and it has
updated its disclosures to advertisers to explain improvements and
provide even more transparency. Meta also monitors its data pipelines for
bugs or other issues that may affect these estimates.
In 2017, Meta received press reports about differences between
Estimated Audience Size (then called Potential Reach) and governmental
census estimates. While Estimated Audience Size estimates how many
people within specified Meta platforms meet the targeting and ad
placement criteria that advertisers select while creating an ad, the
census counts how many people are included in a given population. Each
government conducts its census differently, with varying accuracy
depending on the country's methodology. These distinctions give rise to
differences between these measurements.
Meta has also identified other factors that may cause the estimates
to differ from population, census estimates, or other sources,
including: (1) how many Meta apps and services accounts a person has;
(2) how many temporary visitors are in a particular geographic location
at a given time; and (3) user-reported demographics.
Meta has made a number of updates to its description of Estimated
Audience Size (previously Potential Reach) to provide even more
information to advertisers:
Estimated Audience Size (1) is not designed to match
population, census estimates, or other sources; (2) updates in
real time based on a methodology that considers factors like ad
targeting criteria and placement locations, how many people
were shown ads on Meta apps and services in the past 30 days,
what content people interact with on Meta apps and services
(such as liking a Page), self-reported demographics like age
and gender, and where people see ads; and (3) may differ
depending on factors such as how many Meta apps and services
accounts a person has, how many temporary visitors are in a
particular geographic location at a given time, and Meta user-
reported demographics.
Meta may count a user's actions separately if they have more
than one account and take actions (such as liking photos or
adding comments) on the separate accounts.
In cases where a person has connected their Facebook and
Instagram accounts in Accounts Center, their Facebook and
Instagram accounts will be counted collectively as a single
account for ads estimation purposes. If a person, however, has
not connected their Facebook and Instagram accounts in Accounts
Center, their accounts will be counted as multiple accounts for
ads estimation purposes.
Meta has a number of systems in place to detect and remove
fake accounts, but in some cases, the presence of fake accounts
may have some impact on unique metrics, such as potential reach
estimates.
In addition, prior to March 2019, Estimated Audience Size was
estimated based on data reflecting people who were active on Meta's
platforms and were eligible to see an ad over the immediately preceding
30-day period. In March 2019, Meta changed how it calculated Estimated
Audience Size to base it on how many people who match the advertiser's
selected audience and placement criteria have been shown an ad on a
Meta product in the past 30 days. This change did not impact an
advertiser's actual target audience, delivery results, or how it is
charged for ads; it only impacted the Estimated Audience Size
estimates.
Most recently, in order to make the presentation of these pre-
campaign estimates consistent, Meta changed Potential Reach into ranges
instead of specific numbers, which is how Estimated Daily Results were
already presented. As part of this update, Meta changed the name of
Potential Reach to Estimated Audience Size.
    Question 4. Describe all research Meta has conducted on the
accuracy of Potential Reach (or ``Instagram Reach'' or similar metric) and
the financial or other harm that inaccuracies in such metrics have (or
potentially have) on advertisers.
Answer. Please see the responses to your Questions 1, 2, and 3.
Advertisers are not charged based on Estimated Audience Size or other pre-
campaign estimates of the size of a target audience. Instead, Estimated
Audience Size is meant to help advertisers understand how their targeting
and placement choices affect the size of their target audience.
Advertisers are charged based on the number of clicks or the number of
impressions their ads received.
Question 5. What percentage of Meta's revenue comes from
advertising?
    Answer. Meta generates substantially all of its revenue from
selling advertising placements to third parties. Our total revenue for
each of the past five years, and the percentage of that revenue that came
from third-party ads, is disclosed in our public SEC filings and provided below.
2021: $117.929 billion (97 percent from third-party ads)
2020: $85.965 billion (98 percent from third-party ads)
2019: $70.697 billion (98.5 percent from third-party ads)
2018: $55.838 billion (98.5 percent from third-party ads)
2017: $40.653 billion (98 percent from third-party ads)
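    Applying the disclosed percentages to those totals, for example,
approximately 0.97 x $117.929 billion, or roughly $114.4 billion, of 2021
revenue was attributable to third-party advertising.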
Question 6. What representations does Meta make to advertisers
regarding brand safety?
Answer. Brand safety allows advertisers to control where their ads
are delivered on Facebook, Instagram, WhatsApp, and Audience Network.
We offer several brand safety controls for preventing ads from running
alongside certain types of content on Audience Network, Facebook, and
Instagram. These controls include:
Placement Controls. When they create an ad, advertisers can
choose where they want their ad to appear on Facebook,
Instagram, Messenger, and Audience Network. They can also opt
out of specific placements if they don't want their ads to run
within those environments.
Inventory Filter. Inventory Filter gives advertisers an
extra layer of control over sensitive content, allowing them to
choose Full Inventory, Standard Inventory, or Limited Inventory
on Facebook Instant Articles, Facebook in-stream video,
Facebook overlay ads in Reels, Instagram in-stream video, and
Audience Network. Inventory Filter allows advertisers to
control the type of content their ad appears within. Remember
that only apps, Facebook Pages, and Instagram accounts that
comply with our policies can be part of these placements in the
first place.
Topic Exclusions for Facebook In-Stream Videos. By excluding
specific topics, advertisers can choose which published
Facebook in-stream videos they want their ads to display on.
Advertisers can choose content-level exclusions from four
different topics: news, politics, gaming, and religious and
spiritual content.
    Content Type Exclusions. On the Facebook in-stream video
placement, ads can appear in partner live streams or in videos
claimed by rights holders; advertisers can exclude either content type.
Live Streams. Advertisers can prevent Facebook in-stream
video ads from appearing in live videos.
Videos Claimed by Rights Holders. Advertisers can stop
Facebook in-stream video ads from appearing in videos that were
published by non-partners but are being monetized by the rights
holders.
Block Lists. Block Lists prevent ads from running on
specific publishers within Audience Network, Facebook in-stream
videos, Facebook Instant Articles, Instagram in-stream videos,
and Facebook overlay ads in Reels.
Publisher Allow Lists. A publisher allow list is a list of
Audience Network, Instagram in-stream, and Facebook in-stream
video publishers that advertisers choose for their ads to
appear on.
Content Allow List. Content allow lists give advertisers the
ability to work with trusted Meta Business Partners to review
and customize lists of brand suitable videos for running
Facebook in-stream video campaigns.
Publisher Lists. Our publisher list details the URLs where
we could place an ad on Audience Network, Facebook in-stream
videos, Facebook Instant Articles, Instagram in-stream videos,
and Facebook overlay ads in Reels. Advertisers can download the
list and review the list, then copy chosen URLs into block
lists or publisher allow lists. They can also search, sort and
filter publishers in the Brand Safety Controls interface to
spot check publishers without downloading the full publisher
list.
Delivery Reports. Delivery reports provide advertisers with
access to impression-level data at the publisher and content
levels, giving greater transparency into the individual pieces of
content in which their ads were embedded.
Advertisers can use these controls in combination or on their own.
We also enforce Community Standards for the content individuals and
publishers can share. More information on Meta's brand safety approach
and controls can be found here:
https://www.facebook.com/business/help/1559334364175848?id=1769156093197771;
https://www.facebook.com/business/help/1926878614264962?id=1769156093197771;
https://www.facebook.com/business/good-questions/safety;
https://www.facebook.com/business/help/2116031745379221?id=1769156093197771; and
https://www.facebook.com/business/news/introducing-more-control-for-brands-and-people-in-feed.
Question 7. In what ways can an advertiser on Meta's platforms
verify Meta's brand safety claims? What data does Meta make available
to provide advertisers assurance that brand safety promises are kept?
Does Meta use independent third parties to verify that its brand safety
promises are kept? If so, please list the third parties and provide a
copy of any reports issued.
Answer.
Community Standards Enforcement Report--As part of our
continued commitment to transparency, we publish our Community
Standards Enforcement Report on a quarterly basis detailing our
progress in preventing and/or taking action against content
that violates our policies. We measure the prevalence of
violating content to gauge how we're performing against that
goal. Reports can be found at https://transparency.fb.com/. In
2020, we committed to undergoing an independent evaluation of
our Community Standards Enforcement Report to validate that our
metrics are measured and reported correctly. We have selected
Ernst & Young to assess the metrics shared in our 2021 fourth
quarter report, and we plan to share the results in Spring of
2022.
MRC Accreditation--In July 2020, we began our work and audit
with the Media Rating Council (``MRC'') to seek accreditation
of our Partner and Content Monetization Policies and Brand
Safety and Suitability controls. This audit is underway with
results expected to be published in the first half of 2022.
Oversight Board--We established an Oversight Board with
independent judgment to help Facebook answer some of the most
significant and difficult questions around freedom of
expression online: what to take down, what to leave up, and
why. Additional information can be found here:
https://transparency.fb.com/oversight/.
    Trustworthy Accountability Group--On September 10, 2020, the
Trustworthy Accountability Group (``TAG''), a global program in
digital advertising, announced Facebook's inclusion in its inaugural
TAG Brand Safety Certified Program. With the completion of our 2020
audit, our certification now continues under the new TAG seal and has
expanded from a United Kingdom (``UK'')-only certification to a global
one. On September 23, 2020, Facebook was awarded the Internet
Advertising Bureau (``IAB UK'') Gold Standard 2.0.
Joint Industry Committee for Web Standards (``JICWEBS'')--In
September 2019, Facebook successfully completed the JICWEBS
Digital Trading Standards Group's brand safety audit. As a
result, Facebook, Instagram, and Audience Network were listed
as Digital Trading Standards Group (``DTSG'') brand safety
certified for demonstrating commitment to brand protection and
taking steps to reduce the risk of unsafe ad placements.
Partnership with industry leading bodies--We continue to
work with industry bodies and other tech platforms, like the
Global Alliance for Responsible Media (``GARM''), to share
knowledge, build consensus and help make all platforms safer
for people and advertisers.
Third-Party Brand Safety Partners--Advertisers can also work
with third parties to manage their brand safety controls for
their campaigns. The Meta Business Partner Brand Safety program
recognizes companies offering proprietary solutions that can
help advertisers review content options and control where their
ads will appear. Additional information can be found here:
https://www.facebook.com/business/partner-directory/search?solution_type=measurement&capabilities=Brand%20Safety.
Third-Party Fact-Checking Partners--Because we want people
to see accurate information and because misinformation can be
harmful to our community, we work hard to fight it. To that
end, Meta prohibits ads that include claims debunked by third-
party fact checkers or, in certain circumstances, claims
debunked by organizations with particular expertise.
Advertisers that repeatedly post information deemed to be false
may have restrictions placed on their ability to advertise.
Meta works with independent, International Fact-Checking
Network certified fact-checkers who identify, review, and rate
viral misinformation across our platforms. We rely on
independent fact-checkers to review and rate the accuracy of
stories through original reporting, which may include
interviewing primary sources, consulting public data, and
conducting analyses of media, including photos and videos.
Each time a fact-checker rates a piece of content as false on our
platforms, we significantly reduce that content's distribution
so that fewer people see it, label it accordingly, and notify
people who try to share it. Fact-checkers do not remove
content, accounts, or Pages from our apps. We remove content
when it violates our Community Standards, which are separate
from our fact-checking programs.
Please see the response to your Question 6. Additional information
on how advertisers can manage and evaluate brand safety settings is
available here:
https://www.facebook.com/business/help/297590664193809?id=1769156093197771.
Question 8. Meta states that it restricts a variety of advertising
content on its platforms, including content relating to alcohol,
gambling, and other age-inappropriate categories. To do so, it relies
on a Facebook ad review system that primarily uses automated tools to
ensure that inappropriate content does not reach those under the age of
18. Has Meta ever studied, or is Meta aware of any studies, that have
determined the accuracy of these automated tools? If so, identify and
provide copies of the studies and their results. In addition, describe
what actions, if any, Meta took in response to such studies and
results.
Answer. As your Question recognizes, our Advertising Policies
restrict certain content based on age. Ads targeted to minors must not
promote products, services, or content that are inappropriate, illegal,
or unsafe, or that exploit, mislead, or exert undue pressure on the age
groups targeted. Ads that promote or reference alcohol, for instance,
must only be targeted to people 21 years or older in the US. Similarly,
any ads marketing weight loss products and services, cosmetic
procedures, gambling, or dating services, among other topics, must be
targeted to people 18 years or older at a minimum. If someone whom Meta
knows to be under a certain age attempts to view a Page or account
with an age restriction, they will be blocked from viewing it.
Additionally, in 2021, we began limiting advertisers' ability to
target ads to people under 18 (or older in certain countries), allowing
them only to target based on their age, gender, and location. This
means that previously available targeting options for users under 18,
like those based on interests or on their activity on other apps and
websites, are no longer available to advertisers. These changes are
global and apply to Instagram, Facebook, and Messenger. When young
people become adults, we notify them about targeting options that
advertisers can now use to reach them and the tools we provide to them
to control their ad experience.
We have several mechanisms for advertisers and Page admins to
control the audience eligible to view the content they produce. When an
advertiser decides to create an ad, we provide age and location
targeting options during the ad creation process. The advertiser must
comply with our Advertising Policies and any applicable local laws, and
they can do so, for example, by specifying that their ads be shown only
to users that meet a minimum age or are located in a specific country.
Page admins can also use age restrictions to limit the audience of
their Page.
When it comes to detection and enforcement, our advertising review
system is designed to review all ads before they go live, including
those related to alcohol, weight loss products, gambling, and other
restricted topics. This system relies primarily on automated technology
to apply our Advertising Policies to the millions of ads that run
across our apps. While our review is largely automated, we rely on our
teams to build and train these systems, and in some cases, to manually
review ads. For additional information, please see: https://
www.facebook.com/business/news/facebook-ad-policy-process-and-review.
Ad review is typically completed within 24 hours, but it may take
longer, and ads can be reviewed again, including after they're live.
Based on the results of the review, an ad is either rejected or allowed
to run. If an ad is rejected, an advertiser can create a new ad--either
with new ad creative or by revising the rejected ad--or request another
review if they believe their ad was incorrectly rejected. Unlike the
initial ad review, we rely more heavily on teams of human reviewers to
process re-review requests from advertisers, but we are continuously
assessing ways to increase automation.
We also have reporting, authenticity, and transparency features to
encourage advertiser accountability. People can report ads they believe
violate our policies by clicking the three dots in the upper right hand
corner of the ad. These reports are an important signal for our
advertising review systems and may prompt a re-review of the ad. This
feedback also helps to improve our policies and enforcement.
Beyond reviewing individual ads, we may also review and investigate
advertiser behavior, like the number of previous ad rejections and the
severity of the type of violation, including attempts to get around our
advertising review process. Advertisers who violate our policies may
have actions taken against them, including losing the ability to run
ads.
Reviewing ads from millions of advertisers globally against our
Advertising Policies can be challenging. Our enforcement isn't perfect,
and both machines and people make mistakes. When we launch a new
policy, it can take time for the various parts of our enforcement
system, both automated technology and trained global teams, to learn
how to correctly and consistently enforce the new standard, but as we
gather new data and feedback, our machine learning models, our
automated enforcement, and our manual review teams improve. We
regularly assess and continue to make improvements to our review system
to improve our detection of ads that violate our policies and to help
protect young people from seeing inappropriate ads. We make changes
based on trends in the ads ecosystem and adjust for new tactics that we
find from people misusing the platform.
Question 9. Can advertisers on Instagram direct ads to users of a
specific gender, age, or sexual orientation?
Answer. We strongly believe that the best advertising experiences
are personalized. They enable people to discover products and services
from small businesses that may not have the ability to market them on
broadcast television or other forms of media. They also enable
nonprofits, social causes, and organizations to reach the people most
likely to support and benefit from them, such as connecting people to
fundraisers for charitable causes they care about.
At the same time, we want to better match people's evolving
expectations of how advertisers may reach them on our platform and
address feedback from civil rights experts, policymakers, and other
stakeholders on the importance of preventing advertisers from abusing
the targeting options we make available.
    Meta's ads tools generally allow marketers to select audiences for their
ads based on a variety of factors including age, gender, location,
interests, and behaviors. These audience selection (or targeting) tools
are not available for ads related to housing, employment, and credit.
Recently, we also announced specific changes to detailed targeting
options.
    As of January 19, 2022, we are removing detailed targeting options that
relate to topics people may perceive as sensitive, such as options
referencing causes, organizations, or public figures that relate to
health, race or ethnicity, political affiliation, religion, or sexual
orientation. Examples include:
Health causes (e.g., ``Lung cancer awareness,'' ``World
Diabetes Day,'' ``Chemotherapy'')
Sexual orientation (e.g., ``same-sex marriage'' and ``LGBT
culture'')
Religious practices and groups (e.g., ``Catholic Church''
and ``Jewish holidays'')
Political beliefs, social issues, causes, organizations, and
figures
Additionally, we give people ways to tell us that they would rather
not see ads based on their interests or on their activities on other
websites and apps, such as through controls within our ad settings. We
also know that young people may not be well equipped to make these
decisions on their own, which is why we take a more precautionary
approach. In 2021, we began limiting advertisers' ability to target ads
to people under 18 (or older in certain countries), allowing them only
to target based on their age, gender, and location. This means that
previously available targeting options for users under 18, like those
based on interests or on their activity on other apps and websites, are
no longer available to advertisers. These changes are global and apply
to Instagram, Facebook, and Messenger. When young people become adults,
we notify them about targeting options that advertisers can now use to
reach them and the tools we provide to them to control their ad
experience.
Question 10. Does a user's gender play any role, directly or
indirectly, in the content or advertising displayed to the user?
Answer. Please see the response to Question 9 about ad targeting
options available to advertisers when selecting audiences for their
ads. Ads are ranked for delivery using different sets of machine
learning models that are constantly learning and changing. There are
hundreds of models that use billions of data points, including gender.
The machine learning models are only one part of the ads delivery
process. Before ranked ads are delivered to a particular user, they
compete against each other in an auction and against other content,
such as posts from friends and family, for the limited space available
in user News Feeds and other placements. As a result, no single input
is determinative of the delivery of a particular ad to a particular
user, let alone of how an ad is ranked. Accordingly, we do not believe
it is accurate to characterize gender as ``playing a role'' in what is
displayed to users, but provide this information in the interest of
transparency.
______
Response to Written Questions Submitted by Hon. Richard Blumenthal to
Adam Mosseri
Requirements for internal audits. Technology companies should not
be allowed to put products on the market without knowing they are safe.
While Facebook and Instagram were doing research on teen mental health,
they hid the research and results from parents and stonewalled
Congress. Under the U.K.'s Children's Code, Instagram is now required
to conduct impact assessments on risks to children.
Question 1. Does Instagram support extending the U.K.'s Children's
Code's self-assessment obligation to the United States? If not, please
explain why not.
Answer. Meta recognizes the Age Appropriate Design Code (``AADC'')
as a valuable source of guidance in the global approach to youth and
the potential value of a U.S. equivalent. The privacy, safety, and
well-being of young people on our platforms are essential to our
services, and the AADC is one of the inputs that informs the expansive
work we do every day to protect the safety and privacy of young people
using our apps globally. We welcome the ``best interests of the child''
approach and support a holistic view of the best interests of the
child standard in assessing the appropriateness of specific practices
and in product development. We believe that adopting principles-based
approaches that are consistent across different jurisdictions is
necessary to enable effective and scalable technology-driven solutions
to protect young people online globally.
In the past several years, we've reflected on the AADC and
incorporated the standards as a starting point to develop internal
guidance on designing our products and features with youth in mind.
Meta has also leveraged the AADC, in addition to other guidance from
around the world, to improve internal structures and help product teams
ensure the best interest standard is directly embedded in the product
development process.
The reality is that keeping young people safe online is not just
about one company. We continue to welcome productive collaboration with
lawmakers and elected officials. Regulatory frameworks like the AADC
underpin the work we're doing to create privacy and safety standards
for building youth products at Meta. In fact, we believe there should
be an industry body that will determine best practices when it comes to
at least three questions: how to verify age, how to design age-
appropriate experiences, and how to build parental controls. This body
should receive input from civil society, parents, and regulators to
create standards that are high and protections that are universal. And
we believe that companies like ours should have to adhere to these
standards to earn some of our Section 230 protections. In addition, the
body could take steps to require each member to publish regular reports
on the progress they are making against each standard and to develop a
free and accessible information hub for parents and educators. This
proposal is a work in progress, but we hope that it will contribute to
the ongoing discussion about how appropriate regulation can help us
address these critical issues. In the meantime, we will continue to
push forward on safety and well-being for young people online.
Question 2. Please list all internal audits or assessments about
the efficacy of the measures Instagram currently takes to protect kids.
If none, please explain why not, as this is now required under U.K.
law.
Answer. We are constantly reassessing our approach to children's
privacy to ensure that we're designing age-appropriate products and
experiences for youth. On all products, including Instagram, internal
assessments help us ensure that our approach is appropriate and
effective. As previously noted, the internal guidance we've developed
for product teams on designing products and features with youth in mind
leverages the AADC, but it is also shaped by and reflects the
expertise of internal and external child safety, policy, and privacy
specialists.
    Meta has also established a significant cross-functional and cross-
jurisdictional team that is focused on addressing youth-related
regulatory requirements, including the AADC. This team consists of
approximately 500 people from a wide variety of teams within the
company, including Product; Engineering; Legal; Policy (Safety,
Privacy, and Public); Communications; Marketing; and Research. The work
from this cross-functional team informs both existing product changes
and new product development for youth.
Access for independent researchers. Mr. Mosseri, you testified that
``there should be requirements and standards for how companies like
ours are transparent about both data and algorithms.'' Qualified,
independent researchers not funded by an industry body should be able
to study and assess the impact that Instagram has on children and teens
without onerous barriers.
Question 3. How does Instagram currently support requests regarding
access to its datasets and information about its algorithms from any
independent researcher (i.e., any academic or civil society researcher
who is not an employee of or subcontractor for Instagram) that wishes
to study children and teens' mental health and well-being?
Answer. We offer researchers a number of privacy-protective methods
to collect and analyze data. We welcome research that holds us
accountable and doesn't compromise the security of our platform or the
privacy of the people who use it. That's why we created tools like the
Ad Library and launched initiatives like Data for Good and Facebook
Open Research and Transparency (``FORT'')--to provide privacy-protected
APIs and data sets for the academic community. FORT aims to provide
academics and independent researchers with the tools and data they need
to study Facebook's impact on the world. These include the FORT
Researcher Platform, which provides a secure way for academics to
access Facebook data in a privacy-protective environment.
Meta researchers have published and shared hundreds of papers in
2021 alone. We will continue to work to publish research externally and
to engage and collaborate with experts, including in data-sharing with
researchers on issues related to young people. For example, we have
ongoing relationships with groups like the Aspen Institute and the
Humanity Center, and we are a founding sponsor of the Digital Wellness
Lab run jointly by Harvard University and Boston Children's Hospital.
We also collaborated with independent academics around the U.S. 2020
elections; we will take the methodology from the U.S. 2020 program and
apply it to well-being research. This will involve collaborative co-
design of studies and peer-reviewed publication of findings.
Relatedly, we also work to be transparent when it comes to
describing how Instagram works. We've recently published blog posts
that explain how we personalize people's experiences and how our search
technology works. For more information, please visit
https://about.instagram.com/blog/announcements/shedding-more-light-on-how-instagram-works
and https://about.instagram.com/blog/announcements/break-down-how-instagram-search-works.
We know there is interest in the way our platforms operate and the
steps we take to improve them. We don't shy away from that scrutiny,
and we are working to find an appropriate path forward when it comes to
communicating about our research in a way that allows us to continue to
promote full and frank discussion while also respecting the privacy of
our users.
Question 4. If Instagram restricts the types of research, datasets,
studies, or other relevant information or freedom to publish findings
for independent researchers, please detail these restrictions and its
reasons for doing so.
Answer. Please see the response to your Question 3.
Pro-eating disorder content. I am stunned that Instagram has done
so little to address the eating disorder content that is rampant on its
platform. In March, according to whistleblower documents, a Facebook
engineer raised alarms that Instagram had not kept up with the risks
associated with users promoting and glorifying eating disorders. The
Tech Transparency Project released a report this month that found,
after just four days, that anorexia ``coaches'' began reaching out to
an account that resembled a young teen. Instagram is not detecting and
removing clearly dangerous material, despite repeated warnings,
internal reports, and harrowing tragedies.
Question 5. What specific steps has Instagram taken to detect and
remove content promoting and glorifying eating disorders since its
warning in March?
Answer. We don't allow anyone to encourage or promote eating
disorders on Instagram, which includes offers of coaching or
instructions, and we remove this content whenever we become aware of
it. For example, we block hashtags that break our rules, such as
#thinspo, #proana, and #anabuddy. In the third quarter of 2021, we
removed about 12 million pieces of suicide and self-injury content
(which includes eating disorder content) from Facebook and Instagram;
we detected over 96 percent of that content before people reported it
to us. A significant portion of that is detected and removed when it is
uploaded. In some cases, content requires human review to understand
the context in which material was posted. We'll continue to follow
expert advice from academics and mental health organizations, like the
National Eating Disorder Association (``NEDA''), to strike the balance
between allowing people to share their mental health experiences and
helping protect them from content that may potentially be harmful to
them.
We're constantly working, including with global experts, to improve
in this important area. For example, on Instagram, we recently created
a dedicated option to report eating disorder content, making it easier
to report violating content and provide resources to those who may be
struggling. While people have always been able to report content
related to eating disorders, users will now see a separate dedicated
option to do so.
We do allow people to share their own experiences and journeys
around self-image and body acceptance on our platforms because we know,
and experts agree, that these stories can prompt important
conversations and provide community support. But we also know such
content can be triggering for some. To address this, when someone tries
to search for or share self-harm related content on Facebook or
Instagram, we blur potentially triggering images and point people to
helpful resources. Additionally, if someone tries searching for terms
related to eating disorders, we share dedicated resources, including
contacts for local eating disorder hotlines in certain countries. In
the US, for example, we surface expert informed resources, including
from NEDA. These resources will also be surfaced if someone tries
sharing this content. Additionally, for those concerned that a person's
post suggests they may need help with these issues, our Help Center
provides information about eating disorders and guidance to help start
a conversation with someone who may be struggling with eating
disorders. We also provide a list of recommended Dos and Don'ts
(developed with NEDA) for talking to someone about their eating
disorder.
For the third year in a row, we worked with NEDA to share
programming during National Eating Disorders Awareness Week in the US.
Throughout the week, community leaders shared Reels to encourage
positive body image, push back against weight stigma and harmful
stereotypes, and show that all bodies are worthy and deserve to be
celebrated.
Additionally, we are taking steps to protect vulnerable users on
Instagram from being exposed to content that is permissible (but
possibly triggering) by making it harder to find. We remove such posts
from places where people discover new content, including in our Explore
page, and we are not recommending accounts identified as featuring
suicide or self-injury content. In addition, when someone starts typing
a known hashtag or account related to suicide and self-harm into
search, we are also working to restrict results. And our Help Center
provides information about eating disorders and how to support someone
who may be struggling with these issues.
We don't want anyone on Instagram to feel marginalized,
particularly people with eating disorders or body image issues. While
we already work in partnership with experts to understand how to
support those affected by eating disorders, there's always more we can
learn. That's why we're hosting feedback sessions with community
leaders and experts globally to learn more about emerging issues in the
eating disorders space and new approaches for offering support.
School disruptions. I have heard from teachers and educators in
Connecticut that social media apps--Instagram specifically--disrupt the
educational process. One teacher told me that she effectively could not
teach in the afternoons because students would spend the rest of the
day distracted by what happened online during lunch. The Wall Street
Journal also recently reported about ``gossip'' accounts on Instagram
and other platforms, accounts students create to spread rumors about
each other, cyberbully other students, and instigate fights.
Question 6. What support does Instagram provide to teachers and
school administrators? How long has it offered this support?
Answer. As educators nationwide continue to navigate the ongoing
challenges of teaching during an unprecedented time, we believe that
supporting educators is crucial. That's why we work to help educators
find and build community, discover resources, and learn about other
tools to support them and their learning communities via our Educator
Hub. For more information, please visit
https://www.facebook.com/fb/education/educator-hub.
Additionally, in 2018, Meta announced a partnership with the
National Parent Teacher Association (``National PTA'') to launch
Digital Families Community events across the country. In 2019, 200
community safety events took place in all 50 states to help families
address tech-related challenges, from online safety and bullying
prevention to digital and news literacy. The toolkits for these events
were developed with experts including the Youth and Media Team at the
Berkman Klein Center for Internet and Society at Harvard. The events
included interactive workshops for families on healthy online habits
and a family tech talk around family social media values and social
media and phone ``off times.''
In 2020, we also launched new Resources for Educators that
specifically address digital literacy. Get Digital provides lessons and
resources based on many years of academic research by our expert
partners to help young people develop the competencies and skills they
need to navigate the Internet more safely. These resources are designed
to be used by educators and families both in the classroom and at home.
Get Digital allows students to perform a deep dive into core
digital citizenship and well-being skills, and learn how to:
Get connected and leverage digital tools to stay safe while
navigating information in the digital world.
Use technology to explore their identities and engage with
others in positive ways to protect their health and well-being
while online.
Interpret cultural and social differences, respond and
engage respectfully, and evaluate, create, and share different
types of media content.
Participate in public matters and advocate for issues they
care about.
Learn the skills they need to fully leverage the
opportunities the digital world may offer.
Get Digital also offers a facilitator's guide specifically designed
for after school programming to support educators' usage of these
digital literacy lessons in an after school setting. To support
teachers as they look to incorporate digital citizenship and well-being
lessons into their curriculum, we designed five different PowerPoint
professional development guides. These can be used by teacher leaders
to train educators on how to use these materials in the classroom.
We partnered with the United Nations Educational, Scientific and
Cultural Organization (``UNESCO''), the International Society for
Technology in Education, National PTA, and EVERFI to distribute our new
digital literacy tools to parents and educators around the world.
Lessons are drawn from the Youth and Media team at the Berkman Klein
Center for Internet & Society at Harvard University, which has made
them freely available worldwide under a Creative Commons license, and
the Greater Good Science Center. And in the US, we've collaborated with
The Child Mind Institute and ConnectSafely to publish a new Parents
Guide. It includes the latest safety tools and privacy settings and a
list of tips and conversation starters to help parents navigate
discussions with their teens about their online presence.
We've also partnered with the Jed Foundation to release our
`Pressure to be Perfect' toolkit, a guide for parents and teens on how
to manage social comparison on Instagram. It includes information on
how to support positive teen expression as well as tips for how young
people can share their stories authentically and find supportive
communities online.
Finally, we've collaborated with Yale University's Center for
Emotional Intelligence and Lady Gaga's Born This Way Foundation to
create inspirED, a free program designed to help young people build a
more positive school climate. inspirED provides free resources,
designed by teens, educators, and social emotional learning experts
that empower students to work together to create more positive school
climates and foster greater well-being in their schools and
communities. By engaging with inspirED, teams across the Nation are
empowered to change the way that students and teachers feel in school
every day.
For more information regarding our efforts to combat bullying,
please see the answer to your Question 7 below.
Question 7. What steps, if any, has Instagram taken to address
``gossip'' accounts? If accounts were removed in response to reports,
how long did it take Instagram to act?
Answer. We take the issues of safety and well-being on our
platforms very seriously, especially for the youngest people who use
our services. We are committed to working with parents and families, as
well as experts in child development, online safety, and children's
health and media, to ensure we are building better products for
families.
We prohibit bullying, hate speech, or harassment on our platform,
and we use a combination of user reports and technology to find and
remove this type of content. If an account repeatedly breaks our
Community Guidelines, by posting hate speech and bullying content for
example, we will remove it from Instagram.
Our Hate Speech policy prohibits attacks against individuals based
on a number of protected characteristics, including race, ethnicity,
national origin, disability, religious affiliation, caste, sexual
orientation, sex, gender identity, and serious disease. Our Bullying
and Harassment policies are applied to a broad range of content that
attacks individuals, including content that's meant to degrade or
shame. We recognize that bullying and harassment can have more of an
emotional impact on minors, which is why our policies provide
heightened protections for users between the ages of 13 and 18. And in
October 2021, we updated our policies on online bullying and harassment
to help protect people from mass harassment and intimidation from
multiple accounts. We now remove coordinated efforts of mass harassment
that target individuals at a heightened risk of offline harm, for
example victims of violent tragedies or government dissidents--even if
the content on its own wouldn't violate our policies. We will also
remove objectionable content that is considered mass harassment towards
any individual on personal surfaces, such as direct messages in their
inbox or comments on their personal profiles or posts.
We're always working on new tools, re-evaluating our policies, and
continually investing in detection technology to ensure we are
proactively tackling the problem as best we can, as we know how
important it is to get this right. We spent approximately $5 billion on
safety and security in 2021 alone and have 40,000 people working on
these issues, including 15,000 people who review content in more than
70 languages working in more than 20 locations all across the world to
support our community. We also use artificial intelligence technology.
In the third quarter of 2021, we removed almost 8 million pieces of
bullying and harassment content from Instagram; of that content, we
removed over 80 percent of it proactively, before people reported it.
In addition, we have created several tools to combat bullying on
Instagram:
In 2018, we launched Restrict, which allows people to
protect themselves from bullying, without the fear of
retaliation. Once someone Restricts an account, they won't
receive any notifications from that account. Comments from a
restricted account will only be visible to the user and the
person they restricted, and messages from a restricted account
will automatically be moved to Message Requests. The restricted
account will not be able to see when the user has read their
direct messages or when the user is active on Instagram. This
tool was based on research with thousands of people across
different countries and languages to develop a clearer picture
of how they experience bullying on Instagram. It found that
tools like reporting or blocking weren't always the right
options for people. For example, in one of our studies we
observed that, of people surveyed in the US, Brazil, UK,
Indonesia, and Turkey, 45 percent who had deleted offensive
comments from someone they knew personally didn't report them
because they were afraid the reported account would get in
trouble and/or would find out that they had reported them. And
in another study, more than half of people surveyed who
reported experiencing bullying knew their bullies personally.
We've created comment warnings when people try to post
potentially offensive comments. Reminding people of the
consequences of bullying on Instagram and providing real-time
feedback as they are writing the comment is the most effective
way to shift behavior. These warnings let people take a moment
to step back and reflect on their words and lay out the
potential consequences should they proceed. We've found that,
about 50 percent of the time, people edited or deleted their
comments based on these warnings.
We launched Hidden Words, which allows people to
automatically filter Direct Message (``DM'') requests that
contain offensive words, phrases, and emojis into a Hidden
Folder that they never have to open if they don't want to. This
feature also filters DM requests that are likely to be spammy
or low-quality.
Finally, to help protect people when they experience or
anticipate a rush of abusive comments and DMs, we introduced
Limits, a feature that automatically hides comments and DM
requests from people who don't follow them or only recently
followed them. We developed this feature because we heard that
creators and public figures sometimes experience sudden spikes
of comments and DM requests from people they don't know. Limits
allows people to hear from their long-standing followers, while
limiting contact from people who might only be coming to their
account to target them.
Finally, we also believe it is important to provide parents with
the information, resources, and tools they need to have conversations
with their children about online technologies and to help them develop
healthy and safe online habits. To that end, we offer dedicated
resources for parents and guardians on the Facebook and Instagram
Services, including a Parents Portal, Parent Center (which includes a
downloadable PDF guide available in multiple languages), and Parent's
Guide with information about the privacy and safety tools available to
their teens on the Facebook and Instagram Services, top questions from
parents, and advice for talking to their kids about staying safe on
Instagram. We also created a Bullying Prevention Hub, developed in
partnership with bullying prevention experts, to serve as a resource
for educators and families seeking support for issues related to
bullying and other conflicts. It offers step-by-step plans, including
guidance on how to start important conversations for people being
bullied, advice for parents and caregivers who have a child who is
being bullied or accused of bullying, and educators who have had
students involved with bullying. We provide tips and tools for bullying
prevention in our Safety Center, available here:
https://www.facebook.com/safety/childsafety/bullyingprevention.
In 2020, we also launched Get Digital, which provides lessons and
resources based on many years of academic research by our expert
partners to help young people develop the competencies and skills they
need to navigate the Internet more safely. It includes a bullying
prevention toolkit especially for educators and provides tips for
helping teens recognize others' perspectives and feelings. Finally, we
work closely with organizations like the Cyberbullying Research Center
and the International Bullying Prevention Association.
Question 8. Does Instagram currently take or is Instagram planning
to enact any additional steps to prevent the disruption of safe
educational environments in schools?
Answer. We remove content, disable accounts, and, when appropriate,
work with law enforcement when we believe there is a genuine risk of
physical harm or direct threats to public safety, including in schools.
We also aim to prevent potential offline harm that may be related to
content on Facebook or Instagram and remove language that incites or
facilitates serious violence. We're always evolving our policies to
make sure they are staying up-to-date with evolving trends. We will
continue to improve our technology and make sure our policies keep up
with the latest research and with people's changing behavior to make
our platform a safe and supportive place for everyone.
For more information on our efforts to support educators, please
see the responses to your Questions 6 and 7.
______
Response to Written Questions Submitted by Hon. Amy Klobuchar to
Adam Mosseri
Instagram's Marketing Spend. The New York Times has reported on
leaked documents about Instagram's intentional focus on marketing to
kids.\1\ That was contested by the head of Instagram at the hearing.
Please respond to the following questions related to Instagram's
marketing efforts.
---------------------------------------------------------------------------
\1\ https://www.nytimes.com/2021/10/16/technology/instagram-
teens.html
---------------------------------------------------------------------------
According to company planning documents, Instagram's global
marketing budget was slated to increase from $67.2 million in
2018 to $390 million in 2021, more than a five-fold increase.
According to documents, this planned budget was earmarked
mostly to target teens, which Mr. Mosseri denied in his
testimony before the Senate Commerce Committee. The New York
Times also reported that Mr. Mosseri personally approved these
budgets.
What was Instagram's planned total global marketing
budget for each year from 2018 to 2021? And for the United
States?
What percentage of Instagram's planned global
marketing budget was earmarked for attracting, retaining,
or reaching potential users under 18? Provide percentages
for each year from 2018 to 2021.
What percentage of Instagram's planned global
marketing budget was earmarked for attracting, retaining,
or reaching users under 18 through digital advertisements?
Provide percentages for each year from 2018 to 2021.
Answer. As we've previously shared, teens are an important
community that helps spot and set some trends. It shouldn't come as a
surprise that they, like many other demographics, are a part of our
marketing strategy. We require a minimum age of 13 to use Facebook and
Instagram in the US.
We started work to build an Instagram experience for tweens (aged
10-12) to address an important problem seen across our industry: kids
are getting phones younger and younger, misrepresenting their age, and
downloading apps that are meant for those 13 or older. We were working
on delivering experiences that are age-appropriate and give parents and
guardians visibility and control over what their tweens are doing
online, like an Instagram experience for tweens. However, that work is
now paused.
Instagram's global marketing budget has increased approximately 4.7
times between Fiscal Years 2018 and 2021, but Instagram's marketing
budget is not focused entirely on teens.
Did any internal company documents use the term
``existential threat'' when describing the possible loss of
teen engagement on Instagram? If so, please provide copies of
all such documents from 2018 to the present.
Answer. Please see the response to your Question 1. In addition,
from its beginning, Instagram has been an app widely used and
enjoyed by young people, including teens. As such, marketing our
products and services to young people, including educating them about
features on our platform, has been a long-standing part of our
marketing strategy. However, we don't allocate marketing spend by age
group (i.e., teens). Over the last two years, Instagram has launched a
series of product marketing campaigns, each with different goals and
objectives. To optimize these efforts, we target a broad range of age
groups, which may include teens, but is mostly spread across 16- to 34-
year-olds. While we utilize marketing to inform people about what's new
on our platform, the bulk of our advertising dollars are spent on
acquiring new users, and our new-user acquisition spend is largely
aimed at people older than 18, not teens.
Please provide copies of the top 50 most-seen-by-teens
advertisements run by Instagram and intended to attract
teenagers in the United States to Instagram in each year from
2018 to 2021.
For each advertisement in response to this question,
please state how many times it was seen by teens and how
many unique teens saw it.
Answer. We do not currently track impression-level data on an
age-segmented basis for ads intended to attract new people to our
platform.
In addition to advertising its own products and services to
teenagers, Meta sells ads to other companies that are aimed at
teenagers on Instagram. As previously requested, please provide
copies of the 100 most-seen-by-teens advertisements on
Instagram in the United States in the last year.
For each advertisement in response to this question,
please state how many times it was seen by teens and how
many unique teens saw it.
Answer. Our policies and procedures limit our ability to produce
advertisers' ads and related data. During an approximately 90-day time
period between October 2021 and January 2022, Instagram users known to
be under 18 most often saw ads from Consumer Packaged Goods, E-
Commerce/Retail, and Gaming verticals, which are also among the largest
verticals for Meta's advertising business as a whole.
Meta also offers the Facebook Ad Library, an ads transparency
surface, which provides a view of ads across our apps and services. It
helps make advertising transparent by giving people more information
about the ads they see and contains all active ads running across our
products. For additional information, please visit https://
www.facebook.com/business/help/2405092116183307?id=288762101909005.
Please provide all documents that were sent to Mr. Mosseri
or anyone who reports (or reported) directly to him indicating
Instagram's planned global marketing spend from 2018 to 2021 as
well as those indicating how much of the planned or actual
global marketing budget was directed towards attracting,
retaining, or otherwise reaching teenagers. Please also provide
copies of all such documents Mr. Mosseri wrote or sent on these
topics.
Answer. Please see the responses to your Questions 1 and 2.
Please provide copies of all documents that Mr. Mosseri
wrote or sent about increasing the use of Instagram by people
under age 18.
Answer. Please see the responses to your Questions 1 and 2.
Instagram's New Safety Features. In early December, Instagram
announced new features and plans for new features, including
Instagram's first planned tools for parents for monitoring their
children's use of Instagram.\2\ Please answer the following questions
for each of the planned features described in the announcement,
including:
---------------------------------------------------------------------------
\2\ https://about.instagram.com/blog/announcements/raising-the-
standard-for-protecting-teens-and-supporting-parents-online
---------------------------------------------------------------------------
Will the feature be on or off by default when it is rolled
out?
Who will the feature be rolled out to and when will it be
rolled out?
Please describe whether the feature will only be
rolled out to specific groups of users (e.g., teen users in
the US), which groups that includes, and why Instagram has
decided to limit the feature to such groups.
How does Instagram plan to make users and parents aware of
the new feature?
Answer. At Instagram, we've been working for a long time to keep
young people safe on our platform; as part of that work, we recently
announced some new tools and features to keep young people even safer
on Instagram.
Parental Controls. Parents and guardians know what's best
for their teens, so we plan to launch our first opt-in tools in
March 2022 to help them guide and support their teens on
Instagram. Parents and guardians will be able to view how much
time their teens spend on Instagram and set time limits. We'll
also give teens a new option to notify their parents if they
report someone, giving their parents the opportunity to talk
about it with them. This is the first version of these tools;
we'll continue to add more options over time. We're also
developing a new educational hub for parents and guardians that
will include additional resources, like product tutorials and
tips from experts, to help them discuss social media use with
their teens.
Take A Break. We recently launched ``Take A Break'' in
certain countries to empower people to make informed decisions
about how they're spending their time. If someone has been
scrolling for a certain amount of time, we ask them to take a
break from Instagram and suggest that they set reminders to
take more breaks in the future. We also show them expert-backed
tips to help them reflect and reset. To make sure that teens
are aware of this feature, we show them notifications
suggesting they turn these reminders on. We're encouraged to
see that teens are using Take A Break. Early test results show
that once teens set the reminders, more than 90 percent of them
keep them on. We launched this feature in the US, UK, Ireland,
Canada, New Zealand, and Australia in December 2021, and plan
to bring it to everyone by early this year. The Take A Break
reminders build on our existing time management tools including
Daily Limit, which lets people know when they've reached the
total amount of time they want to spend on Instagram each day,
and offers the ability to mute notifications from Instagram.
Viewing and Managing Instagram Activity. We've also
developed a new experience for people to see and manage their
Instagram activity. We know that as teens grow up, they want
more control over how they show up both online and offline so,
for the first time, they are able to bulk delete content
they've posted like photos and videos, as well as their
previous likes and comments. While available to everyone, this
tool is particularly important for teens: it helps them
understand more fully what information they've shared on
Instagram and what is visible to others, and it gives them an
easier way to manage their digital footprint.
Stopping People from Tagging or Mentioning Teens Who Don't
Follow Them. In 2021, we began defaulting teens under 16 years
old into private accounts when they signed up for Instagram,
and we stopped adults from being able to DM teens who don't
follow them. Now, we also plan to switch off the ability for
people to tag or mention teens who don't follow them, or to
include their content in Reels Remixes or Guides by default
when they first join Instagram. We're testing these changes to
further minimize the possibility that teens will hear from
those they don't know, or don't want to hear from, and plan to
make them available to everyone early this year.
Further Restricting Recommendations to Teens in Search,
Explore, Hashtags, and Suggested Accounts. In July 2021, we
launched the Sensitive Content Control, which allows people to
decide how much sensitive content shows up in Explore. The
control has three options: Allow, Limit, and Limit Even More.
``Limit,'' the default state for everyone, is based on our
Recommendation Guidelines; ``Allow'' enables people to see more
sensitive content, whereas ``Limit Even More'' means they see
less of this content than the default state. The ``Allow''
option is unavailable to people under the age of 18. We're
exploring expanding the ``Limit Even More'' state beyond
Explore for teens. This will make it more difficult for teens
to come across potentially harmful or sensitive content or
accounts in Search, Explore, Hashtags, Reels, and Suggested
Accounts. We're in the early stages of this idea and will have
more to share in time. (A simplified illustration of these
settings appears after this list.)
Nudging Teens Towards Different Topics if They've Been
Dwelling on One Topic for a While. Lastly, our research shows--
and external experts agree--that if people are dwelling on one
topic for a while, it could be helpful to nudge them towards
other topics at the right moment. That's why we're building a
new experience that does exactly that. We'll have more to share
soon on this and on other changes we're making to the content
and accounts we recommend to teens.
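    For illustration only, the following Python sketch represents the three
Sensitive Content Control states described above and the restriction that
``Allow'' is unavailable to people under 18. The enum, the function names,
and the hypothetical stricter teen default are assumptions introduced to
make the description concrete; they are not Instagram's actual code.

from enum import Enum
from typing import List


class SensitiveContentSetting(Enum):
    ALLOW = "allow"                        # more sensitive content; adults only
    LIMIT = "limit"                        # default state for everyone
    LIMIT_EVEN_MORE = "limit_even_more"    # less sensitive content than the default


def available_settings(age: int) -> List[SensitiveContentSetting]:
    """The 'Allow' option is not offered to people under 18."""
    options = [SensitiveContentSetting.LIMIT, SensitiveContentSetting.LIMIT_EVEN_MORE]
    if age >= 18:
        options.insert(0, SensitiveContentSetting.ALLOW)
    return options


def default_setting(age: int, strict_teen_default: bool = False) -> SensitiveContentSetting:
    """'Limit' is the default; the stricter teen default is a hypothetical
    reflecting the expansion the answer says is being explored."""
    if age < 18 and strict_teen_default:
        return SensitiveContentSetting.LIMIT_EVEN_MORE
    return SensitiveContentSetting.LIMIT


# Example usage: a 15-year-old cannot select 'Allow' and, under the
# hypothetical stricter default, would start in 'Limit Even More'.
print([s.value for s in available_settings(15)])
print(default_setting(15, strict_teen_default=True).value)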
Kids Purchasing Drugs Online. A recent report about Instagram found
that, in an experiment, it only took two clicks to find an account to
buy drugs on Instagram, but five clicks to log out of Instagram. When
typing in the phrase ``buyxanax'' in the search bar, Instagram auto-
completed the query for buying Xanax before the simulated user even
finished typing.\3\
---------------------------------------------------------------------------
\3\ https://www.techtransparencyproject.org/articles/xanax-ecstasy-
and-opioids-instagram-offers-drug-pipeline-kids
---------------------------------------------------------------------------
Please describe any measures, whether human review or
automated, that Instagram uses to prevent minors from buying drugs
on Instagram.
Please describe the technologies Instagram uses to auto-
complete user queries within the app and how these technologies
treat searches for content related to drugs, eating disorders,
and self-harm.
Please describe, in detail, how such content could
proliferate on Meta's platforms, given Meta's content
moderation policies.
Answer. Drug sales are prohibited on Instagram, and we remove
content that attempts to buy, sell, or trade illicit drugs.
Instagram's Community Guidelines and Facebook's Community Standards
make it very clear that buying, selling, or trading non-medical or
pharmaceutical drugs is not allowed. Any time we become aware of
content on Facebook or Instagram that is facilitating activity like
illicit drug sales, we remove it. We have taken measures to minimize
the opportunity for these activities to take place on our platforms.
Views of violating content that contains regulated goods are very
infrequent, and we remove much of this content before people see it. In
the third quarter of 2021 alone, we removed about 4.5 million pieces of
content related to drug sales on Facebook and Instagram, and due to our
improving detection technology, the prevalence of such content is about
0.05 percent of content viewed on Facebook. Additionally, the hashtags
#mdma, #buyfentanyl, and #buyxanax have all been blocked, and we're
reviewing additional hashtags to understand if there are further
violations of our policies. We'll continue to improve in this area in
our ongoing efforts to keep Instagram safe, particularly for our
youngest community members.
We do, however, allow people to talk about their recovery from
addiction, and we try to offer help to those who may be struggling by
connecting them with free and confidential treatment referrals, as well
as information about substance use, prevention, and recovery. When
people search for drugs on Facebook and Instagram, we direct them to
the Substance Abuse and Mental Health Services Administration National
Helpline to help educate people about the risks and prevent drug
misuse. Meta partners with federal, state, and local authorities, as
well as nonprofits, on innovative ways they can use social media as a
tool to respond to the opioid epidemic. We have seen that Meta products
and tools can complement work on prevention, education, de-
stigmatization, addiction support and awareness, and we continue to
support community groups and NGOs that have used our platform for good.
We care deeply about opioid addiction in our communities, and we are
committed to doing our part to implement solutions.
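    For illustration only, the following Python sketch shows the two search
behaviors described above: returning no results for blocked hashtags such as
#buyxanax, and surfacing a treatment-referral notice for drug-related
queries. The term lists, the notice text, and the function are assumptions
for the example, not Instagram's actual systems.

BLOCKED_HASHTAGS = {"#mdma", "#buyfentanyl", "#buyxanax"}     # hashtags named above
DRUG_SEARCH_TERMS = {"xanax", "fentanyl", "mdma", "opioid"}   # placeholder term list

SAMHSA_HELPLINE_NOTICE = (
    "If you or someone you know is struggling with substance use, free and "
    "confidential help is available from the SAMHSA National Helpline."
)


def handle_search(query: str) -> dict:
    """Return a simplified view of what a search surface might show for a query."""
    q = query.strip().lower()
    if q in BLOCKED_HASHTAGS:
        # Blocked hashtags return no results at all.
        return {"results": [], "notice": None}
    if any(term in q for term in DRUG_SEARCH_TERMS):
        # Drug-related searches surface a treatment-referral notice.
        return {"results": [], "notice": SAMHSA_HELPLINE_NOTICE}
    return {"results": ["(normal search results)"], "notice": None}


# Example usage: the blocked hashtag returns nothing; the plain-text drug
# query returns the helpline notice instead of results.
print(handle_search("#buyxanax"))
print(handle_search("buy xanax"))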
Since 2013, for example, we've been a member of the Center for
Safe Internet Pharmacies (``CSIP''), a nonprofit organization that
works to address the global problem of consumer access to illegitimate
pharmaceuticals from illegal online pharmacies and other sources. As
one of its member companies, we have the shared goal of helping address
the growing problem of consumer access to illegitimate pharmaceutical
products on the Internet. Meta's work with CSIP also includes serving
as a founding member of Tech Together, an industry coalition formed in
November 2018 and led by the CSIP to enable members to share best
practices and find ways to increase our collective impact to address
the crisis. We also partner with Song for Charlie, a family-run
nonprofit charity dedicated to raising awareness about counterfeit pill
sales targeting young people, to help give young people more
information about the danger posed by illicit and counterfeit drugs
sold online.
[all]