[Senate Hearing 115-460]
[From the U.S. Government Publishing Office]


                                                        S. Hrg. 115-460

                   OPEN HEARING ON FOREIGN INFLUENCE
               OPERATIONS' USE OF SOCIAL MEDIA PLATFORMS
                          (COMPANY WITNESSES)

=======================================================================

                                HEARING

                               BEFORE THE

                    SELECT COMMITTEE ON INTELLIGENCE

                                 OF THE

                          UNITED STATES SENATE

                     ONE HUNDRED FIFTEENTH CONGRESS

                             SECOND SESSION

                               __________

                      WEDNESDAY, SEPTEMBER 5, 2018

                               __________

      Printed for the use of the Select Committee on Intelligence
      



        Available via the World Wide Web: http://www.govinfo.gov
                    
                    
                               __________
                               

                    U.S. GOVERNMENT PUBLISHING OFFICE                    
31-350 PDF                  WASHINGTON : 2019                     
          
-----------------------------------------------------------------------------------
For sale by the Superintendent of Documents, U.S. Government Publishing Office, 
http://bookstore.gpo.gov. For more information, contact the GPO Customer Contact Center, 
U.S. Government Publishing Office. Phone 202-512-1800, or 866-512-1800 (toll-free).
E-mail, [email protected].                     
                    
                   
                    
                    
                    
                    
                    SELECT COMMITTEE ON INTELLIGENCE

           [Established by S. Res. 400, 94th Cong., 2d Sess.]

                 RICHARD BURR, North Carolina, Chairman
                MARK R. WARNER, Virginia, Vice Chairman

JAMES E. RISCH, Idaho                DIANNE FEINSTEIN, California
MARCO RUBIO, Florida                 RON WYDEN, Oregon
SUSAN COLLINS, Maine                 MARTIN HEINRICH, New Mexico
ROY BLUNT, Missouri                  ANGUS KING, Maine
JAMES LANKFORD, Oklahoma             JOE MANCHIN III, West Virginia
TOM COTTON, Arkansas                 KAMALA HARRIS, California
JOHN CORNYN, Texas
                 MITCH McCONNELL, Kentucky, Ex Officio
                  CHUCK SCHUMER, New York, Ex Officio
                   JAMES INHOFE, Oklahoma, Ex Officio
                  JACK REED, Rhode Island, Ex Officio
                              ----------                              
                      Chris Joyner, Staff Director
                 Michael Casey, Minority Staff Director
                   Kelsey Stroud Bailey, Chief Clerk
                                
                                
                                CONTENTS

                              ----------                              

                           SEPTEMBER 5, 2018

                           OPENING STATEMENTS

Burr, Hon. Richard, Chairman, a U.S. Senator from North Carolina
Warner, Mark R., Vice Chairman, a U.S. Senator from Virginia

                               WITNESSES

Sandberg, Sheryl, Chief Operating Officer, Facebook
    Prepared statement
Dorsey, Jack, Chief Executive Officer, Twitter, Inc.
    Prepared statement

                         SUPPLEMENTAL MATERIAL

Responses to Questions for the Record by:
    Sheryl Sandberg
    Jack Dorsey

 
                   OPEN HEARING ON FOREIGN INFLUENCE
                    OPERATIONS' USE OF SOCIAL MEDIA
                     PLATFORMS (COMPANY WITNESSES)

                              ----------                              


                      WEDNESDAY, SEPTEMBER 5, 2018

                                       U.S. Senate,
                          Select Committee on Intelligence,
                                                    Washington, DC.
    The Committee met, pursuant to notice, at 9:32 a.m., in 
Room G-50, Dirksen Senate Office Building, Hon. Richard Burr 
(Chairman of the Committee) presiding.
    Present: Senators Burr, Warner, Risch, Rubio, Collins, 
Blunt, Lankford, Cotton, Wyden, Heinrich, King, Manchin, 
Harris, and Reed.

   OPENING STATEMENT OF HON. RICHARD BURR, CHAIRMAN, A U.S. 
                  SENATOR FROM NORTH CAROLINA

    Chairman Burr. I'd like to call the hearing to order. And 
I'd like to welcome our witnesses today: Jack Dorsey, chief 
executive officer at Twitter--Jack, welcome--and Sheryl 
Sandberg, chief operating officer at Facebook. I thank both of 
you for being here with us this morning.
    Before I make my remarks, I want to say a few words about 
our colleague, our friend, and committee ex officio member 
Senator John McCain.
    John could be blunt, and he could be direct, but when it 
came to committing himself to a cause that he believed in, John 
McCain was without equal. This Senate, this deliberative body, 
with its history and its traditions, will survive the passing 
of John McCain, but there can be no denying that the place is a 
little smaller without him. We will continue to do the 
important work we do here with passion, resolve, and a sense of 
purpose born from moral conviction. John would want that. In 
fact, he would insist on it from each of us.
    My friends, if I can borrow the phrase: Arizona's loss is 
our loss, and our loss is America's loss. John McCain will be 
dearly missed, and as you can see, we have set his spot on the 
dais today.
    Jack, Sheryl--as a committee, we've learned more about 
social media over the last 18 months than I suspect most of us 
ever thought we would in a lifetime. We've learned about social 
media's boundless potential for good and its ability to enable 
thoughtful and engaged interactions on a global scale.
    But we've also learned about how vulnerable social media is 
to corruption and misuse. The very worst examples of this are 
absolutely chilling and a threat to our democracy: the founding 
ideal of different people from different beliefs and ideas all 
living peacefully under a single flag. The committee takes this 
issue very seriously and we appreciate the fact that Facebook 
and Twitter are represented here this morning with an 
equivalent and appropriate measure of seriousness.
    The purpose of today's hearing is to discuss the role that 
social media plays in the execution of foreign influence 
operations. In the past, we've used terms like misinformation 
and divisive content to describe this activity.
    Now as we go into our fourth and final hearing on this 
subject, I think it's important that we be precise and candid 
with our language, because that's what the significance of this 
threat demands. We need to be precise about the foreign actors 
we're talking about, we need to be precise about the 
consequences of not acting, and we need to be candid about 
where responsibility for solving this problem lies.
    Two weeks ago, your companies announced a series of 
successful disruptions that resulted in the removal of 652 
Facebook pages, groups, and accounts, and 284 Twitter accounts, 
for violating your companies' standards against coordinated 
manipulation and inauthentic behavior. Google's own 
internal security teams did commendable work disrupting this 
influence operation and we would have valued the opportunity to 
speak with them at the appropriate level of corporate 
representation. Nevertheless, their efforts should be 
acknowledged.
    In a departure from what we've all gotten a little 
accustomed to, this activity didn't come from Russia. It came 
from Iran. My instinct is to applaud the diligence of your 
security teams and credit you with taking the problem very 
seriously.
    But I'm not sure your success is the big story here. As I 
understand it, a third-party security team was crucial to 
identifying the scope of the Iranian activity. And even more 
concerning is that more foreign countries are now trying to use 
your products to shape and manipulate American political 
sentiment as an instrument of statecraft.
    Jack, I was pleased when informed about your efforts to 
improve conversational health at Twitter. I think that kind of 
initiative can do a lot to improve the transparency of public 
discourse on your platform, and foreign influence operations 
thrive without transparency.
    Sheryl, I fully support Facebook's hiring of the right 
security experts, building the necessary technologies and 
collaborating across law enforcement, commercial, 
cybersecurity, and social media company lines.
    I think the observation that no one company can fight this 
on their own is spot on. Unfortunately, what I described as a 
national security vulnerability and an unacceptable risk back 
in November remains unaddressed. That risk and vulnerability 
was highlighted yet again two weeks ago. Without question, positive 
things are happening. The collaboration, dedication, and 
resources and demonstrated willingness to work with us are 
critical and valued by every member of this committee.
    It takes courage to call out a state actor and your 
companies have done that. But clearly this problem is not going 
away. I'm not even sure it's trending in the right direction. I 
will go back to what I said up front: we need to be candid 
about responsibility, and by that, I mean both the 
responsibility we have to one another--from one side of this 
dais to the other--as participants in this public policy 
discussion. And more importantly our shared responsibility to 
the American people.
    Technology always moves faster than regulation, and to be 
frank, the products and services that enable social media don't 
fit neatly into the consumer safety or regulatory constructs of 
the past. The old definitions that used to differentiate a 
content publisher from a content facilitator are just not 
helpful here. I think that ambiguity has given rise to 
something of a convenient identity crisis, whereby judgments 
about what is and isn't allowable on social media are too 
episodic, too reactive, and too unrestricted. People are 
affected by the information your platforms channel to them. 
That channeling isn't passive or random. It's a function of 
brilliant algorithms and an incentive structure that prizes 
engagement. None of that is under attack here.
    What is under attack is the idea that business as usual is 
good enough. The information your platform disseminates changes 
minds and hardens opinions. It helps people make sense of the 
world. When you control that or you influence a little of it, 
you're in a position to win wars without firing a shot. That's 
how serious this is.
    We've identified the problem. Now it's time to identify the 
solution. Sheryl and Jack, I'm glad you decided to appear and 
your willingness to be part of the solution. I'm disappointed 
Google decided against sending out the right senior-level 
executive to participate in what I truly expect to be a 
productive discussion.
    If the answer is regulation, let's have an honest dialogue 
about what that looks like. If the key is more resources or 
legislation that facilitates information sharing and government 
cooperation, let's get it out there. If it's national security 
policies that punish the kind of information and influence 
operations we're talking about this morning, to the point that 
they aren't even considered in foreign capitals, then let's 
acknowledge that. But whatever the answer is, we've got to do 
this collaboratively and we've got to do it now. That's our 
responsibility to the American people.
    I'll offer a closing point. This is for the witnesses and 
the members alike. There are no unsolvable problems. There is 
only the will to do what needs to be done--or its absence.
    With that, I turn to the Vice Chairman for any comments.

OPENING STATEMENT OF HON. MARK R. WARNER, VICE CHAIRMAN, A U.S. 
                     SENATOR FROM VIRGINIA

    Vice Chairman Warner. Thank you, Mr. Chairman. And let me 
first of all echo your comments about our colleague and friend, 
John McCain. I hope we all take his advice to continue to put 
country first.
    Welcome to the witnesses. Mr. Chairman has pointed out that 
today is an important public discussion. I am pleased that both 
Facebook and Twitter have sent their company's top leadership 
to address some of the critical public policy challenges. I 
look forward to a constructive engagement.
    I'd say, though, that I am deeply disappointed that Google, 
one of the most influential digital platforms in the world, 
chose not to send its own top corporate leadership to engage 
this committee. Because I know our members have a series of 
difficult questions about structural vulnerabilities on a 
number of Google's platforms that we will need answers for: 
from Google Search, which continues to have problems surfacing 
absurd conspiracies; to YouTube, where Russian-backed 
disinformation agents promoted hundreds of divisive videos; to 
Gmail, where state-sponsored operatives attempted countless 
hacking attempts. Google has an immense responsibility in this 
space.
    Given its size and influence, I would have thought that 
leadership at Google would have wanted to demonstrate how 
seriously it takes these challenges and actually take a 
leadership role in this important discussion. Unfortunately, 
they chose otherwise. But for the two 
companies that have chosen to constructively engage and to 
publicly answer some difficult and challenging questions, 
again, thank you.
    Now, it would be an understatement to say that much has 
changed in the aftermath of the 2016 campaign. With the benefit 
of hindsight, it's obvious that serious mistakes were made by 
both Facebook and Twitter. You, like the Federal Government, 
were caught flat-footed by the brazen attacks on our election.
    Even after the election, you were reluctant to admit there 
was a problem. I think in many ways it was pressure that was 
brought to bear by this committee that led Facebook, Twitter, 
and yes, Google to uncover the malicious activities of the 
Russian-backed Internet Research Agency on each of your 
platforms.
    Now each of you have come a long way with respect to 
recognizing the threat. We've seen important action by your 
companies to make political advertising more transparent--and 
we discussed this yesterday--by complying with the terms 
Senator Klobuchar and I put forward in the Honest Ads Act. In 
addition, as the Chairman mentioned, since last September you 
have identified and removed some bad actors from your 
platforms.
    The bad news, I'm afraid, is that there's still a lot of 
work to do, and I'm skeptical that ultimately you'll be able to 
truly address this challenge on your own. I believe Congress is 
going to have to act.
    First, on the disinformation front: Russia has not stopped. 
Russian-linked information warfare exists today. Just recently, 
we saw the two of you take action to take down suspected 
Russian operations. We also know Microsoft uncovered Russian 
attempts to hack political organizations and potentially 
several political campaigns.
    The Russians also continue to infiltrate and manipulate 
American social media to hijack our national conversation. 
Again, you've gotten better, and I'm pleased to see that you've 
begun to take action, but also the Russians are getting better 
as well. They have now become harder to track. Worse, now that 
the Russian playbook is out there, other adversaries, as we saw 
recently, like Iran, have joined the fray.
    But foreign-based disinformation campaigns represent just a 
fraction of the challenge before you. In the same way that 
bots, trolls, fake pages, and algorithmic gaming can be used to 
spread fake news, these same tools can be used to assist 
financial stock-pumping fraud, to create filter bubbles and 
alternative realities, to incite ethnic and racial violence, 
and countless other misuses.
    Imagine the challenge and damage to the markets if forged 
communications from the Fed Chairman were leaked online. Or 
consider the price of a Fortune 500 company's stock if a 
dishonest short seller was able to spread false information 
about the company's CEO or the effects of its products rapidly 
online.
    Russian disinformation has revealed a dark underbelly of 
the entire online ecosystem, and this threatens to cheapen 
American discourse, weaken privacy, erode truth, and undermine 
our democracy on a previously unimagined scale. Worse, this is 
only going to get harder as we move into artificial 
intelligence and the use of deepfake technology.
    During the 2016 election campaign, the Russians 
demonstrated how bad actors can effectively marry offensive 
cyber operations, including hacking, with information 
operations. I'm afraid that we're on the cusp of a new 
generation of exploitation, potentially harnessing hacked 
personal information, to enable tailored and targeted 
disinformation in social engineering efforts. That future 
should concern us all.
    As someone who was involved in the tech industry for more 
than 20 years, I respect what this industry represents, and I 
don't envy the significant technical and policy challenges you 
face. But the size and reach of your platforms demand that we 
as policy makers do our job to ensure proper oversight, 
transparency, and protection for American users and our 
democratic institutions.
    The era of the Wild West in social media is coming to an 
end. Where we go from here, though, is an open question. These 
are complicated technological challenges, and Congress has at 
times demonstrated that it still has some homework to do. I do 
think this committee has done more to understand the threat to 
our democracy posed by social media than any others, and I want 
to commend my colleagues on this committee for tackling this 
challenge in a bipartisan way.
    As has been mentioned, this is our fourth public hearing on 
the subject, and we've met behind closed doors countless times 
with third-party researchers, with government officials, and 
with each of the platforms. We've done the work, and we're 
positioned to continue to lead in this space.
    Again, as the Chairman has already indicated, today's 
hearing is not about gotcha questions or scoring political 
points. Our goal today is to begin to shape actual policy 
solutions which will help us tackle this challenge.
    Now, I've put forth some ideas that I'd like to get your 
constructive thoughts on. For instance, don't your users have a 
right to know when they're interacting with bots on your 
platform? Isn't there a public interest in ensuring more 
anonymized data is available to help researchers and academics 
identify the potential problems and misuse? Why are your terms 
of service so difficult to find and nearly impossible to read, 
much less understand? Why shouldn't we adopt ideas like data 
portability, data minimization, or first-party consent? And 
after witnessing numerous episodes of misuse, what further 
accountability should there be with respect to the flawed 
advertising model that you utilize?
    Now these are just some of our ideas. We have received a 
lot of positive feedback on some of these ideas from both 
experts and users. We've also been accused of trying to bring 
about the death of the internet. I'm anxious to hear your views 
on our proposals and suggestions your teams can bring to the 
table on this front.
    We have to be able to find smart, thoughtful policy 
solutions that get us somewhere beyond the status quo, without 
applying ham-handed 20th-century solutions to 21st-century 
problems. At the same time, we should be mindful to adopt 
policies that do not simply entrench the existing dominant 
platforms.
    These are not just challenges for our politics or our 
democracy. These threats can affect our economy, our financial 
system, and other parts of our lives. I'm hopeful that we can 
get there. I'm confident in American ingenuity. And I'm 
optimistic that Congress led by this committee in a bipartisan 
fashion can move this conversation forward.
    I look forward to the discussion and appreciate the hearing 
being called. Thank you, Mr. Chairman.
    Chairman Burr. I thank the Vice Chairman. At this time, I'd 
like to swear in our witnesses. If I could ask both of you to 
stand and raise your right hand?
    Do you solemnly swear to give this committee the truth, the 
full truth and nothing but the truth so help you God?
    [The witnesses answered in the affirmative.]
    Please be seated. Ms. Sandberg, I'd like to recognize you 
first and then Mr. Dorsey for any opening statement you'd like 
to make. The floor is yours.

STATEMENT OF SHERYL SANDBERG, CHIEF OPERATING OFFICER, FACEBOOK

    Ms. Sandberg. Thank you. Chairman Burr, Vice Chairman 
Warner, and members of this select committee, thank you for 
giving me the opportunity to speak with you today. My written 
testimony goes into more detail about the actions we're taking 
to prevent election interference on Facebook. But I wanted to 
start by explaining how seriously we take these issues and talk 
about some of the steps we're taking.
    Free and fair elections are the foundation of any 
democracy. As Americans, they are part of our national identity 
and that's why it's incumbent upon all of us to do all we can 
to protect our democratic process. That includes Facebook. At 
its best, Facebook plays a positive role in our democracy, 
enabling representatives to connect with their constituents, 
reminding people to register and to vote, and giving people a 
place to freely express their opinions about the issues that 
matter to them.
    However, we've also seen what can happen when our service 
is abused. As a bipartisan report from this committee said, 
Russia used social media as part of, and I quote, "a 
comprehensive and multi-faceted campaign to sow discord, 
undermine democratic institutions, and interfere in U.S. 
elections and those of our allies."
    We were too slow to spot this and too slow to act. That is 
on us. This interference was completely unacceptable. It 
violated the values of our company and of the country we love. 
Actions taken show how determined we are to do everything we 
can do to stop this from happening.
    The threat we face is not new. America has always 
confronted attacks from determined, well-funded opponents who 
want to undermine our democracy. What is new is the tactics 
they are using. To stay ahead, we all need to work together, as 
Chairman Burr said: government, law enforcement, industry and 
experts from civil society. And that is why I'm grateful for 
the work this committee is doing.
    At Facebook, we're investing in security for the long term. 
As our defenses improve, bad actors learn and improve too, and 
that's why security is never a finished job. We have more than 
doubled the number of people working on safety and security; we 
now have over 20,000 people and are able to review reports in 
50 languages, 24 hours a day.
    Better machine learning and artificial intelligence have 
enabled us to be more proactive in finding abuse. In the first 
three months of 2018 alone, over 85 percent of the violent 
content we took down or added warning labels to was identified 
by our technology before it was reported. These are expensive 
investments, but that will not stop us because we know they are 
critical.
    Our first line of defense is finding and shutting down fake 
accounts, the source of much of the inauthentic activity we see 
on Facebook. Authenticity matters because people need to trust 
that the content they're seeing is valid and they need to trust 
the connections they make. We are now blocking millions of 
attempts to register false accounts each and every day.
    We're making progress on fake news. We're getting rid of 
the economic incentives to create it and we're limiting the 
distribution it gets on Facebook. We demote articles rated by 
third-party fact-checkers as false. We warn people who have 
shared them or who are about to share them, and we show them 
related articles to give them more facts.
    We've also taken strong steps to prevent abuse and increase 
transparency in advertising. Today on Facebook, you can go to 
any page and see all the ads that page is running, even if they 
wouldn't be shown to you. For political and issue ads, you can 
also see who paid for the ads, how much was spent, and the 
demographics of the people who saw them.
    We're also going to require people running large pages with 
large audiences in the United States to go through an 
authorization process and confirm their identity. These steps 
won't stop everyone who's trying to game the system, but they 
will make it a lot harder.
    As these past few weeks and months have shown, this work is 
starting to pay off. In July, we removed 32 pages and accounts 
involved in coordinated, inauthentic behavior. In August, we 
removed 650 pages and accounts that originated in Iran, as well 
as additional pages and accounts from Russia. And just last 
week, we took down 58 pages and accounts from Myanmar, many of 
which were posing as news organizations.
    We are focused, as I know you are, on the upcoming U.S. 
midterms and on elections around the world. Our efforts in 
recent elections from Germany, to Italy, to Mexico, to the 
Alabama special Senate election, show us that the investments 
we are making are yielding results. We also know, as Chairman 
Burr said, that we cannot stop interference by ourselves. We're 
working with outside experts, industry partners, and 
governments, including law enforcement, to share information 
about threats and prevent abuse.
    We're getting better at finding and stopping our opponents, 
from financially motivated troll farms to sophisticated 
military intelligence operations. We don't have access to the 
intelligence governments have access to, so we don't always 
know exactly who is behind these attacks or their motives, and 
that's why we will continue working closely with law 
enforcement.
    Chairman Burr, I want to thank you for your leadership. 
Vice Chairman Warner, I want to thank you for your white paper, 
which has so many ideas on how we can work together to 
strengthen our defense. Senators, let me be clear, we are more 
determined than our opponents and we will keep fighting.
    When bad actors try to use our site, we will block them. 
When content violates our policies, we will take it down. And 
when our opponents use new techniques, we will share them so we 
can strengthen our collective efforts.
    Everyone here today knows that this is an arms race, and 
that means we need to be ever more vigilant. As Chairman Burr 
has noted, nothing less than the integrity of our democratic 
institutions, processes, and ideals is at stake. We agree, and 
we will work with all of you to meet this challenge.
    Thank you.
    [The prepared statement of Ms. Sandberg follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Burr. Thank you, Ms. Sandberg. Mr. Dorsey, the 
floor is yours.

  STATEMENT OF JACK DORSEY, CHIEF EXECUTIVE OFFICER, TWITTER, 
                              INC.

    Mr. Dorsey. Thank you, Chairman Burr, Vice Chairman Warner, 
and the committee for the opportunity to speak on behalf of 
Twitter to the American people. I look 
forward to our conversation about the work we're doing to help 
protect the integrity of U.S. elections and elections around 
the world.
    I am someone of very few words and typically pretty shy, 
but I realize how important it is to speak up now. If it's OK 
with all of you I'd like to read you something I personally 
wrote as I considered these issues. I'm also going to tweet 
this out now.
    First, I want to step back and share our view of Twitter's 
role in the world. We believe many people use Twitter as a 
digital public square. They gather from all around the world to 
see what's happening and have a conversation about what they 
see. In any public space you will find inspired ideas and 
you'll find lies and deception--people who want to help others 
and unify, and people who want to hurt others and themselves, 
and divide.
    What separates a physical and digital public space is 
greater accessibility and velocity. We're extremely proud of 
helping to increase the accessibility and velocity of a simple, 
free, and open exchange. We believe people would learn faster 
by being exposed to a wide range of opinions and ideas, and it 
helps make our Nation and the world feel a little bit smaller. 
We aren't proud of how that free and open exchange has been 
weaponized and used to distract and divide people and our 
Nation. We found ourselves unprepared and ill-equipped for the 
immensity of the problems that we have acknowledged: abuse, 
harassment, troll armies, propaganda through bots and human 
coordination, misinformation campaigns, and divisive filter 
bubbles. That's not a healthy public square. Worse, a 
relatively small number of bad faith actors were able to game 
Twitter to have an outsized impact.
    Our interests are aligned with the American people and this 
committee. If we don't find scalable solutions to the problems 
we're now seeing, we lose our business and we continue to 
threaten the original privilege and liberty we were given to 
create Twitter in the first place.
    We weren't expecting any of this when we created Twitter 
over 12 years ago. We acknowledge the real world negative 
consequences of what happened and we take the full 
responsibility to fix it. We can't do this alone and that's why 
this conversation is important and why I am here.
    We've made significant progress recently on tactical 
solutions like identification of many forms of manipulation 
intending to artificially amplify information, more 
transparency around who buys ads and how they are targeted, and 
challenging suspicious logins and account creation. We've seen 
positive results from our work. We're now removing over 200 
percent more accounts for violating our policies. We're 
identifying and challenging 8 to 10 million suspicious accounts 
every week, and we're thwarting over a half million accounts 
from logging in to Twitter every single day.
    We've learned from 2016, and more recently from other 
nations' elections, how to protect the integrity of elections: 
better tools, stronger policy, and new partnerships are already 
in place. We intend to understand the efficacy of these 
measures to continue to get better, but we all have to think a 
lot bigger than decades past, today. We must ask the question, 
what is Twitter incentivizing people to do, or not do, and why? 
The answers will lead to tectonic shifts in Twitter and how our 
industry operates. Required changes won't be fast or easy.
    Today we're committing to the people and this committee to 
do that work and do it openly. We're here to contribute to a 
healthy public square, not compete to have the only one. We 
know that's the only way our business thrives and helps us all 
defend against these new threats.
    In closing, when I think of my work, I think of my mom and 
dad in St. Louis, a Democrat and a Republican. For them, 
Twitter has always been a source of joy, a source of learning, 
and a source of connection to something bigger than themselves. 
They're proud of me, proud of Twitter, and proud of what made 
it all possible. What made it possible was the fact that I was 
born into a Nation built by the people for the benefit of the 
people--where I could work hard to make something happen which 
was bigger than me. I treasure that and will do everything in 
my power to protect it from harm. Thank you.
    [The prepared statement of Mr. Dorsey follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Burr. Jack, thank you very much for that testimony 
and I might add that the Vice Chairman and I commented as you 
grow older, you will find a need for a bigger device to go to 
your notes on than that small one. We have a hard time with the 
small devices.
    For members, we will do seven-minute question rounds today. 
For planning purposes, we will break at approximately 10:45 for 
five minutes just to let our witnesses stretch and take a 
breath. And we will limit today's hearing to one round. We'll 
try to accommodate any members that might be caught in the 
Judiciary Committee but want to try to get back, but I know 
that they've got their own challenges. With that, I would 
recognize myself for seven minutes.
    This question is to both of you. How would you define 
social media for this committee and more importantly for the 
American people? And I will start with you, Ms. Sandberg.
    Ms. Sandberg. Social media enables you to share what you 
want to share when you want to share it, without asking 
permission from anyone. And that's how we meet our mission, 
which is giving people a voice. And I think what's more 
important than just the content people share, is the 
connections they make. Social media enables people to celebrate 
their birthdays. In the last year, people have raised $300 
million on Facebook through birthday fundraisers for nonprofits they 
care about. Safety check: Millions of people in the worst 
circumstances of their lives have let their loved ones know 
they're safe. And social media enables small businesses to grow. All around the 
country I meet with small businesses, from a woman making 
dresses in her living room and selling them on Instagram, to a 
local plumber, who are able to find their customers on Facebook 
and then able to grow and hire people and live their American 
dream.
    Chairman Burr. Jack.
    Mr. Dorsey. I believe it's really important to--to 
understand how the people see it. And we believe that the 
people use Twitter as they would a public square and they often 
have the same expectations that they would have of any public 
space. For our part, we see our platform as hosting and serving 
conversations. Those conversations are in the public. We think 
there's a lot of benefit to those conversations being in the 
public, but there's obviously a lot of risks as well.
    We see that news and entertainment are actually byproducts 
of public conversation. And we see our role as helping to not 
only serve that public conversation so that everyone can 
benefit, even if they don't have a Twitter account, but also to 
increase the health of that conversation as well. And in order 
to do that, we need to be able to measure it. We need to 
understand what healthy participation looks like in a public 
square, and we need to amplify that. And more importantly, we 
need to question a lot of the fundamentals that we started with 
12 years ago in the form of incentives. When people use our 
product every single day--when they open our app up--what are 
we incentivizing them to do? Not telling them what to do, but 
what are we actually incentivizing them to do? And that 
certainly speaks to the buttons that we have in our service, 
all the way to our business model.
    Chairman Burr. Ms. Sandberg, this question is for you. One 
root problem that we see is that users don't truly understand 
the types of data that are being collected on and off your 
platform. How is that data shared with advertisers or others to 
deliver targeted advertising and what vetting, if any, do you 
do on targeted advertising to prevent hostile actors from 
targeting your users for their products?
    Ms. Sandberg. Senator, it's a really important question 
because it goes to the heart of our service. We sell ads and we 
use information that people share with us or share with third-
party sites to make those ads relevant to them. But privacy and 
advertising are not at odds. In fact, they go together. When 
people share information with us, we do not give it to 
advertisers without their permission. We never sell data. And 
they have control over the information we use.
    Chairman Burr. Again for both of you, and I'll start with 
you, Mr. Dorsey. What's your company's ability to collaborate 
with other social media companies in this space?
    Mr. Dorsey. We have a real openness to this and we have 
established a more regular cadence with our industry peers. We 
do believe that we have an opportunity to not only create more 
transparency with an eye towards more accountability, but also 
a more open way of working and a way of working that, for 
instance, allows for a review period by the public on how we 
think about our policies.
    But more so, taking some of the lessons that we have 
learned and benefited from in the open-source software space to 
actually think about developing our policies, our enforcement, 
and also our products going forward. We've been experimenting a 
little bit with this recently, but we would like to be a 
company that is not only hosting an open conversation but is 
also participating in that open conversation. So, we're more 
than open to more collaboration, and not just with our industry 
peers but with scholars, academics, and also our government 
partners.
    Chairman Burr. Thank you.
    Ms. Sandberg.
    Ms. Sandberg. I think our collaboration has greatly 
increased. We've always worked closely with law enforcement and 
we continue to do that and particularly the FBI's new task 
force. We've always shared information with other companies but 
I think we are doing better and we can continue to do better.
    Mr. Chairman, you noted in your opening remarks that some 
of the tips we got came from a private security firm. In our 
mind that's the system working. Our opponents are very well-
funded. They are very organized, and we are going to get those 
tips from law enforcement, from each other, from private firms. 
And the faster we can collaborate, the faster we share those 
tips with each other, the stronger our collective defenses will 
be.
    Chairman Burr. Last question from the Chair--again for both 
of you and I'll go in reverse--you first, Ms. Sandberg. If a 
foreign-influence campaign is detected among your platforms, is 
there a defined process by which other platforms are alerted to 
the campaign that you've discovered?
    Ms. Sandberg. Our security teams have been in close contact 
and so right now when we find something, we are reaching out to 
our companies--other companies to do it and working more 
closely together.
    We've been talking about how, I think, there's still room 
for improvement there. I think we can do more to formalize the 
process. We've had a series of meetings and I think we're going 
to continue to work and we can do better.
    Chairman Burr. Mr. Dorsey.
    Mr. Dorsey. This is not something we want to compete on. We 
hosted our peer companies at our offices just in the past two 
weeks on this very topic and helping to increase our cadence of 
meeting and also what we can share. If there were an 
occurrence, we would immediately look to alert our peer 
companies and this committee and our government law enforcement 
partners.
    Chairman Burr. Thank you for that. Let me just say in 
closing that I hope both of you, if you see impediments that 
exist in your ability to notify or to collaborate as it relates 
to nefarious actors, that you'll certainly make this committee 
aware in cases where we can help. With that, Vice Chairman.
    Vice Chairman Warner. Thank you, Mr. Chairman. As I 
indicated in my opening statement, I hope we can move forward 
on the policy discussion, so I'd like to get your thoughts on 
some of the ideas I and others have suggested, and I want to 
start with you, Mr. Dorsey.
    I think after some initial false starts, it does really 
appear that you have committed to a shift in your company's 
culture with respect to the safety and security on your 
platform. Obviously, I have been impressed by some of the 
increasing efforts you've taken. A question I have, though, is 
that obviously on your platform there are a lot of automated 
accounts or bots, and there's nothing inherently good or bad 
about an automated account. As a matter of fact, there are 
certain very good things that come out of some of these 
automated accounts. But, do you believe that an individual 
Twitter user should have the right to know when he or she is 
being contacted, whether that contact is initiated by a human 
being or a bot?
    Mr. Dorsey. I do believe that first and foremost, anyone 
using Twitter has the right to more context around not only the 
accounts that they're seeing, but also the information.
    Vice Chairman Warner. Would that go as far as actually 
having a policy on your platform indicating--I wouldn't ask you 
to take them down--but at least allowing the user to know 
whether that contact was initiated by a human being versus a 
machine?
    Mr. Dorsey. As far as we can detect them. We can certainly 
label and add context to accounts that come through our API. 
Where it becomes a lot trickier is where automation is actually 
scripting our website to look like a human actor. So as far as 
we can label--and we can identify these automations--we can 
label them, and I think that is useful context and it's an idea 
that we have been considering over the past few months. It's 
really a question of the implementation, but we are interested 
in it and we are going to do something along those lines.
    Vice Chairman Warner. It's not going to solve the problem, 
but I do think giving that indication to users would allow them 
then perhaps to make a little more judgment. Because we had, 
for example, back in early August, we had a panel of experts, 
and they were saying that some of the content--in terms of 
political content, I'm not talking about total tweets--but 
total political content was 25 to 30 to 1 on the far left and 
far right generated by either foreign actors or automated 
accounts. And my question is: Doesn't that volume on the 
extremes drown out real conversation and political conversation 
amongst Americans, regardless of where they fall on the 
political spectrum?
    Mr. Dorsey. It does, in the shared areas of Twitter. So 
there are two main categories of usage in Twitter. One, is the 
people you follow, and those Tweets end up in your timeline. 
Two, are the more common shared spaces, like Search, Trends, 
and also Replies. That's where anyone could interject 
themselves, and that's where we see the most gaming of our 
systems, and that's where we've also made the most progress in 
terms of identifying these patterns and shutting them down 
before they spread too far. That is independent of our work on 
automation, because we're seeing the same patterns through 
human coordination as well.
    Vice Chairman Warner. I appreciate your comments about the 
willingness to notify a user whether it's a human being or a 
machine contacting you. I also think that there's room for 
improvement on some of the high-volume Twitter accounts, to 
really do a little bit of extra examination.
    Ms. Sandberg, let me move to you. Obviously, in a digital 
economy, I think data increasingly represents the single 
greatest asset you have. Obviously it's a part of the 
advertising model that you've created.
    But I think most users are actually pretty much in the dark 
about how much data is actually being collected on them, what 
it's actually worth. I think as we've seen from other fields, 
like health care, the fact that we have such a lack of price 
transparency really makes health care reform really 
challenging.
    I think some of that lack of price transparency and value 
within social media also exists, so I'd like to first of all 
ask, does a Facebook user have a right to know what information 
you are collecting about that user?
    Ms. Sandberg. Yes, and we really agree with you that people 
who use Facebook should understand what information is being 
used, how it's used, and the controls they have. We've worked 
hard to simplify this. We've put out things like privacy 
shortcuts, which show you all your settings in one place, and 
something called download your information, where you can 
download all of your information in a portable way and be able 
to take it with you and see what it is.
    Vice Chairman Warner. I understand, and I think you're 
making progress there, but again, if a user has that 
information, he or she may not know the value. Wouldn't it be 
actually helpful to your user to actually be able to then put 
some valuation on the data you're collecting from the user and 
publish that in a way so that people actually know what their 
information is worth?
    Ms. Sandberg. Mr. Vice Chairman, I think this is one of the 
proposals you laid out in your white paper, and like all of 
this, you know, we don't think it's a question of whether 
regulation--we think it's a question of the right regulation 
that supports users, is transparent, and doesn't squash 
innovation. And we're happy to work with you on the proposal.
    Vice Chairman Warner. Well, I just think that more 
price transparency is always better, and I think this would be 
something that would help users sort through. There was another 
question that we've talked in the past about: Is there 
anything, even with a willing user, are there any rights or 
details about an individual user that they should not be able 
to give up or consent to having used?
    Ms. Sandberg. I'm sorry, I don't understand the question.
    Vice Chairman Warner. My question is this: At some point, 
are there certain pieces of personalized information that a 
user shouldn't be able to voluntarily give to an enterprise 
like yours or Twitter?
    Ms. Sandberg. I think there are, and I think there are many 
ways users have control over what they do. I also think there 
are probably corner cases of law enforcement holds or security 
matters where information is critically important.
    Vice Chairman Warner. I just wonder whether--just a 
question of whether you can consent away all of your rights--
ought to be something we ought to have a discussion on. I've 
only got a few more seconds.
    Let me ask, Ms. Sandberg, you made mention in your opening 
testimony the fact that sometimes political actors are using 
the platforms really to incent violence. I think you made at 
least some mention of Myanmar, where we've obviously seen a 
great tragedy take place there, where hundreds of thousands of 
Rohingya Muslims are fleeing in many ways. The U.N. High 
Commissioner has said that fake accounts on Facebook have 
incented that violence.
    Do you believe that Facebook has both a moral obligation 
and potentially even a legal obligation to take down accounts 
that are actually incentivizing violence?
    Ms. Sandberg. I strongly believe that. In the case of 
what's happened in Myanmar, it's devastating, and we're taking 
aggressive steps and we know we need to do more. Probably the 
most important thing we've done is ramped up our ability to 
review reports in Burmese.
    Vice Chairman Warner. I appreciate your comment that 
Facebook would have both a moral and legal obligation, so 
sorting through what that would look like so that if there were 
other platforms that weren't being as responsible, there ought 
to be some sanctions. So I look forward to working with you on 
that issue as well.
    Thank you, Mr. Chairman.
    Chairman Burr. Senator Risch.
    Senator Risch. Thank you. Thank you both for being here 
today. This is, I think, the third hearing we've held over the 
last year or so--fourth--the Chairman says the fourth--that 
we've had on this issue.
    I think the problem is really well laid out. We've spent 
hours and hours and hours talking about this and what the 
issues are and what the problems--I'm still not hearing what--
very specifically how we're getting after this. I know there're 
some things being done. I tend to agree with you that no matter 
what's done, as long as these platforms are there, there's 
going to be people finding their way into it to do bad things. 
And obviously, everybody wants to get that reduced as much as 
possible.
    And I'm glad to hear that you and the entire industry are 
trying to do something about this. The entity up here that I 
serve in, there are lots of people that would love to help you 
run your organizations through what we call the regulatory 
process. That isn't all of them, obviously, and hopefully it 
isn't even a majority of them, but there will be--and you've 
already seen efforts in that regard--but you're going to have 
to do things yourselves to try to get around this so that we 
don't have the horrible things happen that spawn that type of 
regulation.
    I want to drill down a little bit. In each of your 
companies, who sets these standards or the description of what 
a coordinated manipulation or inauthentic behavior is? What 
entity do you have in each of your companies that makes these 
determinations?
    Ms. Sandberg, let me start with you.
    Ms. Sandberg. Our policy team is setting those, and our 
security team is finding them. And coordinated inauthentic 
behavior means behavior on our site that's inauthentic, so 
people are not representing themselves as who they really are. 
And coordinated means they are coordinating it, and they 
can be coordinating with authentic actors and coordinating with 
inauthentic actors. Both are unacceptable.
    Senator Risch. When the team is sitting there meeting, is 
there generally a unanimity amongst them on something--a fact 
situation comes in front of them. Is this something that is 
easy to recognize--people are unanimous about it--or do you 
wind up with debates as to whether or not a certain platform 
should be shut down?
    Ms. Sandberg. I think on a lot of issues we face like hate 
speech, there's broad debate. When it comes to what is an 
inauthentic actor, which is a fake account posing as someone, 
they're hard to find. But once we find them, we know what they 
are.
    Senator Risch. And what about--the Chairman referred to 
standards in his opening statement. Who sets these standards, 
the same committee?
    Ms. Sandberg. The same group of people.
    Senator Risch. And are they published, so that a user can 
look at that? Well, give me some examples of standards that are 
unacceptable.
    Ms. Sandberg. In the coordinated inauthentic behavior or in 
general?
    Senator Risch. In general.
    Ms. Sandberg. Yes, so we publish our community standards 
comprehensively. And what that does is define what's permitted 
on Facebook and what's not permitted on Facebook. So some 
examples are, bullying is not permitted, hate is not permitted, 
language that leads to violence is not permitted, and this is 
published in detail publicly.
    Senator Risch. Mr. Dorsey, where's your company on these 
things?
    Mr. Dorsey. So, we have a team called Trust and Safety that 
is responsible for designing and writing these policies; it 
reports up to our lead of legal and safety and our 
compliance teams, which report directly to me.
    Senator Risch. I'd like to ask both of you: One of the 
things this committee wrestles with frequently when it comes to 
privacy issues and those kinds of things is the difference 
between a U.S. citizen and a non-U.S. citizen. And under U.S. 
law, they can be treated differently under different 
circumstances.
    Do your companies make any distinction between a U.S. 
citizen versus a non-U.S. citizen? And I guess, now I'm more 
focusing in on the kind of behavior we saw where elections are 
attempted to be manipulated and--and that sort of thing. Ms. 
Sandberg, let's start with you. Does your company make a 
distinction as they're weighing the activity of certain actors?
    Ms. Sandberg. So for political and issue ads, we are now 
going through a verification process. And in order to run those 
in the United States, people have to verify that they are 
legally able to do that. So that's one area where we would 
distinguish.
    Senator Risch. And what does that mean, legally able to do 
that? If a citizen of another country, any other country, 
decides they want to say something about a U.S. election, are 
they disqualified from doing that with your company?
    Ms. Sandberg. In the free content--so what their posts are 
to their friends and family or publicly--people are allowed to 
talk about any issues in any country, as long as they're not 
crossing over into the areas we discussed that aren't allowed, 
like hate and bullying. In advertising, in U.S. elections, you 
have to be a U.S. citizen.
    Senator Risch. Mr. Dorsey.
    Mr. Dorsey. We have very similar policies and we do segment 
them by advertising and also the more organic social creation 
of content as well.
    We don't always have an understanding of where an account 
is located. We have to infer this oftentimes. And this is where 
we do get a lot of help from our law enforcement partners. It 
is not only to understand where some of these threats are 
coming from, but also the intent. And the faster that we get 
that information, the faster that we can act.
    Senator Risch. One of the concerns that I have--and I 
appreciate that explanation--but what we've seen on this 
committee, and have actually seen in other contexts, is that in 
today's world it is so easy to either employ or even 
impersonate a U.S. citizen to do something in a given context. 
Do you have difficulties in that regard?
    Ms. Sandberg. Well, finding inauthentic behavior is a 
challenge and I think you're seeing us bring real resources to 
bear. This is why we're investing so heavily in people and 
technology. This is why we're investing in programs like 
verification.
    I think the other step we're taking here is around 
transparency. So being able to see if people bought political 
ads, where they're located, being able to see who's running a 
page; these are steps we think are really important for helping 
us find what--to your point--can be very difficult things to 
find.
    Senator Risch. Mr. Dorsey, briefly.
    Mr. Dorsey. We've decided to focus a lot more on the 
behavioral patterns that we're seeing across the network. While 
we can't always recognize in real-time where someone might be 
coming from or if they were--if they are representing someone 
who does not exist, we can see common patterns of behavior and 
utilizing the network to spread their information.
    So we have been building a lot of our machine learning and 
deep learning technology to recognize these patterns and shut 
them down before they spread too quickly. And then, also, link 
them to other accounts that demonstrate similar patterns. And 
we've gotten a lot more leverage out of that in terms of 
scalability than working on systems to identify whether it's a 
fake profile or not.
    Senator Risch. Interesting, thank you.
    Chairman Burr. Senator Wyden.
    Senator Wyden. Thank you, Mr. Chairman.
    Mr. Chairman, I want to thank you and Senator Warner for 
your kind comments about John McCain. And what is not often 
remembered is John McCain wrote some of the really important 
rules of the road for the internet when he was Chairman of the 
Commerce Committee. And it was always bipartisan, so I very 
much appreciate both of you mentioning our wonderful friend, 
John McCain.
    And Ms. Sandberg, Mr. Dorsey, welcome and I've enjoyed 
visiting with you. Let me go right to the question that is 
foremost on my mind, and that is consumer privacy as a national 
security issue.
    Technology companies like yours hold vast amounts of very 
private information about millions of Americans. The prospect 
of that data being shared with shady businesses, hackers, and 
foreign governments is a massive privacy and national security 
concern. Russians keep looking for more sophisticated ways of 
attacking our democracy.
    Personal data reveals not just your personal and political 
leanings, but what you buy, even who you date. My view is 
personal data is now the weapon of choice for political 
influence campaigns. And we must not make it easier for our 
adversaries to seize these weapons and use them against us.
    So I'd like to see if we could do a yes or no on this. And 
I wrote it because I think we can. My view is, from this point 
on, beefing up protections and controls on personal privacy 
must be a national security priority. I'd like a yes or no, Ms. 
Sandberg.
    Ms. Sandberg. Yes.
    Senator Wyden. Mr. Dorsey.
    Mr. Dorsey. Yes.
    Senator Wyden. Okay. Let me turn now to a question based on 
a lot of analysis my office has done and you all have talked to 
us about. We have reviewed Facebook privacy audits required by 
the 2011 consent agreement after your company was found to use 
unfair and deceptive practices.
    One section of the audits deals with how Facebook shared 
the personal information of Americans with smart phone 
manufacturers. These included the Chinese companies Huawei and 
ZTE. I found portions of this audit very troubling and the 
findings could affect many Americans. I believe, Ms. Sandberg, 
the American people deserve to see this information. Will you 
commit this morning to making public the portion of your audits 
that relate to Facebook's partnerships with smart phone 
manufacturers?
    Ms. Sandberg. Senator, I really appreciate the question and 
the chance to clarify this issue because it's really important. 
With regard to the audits, our third-party auditor, PwC, does 
audits on a rolling basis every two years, but they're 
continual. They are given to us. We have shared them with the 
FTC voluntarily and we will continue to do that.
    I can't commit right in this moment to making that public 
because a lot of that has sensitive information which could 
help people game the system, but we will certainly work with 
you to see what disclosures would be prudent. But----
    Senator Wyden. Let's do this. Because that's a constructive 
answer and I've got other things I've got to cover. I'm just 
going to assume you will work with this. We understand the 
question of redaction on sensitive national security matters.
    Can you get back to me within a week with respect to how 
Facebook will handle what I think is troubling information?
    Ms. Sandberg. We're going to get back to you as quickly as 
possible. We can definitely prioritize this request. So we'll 
do it as fast as we can depending on the volume of requests 
everyone has.
    Senator Wyden. Thank you. And look, so you all know where 
I'm going with this. To me, protecting data privacy has to be a 
higher tier issue in terms of national security. It's going to 
be the foundation of the legislation that I've talked to both 
of you about. So that's why I feel strongly and I think your 
answer is constructive and I hope we can get that quickly.
    What I also want to get to with you, Ms. Sandberg, is the 
issue of micro-targeting to discourage voting. This is one of 
the most powerful tools in the propaganda arsenal. Going after 
individual Americans with ads and really lasering in on the 
ability to affect political campaigns. It's certainly been used 
in the past with the Russians to discourage minority Americans 
from voting. Would Facebook's current policies prohibit using 
micro-targeting to discourage voting?
    Ms. Sandberg. Senator, we feel very strongly about this. 
There is a long history in this country of trying to suppress 
civil rights and voting rights and that activity has no place 
on Facebook. Discriminatory advertising has no place on 
Facebook.
    Senator Wyden. So what are you doing to prohibit this 
micro-targeting? I mean what about ads that share false 
information about the date of the election or the location of a 
polling place or ads that tell people they can vote with a text 
message from their phone? You have said that it's unacceptable 
to target minorities and others, but I really need to drill 
down more deeply in knowing, because I think this is a 
primary--we can get bipartisan agreement on. What do you do to 
deal with micro-targeting?
    Ms. Sandberg. So with everything when we're looking for 
abuse of our systems and things that are against our policies, 
we have a combination of people reviewing ads, and we have a 
combination of automated systems and machine learning that help 
us find things and take them down quickly.
    Senator Wyden. OK, I'll hold the record open for that. 
Could I have, say within a week, a written answer that would 
get into some of those specifics?
    Ms. Sandberg. We're going to get you answers to your 
questions as quickly and thoroughly as we can.
    Senator Wyden. Good. My last question deals with foreign 
governments aiding hoaxes and misinformation and I'd like to 
get both of you, in fact. Why don't you start with this Mr. 
Dorsey?
    Do either of you or your companies have any indication that 
Iran, Russia, or their agents have supported, coordinated with, 
or attempted to amplify the reach of hoaxes?
    Mr. Dorsey.
    Mr. Dorsey. Of hoaxes?
    Senator Wyden. Yes.
    Mr. Dorsey. We certainly have evidence to show that they 
have utilized our systems and gamed our systems to amplify 
information. I'm not sure in terms of the definition of hoax in 
this case, but it is likely.
    Senator Wyden. Okay.
    Ms. Sandberg.
    Ms. Sandberg. Just two weeks ago, we took down 650 pages 
and accounts from Iran. Some were tied to state-owned media and 
some of them were pretending to be free press, but they weren't 
free press. So it depends on how you define a hoax, but I think 
we're certainly seeing them use misinformation to campaign----
    Senator Wyden. My time is up. The only other area I'm going 
to want to explore with you is, we've got to deal with this 
back and forth between the private sector and the government. 
Very often, we ask you all about things you're doing and you 
say we need the government to also help us get to A, B, C, and 
then the government says the same thing about you. We'll want 
to explore that. Thank you Mr. Chairman for the extra time.
    Chairman Burr. Senator Rubio.
    Senator Rubio. I want to thank you both for being here.
    First of all, there's an empty chair next to you from 
Google. They're not here today and maybe it's because they're 
arrogant or maybe it's because there's a report that as of last 
night--this was posted at 3:36 yesterday--this group went on 
basically pretending to be Kremlin-linked trolls. They did 
everything. They used the details of the Internet Research 
Agency, which is a Kremlin-linked troll farm, and were able to 
buy ads online and place them on sites like CNN, CBS This 
Morning, HuffPost, The Daily Beast, so I'm sure they don't want 
to be here to answer these questions.
    But I thank you both for being here. I was happy to read in 
your opening statement, Ms. Sandberg, that you talk about our 
democracy, our democratic process. You acknowledge 
responsibility for protecting our process. And you talked about 
our adversaries, clearly linking the company to the values and 
the importance of this country and I think in acknowledgment 
that your company would not exist were it not in the United 
States, because of the freedoms that we have.
    Twitter didn't go as far--you did describe yourself as 
a global town square--but you did say that you want to support 
free and open democratic debate. You did refer to our democracy 
and you did say that Twitter was built on the core tenet of 
freedom of expression, which is a very important core tenet.
    Here is why this is relevant, because we're here today 
because we learned--and we've learned the hard way--that social 
media was largely seen as a tool for incredible good. But 
what makes it good can be manipulated by bad actors to do harm. 
And that's what happened. We have all learned that the hard 
way.
    And so what we're asking you to do, and I think what you've 
agreed to do, is to use the powers that you have within your 
platforms to crack down on certain users who are hostile 
actors, who are using disinformation or misinformation or hate 
speech for the purposes of sowing discord, or interfering in 
our internal affairs--and that's a positive.
    Here's the problem though: we have to start thinking about 
what happens when an authoritarian regime asks you to do that 
because their definition of disinformation or misinformation 
could actually be the truth. Their discord, or what they define 
as discord, would be things like defending human rights. 
Interfering in their internal affairs, they would define as 
advocating for democracy. And the reason why I think that 
answering that question is so important is because it's going 
to define what your companies are. Are your companies really 
built on these core values, or are they global companies, like 
all these other companies that come around here, who see their 
number one obligation to make money and therefore market access 
irrespective of the price they have to pay to do so?
    So, for example, in 2016 the New York Times reported that 
Facebook was working on a program to restrict stories from 
showing up in newsfeeds based on the user's geography. The 
story implies--and I know that it hasn't been implemented--but 
it implies that that was being used in order to potentially try 
to get back into China, but any authoritarian government could 
try to use that tool.
    Vietnam, by the way, where you do operate, has a new law 
taking effect on January 1, 2019, that will require you to store 
user data inside the country and hand over that data, to the 
government, of users suspected of anti-state activity, 
including spreading news that may impede Hanoi or hurt the 
economy, for example, democracy activists.
    Twitter has a policy of accommodating countries that have 
different ideas about the contours of freedom of expression by 
selectively blocking tweets and accounts. For example, one of 
the countries you complied with is Pakistan, who has asked you 
to block sites for blasphemy. There were 647 cases of 
blasphemy over the period from 1986 to 2007. Fifty percent of 
those cases were brought against non-Muslim Pakistanis--in a 
country that is three percent non-Muslim.
    One high-profile case is Asia Bibi, who has been sentenced 
to death after a personal dispute over drinking water with a 
group of women. They accused her of insulting the prophet. 
She's arrested, imprisoned, sentenced to death. Not relevant to 
Twitter but relevant to the blasphemy laws that Pakistan has 
asked you to comply with.
    Turkey has requested that you block over 12,000 accounts. 
Since 2014, you've blocked over 700. Many of them are 
journalists. One of them is an NBA player, Enes Kanter. Russia 
has asked you to block almost 80 accounts as of last check. You 
complied with that. One of them was a pro-Ukrainian account in 
2014.
    And so here's why all of this is relevant. I guess the 
first question for Facebook is: These principles of our 
democracy--do you support them only in the United States or are 
these principles that you feel obligated to support around the 
world?
    Ms. Sandberg. We support these principles around the world. 
You mentioned Vietnam. We do not have servers in Vietnam. And 
with very minor exceptions of imminent threats that were 
happening, we've never turned over information to the 
Vietnamese government, including political information.
    Senator Rubio. And you never will?
    Ms. Sandberg. We would not.
    Senator Rubio. You would not agree to do so in order to 
operate?
    Ms. Sandberg. We would only operate in a country when we 
can do so in keeping with our values.
    Senator Rubio. And that would apply to China as well?
    Ms. Sandberg. That would apply to China as well.
    Senator Rubio. Thank you. And on Twitter, how is blocking 
the account of journalists or an NBA player in keeping with the 
core tenet of freedom of expression?
    Mr. Dorsey. We enacted a policy some time ago to allow for 
per-country content takedown. Meaning that within the 
boundaries of that nation, the content would not be able to be 
seen but the rest of the world can see it. And that's important 
because the world can still have a conversation around what's 
happening in a market like Turkey. And also, we have evidence 
to show that a lot of citizens within Turkey access that 
content through proxies and whatnot, as well.
    So, we do believe--and we have fought the government--the 
Turkish government--consistently around their requests and 
oftentimes won. Not in every case, but oftentimes have made 
some moves. So we would like to fight for every single person 
being able to speak freely and to see everything, but we have 
to realize that it's going to take some bridges to get there.
    Senator Rubio. Well, because a Twitter spokesman in 
response to a Buzzfeed article--I think about two years ago--
here's the quote defending this policy. It said, ``Many 
countries including the United States have laws that may apply 
to tweets and/or Twitter account content.'' And then you went 
on to say what you said, ``On our continuing efforts to make 
services available to users everywhere et cetera.'' You would 
agree that there's no moral equivalency between what we're 
asking you to do here and what Turkey has asked you to do, or 
other countries have asked you to do, in that same realm?
    Mr. Dorsey. We do have to comply with the laws that govern 
us within each one of these nations, but our ideals are similar 
and our desires----
    Senator Rubio. Whose ideals are similar? I'm sorry.
    Mr. Dorsey. The company's.
    Senator Rubio. Are similar to who?
    Mr. Dorsey. Similar to how we were founded and where we 
were founded in this country.
    Senator Rubio. I guess my point is, you're not arguing 
though that what we're asking you to do here--on this 
misinformation against foreign efforts to interfere in our 
elections--is the same as what Turkey or other authoritarian 
regimes have asked you to do abroad, against political 
opponents of theirs. They're not morally equivalent, these two 
things?
    Mr. Dorsey. Correct.
    Senator Rubio. Thank you.
    Chairman Burr. The Chair will recognize Senator Heinrich 
for questions and then members should know that we will take a 
short recess, no more than five minutes, and then reconvene.
    Senator Heinrich.
    Senator Heinrich. Thank you, Mr. Chair, and thank you both 
for being here. I think we've learned quite a bit over the 
course of the last couple of years. I think it would be an 
understatement to say that we were all caught flat-footed in 
2016: social media platforms, the intelligence community, this 
committee, government as a whole.
    Obviously, we want to learn from that and what I'd like to 
start with is to ask from each of you, since 2016 your 
platforms have been used throughout the course of a number of 
subsequent elections--elections in France, in Germany, and 
other Western allies across Europe.
    What have you learned from those consequential elections 
after 2016 and how has that informed your current posture in 
terms of how you're gaining transparency into this activity? Go 
ahead.
    Ms. Sandberg. Senator, I think we've learned a lot and I 
think we're going to have to continue to learn because as we 
learn, our opponents learn, and we have to keep up. We're 
working on technology and investments in people to make sure 
fake news is disseminated less on the platforms--transparency 
actions and taking down bad actors.
    And we've seen everywhere, from Mexico to Brazil to other 
places around the world, these same techniques deployed 
differently and each time we see it, I think we get smarter. I 
think we see the new threat and I think we're able to connect 
the dots and prevent those threats going forward.
    Senator Heinrich. Mr. Dorsey.
    Mr. Dorsey. We've also learned a lot from elections around 
the world, most recently the Mexican election. We have opened a 
new portal to cover that election, which allows any journalist 
or government law enforcement to report any suspicious 
behavior very quickly to us, so we can take more action.
    Otherwise, we have been investing in artificial 
intelligence and machine learning models to, again, recognize 
the patterns of behavior because we believe this is where the 
greatest leverage will come from, recognizing how people 
artificially amplify information and shutting it down before it 
spreads into the shared spaces of Twitter and more broadly into 
someone's replies to a tweet.
    Senator Heinrich. I want to get to the basic issue of 
whether our incentives in this case are aligned to deal with 
these challenges. If your users were to lose confidence in your 
platforms, in the authenticity of what you, Mr. Dorsey, called 
a public square--I might call it a digital public square--I 
assume there would be very serious economic implications for 
your companies. Do you think the--the incentives have aligned 
for platform providers of all types in the digital space, to 
want to get at these issues, and have a plan, and be able to 
respond in real time?
    Ms. Sandberg and then you, Mr. Dorsey.
    Ms. Sandberg. Absolutely. Trust is the cornerstone of our 
business. People have to trust that what they see on Facebook 
is authentic. People have to trust that this is a positive 
force for democracy and the things they care about. And so this 
has been a huge issue for us and that's why we're here today 
and that's why we're going to keep working to get ahead of 
these threats and make sure we can minimize all of this 
activity.
    Mr. Dorsey. Our incentives are aligned but I do believe it 
goes a lot deeper than just the alignment of our company 
incentives with this committee and the American people. I 
believe we need to question the fundamental incentives that are 
in our product today.
    Every time someone opens up our service, every time someone 
opens up our app, we are implicitly incentivizing them to do 
something or not to do something. And that extends all the way 
to our business and those answers that we get from asking that 
question are going to create massive shifts in how Twitter 
operates and I also believe how our industry operates. So what 
worked 12 years ago does not work today--it hasn't evolved fast 
enough--but I think it is a layer--many, many, many, many 
layers deeper than the surface symptoms that we often find 
ourselves discussing.
    Senator Heinrich. Ms. Sandberg, you mentioned a number of 
things that would violate your standards, for example, hate 
speech, advocacy of violence. What about when you're dealing 
with real people, authentic users, intentionally spreading 
false information? And obviously there are huge free speech 
implications there. But, for example, what if a real person, a 
U.S. citizen, says that victims of the mass shootings were 
actually actors? Would that violate your standards and if the 
answer is no, how should we, and by we, I mean government and 
industry, deal with those very real challenges?
    Ms. Sandberg. Well, let me start by saying I find claims 
like that personally unbelievably upsetting. If you've been a 
victim or a parent of a victim, you deserve our full 
support. And finding a line between what is hate speech and 
what is misinformation is very, very difficult, especially if 
you're dedicated to expressing free expression, and sometimes 
free expression is expressing things you strongly disagree 
with.
    In the case of misinformation, what we do is we refer it to 
third-party fact-checkers. We don't think we should be the 
arbiter of what's true and what's false, and we think that's 
really important. Third-party fact-checkers then mark it as 
false. If it's marked as false, we dramatically decrease the 
distribution on our site. We warn you if you're about to share 
it. We warn you if you have shared it and, importantly, we show 
related articles next to that so people can see alternative 
facts.
    The fundamental view is that bad speech can often be 
countered by good speech, and if someone says something is not 
true and they say it incorrectly, someone else has the 
opportunity to say, actually you're wrong. This is true and 
that's what we're working on through our systems.
    Senator Heinrich. I think one of the things we found in 
2016 is that we didn't have the transparency and the literacy 
to do what you just pointed out there: to counter false speech 
with accurate speech to understand how this speech was 
propagating in the digital public space.
    What more do you think we should be doing to simply make 
the public more literate about the fact that this information 
warfare is very real? It's going on all the time. It's not fake 
news. It's not a hoax. It's something we're all going to have 
to deal with, that our kids, even playing platforms like 
Pokemon Go, may have to--have to deal with as well.
    Do either of you have a quick opinion on that? And then my 
time will be expired. I apologize, Mr. Chair.
    Mr. Dorsey. I believe we need to point to where we see 
healthy participation and clearly mark what is healthy and what 
is unhealthy. And also realize that not everyone is going to 
choose healthy participation in the short term. But how do we 
encourage healthy participation in order to increase the reach 
and also increase the value of what they're giving to that 
digital public square?
    Chairman Burr. This hearing stands in a recess subject to 
the call of the Chair.
    [Whereupon the hearing recessed at 10:51 a.m. and 
reconvened at 11:01 a.m.]
    Chairman Burr. I'd like to call the hearing back to order. 
The chair would recognize Senator Collins for questions.
    Senator Collins. Thank you, Mr. Chairman. First let me 
thank you both for being here and also to express my outrage 
that your counterpart at Google is not at the table as well.
    Mr. Dorsey, as of January of this year, Twitter has taken 
down more than 3,800 Russian IRA accounts that by Twitter's own 
estimate reached approximately 1.4 million people. One of those 
accounts purported to be under the control of the Tennessee 
GOP, although it was not. It was a Russian IRA account. It had 
more than 140,000 followers and would sometimes spread 
conspiracy theories and false claims of voter fraud.
    My question to you is: Once you have taken down accounts 
that are linked to Russia, these impostor accounts, what do you 
do to notify the followers of those accounts that they have 
been following or engaged in accounts that originated in 
Russia, and are not what they appear to be?
    Mr. Dorsey. Thank you for the question. We simply haven't 
done enough. So in this particular case, we didn't have enough 
communication going out in terms of what was seen and what was 
tweeted, and what people are falling into.
    We do believe transparency is a big part of where we need 
the most work and improvement, and it's not just with our 
external communications, it's actually within the product and 
the service itself.
    We need to meet people where they are, and if we determine 
that people are subject to any falsehoods or any manipulation 
of any sort, we do need to provide them the full context of 
that. And this is an area of improvement for us and something 
that we're going to be diligent to fix.
    Senator Collins. I think this is critically important. If a 
follower just gets a message that says this Twitter account is 
no longer available, that does not alert the individual that he 
or she has been receiving messages--tweets--from a Russian 
entity whose goal is to undermine public confidence in elected 
officials and our democratic institutions.
    So I really think we need something more than even the 
tombstone, or something else. We need to tell people that they 
were taken in or victims--innocent victims--of a foreign 
influence campaign.
    Ms. Sandberg, let me ask you this same question. What is 
Facebook doing?
    Ms. Sandberg. We agree with you that people need to know, 
so we've been discussing these publicly, as well as in specific 
cases notifying people. So we notified people directly if they 
had liked--or had liked the original IRA accounts.
    Most recently when there was an event that was going to be 
happening in Washington that inauthentic accounts--we notified 
all the people who either RSVP'd to that event, or who said 
they were interested in possibly going to that event.
    Senator Collins. Thank you. That was the Night to Defeat 
the Right, or something like that, as I recall.
    Mr. Dorsey, back to you. Clemson University researchers and 
others have shown that these Russian IRA accounts target 
specific leaders and social movements across the political 
spectrum. And again, the goal of the Russians, the Iranians--
anyone else who is involved in this influence campaign--is to 
undermine the public's confidence in political leaders and 
weaken our democratic institutions and turn us against one 
another.
    Well, I learned not from Twitter but from Clemson 
University that I was one of those targeted leaders and that 
there were 279 Russian-generated tweets that targeted me that 
had gone to as many as 363,000 followers. So why doesn't 
Twitter notify individuals like me that we have been targeted 
by foreign adversaries? I shouldn't find out from looking at 
Clemson University's database and working with their 
researchers. It seems to me that once you determine that, you 
should notify the people who are the targets.
    Mr. Dorsey. I agree. It's unacceptable. And as I said 
earlier, we want to find ways to work more openly, not just 
with our peer companies but with researchers and universities 
and also law enforcement because they all bring a different 
perspective to our work, and can see our work in a very 
different light. And we are going to do--we're going to do our 
best to make sure that we catch everything and we inform people 
when it affects them. But, we are not going to catch 
everything. So it is useful to have an external partnership and 
work with them to make sure that we're delivering a message in 
a uniform manner where people actually are, without requiring 
them to find a new channel to get that information.
    This is where a lot of our thinking is going and a lot of 
our work is going. But we recognize we need to communicate more 
directly where people are on our service, and we also recognize 
that we're not going to be able to catch everything alone, so 
we need to develop better partnerships in order to do that.
    Senator Collins. I would close my questioning by 
encouraging both of you to work more closely with academia, 
with our government. The Clemson University researchers have 
done extraordinary work, but they have said that they've been 
provided data that is only within the last three years, which 
does not allow them to do the kind of analysis that they'd like 
to do and that's probably because of the new European Union 
privacy laws. But the EU has provided research exemptions. So I 
hope that you will commit to providing data that goes beyond 
that three year window to researchers who are looking into 
Russian influence efforts on your platforms. Thank you.
    Chairman Burr. Senator Harris.
    Senator Harris. Thank you, Mr. Chairman, for accommodating 
me. I'm in another hearing as you know. Good morning, and to 
the invisible witness, good morning to you. So I have a few 
questions for Ms. Sandberg. On November 2, 2017, your company's 
general counsel testified in front of this Intelligence 
Committee on Russian interference, and I asked a few questions.
    I asked how much money did you make, and this is of the 
representative from both Facebook and Twitter--both of your 
general counsels were here. And I asked how much money did you 
make from legitimate advertising that ran alongside the Russian 
propaganda. The Twitter general counsel said, quote, ``We 
haven't done the analysis but we'll follow up with you and work 
on that.'' And the Facebook general counsel said the same is 
true for Facebook.
    Again, I asked Facebook CEO Mark Zuckerberg on April 10, 
2018, and he said that, quote, ``Internet Research Agency, the 
Russian firm, ran about $100,000 worth of ads.'' Following the 
hearing, I asked Facebook the same question in writing, and on 
June 8, 2018, we received a response that said, quote, ``We 
believe the annual revenue that is attributable to inauthentic 
or false accounts is immaterial.''
    So my question is: What did you mean by immaterial? Because 
I'm a bit confused about the use of that term in this context.
    Ms. Sandberg. Thank you for the question.
    Again we believe the total of the ad spending that we have 
found is about $100,000. And so the question you're asking is 
with the inorganic content, I believe, what is the possible 
revenue we could have made? So here's the best way I can think 
of to estimate that, which is that we believe between 2015 and 
2017, up to 150 million people may have seen the IRA ads or 
organic content in our service. And the way our service works 
is, ads don't run attached to any specific piece of content, 
but they're scattered throughout the content. This is 
equivalent to .004 percent of content in news feed and that was 
why they would say it was immaterial to our earnings.
    But I really want to say that from our point of view, 
Senator Harris, any amount is too much.
    Senator Harris. If I may, just so I'm clear about your 
response--so are you saying that then the revenue generated was 
.004 percent of your annual revenue? Of course that would not 
be immaterial.
    Ms. Sandberg. Again, the ads are not attached to any piece 
of content so----
    Senator Harris. So what metric then? Just help me with 
that. What metric are you using to calculate the revenue that 
was generated, associated with those ads? And what is the 
dollar amount that is associated then with that metric?
    Ms. Sandberg. The reason we can't answer the question to 
your satisfaction is that ads are not--organic content--ads 
don't run with inorganic content on our service, so there is 
actually no way to firmly ascertain how much ads are attached 
to how much organic content. It's not how it works.
    In trying to answer what percentage of the organic----
    Senator Harris. But what percentage of the content on 
Facebook is inorganic?
    Ms. Sandberg. I don't have that specific answer, but we can 
come back to you with that.
    Senator Harris. Would you say it's the majority?
    Ms. Sandberg. No. No.
    Senator Harris. An insignificant amount? What percentage? 
You must know.
    Ms. Sandberg. If you ask about our inauthentic accounts on 
Facebook, we believe at any point in time it's 3 percent to 4 
percent of accounts, but that's not the same answer as 
inorganic content because some accounts generate more content 
than others.
    Senator Harris. I agree. So what percentage of your content 
is inorganic?
    Ms. Sandberg. Again, we don't know. I can follow up with 
the answer to that.
    Senator Harris. Okay, please. That would be great. And then 
your company's business model is obviously--it's complex but 
benefits from increased user engagement and that results of 
course in increased revenue. So, simply put, the more people 
that use your platform, the more they are exposed to third-
party ads, the more revenue you generate. Would you agree with 
that?
    Ms. Sandberg. Can you repeat? I just want to make sure I 
got it exactly right.
    Senator Harris. So the more user engagement will result--
and the more then that they are exposed to third-party ads--the 
more that will increase your revenue. So the more users that 
are on your platform----
    Ms. Sandberg. Yes. Yes. But only I think when they see 
really authentic content. Because I think in the short run and 
over the long run it doesn't benefit us to have anything 
inauthentic on our platform.
    Senator Harris. That makes sense. In fact, the first 
quarter of 2018, the number of daily active users on Facebook 
rose 13 percent, I'm told. And corresponding ad revenue grew by 
half to $11.79 billion. Does that sound correct to you?
    Ms. Sandberg. Sounds correct.
    Senator Harris. And then would you agree that--I think it's 
an obvious point--that the more people that engage on the 
platform, the more potential there is for revenue generation 
for Facebook?
    Ms. Sandberg. Yes, Senator. But again, only when the 
content is authentic.
    Senator Harris. I appreciate that point. And so a concern 
that many have is how you can reconcile an incentive to create 
and increase your user engagement when the content that 
generates a lot of engagement is often inflammatory and 
hateful.
    So, for example, Lisa-Maria Neudert, a researcher at the 
Oxford Internet Institute, says, quote, ``The content that is the 
most misleading or conspiratorial, that's what's generating the 
most discussion and the most engagement, and that's what the 
algorithm is designed to respond to.''
    My concern is that according to Facebook's community 
standards, you do not allow hate speech on Facebook. However, 
contrary to what we've seen, on June 28, 2017, a ProPublica 
report found that Facebook's training materials instructed 
reviewers to delete hate speech targeting white men but not 
against black children because black children are not a 
protected class. Do you know anything about that, and can you 
talk to me about that?
    Ms. Sandberg. I do. And what that was, I think, a bad 
policy that's been changed, but it wasn't saying that black 
children--it was saying that children--it was saying that 
different groups weren't looked at the same way, and we've 
fixed it.
    Senator Harris. But isn't that the concern with hate, 
period? That not everyone is looked at the same way?
    Ms. Sandberg. Well, hate speech is against our policies and 
we take strong measures to take it down. We also publish 
publicly what our hate speech standards are. We care 
tremendously about civil rights. We have worked very closely 
with civil rights groups to find hate speech on our platform 
and take it down.
    Senator Harris. So when did you address that policy? I'm 
glad to hear you have. When was that addressed?
    Ms. Sandberg. When it came out--and again, that policy was 
a badly written, bad example, and not a real policy.
    Senator Harris. The report that I'm aware of was from June 
of 2017. Was the policy changed after that report or before 
that report from ProPublica?
    Ms. Sandberg. I can get back to you on the specifics of 
when that would have happened.
    Senator Harris. You're not aware of when it happened?
    Ms. Sandberg. I don't remember the exact date.
    Senator Harris. Do you remember the year?
    Ms. Sandberg. Well, you just said it was 2017.
    Senator Harris. So do you believe it was 2017 that the 
policy changed?
    Ms. Sandberg. It sounds like it was.
    Senator Harris. Okay. And what is Facebook's official 
stance on then hate speech regarding so-called, and legally 
defined, unprotected classes, such as children?
    Ms. Sandberg. Hate speech is not allowed on our platform 
and hate speech is, you know, important in every way. And we 
care a lot that our platform is a safe community. When people 
come to Facebook to share, they're coming because they want to 
connect on the issues that matter to them.
    Senator Harris. So, have you removed the requirement that 
you will only protect with your hate speech policy those 
classes of people that have been designated as protected 
classes in a legal context? Is that no longer the policy of 
Facebook?
    Ms. Sandberg. I know that our hate speech policies go 
beyond the legal classifications and they are all public and we 
can get back to you on any of that. It's all publicly 
available.
    Senator Harris. Thank you so much. Thank you, Mr. Chairman.
    Chairman Burr. Senator Blunt.
    Senator Blunt. Thank you, Chairman. Mr. Dorsey, Wired 
magazine last week had an article that said you'd admitted 
having to rethink fundamental aspects of Twitter. Would that be 
an accurate reflection of where you've been the last year?
    Mr. Dorsey. Yes. We are rethinking the incentives that our 
service is giving to people.
    Senator Blunt. And what would be the biggest area where 
you're trying to rethink how you thought this was going to work 
out and the way it's turned out to be?
    Mr. Dorsey. Well--and this is pretty far-reaching--so we're 
still in the process of doing this work, but when we created 
the service 12 years ago, we had this concept of followers. And 
we made the number of followers big and bold and a very simple 
but noticeable font.
    And just that decision alone has incentivized people to 
want to grow that number, to increase that number. And the 
question we're now asking is, ``Is that necessarily the right 
incentive? Is the number of followers you have really a proxy 
for how much you contribute to Twitter and to this digital 
public square?'' And we don't believe it is. But that's just 
one question. The way we lay out our buttons on the bottom of 
every tweet in a reply and a retweet and a like, that also 
implies an incentive and a point of view that we're taking that 
we want to encourage people to do.
    So as we think about serving the public conversation, as we 
think about our singular priority of increasing the health of 
that public conversation, we are not going to be able to do 
long-term work unless we are looking at the incentives that our 
product is telling people to do every single day.
    Senator Blunt. All right, that's helpful. Thank you. 
Senator Collins asked her last question--I didn't really quite 
get the answer to that question. But I think what she was 
asking is a question I had also, which was: In the interest of 
transparency and public education and looking at things 
available to researchers and policy makers, are you willing to 
archive suspended accounts so that people can look back at 
those? And would that be a period of, I think, three years was 
part of the question she asked. Give me a little better, more 
specific answer. You didn't have time to answer that, and I'd 
like you to have time to answer that.
    Mr. Dorsey. We are looking at things like a transparency 
report. We put out a transparency report around terrorism, but 
we're looking at expanding that transparency report around 
suspensions of any account.
    We are still coming up with the details of what this will 
look like and what it will include.
    Senator Blunt. As opposed to just a transparency report, 
are you willing to archive some of this where you may not be 
reporting on it at the time, but someone could look three years 
down the road and try to do an analysis of why that information 
was out there the way it was and how it fit into your overall 
policy of taking whatever action you're taking?
    Mr. Dorsey. I think it's a great idea to show the 
historical public record. We just need to understand what the 
legal implications are, and we can get back to you on that.
    Senator Blunt. Yes, I may come back with a question if I 
have time on legal implications, generally. I think for both of 
your companies, who have been pretty forward-leaning in the 
last couple of months as this conversation has moved pretty 
dramatically, the business implications, the liability 
implications of what we're asking you to do are pretty great.
    Well, let me see if I can get a couple of Facebook 
questions in first. Ms. Sandberg, does Facebook differentiate 
between foreign and domestic influence operations when deciding 
whether to take down a page or remove an account from the 
platform?
    Ms. Sandberg. Our focus is on inauthenticity, so if 
something is inauthentic, whether it's trying to influence 
domestically or trying to influence on a foreign basis--and 
actually a lot more of the activity is domestic--we take it 
down.
    Senator Blunt. You take it down indiscriminately, whether 
it's a foreign influence or--or a domestic influence?
    Ms. Sandberg. And you saw that with the IRA. With the IRA 
accounts, the original ones for our election were targeted at 
the United States, but then there were another 270 accounts 
that were almost all targeted in Russia or at Russia--for 
Russian speakers and nearby languages. So a lot of those were 
domestic, and those are down.
    Senator Blunt. Well, it's been mentioned several times, and 
I think appropriately so, Google is not here today. But the two 
of you are, and Ms. Sandberg, again, just what seems like a 
long time ago, but only a few months, since Mr. Zuckerberg was 
here testifying before Congress. It seems like to me that 
Facebook has been pretty active in finding and taking down 
things that should not have been out there: the recent Iranian 
takedown, the Russian things that have been taken down.
    Do you want to talk a little about what's the big challenge 
about being at the forefront of trying to figure this out from 
a business perspective or a liability perspective, either one? 
Then I'm going to come to Mr. Dorsey with the same question.
    Ms. Sandberg. Well I really appreciate what you said, 
because we have been investing very heavily in people, in our 
systems, in decreasing the dissemination of fake news, in 
transparency, and I think that's what you're seeing pay off.
    I think we've all said, in the private meetings we had as 
well as this public discussion, that tighter coordination 
really helps us. If you look at our recent takedowns, some of 
it was information we found ourselves, some of it were hints we 
got from law enforcement, some of it is information we can 
share with other companies.
    And so this is a big threat, and our opponents are going to 
keep getting better and we have to get better. We have to stay 
ahead. And the more we can all work together the better off 
we're going to be, and that's why I really appreciate the 
spirit with which this hearing this morning is taking place.
    Senator Blunt. And how does the takedown, the practice 
work, where legitimate accounts are sold then maybe--and 
repurposed by others? What are you looking at there as a 
challenge?
    Ms. Sandberg. So our policy is inauthenticity. If you are 
an inauthentic account, if you are pretending to be someone 
you're not, you come down. If you have touched the account of 
someone who is authentic, then we would leave the authentic 
account up, but in cases like I was answering with Senator 
Collins, if you are an authentic person who RSVP'd to an event 
that's not authentic, we would let you know.
    Senator Blunt. Okay, thank you for that. Okay, Mr. Dorsey, 
back to that other question. From a business and legal 
liability standpoint, what's the downside of being out there 
where you are now trying to every day implement policies that 
nobody's ever implemented before?
    Mr. Dorsey. I think there are a number of short-term risks, 
but you know, we believe that the only way that we will grow 
and thrive as a company is by increasing the health of this 
digital public square that we're helping to build. We also 
benefit, as Sheryl mentioned, from tighter collaboration and 
tighter partnership. We've really strengthened our partnership 
with our government agencies since 2016.
There are a few areas where we would like to see more 
strength. We would like a more regular cadence of meetings with 
our law enforcement partners. We would love to understand the 
secular trends that they are aware of and are seeing in our 
peer companies or other mediums, or more broadly, that would 
inform us about how to act much faster. And we would appreciate
as much as we can consolidating to a single point of contact, 
so that we are not bouncing between multiple agencies to do our 
work.
    So that is what we've found in attempting to do a lot of 
this new policy and work, in terms of partnership, but 
ultimately it comes back to: we need to build our technologies 
to recognize new patterns of behavior and new patterns of 
attack, and to understand what they actually mean, and then 
ideally get some help from our law enforcement partners to 
understand the intent and to understand the motivations behind 
it.
    Senator Blunt. Thank you, Mr. Dorsey. I'm sure my time is 
up. Thank you, Chairman.
    Chairman Burr. Senator King.
    Senator King. Thank you, Mr. Chairman, and I want to also 
thank our witnesses. And thank you to your companies and your 
policy makers for making really great strides in the last year. 
As many of the people have talked about, we were all on our 
heels a year ago on this subject. And this has emerged as one 
of the most important parts of this committee's investigation.
    I try to focus on what we're after here. And we're after 
the heart of democracy. Ms. Sandberg, you said the heart of 
democracy was free and fair elections. I would argue that the 
heart of free and fair elections is information. And that's 
really what we're talking about: getting information to people 
in a democratic setting. And also on all kinds of other topics, 
birthdays and everything else, but that's what we're talking 
about here.
    There are three ways to defend ourselves it seems to me. 
One is better consumer discrimination about what they're 
seeing. The second is deterrence, which hasn't been mentioned 
here, that our adversaries need to understand that there's a 
price to be paid for trying to manipulate our society and our 
democracy. And the third is technical, and that's mostly what 
we've been talking about.
    I had an experience, ironically, a couple of months before 
the 2016 election, meeting here in this building with a group 
of people from Lithuania, Estonia, and Latvia, who have been 
experiencing Russian interference with their elections and 
their propaganda, their information for years. And I said, 
``How do you defend yourself?'' You can't unplug the internet. 
You can't turn off the TV station. The most interesting thing 
they said was, universally, the best defense is for the people 
to know it's happening.
    And I would like from each of you some thoughts and 
hopefully a commitment to educating your users about the 
potential for abuse of the very medium that they're putting 
their trust in.
    Ms. Sandberg.
    Ms. Sandberg. We really agree with you. And we've done this 
broadly and we're going to continue to do more. So we've worked 
on media literacy programs. We've worked on programs in public 
service announcements around the world that help people 
discern--this is real news, this is not--and help people be 
educated. I think one of the most important things we're doing 
is that once a piece of content has been rated as false by our 
third-party fact-checkers--if you're about to share it, we warn 
you right there: Hey, this has been rated as false. And so, you
are educated as you are about to take that critical step.
    Senator King. And Mr. Dorsey, I hope you're doing the same 
to educate your users as to the potential that they can be 
misled on your platform.
    Mr. Dorsey. Yes. And to be frank, we haven't done a good 
job at this in the past. And I think the reason why is because 
we haven't met our customers where they are, in terms of 
actually when they're using the product and adding more context 
there.
    We do benefit on Twitter that we have this amazing 
constituency of journalists globally using our service every 
single day, and they often, with a high degree of velocity, 
call out nonfactual information. We don't do a great job at 
giving them the best tools and context to do that work. And we 
think there's a lot of improvements we can make to amplify 
their content and their messaging so that people can see what 
is happening with that content.
    Senator King. If that can be amplified and underlined, it 
can become a self-healing process, whereby the response 
immediately responds to false or misleading information.
    Deterrence, I'm not going to spend a lot of time on, except 
to say that many of us believe that one of the great gaps in 
our defenses against election interference and interference in 
our democracy is the fact that our adversaries feel no pain if 
they do so--that we have to develop a doctrine of cyber 
deterrence just as we have doctrines of military deterrence. 
And that's a gap, and that's something that we're working on 
both here and at Armed Services, other places.
    Let me talk about the technical for a minute. How about 
feedback from users? And Ms. Sandberg, you testify that you 
have third-party fact-checkers. Also, would it be useful to 
have more in the way of ratings? And, you know, the eBay 
sellers--you have a rating process and a number of stars, and those
kinds of things. Is there more you could do there to alert 
people as to the validity and the trustworthiness of what 
they're seeing?
    Ms. Sandberg. Senator, the most important determinant of 
what anyone sees on Facebook are decisions they make. So I 
choose my friends, you choose yours. I choose the news 
publications I follow, you choose yours. And that's why your 
news feed is so different from mine. And so, yes, if you don't 
want to follow someone, if you don't want to like a page, we 
encourage you to do that. We also make it very easy to unfollow 
on our site. So if I don't believe what you're saying anymore, 
I don't have to receive your----
    Senator King. But I'm talking about alerting a viewer or a 
reader to something that's come across on their newsfeed that 
has been found manifestly false or misleading: a banner, a 
note, a star.
    Ms. Sandberg. We do that through related articles. We note 
this has been rated as false, and here's a related article 
which would give you other facts that you could consider.
    Senator King. One of the things that we've been talking 
about here, and Senator Rubio has been a leader in discussing 
this, is what we call Deepfake, as I'm sure you're aware, the 
ability to manipulate video to the point where it basically 
conveys a reality that isn't real.
    Is there a technological way that you can determine that a 
video has been manipulated in that way and tag it? So that 
people on Facebook, if they see such a video, it'll be tagged: 
warning, this has been manipulated in a way that may be
misleading. That's a question you may want to take under 
advisement. But it seems to me, again, this is an area--this is 
a new area that's going to get more and more serious, I'm 
afraid. And again, what I'm trying to do is give the consumer 
the maximum amount of information.
    Ms. Sandberg. We agree with you, Deepfakes is a new area 
and we know people are going to continue to find new ones. And 
as always, we're going to do a combination of investing in 
technology and investing in people so that people can see 
authentic information on our service.
    Senator King. As you're thinking about these cures, I hope 
you'll continuously come back to the idea that what we need to 
do is give people more information. I must say, I'm a little 
uncomfortable with where the line is between taking down 
misleading or fake information and taking down what someone 
else may consider legitimate information in the marketplace of 
ideas. Jefferson said we can tolerate error, as long as truth 
is left free to combat it. We have to be sure that we're not 
censoring. But at the same time, we're providing our customers, 
our users--your users with information that they can--the 
context, I think, is the word you use--they can have context 
for what it is that they're seeing.
    I'd hate to see your platforms become political in the 
sense that you're censoring one side or the other of any given 
debate.
    Mr. Dorsey.
    Mr. Dorsey. So yes, we absolutely agree. As we are building 
a digital public square, we do believe expectations follow 
that. And that is a default to freedom of expression and 
opinion. And we need to understand when that default interferes 
with other fundamental human rights such as physical security 
or privacy. And what the adverse impact on those fundamental 
human rights are.
    And I do believe that context does matter in this case. We 
had a case of voter suppression around 2016 that was tweeted 
out. And we are happy to say that organically, the number of 
impressions that were calling it out as fake were eight times 
that of the reach of the original tweet. That's not to say that 
we can rely on that, but asking the question how we make that 
more possible, and how we do it at velocity is the right one to 
ask.
    Senator King. That's the self-healing aspect. Thank you 
both very much. And if you have further thoughts as you're 
flying home, about technical ways you can increase the 
information available to your users through tags, ratings, 
stars, whatever, please share them with us and we'll look 
forward to working with you on this problem that is one that's 
important to our country. Thank you very much.
    Chairman Burr. Senator Lankford.
    Senator Lankford. Thank you, Mr. Chairman. I want to follow 
up on a statement that Senator King was mentioning as well 
about Deepfakes. That's something I've spoken to both of you 
about before in the past. It is a challenge for us and I would 
just reiterate some of the things that he was saying publicly. 
When there's the possibility, and now the opportunity, to be 
able to create video that looks strikingly real, but none of it is
actually real--all of it is computer-generated--that is a very 
different day for video-sharing in the days ahead. And I know 
as you all have attacked issues like child pornography and 
other things on your platforms in the past, you all will 
aggressively go after these things. We're just telling you 
we're counting on it because Americans typically can trust what 
they see, and suddenly in video they can no longer trust what 
they see because the opportunity to be able to create video 
that's entirely different than anything in reality has now 
actually come. And so I appreciate your engagement on that.
    And I want to talk to you a little bit, Mr. Dorsey, about 
following up some of the things that Senator Blunt had 
mentioned as well about suspended accounts. When you suspend an 
account, obviously there's information that's still there. Do 
you archive all of that information to be able to maintain for 
a suspended account that this is an account that we determine 
is either from a foreign actor or hostile actor or is 
inappropriate--not an authorized user? Is that something where 
you hold that information, so you can maintain it?
    Mr. Dorsey. I need to follow up with you on the exact 
details of our policies, but I believe we do, especially in 
regards to any law enforcement action.
    Senator Lankford. Terrific. For Facebook, what is the 
practice when you suspend an account and say this is not an 
authorized user or we think this is a foreign or hostile user?
    Ms. Sandberg. If we have any suspicion that it's a foreign 
or hostile user, we would keep the information to be able to do 
further investigation.
    Senator Lankford. So then the question is, is the 
investigation internal for you all? Or obviously if law 
enforcement subpoenas that and comes to you and says I have a 
subpoena to come get that information, that's a whole different 
issue. But is that something you do in your own investigation? 
Because as I'm sure you've seen in the past, some users will 
create a fake account or some sort of hostile account. That 
comes down, they'll create another one, and then there's some 
similarities in where they go and directions and relationships.
    Do you maintain that data to be able to make sure that 
you're well prepared and educated for when they may come back 
to be aware of that again? For Twitter what is that, Mr. 
Dorsey?
    Mr. Dorsey. So we do do our own internal investigations 
and we are benefited every time our peers recognize something, 
and we do share that data so that we can check our own systems 
for similar vectors or similar accounts. And also work with law 
enforcement to understand the intent. If there is a request to 
allow an account to lay dormant by law enforcement, we will 
allow that to happen and work with them to make sure that we 
are tracking it accordingly.
    Senator Lankford. Mr. Dorsey, the main thing I'm trying to 
identify though is, let's say it happened in 2017. You identify 
an account that you suspended and said this is your problem 
area or an unauthorized user, whatever it may be.
    You take that account off, do you maintain that 
information? And so a year later if somebody comes back on with 
a similar profile you can still track it and say, this is the 
same as what we've seen before and it's going to take 
additional steps for you to get back on board or ways to be 
able to track their initial connections?
    Mr. Dorsey. I'm sorry, yes. We do maintain that information 
and we have a ban evasion policy. So if someone is trying to 
evade a ban or suspension, no matter what the timeframe, we can 
take action on those accounts as well.
    Senator Lankford. Okay.
    Ms. Sandberg.
    Ms. Sandberg. If we have any suspicion that this would be 
engaged in foreign or domestic inauthentic activity or we have 
law enforcement interaction on it, we would keep that 
information.
    Senator Lankford. Okay. Mr. Dorsey, you and I have spoken 
on this as well about data and the business model for both of 
you is obviously--it's a free platform for everyone to use--but 
obviously data and advertising and all those things are very 
helpful just in keeping your business open and keeping your 
employees paid. That's a given, and everyone understands that 
when they join that platform and that conversation. But for 
data in particular, how do you make sure that anyone who 
purchases into data or gets access to that uses it for its 
stated purpose, rather than using it to either sell to a third 
party or to open up as a shell company, and say they're using 
it for one purpose but they're actually using it for a foreign 
purpose or direction to be able to track real-time activity of 
Americans? How do you assure that companies that are purchasing 
into that opportunity to have that data are actually fulfilling 
and using it as they stated they would?
    Mr. Dorsey. Well, there's a few things here. First and 
foremost, we're a little bit different than our peers in that 
all of our data is public by default. So when we sell data, 
what we're selling is speed and comprehensiveness. So you're 
actually purchasing either insights or a real-time streaming 
product. In order to purchase that you have to go through a 
very strict know-your-customer policy that we enact and then we 
audit every single year. If we have any indication that there 
is suspicious activity happening, that is an opportunity for us 
to reach out to law enforcement with the sole purpose of trying 
to understand the intent. That is the thing that we are not 
always going to be able to infer ourselves just from looking at 
the relationship.
    You mentioned setting up companies that potentially are in 
front of governments. That is not information that we would 
necessarily have and that is where we are dependent upon the 
intelligence to inform us so that we can take stronger action.
    Senator Lankford. So, how do you determine or what 
relation--is it an initial relationship but there's not a 
follow up after that rapid access as you dictate on that? After 
that is determined, is there any way to check in on those 
companies to be able to make sure they're actually fulfilling 
their terms of service?
    Mr. Dorsey. Absolutely. And we do it every year on a 
regular basis. But if we see anything suspicious at any point 
in time, we'll reach out directly.
    Senator Lankford. Ms. Sandberg, tell me a little bit about 
WhatsApp. WhatsApp has been a feature of Facebook for a while. 
How is the encryption going on that? What's the relationship 
now with WhatsApp and what do you anticipate in the days ahead?
    Ms. Sandberg. We are strong believers in encryption. 
Encryption helps keep people safe. It secures our banking 
system, it secures the security of private messages, and 
consumers rely on it and depend on it. And so we're very 
committed to encryption in WhatsApp and continuing to protect 
the data and information of our users.
    Senator Lankford. So that encryption is end-to-end at this 
point still on the WhatsApp platform?
    Ms. Sandberg. We'll get back to you on any technical 
details, but to my knowledge, it is.
    Senator Lankford. Thank you. I yield back.
    Chairman Burr. Senator Manchin.
    Senator Manchin. Thank you, Mr. Chairman. And Ms. Sandberg 
and Mr. Dorsey, I want to thank both of you for being here. And 
I grew up in an age without computers and social media so I'm 
trying to get acclimated the best I can. I have seen how 
they've been used by my children and grandchildren and how much 
it helps connect people. I see an awful lot of good.
    I also have concerns with how the internet and social 
media have been used against us. And I think you're hearing 
concerns from all of my fellow colleagues up here. It's an 
attempt to divide Americans, change our way of life, change our 
democracy as we know it, and it can be very devastating.
    In my little State of West Virginia--my beautiful little 
State of West Virginia, with all the wonderful people--has been 
hit extremely hard by illicit drugs and pharmaceutical opiates. 
According to a recent Wired article, Eileen Carey spent three 
years regularly reporting accounts illegally selling opiates on 
Instagram. And the practice was widespread on Facebook and 
Twitter, as well.
    In many ways, the tools used by opiate dealers are similar 
to those adopted by other bad actors, including Russia: 
targeting the vulnerable with ads that easily circumvent the 
platforms' filters and oversight, and using hashtags to gain 
the attention of those interested. Last November, Facebook CEO Mark 
Zuckerberg said learning of the depths of the crisis was the 
biggest surprise and really saddening to see. But it still took 
months to take measures to correct the problem while other 
people were still dying.
    According to Section 230 of the U.S. Code, formally known 
as the Communications Decency Act of 1996, online service 
providers shall not be held civilly liable for content that a 
third party posts on their platform, and they shall not be 
treated as a publisher or speaker of the content.
    If we look at the example of drug overdose deaths, many 
prosecutors are increasingly treating the deaths as a homicide 
and looking to hold someone criminally accountable. There are 
now laws devised to hold drug dealers responsible for the death 
of victims using drugs they provided and, in some cases, they 
are charging friends, partners, siblings of the deceased.
    So my question to both of you would be: I've heard of a 
report that details the way drug dealers continue to use your 
platforms for illegal drug sales. To what extent do you bear 
responsibility for the death of a drug user if they overdosed 
on drugs received through your platform?
    Either one. I know it's a tough one.
    Ms. Sandberg. I'm happy to go.
    Senator Manchin. Yes.
    Ms. Sandberg. This is really important to us. The opioid 
crisis has been devastating, and takes the lives of people in 
our country and around the world. It's firmly against our 
policies to buy or sell any pharmaceuticals on Facebook, and 
that includes the opioid drugs. We rely on a combination of 
machines and people reporting to take things down, and I think 
we've seen marked improvements.
    We also took an additional step recently which is very 
important which is, we're requiring treatment centers who want 
to buy ads to be certified by a respected third party because 
another one of the problems has been that some treatment 
centers are actually doing harm, and so we're requiring 
certification before they can purchase ads and they can try to 
reach people for treatment.
    Mr. Dorsey. This is also prohibited on our service and we 
do have a responsibility to fix it anytime we see it. And we 
are looking deeply at how this information spreads, and how the 
activity spreads so that we can shut it down before it spreads 
too far.
    Senator Manchin. I know I asked a tough question. It was, 
do you all feel any responsibility because there has been a lot 
of people that have been affected, and a lot of people have 
died receiving information on how to obtain drugs through your 
all's platform?
    So I would go another step further, just like we passed 
FOSTA and SESTA--FOSTA was the Fight Online Sex Trafficking 
Act, and stop enabling--and SESTA was the Stop Enabling Sex 
Traffickers Act. We passed bills that held you liable and 
responsible. Don't you think we should do the same with opiate 
drugs and the way they're being used in your platform? Would 
you all support us doing that?
    Mr. Dorsey. We're certainly open to dialogue around CDA and 
the evolutions of it. We benefit from a lot of the protections 
it gives in order for us in the first place to take actions on 
the content within our service. The only reason we're able to 
even speculate that we can increase more health in a public 
square is because of CDA 230. So we need to finely balance what 
those changes are and what that means.
    Senator Manchin. Well, did it change your all's approach of 
how you use your platforms with the changing of Code 230?
    Mr. Dorsey. We have to do that independent of changes to 
230.
    Ms. Sandberg. These things are against our policies, and we 
want them off and we want to take all measures to get them off. 
The Safe Harbor of 230 has been very important in enabling 
companies like ours to do proactive enforcement, look for 
things proactively, without increasing our liability. And so, 
we'd want to work very closely on how this would be enacted.
    Senator Manchin. Final question to both of you. Why are you 
not doing business in China?
    Mr. Dorsey. We are blocked in China.
    Ms. Sandberg. We are as well.
    Senator Manchin. You're blocked? For what reasons?
    Ms. Sandberg. The Chinese government has chosen not to 
allow our service in China. I think it happened on the same 
day.
    Senator Manchin. Did you all not accept, basically, the 
terms of how you do business in China? Or you're just blocked 
from coming in to it? Or did you not agree? Did they give you a 
chance, or--? I'm saying other social platforms seem to be 
adapting and going in there.
    I know a lot of our drugs--a lot of the fentanyl and all 
that--is coming from China, and we're trying to shut that down. 
But it was interesting to me that you all both have been 
blocked. And I would assume you didn't agree to their terms?
    Mr. Dorsey. I don't know if there's any one particular 
decision point around understanding what the terms might be in 
our particular case. But when we were blocked, we decided that 
it wasn't a fight worth fighting right now, and we have other 
priorities.
    Senator Manchin. Are you still looking to do business 
there?
    Ms. Sandberg. There was no particular time. You know, we've 
been open about the fact that our mission is to connect the 
world. And that means, it's hard to do that without connecting 
the world's largest population. But in order to go into China, 
we would have to be able to do so in keeping with our values. 
And that's not possible right now.
    Senator Manchin. Thank you.
    Thank you, Mr. Chairman.
    Chairman Burr. Senator Cotton.
    Senator Cotton. I want to commend both of you for your 
appearance here today, for what was no doubt going to be some 
uncomfortable questions. And I want to commend your companies 
for making you available. I wish I could say the same about 
Google.
    I think both of you, and your companies, should wear it as 
a badge of honor that the Chinese Communist Party has blocked 
you from operating in their country. Perhaps Google didn't send 
a senior executive today because they've recently taken actions 
such as terminating cooperation that they had with the American 
military on programs like artificial intelligence that are 
designed not just to protect our troops and help them fight and 
win our country's wars, but to protect civilians as well. This 
is at the very same time that they continue to cooperate with 
the Chinese Communist Party on matters like artificial 
intelligence, or partner with Huawei and other Chinese telecom 
companies that are effectively arms of the Chinese Communist 
Party. And credible reports suggest that they are working to 
develop a new search engine that would satisfy the Chinese 
Communist Party's censorship standards after having disclaimed 
any intent to do so eight years ago.
    Perhaps they didn't send a witness to answer these 
questions because there is no answer to those questions, and 
the silence we would hear right now from the Google chair would 
be reminiscent of a silence that that witness would provide.
    So I just want to ask both of you, would your companies 
ever consider taking these kinds of actions that privilege a 
hostile foreign power over the United States and especially our 
men and women in uniform?
    Ms. Sandberg.
    Ms. Sandberg. I'm not familiar with the specifics of this 
at all, but based on how you're asking the question, I don't 
believe so.
    Mr. Dorsey. Also no.
    Senator Cotton. So thank you for that answer. Mr. Dorsey, 
let's turn to Dataminr, which is one of the services that 
provides basically all of Twitter's data. The last time we had 
an executive from Twitter before this committee in an open 
setting, I asked about reports that Dataminr had recently 
ceased its cooperation with the Central Intelligence Agency, at 
the same time it continued to cooperate with Russia Today and 
other proxies of Russian intelligence services.
    I have since seen reports that Dataminr no longer 
cooperates with Russia Today or any other proxy of Russian 
intelligence services. Is that correct?
    Mr. Dorsey. That is correct.
    Senator Cotton. Did you make that decision personally?
    Mr. Dorsey. No, we have a long-standing term against 
utilizing public Twitter data for ongoing 24/7 surveillance.
    Senator Cotton. And that's why you've decided to cease 
cooperation with the Russian government or proxies like Russia 
Today?
    Mr. Dorsey. No. That's a different matter. This is in 
regards----
    Senator Cotton. Could you explain why you ceased that 
cooperation then, or that relationship with, Russia Today and 
other Russian intelligence proxies?
    Mr. Dorsey. When we learned of the link of Russia Today and 
Sputnik, we ceased to allow them to be an advertiser on the 
platform. We calculated the amount of advertising they did on 
the platform is $1.9 million and we donated that to civil 
liberties nonprofits.
    Senator Cotton. Would you now reconsider the decision to 
cease your cooperation with the Central Intelligence Agency or 
other American intelligence agencies?
    Mr. Dorsey. We are always open to any legal process that an 
agency would present us, so we don't believe it necessary. This 
is a global policy around surveillance in general and real-time 
surveillance. I will state that all this information, because 
Twitter is public by default, is available to everyone by just 
going to our service.
    Senator Cotton. You see a difference between cooperating 
with the United States government and the Russian government or 
the Chinese government?
    Mr. Dorsey. Do I see a difference? I'm not sure what you 
mean.
    Senator Cotton. Is Twitter an American company?
    Mr. Dorsey. We are an American company.
    Senator Cotton. Do you prefer to see America remain the 
world's dominant global superpower?
    Mr. Dorsey. I prefer that we continue to help everywhere we 
serve and we are pushing towards that, but we need to be 
consistent about our terms of service and the reason why. And 
the reason why is we also have a right and a responsibility to 
protect the privacy of the people on Twitter from constant 24/7 
surveillance. And we have other methods to enable any issues 
that an intelligence community might see, to subpoena and to 
give us a proper legal order, and we will work with them.
    Senator Cotton. I have to say I disagree with any 
imperative to be consistent between the governments of China 
and Russia on the one hand and the government of the United 
States on the other hand. Or would you be consistent or 
evenhanded between the government of China and the government of 
Taiwan?
    Mr. Dorsey. What I meant was a consistency of our terms of 
service. And of course there will always be exceptions, but we 
want to have those go through due legal process.
    Senator Cotton. Let me turn to the actions you've taken 
about the 2016 election--both of your platforms--and 
specifically one action you haven't taken. You have removed 
several accounts as a result of your own investigations and I 
think some of this committee's work--and I commend your 
companies for that.
    One set of accounts that remain on your platforms are 
WikiLeaks and Julian Assange. Secretary of State Mike Pompeo, 
when he was the director of the CIA, characterized WikiLeaks as 
a non-state hostile intelligence service. This committee has 
agreed with that assessment now for a couple years in a row, 
yet, both WikiLeaks, which propagated some of the leaked emails 
in the 2016 election from the Democrats, remain active on both 
Facebook and Twitter as does Julian Assange.
    Ms. Sandberg, could you explain why Facebook continues to 
allow their accounts to be active?
    Ms. Sandberg. I'm not going to defend WikiLeaks and I'm not 
going to defend the actions of any page or actor on our 
platform. WikiLeaks has been public information. It's available 
broadly on other media and as such it doesn't violate our terms 
of service and it remains up on our site.
    Senator Cotton. And Mr. Dorsey.
    Mr. Dorsey. We also have not found any violation of our 
terms of service, but you know we are open as always to any law 
enforcement insight that would indicate a violation of our 
terms.
    Senator Cotton. Thank you. My time has nearly expired. 
Again, I want to commend your companies for making you 
available and both of you for appearing. I would urge both of 
your companies, or any company like yours, to consider whether 
or not they want to be partners in the fight against our 
adversaries in places like Beijing and Moscow and Pyongyang and 
Tehran, as opposed to evenhanded or neutral arbiters. Thank 
you.
    Chairman Burr. Senator Reed.
    Senator Reed. Well thank you, Mr. Chairman. Let me begin by 
thanking you and the Vice Chairman for recognizing my ex 
officio colleague Senator John McCain. We are both service 
academy graduates, so we don't know any Latin, so we had 
various translations of ex officio. The one we liked best was 
"real cool." So you were real cool, Mr. Chairman. Thank you.
    Thank you both for being here. You have been organizing, 
based on your comments today, very diligently for the 2018 
elections and trying to anticipate malign activities that we 
saw in 2016.
    Have you seen the same type of coherence starting with Ms. 
Sandberg, from the Federal Government in terms of your ability 
to contact them to work with them?
    Ms. Sandberg. We've long had very good relationships with 
law enforcement. We've worked closely with DHS and FBI for a 
long time. And the FBI's new task force on this has been 
particularly helpful.
    Senator Reed. Mr. Dorsey, your comment?
    Mr. Dorsey. We've also had really strong relationships with 
the government. We're always looking for opportunities to 
improve our partnership and I think if I were to list them out 
it would be a more regular cadence of meetings. It would be 
more proactive information about secular trends that they're 
seeing, not just on our platform, but other platforms and also 
in other channels and communication methods. And, finally, a 
consolidation of points to contact--more of a single point of 
contact. And we do have that consolidation for the 2018 
elections, which we're really happy with.
    Senator Reed. Very good. One of the rules is to follow the 
money. And you've talked about how you, in terms of political 
advertising, have identified the citizenship of their 
advertisers but are you able to trace the monies? It's fairly 
easy to set up a corporation in the United States, and the 
money could all be coming from overseas even from some 
pernicious sources. Do you go that far, Ms. Sandberg? And then 
Mr. Dorsey.
    Ms. Sandberg. Sir, you're right that there are a lot of 
ways to try to game the system, and so we are going to keep 
investing and trying to get ahead of any tactics our opponents 
would use, including that one.
    Senator Reed. Mr. Dorsey.
    Mr. Dorsey. Sir, we do our best to understand the intent 
and where people are located and what's behind them, but this 
is where a strong partnership with government comes in. Because 
we will not always be able to infer agendas or intent or even 
location in some cases.
    Senator Reed. In the dialogue that you've talked about with 
law enforcement, is this one of those topics where you're 
asking them for information, or they're asking you and you're 
trying to follow the money, or have you seen any of that, or 
has it been sort of one of those issues that's just too hard to 
think about?
    Mr. Dorsey. It's both. We have seen proactive outreach from 
the other side.
    Senator Reed. But that would be, I think, a critical issue 
in terms of governing the behavior campaigns, and I would hope 
that you would continue to work, and we would urge our 
colleagues in government to work with you, in that regard.
    One of the issues, and I think Senator Warner and several 
others have brought it up, is the prevalence of bots. I'm not a 
technologist, but it seems to me that you could identify a 
bot's presence, that you could notify your consumers that 35 
percent or 80 percent of these messages have been generated 
electronically. Is that feasible? And is that something you're 
doing?
    Mr. Dorsey. It's a mixed answer right now. We are able to 
identify automations and activity coming through our API, and 
to Senator Warner's comments, we would be able to label that 
with context. But we are not necessarily as easily able to 
identify people who might be scripting our website or our 
app--making it look like an actual human is performing these 
actions. That becomes much more challenging and unclear.
    So in consideration of labeling and context, we need to 
make sure that when people see that bot label, they don't 
assume that everything without it is human. We need to make 
sure that there's precision and accuracy as we label those 
things.
    Senator Reed. Wouldn't there be a value in beginning the 
labeling process, even with the heavy disclaimer that this 
identifies only a fraction of potential fictitious actors?
    Mr. Dorsey. Yes, it's definitely an idea that we've been 
considering, especially this past year. It's really up to the 
implementation at this point.
    Senator Reed. Ms. Sandberg, your comments?
    Ms. Sandberg. This is one of the ideas I had an opportunity 
to discuss with Vice Chairman Warner yesterday in his office 
and is in his white paper, and we're committed to working with 
you on it.
    Senator Reed. Thank you. Let me just ask you a question. 
Going forward, I think we're going to come to a major debate 
within this country or in the whole world of who owns my data, 
which rapidly is becoming me. Is it a company like Facebook? Is 
it a company like Twitter? Which raises the question of do you 
believe that your users should have the right to control what 
you do with their data, either selectively, on an individual 
occurrence, or generically, or even simply purge it at some 
point? Do you believe that should be----
    Ms. Sandberg. Yes, very strongly. It's your information. 
You share it with us. If you want to delete it, we delete it. 
And if you want to take it with you, we enable you to download 
it and take it with you.
    Senator Reed. What about those people--I think many 
people--for whom, in the hustle and bustle of every day, that's 
a very cumbersome process? Shouldn't they be allowed to sort of have a 
check that says every two months delete it? Or delete it as 
soon as I put it in?
    Ms. Sandberg. Yes, and we're working on some of those 
tools, and we've improved. We've made it easier to understand 
what information we have, how we're getting it, and how we use 
it. And we're going to continue to iterate here.
    Senator Reed. Mr. Dorsey, the same question.
    Mr. Dorsey. We do believe people should have complete 
control over their--of their data. Again, Senator Warner 
brought up an interesting point earlier, which is--I don't 
believe that there's a real understanding of the exchange being 
made in terms of people performing activities on these services 
and services like Twitter, and how they can actually see that 
as an exchange--an exchange of value. And those are things I 
would love to think a lot more about, how do we make that more 
clear? And I think that goes back to the incentives 
conversation.
    Senator Reed. Thank you. Thank you, Mr. Chairman.
    Chairman Burr. Thank you, Senator Reed, and I thank all the 
members for their questions and our panelists for their 
answers. I'm going to turn to the Vice Chairman for any last 
comments he might have.
    Vice Chairman Warner. One, I want to thank you both. I want 
to thank you for the spirit you brought to this, some of the 
suggestions--your responses to some of the suggestions. I wish 
our members were still here, because I think they all performed 
extraordinarily well.
    I take away from this three or four quick points. One, very 
much appreciate, Mr. Dorsey, your acknowledgement that we ought 
to move towards--and I guess Ms. Sandberg echoed this as well--
some ability to indicate to users whether they're being 
contacted by a machine or a human being, recognizing there's 
technical difficulties, and also acknowledging that just 
because it's a bot that does not inherently mean it's good or 
bad. It just must be a data point that an individual ought to 
have as they make determinations going forward.
    I also really appreciated, Ms. Sandberg, your notion that 
not only should users have access to all of the information 
that you or others are collecting, but as we work through to 
this--how you monetize that and let users know the value of 
their data, I think that increased price transparency--and I 
was very grateful at your willingness to at least consider 
that, because I think that would go a long way towards making 
this exchange better understood by individuals.
    Also, and I didn't get a chance to really get into this at 
length, but you and I have had this conversation in the past 
around data portability. I don't want to make the complete 
analogies--an old telecom guy--but when number portability came 
around, we got a lot more competition in the wireless industry 
and elsewhere. Data portability--I know you make it available 
right now--but in an easy, user-friendly format that can move 
from platform to platform, I think would be extraordinarily 
important in terms of making sure that we continue to have 
competition in this space.
    And then finally, I also appreciated your comment--I think 
we're going to have more and more of these areas where 
manipulation may take place that actually incites violence. We 
both cited the horrible example of what's happened with the 
Rohingya in Myanmar, but I appreciate your comment that 
Facebook ought to have both a moral and a legal obligation to 
take down sites that are inciting violence. Getting from that 
idea into how we spell that all out will be a challenge, but I 
appreciate your willingness to work with me on it.
    So Mr. Chairman, thank you for the fourth hearing on this. 
I think it was very, very important, and I hope our committee 
will continue to take the lead on these subjects.
    Chairman Burr. I thank the Vice Chairman. I would ask both 
of you if there are any rules, such as antitrust, FTC 
regulations or guidelines, that are obstacles to collaboration 
between companies, I hope you'll submit for the record where 
those obstacles are so that we can look at the appropriate 
steps that we could take as a committee to open those avenues 
up.
    I want to thank both of you for appearing today and for 
your continued efforts to help find a solution to this 
challenging problem. This hearing represents the capstone of 
the fourth piece of the committee's investigation into Russian 
interference in the 2016 elections. So far we've completed our 
inquiry into the attempted hack of State elections 
infrastructure, the intelligence community assessment on 
Russian activities in recent U.S. elections, and the Obama 
Administration's policy response to those operations.
    With your testimony today at this, the fourth hearing we've 
held on social media, we heard the top-level perspective on how 
to address foreign influence operations on your platforms. When 
this committee began its investigation into Russian 
interference in the 2016 elections, neither Mark nor I fully 
appreciated how easily foreign actors could use social media to 
manipulate how Americans form their views.
    Like most technology, social media has the capacity to be 
used for good as intended, but also to advance agendas of those 
bent on manipulation and destruction. Given the amount of 
information companies like Google collect on each and every 
American, it is also too easy for bad actors to craft a message 
that appears tailored just for you.
    The Russians undertook a structured influence campaign not 
against the American government but against the American 
people. Moscow saw the issues that talking heads yell about on 
cable news--race, religion, immigration, and sexual 
orientation--and they used those to sow discord and to foment 
chaos. They leveraged our social media to undermine our 
political system as well, but make no mistake, Russia neither 
leans left nor right. It simply seeks turmoil. A weak America 
is good for Russia.
    I think it is also important to highlight that there is a 
very human component to all of this. No single algorithm can 
fix the problem. Social media is part of our daily lives. It 
serves as the family newsletter, a place to share life's 
personal joys and sorrows, a way to communicate one's status 
during a crisis, and everything in between.
    Unfortunately, other states are now using the Russian 
playbook, as evidenced by the recently uncovered Iranian 
influence operations. We're at a critical inflection point. 
Will using social media to sow discord become an acceptable 
tool of statecraft? How many copycats will we see before we 
take this seriously and find solutions? Your companies must be 
at the forefront in combating those issues. You know your 
algorithms, your customers, and your data collection 
capabilities better than any government entity does--or should. 
Still, the burden is not entirely on your shoulders. 
Government, civil society, and the public will partner with 
you.
    I'd like to take just a moment to thank our staff. They 
have worked diligently to uncover the scope of the problem. 
Their research has been thorough. Their efforts are seamlessly 
bipartisan and their drive to defend the public against foreign 
influence should make Americans watching today proud.
    There is no clear and easy path forward. We understand the 
problem and it is a First Amendment issue. We cannot regulate 
around the First Amendment, but we also cannot ignore the 
challenge. I am confident that working together we can find a 
solution and a path forward that will only make us stronger, 
more connected, more prepared to face down those who seek to 
weaken our democracy.
    For your participation in being part of the solution, we 
thank you immensely today.
    This hearing is now adjourned.
    [Whereupon, at 12:11 p.m., the hearing was adjourned.]

                         Supplemental Material
[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
  

                                  [all]