[House Hearing, 115 Congress]
[From the U.S. Government Publishing Office]


                TWITTER: TRANSPARENCY AND ACCOUNTABILITY

=======================================================================

                                HEARING

                               BEFORE THE

                    COMMITTEE ON ENERGY AND COMMERCE
                        HOUSE OF REPRESENTATIVES

                     ONE HUNDRED FIFTEENTH CONGRESS

                             SECOND SESSION

                               __________

                           SEPTEMBER 5, 2018

                               __________

                           Serial No. 115-162
                           
[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]
                           


      Printed for the use of the Committee on Energy and Commerce

                        energycommerce.house.gov
                        
                               __________
                               

                    U.S. GOVERNMENT PUBLISHING OFFICE                    
36-155                      WASHINGTON : 2019                     
          
--------------------------------------------------------------------------------------
For sale by the Superintendent of Documents, U.S. Government Publishing Office, 
http://bookstore.gpo.gov. For more information, contact the GPO Customer Contact Center,
U.S. Government Publishing Office. Phone 202-512-1800, or 866-512-1800 (toll-free).
E-mail, gpo@custhelp.com.
                        
                        
                        
                    COMMITTEE ON ENERGY AND COMMERCE

                          GREG WALDEN, Oregon
                                 Chairman
JOE BARTON, Texas                    FRANK PALLONE, Jr., New Jersey
  Vice Chairman                        Ranking Member
FRED UPTON, Michigan                 BOBBY L. RUSH, Illinois
JOHN SHIMKUS, Illinois               ANNA G. ESHOO, California
TIM MURPHY, Pennsylvania             ELIOT L. ENGEL, New York
MICHAEL C. BURGESS, Texas            GENE GREEN, Texas
MARSHA BLACKBURN, Tennessee          DIANA DeGETTE, Colorado
STEVE SCALISE, Louisiana             MICHAEL F. DOYLE, Pennsylvania
ROBERT E. LATTA, Ohio                JANICE D. SCHAKOWSKY, Illinois
CATHY McMORRIS RODGERS, Washington   G.K. BUTTERFIELD, North Carolina
GREGG HARPER, Mississippi            DORIS O. MATSUI, California
LEONARD LANCE, New Jersey            KATHY CASTOR, Florida
BRETT GUTHRIE, Kentucky              JOHN P. SARBANES, Maryland
PETE OLSON, Texas                    JERRY McNERNEY, California
DAVID B. McKINLEY, West Virginia     PETER WELCH, Vermont
ADAM KINZINGER, Illinois             BEN RAY LUJAN, New Mexico
H. MORGAN GRIFFITH, Virginia         PAUL TONKO, New York
GUS M. BILIRAKIS, Florida            YVETTE D. CLARKE, New York
BILL JOHNSON, Ohio                   DAVID LOEBSACK, Iowa
BILLY LONG, Missouri                 KURT SCHRADER, Oregon
LARRY BUCSHON, Indiana               JOSEPH P. KENNEDY, III, 
BILL FLORES, Texas                       Massachusetts
SUSAN W. BROOKS, Indiana             TONY CARDENAS, California
MARKWAYNE MULLIN, Oklahoma           RAUL RUIZ, California
RICHARD HUDSON, North Carolina       SCOTT H. PETERS, California
CHRIS COLLINS, New York              DEBBIE DINGELL, Michigan
KEVIN CRAMER, North Dakota
TIM WALBERG, Michigan
MIMI WALTERS, California
RYAN A. COSTELLO, Pennsylvania
EARL L. ``BUDDY'' CARTER, Georgia
JEFF DUNCAN, South Carolina
  
                             C O N T E N T S

                              ----------                              
                                                                   Page
Hon. Greg Walden, a Representative in Congress from the State of 
  Oregon, opening statement......................................     2
    Prepared statement...........................................     3
Hon. Frank Pallone, Jr., a Representative in Congress from the 
  State of New Jersey, opening statement.........................     4
    Prepared statement...........................................     6
Hon. Anna G. Eshoo, a Representative in Congress from the State 
  of California, prepared statement..............................    80

                               Witnesses

Jack Dorsey, CEO, Twitter, Inc...................................     7
    Prepared statement...........................................     9
    Answers to submitted questions \1\...........................   135

                           Submitted Material

Study entitled, ``#toxictwitter: Violence and abuse against women 
  online,'' Amnesty International, 2018, submitted by Ms. DeGette 
  \2\
Statement made by House Majority Leader Kevin McCarthy on 
  Twitter, submitted by Mr. Doyle................................    94
Statement made by Devin Nunes on Fox News, submitted by Mr. Doyle    95
Statement made by President Trump on Twitter, submitted by Mr. 
  Doyle..........................................................    96
Article entitled, ``Twitter admits there were many more Russian 
  trolls on its site during the 2016 presidential elections,'' 
  Recode, January 19, 2018, submitted by Mr. Lujan...............    97
Article entitled, ``Twitter has suspended more than 1.2 million 
  terrorism-related accounts since late 2015,'' CNBC, April 5, 
  2018, submitted by Mr. Lujan...................................   101
Article entitled, ``Facebook and Twitter remove hundreds of 
  accounts linked to Iranian and Russian political meddling,'' 
  TechCrunch, August 21, 2018, submitted by Mr. Lujan............   105
Statement of technology associations, submitted by Mr. Walden....   114
Article entitled, ``Users looking for child pornography are 
  gathering on Periscope, Twitter's forgotten video service,'' 
  Gizmodo, December 15, 2017, submitted by Mr. Walden............   116
Article entitled, ``Twitter's comeback shows the path for 
  traditional media companies,'' Inc., June 25, 2018, submitted 
  by Mr. Walden..................................................   119
Paper by Kate Klonick entitled, ``The New Governors: The people, 
  rules, and processes governing online speech,'' Harvard Law 
  Review, submitted by Mr. Walden \3\
Article entitled, ``Twitter CEO Dorsey gets backlash for eating 
  at Chick-fil-A,'' NBC, June 10, 2018, submitted by Mr. Walden..   124
Article entitled, ``Periscope has a minor problem,'' Slate, 
  December 12, 2017, submitted by Mr. Walden.....................   126
Article entitled, ``One of Twitter's new anti-abuse measures is 
  the oldest trick in the forum moderation book,'' The Verge, 
  February 16, 2017, submitted by Mr. Walden.....................   132

----------
\1\ The responses to Mr. Dorsey's questions for the record can be 
  found at: https://docs.house.gov/meetings/IF/IF00/20180905/
  108642/HHRG-115-IF00-Wstate-DorseyJ-20180905-SD005.pdf.
\2\ The information can be found at: https://docs.house.gov/
  meetings/IF/IF00/20180905/108642/HHRG-115-IF00-20180905-
  SD015.pdf.
\3\ The information can be found at: https://docs.house.gov/
  meetings/IF/IF00/20180905/108642/HHRG-115-IF00-20180905-
  SD011.pdf.

 
                TWITTER: TRANSPARENCY AND ACCOUNTABILITY

                              ----------                              


                      WEDNESDAY, SEPTEMBER 5, 2018

                  House of Representatives,
                  Committee on Energy and Commerce,
                                            Washington, DC.
    The committee met, pursuant to call, at 1:30 p.m., in room 
2123 Rayburn House Office Building, Hon. Greg Walden (chairman 
of the committee) presiding.
    Members present: Representatives Walden, Barton, Upton, 
Shimkus, Burgess, Scalise, Latta, McMorris Rodgers, Harper, 
Lance, Guthrie, Olson, McKinley, Kinzinger, Griffith, 
Bilirakis, Johnson, Long, Bucshon, Flores, Brooks, Mullin, 
Hudson, Collins, Cramer, Walberg, Walters, Costello, Carter, 
Duncan, Pallone, Rush, Engel, Green, DeGette, Doyle, 
Schakowsky, Butterfield, Matsui, Castor, Sarbanes, McNerney, 
Welch, Lujan, Tonko, Clarke, Loebsack, Schrader, Kennedy, 
Cardenas, Ruiz, Peters, and Dingell.
    Staff present: Jon Adame, Policy Coordinator, 
Communications & Technology; Jennifer Barblan, Chief Counsel, 
Oversight & Investigations; Mike Bloomquist, Deputy Staff 
Director; Karen Christian, General Counsel; Robin Colwell, 
Chief Counsel, Communications & Technology; Jordan Davis, 
Director of Policy and External Affairs; Melissa Froelich, 
Chief Counsel, Digital Commerce and Consumer Protection; Adam 
Fromm, Director of Outreach and Coalitions; Ali Fulling, 
Legislative Clerk, Oversight & Investigations, Digital Commerce 
and Consumer Protection; Elena Hernandez, Press Secretary; Zach 
Hunter, Director of Communications; Paul Jackson, Professional 
Staff, Digital Commerce and Consumer Protection; Peter Kielty, 
Deputy General Counsel; Bijan Koohmaraie, Counsel, Digital 
Commerce and Consumer Protection; Tim Kurth, Senior 
Professional Staff, Communications & Technology; Milly Lothian, 
Press Assistant and Digital Coordinator; Mark Ratner, Policy 
Coordinator; Austin Stonebraker, Press Assistant; Madeline Vey, 
Policy Coordinator, Digital Commerce and Consumer Protection; 
Jessica Wilkerson, Professional Staff, Oversight & 
Investigations; Greg Zerzan, Counsel, Digital Commerce and 
Consumer Protection; Michelle Ash, Minority Chief Counsel, 
Digital Commerce and Consumer Protection; Jeff Carroll, 
Minority Staff Director; Jennifer Epperson, Minority FCC 
Detailee; Evan Gilbert, Minority Press Assistant; Lisa Goldman, 
Minority Counsel; Tiffany Guarascio, Minority Deputy Staff 
Director and Chief Health Advisor; Carolyn Hann, Minority FTC 
Detailee; Alex Hoehn-Saric, Minority Chief Counsel, 
Communications and Technology; Jerry Leverich, Minority 
Counsel; Jourdan Lewis, Minority Staff Assistant; Dan Miller, 
Minority Policy Analyst; Caroline Paris-Behr, Minority Policy 
Analyst; Kaitlyn Peel, Minority Digital Director; Andrew 
Souvall, Minority Director of Communications, Outreach and 
Member Services; and C.J. Young, Minority Press Secretary.

  OPENING STATEMENT OF HON. GREG WALDEN, A REPRESENTATIVE IN 
               CONGRESS FROM THE STATE OF OREGON

    Mr. Walden. The Committee on Energy and Commerce will now 
come to order. The chair now recognizes himself for 5 minutes 
for purposes of an opening statement.
    Good afternoon, and thank you, Mr. Dorsey, for being before 
the Energy and Commerce Committee today.
    The company you and your co-creators founded 12 years ago 
has become one of the most recognizable businesses in the 
world. Twitter has reached that rarified place where using the 
service has become a verb, instantly recognized around the 
globe. Just as people can Google a question or Gram a photo, 
everyone knows what it means to tweet one's thoughts or ideas.
    The list of superlatives to describe Twitter certainly 
exceeds 280 characters. It is one of the most downloaded apps 
in the world, one of the most visited websites.
    It is one of the world's premier sources for breaking news. 
Its power and reach are so great that society-changing events 
like the Arab Spring have been dubbed the Twitter Revolution.
    The service allows anyone with access to the internet the 
power to broadcast his or her views to the world. It's truly 
revolutionary in the way that the Gutenberg press was 
revolutionary.
    It helps set information free. It allows ideas to propagate 
and challenge established ways of thinking. Twitter's success 
and growth rate has been extraordinary but it is not without 
controversy.
    The service has been banned at various times and in various 
countries, such as China and Iran. Here in the United States 
the company itself has come under criticism for impeding the 
ability of some users to post information, for removing 
tweets, and for other content moderation practices.
    For instance, in July it was reported that some politically 
prominent users were no longer appearing as auto-populated 
options in certain search results. This led to concerns that 
the service might be ``shadow banning'' some users in an 
attempt to limit their visibility on the site.
    Now, this was hardly the first instance of a social media 
service taking actions which appeared to minimize or de-
emphasize certain viewpoints, and in the most recent case, 
Twitter has stated that the actions were not intentional but, 
rather, the result of algorithms designed to maintain a more 
civil tone on the site.
    Twitter has also addressed the issue of ``bots,'' or 
automated accounts not controlled by one person. Even the 
removal of these bots from the service raises questions about 
how the bots were identified, because the number of followers 
someone has on Twitter has real economic value in our economy.
    We recognize the complexity of trying to manage your 
service, which posts over half a billion tweets a day. I 
believe you were once temporarily suspended from Twitter due to 
an internal error yourself. We do not want to lose sight of a 
few fundamental facts. Humans are building the algorithms, 
humans are making decisions about how to implement Twitter's 
terms of service, and humans are recommending changes to 
Twitter's policies.
    Humans can make mistakes. How Twitter manages those 
circumstances is critically important in an environment where 
algorithms are set up to decide what we see in our home feed, 
ads, search suggestions, and more.
    It is critical that users are confident that you're living 
up to your own promises. According to Twitter rules, the 
company believes that everyone should have the power to create 
and share ideas and information instantly without barriers.
    Well, that's a noble mission and one that as a private 
company you certainly do not have to take on. The fact that you 
have done so has enriched the world, changed societies, and 
given an outlet to voices that might otherwise never be heard.
    We, and the American people, want to be reassured that 
you're continuing to live up to that mission. We hope you can 
help us better understand how Twitter decides when to suspend a 
user or ban them from the service and what you do to ensure 
that such decisions are made without undue bias.
    We hope you can help us better understand what role 
automated algorithms have in this process and how those 
algorithms are designed to ensure consistent outcomes and a 
fair process.
    The company that you and your co-founders created plays an 
instrumental role in sharing news and information across the 
globe. We appreciate your willingness to appear before us 
today and to answer our questions.
    With that, I yield back the balance of my time and 
recognize Mr. Pallone from New Jersey for an opening statement.
    [The prepared statement of Mr. Walden follows:]

                 Prepared statement of Hon. Greg Walden

    Good afternoon and thank you Mr. Dorsey for appearing 
before the Energy and Commerce Committee today.
    When you and your co-creators founded Twitter in 2006, you 
probably never envisioned the issues we are going to discuss 
today: so-called ``shadow-banning,'' misinformation, abuse, and 
bots, to name a few. Twelve years later, Twitter bears a great 
responsibility to its users, including nearly 70 million 
Americans.
    Let's be clear from the start: Twitter's algorithms have 
made mistakes and its methods for moderating and policing 
content have been opaque to consumers. We're holding this 
hearing to give you the opportunity to better explain your 
company's actions to Congress, and, more importantly, to the 
American people.
    I do want to take a moment to recognize that you have 
worked in recent weeks to reach out to conservative audiences 
and discuss publicly the issues your company is facing. Earlier 
this year, you and I had a productive conversation here in 
Washington, and have since stayed in contact.
    As Google, Apple, Facebook and others grapple with their 
own controversies, I commend you as a leader among your peers 
in understanding the importance of substantive dialogue with 
Congress and the American people. I reiterate again my open 
invitation to other tech CEOs. Testifying in good faith before 
a scandal happens can go a long way towards building trust and 
goodwill.
    Now, we recognize the complexity of trying to manage your 
service, which posts over half-a-billion tweets a day. We also 
understand that humans build Twitter's algorithms, humans make 
decisions about Twitter's Terms of Service, and humans 
recommend changes to Twitter's policies.
    And people can make mistakes.
    How Twitter manages those circumstances is critically 
important in an environment where algorithms are set up to 
decide what we see in our newsfeed, ads, search suggestions, 
and more.
    It should now be quite clear that even well-intentioned 
algorithms can have unintended consequences. Prominent 
Republicans, including multiple Members of Congress and the 
Chairwoman of the Republican Party have seen their Twitter 
presences temporarily minimized in recent months, due to what 
you have claimed was a mistake in the algorithm.
    When you boil it down, a set of data inputs and algorithmic 
outcomes can shape the national conversation in the time it 
takes for a tweet to go viral. That's why this committee takes 
allegations of bias and algorithms gone awry so seriously, and 
you should, too.
    It takes years to build trust, but it only takes 280 
characters to lose it.
    It is critical that you are living up to your own promises 
and the expectations you set out for consumers. According to 
Twitter's rules, the company believes ``that everyone should 
have the power to create and share ideas and information 
instantly, without barriers.''
    That is a noble mission, and one that has enriched the 
world, changed societies, and given an outlet to voices that 
might otherwise never be heard.
    It has also brought on many of the challenges we're here to 
discuss today.
    It's worth noting that Twitter's content moderation 
decisions are enabled by Section 230 of the Communications 
Decency Act, landmark legislation coauthored by this committee 
in 1996, and since widely credited as ``the law that gave us 
the modern internet.'' Through this legislation, Congress 
entrusted you with broad authority to ban, promote, or 
deprioritize content as you see fit, without taking the kind of 
responsibility for what appears on your website that a 
publisher must.
    But as we saw recently with the enactment of the Fight 
Online Sex Trafficking Act, the Section 230 safe harbor was not 
intended to be an unlimited free pass. It can evolve, and 
Congress must maintain oversight of how the safe harbor is 
being used and the appropriateness of the moderating decisions 
it enables.
    Mr. Dorsey, it is now up to you to assure the American 
people how Twitter continues to live up to its mission, not 
only through public statements but through action. We hope you 
can help us better understand how Twitter decides when to 
suspend a user or ban them from the service, and what you do to 
ensure that such decisions are made without undue bias. We hope 
you can help us better understand what role algorithms have in 
this process, and how those algorithms are designed to ensure 
consistent outcomes and a fair process.
    We also expect to hear what you are doing to implement 
change and make Twitter more transparent for consumers.
    We appreciate your willingness to appear before us today 
and we thank you for taking the time to help us understand this 
important topic.

OPENING STATEMENT OF HON. FRANK PALLONE, JR., A REPRESENTATIVE 
            IN CONGRESS FROM THE STATE OF NEW JERSEY

    Mr. Pallone. Thank you, Mr. Chairman.
    Over the past few weeks, President Trump and many 
Republicans have peddled conspiracy theories about Twitter and 
other social media platforms to whip up their base and 
fundraise. I fear the Republicans are using this hearing for 
those purposes instead of addressing the serious issues raised 
by social media platforms that affect Americans' everyday 
lives.
    Twitter is a valuable platform for disseminating news, 
information, and viewpoints. It can be a tool for bringing 
people together and allows one to reach many. In places like 
Iran and Ukraine, Twitter was used to organize and give voice 
to the concerns of otherwise voiceless individuals. Closer to 
home, Twitter and hashtags like #StayWoke, #MeToo, and 
#NetNeutrality have fostered important conversations and 
supported larger social movements that are changing our 
society.
    But Twitter has a darker side. Far too many Twitter users 
still face bullying and trolling attacks. Tweets designed to 
threaten, belittle, demean, and silence individuals can have a 
devastating effect, sometimes even driving people to suicide, 
and while Twitter has taken some steps to protect users and 
enable reporting, more needs to be done.
    Bad actors have co-opted Twitter and other social media 
platforms to spread disinformation and sow divisions in our 
society. For example, Alex Jones used Twitter to amplify 
harmful and dangerous lies such as those regarding the Sandy 
Hook Elementary School shooting. Others have used the platform 
to deny the existence of the Holocaust, disseminate racial 
supremacy theories, and spread false information about 
terrorism, natural disasters, and more.
    When questioned about this disinformation, Twitter's CEO, 
Jack Dorsey, said the truth will win out in the end. But there 
is reason to doubt that, in my opinion. According to a recent 
study published by the MIT Media Lab, false rumors on Twitter 
traveled ``farther, faster, deeper, and more broadly than the 
truth,'' with true claims taking about six times as long to 
reach the same number of people, and that's dangerous.
    And countries like Russia and Iran are taking advantage of 
this to broadly disseminate propaganda and false information. 
Beyond influencing elections, foreign agents are actively 
trying to turn groups of Americans against each other and these 
countries are encouraging conflict to sow division and hatred 
by targeting topics that generate intense feelings such as 
race, religion, and politics.
    Unfortunately, the actions of President Trump have made the 
situation worse. Repeatedly, the president uses Twitter to 
bully and belittle people, calling them names like ``dog,'' 
``clown,'' ``spoiled brat,'' ``son of a bitch,'' ``enemies,'' 
and ``loser.'' He routinely tweets false statements designed to 
mislead Americans and foster discord, and the president's 
actions coarsen the public debate and feed distrust within our 
society.
    President Trump has demonstrated that the politics of 
division are good for fund raising and rousing his base and, 
sadly, Republicans are now following his lead instead of 
criticizing the president for behavior that would not be 
tolerated even from a child. As reported in the news, the Trump 
campaign and the Republican majority leader have used the 
supposed anti-conservative bias online to fund raise. This 
hearing appears to be just one more mechanism to raise money 
and generate outrage, and it appears Republicans are 
desperately trying to rally the base by fabricating a problem 
that simply does not exist.
    Regardless of the Republicans' intentions for this hearing, 
Twitter and other social media platforms must do more to regain 
and maintain the public trust. Bullying, the spread of 
disinformation and malicious foreign influence continue. 
Twitter policies have been inconsistent and confusing. The 
company's enforcement seems to chase the latest headline as 
opposed to addressing systemic problems. Twitter and other 
social media platforms must establish clear policies to address 
the problems discussed today, provide tools to users and then 
swiftly and fairly enforce those policies, and those policies 
should apply equally to the president, politicians, 
administration officials, celebrities, and the teenager down 
the street.
    It's long past time for Twitter and other social media 
companies to stop allowing their platforms to be tools of 
discord, of spreading false information, and of foreign 
government manipulation.
    So I thank you for having the hearing, Mr. Chairman, and I 
yield back.
    [The prepared statement of Mr. Pallone follows:]

             Prepared statement of Hon. Frank Pallone, Jr.

    Over the past few weeks, President Trump and many 
Republicans have peddled conspiracy theories about Twitter and 
other social media platforms to whip up their base and 
fundraise. I fear the Republicans are using this hearing for 
those purposes instead of addressing the serious issues raised 
by social media platforms that affect Americans' everyday 
lives.
    Twitter is a valuable platform for disseminating news, 
information, and viewpoints. It can be a tool for bringing 
people together and allows one to reach many. In places like 
Iran and Ukraine, Twitter was used to organize and give voice 
to the concerns of otherwise voiceless individuals. Closer to 
home, Twitter and hashtags like Stay Woke, Me Too, and Net 
Neutrality have fostered important conversations and supported 
larger social movements that are changing our society.
    But Twitter has a darker side. Far too many Twitter users 
still face bullying and trolling attacks. Tweets designed to 
threaten, belittle, demean, and silence individuals can have 
devastating effects, sometimes even driving people to suicide. 
While Twitter has taken some steps to protect users and enable 
reporting, more needs to be done.
    Bad actors have co-opted Twitter and other social media 
platforms to spread disinformation and sow divisions in our 
society. For example, Alex Jones used Twitter to amplify 
harmful and dangerous lies such as those regarding the Sandy 
Hook Elementary School shooting. Others have used the platform 
to deny the existence of the Holocaust, disseminate racial 
supremacy theories, and spread false information about 
terrorism, natural disasters, and more.
    When questioned about this disinformation Twitter CEO Jack 
Dorsey said the truth will win out in the end, but there is 
reason to doubt that. According to a recent study published by 
the MIT Media Lab, false rumors on Twitter traveled ``farther, 
faster, deeper, and more broadly than the truth'' with true 
claims taking about six times as long to reach the same number 
of people. That's dangerous.
    And countries like Russia and Iran are taking advantage of 
this to broadly disseminate propaganda and false information. 
Beyond influencing elections, foreign agents are actively 
trying to turn groups of Americans against each other. These 
countries are encouraging conflict to sow division and hatred 
by targeting topics that generate intense feelings such as 
race, religion, and politics.
    Unfortunately, the actions of President Trump have made the 
situation worse. Repeatedly, the President uses Twitter to 
bully and belittle people calling them names like ``dog,'' 
``clown,'' ``spoiled brat,'' ``son of a bitch,'' ``enemies,'' 
and ``loser.'' He routinely tweets false statements designed to 
mislead Americans and foster discord. The President's actions 
coarsen the public debate, and feed distrust within our 
society.
    President Trump has demonstrated that the politics of 
division are good for fundraising and rousing his base. Sadly, 
Republicans are now following his lead instead of criticizing 
the President for behavior that would not be tolerated from a 
child. As reported in the news, the Trump campaign and the 
Republican Majority Leader have used the supposed anti-
conservative bias online to fundraise. This hearing appears to 
be just one more mechanism to raise money and generate outrage. 
It appears Republicans are desperately trying to rally their 
base by fabricating a problem that simply does not exist.
    Regardless of the Republicans' intentions for this hearing, 
Twitter and other social media platforms must do more to regain 
and maintain the public trust. Bullying, the spread of 
disinformation, and malicious foreign influence continue. 
Twitter's policies have been inconsistent and confusing. The 
company's enforcement seems to chase the latest headline as 
opposed to addressing systemic problems. Twitter and other 
social media platforms must establish clear policies to address 
the problems discussed today, provide tools to users, and then 
swiftly and fairly enforce those policies. And those policies 
should apply equally to the President, politicians, 
Administration officials, celebrities, and the teenager down 
the street.
    It's long past time for Twitter and other social media 
companies to stop allowing their platforms to be tools of 
discord, of spreading false information, and of foreign 
government manipulation.
    Thank you, I yield back.

    Mr. Walden. I thank the gentleman.
    The chair now recognizes Mr. Dorsey for purposes of an 
opening statement. We appreciate your being here and feel free 
to go ahead.

          STATEMENT OF JACK DORSEY, CEO, TWITTER, INC.

    Mr. Dorsey. Thank you.
    Thank you, Chairman Walden, Ranking Member Pallone, and the 
committee for the opportunity to speak on behalf of Twitter to 
the American people.
    I look forward to our conversation about our commitment to 
impartiality, to transparency, and to accountability.
    If it's OK with all of you, I'd like to read you something 
I personally wrote as I thought about these issues. I am also 
going to tweet it out right now.
    I want to start by making something very clear. We don't 
consider political viewpoints, perspectives, or party 
affiliation in any of our policies or enforcement decisions, 
period. Impartiality is our guiding principle. Let me explain 
why. We believe many people use Twitter as a digital public 
square. They gather from all around the world to see what's 
happening and have a conversation about what they see. Twitter 
cannot rightly serve as a public square if it's constructed 
around the personal opinions of its makers.
    We believe a key driver of a thriving public square is the 
fundamental human right of freedom of opinion and expression. 
Our early and strong defense of open and free exchange has 
enabled Twitter to be the platform for activists, marginalized 
communities, whistle blowers, journalists, governments, and the 
most influential people around the world. Twitter will always 
default to open and free exchange.
    A default to free expression left unchecked can generate 
risks and dangers for people. It's important Twitter 
distinguishes between people's opinions and their behaviors and 
disarms behavior intending to silence another person or 
adversely interfere with their universal human rights.
    We build our policies and rules with the principle of 
impartiality: objective criteria rather than bias, prejudice, 
or preferring the benefit of one person over another for 
improper reasons.
    If we learn we failed to create impartial outcomes, we 
immediately work to fix it. In the spirit of accountability and 
transparency, recently we failed our intended impartiality.
    Our algorithms were unfairly filtering 600,000 accounts, 
including some members of Congress, from our search auto 
complete and latest results. We fixed it, but how did it 
happen?
    Our technology was using a decision-making criterion that 
considers the behavior of people following these accounts. We 
decided that wasn't fair and we corrected it.
    We will always improve our technology and algorithms to 
drive healthier usage and measure the impartiality of those 
outcomes.
    Bias in algorithms is an important topic. Our 
responsibility is to understand, measure, and reduce accidental 
bias due to factors such as the quality of the data used to 
train our algorithms. This is an extremely complex challenge 
facing everyone applying artificial intelligence.
    For our part, machine-learning teams at Twitter are 
experimenting with these techniques in developing roadmaps to 
ensure present and future machine-learning models uphold a high 
standard when it comes to algorithmic fairness.
    It's an important step towards ensuring impartiality. 
Looking at the data, we analyzed tweets sent by all members of 
the House and Senate and found no statistically significant 
difference between the number of times a tweet by a Democrat is 
viewed versus a Republican, even after all of our ranking and 
filtering of tweets has been applied.
    Also, there is a distinction we need to make clear. When 
people follow you, you've earned that audience and we have a 
responsibility to make sure they can see your tweets. We do not 
have a responsibility, nor do you have a right, to amplify your tweets to 
an audience that doesn't follow you.
    What our algorithms decide to show in shared spaces like 
search results is based on thousands of signals that constantly 
learn and evolve over time.
    Some of those signals are engagement. Some are the number 
of abuse reports. We balance all of these to prevent gaming our 
system.
    We acknowledge the growing concern people have of the power 
held by companies like Twitter. We believe it's dangerous to 
ask Twitter to regulate opinions or be the arbiter of truth.
    We'd rather be judged by the impartiality of outcomes and 
criticized when we fail this principle.
    In closing, when I think of our work, I think of my mom and 
dad in St. Louis, a Democrat and a Republican. We had lots of 
frustrating and heated debates. But looking back, I appreciate 
I was able to hear and challenge different perspectives and I 
also appreciate I felt safe to do so.
    We believe Twitter helps people connect to something bigger 
than themselves, show all the amazing things that are happening 
in the world, and all the things we need to acknowledge and 
address.
    We are constantly learning how to make it freer and 
healthier for all to participate.
    Thank you, all.
    [The prepared statement of Mr. Dorsey follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Walden. Thank you, Mr. Dorsey.
    We'll now begin the opportunity to have questions and I 
will lead off.
    So, Mr. Dorsey, I am going to get straight to the heart of 
why we are here today. We have a lot of questions about 
Twitter's business practices including questions about your 
algorithms, content management practices, and how Section 230's 
safe harbors protect Twitter.
    In many ways, for some of us, it seems a little bit like 
the Wizard of Oz--we want to know what's going on behind the 
curtain.
    This summer, reports surfaced that profiles of prominent 
Republican Twitter users were not appearing in automatically 
populated drop-down search results. I think you mentioned that 
in your own testimony. This was after a member of this 
committee had her tweets and ads taken off the service because 
of a basic conservative message, and then there are other 
examples that have been sent our way.
    Twitter's public response is, ``We do not shadow ban.'' 
You're always able to see the tweets from accounts you follow, 
although you may have to ``do more work to find them like go 
directly to their profile.''
    But most people might think of that as shadow banning. It 
doesn't matter what your definition of shadow 
banning is when the expectation you are giving to your users 
who choose to follow certain accounts is different from what 
they see on their timeline and in their searches.
    In one example of many, certain prominent conservative 
users including some of our colleagues who have come to us--
Representatives Meadows, Jordan, Gaetz--were not shown in the 
automatically populated drop-down searches on Twitter, correct?
    Out of the more than 300 million active Twitter users, why 
did this only happen to certain accounts? In other words, what 
did the algorithm take into account that led to prominent 
conservatives, including members of the U.S. House of 
Representatives, not being included in auto search suggestions? 
What caused that?
    Mr. Dorsey. Thank you for the question.
    So we use signals, usually hundreds of signals, to 
determine and to decide what to show, what to down rank, or, 
potentially, what to filter.
    In this particular case, as I mentioned in my opening, we 
were using a signal of the behavior of the people following 
accounts and we didn't believe, upon further consideration and 
also seeing the impact, which was about 600,000 accounts--a 
pretty broad base--that that was ultimately fair and we decided 
to correct it.
    We also decided that it was not fair to use a signal for 
filtering in general and we decided to correct that within 
search as well. And it is important for us to, one, be able to 
experiment freely with the signals and to have the freedom to 
be able to inject them and also to remove them because that's 
the only way we are going to learn.
    We will make mistakes along the way and the way we want to 
be judged is making sure that we recognize those and that we 
correct them, and what we are looking for in terms of whether 
we made a mistake or not is this principle of impartiality and, 
specifically, impartial outcomes, and we realized that in this 
particular case and within search that we weren't driving that 
and we could have done a better job there.
    Mr. Walden. Let me ask you another question. Could bots 
game the system or work to block or silence certain voices, 
political or otherwise?
    Mr. Dorsey. We are always looking for patterns of behavior 
intending to amplify information artificially and that 
behavior could include actions like blocking.
    So that's why it's important that we don't just use one 
signal but we use hundreds of signals and that we balance them 
accordingly.
    There is a perception that a simple report of a violation 
of the terms of service will result in action or down ranking. 
That is not true. It is one signal that we use and weigh 
according to other signals that we see across.
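    [For illustration only, the following is a minimal Python 
sketch of the signal weighting Mr. Dorsey describes. The signal 
names, weights, and cap are hypothetical assumptions, not 
Twitter's actual ranking system.]

        # Hypothetical sketch: the signal names, weights, and cap are
        # assumptions for illustration, not Twitter's actual ranking code.
        from dataclasses import dataclass

        @dataclass
        class TweetSignals:
            engagement: float         # e.g., normalized likes and replies
            follower_behavior: float  # 0-1 score from followers' conduct
            abuse_reports: int        # raw count of user reports

        def rank_score(s: TweetSignals) -> float:
            """Blend many signals so no single one, such as a lone abuse
            report, can by itself down-rank or hide a tweet."""
            score = 0.6 * s.engagement + 0.4 * s.follower_behavior
            # Reports lower the score only gradually, and are capped so
            # mass reporting cannot game the system on its own.
            score -= 0.05 * min(s.abuse_reports, 10)
            return score

        # A single report barely moves an otherwise healthy tweet.
        print(rank_score(TweetSignals(0.8, 0.7, 1)))
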
    Mr. Walden. I have one final question. I asked Twitter 
followers I have, and one from Oregon asked why Twitter relies 
exclusively on users to report violations.
    Mr. Dorsey. This is a matter of scale. So today, in order 
to remove tweets or to remove accounts, we do require a report 
of the violation, and that report is reviewed by an individual.
    Those reports are prioritized based on the severity of the 
report. So death threats have a higher prioritization than all 
others, and we take action on them much faster.
    We do have algorithms that are constantly proactively 
searching the network and, specifically, the behaviors on the 
network and filtering and down ranking accordingly. And what 
that means in terms of filtering is it might filter behind an 
interstitial. An interstitial is a graphic or element within 
our app or service that one can tap to see more tweets or show 
more replies.
    So in some cases, we are proactively, based on these 
algorithms, hiding some of the content, causing a little bit 
more friction to actually see it and, again, those are models 
that we constantly learn from and evolve as well.
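    [For illustration only, the following is a minimal Python 
sketch of the severity-based report triage Mr. Dorsey describes. 
The severity categories and their ordering are assumptions, not 
Twitter's documented system.]

        # Hypothetical sketch: severity categories and ordering are
        # assumptions for illustration, not Twitter's documented triage.
        import heapq

        SEVERITY = {"death_threat": 0, "targeted_harassment": 1, "spam": 2}

        class ReportQueue:
            """Reports are reviewed in severity order; ties keep arrival
            order, so older reports of equal severity go first."""

            def __init__(self):
                self._heap = []
                self._arrival = 0

            def add(self, report_id, category):
                heapq.heappush(
                    self._heap, (SEVERITY[category], self._arrival, report_id))
                self._arrival += 1

            def next_for_review(self):
                return heapq.heappop(self._heap)[2]

        queue = ReportQueue()
        queue.add("r1", "spam")
        queue.add("r2", "death_threat")
        print(queue.next_for_review())  # r2: the death threat goes first
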
    Mr. Walden. My time has expired.
    I now recognize the gentleman from New Jersey, Mr. Pallone.
    Mr. Pallone. Thank you, Mr. Chairman.
    Twitter's effect on American society raise genuine and 
serious issues. But that's not why the Republican majority has 
called you here today, Mr. Dorsey.
    I think it's the height of hypocrisy that President Trump 
and congressional Republicans criticize Twitter for supposed 
liberal bias when President Trump uses the platform every day 
for his juvenile tweets and spreading lies and misinformation 
to the whole country and to the world.
    In my opinion, you have an obligation to ensure your 
platform, at a minimum, does no harm to our country or 
democracy and the American public. And as I noted in my 
opening, one persistent critique of Twitter by civil rights 
advocates and victims of abuse and others is that your policies 
are unevenly enforced.
    The rich and powerful get special treatment. Others get 
little recourse when Twitter fails to protect them unless the 
company gets some bad press.
    Now, you have admitted that Twitter needs to do a better 
job explaining how decisions are made, especially those by 
human content moderators who handle the most difficult and 
sensitive questions.
    So let me just ask you, how many human content moderators 
does Twitter employ in the U.S. and how much do they get paid?
    Mr. Dorsey. So we want to think about this problem not in 
terms of the number of people but how we make decisions to 
invest in building new technologies versus hiring folks.
    Mr. Pallone. Well, let me ask you these three questions on 
this point and then if you can't answer it I would appreciate 
it if through the chairman you could get back to us.
    The first one was how many human content moderators does 
Twitter employ in the U.S. and how much do they get paid, 
second, how many hours of training is given to them to ensure 
consistency in their decisions, and last, are they given 
specific instructions to ensure that celebrities and 
politicians are treated the same as everyone else.
    Otherwise, I am going to ask you to get back to us in 
writing because I----
    Mr. Dorsey. We'll follow up with you on specific numbers. 
But on the last point, this is a very important distinction. I 
do believe that we need to do more around protecting private 
individuals than public figures.
    I don't know yet exactly how that will manifest. But I do 
believe it's important that we extend the protection of our 
rules more to private individuals than to public 
figures.
    Mr. Pallone. Well, I appreciate that, because I think 
everyone should be treated the same and you seem to be saying 
that. But we have to make sure that the enforcement mechanism 
is there so that's true.
    Let me ask, if you could report back to the committee 
within one month of what steps Twitter is taking to improve the 
consistency of its enforcement and the metrics that demonstrate 
improvement, if you could, within a month. Is that OK?
    Mr. Dorsey. Absolutely.
    Mr. Pallone. All right.
    Now, let me turn to another issue. I only have a minute. 
Other technology companies like Airbnb and Facebook have 
committed to conducting civil rights audits amid concerns 
raised by members of the Congressional Black Caucus and others 
including Representatives Rush to my left, Butterfield, and 
Clarke on our committee, and these audits seek to uncover how 
platforms and their policies have been used to stoke racial and 
religious resentment or violence, and given the sometimes 
dangerous use of your platform and the haphazard approach of 
Twitter towards developing and enforcing its policies, I think 
your company should take similar action.
    So let me ask these three questions and, again, if you can 
answer them. If not, please get back to us within the month.
    Will you commit to working with an independent third-party 
institution to conduct a civil rights audit of Twitter? Yes or 
no.
    Mr. Dorsey. We will, and we do do that on a regular basis 
with what's called our Trust and Safety Council, which----
    Mr. Pallone. All right. But I am asking for an independent 
third-party institution to conduct it.
    Mr. Dorsey. Yes. Let us follow up with you on that.
    Mr. Pallone. All right.
    Second, let me ask these two together--will you commit to 
making the results of all such audits available to the public, 
including all recommendations and findings?
    Mr. Dorsey. Yes. We do believe we need a lot more 
transparency around our actions and our decisions----
    Mr. Pallone. All right.
    Then the third one, Mr. Chairman, with your permission, 
will you commit, based on the findings of all such audits to 
change Twitter's policies, programs and processes to address 
these areas of concern? Yes or no.
    Mr. Dorsey. We are always looking to evolve our policies 
based on what we find, so yes.
    Mr. Pallone. All right.
    And again, Mr. Chairman, through you, if we could get a 
report back to the committee within one month of the steps that 
Mr. Dorsey is taking, I would appreciate it.
    Mr. Dorsey. Thank you.
    Mr. Walden. All right. Thank you.
    I now turn to Mr. Upton, former chairman of the committee, 
for questions.
    Mr. Upton. Thank you, Mr. Chairman.
    So, Mr. Dorsey, I think it's fair to say that even looking 
at my Twitter feed that there are some fairly ugly things on 
Twitter that come every now and then, and my name is Fred Upton 
and I got a bet that my initials are probably used more than 
just about any other.
    [Laughter.]
    Might even think that it's bipartisan on both sides of the 
aisle. But I would like to see civility brought back into the 
public discourse. In a July post, Twitter acknowledged that 
tweets from bad faith actors who intend to manipulate or divide 
the conversations should be ranked lower.
    So the question is how do you determine whether a user is 
tweeting to manipulate or divide the conversation?
    Mr. Dorsey. This is a great question and one that we have 
struggled with in the past. We recently determined that we 
needed something much more tangible and cohesive in order to 
think about this work and we've come across health as a 
concept.
    And we've all had experiences where we felt we've been in a 
conversation that's a little bit more toxic and we wanted to 
walk away from it. We've all been in conversations that felt 
really empowering and something that we are learning from and 
we want to stay in them.
    So right now, we are trying to determine what the 
indicators of conversational health are and we are starting 
with four indicators. One is what is the amount of shared 
attention that a conversation has? What percentage of the 
conversation is focused on the same things? What is a 
percentage of shared facts that the conversation is having--not 
whether the facts are true or false, but are we sharing the 
same facts. What percentage of the conversation is receptive? 
And finally, is there a variety of perspective within the 
conversation or is it a filter bubble or echo chamber of the 
same sort of ideas.
    So we are currently trying to figure out what those 
indicators of health are and to measure them and we intend not 
only to share what those indicators are that we've found but 
also to measure ourselves against it and make that public so we 
can show progress, because we don't believe we can really fix 
anything unless we can--we can measure it and we are working 
with external parties to help us do that because we know we 
can't do this alone.
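    [For illustration only, the following is a minimal Python 
sketch of two of the four indicators Mr. Dorsey lists, shared 
attention and variety of perspective. The formulas are 
assumptions; Twitter has not published how it measures 
conversational health.]

        # Hypothetical sketch: the formulas are assumptions for
        # illustration; Twitter has not published these measurements.
        from collections import Counter

        def shared_attention(topics_per_reply):
            """Fraction of replies mentioning the most common topic."""
            counts = Counter(t for topics in topics_per_reply for t in topics)
            if not counts:
                return 0.0
            top = counts.most_common(1)[0][0]
            hits = sum(1 for topics in topics_per_reply if top in topics)
            return hits / len(topics_per_reply)

        def variety_of_perspective(stances):
            """Share held by the smallest stance; near zero suggests an
            echo chamber rather than a variety of perspective."""
            counts = Counter(stances)
            if len(counts) < 2:
                return 0.0
            return min(counts.values()) / len(stances)

        replies = [{"election", "turnout"}, {"election"}, {"weather"}]
        print(shared_attention(replies))                      # 2 of 3 replies
        print(variety_of_perspective(["pro", "pro", "con"]))  # minority is 1/3
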
    Mr. Upton. So do you believe that Twitter's rules are clear 
on what's allowed and what's not allowed on the platform?
    Mr. Dorsey. I believe if you were to go to our rules today 
and sit down with a cup of coffee, you would not be able to 
understand it. I believe we need to do a much better job not 
only with those rules but with our terms of service. We need to 
make them a whole lot more approachable.
    We would love to lead in this area and we are working on 
this. But I think there's a lot of confusion around our rules 
and also our enforcement and we intend to fix it.
    Mr. Upton. The last question is can a Twitter user's friend 
or someone that they follow grant a third party permission to 
access that user's personal information?
    Mr. Dorsey. No. If you are sharing the password of your 
account with another person, then they have the rights that you 
would have to take actions with that account.
    Mr. Upton. Yield back.
    Mr. Walden. The chair now recognizes the gentleman from New 
York, Mr. Tonko.
    Ms. DeGette is next. OK. The chair now recognizes the 
gentlelady from Colorado, Ms. DeGette. We are going by the 
order we were given.
    Ms. DeGette. Thank you very much, Mr. Chairman.
    Mr. Dorsey, thank you so much for joining us here today 
because these are important issues, and even though the 
Democrats have highlighted that, really, some of the reasons 
why you came are--we think are political and wrong, 
nonetheless, there are some real issues with Twitter that I 
think we can discuss today.
    And as you said, Twitter really has become a tool for 
engagement across society and, recently, we saw some of its 
positive social change with the role it's played in the #Metoo 
movement.
    But nonetheless, Twitter has also experienced its own 
sexual harassment problem to confront and I just wanted to ask 
you some questions about how Twitter is dealing with these 
issues.
    I don't know if you're aware, Mr. Dorsey, of the Amnesty 
International report called ``Toxic Twitter: A Toxic Place for 
Women \*\.'' Are you aware of that?
---------------------------------------------------------------------------
    \*\ The information has been retained in committee files and can be 
found at: https://docs.house.gov/meetings/IF/IF00/20180905/108642/HHRG-
115-IF00-20180905-SD015.pdf.
---------------------------------------------------------------------------
    Mr. Dorsey. I am aware of it.
    Ms. DeGette. Mr. Chairman, I would like to ask unanimous 
consent to put that in the record.
    Mr. Walden. Without objection.
    Ms. DeGette. Now, in that report, it described the issues 
women face on Twitter and how Twitter could change to be more 
friendly to women. I assume you have talked to Amnesty 
International about this report and about some of their 
recommendations?
    Mr. Dorsey. I haven't personally but I imagine that the 
folks on our team have. But we can follow up with you.
    Ms. DeGette. Thank you.
    The report goes into great and, frankly, graphic detail of 
the types of abuses that have been experienced on Twitter 
including threats of rape, bodily harm, and death.
    Now, some have been found to violate Twitter's guidelines 
but others were not, and I think probably you and your staff 
agree that Twitter needs to do a better job of addressing 
instances where some of the users are using the platform to 
harass and threaten others.
    And so I am wondering if you can tell me does Twitter 
currently have data on reports of abusive conduct, including on 
the basis of race, religion, gender, or orientation, targeted 
harassment, or threats of violence? And separately, does 
Twitter have data on the actions that it has taken to address 
these complaints?
    Mr. Dorsey. So a few things here. First and foremost, we 
don't believe that we can create a digital public square for 
people if they don't feel safe to participate in the first 
place, and our number one and singular objective as a company 
is to increase the health of this public space.
    We do have data on all violations that we have seen across 
the platform and the context of those violations, and we do 
intend--and this will be an initiative this year--to create a 
transparency report that will make that data more public so 
that all can learn from it and we can also be held publicly 
accountable to it.
    Ms. DeGette. That's good news, and you say you will have 
that this year yet, by the end of----
    Mr. Dorsey. We are working on it as an initiative this 
year. We have a lot of work to do to aggregate all the data and 
to report that will be meaningful----
    Ms. DeGette. And is Twitter also taking actions to address 
some of the deficiencies that have been identified in this 
report and in other places?
    Mr. Dorsey. We are. One other point I wanted to make is 
that we don't feel it's fair that the victims of abuse and 
harassment have to do the work to report it.
    Ms. DeGette. Yes.
    Mr. Dorsey. Today, our system does work on reports, 
especially when it has to take content down. So abuse reports 
are a metric that we would look at, not as something that we 
want to go up because it's easier to report things but as 
something we want to go down not only because we think that we 
can--we can reduce the amount of abuse but we can actually 
create technology to recognize it before people have to do the 
reporting themselves.
    Ms. DeGette. Recognize it and take it down before a report 
has to be made?
    Mr. Dorsey. Yes. Any series of enforcement actions all the 
way to the extreme of it, which is removing content.
    Ms. DeGette. Thank you.
    Mr. Chairman, I just want to say for the record I don't 
think these issues are unique to Twitter. Unlike so many of the 
invented borderline conspiracy theories, I believe this is a 
real threat and I appreciate you, Mr. Dorsey, taking this 
seriously and your entire organization so that we can really 
reduce these threats online.
    Thank you, and I yield back.
    Mr. Dorsey. Thank you.
    Mr. Walden. The gentlelady yields back.
    The chair recognizes the gentleman from Illinois, Mr. 
Shimkus, for questions.
    Mr. Shimkus. Thank you, Mr. Chairman.
    Mr. Dorsey, first of all, go Cards. I am from the St. Louis 
metropolitan area and be careful of Colin behind you, who has 
been known to be in this committee room a couple times. So we 
are glad to have him back.
    And I want to go to my questions and then hopefully have 
time for a little summation. While listening to users is 
important, how can anyone be sure that standards about what 
``distracts'' or ``distorts'' are being handled fairly and 
consistently? And the follow-up is doesn't this give power to 
the loudest mob and, ultimately, fail to protect controversial 
speech?
    Mr. Dorsey. So this goes back to that framework I was 
discussing around health and, again, I don't know if those are 
the right indicators yet. That's why we are looking for outside 
help to make sure that we are doing the right work.
    But we should have an understanding and a tangible 
measurement of our effects on our system and, specifically, in 
these cases we are looking for behaviors that try to 
artificially amplify information and game our systems in some 
ways that might happen----
    Mr. Shimkus. I am sorry to interrupt--but a bot would be--
you would consider that as manipulating the system, right?
    Mr. Dorsey. If a bot is used for manipulating the 
conversation and the way we----
    Mr. Shimkus. What about if the users band together? Would 
that be what you would consider manipulation?
    Mr. Dorsey. And that's what makes this issue complicated: 
sometimes we see bots, and sometimes we see human coordination 
in order to manipulate.
    Mr. Shimkus. Thank you. Twitter has a verification program 
where users can be verified by Twitter as legitimate and 
verified users have a blue checkmark next to their name on 
their page. How does the review process for designating 
verified users align with your community guidelines or 
standards?
    Mr. Dorsey. Well, to be very frank, our verification 
program right now is not where we'd like it to be and we do 
believe it is in serious need of a reboot and a reworking.
    And it has a long history. It started as a way to verify 
that the CDC account was the actual CDC account during the 
swine flu, and we brought it in without as many strong 
principles as we needed, and then we opened the door to 
everyone. Unfortunately, that has caused some issues 
because the verified badge also is a signal that is used in 
some of our algorithms to rank higher or to inject within 
shared areas of the----
    Mr. Shimkus. That was my next question. You do prioritize 
content shared by verified users currently?
    Mr. Dorsey. We do have signals that do that. We are 
identifying those and asking ourselves whether that is still 
true and it's still correct today.
    Mr. Shimkus. And then I am just going to end with my final 
minute to talk about industry standards. I think my colleague, 
Diana DeGette, hit on the issue because this is across the 
technological space.
    You're not the only one that's trying to address these type 
of concerns. Many industries have banded together to have 
industry standards by which they can comply and also can help 
self-police and self-correct.
    I would encourage the tech sector to start looking at that 
model and there's a lot of them out there. I was fortunate to 
get this book, ``The Future Computed,'' in one of my visits to 
Tech World, and they just mention fairness, reliability, 
privacy, inclusion, transparency, and accountability as kind of 
baseloads of standards that should go across the platform, and 
we need to get there for the use of the platforms and the 
trust.
    And with that, thank you, Mr. Chairman. Yield back.
    Mr. Walden. The gentleman yields back.
    The chair recognizes the gentleman from Texas, Mr. Green, 
for questions.
    Mr. Green. Thank you, Mr. Chairman.
    Mr. Dorsey, thank you for being here today and I am pleased 
that Twitter started taking steps to improve users' experience 
on its platform.
    However, Twitter's current policies still leave the 
consumers in danger of the spread of misinformation and 
harassment.
    Twitter needs to strengthen its policies to ensure that 
users are protected from fake accounts, misinformation, and 
harassment, and I know that's an issue you all are trying to 
address.
    I would like to start off by addressing privacy. Twitter 
has changed its policy in regards to the general data 
protection regulation that went into effect by the European 
Union this summer.
    The GDPR makes it clear that consumers need to be in 
control of their own data and understand how their data is
being given to others.
    Mr. Dorsey, as it now stands, the United States does not 
mandate that these settings be enforced. However, I think they 
are important and integral for consumers.
    My question is, will Twitter commit to allowing users in the 
United States to have the option of opting out of tracking, 
despite the fact that there's no current regulation mandating 
this protection for consumers?
    Mr. Dorsey. Thank you for the question.
    Even before GDPR was enacted and we complied with that 
regulation, a year prior we were actively making sure that the 
people that we serve have the controls necessary to opt out of 
tracking across the web, to understand all the data that we 
have inferred on their usage, and to individually turn that off 
and on.
    So we took some major steps pre-GDPR and made sure that we 
complied with GDPR as well. We are very different from our 
peers in that the majority of what is on Twitter is public.
    People are approaching Twitter with a mindset of when I 
tweet this the whole world can see it. So we have a different 
approach and different needs.
    But we do believe that privacy is a fundamental human right 
and we will work to protect it and continue to look for ways to 
give people more control and more transparency around what we 
have on them.
    Mr. Green. Thank you.
    One of the steps Twitter has taken to protect consumers has 
been to come together with other social media platforms to 
create the Global Internet Forum to Counter Terrorism.
    However, there is no forum to counter fake bot accounts on 
social media platforms. What steps is Twitter taking to work 
together with other social media platforms to combat these fake 
bot accounts, like the 770 accounts Twitter and other social 
media platforms recently deleted that were linked to Russian 
and Iranian disinformation campaigns?
    Mr. Dorsey. Yes. So this one is definitely a complicated 
issue that we are addressing head on. There are a few things 
here. We would love to just generally be able to identify bots 
across the platform, and we can do that by recognizing when 
people come in through our API.
    There are other vectors of attack where people script our 
website or our app to make it look as if they were humans and 
they're not coming through our API.
    So it's not a simple answer. But having said that, we have 
gotten a lot better in terms of identifying and also 
challenging accounts.
    We identify 8 to 10 million accounts every single week and 
challenge them to determine if they're human or not and we've 
also thwarted over half a million accounts every single day 
from even logging in to Twitter because of what we detected to 
be suspicious activity.
    So there's a lot more that we need to do, but I think we do 
have a good start. We always want to side with more automated 
technology that recognizes behaviors and patterns instead of 
going down to the surface area of names or profile images or 
whatnot.
    So we are looking for behaviors and the intention of the 
action, which is oftentimes to artificially amplify information 
and manipulate others.
    Mr. Green. OK.
    Thank you, Mr. Chairman. I know I am out of my time, and 
thank you for being here today.
    Mr. Walden. Mr. Green.
    Mr. Dorsey. Thank you so much.
    Mr. Walden. The chair will now recognize the gentleman from 
Texas, the chairman of our Health Subcommittee, Dr. Burgess, 
for 4 minutes for questions.
    Mr. Burgess. Thank you, Mr. Chairman.
    Thank you, Mr. Dorsey, for being here. I will just say that 
Twitter is--in addition to everything else, it's a news source.
    It's how I learned of the death of Osama bin Laden many, 
many years ago when Seal Team 6 provided that information and 
it happened in real time, late, a Sunday night. The news shows 
were all over, and Twitter provided the information.
    This morning, sitting in conference, not able to get to a 
television, one of my local television stations was attacked 
and Twitter provided the real-time information and updates. So 
it's extremely useful and for that as a tool I thank you.
    Sometimes, though--well, Meghan McCain's husband complained 
a lot on Twitter over the weekend because of a doctored image 
of Meghan McCain that was put up on Twitter and then it seemed 
like it took forever for that to come down.
    Is there not some way that people can--I understand there 
are algorithms. I understand that you have to have checks and 
balances. But, really, it shouldn't take hours for something 
that's that egregious to be addressed.
    Mr. Dorsey. Absolutely, and that was unacceptable and we 
don't want to use our scale as an excuse here. We need to do 
two things.
    Number one, we can't place the burden on the victims and 
that means we need to build technology so that we are not 
waiting for reports--that we are actively looking for 
instances.
    While we do have reports and while we are making those 
changes and building that technology, we need to do a better 
job at prioritizing, especially any sort of violent or 
threatening information.
    In this particular case, this was an image and we just 
didn't apply the image filter to recognize what was going on in 
real time. So we did take way too many hours to act and we are 
using that as a lesson in order to help improve our systems.
    Mr. Burgess. And I am sure you have. But just for the 
record, have you apologized to the McCain family?
    Mr. Dorsey. I haven't personally but I will.
    Mr. Burgess. I think you just did.
    But along the same lines, but maybe a little bit 
different--the chairman referenced several members of Congress 
who had been affected by what was described as shadow banning.
    So does someone have to report? Is it only fixed if someone 
complains about it? And if no one complained, would it have 
been fixed? So with Mr. Jordan, Mr. Meadows, Mr. Gaetz, and 
their accounts being diminished, is it only because they 
complained that that got fixed?
    Mr. Dorsey. It's a completely fair point and we are 
regularly looking at the outcomes of our algorithms. It wasn't 
just the voices of members of Congress.
    We saw, as we rolled this system out, a general 
conversation about it and sometimes we need to roll these out 
and see what happens because we are not going to be able to 
test every single outcome in the right way.
    So we did get a lot of feedback and a lot of conversations 
about it and that is what prompted more digging and an 
understanding of what we were actually doing and whether it was 
the right approach.
    Mr. Burgess. And as a committee, can we expect any sort of 
follow-up as to your own investigation and digging that you
described? Is that something that you can share with us as you 
get more information?
    Mr. Dorsey. We would love to. We want to put a premium on 
transparency and also how we can give you information that is, 
clearly, accountable to changes.
    That is why we are putting the majority of our focus on 
this particular topic into our transparency report that we 
would love to release. It's going to require a bunch of 
work----
    Mr. Burgess. Sure.
    Mr. Dorsey [continuing]. And some time to do that. But we 
would love to share it.
    Mr. Burgess. And we appreciate your attention to that.
    Mr. Chairman, I will yield back. Thank you.
    Mr. Walden. The gentleman yields back.
    The chair recognizes the gentleman from Pennsylvania, Mr. 
Doyle, for 4 minutes.
    Mr. Doyle. Thank you, Mr. Chairman.
    Mr. Dorsey, welcome. Thanks for being here. I want to read 
a few quotes about Twitter's practices and I just want you to 
tell me if they're true or not.
    ``Social media is being rigged to censor conservatives.'' 
Is that true of Twitter?
    Mr. Dorsey. No.
    Mr. Doyle. ``I don't know what Twitter is up to. It sure 
looks like to me that they're censoring people and they ought 
to stop it.'' Are you censoring people?
    Mr. Dorsey. No.
    Mr. Doyle. ``Twitter is shadow banning prominent 
Republicans. That's bad.'' Is that true?
    Mr. Dorsey. No.
    Mr. Doyle. So these were statements made by Kevin McCarthy, 
the House majority leader, on Twitter, Devin Nunes on Fox News, 
and President Trump on Twitter, and I want to place those 
statements into the record, Mr. Chairman.
    Mr. Walden. Without objection.
    [The information appears at the conclusion of the hearing.]
    Mr. Doyle. I think it's important for people to understand 
the premise of this whole hearing and the reason that Twitter 
somehow, with all the other social media platforms out there, 
got the singular honor to sit in front of this committee is 
because there is some implication that your site is trying to 
censor conservative voices on your platform.
    Now, when you tried to explain the shadow banning, as I 
understand it you had a system where, if the people who were 
following an account had certain behaviors, that was the 
trigger that caused you to do the shadow banning.
    So you were really like an equal opportunity shadow banner, 
right? You didn't just shadow ban four conservative 
Republicans.
    You shadow banned 600,000 people across your entire 
platform across the globe who had people following them that 
had certain behaviors that caused you to downgrade them coming 
up. Is that correct?
    Mr. Dorsey. Correct.
    Mr. Doyle. So this was never targeted at conservative 
Republicans. This was targeted to a group of 600,000 people 
because of the people who followed them, and then you 
determined that wasn't fair and you corrected that practice. Is 
that correct?
    Mr. Dorsey. Correct.
    Mr. Doyle. So just for the record, since you have been 
singled out as a social media platform before this committee, 
Twitter undertook no behavior to selectively censor 
conservative Republicans or conservative voices on your 
platform. Is that correct?
    Mr. Dorsey. Correct.
    Mr. Doyle. Good. So let the record reflect that because 
that's the whole reason supposedly we are here, because House 
Leader Kevin McCarthy wrote our chairman a letter and said, 
hey, this is going on and we think your committee should 
investigate it, and it's a load of crap.
    Now, let me ask you a couple other things while I still 
have some time. What are you doing to address the real concerns 
many of us have about people that use Twitter to bully, troll, 
or threaten other people?
    We know that this has led many prominent users, 
particularly women who have been targeted with sexual threats, 
to leave Twitter because of this toxic environment.
    Now, I understand that you're working to address these 
issues and that you want to use machine learning and AI. But
I am concerned that these solutions will take too long to 
deploy and that they can't cure the ills that Twitter is 
currently suffering from.
    So my question is how can we be assured that you and your 
company have the proper incentives to address the toxicity and 
abusive behavior on your platform, given Twitter's current 
state?
    Mr. Dorsey. First and foremost, we--our singular objective 
as a company right now is to increase the health of public 
conversation and we realize that that will come at short-term 
cost.
    We realize that we will be removing accounts. We realize 
that it doesn't necessarily go into a formula where--I think 
there's a perception that we are not going to act because we 
want as much activity as possible. That is----
    Mr. Doyle. Right. There's like an economic disincentive to 
act because it takes people from your platform.
    Mr. Dorsey. That is not true. So we see increasing health 
of public conversation as a growth vector for us.
    Mr. Doyle. Good.
    Mr. Dorsey. It's not a short-term growth vector. It is a 
long-term growth vector and we are willing to take the hard 
path and the decisions in order to do so and we communicated a 
lot of these during our last earnings call and the reaction by 
Wall Street was not as positive.
    But we believe it was important for us to continue to 
increase the health of this public square. Otherwise, no one's 
going to use it in the first place.
    Mr. Doyle. Thank you for being here today.
    I yield back.
    Mr. Walden. The gentleman yields back.
    The chair recognizes the gentleman from Texas, former 
chairman of the committee, Mr. Barton, for 4 minutes.
    Mr. Barton. Thank you, Mr. Chairman, and I want to thank 
you, sir, for appearing voluntarily without subpoena and 
standing or sitting there all by yourself. That's refreshing.
    I don't know what a Twitter CEO should look like but you 
don't look like a CEO of Twitter should look like with that 
beard.
    Mr. Dorsey. My mom would agree with you.
    [Laughter.]
    Mr. Barton. I am going to reverse the questions that my 
good friend, Mr. Doyle, just asked so that we get both sides of 
the question.
    In a July blog post, your company, Twitter, indicated some 
Democrat politicians were not properly showing up within search 
auto suggestions. In other words, your company said that your 
algorithms were somewhat discriminatory against Democrats.
    Can you identify which Democrat representatives and 
accounts weren't properly showing up?
    Mr. Dorsey. We typically don't identify those as a matter 
of protecting their privacy and they haven't communicated that. 
But we can certainly follow up with your staff.
    Mr. Barton. All right. Can you identify how many without 
naming names?
    Mr. Dorsey. We'll follow up with your staff on that.
    Mr. Barton. Can you personally vouch that that statement is 
a true statement----
    Mr. Dorsey. Yes.
    Mr. Barton [continuing]. That there are Democrat 
politicians who, when you did the auto search, they didn't show 
up?
    Mr. Dorsey. Yes. It was over 600,000 accounts.
    Mr. Barton. No. No. There were 600,000 accounts affected 
but how many Democrat versus Republican accounts?
    Mr. Dorsey. Yes, I----
    Mr. Barton. The allegation that we made, the Republicans, 
is that you're discriminatory against us--against the 
Republicans. Your post says, well, there were some Democrat 
politicians, too.
    So out of 600,000 if there were a thousand Republicans and 
10 Democrats, it still seems somewhat biased. If it's 50/50, 
then that's a whole different ball game.
    Mr. Dorsey. Well, we agree that the result was not 
impartial and that is why we corrected it and we fixed it.
    Mr. Barton. So you do agree that there were more 
Republicans than Democrats?
    Mr. Dorsey. I didn't say that. But I do----
    Mr. Barton. Well, you can't have it both ways, sir.
    [Laughter.]
    It's either 50/50 or one side is disproportionately 
affected and the allegation is that more Republicans were 
affected.
    Mr. Dorsey. Well, we don't always have the best methods to 
determine who is a Republican and who is a Democrat. We have to 
refer----
    Mr. Barton. Well, usually it's known because we run as 
Republicans or Democrats. That's not hard to identify.
    Mr. Dorsey. Yes. When it is self-identified it's easier. 
But we are happy to follow up with you.
    Mr. Barton. Well, my chairman keeps whispering in my ear. I 
am glad to have a staffer who's the chairman of the committee.
    Do you discriminate more on philosophy like anti-
conservative versus pro-liberal?
    Mr. Dorsey. No. Our policies and our algorithms don't take 
into consideration any affiliation, philosophy, or viewpoint.
    Mr. Barton. That's hard to stomach. We wouldn't be having 
this discussion if there wasn't a general agreement that your 
company has discriminated against conservatives, most of whom 
happen to be Republican.
    Mr. Dorsey. I believe that we have found outcomes that were 
not impartial, and those are what we intend to fix and continue 
to measure.
    Mr. Barton. All right. Well, my time is about to expire. 
You said you would provide my staff those answers with some 
more specificity and I hope you mean that.
    But, again, thank you for voluntarily appearing. I yield 
back.
    Mr. Dorsey. Thank you. We'll follow up with you.
    Mr. Walden. The gentleman yields back.
    The chair recognizes the gentlelady from California, Ms. 
Matsui, for 4 minutes for questions.
    Ms. Matsui. Thank you very much, Mr. Chairman.
    Mr. Dorsey, thank you for being here. I know it's becoming 
a long day for you.
    I want to talk to you about anonymization. It's been noted 
that advertising is less concerned with identifying the 
individual per se than with the activity of users to predict 
and infer consumer behavior.
    But I wonder if that is quickly becoming a distinction 
without a difference. Even when user content isn't associated 
with that user's name, precise information can be and is gathered
through metadata associated with messages or tweets.
    For instance, Twitter offers geospatial metadata that 
requires parsing the tweet for location and names of interest 
including nicknames. The metadata could then be associated with 
other publicly available social media data to re-identify 
individuals, and researchers have demonstrated this ability.
    So even though advertising itself may not be concerned with 
identifying the individual, how is Twitter working to ensure 
its data is not being used by others to do so?
    Mr. Dorsey. Well, first and foremost, the data on Twitter 
is very different than our peer companies, given that the 
majority of our data is public by default, and where we do 
infer information around people's interests or their behaviors 
on the network we enable them, first and foremost, to see what 
we've collected and, second, turn it off.
    And in terms of our data business, our data business is 
actually focused on packaging up and making real time the 
public data, and we send everyone who wants to consume that 
real-time stream of the public data through a know-your-
customer process, which we audit every year as well to make 
sure that the intent is still good and proper and also 
consistent with how they signed up.
    Ms. Matsui. OK. As I previously announced in this 
committee, I am soon introducing legislation to direct the 
Department of Commerce to convene a working group of 
stakeholders to develop a consensus-based definition of block 
chain.
    Distributed ledger technologies such as block chain have 
particularly interesting potential applications in the 
communications space ranging from identity verification to IOT 
deployments and spectrum sharing.
    But there currently is no common definition of block chain, 
which could hinder its deployment. You had previously expressed 
interest in the broad applications of block chain technology, 
including potentially in efforts to verify identity to fight 
misinformation and scams.
    What potential applications do you see for block chain?
    Mr. Dorsey. First and foremost, we need to start with 
problems that we are trying to solve and the problems we are 
solving for our customers and then look at all available 
technology in order to understand if it can help us or 
accelerate or make those outcomes much better.
    So block chain is one that I think has a lot of untapped 
potential, specifically around distributed trust and 
distributed enforcement, potentially.
    We haven't gone as deep as we'd like just yet in 
understanding how we might apply this technology to the 
problems we are facing at Twitter but we do have people within 
the company thinking about it today.
    Ms. Matsui. OK. Advertising-supported models like Twitter 
generate revenue through user-provided data. In your terms of 
service, you maintain that what's yours is yours--you own your 
content.
    I appreciate that, but I want to understand more about 
that. To me, it means users ought to have some say about if, 
how, and when it's used.
    But you say that Twitter has an evolving set of rules for 
how partners can interact with user content and that Twitter 
may modify or adapt this content as it's distributed.
    The hearings this committee has held demonstrated that the 
real crux of the issue is how content is used and modified to 
develop assumptions and inferences about users to better target 
ads to the individual.
    Do you believe that consumers own their data, even when 
that data has been modified, used to develop inferences, 
supplemented by additional data, or otherwise?
    Mr. Dorsey. Sorry. What was the question? Do I----
    Ms. Matsui. Do you believe that consumers own their data?
    Mr. Dorsey. Yes.
    Ms. Matsui. Even when that data has been modified, used to 
develop inferences, supplemented by additional data, or 
otherwise?
    Mr. Dorsey. Yes. Generally, we would want to understand all 
the ramifications of that. But yes, we believe that people own 
their data and should have ultimate control over it.
    Ms. Matsui. OK. Thank you.
    I yield back.
    Mr. Walden. The gentlelady yields back.
    The chair now recognizes the whip of the House, Mr. 
Scalise, for 4 minutes.
    Mr. Scalise. Thank you, Mr. Chairman.
    And Mr. Dorsey, appreciate you coming, and as others have 
said, we are welcoming your testimony and your willingness to 
answer some of these questions, and I think there are serious 
concerns more than anything about how Twitter has been used and 
will continue to be used and, clearly, there are many examples
of things that Twitter has done and you can just look at the 
Arab Spring.
    Many people would suggest that a lot of the real ability 
for the Arab Spring to take off started with platforms like 
Twitter, and in 2009 you were banned in Iran and we've seen 
other countries--China and North Korea have banned Twitter.
    And I would imagine when Twitter was banned, it wasn't a 
good feeling. But what we are concerned about is how Twitter 
has, in some ways, it looks like selectively adversely affected 
conservatives.
    I want to go through a couple of examples, and I would 
imagine you're familiar with these but our colleague, Marsha 
Blackburn, when she announced her campaign for the Senate, 
Twitter quickly banned her announcement advertisement because 
it had a pro-life message.
    She, at the time, was the chair of the Special Select 
Committee--which a number of my colleagues here, both 
Republican and Democrat, were on--that was looking into the 
sale of body parts, and Twitter banned her because they said 
this statement was deemed an inflammatory statement that is 
likely to evoke a strong negative reaction.
    Are you familiar with this?
    Mr. Dorsey. Yes.
    Mr. Scalise. Why was she banned for just stating a fact 
that Congress was actually investigating because of the deep 
concern nationally when this scandal took place?
    Mr. Dorsey. Well, first, we--this was a mistake and we do 
apologize----
    Mr. Scalise. This was a mistake by Twitter?
    Mr. Dorsey. It was a mistake by Twitter. It was a mistake 
by us, which we corrected.
    Mr. Scalise. So was anybody held accountable for that 
mistake?
    Mr. Dorsey. What do you mean by that?
    Mr. Scalise. Well, somebody--I mean, there was a 
spokesperson that said we deem it inflammatory--Twitter deems 
it inflammatory and at the same time the organization that was 
selling the body parts was not banned by Twitter but our 
colleague, who just exposed the fact that the sale of body 
parts was going on, was banned by Twitter, and one of your own 
spokespersons said that it was inflammatory.
    Was that person held accountable for making those kind of 
statements?
    Mr. Dorsey. We use these events and these opportunities to 
improve our process and look for ways----
    Mr. Scalise. And we've talked about that and, obviously, I 
appreciate the fact that you have acknowledged that there have 
been some mistakes made in algorithms and we've talked about 
this with other companies.
    Facebook was in here talking about similar concerns that we 
had with their algorithm and how we felt that might have been 
biased against conservatives.
    A liberal website, Vice, did a study of all members of 
Congress--all 535--and they identified only three that they 
felt were targeted in the shadow banning and that was Reps. 
Meadows, Jordan, and Gaetz.
    And I know while, I think, Mr. Barton was trying to get 
into this in more detail, if there were 600,000, ultimately 
they did a study and found only three members of Congress were 
biased against and all three happened to be conservatives.
    And so can you at least see that that is a concern that a 
lot of us have if there is a real bias in the algorithm as it 
was developed.
    And look, I've written algorithms before. So if somebody 
wrote an algorithm with a bias against conservatives, I would 
hope you are trying to find out who those people are and if 
they're using their own personal viewpoints to discriminate 
against certain people.
    Because if it's your stated intention that you don't want 
that discrimination to take place, I would hope that you would 
want to know if there are people working for Twitter that did 
have that kind of discriminatory viewpoint against 
conservatives that you would at least hold them accountable so 
that it doesn't happen again.
    Mr. Dorsey. I would want to know that, and I assure you 
that the algorithm was not written with that intention. The 
signal that we were using caught people up in it and it was a 
signal that we determined was not relevant and also not fair in 
this particular case.
    And there will be times--and this is where we need to 
experiment, as you know, in writing algorithms in the past--
that you need to test things and see if they work at scale and 
pull them back correctly if they don't and that is--that is our 
intention.
    Mr. Scalise. But also you shouldn't inject your own 
personal viewpoint into that unless that's the intention of the 
company. But you're saying it's not the intention of the 
company.
    Mr. Dorsey. That is not the intention, and they should never 
be injecting their own viewpoints.
    Mr. Scalise. And I know I am out of time. But I appreciate 
at least your answering these questions. Hopefully, we can get 
some more answers to these examples and there are others like 
this that we'd surely like to have addressed.
    Thank you. Yield back.
    Mr. Walden. The chair now recognizes the----
    [Disturbance in hearing room.]
    Mr. Walden. Order. We'll have order in the hearing room or 
you will be asked to leave. Ma'am, if you will please take a 
seat or we'll have to have you--then you will need to leave----
    [Disturbance in hearing room.]
    Mr. Long. Huh? What's she saying? I can't understand her. 
What? What's she----
    Mr. Walden. Officer, will you escort this young lady out, 
please?
    Somehow I think our auctioneer in residence is going to get 
tweeted about today. Yes.
    I would remind members of the audience you're here to 
observe, not participate, and I appreciate that.
    We'll now turn to the gentleman from New York, Mr. Engel, 
for 4 minutes.
    Mr. Engel. That's a hard act to follow, Mr. Chairman. 
That's a hard act to follow. Maybe I will get Mr. Long to help 
me along a little bit as well.
    Thank you, Mr. Chairman and Mr. Pallone.
    Mr. Dorsey, welcome. Our country is facing a direct threat 
to our democratic institutions. We need to find ways to stop 
foreign adversaries like Russia and Iran from using American 
technology against us.
    Earlier this year, Special Counsel Robert Mueller filed an 
indictment against the Russian Internet Research Agency, charging 
that they created fake social media accounts, sometimes using 
stolen American identities, to sow discord and interfere with
our 2016 elections. I have a copy of that indictment here, and 
Mr. Chairman, I would like to introduce it for the record.
    Mr. Walden. Without objection.
    Mr. Engel. Mr. Dorsey, Twitter recently took down a number 
of Russian- and Iranian-linked accounts after it was tipped off 
by a cybersecurity firm.
    I am glad to see that Twitter is taking action to protect 
us. But do you think we should be concerned that an outside 
cybersecurity firm detected fraudulent activity before you did?
    Mr. Dorsey. Well, I think it's really important that we 
have outsiders and we have an open channel to them because 
they're always going to approach the data and the work in a way 
that we may not see, and we are going to do our best to capture 
everything that we can and to be as proactive as we can.
    But we want to leave room for others to bring a different 
perspective that might look at what's happening on the platform 
in a different way than we do.
    Mr. Engel. So how confident are you that Twitter can 
identify and remove all of the fake and automated accounts 
linked to a foreign adversary on your platform?
    Mr. Dorsey. We are getting more and more confident. But I 
do want to state that this is not something that has an end 
point that reaches perfection.
    We are always going to have to stay 10 steps ahead of the 
newest ways of attacking and newer vectors and we are getting 
more agile and better at identifying those and that's showing 
in some of our results, which I talked about earlier in terms 
of being able to identify 8 to 10 million suspicious
accounts every single week and then also challenging them to 
see if they're humans or bots or some sort of malicious 
automation.
    Mr. Engel. I understand that Twitter is now requiring some 
suspicious accounts to respond to reCAPTCHA to prove that 
they're human accounts and not bots.
    I was surprised to learn that you're not requiring users to 
do the same thing when they first sign up to Twitter. New 
accounts are authenticated using only an email address. Could 
you tell me why that is?
    Mr. Dorsey. We actually do send accounts through a variety 
of authentication including sometimes reCAPTCHA. It really 
depends on the context and the information that we have. We 
have thwarted over a half a million accounts from even logging 
in in the first place because of that.
    Mr. Engel. I understand that dealing with foreign 
adversaries can be difficult. Twitter may respond to one 
practice only to find new tactics being used to sow discord. 
Can you commit to us with any level of certainty that the 2018 
mid-term elections in the United States will not be subject to 
interference by foreign adversaries using bots or other fake 
accounts on your platform?
    Mr. Dorsey. We are committing to making it our number-one 
priority to help protect the integrity of the 2018 mid-terms 
and especially the conversation around it.
    Mr. Engel. Let me ask you this, finally. Are you aware of 
foreign adversaries using any different tactics on your 
platform to interfere in our 2018 mid-term elections?
    Mr. Dorsey. None that we haven't communicated to the Senate 
Intelligence Committee and any that we do find we will be 
communicating and sharing with them.
    Mr. Engel. OK. Thank you very much. Thank you, Mr. 
Chairman.
    Mr. Dorsey. Thank you.
    Mr. Walden. I thank the gentleman.
    We now go to the gentleman from Ohio, Mr. Latta, for 4 
minutes.
    Mr. Latta. Thank you, Mr. Chairman.
    And Mr. Dorsey, thanks very much for being here with us 
today. I would like to ask my first question on how you're 
protecting that--users' data. Do you collect any data from 
other third parties about Twitter users?
    Mr. Dorsey. We don't collect data from third parties about 
Twitter folks. We do have embeds of tweets around the web and 
when people do go visit those sites we note that and we can 
integrate it when they do log in to Twitter. But people can turn
that off as well.
    Mr. Latta. How does Twitter use that data?
    Mr. Dorsey. We use the data to personalize the experience 
specifically around--it might infer a particular interest so 
that we can show them specific topics or make our advertising 
targeting better.
    Mr. Latta. Is that sold or offered in some other form, then, 
for the advertisers?
    Mr. Dorsey. I am sorry?
    Mr. Latta. Is it sold to the advertisers?
    Mr. Dorsey. Is it sold to the advertisers? No.
    Mr. Latta. OK.
    Let me back up to where Mr. Shimkus was when we were 
talking about the verification of the blue checkmark. How easy 
is it for someone to obtain a verified Twitter handle, and what 
steps does Twitter take to ensure it is not highlighting one
political viewpoint over another through the use of that 
verification on the platform?
    Mr. Dorsey. Well, right now it's extremely challenging 
because we've paused the verification program because we've 
found so many faults in it that we knew we needed a restart.
    We do make exceptions for any representatives of 
government, particular brands, or public figures of interest. 
But we generally have paused that work.
    Before that pause, we did allow anyone to submit an 
application to be verified and it used various criteria in 
order to determine if the verification was necessary.
    Mr. Latta. With that verification--you all have said that it 
can be removed for activity on or off the platform. What off-
platform activity is the basis for someone losing that blue 
verified checkmark?
    Mr. Dorsey. We look at specifically any violent extremist 
groups and off platform behavior for violent extremist groups, 
when we consider not just verification but also holding an 
account in the first place.
    Mr. Latta. OK. Your statement said that in the last year 
Twitter developed and launched more than 30 policy and product 
changes designed to ``foster information integrity and protect 
the people who use our service from abuse and malicious 
automation.''
    Can you share with the committee what those 30-plus policy 
and product changes are, or highlight some and then give us the 
others in writing?
    Mr. Dorsey. Yes, and we can certainly follow up with all of 
you on exactly the details. But we established new models, for 
instance, to detect where people are gaming our systems--these 
are behaviors with an intent to artificially amplify.
    We have new reporting flows that enable people to report 
tweets or accounts. We have changed policies reflective of 
current circumstances and what we are seeing and we have 
certainly done a bunch of work around GDPR, which has affected 
our work in general. But we will follow up with you with 
enumeration.
    Mr. Latta. If we could get those 30 points submitted to the 
committee, that would be great.
    You also indicated in your written statement that the 
company conducted an internal analysis of members of Congress 
affected by the auto suggest search issue and that you'd make 
that information available to the committee if requested.
    Will you commit to us on the committee that you will 
present all of Twitter's analysis as soon as that is possible 
after this hearing?
    Mr. Dorsey. Yes, and we also hope to include this in our 
long-standing initiative of a transparency report around our 
actions.
    Mr. Latta. Thank you.
    Mr. Chair, my time has expired.
    Mr. Walden. I thank the gentleman from Ohio.
    The chair recognizes the gentlelady from Florida, Ms. 
Castor, for 4 minutes.
    Ms. Castor. Thank you, Mr. Chairman.
    Good afternoon. Mr. Dorsey, do you feel like you're being 
manipulated yourself--you're part of a manipulation campaign 
because, when you see the majority leader of the Congress is 
running ads on Facebook to fundraise around allegations of 
anti-conservative bias on social media platforms and then you 
see the Trump campaign use President Trump's tweets where he 
claims anti-conservative bias at Google, Facebook, and Twitter, 
and then we saw this outburst today.
    The woman jumped up, of course, with her phone so that she 
can get that and that's probably trying to spread on the web. 
And now, the Justice Department even says boy, this is so 
serious we have to investigate.
    Does this feel like a manipulation campaign itself to you?
    Mr. Dorsey. Look, as I noted in my opening, I do believe 
that there's growing concern around the power that companies 
like ours hold and the reason why is people do see us as a 
digital public square and that comes with certain expectations 
and we----
    Ms. Castor. That's a very diplomatic answer, I have to say, 
because there are very serious questions. The Russian trolls 
created thousands of bots to influence our democracy--our 
elections. They're doing it in other countries across the 
world.
    Do you feel like you have a handle on these bots? You said 
earlier in your testimony you ID 8 to 10 million accounts per 
month. Is that right?
    Mr. Dorsey. Per week.
    Ms. Castor. Per week?
    Mr. Dorsey. And we thwart over half a million accounts from 
logging in every single day.
    Ms. Castor. Can Twitter keep up?
    Mr. Dorsey. We intend to keep up. So----
    Ms. Castor. If they are using automated accounts, don't we 
reach a point where they have the ability to overwhelm content 
on Twitter and affect your algorithms?
    Mr. Dorsey. Maybe. Others have described this as an arms 
race. But I believe it's very much like security. There's no 
perfect end point.
    When you build a lock, someone else will figure out how to 
break it, and therefore, you can't try to design and optimize 
for the perfect lock. You always have to build those into the 
system.
    Ms. Castor. Can't you identify the bots at least as they 
sign up in some way so that folks understand OK, that's a fake 
automated account?
    Mr. Dorsey. In certain cases, we can--and it's a great 
point--especially through our API. There are more sophisticated 
forms of automation that actually script our site and our app 
that are much harder to detect, because they're intending to 
look like human behavior, with the slowness of human behavior 
rather than the speed of coming through an API.
    So it's a little bit more complicated. But it's not a 
challenge we are unwilling to face. We are taking it head on.
    Ms. Castor. You have some creative minds. I would think you 
can put all of those creative minds, all of your expertise, to 
work to do that.
    I want to ask you a little bit about privacy. Twitter and 
other companies collect information on users and nonusers 
oftentimes without their knowledge.
    Twitter's business model is based on advertising and you 
serve targeted advertising to users based on vast amounts of 
data that you collect, which raises consumer privacy concerns.
    Up until last year, the privacy policy included a promise 
to support do not track. But then you changed your mind.
    Why? Why shouldn't it be up to consumers? Why shouldn't it 
be the consumer's choice on tracking?
    Mr. Dorsey. Well, we do allow consumers within the app to 
turn off tracking across the web.
    Ms. Castor. But you're still able to build a profile on 
each and every user. Isn't that correct?
    Mr. Dorsey. If they log into the account then yes, and we 
allow them to turn that off.
    Ms. Castor. But I understand that even when they go and 
they opt out that you're still collecting data on them. You're 
still tracking them.
    Mr. Dorsey. I don't believe that's the case. But happy to 
follow up with you with our team.
    Ms. Castor. OK, and let's do that because I am out of time. 
Thank you.
    Mr. Walden. The chair now recognizes the chairman of the 
Republican Conference, the gentlelady from Washington State, 
Cathy McMorris Rodgers, for 4 minutes.
    Mrs. McMorris Rodgers. Thank you, Mr. Chairman, and thank 
you, Mr. Dorsey, for joining us today. I want to start off by 
saying that I think Twitter is a valuable tool in modern 
communication and it's why, back in 2011, I was spearheading an 
effort to get our members signed up and using this tool.
    I think it's a great way to interact with the people that 
we represent and since then it's been amazing to see the growth 
of Twitter and the Twitter users all across America and the 
world.
    It's why I think this hearing is so timely. There are a lot
of serious questions that Americans have regarding tech 
platforms and the ones that they're using every day and the 
issues like data privacy, community standards, and censorship.
    Today, I want to focus on Twitter's procedures for taking 
down offensive and inappropriate content. And as you know, 
there have been examples that were already shared today.
    I was going to highlight the one with Meghan McCain with 
the altered image of a gun pointed at her when she was mourning 
her father's loss, and the tweet image said, ``America, this 
one's for you.''
    Obviously, this offensive tweet was reported by other 
users, even to you, I understood. Yet, it took nearly 16 hours 
for there to be action to take it down.
    So I just wanted to ask, first, do you think that this is a 
violation of Twitter's content policies and rules against 
violence and physical harm? And I would also like to understand 
how much of this is driven by the algorithm versus human 
content managers.
    Mr. Dorsey. So it definitely is a violation and we were 
slow to act. The tweet was actually up for 5 hours, but 5 hours 
is way too long, and our current model works in terms of 
removing content based on reports that we receive and we don't 
believe that that is fair, ultimately. We don't believe that we 
should put the burden of reporting abuse or harassment on the 
victim of it.
    We need to build algorithms to proactively look for when 
these things are occurring and take action. So the number of 
abuse reports that we get is a number that we would like to see 
go down not only because there's less abuse on the platform but 
because our algorithms are recognizing these things before 
someone has to report them and that is our goal, and it will 
take some time. And meanwhile, while we----
    Mrs. McMorris Rodgers. Can you talk to me then just about 
what are your current policies? What are the current policies 
for prioritizing timely take downs and enforcement?
    Mr. Dorsey. Yes. So any sort of violent threat or image is 
at the top of our priority list in order to review and enforce, 
and we do have a prioritization mechanism for tweets as we get 
the reports.
    But, obviously, this one was too slow and is not as precise 
as it needs to be. In this particular case, the reason why was 
because it was captured within an image rather than the tweet 
text itself.
    Mrs. McMorris Rodgers. So I think much of the concern 
surrounding this incident and some others has been how long it 
takes to remove the content when there's a clear violation, and 
the issue only seemed to be resolved after people publicly
tweeted about it, providing a larger platform for this type of 
content than it ever should have had.
    So I did want to hear what steps the company is going to be 
taking to speed up its response time in the future to ensure 
these kinds of incidents don't continue.
    Mr. Dorsey. In the short term, we need to do a better job 
at prioritizing around the reports we receive, and this is 
independent of what people see or report to us on the platform.
    And in the longer term, we need to take the burden away 
from the victim from having to report it in the first place.
    Mrs. McMorris Rodgers. OK. Well, clearly, you hold a large 
amount of power in the public discourse. Allowing speech that 
incites violence could have devastating consequences and this 
is one way where I believe it's very important that Twitter 
take action to help restore trust with the people and your 
platform.
    So, with that, I will yield back my time.
    Mr. Walden. The gentlelady yields back.
     The chair recognizes the gentleman from Maryland, Mr. 
Sarbanes, for 4 minutes.
    Mr. Sarbanes. Thank you, Mr. Chairman.
    Mr. Dorsey, thank you for coming. There are a number of 
important topics that we could be discussing with you today 
but, unfortunately, the Republican majority has decided to 
pursue the trumped-up notion that there is a special 
conservative bias at work in the way Twitter operates, and 
that's a shame.
    What worries me is this is all part of a campaign by the 
GOP and the right wing to work the refs--complaining of non-
existent bias to force an overcorrection, which then can 
result in some actual bias going in the other direction, and we 
saw this actually with Facebook.
    Conservatives cried bias because Facebook was seeking to 
make information available using reputable news sources instead 
of far right-wing outlets or conspiracy platforms. So Facebook 
got pushed into this correction and it got rid of its human 
editors and the result was immediately it was overrun with 
hoaxes that were posing as news.
    I actually have questions about the subject of the hearing 
but I am going to submit those for the record and ask for 
written responses because I don't really have confidence that 
this hearing was convened for a serious purpose, to be candid.
    Like I said, I think it's just a chance to work the ref to 
push platforms like yours away from the serious task of 
empowering people with good and reliable information.
    But what is really frustrating to me about today's inquiry 
is that my Republican colleagues know there are plenty of other 
kinds of investigations that we should be undertaking in this 
Congress but they don't have any interest in pursuing them.
    And that's not just conjecture. There's actually a list 
that's been circulating that Republicans put together of all 
the investigations that they've been blocking, sweeping under 
the rug because they want to hide the truth from the American 
people.
    And this spreadsheet which is going around is pretty 
telling. It's circulating in Republican circles. So what are 
these things that they know could and should be investigated 
but they are determined to dismiss or bury or ignore 
altogether?
    According to their own secret cover-up list, Republicans 
don't want the public to see President Trump's tax returns. 
They don't want the public to know about Trump's business 
dealings with Russia.
    They're determined not to investigate Secretary of Treasury 
Steven Mnuchin's business dealings. They're blocking public 
inquiry into the personal email use of White House staff.
    They're willfully ignoring how taxpayer money has been
wasted by corrupt cabinet secretaries for first class travel, 
private jets, large security details, office expenses, and 
other misused perks.
    They're giving the President a pass on investigation into 
the motives behind his travel ban and his family separation 
policy.
    They definitely don't want the public to see how poorly the 
Trump White House responded to Hurricane Maria in Puerto Rico 
and, finally, they don't want the public to see how the 
administration is failing to protect our elections and guard 
against hacking attempts.
    These are all things that deserve attention and inquiry of 
this Congress. But the Republicans are not going to let it 
happen.
    Let me just go back in the last 40 seconds and talk about 
election security because we are 60 days away from the mid-term 
election. We know there are ongoing efforts to disrupt our 
democracy. We know these same actors, these foreign and hostile 
actors, are using this very platform--Twitter and others--to 
sow discord.
    We know the public is desperate that their 
representatives--that's us--will act to protect their democracy 
and we know, thanks to this list, that the Republicans know 
they should be investigating our nation's election security and 
hacking attempts by hostile actors.
    Instead, here we are, using our precious resources to feed 
Deep State conspiracy theories preferred by the President and 
his allies in Congress. It's a shame that this committee, 
frankly, has been drawn into such a charade.
    I yield back my time.
    Mr. Walden. The gentleman's time has expired.
    The chair now recognizes the gentleman from Mississippi, 
chair of the Oversight Subcommittee, Mr. Harper, for 4 minutes.
    Mr. Harper. Thank you, Mr. Chairman, and thank you, Mr. 
Dorsey, for taking this time to be here. It's a very important 
topic.
    We all utilize Twitter. You have a very daunting task to 
try to work through this. It's a lot, and we've talked a lot 
today about algorithms and, of course, those are really only as 
good as the people who create them, edit them, and guide them, 
and algorithms have to be trained, which means, as you know--
feeding them a lot of data.
    My understanding is that oversight of machine learning 
algorithms involves examining the data sets or the search 
results to look for that bias. If bias is spotted, then the 
algorithm can be adjusted and retrained.
    So I want to understand the oversight that Twitter does of 
its own algorithms. The algorithms that support Twitter's 
algorithmic time line are adjusted, if not daily, almost daily.
    Why is that and what are some reasons why the algorithms 
would need to be adjusted daily?
    Mr. Dorsey. So bias in algorithms is a rather new field of 
research within broader artificial intelligence and it's 
something that is certainly new to us as a company as well.
    We do have teams who are focused on creating a roadmap so
that we can fully understand best practices for training, data 
sets, and also measuring impartiality of outcomes.
    But I will say that we are pretty early in that work. We 
intend to get better much faster but we are very, very early. 
We are learning as quickly as possible, as is the industry, on 
how best to do this work and also how best to measure whether 
we are doing the right thing or not.
    In terms of why we need to change the signals all the time, 
it's because when we release some of these models, we release 
them in smaller tests, and then as they go out to the broader 
Twitter at scale, we discover some unexpected things. Those 
unexpected things will lead to questions, which then cause us 
to look deeper at the particular signals that we are using, and 
as we recognize any sort of partiality within the outcome, we 
work to fix it. And it is somewhat dependent upon people giving 
us feedback.
    Mr. Harper. And those teams that you're talking about, 
those are individuals, correct?
    Mr. Dorsey. They're----
    Mr. Harper. That are employees of Twitter?
    Mr. Dorsey. Yes. Yes----
    Mr. Harper. And how do you take into account what their 
leanings are or their bias or life story? Does that have an 
input into what they determine is important or what to look 
for, or how do you factor that in?
    Mr. Dorsey. It doesn't have an input that we use. The way 
we judge ourselves ultimately is whether the algorithms are 
making objective decisions--whether our engineers are using 
engineering rigor, which is free of bias and free of any action 
that might be aligned with one particular perspective or not. 
So----
    Mr. Harper. OK. If I can ask this, because we only have a 
few moments. What are they looking for? What do they look for 
when they're deciding whether or not to make a change?
    Mr. Dorsey. They're looking for fairness. They're looking 
for impartiality. They're looking for whether----
    Mr. Harper. If I can interrupt just for a moment. Who
defines fairness? What is that fairness that's determined there 
and--because your fairness may be different than my definition 
of fairness, depending on what the issue or the interpretation 
of it is.
    Mr. Dorsey. Yes. This goes back to those health indicators 
that we are trying to search for. So are we showing, for 
instance, a variety of perspectives or are we creating more 
echo chambers and filter bubbles.
    Mr. Harper. And as you looked at the 600,000 users and then 
specifically you were asked earlier about that you--you said 
you would follow up on the number of Democrats or Republicans 
in the House----
    Mr. Dorsey. Where we can determine that.
    Mr. Harper [continuing]. So my question is, that's a pretty 
limited pool. We are talking about 435 members of the House.
    Do you have that info and just don't want to discuss it or 
do you have to find that info on how many House members there 
were that were affected?
    Mr. Dorsey. We do have the info and we will share it.
    Mr. Harper. Can you share it now?
    Mr. Dorsey. Yes, we'll share it with you.
    Mr. Harper. Can you share it now in your testimony?
    Mr. Dorsey. I don't have it in front of me.
    Mr. Harper. OK. But you will provide it?
    Mr. Walden. The gentleman's time----
    Mr. Harper. Thank you. With that, I yield back my time.
    Mr. Walden. The gentleman's time has expired.
    The chair now recognizes the gentleman from California, Mr. 
McNerney, for 4 minutes.
    Mr. McNerney. I thank the chairman, and I thank you, Mr. 
Dorsey, for the frankness you have been showing on answering 
our questions.
    But this hearing is really a desperate effort to rally the 
Republican base before the November election and to please 
President Trump.
    However, there are some real serious issues that we should 
be examining--for example, targeting. Some social media 
networks have been accused of facilitating discriminatory 
advertising such as housing and employment ads.
    So when targeting ads, are advertisers able to exclude 
certain categories of users on Twitter, which would be 
discriminatory?
    Mr. Dorsey. I am sorry. For political ads or issues ads?
    Mr. McNerney. No, for non-political ads. Are advertisers 
able to exclude groups or categories of users?
    Mr. Dorsey. Advertisers are able to build criteria that 
include and exclude folks.
    Mr. McNerney. So that could end up being discriminatory?
    Mr. Dorsey. Perhaps, yes.
    Mr. McNerney. Apart from reviewing how ads are targeted, 
does Twitter review how its ads are ultimately delivered and if 
any discriminatory effects occur as a result of its own 
optimization process?
    Mr. Dorsey. Yes, we do do regular audits of how our ads are 
targeted and how they're delivered and we work to make sure 
that we have fairness within them.
    Mr. McNerney. Sure. Could you briefly describe the process 
that Twitter uses for making changes to algorithms?
    Mr. Dorsey. In terms of making changes to ads algorithms, 
we are looking first and foremost at the data test sets.
    We run through tests to make sure that they're performing 
in the way that we expect with those outcomes and then we bring 
them out to production, which is at scale on the live system, 
and then also we are doing checks to make sure that they are 
consistent with constraints and boundaries that we expect.
    Mr. McNerney. Has Twitter ever taken down an ad because of 
potential discriminatory effects--non-political?
    Mr. Dorsey. I will have to follow up with you on that to 
get that information.
    Mr. McNerney. Well, it's difficult to know if Twitter's 
platforms are having discriminatory effects because there's no 
real way for watchdog groups to examine what's happening for 
potential biases.
    Twitter announced now that it's making political ads 
searchable. How about non-political ads? Is there a way for 
watchdog groups to examine how non-political ads are being 
targeted?
    Mr. Dorsey. Yes. Our ads transparency center is 
comprehensive of all ads.
    Mr. McNerney. Thank you. OK, moving on to privacy--
Twitter's privacy policy states that we believe you should 
always know what data we collect from you and how we use it and 
you should have meaningful control over both.
    But most Americans really don't know what's happening with 
their data. There's a saying that if you aren't paying for the 
product, you are the product. Do you agree with that?
    Mr. Dorsey. I don't necessarily agree with that. I do 
believe that we need to make more clear the exchange--what 
people are trading to get a free service.
    I don't think we've done a great job at that, certainly 
within the service, and I do believe that that is important 
work and we should clarify it more.
    Mr. McNerney. Is Twitter running educational campaigns to 
inform users about how data is being used?
    Mr. Dorsey. Not at the moment, but we should be looking at 
that and also the incentives that we are providing people on 
the platform.
    Mr. McNerney. I am going to follow up on some prior 
questions here. If users disable the tracking mechanism, then does 
Twitter still store previously collected data or does it erase 
it when they ask to be excluded when they opt out?
    Mr. Dorsey. I believe it's erased. But we'll have to follow 
up with the details.
    Mr. McNerney. OK. And so can you commit to erasing data 
when people opt out?
    Mr. Dorsey. Yes, but let me just make sure I understand and 
we understand the constraints and the ramifications of that.
    Mr. McNerney. OK. Thank you.
    Mr. Chairman, I yield back.
    Mr. Harper [presiding]. The gentleman yields back.
    We will now take a 5-minute recess and reconvene in 5 
minutes.
    [Recess.]
    Mr. Walden [presiding]. Our guests will take their seats.
    If our guests will take their seats and our members, we 
will resume the hearing now, and I recognize the gentleman from 
New Jersey, Mr. Lance, for 4 minutes for questions.
    Mr. Lance. Thank you, Mr. Chairman.
    Mr. Dorsey, I have three areas of questioning. Number one, 
in the Meghan McCain matter, in your opinion would the photo 
have been taken down if those close to the victim, including 
her husband, had not complained to Twitter?
    Mr. Dorsey. If it would have been taken down if they had 
not complained?
    Mr. Lance. Correct.
    Mr. Dorsey. We would have taken it down because we--I 
imagine we would have received other reports. Our system does 
work today based on reports for take down.
    Mr. Lance. Let me say that I think it's the unanimous view 
of this committee that 5 hours is intolerable and it was 
horribly violent and we are all opposed to this type of 
violence on Twitter, regardless of when it occurs, and 
certainly we hope that you do better in the future.
    Number two, you state in your testimony on Page 6, ``Bias 
can happen inadvertently due to many factors such as the 
quality of the data used to train our models. In addition to 
ensuring that we are not deliberately biasing the algorithms, 
it is our responsibility to understand, measure, and reduce 
these accidental biases. The machine learning teams at Twitter are 
learning about these techniques and developing a roadmap to 
ensure our present and future machine learning models uphold a 
high standard when it comes to algorithmic fairness.''
    Can you give the committee a time frame as to when we might 
expect that that would receive results that are fair to the 
American people, conservatives and perhaps liberals as well?
    Mr. Dorsey. I can't predict a very precise time frame at 
the moment. This is something that is a high priority for us in 
terms of as we roll out algorithms understanding that they are 
fair and that we are driving impartial outcomes.
    But it's hard to predict a particular time frame because 
this is not just a Twitter issue. This is the entire industry 
and a field of research within artificial intelligence.
    Mr. Lance. I was asked on air in New York over the weekend 
whether this will require regulation by the Federal 
Government. After all, we are a committee of jurisdiction in 
this regard.
    I certainly hope not, but I am sure you can understand, Mr. 
Dorsey, that we would like this to occur as quickly as possible 
because of the great concern of the American people that there 
not be bias, intentional or unintentional.
    Mr. Dorsey. I do believe you're asking the important 
questions, especially as we move more of our decisions not just 
as a company but also as individuals to artificial intelligence 
and we need to understand as we use this artificial 
intelligence for more and more of the things that we do that, 
number one, that there are unbiased outcomes and, number two, 
that they can explain why they made the decision in the first 
place.
    Mr. Lance. Thank you, Mr. Dorsey.
    And then my third area of questioning, prior to 2016 did 
Twitter have any policies in place to address the use of the 
Twitter platform by foreign governments or entities for the 
purpose of influencing an election in the United States?
    I am certainly as concerned as any member of this 
committee, regardless of political party, about what happened 
regarding Russia in 2016. And so prior to 2016, did you have 
any policies in place?
    Mr. Dorsey. We can follow up with you. I don't have that 
data right now in terms of what policies against foreign actors 
that we had before 2016. But we did learn a lot within the 2016 
elections that impacted both our technology and also the 
policies going forward.
    Mr. Lance. Let me state that I do not believe this is a 
partisan matter. This is a bipartisan matter. It is intolerable 
that there was any interference and, of course, we hope that it 
never occurs again.
    Thank you, Mr. Chairman. I yield back.
    Mr. Walden. The gentleman yields back.
    The chair recognizes the gentleman from Vermont, Mr. Welch, 
for 4 minutes.
    Mr. Welch. Thank you very much, Mr. Chairman.
    There's really two hearings going on. One is about that man 
in the White House who has been accusing, as you have been 
sitting here, the social media giants of interfering in the 
election and making this claim even as you were testifying and, 
in fact, recently said that the media giants were all in favor 
of Hillary Clinton in the election.
    I will just give you a chance to answer whether Twitter as a 
company had a policy favoring either candidate in the 
presidential election.
    Mr. Dorsey. No, we did not.
    Mr. Welch. Absolutely not, I expect, right?
    The second is a job that we are not doing. We are having 
Mr. Dorsey here and it's a good opportunity, given his 
experience in his company. But these social media platforms are 
being abused in some cases and there's efforts that are being 
made at Twitter--we had Mr. Zuckerberg here some time ago--
efforts being made at Facebook to deal with false accounts, to 
deal with hate speech, which you're trying to deal with, to 
deal with flat-out false information, which is not the kind of 
thing you want on the digital town square, right?
    But the fundamental question that this committee refuses to 
ask itself is whether there's a role for publicly-elected 
officials to make some of these decisions about how you protect 
people from hate speech, how you protect people from flat-out 
false information.
    Now, you mentioned, Mr. Dorsey, that your company is 
investigating this. You have got your team working on it, and 
that's a good thing.
    But bottom line, do you believe that this should be 
something that's decided company by company or should we have 
rules of the road and a process that is monitored by elected 
officials in a regulatory agency. That's the question we are 
coming to.
    As Mr. Harper earlier, I thought, asked a very good 
question--what you determine to be fair or I determine to be 
fair, we may disagree. So who's going to be the decider of 
that.
    Do you believe that ultimately it should be a decision on 
these important questions of privacy, on these important 
questions of hate speech, on these important matters you're 
trying to contend with about the abuse of your platform should 
be decided on a company by company basis or should that be a 
public discussion and a public decision made by elected 
representatives?
    Mr. Dorsey. First, we want to make it a public discussion. 
This health and increasing health in the public space is not 
something we want to compete on. We don't want to have the only 
healthy public square.
    We want to contribute to all healthy public conversation. 
Independent of what the government believes it should do, we 
are going to continue to make this our singular objective----
    Mr. Welch. Right.
    Mr. Dorsey [continuing]. Because we believe it's right and 
we are going to continue to share our approach and our work so 
that others can learn from it and we are going to learn from 
others.
    So I do believe that we have worked a lot more closely with 
our peers in order to solve some of these common issues that we 
are seeing and we'll come up with common solutions, as long as 
we all have a mind set of this is not an area for us to 
compete.
    Mr. Welch. It's not an area to compete in, but it's also, 
ultimately, about being as responsible as you and other companies 
want to be, which I grant you you do.
    Ultimately, there will be a debate between the president 
and his vision of what is fair and perhaps my vision of what is 
fair, and in the past, what we've had, we now have the FCC, the 
FTC, that basically were designed to address problems when we 
used dial-up telephones, and this committee has not done 
anything to address the jurisdictional issues and public policy 
questions and I do not believe that we should just be leaving 
it to the responsibility of private companies. But I appreciate 
the efforts the private companies are making.
    And I yield back. Thank you, Mr. Chairman. Thank you, Mr. 
Dorsey.
    Mr. Walden. Gentlemen. The chair now recognizes the 
gentleman from Texas, Mr. Olson, for 4 minutes.
    Mr. Olson. I thank the chair and welcome Mr. Dorsey.
    You mentioned in your opening statement the group called 
the Trust and Safety Council within Twitter.
    On Twitter's blog, it relies on the Trust and Safety Council 
for guidance in evaluating and developing its own community 
guidelines, to use your words from your statement, to create 
that public square for a free exchange of ideas.
    And you have been pretty honest about your personal biases 
and the biases of people within Twitter. How pervasive are the 
biases on the Trust and Safety Council?
    Mr. Dorsey. Well, just for some context, our Trust and 
Safety Council is an external organization of about 40 
organizations that are global and are focused on particular 
issues such as online harassment or bullying or misinformation.
    So these are entities that help us give feedback on our 
policies and also our solutions that we are coming up with but 
we take no direction from.
    Mr. Olson. Are these entities either Republican, Democrat, 
Tea Party, Green Party? Any identity with their affiliation 
politically that comes into Twitter's world?
    Mr. Dorsey. We do have some conservative-leaning 
organizations but we don't add to the council based on 
ideology. It's on the issues.
    Mr. Olson. And I am sure this council at Twitter does not 
operate in a vacuum of secrecy. What other 
groups outside of this group help Twitter influence your 
developing and shaping your community guidelines? Anybody else 
out there besides this Trust and Safety Council you rely upon?
    Mr. Dorsey. Well, the Trust and Safety Council is advisory. 
It makes no decisions for us. Most of our decisions are made 
internally and we definitely take input from external folks and 
we look at what's happening in more of the secular trends of 
what's going on. But we don't take direction from anything 
external.
    Mr. Olson. Could we list those members of that council--the 
Trust and Advisory Council, those 40 entities that are your 
members--Trust and Safety Council?
    Mr. Dorsey. They are listed on our web page.
    Mr. Olson. OK.
    Mr. Dorsey. So we have an accurate list of those and we can 
send you----
    Mr. Olson. I apologize. I will look that up. I also want to 
turn back home, and as you probably heard, a little more 
than a year ago southeast Texas was fighting 4 feet of water 
from floods from Hurricane Harvey.
    A recent report from my alma mater, Rice University, 
highlights how platforms like Twitter have played an important 
role in natural disaster response and recovery.
    The report showed the increased use of mobile devices 
combined with social media platforms has empowered everyday 
citizens to report dangerous situations and lifesaving 
operations. They can see people in trouble and report that very 
quickly.
    How does Twitter prioritize emergency services information 
during disasters? Like, for example, if Harvey comes up and 
hits us--another Harvey within a month or so, because it's 
hurricane season?
    Mr. Dorsey. We do prioritize community outreach and 
emergency services on the platform. We actually do have some 
really good evidence of this specifically with Harvey. So we 
saw about 27 million tweets regarding Hurricane Harvey.
    In Texas, 911 systems failed and people did use Twitter to 
issue SOS calls and we saw as many as 10,000 people rescued 
from this.
    So this is something that we do prioritize and want to make 
sure that we are working with local agencies to make sure that 
we have a lot of strength there.
    Mr. Olson. Thank you, and I will close by recognizing that as a 
fan of the St. Louis Cardinals and a high-tech leader, I will 
forgive you for your Cardinals hacking into my Astros accounts. 
They hacked into my Astros accounts. We won the World Series. 
Thank you, St. Louis Cardinals.
    I yield back.
    Mr. Dorsey. Thank you. Go Cards.
    Mr. Walden. The gentleman yields back.
    The chair now recognizes the gentleman from New Mexico for 
4 minutes--Mr. Lujan.
    Mr. Lujan. Thank you, Mr. Chairman.
    Mr. Dorsey, thank you for being here today as well.
    Mr. Dorsey, yes or no--is it correct that President Trump 
lost followers because your platform decided to eliminate bots 
and fake accounts?
    Mr. Dorsey. Yes.
    Mr. Lujan. During the initial purge of bots, who lost more 
followers, President Trump or former President Obama?
    Mr. Dorsey. I am not sure of those details. But there was a 
broad based action across all of Twitter.
    Mr. Lujan. Subject to confirmation, do these numbers sound 
familiar--President Obama lost 2.3 million followers, President 
Trump lost, roughly, 320,000 followers?
    Mr. Dorsey. I would need to confirm that.
    Mr. Lujan. That's what's been reported.
    So, Mr. Dorsey, based on that, is it correct that Twitter 
is engaged in a conspiracy against former President Barack 
Obama?
    Mr. Dorsey. I don't believe we have any conspiracies 
against the former president.
    Mr. Lujan. I don't either. I don't think you have them 
against this president. I want to commend you on your work with 
what was done associated with the evaluation following the 2016 
election, which led to some of this work.
    In your testimony, you note that Twitter conducted a 
comprehensive review of platform activity related to the 2016 
election.
    I assume that after your review, you felt that Twitter had 
a responsibility to make changes to the way your platform 
operates to address future attempts at election manipulation. 
Is that correct?
    Mr. Dorsey. Yes. We are working and this is our number-one 
priority to help protect the integrity of 2018 elections.
    Mr. Lujan. Further, Mr. Dorsey--and Mr. Chairman, I would 
ask unanimous consent to submit three articles into the 
record--one from January 19th, recode.net, cnbc.com, April 5th, 
2018, and from techcrunch.com, August 21st, 2018.
    Mr. Walden. Without objection.
    [The information appears at the conclusion of the hearing.]
    Mr. Lujan. The first article, Mr. Dorsey, says that Twitter 
admits that there were more Russian trolls on its site during 
the 2016 U.S. presidential election as reported by recode.net, 
January 1, 2018.
    Is that correct? Was this a revelation that Twitter shared?
    Mr. Dorsey. Yes.
    Mr. Lujan. Was that an outcome of some of the research?
    Mr. Dorsey. That was an outcome of the continued work as we 
dug deeper into the numbers in 2016.
    Mr. Lujan. Mr. Dorsey, is it also correct as was reported 
by CNBC on April 5th, 2018, that Twitter has suspended more 
than 1.2 million terrorism-related accounts since late 2015?
    Mr. Dorsey. Correct. Yes.
    Mr. Lujan. How did that work come about?
    Mr. Dorsey. We have been working for years to automatically 
identify terrorist accounts and terrorist-like activity from 
violent extremist groups and automatically shutting that down, 
and that has been ongoing work for years.
    Mr. Lujan. I would hope that this committee would commend 
your work in closing those accounts.
    Lastly, Mr. Dorsey, Facebook and Twitter removed hundreds 
of accounts linked to Iranian and Russian political meddling. 
This was reported August 21st, 2018. Is that correct?
    Mr. Dorsey. Yes.
    Mr. Lujan. So, Mr. Dorsey, are you aware of any significant 
legislation that Congress has passed to protect our democracy 
and our elections?
    Mr. Dorsey. I am not aware.
    Mr. Lujan. The reason you're not aware is because none of 
it is--it's not happened. We've not done anything in this 
Congress.
    Mr. Dorsey, after it was revealed that 87 million Facebook 
users' data was improperly shared with Cambridge Analytica, 
this committee heard testimony from Facebook CEO Mark 
Zuckerberg. This was in April of this year. It's now September.
    Are you aware of any significant privacy legislation that 
passed this committee since Mr. Zuckerberg's testimony?
    Mr. Dorsey. No.
    Mr. Lujan. Again, nothing has happened.
    Mr. Chairman, we've not done anything as well for the 148 
million people that were impacted by Equifax. I think we should 
use this committee's time to make a difference in the lives of 
the American people and live up to the commitments that this 
committee has made to provide protections for our consumers.
    I yield back.
    Mr. Walden. The gentleman's time has expired.
    The chair now recognizes the gentleman from West Virginia, 
Mr. McKinley, for 4 minutes.
    Mr. McKinley. Thank you, Mr. Chairman, and thank you, Mr. 
Dorsey, for coming today.
    Earlier this year, and we just referred to it in testimony, 
the FDA commissioner, Scott Gottlieb, reported that there were 
``offers to sell illegal drugs all over social media, including 
Twitter, and the easy availability in online purchases of these 
products from illegal drug peddlers is rampant and fuels the 
opioid crisis.''
    Now, Mr. Dorsey, do you believe that Twitter's platform and 
your controls have contributed to fueling the opioid crisis?
    Mr. Dorsey. Well, first and foremost, we do have strong 
terms of service that prevent this activity and we are taking 
enforcement actions when we see it.
    Mr. McKinley. OK. Well, there was a recent study just 
published by the American Journal of Public Health that 
analyzed Twitter accounts over a 5-month period, went through 
several hundred thousand of them, and found that there were 
still 2,000 accounts selling illegal drugs on your platform.
    So my curiosity now that we have this report in our hand 
about the 2,000--your website states that this is prohibited.
    It's against your standards and you just said that. Can you 
tell me how many of these sites are still up?
    Mr. Dorsey. I can't tell you. I would have to follow up 
with you on the exact data.
    Mr. McKinley. But they shouldn't be up, right?
    Mr. Dorsey. They shouldn't be. It is prohibited activity.
    Mr. McKinley. If I could, just within the last hour--Mr. 
Dorsey, within the last hour here's an ad for cocaine on 
Twitter. It's still up, and it goes on and it says that, not 
only from that--on that site they can buy cocaine, heroin, 
meth, Ecstasy, Percocet. I would be ashamed if I were you, and 
you say this is against your public policy and you have got 
ways of being able to filter that out and it's still getting on 
there. So I am astounded that that information is still there.
    And then we have the next commercial. This is one on 
cocaine. Here's the next one, that here you can contact us for 
any medicine you want.
    That doesn't say you have to have a prescription. Contact 
these people, and it's on your site and you said you have got 
ways of checking that. Just within the last hour it's still up 
there.
    We ran into the same problem with Facebook and Zuckerberg 
came back to me within 2 hours later and it had all come down. 
They took them off. They weren't aware. They had missed it. 
Their algorithm had missed it.
    I am hoping that in the hours after this hearing you will 
get back to us and tell us that these are down as well--that 
you're serious about this opioid epidemic.
    I just happen to come from a state that's very hard hit 
with this. We don't need to have our social media promoting the 
use of illegal drugs to our children and our families.
    So I hope I hear from you that you will be taking them 
down. Is that a fair statement?
    Mr. Dorsey. Yes. I agree with you this is unacceptable and 
we will act.
    Mr. McKinley. I would also hope that you would move the 
same resources that have complicated so much of what this 
hearing has been about today so that you can focus on this to 
make sure that this doesn't happen again--that we wouldn't have 
to reprimand you to follow the guidelines that you have 
published and you're so proud about that you have the ways of 
stopping opioid sales. But it's not happening.
    So please take a good hard look at it and be serious about 
this next time.
    Thank you very much. I yield back.
    Mr. Dorsey. Thank you.
    Mr. Walden. The gentleman yields back.
    The chair now recognizes the gentleman from Iowa, Mr. 
Loebsack, for 4 minutes for questions.
    Mr. Loebsack. I thank the chairman and ranking member for 
having this oversight hearing today and I thank you, Mr. 
Dorsey, for being here. You have exhibited a lot of patience, 
you have been very diplomatic and I commend you for that.
    And there have been a lot of great issues brought up, including 
what our most recent colleague here from West Virginia 
mentioned. I think that's a very, very important issue.
    It's something that's affecting rural America as well as 
urban America, where I am from, and I think this 
discussion today has really demonstrated how important Twitter 
is to our national conversation--the good, the bad, the ugly, 
all of it--and for our democracy and I am glad we are shining a 
light on many issues of concern of Americans across the country 
with regard to Twitter and the role it plays in our society 
today and will continue to play into the future, obviously.
    And many of my colleagues have raised legitimate concerns 
about data privacy, the influence of hostile actors in our 
elections and the spread of misinformation that can distort and 
harm our very democracy.
    I think these are all important issues, but I want to focus for a 
second on the issue of online harassment and the use of Twitter 
by young people.
    Social media use among the under 18 population continues to 
increase, as you know, and while reaching online communities 
may allow young people to find friendship and community in ways 
we could not have imagined growing up--I certainly wouldn't have 
imagined--Twitter may also be creating unimaginable crises for 
many kids, as I am sure you're aware.
    Social media in general and Twitter specifically have been 
used frequently for abusive purposes like harassment and cyber 
bullying, and Twitter has too often been too slow to respond 
when victims report abuse and harassment.
    These interactions which adults might view as merely 
stressful and hurtful when we look at our Twitter account or 
things that are said that might hurt our feelings, whatever the 
case may be, for young people these can be devastating, as we 
know, because they're still developing and often place large 
importance on their reputations with their peers.
    We've seen too many tragic stories of what can happen when 
individuals feel moved to harm themselves in response to online 
harassment and it should be a goal of all of us to stop that 
kind of bullying.
    So, Mr. Dorsey, my first question is, as part of the 
healthiness of conversations on Twitter, are you making any 
specific changes to the experience of your youngest users?
    Mr. Dorsey. Yes. We agree with all your points and one of 
our areas of focus is around harassment in particular and how 
it is used and weaponized as a tool to silence others, and the 
most important thing for us is that we need to be able to 
measure our progress around it and understand if we are 
actually making any progress whatsoever. So----
    Mr. Loebsack. There is a minimum age of 13. Is that correct 
that you're----
    Mr. Dorsey. Yes.
    Mr. Loebsack [continuing]. Now trying to enforce?
    Mr. Dorsey. Yes.
    Mr. Loebsack. Does Twitter put any safety checks on the 
accounts of teenage users?
    Mr. Dorsey. We do have various safety checks and we can 
follow up with your team on that.
    Mr. Loebsack. That would be good. Does Twitter do anything 
to look for indications of harmful or dangerous interactions, 
specifically?
    Mr. Dorsey. Yes. Yes.
    Mr. Loebsack. It'd be good to know that. I'd appreciate knowing 
what those are specifically. Has Twitter conducted any research with 
outside independent organizations to determine how it can best 
combat online harassment, bullying, or other harmful 
interactions either for children or teenagers or for other 
groups of people?
    Mr. Dorsey. We do this through our Trust and Safety 
Council. So we do have an organization that represents youth on 
digital platforms.
    Mr. Loebsack. And will you commit to publishing a discrete 
review with outside organizations to help evaluate what more 
Twitter can be doing to protect our kids?
    Mr. Dorsey. We haven't yet, but we will certainly work with 
our partners to consider this.
    Mr. Loebsack. Because I think your three principles--
impartiality, transparency, and accountability--I think we can 
put those into effect and operationalize those when it comes to 
these particular questions that I've asked you.
    And so I really do appreciate your time and we can expect 
such a review to be provided to the public then in the future?
    Mr. Dorsey. Yes.
    Mr. Loebsack. OK. Thank you very much for your time, and I 
yield back, Mr. Chair.
    Mr. Dorsey. Thank you.
    Mr. Walden. I thank the gentleman from Iowa.
    I recognize the gentleman from Kentucky, Mr. Guthrie, for 4 
minutes.
    Mr. Guthrie. Thank you very much. I am here. Thank you for 
being here today. I appreciate it.
    I've had to manage the floor debates. I've been over in the 
Capitol building most of the afternoon. I apologize. It was a 
conflict of scheduling.
    But glad to be here, and I know that I missed some of your 
answers and some of the--what we've talked about previously. 
But I want to further go down the path of--on a couple of 
things.
    But many of my constituents who use Twitter perceive it to 
be an open market of ideas that you have referred to in your 
testimony, and we are obviously here today because some 
questions have been raised about the rules for posting content 
and whether some viewpoints are restricted in practice--
specifically, political conservatives.
    So I will come to a question of editorial judgment, but one 
major issue for my constituents starts with transparency and 
how their data is being collected and used by Twitter.
    I understand you have spoken about data a few times already 
this afternoon. So to build on those previous questions asked 
by my colleagues, what specific data points are collected on 
Twitter users and with whom do you share them?
    Mr. Dorsey. So we infer interest around usage. So when 
people follow particular accounts that represent interests in 
basketball or politics, for instance, we can utilize that 
information to introduce them to new tweets that might be 
similar or accounts that might be similar as well.
    So a lot of our inference of that data is interest. This is 
all viewable within the settings of the app so you can see all 
the interests that we've inferred about you within the settings 
and you can also turn them off or delete them.
    Mr. Guthrie. Is that shared with outside parties?
    Mr. Dorsey. It's not.
    Mr. Guthrie. It's not shared? So it's only used by Twitter?
    Mr. Dorsey. Yes.
    Mr. Guthrie. And how do you obtain consent from users if--
so you don't share with any third parties so you don't have to 
go through the consent then? OK.
    When it comes to questions of editorial judgment, and I am 
not an expert on Section 230 but I would like to ask you about 
your thoughts on publisher liability.
    Could you comment on what some have said--that there is a 
certain amount of inherent editorial judgment being carried out 
when Twitter uses artificial intelligence-driven algorithms or 
promotes content through Twitter Moments, and the question 
would be, where should we draw the line on how much editorial 
judgment can be exercised by the owner of a neutral platform 
like Twitter before the platform is considered a publisher?
    Mr. Dorsey. Well, we do defend Section 230 because it is 
the thing that enables us to increase the health in the first 
place. It enables us to look at the content and look for abuse 
and take enforcement actions against them accordingly.
    We do have a section of the service called Moments where we 
do have curators who are looking through all of the relevant 
tweets for a particular event or a topic and arranging them and 
they use an internal guideline to make sure that we are 
representative of as many perspectives as possible, going back 
to that concept of variety of perspective.
    We want to see a balanced view of what people think about a 
particular issue. Not all of them will be as balanced as others 
but that's what they measure themselves against. But it is one 
area that people can choose to use or ignore altogether.
    Mr. Guthrie. OK. Thanks. And then finally, I have 52 
seconds left--I've heard some people say 
that Twitter could be classified as a media outlet due to 
certain content agreements you may have now or consider in the 
future. Do you have any comment on that?
    Mr. Dorsey. I don't think the broader categories are 
necessarily useful. We do see our role as serving conversation. 
Like, we do see our product as a conversational product, a 
communication product, and we do see a lot of people use 
Twitter to get the news because we believe that news is a by-
product of public conversation and allows people to see a much broader 
view of what's currently happening and what's going on.
    So what we are focusing on is how people use us 
rather than these categories. We do have partnerships where we 
stream events like this one--this one is live on Twitter right 
now--where people can have a conversation about it and everyone 
can benefit and engage in that conversation accordingly.
    Mr. Guthrie. OK. Thank you. And my time has expired and I 
yield back.
    Mr. Walden. The chair recognizes the gentleman from 
Massachusetts, Mr. Kennedy, for 4 minutes.
    Mr. Kennedy. Thank you, Mr. Chairman.
    Mr. Dorsey, thanks so much for being here. Thank you for 
your--over here--thank you for your patience. I know you were 
over on the Senate side earlier today. So thank you for 
enduring all these long hours of questioning.
    I wanted to just make sure we were clear on a couple 
things. One, you have talked at length--I will get into a 
little bit more detail--about the mechanisms that you use to 
look at different aspects of content on the site.
    But you have also talked about how your algorithms are a 
bit imperfect--how they have impacted some members of this 
body, Democrats and Republicans. Is that true?
    Mr. Dorsey. Yes.
    Mr. Kennedy. And you have also indicated that there are 
others that get caught up in that, liberal activists that use 
perhaps profane language in response to political leaders. Is 
that true?
    Mr. Dorsey. That may or may not be a signal that we use in 
terms of the content. We tend to favor more of the behavior 
that we are seeing and that's what I was describing in terms of 
the signal was the behavior of the people following these 
accounts.
    Mr. Kennedy. Fair enough. You yourself were actually 
suspended at a time. Was that not true?
    Mr. Dorsey. I was.
    Mr. Kennedy. So fair to say that sometimes that----
    Mr. Dorsey. There are errors. There are errors.
    Mr. Kennedy. Yes, there are, unless you engage in that 
destructive behavior of your own site, which you did not, 
right?
    Mr. Dorsey. I am sorry?
    Mr. Kennedy. Unless you engaged in that own destructive 
behavior that you were talking about, which I don't think you 
did.
    Mr. Dorsey. Correct.
    Mr. Kennedy. Right. So you have talked about essentially 
depending on those automated tools and then individual users to 
report tweets, behavior, one of these horrifying instances with 
Ms. McCain.
    But that's basically the self-regulation mechanisms that 
you all use, right?
    Mr. Dorsey. Yes. Our model currently depends upon reports 
to remove content or to remove accounts.
    Mr. Kennedy. And why is it that you depend on those reports 
rather than having a more robust network within your company to 
do that? Why is it that you basically outsource that to users?
    Mr. Dorsey. Well, we don't feel great about this. We don't 
believe that the burden should be on the victim in the first 
place. So this is something we'd like to change. We have to 
build that technology and----
    Mr. Kennedy. But if you change that, right, I understand 
you don't feel good about putting that on the victims or the 
observers, but you also expressed a reticence for your company 
to be the arbiter as to what is decent, fair, truth.
    You mentioned the term false fact earlier in your 
testimony. I have no idea what a false fact is. But putting 
that aside for a second, it seems like you're trying to 
basically meld this world of outside crowd sourcing what works 
versus internalizing some of it.
    I want to try to push you on that in a minute and a half, 
which is not exactly fair. As you say you're trying to fix it, 
what are you trying to do? What does that look like?
    Mr. Dorsey. We are trying to build proactive systems that 
are recognizing behaviors that are against our terms of service 
and take action much faster so that people don't have to report 
them.
    Mr. Kennedy. One of my Republican colleagues asked earlier, 
I believe, how many folks you have working on that. You said 
the issue wasn't so much how many people but you deflected that 
a bit, understanding that, I am certain, technology can advance 
here.
    But is that two people? Is it 20 people? Is it 200 people? 
Do you expect to be hiring more here? That's got to be some 
sort of reflection of an area of focus, right?
    Mr. Dorsey. Yes. We have hundreds of people working on it. 
But the reason I don't want to focus on that number is because 
we need to have the flexibility to make a decision between 
investing to build more new technology or hiring people, and in 
my experience companies naturally just want to grow and that 
isn't always the right answer because it doesn't allow for a 
lot of scalability.
    Mr. Kennedy. All right, sir. Thank you. I yield back.
    Mr. Dorsey. Thank you.
    Mr. Walden. Now we recognize the gentleman from Illinois, 
Mr. Kinzinger, for 4 minutes.
    Mr. Kinzinger. Thank you, Mr. Chair, and Mr. Dorsey, thank 
you again for coming in here. Recognizing that there's multiple 
sides to free speech--there's good and bad that comes with it.
    I think it's important to also mention that Twitter as well 
as other social media platforms has been key in liberating 
oppressed people and allowing oppressed people to communicate.
    If you look in Syria, although that situation is not good 
over there, people have been able to get their message out. 
When chemical weapons attacks happen, we know about that very 
quickly because government-censored media, which would never 
report a chemical weapons attack, is usurped by Twitter use and 
Facebook and some of these others.
    So part of a very big concern with that too is also foreign 
interference in our democracy. I am very concerned about the 
role that the Russians played in attempting to undermine 
democracy.
    I don't think Russia elected President Trump, but I think 
it's obvious they're trying to sow instability in democracy. 
And so I think the more we can get a grip on this and a grasp 
and make people aware of just the fact of what's happening we 
can begin to inoculate ourselves.
    I would like to ask you, though, about Twitter's practices 
with respect to information sharing with foreign governments.
    It's a topic I addressed in the Facebook hearing with Mr. 
Zuckerberg and in which I think Senator Rubio broached with you 
a little earlier today.
    On September 1st, 2015, Russian Federal Law Number 242-FZ, 
known by many as the data localization law, went into effect.
    It requires social media companies offering service to 
Russian citizens to collect and maintain all personal 
information of those citizens on databases physically located 
in their country. Is Twitter in compliance with this law?
    Mr. Dorsey. I need to follow up with you on that.
    Mr. Kinzinger. You don't know if you're in compliance with 
that law right now?
    Mr. Dorsey. Which law again?
    Mr. Kinzinger. It's the Russian Federal Law 242-FZ, which 
requires--the data localization requires storage of information 
to be kept in Russia. This has been in the news for a couple 
years now so I would hope you would know.
    Mr. Dorsey. I don't. I need my team to follow up with you on 
that.
    Mr. Kinzinger. You got a bunch of people back there. You 
can ask them if I----
    Mr. Dorsey. We don't have servers in Russia.
    Mr. Kinzinger. You do not have them.
    Mr. Dorsey. No.
    Mr. Kinzinger. OK. So you're not technically in compliance, 
which I think is good. So that might answer my second 
question--if you store user data, because there would be 
concern about breaches and everything else in dealing with 
Russia.
    And for legitimate and well-defined requests for data that may 
aid in the investigation of a crime, does Twitter make any user 
data available to Russian state entities including intelligence 
and security agencies?
    Mr. Dorsey. No.
    Mr. Kinzinger. OK. Let me ask you then--we've touched on 
this a few times--with the minute I have left--parents, young 
adults, teenagers using Twitter.
    I think our laws haven't caught up with the new reality, 
the 21st century that we are in. We have to address how 
technology can be used to hurt innocent people.
    In Illinois, there's laws to prevent people from 
distributing photos with malicious intent. A fake account can 
be created in a matter of minutes to slander someone and do 
damage and circulate photos.
    Mr. Zuckerberg testified before this committee that 
Facebook is responsible for the content on Facebook, which I 
think you can appreciate how newsworthy that was, given the 
longstanding interpretations of Section 230.
    Your user agreement clearly states that all content is the 
sole responsibility of the person who originated such content. 
We may not monitor or control the content posted via services 
and we cannot take responsibility for the content.
    Your corrective actions and the statements you have made 
seem to be somewhat in conflict with that language.
Can you just take a little bit of time with what we have left 
to clarify your stance on content?
    Mr. Dorsey. In what regard?
    Mr. Kinzinger. Are users responsible? Is Twitter? Is it 
mixed? What area does Twitter have a responsibility or when you 
step in, why?
    Mr. Dorsey. So people are responsible for their content. We 
have made our singular objective as a company to help improve 
the health of the content that we see on the service, and for 
us that means that people are not using content to silence 
others or to harass others or to bully each other so that they 
don't even feel safe to participate in the first place and that 
is what CDA 230 protects us to do is to actually enforce these 
actions--make them clear to people in our terms of service but 
also to enforce them so that we can take actions.
    Mr. Kinzinger. OK. I am out of time. So I yield.
    Mr. Walden. The gentleman's time has expired.
    The chair recognizes the gentleman from California, Mr. 
Cardenas, for 4 minutes.
    Mr. Cardenas. Thank you very much, Mr. Chairman and 
colleagues, for participating in this important matter.
    I want to follow up on some of Mr. Loebsack's line of 
questioning. While the President and the Republicans are 
criticizing social media--I think it's to whip up their base--
there are real issues such as the shocking number of teens that 
are reporting being bullied.
    Physical playground bullying is bad enough. But, 
increasingly, this cruelty is moving online where one click of 
a button sends hateful words and images that can be seen by 
hundreds or even thousands of people at a time.
    People, kids, are being targeted for being who they are or 
for being a certain race or a certain sexual orientation and so 
on.
    We know it's a pervasive problem. The First Lady has made 
combating cyber bullying a national priority, oddly enough. At 
the same time, adults are not giving kids a great example to 
follow.
    Public figures including the President spew inflammatory 
harmful words every day. These actions cannot be erased and may 
follow their victims and families forever.
    For example, how does it feel to be in front of us for 
hours at a time?
    Mr. Dorsey. I am enjoying the conversation.
    Mr. Cardenas. Yes. But do you get to go home? Do you get to 
do what you choose to do once you leave this room?
    Mr. Dorsey. Yes.
    Mr. Cardenas. Well, that's what's incredibly important for 
us to think about when we think about bullying online because 
it's inescapable, really, and that's really an issue that is 
new to us as human beings and certainly with platforms like 
yours it's made possible. It can take many forms.
    It can be hurtful. It's about words. It's about 
appearances. It's about many, many things. So I think it's 
really important that the public understands that something 
needs to be done about it and what can be done is something 
that, hopefully, we can come to terms with you over at Twitter 
and with all the millions of people who use it.
    As very public examples, for example, celebrities such as 
14-year-old Millie Bobby Brown, Kelly Marie Tran, Ariel Winter, 
and Ruby Rose have stopped using Twitter or taken breaks from 
Twitter because the intensified bullying that they experience 
on the platform has persisted. If Twitter couldn't or wouldn't 
help these public figures, how does it deal with all the kids 
who aren't famous? I want to know how you handle bullying 
claims for American families who are not in the news.
    You have explained that Twitter investigates when it gets a 
report of behavior that crosses the line into abuse including 
behavior that harasses, intimidates, or uses fear to silence 
other voices.
    How many reports of cyber bullying does Twitter receive 
each month is my first question.
    Mr. Dorsey. We don't disclose that data but we can follow 
up with you.
    Mr. Cardenas. OK. Appreciate you reporting to the committee 
on that answer. How about Periscope?
    Mr. Dorsey. The same.
    Mr. Cardenas. The same? OK. Look forward to that answer 
submitted to the committee.
    And how many of those reports are for accounts of people 
age 18 or younger?
    Mr. Dorsey. In what regard? Periscope or Twitter?
    Mr. Cardenas. Yes. Do you ever take into account whether or 
not it's a report about somebody who's been attacked who is 18 
years or younger?
    Mr. Dorsey. We'll have to follow up with you on that. We 
don't have the same sort of demographic data that our peers 
do because we are not a service of profiles but of 
conversation.
    Mr. Cardenas. That makes it even more critical for us to 
understand that. What actions are taken in response to these 
reports and how long does it take for Twitter to take such a 
response?
    Mr. Dorsey. We rank according to the severity of the report 
and, again, this is something that we need to improve to 
understand the severity of each report and how that is ranked 
so we can move much faster.
    Ultimately, we don't want the reporting burden to be on the 
victim. We want to do it automatically.
    Mr. Cardenas. OK. Thank you very much. I am out of time.
    Thank you very much, Mr. Chairman. I yield back.
    Mr. Walden. I thank the gentleman.
    And we now turn to the gentleman from Virginia, Mr. 
Griffith, for 4 minutes.
    Mr. Griffith. Thank you very much, Mr. Chairman. I 
appreciate you being here, Mr. Dorsey.
    I represent that portion of Virginia that's in the 
southwest corner and borders a big chunk of southern West 
Virginia and so I had some questions similar to Mr. McKinley's 
questions because we are suffering from a huge opioid problem 
and from drugs in general.
    And so I know you're trying and you're working on it and 
you're looking for things. But last year in an edition of 
Scientific American, they talked about having artificial 
intelligence scan Twitter for signs of opioid abuse, and it 
would seem to me that on something that's an illegal conduct, 
if somebody is selling drugs that's not just an inconvenience 
or trying to judge whether it's truly something that's bad or--
it's illegal--it would seem to me that you all ought to be able 
to deploy an artificial intelligence platform that would knock 
down anybody trying to sell illegal substances on your 
platform. Can you address that?
    Mr. Dorsey. Yes. We have to prioritize all of our models 
and we have been prioritizing----
    Mr. Griffith. Shouldn't illegal be at the very top of that 
model?
    Mr. Dorsey. Absolutely. But we have been prioritizing a lot 
of what we saw in 2016 and 2017 in terms of election 
interference and our readiness for 2018. That does not say----
    Mr. Griffith. Here's what I got. I got people writing me 
whose kids have died because they've been in treatment, they 
have a relapse, and one of the easiest ways to get in there is 
to get on social media and, if scientists can use artificial 
intelligence to track opioid abuse in this country, it would 
seem to me you ought to be able to track illegal sales with 
artificial intelligence. Now, wouldn't you agree with that? Yes 
or no.
    Mr. Dorsey. I agree with that. It's horrible and definitely 
it's something we need to address as soon as possible.
    Mr. Griffith. I appreciate that very much.
    Now, look, I don't think there's a conspiracy. I think that 
there's a lot of folks out there, though, that may not have 
that many conservative friends who might be living in your 
neighborhood or living in the area that you live in, and I 
looked at your advisory council.
    There may be some right-leaning groups but I didn't see any 
right groups in there that would--look, we are not all crazy on 
the right. Get in there and find some groups that can help out 
on your advisory council.
    Also, I would say to you, and I said this to Mr. Zuckerberg 
when he was here, it seems to me that if you don't want the 
government in there--and I think it's better not to have the 
government in there telling you all what to do as social 
media--that you all as a group ought to get together and come 
up with something.
    In 1894 we had this new-fangled thing. Electronic devices were 
coming onto the scene and an engineer said, maybe we ought to 
test all this, and they got the insurance companies and the 
electric manufacturers together and they funded Underwriters 
Laboratories, and as an industry without government coming in 
and saying, this is what you have to do, they came up with 
standards.
    It would seem to me that the social media, particularly the 
big actors like yourself, but others ought to come together, 
figure out something that's a template that works for all to 
make sure that we are not having political bias because I 
really do believe you when you say that you all aren't trying 
to do it.
    But it's happening anyway, and I think it's an accident. I 
am not trying to assess blame. But I am saying you have got to 
help us because I don't think it's good for the internet or 
social media to have the government laying down rules that may 
or may not make sense.
    But somebody's got to do something because we need to 
protect privacy, as you have heard, and we need to make sure 
there's not any political bias intentional or unintentional. 
Would you agree to that?
    Mr. Dorsey. It's a great idea and that is why we want to be 
a lot more open around these health indicators that we are 
developing and we don't see this as a competition.
    Mr. Griffith. And last but not least, one of the questions 
that's come up as I've been discussing this issue with a lot of 
folks is if you do put the kibosh on somebody's post or 
somebody's Twitter account, can you at least tell them about it 
so that they have some idea so they can do the appeal? Because 
if they don't know about it, they're not likely to appeal, are 
they?
    Mr. Dorsey. Yes. We need a much more robust way of 
communicating what happened and why and also a much more robust 
appeals process.
    Mr. Griffith. Thank you very much. My time is up. I yield 
back.
    Mr. Walden. I thank the gentleman.
    I turn now to the gentleman from California, Mr. Peters, 
for 4 minutes.
    Mr. Peters. Thank you, Mr. Chairman, and thank you, Mr. 
Dorsey, for being here.
    I don't know if anyone else has mentioned the breathtaking 
irony that Donald Trump is complaining about Twitter. It's hard 
for me to imagine he would have done nearly as well as he did 
without your platform and he's a master of using it. I think it 
has done some wonderful things for democracy. It's democratized 
democracy in many ways.
    We saw that here in the House when we livestreamed the 
protest over guns in 2016. It brought people into the chamber 
in a way that I think none of us had imagined before. I use it 
a lot just to stay connected back home in San Diego.
    I find out what's going on every day in the local 
government, in the local activities. I follow my baseball 
team's promising minor leagues through it and I think it's been 
a great platform.
    The problem with when anyone can be on your platform, 
though, is that now everyone's a journalist and I just want to 
explore in that context your discussion of the term fairness.
    Have you ever written down what you mean by fairness? And 
what I am sort of getting at is, you have these allegations 
about facts versus false equivalency that journalism has been 
dealing with I think more successfully recently, trying to 
provide truth rather than balance.
    Is that something that goes into your calculation of 
fairness and what kind of standards do you impose on content 
that's on Twitter?
    Mr. Dorsey. Fairness to us means that we are driving more 
impartial outcomes, which are more objective driven, not basing 
anything on bias, and we do want to be able to measure this and 
also make public what we find, and that's why we kicked off 
this initiative to understand the health of conversation and 
how it might trend.
    One of the indicators that we are considering is shared 
facts and that is the percentage of conversation that shares 
the same facts. That is not an indication of truth or not, just 
what percentage of people participating in a conversation are 
actually sharing the same facts versus having different facts, 
and we think a greater collection of shared facts leads to a 
healthier conversation.
    So then if we understand the makeup of them currently, how 
can we help drive more people towards sharing more of the facts 
and if we can do that then we can see a lot more healthy 
conversations. So that's our intent.
    But first, we are at the phase where we just need to 
measure it against those four indicators I laid out earlier, 
and we can send you more of our information and thinking about 
how we are developing these.
    Mr. Peters. I would love to hear that. One of the problems 
with everyone having their own facts is it's very hard to have 
conversations about difficult issues.
    One that I am concerned about is climate change. If 
everyone has a different understanding of the facts it's hard 
to agree on what to do about it. Mr. Sarbanes raised the 
concept of this hearing being a way to work the refs. I don't 
know if you recall that reference.
    Is that something that we should be concerned about? Is 
that something that strikes you as going to have an impact on 
your business, the notion that the committee would be working 
the refs with the majority?
    Mr. Dorsey. I honestly don't know what that means so----
    Mr. Peters. OK. Good. So the idea is that they're going to 
put so much pressure on you to avoid pressure from us that you 
will change your behavior in a way that's not fair. Is that 
something that we should be concerned about?
    Mr. Dorsey. Well, I think we've articulated what we think 
is important and what we are trying to drive and I see the role 
of government as being a checkpoint to that and also being a 
clarifier and asking questions of our path, and I do believe 
the system is working in that regard.
    So we are putting out what we believe is critical for us to 
focus on and if there are disagreements en masse in feedback we 
get, we will certainly change our path.
    Mr. Peters. Well, I appreciate your testimony today. My 
time has expired and I thank the chairman.
    Mr. Walden. I thank the gentleman.
    The chair recognizes the gentleman from Florida, Mr. 
Bilirakis, for 4 minutes.
    Mr. Bilirakis. Thank you, Mr. Chairman. I appreciate it. 
Thank you very much, and thank you for your testimony, Mr. 
Dorsey.
    Mr. Dorsey, I've heard from my local Pasco County school 
district--that's located on the west coast of Florida--it has 
consistently responded to threats of school violence.
    I've heard from the superintendent, Kurt Browning, who's 
doing an outstanding job, that it faced as many as 19 threats 
in one week. Many of those threats have come from individual 
tweets.
    News reports and studies show this is a widespread problem, 
as you can imagine. What is your company's process for 
notifying local law enforcement officials and school districts 
when these threats emerge?
    Mr. Dorsey. We do have outreach to local entities and local 
law enforcement when we see anything impacting someone's 
physical security. We can follow up with you on exactly what 
those implementations are.
    Mr. Bilirakis. Well, how effective have they been? Can you 
give me----
    Mr. Dorsey. I am not sure how to determine the efficacy. 
But we can follow up with you on that and share what we have.
    Mr. Bilirakis. Please do. Please do.
    And would you consider an internal process in which Twitter 
can work directly with the school districts to address these 
tweets quickly? Obviously, time is of the essence.
    Mr. Dorsey. Yes. One of the things we are always looking 
for is ways to quickly, especially where it impacts physical 
security, ways to quickly alert us to things that we might be 
able to help with in terms of the conversation around it.
    So we are certainly open to it and open to an 
implementation that we think we can scale.
    Mr. Bilirakis. Let me ask you a question. How did you 
determine the--and I know social media, Facebook too--minimum 
age of use, 13, and are you considering raising that age?
    Mr. Dorsey. We, I don't believe, have considered raising 
the age but we do determine it upon sign-up.
    Mr. Bilirakis. OK. Thank you.
    The next question--according to Twitter's website, 
Twitter's Moments are defined as ``curated stories showing the 
very best of what's happening on Twitter and customized to show 
you topics that are popular or relevant so you can discover 
what is unfolding on Twitter 
in an instant.''
    In my experience, Twitter Moments more often features a specific point of view or political narrative, and the question is: how are these ``Moments'' compiled and prioritized?
    You said earlier that Moments are selected by employees 
publishing content. What are the internal guidelines the 
company has set to determine what makes a Moment?
    Mr. Dorsey. Yes. So we, first and foremost, take a data-
driven approach to how we arrange these Moments and, again, 
these are collections of tweets that we look at, based on any 
particular topic or event, and we bring them into a collection, 
and we use a data-driven approach meaning that we are looking 
for the amount of conversation, first and foremost, that's 
happening around a particular event, and then as we rank that, 
then we go into impartiality to make sure that we are looking 
for opportunities to show as many perspectives as possible.
    So a variety of perspectives and a high score on a variety 
of perspectives is beneficial to the people reading because 
they can see every side of a particular issue or a particular 
event.
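    A purely illustrative sketch of the kind of data-driven ranking described above follows; the topics, tweet counts, and perspective labels are hypothetical and are not Twitter's actual criteria. Candidate collections are ordered first by conversation volume and then checked for variety of perspectives.

# Hypothetical sketch: rank candidate Moments by conversation volume,
# then score how many distinct perspectives each collection covers.
from collections import Counter

def rank_moments(candidates):
    """candidates: dicts with 'topic', 'tweet_count', and 'perspectives'."""
    def variety_score(perspectives):
        counts = Counter(perspectives)
        # More distinct, evenly represented perspectives score higher.
        return len(counts) / (1 + max(counts.values()) - min(counts.values()))

    return sorted(
        candidates,
        key=lambda m: (m["tweet_count"], variety_score(m["perspectives"])),
        reverse=True,
    )

moments = [
    {"topic": "Hurricane update", "tweet_count": 120_000,
     "perspectives": ["local", "national", "official", "eyewitness"]},
    {"topic": "Season finale", "tweet_count": 95_000,
     "perspectives": ["fans", "fans", "critics"]},
]
for moment in rank_moments(moments):
    print(moment["topic"], moment["tweet_count"])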
    Mr. Bilirakis. OK. Very good. I thank you and look forward 
to getting some information on this particular----
    Mr. Dorsey. Thank you.
    Mr. Bilirakis [continuing]. Following up and we'd like to 
have you back, in my opinion, even though I am not the 
chairman, to see the progress that you have made with regard to 
these issues.
    Thank you, and I yield back.
    Mr. Dorsey. Thank you.
    Mr. Walden. I thank the gentleman.
    The chair recognizes the gentlelady from Michigan, Mrs. 
Dingell, for 4 minutes.
    Mrs. Dingell. Thank you, Mr. Chairman, and thank you, Mr. 
Dorsey.
    You're actually one of my husband's heroes. I am married to 
what we affectionately call around here the Dean of Twitter 
who, quite frankly, at 92 is better on Twitter than probably 
everybody in this room, which means I know the power of this 
platform and I think it's a very important tool.
    But to those who have been doing conspiracy theories and politicizing this, it is not only Meghan McCain--I, myself, have had some of those same threats and those same caricatures and, quite frankly, I was blissfully ignorant until law enforcement brought it to my attention.
    So I do think that the threats that are happening on 
Twitter do need to be better understood and more quickly acted 
upon.
    But I would rather ask some questions right now because you're educating all of us and we all need to understand social media better, period, and its role as a tool.
    So I would like to ask some questions about privacy and the 
use of machine learning and artificial intelligence on the 
platform.
    You have spoken about how you are trying to deploy machine 
learning to combat the disinformation, the harassment, the 
abuse, and I want to build on what some of my other colleagues 
have said about the black box nature of these algorithms and 
the lack of what they call accountability but how we improve 
it.
    So building on what actually my colleague, Representative 
Harper, was saying, what type of data sets do you use to train 
AI and how often do you retrain them?
    Mr. Dorsey. That's a great question. We try to use data 
sets that will be predictive of what we would expect to see on 
the service and as we train these models we are certainly using 
previous experiences and outputs that we've seen in natural 
uses of how people use the system and then also trying to test 
some edge cases as well.
    But, again, all these tests are great and help us 
understand what to expect but, ultimately, they're not really 
put to the test until they're released on production and we 
actually see how people use it and how it's affecting usage and 
also what might be unexpected, which I talked about earlier.
    So that's training. AI is not a new field but the 
application of AI at scale is rather new, especially to us and 
our company.
    So there are best practices being developed that we are 
learning as quickly as possible from and, more importantly, 
trying to measure those outcomes in terms of bias and 
impartiality.
    Mrs. Dingell. So as we build on that, do your engineers 
have an ability to see and understand why an algorithm made 
certain decisions?
    Mr. Dorsey. That is a great question because that goes into 
another field of research in AI which is called explainability, 
which is encouraging engineers to write a function that enables 
the algorithm to describe how it made the decision and why it 
made the decision and I think that is a critical question to 
ask and one to focus on because we are offloading more and more 
of our decisions to these technologies, whether they be 
companies like ours who are offloading our enforcement actions 
to algorithms or ranking actions to algorithms or even 
personally.
    I am wearing an Apple Watch right now and it tells me when 
to stand. I've offloaded a decision to it, and if it can't 
explain the context to why it made that decision or why it's 
taking that action, it becomes quite scary.
    So I do believe that is a valid form. It is extremely early 
in terms of research--this concept of explainability--but I 
think it will be one that bears the greatest fruit in terms of 
trust.
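    For readers unfamiliar with the term, explainability generally means having the system report which inputs drove a decision. A minimal, hypothetical sketch--not Twitter's implementation--is a scoring function that returns the per-signal contributions alongside the decision itself, so a reviewer can see the why.

# Hypothetical sketch of an explainable scoring function: a simple linear
# model that returns its decision together with per-signal contributions.
WEIGHTS = {                 # invented behavioral signals and weights
    "reports_received": 0.6,
    "blocks_by_others": 0.3,
    "account_age_days": -0.01,
}
THRESHOLD = 1.0

def score_with_explanation(signals):
    contributions = {name: WEIGHTS[name] * signals.get(name, 0.0)
                     for name in WEIGHTS}
    total = sum(contributions.values())
    return {
        "flagged": total >= THRESHOLD,
        "score": round(total, 3),
        # Largest contributors first, so the explanation leads with them.
        "explanation": sorted(contributions.items(),
                              key=lambda kv: abs(kv[1]), reverse=True),
    }

print(score_with_explanation(
    {"reports_received": 3, "blocks_by_others": 1, "account_age_days": 40}))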
    Mrs. Dingell. For the record, because I am going to be out of time: you have talked about how these algorithms have missed things and made mistakes. What is an acceptable error rate? You can answer that for the record later, but I just----
    Mr. Dorsey. We'll come back.
    Mr. Walden. The chair now recognizes the gentleman from 
Ohio, Mr. Johnson, for 4 minutes.
    Mr. Johnson. Thank you, Mr. Chairman, and Mr. Dorsey, thank 
you for being here today. Is it safe to say that an algorithm 
is essentially a decision tree that once it's turned into 
software it operates on a data set as input and it produces a 
desired action or result? Would that be a good layman's term of 
what an algorithm is?
    Mr. Dorsey. For a general algorithm, yes. But it gets a lot 
more complicated.
    Mr. Johnson. I know it gets a lot more complicated than 
that and I am going to get into the complication. I am a 
software engineer by trade and I've written thousands and 
thousands of algorithms.
    There's as much art that goes into writing an algorithm as 
there is science. Would you agree with that?
    Mr. Dorsey. I agree with that.
    Mr. Johnson. So and, essentially, there's a part of the 
heart of the algorithm writer that's writing that algorithm, 
correct?
    Mr. Dorsey. In----
    Mr. Johnson. If you have got a painter--if you put 10 painters in 10 different rooms and say, paint me a picture of a tree, you're going to get Charlie Brown's Christmas tree in one room.
    You're going to get an oak tree with a swing and grass underneath it. You're going to get 10 different pictures of a tree. If you ask 10 software engineers to develop you an algorithm, you're going to get 10 different solutions to solve that problem, right?
    Mr. Dorsey. Which is why testing is so important because we 
are looking for other algorithms.
    Mr. Johnson. Which is why testing is so important. What 
kind of testing do you guys do with your algorithms to make 
sure that that innate bias that's inevitable because you--it's 
already been admitted that Twitter has got bias in your 
algorithms because you have acknowledged that and you have 
tried to correct it.
    So how do you go about weeding out that innate bias? Do you 
do any peer reviews of your algorithms before you send them to 
production?
    Mr. Dorsey. We do do those internally, yes.
    Mr. Johnson. Well, can't you modify your algorithms, especially in this age of artificial intelligence, to be more intelligent in identifying and alerting on specific things?
    In the automotive industry today we've got artificial intelligence in automobiles that doesn't just tell you that there's something in front of you. It actually puts the brakes on. It takes some action and it's instantaneous because it saves lives.
    Is it unreasonable to think that Twitter could not modify 
its algorithms to hit on illegal drug sales, on violent 
terminology, and those kinds of things and make faster alerts 
to stop some of this?
    Mr. Dorsey. Not unreasonable at all. It's just a matter of 
work and doing the work and that is our focus.
    Mr. Johnson. OK. Well, I would submit to you that you need 
to do that work and you need to get to it pretty quick.
    Let me ask you another quick question. The trending topics 
list is an important issue and I want to understand that one. 
Can you tell me how a topic is determined to be trending? Give 
me some specific--what's it based on?
    Mr. Dorsey. Well, so in a tweet when you use a particular 
key word or hashtag, when the system notices that those are 
used en masse in aggregate, it recognizes that there's a 
velocity shift in the number of times people are tweeting about 
a particular hashtag or trend and it identifies those and then 
puts them on that trending topic list.
    Now, there is a default setting where we personalize those 
trending topics for you and that is the default. So when you 
first come on to Twitter, trending topics are personalized to 
you and it's personalized based on the accounts you follow and 
how you engage with tweets and what not.
    Basically, we could show you all the trending topics 
happening in the world but not all of them are going to be 
relevant to you. We take the ones that are relevant to you and 
rank them accordingly.
    Mr. Johnson. So it's trending based on what's relevant to 
you, essentially?
    Mr. Dorsey. Correct.
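    The velocity-shift idea described above can be illustrated with a short, purely hypothetical sketch; the hashtags, counts, and threshold below are invented for illustration, and a production system would then personalize the resulting list per user.

# Hypothetical sketch: flag hashtags as trending when their tweet rate in
# the current window is a large multiple of their recent baseline rate.
def trending(current_counts, baseline_counts, velocity_threshold=3.0):
    """current_counts: tweets per hashtag now; baseline_counts: prior average."""
    trends = []
    for tag, current in current_counts.items():
        baseline = max(baseline_counts.get(tag, 0), 1)  # avoid divide-by-zero
        velocity = current / baseline
        if velocity >= velocity_threshold:
            trends.append((tag, velocity))
    # Highest velocity first; personalization would re-rank this per user.
    return sorted(trends, key=lambda t: t[1], reverse=True)

print(trending(
    {"#debate": 9000, "#monday": 1200, "#newshow": 400},
    {"#debate": 800, "#monday": 1100, "#newshow": 50},
))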
    Mr. Johnson. OK. My time is up. But let me just say this, 
and I said this to Mr. Zuckerberg. In the absence of massive 
federal regulations telling you guys how to do your business, 
the responsibility bar goes really, really high.
    And I think, coming back to what Mr. Griffith says, I think 
you guys need to look at an outside entity of some sort to help 
you bounce off ideas of how to address this stuff before legal 
or market forces drive you to a place that you're not going to 
want to go.
    Mr. Walden. The gentleman's time has expired.
    Mr. Johnson. I yield back.
    Mr. Walden. The chair now recognizes the gentleman from New 
York, Mr. Tonko, for 4 minutes.
    Mr. Tonko. Thank you, Mr. Chair, and thank you, Mr. Dorsey, 
for all the time you have given the committee.
    I want to echo my dismay that our Republican colleagues 
have chosen to hold this hearing to rile up their base and give 
credence to unsupported conspiracies when there are real issues 
here that run to the heart of our civic life that deserve our 
immediate attention.
    It is unfortunate and a missed opportunity on behalf of our 
majority.
    Mr. Dorsey, I know that Twitter has said it is taking steps 
to help make political advertising more transparent on the 
platform and is now working to do something similar with issue 
ads.
    Unfortunately, looking at Twitter today, I am concerned 
that even for political ads you haven't made anything clear 
necessarily to consumers. On some platforms--Facebook, for example--if a user visits a politician's or political campaign's page, that user can immediately see all the advertisements that she or he has purchased on the platform.
    On Twitter, I have to find a separate resource--the ads 
transparency center--and then search for the politician to see 
what promotion she or he purchased in the past. It is, indeed, 
difficult to find and seems ill advised, particularly when your 
competitors are doing it differently and perhaps better.
    So did Twitter do any research regarding how best to make 
election advertising information available to its consumers?
    Mr. Dorsey. We did do some research. But this is not a 
stopping point for us. So we want to continue to make ad 
transparency something that is meeting our customers where they 
are so that it is relevant so it's easy to get to.
    We did some things a little bit differently. We have 
launched the issue ad feature of the ad transparency as well. 
But we also enabled anyone, even without a Twitter account, to 
search Twitter ads to see who is behind them and also the 
targeting criteria that are used.
    Mr. Tonko. Thank you. And have you kept any statistics that 
you can share with this committee today about how often average 
consumers even searched the ads transparency center?
    Mr. Dorsey. We do keep statistics and track usage of all of 
our products. We can certainly follow up with your office to 
give you some relevant information.
    Mr. Tonko. Thank you. And I know that you said this is not 
a stopping point--that you're still exploring--but why is it 
that it appears that you're making it harder for Americans to 
see who's trying to influence them?
    Mr. Dorsey. That's not our intention and, we do know we 
need to do a lot more work to meet people where they are, and 
in the interface there's just some design choices that we need 
to make in order to do this the right way.
    Mr. Tonko. What's more, it seems that political advertising 
information that Twitter makes available only shows 
advertisements served in the past 7 days. Is that correct?
    Mr. Dorsey. I am not aware right now of the constraints on 
it. But we'll follow up with you.
    Mr. Tonko. OK. But if that is correct, that seems vastly 
insufficient, given that political campaigns in the U.S. last 
months, if not years.
    So, Mr. Dorsey, why doesn't your platform reflect that 
insight and disclose political advertising beyond 7 days if 
that, indeed, is the time frame?
    Mr. Dorsey. We'll look into that.
    Mr. Tonko. OK. I appreciate that immensely, and I thank 
you.
    And I yield back, Mr. Chair, the balance of my time.
    Mr. Walden. I thank the gentleman.
    We now go to the gentleman from Missouri, Mr. Long, for 4 
minutes.
    Mr. Long. Thank you, Mr. Chairman, and thank you, Mr. 
Dorsey, for being here.
    I think it's pretty easy to understand why you have been as 
successful as you have because your mannerisms today, your 
decorum--a lot of people come into these hearings and they 
practice and they coach them and they tell them how to act. 
It's obvious that no one did that for you.
    You are who you are and that shows today and I think that 
that has a lot to do with how successful you have been. So 
thank you for your time and being here today.
    Mr. Dorsey. Thank you.
    Mr. Long. I do have a couple of questions. Mr. Bilirakis 
asked you about Moments. I am not sure exactly what Moments are 
but when my staff got a hold of me a couple days ago they said, 
well, what do you want to ask Mr. Dorsey--where do you want to 
take this--what direction--do a little research.
    And I just, off the top of my head I said, well, let me 
send you some stuff so I started shooting them emails, and 
these are emails that I received--they're called highlights, as 
you're familiar with--daily highlights to my personal Twitter 
account about the most interesting content from Twitter that is 
tailored just for me.
    And when we are talking about impartiality and, somebody 
said the Republicans are all full of conspiracy theories over 
here. You're a thoughtful guy. I just want you to take into 
consideration what I am going to say and do with it what you 
want to.
    But if you're saying hey, we are impartial--we really are--
this, that and the other--I just started firing off emails to my legislative director and I sent him 14 emails of highlights that were sent to me just in the last few days and I guess, I don't know, it might have been over 14 days--I don't know how often you send them.
    But there's six highlighted tweets per email. So that's a 
total of 84 recent examples that you all picked out and said 
hey, this conservative congressman from Missouri--and thank 
goodness you're a Cardinal fan--but and you being from 
Missouri--but this conservative congressman that we found out 
what this guy wants to read and here it is.
    Twelve of the 84 were from Glenn Thrush, reporter 
for the New York Times; Maggie Haberman--you sent me nine from 
her--White House correspondent for the New York Times, 
political analyst for CNN; Chris Cillizza, political 
commentator for CNN; David Frum, senior editor at The Atlantic 
and MSNBC contributor; Nicolle Wallace, current anchor of 
Deadline White House and chief political analyst for MSNBC and 
NBC News; Sam Stein, former political editor of the Huffington 
Post, politics editor at the Daily Beast and MSNBC contributor; 
Rep. Eric Swalwell, Democratic congressman from California's 
15th District; Robert Costa, national political reporter for 
the Washington Post, a political analyst for NBC News and 
MSNBC; Kaitlan Collins, White House correspondent for CNN; 
Michael Schmidt, New York Times correspondent and contributor 
to MSNBC and NBC; Tommy Vietor, former spokesman for President 
Obama; David Corn, MSNBC analyst and author of the ``Russian 
Roulette'' book; Kasie Hunt, NBC News correspondent, host of 
an MSNBC show; Richard Painter, commentator on MSNBC and CNN, 
outspoken critic of Trump; David Axelrod, commentator for CNN, 
former chief strategist for Obama's campaign, senior advisor to 
Obama.
    I did not cherry pick these. Here's a Republican--a former 
Republican. I am not sure what he is now. But you did send me 
one from Bill Kristol, founder and editor at large of The Weekly Standard and a vocal never-Trump guy, and you 
did send me another one from Fox News--I will put that in 
there--Brit Hume, senior political analyst for Fox News 
channel.
    I want to submit these for the record so you can peruse 
them at your leisure. Those are the only two I remember being Republican--Kristol and Hume--and out of 84 they were handpicked, 
tailored for me because they know what I want to read. But 
Glenn Thrush, Chris Cillizza--it just goes on and on.
    I have, I guess, 14 pages of them here, and they're all 
pretty much Trump bashing. They're all pretty much Trump 
bashing. If you just go right down the line, one after another.
    So just, if you will, take that into consideration and, 
again, I do--and I think that there was a fake news tweet sent 
out yesterday by a guy that was sitting here earlier and he's 
not here anymore.
    Reporter John Gizzi sent out a fake news tweet yesterday. 
He said he was headed to Nationals Park--that he was going 
to watch them beat the Cardinals. That was fake news.
    [Laughter.]
    I yield back.
    Mr. Dorsey. Thank you. It doesn't sound like we served you 
well in matching your interests.
    Mr. Duncan [presiding]. The gentleman's time has expired.
    The chair will recognize Ms. Schakowsky.
    Ms. Schakowsky. Thank you, Mr. Chairman.
    So while you have been sitting here all day--we appreciate 
that--according to the Wall Street Journal, the Justice 
Department is set to examine whether social media giants are 
``intentionally stifling'' some viewpoints, and it quotes the 
President.
    It says that in an interview Wednesday morning with the 
Daily Caller, Mr. Trump accused social media companies of 
interfering in elections in favor of Democrats: ``The truth is 
they were all on Hillary Clinton's side,'' he said.
    Would you agree with that characterization by the 
President?
    Mr. Dorsey. No.
    Ms. Schakowsky. The other thing it says in this article is that they expressed--I guess it's in the Senate--they expressed contrition for allowing their platform to be abused in the past while pledging to make protecting the system during the 2018 mid-term elections a priority.
    First of all, I just want to say about contrition, we heard 
from Facebook's CEO, Mr. Zuckerberg, one example after another 
after another through the years--you haven't been there that 
long, Twitter--of contrition. We are sorry, we are sorry, we 
are sorry.
    But even today, what I have heard is: well, we made a mistake--we are going to do better, et cetera.
    So, first let me ask you, what are you going to do to make 
sure that the election is not in some way influenced by foreign 
governments in an inappropriate way?
    Mr. Dorsey. Well, this is our number-one priority in our 
information quality efforts----
    Ms. Schakowsky. I hear that.
    Mr. Dorsey [continuing]. And our broader health and we have 
benefited from learning from recent elections like the Mexican 
election and were able to test and refine a bunch of that work 
accordingly.
    So we are doing a few things. First, we are opening portals 
that allow partners and journalists to report anything 
suspicious that they see so that we can take much faster 
action.
    Second, we are utilizing more technology to identify where 
people are trying to artificially amplify information to steer 
or detract the conversation.
    Third, we have a much stronger partnership with law 
enforcement and federal law enforcement to make sure that we 
are getting a regular cadence of meetings that we are seeing 
more of the trends going on and that we can understand intent 
behind these accounts and activities so we can act much faster 
as well.
    Ms. Schakowsky. Well, I appreciate that because that's 
where the emphasis ought to be. I have to tell you, the 
President and the Republicans have concocted this idea of a 
supposed anti-conservative bias to, it seems to me, distract 
from the fact that their majority has absolutely done nothing 
to prevent foreign governments from using social media 
platforms to spread misinformation, and if we don't do that 
then I think our democracy itself is actually at stake.
    But also, in terms of your motives, Mr. Dorsey, the 
majority of Twitter's revenue comes from selling advertising on 
the platform, right?
    Mr. Dorsey. Correct.
    Ms. Schakowsky. And Twitter is a for-profit publicly-traded 
company. Is that right?
    Mr. Dorsey. Correct.
    Ms. Schakowsky. And generally speaking, businesses, 
political campaigns, and other advertisers choose to advertise 
on Twitter because Twitter promises to deliver a targeted, highly engaged audience. Is that what you'd say?
    Mr. Dorsey. Correct.
    Ms. Schakowsky. So you actually said that you are 
incentivized--it says Twitter is incentivized to keep all 
voices on the platform. Is that correct?
    Mr. Dorsey. No. That is where we need to make sure that we 
are questioning our own senses but also we understand that 
making health our top and singular priority means that we are 
going to be removing accounts and we have done so.
    Ms. Schakowsky. OK. I am quoting, actually--that you said 
from a business perspective Twitter is incentivized to keep all 
voices on the platform.
    Mr. Dorsey. Oh. All perspectives. But I thought you meant 
more the accounts. But we do want to make sure that we believe 
we are used as a public square for people and that all 
perspectives should be represented.
    Ms. Schakowsky. Thank you, and thank you for being here.
    Mr. Dorsey. Thank you.
    Mr. Duncan. The gentlelady's time has expired. The chair 
will recognize the gentleman from Indiana, Mr. Bucshon.
    Mr. Bucshon. Thank you. Thank you, Mr. Dorsey, for being 
here.
    I just want to say I don't see this as particularly 
partisan. The hearing, I think, is completely appropriate and 
relevant to the American people across political ideology.
    I would respectfully disagree with my Democrat colleagues 
and some of the comments they've made and I would just like to 
say this.
    Ironically, in my view, they're the ones most likely to 
want heavy-handed government intervention into your industry 
and I would argue that people like me, Republicans, are trying 
to help you avoid it. So take that for what it's worth.
    You have implied and you have said that Twitter is taking 
all these different actions to improve all the things that 
you're doing as it relates to algorithms and other things.
    What's your timeline? And I know you have said that this is 
an ongoing process, right. You're not going to get a checkered 
flag, right. But what's your timeline for getting some of this 
really done?
    Mr. Dorsey. We want to move as fast as possible, and I know 
that's a frustrating answer because it's really hard to predict 
these outcomes and how long they may take.
    But it is our singular objective as a company in terms of 
increasing the health of the public square that we are hosting.
    Mr. Bucshon. Yes. Thank you.
    Is there any way that users and third parties can verify 
whether or not their political standards or judgments are 
embedded accidentally into Twitter's algorithms?
    I guess I am asking are your algorithms publicly available 
for independent coders to assess whether there is bias, whether 
it's intended or unintended?
    Mr. Dorsey. Not today. But that is an area we are looking 
at and we'd love to be more open as a company including our 
algorithms and how they work.
    We don't yet know the best way to do that. We also have to 
consider in some cases when we are more clear about how our 
algorithms work it allows for gaming of the system, so people 
taking advantage of it.
    Mr. Bucshon. Yes.
    Mr. Dorsey. So we need to be cognizant of that, and it's 
not a blocker by any means.
    Mr. Bucshon. Oh, I understand.
    Mr. Dorsey. We'd love for it to be open. But that's a big issue that we need to understand how to address.
    Mr. Bucshon. Yes, I totally get that. I could see where if 
the algorithms were there, then smart people are going to find 
ways to subvert that, right. And there's probably some proprietary concern there--you may have a competitor in the future named something else and you don't want your processes out there. I totally respect that.
    Mr. Dorsey. Although this is an area we don't want to 
compete. We do not want to compete on health. We want to share 
whatever we find.
    Mr. Bucshon. OK. And I think many people have said, all of 
us, whether we know it or not, have some inherent biases based 
on where we grew up, what our background is, what our life 
experiences are.
    So I am really interested in how you recruit to your 
company, because I think--obviously, the tech industry has had 
some criticism about its level of diversity.
    But I think it would be important to get your feel for if 
you're going to avoid group think and you're creating 
algorithms, how do you recruit and--you're not going to ask 
somebody, hey, are you pro-Trump or against Trump. I get that, 
right. But I would argue you want to have people from 
everywhere, different races, men, women, different political 
views, because my impression is that, in some respects, in certain industries, diversity is fine as long as it's not political diversity.
    So can you give me a sense of how you build the team?
    Mr. Dorsey. Yes. This is an active conversation within the 
company right now. We recognize that we need to decentralize 
our workforce out of San Francisco. Not everyone wants to be in 
San Francisco. Not everyone wants to work in San Francisco. Not 
everyone can afford to even come close to living in San 
Francisco and it's not fair.
    So we are considering ways of how we hire more broadly 
across every geography across this country and also around the 
world and being a lot more flexible. It's finally the case that 
technology is enabling more of that and we are really excited 
about this and I am personally excited to not consider San 
Francisco to be a headquarters but to be a more distributed 
company.
    Mr. Bucshon. Yes. I just want to say I think it's very 
important to make sure that companies like yours do get a 
variety of perspectives within your employee base.
    Thank you.
    Mr. Dorsey. I agree.
    Mr. Bucshon. Thanks for being here.
    Mr. Dorsey. Thank you.
    Mr. Duncan. The chair will recognize the gentleman from 
California, Mr. Ruiz, for 4 minutes.
    Mr. Ruiz. Mr. Dorsey, you have had a long day. You're in 
the home stretch.
    So thank you for being with us today. I am glad my 
colleagues on this side of the aisle have been focusing on the 
issues that are very important to our democracy and how we 
combat foreign influences and bots and harassment and other 
challenges on your platform.
    I would like to take a step back and look more precisely at 
the makeup of Twitter's users and I am not sure we or even 
possibly you have a true understanding of who is really using 
your services and your website.
    So as you have said previously, the number of followers an 
account has is critically important, both in terms of the 
prominence of an account but also the ranking of algorithms 
that push content to users.
    So with tens of thousands of new accounts created every day, both real and fake, by humans and bots alike, I am concerned about the accuracy of the numbers we are using here today and the implications those numbers have.
    So you have said that 5 percent of your accounts are false 
or spam accounts. Is that correct?
    Mr. Dorsey. Correct.
    Mr. Ruiz. OK. And how do you measure that? Is that at any 
one time or is that over the course of any one year? How did 
you come to the conclusion of 5 percent?
    Mr. Dorsey. Yes. We have various methods of identification, 
most of them automations and machine learning algorithms to 
identify these in real time, looking at the behaviors of those 
accounts and----
    Mr. Ruiz. So that's how you identify which ones are false. 
But how did you come up with the 5 percent estimate of total 
users are fake?
    Mr. Dorsey. Well, it's 5 percent, we believe, that are taking on spam-like behaviors, which would indicate an automation or some sort of coordination to amplify information beyond their earned reach.
    So we are looking at behaviors and that number----
    Mr. Ruiz. So you just take that number versus the total 
number of users?
    Mr. Dorsey. The total active, and that number has remained 
fairly consistent over time.
    Mr. Ruiz. OK. In 2015, you reported that you had 302 
million monthly active users on your platform. In 2016, it was 
317 million monthly active users. In 2017, 330 million, and in 
2018 you said 335 million monthly active users.
    How do you define monthly active users?
    Mr. Dorsey. It's someone who engages with the service 
within the month.
    Mr. Ruiz. So is that somebody who tweets or somebody who 
retweets or somebody who just logs in?
    Mr. Dorsey. Someone who just logs in.
    Mr. Ruiz. OK. And is it 5 percent of those yearly numbers 
that you believe to be somebody who just simply logs in?
    Mr. Dorsey. Yes, who are taking on spam-like behaviors or spam-like traits.
    Mr. Ruiz. And has the 5 percent been consistent over the 
years?
    Mr. Dorsey. It has been consistent.
    Mr. Ruiz. OK. So we have heard reports of hundreds of 
Twitter accounts run by just one person. It's my understanding 
that each of those accounts are counted as separate monthly 
active users. Is that correct?
    Mr. Dorsey. Correct.
    Mr. Ruiz. OK. Good. So my concern with these questions is that the number of followers an account has, which is, obviously, a subset of those 335 million Twitter users, is an incredibly important metric to your site and one you even said this morning in front of the Senate presented too much of an incentive for account holders.
    Based on what we've heard, though, it appears that the 
number of followers may not be an accurate representation of 
how many real people follow any given account.
    For example, last year Twitter added, roughly, 13 million users, but earlier today you said you are flagging or removing 8 to 10 million per week.
    How can we be confident the 5 percent fraudulent account 
number you are citing is accurate?
    Mr. Dorsey. Well, we are constantly updating our numbers 
and our understanding of our system and getting better and 
better at that. We do see our work to mitigate----
    Mr. Ruiz. Before we end the time, I am going to ask you one 
question and you can submit the information, if you don't mind, 
and that's, basically, in medicine--I am a doctor--for any screening tool we use specificity and sensitivity, and that just measures how well your methodology works. The higher the specificity, the lower the false positives you have. The higher the sensitivity, the lower the false negatives you have.
    In this case, you can see the different arguments: how many false positives versus how many false negatives. We are concerned that you're going to have false negatives with the Russian bots.
    Some are concerned that with your false positives you're taking out people who legitimately should be on there.
    So if you can report to us what the specificity and sensitivity of your mechanism for identifying bots are, I would really appreciate that. That will give us a sense of where your strengths are and where your weaknesses are.
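    To make the request concrete, sensitivity and specificity are computed from the confusion matrix of the detection results. A short worked example with made-up numbers:

# Worked example with made-up numbers: sensitivity and specificity of a
# hypothetical bot-detection screen.
true_positives  = 900     # bots correctly flagged
false_negatives = 100     # bots missed (the false-negative worry)
true_negatives  = 9_700   # legitimate accounts correctly left alone
false_positives = 300     # legitimate accounts wrongly flagged

sensitivity = true_positives / (true_positives + false_negatives)   # 0.90
specificity = true_negatives / (true_negatives + false_positives)   # 0.97

print(f"sensitivity = {sensitivity:.2%}, specificity = {specificity:.2%}")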
    Mr. Dorsey. Thank you.
    Mr. Duncan. Point's well-made and the gentleman's time has 
expired.
    The chair will go to Mr. Flores from Texas.
    Mr. Flores. I thank you, Mr. Chair, and I appreciate, Mr. 
Dorsey, you showing up to help us today.
    If you don't mind, I am going to run through a bunch of questions that I will ask Twitter to supplementally answer later, and then I have a question or two at the close that I would like to try to get asked.
    Our local broadcasters provide a valuable service when it 
comes to emergency broadcasting or broadcasting of different 
events that happen. You heard Mr. Burgess earlier talk about 
the TV station that was attacked this morning and the first 
notice he got was on Twitter.
    So my question is this. Should Twitter be considered a 
trusted advisor in the emergency alerting system and how do you 
manage the intentional or unintentional spread of 
misinformation or abuse by bad actors on this platform during 
times of emergency? And you can supplementally answer that, if 
you would.
    And then the next question is--this has to do with free speech and expression--does Twitter proactively review its content to determine whether a user has violated its rules, or is it only done once another user voices concerns?
    And the next question is, do you have a set of values that Twitter follows when it makes decisions about flagged content, or is it done on a case-by-case basis, and which individuals at Twitter make judgment calls?
    The next one is a conceptual question I would like you to try to answer, and that's how do you balance filtering and moderating versus free speech.
    I mean, there's always this tenuous balance between those 
two. So if you could, I would like to have you respond to that.
    Then we need some definition. This is an oversight hearing. 
We are not trying to legislate. We are just trying to learn 
about this space.
    And so I would like to have Twitter's definitions of 
behavior, Twitter's definition of hateful conduct, Twitter's 
definition of low quality tweets.
    An explanation of the abuse reports process, and also you 
said you had signals for ranking and filtering. I would like to 
know how that process works, if we can.
    I would like to know more about the Trust and Safety 
Council, how it works, and its membership--some of that's 
publicly available, some of it's not--and then the Twitter 
definition of suspicious activity.
    And here's the question I have in the last minute that I 
have that I would like you to respond to. A lot of the social 
media space has been through some tumultuous times over the 
past 18 to 24 months, and so my question is this.
    If we were to have a hearing a year from now, what would be 
the three biggest changes that Twitter has made that you would 
share with Congress?
    Mr. Dorsey. That's an excellent question. So I believe, 
first and foremost, we see a lot of progress on increasing the 
health of public conversation.
    Second, I believe that we have reduced a bunch of the 
burden that a victim has to go through in order to report any 
content that is against them or silencing their voice or 
causing them to not want to participate in the public space in 
the first place.
    And then third, we have a deeper understanding of the real-
world effects off platform of our service both to the broader 
public and also to the individual as well, and those are things 
that I think we can and will make a lot of progress on, the 
latter one being probably the hardest to determine. But I think 
we are going to learn a lot within these 2018 elections.
    Mr. Flores. OK. I thank you for your responses and I know 
you have got team people back there that took good notes on the 
other ones that I left for supplemental disclosure.
    Thank you. I yield back.
    Mr. Costello [presiding]. Yields back.
    The gentleman from Illinois, Mr. Rush, is recognized for 4 
minutes.
    Mr. Rush. Mr. Dorsey, I certainly want to thank you for 
being here and for really enduring this marathon of questions.
    I want to go back to the beginning of this hearing where 
Mr. Pallone discussed the need for an independent third party 
institute to conduct a civil rights audit of Twitter and I am 
not sure of your answer. It was kind of vague to me.
    So I ask the question, are you willing to commit to or are 
you saying that Twitter will consider Mr. Pallone's request? Is 
that a commitment or is that just a consideration?
    Mr. Dorsey. Yes. We are willing to commit to working with 
you and staff to understand how to do this best in a way that 
is actually going to show what we can track and the results.
    But I think that is a dialogue we need to have.
    Mr. Rush. Thank you.
    Chicago is experiencing an epidemic of violence 
particularly as it relates to our young people and Facebook has 
already been confirmed as an asset that is being used by some 
of these young people to commit violence.
    And my question to you, are you aware of where Twitter was 
used to organize or perpetuate any form of street violence 
anywhere in the Nation and, certainly, in Chicago?
     Mr. Dorsey. We do look at cases and reports where people 
are utilizing Twitter and coordinating in terms of having off-
platform violence.
    We do have a violent extremist group policy where we do 
look at off-platform information to make judgments.
    Mr. Rush. And is there an automatic process for the removal 
of such posts?
    Mr. Dorsey. Yes. There is a reporting process. But, again, 
it does require right now for removal of the post a report of 
the violation.
    Mr. Rush. So are they removed, though?
    Mr. Dorsey. Sorry?
    Mr. Rush. Are they removed?
    Mr. Dorsey. How many have been removed? We----
    Mr. Rush. No. Have you removed any?
    Mr. Dorsey. Have we removed any? We do often remove content 
that violates our terms of service. We have a series of 
enforcement actions that ranges from a warning to temporary 
suspension and removal of the offending tweet all the way to a 
permanent suspension of the--of the account.
    Mr. Rush. All right. In that regard, do you also have any 
authoritative actions that you have taken to inform local 
police departments of these kind of activities?
    Mr. Dorsey. We do have partnerships with local enforcement 
and law enforcement agencies all over the world and we do 
inform them as necessary.
    Mr. Rush. All right. Let me ask you one other final 
question here. I want to switch. Your legal and policy chief 
told Politico yesterday, ``There is not a blanket exception for 
the President or anyone else when it comes to abusive 
tweeting.''
    Do you consider President Trump's tweets to be abusive or 
harmful at all?
    Mr. Dorsey. We hold every account to the same standards in 
the consistency of our enforcement. We do have a clause within 
our terms of service that allows for public interest and 
understanding of public interest per tweet and we definitely 
weigh that as we consider enforcement.
    Mr. Rush. Mr. Chairman, my time is----
    Mr. Costello. Yes.
    Mr. Pallone. Mr. Chairman, I seek unanimous consent to 
submit a statement for the record on behalf of our colleague, 
Representative Anna Eshoo of California.
    Mr. Costello. Without objection.
    [The information follows:]
    [GRAPHIC] [TIFF OMITTED] T6155.032
    
    [GRAPHIC] [TIFF OMITTED] T6155.033
    
    Mr. Costello. The gentlelady from Indiana, Mrs. Brooks, is 
recognized for 4 minutes.
    Mrs. Brooks. Thank you, and thank you, Mr. Dorsey, for 
being here today and for sitting through an entirely very long 
day of a lot of questions.
    And I want to share with you and stay a little bit on the 
public safety angle. In 2015, I was very pleased because we got the Department of Homeland Security Social Media Improvement Act signed into law, and the working group it created has been organized and has been meeting.
    They've issued about three different reports, and one of the reports is focused on countering false information in disasters and emergencies.
    Another one focuses on best practices for incorporating social media into public safety exercises, and then how to operationalize social media for public safety.
    I would be curious whether or not you and your team, A, if 
you even knew anything about this group and whether or not you 
and your team might be willing to assist this group.
    While I recognize that you have contacts around the globe, 
there actually is a public safety social media group that's 
very focused on this and I think we need to have better 
interaction between the social media platforms and 
organizations and the public safety community so they can 
figure this out.
    Is that something you might be willing to consider?
    Mr. Dorsey. Yes. I was not aware of it, honestly, but I am 
sure my team is and we'll definitely consider.
    Mrs. Brooks. Thank you.
    I am curious, and I asked Mr. Zuckerberg this when he 
appeared before us--with respect to the terrorism groups and 
the extremist groups that you monitor and that you take down--
and I have seen reports that in a short period of time, July of 
2017 to December of 2017, you actually took down 274,460 
Twitter accounts in a 6-month period relative to promoting 
terrorism, and so that seems like a very large number of 
accounts and I am afraid that people believe that it's not 
happening. We don't hear about it as much.
    Can you--and I understand that you have worked with Google, 
YouTube, Facebook, and others to create a shared database of 
prohibited videos and images. But we don't hear anything about 
that either. Is this database still in use? Are you all still 
working together and collaborating?
    Mr. Dorsey. Yes. We are still working together and this is 
a very active collaboration and a lot of the work we've been 
doing over the years continues to bear a lot of fruit here.
    But we are happy to send to the committee more detailed 
results. We do have this in our transparency report.
    Mrs. Brooks. And I was going to ask, the transparency 
report--and you have talked about that a few times--it's not 
done yet. Is that right?
    Mr. Dorsey. It's not finished yet for actions upon content 
in accounts that have to do with our health aspects. It is for 
terrorism accounts.
    Mrs. Brooks. It is finished there. All of these questions 
that you have gotten, and there have been a lot of things, can 
we expect that a lot of these things might be in that 
transparency report that people have been asking you about?
    Mr. Dorsey. Yes. The first step is to figure out what is 
most meaningful to put in there. So, really, designing the 
document so that people can get meaningful insight in terms of 
how we are doing and what we are seeing and what we are dealing 
with, and then we need to aggregate all that data.
    So we are in the early phases of designing this document 
and how we are thinking about it. But we'd like to move fast on 
it because we do believe it will help earn trust.
    Mrs. Brooks. Well, and certainly from a public safety 
perspective you can't and shouldn't divulge everything that you 
do relative to helping keep us safe.
    And while I appreciate that it is very important to have an open dialogue and to have as much information as possible in the conversation in the public square, I certainly hope that in your work with law enforcement we make sure the bad guys don't understand what you're doing to help us.
    And so I thank you and look forward to your continued work 
in this space.
    Mr. Dorsey. Thank you so much.
    Mrs. Brooks. Thank you.
    Mr. Walden [presiding]. The gentlelady's time has expired.
    The chair now recognizes the gentleman from Pennsylvania, 
Mr. Costello, for 4 minutes.
    Mr. Costello. Thank you.
    Mr. Dorsey, in your testimony you identified a handful of 
behavioral signals but you noted Twitter uses thousands of 
behavioral signals in your behavioral-based ranking models.
    Could you provide the committee with a complete accounting 
of all of these signals?
    Mr. Dorsey. A lot of those signals are changing constantly. 
So even if we present one today it might change within a week 
or within a month.
    The point is that it's not a thousand behavioral signals. 
It's a thousand decision-making criteria and signals that the 
algorithms use.
    And I don't mean exactly a thousand--it could be hundreds, 
it could be thousands--they all vary--to actually make 
decisions.
    Mr. Costello. Would you consider providing a more expansive 
list of signals beyond the small handful that you have 
provided, specifically those that seem to endure and that don't 
change week to week?
    Mr. Dorsey. We are looking at ways to open up how our 
algorithms work and what criteria they use to make decisions. 
We don't have conclusions just yet and the reason why we are 
pausing a little bit here and considering is because by giving 
up certain criteria we may be enabling more gaming of the 
system----
    Mr. Costello. Sure.
    Mr. Dorsey [continuing]. Taking advantage of the system so 
that people can bypass our protections.
    Mr. Costello. You used the term a little earlier curators. 
Is that a position within your company or did you just kind 
of--what's a curator at your company do?
    Mr. Dorsey. Yes. We have a product within Twitter called 
Moments and what it is is if you go to the search icon you can 
see a collection of tweets that are actually arranged by 
humans, organized around a particular event or a topic. So it 
might be a sporting event, for example.
    And we have curators who are looking for all the tweets 
that would be relevant and one of the things that they want to 
ensure is that we are seeing a bunch of different 
perspectives----
    Mr. Costello. Relevant based on my behavior and do I have 
to manually do that or is that going to show up in my feed?
    Mr. Dorsey. We do that work and then sometimes you make it 
a Moment that is more personalized to you based on your 
behavior. In some cases, all people get the same Moment.
    Mr. Costello. Would that be subject--and, listen, the bias 
issue--but that would open up consideration for there to be 
more bias in any way.
    Bias can mean a lot of different things. It doesn't even 
have to be political. So your curators are making some sort of 
subjective determination on what might be of interest--what 
might pop more--what might get more retweets, comments, et 
cetera?
    Mr. Dorsey. Well, they use a data-driven approach based on 
the percentage of conversation that people are seeing. So we 
are trying to reflect how much this is being talked about on 
the network, first and foremost, and then checking it against 
impartiality and also making sure that we are increasing the 
variety of perspective.
    Mr. Costello. I appreciated your written testimony. You 
said something in there that interests me and that--a lot of 
things--but one was you have no incentive to remove people from 
your--in other words, you have no incentive to remove 
conservatives from your platform because the more people 
talking the better.
    But it strikes me that, when we are talking about hate 
speech or personal insults or things that are just straight up 
mean there's an incentive not to remove that stuff if it's 
driving more participation.
    How do you reconcile that?
    Mr. Dorsey. It's an excellent question, and something that 
we have balanced in terms of, number one, our singular 
objective is to increase the health of this public square and 
this public space, and we realize that in the short term that 
will mean removing accounts.
    And we do believe that increasing the health of the public 
conversation on Twitter is a growth vector for us but only in 
the long term and we--over the past few months we've taken a 
lot of actions to remove accounts en masse.
    We reported this during our past earnings call and the 
reaction was what it was. But we did that because we believe 
that, over the long term, these are the right moves so that we 
can continue to serve a healthy public square.
    Mr. Walden. The gentleman's time----
    Mr. Costello. Yes. Thank you. I yield back.
    Mr. Walden. The chair now recognizes the gentleman from 
Oklahoma, Mr. Mullin, for 4 minutes.
    Mr. Mullin. Thank you, Mr. Chairman, and Mr. Dorsey, thank 
you so much for being here.
    I've got a question, and this isn't a gotcha question. It's 
a point to which I want to try to make because as my colleague 
from Virginia, Mr. Griffith, said earlier, he doesn't believe 
that you're doing it on purpose.
    It's just that the way things are working out the system to 
which you guys use to figure out who's going to be censored and 
who's not.
    So my question is would you consider yourself conservative? 
Liberal? Socialist? How would you consider your political 
views?
    Mr. Dorsey. I try to focus on the issues so I don't.
    Mr. Mullin. Well, I know, but the issues are at hand and 
that's what I am trying to ask.
    Mr. Dorsey. What issues in particular?
    Mr. Mullin. Well, OK. Are you a registered voter?
    Mr. Dorsey. I am a registered voter.
    Mr. Mullin. Republican? Democrat?
    Mr. Dorsey. Independent.
    Mr. Mullin. Independent. So as a business owner myself, 
different departments that I have seem to take on the 
personality of the ones that I have running it--the people that 
I have running a department or a business or an organization.
    When I stepped down as CEO of my company, the new CEO took 
on a different personality and the employees followed. And we 
are choosing one mindset over another in some way, regardless 
if you're doing it on purpose or not.
    The way that it is being picked, the way it's being 
portrayed, is somewhat obvious and let me just simply make my 
point here.
    During the 2016 presidential campaign, Twitter was accused of suspending an anti-Hillary focused account and de-emphasizing popular hashtags. In October 2017, Twitter barred Marsha Blackburn's campaign video from its ad platform, calling it inflammatory.
    November 2017, a single rogue employee deactivated Trump's 
account for 11 minutes. That's shocking that a single rogue 
employee could actually have that much authority to do that.
    That's a different question for a different day, maybe. 
July 2018, Twitter was accused of limiting the visibility of certain Republican politicians by preventing their official accounts from appearing in its auto-populated drop-down search bar results.
    August 2018, conservative activist Candace Owens' account was suspended after, essentially, imitating an account from a New York Times editorial board member, Susan--I think I am pronouncing this right--Jeong. Are you familiar with this?
    Mr. Dorsey. Yes.
    Mr. Mullin. Let me read what Ms. Jeong wrote: 
``#cancelwhitepeople. White people marking up the internet with 
their opinions like dogs pissing on fire hydrants. Are white 
people genetically predisposed to burn faster in the sun, thus 
logically being only fit to live underground like grovelling 
goblins? Oh, man, it's kind of sick how much I enjoy--or, how 
much joy I get out of being cruel to old white men. I open my 
mouth to populate--to politely greet a Republican but nothing 
but an unending cascade of vomiting flows from my mouth.''
    Now, that same tweet went out by Candace Owens but replaced 
Jewish for white. Ms. Owens' account was suspended and flagged. 
The New York Times reporter's account wasn't.
    What's the difference?
    Mr. Dorsey. So we did make a mistake with Owens----
    Mr. Mullin. But I've heard you say that multiple times we 
made a mistake. I've heard you say that the whole time you have 
been up here, and you have been very polite and pretty awesome 
at doing it.
    But the fact is it's bigger than a mistake. It's the 
environment to which I think Twitter has. My point of the first 
question was does that fit your political views to which your 
company is following? Because there seems to be----
    Mr. Walden. The gentleman's time----
    Mr. Mullin [continuing]. A pattern here.
    Mr. Dorsey. No, it doesn't. I value variety in perspective 
and I value seeing people from all walks of life and all points 
of views, and we do make errors along the way both in terms of 
our algorithms and also the people who are following guidelines 
to review content.
    Mr. Walden. The gentleman's time has expired.
    Mr. Mullin. Thank you. I yield back.
    Mr. Walden. The chair recognizes the gentleman from 
Michigan, Mr. Walberg, for 4 minutes.
    Mr. Walberg. Thank you, Mr. Chairman, and thank you, Mr. 
Dorsey, for being here, and it's been a long day for you. It's 
an important day, though.
    I guess the only complaint I would have thus far is that 
your staff didn't prepare well enough to go through 535 members 
of Congress to see if there were any biases and have those 
figures for us today that you could answer.
    I would assume that they should have thought that with 
Republicans and Democrats here and the statements that we've 
heard from the other side of the aisle that that question would 
come up--those facts, those statistics--at least on the 535 
members.
    It would have been worth being able to answer right today 
with an imperative no, there was no bias, or yes, it appears 
there was a bias. That's the only complaint I have.
    But let me go to the questions. In a July 26th, blog post, 
Twitter asserted, ``We believe the issue had more to do with 
how other people were interacting with these representatives' 
accounts.'.
    What specific signals or actions of other accounts 
interacting with the representative's account would you 
suggest--this is my question--contributed to the auto suggest 
issue?
    Mr. Dorsey. The behaviors we were seeing were actual 
violations of our terms of service.
    Mr. Walberg. Clear violations of your terms--would muting 
or blocking another user's account contribute to that?
    Mr. Dorsey. No. These were reported violations that we 
reviewed and found in violation.
    Mr. Walberg. And retweeting or boosting wouldn't be a 
contribution to what you did either. Does Twitter have policies 
and procedures in place to notify accounts or users when their 
messages or content have been hidden from other users?
    Mr. Dorsey. We don't have enough of this so we do have a 
lot of work to do to help people understand why--right in the 
products why we might rank or why we might filter or put their 
content behind an interstitial, and that is an area of 
improvement. So we haven't done enough work there.
    Mr. Walberg. So while--and I appreciate the fact you 
don't--you don't want to have users be responsible for 
contacting you about issues, you ought to be catching some of 
this stuff.
    You have no specific timeline or strong policy in place to 
notify me, for instance, that there's a reason why you have 
taken me down, blocked or whatever, for the time being so I can 
at least respond to that and can make a change so that I am a 
productive positive member of Twitter.
    Mr. Dorsey. Well, if we take any enforcement action that 
results in removal of content or asking the removal you get 
notified immediately.
    Mr. Walberg. Immediately?
    Mr. Dorsey. It's just a question of the filtering or the 
timeline ranking, where we don't have a great way of doing this 
today.
    It is our intention to look deeper into this but--and I 
know this is a frustrating answer but the timelines are a 
little bit unpredictable. But we do believe that transparency 
is an important concept for us to push because we want to earn 
more people's trust.
    Mr. Walberg. With regard to internet service providers, 
they're required to disclose if they are throttling or blocking 
their services. Of course, that's been a big issue.
    Would you be open to a similar set of transparency rules 
when you have taken actions that could be viewed as blocking or 
throttling of content?
    Mr. Dorsey. We are considering a transparency report around 
our actions regarding content like this. We are in the phase 
right now of understanding what is going to be most useful in 
designing the document, and then doing the engineering work to 
put it in place so we can aggregate all the information.
    But I do think it's a good idea and something that I do 
think helps earn people's trust.
    Mr. Walberg. Well, I wish you well on it because I don't 
want to be like my colleagues on the other side of the aisle 
that want to regulate. This is the amazing social media 
opportunity we have.
    We want to keep it going properly. I don't want to see 
government get involved in regulating if you folks can do the 
job yourselves.
    Thank you. I yield back.
    Mr. Walden. The gentleman yields back.
    The chair recognizes Mr. Duncan for 4 minutes.
    Mr. Duncan. Thank you, Mr. Chairman, and Mr. Dorsey, thank 
you for being here. We've heard a lot today about content 
filters, shadow banning, and a little bit about bias, and I 
would like to focus on bias for just a second.
    A member of my staff, working on a communications project 
unrelated to this topic, recently created a test Twitter 
account, even before we knew that this hearing was going to 
take place.
    They were interested to note who was listed on the 
``suggestions for you to follow'' list. This is a pro-life 
conservative congressional staffer on a work computer whose 
search history definitely doesn't lean left. All they entered 
was an email address and a 202 area code phone number.
    Yet, here's who Twitter suggested they follow, and you will 
see it on the screen: Nancy Pelosi, Kamala Harris, John 
Dingell, Chuck Schumer, John Kerry, Ben Rhodes, David Axelrod, 
Kirsten Gillibrand, Jim Acosta, Alexandria Ocasio-Cortez, Paul 
Krugman, Madeleine Albright, Claire McCaskill, Chuck Todd, and 
Jon Lovett--all left-leaning political types. That's all she 
got as ``suggested for you to follow.''
    Forget the fact that there aren't any Republicans or 
conservatives on that list. No singers, no actors, no athletes, 
no celebrities. She's a 20-something female staffer. Didn't 
even get Taylor Swift, Chris Pratt, Cristiano Ronaldo, or Kim 
Kardashian. All she got was the suggestions that I had on the 
screen.
    Look, it's one thing not to promote conservatives even 
though Donald Trump is the--truly, the most successful Twitter 
user in the history of the site. Say what you want about what he 
tweets but President Trump has utilized Twitter in 
unprecedented ways to get around the traditional news media.
    I would think that someone in your position would be 
celebrating that and him rather than trying to undermine him. 
So how do you explain how a female 20-something-year-old who 
just put in an email address and a 202 area code--why does she 
only get the liberal suggestions?
    Mr. Dorsey. We simply don't have enough information in that 
case to build up a more informed suggestion for her. So the 202 
number is all we have so therefore----
    Mr. Duncan. So I get that you don't have much information 
on her. One hundred percent of the suggested followers were 
biased. Where was Kim Kardashian? Where was Taylor Swift? Where 
was Ariana Grande?
    In fact, I can look at Twitter's most-followed accounts, and 
they're not these people that you suggested for her. There was nothing 
on her search history on a government work computer to suggest 
that she was left leaning or right leaning or anything. Katy 
Perry, number one--she wasn't on this list. How do you explain 
that?
    Mr. Dorsey. I think it was just looking at the 202 as a 
D.C. number and then taking D.C.-based accounts and the most 
followed, probably, or most engaged with D.C. accounts. As----
    Mr. Duncan. In the 202 area code area?
    Mr. Dorsey. In the 202 area code.
    Mr. Duncan. OK. Where's Bryce Harper? Where's Ovechkin? 
Where are the Capitals? Where are the Nats? Where's D.C. 
United? Where are the sports teams?
    If you're going to use 202 area code and say that's one of 
the filters, where are those folks outside of the political 
arena? There are no athletes. There are no singers. There are 
no celebrities.
    There were only political figures of a very liberal 
persuasion suggested for her to follow. Nobody else. That shows 
bias, sir.
    Mr. Dorsey. Well, yes. We do have a lot more work to do in 
terms of our onboarding and, obviously, you're pointing out 
some weaknesses in our signals that we use to craft those 
recommendations.
    So if she were to start following particular accounts or 
engaging with particular tweets, that model would completely 
change, based on those.
    We just don't have information. It sounds like we are not 
being exhaustive enough with the one piece of information we do 
have, which is her area code.
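    The exchange above describes a cold-start fallback: with no 
follows or engagement to draw on, the suggestion model leans on 
the one signal it has (a 202 area code) and surfaces the most-
followed accounts tied to it. As a rough, hypothetical sketch 
only--not Twitter's actual code, and with invented account data 
and function names--that behavior can look like this:

    # Hypothetical sketch of a cold-start "who to follow" fallback.
    # Not Twitter's implementation; the accounts, fields, and function
    # name are invented purely to illustrate how one weak signal (an
    # area code) can dominate the suggestions a brand-new user sees.

    ACCOUNTS = [
        {"handle": "@dc_politician_a", "area_code": "202", "followers": 5_000_000},
        {"handle": "@dc_journalist_b", "area_code": "202", "followers": 2_000_000},
        {"handle": "@dc_sports_team", "area_code": "202", "followers": 900_000},
        {"handle": "@pop_star", "area_code": "310", "followers": 80_000_000},
    ]

    def suggest_follows(user_signals, limit=15):
        """Suggest accounts for a brand-new user.

        With no follows or engagement available, fall back to the
        most-followed accounts sharing the user's area code, so every
        new 202 user sees roughly the same list until richer signals
        accumulate.
        """
        area = user_signals.get("area_code")
        candidates = [a for a in ACCOUNTS if a["area_code"] == area]
        candidates.sort(key=lambda a: a["followers"], reverse=True)
        return [a["handle"] for a in candidates[:limit]]

    print(suggest_follows({"area_code": "202"}))
    # ['@dc_politician_a', '@dc_journalist_b', '@dc_sports_team']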
    Mr. Duncan. Mr. Dorsey, let me ask you this. After this 
hearing and me, clearly, showing this bias and a lot of other 
questions, if someone in a 202 area code that's 28 years old 
sets up a Twitter account with very limited information but has 
an email address and a 202 area code----
    Mr. Walden. Gentleman's time----
    Mr. Duncan [continuing]. Are you going to tell me today 
that they're going to get other suggested followers than the 
liberals that I mentioned?
    Mr. Dorsey. That is not a good outcome for us.
    Mr. Walden. Gentleman's time has expired.
    Mr. Duncan. Mr. Chairman, thank you.
    Mr. Walden. The chair recognizes the gentlelady from 
California, Mrs. Walters, for 4 minutes.
    Mrs. Walters. Thank you, Mr. Dorsey, for being here.
    News reports indicate that Periscope--which, as you know, is 
Twitter's live video feed app--is being used to sexually 
exploit children. These reports detail the targeting of 
children as young as 9 years old.
    At times, coordinated activity by multiple users is 
employed to persuade children to engage in sexual behavior. 
These videos can be live streamed in public or private 
broadcasts on Periscope.
    I recognize that a live video app like Periscope creates 
challenges, especially when attempting to monitor content in 
real time.
    Yet, your testimony discussing malicious election-related 
activity on Twitter reads, ``We strongly believe that any such 
activity on Twitter is unacceptable.''
    I hope that standard of unacceptability is similarly 
applied to sexual exploitation of children on Periscope, and I 
would expect that it is, considering that Twitter has a stated 
zero-tolerance policy for child sexual exploitation.
    So my questions are does Twitter primarily rely on users to 
report sexually inappropriate content or content concerning 
child safety?
    Mr. Dorsey. We do have some dependency on reports. But this 
is an area where we want to move much faster in automating and 
not, obviously, placing the blame--or the work--on the victim, 
and making sure that we are recognizing these in real time, and 
we have made some progress with Periscope.
    Mrs. Walters. So what is the average length of a live video 
on Periscope?
    Mr. Dorsey. I am not aware of that right now. But we can 
follow up.
    Mrs. Walters. OK. And what is the average response time to 
remove a live video on Periscope that is deemed to violate 
Twitter's terms of service?
    Mr. Dorsey. It depends entirely on the severity of the 
report and what the context is. So we try to prioritize by 
severity. So threats of death or suicidal tendencies would get 
a higher priority than everything else.
    Mrs. Walters. So just out of curiosity, when you say we try 
to eliminate and we have a higher priority, like, who makes 
that decision?
    Mr. Dorsey. So when people report any violations of our 
terms of service, we have algorithms looking at the report and 
then trying to understand how to prioritize those reports so 
they're seen by humans much faster.
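    As a rough, hypothetical illustration only--not a description 
of Twitter's actual pipeline, and with invented category names 
and weights--this kind of severity-based triage is commonly 
built on a priority queue, so the most urgent reports reach 
human reviewers first:

    # Hypothetical sketch of severity-based report triage.
    # Not Twitter's implementation; categories and weights are invented.
    import heapq
    import itertools

    SEVERITY = {            # higher number = more urgent (example values)
        "threat_of_violence": 100,
        "self_harm": 100,
        "child_safety": 100,
        "harassment": 50,
        "spam": 10,
    }

    class ReportQueue:
        """Hands the most severe open reports to human reviewers first."""

        def __init__(self):
            self._heap = []
            self._order = itertools.count()  # tie-breaker keeps arrival order

        def add(self, report_id, category):
            urgency = SEVERITY.get(category, 1)
            # heapq is a min-heap, so negate urgency to pop the largest first.
            heapq.heappush(self._heap, (-urgency, next(self._order), report_id))

        def next_for_review(self):
            if not self._heap:
                return None
            _, _, report_id = heapq.heappop(self._heap)
            return report_id

    q = ReportQueue()
    q.add("r1", "spam")
    q.add("r2", "self_harm")
    q.add("r3", "harassment")
    print(q.next_for_review())  # r2 -- the self-harm report is reviewed first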
    Mrs. Walters. OK. So I would assume, then, that you don't 
believe user reporting is an effective method for monitoring 
live videos on Periscope?
    Mr. Dorsey. Not over the long term.
    Mrs. Walters. Well, obviously, this is a really, really 
important issue. Is user reporting an effective method for 
monitoring private broadcasts on Periscope?
    Mr. Dorsey. Also not over the long term. But that is 
something that we need to do much more work around in terms of 
automating these.
    Mrs. Walters. So you've indicated that you need to do some 
more work around this. Do you have any timeframe for when you 
think you will be able to get this handled?
    Mr. Dorsey. We'd like to work as quickly as possible and 
make sure that we are prioritizing the proactive approaches of 
our enforcement and, again, it does go down that prioritization 
stack. But we intend to move as quickly as we can. I know that 
it's frustrating not to hear a particular time frame. But we 
are moving fast.
    Mrs. Walters. Can you explain the type of technology that 
you're using in order to change this?
    Mr. Dorsey. Yes. We'll be utilizing a lot of machine 
learning and deep learning in order to look at all of our 
systems at scale and then also prioritize the right review 
cadence.
    Mrs. Walters. OK. I yield back the balance of my time. 
Thank you.
    Mr. Walden. The gentlelady yields back.
    The chair recognizes Mr. Carter of Georgia, our last member 
to participate--thank you--for 4 minutes.
    Mr. Carter. Thank you, Mr. Chairman, and Mr. Dorsey, 
congratulations. I am the last one.
    Mr. Dorsey, in preparation for this hearing, I sent out a 
notice throughout my district. I let them know that we were 
having this hearing and that I was going to be asking questions, 
and I said, what do you think I ought to ask him?
    So I got back some pretty interesting responses to that 
and one of them came from a teenage high school student--a 
conservative teenage high school student down in Camden County. 
That's right on the Georgia/Florida state line.
    And he said, I am a conservative teenage high school 
student and I am on Twitter and I've got over 40,000 followers. 
Yet this young man has tried five times to get verification and 
has been turned down all five times.
    And his question to me was, I've got friends who are more 
liberal than me who've got less followers than me and yet 
they've been verified. Why is that? What should I tell him?
    Mr. Dorsey. First and foremost, we believe we need a 
complete reboot of our verification system. It's not serving 
us. It's not serving the people that we serve well. It really 
depends on when his friends were verified.
    We had an open verification system not too long ago that 
looked for various criteria and we verified people based on 
that. And it's not a function of how many followers you have. 
We have some verified folks who only have 5,000 followers. We--
--
    Mr. Carter. That was his point. He had 40,000. He 
couldn't--and he doesn't understand. I don't know what to tell 
him.
    Mr. Dorsey. Yes.
    Mr. Carter. It seems to me like he would have been 
verified, and what he explained to me and to staff is that they 
applied at the same time.
    Mr. Dorsey. Yes. It----
    Mr. Carter. So why was he denied and they were approved?
    Mr. Dorsey. I would need to understand his particular case. 
So I would want to know his name and we can follow up----
    Mr. Carter. We will get you that information because I 
would like to give the young man an explanation. OK. I think he 
deserves it.
    Mr. Dorsey. OK.
    Mr. Carter. All right. And let me ask you something, and I 
apologize, but being the last one sometimes you're a little bit 
redundant.
    But you were asked earlier because this committee and 
particularly the Health Subcommittee has been the tip of the 
spear, if you will, with the opioid crisis that we have in our 
country.
    As you're aware, we are losing 115 people every day to 
opioid addiction, and we just talked about the algorithms--you 
have been talking about it all day--so why is it that we 
haven't been able to get these sites off?
    What's missing? What are you identifying that you're 
missing that keeps you from getting these tweets off?
    Mr. Dorsey. I think it's more of a new behavior and a new 
approach. It's----
    Mr. Carter. This has been going on quite a while.
    Mr. Dorsey. It's certainly not an excuse. We need to look 
at these more deeply in terms of how our algorithms are 
automatically determining when we see this sort of activity and 
taking action much faster.
    Mr. Carter. OK. Fair enough.
    My last question is this, and I want to talk about 
intellectual property, particularly as it relates to live 
streaming.
    Now, you have been here all day. You were over at the 
Senate this morning and you have been here this afternoon, and 
all day long, you have been saying--and we have no other reason 
but to believe you--yes, we need to work on this--we are going 
to work on this.
    The piracy that takes place with live streaming movies and 
intellectual property like that, that's been going on for quite 
a while, hasn't it?
    Mr. Dorsey. It has.
    Mr. Carter. Why should I believe you--and we had another 
CEO of another social media company here a couple of months 
ago--same thing--we are working on it--we are going to get it 
done.
    But yet, this is something that's been going on. You ain't 
got it done yet. Why should I believe you now? And I say that 
because, Dr. Bucshon, Representative Walberg--I echo their 
comments--I don't want the Federal Government to get into this 
business. I don't want to regulate you guys. I think it'll 
stifle innovation.
    But why should I believe you if you haven't got this fixed?
    Mr. Dorsey. Well, the reason we have to still work on it is 
because the methods of attack constantly change, and we'll 
never arrive at one solution that fixes everything. We need to 
constantly iterate based on new vectors of stealing IP or 
rebroadcasting IP, for instance, because they're constantly 
changing and we just need to be 10 steps ahead of that.
    Mr. Carter. I want to believe you and I am going to believe 
you. But I just have to tell you, I hope you believe me--we 
don't want the federal--and you don't want the Federal 
Government to be in this.
    I think the success of the internet and of your products 
has been because the Federal Government stayed out of it. But 
we got to have help. We have to have a commitment, and when I 
look at this I think, why would I believe him if they've been 
working on this and haven't even got it fixed yet?
    Mr. Dorsey. Absolutely.
    Mr. Walden. The gentleman's time----
    Mr. Carter. Mr. Chairman, thank you, and I yield.
    Mr. Dorsey. Thank you.
    Mr. Walden. Thank you.
    And while we've been sitting here, I am told that Twitter 
has deleted the account that was trying to sell drugs online. 
So your team has been at work. We appreciate that.
    We have probably exhausted you and your team, and our 
members' questions, for now. We do have some letters and 
questions for the record.
    And so I, again, want to thank you for being here before 
the committee. Some of our members didn't get to all their 
questions and so we will be submitting those for the record, 
and we have a number of things we'd like to insert in the 
record by unanimous consent: a letter from INCOMPAS, Consumer 
Technology Association, and the Internet Association; an 
article from Gizmodo; an article from Inc.; a paper by Kate 
Klonick \*\; an article from NBC; an article from Slate; and an 
article from The Verge.
---------------------------------------------------------------------------
    \*\ The information has been retained in committee files and can be 
found at: https://docs.house.gov/meetings/IF/IF00/20180905/108642/HHRG-
115-IF00-20180905-SD011.pdf.
---------------------------------------------------------------------------
    [The information appears at the conclusion of the hearing.]
    Mr. Walden. Pursuant to committee rules, I remind members 
they have 10 business days to submit additional questions for 
the record. I ask the witness to submit their responses within 
10 business days of receipt of the questions.
    We ask you remain seated until the Twitter team is able to 
exit. So if you all would remain seated--thank you--then our 
folks from Twitter can leave and, Mr. Dorsey, thank you again 
for being before the Energy and Commerce Committee.
    And with that, the committee is adjourned.
    [Whereupon, at 5:43 p.m., the committee was adjourned.]
    [Material submitted for inclusion in the record follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    The responses to Mr. Dorsey's questions for the record can 
be found at: https://docs.house.gov/meetings/IF/IF00/20180905/
108642/HHRG-115-IF00-Wstate-DorseyJ-20180905-SD005.pdf.

                                 [all]