[House Hearing, 107 Congress]
[From the U.S. Government Publishing Office]
WHAT CAN BE DONE TO REDUCE THE THREATS POSED BY COMPUTER VIRUSES AND
WORMS TO THE WORKINGS OF GOVERNMENT?
=======================================================================
HEARING
before the
SUBCOMMITTEE ON GOVERNMENT EFFICIENCY,
FINANCIAL MANAGEMENT AND
INTERGOVERNMENTAL RELATIONS
of the
COMMITTEE ON
GOVERNMENT REFORM
HOUSE OF REPRESENTATIVES
ONE HUNDRED SEVENTH CONGRESS
FIRST SESSION
__________
AUGUST 29, 2001
__________
Serial No. 107-77
__________
Printed for the use of the Committee on Government Reform
Available via the World Wide Web: http://www.gpo.gov/congress/house
http://www.house.gov/reform
______
80-480 U.S. GOVERNMENT PRINTING OFFICE
WASHINGTON : 2002
____________________________________________________________________________
For Sale by the Superintendent of Documents, U.S. Government Printing Office
Internet: bookstore.gpr.gov Phone: toll free (866) 512-1800; (202) 512�091800
Fax: (202) 512�092250 Mail: Stop SSOP, Washington, DC 20402�090001
COMMITTEE ON GOVERNMENT REFORM
DAN BURTON, Indiana, Chairman
BENJAMIN A. GILMAN, New York HENRY A. WAXMAN, California
CONSTANCE A. MORELLA, Maryland TOM LANTOS, California
CHRISTOPHER SHAYS, Connecticut MAJOR R. OWENS, New York
ILEANA ROS-LEHTINEN, Florida EDOLPHUS TOWNS, New York
JOHN M. McHUGH, New York PAUL E. KANJORSKI, Pennsylvania
STEPHEN HORN, California PATSY T. MINK, Hawaii
JOHN L. MICA, Florida CAROLYN B. MALONEY, New York
THOMAS M. DAVIS, Virginia ELEANOR HOLMES NORTON, Washington,
MARK E. SOUDER, Indiana DC
JOE SCARBOROUGH, Florida ELIJAH E. CUMMINGS, Maryland
STEVEN C. LaTOURETTE, Ohio DENNIS J. KUCINICH, Ohio
BOB BARR, Georgia ROD R. BLAGOJEVICH, Illinois
DAN MILLER, Florida DANNY K. DAVIS, Illinois
DOUG OSE, California JOHN F. TIERNEY, Massachusetts
RON LEWIS, Kentucky JIM TURNER, Texas
JO ANN DAVIS, Virginia THOMAS H. ALLEN, Maine
TODD RUSSELL PLATTS, Pennsylvania JANICE D. SCHAKOWSKY, Illinois
DAVE WELDON, Florida WM. LACY CLAY, Missouri
CHRIS CANNON, Utah DIANE E. WATSON, California
ADAM H. PUTNAM, Florida ------ ------
C.L. ``BUTCH'' OTTER, Idaho ------
EDWARD L. SCHROCK, Virginia BERNARD SANDERS, Vermont
JOHN J. DUNCAN, Jr., Tennessee (Independent)
Kevin Binger, Staff Director
Daniel R. Moll, Deputy Staff Director
James C. Wilson, Chief Counsel
Robert A. Briggs, Chief Clerk
Phil Schiliro, Minority Staff Director
Subcommittee on Government Efficiency, Financial Management and
Intergovernmental Relations
STEPHEN HORN, California, Chairman
RON LEWIS, Kentucky JANICE D. SCHAKOWSKY, Illinois
DAN MILLER, Florida MAJOR R. OWENS, New York
DOUG OSE, California PAUL E. KANJORSKI, Pennsylvania
ADAM H. PUTNAM, Florida CAROLYN B. MALONEY, New York
Ex Officio
DAN BURTON, Indiana HENRY A. WAXMAN, California
J. Russell George, Staff Director and Chief Counsel
Bonnie Heald, Director of Communications/Professional Staff Member
Mark Johnson, Clerk
David McMillen, Minority Professional Staff Member
C O N T E N T S
----------
Page
Hearing held on August 29, 2001.................................. 1
Statement of:
Carpenter, Jeffrey J., manager, Cert Coordination Center,
Carnegie Mellon University................................. 56
Castro, Lawrence, Chief, Defensive Information Operations
Group, Information Assurance Directorate, National Security
Agency..................................................... 27
Culp, Scott, manager, Microsoft Security Response Center,
Microsoft Corp............................................. 140
Lewis, Alethia, deputy director, Department of Information
Technology, State of California............................ 107
Maiffret, Marc, chief hacking officer, eEye Digital Security. 160
Miller, Harris, president, Information Technology Association
of America................................................. 119
Neumann, Peter G., principal scientist, Computer Science
Laboratory, SRI International, Menlo Park, CA.............. 131
Rhodes, Keith A., Chief Technologist, Center for Technology
and Engineering, General Accounting Office................. 5
Trilling, Stephen, senior director of advanced concepts,
Symantec Corp.............................................. 150
Wiser, Leslie G., Jr., Section Chief, National Infrastructure
Protection Center, Federal Bureau of Investigation......... 37
Letters, statements, etc., submitted for the record by:
Carpenter, Jeffrey J., manager, Cert Coordination Center,
Carnegie Mellon University, prepared statement of.......... 59
Castro, Lawrence, Chief, Defensive Information Operations
Group, Information Assurance Directorate, National Security
Agency, prepared statement of.............................. 31
Culp, Scott, manager, Microsoft Security Response Center,
Microsoft Corp., prepared statement of..................... 142
Horn, Hon. Stephen, a Representative in Congress from the
State of California, prepared statement of................. 3
Lewis, Alethia, deputy director, Department of Information
Technology, State of California, prepared statement of..... 110
Maiffret, Marc, chief hacking officer, eEye Digital Security,
prepared statement of...................................... 163
Miller, Harris, president, Information Technology Association
of America, prepared statement of.......................... 123
Neumann, Peter G., principal scientist, Computer Science
Laboratory, SRI International, Menlo Park, CA, prepared
statement of............................................... 135
Rhodes, Keith A., Chief Technologist, Center for Technology
and Engineering, General Accounting Office, prepared
statement of............................................... 9
Trilling, Stephen, senior director of advanced concepts,
Symantec Corp., prepared statement of...................... 153
Wiser, Leslie G., Jr., Section Chief, National Infrastructure
Protection Center, Federal Bureau of Investigation,
prepared statement of...................................... 40
WHAT CAN BE DONE TO REDUCE THE THREATS POSED BY COMPUTER VIRUSES AND
WORMS TO THE WORKINGS OF GOVERNMENT?
----------
WEDNESDAY, AUGUST 29, 2001
House of Representatives,
Subcommittee on Government Efficiency, Financial
Management and Intergovernmental Relations,
Committee on Government Reform,
San Jose, CA.
The subcommittee met, pursuant to notice, at 10 a.m., in
room 205 of the San Jose Council Chamber at 801 North First
Street, San Jose, CA, Hon. Stephen Horn (chairman of the
subcommittee) presiding.
Present: Representative Horn.
Also present: Representative Honda.
Staff present: J. Russell George, staff director and chief
counsel; Bonnie Heald, director of communications; Elizabeth
Johnston, detailee; Scott Fagan, assistant to the subcommittee;
Mark Johnson, clerk; and David McMillen, minority professional
staff member.
Mr. Horn. This hearing of the Subcommittee on Government
Efficiency, Financial Management and Intergovernmental
Relations will come to order.
The dramatic increase in computer use and the Internet are
changing the way we communicate and conduct business. With 58
percent of Americans now having home Internet access, our
Federal, State and local governments increasingly rely on the
Internet to conduct business. More than 40 million Americans
now perform such routine activities as filing income tax
returns, health benefit claims, and renewing driver's licenses
electronically.
In addition to this wealth of personal information, the
government's computer systems hold information that is vital to
the security and economic well-being of this Nation.
Unfortunately, these systems are increasingly vulnerable to
hostile attacks that are capable of extracting unauthorized
information and potentially threatening the Nation's
infrastructure.
Overall, the number and sophistication of these attacks is
rising dramatically according to the federally funded CERT
Coordination Center. Just to explain CERT, it stands for
Computer Emergency Response Team, and it's our friends at
Carnegie-Mellon that have been working on this for years. The
number of incidents rose from 9,859 in 1999 to 21,765 in the
year 2000.
So far this year, 15,476 incidents have been recorded. An
increasing number of these attacks, often in the form of
viruses or worms, specifically target government systems. There
are more than 48,000 known worms and viruses which enable
hackers to gain access to systems and data stored on the
infected computers. Some of the most destructive of these
programs can delete system and application software and even
destroy the hardware itself. There are nearly 110 million with
Internet connections and, as we have seen, these potentially
devastating viruses or worms can become an epidemic in
microseconds.
In 1999, for example, the Melissa virus gained notoriety
because of the speed at which it spread. The first confirmed
reports of Melissa were received on Friday, March 26, 1999. By
Monday, March 29, the virus had affected more than 100,000
computers.
Last year the ILOVEYOU virus created worldwide havoc in a
matter of days costing an estimated almost $8 billion to fix it
up. Last month, worms called Code Red I and II in Roman
numerals, burrowed into nearly 1 million computer systems
worldwide and affected an estimated 100 million computer users.
E-mail systems went down for days. Workers were locked out of
crucial computer files and some e-commerce ground to a halt.
Government Web sites came under siege with the Pentagon
shutting down public access to all of its Web servers. To date,
the cost of Code Red worms have risen to more than $2 billion
and are mushrooming to about $200 million per day.
So far, these viruses and worms have not caused irreparable
damage to the Federal Government's information systems.
However, as the attacks become more sophisticated, the
magnitude of the potential threat is colossal.
We must do something more than just react to these attacks.
There is no easy fix but governments at every level must be
prepared for the next attempted invasion. Computer security
must have a priority.
Today we will examine the extent of the threat to
government computer systems and the need for policy changes to
ensure that these systems which are vital to this Nation and
its economy and its citizens are protected.
We welcome our witnesses today and we look forward to their
testimony.
[The prepared statement of Hon. Stephen Horn follows:]
[GRAPHIC] [TIFF OMITTED] T0480.001
[GRAPHIC] [TIFF OMITTED] T0480.002
Mr. Horn. Panel one will include Keith Rhodes, Chief
Technologist, Center for Technology and Engineering, of the
U.S. General Accounting Office. That is part of the legislative
branch of government headed by the Controller General of the
United States.
Mr. Castro, Larry Castro, is chief of defensive information
operations group of the Information Assurances Directorate.
General Hadon is the commanding officer of the National
Security Agency, and we welcome Mr. Castro. The Information
Assurance Directorate and the National Security Agency is
really our No. 1 intelligence group in the United States.
Leslie G. Wiser, Jr., Section Chief, National
Infrastructure Protective Center, the Federal Bureau of
Investigation. They have been particularly active and very
cooperative with the Congress just as the National Security
Agency has cooperated with the Congress on this very difficult
situation.
After Mr. Wiser, we will have Jeff Carpenter, manager of
the CERT Coordination Center that I mentioned earlier with
Carnegie-Mellon University and its Computer Emergency Response
Team.
The fifth one is Patricia Kuhar, program manager for
information technology, California State Department of
Information Technology.
In addition, one of my colleagues will be here. Mr. Honda,
the gentleman from California. Michael Honda is making his way
to the hearing from Sacramento. I wish him well. Most of you
know this because a lot of you have been before us before. But
this is an investigating committee and, as such, we do
administer an oath to make sure everything is done under oath.
So if you will stand up and put your right hands up.
[Witnesses sworn.]
Mr. Horn. The clerk will note that all four witnesses
present have taken the oath, and we can now start with Mr.
Rhodes.
STATEMENT OF KEITH A. RHODES, CHIEF TECHNOLOGIST, CENTER FOR
TECHNOLOGY AND ENGINEERING, GENERAL ACCOUNTING OFFICE
Mr. Rhodes. Thank you, Mr. Chairman.
In keeping with the rules of the committee, I'd like to
give a brief summary and have my full statement submitted for
the record.
Mr. Horn. I might add that when I name each individual,
that automatically under our rules their statement goes
immediately into the hearing record. This is being taken down
by very able people, and Mr. Rhodes knows this, and we're
delighted to have a member of the U.S. General Accounting
Office.
Mr. Rhodes. Thank you.
Mr. Chairman and members of the subcommittee, thank you for
inviting me to participate in today's hearing on the most
recent rash of computer attacks. This is the third time I've
testified before Congress over the past several years on
specific viruses. First, the Melissa virus in April 1999 and
second, the ILOVEYOU virus in May 2000. At both hearings I
stressed that the next attack would likely propagate faster, do
more damage, and be more difficult to detect and counter.
Again, we are having to deal with destruction are
reportedly costing billions. In the past 2 months,
organizations and individuals have had to contend with several
particularly vexing attacks. The most notable, of course, is
Code Red but potentially more damaging are Code Red II and its
variants and SirCam.
Together, these attacks have infected millions of computer
users, shut down Web sites, slowed Internet service, and
disrupted business and government operations. They have already
caused billions of dollars of damage, and their full effects
have yet to be completely assessed, partly because viruses and
worms don't just go away, especially the latest Code Red II
variant which seems to have been modified to enable it to
reinfect the systems it attacks.
Despite some similarities, each of the recent attacks is
very different in its makeup, method of attack, and potential
damage. Generally, Code Red and Code Red II are both worms
which are attacks that propagate themselves to networks without
any user intervention of interaction. They both take advantage
of a flaw in a component of versions 4.0 and 5.0 of Microsoft's
Internet Information Services [IIS] Web server software.
The main point I want to make about these two worms as well
as the associated virus is that in and of themselves they might
not be necessarily all that interesting. The potential of the
attacks, however, is what I would like to cover today in my
testimony.
The worms have taken an additional step compared to what
ILOVEYOU or Melissa did. Code Red itself combined a worm with a
denial of service attach, and Code Red II has combined a worm
with the ability for installing a back door for circumventing
security services inside Web service. SirCam, on the other
hand, is a virus but it's a virus that doesn't rely on, as with
ILOVEYOU, the internal mail server capability of the systems it
attacks. Rather, it brings its own e-mail software with it so
that it can send itself out.
Some of the points that I'd like to make today are that
computer security, what we need to understand from these worms
and virus attacks is that computer security is indeed a full-
time job. New threats and vulnerabilities are constantly being
identified, and measures to address those threats and
vulnerabilities are being developed and implemented.
For example, when the vulnerability exploded when Code Red
was announced, a patch was also made available at the same
time. This required installations using the affected software
to: No. 1 keep up with the vulnerabilities associated with
their software; and No. 2, install a patch to address the
vulnerability. Until this announcement, most, if not all, of
these installations did not know they had a problem.
Considering the number of affected servers, a number of sites
did not take the quick response necessary to address this new
vulnerability. For example, install the available patches.
This also underscores a point that we've made to this
committee as well as other committees and the Congress
regarding general controls of computer security across the
government. The government is not in a position to protect
itself. It does not have the talent, it does not have the
training, it does not have the early warning. We are
constantly--in my other capacity I run a computer security test
laboratory in the General Accounting Office that has done work
for this and other committees, and we are always able to break
in and usually we are able to break in undetected and we are
not using any sophisticated techniques. So it's not surprising
that Code Red, Code Red II, Code Red's latest variant, SirCam,
etc., are affected.
For example, I don't know if the gentleman from Symantec,
Stephen Trilling, is going to actually disassemble the Code Red
software for you later, but it's not very smart code. It's not
very sophisticated. Yes, it does combine denial of service
attack with its ability to be a worm, but it's not very good
code at all. When you look at it, it's thrown together and yet
it's still extremely effective.
No. 2 the attacks are coming faster after the vulnerability
is announced. About 1 month after the vulnerability was
announced, an effective attack using that vulnerability was
launched. Shortly after this attack was launched, another
attack with far more serious consequences was launched. That's
Code Red II. Code Red came out, then Code Red II came out and,
as a matter of fact, we were modifying the testimony in real
time over the last week because a new variant had come out.
No. 3 installing software is a complex business. In some
cases, entities are installing software without actually
knowing the services that are being activated. For example, we
understand that some entities were installing Windows 2000
without understanding that the ISS services were being
activated. Therefore, take for example, your own cell phone.
You probably don't know all the services that are associated
with your cell phone, and you probably don't use all of them.
However, when you buy a software package now, you're getting a
complete set of services, some of which you don't know that
they may have vulnerability.
The initial threat associated with a given attack is
difficult to assess. I think one of the reasons, Mr. Chairman,
that you and I get to see one another on an annual basis is
that $8 billion distributed across the entire world, sort of
like the first rules of physics. If I distribute the energy
across a wide enough area, nobody feels the impact. $8 billion
worldwide. Nobody seems to be willing to cry uncle, either the
government or industry or individual users.
Substantial financial impact. It's very hard to get anyone
to say that $8 billion matters. We are now on our way to, as
you pointed out, $200 million a day perhaps in impact and yet
no one is willing to scream uncle. Therefore, what is the
definition of critical infrastructure? If it's truly critical,
someone should be crying uncle by now or somebody is in a
position to not be able to cry uncle.
Affected servers. One of the additional things about the
current set of worms is that the affected servers broadcast the
fact that their resources can be compromised. It's not just
that Code Red goes in and takes over your environment, but Code
Red goes in, takes over your environment and then tells
everyone else that your environment has been compromised. The
vulnerability exploited by Code Red can be used to take over
the server. Nefarious individuals are always looking for
servers that can be compromised in this fashion.
However, rather than seeking out servers that have this
vulnerability, all a person has to do is to look at their own
network to see what servers are attempting to spread the Code
Red worm to them. Based on this information, the individual
knows that the server is vulnerable to this attack. The attacks
are indeed getting worse and worse. The attacks are coming
faster after vulnerabilities are being identified and have a
more devastating impact.
For example, the initial version of Code Red appeared about
1 month after the vulnerability was published. Shortly after
the initial release, another attack that allowed an
unauthorized individual to take over the server was launched.
In the midst of all of this gloom and doom that I'm
presenting, I would like to point out that there was one good
thing that did come out of this legislative Code Red attacks,
and that was there was very good coordination between the U.S.
Government and private industry. It was, to my mind, the first
time the government and industry had effectively worked
together. This is the first time, in a coordinated fashion,
that government and industry had worked to address a problem
such as this. This is a positive step forward. However, I will
say that this is the pound of cure rather than the ounce of
prevention.
One of my last points. Most software is not secure. Instead
of relying on the code and fix approach for software
development and security, we need to build security in the
software during the development process. Although this may
sound simple, it often conflicts with a get to market fast
development program. Users, individual, corporate and
government, are more than willing to state the mantra of it's a
trade-off between usability and cost and the probability of a
compromise remote PC is low. In other words, the users do not
want to spend the time and money to secure systems since the
``other stuff'' we do for a living is more important and
valuable. The fallacy in this argument is that the users have
not done the risk analysis that allows them to make an informed
decision about their security posture.
The last point I'd like to make, Mr. Chairman, is that in
going along with the pound of cure, your committee has talked
time and time again that there's a dearth of management inside
government and so you and others have brought about the
government Information Security Reform Act. But again, that's a
cure as opposed to a prevention because that requires
organizations like OMB, the Inspectors General, and the General
Accounting Office to come in and validate the security posture
of the departments and agencies. Again, we're in a situation,
as we were in Y2K, where the Congress is stepping in to pass
laws to make certain that people do due diligence regarding
their own security posture.
Thank you very much, Mr. Chairman. That concludes my
testimony, and I would entertain any questions from you or
committee members.
[The prepared statement of Mr. Rhodes follows:]
[GRAPHIC] [TIFF OMITTED] T0480.003
[GRAPHIC] [TIFF OMITTED] T0480.004
[GRAPHIC] [TIFF OMITTED] T0480.005
[GRAPHIC] [TIFF OMITTED] T0480.006
[GRAPHIC] [TIFF OMITTED] T0480.007
[GRAPHIC] [TIFF OMITTED] T0480.008
[GRAPHIC] [TIFF OMITTED] T0480.009
[GRAPHIC] [TIFF OMITTED] T0480.010
[GRAPHIC] [TIFF OMITTED] T0480.011
[GRAPHIC] [TIFF OMITTED] T0480.012
[GRAPHIC] [TIFF OMITTED] T0480.013
[GRAPHIC] [TIFF OMITTED] T0480.014
[GRAPHIC] [TIFF OMITTED] T0480.015
[GRAPHIC] [TIFF OMITTED] T0480.016
[GRAPHIC] [TIFF OMITTED] T0480.017
[GRAPHIC] [TIFF OMITTED] T0480.018
[GRAPHIC] [TIFF OMITTED] T0480.019
[GRAPHIC] [TIFF OMITTED] T0480.020
Mr. Horn. Yes. We will have all the presenters and get it
all on the table and then we'll go to questions.
We now have Larry Castro, Chief Defensive Information
Operations Group of the Information Assurance Directorate of
what is probably our greatest national intelligence agency, the
National Security Agency. Thank you, Mr. Castro, for coming.
STATEMENT OF LAWRENCE CASTRO, CHIEF, DEFENSIVE INFORMATION
OPERATIONS GROUP, INFORMATION ASSURANCE DIRECTORATE, NATIONAL
SECURITY AGENCY
Mr. Castro. Thank you, sir. Good morning. Thank you for
that kind introduction. On behalf of our Director, Lieutenant
General Mike Hadon, I am pleased to respond to the
subcommittee's invitation to discuss NSA's view of the threats
posed by malicious computer code, particularly viruses and
worms.
My name is Larry Castro. I lead the Defensive Information
Operations Group within NSA's Information Assurance
Directorate. I'm accompanied today by Mr. Steve Ryan, a senior
technical director in our group. We have submitted to the
committee a formal statement for the record, and what I'd like
to do is just summarize some of the key points of that as well
as refer you to a few graphics that we put together.
As the chairman has most kindly pointed out, NSA is
probably most well known for its signals intelligence or SIGINT
mission which provides critical information about a wide range
of foreign intelligence topics. Our Information Assurance
mission to protect national security related information is an
equally vital part of NSA's 50 year history and it's in this
capacity of representing NSA's information assurance capability
that I appear before you today.
What I'd first like to do in the next chart is to share
with you the larger context with which we approach our
information assurance mission and that is we seek in our
products and the services that we provide to our customers
within the national security community to provide products and
services that emphasize these five attributes. We are, of
course, most well known for historically providing very high-
grade encryption products, but as the world of networking has
evolved, we have branched out and our products now seek to help
ensure the availability of communications, to protect data
integrity, and to ensure the ability to authenticate and have
non-repudiation among users.
Even with these within the even larger framework, we
operate our entire information assurance mission, and that is
to say again we seek to work across a wide spectrum with regard
to computer and cyber incidents ranging from providing the
technology to protect to engaging in services in cooperation
with the U.S. Space Command and Joint Task Force on Computer
Network Operations to detect and report on incidents in cyber
space and then finally in support of the Defense Information
System Agency to react to those incidents.
What the chart seeks to depict is to say that to do all of
this you need to have that mix among technology, operations and
personnel. The technology needs to be robust and the people, as
has been pointed out in Mr. Rhodes' testimony, need to be well-
trained to do the job. And then finally, you have to implement
a sound information assurance policy.
I'd like to share with you all our view of the environment
in which we're operating. Here, this is not a piece of modern
art. It, in fact, is a result of work done by Doctor Bill
Cheswick at Lumina wherein he has developed a capability of
scanning the Internet. This is a scan of some 80,000 Internet
routers. Each of those dots, should they be capable of being
resolved, is one such router and the connections between the
routers are color-coded to show the state of conductivity.
Within NSA and within our Information Assurance Defensive
Operations Group we have a number of customers who correspond
to one or more of those dots, and our job is to provide the
situation awareness of what's going on among that whole milieu
of dots, in particular, looking for the routers associated with
bad actors. And I will try to describe some of the techniques
that we use to do that. The sort of take way though is that the
impression that you're given and the reason I like to use this
chart is that this is an exploding environment. It's continuing
to grow and branch out and that there are no boundaries in that
chart up there. We don't see any State boundaries within the
U.S. Department of Defense. We don't see any boundaries between
U.S. Space Command, U.S. Central Command. And this is the
message that we take, that the vulnerability of one leads to
the vulnerability of all.
Going now to discuss a little bit about the threat. It's
clearly one that has many, many dimensions and, from our
perspective at NSA, we see folks in each of those clouds
playing in cyber space. They have varying motives. Some are
just in it for ego, quite frankly. Others are there for
financial gain and occasionally we detect those who are there
for serious data mining, possibly even espionage.
In the next chart we attempt to define the classes of
attacks that we are contemplating. Starting from the left and
then working to the right, we would simply alert the committee
that there is a credible threat actually even in the
distribution of software. The ability to implant this malicious
code as the software is put into shrink wrap does exist and, of
course, there are many who are concerned about this and are
reacting to it.
Then with regard to the actual communication structures
within the Internet itself, as shown there, there are both
passive and active means of monitoring those structures, of
inserting one's self in for less than good purposes. Of course,
the main thrust of this presentation and this committee's work
is the active remote attack that we show there in the bottom
and that is surely one for which and through which we see the
majority of incidents that we work on today.
And then getting actually into the enclave that we seek to
defend. There are those who would simply stand off just outside
this enclave, perhaps just outside this window, attempting to
influence the cyber environment and then, quite frankly, sir,
the thing that we're most concerned about within the Department
of Defense, and it's been borne out over the last several
years, is the insider threat. Again, the insider, either
cooperating with outsiders or on its own, can do quite a bit of
damage.
The other thing that needs to be noted is more and more we
see the appearance of bulletin boards, chat rooms and other
fora allowing hackers and those who would attempt to do harm in
cyber space to exchange information. What this chart attempts
to depict is that freeware that allows someone to become a
scrip kitty and perhaps even become more extensive is readily
available, is increasing in complexity and simply allows more
efficient work on behalf of the hacker.
Now I'd like to turn to an examination that we completed
within the Department of Defense looking at incidents over the
last quarter. That would be to say the last 3 months preceding
this one. What we did was to look at the apparent origin of the
incidents that we are recording in the Department of Defense in
the Joint Task Force on Computer Network Operations.
Interestingly, as you can see, for that particular quarter and
for a number of different reasons having to do with lots of
things going on in the world, China was the country of apparent
origin for over 20 percent of the incidents recorded within the
Department of Defense. The others in the top 10 are shown
there.
I do have to make one clarification with regard to apparent
origin. As many know, the apparent origin is simply the last
place that we see an attack coming from. As the chart here
shows, the actual perpetrator could be located anywhere behind
that apparent origin location. However, I still think it's
useful to show which countries are being implicated, either
wittingly or unwittingly, in these kind of attacks and
intrusion attempts.
As has been discussed over the last 3 months, there have
been a number of different worms and viruses and attacks that
have shown up. One that impressed us most was the one referred
to as the W32 Leaves worm or just the Leaves worm. Without
going into the details--time doesn't allow--simply to say that
this was a very, very complex attack. What impressed us most
was the fact that when it was all said and done, the intruder
down there in the lower right had the capability, estimates
say, to control with one single set of commands about 24,000
zombies that he had established in his network. He did it in a
very, very sophisticated way, a way that involved from time to
time using encryption of his commands and, as I said before, he
was able in the end to setup a command and control mechanism
that did not require him to communicate individually with each
of the computers under his control, but rather he used an
Internet relay chat channel to provide both updates to his
zombies and to provide commands.
We actually saw no harmful activity that came from this
attempt to setup this distributed computing network, but I
think it is indicative of the sophistication that we can expect
to see in the future.
Now with regard to what we would suggest are the ways
ahead, and they have already been very well covered by Mr.
Rhodes so I will only seek to reiterate one more time. There's
clearly a very, very strong component of education and
awareness, not only for the practitioners but, we would submit,
for the Nation at large. We would commend the committee. We
think that having this hearing involving both government
entities, academia, and the industry is a very, very important
way of getting that message out.
We would also like to share with the committee the fact
that within NSA, trying to get to the point again raised by Mr.
Rhodes with regard to having sufficient folks well-trained, we
have established an Academic Centers of Excellence Program that
uses community-accepted criteria for validating the curricula
of universities who engage in information assurance-related
education.
Within California, of the 23 universities that have been so
designated, U.C. Davis, Stanford University and the Naval Post-
Graduate School of Monterey have been designated as Academic
Centers of Excellence for information assurance education.
The second point is that giving increasing emphasis on
anticipatory defensive measures. Specifically by this, we mean
the fact that, again, as has already been pointed out, every
one of the vulnerabilities that are being exploited by those
who would do harm in cyber space are known beforehand and are
anticipated by the hacker before the defense community makes
the necessary patch.
To give you an idea of how we are always behind the power
curve, last year within the Department of Defense, there were
on the order of 24,000 what we would describe as incidents. Our
definition of incidents is different from those used by the
Search CC, so the numbers aren't quite the same.
But the important take away is that we estimate that at
least 80 percent of the those 24,000 incidents could have been
prevented had the patch to close the particular vulnerability
in question been in place in a proper amount of time. And
that's not to say that the department doesn't give high
visibility to making these patches, but it is, quite frankly, a
resource issue. The same system administrator who's charged
with making that patch is also charged with keeping that
computer system up and supporting his commander and, of course,
that's usually what takes the priority.
And then finally, as was mentioned again previously, the
kind of interaction between governmental entities and between
the government and industry that we saw so well carried out
during the Code Red campaign is in fact what we would suggest
be the model for the future. If we have that kind of continued
cooperation, if we have the mechanisms in place, both
mechanical mechanisms and, quite frankly, emotional and thought
process mechanisms, we believe we can go a long way in getting
ahead of the power curve.
That concludes my testimony, sir, and we'd be glad to take
questions at the appropriate time.
[The prepared statement of Mr. Castro follows:]
[GRAPHIC] [TIFF OMITTED] T0480.021
[GRAPHIC] [TIFF OMITTED] T0480.022
[GRAPHIC] [TIFF OMITTED] T0480.023
[GRAPHIC] [TIFF OMITTED] T0480.024
[GRAPHIC] [TIFF OMITTED] T0480.025
[GRAPHIC] [TIFF OMITTED] T0480.026
Mr. Horn. Well, thank you very much. We'll have a number of
questions very shortly here.
Now we have Leslie Wiser, the Section Chief for the
National Infrastructure Protection Center of the Federal Bureau
of Investigation. I want to thank you very much for the
cooperation you have had with the Congress and this committee
and bringing people from all over the world so we could get a
good look at them. You've always helped us in this area, and
thank you, just as the National Security Agency has helped us.
So proceed, Mr. Wiser.
STATEMENT OF LESLIE G. WISER, JR., SECTION CHIEF, NATIONAL
INFRASTRUCTURE PROTECTION CENTER, FEDERAL BUREAU OF
INVESTIGATION
Mr. Wiser. Chairman Horn, thank you for those kind comments
and thank you for inviting me here today to testify about how
the National Infrastructure Protection Center [NIPC], is
addressing the threats posed to government systems by computer
viruses and worms. I have a formal statement that I will submit
for the committee, and I will continue with other remarks.
I spoke with NIPC Director Ron Dick yesterday, and he
regrets not being able to attend but asked me to forward his
gratitude as well to this committee. It's been suggested that
www stands not for World Wide Web; rather, in this context, it
seems to mean wild, wild west. Cyber crime is a new frontier
requiring new thinking and new skills. Dealing with Internet
viruses, worms and the vast spectrum of threats to government
and private sector information systems requires a dedicated and
cooperative effort. It is fitting that we are in the heart of
the information technology community. It's that cooperative
effort that I will focus on here today.
The mission of the NIPC is to detect, deter, warn of,
investigate and respond to cyber intrusions that threaten our
critical infrastructures. It is the only organization in the
United States with this national infrastructure protection
mandate. The NIPC gathers together under one roof
representatives from, among others, the law enforcement,
intelligence, and defense communities which collectively
provide a unique analytical perspective to cyber intrusion
information obtained from investigation, intelligence
collection, foreign liaison and private sector cooperation.
This perspective ensures that no single discipline addresses
cyber intrusions of critical infrastructures in a vacuum.
Rather, a cyber incident is examined as a system security
matter as well as for its potential as a counter-intelligence
defense and law enforcement matter.
While the mission of the NIPC outlined in Presidential
Decision Directive 63 is broad, our complement is relatively
small with 91 FBI employees and 18 detailees, many of whom
field critical leadership roles. I am pleased to serve with a
fine staff of dedicated men and women including NIPC's Deputy
Director, Rear Admiral James Plehal of the U.S. Naval Reserve,
who hail from 12 Federal entities and 3 foreign governments.
Please allow me to provide a few examples that demonstrate our
approach to protecting U.S. critical infrastructures including
our government information systems.
In July 2001 the NIPC issued a series of timely predictive
warnings regarding the Code Red worm. Before issuing these
warnings, the NIPC conducted daily tele-conferences with the
National Security Council, the National Security Agency, the
Defense Department's Joint Task Force for Computer Network
Operations, the Justice Department, the CIA, CERT and others to
form a consensus response strategy. As a result of this
cooperation, the impact of Code Red was successfully mitigated.
The NIPC was quick to fulfill its warning mission while
simultaneously coordinating the FBI investigation which is
continuing.
Similarly, on July 23, 2001 the NIPC, again working with
the same partners, issued an advisory regarding the Leave worm
which infected over 20,000 machines. The FBI's investigation
and analysis determined the infected computers were
synchronizing, possibly for an attack. Through the execution of
several search warrants and sophisticated analysis by our
computer scientists, we followed the trail to the United
Kingdom where New Scotland Yard identified a subject and
arrested him. In this example, the successful investigation
itself ended the threat.
In contrast to the success of the Leave worm investigation,
we are often frustrated when we are forced to obtain several
separate court orders tracing intruders back through several
ISP hot points. This is difficult enough when all the activity
is within the United States. It often becomes formidable when
the trail leads overseas. The trans-national nature of cyber
attacks requires solid liaison with foreign partners with whom
we can exchange warnings of malicious computer activity.
Currently, the NIPC has connectivity with similar centers
in the U.K., Canada, Australia, New Zealand and Sweden and in
May, I extended an offer to the German Government, which is
under consideration. We think there is great benefit in
establishing a global network including partners in time zones
ahead of us to provide early warning of attacks.
Along with foreign collaboration, cooperation with the
private sector is absolutely essential to successfully protect
U.S. critical infrastructures. As a result, the NIPC
established InfraGard where like-minded professionals can share
best practices and discuss other issues of importance to them.
InfraGard is like a neighborhood watch because members band
together to protect each other. They have shared information
about attacks with each other on a confidential basis by
providing sanitized reports to the NIPC.
In May the Safe America Foundation presented its 2001 World
Safe Internet Safety Award to the NIPC for the InfraGard
partnership. Today InfraGard boasts over 1,800 members
including 87 Fortune 500 companies in 65 chapters across the
United States and Puerto Rico.
In June the NIPC hosted the first annual InfraGard Congress
here in California where private sector representatives from
around the country gathered and elected an executive committee
to help lead this important initiative. In particular, small
startup businesses that cannot afford a dedicated security
office or fees charged by for profit security enterprises have
found a home in InfraGard.
InfraGard is a free service and puts a face on law
enforcement that enhances accessibility, communication,
cooperation and trust. I don't know of another program like it
in the world, and foreign officials and companies have
expressed an interest in creating InfraGard-like programs in
their countries. For example, Mr. Elfen Menses of the
Philippine National Bureau of Investigation, who testified
before this subcommittee last year, attended the InfraGard
Congress as an observer. He left energized and committed to
starting an InfraGard-like program in the Philippines, and we
embrace efforts to establish foreign public/private
partnerships as a step to enhancing global security.
Pursuant to PDD63, the NIPC was appointed to be the Federal
Government's liaison for Emergency Law Enforcement Services
Sector, the ELES Center, one of the critical infrastructures
identified in PDD63. The NIPC works cooperatively with the ELES
Sector Forum, a group of seasoned State and local law
enforcement professionals, to protect State and local law
enforcement data and communication systems, including the 911
system.
On March 2 the NIPC and members of the forum led by Sheriff
Pat Sullivan of Colorado presented the completed sector plan to
the White House. The plan and an accompanying guide, a toolbox
of best practices, worksheets and checklists, is the Nation's
only completed infrastructure protection plan. It is being used
as a model for other infrastructures.
Yet we will not succeed in stemming the tide of devastating
viruses and worms on the Internet without raising public
awareness, continued cooperation with the private sector,
strong relationships at all levels of government, and a united
front with foreign governments. The good news is that through
new thinking and new skills, we have made significant progress
in all these areas.
I remain grateful for the opportunity to discuss this
important topic with you. I'm also gratified to see many of our
U.S. Government and private sector partners here at the table.
We want to work closely with them, this subcommittee, and with
other Members of Congress on infrastructure protection issues.
Thank you very much, sir.
[The prepared statement of Mr. Wiser follows:]
[GRAPHIC] [TIFF OMITTED] T0480.027
[GRAPHIC] [TIFF OMITTED] T0480.028
[GRAPHIC] [TIFF OMITTED] T0480.029
[GRAPHIC] [TIFF OMITTED] T0480.030
[GRAPHIC] [TIFF OMITTED] T0480.031
[GRAPHIC] [TIFF OMITTED] T0480.032
[GRAPHIC] [TIFF OMITTED] T0480.033
[GRAPHIC] [TIFF OMITTED] T0480.034
[GRAPHIC] [TIFF OMITTED] T0480.035
[GRAPHIC] [TIFF OMITTED] T0480.036
[GRAPHIC] [TIFF OMITTED] T0480.037
[GRAPHIC] [TIFF OMITTED] T0480.038
[GRAPHIC] [TIFF OMITTED] T0480.039
[GRAPHIC] [TIFF OMITTED] T0480.040
[GRAPHIC] [TIFF OMITTED] T0480.041
[GRAPHIC] [TIFF OMITTED] T0480.042
Mr. Horn. Thank you very much. We appreciate your testimony
and all your excellent people over there.
We now go to Jeff Carpenter. He is the manager of the CERT
Coordination Center of Carnegie-Mellon University and the CERT
I think has probably got a patent on it or a copyright, but it
stands for Computer Emergency Response Team. We have been
looking with great interest over the last few years that in all
our feeling, Carnegie-Mellon University is ahead of the pack in
terms of the universities of America. So thank you very much
for coming.
STATEMENT OF JEFFREY J. CARPENTER, MANAGER, CERT COORDINATION
CENTER, CARNEGIE MELLON UNIVERSITY
Mr. Carpenter. Thank you, Mr. Chairman. Thank you for your
remarks. My name is Jeff Carpenter. I manage the CERT
Coordination Center which is part of the Software Engineering
Institute at Carnegie-Mellon University. Thank you for the
opportunity to testify before your subcommittee today. I have a
formal statement which I am submitting for the record, and I
will just summarize my remarks now. Today I'm going to talk
about the Code Red worm attacks and the broader implications of
those attacks.
In our first full year of operation in 1989, CERT responded
to more than 100 computer security incidents. In the year 2000,
staff handled more than 21,000 incidents. In total, CERT staff
has handled over 63,000 incidents and catalogued more than
3,700 computer vulnerabilities. This testimony is based on that
broad experience as well as our specific experience with the
Code Red worm.
To begin the story of the Code Red worm, we need to look
back to June 19. On that day, we published an advisory
describing a vulnerability in Microsoft's Internet information
server, Web server software. This vulnerability could allow
intruders to compromise computers running vulnerable versions
of IIS. This means that an intruder could take control of a
vulnerable computer, accessing or changing data on that
computer, or using that computer to launch attacks against
other organizations.
A month later the first signs of Code Red worm appeared on
July 13. Code Red is called a worm because it's self-
propagating. When it compromises a computer, the worm looks for
computers to compromise, compromises those computers and then
those computers begin compromising other computers without the
direct intervention of the intruder that initially launched the
worm. Code Red took advantage of the fact that many computers
on the Internet that were running IIS still a month later were
running vulnerable versions of IIS.
On July 19 the more aggressive version of the worm began
spreading rapidly. As the day progressed, the rate of computers
being scanned and compromised continued to increase
exponentially. On July 20 Code Red changed its type of
activity. Instead of propagating the worm, it changed into
launching a denial of service attack against a high-profile Web
site. When this change occurred, the spreading of the attack
stopped. By the time that the spreading of the attack stopped,
more than 250,000 computers had been compromised and that was
unprecedented in a 24-hour time period.
CERT, along with a number of other government and industry
organizations, worked over the next few weeks to raise
awareness of the need to patch systems immediately. There was a
sense of urgency connected with this joint warning because we
anticipated that the worm would change back to propagation mode
on August 1. Even with the publicity that we did over the next
week or so, when the worm started spreading again on August 1,
about 150,000 computers were compromised by the next day. So
even with the publicity, many machines were not patched.
The significance of Code Red lies beyond the specific
activity we've described. Rather, the worm represents a larger
problem with Internet security and forecasts what we can expect
in the future. My most important message today is not only is
the Internet vulnerable to attack today, but it's going to stay
vulnerable to attack for the foreseeable future. Systems are
vulnerable to problems that have already been discovered,
sometimes years ago, and they remain vulnerable to problems
that will be discovered in the future.
Multiple factors contribute to this problem. CERT
experience shows that intruders will develop exploit scripts
for vulnerabilities in products such as IIS. They will then use
these scripts to compromise computers and will share these
scripts with other intruders so those intruders can attack
systems using them.
New exploits are causing damage more quickly than those
created in the past. One primary reason is that intruders are
developing better techniques for identifying vulnerable
computers and exploiting them. The ability of intruders to
compromise systems quickly limits the time that security
experts have to analyze the problem and warn the Internet
community. Likewise, system administrators and users have
little time to protect their systems from these attacks.
This year CERT expects to catalog well over 2,000
vulnerabilities by the end of the year. The rate of reports is
doubling each year. There's little evidence of improvement in
the security of most products. Developers are not devoting
sufficient effort to applying lessons learned about sources of
vulnerabilities. While we continue to see exploitation of old
vulnerabilities, we're also seeing an increase in new
vulnerabilities. Many of them have the same root causes and
many of them could have been prevented by good software
development practices.
System and network administrators are challenged with
keeping up with all of the systems they have and all the
patches released for those systems. We have found that after a
vendor releases a security patch it takes a long time for
system administrators to fix all the vulnerable computer
systems. It can be months or years before patches are applied
to only 90 percent of the vulnerable computers. For example, we
still to this day receive reports of outbreaks of the Melissa
virus which is over 2 years old.
There are a variety of reasons for the delay. The job might
be time-consuming, too complex or low-priority for the system
administration's staff to handle. But even in an ideal
situation, conscientious system administrators cannot
adequately protect their computer systems because other system
administrators and users including home users do not adequately
protect their systems. The security of each system on the
Internet affects the security of other systems.
Federal, State and local governments should be concerned.
Their increased use of the Internet to conduct business and
provide information has a corresponding increase in the risk of
compromise. Action is needed on many fronts. With the
technology product development, vendors need to be proactive in
proving their software development practices and shipping
products that are configured securely out of the box. Improved
practices will reduce vulnerabilities in products on the market
and reduce risk of compromise. In our experience, once a
vulnerability makes it out into the field installed on systems,
it's very difficult to have that vulnerability fixed on all of
the systems that it reaches. So we want to try to prevent the
vulnerabilities from being in the products that get released to
the field to begin with.
System administrators also need better tools to manage the
updating of software and computers. Home users and business
users alike need to be educated on how to operate computers
most securely and consumers need to be educated on how to
select the products they buy.
To the acquisition community, it's important to evaluate
suppliers for product security but the current ways of
describing security requirements are immature and the problem
today is not the lack of features, it's the software is flawed.
For long-term improvements to occur, the government should
sponsor research and development leading to safer operating
systems that are also easier to maintain and manage. There
should also be increased research in survival of systems that
are better able to resist, recognize and recover from attacks
while still providing critical functionality.
And finally, the government should provide meaningful
infrastructure support for university programs and information
security education and research to produce a new generation of
experts in this field. Problems such as Code Red will occur
again. Solutions are not simple because the underlying causes
must be addressed. However, we can make significant progress
through changes in software design and development practices
and system administration in the knowledge of users and in
acquisition practices. Additionally, the government should
support research and development and education in computer
network security.
Thank you, Mr. Chairman.
[The prepared statement of Mr. Carpenter follows:]
[GRAPHIC] [TIFF OMITTED] T0480.043
[GRAPHIC] [TIFF OMITTED] T0480.044
[GRAPHIC] [TIFF OMITTED] T0480.045
[GRAPHIC] [TIFF OMITTED] T0480.046
[GRAPHIC] [TIFF OMITTED] T0480.047
[GRAPHIC] [TIFF OMITTED] T0480.048
[GRAPHIC] [TIFF OMITTED] T0480.049
[GRAPHIC] [TIFF OMITTED] T0480.050
[GRAPHIC] [TIFF OMITTED] T0480.051
[GRAPHIC] [TIFF OMITTED] T0480.052
[GRAPHIC] [TIFF OMITTED] T0480.053
[GRAPHIC] [TIFF OMITTED] T0480.054
[GRAPHIC] [TIFF OMITTED] T0480.055
[GRAPHIC] [TIFF OMITTED] T0480.056
[GRAPHIC] [TIFF OMITTED] T0480.057
[GRAPHIC] [TIFF OMITTED] T0480.058
[GRAPHIC] [TIFF OMITTED] T0480.059
[GRAPHIC] [TIFF OMITTED] T0480.060
[GRAPHIC] [TIFF OMITTED] T0480.061
[GRAPHIC] [TIFF OMITTED] T0480.062
[GRAPHIC] [TIFF OMITTED] T0480.063
[GRAPHIC] [TIFF OMITTED] T0480.064
[GRAPHIC] [TIFF OMITTED] T0480.065
[GRAPHIC] [TIFF OMITTED] T0480.066
[GRAPHIC] [TIFF OMITTED] T0480.067
[GRAPHIC] [TIFF OMITTED] T0480.068
[GRAPHIC] [TIFF OMITTED] T0480.069
[GRAPHIC] [TIFF OMITTED] T0480.070
[GRAPHIC] [TIFF OMITTED] T0480.071
[GRAPHIC] [TIFF OMITTED] T0480.072
[GRAPHIC] [TIFF OMITTED] T0480.073
[GRAPHIC] [TIFF OMITTED] T0480.074
[GRAPHIC] [TIFF OMITTED] T0480.075
[GRAPHIC] [TIFF OMITTED] T0480.076
[GRAPHIC] [TIFF OMITTED] T0480.077
[GRAPHIC] [TIFF OMITTED] T0480.078
[GRAPHIC] [TIFF OMITTED] T0480.079
[GRAPHIC] [TIFF OMITTED] T0480.080
[GRAPHIC] [TIFF OMITTED] T0480.081
[GRAPHIC] [TIFF OMITTED] T0480.082
[GRAPHIC] [TIFF OMITTED] T0480.083
[GRAPHIC] [TIFF OMITTED] T0480.084
[GRAPHIC] [TIFF OMITTED] T0480.085
[GRAPHIC] [TIFF OMITTED] T0480.086
[GRAPHIC] [TIFF OMITTED] T0480.087
[GRAPHIC] [TIFF OMITTED] T0480.088
[GRAPHIC] [TIFF OMITTED] T0480.089
[GRAPHIC] [TIFF OMITTED] T0480.090
Mr. Horn. Well, we thank you very much and we'll have a lot
of questions coming up very shortly.
From the State of California we have Alethia Lewis, deputy
director of the Department of Information Technology and
Patricia Kuhar, the program manager, Information Security for
the Department of Information Technology. You weren't here when
we noted that we do swear in our various guests and I believe
Ms. Kuhar is the official witness, but Ms. Lewis will be doing
the testifying. So if you'll raise your right hands.
[Witnesses sworn.]
Mr. Horn. Clerk will note both witnesses affirmed the oath.
So Ms. Lewis, proceed. We've got some of your testimony. It's
in the record and if you'd like to submit some more, obviously
we'd be delighted to have your thoughts. So go ahead.
STATEMENT OF ALETHIA LEWIS, DEPUTY DIRECTOR, DEPARTMENT OF
INFORMATION TECHNOLOGY, STATE OF CALIFORNIA
Ms. Lewis. Thank you. My name is Alethia Lewis and I'm
Deputy Director with the Department of Information Technology
responsible for the department's external affairs and liaison
to other State agencies in IT matters. As stated, I have with
me today Ms. Patty Kuhar, the department's information security
program manager and a board certified information systems
security professional.
We're here representing the State of California on behalf
of the Governor's office and the Department of Information
Technology.
I'd like to thank you for inviting us to participate in
this hearing. We did prepare a statement which I'll be
presenting a slightly condensed version of that statement here
as testimony.
California state government has over 100,000 computer work
stations and e-mail users and over 1,000 Web servers at
hundreds of locations state-wide. With the large number of
users, the even larger number of e-mail correspondence and
network connections, our systems are often subject to attack
and disruption by viruses and worms. The most visible and
notorious of these incidents involve mass e-mail viruses and
worms. Like many others, the State was hit particularly hard by
the Love Bug viruses which interrupted e-mail systems at many
departments for periods varying from a few minutes to several
days. Melissa, Kournikova and a few others have caused similar
but somewhat less wide-spread disruptions. Each time, several
hundred hours of work by skilled and scarce technicians was
required to get the e-mail systems cleaned-up and back in
business.
Over the past few years, we've deployed commercial software
products to protect most State work stations and many e-mail
servers. We know this has resulted in a big reduction in the
amount of impact that worms and viruses might have had by
comparing the impact of attacks on the best protected sites
with those that are less protected. Nevertheless, the defense
are far from perfect. It is a time consuming and continued
effort to ensure that every device and server has software
protection from the latest viruses and inevitably, a few
systems get missed and are left vulnerable.
Increasingly, the most destructive or at least disruptive
malicious software spreads around the world in just a few days
or even hours. The fast spreading Melissa was a real wakeup
call. We learned that an e-mail virus can span the world in
less than 24-hours hitting just about every vulnerable system.
We've had to change our approach to system protection from
focus on individual desktops out to the perimeters, adding
security software to e-mail servers and installing more robust
protections at the edges of our networks.
In addition to changing our security architecture to allow
us to apply fixes more rapidly, we also have taken steps to
make our organization more responsive with the establishment of
trained incident response teams and practice recovery
procedures. In fact though, we are just holding our own.
Generally, we're staying just a bit ahead of, perhaps not
falling any further behind, the bad guys. But we should expect
this to change for several reasons.
First, the motives of most malicious software authors have
heretofore been mostly anarchic. We in government should view
the apparent political intent behind some of the worm events
this spring with special alarm as the target is likely to be
us. Second, unlike the mass e-mail viruses which usually take
advantage of human nature to turn otherwise useful software
features against us, the most destructive malicious software
exploits unintentional flaws in the commercial software we're
using.
In the fairly recent past, we and the industry have had
several months to find and fix those flaws before the bad guys
began to exploit it. Usually, only systems maintained by
careless or overworked system administrators were affected. But
as we learned with the recent Code Red experience, the
attacking community is learning to move faster, too, and a
startling number of systems were caught unprepared for this
worm which emerged only a few weeks after the vulnerability was
discovered.
Third, again exemplified by the Code Red, the worm itself
can change quickly making it hard for even the most alert
security staff to keep up. The original version of Code Red was
fairly innocuous, at least to the system directly attacked, and
could be cleared by a simple reboot. Later versions were
potentially much more dangerous and required much more time
consuming recovery measures.
Fourth, as for both the Code Red worm and the mass e-mail
viruses, protecting your own system is not enough. When the
Code Red worm hit, every Internet user faced potential
disruption due to the sheer volume of traffic generated by the
worm's victims. Information security has become a community
responsibility. We must maintain robust security measures, not
just to protect our systems, but to avoid becoming a nuisance
to our peers.
And here we face the most difficult challenge of all,
making sure our users understand and perform their role in
information security. This is always difficult and is a
constantly moving target. Nonetheless, we must move our user
communities to a higher-level of sophistication, especially
since so many of them now have computers in their homes. These
home systems may well be used for after work hours and, while
we hate to discourage that, they are new sources of
vulnerability. With all this broad band network connectivity,
they're a sitting duck for attackers.
So we believe that above all we must place our trust in
policy more than technology. We need to stay current with the
emerging attack methods and improving security measures. We
need to be more organizationally and technically nimble in
closing holes and responding to incidents, and we need to
educate and keep re-educating our users and technical staff.
But ultimately we need to recognize that network-attached
resources are vulnerable. Systems that depend on the Internet
are going to be disrupted. We need to have effective
alternatives for accomplishing critical missions. Sensitive
information on network-attached systems is going to be
improperly accessed. We need to keep the most critical secrets,
including those involving private information, out of harm's
way, behind firewalls and properly encrypted.
At the State, we have set standards for information
security throughout government that ensure consistent and
reliable level of information security throughout State
government. We now require that information security
requirements are identified and addressed when new systems are
planned. We require that implemented security measures are
continually checked by information security officers
independent of the technology staff to make sure our
protections are not allowed to lapse. We have established a
level of security performance by State departments that is
attainable and is expected by our leaders and the public we
serve.
In addition, to make sure everyone in the organization from
the chief executive officer to the key data operator is on our
security team. We have been sponsoring a continuing series of
information security forums and seminars. Presented by
independent public and private sector information security
experts, these quarterly events are typically attended by over
200 State government decisionmakers, program managers and IT
professionals.
This concludes my testimony and, again, I'd like to thank
you for inviting us to participate in this hearing.
[The prepared statement of Ms. Lewis follows:]
[GRAPHIC] [TIFF OMITTED] T0480.091
[GRAPHIC] [TIFF OMITTED] T0480.092
[GRAPHIC] [TIFF OMITTED] T0480.093
Mr. Horn. Well, thank you very much, and we will now go to
questions. Some of them will be the same that we'll give the
second panel. The first one that comes to mind is do you feel
we have appropriate laws to deal with this problem and what
would you suggest? I'll ask Mr. Rhodes. We'll just go right
down the line.
Mr. Rhodes. I do believe the laws are appropriate. There's
enough laws on the books for anybody to exercise prosecution.
The struggle that I see in working with law enforcement is not
that the law is inadequate. It's trying to present highly
technical evidence in a court room. Having been an expert
witness in legal cases, I can tell you that there's nothing
more confusing than an engineer standing up in front of jury
trying to explain a denial of service attack and then, just as
our associate here, Mr. Castro, pointed out, if I show you this
cloud and at one point the actual attacker is here but it looks
like the apparent attacker is here and the victim is here, how
do we convey that in a way of making ceratin that the laws are
enforced? It's not really a question of law. It's a question of
forensic analysis and being able to present cogent argument in
a courtroom.
Mr. Rhodes. Mr. Castro.
Mr. Castro. From the NSA perspective, we wouldn't offer
anything ourselves but I do believe there's an issue that Mr.
Wiser will address that he mentioned in his testimony with
regard to having to seek warrant authority from different
jurisdictions. Clearly, the key to getting to some sense of
attribution is to be able to move very, very quickly once an
attack begins, and it would be in that area that I suspect Les
will talk about the need for being able to move faster in that
regard.
Mr. Horn. Thank you, Larry. Mr. Wiser representing the
Federal Bureau of Investigation. They're the ones that are
going to be following this up.
Mr. Wiser. Sir, time is of the essence in conducting
computer intrusion investigations, and we find that logs are
perishable and we depend upon those logs to trace back through
Internet service providers the trail that an intruder uses.
What we're required to do because the Federal rules of criminal
procedure mandate this is that we obtain court orders in the
judicial district in which the place to be searched exists.
When an intruder uses several different hot points, those
different ISPs, we have to obtain in serial fashion a number of
separate orders and, of course, this is a timely process that
could threaten an investigation and one in which a life may
depend upon it in a manner that is different from a simple
intrusion investigation. So that is one of our primary concerns
that we're interested in.
I echo what Assistant Attorney General Cherkoff mentioned
in earlier testimony before another committee about penalties
where, despite the large dollar amount of damage that can be
done, there seems to be disproportionately low maximum
penalties for computer intrusions and viruses.
The last point that I would mention would be that in my
discussions with members of the private sector, one of the
reasons--and I expect that there are many reasons--but one of
the reasons that they are sometimes reluctant to come forward
with information to us is that they fear that the Freedom of
Information Act does not provide adequate protection for
proprietary information that they provide to us and so they've
asked for a clarification of the law enforcement exception or
another exception to be created in FOIA. This is something
which there's a continuing dialog about when we've discussed
this with the Judiciary Committees as well.
Those are the three things that I would point to and, of
course, there are others that I'd be happy to speak with you at
another time about.
Mr. Horn. Mr. Carpenter, manager of the CERT Coordination
Center, Carnegie Mellon.
Mr. Carpenter. I would just echo Mr. Wiser's comment on
FOIA. From our perspective and our discussions with industry as
well as government, that has been probably one of the largest
issues that has been raised to us is issues regarding what
sensitive information regarding incidents be exposed to FOIA
requests. So that would be the only comment we would have on
that.
Mr. Horn. Ms. Lewis, what does the State of California have
with regard to laws that can relate to this damaging of the
computer infrastructure?
Ms. Lewis. Actually, at the State we work on policy that
relates directly to the IT computers and stuff that we actually
use. I really don't have any comments with respect to that
particular issue.
Mr. Horn. I'm delighted to have one of my colleagues. He's
fought the traffic between Sacramento and San Jose. Michael
Honda is the representative right in the middle of Silicon
Valley, and we thank you for coming. He'll have to go to
another appointment shortly, but I'd like him to pose a few
questions if he wishes to.
Mr. Honda. Thank you, Mr. Chairman, and thank you for
having this hearing. I know that from my visits with Symantec
and other organizations and companies in this area that
security is a critical area, not only in government, but also
for personal uses and for commercial uses. I don't have any
questions since I did not hear most of the testimony. I've been
briefly going through the written testimony. So I wouldn't be
able to ask any intelligent questions, but I do understand that
the issues around security, from my visit with Symantec, is
that we have a variety of issues and circumstances that we have
to be particularly cognizant of. It's not only related to
hardwire security and accessing our security information that
we have, but also the wireless issue is a very important area
that we're not keenly aware of and I think that the commercial
uses that I've been exposed to and schooled in poses even
greater concern on my part as far as government uses of similar
kinds of techniques that we have in place.
So I'll be listening and I'll be reading the materials, but
I'll be back following-up with Mr. Horn on issues of security.
But I think that the issue of wireless and things that we don't
see and don't realize and are not cognizant of is one top
priority for me.
And then also for public policy folks for the schools and
educated in the basic things that you all understand so that as
policymakers we'll be able to understand how to work with you
in developing policies on secure systems. I know that Dr.
Neumann is here and he's testified quite a few times, and so I
think the other concern I have that I'm sure is shared by Mr.
Horn and that is how quickly do we move and with whom do we
move and how will we be able to put the system together. So I
appreciate all of you being here and sharing your information
and your thoughts.
Thank you, Mr. Chairman.
Mr. Horn. Thank you.
Let me ask Mr. Castro. I'm quoting from your written
testimony. ``In taking out a computer network, the single
hacker has the cyber destructive power normally associated with
a nation state.'' If that's the case, what can be done
technologically to address this problem?
Mr. Castro. Well, there are a wealth of things and I
suspect in the industry panel you'll hear from some of the
industry folks. But within the National Security Agency in
cooperation with the National Institute of Standards and
Technology, we jointly administer a program called the National
Information Assurance Partnership. It's through this
partnership that there have been a number of independent
laboratories established. Think of them if you will as the
underwriter laboratory's equivalent for cyber products.
What we have now set up is a process whereby industry can
bring security and security-related products to these
laboratories and, at their expense, at the industry's expense,
can have these products evaluated against what is now being
called the international common criteria. This is a criteria
for specifying the five characteristics I showed you there
earlier in my testimony specifying how those characteristics
can be achieved and graded for achievement.
It's referred to as the international common criteria
because all the English speaking partners have signed up to
this criteria and it's now being moved out even for further
international acceptance. So the goal would be to have a set of
standards by which security and security-related products can
be certified as doing what it is that they are advertised to
do. These could range from firewalls in one case to public key
infrastructure arrangements in other cases.
So I think the short answer, sir, is that there are a
variety of defensive measures. We refer to them within the
Department of Defense as defense in depth. They certainly in
every case include well-trained people at the very, very
frontend of that defensive posture but then backed-up by the
appropriate software and hardware configurations.
The other thing I'd like to add is I really appreciate
Congressman Honda's concern about wireless security. That is an
area that at NSA we're working very, very closely with
industry, some in this area, to produce secure versions of
cellular telephones and other wireless devices. This is, quite
frankly, the threat of the future as more and more of our
Nation will be moving to this wireless technology. So your
point is well taken, sir, and we're right on it.
Mr. Horn. We do need to look at this from a broader
perspective that you've laid out there and I would suggest
we're talking about a computer NATO. I wonder to what degree is
the National Security Agency and the FBI--I know you've worked
with foreign people here. Are they listening to us and are they
hoping that you're helpful to them?
Mr. Castro. Maybe we can take it in two parts and I'll
defer to Mr. Wiser on the cooperation on what we call attack
sensing and warning. But certainly in the area of cooperating
to produce secure products and to ensure that that security is
inter-operable within both the NATO and other coalition
environments, I think the answer to your question, sir, is that
the allies are very, very well engaged. Again, we have a number
of both bilateral and multilateral arrangements that will
attempt to introduce the secure operability within our
defensive posture.
And then I would ask if Mr. Wiser could answer the question
on cooperation with regard to sensing and warning of attacks.
Mr. Wiser. Sir Congressman, Mr. Chairman, the NIPC is
unique because inside it we have the three disciplines
represented. That would be law enforcement, intelligence and
defense. In fact, NSA is represented at the NIPC and so we have
a tremendous coordination and cooperation on a number of levels
within the defense community and the NIPC and, therefore, the
FBI.
But also in the center we have representatives from foreign
governments. We have presently the U.K., Canada and Australia
represented. And we find that this is very important in our
links with those important allies. But in addition to that, we
have connectivity with similar centers around the world, and I
mentioned earlier the U.K., Canada and Australia as well as New
Zealand and Sweden, and we're working now with Germany to
establish that kind of a relationship as well.
So with those relationships and with the relationships that
our legal attaches stationed in 44 countries around the world
are engaged in, we are working toward that global security, and
we find that our allies and those countries with whom we work
are extremely interested in pursuing this objective.
Mr. Horn. Mr. Neumann's testimony is coming up on panel
two, but I want to get your ideas on it. He raises the point
that despite U.S. laws to prevent or punish hackers, given the
international aspect of this problem, little can be done. Do
you agree with that and how do we deal with it?
Mr. Wiser. We've been, just as I mentioned in the
testimony, very successful with the Leave worm case. It's just
the latest example. That threat is now over. A number of people
I don't think realized the danger that the Leave worm
represented, but those of us that were working on this
problem--I know that Mr. Castro, as he mentioned, is very
familiar with this--know that it presented a great potential
for danger. But the investigation itself solved this problem,
and we've been successful on a number of different
investigations.
For example, the Love Bug virus was solved quickly. I mean
we had an FBI agent within 24-hours standing outside the door
of the person responsible, along with the Philippine officials,
Mr. Menses's group. So we are establishing these relationships
with countries and as long as we can trace the trail back, many
of the countries have been cooperative. Another example would
be the Bloumberg case in Kazekstan where we have a league in
Amate who worked with Kazekstani authorities to bring people
that threatened the Bloumberg financial network to London where
we did a sting operation there and individuals have been
extradited to the United States to stand trial in that case.
So we have examples of success. I would say that there's a
way to go, but we're optimistic that other countries will
become more sophisticated with their statutes, with skilled
investigators, and we take part in the training of those
investigators and I think their growing awareness will create
the will to cooperate with us.
Mr. Horn. In looking at the originator of the Codes Red, do
you think that man or whoever will be apprehended?
Mr. Wiser. Yes, sir. I do. I'm confident about those kinds
of things. I'm an optimist and I believe that we'll be able to
eventually find the person responsible.
Mr. Horn. Is there anything we should be promoting with the
people in Silicon Valley, either in software, hardware where
some of this can be headed off?
Mr. Castro. If I could comment on that, sir, and I'm sure
others will, too. Anything that can be done to really
demonstrate the commitment of the U.S. Government to ensuring
the security of our ability to work on the Net and then to
translate that into meaningful action would be helpful.
As I said, from the Department of Defense's point of view,
we are not a dominant, although a very large customer for
information technology. In today's market place, we are not a
dominant customer. So if someone is going to make the argument
only on the economics of what DOD can provide, it's not going
to make it. The case is going to have to be made on a very much
larger scale that it is critical to our Nation's total
infrastructure that vendors start thinking security in their
products from the very, very point of inception. The lesson
that we have learned over NSA's 50 year history is that if you
try to go in after the fact and improve a product, it sometimes
doesn't work and, if it does work, it can be a very costly
venture.
So again, fora like this where for industry we demonstrate
the government's desire to really keep security in the
forefront and the Congress's intent to back that desire are
things that are needed.
Mr. Horn. Can you tell us how many government servers were
compromised by Code Red and Code Red II? How much damage was
made at this point?
Mr. Castro. I can speak for the Department of Defense.
Others will have to speak for the rest of the government.
Within the Department, General Brian, the commander of the
Joint Task Force on Computer Network Operations, made the
decision on the evening that it was clear that bad things were
going to happen that the Department would go to what we call
Info Con Alpha. Info Con Alpha is the first step where we
normally are in, which is normal Info Con. This Info Con
gradation is meant to match in some way DefCon and ThreatCon
status that are already well-established within the Department.
In doing that, then we raise the awareness of system
administrators throughout the Department.
He also directed the blocking of all port 80. Again,
without getting into a lot of that, and it was already
mentioned in previous testimony, what we basically did is to
disable anybody's ability to come in and exploit the one
particular port on which the vulnerability was being exploited.
I believe that what we're saying now, with the Department
still at Info Con Alpha and we are gradually getting ourselves
back to a normal state. You may be aware that there are some
finite number of places where the Department's portion of the
Internet, which we refer to as the NipperNet, connects to the
Internet. There are 13 such gateways currently in existence and
we've opened up now 9 of those 13. I can't give you the
specifics on what we have taken down, but I believe it's safe
to say the Department is slowly recovering and we will probably
lift the conditions on Info Con Alpha within the next 2 weeks.
Mr. Horn. I believe Mr. Rhodes, you and your team in the
General Accounting Office, have gone through security, various
designs, at various of the domestic parts of the government.
Have you ever had fun with the Defense Department and CIA and
knock them a little and gone through their systems?
Mr. Rhodes. No. Well, yes, we've done it with the
Department of Defense. I guess one point that I would make is
the latest estimate that we have on total number of servers
that have been taken down is 975,000. Those aren't government
servers though. That was the total estimated number.
I guess one point I would make is that you asked about what
could be done for Silicon Valley. What can be done to make the
developers change their mind? I have to echo what Mr. Castro
said. The U.S. Government has to take the point that you've
made continually during your membership in the House and say
they have to be able to manage. Silicon Valley is not going to
make a decision that's not based on economics. They're in
business, and we can't expect them to do it any other way.
If we as the U.S. Government do not manage from a security
standpoint, why in the world should they? If we can't make it
economically feasible for them, either by building systems
specifically for us or putting the security in, we're going to
continue to be in the same position we are now which are down
stream testers of released software that hasn't been fully
tested because they're trying to get their product to market
and they're testing it well enough to get to market, not well
enough to withstand a Code Red virus or something like that.
Mr. Horn. We will have the majority and minority staff give
you a few questions that we simply can't get to because I want
to get to the second panel. If some of you can stay, we'd
certainly appreciate it to go into questioning on panel two. So
let's move now to panel two. I think most of you saw the
routine. We thank you very much for coming and we do swear in
all witnesses and those that support the witnesses. Get them
all to stand up and we don't have to keep making changes.
[Witnesses sworn.]
Mr. Horn. Let the record note that five members took the
oath, and we will proceed. We now start with an old friend of
this committee and a very knowledgeable person, not only in the
United States but throughout the world on behalf of his
colleagues in the Information Technology Association of
America. So Harris Miller, president of that fine group, let's
start with you.
STATEMENT OF HARRIS MILLER, PRESIDENT, INFORMATION TECHNOLOGY
ASSOCIATION OF AMERICA
Mr. Miller. Thank you, Mr. Chairman. Thank you for inviting
me to the heart of Silicon Valley to testify about what
practices, policies and tools are being deployed to reduce the
impact of computer security threats to government at all
levels. I commend you for your continued leadership on
information technology issue.
IPA is proud to be the leading association on cyber
security issues representing over 500 corporate members. These
are companies that have a vested economic interest in assuring
that the public feels safe in cyber space to conduct electronic
commerce and, in a developing era of e-government, that their
information will be secure and transactions reliable.
Though the official title of today's hearing focuses on
government information security, I submit to you that security
challenge is ultimately a government and business challenge
that must be addressed at the highest levels of all
organizations, whether public or private. We must do more than
just recognizing the challenge, however, though that is an
important first step. We must work together to find ways to
enable solutions, solutions to threats that will likely become
more significant as the Internet becomes more pervasive.
As a witness during the Code Red situation, if cyber
security receives the kind of prioritization needed at senior
levels, government and industry can mobilize quickly and
effectively to combat common and significant threats to the
Internet. Those efforts during the Code Red situation helped to
reach users of vulnerable systems on a massive, unprecedented
scale that prevented the further spread of the worm. Over a
million copies of the patch were downloaded and, since that
patch can be downloaded and installed to any number of
machines, the number of systems that are actually patched is no
doubt higher.
Few of the major Web sites were affected by the Code Red
worm because many took action after the industry/government
announcement on July 30. The public awareness of information
security issues increased significantly during the Code Red
situation. This cooperative, proactive response by industry and
government that Mr. Rhodes addressed in his comments could be
used as one model for more meaningful and effective cooperation
on cyber security issues in the future.
If industry and government do not collaborate, then the
impact of such threats on the Internet users will be much
greater in the future.
Chairman Horn, I know from working together with you
closely on Y2K and cyber security issues that you are fond of
report cards and grading which you issued in your previous life
as a leading academic political scientist. Today I would like
to offer my own report card in six separate categories and an
overall grade on industry and government handling of computer
security threats. This is my own grading system, I tell you,
and I look forward to suggestions from you and others about
ways to improve it.
The first area is the government organization. In
addressing the challenges and developing structures that can
adequately address cyber security challenges, the Federal
Government has moved from what had to be a failing grade just a
few years ago to a passing grade or C today. I base my C grade
on four factors: the priority for this issue for the Federal
Government, internal cooperation within the government,
mechanisms for liaising with stakeholders, particularly in the
private sector, and response time.
The national plan for cyber security and Presidential
Decision Directive 63 help provide a framework for government
organization. However, the alphabet soup of government agencies
charged with some aspect of cyber crime prevention makes it
easy to see why progress has been slow in the government. We
credit the National Infrastructure Protection Center under the
leadership of Ron Dick to forge ahead with programs such as
InfoGard which was described in Mr. Wiser's testimony. Because
of his efforts and joint efforts between ITAA and the
Department of Justice, we've increased the cooperation between
law enforcement and the industry.
According to numerous press reports, President Bush will
sign soon after Labor Day an Executive order that will
establish the critical infrastructure and protection and
continuity board. As that draft Executive order has been
explained to us, it should be a major step forward creating
substantially more coordination within government and less
duplication among the plethora of government departments and
agencies involved in InfoSec. Should this new board result in a
centralized, coordinated cyber security effort based in the
White House, I think the government grade could be moved from a
C to a B.
Let me talk about a second area related to government.
Government funding for information security. Here the story is
not so positive, Mr. Chairman. The grade for government funding
at best has moved from a D- to a D. Mr. Chairman, while you and
some of your colleagues such as Representative Greenwood have
done a valuable service in scrutinizing computer security
policies and practices in U.S. Government agencies and
departments, that is not enough. As that well-known philosopher
Yogi Berra would say, this is deja vu all over again. During
Y2K you pointed out in a series of hearings that government
agencies had neither the plans nor the funds for Y2K
remediation. Under your prodding, they came up with a plan but
they still didn't have the funds. We seem to be seeing the same
thing today InfoSec. Agencies seem to be knowing much more
about what they need to do, but the funding is not there.
A GAO office report issued earlier this month strongly
criticized the Department of Commerce for InfoSec failures
internally, and that carried the clear implications report that
additional financial resources are needed. Every Federal CIO
with whom I speak privately tells me they are in desperate need
of additional funding for their InfoSec activities. There is a
long way to go before the government is going to get a passing
grade here.
For example, President Bush requested an e-government fund
of $20 million this year but, as you know, the House
Appropriations Committee and the Senate Appropriations
Committee only provided $5 million for even that. So we're
going to have to work together, Mr. Chairman, under your
leadership to convince your colleagues in Congress that
government agencies they need to really address the InfoSec
challenges.
Area No. 3. How about industry? Where is their focus in
information security? I think one of the good news stories from
Y2K is that issue elevated the whole issue of information
technology from a back room to a front office issue. The CEOs,
the members of the board began to understand how important
information technology was to their businesses. Similarly,
they've come to understand how important information security
is to their businesses if they're going to get continuity.
Yet, at best, I only give corporate America a B- because we
have a lot of variations. Some industries such as financial
services, telecommunications, are doing very well but others
are frankly far behind and particularly small businesses and
mid-size businesses as under Y2K are far behind. I commend the
FBI for its InfoCar program because that reaches small
businesses. But we have a long way to go. Organizations must be
willing to invest in development of comprehensive security
procedures and to educate all employees continuously. We have
to practice sensible cyber hygiene and Internet users have to
be vigilant about it.
The next area I wish to give a grade is industry/government
cooperation. The Ad Hoc Coalition on Industry and Government
that was formed to provide a public service message to counter
the Code Red worm is a major operational success, as Mr. Rhodes
remarked. It illustrates just how far players have come. A few
years ago, industry cooperation would have received an F or
maybe a D. However, through hard work on both sides, progress
has been made. The efforts to stand up the Information Sharing
and Analysis Centers, ISACs, by the telecommunications
industry, financial services industry, electric industry,
transportation and now the IT industry have helped to bring us
up to a C grade and, in fact, Code Red may get us up to a B-.
But in order to get to an A, the remaining industry sectors
will need to stand up and operationalize the ISACs and the
ISACs will need to share confidential information.
Equally important, if maybe not more important, is sharing
information between industry and government on sensitive
information in both directions. We strongly support the bill
that was referred to by the previous panel introduced by
Congressmen Tom Davis and Jim Moran and soon to be introduced
by Senator Bennett and Senator Kyl in the Senate to remove
legal obstacles related to the Freedom of Information Act and
Senator Feinstein from the State of California is in a position
as chairwoman of the Senate Judiciary Committee Subcommittee on
Technology, Terrorism and Government Information to move that
bill through the Senate under her leadership.
The next area is industry to industry cooperation. Let me
emphasize that while government has a critical role to play,
not just in the United States but internationally, vertical
industries also have an obligation to communicate on cyber
security issues, again, similar to the obligation they had
under Y2K. Progress has been made. We've moved from maybe a D-
a few short years ago to a C+/B- today. How so?
Critical to this has been the Partnership for Critical
Infrastructure Security which was begun in December 1999. This
created a cross-sectoral dialog with collaboration from
government, particularly the Critical Infrastructure Assurance
Office, to address risks to the Nation's critical
infrastructures and assure delivery of essential services over
the Nation's critical infrastructures in the face of cyber
threats. The Partnership is run by companies and private sector
associations and is effectively meeting the industry dialog
challenge.
But much more needs to be done globally. I have advocated
creation of an international InfoSec cooperation center,
analogous to the highly successful International Y2K
Cooperation Center that you supported very strongly, Mr.
Chairman, during that challenge to our global economy.
Let me next address international cooperation. Again, I
think the best I can do here is a C-. Some areas are working
well, others not so well. Let me tell you briefly about an area
well-intended that seems to have gone a little bit awry, and
that's the work of the Council of Europe to establish a cyber
crime convention. The principle here is great. We need to have
laws in every country around the world, not just in the United
States, to fight cyber crime. As we saw in the example of the
Philippines at the time that incident occurred that was
referred to in the previous panel, they didn't have laws at
that time to prosecute the people even though they identified
them. Fortunately, the Philippines has since updated their
laws.
The Cyber Crime Convention, if we could get it adopted
around the world, in theory is a good idea. Unfortunately, the
Cyber Crime Treaty has some flaws in it because it was
developed by law enforcement officials without adequate input
from industry and economic ministries. So we think with some
changes in it, that might be a model law that could be adopted
in many countries around the world.
To sum up, there is much work to do. In addition to
improving our letter grades in information security, both
industry and government need to strive to have the teacher
commend us for playing well with others. Cooperation,
communication and sharing sensitive information are the keys to
moving from today's overall grade, which is a C-, to an A+.
Summer vacation is ending, Mr. Chairman, and we are about
to begin a new school year. By working together to build
meaningful and effective relationships that recognize the
bottom line impact of InfoSec on our businesses and government
operations, both domestically and globally, we can all move to
the head of the class on cyber security issues. Thank you very
much.
[The prepared statement of Mr. Miller follows:]
[GRAPHIC] [TIFF OMITTED] T0480.094
[GRAPHIC] [TIFF OMITTED] T0480.095
[GRAPHIC] [TIFF OMITTED] T0480.096
[GRAPHIC] [TIFF OMITTED] T0480.097
[GRAPHIC] [TIFF OMITTED] T0480.098
[GRAPHIC] [TIFF OMITTED] T0480.099
[GRAPHIC] [TIFF OMITTED] T0480.100
[GRAPHIC] [TIFF OMITTED] T0480.101
Mr. Horn. Thank you very much.
We now have a rather well known person in the whole
computer evolution and that's Peter Neumann, the principal
scientist, Computer Science Laboratory, SRI International which
used to stand for Stanford Research Institute, but you don't
say that any more, I gather. Delighted to have you here.
STATEMENT OF PETER G. NEUMANN, PRINCIPAL SCIENTIST, COMPUTER
SCIENCE LABORATORY, SRI INTERNATIONAL, MENLO PARK, CA
Mr. Neumann. Thank you. Thank you for your very kind
introduction.
SRI, I should point out, is a not-for-profit research
institute. I would like to believe that what I have to say is
motivated, not by any corporate need or any allegiance to any
particular ideas.
I think the message that I want to give you is pretty well
taken care of in my written testimony. I'm going to summarize
it very briefly.
The bottom line here, I think, goes back to September 19,
1988 when Robert Morris, who was at the time chief scientist of
the Computer Security Center at NSA, said, ``To a first
approximation, every computer in the world is connected to
every other computer in the world.'' That was 13 years ago. The
situation is much worse now. The number of computers that are
connected to the Internet is enormous.
A month and a half later, it was his son who, in a research
experiment that went awry, created the Internet worm which, in
some sense, was the beginning of all of this nonsense that we
have going on relating to worms, viruses, trojan horses, and so
on. Letter bombs coming through e-mail.
I would like to take a broader view of the problem and make
the very bold statement that what we're really talking about is
not viruses, worms and related subjects but the fact that the
computer security and information security infrastructure,
including all the networking, is riddled with so many security
flaws that it is virtually impossible to expect that we can
have any meaningful sense of security, given the infrastructure
that we have today, and I want to elaborate on that to some
extent.
Larry Castro mentioned the classical DOD mantra which is
defense in depth. What we have is weakness in depth. There are
vulnerabilities essentially everywhere, in the mass market
desktop systems, in the server systems, in the networking, in
the embedding of even some of the cryptography in the world
into platforms that are again riddled with security
vulnerabilities. So let me very briefly go through what I've
called a set of seemingly dirty truths that remain largely
unspoken and under-appreciated in my written testimony.
The first is that what we have today is a far cry from what
is straightforwardly possible. Back in 1965 I was part of an
ARPA, Advanced Research Project Agency, project in MIT in Bell
Labs which developed a commercial operating system that had
enormous research advances in it. If we look at what's happened
in the last 36 years, many of those research advances and other
similar advances have not found their way into the mainstream.
What this leaves us with, especially me as a researcher, is the
very gnawing feeling, annoying and gnawing, that the good stuff
that should be coming out of research is not finding its way
into the market place.
One of the great adages of our society is that the market
place is supposed to drive everything. Unfortunately, the
market place seems to be much more interested in whiz bang
features and rush to market place than it is in having systems
that are truly reliable, secure, and available in high degrees
and survivable in the face of all sorts of problems.
The problems that we're addressing today in terms of worms,
viruses and so on are really the tip of the iceberg. If in fact
it is possible to penetrate systems from anywhere in the world,
irrespective of what the laws are in this country, we have a
fundamental problem. Whereas the laws are important and the
laws are in fact useful in many respects, the comment that you
quoted earlier was based on the fact that if you cannot trace
back to find out where the problem is coming from because of
network weaving and the lack of accountability and the lack of
identity and authorization and authentication, then the laws
may be absolutely worthless except as a possible deterrent for
the people who believe that those laws are applicable to them.
So we have a situation in which the Internet provides the
opportunity for attacks from essentially anywhere in the world,
and many of those attacks can be created by individuals for
which it is almost impossible to trace them back. I appreciate
the optimism stated in the previous panel, but I believe that
one of the most important things here is finding ways of
incentivizing the improvement in the systems that we're dealing
with.
The previous panel dealt primarily with the methodology of
patching. Patching is extremely limited. If you start with
something that is fundamentally insecure, you add patches, you
may or not remove a vulnerability and, in fact, you may
introduce new vulnerabilities. But because there were so many
vulnerabilities in the original products, you merely transfer
the attacks to new vulnerabilities.
If you look back at the Internet worm of 1988, essentially
all of the vulnerabilities that existed at that time--and there
were four of them--are still present today in one form or
another. They may not be the specific flaws in the specific
code that was used at that time, but the characteristics of
those four flaws are all present in systems today. This
suggests that we are not progressing as we should be
progressing. So let me very briefly go through some of my
seemingly dirty truths.
I don't really need to go into detail to you on the
President's Commission on Critical Infrastructure which found a
great many vulnerabilities. The Internet, being enormous and
relatively uncontrollable, and being international is not
really the culprit itself. It's all of the systems that are
attached to it. The presence of these almost trivial to
perpetrate Internet mail bombs, for example, are the result of
the fact that there is very little inherent security in the
systems that we're dealing with. I mentioned the education
problem indirectly, but I think I should mention it very
specifically.
The difficulties in developing very secure systems are
enormous. They require a great deal of education. They require
good software engineering practice, which is not very widely
found in this country or in other countries, as well. To
develop systems that are very secure, life critical, ultra-
reliable takes an enormous amount of effort and, although there
has been enormous research in those areas in the past 40 years
or so that I've been involved in this area, the research is not
finding its way into the market place.
Another dirty truth is this outsourcing thing, and you may
remember from the Y2K business the fact that the air traffic
control remediation was done by foreign nationals, essentially
unbeknownst to the technical people at the FAA. That was rather
startling when it was uncovered. The notion that DOD would like
to outsource all of its critical functionality--for example,
system administrators, is startling. If you can't have a
trustworthy system, then you outsource the management of it to
somebody who might be even less trustworthy than the system
itself. This does not sound like a good way to run a ship.
In general, simple systems and simple solutions are not
very effective. This gets us into the laws, to some extent. One
of the simple solutions that Congress has come up with is the
Digital Millennium Copyright Act which has a chilling effect on
the research community and which, in fact, is seriously
hindering, in my opinion, the development of systems that are
more secure because somebody who points out that a particular
system is not secure is immediately threatened as in the case
that occurred last week of somebody who pointed out that his
local newspaper had its Web site totally available to anybody
in the world and anybody could do anything to it with
essentially no authorization. He was threatened with 5 year
felony charge for having pointed out that this problem existed.
We're shooting the messenger in many cases in the enforcement
of the Digital Millennium Copyright Act.
The Uniform Computer Transactions Act, the UCITA
legislation which is working its way through many States, has a
chilling effect as well. It allows the vendor or the developer
to declare absolutely no liability for anything that goes
wrong. This is a very strange business. I remember in the Y2K
era there was legislation that said the remediators for Y2K
should be absolved of their liability and should be able to
have a certain freedom in that respect. I believe that when we
get to the issue of what the laws can do, the area of liability
is going to be a very important one.
There has been legislation in the past and directives from
the government that have dumped down security. Examples of that
include the use of good crypto. There's one example that is
extremely important to me. I was at a workshop yesterday and
the day before on electronic voting systems. Here's an example
where there's a mad rush to replace the punch card ballots
after Florida with all electronic voting systems. This is an
example where the simple solution of rushing into an electronic
voting system does not solve the problem at all because every
existing system today has essentially no assurance that the
vote as cast is actually the vote as counted. The vendors say
trust us. We have proprietary software. We can't show anybody
the software because it would diminish the security of the
system which is actually nonsense in many cases and that we
just have to trust them that they're going to do everything
right because they know what they're doing. This is an example
of an apparently simple solution that in fact has very serious
implications.
Another example is the use of legislation to insist on
filters to solve the spam problem. This doesn't work, and we've
had cases where the Bible and the encyclopedias and all sorts
of things are banned or where people's Web sites are banned
because their name happens to include the string S-E-X like
Essex and Sussex.
Now, my conclusions are very simple. We need to address
technological problems with technological solutions. We need to
address legal problems with legal solutions. We need to address
all of the problems of computer security, computer reliability,
with a combination of these approaches. Laws by themselves do
not solve the problem. Technology by itself does not solve the
problem. We need a combination of socio-economic and political,
technological and other approaches. So at the very minimum, we
need what I think would be radically improved security
reliability and availability in the systems that we are using,
not only in our critical infrastructures, but in our Internet
conduct of normal business.
As I said several times, it is really unfortunate that many
of the important research advances of the last 45 years or so
have not found their way into the market place. I don't know
how you can incentivize that more effectively, but I think
you've got to find ways to do it. There are roles that NIST can
play. In the former session, the common criteria was mentioned.
NIST has been involved for many years in the elaboration of the
common criteria. If those were systematically used in an
effective way, it would be tremendously valuable.
One of the examples. One of my doctoral students has just
written a thesis on applying the common criteria to the
electronic voting problem and demonstrates that even if all of
those criteria that she's constructed were satisfied, it's
still not enough, but it's a major, major step forward. So I
recommend strong endorsement of that approach.
I'm very concerned about liability issues. I believe that
liability legislation could go a very long way. The idea that a
vendor can disclaim all of its liability is a joke, although
it's good marketing. I believe that Federal legislation that
imposes strict liabilities on end consequential damages for
gross negligence in not only system development but corporate
misbehavior would be very valuable. There's a proposal today
that I saw about making Web site and system purveyors liable
for not using best practices when it comes to security, for not
installing the patches that have been given to them and, in
some cases, they've been told that they were critical. In some
cases, they weren't told at all.
So in my final comment, there is some wonderful research
and development out there and it really needs to be worked into
the development of systems that are substantially more secure,
more reliable. Along with that goes the education and the
training and everything else that's needed to make it work. But
if I look around the country, I do not see the adequate
attention to software engineering, to security, to reliability
in even graduate programs and certainly not in undergraduate
programs.
Thank you very much.
[The prepared statement of Mr. Neumann follows:]
[GRAPHIC] [TIFF OMITTED] T0480.102
[GRAPHIC] [TIFF OMITTED] T0480.103
[GRAPHIC] [TIFF OMITTED] T0480.104
[GRAPHIC] [TIFF OMITTED] T0480.105
[GRAPHIC] [TIFF OMITTED] T0480.106
Mr. Horn. Thank you. We appreciate those comments. They're
stimulating, to say the least.
Scott Culp is the lead security program manager for the
Microsoft Corp. We're glad to have you with us.
STATEMENT OF SCOTT CULP, MANAGER, MICROSOFT SECURITY RESPONSE
CENTER, MICROSOFT CORP.
Mr. Culp. It's a pleasure to be here. Thank you for the
opportunity to appear today at this hearing. My name is Scott
Culp. I'm the manager of the Microsoft Security Response
Center. I'd like to commend the chairman and the committee for
leadership on government computer security. It's a matter that
we take with great seriousness, not only because the U.S.
Government is one of our largest customers but also as an issue
of civic duty. Mobile hostile code such as viruses and worms
pose an ongoing threat to the security of our network systems.
Every vendor's platforms can be affected and countering worms
and viruses is a challenge that the entire IT industry must
address.
As an industry leader though, Microsoft has a number of
ambitious programs designed to combat hostile code and to
safeguard our networks. The good news is that the basic design
and architecture of the systems that we all use is sound.
Viruses and worms only succeed when they can bypass the
security these systems provide. Some say to do this is for the
virus or worm to exploit a security vulnerability, a hole in
the system's armor.
To reduce the occurrence of security vulnerabilities and
out products, Microsoft has had an ambitious program under way
for over 18 months called the Secure Windows Initiative which
has as its goal nothing less than a generational improvement in
the development practices that we use. We're providing advanced
security training to our developers, we're building leading
edge tools that dramatically improve how we test our software
and we're using innovative techniques like penetration test
teams in which we intentionally try to break into our own
products. At the same time, we're increasing our use of
independent third party experts, both inside and outside the
government, to validate our work.
But software is and always will be a human activity subject
to human frailties. No piece of bug-free software has ever been
developed and none ever will be. To root out any security
vulnerabilities that may have slipped through our development
and testing processes, Microsoft has assembled a Security
Response Center which even our critics acknowledge to be the
best in the industry. We investigate every claim of a security
vulnerability affecting one of our products. When one is found,
we quickly develop updated software and we deliver it through a
well-publicized Web site, a free mailing list with over 200,000
subscribers and automated sites like our Windows Update Web
site.
Last year alone, we received over 10,000 reports. We
investigated every single one of them. Of these, a grand total
of 100 security vulnerabilities in all Microsoft products was
found.
The other way that viruses and worms typically succeed is
through social engineering, tricking the user into undermining
his or her own security. To combat viruses and worms that use
these techniques, Microsoft announced in April of this year a
war on hostile code. One outcome of this campaign is something
called the Outlook E-mail Security Update which blocks e-mail
viruses. To the best of our knowledge, the number of customers
who, after applying this update, have subsequently been
affected by an e-mail virus is zero worldwide.
Another element of the war on hostile code is a new feature
in Windows XP called Software Restriction Policies which stop
viruses and worms from executing on the machine even if the
user downloads them and tries to run them.
In addition to improving our products, we work
collaboratively with our colleagues throughout the security
community. Microsoft senior executives are also fully engaged
in the U.S. government's security policy initiatives. For
example, Bill Gates, Microsoft's chairman and chief software
architect, received a Presidential appointment to a National
Infrastructure Assurance Council and Craig Monday, Microsoft's
senior vice president and chief technical officer for strategy
and policy, received a Presidential appointment to the National
Security Telecommunications Advisory Council.
But technology is not a panacea. Breaking into computers
and writing viruses and worms to damage them is a crime and
it's important that we not lose sight of that fact. Just as we
can never realistically expect the threat of burglary or bank
robbery to end, we should realize that cyber crime will always
be a fact of life and, accordingly, Microsoft strongly supports
enforcing our society's cyber crime laws and we work closely
with domestic and international authorities and we strongly
support increased funding for computer crime enforcement.
In sum, Microsoft takes its responsibilities as an industry
leader very seriously and we believe that the efforts of
Microsoft and its colleagues in the industry will improve the
security of the U.S. government's networks, the Nation's, and
the world's. Thank you, Mr. Chairman.
[The prepared statement of Mr. Culp follows:]
[GRAPHIC] [TIFF OMITTED] T0480.107
[GRAPHIC] [TIFF OMITTED] T0480.108
[GRAPHIC] [TIFF OMITTED] T0480.109
[GRAPHIC] [TIFF OMITTED] T0480.110
[GRAPHIC] [TIFF OMITTED] T0480.111
[GRAPHIC] [TIFF OMITTED] T0480.112
[GRAPHIC] [TIFF OMITTED] T0480.113
[GRAPHIC] [TIFF OMITTED] T0480.114
Mr. Horn. Thank you very much. Our second to last witness
is Stephen Trilling, senior director of advanced concepts from
the Symantec Corp.
STATEMENT OF STEPHEN TRILLING, SENIOR DIRECTOR OF ADVANCED
CONCEPTS, SYMANTEC CORP
Mr. Trilling. Thank you, Chairman Horn and members of the
subcommittee for giving me the chance to testify today about
the growing threat of computer worms to our national and
economic security.
Mr. Chairman, I'd also like to commend you and your
subcommittee for your leadership in examining cyber security
issues and for releasing the report card on computer security
in the Federal Government.
My name is Stephen Trilling. I'm here representing Symantec
Corp. We're a world leader in Internet security technology,
providing solutions to government, individuals, enterprises,
and Internet service providers. At Symantec I oversee our
Advanced Concepts Team, a group dedicated to studying new
security threats and creating new technologies to better
protect our electronic frontiers.
Prior to this role, I directed our Anti-Virus Research
Group, a worldwide team responsible for analyzing and creating
fixes for computer viruses and other malicious threats.
I'd like to first discuss the difference between computer
viruses and worms such as Code Red. Traditional viruses, while
potentially very damaging to individual computers, spread only
very slowly to other computers. Users can inadvertently spread
traditional viruses when they share infected files with one
another. For example, through user-initiated e-mail. Again,
since viruses rely on humans to spread, they spread only very
slowly between different computers.
I'd like to direct your attention to the screen to show a
short simulation of how traditional viruses spread. In the
simulation, each large circle represents an individual
organization and each of the small dots inside the large circle
represents a computer. What we're going to do is hypothetically
plant the virus in the left hand organization shown as a single
red dot--although I know from trying this out earlier the dots
look black on that screen--and watch how it spreads over time.
You can go ahead and start.
So what we're looking at is at the concept virus. It's a
simple virus that spreads when people exchange infected
documents with each other and, as you can see, viruses spread
over days or even weeks at about the rate that people exchange
information. This picture is how the world looked to us up
until the Melissa threat was released just over 2 years ago.
In contrast to traditional viruses, computer worms--as has
already been mentioned today--are designed specifically to
spread over networks to as many computers as possible. Most
worms, such as Melissa and LoveLetter, hijack e-mail systems to
spread themselves automatically and, because worms spread
largely or completely without human interaction, they can
infect new users at an exponential rate without regard to
borders or boundaries.
So I'd like to go back to the simulation and watch how a
single worm infection can ravage an organization. You can go
ahead and start that. As you can see, computer worms have
completely changed the rules of our game. Looking ahead, there
are three factors that increase the potential for future damage
from worms. No. 1, our global economy is clearly becoming more
dependent on the Internet. Computers connected to the Internet
now control e-commerce sites, power generation, electronic
business supply chains, government transactions, and numerous
other operations. A properly targeted computer worm could
hobble any of these systems, threatening our national security.
No. 2, as more home users move to high-speed broad-band
Internet connections through cable modems or DSL, the potential
for a devastating attack grows further. A Code Red type worm
could spread to tens of millions or more home computers within
hours. A denial of service attack then launched from tens of
millions of infected machines could decimate the on-line
business to business transactions of all Fortune 500 companies
as well as all business to business and government to
government electronic transactions. A large part of our economy
would simply grind to a halt.
Finally, No. 3, the demographics of on-line attackers are
changing. Until now, most computer worms appear to have been
created by amateurs with no specific targets. However, with
more business and government functions occurring on-line, we
expect to see an increase in professional attacks from
organized crime, corporate spies, terrorist groups, and other
organizations targeting specific systems on the Internet.
Today industry research shows that the public and private
sector have been reasonably successful in taking the first step
in cyber defense through deployment of anti-virus software and
firewalls. The same research has shown that government entitles
rank among the earliest adopters of anti-virus technology and
are also among the most effective at fighting computer viruses
in a timely fashion.
Moving forward, it will be increasingly important for both
the government and private sector to share as much information
on cyber attacks as possible. Harris Miller on this panel has
already spoken to you about the formulation of the ISACs, a
good step in encouraging such cooperation.
Symantec is a founding board member of the IT-ISAC and I
would like to commend Harris Miller for his efforts in helping
to create this important organization.
Now I'd like to move to some recommendations. A good lesson
learned from the private sector is the need to appropriately
prioritize potential security solutions according to their
cost/reward tradeoff. Deploying effective security is not an
all or nothing procedure. Rather, it is an evolutionary process
where each successive step further reduces risk.
We sometimes refer to an 80/20 rule for security. By
applying the most important 20 percent of potential security
solutions, one can likely prevent 80 percent of possible
attacks. Based on our experiences, there are three top
recommendations to protect against 80 percent of likely
attacks.
No. 1, organizations should deploy properly configured and
updated anti-virus software and firewalls. No. 2, organizations
need to install appropriate updates for any announced security
holes on all systems as soon as these are available. As we've
seen, such actions would have disabled the Code Red worm.
And finally, No. 3, organizations should have a specific
policy to ensure that computer users' passwords cannot be
easily compromised. Beyond these 80/20 rules are there further
general recommendations.
No. 1, organizations should consider deploying other types
of security software such as vulnerability assessment or
intrusion detection software at all appropriate layers of their
network.
No. 2, organizations should consider instituting a policy
to block all executable programs from flowing into their
networks through e-mail attachments. Many corporations have
successfully blocked numerous worms through just such
procedures.
And finally, No. 3, industries and government agencies
deemed essential to our national security, as described in
PDD63, should consider using private networks for all critical
communications to isolate themselves from worm-based attacks.
In conclusion, Mr. Chairman, over the coming decade, a
computer worm could easily devastate our national economy. The
time to invest in this problem is now. Both the government and
corporations are building their next generation of on-line
systems today and all of these systems will be targets
tomorrow. Thank you very much.
[The prepared statement of Mr. Trilling follows:]
[GRAPHIC] [TIFF OMITTED] T0480.115
[GRAPHIC] [TIFF OMITTED] T0480.116
[GRAPHIC] [TIFF OMITTED] T0480.117
[GRAPHIC] [TIFF OMITTED] T0480.118
[GRAPHIC] [TIFF OMITTED] T0480.119
[GRAPHIC] [TIFF OMITTED] T0480.120
[GRAPHIC] [TIFF OMITTED] T0480.121
Mr. Horn. Thank you, and we will back to you on a number of
questions.
Our last presenter is Marc Maiffret, the chief hacking
officer of eEye Digital Security. Welcome. We're delighted to
have you here.
STATEMENT OF MARC MAIFFRET, CHIEF HACKING OFFICER, eEYE DIGITAL
SECURITY
Mr. Maiffret. Thank you. I'd like to thank you for
providing me the opportunity to be here today. I hope to bring
a real world perspective to some of the issues that are
currently affecting the security of our computer networks. My
name is Marc Maiffret and I'm the co-founder and chief hacking
officer of the eEye Digital Security. I've been in the computer
security field for about 6 years now. The first 3 years of my
experience was mainly as a hacker and the last 3 years has been
as the chief hacking officer of the eEye Digital Security.
The eEye Digital Security was started with the goal of
creating software products that would help protect companies
against the growing threat of cyber attack. Besides just
creating software products, eEye also focuses on vulnerability
research as a way to stay on top of the latest security
threats. Vulnerability research is the process of analyzing
software products to find ways in which an attacker can
manipulate software in a malicious way.
I've personally found vulnerabilities within 30 or so
different software products and eEye itself has also been
responsible for the discovery and disclosure of a few of the
largest software vulnerabilities ever. It is a real world
experience I have in hacking, vulnerability research and worms
which I hope provides you all with an insight into the problems
we are currently facing in the world of computer security.
Computer systems and networks are vulnerable to many
different types of attacks. The computer worm is one of the
most dangerous types of attacks that threaten the Internet
today, potentially more damaging than any virus. A virus can
only infect systems if the computer user performs a certain
action--for example, executing an e-mail attachment--whereas a
worm, once planted on the Internet, is completely self-
propagating. This functionality allows a worm program to infect
a very large number of systems in a very short period of time.
Once the worm spreading has begun, the author of the worm could
have control over thousands, if not millions, of systems which
can then be used to perform attacks against the Internet or
specific parts of the Internet.
Code Red represents one of the best modern examples of a
worm and the impact they can have on the Internet. Code Red was
discovered around July 13 of this year. The first detailed
technical analysis of Code Red was actually published July 17.
That first detailed analysis of Code Red was done by myself and
Ryan Permeh of the eEye Digital Security. Funny enough, we
actually named the worm after the type of soft drink we had
been drinking while performing our analysis.
For a worm to propagate, it requires a method of entry. In
the case of Code Red, it was via vulnerability within Microsoft
Internet Information Services Web server or IIS. The
vulnerability that the worm used to compromise Microsoft IIS
Web servers is a vulnerability called the dot IDA buffer
overflow. The dot IDA buffer overflow was actually a
vulnerability found by eEye Digital Security. Microsoft and
eEye Digital Security released the security advisory a month
before Code Red was found in the wild. The advisory gave
administrators instructions on how to protect themselves from
the dot IDA vulnerability. Therefore, if administrators had
installed the Microsoft security patch, then Code Red would not
have had the ability to infect any systems and spread itself
across the Internet.
Code Red was designed with two goals in mind. The first
goal was to infect as many IIS Web servers as possible and the
second goal is to attack the White House Web server between the
20th and the 27th of every month. Code Red seems to have been
very successful at its first goal while failing at its second
goal. The reason it was successful for its first goal is due to
the fact that many Web servers were left unpatched against the
IDA vulnerability. Code Red failed at its second goal because
eEye Digital Security's early analysis of Code Red provided
enough information in advance to protect the White House Web
server.
The aftermath of Code Red has shown us the devastating
effect that worms can have on the Internet. Although the worm
only reached one of its two goals, the effects of the first
goal had numerous implications. The rapid spreading of Code Red
created abnormally high amounts of network traffic causing some
networks to go off-line. Certain routers and other network
devices experienced crashes unforeseen before Code Red.
Five hundred thousand systems were comprised at the highest
level of access and they were broadcasting that fact to the
Internet at large. Although preventative measures stopped the
second goal of the worm from being achieved, had it occurred,
it would have been the largest distributed denial of service
attack the Internet has seen today. Code Red has served as a
warning shot to grab the attention of the Internet community.
The biggest problem facing security today is that there are
too many people talking about what we could do or what the
threat is and not enough people doing real work that will
result in a mitigating or abolishment of those threats. The
Code Red worm was in some ways one of the best things to happen
to computer security in a long time. It was a much needed
wakeup call for software vendors and network administrators
alike. Code Red could have caused much more damage than it did
and, if the authors of Code Red had really wanted to attempt to
take down the Internet, they could most likely have easily done
so.
What made all of this possible and what steps can we take
to help prevent things like this in the future? These are the
most important questions and, luckily, there is much we can
learn from Code Red to improve our current security standing.
One of the first areas that needs improvement is the way that
software vendors test their code for stability and security.
I'm a software engineer so I know that mistakes do happen and
programmers will now and then accidentally write vulnerable
code. Software vendors, however, are usually not very motivated
to take security seriously.
Software vendors are not the only ones at fault here
though. Network administrators and managers at various
corporations are also to blame for faulty security. Going back
to Code Red as our example, we can see that really the largest
reason for Code Red's spreading as it did was because a lot of
network administrators did not install the Microsoft security
patch.
It should also be noted that many companies have a very
small budget for an IT staff or do not even have an IT staff.
This leads to a lot of problems for administrators when it
comes to securing a company's network.
To help get security messages out to the public, there
needs to be a centralized organization for vulnerability
alerting. There are a few cyber watch organizations, NIPC,
SANS, CERT, that currently watch for large scale attacks, i.e.,
worms, larger vulnerabilities and viruses. However, I feel
these organizations would be able to accomplish a lot more if
they sent alerts about all vulnerabilities instead of only
vulnerabilities deemed serious enough to cover. There should be
a Web site or e-mail alert system that administrators could
join that would allow them to find out about all
vulnerabilities and patches.
Something that was said earlier I thought was pretty
interesting from the gentleman from SRI. The reality of the
situation right now is that there's a few aspects to security.
One of the main things is, of course, vulnerabilities. Really,
the type of vulnerabilities that are out there, there's I'd say
five to six different classes of vulnerabilities out there.
Things like buffer overflows, etc. These classes of
vulnerabilities have actually been around, some of them, for 20
years, 15 years. For example, the class of vulnerability that
Code Red was exploiting was a buffer overflow vulnerability.
The Robert Morris worm itself was exploiting that type of
vulnerability.
So I think one thing is that the research has been done
about buffer overflows and all these things and a lot of people
have given the same speeches about doing more and all this sort
of stuff but really, to me, when I got into the security field,
I was kind of amazed that still, 15 years later after things
like buffer overflows have been covered, that something like
that is still actually a problem today. Really, it comes down
to software vendors and also IT administrators, etc., but
stopping worms, stopping viruses, stopping a lot of the
vulnerabilities out there, it is not as hard of a thing to do
as some people might say it is. These are vulnerabilities that
have been around for a long-time and there's tons of
information on them and there definitely is a lot that we could
be doing to make sure that software products do not have these
types of vulnerabilities. That's all.
[The prepared statement of Mr. Maiffret follows:]
[GRAPHIC] [TIFF OMITTED] T0480.122
[GRAPHIC] [TIFF OMITTED] T0480.123
[GRAPHIC] [TIFF OMITTED] T0480.124
[GRAPHIC] [TIFF OMITTED] T0480.125
[GRAPHIC] [TIFF OMITTED] T0480.126
[GRAPHIC] [TIFF OMITTED] T0480.127
[GRAPHIC] [TIFF OMITTED] T0480.128
[GRAPHIC] [TIFF OMITTED] T0480.129
[GRAPHIC] [TIFF OMITTED] T0480.130
[GRAPHIC] [TIFF OMITTED] T0480.131
Mr. Horn. Thank you very much. Let me ask you a question
and we'll start going this way. You heard the testimony of Mr.
Castro of the National Security Agency and the ease with which
hackers can learn their trade. Do you agree?
Mr. Maiffret. Yes. Definitely. To write something like Code
Red would take probably an hour or two. It's a very trivial
thing to do. To launch something like Code Red to the Internet
in a way where you're not going to be tracked, you're not going
to be detected, is very simple to do. Even sometimes finding
the vulnerabilities of these worms exploiting stuff is also
actually rather trivial. Some of the most talented people out
there happen to be on the side of the hackers and what not.
Really, the thing is it's like that sort of knowledge, as the
gentleman from SRI was saying, has not really been transferred
into a lot of the corporate companies that are actually
developing these products and what not. A lot of them have
started to do some very good things recently. Microsoft would
be a perfect example that's made a lot of improvements lately.
However, the majority of software vendors out there still, it's
a race for do I have the same features as this other software
company.
Really, one of the things, security is not going to
necessarily change until enough administrators are actually
demanding for better security and that's what the market is
actually asking for rather than new features being released.
Mr. Horn. What are the disincentives that you can think of
that governments might have to stem the hacker behavior, or do
you think it's a problem?
Mr. Maiffret. There's a lot of talk about having laws that
are a little bit more scary or whatnot but coming from the
hacker past and stuff, really when you're in that like mindset
and when you are that teenager breaking into systems and
whatnot, even though you read something in the newspapers about
Kevin McNeff being in jail for 5 years and this sort of thing--
which is definitely serious--you usually think you're above
that and you're not going to get caught, etc. So laws, I don't
really think, are necessarily going to scare people into not
doing it and whatnot. I mean it really comes down to stopping
the vulnerability in the first place.
And actually, it's not an easy task to get vendors and
whatnot to actually start looking at security first and then
designing the product around security. It's usually design the
product and then design the security around it, which is not
necessarily the best thing to do.
Mr. Horn. Let me try an analogy out on you and see if it
makes any sense in the electronics of software, hardware, so
forth. A lot of people look for marks on pistols and the bullet
goes out and you've got usually, as the FBI knows, you can find
and relate what happened on that barrel as the projectile went.
The other one is the use of gun powder in terms of shot guns
and people are talking about well, gee, why can't we have in
that one on the shot gun in particular, you can put in types of
things that have a pattern that no other shot gun shell does
that. So is there any way that something like that can be in
the electronics and all of the ones that are into software and
maybe even hardware?
Mr. Maiffret. I guess the question is basically kind of
like the attackers and the hackers, whatever you want to label
them, performing the attacks if there's something that can be
kind of resident or left to be able to help track them. Would
that be correct?
Mr. Horn. Could be.
Mr. Maiffret. Basically, dealing with software and whatnot,
it's not really an easy thing to put anything in there like
that. I mean people have tried to put in kind of bug type
devices or things. Different software products have like unique
identifiers for each computer which has actually led to the
capture of a couple of different e-mail virus authors. However,
all of those things, if you're smart enough, it really just is
software and it's bytes of information and that is all easily
manipulable. So it's not necessarily where you're going to
track a hacker that way.
There are a lot of things that could be done as far as on
the network layer with things like intrusion detection systems
and actually being able to detect an attack coming over the
network and you'll at least have some sort of starting point of
where they came from. Even intrusion detection systems, which
is one of the more popular ways of creating logs to track
attackers, even IDS systems themselves are vulnerable to
attacks. Either yesterday or sometime today eEye Digital
Security is releasing another security advisory on which we
basically illustrate a technical way where you could bypass any
commercial intrusion detection system to be able to attack IS
Web servers.
What that means is that if somebody would have had that
knowledge--in fact, somebody did have that knowledge at the
time of Code Red--they could have used that knowledge to
basically change around the Code Red attack in a way where
intrusion detection systems would not have actually detected
it, which is what led to the early analyses and the information
getting out. So it could have potentially given Code Red and
things of that nature another week head start on attacking the
systems and what not.
One of the things I was covering in my written testimony is
I think that there's a lot that could be done as far as trying
to detect some of these worms earlier in the process, to be
able to get the word out and having a sort of system. They call
it a honey pot in the security field. But you basically have a
set of dummy servers that look vulnerable and whatnot and
they're really watching. Typically they're used to monitor
attackers and how they work. However, you could adapt something
like that for worms and, if you did own a large enough block of
IP addresses or computer network addresses, you could actually
detect a worm and be able to get the analysis out much earlier
than we have been right now.
Mr. Horn. Mr. Trilling, you want to comment on that dialog?
Mr. Trilling. Yes, with regard to tracing back?
Mr. Horn. Right.
Mr. Trilling. Certainly a lot of these threats, e-mail
threats and so on and Code Red, as they move through the
Internet, they do leave traces, whether it's in logs or whether
it's in the actual e-mail. Sometimes they use the analogy as a
letter goes from one city to the next, each post office will
put a local stamp on that envelope and eventually, if you want
to trace back through all the stamps, you can find the origin.
But the extent to which you're likely to be successful at that
is very much related to how much effort you want to take and,
as has been mentioned earlier, there are over 50,000 known
computer viruses and worms right now. It's not likely to be
practical for law enforcement officials to be able to trace
back to the origin of all of them.
So certainly, as we've seen with Melissa, as we saw with
LoveLetter, it is possible and certainly when effort is placed,
when there's a high-profile attack that does a lot of damage,
it's absolutely possible to trace back to the origin, but it's
time consuming, it requires money and resources and proper
prioritization.
Mr. Horn. Mr. Culp, how about it? What's your feeling for
Microsoft?
Mr. Culp. Well, trying to make changes in the software
that's going to run on a hacker's machine to identify the
hacker is ultimately going to be futile. The hacker owns that
machine and, as Mr. Maiffret put it, it's just software. If a
vendor installs tracking software into the operating system, a
person who installs it on their machine and has administrative
control can simply take it off. They can patch it with
something that nulls out the functionality.
Just the same, what Mr. Trilling was saying about improved
forensics as the information transits the network is a much
more interesting idea. The flip side though is that there could
potentially be privacy concerns. But the real issue here is not
so much the technology as much as human behavior.
I want to sketch a scenario for your consideration. Suppose
we lived in a world where I could come home today and find out
that on my way out to work this morning I accidentally left my
back door unlocked and when I came into the house, I found all
my furniture gone with a sign that said, ``I've taken all your
furniture in order to teach you about the importance of locking
your doors.'' Now, suppose that I knew who did it and the
general opinion of society was, well, he's done you a favor.
He's shown you how insecure your home was. Does anybody believe
that our homes would be secure?
The reason that we don't tolerate this kind of behavior in
our physical lives is because we know what it would lead to.
Cyber crime is crime. There's nothing new about it. It's the
same old type of crime we've had for generations. It's breaking
and entering. It's robbery. It's burglary. It's destruction of
property. We focus on the cyber part of cyber crime and we lose
track of the fact that this is just crime. What keeps us safe
in our insecure physical world is the deterrent value of law
enforcement. To a certain extent, that's missing in cyberspace
and that's one reason why we have the problems that we do.
Adding tracking information is fine, but it presupposes that
there's going to be effective law enforcement.
Mr. Horn. Mr. Neumann.
Mr. Neumann. Thank you. There's a huge confusion between
leaving your front door open and leaving your computer system
accessible from anywhere in the world. Recently, Abby Rueben,
who works at AT&T Labs, one of the old Bell Lab spin-offs, was
sitting in the Morristown Memorial Hospital and all of a sudden
the green light on his laptop goes off and he discovers that
he's instantaneously connected to the wireless network of the
hospital with no security, no authentication, no protection
whatsoever.
As I mentioned earlier, we had this case in Oklahoma where
a guy let his newspaper know that their Web site was open and
he's now up for 5 year felony charge. Abby did not do anything
within the Morristown Memorial Hospital, but he noted this and
I published it in my risk forum and I fear that all of a sudden
people are going to be going after him because he has exceeded
authority.
In the Robert Morris case, Morris was accused by the
Federal prosecutor of exceeding authority. In the four
mechanisms that he used in the Internet world, not a single one
of them required any authority. There was no authentication
required, there was no access control required. The startling
thing about this is the law that we're dealing with says you
must exceed authority. If there's no authority required, then
somebody who happens to access your system from afar is
obviously intending to break into your system. But the law as
it is written does not say that he's doing anything wrong if
he's being accused of exceeding authority and there's no
authority required.
One of the most fundamental problems we have is that fixed
passwords are being used. Fixed passwords are flying around the
Internet unencrypted. They're trivial to sniff. There's lots of
software you can get that will enable you to pick off
essentially any Internet traffic.
The fact that somebody breaks into your system should be a
violation of the law and yet, as the law says, if he's
exceeding authority, there's something fishy here. So I think
we have to be a little bit careful if the laws are not saying
what they're supposed to be saying. If there is no
authentication and there exists zombie machines all over the
place that people can jump into and use as springboards for
attacks with no trace back possible because they've broken in
masquerading as someone else and you have no idea who they are
or where they're coming from because of the way they come in,
there's something fundamentally wrong here.
I mentioned the idea of malicious code. You have to realize
that the malicious code, once it's in your system, is executing
as if it were you. So the challenge is to keep it from getting
in there in the first place. The laws do not help in that
respect. So yes, we need better laws, I think, but we also need
better systems.
I will just mention the Advanced Research Project Agency of
the DOD which has at the moment a set of 10 contracts--I happen
to be lucky enough to have one of them--on what's called
composable high assurance trustworthy system. This is an effort
to radically improve the security/availability/ reliability of
the computer operating systems that we deal with, and I'm
hoping that research will inspire some of our computer vendors
and developers to use some of the better techniques to come out
of that research program.
But again, I say I don't have much hope because I've seen
the research that we did back in 1965 which is widely ignored.
Thank you.
Mr. Horn. Harris Miller, president, Information Technology
Association of America. How do you look on this?
Mr. Miller. I think the idea of the unique identifier, I
would agree with what Mr. Culp said. The problem with the
technology is that technology can be over-ridden, No. 1. No. 2,
the privacy advocates would go absolutely ballistic. They've
gone crazy when they've accused companies like Intel and others
of trying to plant identifiers in their computers, even though
Intel is doing it purely to protect the consumer. The consumer
privacy advocates say that this is an attempt to install big
brother. So I think the negative reaction sociologically, in
addition to the technological obstacle that Mr. Culp outlined,
really don't make that a very good alternative solution.
I would like to comment on two other things that you
addressed earlier though, Mr. Chairman. One is about the
behavior of cyber citizens. We're not foolish enough to believe
that simply saying be good will solve all of our cyber
problems. However, we're sort of at the other extreme right now
where we don't teach young people at all about good cyber
ethics.
In fact, there is still a tendency to revere hackers as if
somehow this is a positive element of our society. It's good to
be able to say I brought down the Defense Department Web site
or, even worse, Johnny and Susie's parents say, isn't Johnny or
Susie clever? They brought down the Defense Department Web site
as if it's a mark of admiration. They wouldn't be proud if
Johnny or Susie burned down the Pentagon or burned down an
office building, but somehow they're proud if they can figure
out a way to show that they're technologically more
sophisticated than the people who developed the software.
That's why ITAA has worked with the Department of Justice
and now Department of Defense on our cyber citizen program. We
think that there needs to be education built into the
classrooms all the way K-12 and higher education and even
beyond to teach people good cyber ethics. Again, it's not going
to solve all the problems but the previous panel mentioned that
24,000 attacks occurred on DOD last year. DOD will tell you
that a huge percentage of those, 80, 90, 95 percent, is what
they call script kitties. People just fooling around because
they think it's cute or clever. Doesn't mean most of those
attacks succeed but it does mean that it's harder for DOD as
the object of attack to identify the serious problem because
there's so much chaff coming at them in the form of people
playing games. So I think that we do need to focus more on
cyber education.
The last point I'd like to make is I enjoy Doctor Neumann.
He's obviously a lot smarter than all of us, but he does
somehow take statements and run a little bit to the extreme.
For example, he says that the Y2K legislation totally protected
software vendors. As you know as one of the authors of the
legislation, that was not the objective. The objective was to
try to make the point that if a remediation could be found,
that should be the first choice before you run off to the
courts. That was a system that worked reasonably well.
I would just disagree candidly with Doctor Neumann's
assessment that the market place does not provide incentives
for cyber security. I think the market place provides
tremendous incentive to cyber security but, just as with
automobiles, people want it both ways. They want to be able to
do speedy business, but they want to be able to do secure
business. So the challenge for industry is to balance those two
interests off. We could all drive HumVees and armored personnel
carriers down the road and probably wouldn't have 42,000
Americans die on American highways. But we'd go a lot slower,
they'd be a lot more expensive to run, they'd ruin the
highways. We'd have to replace them a lot more often. So we try
to come up with a balance: cars that are safe but also are
fairly inexpensive and can move quickly.
That's the challenge for the IT world. Companies,
customers, individual consumers, both domestically and
globally, want new products. They want products that work
quickly. They want to be able to get their e-mail instantly if
not faster. They want to have wireless access but at the same
time they want security. So the challenge for all of us, both
as producers of these products and as consumers, is to reach
that balance. I think that clearly the good news is there's a
lot more focus on cyber security. Mr. Maiffret said quite
correctly the Code Red virus was a wakeup call. An even bigger
wakeup call was the February 2000 distributed denial of service
attacks which led to the creation of the IT-ISAC. So these
incidents are good in a way. Fortunately, there's never been
what Dick Clark and others have referred to as an electronic
Pearl Harbor where it really has destroyed the Internet it's
been so bad. But I think there have been enough serious
incidents that people are paying more attention. I think we are
making progress.
Mr. Horn. When a symptom of being a virus or a worm or
whatever you want to call it, is there a way to sort of think
about that software side? Can you get all this bombardment away
into another part within a computer and that would then divert
the group that's making the attack?
Mr. Miller. I'll defer more to the experts. Again, I don't
think it's possible to say that somehow you know intrinsically
that these are good guys and bad guys. What technology has
tried to do is separate that as much as possible. Mr. Maiffret
mentioned the idea of this honey pot concept where you create a
lot of IP addresses that are basically out there just to lure
bad guys hoping that because security experts or government
officials are watching those IP addresses, they would catch
earlier warnings of these problems before they become widely
diffused through the real government and the real private
sector. But I don't know that there's any way of saying at the
end of the day we're going to know every bad guy that walks
into the bank any more than we're going to know every bad piece
of code that comes in. I don't think there's any way of saying
that in advance.
Clearly, the tradeoff--and I think I discussed this before
another hearing you had, one of your colleagues said, well, can
I get to a situation where I never get an e-mail virus on my
computer? I said to the Member of Congress, you could. You'd
have someone else get all your e-mail and let him or her be the
guinea pig, in a sense, and he or she would screen it. But, of
course, you're giving up your privacy because that means
someone else gets all your e-mail. You're giving up the time
sensitivity because someone else would have to filter it and
make sure it was all done. So that's a trade-off. You could
say, OK, I as an individual don't want to get any viruses but
what kind of tradeoffs am I going to make then?
Mr. Horn. Let me just ask a few closing questions here. Mr.
Maiffret, you've been criticized for giving a blueprint of the
exploit to malicious programmers. Could you tell us how you
believe this is an important way to provide details of threats
to the on-line community?
Mr. Maiffret. Yes. The first thing would be the wording on
that would be it's not necessarily a blueprint. The main
criticism came with Code Red and people said that we gave out
enough details where somebody took our details and then
actually wrote Code Red from those details.
In the case of Code Red, the actual techniques that they
used were far superior to anything that we talked about. In
every advisory on software that we do, we always give out
enough details where a vulnerability scanning type tool or an
intrusion detection system or administrators themselves will
have enough technical information where they can either test
the vulnerability to make sure that the patch is working
themselves or that they can actually update their intrusion
detection systems to be able to monitor for potential people
trying to exploit the vulnerability.
It is a double-edged sword because yes, there is the
information that's there and somebody could take that and try
to write an exploit program with it, as they call it. However,
the thing people need to understand is that even without any
information at all, it's actually rather trivial to actually
figure out where the vulnerability lies and exploit it. This
has happened in the past before. One example of that would be
Code Red itself was actually based off of another worm that was
released back in April of this year and the vulnerability that
worm exploited, there was actually no technical details ever
released on it.
So what happened from that was that some hackers did figure
out the technical details, did write an exploit for it, did
write a worm for it. However, since there was no public
technical details released about it, no security software tools
or anything out there were actually updated to be able to look
for that specific signature. So back in April when Code Red was
actually first attempting to go around the Internet, since
there was no details, nobody was actually able to detect that
it was going on. There just happened to be a couple of
administrators at Sandia Labs that were lucky enough to see it.
Mr. Horn. Recently the editorial editor of the Washington
Post, Meg Greenfield, had her computer and people wondered what
her password was and so when they found out, she simply said
password, and I began to think that's so obvious, maybe people
would leave her alone. No one would obviously think password
for the password.
Mr. Maiffret. One of the most common.
Mr. Horn. That's right. Well, since some of you have
teaching backgrounds, I guess I'd be interested in the fact
that even Microsoft who warned the users of the newly
discovered vulnerability and issued the patch to protect
against the exploit did not protect all of its own systems,
illustrative of the day-to-day challenge that system
administrators face in maintaining the security of their
systems. Any thinking on that?
Mr. Maiffret. Sure. Let's walk back through. As you noted,
when the initial patch was released, we did extensive
publicity. Let me run through a couple of things that we did.
As always, we released a security bulletin on our Web site.
It's one of those heavily traveled Web sites on the Internet.
We mailed it to over 200,000 subscribers to our mailing list.
We also took the unusual step, because of the severity of
the vulnerability, of engaging our worldwide support
organization, particularly several thousand employees known as
technical account managers who have direct relationships with
customers and we asked them, call your customer and tell them
you need to put this patch on now, read the bulletin later.
We also proactively contacted the media and asked for help
in getting information out. This was without a doubt the most
widely publicized security bulletin in history. It's in keeping
with how we have traditionally handled security
vulnerabilities. Our goal at the end of the day is to get as
many patches on machines that need them and, if the way to do
that is to air the fact that we've made a mistake worldwide,
we're going to do that.
But as you mentioned, we neglected to fully protect our own
networks. We did have a few machines, scattered machines here
and there, that didn't get patched and this is illustrative of
a problem that's inherent in a patch-based protection scheme.
Applying patches is a management burden. Takes time. Certainly
takes less time to apply a patch than it does to rebuild a
machine after the machine has been compromised, but just the
same, there's a management burden associated with this. We've
invested quite a bit of time and effort, even starting before
the worm, into trying to make our patches as simple as possible
to get onto the machines that need them.
Let me give you a couple of examples. Starting in May, we
inaugurated a practice in which every IIS patch, patches not
only whatever the vulnerability is we're discussing here now,
but includes every previous patch for IIS. So if you just apply
the most recent patch, you're protected against everything. No
other vendor in the industry does that.
We've also taken some steps to do some technology
development to make it easier to get the patches onto the
machines. Specifically, not requiring the machines to reboot.
It turned out when we talked with our customers we found that
was a significant impediment to a lot of them. So we did some
technology development. We rolled out no reboot patches. And
just recently we've rolled out some tools that have been in the
works that have been under development since earlier this year
that we believe will help ensure that customers have fully
patched machines.
The first one is something called the Microsoft Personal
Security Advisor. It's a Web site. You navigate to the Web site
and it downloads some software to your machine that allows it
to scan itself with reference to a data base that we keep up to
the minute on our site to find out whether your machine is
configured securely and to determine whether or not you're
missing any patches. We released a companion tool that server
farm administrators can use so that if you're, for instance, an
administrator with 100 machines, from a single console you can
tell which patches each one of those machines is lacking and
keep them up to date. But just the same, the fact that we
didn't have perfect compliance ourselves illustrates that
there's more work to be done and we're certainly committed to
making improvements as we go forward. We have some new features
in our upcoming products that we believe will make it even
easier to stay up to date on patches, including some
technologies that will allow you to stay up to date
automatically.
Mr. Horn. That's very interesting and Mr. Trilling, I was
intrigued by your testimony. Applying a few simple rules. One
can prevent the majority of attacks on your systems. More
specifically, you detailed three top security recommendations
that would likely protect against 80 percent of the attacks. In
your opinion, should these rules be made mandatory for
government agencies? That's a good probability.
Mr. Trilling. Right. It's an interesting question. I think
a little outside my area of expertise. I certainly feel like
security rules and security policies really ought to be decided
on by security companies rather than necessarily by the
government. The other thing to point out is that security
really is different for everybody. One of the things we often
say is that it's important to secure your systems in such a way
that the cost of breaking into that system is greater than the
value of information you could get out of that system. So the
effort to protect information for the Department of Defense is
going to be very different than for a home user's individual
Web site. I think each of those decisions needs to be made
individually by individual organizations in consultation in
many cases with security experts.
I'd have to sort of understand a little bit the framework
of what you're talking about but I think in general it would be
difficult to sort of mandate across all agencies that these
certain laws ought to be applied because the needs of security
for different agencies and different organizations are really
different depending on the value of what they're trying to
protect.
Mr. Miller. Mr. Chairman, the Federal CIO Council is trying
to deal with this kind of a challenge and IT has been somewhat
involved. It's basically led by the Federal CIO Council,
particularly Mr. John Gilligan who's now the Deputy CIO at the
Department of the Air Force and previously was CIO at the
Department of Energy. What they're trying to do is establish
best practices across agencies and it is complicated for the
reasons Mr. Trilling suggested because there's no one size fits
all. But by sharing information within the Federal CIO Council
and then between industry and government, that's the role ITA
has played by bringing to the government CIOs some of the best
practices applied in commercial settings. We think there has
been some progress there.
Your staff might want to get a debriefing from the Federal
CIO Council about how their best practices are coming along.
They're trying to achieve in practice what Mr. Trilling has
outlined in theory would be a good idea.
Mr. Trilling. If I could just make one quick point just to
take an example. If you were to mandate inside an organization
every user inside the organization needed to change their
password every 5 minutes, clearly that would reduce
productivity enormously to the extent that most companies would
never make that tradeoff. But there may well be some
organization, some government organization where security is so
critical that you're willing to make that tradeoff, and you see
this over and over again, the tradeoff between convenience and
security. More convenience often means less security and people
need to, again, appropriately protect themselves depending on
the value of their information stored on their computer
networks.
Mr. Horn. Mr. Neumann.
Mr. Neumann. A couple of comments. One is that this 80/20
business is a moving target. I go back to my tip of the iceberg
analogy. You chop off the top very small percentage of the
iceberg and there's still exactly the same size of the iceberg
there. You may get rid of the 80 percent but there's an
escalation effect here in that the attackers are advancing
faster than the developers which means that no matter how much
there is visible of the iceberg, you still have a very serious
problem.
You mentioned education. Let me just speak to that. I've
taught in a bunch of different universities. Most recently I
taught a course based on work that I've done for the Army
Research Lab on how to build reliable, secure, highly
survivable systems. All of the notes for that course are on my
Web site and I think when you talk about how do you set
principles and try to get people to enforce them, a good place
to start is to read a document like that and discover what the
principles are and see which ones of them are applicable.
The most important thing is the architecture, as I've
mentioned. I don't have a virus problem. I can read e-mail with
all kinds of attachments but it never bothers me. I'm not
running a Microsoft operating system. I'm running a Lennox
system. Lennox has its own security violations and
vulnerabilities. But the point is that if you focus on an
architecture in which your system protects itself against
itself--and again I go back to the research that we did in 1965
which pretty much solved that problem--then a lot of the
problems that you see in malicious code don't happen because
the malicious code is executing with all of your privileges and
you're giving it freedom to do whatever it wants.
So all of the stuff about Trojan horses is ignoring one
fundamental thing. That once somebody has broken into your
system with a virus or a worm or whatever it is, you don't know
that there's a residual Trojan horse there. There might be
something nasty just sitting waiting for something else to
happen. The Trojan horses are really the ultimate problem here.
We're talking a lot about viruses and worms, but the real
problem is the fact that systems are not designed with adequate
architectures to protect themselves against themselves and to
protect themselves against outsiders as well as, of course,
insiders.
Mr. Trilling. May I make a very quick comment to respond to
Mr. Neumann. I think you're quite correct in saying that it is
a moving target and that more of the iceberg is always showing
when you cutoff the top. But again, it's about reducing risk.
As we pointed out here, most of these crimes, most of these
worms that we talked about today, were not targeted attacks.
They were crimes of opportunity. Code Red simply went from
machine to machine checking somebody's door knob. It would be
like somebody walking through a neighborhood seeing if each
door was open. If the door was open, they'd walk in and attack.
If not, they'd keep moving. You could break into that home but
you might as well keep walking down the block because you'll
find another home that's open down the road.
Most of these attacks such as Code Red are crimes of
opportunity. They're going from machine to machine seeing if
they can break in and so, again, it's all about reducing risk.
By taking a small number of steps, we believe you can reduce
your risk a lot. Certainly, to reduce your risk further to get
that next part of the iceberg is going to be a big step for
some organizations is more cost effective and more needed than
others. But you want to make sure that the person just trying
to walk into your door or come in through your basement, which
is how most attacks are occurring today, you want to make sure
you're stopping that. That's for government machines as well as
home machines.
Mr. Horn. Mr. Maiffret, any thoughts on this?
Mr. Maiffret. I guess beyond just like it's really
something where I think they're kind of talking like if you
like patch the current top 10 vulnerabilities, you're making
the best effort. But I think what Mr. Neumann was saying is
when you patch the ranked top 10 right now, then hackers move
on to the next top 10 and the next top 10. It's really
something where the biggest vulnerabilities, they're just that
and if you fix them, then the things that were not necessarily
the biggest vulnerabilities the week before, now they are. It's
really something where you do have to try to eliminate all of
them. It's not something about doing the top 10 checklist or
something of that nature.
Mr. Trilling. I think that's also a really good point which
is that you never get to the point where you are now secure.
Security is a moving target. The value of the information on
your network could suddenly change tomorrow as your business
changes, as you acquire a new organization. So companies,
organizations, government entities should never be stopping and
saying, well, because we've gone through these top 10 lists,
we're now done. Security is an evolving thing in much the same
way that physical security is also.
Mr. Horn. One of my colleagues who sat near me in our
investigation of the White House e-mails which went on for
dozens of hours and he said to me, he said, I'm just going to
get rid of e-mail. The heck with it. They had the most stupid
conversation. It was not great political theory or great policy
and all this. They were darned stupid crazy things. Everything
from every joke on Arkansas and everything else. He said,
enough is enough. If they want to see me, they can walk through
the door.
Panel one has been very gracious listening to this dialog
and if you have any thoughts that we haven't explored, feel
free to get to the microphone or we can just send it back, I
think, and put it in the front row there whereas they're in the
orchestra pit. I've got a number of questions here and if
you're on the way home or something or dictating into whatever
your little thing is, we would welcome. Both the Democratic
staff and the Republican majority staff have a number of
questions. So we appreciate any helpfulness you could give in
answer.
We will keep the hearing over and out and open for probably
2 weeks and then any thoughts you have going back. I want to
thank all of you. You're very able in your whole firm of
computers and enhancing computer security in the public and
private sectors is a priority of this subcommittee and must
become a priority, we think, for governments at all the levels
because as we get from enhancing computer security, we're also
talking about helping to have privacy for the citizen. Their
records should not be used without their access or whatever the
law reads on that.
We'll issue a second report card on computer security
within the Federal Government shortly. Attention to and action
on this important issue must occur at the highest levels. It
took them 2 years in the previous administration to wake up to
Y2K and we're hoping that the current administration will take
this very seriously, and I think they will. Today's hearing is
a part of that process and we thank you very much for coming
here, some of you for 3,000 miles.
The staff I'd like to thank for this hearing is to my left,
J. Russell George, the staff director/chief counsel of the
subcommittee. Bonnie Heald is here out in the audience. She's
working with the press, professional staff member, director of
communications. And then Elizabeth Johnston, as a lot of you
know, is a detailee with us and very knowledgeable on all sorts
of issues. Scott Fagan is assistant to the subcommittee. Scott,
this is his last hearing because he's going into the American
Foreign Service. So you might see him in embassies throughout
the world and maybe one of these days he'll be an ambassador
and will be nice to us in congressional delegations. Hopefully
you've been around us enough to know that Congress is trying to
help you. We're not from the government alone.
David McMillen, professional staff for the Democrat group
and the San Jose Council Chamber's contacts that really helped
us here tremendously. Judy Lacy, Ross Braver and the court
reporters and Mark Johnson is the clerk for the majority. Mark,
you're still around. You're not going to go in the foreign
service or anything, are you?
Mr. Johnson. I'm here as long as you want me.
Mr. Horn. And the court reporter is George Palmer. It's
tough when you go as long as we have, and we thank you, Mr.
Palmer, for doing a good job on this, and that it'll be a good
transcript.
So now this hearing will be in other parts of the United
States on a number of questions. So we thank you all.
Adjourned.
[Whereupon, at 12:58 p.m., the subcommittee was adjourned,
to reconvene at the call of the Chair.]