[House Hearing, 107 Congress]
[From the U.S. Government Publishing Office]



PROTECTING AMERICA'S CRITICAL INFRASTRUCTURE: HOW SECURE ARE GOVERNMENT 
                           COMPUTER SYSTEMS?

=======================================================================

                                HEARING

                               before the

                            SUBCOMMITTEE ON
                      OVERSIGHT AND INVESTIGATIONS

                                 of the

                    COMMITTEE ON ENERGY AND COMMERCE
                        HOUSE OF REPRESENTATIVES

                      ONE HUNDRED SEVENTH CONGRESS

                             FIRST SESSION

                               __________

                             APRIL 5, 2001

                               __________

                           Serial No. 107-13

                               __________

       Printed for the use of the Committee on Energy and Commerce


 Available via the World Wide Web: http://www.access.gpo.gov/congress/
                                 house

                               __________

                  U.S. GOVERNMENT PRINTING OFFICE
73-508                     WASHINGTON : 2001
_______________________________________________________________________
For Sale by the Superintendent of Documents, U.S. Government Printing Office
Internet: bookstore.gpo.gov  Phone: toll free (866) 512-1800; (202) 512-1800  
Fax: (202) 512-2250 Mail: Stop SSOP, Washington, DC 20402-0001


                    COMMITTEE ON ENERGY AND COMMERCE

               W.J. ``BILLY'' TAUZIN, Louisiana, Chairman

MICHAEL BILIRAKIS, Florida           JOHN D. DINGELL, Michigan
JOE BARTON, Texas                    HENRY A. WAXMAN, California
FRED UPTON, Michigan                 EDWARD J. MARKEY, Massachusetts
CLIFF STEARNS, Florida               RALPH M. HALL, Texas
PAUL E. GILLMOR, Ohio                RICK BOUCHER, Virginia
JAMES C. GREENWOOD, Pennsylvania     EDOLPHUS TOWNS, New York
CHRISTOPHER COX, California          FRANK PALLONE, Jr., New Jersey
NATHAN DEAL, Georgia                 SHERROD BROWN, Ohio
STEVE LARGENT, Oklahoma              BART GORDON, Tennessee
RICHARD BURR, North Carolina         PETER DEUTSCH, Florida
ED WHITFIELD, Kentucky               BOBBY L. RUSH, Illinois
GREG GANSKE, Iowa                    ANNA G. ESHOO, California
CHARLIE NORWOOD, Georgia             BART STUPAK, Michigan
BARBARA CUBIN, Wyoming               ELIOT L. ENGEL, New York
JOHN SHIMKUS, Illinois               TOM SAWYER, Ohio
HEATHER WILSON, New Mexico           ALBERT R. WYNN, Maryland
JOHN B. SHADEGG, Arizona             GENE GREEN, Texas
CHARLES ``CHIP'' PICKERING,          KAREN McCARTHY, Missouri
Mississippi                          TED STRICKLAND, Ohio
VITO FOSSELLA, New York              DIANA DeGETTE, Colorado
ROY BLUNT, Missouri                  THOMAS M. BARRETT, Wisconsin
TOM DAVIS, Virginia                  BILL LUTHER, Minnesota
ED BRYANT, Tennessee                 LOIS CAPPS, California
ROBERT L. EHRLICH, Jr., Maryland     MICHAEL F. DOYLE, Pennsylvania
STEVE BUYER, Indiana                 CHRISTOPHER JOHN, Louisiana
GEORGE RADANOVICH, California        JANE HARMAN, California
CHARLES F. BASS, New Hampshire
JOSEPH R. PITTS, Pennsylvania
MARY BONO, California
GREG WALDEN, Oregon
LEE TERRY, Nebraska

                  David V. Marventano, Staff Director

                   James D. Barnette, General Counsel

      Reid P.F. Stuntz, Minority Staff Director and Chief Counsel

                                 ______

              Subcommittee on Oversight and Investigations

               JAMES C. GREENWOOD, Pennsylvania, Chairman

MICHAEL BILIRAKIS, Florida           PETER DEUTSCH, Florida
CLIFF STEARNS, Florida               BART STUPAK, Michigan
PAUL E. GILLMOR, Ohio                TED STRICKLAND, Ohio
STEVE LARGENT, Oklahoma              DIANA DeGETTE, Colorado
RICHARD BURR, North Carolina         CHRISTOPHER JOHN, Louisiana
ED WHITFIELD, Kentucky               BOBBY L. RUSH, Illinois
  Vice Chairman                      JOHN D. DINGELL, Michigan,
CHARLES F. BASS, New Hampshire         (Ex Officio)
W.J. ``BILLY'' TAUZIN, Louisiana
  (Ex Officio)

                                  (ii)


                            C O N T E N T S

                               __________
                                                                   Page

Testimony of:
    Dacey, Robert F., Director, Information Security Issues, U.S. 
      General Accounting Office..................................    53
    Dick, Ronald L., Director, National Infrastructure Protection 
      Center.....................................................    30
    McDonald, Sallie, Assistant Commissioner, Office of 
      Information Assurance and Critical Infrastructure, U.S. 
      General Services Administration............................    26
    Noonan, Tom, President and CEO, Internet Security Systems, 
      Inc........................................................    39
    Podonsky, Glenn S., Director, Office of Independent Oversight 
      and Performance Assurance, accompanied by Jason Bellone, 
      former member of the Computer Analysis Response Team, 
      Federal Bureau of Investigation; Karen Matthews, formerly 
      with Computer Forensics Laboratory, U.S. Department of 
      Defense; Brent Huston, author of book on hackproofing; and 
      Brad Peterson, Director, Office of Cyber Security and 
      Special Reviews, U.S. Department of Energy.................    13
    Tritak, John S., Director, Critical Infrastructure Assurance 
      Office, U.S. Department of Commerce........................    65
Material submitted for the record by:
    Kemper, Jason, III, Vice President, Government Affairs, 
      Cryptek, letter dated April 5, 2001, enclosing testimony 
      for the record.............................................    76

                                 (iii)

  

 
PROTECTING AMERICA'S CRITICAL INFRASTRUCTURE: HOW SECURE ARE GOVERNMENT 
                           COMPUTER SYSTEMS?

                              ----------                              


                        THURSDAY, APRIL 5, 2001

                  House of Representatives,
                  Committee on Energy and Commerce,
              Subcommittee on Oversight and Investigations,
                                                    Washington, DC.
    The subcommittee met, pursuant to notice, at 9:40 a.m., in 
room 2322, Rayburn House Office Building, Hon. James C. 
Greenwood (chairman) presiding.
    Members present: Representatives Greenwood, Tauzin (ex 
officio), Strickland, and DeGette.
    Also present: Representatives Norwood and Davis.
    Staff present: Tom DiLenge, majority counsel; Amit Sachdev, 
majority counsel; Peter Kielty, legislative clerk; and Edith 
Holleman, minority counsel.
    Mr. Greenwood. This hearing of the Oversight and 
Investigations Subcommittee will come to order. The Chair 
recognizes himself for 5 minutes for the purpose of an opening 
statement.
    Today, the subcommittee holds a hearing to assess the 
security of government computer systems. In particular, we will 
assess how well or how poorly they are protecting our most 
critical cyberinfrastructures and operations from the threat of 
disgruntled insiders, hackers, criminals, terrorists, and rogue 
nation-states. Over the past 2 years this committee has 
conducted extensive oversight of computer security at 
particular government agencies, most notably EPA, the 
Department of Energy, and to a lesser extent, FDA and the 
Department of Commerce. Our reviews consistently have found 
poor computer security planning and management and a general 
lack of compliance with existing requirements of law and 
policy.
    We also found that, with few exceptions, the agencies were 
not testing their own systems to determine whether their 
security plans and policies were as effective in practice as 
they looked on paper. And we found that whenever real testing 
of agency systems was conducted numerous significant and easily 
exploitable vulnerabilities were almost always discovered.
    In response, Congress passed a law last October that 
reiterated computer security requirements contained in prior 
Federal laws and OMB directives mandating that agencies develop 
security plans for their systems and conduct periodic risk 
assessments and tests of those systems. But it also imposed a 
new requirement, that agency inspectors general conduct an 
independent test of an appropriate subset of agency systems 
each year.
    One month ago, in order to set a benchmark for measuring 
agency progress under this new law, I wrote to 15 Federal 
departments, agencies, and commissions within this committee's 
jurisdiction to inquire about their compliance with computer 
security directives and their plans to implement the new law. 
While a few of the agencies are still in the process of 
producing documentation for us, it is fair to say that, at this 
point, we are neither surprised nor pleased by what we are finding.
    In particular, very few of the responding agencies have had 
any true penetration tests of their computer systems conducted 
and many of these were very limited in nature and scope, 
conducted as part of financial system audits. A few other 
agencies have conducted automated scans of their network to 
search for vulnerabilities in their configurations or operating 
systems which, while worthwhile, do not reveal the real degree 
of potential exploits of their systems. And several other 
agencies reported no scans or penetration tests whatsoever.
    Also, not surprisingly, the tests and scans that have been 
done continue to reveal real computer security problems at 
these agencies:
    A recent internal scan conducted by a Commerce Department 
bureau found more than 5,000 security ``holes,'' or known 
vulnerabilities, in its networks and systems; and that of 1,200 
hosts or workstations scanned, fully 30 percent suffered from 
category ``red'' vulnerabilities, which is the most severe 
rating because of the potential to compromise an entire 
account.
    An internal test of a Medicare contractor 2 years ago 
found, unbelievably, that the network system administrator's 
account--let me repeat that, the network system administrator's 
account--could be easily compromised because his password was 
the same as his user name.
    A recent internal test of a critical HHS operating 
division, using freely available password cracking software, 
resulted in 60 percent of passwords being cracked in under 10 
minutes.
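    [For illustration: a minimal sketch, in Python, of how freely 
available dictionary-based password crackers work in principle. The 
wordlist, the target hash, and the use of unsalted MD5 are 
assumptions made for brevity; real tools support salted formats and 
try millions of candidates per second, which is why weak passwords 
fall within minutes.]

# Minimal dictionary-cracking sketch (illustrative only).
import hashlib

def crack(target_hashes, wordlist):
    # Hash every candidate word and look for matches among the targets.
    table = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}
    return {h: table[h] for h in target_hashes if h in table}

if __name__ == "__main__":
    targets = ["5f4dcc3b5aa765d61d8327deb882cf99"]   # md5("password")
    candidates = ["123456", "letmein", "password", "admin"]
    for digest, plaintext in crack(targets, candidates).items():
        print(f"{digest} -> {plaintext}")
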
    Unfortunately, these findings are not the exception. They 
are just some of the many examples of poor computer security we 
are finding during the course of our review, consistent with 
the broad swath of GAO and inspector general computer security 
audits across the Federal Government over the past 4 or 5 
years.
    I point these out not to embarrass particular agencies--
actually, they should be commended for testing their systems to 
find these problems in the first place--but rather to emphasize 
the need for the Federal Government to begin taking 
cybersecurity much more seriously than we have been. They also 
clearly demonstrate the need to increase our level of testing 
so that problems like these can be found and corrected 
before real damage is done.
    Why is this so important? Because as we will see and hear 
today, the threats and attacks on government systems are 
increasing and the technology used to perpetrate such attacks is 
becoming both more sophisticated and more generally available. 
An expert team from the Department of Energy will demonstrate 
this morning how such attacks are conducted, using freely 
available software tools found on the Internet, and they will 
show us the results from some recent real-world testing the 
team conducted at several DOE sites.
    For its part, GSA, which tracks overall security incidents 
at Federal civilian agencies, will testify today that in the 
year 2000 alone 32 agencies reported 155 known ``root'' 
compromises of their computer systems, the most serious type of 
incident tracked because the unauthorized user was able to gain 
complete control of the server or system compromised.
    GSA also will testify that there were hundreds of incidents 
of network reconnaissance reported by 18 different civilian 
agencies last year, mostly from foreign sources and targeting 
our scientific facilities. And these are only the incidents we 
know about. GSA estimates that only 20 percent of all known 
incidents are reported by the agencies and there likely are 
thousands more that go undetected by the agencies themselves.
    GSA and other experts in this field also estimate that 
nearly all of the incidents reported on both government and 
private systems could have been prevented had the system 
administrators fixed well-known vulnerabilities with existing 
patches or configuration changes.
    While no network can ever be 100 percent secure from the 
most sophisticated and novel attacks, it should not be an 
unreasonable expectation that our sensitive systems would be 
secure from commonly known vulnerabilities.
    Finally, as the title of this hearing suggests, we also 
will focus today on the related issue of critical 
cyberinfrastructure protection, that is, the protection of 
those Federal cybersystems that are truly critical to the 
Nation's security or the public's health and welfare. Not all 
computer systems are created equal, nor do they deserve the 
same level of security attention.
    The Clinton administration realized the need to focus 
attention on the threats posed to our most critical cybersystems 
by terrorists or others intent on doing the Nation harm. 
Accordingly, in May 1998, the President issued a directive 
mandating that Federal agencies identify their critical assets, 
assess the vulnerabilities of those assets, and then implement 
plans to fix the vulnerabilities by May 2003. However, several 
recent reports confirm what the committee's own review has 
found: 3 years later, most agencies are still in the 
process of identifying their critical assets and virtually none 
have made significant progress in assessing and mitigating 
vulnerabilities in those systems or the private sector 
resources on which these Federal systems so often rely. Given 
this state of affairs, it appears that we will not meet this 
deadline unless we dramatically increase our focus on this 
problem in the very near term.
    Clearly, we need to do better both with respect to critical 
cybersystems and to overall computer security throughout the 
Federal Government. I hope that today's hearing will be the 
first in a series on these important and related topics, that 
we can work together on both sides of the aisle and with this 
new administration to improve the security of our Nation and 
the sensitive data held by our Federal Government.
    The Chair recognizes Mr. Strickland for an opening 
statement.
    Mr. Strickland. Mr. Chairman, thank you for holding this 
hearing on this very important question.
    As one of our witnesses will testify today, the existence 
of the Internet ties together a vast array of computer systems 
and networks. For communications, commerce, and the democratic 
exchange of ideas, there are enormous benefits from this full 
and open access; but like any technology that is new, or 
relatively new, it has a serious downside. By tying these 
networks together, the Internet makes them all vulnerable to 
hacking by creative teenagers and others with more nefarious 
purposes such as fraud, identity theft, extortion, disruptions 
of commercial service, and terrorist attacks.
    One system can be used as a platform to attack other 
systems. Without appropriate safeguards, any system can be hit, 
whether it is essential to our defense and economy or it is a 
site that sells goods in an electronic auction; and it appears 
that the attempts to penetrate both government and private 
systems are increasing. We must recognize that no system will 
ever be completely secure, but the question is whether the 
Federal response to safeguard its critical assets is adequate 
and whether it has the resources to respond fully.
    A great deal was done by the previous administration to 
begin to address this enormous task. President Clinton 
established a Commission on Critical Infrastructure Protection 
in July 1996 to look at the scope and the nature of 
vulnerabilities and threats to the Nation's critical 
infrastructures and to recommend a comprehensive national 
policy and implementation plan for protecting them, whether 
public or private.
    The result was the commission's 1997 report, which found no 
immediate crisis threatening the infrastructure, but did find 
that the threat to and vulnerability of the critical 
infrastructure existed. President Clinton responded by issuing 
Presidential Decision Directive 63 in May 1998, which ordered 
the Federal agencies to identify their critical 
infrastructures, take steps to protect them and work 
cooperatively with private companies which control most of the 
infrastructure, to secure those systems also. The target date 
for completion was May of 2003.
    Presidential Directive 63 listed the areas in which the 
infrastructure should be protected, and established the 
position of National Coordinator for Security, Infrastructure 
Protection and Counterterrorism in the National 
Security Council. It set up the Critical Infrastructure 
Assurance Office at the Commerce Department to support the 
national coordinator and the agencies and gave the Federal 
Bureau of Investigation the explicit authority to expand its 
existing cybercrimes unit into the National Infrastructure 
Protection Center.
    Prior to this Presidential directive, President Clinton had 
already established a Federal computer intrusion response 
capability, which is housed at the General Services 
Administration. A national plan for information systems 
protection, the first in the world by a national government, 
was issued in January of 2000. And just before he left office, 
President Clinton nominated 18 members of the National 
Infrastructure Assurance Council, which is to report on the 
actions of private and public bodies to protect their critical 
infrastructures.
    Three industry sectors also have established information 
sharing and analysis centers.
    How far along are the agencies in implementing the 
Presidential directive? Certainly they are ahead of where they 
were 5 years ago when cybersecurity was given little, if any, 
attention, but they are not far enough along and they remain 
vulnerable. As we will hear from the Commerce Department 
witnesses, most agencies still have to finish identifying their 
critical infrastructure assets. They will not meet the 2003 
deadline without significant additional resources.
    Furthermore, no one knows if the structure established by 
the previous administration to enforce Presidential Directive 
63 will be continued by the new administration. The old 
structure was not perfect, and there are numerous overlapping 
and conflicting responsibilities resulting from the differing 
directives in PDD-63 and various other laws. But we must 
request that the Bush administration tread lightly and consider 
whether a completely new structure will delay even longer this 
very important task.
    A question for the Congress to address is whether the 
agencies are getting the money they need to get the job done. 
This body has not been particularly responsive to 
appropriations for computer security, as evidenced by its 
rejection of most of the requests last year for beefing up the 
Energy Department's security, its rejection of the $50 million 
request for an Institute for Information Infrastructure 
Protection, and an almost 50 percent reduction in GSA's request 
for funding for FedCIRC.
    One other concern I must mention, however, is privacy. GSA 
has published a very disturbing newsletter that tells agencies 
to get around Congress' and the public's concerns about being 
tracked by Federal agencies by contracting out the service and 
calling it something else. I have attached that document to my 
testimony and would like it placed in the record.
    Mr. Chairman, these are all issues that I hope this 
subcommittee will address in the next several months. I may 
have additional documents to place in the record and would 
request that the record be held open for that purpose.
    Thank you, Mr. Chairman.
    Mr. Greenwood. The Chair thanks the gentleman. Without 
objection his attachment will be entered into the record.
    [The prepared statement of Hon. Ted Strickland follows:]

Prepared Statement of Hon. Ted Strickland, a Representative in Congress 
                         from the State of Ohio

    Mr. Chairman, thank you for holding this hearing on this very 
important question. The existence of the Internet ties together a vast 
array of computer systems and networks. For communications, commerce 
and the democratic exchange of ideas, there are enormous benefits from 
full and open access to these systems. But, like any technological 
advance, it also has a serious downside. By tying these networks 
together, the Internet makes them all vulnerable to hacking by creative 
teen-agers and others with more nefarious purposes such as: fraud; 
identity theft; extortion; disruptions of commercial service; and 
terrorist attacks. One system can be used as a platform to attack other 
systems. Without appropriate safeguards, any system can be hit, whether 
it is essential to our defense and economy, or it is a site that sells 
goods in an electronic auction. And it appears that the attempts to 
penetrate both government and private systems are increasing.
    We must recognize that no system will ever be completely secure, 
but the question is whether the federal government's response to 
safeguard its critical assets is adequate, and whether it has the 
resources to fully respond. A great deal was done by the previous 
administration to begin to address this enormous task. President 
Clinton established a Commission on Critical Infrastructure Protection 
in July of 1996 to look at the scope and nature of vulnerabilities and 
threats to the nation's critical infrastructures and recommend a 
comprehensive national policy and implementation plan for protecting 
them, whether public or private. The Commission's 1997 report found no 
immediate crisis threatening the infrastructure, but did find that the 
threat to and vulnerability of the critical infrastructure existed. 
President Clinton responded by issuing Presidential Decision Directive 
63 in May of 1998. It ordered federal agencies to identify their 
critical infrastructures, take steps to protect them and work 
cooperatively with private companies--which control most of the 
infrastructure--to secure those systems also. The target date for 
completion was May of 2003.
    PDD 63 listed the areas in which the infrastructures should be 
protected, and established the position of national coordinator for 
security, infrastructure protection and counter-terrorism in the 
National Security Council. It set up the Critical Infrastructure 
Assurance Office at the Commerce Department to support the national 
coordinator and the agencies and gave the Federal Bureau of 
Investigation the explicit authority to expand its existing cyber 
crimes unit into the National Infrastructure Protection Center (NIPC). 
Prior to PDD 63, President Clinton had already established a Federal 
Computer Intrusion Response Capability, or ``Fed CIRC'', which is 
housed at the General Services Administration. A national plan for 
information systems protection--the first in the world by a national 
government--was issued in January of 2000. And just before he left 
office, President Clinton nominated 18 members of the National 
Infrastructure Assurance Council, which is to report on the actions of 
private and public bodies to protect their critical infrastructures. 
Three industry sectors also have established Information Sharing and 
Analysis Centers or ISACs.
    How far along are the agencies in implementing PDD 63? Certainly, 
they are ahead of where they were five years ago when cyber security 
was given little, if any, attention. But they are not far enough along, 
and they remain vulnerable. As we will hear from the Commerce 
Department witnesses, most agencies still have to finish identifying 
their critical infrastructure assets. They will not meet the 2003 
deadline without significant additional resources.
    Furthermore, no one knows if the structure established by the 
previous administration to enforce PDD-63 will be continued in the new 
administration. The old structure was not perfect, and there are 
numerous overlapping and conflicting responsibilities resulting from 
the differing directives in PDD-63 and various laws. But the Bush 
Administration should tread lightly and consider whether a completely 
new structure will delay even longer this very important task.
    A question for the Congress to address is whether the agencies are 
getting the money they need to get the job done. This body has not been 
particularly responsive to appropriations for computer security as 
evidenced by its rejection of most of the request last year for beefing 
up the Energy Department's security; its rejection of NIST's $50 
million request for an Institute for Information Infrastructure 
Protection; and an almost 50 percent reduction of GSA's request for 
funding for Fed CIRC.
    One other concern that I must mention, however, is privacy. GSA has 
published a very disturbing newsletter that tells agencies to get 
around Congress' and the public's concerns about being tracked on the 
Internet by federal agencies by contracting out the surveillance to 
private contractors and calling it ``Management Security Services.'' I 
have attached that document to my testimony and would like it placed 
into the record.
    Mr. Chairman, these are all issues that I hope this Subcommittee 
will address in the next several months. I may have additional 
documents to place in the record and would like to request that the 
record be held open for that purpose.

    Mr. Norwood. Mr. Chairman, I ask unanimous consent that I 
may make a brief opening statement.
    Mr. Greenwood. Mr. Norwood, while an esteemed member of the 
Energy and Commerce Committee, does not have the honor of 
serving on this subcommittee. But we have the honor of his 
presence, and without objection, we will ask that he be offered 
time for an opening statement.
    Mr. Norwood. Thank you very much, Mr. Chairman.
    I am here for two or three reasons this morning, one of 
which is to thank you and to congratulate you and to tell you 
how pleased I am that you are taking the Commerce Committee in 
this direction in terms of the security for our Nation. I thank 
you for that, and I hope, too, you will have many other 
hearings.
    To give you some indication of how important I think this 
subject is, right about now they are teeing off on the first tee 
at Augusta National in my home district this morning, and I 
promise you I would have loved to have been there, but I view 
this as a little more important.
    The other reason I wanted to come this morning is because I 
am very pleased with the witnesses and especially that you have 
the President and CEO of Internet Security Systems here as a 
big player in all of this. ISS has been recognized as the 
worldwide leader, Mr. Chairman, in the intrusion detection and 
vulnerability assessment market. In addition, ISS has become 
the world's largest provider of managed security services, and 
they deliver 24-7 security monitoring and management, just 
sort of something we might be interested in. And I guess I am 
just real tickled that a Georgia company has played such a 
leading role in this extremely important area.
    We have indications that this area of computer security is 
growing very, very rapidly. For example, ISS has been named the 
fifth fastest growing technology company in North America and, 
listen to this, this is based on a 5-year revenue growth of 
45,000 percent. There is some indication in that number that 
tells us all how important this is and must be.
    This achievement demonstrates to me that this is a large 
emerging area that will impact today's Internet economy.
    Now, the government has taken strides--I don't know whether 
to say great or good--but at least strides in the past few 
years. However, as you know, much more is needed. Funding must 
be increased by a substantial amount if we take this seriously. 
As industry has considerable resources and expertise, a continued 
partnership with industry on this subject is going to be very 
critical; and it is my understanding that ISS has played a 
leadership role in working and partnering with the government 
on security issues. And with any private company you do that 
with some risk, but I think and hope this relationship will 
continue, not just because it is good for a Georgia company, 
but because it is so very needed for the national security of 
this Nation. And with that, Mr. Chairman, I will submit the 
rest for the record and thank you for your courtesy and 
kindness this morning.
    Mr. Greenwood. The Chair thanks the gentleman. Without 
objection, the rest of his testimony, as well as the testimony 
of all other members who may submit them, will be entered into 
the record. Also a member of the committee, but not a member of 
the Oversight and Investigation Subcommittee, is Mr. Davis of 
Virginia, and we are happy to have him here as well.
    Mr. Davis. Thank you very much. Let me--Mr. Chairman, I ask 
unanimous consent that I be able to make some comments.
    Mr. Greenwood. Without objection.
    Mr. Davis. Thanks for allowing me to participate in this 
hearing today. I want to compliment you and your staff on the 
diligent work on this pressing issue. It is vitally important 
that we in Congress recognize and understand the complexities 
we face in pursuing the protection of our Nation's critical 
infrastructure, the systemic activities that are essential to the minimum 
operation of our economy and government.
    Although 95 percent of our critical infrastructure is owned 
and operated by the private sector, as our Nation's front line, 
the Federal Government plays an essential role in sharing 
information about cyberthreats against our assets. But the 
evidence demonstrates that the Federal Government is 
dangerously behind the curve in getting its own house in order. 
Simply put, we are losing time. Since 1997, GAO has listed 
information security as a governmentwide high-risk area and has 
conducted numerous reviews which have continuously sounded the 
alarm about widespread weakness and vulnerabilities in the 
Federal Government's information systems.
    During March of last year, as part of a review requested by 
the Subcommittee on Government Management, Information, and 
Technology, of which I was a member, GAO found that 22 of 
the largest Federal agencies were providing inadequate 
protection to critical Federal operations and assets from 
computer-based attacks. They were able to identify systemic 
weaknesses in the information security practice of the 
Department of Defense, the National Aeronautics and Space 
Administration, the Department of State, and the Department of 
Veterans Affairs; and then, as many of you know, in September 
of 2000, the subcommittee gave the Federal Government an 
overall D-minus on its computer security practices report card.
    Just as the Romans built the greatest network of roads at 
the height of the Roman Empire and the barbarians used these 
same networks to destroy the Romans, so we may face the same 
vulnerabilities with the advances we have made in technology 
and the interconnectivity of our networks. There is no doubt 
that nations are in the process of developing tools to 
penetrate and cripple these networks.
    At the same time, the outside world is but one source of 
the threat to government information systems. Much of the 
threat comes from within the government. A key challenge to 
making the Federal Government more secure lies in the mindset 
of many Federal agencies vis-a-vis the importance of 
information security to their operations and assets.
    For many, implementing best practices for controlling and 
protecting information resources is just a low priority. The 
question before us then is, what do we do about it? What steps 
should Congress take to change the direction and reduce the 
vulnerability of Federal operations and assets?
    As one who has studied the issue for over a year, I come to 
the conclusion there are two necessary components to achieving 
the goal. First, I strongly believe there is a dire need for a 
strong central leader who can coordinate implementation of 
information security best practices across government. 
Currently, these responsibilities are shared by several Federal 
agencies, some of whom are before us today, which makes the 
coordination and uniformity of information security practices a 
formidable obstacle.
    The government information security community needs an 
advocate who can ensure that information security becomes an 
integrated component of information systems. Let me say I agree 
with those who assert that funding for implementing information 
security measures is inadequate. I submit that having a Federal 
CIO with this responsibility, as I put forth in legislation, 
who can champion the agencies' security needs, would be an 
effective voice in this respect.
    Second, we need to encourage information sharing between 
the private sector and government. As many of our witnesses 
would likely agree, the ownership dynamic of our Nation's 
critical assets makes crucial the development of thriving 
public-private partnerships for this purpose, but with the 
current Federal computer systems it is, in my mind, entirely 
reasonable that many in the private sector are wary of entering 
into these partnerships. At the same time, current law is 
retarding the implementation of the National Infrastructure 
Assurance Plan. It is for this reason we introduced legislation 
last year that gives critical infrastructure industries the 
assurances they need to confidently share information with the 
Federal Government.
    Our measure would provide a limited FOIA exemption, civil 
litigation protection for shared information, and an antitrust 
exemption for information shared within an information sharing 
and analysis center. These three protections were cited by the past 
administration as necessary legislative remedies. This 
legislation would enable the ISACs to move forward without fear 
from industry, so that government and industry could enjoy the 
mutually cooperative partnership called for in PDD-63.
    I ask unanimous consent the rest of my statement be put in 
the record, and I appreciate the opportunity to be here today.
    Mr. Greenwood. Without objection, the gentleman's statement 
in its entirety will be placed in the record.
    [The prepared statement of Hon. Tom Davis follows:]

Prepared Statement of Hon. Tom Davis, a Representative in Congress from 
                         the State of Virginia

    Mr. Chairman, thank you very much for allowing me to participate 
today in this hearing. I want to compliment you and your staff for your 
diligent work on this pressing issue.
    It is vitally important that we in Congress recognize and 
understand the complexities we face in pursuing the protection of our 
nation's critical infrastructure--those systemic activities that are 
essential to the minimum operations of our economy and government. 
Although 95% of our critical infrastructure is owned and operated by 
the private sector, as our nation's front line, the Federal Government 
plays an essential role in sharing information about cyber threats 
against our assets.
    But the evidence demonstrates that the Federal Government is 
dangerously behind the curve in getting its house in order. Simply put, 
we are losing time. Since 1997, GAO has listed information security as 
a governmentwide high risk area and has conducted numerous reviews 
which have continuously sounded the alarm about widespread weaknesses 
and vulnerabilities in the Federal Government's information systems. 
During March of last year, as part of a review requested by the 
Subcommittee on Government Management, Information, and Technology, of 
which I was a Member, GAO found that 22 of the largest federal agencies 
were providing inadequate protection for critical federal operations 
and assets from computer-based attacks. They were able to identify 
systemic weaknesses in the information security practices of the 
Department of Defense, the National Aeronautics and Space 
Administration, the Department of State, and the Department of Veterans 
Affairs. And then as many of you know, in September 2000, the 
Subcommittee gave the Federal Government an overall D- on its computer 
security practices report card.
    Just as the Romans built the greatest network of roads at the 
height of the Roman Empire and the Barbarians later used this same 
network to destroy the Romans, so may we face the same vulnerabilities 
with the advances we have made in technology and the interconnectivity 
of our networks. There is no doubt that nations are in the process of 
developing tools to penetrate and cripple these networks.
    At the same time, the outside world is but one source of the threat 
to government information systems. Much of the threat comes from within 
the government. A key challenge to making the Federal Government more 
secure lies in the mind set of many federal agencies vis-a-vis the 
importance of information security to their operations and assets. For 
many, implementing best practices for controlling and protecting 
information resources is a low priority.
    The question before us then is what do we do about it? What steps 
should Congress take to change the direction and reduce the 
vulnerability of federal operations and assets?
    As one who has studied these issues for over a year now, I have 
come to the conclusion that there are two necessary components to 
achieving this goal. First, I strongly believe that there is dire need 
for a strong central leader who can coordinate the implementation of 
information security best practices across government. Currently, these 
responsibilities are shared by several federal agencies (some of whom 
are before us today), which makes the coordination and uniformity of 
information security practices a formidable obstacle. The government 
information security community needs an advocate who can ensure that 
information security becomes an integrated component of information 
systems. Let me also say that I agree with those who assert that 
funding for implementing information security measures is inadequate, 
and I submit that having a Federal CIO with this responsibility as I 
have put forth in legislation, who can champion the agencies' security 
needs, would be an effective voice in this respect.
    Second, we need to encourage information sharing between the 
private sector and government. As many of our witnesses would likely 
agree, the ownership dynamic of our nation's critical assets makes 
crucial the development of thriving public/private partnerships for 
this purpose. Yet with the current state of Federal computer systems, 
it is in my mind entirely reasonable that many in the private sector 
are wary of entering into those partnerships. At the same time, current 
law is retarding the implementation of the National Infrastructure 
Assurance Plan. It is for this reason that I introduced legislation 
last year that gives critical infrastructure industries the assurances 
they need in order to confidently share information with the Federal 
Government. My measure would provide a limited FOIA exemption, civil 
litigation protection for shared information, and an antitrust 
exemption for information shared within an Information Sharing and 
Analysis Center (ISAC). These three protections were cited by the past 
Administration as necessary legislative remedies in Version 1.0 of the 
National Plan for Information Systems Protection and PDD-63. This 
legislation would enable the ISACs to move forward without fear from 
industry so that government and industry may enjoy the mutually 
cooperative partnership called for in PDD-63.
    As Chairman of the House Government Reform Subcommittee on 
Technology and Procurement Policy, I will be continuing to explore this 
matter, along with Chairman Steve Horn of the Government Efficiency, 
Financial Management, and Intergovernmental Affairs Subcommittee. I am 
grateful that you, Mr. Chairman, have also taken an active approach to 
addressing this problem today, and I look forward to working with you 
to make the Federal Government a model for risk management and the 
protection of information systems. As well, I am pleased to have the 
opportunity to hear the testimony of our distinguished panelists and 
appreciate their being here. I want to particularly welcome here today, 
Mr. Tom Noonan, the President and CEO of Internet Security Systems, 
which is headquartered in Atlanta but has an important presence in my 
district. I look forward to hearing from all of you.

    Mr. Greenwood. The Chair recognizes the chairman of the 
full committee, the gentleman from Louisiana, Mr. Tauzin, for 
an opening statement.
    Chairman Tauzin. Thank you, Mr. Chairman, for holding this 
important hearing on the inadequacy of the Federal efforts to 
protect our Nation's critical cyberinfrastructure and the vast 
amount of sensitive data that is stored on Federal computer 
systems.
    I really don't think that many people realize the extent to 
which the Federal civilian agencies collect and store so much 
sensitive information, whether it is medical, financial or 
other personal information on American citizens, confidential, 
proprietary data from America's corporations, cutting-edge 
scientific research, or whether it is export controlled 
information or even sensitive law enforcement information. 
There are tons of it, all subject to hacking and to 
compromise.
    We learned, for example, in the GAO report that even the 
IRS had allowed a cookie on its Web site. Nor do most people 
realize the extent to which we as a Nation have become so 
dependent on these computer systems to assure our national 
economic security, and I think it would come as quite a 
surprise for most Americans to learn that these Federal 
agencies are the target of attacks by foreign and domestic 
sources bent upon espionage and other very malicious actions.
    Faced with this kind of serious challenge, the Federal 
Government has not performed well. This committee's oversight 
continues to reveal troubling computer security deficiencies 
across the Federal Government, deficiencies that place critical 
services and sensitive data at significant risk of compromise. 
Here, the connection between the security and the privacy of 
American citizens cannot be ignored.
    A recent inspector general's audit of the Health Care 
Financing Administration and several of its Medicare 
contractors, which the committee is releasing publicly today, 
found numerous system control weaknesses that permitted 
unauthorized access to sensitive beneficiary information. This 
is sensitive health care information about Americans that we 
discovered could be easily compromised in the Federal HCFA 
systems; and while we don't know today whether the information 
was in fact compromised, we intend to find out whether that has 
in fact happened. And I can assure you, in a private 
conversation I had with Secretary Thompson yesterday, he 
intends to see what is going on at HCFA in this critical area 
and he intends to get it fixed. This is an issue of 
enormous importance to Americans and one that this committee, I 
hope, Mr. Chairman, will continue to take a very close and 
diligent look at.
    The Clinton administration talked a great deal about 
cybersecurity and critical infrastructure protection over the 
past several years, holding Presidential summits and issuing 
Presidential directives. The administration, for example, said 
the Federal Government would serve as a model of good security 
practice that the private sector, which controls much of the 
Nation's infrastructure, might follow and emulate. 
Despite all the rhetoric and the photo ops and the paper 
exercises, the bad news continues to roll in with every GAO 
report, every inspector general's audit, with every 
congressional oversight hearing, and with each day's newspaper 
accounts. With each real-world test of government computer 
systems security, no matter how recent, we continue to learn 
how bad the situation is.
    For example, two reports released this year show how little 
progress Federal agencies have made in protecting critical 
cyberassets in the 3 years since the President issued his PDD-
63. Essentially, we are still in the process of identifying the 
critical assets and their interdependencies, which raises the 
question, how can we adequately protect our most critical 
cybersystems when we haven't yet identified them all?
    This is not to say that there have not been improvements in 
the area, and certainly there have been some, particularly at 
those agencies that have felt the sting of public 
embarrassment, but overall we are barely treading water; and 
unless we get serious about the effort, we will never keep up 
with the rapid advances of technology in this area which 
continue to reveal new ways to attack cybersystems.
    The technology to get into our systems is advancing much 
more rapidly than the deployment of security to protect them, 
and in this increasingly interconnected world, we are either 
going to prioritize our resources better to meet this 
challenge, something that today Congress has not yet forced the 
agencies to do, or we are going to find ourselves in deep, deep 
trouble, and Americans are going to wake up angrier than you 
can possibly imagine to learn that in many cases their 
personal, sensitive data, which they shared not voluntarily, 
but involuntarily with the Federal Government, has been 
compromised and perhaps will be used in ways that they find 
very offensive.
    This committee has both the responsibility and the 
authority to conduct oversight as to whether the Nation's 
critical computer systems are being adequately protected, 
and we intend to do that. And I want to thank you, Mr. 
Chairman, for taking this job and this assignment so seriously.
    This is an extremely important hearing. If Americans are 
concerned about privacy and security on the Internet as they do 
commerce voluntarily, let me assure you their concern, as they 
share sensitive information with government agencies 
involuntarily, is even deeper, and our obligations here are 
much stronger.
    Thank you for taking this seriously, and I yield back the 
balance of my time.
    Mr. Greenwood. Thanks to the chairman for his statement.
    [Additional statement submitted for the record follows:]

Prepared Statement of Diana DeGette, a Representative in Congress from 
                         the State of Colorado

    I want to thank the Chair for holding this important hearing, and I 
want to thank our witnesses for being here today.
    The positive aspects of advanced technology in communications go 
without saying. Enhanced inter-connectivity brings a whole new level of 
efficiency and speed to our systems.
    The downside is that this same inter-connectivity can create 
vulnerability. I think a good analogy is when the gene pool of a 
certain species loses its diversity, a certain strain of virus can come 
in and wipe out the whole population because they all share the same 
vulnerabilities.
    It is certainly eye opening to learn, as I did when preparing for 
this hearing, that the number of serious security breaches of federal 
systems is on the rise. Most unnerving of all is the knowledge that 
there were over 150 incidents of the utmost severity last year alone 
when an unauthorized user was able to gain complete control of a system 
within 32 federal civilian agencies.
    The Government Information Security Reform Act, passed last year, 
appears to be a step in the right direction to evaluate government 
computer system weaknesses and then address the problems that exist. I 
expect that this subcommittee will be among the first to gain the 
results of the independent tests that are due to be completed by 
October of this year and again in 2002.
    It is reassuring to learn that action has already been taken to 
evaluate the government's system weaknesses. I think the Clinton 
Administration deserves great credit for recognizing the growing 
threats to our nation's security within this area, and taking steps to 
address the risk that poor federal computer security poses to our 
country. The Executive Order in 1996 that established the President's 
Commission on Critical Infrastructure Protection (PCCIP) was a 
tremendous step in officially recognizing this growing problem and 
bringing the public and private sector together to address it.
    In 1998, a Presidential Directive was issued to have federal 
officials create and implement a strategy for protecting the 
nation's critical infrastructures, which was another crucial step for 
the security of our country.
    I am glad to learn that the new Administration is taking this issue 
seriously and am anxious to learn more about its plans to continue this 
important work and who will be in charge of coordinating this effort 
within each agency.
    Thanks again to the witnesses for coming, and I look forward to 
hearing the testimony.

    Mr. Greenwood. If there are no more opening statements by 
members, I would like to turn to our cybersecurity penetration 
demonstration and welcome Mr. Glenn Podonsky, Director of the 
Department of Energy's Office of Independent Oversight and 
Performance Assurance, and his excellent team of cyberexperts 
to this hearing. And I thank you for putting together this 
demonstration for the committee.
    Mr. Podonsky, although you and your team technically are 
not witnesses today and are not testifying before the 
subcommittee, it is our general practice to swear in all 
persons who appear before the subcommittee; and if you and your 
team have no objection, I would like to do that now. I ask that 
you rise and raise your right hand.
    Do any of you have any objections to testifying under oath?
    Seeing none, the Chair then advises you that under the 
rules of the House and the rules of the committee, you are 
entitled to be advised by counsel. Do you desire to be advised 
by counsel during your testimony?
    Mr. Podonsky. No.
    Ms. Matthews. No.
    Mr. Bellone. No.
    Mr. Huston. No.
    Mr. Peterson. No.
    Mr. Greenwood. In that case, would you please rise and 
raise your right hand, as you already have.
    [Witnesses sworn.]
    Mr. Greenwood. You may be seated and we recognize you, Mr. 
Podonsky, and look forward to your demonstration.

TESTIMONY OF GLENN S. PODONSKY, DIRECTOR, OFFICE OF INDEPENDENT 
  OVERSIGHT AND PERFORMANCE ASSURANCE, ACCOMPANIED BY, JASON 
BELLONE, FORMER MEMBER OF THE COMPUTER ANALYSIS RESPONSE TEAM, 
FEDERAL BUREAU OF INVESTIGATION; KAREN MATTHEWS, FORMERLY WITH 
  COMPUTER FORENSICS LABORATORY, U.S. DEPARTMENT OF DEFENSE; 
    BRENT HUSTON, AUTHOR OF BOOK ON HACKPROOFING; AND BRAD 
   PETERSON, DIRECTOR, OFFICE OF CYBER SECURITY AND SPECIAL 
               REVIEWS, U.S. DEPARTMENT OF ENERGY

    Mr. Podonsky. Thank you, Mr. Chairman. We appreciate the 
opportunity to appear before this subcommittee for the sole 
purpose of demonstrating the cyberpenetration techniques 
employed by my office. As you are aware, my office provides the 
Secretary of Energy with an independent view of the 
effectiveness of Department policies, programs and procedures 
in the areas of cybersecurity, safeguard security and emergency 
management.
    Today, my staff will provide a brief demonstration of our 
cybersecurity penetration capabilities. With me for the 
demonstration today are Mr. Jason Bellone, formerly with the 
FBI's computer analysis response team; Ms. Karen Matthews, 
formerly with the Department of Defense computer forensics 
laboratory; Mr. Brent Huston, author of a soon-to-be-published 
book on hack-proofing your e-commerce Web site; and Mr. Brad 
Peterson, my Director of the Office of Cybersecurity.
    Our cybersecurity office maintains a continuous program for 
assessing Internet security to identify vulnerabilities that 
hackers and others could exploit. As part of the program, we 
continuously attempt to penetrate the DOE cybercommunity. We 
do this by using off-the-shelf software and hacking 
programs that are available to virtually anybody. Using these 
tools, we have been successful in identifying numerous 
vulnerabilities in DOE cybersecurity programs, and I am pleased 
to report that those have since been largely corrected 
by the Department.
    We will take a few minutes to demonstrate the results of 
some actual inspections that have taken place over the last 6 
months in order to show you the hacking techniques that we use 
and others employ. After the demonstration, we would be happy 
to respond to questions about the demonstration.
    Let me now introduce Mr. Jason Bellone to lead the 
demonstration.
    Mr. Bellone. Thank you, Mr. Podonsky.
    Mr. Greenwood. Why doesn't it surprise me that it is the 
youngest member of the team?
    Mr. Bellone. We are very proud to present our cybersecurity 
laboratory to you today. Although it is small in presence here, 
this laboratory is a comprehensive suite of headquarters, 
regional and mobile assets that we use, in effect, to attack 
and subsequently performance-assess the Department's 
information systems. It is our goal here to provide as much 
realism as possible to illustrate our cybersecurity penetration 
capabilities. The demonstration should give you an inside look 
at our process, and at the same time, I think you will see that 
the demonstrations will demystify the attacker process.
    Let me highlight two points before I begin. First, each 
demonstration you will see derives from a real penetration test 
conducted against government sites within the past 6 months. 
Sites, however, will not be mentioned by name.
    Second, all tools demonstrated are real, meaning employed 
as utilities by the attacker community. Some of these products 
are commercial. All are available for download from the Internet 
and most are free. They, too, will not be mentioned by name.
    When we assess, we don't use rubber bullets and paint 
pellets. To the greatest extent possible, we use the same 
process, tactics and tools as an attacker. This process I refer 
to here is the attacker's modus operandi; hence, it is our 
modus operandi. We will follow this process throughout the 
demo, about one level of detail away from teaching you how to 
attack a system. So don't try this at home.
    Without further delay, let's begin the demonstration.
    We will start with footprinting. Footprinting is a 50,000-
foot view, a snapshot, a bird's-eye view of your targets. It is 
anonymous. It is unintrusive. It is generally undetected. It is 
basically reconnaissance to gather a lay of the land. The 
ultimate goal is the who, the what and the where of the target.
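    [For illustration: a minimal footprinting sketch in Python, using 
only the standard library, that gathers the "who" (public WHOIS 
registration data) and the "where" (public IP addresses) of a target. 
The target domain and WHOIS server shown are assumptions for the 
example, not systems discussed in the testimony.]

# Footprinting sketch (illustrative only): DNS and WHOIS lookups.
import socket

def addresses(host):
    # "Where": resolve the target name to its public IP addresses.
    return sorted({info[4][0] for info in socket.getaddrinfo(host, None)})

def whois(domain, server="whois.verisign-grs.com"):
    # "Who": query the public WHOIS service on TCP port 43.
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall(domain.encode() + b"\r\n")
        data = b""
        while chunk := s.recv(4096):
            data += chunk
    return data.decode(errors="ignore")

if __name__ == "__main__":
    target = "example.com"                # hypothetical target
    print("Addresses:", addresses(target))
    print(whois(target)[:400])            # registrant, name servers
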
    I will turn your attention to the demonstration screen. The 
following demo will illustrate a utility, again freely 
available on the Internet, that will graphically depict the 
who, the what, the where of the target. Although this operation 
was conducted from Maryland, the source of our efforts appears 
to come from Tampa, Florida. I will refer you to line one of 
the table.
    The table represents the path that our data flowed from 
the launch point, which was redirected from Florida to Maryland. 
In this case and only this case, I will tell you that we are 
looking at the Department of Energy's Web site for the purpose 
of illustration. The analysis section indicates the type of 
system of the target. This is the basic idea of what we are 
looking at, so what we have here is the who, the what, the 
where data collected. We are ready to move on to the second 
step of the process, which is scanning.
    The scanning process enables us to generate our target, our 
target list, and develop an attack plan. The scanning operation 
employs hundreds to thousands of agents acting as virtual 
detectives checking the target systems for specific 
vulnerabilities. Each virtual detective reports its findings 
back to the attacker. The probing process emulates hostile 
operations and searches for known vulnerabilities.
    The database of vulnerabilities and exploits changes daily. 
At the present time we test for over 900 vulnerabilities. 
Importantly, the scanning operation can be conducted in what 
we call ``low and slow'' mode, which means covertly, without 
detection. The end result is a vulnerability profile or, 
ultimately, an attack plan.
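
    [A minimal illustrative sketch, included for the record and 
not part of the live demonstration: the Python example below, 
using only the standard library and a placeholder address from 
the reserved TEST-NET range, shows the idea behind the scanning 
step; the commercial and free scanners used in the assessments 
test far more than open ports and are not reproduced here.]

# Scanning sketch: a simple TCP "connect" probe of a few well-known
# service ports, the first step toward building a target list.
import socket

TARGET = "192.0.2.10"                         # placeholder address
PORTS = [21, 22, 23, 25, 80, 139, 443, 445]   # common services

def scan(host, ports):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            # A short timeout keeps the probe quick; a "low and slow"
            # scan would instead space probes out to avoid detection.
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print("open ports:", scan(TARGET, PORTS))
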
    The next demonstration will show you exactly what the 
digital detectives delivered to us from an assessment we 
conducted a few months ago. I will again turn your attention to 
the demonstration screen.
    These results represent the output of a very robust 
scanning effort directed at one of our sites. This was a source 
of our attack plan.
    The significance of what you are looking at is this: The 
red icon represents the presence of a high-risk vulnerability, 
meaning it is probable for the vulnerability to result in 
system compromise. The yellow represents a medium-risk 
vulnerability that equates to a medium probability of system 
compromise. Let me drill down one level of detail to help you 
understand what you look at.
    If I click the red icon, the high vulnerability icon, I can 
drill down to understand the exact nature of the finding. The 
detail supports a focused attack and later a corrective action.
    The attack name is clear. It reads NBTDIC. More 
importantly, the description reads as follows: ``a share that 
requires only a password may be compromised using a dictionary 
file.'' Put simply, it details exactly what we need to do to 
focus our attack.
    Our third example is a separate product that may serve in a 
similar capacity. In contrast to the commercial product we 
demoed, this is a free utility. You will notice the 
presentation is similar, red equals high risk, yellow equals 
medium risk.
    Something interesting to note here: In the upper left-hand 
corner is a summary of the findings. It is quantitative, tells 
us how many targets, how many vulnerabilities, how many 
warnings. Let me point out, there have been instances where the 
scan results did not yield significant vulnerabilities and, 
hence, the process can stop there. So each step is requisite 
for the next step, and with that we are on to enumeration.
    As the scan results identify specific vulnerabilities for 
specific targets, we use this data to concentrate our efforts 
for more intrusive probing. The goal is to refine the attack 
plan with information about user accounts, file-sharing and 
system characteristics.
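
    [A minimal illustrative sketch, included for the record and 
not part of the live demonstration: the Python example below, 
again using only the standard library and a placeholder 
address, shows one simple form of enumeration, reading the 
greeting ``banner'' of a service reported open by the scan to 
learn the software name and version.]

# Enumeration sketch: connect to a port the scan reported open and
# read whatever greeting the service volunteers about itself.
import socket

TARGET = "192.0.2.10"   # placeholder address
PORT = 21               # e.g., a service the scan reported open

def grab_banner(host, port):
    with socket.create_connection((host, port), timeout=2) as s:
        s.settimeout(2)
        try:
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return "(no banner sent)"

if __name__ == "__main__":
    print(grab_banner(TARGET, PORT))
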
    The next demo will show you how to use the scan data to 
concentrate efforts and probe for more valuable information. I 
will again turn your attention to the demonstration screen.
    This utility enables us to probe for specific information 
relating to the scan results. The list has several possible 
targets. You can see that there are over 20 targets at the 
moment. So, next, although over 20 exist, we are going to focus 
on one. Our game plan for the attack, then, is to gain access 
to a user's C drive--to remotely gain connectivity to a user's 
C drive over the Internet.
    So with footprinting, scanning, and enumeration data in 
hand, we are ready to gain access to the system. The demo you 
are about to see is a playback of the exact same exploit that 
we used in the course of our assessment; the process, the tools 
and the data, including the password, are directly from the 
assessment. The demo is technical, so I am going to narrate as 
we go through it, so you will understand what you are about to 
see. Keep in mind, our goal here is to run an attack on target 
X to gain access to the user's C drive. We will begin the demo.
    This is Step 1. This is collecting basic configuration 
data. We use this data to enter into our utility, basically an 
attack utility, that will be used to crack the password. You 
will see that it is iterating through special characters, 
through letters, through numbers and so forth. It goes one 
character at a time; and for the purpose of this demo, we did 
select out of our set a four-character password. Again, it is 
the original password from the site.
    We have I, and we have A--still moving through, lasts only 
a few seconds--I-A-E, and you can see it is almost there.
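
    [A minimal illustrative sketch, included for the record and 
not part of the live demonstration: the Python example below 
shows the character-by-character iteration just narrated, using 
a made-up four-character password and an ordinary hash rather 
than the site's actual password or password format.]

# Brute-force sketch: try every combination of a small character set,
# one length at a time, until a candidate matches the stored hash.
import hashlib
import itertools
import string

ALPHABET = string.ascii_uppercase + string.digits + "!@#$"
TARGET_HASH = hashlib.sha256(b"XK4$").hexdigest()   # made-up password

def brute_force(target_hash, max_len=4):
    for length in range(1, max_len + 1):
        for combo in itertools.product(ALPHABET, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

if __name__ == "__main__":
    # A four-character password over this alphabet falls in seconds.
    print("recovered password:", brute_force(TARGET_HASH))
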
    We now have the password in hand, so we move on to step three. 
Step three is to use that password to connect across the 
Internet to the user's drive. We enter the password and, voila, 
across the Internet, we have total access to this person's hard 
drive.
    At this point, we can upload anything we want or we can 
download anything we want. In particular, here, we are going to 
load something called a keystroke logger, and we are going to 
load a sniffer. We could equally download the person's 
password file at this point. So for step five we will move on 
to escalating privileges.
    As you could see from the demo, we gained unrestricted 
access to a user's hard drive, but an attacker would never stop 
here, nor do we. The idea now is to discover how far we can 
go: can we propagate throughout the network?
    What you will see next is, we will crack a password. So 
with this foothold, we have downloaded the password file. The 
password cracking demonstration uses a password file captured 
from exploits similar to the ones we have demonstrated. The 
demo will highlight the fact that cracking passwords is simply 
a matter of time.
    The tool you are about to see is designed to serve as a 
password auditing tool; that is, it is used to check a 
department's password policy: eight characters, nine characters 
and so forth. It is publicly available and widely used in the 
information security community. Needless to say, it can have 
alternative uses for a malicious user.
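
    [A minimal illustrative sketch, included for the record and 
not part of the live demonstration: the Python example below 
shows the dictionary-audit idea described above, assuming a 
hypothetical word-list file named ``wordlist.txt''; the user 
names, passwords and hashes are invented, not captured 
material, and the public auditing tool itself is not reproduced 
here.]

# Dictionary-audit sketch: hash every word in a word list and compare
# it against stored password hashes; any match fails the audit.
import hashlib

# username -> stored password hash (invented entries for illustration)
PASSWORD_FILE = {
    "administrator": hashlib.sha256(b"changeme1").hexdigest(),
    "jdoe":          hashlib.sha256(b"sunshine").hexdigest(),
}

def audit(password_file, wordlist_path):
    cracked = {}
    with open(wordlist_path, encoding="utf-8") as wordlist:
        for word in (line.strip() for line in wordlist):
            digest = hashlib.sha256(word.encode()).hexdigest()
            for user, stored in password_file.items():
                if digest == stored and user not in cracked:
                    cracked[user] = word   # this account fails the audit
    return cracked

if __name__ == "__main__":
    print(audit(PASSWORD_FILE, "wordlist.txt"))
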
    Before we begin the demo, let me explain what you will be 
looking at. In the first column, that is the user name. When 
you log in, generally you enter a user name and a password. So 
that would be the user name, and the columns that are empty, 
those will be where passwords appear. It is empty at this 
point. In the blink of an eye you will start to see passwords 
appear. In the far column, that's the encrypted representation 
of the password. Let's start to crack.
    We saw, in the blink of an eye, 25,000 words in the English 
dictionary and about 5 million tries occur in a second. Less 
than a minute will pass for us to have the super-user password. 
We talk about root, super-user, administrator; bottom line, 
complete and utter control over the system. We will let it go 
for a moment. It is very far along. You see administrator, and 
you see it says MOTOROL. We are about two characters away from 
it completing. We find that we get to this point in under a 
minute most of the time.
    You also notice that it is telling us that they are not 
under eight characters. However, this is still not compliant 
with policy. So you can use this to support policy programs 
that may exist for a department.
    So it is completed. We now have super-user privileges. We 
will move on to the next demonstration.
    You recall that we were able to upload both a key stroke 
logger and a sniffer to the target's hard drive. Commonly, we 
install the logger to capture the user's morning log-in 
session. When you come in in the morning, most likely you check 
your e-mail and so forth. The idea for what we do is, we load 
it that night so that we can catch what you do in the morning.
    I refer you to the demo screen for a large picture, fairly 
hard to decipher, and that is because every key--escape, 
control, delete--is captured. It also runs in stealth mode, 
unbeknownst to the user, very hard to detect, and all of the 
results go to a text file which the attacker can bring to their 
system. Embedded between all of those escape keys and tab keys 
actually are passwords.
    Of course, an attacker doesn't stop here either, nor should 
we, so we will go on to pilfering.
    A sniffer is a stealth utility that will act as a wiretap, 
a wiretap that will listen to traffic traversing throughout the 
network. The idea of pilfering is to turn a compromised target 
into a listening device to capture not only what you are 
typing, but also what your peers are doing. Clear text 
passwords, e-mail correspondence, documents are all routed to 
the original recipient and, at the same time, rerouted to the 
attacker. In many cases, we have used this to propagate our 
control to other areas of the network. This cycle of small 
footholds, escalating privileges and pilfering enables us, or 
an attacker, to gain more and more control of the network.
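
    [A minimal illustrative sketch, included for the record and 
not part of the live demonstration: the Python example below 
assumes the third-party ``scapy'' package, administrative 
privileges, and a network the operator is authorized to 
monitor. It shows the wiretap idea in its simplest form; the 
tool used in the assessment is not reproduced here.]

# Passive capture sketch: traffic still reaches its intended recipient,
# but a one-line summary of each packet is also written to a local log.
from scapy.all import sniff

def log_packet(packet):
    with open("capture.log", "a", encoding="utf-8") as log:
        log.write(packet.summary() + "\n")

if __name__ == "__main__":
    # Capture a small, fixed number of packets and then stop.
    sniff(prn=log_packet, count=25)
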
    The next short demonstration will show how a freely 
available tool can turn your machine into a secret listening 
device. Let me set up what you're looking at here.
    I mentioned wiretap as an example. This is one snippet, 1 
second from a wiretap, so to speak; and the purpose of this is 
to highlight that we indeed have user name and password. So we 
have gone from an exploit on a local machine to finding a way 
further on the network to other machines now. That is the point 
of pilfering.
    We move on to covering tracks. Covering tracks is hacker 
101. Hackers don't want to get caught. We do not employ this 
tactic as part of our process so that we can work with the 
sites to engage in what we call ``post-incident analysis.'' 
Simply put, we leave our traces to enable the site and us to 
collaborate to understand the nature of the attack.
    The following demo will demonstrate yet another freely 
available tool, erasing the traces of an attack with a few 
button clicks. What is important to recognize here is that 
only the traces of the attacking activity are deleted. So a 
systems administrator would never be aware of what happened, 
because all of the other logs, those from the normal conduct of 
the computer, would still be there. A button click, and the 
traces are gone. Let's move on to 
back doors.
    For the following demo I will submit this machine. Karen 
will do the heavy lifting here. Although this machine is 
separated by 20 feet of cable, we have executed the exact same 
exploitation with hundreds to thousands of miles of separation 
between our lab and the site. The message is clear: ownership 
and control of a compromised resource can belong, to the 
fullest extent possible, to the attacker, in many cases more 
than to the user. The goal is to make 
a key that only you can use to enter, create accounts, plant 
remote control services and to install Trojans. I will now 
start the demo.
    Let me set the scene again here. Imagine yourself working 
in front of this screen, doing normal business work wherever--
anywhere in the world, for that matter, okay? We have exploited 
this system unbeknownst to you, and we are now going to take 
over control by doing things like changing colors. So you are sitting 
there and this is happening to you, okay?
    The other thing we are going to do is, we are going to 
eject the CD on you--again, from 3,000, 2,000, 1,000 miles 
away--and the other thing we might do, just to harass you a 
little more, is to hide icons. There we go. The point being--
these are visual examples; ultimately, it is complete control.
    A popular news organization reported about this tool, and 
let me quote: ``He or she can access your files, monitor your 
key strokes, move your mouse around the screen. If you have a 
Web cam, they can watch what you are doing. If you have a 
microphone, they can listen to you. It is complete power.''
    This concludes the demonstration portion of our testimony. 
In closing, I will highlight the end product of this 
capability.
    The essence of our capability is our final product. Our 
product encapsulates every element of what you have just seen--
process, tactics, tools, every vulnerability and exploit. Along 
with meticulous note-taking and recordkeeping, we deliver all 
of this information to the site in a user friendly, Web-based 
CD-ROM. So anything and everything that is collected, yellow 
sticky and so forth, is given to the site for corrective 
action. I know you are also familiar with our paper product, 
which combines the technical elements with the policy, program 
and procedural analysis.
    Thank you.
    I will now offer our technical team for technical 
questions, as well as Mr. Podonsky and Mr. Peterson, who can 
entertain questions about our program.
    Mr. Greenwood. Thank you. Now, I know why I can't open my 
e-mail in the morning.
    I don't know if you are able to answer this in anything 
like a brief response, but what are the fundamental things that 
agencies and Federal entities ought to do to protect themselves 
from this kind of assault?
    Mr. Bellone. It is due diligence. This--what you are seeing 
here is such a dynamic process that it is a snapshot in time 
when we do an assessment. The fundamental core of doing this is 
to have program, policy, procedure and technology working 
together. That is why the scope of our assessments is what is 
important, that we do the technical elements, but at the same 
time, we have a team who looks at policy, looks at programs, 
looks at procedures. We put it together so that we can 
understand the health of a program and how they are able to 
sustain the program. It is the sustainability that is most 
significant.
    Mr. Greenwood. So what I hear you saying is that you are 
never finished with your security precautions. You can't build 
a firewall or create air space and stay permanently fixed. You 
always need to be----
    Mr. Bellone. The quote that I think about is, ``as 
technology evolves, sneakiness finds new ways of expression;'' 
and that's exactly where we are. We can assume technology will 
evolve, especially in this growing field of information 
technology. Hence, the task is always ahead of us.
    Mr. Greenwood. That is a fascinating, fascinating 
demonstration.
    Are there questions from the members for the technical 
panel here?
    The Chair recognizes the chairman, Mr. Tauzin.
    Chairman Tauzin. Thank you very much.
    I simply want to put what you have told us in layman's 
terms a little bit. Am I correct in that, with this 
demonstration, you have shown us how a hacker cannot only 
compromise the system but take it over and actually control the 
information on that system? Is that correct?
    Mr. Bellone. Yes.
    Chairman Tauzin. You have shown us how someone who could 
compromise, let's say, a third-party payment system at HCFA to 
get into that system--how they might not only gather the 
information that's in that system about patients' health care 
and problems, but that they might even alter the information on 
that system?
    Mr. Bellone. Absolutely.
    Chairman Tauzin. So that I take it your answer is, yes, 
right?
    Mr. Bellone. My answer is yes.
    Chairman Tauzin. So the person who is using the systems you 
have demonstrated can actually change the medical condition or 
the treatment profile or the payment requirements of that 
system; is that correct?
    Mr. Bellone. That is exactly correct.
    Chairman Tauzin. And, therefore, compromise the integrity 
of the payment system?
    Mr. Bellone. Absolutely.
    Chairman Tauzin. I can envision incredible fraud 
opportunities with that scenario, is that right, as well as 
privacy problems?
    Mr. Bellone. You can assume that with what we have shown, 
an attacker can gain more privileges than the user has.
    Chairman Tauzin. Say that again, ``An attacker can gain 
more privileges than the user.'' What do you mean by that?
    Mr. Bellone. What I mean is that once you exploit it, you 
can deny them service to that resource.
    Chairman Tauzin. So you can not only take charge of their 
operation, you can make it more difficult for them to actually 
use it themselves?
    Mr. Bellone. Absolutely.
    Chairman Tauzin. You can deny them total use, if you want, 
of these systems?
    Mr. Bellone. Absolutely.
    Chairman Tauzin. You also indicated--obviously, I am just 
using health care systems as an example for us to understand 
this technology, but this, in the case of an energy lab, might 
explain how someone might get in and compromise, with espionage 
intent, not only the information in that lab, but you might do 
it from across the world.
    You don't need necessarily someone working in the lab; is 
that right?
    Mr. Bellone. To a certain extent. The one thing that I 
think the Department of Energy recognizes is, given that risk, 
there are certain assets that they are not willing to subject 
to that risk.
    Chairman Tauzin. Well, let's hope so.
    Mr. Bellone. Yes.
    Chairman Tauzin. But we have some confidence problem with 
that.
    Yes, sir.
    Mr. Podonsky. Also the fact that we exist as an 
organization to continue doing these penetrations is a 
compliment to the current Secretary and the Department because 
we are allowed, without legislation, to go anywhere that we 
need to and report on anything that we find.
    Chairman Tauzin. On the technical side again, the last 
thing you said was quite disturbing as well, that if you had a 
camera, once this system is compromised, that you take over 
that camera, that you can actually watch activities in that 
room in front of that screen; is that correct?
    Mr. Bellone. Absolutely.
    Chairman Tauzin. And if you have a microphone, which most 
computers do, you can, with this technology, install your 
sniffer and actually listen in on all conversation inside that 
room; is that correct?
    Mr. Bellone. Absolutely. If the machine has a microphone, 
that is the case.
    Chairman Tauzin. And unless all the Federal sites in which 
sensitive information is being discussed are protected against 
this technology, anyone from around the world using it could 
enter any room where sensitive conversations are being held and 
eavesdrop on those conversations without a court order, 
covering their tracks, without anybody ever knowing they have 
done it; 
is that correct?
    Mr. Bellone. To a certain extent, it is correct.
    What I could say is that in some environments they look 
harder at things like hardware, the presence of microphones and 
so forth, and so that is looked upon very carefully. In other 
environments where there is less, where there is not the 
presence of sensitive information, it is more likely that that 
may be the case.
    Chairman Tauzin. But it is a problem. Unless the Federal 
official who is operating in front of that computer screen 
which has camera and microphone capabilities is aware of what 
you have just shown us, if he has no awareness of it, if it is 
not a priority item in his thinking or her thinking that day, 
that conceivably those systems can be compromised in the way 
you have demonstrated and the conversations, the actions even 
in that room can be in someone else's domain, unknown to the 
Federal officials involved.
    Mr. Huston. That is correct, sir, but you have to realize 
that it should never get that far. There should be defensive 
measures installed in these systems to prevent that from 
occurring long before that ever becomes a risk.
    Chairman Tauzin. That is, of course, the next question.
    You know, I have raised in the opening statement the 
concern that not enough of our Federal agencies are keenly 
aware; we have not yet made them keenly aware, nor instructed 
them, nor appropriated funds for them to install these 
defensive systems. Is that generally correct as well? Who can 
answer?
    Mr. Podonsky. Well, we are better off to keep focus on what 
we do know about the Department of Energy. On the technical 
side, we don't know what all the other agencies are doing, but 
we do know that because of some very good reasons, the 
Department was very motivated in the last 2 years to really 
focus on cybersecurity.
    Chairman Tauzin. Something called public embarrassment, I 
think.
    Mr. Podonsky. That often helps.
    So to answer your question, from our standpoint, as we 
pointed out here, not only do we continue to probe, but the 
people who are responsible for fixing the vulnerabilities that 
we find are actively doing that, as we speak, on a regular basis.
    Chairman Tauzin. And I guess, as a final question, these 
technologies are also available for private snooping and 
private compromising of homes and businesses across America; is 
that correct? Unless Americans are aware, keenly, of the 
capabilities of these systems and take as much concern about 
installing defensive systems, their private homes, their most 
private conferences, in many cases their most private spaces 
and activities can be easily compromised by someone invading 
their home through these devices and literally listening in and 
watching the most private of circumstances of Americans in 
their personal and business lives; is that correct?
    Mr. Huston. That is correct. However, awareness is the 
primary means of defense against any security threat, and much 
like a physical security threat, where you have started to see 
the evolution of homeowners installing alarm systems and other 
threat and risk mitigation strategies, I think you will see a 
growth in that marketplace, as well, for cybersystems.
    Chairman Tauzin. Thank you, Mr. Chairman.
    Mr. Greenwood. Let me just ask a question about motivation.
    Obviously, we know that there are some hackers who do this 
for the sport of it, just to see what they can do, and they may 
or may not have nefarious intentions other than to sneak in and 
see what they can do. But what nefarious opportunities are 
there once you get in?
    In other words, I assume a lot of people wouldn't get all 
the way there just to hide your icons or change the colors on 
your screen; that they would be there to--is there a market for 
the information? Can you get information and then sell it? Is 
it a question of compromising and destroying internal systems 
for strategic purposes?
    Talk, if you would, briefly about some of the motivations 
for doing this.
    Ms. Matthews. I think the answer to your question is all of 
the above and then some.
    There are over 100 countries that have some sort of 
information operations capabilities, and you saw what we could 
do with publicly available software and hardware. If you 
imagine them turning their expertise and resources against 
those information systems and operations, you can imagine what 
damage they could do. So the motivations are various, depending 
on whether it is a teenager or a nation-state or a terrorist 
organization with a motivation behind it.
    Mr. Greenwood. And given the ability to cover tracks, it is 
safe to say that this has probably happened to Federal systems, 
and we don't know what was done, have no way of knowing what 
was done? They could have covered the tracks and left no trail 
whatsoever?
    Mr. Bellone. Part of a strong defense is having an 
effective intrusion detection system--and I emphasize the word 
``system,'' because what we showed you is covering tracks at a 
very micro level. When we assess a site, one of our topical 
areas is intrusion detection systems, meaning their ability to 
respond to an event and provide that for an investigation, if 
you will. That is a critical component of detecting that level 
of activity. Sure, there are point-and-click tools available to 
vanish yourself from one machine, but with a very comprehensive 
system of alarms, you can still detect the activity.
    So there are defense elements that are available.
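
    [A minimal illustrative sketch, included for the record and 
not part of the testimony: the Python example below shows the 
``system of alarms'' idea in its simplest form, assuming a 
hypothetical log file named ``auth.log'' in which failed 
log-ins contain the phrase ``Failed password ... from 
<address>''; real intrusion detection systems correlate many 
such sources.]

# Detection sketch: count failed log-ins per source address and raise
# an alert when one source exceeds a threshold. Even if an attacker
# erases traces on one machine, records kept elsewhere can still alarm.
import re
from collections import Counter

FAILURE = re.compile(r"Failed password .* from (\S+)")
THRESHOLD = 5   # alert once a single source piles up this many failures

def scan_log(path):
    failures = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            match = FAILURE.search(line)
            if match:
                failures[match.group(1)] += 1
    for source, count in failures.items():
        if count >= THRESHOLD:
            print("ALERT:", count, "failed log-ins from", source)

if __name__ == "__main__":
    scan_log("auth.log")
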
    Mr. Greenwood. Mr. Strickland, do you have questions for 
the panel?
    Mr. Strickland. No, sir, but I want to thank the panel. 
They have been very stimulating, and I am sitting here 
wondering what their IQs must be.
    Mr. Greenwood. We can assume it is higher than ours.
    Mr. Davis.
    Mr. Davis. Thank you. You can never have 100 percent 
protection in an information system; do you agree with that?
    Mr. Bellone. That is correct.
    Mr. Davis. Information security best practices really mean 
using effective risk management in their implementation. How do 
you collaborate with your clients to assist them in meeting 
those objectives?
    Mr. Peterson. We have--as part of our process, we do the 
technical performance testing, what Mr. Bellone has shown you 
today. We then go in with our programmatic team and we take a 
look at their processes, and one of the key ones would be the 
risk management process, you know, does the site understand the 
threat. Then you do a risk assessment, understanding which of 
your critical systems and your critical information need 
protection. 
You then devise risk mitigation strategies and a protection 
strategy as well.
    You implement those, and then there is going to be some 
residual risk left over. What we do then is, we go in to see, 
do you understand your residual risk, has there been an 
appropriately designated official--has that person accepted 
that risk. That is what we look for.
    Mr. Davis. Thanks.
    Mr. Greenwood. Ms. DeGette.
    Ms. DeGette. Thank you, Mr. Chairman.
    I want to follow up on the full committee chairman's 
questions about microphones and video capability in 
computers. I would assume that for someone to be able to 
intercept that, the computer would have to be on at that time. 
And is that a yes?
    Mr. Bellone. That is correct.
    Ms. DeGette. And I would also assume that many meetings 
that take place where secret information is discussed are not 
in people's cubicles or offices where their PC is on, but 
rather in a conference room or some other venue. Would that be 
correct?
    Mr. Bellone. Absolutely.
    Ms. DeGette. And in those venues, in your experience in 
your agency, are there computers running in those rooms at the 
time those meetings are taking place? I am trying to figure out 
how real a threat this really is.
    Mr. Bellone. In the sensitive realm, there is a very clear 
accreditation process that looks at the room--the nature of the 
room, the hardware, the software and so forth. So it is very 
much a controlled environment, and because there are so many 
checks and balances and procedures and signatures and so forth, 
generally the process resolves or reconciles those kinds of 
concerns.
    Ms. DeGette. And that is happening under current DOE 
protocols?
    Mr. Bellone. Accreditation process.
    Mr. Podonsky. Yes.
    Ms. DeGette. And what about the training of personnel, are 
personnel currently, under current protocols, trained about the 
risks of interception of verbal communications?
    Mr. Peterson. It is part of what we look at in our 
programmatic review, we look for annual training of users--
obviously more detailed training down to the systems 
administrator level, managers--making sure that they understand 
their roles and responsibilities, making sure that the site has 
good procedures that actually push policy down from the broad 
national perspective down to the working level.
    Ms. DeGette. Well, these particular concerns that Mr. 
Tauzin was expressing are--is that part of your current 
training for personnel about the risks of hackers coming in and 
actually being able to intercept visual or verbal discussions? 
Is that a policy right now?
    Mr. Peterson. Again, that is part of the risk assessment 
process that is evaluated at the site level for each individual 
network. You know, depending on what information they have, 
again it is going to drive the level of concern. Again, that is 
a process at the site level.
    Then that feeds into the training based on, we know we have 
these risks, we need to inform our users and our systems 
administrators.
    Ms. DeGette. I understand what your general protocols are, 
but specifically, are people advised of these risks?
    Mr. Bellone. One thing that comes to mind, we run through 
computer-based training in yearly training sessions that go 
over counterintelligence and cybersecurity, and the 
cybersecurity awareness training covers these elements. They 
talk about the exploit or attacker threat. That is required 
yearly.
    Ms. DeGette. Now, let us talk for a minute about classified 
systems. By the way, I apologize, I missed your demonstration. 
I was caught in the cherry blossom traffic, I think.
    But apparently, according to Mr. Strickland, we are never 
turning on our computers again because of the risk of people 
getting our information, and I want to know how very real the 
risk is with your Agency? Are the classified systems at your 
Agency connected to the Internet?
    Mr. Peterson. We take a very close look at that. With 
classified systems, there is either an air gap between the 
Internet and the classified system or NSA-approved encryption.
    Ms. DeGette. So some are connected to the Internet, but 
there are protections that you believe would be effective in 
place?
    Mr. Peterson. Yes.
    Ms. DeGette. How many of the classified systems, what 
percentage of your classified systems are connected to the 
Internet?
    Mr. Peterson. I am not sure if we can provide a good number 
for that.
    Ms. DeGette. If you can supplement your answer in writing, 
I would appreciate it. Mr. Chairman, thank you.
    [The following was received for the record:]

    The Department has one classified system connected to the 
Internet. However, all classified information that is 
transmitted over the Internet is protected using an encryption 
device approved by the National Security Agency.

    Mr. Greenwood. We thank you for that mind-bending 
demonstration. You are excused, and we will bring up the next 
panel. Thank you again.
    The Chair calls the witnesses, Ms. Sallie McDonald, 
Assistant Commissioner, Office of Information Assurance and 
Critical Infrastructure of the U.S. General Services 
Administration; Mr. Ron Dick, Director, National Infrastructure 
Protection Center of the FBI; and Mr. Tom Noonan, President and 
CEO of Internet Security Systems.
    The Chair would ask unanimous consent that the gentleman 
from Georgia, Mr. Isakson, be given permission to sit at the 
table and introduce his constituent, Mr. Noonan.
    I am going to have Mr. Isakson introduce Mr. Noonan first, 
and then we will turn to Ms. McDonald for her opening 
statement.
    Mr. Isakson. I commend the chairman and members of the 
committee for looking into an issue of major importance to the 
U.S. Government. It is also an issue of major importance to 
the private sector throughout this country.
    I am particularly pleased to have the honor to introduce a 
citizen of Atlanta, Georgia, Mr. Tom Noonan, Chairman and CEO 
of Internet Security Systems, whose software development, 
remote management of security systems, education and consulting 
are sought worldwide. ISS is a company that has offices in Asia, 
Latin America, Middle East, Europe and throughout North 
America. They have over 6,000 customers in the United States of 
America in the management and security of their network 
systems.
    To talk about the importance of the software that they 
developed and the remote management that they have, today 21 of 
the top 25 banks in the United States of America are clients of 
ISS. The top 10 telecommunications companies in the United 
States of America are clients of ISS, and 35 government 
agencies in this country, or possibly worldwide, are clients of 
ISS.
    But probably the best compliment that I can pay to Mr. 
Noonan is that 2 years or 3 years ago, following my election to 
Congress, I sought the opportunity, because of my business 
experience and knowing the importance of technology, to develop 
an advisory board of individuals to help me deal with the 
myriad of privacy and safety and security issues that deal with 
the Internet and technology. Tom Noonan's name was consistently 
mentioned as the paramount authority on security systems in 
Atlanta, and, in fact, in the United States. It is an honor and 
privilege for me to introduce him. I am going to apologize that 
I have to leave this table, but I have the intellectual 
capacity to be a Congressman; I am not sure that I have the 
capacity to sit at this table with these individuals, and I do 
not want to confuse anyone here. I thank the chairman.
    Mr. Greenwood. I thank the gentleman. The Chair recognizes 
Mr. Davis.
    Mr. Davis. Mr. Isakson, you missed one item in that 
introduction. That is, his company has a strong presence in 
Herndon, Virginia. Welcome.
    Mr. Greenwood. The Chair recognizes Ms. Sallie McDonald for 
her testimony.

TESTIMONY OF SALLIE McDONALD, ASSISTANT COMMISSIONER, OFFICE OF 
INFORMATION ASSURANCE AND CRITICAL INFRASTRUCTURE, U.S. GENERAL 
  SERVICES ADMINISTRATION; RONALD L. DICK, DIRECTOR, NATIONAL 
INFRASTRUCTURE PROTECTION CENTER; AND TOM NOONAN, PRESIDENT AND 
              CEO, INTERNET SECURITY SYSTEMS, INC.

    Ms. McDonald. Good morning, Mr. Chairman and members of the 
committee. I am the Assistant Commissioner for the Office of 
Information Assurance and Critical Infrastructure Protection. 
My office is a component of GSA's Federal Technology Service 
under which the Federal Computer Incident Response Center 
operates.
    We wish to thank you for the opportunity to offer testimony 
pertinent to the state of security for government information 
technology resources. The Federal Computer Incident Response 
Center, or FedCIRC, is a central coordination activity for 
dealing with computer security-related incidents affecting 
computer systems within the Federal civilian agencies and 
departments of the U.S. Government.
    As government and industry system interconnectivity 
increases, the boundary between the two becomes more difficult 
to define, and in some cases it simply does not exist. Any 
security weakness across the Internet has the potential of 
being exploited 
to gain unauthorized access to one or more of the connected 
systems, including those of government. Reports indicate that 
numerous countries have or are developing information warfare 
capabilities that could be used to target critical components 
of the national infrastructure, including government systems. 
The National Security Agency has determined that potential 
adversaries are collecting significant knowledge on U.S. 
information systems and also collecting information and 
techniques to attack these systems.
    Since October 1998, FedCIRC incident records have shown an 
alarming trend in the number of attacks targeting government 
systems. Overall, 376 incidents were reported in 1998, 
affecting 2,732 Federal Government systems.
    In 1999, the figure had risen to 580 reported incidents 
affecting 1.3 million systems. By 2000, reported incidents 
numbered 586; and those incidents impacted over 576,000 
government systems.
    Although these numbers are alarming, it should be noted 
that they reflect only those reported incidents and do not 
include statistics on the estimated 80 percent that go 
unreported. Studies indicate that the lack of reporting is not 
due to an organization overlooking its obligation to report, 
but rather a sign of the organization's inability to recognize 
that its systems have been penetrated. The increase in the 
number of root compromises, denial of service attacks, network 
reconnaissance activities, destructive viruses and malicious 
code, coupled with advances in attack sophistication, pose 
immeasurable threats to government systems and the critical 
missions and services they support.
    With the rapid transition to a paperless government and 
increasing dependence on e-government solutions, the focus on 
secure technology approaches must be a priority. We in 
government cannot afford to overlook our inherent 
responsibility to protect sensitive information from 
unauthorized disclosure. The unprecedented growth in technology 
is driving government to implement capabilities and services so 
rapidly that security concerns are often overlooked.
    Mr. Chairman, my brief summary today only begins to touch 
on the most significant information security challenges we have 
before us. The complete text of my testimony describes in 
greater detail the current and growing threat to the Federal 
information infrastructure. I trust that you will derive from 
my remarks an understanding of the cybersecurity issues, and 
also an appreciation for the commitment that those in the 
FedCIRC and participating organizations share for the 
protection of components of our critical infrastructure. We 
appreciate your leadership and that of the committee for 
helping us achieve our goals and allowing us to share 
information that we feel is crucial to the defense of Federal 
information technology resources. Thank you.
    [The prepared statement of Sallie McDonald follows:]

 Prepared Statement of Sallie McDonald, Assistant Commissioner, Office 
   of Information Assurance and Critical Infrastructure Protection, 
      Federal Technology Service, General Services Administration

    Good morning, Mr. Chairman and Members of the Committee. On behalf 
of the Federal Technology Service of the General Services 
Administration let me thank you for this opportunity to appear before 
you to discuss our perspective on the state of security for government 
information technology resources.
    As you know we operate an entity known as FedCIRC. FedCIRC stands 
for the Federal Computer Incident Response Center, and is a component 
of GSA's Federal Technology Service. FedCIRC is the central 
coordinating activity associated with security related incidents 
affecting computer systems within the Civilian Agencies and Departments 
of the United States Government. FedCIRC provides security incident 
identification, containment and recovery services and works within the 
Federal community to educate agencies on effective security practices 
and procedures. FedCIRC's prevention and awareness program includes 
security bulletins and advisories, hardware and software vulnerability 
notifications, and vulnerability fixes.
    With the recent enactment by Congress of the Government Information 
Security Reform Act, federal agencies and departments must report 
computer security incidents to FedCIRC. FedCIRC's role is to assist 
those federal agencies and departments with the containment of security 
incidents and to provide information and tools to aid them with the 
recovery process. In January, the Office of Management and Budget (OMB) 
issued implementing guidance on the new security act. In that guidance, 
OMB instructed agencies to implement both technical and procedural 
means to detect security incidents, report them to FedCIRC, and to use 
FedCIRC to share information on common vulnerabilities. Agencies were 
advised to work with their security officials and Inspectors General to 
remove all internal obstacles to timely reporting and sharing. 
Additionally, in October of last year, the Federal CIO Council worked 
with FedCIRC and developed procedural advice to agencies for efficient 
interaction with FedCIRC.
    When an incident is reported to FedCIRC, we work with those 
involved to collect pertinent information, analyze it for severity and 
potential impact, and offer guidance to minimize or eliminate further 
proliferation or damage. Additionally, FedCIRC assists in identifying 
system vulnerabilities associated with the incident and provides 
recommendations to prevent recurrence. Moreover, FedCIRC works closely 
with the FBI's NIPC and the national security community to ensure that 
incidents with potential law enforcement or national security impact 
are quickly reported to the appropriate authorities.
    As government and industry systems and network interconnectivity 
increase, the boundaries between the two begin to blur. This huge 
network of networks, known of course as the Internet, includes both 
government and private systems. In some fashion, through the Internet, 
all of these systems are interconnected. Thus, an inescapable fact of 
life in this Internet Age is that any risk associated with any part of 
the Internet environment is ultimately assumed by all systems connected 
to it. Any security weakness across the Internet has the potential of 
being exploited to gain unauthorized access to one or more of the 
connected systems.
    Reports from the Department of Defense and other sources tell us 
that over 100 countries have or are developing information warfare 
capabilities that could be used to target critical components of the 
national infrastructure including government systems. The National 
Security Agency has determined that potential adversaries are 
collecting significant knowledge on U.S. information systems and also 
collecting information and techniques to attack these systems. These 
techniques give an adversary the capability of launching attacks from 
anywhere in the world that are potentially impossible to trace.
    Since October 1998, FedCIRC incident records have shown an 
increasing trend in the number of attacks targeting government systems. 
Overall, there were 376 incidents reported in 1998 that affected 2,732 
Federal civilian systems and 86 military systems. In 1999, the figure 
had risen to 580 reported incidents affecting 1,306,271 Federal 
civilian systems and 614 military systems. By 2000, reported incidents 
numbered 586, which impacted 575,568 Federal civilian systems and 148 
of their military counterparts. Though these numbers are in themselves 
ample cause for concern, these numbers reflect only those reported 
incidents and do not include incidents that were not reported. Studies 
conducted by the Department of Defense as well as data collected from 
the broad Internet community by Carnegie Mellon University's CERT 
Coordination Center indicate that as many as 80% of actual security 
incidents go unreported. More important, perhaps, is the reason 
incidents appear to remain unreported. In most cases incidents are not 
reported because the organization was unable to recognize that its 
systems had been penetrated or because there were no indications of 
penetration or attack.
    Of course computer security incidents vary in degree of severity 
and significance. Many incidents, such as web page defacements, are 
seemingly insignificant and generally categorized as ``cyber-
graffiti.'' Typically, systems that are victims of defacement have one 
thing in common, an overabundance of commonly known weaknesses in their 
respective operating system and server software. Though the damage from 
such incidents may be small, the rising number of occurrences suggests 
a clear pattern of inattentiveness to security problems, especially 
those that might be easily resolved with publicly available software 
patches.
    While these relatively minor incidents may amount to mostly 
nuisances, the more significant incidents are those associated with the 
development of sophisticated attack methodologies. Such attack 
methodologies involve the organized distribution of intrusion 
techniques across the Internet. So-called ``hackers,'' ``crackers,'' 
mischievous individuals, rogue nations and even state-sponsored 
attackers are all threats to systems in government and the private 
sector.
    In particular, unauthorized intrusions into government systems 
containing sensitive information are also on the rise. In 2000, as I 
reported earlier, FedCIRC documented 586 incidents affecting government 
systems. 155 of those were reported from 32 agencies and resulted in 
what is known as ``root compromise.'' A root compromise means the 
intruder has gained full administrative or ``root'' privileges over the 
targeted system. This means that any information or capability of the 
system is totally owned by and controllable by the intruder. With 
``root'' privileges, the intruder can cover his or her tracks because 
the privileges allow them to alter system logs and thereby erase any 
evidence of intrusion activities. In at least 5 of the incidents 
involving a root compromise, access to sensitive government information 
was verified. For the remaining 150 incidents, compromise of any and 
all information must be assumed. Root compromises were also employed in 
17 separate instances where the compromised systems were used to host 
and then launch attacks. Attacks of this nature are particularly 
egregious since they work to erode the public trust in government 
systems integrity while serving to openly demonstrate security 
vulnerabilities within government systems.
    More recently, as a byproduct of the Y2K problem, a new type of 
attack has been gaining attention. This type of attack is known as the 
``Distributed Denial of Service'' attack and is considered one of the 
most potentially damaging attack methods yet to be developed. The 
Distributed Denial of Service or DDoS attack simply overwhelms a 
targeted system with so much information that the targeted system 
cannot grant access to legitimate users. This attack can be 
particularly damaging when components of the critical infrastructure 
such as power grid controls, traffic controls, emergency and medical 
services are subject to a DDoS attack, since these attacks render their 
targets effectively inoperative. And if that is not enough, the DDoS 
attack, after first identifying and compromising vulnerable systems 
anywhere across the Internet, next deposits on those compromised 
systems hostile software capable of launching further attacks. Once in 
place, the exploited systems can then be orchestrated to simultaneously 
launch attacks on a predetermined target, flooding the target with more 
information than it is capable of processing. Ninety-three government 
systems were targets of DDoS attacks, many of which resulted in the 
disruption of critical government services.
    Perpetrators continually scan the Internet to identify systems with 
weak security profiles or vulnerabilities. These reconnaissance 
activities focus on identifying the active services, operating systems, 
software versions and any protective mechanism that may be in place. 
Armed with this information, a would-be intruder can consult publicly 
available information repositories and references for vulnerabilities 
particular to their selected target. Then they can devise attack 
strategies with the highest probabilities for successful compromise. 
Port scans, probes, network mapping applications and commonly used 
network administration tools are typical resources used by an intruder 
to identify weaknesses in the chosen organization's infrastructure and 
to simplify the intrusion effort. Incidents reported by Federal 
agencies to FedCIRC during 1998 indicated a mere 157 occurrences. 
However in 1999 there was a significant jump in network reconnaissance 
activity to 1,686 occurrences. Although 2000 showed a slight decrease, 
the number of reported reconnaissance incidents still was 1,207.
    The sophistication of computer viruses also poses a significant 
threat. While yesterday's viruses were destructive to files residing on 
a system, today's viruses come in many forms and self propagate by 
exploiting the advanced capabilities of modern-day software 
applications. Computer viruses may harbor capabilities to destroy both 
hardware and software. They may arrive in the form of so-called 
``trojan horse'' code capable of capturing and transmitting sensitive 
information, user account data or administrator passwords. As 
legitimate software programs incorporate more advanced capabilities, 
those same capabilities are being harnessed to very destructive 
purposes. As we observed during the ``Melissa'' and ``I Love You'' 
viruses, a single email on the other side of the globe began saturating 
mail servers within a few short hours. The number of virus incidents 
reported by Federal agencies in 1998, 1999 and 2000 totaled 55, 35, and 
36 respectively. Since anti-virus defenses are developed in response to 
a virus, there is a relatively significant period of time between the 
capturing of the virus code and the development of a defense. 
Considering the near-real-time communications capabilities available to 
a large percentage of the world population, microseconds can mean the 
difference between normal operations and system disruption.
    Statistics compiled by Carnegie Mellon University's CERT 
Coordination Center show a definite correlation between the growth of 
software vulnerabilities and the number of reported incidents. From 
1988 to present day, the number of vulnerabilities identified annually 
has increased from only single digits to well over 800. The number of 
reported incidents across industry and government closely tracks that of 
the vulnerabilities, from a meager few in 1988 to almost 25,000 as of 
the beginning of this year. These trends indicate that Internet 
connected systems are becoming increasingly vulnerable to attack and 
that defensive measures are not yet adequate to protect against 
exploitation of the vulnerabilities.
    With the rapid transition to a paperless government and increasing 
dependence on e-government solutions, the focus on secure technology 
approaches must be a high priority. The unprecedented growth in 
technology is driving government to implement capabilities and services 
so rapidly that security concerns are often overlooked. The adoption of 
e-commerce solutions, e-government solutions and countless forms of 
electronic information exchange is in danger of moving forward without 
adequate consideration of the protection of the systems and the 
information they store, process or transmit. We in government cannot 
afford to overlook our inherent responsibility to protect sensitive 
information from unauthorized disclosure. The implementation of 
strategic defenses for the Federal Information Infrastructure can only 
be realized if we act promptly to establish the proper foundation for 
already overdue initiatives to combat these issues. Information sharing 
and collaboration on the part of all concerned is key to the creation 
of effective defenses. FedCIRC, in cooperation with every Civilian 
Federal Agency, Industry, Law Enforcement, the Department of Defense 
and Academia, has begun building a virtual network of partners to 
facilitate the sharing of security relevant information and ideas. Each 
week, the list of partners increases as more and more realize that this 
battle cannot be fought in isolation. Every contributing piece of 
information from a participating partner has the potential of unlocking 
a critical cyber-defense problem.

                                SUMMARY

    Mr. Chairman, in my remarks here this morning, I have merely 
touched on the most significant information security challenges we face 
in this Internet Age dawning before us. My goal was to inform you and 
this committee about the nature of the cyber-security issues we face 
collectively as a nation. I also want to help you appreciate the degree 
and level of commitment that those in FedCIRC and participating 
organizations share regarding the protection of the components of our 
Critical Infrastructure. We appreciate your leadership and that of the 
Committee in helping us achieve our goals and allowing us to share 
information that is crucial to the effective defense of Federal 
Information Technology resources.

    Mr. Greenwood. Thank you.
    Mr. Dick.

                   TESTIMONY OF RONALD L. DICK

    Mr. Dick. Mr. Chairman, I am the Director of the National 
Infrastructure Protection Center which is located at the FBI. I 
want to thank you today for inviting me to discuss 
cyber-intrusions into government systems. Because of the impact 
that cyber-intrusions have on our national security, as well as 
on the economic well-being and ability of government and 
industry to provide vital goods and services to Americans, this 
is a very important topic.
    I would ask that my full statement be entered into the 
record, and I will focus on a few brief comments.
    Computer intrusions into government systems are a serious 
problem. In my statement, I cite that we currently have 102 
pending investigations involving government systems out of a total of 
approximately 1,219. But each case can represent multiple 
intrusions and multiple victims. Thus the caseload denotes a 
large number of incidents. That is the bad news.
    The good news is that National Security Advisor Rice's 
recent statement at the Partnership for Critical Infrastructure 
Security meeting indicated the administration's view that 
this is a high priority.
    Let me briefly outline some threats we face and discuss a 
few examples that highlight the vulnerability.
    Insiders have always been a major threat. Their motive is 
usually a grievance against a current or former employer. In 
many instances 
they do not need to be sophisticated because they do know the 
passwords, or controls are such that passwords are not changed 
routinely. Further, they have the greatest knowledge of how to 
defeat the system's internal controls.
    In one case, a dismissed employee of the National Library 
of Medicine created a back door in the system through which he 
could alter and destroy data on the system. These intrusions 
were a threat to public safety, as doctors from around the 
world depended on the integrity of this information for 
diagnosis and drug prescriptions.
    Computer virus writers have become a dangerous problem in 
the last few years. They write their programs, often just to 
cause mayhem in the networks. The result is that important 
systems are forced to come off-line for repairs, at a cost of 
billions of dollars. Last year, as we all remember, the 
well-known Love Letter virus began in the Philippines but soon 
spread globally. The FBI and Philippine authorities were able 
to trace the virus back to its source, but because the 
Philippines lacked a cybercrime statute at the time, the 
author could not be prosecuted.
    Along with viruses, hacking cases are the best known. In 
February 1998, just as the Center was being established, we had 
one of the largest hacks ever of U.S. Government systems. 
Intruders had compromised hundreds of Department of Defense 
computers. We initially thought it could be an attack from a 
foreign power. It turned out to be teenagers from California 
and Israel. Those teens have since been prosecuted by the U.S. 
Government; but it was a wake-up call regarding cybersecurity.
    While the motive was less malicious in this case than 
others we had seen, it highlighted the potential for use of 
cyberspace to prepare the battlefield.
    Let me touch further on national security threats. There 
are thousands of intrusions or attempts into government systems 
every year. Many of them emanate from abroad. We know many 
nations are developing information warfare capabilities, as 
well as adapting cybertools as information-gathering trade 
craft. That is about as far as I can go today, but this is an 
evolving area for us.
    Let me talk about the response to these threats. In the 
middle of the 1990's, the Federal Government, as has been 
noted already, recognized the potentially dangerous problem 
posed by cyber-vulnerabilities.
    In February 1998, the Attorney General authorized the 
creation of the National Infrastructure Protection Center. In 
May 1998, President Clinton authorized the expansion of Justice 
Department efforts to a full-scale National Infrastructure 
Protection Center. 
The Center's mission is to detect, assess, warn of, and 
investigate significant threats and incidents concerning 
our critical infrastructures. The NIPC is an interagency 
center. Of the 101 persons currently working in the Center, we 
currently have 18 detailees from outside the FBI, and two 
foreign detailees. The leadership of the Center comes from 
several agencies. The NIPC's Deputy Director is Rear Admiral 
James Playhall from the Navy, who is with us today. Over the 
last 3 years the Center has issued 82 warning products. Many of 
these products, such as the one issued last week on the ``Lion 
Internet Worm,'' are issued before any attacks occur.
    These warning products are sent to our Federal partners, as 
well as State and local law enforcement, international 
partners with whom we have connectivity, the information 
sharing and analysis centers, and others in the private sector 
so as to enhance security worldwide.
    What makes the NIPC unique is that we have access to 
information from law enforcement sources and investigations, 
the intelligence community, international sources, private 
sector contacts and open sources. No other entity has access to 
such a complete range of information.
    In cyberspace, we all look the same as has been pointed out 
here today in the demonstration. Thus, investigation is an 
important component of what the Center does. Finding out the 
origin of an intrusion and who is sitting behind that keyboard 
is a huge challenge. What makes the NIPC unique is that through 
the FBI, we have access to both criminal and national security 
authorities to conduct such investigations. As an interagency 
center, we can coordinate our investigative efforts more 
efficiently. If the intruder is overseas, we can use our 
partners regarding investigations and prosecutions through our 
legal attaches in over 40 countries around the world. Once we 
have determined the facts regarding the attack and the identity 
of the attacker, we can confer with the Department of Justice, 
and just as importantly, policymakers, to fashion the 
appropriate response.
    That response may be criminal prosecution or it might be 
diplomatic, intelligence, or military action, or a combination 
of all three of those things.
    In summary, I must stress that cooperation lies at the 
heart of everything that we do within the Center. We are 
actively engaged with our Federal partners, domestic law 
enforcement, international agencies, the private sector, and 
our international counterparts across the globe. Without 
cooperation and information sharing, we cannot hope to come to 
grips with this problem. We have made a lot of progress, but 
much work remains to be done. Thank you.
    [The prepared statement of Ronald L. Dick follows:]

Prepared Statement of Ronald L. Dick, Director, National Infrastructure 
           Protection Center, Federal Bureau of Investigation

    Representative Greenwood, Members of the subcommittee, thank you 
for inviting me here today to speak to the important issue of 
intrusions into government computer networks. The problem is serious. 
The Department of Defense reports thousands of potential cyber attacks 
launched against DoD systems. GAO reports that ``in 1999 and 2000, the 
Air Force, Army, and Navy recorded a combined total of 600 and 715 
[serious] cyber attacks, respectively.'' This does not even consider 
attacks on civilian agencies. Two weeks ago National Security Advisor 
Condoleezza Rice stated that ``The President himself is on record as 
stating that infrastructure protection is important to our economy and 
to our national security and therefore it will be a priority for this 
administration.''
    Dr. Rice also stated during that same speech that, ``We have to 
maximize our resources and energies by making sure that they are 
focused, instead of allowing them to be dissipated through dispersal.'' 
The need for a coordinated interagency approach to address intrusions 
into government networks was one of the principal reasons for having 
established the National Infrastructure Protection Center (NIPC). When 
the NIPC was founded three years ago, it was during one of the largest 
intrusions ever into U.S. government systems. The lessons learned from 
that intrusion and from the response to it have helped shape the NIPC.
    Let me provide you with a snapshot of our caseload on government 
intrusions. Currently we have 102 cases (of a current total of 1,219 
pending cases) involving computer intrusions into government systems. 
This includes intrusions into federal, state and local systems, as well 
as the military. It should be noted that a single case can consist of 
hundreds of compromised systems that have experienced thousands of 
intrusions. In addition, many agencies conduct investigations 
concerning intrusions into their systems that are not reported to the 
FBI. In short, this case load represents a large number of incidents.
    Several critical elements are required to deal with intrusions into 
government computer systems. There must be an interagency structure to 
deal with this problem. No agency should have to address 
these issues alone. Information must be shared with law enforcement and 
the NIPC. We must work to ensure that any intrusions are stemmed and 
the vulnerability that allowed the intrusion is patched.
    Interagency cooperation is essential in dealing with intrusions 
into government systems. As I said at the outset, that is why the NIPC 
was created. Currently the NIPC has representatives from the following 
agencies at the Center: FBI, Army, Navy, Air Force Office of Special 
Investigations, Defense Criminal Investigative Service, National 
Security Agency, United States Postal Service, Department of 
Transportation/Federal Aviation Administration, Central Intelligence 
Agency, Department of Commerce/Critical Infrastructure Assurance 
Office, and the Department of Energy. This representation has given us 
the unprecedented ability to reach back into the parent organizations 
of our interagency detailees on intrusions and infrastructure 
protection matters. In addition, we have formed an interagency 
coordination cell at the Center which holds monthly meetings with U.S. 
Secret Service, U.S. Customs Service, representatives from DoD 
investigative agencies, the Offices of Inspector General of NASA, 
Social Security Administration, Departments of Energy, State, and 
Education, and the U.S. Postal Service, to discuss topics of mutual 
concern.
    This representation is not enough, however. The PDD states that 
``The NIPC will include FBI, USSS, and other investigators experienced in 
computer crimes and infrastructure protection, as well as 
representatives detailed from the Department of Defense, the 
Intelligence Community and Lead Agencies.'' The NIPC would like to see 
all lead agencies represented in the Center. The more broadly 
representative the NIPC is, the better job it can do in responding to 
intrusions into government systems.
    The NIPC is pursuing three sets of activities that address computer 
intrusions into government systems: prevention, detection, and 
response.

                              PREVENTION:

    Our role in preventing cyber intrusions into government systems is 
not to provide advice on what hardware or software to use or to act as 
a federal systems administrator. Rather our role is to provide 
information about threats, ongoing incidents, and exploited 
vulnerabilities so that government and private sector system 
administrators can take the appropriate protective measures. The NIPC 
has a variety of products to inform the private sector and other 
domestic and international government agencies of the threat, 
including: alerts, advisories, and assessments; biweekly CyberNotes; 
monthly Highlights; and topical electronic reports. These products are 
designed for tiered distribution to both government and private sector 
entities consistent with applicable law and the need to protect 
intelligence sources and methods, and law enforcement investigations. 
For example, Highlights is a monthly publication for sharing analysis 
and information on critical infrastructure issues. It provides 
analytical insights into major trends and events affecting the nation's 
critical infrastructures. It is usually published in an unclassified 
format and reaches national security and civilian government agency 
officials as well as infrastructure owners. CyberNotes is another NIPC 
publication designed to provide security and information system 
professionals with timely information on cyber vulnerabilities, hacker 
exploit scripts, hacker trends, virus information, and other critical 
infrastructure-related best practices. It is published twice a month on 
our website and disseminated in hardcopy to government and private 
sector audiences.
    The NIPC has elements responsible for both analysis and warning. 
What makes the NIPC unique is that it has access to all-source 
intelligence from law enforcement, the intelligence community, private 
sector, international arena, and open sources. No other entity has this 
range of information. Complete and timely reporting of incidents from 
private industry and government agencies allows NIPC analysts to make 
the linkages between government intrusions and private sector activity. 
We are currently working on an integrated database to allow us to more 
quickly make the linkages among seemingly disparate intrusions. This 
database will leverage both the unique information available to the 
NIPC through FBI investigations and information available from the 
intelligence community and open sources. Having these analytic 
functions at the NIPC is a central element of its ability to carry out 
its preventive mission.
    The InfraGard initiative expands direct contacts with the private sector 
infrastructure owners and operators and shares information about cyber 
intrusions and exploited vulnerabilities through the formation of local 
InfraGard chapters within the jurisdiction of each of the 56 FBI Field 
Offices. This is critical to infrastructure protection, since private 
industry owns most of the infrastructures. Further, InfraGard's success 
belies the notion that private industry will not share information with 
NIPC or law enforcement. All 56 FBI field offices have InfraGard 
chapters. There are currently over 900 InfraGard members. The national 
InfraGard rollout was held on January 5, 2001.
    The NIPC is also working with the Information Sharing and Analysis 
Centers established under the auspices of PDD-63. For example, the 
North American Electric Reliability Council (NERC) serves as the 
electric power ISAC. We have developed a program with the NERC to 
develop an Indications and Warning System for physical and cyber 
attacks. Under the program, electric utility companies and other power 
entities transmit incident reports to the NIPC. These reports are 
analyzed and assessed to determine whether an NIPC alert, advisory, or 
assessment is warranted to the electric utility community. Electric 
power participants in the pilot program have stated that the 
information and analysis provided by the NIPC back to the power 
companies make this program especially worthwhile. NERC has recently 
decided to expand this initiative nationwide. This initiative will 
serve as a good example of government and industry working together to 
share information and the Electrical Power Indications and Warning 
System will provide a model for the other critical infrastructures. 
Eventually the NIPC will need to be able to have a comprehensive 
nation-wide system for all the infrastructures.
    The NIPC is the Sector Lead Agency for the Emergency Law 
Enforcement Services (ELES) sector. As part of this mission, the Center 
has also been asked by the ELES sector to have the NIPC Watch and 
Warning Unit act as the ISAC for the sector. The NIPC is working to 
implement this request.

                               DETECTION:

    Given the ubiquitous vulnerabilities in existing Commercial Off-
the-Shelf (COTS) software, intrusions into critical systems are 
inevitable for the foreseeable future. Thus detection of these 
intrusions is critical if the U.S. Government and critical 
infrastructure owners and operators are going to be able to respond. To 
improve our detection capabilities, we first need to ensure that we are 
fully collecting, sharing, and analyzing all extant information from 
all relevant sources. It is often the case that intrusions can be 
discerned simply by collecting bits of information from various 
sources; conversely, if we don't collate these pieces of information 
for analysis, we might not detect the intrusions at all. Thus the 
NIPC's role in collecting information from all sources and performing 
analysis in itself serves the role of detection.
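    To make the collation idea concrete, the sketch below (illustrative 
only, written in Python; the reports, organization names, addresses, and 
the two-organization threshold are hypothetical, not drawn from NIPC data) 
shows how grouping incident reports from different sources by their 
apparent origin can surface an intrusion that no single report would 
reveal on its own.

    # Illustrative sketch only: collate incident reports from several
    # sources so that a single apparent origin seen across organizations
    # stands out.  The reports, names, addresses, and the two-organization
    # threshold below are hypothetical.
    from collections import defaultdict

    # Each report: (reporting organization, apparent source address, note)
    reports = [
        ("DoD component", "192.0.2.45",   "failed root logins on mail host"),
        ("NASA OIG",      "192.0.2.45",   "port scan of a lab subnet"),
        ("FedCIRC",       "198.51.100.7", "web defacement attempt"),
        ("NIPC Watch",    "192.0.2.45",   "suspicious DNS zone transfer"),
    ]

    by_source = defaultdict(list)
    for agency, source, note in reports:
        by_source[source].append((agency, note))

    # A source reported independently by two or more organizations merits
    # a closer look; in isolation each report might be dismissed as noise.
    for source, sightings in by_source.items():
        if len(sightings) >= 2:
            agencies = sorted({agency for agency, _ in sightings})
            print(f"{source}: reported by {', '.join(agencies)}")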
    Agency system administrators need to work with FedCIRC and the 
NIPC. PDD-63 makes clear the importance of such reporting. It states, 
``All executive departments and agencies shall cooperate with the NIPC 
and provide such assistance, information and advice that the NIPC may 
request, to the extent permitted by law. All executive departments 
shall also share with the NIPC information about threats and warning of 
attacks and about actual attacks on critical government and private 
sector infrastructures, to the extent permitted by law.'' Currently OMB 
has instructed the agencies that they must report their intrusions to 
FedCIRC, but reporting to the NIPC is not mentioned. We are working 
with FedCIRC to define criteria for reporting of incidents to the NIPC 
for analytical as well as investigative purposes.
    In some cases, in response to victims' reports, the NIPC has 
sponsored the development of tools to detect malicious software code. 
For example, in December 1999, in anticipation of possible Y2K related 
malicious conduct, the NIPC posted a detection tool on its web site 
that allowed systems administrators to detect the presence of certain 
Distributed Denial of Service (DDoS) tools on their networks. In these 
cases, hackers plant tools such as Trinoo, Tribal Flood Net (TFN), 
TFN2K, or Stacheldraht (German for barbed wire) on a number of 
unwitting victim systems. Then when the hacker sends the command, the 
victim systems in turn begin sending messages against a target system. 
The target system is overwhelmed with the traffic and is unable to 
function. Users trying to access that system are denied its services. 
The NIPC's detection tools were downloaded thousands of times and have 
no doubt prevented many DDoS attacks.
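    The general idea behind such a check can be illustrated with a short 
sketch. The Python fragment below is not the NIPC tool; it simply probes 
a host for TCP listeners on ports that early DDoS agents such as Trinoo 
and Stacheldraht were commonly reported to use by default. The specific 
port numbers are assumptions for illustration, since real agents are 
frequently reconfigured.

    # Minimal sketch (not the NIPC tool) of the detection idea described
    # above: probe a host for TCP listeners on ports that early DDoS
    # agents were commonly reported to use by default.  The port list is
    # an assumption for illustration; real agents are often reconfigured,
    # and some components (e.g., Trinoo daemons) listen on UDP instead.
    import socket

    SUSPECT_TCP_PORTS = {
        27665: "Trinoo master (commonly cited default)",
        16660: "Stacheldraht handler (commonly cited default)",
        65000: "Stacheldraht agent (commonly cited default)",
    }

    def check_host(host: str, timeout: float = 1.0) -> None:
        for port, label in SUSPECT_TCP_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:
                    print(f"WARNING: {host}:{port} is listening -- {label}")
                else:
                    print(f"{host}:{port} not listening")

    # Only check hosts you administer.
    check_host("127.0.0.1")

A production tool would also need to examine UDP ports, running 
processes, and files on disk; this fragment only illustrates the general 
approach.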
    The NIPC also led the FBI's multiagency Y2K command center. NIPC 
personnel were on alert during the rollover period watching for 
possible malicious activity under the guise of Y2K. NIPC coordinated a 
nationwide watch effort and distributed reports every four hours round 
the clock on the situation.
    Regarding warning, if we determine that an intrusion is imminent or 
underway, the NIPC Watch is responsible for formulating assessments, 
advisories, and alerts, and quickly disseminating them. The substance 
of those products will come from analytical work done by NIPC analysts. 
If we determine an attack is underway, we can notify both private 
sector and government entities using an array of mechanisms so they can 
take protective steps. In some cases these warning products can prevent 
a wider attack; in other cases warnings can mitigate an attack already 
underway. Finally, these notices can prevent attacks from ever 
happening in the first place. For example, the NIPC released an 
advisory on March 30, 2001 regarding the ``Lion Internet Worm,'' which 
is a DDoS tool targeting Unix-based systems. Based on all-source 
information and analysis, the NIPC alerted systems administrators how 
to look for this compromise of their system and what specific steps to 
take to remove the tools if they are found. This alert was issued after 
consultation with FedCIRC, JTF-CND, a private sector ISAC, and other 
infrastructure partners.

                               RESPONSE:

    Despite our efforts, we know that government systems will continue 
to be attacked. Thus we need to determine the origin of these attacks 
in order to get to the person behind the keyboard for our government to 
formulate the appropriate response. In the cyber world, determining 
what is happening is difficult at the early stages. An event could be a 
system probe to find vulnerabilities or entry points, an intrusion to 
steal data or plant sniffers or malicious code, an act of teenage 
vandalism, an attack to disrupt or deny service, or even an act of war. 
The crime scene itself is totally different from the physical world in 
that it is dynamic--it grows, contracts, and can change shape. Further, 
the tools used to perpetrate a major infrastructure attack can be the 
same ones used for other cyber intrusions (simple hacking, foreign 
intelligence gathering, organized crime activity to steal property, 
data, etc. . . .), making identification more difficult. Determining 
that an event is even occurring thus can often be difficult in the 
cyber world, and usually a determination cannot be made without a 
thorough investigation. In the physical world one can see instantly if 
a building has been bombed or an airliner brought down. In the cyber 
world, an intrusion may go undetected for some time.
    Identification of the perpetrators and their objectives during an 
event is critical especially in the initial stages. The perpetrators 
could be criminal hackers, teenagers, electronic protestors, 
terrorists, or foreign intelligence services. In order to attribute an 
attack, the NIPC coordinates an investigation that gathers information 
from within the United States using either criminal investigative or 
foreign counter-intelligence authorities, depending on the 
circumstances. We also rely on the assistance of other nations when 
appropriate. Obtaining reliable information is necessary not only to 
identify the perpetrator but also to determine the size and nature of 
the intrusion: how many systems are affected, what techniques are being 
used, and what is the purpose of the intrusions--disruption, economic 
espionage, theft of money, etc. . . .
    Relevant information could come from existing criminal 
investigations or other contacts at the FBI Field Office level. It 
could come from the U.S. Intelligence Community, other U.S. Government 
agency information, through private sector contacts, the media, other 
open sources, or foreign law enforcement contacts. The NIPC's role is 
to coordinate, collect, analyze, and disseminate this information. 
Indeed this is one of the principal reasons the NIPC was created.
    Because the Internet by its nature embodies a degree of anonymity, 
our government's proper response to an attack first requires 
significant investigative steps. Investigators typically need a full 
range of criminal and/or national security authorities to determine who 
launched the attack. Under our system the legal authorities for 
conducting investigations within the United States include: the 
Computer Fraud and Abuse Act, the Economic Espionage Statute, the 
Electronic Communications Privacy Act, the Foreign Intelligence 
Surveillance Act, as well as the relevant executive orders delineating 
the responsibilities of the intelligence community. Thus the FBI can 
apply for court orders to get subscriber information from Internet 
Service Providers, and monitor communications under the Electronic 
Communications Privacy Act or under the Foreign Intelligence 
Surveillance Act, depending on the facts of the case as they are known 
at the time the order is requested. The FBI has designated the NIPC to 
act as the program manager for all of its computer intrusion 
investigations, and the NIPC has made enormous strides in developing 
this critical nationwide program. In that connection, the NIPC works 
closely with the Criminal Division's Section on Computer Crime and 
Intellectual Property, the Department's Office of Intelligence Policy 
and Review, and the U.S. Attorney's Offices in coordinating legal 
responses.
    In the event of a national-level set of intrusions into significant 
systems, the NIPC will form a Cyber Crisis Action Team (C-CAT) to 
coordinate response activities and use the facilities of the FBI's 
Strategic Information and Operations Center (SIOC). The team will have 
expert investigators, computer scientists, analysts, watch standers, 
and other U.S. government agency representatives. Part of the U.S. 
government team might be physically located at FBI Headquarters and 
part of the team may be just electronically connected. The C-CAT will 
immediately contact field offices responsible for the jurisdictions 
where the attacks are occurring and where the attacks may be 
originating. The C-CAT will continually assess the situation and 
support/coordinate investigative activities, issue updated warnings, as 
necessary, to all those affected by or responding to the crisis. The C-
CAT will then coordinate the investigative effort to discern the scope 
of the attack, the technology being used, and the possible source and 
purpose of the attack.
    While we have not seen an example of cyber terrorism directed 
against U.S. government systems, the NIPC's placement in the FBI's 
counterterrorism division will allow for a seamless FBI response in the 
event of a terrorist action that encompasses both cyber and physical 
attacks. The NIPC and the other elements of the FBI's Counterterrorism 
Division have conducted joint operations and readiness exercises in the 
FBI's SIOC. We are prepared to respond if called upon.
Case Examples
    Over the past several years we have seen a wide range of cyber 
threats ranging from defacement of websites by juveniles to 
sophisticated intrusions sponsored by foreign powers, and everything in 
between. Some of these are obviously more significant than others. The 
theft of national security information from a government agency or the 
interruption of electrical power to a major metropolitan area would 
have greater consequences for national security, public safety, and the 
economy than the defacement of a web-site. But even the less serious 
categories have real consequences and, ultimately, can undermine 
confidence in e-commerce and violate privacy or property rights. A web 
site hack that shuts down an e-commerce site can have disastrous 
consequences for a business. An intrusion that results in the theft of 
credit card numbers from an online vendor can result in significant 
financial loss and, more broadly, reduce consumers' willingness to 
engage in e-commerce. Because of these implications, it is critical 
that we have in place the programs and resources to investigate and, 
ultimately, to deter these sorts of crimes.
    In addition, because it is often difficult to determine whether an 
intrusion or denial of service attack, for instance, is the work of an 
individual with criminal motives or foreign nation state, we must treat 
each case as potentially serious until we gather sufficient information 
to determine the nature, purpose, scope, and perpetrator of the attack. 
While we cannot discuss ongoing investigations, we can discuss closed 
cases that involve FBI and other agency investigations in which the 
intruder's methods and motivation were similar to what we are currently 
seeing. A few illustrative cases are described below:
    In hacker cases, the attacker's motivation is just to see how far 
he can intrude into a system. This seems to be the motivation for the 
California teens in the well-known Solar Sunrise case. In this case the 
intruders exploited a well known vulnerability in computers that run on 
the Sun Solaris operating system. By exploiting this vulnerability, the 
intruder can gain root access (total control) of the system. As in the 
Solar Sunrise case, the intruders can then install their own accounts 
on the system and create backdoors into the system from which they can 
then install additional programs to find passwords. They also had the 
ability to alter, remove, or destroy data on those systems. This case 
demonstrated to the interagency community how difficult it is to 
identify an intruder until all of the facts are gathered through an 
investigation, and why assumptions cannot be made until sufficient 
facts are available. The incident also vividly demonstrated the 
vulnerabilities that exist in our networks; if these individuals were 
able to assume ``root access'' to certain unclassified DoD systems, it 
is not difficult to imagine what hostile adversaries with greater 
skills and resources would be able to do. Finally, Solar Sunrise 
demonstrated the need for interagency coordination to deal with such 
attacks. The perpetrators in this case were two 16-year-olds and an 18-
year-old.
    We have also seen cases of hacking and mischief for what might be 
termed personal reasons. For example, Eric Burns, a.k.a Zyklon, hacked 
into the White House web site as well as other sites. This case was 
worked jointly by the U.S. Secret Service and the FBI. He was caught 
and pled guilty to one count of 18 U.S.C. 1030. In November 1999 he was 
sentenced to 15 months in prison, 3 years supervised release, and 
ordered to pay $36,240 in restitution and a $100 fine.
    In another example, the Melissa Macro Virus, reportedly named 
after an exotic dancer from Florida, wreaked havoc on government 
and private sector networks in March 1999. David Smith, the author 
of the virus, pled guilty to one federal count of violating 18 
U.S.C. 1030 and four state counts, and admitted to causing $80 
million in damage. He faces a maximum sentence of five years and 
$250,000 on the federal charge and is currently awaiting sentencing. 
This is a good example of how federal and state governments are 
increasingly coordinating investigations and prosecutions in combating 
computer crime.
    In another case, system penetration coupled with theft can be the 
motivation. A Florida youth admitted to breaking into 13 computers at 
the Marshall Space Flight Center in Huntsville, Alabama in June 1999 
and downloading $1.7 million in NASA proprietary software that supports 
the International Space Station's environmental systems. NASA has 
estimated the cost to repair the damage at $41,000. The subject has 
also admitted to entering Defense Department systems of the Defense 
Threat Reduction Agency, intercepting 3,300 e-mail messages, and 
stealing passwords from Pentagon computers. This case was investigated 
by NASA. He was sentenced to six months in a juvenile detention center 
for hacking into NASA computers which support the International Space 
Station.
    Virus writers have become a more prevalent threat in recent years. 
We have seen virus writers unleash havoc on the Internet for a variety 
of motivations. In May 2000 companies and individuals around the world 
were stricken by the ``Love Bug,'' a virus (or, technically, a 
``worm'') that traveled as an attachment to an e-mail message and 
propagated itself extremely rapidly through the address books of 
Microsoft Outlook users. The virus/worm also reportedly penetrated at 
least 14 federal agencies--including the Department of Defense (DOD), 
the Social Security Administration, the Central Intelligence Agency, 
the Immigration and Naturalization Service, the Department of Energy, 
the Department of Agriculture, the Department of Education, the 
National Aeronautics and Space Administration (NASA), along with the 
House and Senate.
    Investigative work by the FBI's New York Field Office, with 
assistance from the NIPC, traced the source of the virus to the 
Philippines within 24 hours. The FBI then worked, through the FBI Legal 
Attache in Manila, with the Philippines' National Bureau of 
Investigation, to identify the perpetrator. The speed with which the 
virus was traced back to its source is unprecedented. The prosecution 
in the Philippines was hampered by the lack of a specific computer 
crime statute. Nevertheless, Onel de Guzman was charged on June 29, 
2000 with fraud, theft, malicious mischief, and violation of the 
Devices Regulation Act. However, those charges were dropped in August 
by Philippine judicial authorities. As a postscript, it is important to 
note that the Philippines' government on June 14, 2000 reacted quickly 
and approved the E-Commerce Act, which now specifically criminalizes 
computer hacking and virus propagation. The Philippine government will 
not be hindered by insufficient charging authorities should an incident 
like this one ever occur again. Also, the NIPC continues to work with 
other nations to provide guidance on the need to update criminal law 
statutes.
    In some cases, we have been able to prevent the release of 
disastrous viruses against public systems. On March 29, 2000, FBI 
Houston initiated an investigation when it was discovered that certain 
small businesses in the Houston area had been targeted by someone who 
was using their Internet accounts in an unauthorized manner and causing 
their hard drives to be erased. On March 30, 2000, FBI Houston 
conducted a search warrant on a residence of an individual who 
allegedly created a computer ``worm'' that seeks out computers on the 
Internet. This ``worm'' looks for computer networks that have certain 
sharing capabilities enabled, and uses them for the mass replication of 
the worm. The worm causes the hard drives of randomly selected 
computers to be erased. The computers whose hard drives are not erased 
actively scan the Internet for other computers to infect and force the 
infected computers to use their modems to dial 911. Because each 
infected computer can scan approximately 2,550 computers at a time, 
this worm could have the potential to create a denial of service attack 
against the E911 system. The NIPC issued a warning to the public 
through the NIPC webpage, SANS, NLETS, InfraGard, and teletypes to 
government agencies. On May 15, 2000 Franklin Wayne Adams of Houston 
was charged by a federal grand jury with knowingly causing the 
transmission of a program onto the Internet which caused damage to a 
protected computer system by threatening public health and safety and 
by causing loss aggregated to at least $5000. Adams was also charged 
with unauthorized access to electronic or wire communications while 
those communications were in electronic storage. He faces 5 years in 
prison and a $250,000 fine.
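    To see why even a modest scanning rate alarmed investigators, the 
back-of-the-envelope sketch below works through the arithmetic. The 
per-host scanning figure comes from the description above; the infection 
rate and the number of generations are purely hypothetical values chosen 
for illustration.

    # Back-of-the-envelope sketch of the worm's potential reach.  The
    # 2,550 scans per infected host comes from the description above; the
    # infection rate and generation count are hypothetical values chosen
    # purely for illustration.
    SCANS_PER_HOST = 2550
    INFECTION_RATE = 1 / 500   # hypothetical: 1 in 500 scanned hosts infected

    infected = 1
    for generation in range(1, 6):
        infected += int(infected * SCANS_PER_HOST * INFECTION_RATE)
        print(f"generation {generation}: ~{infected:,} infected hosts, "
              f"~{infected * SCANS_PER_HOST:,} scans (and possible 911 "
              f"dial-outs) in progress")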
    Revenge by disgruntled employees seems to be another strong 
motivation for attacks. Insiders do not need a great deal of knowledge 
about computer intrusions, because their knowledge of victim systems 
often allows them to gain unrestricted access to cause damage to the 
system or to steal system data. For example, in July 1997 Shakuntla 
Devi Singla used her insider knowledge and another employee's password 
and logon identification to delete data from a U.S. Coast Guard 
personnel database system. It took 115 agency employees over 1800 hours 
to recover and reenter the lost data. Ms. Singla was convicted and 
sentenced to five months in prison, five months home detention, and 
ordered to pay $35,000 in restitution.
    Another case involved a National Library of Medicine (NLM) 
employee. In January and February 1999 the National Library of Medicine 
computer system, relied on by hundreds of thousands of doctors and 
medical professionals from around the world for the latest information 
on diseases, treatments, drugs, and dosage units, suffered a series of 
intrusions where system administrator passwords were obtained and 
hundreds of files downloaded, including sensitive medical ``alert'' 
files and programming files that kept the system running properly. The 
intrusions were a significant threat to public safety and resulted in a 
monetary loss in excess of $25,000. FBI investigation identified the 
intruder as Montgomery Johns Gray, III, a former computer programmer 
for NLM, whose access to the computer system had been revoked. Gray was 
able to access the system through a ``backdoor'' he had created in the 
programming code. Due to the threat to public safety, a search warrant 
was executed for Gray's computers and Gray was arrested by the FBI 
within a few days of the intrusions. Subsequent examination of the 
seized computers disclosed evidence of the intrusion as well as images 
of child pornography. Gray was convicted by a jury in December 1999 on 
three counts for violation of 18 U.S.C. 1030. Subsequently, Gray 
pleaded guilty to receiving obscene images through the Internet, in 
violation of 47 U.S.C. 223. Montgomery Johns Gray III was sentenced to 
5 months in prison, 5 months in a halfway house, and 3 years of 
probation, and was ordered 
to pay $10,000 in restitution and assessments.
    We are also seeing the increased use of cyber intrusions by 
criminal groups who attack systems for purposes of monetary gain. In 
September, 1999, two members of a group dubbed the ``Phonemasters'' 
were sentenced after their conviction for theft and possession of 
unauthorized access devices (18 USC Sec. 1029) and unauthorized access 
to a federal interest computer (18 USC Sec. 1030). The ``Phonemasters'' 
were an international group of criminals who penetrated the computer 
systems of MCI, Sprint, AT&T, Equifax, and even the National Crime 
Information Center. The Phonemasters' methods included ``dumpster 
diving'' to gather old phone books and technical manuals for systems. 
They used this information to trick employees into giving up their 
logon and password information. The group then used this information to 
break into victim systems. One member of this group, Mr. Calvin 
Cantrell, downloaded thousands of Sprint calling card numbers, which he 
sold to a Canadian individual, who passed them on to someone in Ohio. 
These numbers made their way to an individual in Switzerland and 
eventually ended up in the hands of organized crime groups in Italy. 
Cantrell was sentenced to two years as a result of his guilty plea, 
while one of his associates, Cory Lindsay, was sentenced to 41 months.
    Terrorist groups are increasingly using new information technology 
and the Internet to formulate plans, raise funds, spread propaganda, 
and to communicate securely. In his statement on the worldwide threat 
in 2000, Director of Central Intelligence George Tenet testified that 
terrorist groups, ``including Hizbollah, HAMAS, the Abu Nidal 
organization, and Bin Laden's al Qa'ida organization are using 
computerized files, e-mail, and encryption to support their 
operations.'' In one example, convicted terrorist Ramzi Yousef, the 
mastermind of the World Trade Center bombing, stored detailed plans to 
destroy United States airliners on encrypted files on his laptop 
computer. While we have not yet seen these groups employ cyber tools as 
a weapon to use against critical infrastructures, their reliance on 
information technology and acquisition of computer expertise are clear 
warning signs. Moreover, we have seen other terrorist groups, such as 
the Internet Black Tigers (who are reportedly affiliated with the Tamil 
Tigers), engage in attacks on foreign government web-sites and email 
servers. During the riots on the West Bank in the fall of 2000, Israeli 
government sites were subjected to e-mail flooding and ``ping'' 
attacks. The attacks allegedly originated with Islamic elements trying 
to inundate the systems with email messages. As one can see from these 
examples overseas, ``cyber terrorism''--meaning the use of cyber tools 
to shut down critical national infrastructures (such as energy, 
transportation, or government operations) for the purpose of coercing 
or intimidating a government or civilian population--is thus a very 
real threat.
    We have worked closely with our international partners on computer 
intrusion cases, including cases in which hackers have illegally 
accessed U.S. government systems. In 1999 the FBI cooperated with New 
Scotland Yard in the United Kingdom on a case in which a UK citizen 
confessed to breaking into U.S. Navy systems. He was further suspected 
of intruding into other systems, including that of the U.S. Senate. He 
was sentenced to a term of 3 years on a probation-like status.
    We believe that foreign intelligence services have adapted to using 
cyber tools as part of their information gathering tradecraft. While I 
cannot go into specific cases, there are overseas probes against U.S. 
government systems every day. It would be naive to ignore the 
possibility or even probability that foreign powers were behind some or 
all of these probes. The motivation of such intelligence gathering is 
obvious. By combining law enforcement and intelligence community assets 
and authorities under one Center, the NIPC can work with other agencies 
of the U.S. government to detect these foreign intrusion attempts.
    The prospect of ``information warfare'' by foreign militaries 
against our critical infrastructures is perhaps the greatest potential 
cyber threat to our national security. We know that many foreign 
nations are developing information warfare doctrine, programs, and 
capabilities for use against the United States or other nations. 
Knowing that they cannot match our military might with conventional or 
``kinetic'' weapons, nations see cyber attacks on our critical 
infrastructures or military operations as a way to hit what they 
perceive as America's Achilles heel--our growing dependence on 
information technology in government and commercial operations. For 
example, two Chinese military officers recently published a book that 
called for the use of unconventional measures, including the 
propagation of computer viruses, to counterbalance the military power 
of the United States.

                               CONCLUSION

    While the NIPC has accomplished much over the last three years in 
building the first national-level operational capability to respond to 
cyber intrusions, much work remains. We have learned from cases that 
successful network investigation is highly dependent on expert 
investigators and analysts, with state-of-the-art equipment and 
training. We have built that capability both in the FBI Field Offices 
and at NIPC Headquarters, but we have much work ahead if we are to 
build our resources and capability to keep pace with the changing 
technology and growing threat environment, while at the same time being 
able to respond to several major incidents at once.
    We are building the international, agency to agency, government to 
private sector, and law enforcement partnerships that are vital to this 
effort. The NIPC is well suited to foster these partnerships since it 
has analysis, information sharing, outreach, and investigative 
missions. We are working with the executives in the infrastructure 
protection community with the goal of fostering the development of safe 
and secure networks for our critical infrastructures. While this is a 
daunting task, we are making progress.
    Within the federal sector, we have seen how much can be 
accomplished when agencies work together, share information, and 
coordinate their activities as much as legally permissible. But on this 
score, too, more can be done to achieve the interagency and 
public-private partnerships called for by PDD-63. We need to ensure that 
all relevant agencies are sharing information about threats and 
incidents with the NIPC and devoting personnel and other resources to 
the Center so that we can continue to build a truly interagency, 
``national'' center. Finally, we must work with Congress to make sure 
that policy makers understand the threats we face in the Information 
Age and what measures are necessary to secure our Nation against them. 
I look forward to working with the Members and Staff of this Committee 
to address these vitally important issues.
    Thank you.

    Mr. Greenwood. We thank you for your testimony.
    Mr. Noonan.

                    TESTIMONY OF TOM NOONAN

    Mr. Noonan. Mr. Chairman, thank you for having me today, 
and other members of the committee. I am very pleased to be 
here to talk about an issue that we are both passionate about, 
and an issue of, I believe, very critical national security.
    Although the folks from the DOE are not here, I thank them 
because I recognize some of the technology that we pioneered 
about 8 years ago, and they are using it today effectively to 
protect the DOE, as are other government agencies, and I am 
always pleased to see our technology in use.
    I am here today to provide you with some background 
information on threat assessment, on the vulnerabilities and 
threats that we see in the commercial sector, on the 
vulnerabilities and threats that we see in working with some 26 
foreign governments outside of the United States as well as 
some 9,000 commercial customers around the globe.
    Every day we get involved in one side or the other of 
hacking, either protecting networks from hackers, cyber thieves 
and others; or addressing vulnerabilities, fixing the 
weaknesses necessary to protect those systems. These 
individuals typically use the Internet to pursue their own ends, 
including international cyberterrorism, causing havoc and mayhem. 
I am far less concerned about teenage hackers, although they seem 
to make the press more often, and far more concerned with the 
sophisticated attacks against not just our government but our 
industry.
    As a company, we monitor and manage the security of 
companies around the world through security operations centers 
we have located in Sweden, the U.S., Japan, the Philippines, 
Italy, Rio de Janeiro, and Atlanta, Georgia. So we have an 
interconnected network of security operation centers monitoring 
companies and detecting and tracking threats around the world.
    Over the years, I have watched computer vulnerabilities 
increase dramatically. The Internet is so useful for the 
reasons that it is so vulnerable. I would like to share two 
analogies. The first analogy I would like to use is to compare 
a computer to that of a house. Most of you are familiar with 
your house. You typically have a front door, a back door, and 
some windows that periodically you lock or monitor through your 
system. Every single computer connected to the Internet has the 
equivalent of 65,536 doors and windows, and many of them cannot 
be locked. They cannot be locked because you are using those 
doors and windows for legitimate access. So the real challenge 
becomes, with all of these doors and windows, how do we 
ultimately determine which need to be locked and which need to 
be left open, and those that are left open, how are they 
monitored to assure proper use and access of the system?
    If you multiply 65,000 times all of the computers on the 
Internet, that is how many potential ways to access computers 
there are. It is simply not a problem that we can address 
manually. We have to use technology and automation as part of 
that solution.
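    As a rough illustration of the point, the short Python sketch below 
checks which of a host's ``doors'' (TCP ports) answer on just the first 
1,024 of the 65,536 possible ports, and already has to lean on automation 
(a thread pool) to do so in reasonable time. The target host and port 
range are placeholders; such a scan should only ever be run against 
systems one owns or administers.

    # Hedged sketch of the "doors and windows" point: each computer
    # exposes 65,536 possible TCP ports, far too many to check by hand,
    # so even this toy example leans on automation (a thread pool) to
    # test a small range.  The target host and port range are
    # placeholders; scan only systems you own or administer.
    import socket
    from concurrent.futures import ThreadPoolExecutor

    HOST = "127.0.0.1"          # placeholder target
    PORTS = range(1, 1025)      # just the "well-known" doors, not all 65,536

    def door_is_open(port: int) -> bool:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            return s.connect_ex((HOST, port)) == 0

    with ThreadPoolExecutor(max_workers=50) as pool:
        results = zip(PORTS, pool.map(door_is_open, PORTS))
        open_ports = [port for port, is_open in results if is_open]

    print(f"Open 'doors' on {HOST}: {open_ports if open_ports else 'none found'}")

Multiplying even this small scan across every host on a network makes 
the point: only automated monitoring can keep track of which doors are 
open and whether the traffic through them is legitimate.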
    So just as physical security companies like ADT or 
Honeywell or Brinks monitor physical locations, security 
companies, ours being one of them, have not only pioneered the 
technology to provide this monitoring--some of the tools you 
saw from the DOE, for instance--but also to deliver that as a 
service. I think that is an area that government ought to 
responsibly look at as we move forward: the area of managed 
security systems.
    My second analogy compares computer security to a chess 
game. In a chess game, the goal is to protect the king. In 
information security, the goal is to protect information but 
otherwise provide legitimate access to it for nonmalicious 
purposes. But a knowledgeable chess player is required to 
maneuver and play the chess game, just like a knowledgeable 
security person is required to help coordinate and manage the 
overall security posture of a system.
    I think we are fooling ourselves if we think that every 
single user of every computer is going to be aware enough to 
check their own systems for back doors, to deal with the 
problems that are so deeply rooted in the technology underneath 
this. Just as a chess game environment is constantly changing, 
so is the network. New applications, new users, new trading 
partners, new introductions of sensitive data, et cetera. Over 
the years, as the Internet has become more used in business and 
more acceptable to the masses, it has been attacked at an 
increasing rate.
    Incidents occur when hackers maneuver through a system, 
take advantage of the vulnerabilities and cause a system 
breach.
    So as to your question, Mr. Chairman, there is a whole new 
currency on the Internet: it is called the back door. Today I 
could easily trade two DOTs for one GM or a Procter & Gamble 
for another back door in some other case. So on the Internet, 
back doors or accounts are being used as a new currency, and 
they are being traded frequently.
    Vulnerabilities are holes or weaknesses and problems that 
exist in the computer systems, as we saw from the DOE 
demonstration, and these incidents include everything from 
credit card theft, which seems to be where the consumers' fear 
is, to the compromise of very sensitive systems. And it comes 
down to three things:
    One, confidentiality. Can the information be protected from 
disclosure?
    Two, integrity. Can it be changed, as in the questions that 
came from the Chair?
    And, last, availability. Denial of service, which you have 
heard about--the ability to completely shut down a system or 
destroy data--is possible here.
    So what I would like to do is introduce three slides to 
demonstrate what is happening in industry. The first slide 
demonstrates top security breaches. As you can see, 4 percent 
of the breaches are actually physical security breaches such as 
breaking a window or getting in through a locked door.
    Let us look beyond that into where the real computer 
security problems are. Twenty percent are system unavailability 
breaches or denial of service breaches. We learned about those 
in February of last year when some of the most important 
commerce sites on the Internet were taken off line by malicious 
activity.
    Also, as Mr. Dick has commented on, the ``ILUVYOU'' e-mail 
virus cost industry billions of dollars. Electronic exploits 
represent about 20 percent of the breaches. An example of an 
electronic exploit is finding a hole and installing a back 
door. The gentleman from the DOE showed you how easy that is. 
Last, 25 percent of the breaches are loss of privacy or 
confidentiality breaches such as when someone compromises a 
record or data base and removes information. Twenty-six percent 
are malicious code breaches, things like when a hacker sends an 
attachment with a malicious payload and, when opened, it 
deletes files automatically.
    To give you an idea how fast incidents are occurring, the 
second slide examines the increase in one type of breach: the 
virus. If you look at the threat spectrum, on one side you have 
the traditional virus all of the way up through denial of 
service attacks, trojans, worms, electronic compromise of data 
bases and operating systems.
    But if you look at this slide, you can see that in October 
1999 alone, there were more than 2,000 new known 
viruses. In November 1999, there were over 2,400. In December 
1999, over 2,500 more were added. In October 2000, there were 
30,678 new viruses being tracked; and in November of 2000, 
there were some 23,962 new viruses. What we are seeing here is 
exponential growth of an issue that is getting out of hand and 
causing significant damage and problems to the global computing 
infrastructure.
    I would like to give you a better idea of how incidents 
generally occur and how computer security companies protect 
against these incidents.
    The third slide is an example of a Website where crackers 
can get information to help them break into a system. This is a 
Website that I have deattributed. Being in the protection 
business, I don't like to pass along where people can go get 
these weapons. This actually came from an African hacking site, 
and in this hacking site it is basically the equivalent of 
being able to anonymously walk down to your corner store, pick 
up an anthrax bomb and a couple of grenades, and be able to 
launch them from your own computer anonymously and without any 
visibility as to who you are. These happen to be computer 
exploits.
    You can get back doors that monitor and take advantage of 
microphones, denial of service attacks--you have a whole 
smorgasbord up here to fill your plate.
    This site lists new vulnerabilities that have been 
discovered and programs that allow anyone to use these exploits 
to damage a system. There are literally thousands of these 
sites on the Internet, so you do not have to be very 
sophisticated or have a high IQ to cause a lot of damage to our 
infrastructure.
    We monitor the Websites that discover the latest trends. In 
addition, thousands of private chat rooms exist where more 
sophisticated crackers trade hacking tools over the Internet.
    We are pleased that the government is interested in taking 
computer security seriously. The United States spends billions 
of dollars buying weapons and gaining intelligence to protect 
our country. Our computer systems must be adequately protected 
or our entire infrastructure could be compromised by one single 
person with one single computer.
    Even though the task is complicated, computer systems can 
be protected. I think today we focused on how easy they are to 
break in. I think it might be helpful someday to have a session 
on how effectively we can protect the computer systems today 
because this is where we are going to take action. I think the 
government has taken great strides in the past few years, but 
much more is needed. I think we are moving from the topical to 
the awareness to let us start taking some action here.
    As industry has considerable resources and expertise, a 
continued partnership with industry is crucial. In addition, 
computer systems should be a priority, and leadership and 
coordination are necessary in the government. The government 
has done well with the resources it has been given. However, 
computer security specialists we believe are required to 
implement and coordinate many different security products and 
services to adequately secure a system.
    In my company alone, the average salary of one of my 2,000 
employees is around $80,000. I don't know of another industry 
where the average employee, from the mailman to the CEO, earns 
$80,000. 
Computer security experts are scarce. They are in short supply 
and they are expensive. To help address the cost of computer 
security, I think we ought to focus not just on what do we do 
to protect our infrastructure, but we ought to extend these 
efforts to educational efforts that we can undertake to train 
the personnel coming out of our schools, not just our 
engineering schools, but our colleges and universities. 
Computer programmers should be trained in computer security. 
Today they are not. Today they are trained in how do you make 
the best feature. What they do not focus on is the 
vulnerability that they leave behind.
    Specialized programs in computer security should be 
encouraged, and we are strongly supportive of the universities 
that are implementing them today. I look forward to a 
continuing dialog on computer security issues. Working 
together, we are confident we can adequately secure our 
country's assets and information. Thank you.
    [The prepared statement of Tom Noonan follows:]

Prepared Statement of Tom Noonan, President and CEO, Internet Security 
                                Systems

    Good Morning, Mr. Chairman and Members of the Committee. I am pleased 
to appear before you today to discuss an issue of great importance to our 
country.

                               BACKGROUND

    In 1991, the founder and Chief Technology Officer of Internet 
Security Systems, Chris Klaus, became interested in government security 
while interning at the Department of Energy. Chris then began working 
on a groundbreaking technology that actively identified and fixed 
computer security weaknesses. The next year, while attending Georgia 
Institute of Technology (``Georgia Tech''), Chris released his product 
for free on the Internet. He received thousands of requests for his 
invention, and decided that he should sell it. In 1994, I met Chris 
over the Internet and teamed with him to form Internet Security 
Systems. I was then working for a computer company, having attended GA 
Tech and Harvard Business School. Chris and I then launched the 
company's first product, Internet Scanner, and went public in March 
1998. And yes, we're a profitable company, even in today's market. 
Today, Internet Security Systems is the worldwide leader in security 
management software. For nearly 10 years, which is several lifetimes in 
Internet time, we have been involved in computer security, watching the 
area grow from the outset. Chris Klaus (who is now 26) is one of a 
handful of premiere experts in the world on computer security, and 
Internet Security Systems is a widely recognized pioneer in computer 
security. Computer security is all we do. We have nearly 2,000 
employees in 18 countries focused exclusively on computer security. 
Altogether, we now have more than 8,000 customers, including 68 percent 
of the Fortune 500, and 21 of the 25 largest U.S. commercial banks. We 
also serve the ten largest telecommunication companies, numerous U.S. 
government agencies, and other non-U.S. governments.

                            VULNERABILITIES

    I'm here today to provide you with some background information on 
threat assessment. Every day, Internet Security Systems stops criminal 
hackers and cyber-thieves by addressing vulnerabilities in computers. 
These individuals use the Internet for business-to-business warfare, 
for international cyber-terrorism, or to cause havoc and mayhem in our 
technology infrastructure. Internet Security Systems is involved in 
every aspect of computer security, whether in making the security 
products or in managing them. We also monitor networks and systems 
around the clock (24 x 7 x 365) from the US, Japan, South America, and 
Europe in our Security Operations Centers (``SOCs''). We search for 
attacks and misuse, identify and prioritize security risks, and 
generate reports explaining the security risks and what can be done to 
fix them. At the heart of our solution is our team of world-class 
security experts focused on uncovering and protecting against the 
latest threats. This team of 200 global specialists, dubbed the X-
Force, understands exactly how to transform the complex technical 
challenges into an effective, practical, and affordable strategy. 
Because of all of these capabilities, companies and governments turn to 
us as their trusted computer security advisor.
    Over the years, I have watched computer vulnerabilities increase 
dramatically. The Internet is so useful for the very reasons that it is 
so vulnerable. To give you an idea of what we are dealing with, I'd 
like to share two analogies. First, I'll compare a computer to a house. 
Every computer connected to the Internet has the equivalent of 65,536 
doors and windows which need to be locked and monitored to make sure no 
one breaks in. Multiply 65,536 by every computer in every company and 
you begin to see the extent of the problem. Just as physical security 
companies like ADT monitor your physical doors and windows, computer 
security companies must lock and monitor the doors and windows of 
computers.
    My second analogy compares this complicated area of computer 
security to a Chess game. In a Chess game, the goal is to protect the 
king--or mission critical information. The other Chess pieces protect 
the king. But a knowledgeable Chess player is required to maneuver the 
Chess pieces. With computer security, the goal is to protect the 
information. A variety of computer security products, including 
Intrusion Detection Systems (IDS) and vulnerability assessment, 
function as Chess pieces, and protect and watch the information. These 
products are absolutely essential. However, you also need to have a 
computer security expert to manage these products, just as you have to 
have a knowledgeable Chess player maneuver the Chess pieces. Just as a 
Chess game environment is constantly changing, the computer security 
environment is also constantly changing. Computer security companies, 
such as Internet Security Systems, produce the products and perform the 
services that protect the information and manage the products so that 
they function in the proper way.
    Over the years, as the Internet has become more used in business 
and more accessible to the masses, it has been attacked at an 
increasing rate. Incidents occur when hackers maneuver through a 
system, take advantage of the vulnerabilities, and cause a system 
breach. Vulnerabilities are holes, weaknesses, and problems that exist 
in computer systems. Incidents include credit card theft or other 
information theft. The first slide documents the top security breaches. 
4% of these breaches are actual physical security breaches, such as 
breaking a window or getting in through a locked door. 20% are system 
unavailability breaches or denial-of-service breaches, such as the 
``ILOVEYOU'' email virus. Electronic exploits represent 20% of the 
breaches. An example of an electronic exploit is finding a hole where 
you can install a backdoor to get into a computer system. 25% of the 
breaches are loss of privacy or confidentiality breaches, such as when 
a cracker breaks into a database server and gains access to credit card 
information. 26% are malicious code breaches, such as when a hacker 
sends an email with an attachment that when opened, deletes files on 
the computer system. 5% of the breaches are other breaches.
    To give you an idea of how fast incidents are occurring, the second 
slide examines the increase in just one type of breach, the virus. 
Viruses, such as the ``ILOVEYOU'' virus, are small computer programs 
that can flood a computer system with email so that the system slows 
down or crashes. Viruses can also destroy information on a computer 
system. In October 1999 alone, there were more than 2,000 new known 
viruses. In 
November 1999, there were 2,427 new viruses. In December 1999, 2,586 
were added. Look at how these numbers have dramatically increased in 
2000. In October 2000, there were 30,678 new viruses. In November 2000, 
there were 23,962 new viruses. In December 2000, there were 16,762 new 
viruses. Keep in mind that the vast impact of the ``ILOVEYOU'' virus 
was caused by just one of these viruses.
    To give you a better idea of how incidents generally occur, and how 
computer security companies protect against these incidents, the third 
slide is an example of a Web site where crackers can get information 
that will help them break into a system. Because we are in the 
protection business, we have modified this site and removed the 
identifying information. This site lists new vulnerabilities that have 
been discovered, and includes programs that allow anyone to exploit 
these vulnerabilities and damage a system. There are thousands of 
similar Web sites. Our X-Force monitors the most important Web sites to 
discover the latest trends. In addition, thousands of private chat 
rooms exist where more sophisticated crackers trade hacking tools over 
the Internet. Our X-Force gains access to important chat rooms and 
monitors them as well.

                            RECOMMENDATIONS

    We are pleased that the Government is interested in taking computer 
security seriously. The United States spends billions of dollars buying 
weapons and gaining intelligence to protect our country from more 
conventional types of attack. Our computer systems must also be 
adequately protected, or our entire infrastructure could be compromised 
by one person with one computer. Even though the task is complicated, 
computer systems can be protected.
    The Government has taken great strides in the past few years. 
However, much, much more is needed. As industry has considerable 
resources and expertise, a continued partnership with industry is 
crucial. In addition, computer security must be a priority, and 
leadership and coordination are necessary in the Government. 
International leadership is also required. Perhaps most importantly, 
funding for secure Government systems must be increased by a 
substantial amount, and outsourcing should be considered as an option. 
The Government often does well with the resources it has been given. 
However, computer security specialists are required to implement and 
coordinate many different security products and services to adequately 
secure a system. As computer security expertise is extremely rare, the 
cost of computer security specialists is astronomical. In my company 
alone, the average salary of my 2,000 employees is around $80,000. To 
help address the cost of computer security, educational efforts must be 
undertaken to train the personnel required. Computer programmers in 
universities should be trained in computer security. Currently, they 
are not. In addition, specialized programs in computer security should 
be encouraged.
    Thank you for inviting me here today. I look forward to a 
continuing dialog on the computer security issue, and hope that, 
working together, we can adequately secure our country's assets and 
information.

    Mr. Greenwood. Thank you very much for your extraordinary 
testimony.
    The Chair recognizes himself for 5 minutes for questions.
    Ms. McDonald, on your chart, the root compromises, 155 
last year, are those the kind of compromises that we saw in the 
demonstration where you can essentially take over an entire 
system?
    Ms. McDonald. Yes.
    Mr. Greenwood. Question for Mr. Dick. You referred to the 
issue of who is sitting behind the keyboard. Can you elaborate 
on what the FBI has discovered as to who these perpetrators 
are? We know that there are teenagers who will hack into 
systems for the fun of it. But in terms of identified 
perpetrators, can you share with us what their motivations have 
been?
    Mr. Dick. As in the physical world, the range of motives 
associated with those who are perpetrating these kinds of acts runs 
the full gamut. As Tom was referring to, we have the teenage 
hackers that are doing it for sport and notoriety on the 
Internet, all the way to the other end of the range, where we have 
state-sponsored activities associated with trying to discern how to 
conduct information warfare.
    What we see across that range of threats is a high volume of, 
let us say, the hackers that are going into systems for the honor or 
recognition of it--which is relatively low impact as far as our 
national security and economic well-being--moving down to the virus 
writers, who do have an economic impact on us, and on to criminal 
organizations. We are now seeing both U.S. and foreign 
criminal organizations attacking systems for credit card 
information, and then going back and extorting funds from the 
businesses in exchange for not exposing that they have been 
vulnerable to espionage and so forth.
    Mr. Greenwood. What are the kind of penalties that have 
been exacted against these perpetrators, and do you believe the 
penalties are adequate under the current Federal statutes?
    Mr. Dick. For violations of Title 18, section 1030, the 
maximum penalty is 10 years in jail for each violation. The maximum 
penalties associated with these offenses probably are adequate.
    Now, have the courts, based upon the sentencing guidelines, 
levied those kinds of penalties to subjects which have been 
convicted? Not at this point. It is very similar to white 
collar crime investigations where the penalties are perceived 
by some to be less than adequate. But I think with time, that 
will change also.
    Mr. Greenwood. What about international cooperation? You 
referenced the case in the Philippines where they were not--
their laws did not permit us to prosecute that perpetrator. Are 
there in process efforts to create international agreements or 
treaties with regard to these hackers?
    Mr. Dick. Yes. There are a number of things ongoing right 
now through the G-8 and the Council of Europe to implement laws 
that will better standardize not only our ability to prosecute 
but also our ability to access information.
    One of the difficulties in investigating these cases is 
that almost 99 percent of the time we are going to end up overseas 
in some facet of the case, because the particular hop point or 
place that they intruded into to get into the U.S. system exists 
overseas. So we have to go to a foreign entity just to get the 
information as to what occurred over there. There are efforts 
going on, and more could be done. There is a lot of emphasis on 
that at this point in time.
    Mr. Greenwood. Thank you.
    Mr. Noonan, I think you made some reference in your 
testimony to Federal customers that you have, U.S. Government 
customers.
    Mr. Noonan. Yes.
    Mr. Greenwood. Do they tend to be the inspectors general 
buying your services and software so they can check on the 
departments, or do they tend to be the managers of those 
departments buying your software so as to provide the 
protections necessary?
    Mr. Noonan. Historically they have been more the watchdog 
or audit, inspector general type function, meaning using the 
technology to determine where the systems are vulnerable.
    Today we are just beginning to see more widespread use of 
intrusion detection. Vulnerability detection and intrusion 
detection are kind of the yin and yang. One finds the holes, and 
the other watches to make sure that no one exploits them.
    Operationally, you want to see the units using both 
vulnerability detection to fortify the environment and 
intrusion detection to monitor it and ensure that it is being 
used judiciously.
    Historically it has been mainly the watchdog part. That is 
just now beginning to turn to more operational use.
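    A minimal sketch in Python may help illustrate that ``yin and 
yang''; the vulnerable-version list and attack signatures below are 
invented for illustration and are not drawn from any actual product:

    KNOWN_VULNERABLE = {"wu-ftpd 2.6.0", "bind 8.2.2"}      # hypothetical
    ATTACK_SIGNATURES = ["/etc/passwd", "cmd.exe?/c+dir"]   # hypothetical

    def find_holes(installed_software):
        # Vulnerability detection: report software with known holes.
        return sorted(KNOWN_VULNERABLE & set(installed_software))

    def watch_traffic(log_lines):
        # Intrusion detection: flag requests matching attack signatures.
        return [line for line in log_lines
                if any(sig in line for sig in ATTACK_SIGNATURES)]

    print(find_holes(["bind 8.2.2", "apache 1.3.12"]))
    log = ["GET /scripts/../cmd.exe?/c+dir HTTP/1.0"]
    print(watch_traffic(log))

One function finds the holes before an attacker does; the other 
watches live traffic to see whether anyone is trying to use them.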
    Mr. Greenwood. Do you and your competitors aggressively 
market your services to the systems managers within the Federal 
Government? Do you have conferences and exhibits and so forth 
where these Federal managers can come and survey this 
technology?
    Mr. Noonan. Yes, we do, as do many in the industry. One 
thing that is of particular note is movement in this area has 
really just begun in the last 6 to 9 months in terms of active 
technologies that can be deployed to protect the 
infrastructure. If I had to take a guess, I would probably say 
that 5 percent, maybe, of the government actually is protected 
with these types of technologies operationally. And I could be 
off by as much as 5 percent. Regardless, I think we have a long 
way to go.
    Ms. McDonald. Mr. Chairman, one of the things that we are 
doing in FedCIRC this fiscal year is evolving into an intrusion 
detection system that is called Managed Security Services, much 
like what Mr. Noonan's company offers.
    We are encouraging Federal agencies to deploy managed 
security services; and hopefully we are responsible for maybe 
some of that 5 percent, if 5 percent exists. It is our 
intention in the FedCIRC organization, after we have 
encouraged agencies to implement managed security services and 
intrusion detection systems, to develop an analysis 
capability within FedCIRC so that these intrusion detection 
systems will feed up into the FedCIRC program office and we 
will be able to get a much better picture across 
government as to what is actually occurring.
    With this step we feel that we can move from the 20 percent 
of the incidents that are being discovered to closer to the 100 
percent.
    Mr. Greenwood. Mr. Noonan, since the bad guys can use your 
services or at least your software, do you have any process of 
screening out the bad guys?
    Mr. Noonan. Mr. Chairman, it would be very difficult for 
the bad guys to use our technology. Each is encrypted with a 
special key. Each user that licenses the software is required 
to provide information and sign a license agreement. So our 
systems are not freely available, and they do not operate 
unless you have a key generated by us, and each key is specific 
to that user.
    So if the DOE licensed our vulnerability system, they could 
not use it on the Department of Transportation computers 
because it would not match up with their IP addresses.
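    A minimal sketch in Python of that kind of binding (hypothetical 
only; it is not ISS's actual licensing scheme, and all names and 
addresses are invented) might look like this:

    import hmac, hashlib, ipaddress

    VENDOR_SECRET = b"vendor-private-signing-secret"  # held by the vendor

    def issue_key(licensee, network):
        # The vendor derives a key from the licensee and its network.
        msg = f"{licensee}|{network}".encode()
        return hmac.new(VENDOR_SECRET, msg, hashlib.sha256).hexdigest()

    def key_allows_scan(key, licensee, network, target_ip):
        # The product refuses to run unless the key matches and the
        # target address falls inside the licensed network range.
        expected = issue_key(licensee, network)
        in_range = (ipaddress.ip_address(target_ip)
                    in ipaddress.ip_network(network))
        return hmac.compare_digest(key, expected) and in_range

    net = "198.51.100.0/24"
    key = issue_key("DOE", net)
    print(key_allows_scan(key, "DOE", net, "198.51.100.7"))   # True
    print(key_allows_scan(key, "DOE", net, "203.0.113.5"))    # False

The first check succeeds; the second fails because the target address 
lies outside the licensed range, which is the behavior described for 
the Department of Transportation example above.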
    Mr. Greenwood. The Chair recognizes the gentleman, Mr. 
Strickland.
    Mr. Strickland. Ms. McDonald, I have a copy here of a March 
2001 newsletter from FedCIRC about the demise of the FedNet, 
which has been described as a conceptualized weapon to defend 
the Federal information infrastructure by tracking anomalous 
behavior. According to this newsletter, FedNet was buried 
because of concerns of the public, media, and Congress because 
it was a threat to privacy rights. Are you familiar with this?
    Ms. McDonald. I am familiar with that, sir. If I could 
explain----
    Mr. Strickland. If you could explain to me what you do not 
agree with.
    Ms. McDonald. We did not bury FedNet. FedNet first came to 
the public's attention in a New York Times article in 1998. 
That article said that FedNet was a system that was going to be 
run by the FBI, and that it was going to monitor all citizens' 
e-mails, including the content of those e-mails, in the United 
States. FedNet was actually a program the GSA was sponsoring, 
not the FBI, and the idea was to develop an intrusion detection 
network with all of the Federal civilian agencies.
    Because of the bad publicity that it got, we revamped the 
program. We now call it the managed security services, which is 
what I alluded to. And what we have done, so that agencies have 
confidence in what we are doing in the FedCIRC program, is we 
are encouraging agencies to establish intrusion detection 
systems within their own organizations and then work with 
FedCIRC on a voluntary basis.
    One of the important facts of this entire area is trust. We 
lost a lot of trust with the FedNet program, which is why we 
chose to rename it managed security services. And as the 
industry has matured, and as Mr. Noonan has testified, these 
services are commercially available and we are encouraging 
agencies to procure these services themselves and then work 
with FedCIRC.
    Mr. Strickland. Ms. McDonald, this is your publication?
    Ms. McDonald. That's correct.
    Mr. Strickland. It indicates a plan to monitor Federal civilian 
agencies for questionable activities and to provide those same 
agencies a vehicle to obtain those services from private industry. I 
think we are talking about the services that were envisioned in 
FedNet. FedCIRC is preparing a new offering that would employ 
private industry and will consist of a variety of information 
security services under the heading of managed security services.
    Now, is this an attempt by the GSA to go--to sneak around 
behind the back of Congress and set up, if not the same system, 
certainly a similar system, as a way of avoiding the kind of 
criticism that was directed toward the previous effort?
    Ms. McDonald. Absolutely not. The idea was to make it much 
more palatable to the Federal civilian agencies, to put them in 
control of the systems because they would be the ones that 
would be procuring what is now a commercially available 
service. FedNet as it was designed or thought of in 1998 didn't 
really exist. But that shows the maturity in this entire field. 
Now these services are available commercially, and it is 
important for agencies to trust the FedCIRC operation. So we 
are encouraging them to deploy these services and then share 
the results of those systems with us.
    Mr. Strickland. Yes. If you can just speak to this 
question. Under the services available from the managed 
security services program, will the public be able to have 
confidence that all of their communications will not be tracked 
or trackable?
    Ms. McDonald. Absolutely.
    Mr. Strickland. That is still a concern?
    Ms. McDonald. That was a misunderstanding from the New York 
Times article. These systems are going to be deployed only at 
Federal agencies looking at Federal agency systems, and they 
will not be looking at the content of those systems.
    Mr. Strickland. So you are saying to me, if a private 
citizen attempts or does gather information from some Federal 
source, some Federal agency, that it will not be possible to 
track that communication to identify it?
    Ms. McDonald. That's correct. Unless that private citizen 
does something like the Department of Energy demonstrated this 
morning, it won't show up on an intrusion detection system if 
it is a normal, approved-type activity.
    Mr. Strickland. Reference is made to anomalous behavior. Do 
you have a definition of what that would be?
    Ms. McDonald. Behavior that is beyond the normal. For 
instance, most of us work 9-to-5 jobs. Profiles are developed 
on a user. If all of a sudden somebody was working at their job 
at 2 a.m., that would fall into that type of behavior, and that 
would kick out on the intrusion detection system.
    Mr. Strickland. I suspect that a lot of committee and staff 
members of the House of Representatives would be identified as 
engaging in anomalous behavior because many of them work at 
strange hours.
    Ms. McDonald. That is true. I am sure that if you looked at 
Mr. Noonan's company's hours, his hours would be quite 
different than perhaps a Federal agency's hours. But with an 
intrusion detection system, you profile the culture that occurs 
in your organization. So perhaps maybe the staffers are not 
working at 2 o'clock in the afternoon.
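    A minimal sketch in Python of the kind of hour-of-day profiling 
Ms. McDonald describes (the login history is invented, and real 
intrusion detection systems profile far more than working hours) 
might look like this:

    from collections import Counter

    def build_profile(login_hours):
        # Learn which hours of the day an account normally works.
        counts = Counter(login_hours)
        total = sum(counts.values())
        return {hour for hour, n in counts.items() if n / total >= 0.05}

    def is_anomalous(hour, profile):
        # Activity outside the learned hours is flagged for review.
        return hour not in profile

    history = [9, 10, 10, 11, 13, 14, 15, 16, 17, 9, 10, 15]
    profile = build_profile(history)
    print(is_anomalous(14, profile))   # False: a normal afternoon login
    print(is_anomalous(2, profile))    # True: 2 a.m. activity kicks out

Such a profile reflects the culture of the particular organization 
rather than a fixed 9-to-5 rule.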
    Mr. Strickland. It seems to me that the result of this 
could be the profiling of very innocent behavior on the part 
of American citizens who simply have work habits that are 
perceived by someone as anomalous. Is that not something that 
the American public should have some reasonable concern about?
    Ms. McDonald. Let me say that this whole area of 
technology, as you very well know, opens up a tremendous amount 
of privacy concerns, and people's activities can be tracked. It 
is something that we need to balance with the need to protect.
    Mr. Strickland. I appreciate the difficulty of the issue 
that we are discussing today. I think it is important to be 
open and have full disclosure. I think it is important that the 
concerns that resulted in the initial action to not proceed be 
fully explored.
    Mr. Chairman, I do think this is a matter that we should 
continue to follow and to explore as we look more deeply into 
this.
    Ms. McDonald. We would be glad to work with you on that. 
Thank you.
    Mr. Strickland. Thank you.
    Mr. Greenwood. The Chair thanks the gentleman and 
recognizes the gentlelady from Colorado.
    Ms. DeGette. Thank you, Mr. Chairman.
    We have been hearing a lot of pretty chilling testimony 
this morning about the risks of this cyberterrorism and other 
kinds of compromises of our systems.
    I am just sitting here wondering--for example, this slide 
that Mr. Noonan put up with this Website from--not the Website, 
but this slide from Africa. And I think you said that we wonder 
if people from places like Africa couldn't hack into our 
systems and even launch nuclear weapons or biological warfare.
    Mr. Dick, in your written testimony you say we have not 
seen an example of cyberterrorism. With all of this activity 
going on, I guess I am wondering why we have not seen an 
example of cyberterrorism yet.
    Mr. Dick. In the continuum of incidents, over 
time, as people get familiar with the technology and the tools 
become more widely available out on the Internet, you are 
going to see the volume of activity go up. Eventually we are 
going to see it.
    Ms. DeGette. Why do you think that we have not seen it yet?
    Mr. Noonan. I was just going to comment on that. I think we 
have seen it. We see it in industry. It is just a microcosm. It 
is not the same necessarily as in the physical world. I have 
seen entire customer records destroyed. That is terrorism to a 
business.
    Ms. DeGette. And that is certainly serious to us. What is 
your definition of cyberterrorism?
    Mr. Noonan. I think that is a very good question. The tools 
that I represented--and that is actually a Website which has 
been copied now and made into a slide. You can click on any one 
of those and download those weapons, if you will.
    My definition of cyberterrorism for a commercial industry 
is anything that causes significant problems with the 
availability, the confidentiality, or the integrity of those 
systems. We can now have very small incidences of 
cyberterrorism, or very coordinated, large-scale attacks.
    Mr. Dick. My definition is different. What he described 
there, those would be criminal acts that we would investigate 
under criminal authorities.
    When we talk about terrorism in the Department of Justice 
and from an investigative standpoint, we are governed by 
certain laws and by who is defined as a foreign power. So my 
definition is much more restrictive.
    Ms. DeGette. What is your definition?
    Mr. Dick. Basically those foreign powers that are attacking 
the United States and its assets for political motives as 
opposed to some sort of economic reason.
    Ms. DeGette. Why do you think that we have not had any 
incidence of cyberterrorism on the scale of what Mr. Noonan 
describes?
    Mr. Dick. My statement says we have not had any that we can 
attribute to any foreign powers, organizations, and acts at 
this point in time. I am not saying that there never has been.
    Ms. DeGette. So you think that we might have had 
cyberterrorism, but we do not know?
    Mr. Dick. I have no empirical data that says specifically.
    Ms. DeGette. First of all, I think we should figure out 
what our definition of cyberterrorism is. That might be helpful 
in this analysis. It might be helpful to the public when we 
think about the safety of our government and Internet systems. 
I agree with Mr. Strickland that we need a lot more research 
and hearings on this. But the reason that I am concerned about 
this issue is because we are here today talking about 
compromise of government computer systems, and I am trying to 
figure out what the very real risk is of, say, someone hacking 
into our military intelligence systems or our defense systems 
and actually launching these biological weapons or nuclear 
weapons or obtaining top secret information.
    I understand that there are a lot of incidents, but what is 
the real risk here?
    Mr. Dick. When we say, ``terrorism,'' we are looking at 
things that are politically motivated in an attempt to 
intimidate our society or policies, or change policies, as 
opposed to affecting a business's way of doing business.
    Ms. DeGette. Why do you think that we have not had this 
happen? Do we have pretty good integrity of those critical 
systems and what we need to do is work on other systems? Ms. 
McDonald, do you have an opinion on this?
    Ms. McDonald. I think we are lucky that we have not had it 
happen.
    Ms. DeGette. Mr. Noonan, do you have any comments?
    Mr. Noonan. I think we have a lot of problems. In 
terms of the infrastructure, I think the problem is very, very 
widespread; and whether or not I would call it 
cyberterrorism, I know we have had compromises. I have 
tracked them and watched them in and out of our own government 
and agencies.
    What networks the Pentagon actually uses to launch nuclear 
weapons, I don't know. I hope that those are not easily 
accessible from the Internet. But I know that we have had 
compromises. Whether we want to call that terrorism or not is 
up to us.
    Ms. DeGette. Shifting direction a little bit, Mr. Noonan, 
these 65,000 doors that you talk about, and computers that 
allow unauthorized entries, those are part of the operating 
systems that come with computers when people obtain them?
    Mr. Noonan. That's correct. That is a world standard.
    Ms. DeGette. Right. I would think that a good portion of 
the blame for the vulnerabilities in operating systems would 
lie on the developers of those products; wouldn't you agree?
    Mr. Noonan. Not entirely, but partially, yes; because the 
Internet standard, TCP/IP, which we use all over the world, is 
open by design, and this is the fundamental challenge.
    Ms. DeGette. In fact, Microsoft says customers want 
openness, not closed doors, correct?
    Mr. Noonan. Absolutely. So the conundrum is how do you 
secure the integrity of the system when it is based on an open 
design.
    Ms. DeGette. Do you have any ideas how to do that?
    Mr. Noonan. Absolutely. I absolutely do.
    Ms. DeGette. Would you share one?
    Mr. Noonan. I believe we are entering an age where 
everything is going to be microprocessor driven, not just our 
computers, and the Internet will be the foundation for 
command and control systems, for distribution tracking systems, 
for satellite tracking systems, for everything that we do that 
needs information. The only way that we are going to secure 
these systems out into the future is if each individual system 
on the network has its own capability to intelligently monitor 
itself and discern between good and bad behavior.
    Ms. DeGette. Thank you. I have one last question, and that 
is to Ms. McDonald. I assume that is your chart behind you?
    Ms. McDonald. Yes. It is based upon our data.
    Ms. DeGette. My question to you is about the root compromises 
on that chart, which are in red. It says a root compromise 
means that the intruder has gained full administrative or root 
privileges over the targeted system, meaning that any 
information or capability of the system is totally owned and is 
controllable by the intruder.
    Ms. McDonald. That's correct.
    Ms. DeGette. How many of those root compromises have been 
to confidential or secret data?
    Ms. McDonald. To my knowledge, none.
    Ms. DeGette. Thank you.
    Mr. Chairman, I can see that we have a lot more work to do. 
I want to thank this excellent panel and the previous one.
    Mr. Greenwood. The Chair is going to recognize himself for 
a second round of questions, and I turn to you first, Ms. 
McDonald.
    Of the 586 incidents reported in 2000, is it true that at 
least several of those are known to have resulted in the 
compromise of sensitive agency information; and if so, can you 
give us some sense of the type of information that was 
compromised?
    Ms. McDonald. Every Federal civilian agency, as we have 
heard this morning, maintains very sensitive information on 
American citizens. I can tell you that most of the increases 
that we have seen, and most of the incidents in the year 2000 
had to do with scientific research and environmentally involved 
agencies. Again, because this is an area that FedCIRC needs to 
develop the trust of the agencies that we work with, I could 
not go into identifying which particular agencies and what 
systems.
    But generally the scientific area is--as Mr. Noonan alluded 
to, the whole Internet is very open. And it was developed by 
the scientific area and they, as part of their research, are a 
very open community.
    Mr. Greenwood. Your testimony notes there has been a rise 
in reconnaissance activities, scans of government computers by 
foreign sources over the past year, up from 60 percent in 1999 
to 75 percent in 2000. Are we talking about terrorism 
activities, teenage hackers from abroad, espionage, or a 
combination of these; and how does FedCIRC determine if a scan 
is by a foreign source, and what information are these foreign 
sources trying to gain access to?
    Ms. McDonald. Well, we can determine whether the address 
these scans are coming from is a foreign address. If, working with 
the agency, we feel that it is a nation-state, then we work with 
Mr. Dick's area or the NSA and transfer that information over 
to them. We do not investigate incidents. Our job is to report 
incidents, assist agencies to recover from incidents, and to 
give agencies the tools that they need in order to protect 
themselves.
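    A minimal sketch in Python of that kind of source-address triage 
(the prefix-to-country table is invented; real analysis relies on 
Internet registry and geolocation data) might look like this:

    import ipaddress

    ALLOCATIONS = {                       # hypothetical allocation table
        "192.0.2.0/24": "US",
        "198.51.100.0/24": "FOREIGN-EXAMPLE",
    }

    def source_country(ip):
        addr = ipaddress.ip_address(ip)
        for prefix, country in ALLOCATIONS.items():
            if addr in ipaddress.ip_network(prefix):
                return country
        return "UNKNOWN"

    def refer_for_investigation(ip):
        # Scans judged to come from a foreign source are handed off
        # to investigative agencies rather than handled locally.
        return source_country(ip) not in ("US", "UNKNOWN")

    print(refer_for_investigation("198.51.100.23"))   # True
    print(refer_for_investigation("192.0.2.8"))       # False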
    Mr. Greenwood. Mr. Dick, according to a Washington Post 
article dated March 21 of this year, your current assessment of 
computer security at Federal facilities is that they are 
extremely vulnerable to potentially crippling cyberattacks. Is 
that an accurate assessment of your view; and if so, what is 
that view based on?
    Mr. Dick. It is an accurate assessment of my view of not 
only government systems but private sector systems, as has been 
demonstrated in this committee today. There are numerous tools 
out there with which to exploit the vulnerabilities in those 
systems; and unless there is due diligence on the part of 
systems administrators, CEOs, and executive managements of 
government agencies, as well as the private sector as a whole, 
you are going to have vulnerabilities. That due diligence 
includes not only the implementation of firewalls and 
intrusion detection software but also, as has been pointed out 
earlier, continually updating and correcting your systems.
    For example, we are conducting an investigation currently, 
or several investigations, regarding known vulnerabilities in 
certain operating systems. These intruders are going in, as I 
alluded to earlier, and taking credit card numbers and then 
extorting the businesses. In December of last year, we issued a 
warning to the public, based upon our investigative efforts, 
saying that these are the known vulnerabilities in this 
operating system which need to be repaired because of this 
activity. We got very little play.
    In March we became much more public after coordinating with 
the information sharing and analysis centers and our other 
partners and came out with a very--a much more public 
announcement and beat the drum louder, if you will, to try and 
get these vulnerabilities fixed because there are known patches 
that can prevent this. Because of that, one of the information 
sharing and analysis centers indicated that we were able to 
prevent over 1,600 attempts.
    So the point is that it is continual vigilance and 
implementation in security; and unless you do that, you are 
vulnerable.
    Mr. Greenwood. GSA told this committee--told our staff that 
in excess of 95 percent of the intrusions into Federal 
computers could have been prevented had well-known 
vulnerabilities been patched with existing remedies. What does 
that say about the state of our computer security and 
vigilance, Ms. McDonald?
    Ms. McDonald. It doesn't say a lot.
    Mr. Greenwood. Actually, it does say a lot.
    Ms. McDonald. Well, yes, it does; but not what I would like 
it to say. One of the things that we're doing in the Fed 
service area, recognizing that this is an issue, is working with 
a number of companies to see what capabilities they have to 
offer the Federal Government for a patch distribution system, so 
that we can profile the agency systems to determine what 
type of systems they have and where they stand on their patches, 
and then, as patches come out, feed them down to the agencies in 
the hope that that will encourage them to apply the patches and 
therefore allow them to recover from----
    Mr. Greenwood. Well, you're hoping that it will encourage 
them, but are they required? If you do an advisory indicating a 
vulnerability and a known patch and you distribute that to the 
Federal agency, is the Federal agency required----
    Ms. McDonald. No.
    Mr. Greenwood. [continuing] under any----
    Ms. McDonald. No. This would only allow us the knowledge 
that the patch was delivered to them, and we can establish the 
system so that we can see if they actually took the patch; but 
they're under no requirement to apply the patch.
    Mr. Greenwood. Do you keep records of the extent to which your 
encouragement to apply the patches works?
    Ms. McDonald. We will, once we implement the system.
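    A minimal sketch in Python of the bookkeeping such a patch 
distribution system implies (agency and patch names are hypothetical; 
this is not the system FedCIRC ultimately built) might look like this:

    from dataclasses import dataclass, field

    @dataclass
    class AgencyProfile:
        systems: set
        delivered: set = field(default_factory=set)
        applied: set = field(default_factory=set)

    def distribute(agencies, patch, affected_system):
        # Push the patch to every agency running the affected system.
        for profile in agencies.values():
            if affected_system in profile.systems:
                profile.delivered.add(patch)

    def compliance_report(agencies):
        # Delivered but unapplied patches; agencies are under no
        # requirement to apply them, which is the gap discussed above.
        return {name: sorted(p.delivered - p.applied)
                for name, p in agencies.items()}

    agencies = {"Agency A": AgencyProfile({"webserver-x"}),
                "Agency B": AgencyProfile({"mailserver-y"})}
    distribute(agencies, "PATCH-2001-07", "webserver-x")
    print(compliance_report(agencies))

Tracking delivery against application is what would let FedCIRC 
measure whether its encouragement is working.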
    Mr. Greenwood. Okay. The Chair thanks all three of our 
witnesses for their superb testimony and you are excused. And I 
would call the second panel, consisting of Mr. Robert Dacey, 
director of information security systems at the U.S. General 
Accounting Office, and Mr. John S. Tritak, director of Critical 
Infrastructure Assurance Office of the U.S. Department of 
Commerce.
    I'm going to do what I failed to do in the last panel and 
that is remind you this committee is holding an investigative 
hearing and when doing so it has had the practice of taking 
testimony under oath. Do either of you have any objection to 
testifying under oath?
    Mr. Dacey. No.
    Mr. Tritak. Not at all.
    Mr. Greenwood. You're also then advised that under the 
rules of the House and under the rules of the committee you're 
entitled to be advised by counsel. Do you desire to be advised 
by counsel during your testimony?
    Mr. Dacey. I do not.
    Mr. Tritak. I do not.
    Mr. Greenwood. In that case, will you rise and raise your 
right hand and I will swear you in.
    [Witnesses sworn.]
    Thank you. Please be seated.
    We will recognize Mr. Dacey for his testimony for 5 
minutes.

 TESTIMONY OF ROBERT F. DACEY, DIRECTOR, INFORMATION SECURITY 
  ISSUES, U.S. GENERAL ACCOUNTING OFFICE; AND JOHN S. TRITAK, 
   DIRECTOR, CRITICAL INFRASTRUCTURE ASSURANCE OFFICE, U.S. 
                     DEPARTMENT OF COMMERCE

    Mr. Dacey. Mr. Chairman, I am pleased to be here this 
afternoon to discuss information security in the Federal 
Government. Evaluations by GAO and the Inspectors General 
continue to show that computer security over the government's 
unclassified systems is fraught with serious and widespread 
weaknesses. The risks associated with these weaknesses, as has 
been discussed earlier, are heightened by the increasing 
interconnectivity of our systems, as well as the use of the 
Internet. While the government cannot estimate the actual 
damage and loss, principally because many incidents are either 
not identified or not reported, I'd like to provide several 
examples that illustrate the effects these weaknesses can have 
on Federal agencies.
    First, there can be theft or misuse of Federal Government 
resources. For example, one individual embezzled over $435,000 
at the Department of Defense. At EPA, a hacker chat room was 
surreptitiously installed on an agency server. An EPA system 
was used by hackers to launch attacks against others, and 
numerous Federal Web sites have been reportedly defaced.
    Ineffective security can also result in inappropriate 
disclosure or misuse of sensitive personal and proprietary 
business information. For example, sensitive information was 
reported stolen by the Department of Defense. IRS employees 
have browsed taxpayer records and used information obtained to 
commit financial and other crimes. Social security information 
has been sold to facilitate identity theft.
    Another effect is potential disruption of business 
operations. For example, operations at several agencies were 
disrupted by the ``I love you'' virus. Also, users were locked 
out of EPA systems using some of the techniques we saw 
demonstrated earlier today.
    And third, DOE stood down its Internet connections on 
several occasions. Finally, ineffective security can result in the 
modification or destruction of programs or data. For example, sensitive 
information was corrupted and malicious software installed at 
the Department of Defense.
    While agencies' operations and risks vary, the types of 
weaknesses reported are strikingly similar. In general, systems 
did not have adequate controls to prevent and detect 
unauthorized changes to systems software, to prevent or detect 
unauthorized access to facilities, systems, programs and data, 
and to ensure the continuity of business operations.
    We and the Inspectors General made scores of 
recommendations to improve security, and in 2001 we again 
reported information security as a high-risk area, as we did 
in 1997 and 1999.
    I would like to point out that GAO employs similar tests to 
those that were demonstrated this morning and would like to add 
that even though those generally result in our ability to gain 
root access or other access to systems, we sometimes are just 
as successful in guessing passwords and using social 
engineering to gain access to those systems.
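    A minimal sketch in Python of the kind of authorized weak-password 
check such testing involves (the wordlist, accounts, and login 
function are invented stand-ins, not GAO's actual tools) might look 
like this:

    COMMON_PASSWORDS = ["password", "letmein", "12345678"]

    def guessable(accounts, try_login):
        # Return accounts whose password matches an obvious guess.
        # try_login(user, password) -> bool is supplied by the audit
        # harness and runs only against systems under review.
        weak = []
        for user in accounts:
            guesses = COMMON_PASSWORDS + [user, user + "123"]
            if any(try_login(user, g) for g in guesses):
                weak.append(user)
        return weak

    demo_credentials = {"jdoe": "jdoe123", "asmith": "Tr0ub4dor&3"}
    print(guessable(demo_credentials,
                    lambda u, p: demo_credentials.get(u) == p))

Accounts that fall to guesses this simple are exactly the ones that 
let testers in without any technical exploit at all.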
    Even if agencies do implement the corrective actions that 
have been identified, all too often subsequent reviews have 
uncovered the same types of vulnerabilities. As we've reported 
in the past, these weaknesses continue to exist principally 
because agencies have not established effective computer 
security management programs. Effective programs would allow 
for processes and procedures to assess risks, to ensure that 
controls are adequately put in place to address those risks, to 
have a regular process of raising awareness among employees, 
and last, to have a process to monitor the effectiveness of 
security on an ongoing basis.
    While we have seen that some agencies have implemented 
policies and procedures and have established risk awareness 
programs, little has been done by most agencies to actively 
monitor the effectiveness of the controls, unlike what was 
demonstrated today by the Department of Energy.
    The Congress has expressed concern about the serious and 
pervasive nature of these computer security weaknesses and recently passed 
legislation that would require some additional reporting and 
work to be done. Specifically, the legislation requires that 
agencies establish computer security management programs over 
all operations and assets of the agency.
    Second, the legislation requires both agency and Inspector 
General annual reviews to be performed, and the information 
from those reviews could be very helpful in oversight and 
monitoring of agencies' progress.
    Other actions have been initiated across government, and 
several agencies have taken important steps to 
improve computer security. The Federal Chief Information 
Officers Council has issued a guide for measuring agency 
progress, which we assisted in developing; the prior 
administration issued a national plan for information 
systems protection; and the current administration has issued 
the first annual update on the status of critical 
infrastructure.
    It is important to maintain the momentum of these efforts 
and ensure that the activities currently underway are 
coordinated under a comprehensive strategy and that the roles 
and responsibilities of the numerous organizations with central 
responsibilities for computer security are clearly defined.
    Mr. Chairman, that concludes our statement. I would be 
pleased to answer any questions that you or the members of the 
subcommittee may have.
    [The prepared statement of Robert F. Dacey follows:]

 Prepared Statement of Robert F. Dacey, Director, Information Security 
                   Issues, General Accounting Office

    Mr. Chairman and Members of the Subcommittee: I am pleased to be 
here today to discuss our analysis of information security audits at 
federal agencies. As with other large organizations, federal agencies 
rely extensively on computerized systems and electronic data to support 
their missions. Accordingly, the security of these systems and data is 
essential to avoiding disruptions in critical operations, data 
tampering, fraud, and inappropriate disclosure of sensitive 
information.
    Today, I will summarize the results of our analysis of information 
security audits performed by us and by agency inspectors general since 
July 1999 at 24 major federal departments and agencies. In summarizing 
these results, I will discuss the types of pervasive weaknesses that we 
and agency inspectors general have identified. I will then describe the 
serious risks that these weaknesses pose at selected individual 
agencies of particular interest to this subcommittee, and the major 
common weaknesses that agencies need to address. Finally, I will 
describe the management improvements that are needed to resolve these 
weaknesses and the significant challenges that remain.

                               BACKGROUND

    Dramatic increases in computer interconnectivity, especially in the 
use of the Internet, are revolutionizing the way our government, our 
nation, and much of the world communicate and conduct business. The 
benefits have been enormous. Vast amounts of information are now 
literally at our fingertips, facilitating research on virtually every 
topic imaginable; financial and other business transactions can be 
executed almost instantaneously, often on a 24-hour-a-day basis; and 
electronic mail, Internet web sites, and computer bulletin boards allow 
us to communicate quickly and easily with a virtually unlimited number 
of individuals and groups.
    In addition to such benefits, however, this widespread 
interconnectivity poses significant risks to our computer systems and, 
more important, to the critical operations and infrastructures they 
support. For example, telecommunications, power distribution, water 
supply, public health services, and national defense--including the 
military's warfighting capability--law enforcement, government 
services, and emergency services all depend on the security of their 
computer operations. The speed and accessibility that create the 
enormous benefits of the computer age likewise, if not properly 
controlled, allow individuals and organizations to inexpensively 
eavesdrop on or interfere with these operations from remote locations 
for mischievous or malicious purposes, including fraud or sabotage.
    Reports of attacks and disruptions abound. The March 2001 report of 
the ``Computer Crime and Security Survey,'' conducted by the Computer 
Security Institute and the Federal Bureau of Investigation's San 
Francisco Computer Intrusion Squad, showed that 85 percent of 
respondents (primarily large corporations and government agencies) had 
detected computer security breaches within the last 12 months. 
Disruptions caused by virus attacks, such as the ILOVEYOU virus in May 
2000 and 1999's Melissa virus, have illustrated the potential for 
damage that such attacks hold.\1\ A sampling of reports summarized in 
Daily Reports by the FBI's National Infrastructure Protection Center 
\2\ during two recent weeks in March illustrates the problem further:
---------------------------------------------------------------------------
    \1\ Critical Infrastructure Protection: ``ILOVEYOU'' Computer Virus 
Highlights Need for Improved Alert and Coordination Capabilities (GAO/
T-AIMD-00-181, May 18, 2000); Information Security: ``ILOVEYOU'' 
Computer Virus Emphasizes Critical Need for Agency and Governmentwide 
Improvements (GAO/T-AIMD-00-171, May 10, 2000); Information Security: 
The Melissa Computer Virus Demonstrates Urgent Need for Stronger 
Protection Over Systems and Sensitive Data (GAO/T-AIMD-99-146, April 
15, 1999).
    \2\ In its Daily Reports, the National Infrastructure Protection 
Center states that these summaries are for information purposes only 
and do not constitute any verification of the information contained in 
the reports or endorsement by the FBI.

 Hackers suspected of having links to a foreign government 
        successfully broke into the Sandia National Laboratory's 
        computer system and were able to access sensitive classified 
        information. (Source: Washington Times, March 16, 2001.)
 A hacker group by the name of ``PoizonB0x'' defaced numerous 
        government web sites, including those of the Department of 
        Transportation, the Administrative Office of the U.S. Courts, 
        the National Science Foundation, the National Oceanic and 
        Atmospheric Administration, the Princeton Plasma Physics 
        Laboratory, the General Services Administration, the U.S. 
        Geological Survey, the Bureau of Land Management, and the 
        Office of Science & Technology Policy. (Source: Attrition.org., 
        March 19, 2001.)
 The ``Russian Hacker Association'' is offering over the 
        Internet an e-mail bombing system that will destroy a person's 
        ``web enemy'' for a fee. (Source: UK Ministry of Defense Joint 
        Security Coordination Center.)
 Two San Diego men allegedly crashed a company's computer 
        system by rerouting tens of thousands of unsolicited e-mails 
        through its servers. (Source: ZDNet News, March 18, 2001.)
    Government officials are increasingly concerned about attacks from 
individuals and groups with malicious intent, such as crime, terrorism, 
foreign intelligence gathering, and acts of war. According to the FBI, 
terrorists, transnational criminals, and intelligence services are 
quickly becoming aware of and using information exploitation tools such 
as computer viruses, Trojan horses, worms, logic bombs, and 
eavesdropping sniffers that can destroy, intercept, or degrade the 
integrity of and deny access to data. As greater amounts of money are 
transferred through computer systems, as more sensitive economic and 
commercial information is exchanged electronically, and as the nation's 
defense and intelligence communities increasingly rely on commercially 
available information technology, the likelihood that information 
attacks will threaten vital national interests increases. In addition, 
the disgruntled organization insider is a significant threat, since 
such individuals often have knowledge that allows them to gain 
unrestricted access and inflict damage or steal assets without a great 
deal of knowledge about computer intrusions.
    Since 1996, our analyses of information security at major federal 
agencies have shown that federal systems were not being adequately 
protected from these threats, even though these systems process, store, 
and transmit enormous amounts of sensitive data and are indispensable 
to many federal agency operations. In September 1996, we reported that 
serious weaknesses had been found at 10 of the 15 largest federal 
agencies, and we concluded that poor information security was a 
widespread federal problem with potentially devastating 
consequences.\3\ In 1998 and in 2000, we analyzed audit results for 24 
of the largest federal agencies: both analyses found that all 24 
agencies had significant information security weaknesses.\4\ As a 
result of these analyses, we have identified information security as a 
high-risk issue in reports to the Congress since 1997--most recently in 
January 2001.\5\
---------------------------------------------------------------------------
    \3\ Information Security: Opportunities for Improved OMB Oversight 
of Agency Practices (GAO/AIMD-96-110, September 24, 1996).
    \4\ Information Security: Serious Weaknesses Place Critical Federal 
Operations and Assets at Risk (GAO/AIMD-98-92, September 23, 1998); 
Information Security: Serious and Widespread Weaknesses Persist at 
Federal Agencies (GAO/AIMD-00-295, September 6, 2000).
    \5\ High-Risk Series: Information Management and Technology (GAO/
HR-97-9, February 1, 1997); High-Risk Series: An Update (GAO/HR-99-1, 
January 1999); High Risk Series: An Update (GAO-01-263, January 2001).
---------------------------------------------------------------------------

                      WEAKNESSES REMAIN PERVASIVE

    Evaluations published since July 1999 show that federal computer 
systems are riddled with weaknesses that continue to put critical 
operations and assets at risk. Significant weaknesses have been 
identified in each of the 24 agencies covered by our review. These 
weaknesses covered all six major areas of general controls--the 
policies, procedures, and technical controls that apply to all or a 
large segment of an entity's information systems and help ensure their 
proper operation. These six areas are (1) security program management, 
which provides the framework for ensuring that risks are understood and 
that effective controls are selected and implemented, (2) access 
controls, which ensure that only authorized individuals can read, 
alter, or delete data, (3) software development and change controls, 
which ensure that only authorized software programs are implemented, 
(4) segregation of duties, which reduces the risk that one individual 
can independently perform inappropriate actions without detection, (5) 
operating systems controls, which protect sensitive programs that 
support multiple applications from tampering and misuse, and (6) 
service continuity, which ensures that computer-dependent operations 
experience no significant disruptions.
    Weaknesses in these areas placed a broad range of critical 
operations and assets at risk for fraud, misuse, and disruption. In 
addition, they placed an enormous amount of highly sensitive data--much 
of it pertaining to individual taxpayers and beneficiaries--at risk of 
inappropriate disclosure.
    The scope of audit work performed has continued to expand to more 
fully cover all six major areas of general controls at each agency. Not 
surprisingly, this has led to the identification of additional areas of 
weakness at some agencies. While these increases in reported weaknesses 
are disturbing, they do not necessarily mean that information security 
at federal agencies is getting worse. They more likely indicate that 
information security weaknesses are becoming more fully understood--an 
important step toward addressing the overall problem. Nevertheless, our 
analysis leaves no doubt that serious, pervasive weaknesses persist. As 
auditors increase their proficiency and the body of audit evidence 
expands, it is probable that additional significant deficiencies will 
be identified.
    Most of the audits covered in our analysis were performed as part 
of financial statement audits. At some agencies with primarily 
financial missions, such as the Department of the Treasury and the 
Social Security Administration, these audits covered the bulk of 
mission-related operations. However, at agencies whose missions are 
primarily nonfinancial, such as the Departments of Defense and Justice, 
the audits may provide a less complete picture of the agency's overall 
security posture because the audit objectives focused on the financial 
statements and did not include evaluations of systems supporting 
nonfinancial operations.
    In response to congressional interest, during fiscal years 1999 and 
2000, we expanded our audit focus to cover a wider range of 
nonfinancial operations. We expect this trend to continue.

     RISKS TO FEDERAL OPERATIONS, ASSETS, AND CONFIDENTIALITY ARE 
                              SUBSTANTIAL

    To fully understand the significance of the weaknesses we 
identified, it is necessary to link them to the risks they present to 
federal operations and assets. Virtually all federal operations are 
supported by automated systems and electronic data, and agencies would 
find it difficult, if not impossible, to carry out their missions and 
account for their resources without these information assets. Hence, 
the degree of risk caused by security weaknesses is extremely high.
    The weaknesses identified place a broad array of federal operations 
and assets at risk of fraud, misuse, and disruption. For example, 
weaknesses at the Department of the Treasury increase the risk of fraud 
associated with billions of dollars of federal payments and 
collections, and weaknesses at the Department of Defense increase the 
vulnerability of various military operations. Further, information 
security weaknesses place enormous amounts of confidential data, 
ranging from personal and tax data to proprietary business information, 
at risk of inappropriate disclosure. For example, in 1999, a Social 
Security Administration employee pled guilty to unauthorized access to 
the administration's systems. The related investigation determined that 
the employee had made many unauthorized queries, including obtaining 
earnings information for members of the local business community.
    Such risks, if inadequately addressed, may limit government's 
ability to take advantage of new technology and improve federal 
services through electronic means. For example, this past February, we 
reported on serious control weaknesses in the Internal Revenue 
Service's (IRS) electronic filing system, noting that failure to 
maintain adequate security could erode public confidence in electronic 
filing, jeopardize the Service's ability to meet its goal of 80 percent 
of returns being filed electronically by 2007, and deprive it of 
financial and other anticipated benefits. Specifically, we found that, 
during the 2000 tax filing season, IRS did not adequately secure access 
to its electronic filing systems or to the electronically transmitted 
tax return data those systems contained. We demonstrated that 
unauthorized individuals, both internal and external to IRS, could have 
gained access to these systems and viewed, copied, modified, or deleted 
taxpayer data. In addition, the weaknesses we identified jeopardized 
the security of the sensitive business, financial, and taxpayer data on 
other critical IRS systems that were connected to the electronic filing 
systems. The IRS Commissioner has stated that, in response to 
recommendations we made, IRS has completed corrective action for all of 
the critical access control vulnerabilities we identified and that, as 
a result, the electronic filing systems now satisfactorily meet 
critical federal security requirements to protect the taxpayer.\6\ As 
part of our audit follow up activities, we plan to evaluate the 
effectiveness of IRS's corrective actions.
---------------------------------------------------------------------------
    \6\ Information Security: IRS Electronic Filing Systems (GAO-01-
306, February 16, 2001).
---------------------------------------------------------------------------
    I would now like to describe the risks associated with specific 
recent audit findings at agencies of particular interest to this 
subcommittee.
     Information technology is essential to the Department of 
Energy's (DOE) scientific research mission, which is supported by a 
large and diverse set of computing systems, including very powerful 
supercomputers located at DOE laboratories across the nation. In June 
2000, we reported that computer systems at DOE laboratories supporting 
civilian research had become a popular target of the hacker community, 
with the result that the threat of attacks had grown dramatically in 
recent years.\7\ Further, because of security breaches, several 
laboratories had been forced to temporarily disconnect their networks 
from the Internet, disrupting the laboratories' ability to do 
scientific research for up to a full week on at least two occasions. In 
February 2001, the DOE's Inspector General reported network 
vulnerabilities and access control weaknesses in unclassified systems 
that increased the risk that malicious destruction or alteration of 
data or the processing of unauthorized operations could occur.\8\
---------------------------------------------------------------------------
    \7\ Information Security: Vulnerabilities in DOE's Systems for 
Unclassified Civilian Research (GAO/AIMD-00-140, June 9, 2000).
    \8\ Report on the Department of Energy's Consolidated Financial 
Statements, DOE/IG-FS-01-01, February 16, 2001.
---------------------------------------------------------------------------
     In February, the Department of Health and Human Services' 
Inspector General again reported serious control weaknesses affecting 
the integrity, confidentiality, and availability of data maintained by 
the department.\9\ Most significant were weaknesses associated with the 
department's Health Care Financing Administration, which was 
responsible, during fiscal year 2000, for processing more than $200 
billion in Medicare expenditures. HCFA relies on extensive data 
processing operations at its central office to maintain administrative 
data, such as Medicare enrollment, eligibility, and paid claims data, 
and to process all payments for managed care. HCFA also relies on 
Medicare contractors, who use multiple shared systems to collect and 
process personal health, financial, and medical data associated with 
Medicare claims. Significant weaknesses were also reported for the Food 
and Drug Administration and the department's Division of Financial 
Operations.
---------------------------------------------------------------------------
    \9\ Report on the Financial Statement Audit of the Department of 
Health and Human Services for Fiscal Year 2000, A-17-00-00014, February 
26, 2001.
---------------------------------------------------------------------------
     The Environmental Protection Agency (EPA) relies on its 
computer systems to collect and maintain a wealth of environmental data 
under various statutory and regulatory requirements. EPA makes much of 
its information available to the public through Internet access in 
order to encourage public awareness of and participation in managing 
human health and environmental risks and to meet statutory 
requirements. EPA also maintains confidential data from private 
businesses, data of varying sensitivity on human health and 
environmental risks, financial and contract data, and personal 
information on its employees. Consequently, EPA's information security 
program must accommodate the often competing goals of making much of 
its environmental information widely accessible while maintaining data 
integrity, availability, and appropriate confidentiality. In July 2000, 
we reported serious and pervasive problems that essentially rendered 
EPA's agencywide information security program ineffective.\10\ Our 
tests of computer-based controls concluded that the computer operating 
systems and agencywide computer network that support most of EPA's 
mission-related and financial operations were riddled with security 
weaknesses.
---------------------------------------------------------------------------
    \10\ Information Security: Fundamental Weaknesses Place EPA Data 
and Operations at Risk (GAO/AIMD-00-215 July 6, 2000).
---------------------------------------------------------------------------
    In addition, EPA's records showed that its vulnerabilities had been 
exploited by both external and internal sources, as illustrated by the 
following examples.
--In June 1998, EPA was notified that one of its computers was used by 
        a remote intruder as a means of gaining unauthorized access to 
        a state university's computers. The problem report stated that 
        vendor-supplied software updates were available to correct the 
        vulnerability, but EPA had not installed them.
--In July 1999, a chat room was set up on a network server at one of 
        EPA's regional financial management centers for hackers to post 
        notes and, in effect, conduct on-line electronic conversations.
--In February 1999, a sophisticated penetration affected three of EPA's 
        computers. EPA was unaware of this penetration until notified 
        by the FBI.
--In June 1999, an intruder penetrated an Internet web server at EPA's 
        National Computer Center by exploiting a control weakness 
        specifically identified by EPA about 3 years earlier during a 
        previous penetration of a different system. The vulnerability 
        continued to exist because EPA had not implemented vendor 
        software updates (patches), some of which had been available 
        since 1996.
--On two occasions during 1998, extraordinarily large volumes of 
        network traffic--synonymous with a commonly used denial-of-
        service hacker technique--affected computers at one of EPA's 
        field offices. In one case, an Internet user significantly 
        slowed EPA's network activity and interrupted network service 
        for over 450 EPA computer users. In a second case, an intruder 
        used EPA computers to successfully launch a denial-of-service 
        attack against an Internet service provider.
--In September 1999, an individual gained access to an EPA computer and 
        altered the computer's access controls, thereby blocking 
        authorized EPA employees from accessing files. This individual 
        was no longer officially affiliated with EPA at the time of the 
        intrusion, indicating a serious weakness in EPA's process for 
        applying changes in personnel status to computer accounts.
    Of particular concern was that many of the most serious weaknesses 
we identified--those related to inadequate protection from intrusions 
through the Internet and poor security planning--had been previously 
reported to EPA management in 1997 by EPA's inspector general.\11\ The 
negative effects of such weaknesses are illustrated by EPA's own 
records, which show several serious computer security incidents since 
early 1998 that have resulted in damage and disruption to agency 
operations. As a result of these weaknesses, EPA's computer systems and 
the operations that rely on them were highly vulnerable to tampering, 
disruption, and misuse from both internal and external sources.
---------------------------------------------------------------------------
    \11\ EPA's Internet Connectivity Controls, Office of Inspector 
General Report Audit (Redacted Version), September 5, 1997.
---------------------------------------------------------------------------
    EPA management has developed and begun to implement a detailed 
action plan to address reported weaknesses. However, the agency does 
not expect to complete these corrective actions until 2002 and 
continued to report a material weakness in this area in its fiscal year 
2000 report on internal controls under the Federal Managers' Financial 
Integrity Act of 1982.\12\
---------------------------------------------------------------------------
    \12\ Audit Report on EPA's Fiscal 2000 Financial Statements, 
Office of the Inspector General Audit Report 2001-1-00107, February 28, 
2001.
---------------------------------------------------------------------------
     The Department of Commerce is responsible for systems that 
the department has designated as critical for national security, 
national economic security, and public health and safety. Its member 
bureaus include the National Oceanic and Atmospheric Administration, 
the Patent and Trademark Office, the Bureau of the Census, and the 
International Trade Administration. During December 2000 and January 
2001, Commerce's inspector general reported significant computer 
security weaknesses in several of the department's bureaus and, last 
month, reported multiple material information security weaknesses 
affecting the department's ability to produce accurate data for 
financial statements. These included a lack of formal, current security 
plans and weaknesses in controls over access to systems and over 
software development and changes.\13\ At the request of the full 
committee, we are currently evaluating information security controls at 
selected other Commerce bureaus.
---------------------------------------------------------------------------
    \13\ Department of Commerce's Fiscal year 2000 Consolidated 
Financial Statements, Inspector General Audit Report No. FSD-12849-1-
0001.
---------------------------------------------------------------------------
  WHILE NATURE OF RISK VARIES, CONTROL WEAKNESSES ACROSS AGENCIES ARE 
                           STRIKINGLY SIMILAR

    The nature of agency operations and their related risks vary. 
However, striking similarities remain in the specific types of general 
control weaknesses reported and in their serious negative impact on an 
agency's ability to ensure the integrity, availability, and appropriate 
confidentiality of its computerized operations--and therefore on the 
corrective actions it must take. The sections that follow describe 
the six areas of general controls and the specific weaknesses that were 
most widespread at the agencies covered by our analysis.

Security Program Management
    Each organization needs a set of management procedures and an 
organizational framework for identifying and assessing risks, deciding 
what policies and controls are needed, periodically evaluating the 
effectiveness of these policies and controls, and acting to address any 
identified weaknesses. These are the fundamental activities that allow 
an organization to manage its information security risks cost 
effectively, rather than react to individual problems in an ad-hoc 
manner only after a violation has been detected or an audit finding 
reported.
    Despite the importance of this aspect of an information security 
program, poor security program management continues to be a widespread 
problem. Virtually all of the agencies for which this aspect of 
security was reviewed had deficiencies. Specifically, many had not 
developed security plans for major systems based on risk, had not 
documented security policies, and had not implemented a program for 
testing and evaluating the effectiveness of the controls they relied 
on. As a result, agencies

--were not fully aware of the information security risks to 
        their operations,
--had accepted an unknown level of risk by default rather than 
        consciously deciding what level of risk was tolerable,
--had a false sense of security because they were relying on 
        controls that were not effective, and
--could not make informed judgments as to whether they were 
        spending too little or too much of their resources on security.
    With the October 2000 enactment of the government information 
security reform provisions of the fiscal year 2001 National Defense 
Authorization Act, agencies are now required by law to adopt the 
practices described above, including annual management evaluations of 
agency security.

Access Controls
    Access controls limit or detect inappropriate access to computer 
resources (data, equipment, and facilities), thereby protecting these 
resources against unauthorized modification, loss, and disclosure. 
Access controls include physical protections--such as gates and 
guards--as well as logical controls, which are controls built into 
software that require users to authenticate themselves through the use 
of secret passwords or other identifiers and limit the files and other 
resources that an authenticated user can access and the actions that he 
or she can execute. Without adequate access controls, unauthorized 
individuals, including outside intruders and terminated employees, can 
surreptitiously read and copy sensitive data and make undetected 
changes or deletions for malicious purposes or personal gain. Even 
authorized users can unintentionally modify or delete data or execute 
changes that are outside their span of authority.
    For access controls to be effective, they must be properly 
implemented and maintained. First, an organization must analyze the 
responsibilities of individual computer users to determine what type of 
access (e.g., read, modify, delete) they need to fulfill their 
responsibilities. Then, specific control techniques, such as 
specialized access control software, must be implemented to restrict 
access to these authorized functions. Such software can be used to 
limit a user's activities associated with specific systems or files and 
to keep records of individual users' actions on the computer. Finally, 
access authorizations and related controls must be maintained and 
adjusted on an ongoing basis to accommodate new and terminated 
employees, and changes in users' responsibilities and related access 
needs.
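    A minimal sketch of such a control, written in Python purely for 
illustration, appears below. The roles, resources, and permissions are 
hypothetical and are not drawn from any agency system; the point is 
simply that access is denied unless it has been explicitly authorized 
and that an account's entry can be removed when responsibilities change 
or employment ends.

        # Illustrative sketch only; roles, resources, and users are hypothetical.
        ACCESS_MATRIX = {
            # role -> resource -> actions explicitly authorized
            "payroll_clerk": {"payroll_data": {"read"}},
            "payroll_manager": {"payroll_data": {"read", "modify"}},
        }

        def is_allowed(role, resource, action):
            """Grant access only if the action was explicitly authorized."""
            return action in ACCESS_MATRIX.get(role, {}).get(resource, set())

        # Removing a terminated employee's role makes every check fail by default.
        assert is_allowed("payroll_manager", "payroll_data", "modify")
        assert not is_allowed("payroll_clerk", "payroll_data", "delete")
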
    Significant access control weaknesses were reported for all of the 
agencies covered by our analysis, as evidenced by the following 
examples:

--Accounts and passwords for individuals no longer associated 
        with the agency were not deleted or disabled; neither were they 
        adjusted for those whose responsibilities, and thus need to 
        access certain files, changed. At one agency, as a result, 
        former employees and contractors could and in many cases did 
        still read, modify, copy, or delete data. At this same agency, 
        even after 160 days of inactivity, 7,500 out of 30,000 users' 
        accounts had not been deactivated.
--Users were not required to periodically change their 
        passwords.
--Managers did not precisely identify and document access needs 
        for individual users or groups of users. Instead, they provided 
        overly broad access privileges to very large groups of users. 
        As a result, far more individuals than necessary had the 
        ability to browse and, sometimes, modify or delete sensitive or 
        critical information. At one agency, all 1,100 users were 
        granted access to sensitive system directories and settings. At 
        another agency, 20,000 users had been provided access to one 
        system without written authorization.
--Use of default, easily guessed, and unencrypted passwords 
        significantly increased the risk of unauthorized access. During 
        testing at one agency, we were able to guess many passwords 
        based on our knowledge of commonly used passwords and were able 
        to observe computer users keying in passwords and then use 
        those passwords to obtain ``high level'' system administration 
        privileges. (A brief illustrative sketch of this kind of 
        password check follows this list.)
--Software access controls were improperly implemented, 
        resulting in unintended access or gaps in access-control 
        coverage. At one agency data center, all users, including 
        programmers and computer operators, had the capability to read 
        sensitive production data, increasing the risk that such 
        sensitive information could be disclosed to unauthorized 
        individuals. Also at this agency, certain users had the 
        unrestricted ability to transfer system files across the 
        network, increasing the risk that unauthorized individuals 
        could gain access to the sensitive data or programs.
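    The default and easily guessed passwords noted above can be 
detected with simple automated checks. The following sketch, in Python 
and offered purely for illustration, flags accounts whose stored 
password hash matches a list of commonly used or vendor-default 
passwords; the account names, passwords, and use of unsalted hashes 
are simplifying assumptions, not a description of any agency's 
practice.

        import hashlib

        COMMON_PASSWORDS = ["password", "admin", "changeme", "welcome1"]

        def sha256(text):
            return hashlib.sha256(text.encode()).hexdigest()

        def flag_weak_accounts(account_hashes):
            """Return accounts whose hash matches a common password."""
            weak = {sha256(p) for p in COMMON_PASSWORDS}
            return [name for name, h in account_hashes.items() if h in weak]

        accounts = {"operator1": sha256("changeme"),
                    "analyst2": sha256("x7#Qr!92b")}
        print(flag_weak_accounts(accounts))  # ['operator1']
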
    To illustrate the risks associated with poor authentication and 
access controls, in recent years we have begun to incorporate network 
vulnerability testing into our audits of information security. Such 
tests involve attempting--with agency cooperation--to gain unauthorized 
access to sensitive files and data by searching for ways to circumvent 
existing controls, often from remote locations. Our auditors have been 
successful, in almost every test, in readily gaining unauthorized 
access that would allow intruders to read, modify, or delete data for 
whatever purpose they had in mind. Further, user activity was 
inadequately monitored. At one agency, much of the activity associated 
with our intrusion testing was not recognized and recorded, and the 
problem reports that were recorded did not recognize the magnitude of 
our activity or the severity of the security breaches we initiated.
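    The monitoring gap just described can be narrowed with even very 
simple automated review of system logs. The sketch below, written in 
Python for illustration only, counts failed logins per account so that 
unusual spikes can be flagged for review; the log format, dates, and 
account names are hypothetical.

        from collections import Counter

        log_lines = [
            "2001-03-01 02:14 FAIL login user=sysadmin src=10.0.0.8",
            "2001-03-01 02:15 FAIL login user=sysadmin src=10.0.0.8",
            "2001-03-01 02:16 FAIL login user=sysadmin src=10.0.0.8",
            "2001-03-01 08:30 OK   login user=clerk1   src=10.0.1.4",
        ]

        def failed_login_counts(lines):
            """Count FAIL entries per user for later review."""
            counts = Counter()
            for line in lines:
                if " FAIL login " in line:
                    user = line.split("user=")[1].split()[0]
                    counts[user] += 1
            return counts

        print(failed_login_counts(log_lines))  # Counter({'sysadmin': 3})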

Application Software Development and Change Controls
    Application software development and change controls prevent 
unauthorized software programs or modifications to programs from being 
implemented. Key aspects of such controls are ensuring that (1) 
software changes are properly authorized by the managers responsible 
for the agency program or operations that the application supports, (2) 
new and modified software programs are tested and approved prior to 
their implementation, and (3) approved software programs are maintained 
in carefully controlled libraries to protect them from unauthorized 
changes and to ensure that different versions are not misidentified.
    Such controls can prevent both errors in software programming and 
malicious efforts to insert unauthorized computer program code. 
Without adequate controls, incompletely tested or unapproved software 
can result in erroneous data processing that, depending on the 
application, could lead to losses or faulty outcomes. In addition, 
individuals could surreptitiously modify software programs to include 
processing steps or features that could later be exploited for personal 
gain or sabotage.
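    One common safeguard of this kind is to record a cryptographic 
hash of each program version at the time it is tested and approved, 
and to refuse to move anything into production that no longer matches. 
The following Python sketch is purely illustrative; the program name 
and contents are hypothetical.

        import hashlib

        def digest(data):
            return hashlib.sha256(data).hexdigest()

        # Hash recorded when the change was tested and approved.
        approved = {"claims_batch": digest(b"print('approved version')\n")}

        def verify_before_release(name, file_bytes):
            """Reject program versions that differ from the approved copy."""
            if approved.get(name) != digest(file_bytes):
                raise RuntimeError(name + ": differs from approved version")

        verify_before_release("claims_batch", b"print('approved version')\n")
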
    Weaknesses in software program change controls were identified for 
almost all of the agencies where such controls were evaluated. Examples 
of weaknesses in this area included the following:

--Testing procedures were undisciplined and did not ensure that 
        implemented software operated as intended. For example, at one 
        agency, senior officials authorized some systems for processing 
        without testing access controls to ensure that they had been 
        implemented and were operating effectively. At another, 
        documentation was not retained to demonstrate user testing and 
        acceptance.
--Implementation procedures did not ensure that only authorized 
        software was used. In particular, procedures did not ensure 
        that emergency changes were subsequently tested and formally 
        approved for continued use and that implementation of ``locally 
        developed'' (unauthorized) software programs was prevented or 
        detected.
--Agencies' policies and procedures frequently did not address 
        the maintenance and protection of program libraries.

Segregation of Duties
    Segregation of duties refers to the policies, procedures, and 
organizational structure that help ensure that one individual cannot 
independently control all key aspects of a process or computer-related 
operation and thereby conduct unauthorized actions or gain unauthorized 
access to assets or records without detection. For example, one 
computer programmer should not be allowed to independently write, test, 
and approve program changes.
    Although segregation of duties alone will not ensure that only 
authorized activities occur, inadequate segregation of duties increases 
the risk that erroneous or fraudulent transactions could be processed, 
improper program changes implemented, and computer resources damaged or 
destroyed. For example,

--an individual who was independently responsible for 
        authorizing, processing, and reviewing payroll transactions 
        could inappropriately increase payments to selected individuals 
        without detection; or
--a computer programmer responsible for authorizing, writing, 
        testing, and distributing program modifications could either 
        inadvertently or deliberately implement computer programs that 
        did not process transactions in accordance with management's 
        policies or that included malicious code.
    Controls to ensure appropriate segregation of duties consist mainly 
of documenting, communicating, and enforcing policies on group and 
individual responsibilities. Enforcement can be accomplished by a 
combination of physical and logical access controls and by effective 
supervisory review.
    Segregation of duties weaknesses were identified at most of the 
agencies covered by our analysis. Common problems involved computer 
programmers and operators who were authorized to perform a variety of 
duties, thus providing them the ability to independently modify, 
circumvent, and disable system security features. For example, at one 
data center, a single individual could independently develop, test, 
review, and approve software changes for implementation.
    Segregation of duties problems were also identified related to 
transaction processing. For example, at one agency, 11 staff members 
involved with procurement had system access privileges that allowed 
them to individually request, approve, and record the receipt of 
purchased items. In addition, 9 of the 11 had system access privileges 
that allowed them to edit the vendor file, which could result in 
fictitious vendors being added to the file for fraudulent purposes. For 
fiscal year 1999, we identified 60 purchases, totaling about $300,000, 
that were requested, approved, and receipt-recorded by the same 
individual.
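    Findings such as the one just described can be surfaced with a 
routine automated query of transaction records. The sketch below, in 
Python and purely for illustration, lists purchases that were 
requested, approved, and receipt-recorded by the same person; the 
records shown are invented.

        purchases = [
            {"id": 101, "requested_by": "jdoe", "approved_by": "jdoe",
             "received_by": "jdoe", "amount": 4800},
            {"id": 102, "requested_by": "asmith", "approved_by": "jlee",
             "received_by": "asmith", "amount": 950},
        ]

        def unsegregated(records):
            """Return purchases handled end to end by one individual."""
            return [r for r in records
                    if r["requested_by"] == r["approved_by"]
                    == r["received_by"]]

        for r in unsegregated(purchases):
            print(r["id"], r["amount"])  # 101 4800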

Operating System Controls
    Operating system software controls limit and monitor access to the 
powerful programs and sensitive files associated with the computer 
system's operation. Generally, one set of system software is used to 
support and control a variety of applications that may run on the same 
computer hardware. System software helps control and coordinate the 
input, processing, output, and data storage associated with all of the 
applications that run on the system. Some system software can change 
data and program code on files without leaving an audit trail or can be 
used to modify or delete audit trails. Examples of system software 
include the operating system, system utilities, program library 
systems, file maintenance software, security software, data 
communications systems, and database management systems.
    Controls over access to and modification of system software are 
essential in providing reasonable assurance that operating system-based 
security controls are not compromised and that the system will not be 
impaired. If controls in this area are inadequate, unauthorized 
individuals might use system software to circumvent security controls 
to read, modify, or delete critical or sensitive information and 
programs. Also, authorized users of the system may gain unauthorized 
privileges to conduct unauthorized actions or to circumvent edits and 
other controls built into application programs. Such weaknesses 
seriously diminish the reliability of information produced by all of 
the applications supported by the computer system and increase the risk 
of fraud, sabotage, and inappropriate disclosure. Further, system 
software programmers are often more technically proficient than other 
data processing personnel and, thus, have a greater ability to perform 
unauthorized actions if controls in this area are weak.
    The control concerns for system software are similar to the access 
control issues and software program change control issues discussed 
earlier. However, because of the high level of risk associated with 
system software activities, most entities have a separate set of 
control procedures that apply to them.
    Weaknesses were identified at each of the agencies for which 
operating system controls were reviewed. A common type of problem 
reported was insufficiently restricted access that made it possible for 
knowledgeable individuals to disable or circumvent controls in a 
variety of ways. For example, at one agency, system support personnel 
had the ability to change data in the system audit log. As a result, 
they could have engaged in a wide array of inappropriate and 
unauthorized activity and could have subsequently deleted related 
segments of the audit log, thus diminishing the likelihood that their 
actions would be detected.
    Further, pervasive vulnerabilities in network configuration exposed 
agency systems to attack. These vulnerabilities stemmed from agencies' 
failure to (1) install and maintain effective perimeter security, such 
as firewalls and screening routers, (2) implement current software 
patches, and (3) protect against commonly known methods of attack.
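    The second of these failures--missing software patches--lends 
itself to simple automated checking. The sketch below, written in 
Python purely for illustration, compares installed version numbers 
against a minimum patched baseline; the package names and versions 
are invented and do not refer to any actual product.

        minimum_patched = {"webserver": (1, 3, 19), "mailer": (8, 11, 2)}
        installed = {"webserver": (1, 3, 12), "mailer": (8, 11, 2)}

        def unpatched(inventory, baseline):
            """List packages running below the minimum patched version."""
            return [name for name, version in inventory.items()
                    if version < baseline.get(name, (0,))]

        print(unpatched(installed, minimum_patched))  # ['webserver']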

Service Continuity
    Finally, service continuity controls ensure that when unexpected 
events occur, critical operations will continue without undue 
interruption and that crucial, sensitive data are protected. For this 
reason, an agency should have (1) procedures in place to protect 
information resources and minimize the risk of unplanned interruptions 
and (2) a plan to recover critical operations, should interruptions 
occur. These plans should consider the activities performed at general 
support facilities, such as data processing centers, as well as the 
activities performed by users of specific applications. To determine 
whether recovery plans will work as intended, they should be tested 
periodically in disaster simulation exercises.
    Losing the capability to process, retrieve, and protect information 
maintained electronically can significantly affect an agency's ability 
to accomplish its mission. If controls are inadequate, even relatively 
minor interruptions can result in lost or incorrectly processed data, 
which can cause financial losses, expensive recovery efforts, and 
inaccurate or incomplete financial or management information. Controls 
to ensure service continuity should address the entire range of 
potential disruptions. These may include relatively minor 
interruptions, such as temporary power failures or accidental loss or 
erasure of files, as well as major disasters, such as fires or natural 
disasters that would require reestablishing operations at a remote 
location.
    Service continuity controls include (1) taking steps, such as 
routinely making backup copies of files, to prevent and minimize 
potential damage and interruption, (2) developing and documenting a 
comprehensive contingency plan, and (3) periodically testing the 
contingency plan and adjusting it as appropriate.
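    The first of these controls--routine backups--can itself be 
verified routinely. The Python sketch below is purely illustrative: it 
checks that yesterday's backup of each critical file exists and is not 
empty, under an assumed directory layout of /backups/<date>/<file>; 
the file names and layout are hypothetical, not agency practice.

        import os
        from datetime import date, timedelta

        CRITICAL_FILES = ["enrollment.dat", "claims.dat"]

        def missing_backups(backup_root="/backups"):
            day = (date.today() - timedelta(days=1)).isoformat()
            problems = []
            for name in CRITICAL_FILES:
                path = os.path.join(backup_root, day, name)
                if not os.path.exists(path) or os.path.getsize(path) == 0:
                    problems.append(path)
            return problems

        # Any path returned here would be reported for follow-up.
        print(missing_backups())
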
    Service continuity control weaknesses were reported for most of the 
agencies covered by our analysis. Examples of weaknesses included the 
following:

--Plans were incomplete because operations and supporting 
        resources had not been fully analyzed to determine which were 
        the most critical and would need to be resumed as soon as 
        possible should a disruption occur.
--Disaster recovery plans were not fully tested to identify 
        their weaknesses. At one agency, periodic walkthroughs or 
        unannounced tests of the disaster recovery plan had not been 
        performed. Conducting these types of tests provides a scenario 
        much closer to the conditions likely to be encountered in an 
        actual disaster.

           IMPROVED SECURITY PROGRAM MANAGEMENT IS ESSENTIAL

    The audit reports cited in this statement and in our prior 
information security reports include many recommendations to individual 
agencies that address specific weaknesses in the areas I have just 
described. It is each individual agency's responsibility to ensure that 
these recommendations are implemented. Agencies have taken steps to 
address problems and many have good remedial efforts underway. However, 
these efforts will not be fully effective and lasting unless they are 
supported by a strong agencywide security management framework.
    Establishing such a management framework requires that agencies 
take a comprehensive approach that involves both (1) senior agency 
program managers who understand which aspects of their missions are the 
most critical and sensitive and (2) technical experts who know the 
agencies' systems and can suggest appropriate technical security 
control techniques. We studied the practices of organizations with 
superior security programs and summarized our findings in a May 1998 
executive guide entitled Information Security Management: Learning From 
Leading Organizations (GAO/AIMD-98-68). Our study found that these 
organizations managed their information security risks through a cycle 
of risk management activities that included

--assessing risks and determining protection needs,
--selecting and implementing cost-effective policies and 
        controls to meet these needs,
--promoting awareness of policies and controls and of the risks 
        that prompted their adoption among those responsible for 
        complying with them, and
--implementing a program of routine tests and examinations for 
        evaluating the effectiveness of policies and related controls 
        and reporting the resulting conclusions to those who can take 
        appropriate corrective action.
    In addition, a strong, centralized focal point can help ensure that 
the major elements of the risk management cycle are carried out and 
serve as a communications link among organizational units. Such 
coordination is especially important in today's highly networked 
computing environments. This cycle of risk management activities is 
depicted below.
    This cycle of activity, as described in our May 1998 executive 
guide, is consistent with guidance on information security program 
management provided to agencies by the Office of Management and Budget 
(OMB) and by NIST. In addition, the guide has been endorsed by the 
federal Chief Information Officers (CIO) Council as a useful resource 
for agency managers. We believe that implementing such a cycle of 
activity is the key to ensuring that information security risks are 
adequately considered and addressed on an ongoing basis.
    While instituting this framework is essential, there are several 
steps that agencies can take immediately. Specifically, they can (1) 
increase awareness, (2) ensure that existing controls are operating 
effectively, (3) ensure that software patches are up-to-date, (4) use 
automated scanning and testing tools to quickly identify problems, (5) 
propagate their best practices, and (6) ensure that their most common 
vulnerabilities are addressed. None of these actions alone will ensure 
good security. However, they take advantage of readily available 
information and tools and, thus, do not involve significant new 
resources. As a result, they are steps that can be taken without delay.

   NEW LEGAL REQUIREMENTS PROVIDE BASIS FOR IMPROVED MANAGEMENT AND 
                               OVERSIGHT

    Due to concerns about the repeated reports of computer security 
weaknesses at federal agencies, the Congress passed government 
information security reform provisions that require agencies to 
implement the activities I have just described. These provisions were 
enacted in late 2000 as part of the fiscal year 2001 National Defense 
Authorization Act. In addition to requiring these management 
improvements, the new 
provisions require annual evaluations of agency information security 
programs by both management and agency inspectors general. The results 
of these reviews, which are initially scheduled to become available in 
late 2001, will provide a more complete picture of the status of 
federal information security than currently exists, thereby providing 
the Congress and OMB an improved means of overseeing agency progress 
and identifying areas needing improvement.

      IMPROVEMENT EFFORTS ARE UNDERWAY, BUT MANY CHALLENGES REMAIN

    During the last two years, a number of improvement efforts have 
been initiated. Several agencies have taken significant steps to 
redesign and strengthen their information security programs; the 
Federal Chief Information Officers Council has issued a guide for 
measuring agency progress, which we assisted in developing; and the 
President issued a National Plan for Information Systems Protection and 
designated the related goals of computer security and critical 
infrastructure protection as a priority management objective in his 
fiscal year 2001 budget. These actions are laudable. However, recent 
reports and events indicate that they are not keeping pace with the 
growing threats and that critical operations and assets continue to be 
highly vulnerable to computer-based attacks.
    While OMB, the Chief Information Officers Council, and the various 
federal entities involved in critical infrastructure protection have 
expanded their efforts, it will be important to maintain the momentum. 
As we have noted in previous reports and testimonies, there are actions 
that can be taken on a governmentwide basis to enhance agencies' 
abilities to implement effective information security.
    First, it is important that the federal strategy delineate the 
roles and responsibilities of the numerous entities involved in federal 
information security and related aspects of critical infrastructure 
protection. Under current law, OMB is responsible for overseeing and 
coordinating federal agency security; and NIST, with assistance from 
the National Security Agency (NSA), is responsible for establishing 
related standards. In addition, interagency bodies, such as the CIO 
Council and the entities created under Presidential Decision Directive 
63 on critical infrastructure protection, are attempting to coordinate 
agency initiatives. While these organizations have developed 
fundamentally sound policies and guidance and have undertaken 
potentially useful initiatives, effective improvements are not taking 
place, and it is unclear how the activities of these many organizations 
interrelate, who should be held accountable for their success or 
failure, and whether they will effectively and efficiently support 
national goals.
    Second, more specific guidance to agencies on the controls that 
they need to implement could help ensure adequate protection. Currently 
agencies have wide discretion in deciding what computer security 
controls to implement and the level of rigor with which they enforce 
these controls. In theory, this is appropriate since, as OMB and NIST 
guidance states, the level of protection that agencies provide should 
be commensurate with the risk to agency operations and assets. In 
essence, one set of specific controls will not be appropriate for all 
types of systems and data.
    However, our studies of best practices at leading organizations 
have shown that more specific guidance is important. In particular, 
specific mandatory standards for varying risk levels can clarify 
expectations for information protection, including audit criteria; 
provide a standard framework for assessing information security risk; 
and help ensure that shared data are appropriately protected. 
Implementing such standards for federal agencies would require 
developing a single set of information classification categories for 
use by all agencies to define the criticality and sensitivity of the 
various types of information they maintain. It would also necessitate 
establishing minimum mandatory requirements for protecting information 
in each classification category.
    Third, routine periodic audits, such as those required in the 
government information security reforms recently enacted, would allow 
for more meaningful performance measurement. Ensuring effective 
implementation of agency information security and critical 
infrastructure protection plans will require monitoring to determine if 
milestones are being met and testing to determine if policies and 
controls are operating as intended.
    Fourth, the Congress and the executive branch can use audit 
results to monitor agency performance and take whatever action is 
deemed advisable to remedy identified problems. Such oversight is 
essential to holding agencies accountable for their performance as was 
demonstrated by the OMB and congressional efforts to oversee the year 
2000 computer challenge.
    Fifth, it is important for agencies to have the technical expertise 
they need to select, implement, and maintain controls that protect 
their computer systems. Similarly, the federal government must maximize 
the value of its technical staff by sharing expertise and information. 
As the year 2000 challenge showed, the availability of adequate 
technical expertise has been a continuing concern to agencies.
    Sixth, agencies can allocate resources sufficient to support their 
computer security and infrastructure protection activities. Funding for 
security is already embedded to some extent in agency budgets for 
computer system development efforts and routine network and system 
management and maintenance. However, some additional amounts are likely 
to be needed to address specific weaknesses and new tasks. OMB and 
congressional oversight of future spending on computer security will be 
important to ensuring that agencies are not using the funds they 
receive to continue ad hoc, piecemeal security fixes not supported by a 
strong agency risk management framework.
    Mr. Chairman, this concludes my statement. I would be pleased to 
answer any questions that you or other members of the Subcommittee may 
have at this time.

    Mr. Greenwood. Thank you, Mr. Dacey.
    Mr. Tritak.

                  TESTIMONY OF JOHN S. TRITAK

    Mr. Tritak. Thank you, Mr. Chairman. I welcome the 
opportunity to appear before this subcommittee to discuss 
the Federal Government's internal efforts to secure its critical 
infrastructures. I ask that my written statement be introduced 
into the record at this time.
    Mr. Greenwood. It will be.
    Mr. Tritak. My opening remarks will focus primarily on 
those efforts through the end of the Clinton administration. A 
detailed discussion of those efforts is provided in the 
President's report to the Congress which was published in 
January and was prepared both by the National Security Council 
and my office, the Critical Infrastructure Assurance Office, in 
coordination with the Federal departments and agencies that 
actually reported on their activities.
    Mr. Chairman, as you know, the administration is currently 
conducting a thorough review of its critical infrastructure 
protection policy. While the results of that review are still 
several weeks away, there are several things we already know that I 
think should be discussed here.
    First, President Bush himself has indicated that critical 
infrastructure protection is important to U.S. economic and 
national security and will be a priority of his administration.
    Second, and this point goes to remarks made by Congressman 
Tauzin, National Security Adviser Rice has recently stated, with 
regard to government agency organization, that on the one hand 
no single government agency can handle all of the critical 
infrastructure assurance problems for the Federal Government. 
All agencies are stakeholders and have a role in the solution. 
That said, however, coordination among the government's naturally 
occurring stovepipes must take place, and must take place better 
than it has in the past. Moreover, there must be a common point 
of contact that is accessible both to private industry and to the 
Federal Government, the Congress, and the American people in 
addressing this issue.
    A third point was also made by Dr. Rice. She stated that 
the Federal Government bears a direct responsibility to ensure 
that it can deliver essential services and perform critical 
functions necessary for the Nation's defense and the health, 
welfare, and safety of its citizens. I think this statement 
deserves a little explanation because it makes a very important 
point about critical infrastructure policy.
    In the first instance, critical infrastructure protection 
is about assured delivery of vital services that are provided 
by key sectors of government and the economy, including 
electric power, oil and gas, telecommunications, banking and 
finance, transportation, water, health and emergency services. 
To the extent these infrastructures depend on computer systems 
and networks to deliver those vital services, and increasingly 
they do, to that extent critical infrastructure policy must be 
concerned with computer security and information assurance.
    Now, under Presidential Decision Directive 63 the previous 
administration established as one of its goals the achievement 
of the ability to protect the Nation's critical infrastructures 
from deliberate attacks that could significantly diminish the 
government's ability to perform national security missions and 
ensure the public health and safety of the American people.
    When I first took over this office, I often asked how we 
were going to know when we had achieved this goal and what it 
would take to achieve it. I had more than a passing interest 
in the question because one of the mandates under PDD-63 for my 
office is to assist Federal agencies in assessing their 
dependence on critical infrastructures.
    Ultimately, our response was to develop what we call 
``project matrix.'' That decision came out of a sense of 
frustration, both within our own office and within some 
government agencies, over the question of how to go about 
managing this very large problem.
    Now project matrix basically takes a systems-analysis 
approach to the critical infrastructure problem. It starts by 
asking each participating department and agency what services 
do you provide that are necessary to the Nation's defense, the 
orderly functioning of the economy, or the health, welfare and 
safety of Americans. More importantly, of those services, which 
if disrupted even for short periods of time could have a 
significant and immediate impact on the public.
    You will note, Mr. Chairman, that there's a time-
sensitivity element that is important to our analysis. I have 
to explain why. We believe that those types of services, those 
types of critical and time-sensitive services, and the systems 
that are necessary for their delivery, are at the greatest risk 
if attacked and therefore deserve priority attention in terms 
of security. Let me give you an example.
    Timely hurricane warnings would be deemed under our 
approach as a critical service; and, therefore, NOAA's national 
hurricane warning center would be deemed a critical asset. This 
is because disruption of timely warnings of hurricanes during a 
hurricane season could have absolutely catastrophic effects on 
the public.
    The matrix approach requires agencies also to think 
functionally rather than bureaucratically. It is not enough in 
the case of the national hurricane warning center to determine 
whether it alone is secure. So, too, must all the other 
government and private sector entities necessary to the 
performance of the center's warning operations be secure as 
well. In many instances, vital functions performed by one 
agency depend on services provided by another. Assured delivery 
of critical services is only as good as the weakest link in 
the delivery chain.
    Having essentially mapped a critical government service 
across government agencies and between government and the 
private sector, agencies are then better able to direct their 
efforts toward determining whether or not that service is 
vulnerable to disruption, and to immediate disruption. Among 
other things, this sort of approach also helps rationalize the 
budgetary process and prioritize security activities within an 
agency.
    Let me say in conclusion, Mr. Chairman, a number of things. 
First, critical infrastructure policy is inherently a risk-
management problem. A number of people here today have all 
indicated there's no such thing as perfect security. We need to 
know what is at risk, however; and we need to decide how to 
manage those risks, balancing costs and consequences.
    Also, critical infrastructure protection is concerned with 
computer security, but it is not synonymous with it. There are 
very good reasons for having good computer security besides 
those in support of critical infrastructure policy. We've heard 
about many. Privacy of databases that have information about 
citizens is critical, whether or not it would meet the standard 
of creating an immediate impact and harm on the public in some 
broader sense. Protecting classified systems is important 
regardless of what is contained in them.
    Now, how we decide to allocate resources for all computer 
security demands within the Federal Government is essentially a 
public-policy choice, a choice the administration is currently 
weighing in its review. That said, if securing critical 
government services is to be a priority, particularly time-
sensitive ones, then going through a process along the lines 
I've just described is required. In addition, having identified 
the critical government assets essential to the delivery of 
critical services, priority must also be given to assessing 
their vulnerabilities and developing and implementing 
remediation plans in those instances where vulnerabilities 
exist. And I can't overemphasize that last point. Just because 
a government asset is critical doesn't necessarily mean it's 
vulnerable to cyberattacks. If it is not connected to the 
Internet, if it is not connected to any part of the world, it 
by definition would not be vulnerable to outside attack, 
putting aside the internal problems you may have with 
disgruntled employees, which we all acknowledge is a problem.
    For example, I used the hurricane warning center as an 
example of how we go through the analytic process. I didn't by 
any means want to imply it is necessarily vulnerable to attack. 
In fact, from what I know, it's quite secure. The point, 
however, and what I wish to leave you with, is that unless you 
know how the government's crown jewels function, and unless you 
have identified all the other relevant government and private 
assets that are essential to the functioning of those crown 
jewels, you don't know whether you're vulnerable or not; and, 
therefore, you don't know whether you're secure or not against 
cyber-based attacks.
    That concludes my remarks, Mr. Chairman; and I welcome any 
questions you may have.
    [The prepared statement of John S. Tritak follows:]

Prepared Statement of John S. Tritak, Director, Critical Infrastructure 
                            Assurance Office

    Mr. Chairman, members of the Subcommittee, it is an honor to appear 
before you today to discuss the status, as of the time that the Bush 
Administration took office, of Federal government efforts to secure 
internal critical systems and infrastructure within Departments and 
Agencies. These efforts are described in some detail in the Report of 
the President of the United States on the Status of Federal Critical 
Infrastructure Protection Activities, January 2001.
    This Subcommittee has shown exceptional leadership on a broad range 
of national and economic security issues and I am grateful for the 
opportunity to work closely with you and the Congress to find ways to 
advance infrastructure assurance for all Americans. As you know, the 
Bush Administration currently is conducting a thorough review of our 
critical infrastructure protection policy. We expect the results of 
that review over the next couple of months. President Bush has 
indicated already, however, that securing our nation's critical 
infrastructures will be a priority of his Administration. Your decision 
to hold this hearing could not be more timely. We all recognize that no 
viable solutions will be developed or implemented without the executive 
and legislative branches working together.
    I believe the work of your subcommittee, along with that of others, 
will make an important contribution to establishing a new consensus on 
safeguarding critical government services against cyber attacks.

                               BACKGROUND

    America has long depended on a complex of systems--or critical 
infrastructures--to assure the delivery of services vital to its 
national defense, economic prosperity, and social well-being. These 
infrastructures include telecommunications, water supplies, electric 
power, oil and gas delivery and storage, banking and finance, 
transportation, and vital human and government services.
    The Information Age has fundamentally altered the nature and extent 
of our dependency on these infrastructures. Increasingly, our 
government, economy, and society are being connected together into an 
ever expanding and interdependent digital nervous system of computers 
and information systems. With this interdependence come new 
vulnerabilities. One person with a computer, a modem, and a telephone 
line anywhere in the world potentially can break into sensitive 
government files, shut down an airport's air traffic control system, or 
cause a power outage in an entire region.
    Events such as the 1995 bombing of the Murrah Federal Building in 
Oklahoma City demonstrated that the Federal government needed to 
address new types of threats and vulnerabilities, many of which the 
nation was unprepared to defend against. In response to the Murrah 
Building tragedy and other events, an inter-agency working group was 
formed to examine the nature of the threat, our vulnerabilities, and 
possible long-term solutions for this aspect of our national security. 
The National Security Council's Critical Infrastructure Working Group 
(CIWG) included representatives from the defense, intelligence, law 
enforcement and national security communities. The working group 
identified both physical and cyber threats and recommended formation of 
a Presidential Commission to address more thoroughly many of these 
growing concerns.
    In July 1996 the President's Commission on Critical Infrastructure 
Protection (PCCIP) was established by Executive Order 13010. The 
bipartisan PCCIP included senior representatives from private industry, 
government, and academia; its Advisory Committee consisted of industry 
leaders who provided counsel to the Commission.
    After examining infrastructure issues for over a year, the 
Commission issued its report, Critical Foundations: Protecting 
America's Infrastructures. The Report reached four significant 
conclusions:

--First, critical infrastructure protection is central to our 
        national defense, including national security and national 
        economic power;
--Second, growing complexity and interdependence between 
        critical infrastructures may create the increased risk that 
        rather minor and routine disturbances can cascade into national 
        security emergencies;
--Third, vulnerabilities are increasing steadily and the means 
        to exploit weaknesses are readily available; practical measures 
        and mechanisms, the Commission argued, must be urgently 
        undertaken before we are confronted with a national crisis; and
--Fourth, laying a foundation for security will depend on new 
        forms of cooperation with the private sector, which owns and 
        operates a majority of these critical infrastructure 
        facilities.

                                 PDD-63

    On May 22, 1998, Presidential Decision Directive 63 (PDD-63) was 
issued to achieve and maintain the capability to protect our nation's 
critical infrastructures from acts that would significantly diminish 
the abilities of:

--The Federal government to perform essential national security 
        missions and to ensure the general public health and safety;
--State and local governments to maintain order and to deliver 
        minimum essential public services; and
--The private sector to ensure the orderly functioning of the 
        economy and the delivery of essential telecommunications, 
        energy, financial, and transportation services.
    To achieve these ends, PDD-63 articulates a strategy of:

--Creating a public-private partnership to address the problem 
        of information technology security;
--Raising awareness of the importance of cyber security in the 
        government and in the private sector;
--Stimulating market forces to increase the demand for cyber 
        security and to create standards or best practices;
--Funding or facilitating research into new information 
        technology systems with improved security inherent in their 
        design;
--Working with educational facilities to increase the number of 
        students specializing in cyber security; and
--Helping to prevent, mitigate, or respond to major cyber 
        attacks by building an information sharing system among 
        government agencies, among corporations, and between government 
        and industry.
    The Federal government's basic approach to critical infrastructure 
protection, as reflected in PDD-63, has been built around a strong 
policy preference for consensus-building and voluntary cooperation 
rather than regulatory actions. In an economy as complex as ours, and 
with technology changing as quickly as it is, cooperation offers the 
best and surest way to achieve our shared goals in this emerging area. 
However, the government's approach also recognizes the need for 
coordinated actions to improve its internal defenses and the nation's 
overall posture against these new threats.
    PDD-63 called for the Federal government to produce a detailed plan 
to protect and defend the nation against cyber disruptions. Version 1 
of this effort, entitled The National Plan for Information Systems 
Protection, was released in January 2000, and represents the first 
attempt by a national government to design a comprehensive approach to 
protect its critical infrastructures. This initial version of the plan 
focused mainly on domestic efforts being undertaken by the Federal 
government to protect the nation's critical cyber-based 
infrastructures. The next version of the plan, due out this summer, 
will focus on the efforts of the infrastructure owners and operators, 
as well as the risk management and broader business community.
    Under PDD-63, Federal Agencies have a number of distinct 
responsibilities:

--All agencies are required to protect their own internal 
        critical infrastructures, especially their cyber systems.
--Some agencies with special expertise or functional 
        responsibilities are tasked with providing services to the 
        government as a whole.
--A number of agencies also are charged with developing 
        partnerships with private industry in their sectors of the 
        economy.
    I will focus the remainder of my remarks on the first 
responsibility--securing internal critical systems. Specifically, I 
will discuss the work of my office, the Critical Infrastructure 
Assurance Office, in assisting agencies to identify and prioritize 
these systems. I also will discuss briefly Federal Government efforts 
to formulate security and best practices standards that apply to 
information security and critical infrastructure assets.
    Time constraints prevent me from fully describing the internal 
efforts of each federal agency to secure their critical systems. I urge 
the subcommittee to review the status reports of each Department and 
Agency provided in Section III of the President's January Report. 
Likewise, I strongly recommend that the subcommittee study the 
agencies' sector partnership efforts described in Section II of the 
Report. These efforts are as important to overall national critical 
infrastructure assurance as the internal activities that have been 
undertaken within the Federal government. I would welcome the 
opportunity to brief the sub-committee on another occasion on the work 
of the CIAO and the federal lead agencies (Commerce, Energy, Treasury, 
Transportation, Justice, Health and Human Services, EPA and Defense) in 
promoting meaningful public-private partnerships.

   IDENTIFYING CRITICAL FEDERAL INFRASTRUCTURES AND SYSTEMS: PROJECT 
                                 MATRIX

    In response to PDD 63, my office established Project Matrix last 
year to ``coordinate analyses of the U.S. Government's own dependencies 
on critical infrastructures.''
    This is a government-wide issue. Federal Departments and Agencies 
do not operate independently of one another. Due to significant 
advances in information technology, the public and private sectors have 
become inextricably intertwined. As a result, there is limited utility 
in each Federal Department and Agency viewing physical and cyber 
security only in the context of its own organization. Project Matrix 
provides each Federal Department and Agency an expanded, more 
comprehensive, realistic, and useful view of the world within which it 
actually functions. The Administration, Congress, and private sector 
providers of the nation's critical infrastructures will require such 
information to implement cost efficient and effective physical and 
cyber security enhancement measures in the future. Project Matrix 
provides a common methodology and approach and allows the government to 
develop a clearer picture of cross-agency interdependencies.
    Participating in Project Matrix helps each Federal Department and 
Agency identify the assets, nodes and networks, and associated 
infrastructure dependencies and interdependencies that are required for 
it to fulfill its national security, economic stability, and critical 
public health and safety responsibilities to the American people. A 
number of Departments and Agencies refer to Project Matrix in their 
reports.
    Project Matrix also helps each participating Federal Department and 
Agency:

--Identify the nodes and networks that should receive robust 
        cyber and physical vulnerability assessments;
--Conduct near-term risk management assessments;
--Justify funding requests for high-priority security 
        enhancement measures in the areas of physical security, 
        information system security, industrial security, emergency 
        preparedness, counter-intelligence, and counter-terrorism; and
--Review actual business processes to better understand and 
        improve the efficiencies of its organization's functions and 
        information technology architectures.
    Project Matrix involves a three-step process. In Step 1, the 
Project Matrix team identifies and prioritizes each Federal 
Department's and Agency's PDD 63 relevant assets. In Step 2, the team 
provides a business process topology on, and identifies significant 
points of failure associated with, each Department's or Agency's most 
critical assets. In Step 3, the team identifies the infrastructure 
dependencies associated with select assets identified in Step 1 and 
analyzed in-depth in Step 2.
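    The kind of dependency mapping performed in Steps 2 and 3 can be 
pictured with a small, purely illustrative data structure. The Python 
sketch below uses invented assets and is not the Project Matrix tool 
itself; it simply shows how a critical service's full delivery chain 
can be traced through its direct and indirect dependencies.

        dependencies = {
            # critical service -> assets and services it relies on
            "hurricane warnings": ["forecast center", "weather satellites",
                                   "telecom backbone"],
            "forecast center": ["regional power grid", "data network"],
        }

        def delivery_chain(service, graph, seen=None):
            """List everything a service relies on, directly or indirectly."""
            seen = seen if seen is not None else set()
            for dep in graph.get(service, []):
                if dep not in seen:
                    seen.add(dep)
                    delivery_chain(dep, graph, seen)
            return sorted(seen)

        print(delivery_chain("hurricane warnings", dependencies))
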
    In FY 2001, the Project Matrix team will complete the documentation 
of its entire analytical process for use throughout the public and 
private sectors, improve its Step One automated data collection tool, 
and develop compatible automated Step Two and Three tools.

  INTEGRATING SECURITY INTO THE CAPITAL PLANNING AND BUDGET PROCESSES

    In February 2000, OMB issued important new guidance to the agencies 
on incorporating and funding security in information technology 
investments. In brief, this policy states that funding will not be 
provided for agency requests that fail to demonstrate how security is 
built into and funded as part of each system.
    This policy carries through on the requirements of the Clinger-
Cohen Act of 1996 and emphasizes that security must be incorporated in 
and practiced throughout the life cycle of each agency's system and 
program. To accomplish this, beginning with the FY 2002 budget, each 
agency budget request to OMB for information technology funding must, 
among other things, do the following (a rough illustrative sketch 
follows this list):

 Demonstrate life cycle security costs for each system;
 Include a security plan that complies with applicable policy;
 Show specific methods used to ensure that risks are 
        understood, continually assessed, and effectively controlled; 
        and
 Demonstrate that security is an integral part of the agency's 
        enterprise architecture including interdependencies and 
        interrelationships.
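    As a purely illustrative sketch of how these criteria could be 
treated as a completeness check on a funding request, the following 
uses hypothetical field names that are not drawn from OMB's actual 
guidance or budget exhibits.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ITBudgetRequest:
    # Hypothetical fields loosely mirroring the four criteria listed above.
    system_name: str
    life_cycle_security_costs: Optional[float] = None        # criterion 1
    has_security_plan: bool = False                           # criterion 2
    risk_methods: List[str] = field(default_factory=list)     # criterion 3
    in_enterprise_architecture: bool = False                  # criterion 4

def missing_security_elements(req: ITBudgetRequest) -> List[str]:
    """Return the criteria a funding request fails to demonstrate."""
    gaps = []
    if req.life_cycle_security_costs is None:
        gaps.append("life cycle security costs")
    if not req.has_security_plan:
        gaps.append("security plan complying with applicable policy")
    if not req.risk_methods:
        gaps.append("methods for assessing and controlling risk")
    if not req.in_enterprise_architecture:
        gaps.append("integration with the enterprise architecture")
    return gaps

request = ITBudgetRequest("Grants processing system", life_cycle_security_costs=1.2e6)
print(missing_security_elements(request))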

             THE GOVERNMENT INFORMATION SECURITY REFORM ACT

    On October 30, 2000, the President signed into law the FY 2001 
Defense Authorization Act (P.L. 106-398), including Title X, Subtitle 
G, ``Government Information Security Reform (Security Act).'' The security 
provision amends the Paperwork Reduction Act of 1995 (44 U.S.C. Chapter 
35) and primarily addresses the program management and program 
evaluation aspects of security.
    In concert with OMB policy, the Security Act requires agencies to 
incorporate and practice risk-based and cost-effective security 
throughout the life cycle of each agency system and thus firmly ties 
security to the agencies' capital planning and budget processes.
    The Security Act also requires on an annual basis:

 Agency program reviews;
 Inspector General evaluations of agency security programs;
 Agency reports to OMB; and
 An OMB report to Congress.
    The annual review and reporting requirements will promote 
consistent, ongoing assessments of government security performance. 
Recently a uniform method for agency program reviews has been 
developed.

         THE CIO AND CFO COUNCILS: STANDARDS AND BEST PRACTICES

    Standardizing the security controls for government systems has a 
conceptual appeal because it can reduce the complexity and expense of 
developing, implementing, and monitoring security on a system-by-system 
basis. This is increasingly important given the government's shortage 
of expert information security personnel. Government computer security 
almost certainly would improve if specific standards were prescribed 
and implemented for each government information system.
    However, specific standards for all systems--a ``one-size-fits-
all'' security approach--may not accommodate the vastly different 
operational requirements of each information system and could 
unnecessarily impede business operations. Executive branch agencies 
operate more than 26,000 major information systems, many of which 
directly interact with the public, industry, or State and local 
governments. Just as each system has its own unique operational 
requirements, so too are its security requirements unique.
    The CIO Council and the CFO Council recognize both the benefits and 
potential problems with standardized security approaches. They have 
undertaken the following important initiatives:
    Securing Electronic Government Transactions to the Public--Resource 
Guide: The CIO Council, the CFO Council, and the Information Technology 
Association of America are working together to develop a benchmark for 
risk-based, cost-effective security for three types of electronic 
government services:

 Web-based information services;
 Government procurement; and
 Financial transactions with the public.
    A resource guide for securing electronic transactions with the 
public will be released in 2001 to assist agency CIOs in promoting 
electronic government initiatives within their agencies. Together with 
the CFO Council initiative for agency financial systems, this effort 
may prove to be an effective pilot for establishing similar benchmarks 
for other discrete classes of programs and information systems.
    Best Security Practices: The CIO Council, led by the U.S. Agency 
for International Development and NIST, has developed a web-based 
repository of sound Federal agency security practices that have worked 
in the real world. The CIO Council's Best Security Practices initiative 
collects, documents, and disseminates these practices to help agencies 
reduce the cost of developing and testing new security controls, 
improve the speed of implementation, and increase the quality of their 
security programs.
    The goal is to populate the repository with more than 100 practices 
by mid-2001 and to expand the offerings continually from then on. In its 
guidance to the agencies on implementing the Government Information 
Security Reform Act, OMB has instructed agencies to use the CIO 
Council's Best Security Practices initiative to fulfill the new act's 
requirement to share best practices.
    Measuring Performance--Federal Information Technology Security 
Assessment Framework: Over the past year, the CIO Council, working with 
NIST, OMB, and the GAO, developed the Federal Information Technology 
Security Assessment Framework. The framework, issued in December 2000, 
provides agencies with a self-assessment methodology to determine the 
current status of their security programs and, where necessary, 
establish a target for improvement. In developing the framework, the 
CIO Council recognized that the security needs of the tens of 
thousands of Federal information systems differ and must be addressed 
in different ways.
    The framework comprises five levels to guide agency self-assessments 
and to assist agencies in prioritizing efforts for improvement:

 Level 1 reflects a documented security policy;
 Level 2 shows documented procedures and controls to implement 
        the policy;
 Level 3 indicates that the procedures and controls have in 
        fact been implemented;
 Level 4 shows that the procedures and controls are continually 
        tested and reviewed; and
 Level 5 demonstrates that procedures and controls are fully 
        integrated into a comprehensive program.
    Each level represents a more complete and effective security 
program. Agencies should bring all systems and programs to level 4 and 
ultimately to level 5. OMB and the CIO Council have alerted agencies 
that individual systems that do not meet the framework's level 4 
requirements may not meet OMB's security funding criteria.
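    A minimal sketch of how an agency might tally such a self-assessment 
against the five levels is shown below; the level descriptions 
paraphrase the framework above, while the system inventory and assessed 
levels are hypothetical.

# Hypothetical tally of systems against the five framework levels.
LEVEL_DESCRIPTIONS = {
    1: "documented security policy",
    2: "documented procedures and controls",
    3: "procedures and controls implemented",
    4: "procedures and controls continually tested and reviewed",
    5: "fully integrated into a comprehensive program",
}

# Illustrative inventory: system name -> assessed level (1-5).
assessments = {
    "payroll system": 3,
    "public web portal": 4,
    "grants database": 2,
}

def below_level(scores, threshold=4):
    """Return the systems assessed below the given framework level."""
    return sorted(name for name, level in scores.items() if level < threshold)

for name in below_level(assessments):
    level = assessments[name]
    print(f"{name}: level {level} ({LEVEL_DESCRIPTIONS[level]}) -- below level 4")
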
    As mentioned earlier, the new Government Information Security 
Reform Act emphasizes the importance of assessing security 
effectiveness and requires annual agency reporting to OMB of the 
results of the agency security reviews. OMB has instructed agencies to 
use the framework to fulfill their assessment and reporting obligations 
under the Security Act.

                               CONCLUSION

    While much has been accomplished in recent years, much more needs 
to be done to ensure our critical government systems are adequately 
protected from cyber attack. I look forward to working with members of 
this subcommittee, and the entire Congress, as we address the 
challenges ahead. I look forward to your questions.

    Mr. Greenwood. Thank you. Appreciate your testimony.
    I will direct some questions to Mr. Dacey, if I may. 
Overall, if you had to give the Federal agencies the GAO has 
reviewed a collective grade A through F, i.e., passing or 
failing, how would you rate them as a group?
    Mr. Dacey. I think overall the types of weaknesses we've 
seen, again, are pervasive. In terms of a grade, I'll leave 
that to Chairman Horn. He's given grades last year, and I am 
not sure they've changed a whole lot since then.
    Mr. Greenwood. Would this grade be different for defense 
or military agencies versus civilian agencies? How would you 
compare them?
    Mr. Dacey. I just wanted to clarify, the main part of the 
work that's been done has been on unclassified systems. So with 
respect to those, we're finding similar types of 
vulnerabilities in both.
    Mr. Greenwood. The committee's reviews of computer security 
at various Federal agencies have largely found that security has 
been mostly a paperwork exercise up to now. Do you agree with 
that?
    Mr. Dacey. There are certain areas, I guess, in terms of a 
paperwork exercise, that there are documented policies in many 
cases that aren't carried through in terms of execution. Also, 
there are many places where the policies aren't even 
documented. One of the areas that we look at is, again, whether 
the agencies have a process such as Energy to really determine 
what the effectiveness of their controls are. We've many times 
identified vulnerabilities for the first time to agencies; and 
although they have been generally very responsive, it's a 
process that we think ought to take place in the management 
role, not as an audit function. So that is, I guess, how I'd 
answer that question.
    Mr. Greenwood. It's safe to say that every agency ought to 
be constantly testing its own security systems; isn't that a 
fair statement?
    Mr. Dacey. I think there needs to be a regular process for 
that type of testing. Part of that is called for in the new 
legislation. The reports on that new legislation will be due 
out in the fall to Congress, and those should illustrate some 
of the issues and also indicate whether, in fact, that testing 
is being done. I believe in your opening statement you referred 
to the fact, based on evidence you obtained, that that wasn't 
being done. That is consistent with our--what we have seen 
actually. We've seen very little done by most agencies to 
assess the effectiveness of their security.
    Mr. Greenwood. You mentioned in your testimony some 
examples of unauthorized access, security breaches, compromised 
networks and data from GAO's body of work across Federal 
agencies. These are not just hypothetical, are they?
    Mr. Dacey. No. We have seen incidents where that has 
actually occurred, which I gave in my oral statement. The 
question, really, too, is that some of these vulnerabilities are, or 
were, sensitive when we found them and at least could have led to 
all kinds of other things that weren't detected. I would agree 
based upon the comments earlier that a large number of 
incidents that are occurring are probably not detected and 
reported. That is an area where we really need to get better 
systems because you can't protect the systems a hundred 
percent, as was discussed earlier; but you need to do the best 
you can to really implement known patches and address known 
vulnerabilities. Many of the tools and Web sites that were 
referred to earlier that provide evidence of ways in which 
systems can be hacked can also be used by agencies to identify 
those same types of weaknesses in their system and fix them. So 
I think that is an important area that needs to be addressed.
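    A minimal sketch of this idea--comparing an inventory of installed 
software against publicly known vulnerable versions--follows; the 
package names, versions, and advisory data are invented for 
illustration, and in practice agencies would rely on vendor advisories 
and scanning tools.

# Hypothetical illustration: flag installed software whose version is
# listed in a known-vulnerability advisory.
installed = {
    "webserverd": "2.3.1",
    "mailrelay": "1.0.4",
    "dbengine": "5.7.0",
}

# Invented advisory data: package -> versions with published vulnerabilities.
known_vulnerable = {
    "webserverd": {"2.3.0", "2.3.1"},
    "dbengine": {"5.6.9"},
}

def unpatched(inventory, advisories):
    """Return packages whose installed version appears in the advisory list."""
    return {pkg: ver for pkg, ver in inventory.items()
            if ver in advisories.get(pkg, set())}

print(unpatched(installed, known_vulnerable))  # -> {'webserverd': '2.3.1'}
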
    Mr. Greenwood. It seems to me, as I think Ms. McDonald 
said, they encourage the use of patches; but there's no 
requirement that the patches be used, and perhaps we ought to 
consider a mechanism to make them mandatory.
    Mr. Tritak, could you describe for the committee a worst-
case scenario for a cyberattack or information-warfare attack 
on one of our Nation's critical infrastructures, just to make 
us all feel good?
    Mr. Tritak. Yeah, make me feel real good. If I may a little 
bit, sir, sort of qualify my remarks by saying the following: 
I've heard conversations earlier talk about cyberterrorism, 
information warfare; and that is a shorthand that we all use in 
describing certain types of threats. What I prefer when I 
address these things is to turn it around a little bit and, rather 
than using cyber adjectives to modify traditional nouns, to say, 
for example, instead of cyberterrorism, I refer to it 
as terrorist activities that attempt to exploit cyberspace to 
achieve certain terrorist goals and objectives. Okay. And in an 
information warfare context, I think if we're using the term 
properly, we're in a state of war in which a country is 
utilizing or exploiting the cyberspace and vulnerabilities in 
the cyberspace to achieve certain goals and certain objectives.
    Now let me give you an idea of the kinds of things I think 
would be played out in that context. Let's pretend we go back, 
and we, God forbid, have to deal with Iraq again in the 
way that we had to deal with Iraq before. I think Iraq and the 
leadership of Iraq probably would prefer not to have to go toe 
to toe with the Americans the way it had to go toe to toe the 
first time around. One of the things it probably would attempt 
to do if it could--and I'm not saying it can actually achieve 
any of this, because I think it is very difficult to do--but 
let's just suppose the intent would be to disrupt the 
mobilization and deployment of U.S. Forces in the 
United States, their projection overseas, and then also the 
logistics efforts going from points of demarcation in 
Europe finally to the Middle East. To the extent they could 
achieve something like that, it could have strategic 
implications. So I think we need to look at it in that sense.
    Now if you're talking about the case of a war where in a 
sense they would attempt to achieve through cyberattacks what 
bombers used to achieve, for example, then you would think of 
things that could cause mass problems, disruptions of 911, 
introduction of biological or chemical weapons at the same time, 
the possibility of trying to hack into dams and potentially 
open floodgates, anything that would cause the kind of hysteria 
and potential loss of life that we tried to do in World War II 
or whatever.
    That is the kind of thing I think we all have to be 
concerned about because I think that is the sort of thing 
people would be thinking about if they were going to war with 
us and they wanted to exploit the cyberspace in order to 
achieve their military and political objectives. I want to also 
emphasize it's not clear that they could achieve that; and in 
fact, this is the beauty of now as well as the curse of today is 
the fact that we haven't seen the worst because the worst that 
can be done over cyberspace is a function of interconnectivity 
and being hooked in. And we're still in the fairly early stages 
of doing this. Our society, our government, our economy are 
being transformed by information technologies; and increasingly 
we're going to be depending on wireless technologies in 
addition to the online versions.
    So I think that over time the potential for serious 
problems conducted over cyberspace will go up. That is why I 
applaud the efforts that you're making now. Let's not 
wait for that eventuality. Let's take aggressive action now and 
perhaps preempt the problem altogether.
    Mr. Greenwood. Well, while these worst-case scenarios are 
theoretical, the fact of the matter is would you agree with us 
that the only thing that stands between us and the worst-case 
scenario is the extent to which the Federal agencies involved 
utilize the billions of dollars that we've appropriated to them 
and the tools, the technological tools that are available to 
protect against those scenarios?
    Mr. Tritak. Yes. I think that to the extent that Federal 
agencies are increasingly relying on information technology to 
do key services in national defense and to the extent that 
those services are linked into the ever-expanding digital 
nervous system that is spanning the country and the globe, you 
are exposing yourself to a risk that you have never had before; 
and if you are not safeguarding yourself against that, the 
potential for the kinds of concerns that you have, I think, 
can't be ignored.
    Mr. Greenwood. The means will always be there; the 
motivation will always be there. The only protection is the 
security systems, and the only long-range protection against 
those scenarios is constant vigilance, constant testing of our 
systems to protect us.
    Mr. Tritak. Yes.
    Mr. Greenwood. Okay. A recent report by a committee of 
Inspectors General issued just last week found PDD-63 
implementation to be progressing very slowly at most Federal 
agencies. They surveyed 15 Federal agencies including some key 
ones for PDD-63 purposes and found that quote ``many agency 
infrastructure plans were incomplete,'' that quote ``most 
agencies had not identified their critical assets yet and that 
almost none of the agencies had completed vulnerability 
assessments of those assets or developed remediation plans.'' 
Do you concur, Mr. Tritak, with this assessment, and why are we 
so far in the hole on this?
    Mr. Tritak. Well, a couple things. I think that there's 
some truth to what you have said. I can't articulate for you in 
full to what extent that is the case in each agency situation. 
What I can tell you is in the case of the work that we're doing 
with agencies under the project matrix all efforts that have 
been done so far are in the area of identifying the assets.
    I just want to qualify one piece about that because some of 
these assets may have been assessed for vulnerabilities during 
Y2K, for example, and for other reasons--and we can't 
necessarily assume that nothing has been done--but I think one 
of the points I am trying to get across to this committee is 
unless you understand the full--the way the systems operate in 
critical services and you have addressed every single aspect of 
that service for vulnerabilities, you don't know whether that 
service is assured or not. I think in that regard we have a 
long way to go, a real long way to go.
    Mr. Greenwood. Okay. We thank you both for your testimony. 
The Chair seeks unanimous consent that documents that have been 
agreed to by the staff majority and minority be admitted into 
the record and that the record remain open for 30 days for 
additional statements and materials. With that, this committee 
thanks all of its witnesses and adjourns.
    [Whereupon, at 12:15 p.m., the subcommittee was adjourned.]
    [Additional material submitted for the record follows:]

                                            CRYPTEK
                                 Secure Communications, LLC
                                                      April 5, 2001
The Honorable W.J. ``Billy'' Tauzin
Chairman
House Energy and Commerce Committee
2125 Rayburn House Office Building
Washington, DC 20515-6115
    Dear Mr. Chairman, I am submitting the following testimony and 
presentation for the record at the suggestion of Mr. Gary A. Dionne, a 
member of your Committee's professional staff. My firm is the developer 
and manufacturer of a network security product known as DiamondTEK(TM). 
DiamondTEK is the only network security component ever to 
successfully complete the National Security Agency's (NSA) B2-level 
evaluation. What this means is that DiamondTEK is approved by the NSA 
to handle data of multiple levels of classification on a single 
workstation over a single network connection. This can translate into 
significant cost savings for government users who must keep data of 
various classification levels separate and secure.
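    The general concept behind handling multiple classification levels 
over a single connection--labeling subjects and objects and permitting 
a read only when the subject's label dominates the object's--can be 
sketched as follows. This is a generic, Bell-LaPadula-style 
illustration, not a description of DiamondTEK's implementation.

# Generic multi-level security read check ("no read up") -- illustrative only.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_read(subject_clearance, subject_compartments, object_level, object_compartments):
    """Allow a read only if the subject's label dominates the object's:
    the clearance level is at least the object's level and the subject
    holds every compartment the object is marked with."""
    return (LEVELS[subject_clearance] >= LEVELS[object_level]
            and set(object_compartments) <= set(subject_compartments))

# Example: a SECRET-cleared user holding the (hypothetical) NATO compartment.
print(can_read("SECRET", {"NATO"}, "CONFIDENTIAL", set()))    # True
print(can_read("SECRET", {"NATO"}, "TOP SECRET", {"NATO"}))   # False: read up denied
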
    This technology is also invaluable to users of sensitive, valuable 
data in the commercial marketplace. An example that comes immediately 
to mind is ensuring the confidentiality of patient medical records. 
Other industries that could benefit from such technology include the 
financial services industry and any organization involved with funds 
transfer. One misplaced ``byte'' could mean the loss of billions of 
dollars.
    Cryptek developed DiamondTEK with internal R&D funds to meet 
stringent NSA requirements. The company has continued to invest in the 
technology, resulting in the world's most ``trusted'' and secure network 
security product. This leading-edge capability is available today for 
government and commercial users worldwide (Cryptek recently received a 
blanket export license from the Department of Commerce to export to any 
commercial or government entity in the world with the exception of the 
seven terrorist-sponsoring nations).
    I wanted to ensure that the Committee was aware that this 
technology was available as you consider various encryption and privacy 
issues during this Congress. Cryptek stands prepared to brief you, 
other Committee Members or staff on our unique products and 
capabilities and answer questions you may have.
    Thank you for your consideration of this information.
            Sincerely,
                                        Jackson Kemper, III
                              Vice President, Government Affairs
[GRAPHICS OMITTED--TIFF images T2834.001 through T2834.154 
(accompanying Cryptek presentation, not reproduced)]