Homeland Security Grants: Observations on Process DHS Used to
Allocate Funds to Selected Urban Areas (07-FEB-07, GAO-07-381R).
In fiscal year 2006, the Department of Homeland Security (DHS)
provided approximately $1.7 billion in federal funding to states,
localities, and territories through its Homeland Security Grant
Program (HSGP) to prevent, protect against, respond to, and
recover from acts of terrorism or other catastrophic events. The
Urban Areas Security Initiative (UASI) is a discretionary grant
under this program, and since fiscal year 2003, Congress has
directed DHS to target UASI funding to high-threat, high-density
urban areas to assist in building capacity. To meet this
requirement and inform funding decisions, DHS developed a method
to estimate the relative risk of terrorist attacks to urban
areas. From fiscal year 2003 through 2005, DHS used a number of
risk indicators such as population density and threat to allocate
UASI funds. UASI funding increased during this period from about
$96 million to $830 million, while the number of urban areas that
received grants grew from 7 to 43. In fiscal year 2006, DHS
awarded approximately $711 million in UASI grants--a 14 percent
reduction in funds from the previous year--while the number of
eligible urban areas identified by the risk assessment decreased
to 35. For fiscal year 2006, DHS made several changes to the
grant allocation process, including modifying its risk assessment
methodology, introducing an assessment of the anticipated
effectiveness of investments, and combining the outcomes of these
two assessments to inform funding decisions. The results of the
UASI eligibility and funding allocations in fiscal year 2006
raised congressional questions and concerns about DHS's methods
in making UASI determinations. Several congressional members
requested that we examine aspects of DHS's UASI funding process,
and the fiscal year 2007 DHS Appropriations Act directed us to
examine the validity, relevance, reliability, timeliness, and
availability of the risk factors (including threat,
vulnerability, and consequence) used by the Secretary of Homeland
Security for the purpose of allocating discretionary grants. On
November 17, 2006, we responded to the mandate and the request by
briefing congressional staff on the results of this review. We
specifically examined (1) DHS's method of estimating relative
risk of terrorism in fiscal year 2006; (2) DHS's process for
assessing the effectiveness of the various risk mitigation
investments submitted in UASI applications; (3) how DHS used
estimated relative risk scores and assessments of effectiveness
to allocate UASI grant funds in fiscal year 2006; and (4) what
changes, if any, DHS plans to make in its UASI award
determination process for fiscal year 2007.
-------------------------Indexing Terms-------------------------
REPORTNUM: GAO-07-381R
ACCNO: A65708
TITLE: Homeland Security Grants: Observations on Process DHS
Used to Allocate Funds to Selected Urban Areas
DATE: 02/07/2007
SUBJECT: Data collection
Eligibility determinations
Evaluation criteria
Federal aid to localities
Federal aid to states
Grants
Homeland security
Program evaluation
Risk assessment
Statistical data
Terrorism
State Homeland Security Grant Programs
Urban Areas Security Initiative
******************************************************************
** This file contains an ASCII representation of the text of a **
** GAO Product. **
** **
** No attempt has been made to display graphic images, although **
** figure captions are reproduced. Tables are included, but **
** may not resemble those in the printed version. **
** **
** Please see the PDF (Portable Document Format) file, when **
** available, for a complete electronic file of the printed **
** document's contents. **
** **
******************************************************************
GAO-07-381R
* Appendix I: Briefing Slides
* Appendix II: DHS's Approach to Risk Analysis for Fiscal Year 2006
* Appendix III: Data Sources Used in DHS's Fiscal Year 2006 Risk Analysis
* Appendix IV: DHS's Approach to Assessing Effectiveness for Fiscal Year 2006
* Appendix V: UASI Grant Allocation Approach for Fiscal Year 2006
* Appendix VI: GAO Contact and Staff Acknowledgments
* GAO Contact
* Acknowledgments
February 7, 2007
Congressional Requesters:
Subject: Homeland Security Grants: Observations on Process DHS Used to
Allocate Funds to Selected Urban Areas
In fiscal year 2006, the Department of Homeland Security (DHS) provided
approximately $1.7 billion in federal funding to states, localities, and
territories through its Homeland Security Grant Program (HSGP) to prevent,
protect against, respond to, and recover from acts of terrorism or other
catastrophic events. The Urban Areas Security Initiative (UASI) is a
discretionary grant under this program, and since fiscal year 2003,
Congress has directed DHS to target UASI funding to high-threat,
high-density urban areas to assist in building capacity.^1 To meet this
requirement and inform funding decisions, DHS developed a method to
estimate the relative risk of terrorist attacks to urban areas. From
fiscal year 2003 through 2005, DHS used a number of risk indicators such
as population density and threat to allocate UASI funds. UASI funding
increased during this period from about $96 million to $830 million, while
the number of urban areas that received grants grew from 7 to 43. In
fiscal year 2006, DHS awarded approximately $711 million in UASI grants--a
14 percent reduction in funds from the previous year--while the number of
eligible urban areas identified by the risk assessment decreased to 35.
For fiscal year 2006, DHS made several changes to the grant allocation
process, including modifying its risk assessment methodology, introducing
an assessment of the anticipated effectiveness of investments, and
combining the outcomes of these two assessments to inform funding
decisions.
The results of the UASI eligibility and funding allocations in fiscal year
2006 raised congressional questions and concerns about DHS's methods in
making UASI determinations. Several congressional members requested that
we examine aspects of DHS's UASI funding process, and the fiscal year 2007
DHS Appropriations Act directed us to examine the validity, relevance,
reliability, timeliness, and availability of the risk factors (including
threat, vulnerability, and consequence) used by the Secretary of Homeland
Security for the purpose of allocating discretionary grants.^2 On November
17, 2006, we responded to the mandate and the request by briefing
congressional staff on the results of this review (see app. I). We
specifically examined (1) DHS's method of estimating relative risk of
terrorism in fiscal year 2006; (2) DHS's process for assessing the
effectiveness of the various risk mitigation investments submitted in UASI
applications; (3) how DHS used estimated relative risk scores and
assessments of effectiveness to allocate UASI grant funds in fiscal year
2006; and (4) what changes, if any, DHS plans to make in its UASI award
determination process for fiscal year 2007. This letter and the
accompanying appendices transmit the information provided during those
briefings.
^1Prior to fiscal year 2003, funding to urban areas was provided under the
Nunn-Lugar-Domenici Domestic Preparedness Program, which was administered
by the Department of Defense starting in fiscal year 1997, and later the
Department of Justice during fiscal years 2001 and 2002. Other grants
under the HSGP include the State Homeland Security Program, Law
Enforcement Terrorism Prevention Program, and Citizen Corps Program, among
others.
^2Pub. L. No. 109-295, 120 Stat. 1355, 1370 (2006).
To understand and describe DHS's process for awarding fiscal year
2006 UASI funds, including eligibility and award amount determinations, we
reviewed available documentation and interviewed knowledgeable officials.
During this document review and our interviews, we gathered information
about the data DHS used to analyze relative risk and what efforts it had
in place to ensure data reliability. For example, we collected information
on DHS's consultation with states to obtain and review critical
infrastructure data and DHS's internal assessments of intelligence data.
Additionally, we examined DHS guidance and methods for implementing an
assessment of effectiveness of applicants' plans to mitigate risk.
Finally, during our review of documents and interviews with DHS officials,
we also collected information regarding any changes to the determination
process for the fiscal year 2007 grant cycle. We conducted our work from
September 2006 through November 2006 in accordance with generally accepted
government auditing standards.
Summary of DHS's Process for Allocating UASI Grant Funds in Fiscal Years
2006 and 2007
The Risk Assessment. In fiscal year 2006, DHS used its risk assessment to
identify urban areas that faced the greatest potential risk, which made
them eligible to apply for the UASI grant, and based the amount of awards
to all eligible areas primarily on the outcomes of the risk assessment and
a new effectiveness assessment. DHS enhanced its risk assessment by
including three components--threat, vulnerability, and consequences--to
estimate the relative risk of successful terrorist attacks to urban areas.
The risk assessment was used to inform DHS's selection of eligible urban
areas. DHS also implemented a competitive process to evaluate the
anticipated effectiveness of proposed investments to address homeland
security needs by using peer reviewers, who were homeland security
professionals from fields such as law enforcement and fire service. The
peer reviewers scored the investments using criteria, such as
regionalization, sustainability, and impact. According to DHS, it combined
the outcomes of the risk and effectiveness assessments to inform the
funding allocation decisions in fiscal year 2006, but the Secretary of
Homeland Security made the final UASI grant decisions. Officials also
reported no significant changes to the risk assessment process for next
year's grant cycle, but other decisions, such as the identification of
eligible urban areas through the risk assessment and how much weight risk
and effectiveness will be given in determining amounts, have yet to be
made. Figure 1 illustrates the UASI grant determination process in fiscal
year 2006.
Figure 1: Overview of DHS's UASI Grant Determination Process in Fiscal
Year 2006
In fiscal year 2006, DHS estimated the risk faced by urban areas by
assessing the relative risk of terrorism^3 as a product of three
components: (1) threat, or the likelihood that a type of attack might be
attempted; (2) vulnerability, or the likelihood of a successful attack
using a particular attack scenario; and (3) consequence, or the potential
impact of a particular attack. To estimate the relative risk, DHS assessed
risk from two perspectives, asset-based and geographic, then combined the
assessments weighting geographic risk twice as much as asset-based risk.
According to DHS officials, it made the judgment to weight geographic risk
1.0 and asset-based risk 0.5, since a potential loss of lives within an
area would contribute to how geographic risk is assessed. To estimate
asset risk, DHS computed the product of threat, vulnerability, and
consequence by assessing the intent and capabilities of an adversary to
successfully attack an asset type, such as a chemical plant, dam, or
commercial airport, using one of 14 different attack scenarios (e.g.,
nuclear explosion or vehicle-borne improvised explosive device).
Simultaneously, DHS assessed geographic risk by approximating the threat,
vulnerability, and consequences considering general geographic
characteristics mostly independent of the area's assets using counts of
data such as reports of suspicious incidents, the number of visitors from
countries of interest, and population. In DHS's view, the two estimates of
risk--asset-based and geographic--are complementary and provide a "micro-
and macro-" perspective of risk, respectively. In calculating these
relative risk scores and addressing the uncertainties in estimating
relative risk, policy and analytic judgments were required. For example,
DHS made judgments about how to weight asset and geographic risk, how to
identify the urban boundaries it used to estimate risk, and what data were
sufficient to use in its risk estimates. DHS used this risk assessment to
identify the eligibility cut point, which determined the number of urban
areas that could apply for UASI funding in fiscal year 2006 and defined
high-risk urban areas. According to DHS officials, the DHS Secretary
selected a point that resulted in 35 eligible urban areas, which accounted
for 85 percent of total estimated risk. DHS then decided to extend
eligibility to 11 sustainment areas that participated in the program in
fiscal year 2005, but were not identified in fiscal year 2006 through the
risk assessment.^4 Appendix II contains more detail about the risk
assessment process and describes how DHS used these estimates to determine
which urban areas were eligible to apply for fiscal year 2006 UASI grants.
Appendix III provides information on the data sources used in DHS's fiscal
year 2006 risk estimates.
^3By using a relative risk value, DHS assessed the risk of potential
terrorist attacks to one urban area relative to another urban area. DHS
estimated relative risk as an ordinal number, which typically is
understood to indicate rank order. Further, the "distance" between the
numbers has no meaning. According to DHS, a classical probabilistic risk
assessment, in which risk is calculated using historical statistical data
to quantitatively describe the likelihood of a particular event (usually
expressed as a value between 0 and 1), cannot be used because little
historical statistical data are available to describe terrorism risk.
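To make the arithmetic concrete, the following sketch (in Python) illustrates one way the computation described above could work; the urban area names, component values, and normalization choice are hypothetical and do not represent DHS's actual model or data.

```python
# Hypothetical illustration of the fiscal year 2006 risk combination DHS
# described: relative risk = threat x vulnerability x consequence, computed
# separately from the asset-based and geographic perspectives, normalized,
# and combined with geographic risk weighted twice as much as asset risk.
# All area names and input values below are invented for illustration only.

GEOGRAPHIC_WEIGHT = 1.0
ASSET_WEIGHT = 0.5

# (threat, vulnerability, consequence) for each perspective, per urban area
areas = {
    "Urban area A": {"asset": (0.8, 0.6, 0.9), "geographic": (0.7, 0.5, 0.9)},
    "Urban area B": {"asset": (0.4, 0.5, 0.6), "geographic": (0.6, 0.4, 0.5)},
    "Urban area C": {"asset": (0.2, 0.3, 0.4), "geographic": (0.3, 0.3, 0.4)},
}

def product_risk(components):
    threat, vulnerability, consequence = components
    return threat * vulnerability * consequence

def normalize(scores):
    """Scale raw scores so the largest equals 1 (one possible normalization choice)."""
    top = max(scores.values())
    return {name: value / top for name, value in scores.items()}

asset = normalize({name: product_risk(d["asset"]) for name, d in areas.items()})
geographic = normalize({name: product_risk(d["geographic"]) for name, d in areas.items()})

combined = {
    name: ASSET_WEIGHT * asset[name] + GEOGRAPHIC_WEIGHT * geographic[name]
    for name in areas
}

# Relative (ordinal) ranking: only the order is meaningful, not the distances.
for rank, (name, score) in enumerate(
        sorted(combined.items(), key=lambda kv: kv[1], reverse=True), 1):
    print(rank, name, round(score, 3))
```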
The Effectiveness Assessment. For the first time since the inception of
the program, DHS required urban areas to submit investment justifications
as part of their grant application, so it could assess the anticipated
effectiveness of the various risk mitigation investments urban areas
proposed. The investment justifications included up to 15 "investments" or
proposed solutions to address homeland security needs, identified by the
states and urban areas through their strategic planning process. DHS used
peer reviewers to assess the investments submitted by the 46 urban
areas--35 eligible through the risk assessment and 11 sustainment areas.
DHS and the states collaborated to identify and select these peer
reviewers who were homeland security professionals and managers from
disciplines such as law enforcement, fire service, and emergency
communications. According to DHS, it arranged 17 peer review panels that
included reviewers from a variety of professions, all levels of
government, and representatives from different regions of the country and
from both large- and small-population states. These reviewers evaluated,
discussed, and scored the urban areas' investment justifications,
initially on an individual basis, then in panels. The criteria reviewers
used to score the investment justifications included the following
categories: relevance to the interim National Preparedness Goal and to
state and local homeland security plans, anticipated impact,
sustainability, regionalism, and the implementation of each proposed
investment. Reviewers on each panel assigned scores for six investment
justifications, which according to DHS officials were averaged to
determine a final effectiveness score for each urban area. Appendix IV
provides additional details about the approach DHS used to assess
effectiveness in fiscal year 2006.
Final Allocation Decisions. Finally, DHS used a new method to help
determine UASI allocation amounts for the 46 eligible urban areas, based
primarily on the risk and effectiveness assessments, but final allocation
decisions were made by the Secretary of Homeland Security. The risk and
effectiveness scores did not automatically translate into funding amounts,
but rather, the scores informed the decisions, according to DHS. While all
eligible urban areas that applied for UASI grants would receive funding,
DHS had to prioritize how funds would be allocated. DHS prioritized those
areas estimated to have the highest risk of a successful terrorist attack,
while still rewarding those areas that proposed ways to address homeland
security needs that were anticipated to be effective. DHS used the
combined scores to assign the 46 eligible urban areas into four
categories: Category I--higher risk, higher effectiveness; Category
II--higher risk, lower effectiveness; Category III--lower risk, higher
effectiveness; and Category IV--lower risk, lower effectiveness. According
to DHS, it considered many different distributions of funding to each of
the four categories. DHS officials said that they made the decision to give
Category I the highest funding priority and Category IV the lowest funding
priority. Once the amounts for each category were decided, DHS used a
formula to determine the grant award for each urban area, giving the risk
score a weight of 2/3 and the effectiveness score a weight of 1/3.
According to DHS, these weights reflect its decision to prioritize risk
over effectiveness. DHS officials reported presenting funding options to
the Secretary of Homeland Security, who made the final decision about
funding allocations. The final funding decision resulted in 70 percent of
UASI funding going to "higher risk" candidates in Categories I and II.
Figure 2 illustrates these funding priorities, as described by DHS
officials, in which each circle represents a hypothetical urban area and
the size of the circle corresponds to the relative amounts of the grant
awards.^5 Appendix V provides additional details on the allocation method
used in fiscal year 2006.
^4According to DHS, extending eligibility to the 11 urban areas reflected
feedback from stakeholders on the importance of providing funding across
fiscal years. In addition, in DHS's view, this decision provided greater
transparency in the process and fostered long-term planning for program
participants. DHS has also stated that any urban area not identified as
eligible through the risk analysis process for two consecutive grant
cycles will not be eligible for continued UASI funding.
Figure 2: DHS Allocation Tool for Fiscal Year 2006 UASI Funding
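The sketch below illustrates, with hypothetical scores, category thresholds, and category funding shares, how the categorization and weighting described above could translate into relative award shares; the report states only that risk was weighted 2/3 and effectiveness 1/3 and that Categories I and II together received 70 percent of funds.

```python
# Hypothetical sketch of the fiscal year 2006 allocation approach described
# above: areas are binned into four categories by risk and effectiveness, and
# within each category's share of funds, awards are driven by a combined score
# weighting risk 2/3 and effectiveness 1/3. Thresholds, category funding
# splits, and all scores below are invented for illustration.

RISK_WEIGHT = 2 / 3
EFFECTIVENESS_WEIGHT = 1 / 3
HIGH_RISK_CUTOFF = 0.5           # hypothetical boundary between higher and lower risk
HIGH_EFFECTIVENESS_CUTOFF = 0.5  # hypothetical boundary for effectiveness

# Hypothetical share of total UASI funds reserved for each category; Categories
# I and II together receive 70 percent, consistent with the figure in the text.
category_funds = {"I": 0.45, "II": 0.25, "III": 0.20, "IV": 0.10}

areas = {  # name: (normalized risk score, normalized effectiveness score)
    "Urban area A": (0.9, 0.8),
    "Urban area B": (0.7, 0.3),
    "Urban area C": (0.4, 0.9),
    "Urban area D": (0.2, 0.2),
}

def category(risk, effectiveness):
    high_risk = risk >= HIGH_RISK_CUTOFF
    high_eff = effectiveness >= HIGH_EFFECTIVENESS_CUTOFF
    if high_risk and high_eff:
        return "I"
    if high_risk:
        return "II"
    if high_eff:
        return "III"
    return "IV"

def combined_score(risk, effectiveness):
    return RISK_WEIGHT * risk + EFFECTIVENESS_WEIGHT * effectiveness

# Group areas by category, then split each category's funds by combined score.
by_category = {}
for name, (risk, eff) in areas.items():
    by_category.setdefault(category(risk, eff), []).append((name, combined_score(risk, eff)))

for cat, members in sorted(by_category.items()):
    total = sum(score for _, score in members)
    for name, score in members:
        share = category_funds[cat] * score / total
        print(f"Category {cat}: {name} receives {share:.1%} of total funds")
```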
The Fiscal Year 2007 Process
The fiscal year 2007 process, as described by DHS officials, represents a
continuing evolution in DHS's approach to its risk methodology for grant
allocation. DHS officials said they will continue to use the risk and
effectiveness assessments to inform final funding decisions. For fiscal
year 2007, DHS officials described changes that simplified the risk
methodology, integrating the separate analyses for asset-based and
geographic-based risk, and included more sensitivity analysis in
determining what the final results of its risk analysis should be. DHS
officials said the primary goal was to make the process more transparent
and more easily understood, focusing on key variables and incorporating
comments from a variety of stakeholders regarding the fiscal year 2006
process. For the 2007 grant cycle, DHS no longer estimated asset-based and
geographic risk separately; it considered most areas of the country equally
vulnerable to a terrorist attack, given freedom of movement within the
nation, and focused instead on the seriousness of the consequences of a
successful terrorist attack. As shown in figure 3, the maximum risk score
possible for a given area was 100. Threat to people and places accounted
for a maximum of 20 points, and vulnerability and consequences for a
maximum of 80 points. In the fiscal year 2007 process, the intelligence
community for the first time assessed threat information for multiple years
(generally, from September 11, 2001, forward) for all candidate urban areas
and gave the Office of Grants and Training a list that grouped the 168
areas into one of four tiers. Tier I included those at highest threat
relative to the other areas, and Tier IV included those at lowest threat
relative to the others.
^5The figure does not represent actual urban areas or grant award amounts.
Figure 3: DHS Risk Assessment Methodology for Fiscal Year 2007 UASI
Funding
Note: DIB is Defense Industrial Base.
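A minimal sketch of the 100-point structure described above, assuming a hypothetical mapping of the four threat tiers to the 20 threat points and a hypothetical split of the remaining 80 points across the consequence-related indices discussed later in this letter; the report does not provide these values.

```python
# Hypothetical sketch of the fiscal year 2007 100-point risk score described
# above: threat contributes up to 20 points (driven by the four intelligence
# tiers) and vulnerability/consequences up to 80 points. The tier-to-point
# mapping and the split of the 80 points across the indices are invented.

THREAT_POINTS_BY_TIER = {"I": 20, "II": 15, "III": 10, "IV": 5}  # hypothetical

# Hypothetical maximum points for each consequence-related index (sums to 80).
INDEX_MAX_POINTS = {
    "population_index": 40,
    "economic_index": 20,
    "national_infrastructure_index": 15,
    "national_security_index": 5,
}

def risk_score(threat_tier, index_fractions):
    """index_fractions holds each index's value as a fraction of its maximum (0-1)."""
    threat = THREAT_POINTS_BY_TIER[threat_tier]
    consequences = sum(
        INDEX_MAX_POINTS[name] * fraction for name, fraction in index_fractions.items()
    )
    return threat + consequences  # capped at 100 by construction

example = risk_score("II", {
    "population_index": 0.6,
    "economic_index": 0.5,
    "national_infrastructure_index": 0.4,
    "national_security_index": 0.2,
})
print(round(example, 1))  # 56.0 out of a possible 100
```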
According to DHS officials, the greatest concern was the impact of an
attack on people, including the economic and health impacts of an attack.
Also of concern was the quantity and nature of critical infrastructure
within each of the 168 urban areas assessed. DHS reported that the threat
information used for risk estimates was based upon an analysis of all
credible intelligence data. DHS's Office of Intelligence and Analysis
performed this review and provided the Office of Grants and Training with
threat assessments and corresponding threat values for each urban area. In
contrast, for the 2006 grant cycle, DHS used total counts of threats and
suspicious incidents and incorporated these into its model. In addition,
estimates of asset-based vulnerability were assigned values on a cardinal
scale of 1 to 100 rather than an ordinal scale of 1 to 3, which DHS
officials believe provided insight into the differences between asset
types with different values.
In assessing threat, vulnerability and consequences, DHS specifically
wanted to capture key land and sea points of entry into the United States
and the location of defense industrial base facilities and nationally
critical infrastructure facilities. The approximately 2,100 nationally
critical infrastructure assets included in the risk assessment were
selected on the basis of analysis by DHS infrastructure protection
analysts, sector specific federal agencies, and the states. According to
DHS, these 2,100 assets include some 129 defense industrial base assets.
Assets were grouped into two tiers: (1) those that if attacked could cause
major national or regional impacts similar to those from Hurricane Katrina
or 9/11; and (2) highly consequential assets with potential national or
regional impacts if attacked. Tier II includes about 660 assets identified
by state partners and validated by sector specific agencies. On the basis
of Office of Infrastructure Protection analysis, Tier I assets were
weighted using an average value three times as great as Tier II assets.
According to DHS officials, defense industrial base assets were included
in the national security index and all other assets in the national
infrastructure index.
Throughout this process, a number of policy judgments were necessary,
including which variables to include in the assessment and how many points
to assign to each major variable (e.g., threat, the population index, the
economic index, the national infrastructure index, and the national
security index), with an eye toward how these judgments affected outcomes. DHS
officials noted that such judgments were the subject of extensive
discussions, including among high-level officials. In addition, DHS
officials said that they conducted more sensitivity analyses than had been
possible in the fiscal year 2006 process. DHS officials noted that because
expert judgment was applied to the data, fewer variables were used in the
current model, making it possible to track the effect of different
assumptions and values on the ranking of individual urban areas.
Finally, DHS officials said that the effectiveness assessment process will
be consistent with last year's process, although a number of enhancements
have been made based on feedback received. However, no final decision has
been made on the weights to be given to risk and effectiveness for the
allocation of the fiscal year 2007 grants, according to DHS officials. One
modification to the effectiveness assessment will give urban areas the
opportunity to include investments that involve multiple regions, which can
potentially add an extra 5 percent to 8 percent to their final score. In
addition, DHS will convene a separate peer review panel to assess these
multi-regional investments. DHS has also offered applicants a mid-year
review in which applicants can submit their draft proposals to DHS to
obtain comments and guidance or to address questions the proposals may
raise (such as little or unclear information on the anticipated impact of
an investment on preparedness). As in the 2006 process, DHS officials have
said that they cannot assess how effective these investments, once made,
are in mitigating risk.
Observations
Determining an appropriate methodology to assess terrorism risk is
challenging, given uncertainties such as the limited data on actual
attacks and understanding the capabilities, intentions, and adaptability
of terrorists. The inherent uncertainties in estimating risk require
policy and analytic judgments. We, other federal agencies, and terrorism
risk researchers agree that the threat, vulnerability, and consequences of
an attack should be incorporated into terrorism risk assessments. DHS has
adopted an overall risk assessment approach that consists of these three
risk factors, and in implementing this approach has made judgments in an
attempt to address inherent uncertainties. According to DHS's Under
Secretary, DHS has made significant progress in developing its risk
assessment methods, which includes using a model based on the three risk
factors and incorporating state and local input. However, for the 2006
risk assessment process, DHS officials told us that DHS had limited
knowledge of how changes to its risk assessment methods, such as adding
asset types and using additional or different data sources, affected its
risk estimates. According to a senior technical advisor in DHS's Risk
Management Division, DHS did not have the resources to undertake such
analyses. Consequently, DHS could not assess the effects of these changes
on risk rankings and the determination of eligibility for, and amount of,
UASI grants. This official acknowledged the importance of judgments in
assessing risk of terrorism and eligibility outcomes.
DHS had a limited understanding of the effects of the judgments made in
estimating risk that influenced eligibility and allocation outcomes in
fiscal year 2006. DHS leadership can make more informed policy decisions
if they are provided with alternative risk estimates and funding
allocations resulting from analyses of varying data, judgments, and
assumptions. The Office of Management and Budget (OMB) offers guidelines
for treatment of uncertainty in a number of applications, including the
analysis of government investments and programs. These guidelines call for
the use of sensitivity analysis to gauge what effects key sources of
uncertainty have on outcomes. According to OMB, assumptions should be
varied and outcomes recomputed to determine how sensitive analytical
results are to such changes.^6 By applying these guidelines decision
makers are better informed about how sensitive outcomes are to key sources
of uncertainty. While DHS has indicated that it performed some sensitivity
analyses for fiscal year 2006, it has not provided us with details on the
extent of these analyses, how they were used, or how much they cost. DHS
officials told us they had conducted more extensive sensitivity analyses
for the fiscal year 2007 risk assessment, but we have no documentation on
what analyses were conducted, how they were conducted, or how they were
used and affected the final risk assessment scores and relative rankings.
^6Office of Management and Budget, Circular A-94: Guidelines and Discount
Rates for Benefit-Cost Analysis of Federal Programs (Washington, D.C.:
October 29, 1992), pp. 10-11.
Agency Comments
We provided a draft of this report to DHS for review and comment. DHS
provided us technical comments that we incorporated into our report where
appropriate.
GAO Contact
We are sending copies of this correspondence to the requesters listed
below, the appropriate congressional committees, and the Secretary of
Homeland Security.
Contact points for our Offices of Congressional Relations and Public
Affairs may be found on the last page of this report. For further
information about this report, please contact William Jenkins, Jr.,
Director, Homeland Security and Justice Issues Team, at (202) 512-8777
or at [email protected]. GAO staff members who were major contributors to
this report are listed in appendix VI.
William Jenkins, Jr., Director,
Homeland Security and Justice Issues Team
List of Congressional Addressees:
The Honorable Robert C. Byrd
Chairman
The Honorable Thad Cochran
Ranking Minority Member
Committee on Appropriations
United States Senate
The Honorable David Obey
Chairman
The Honorable Jerry Lewis
Ranking Minority Member
Committee on Appropriations
House of Representatives
The Honorable Bennie Thompson
Chairman
Committee on Homeland Security
House of Representatives
The Honorable Barbara Boxer
United States Senate
The Honorable Dianne Feinstein
United States Senate
The Honorable Bob Filner
House of Representatives
The Honorable Doris Matsui
House of Representatives
The Honorable Mike Thompson
House of Representatives
The Honorable Susan Davis
House of Representatives
Appendix I: Briefing Slides
Appendix II: DHS's Approach to Risk Analysis for Fiscal Year 2006
In fiscal year 2006, DHS made enhancements to its approach to estimating
risk that involved incorporating stakeholder feedback and three risk
factors--threat, vulnerability, and consequence. Other models and
methodologies of assessing risk also include these three risk factors.
However, the inherent uncertainties associated with estimating the risk of
terrorist attacks required DHS to make numerous policy and analytic
judgments. The results of DHS's risk assessment were used to inform two
key grant decisions in fiscal year 2006: (1) eligibility of urban areas to
apply for UASI funding and (2) funding amounts.
DHS has developed a flexible approach to assessing risk of terrorist
attacks that considers several factors, including stakeholder feedback. In
developing DHS's fiscal year 2006 UASI grant determination process, DHS
considered agency goals and statutory responsibilities related to risk
management. DHS's fiscal year 2006 funding criteria--based on relative
risk and effectiveness of proposed solutions to identified needs--align
federal resources with the national priorities established by the Interim
National Preparedness Goal. In addition, DHS solicited feedback from
states, territories, and local governments to increase transparency and
held discussions with stakeholders and experts such as the RAND
Corporation regarding data analysis.^7 For example, in May 2005, DHS
hosted a meeting to solicit feedback on the fiscal year 2005 risk formula,
which was attended by representatives from 12 states or urban areas and
from law enforcement and fire service associations. DHS officials reported
that changes to the fiscal year 2006 risk estimation model for fiscal year
2007 were based on feedback received, given the data were relevant and the
changes could be applied to all urban areas during the data collection
phase. However, agency officials said that implementing these suggestions
varied in cost and time from minimal to very costly and time-consuming.
Additionally, we were told that incorporating suggestions from states,
territories, and local areas may not add significant value to outcomes,
but DHS did not test the impacts of these changes. Where possible, DHS has
integrated approaches with the intent of improving the model and its
approach to estimating risk.
DHS changed its definition of risk in fiscal year 2006 to incorporate
common components from other models. DHS defined risk by three principal
components: threat, or the likelihood that a type of attack might be
attempted; vulnerability, or the likelihood of a successful attack with a
particular attack method; and consequence, or the potential impact of a
particular attack. Other risk assessment models also use these three
components. For example, other federal agencies have adopted some form of
threat, vulnerability, and consequence into their risk management
frameworks. The Department of Defense's risk management approach includes threat and
vulnerability assessments that identify potential threats and weaknesses
that may be exploited by those threats. The Department of Justice provided
guidance to law enforcement executives on how to assess risk of terrorism
to an asset by combining assessments of threat, vulnerability, and
criticality, which evaluates the likely impact if an identified asset is
lost or harmed by specific events. Additionally, the RAND Corporation
argues that threat, vulnerability, and consequences play a significant
role in assessing risk to urban areas and defines risk in a way that links
these three components. Further, in February 2005, the Congressional
Research Service reported that many risk assessment models and
methodologies consisted of identifying critical assets, evaluating
threats, assessing the vulnerabilities of critical assets, and determining
the expected consequences of specific types of attack on specific
assets.^8 For instance, the report noted that the American Petroleum
Institute and the National Petrochemical and Refiners Association defined risk as a
function of consequences of a successful attack against an asset, and
likelihood of a successful attack against an asset, where likelihood is
defined as the attractiveness of the target to the adversary based on the
adversary's intent and the target's perceived value to the adversary,
degree of threat based on capabilities, and degree of vulnerability of the
asset.
^7The RAND Corporation is a nonprofit policy research and analysis
institution that has conducted national security research for the U.S.
Department of Defense, the intelligence community, and key allied
governments and ministries of defense. In addition, RAND operates three
federally funded research and development centers that focus on national
security issues.
In fiscal year 2006, DHS combined two risk assessments--asset-based risk
and geographic-based risk--that were both based on threat, vulnerability,
and consequence to determine the relative risk of a successful terrorist
attack to urban areas. The asset-based risk assessment analyzed the intent
and capabilities of an adversary to successfully attack any of 38 asset
types, such as a chemical plant, dam, or commercial airport, using one of
14 different attack scenarios (e.g., nuclear explosion or vehicle-borne
improvised explosive device). DHS identified the list of 38 asset types,
and according to DHS officials, it collected data on over 200,000
individual assets from public and private sector sources. Geographic risk
considered the general geographic characteristics of an area mostly
independent of the area's assets using counts of information, such as
suspicious incident reports, FBI cases, and population. Table 1 describes
what we know about how each component of asset and geographic risk were
calculated. According to DHS, the two estimates of risk, asset-based and
geographic, were complementary, providing a micro- and macro-perspective of
risk, respectively. Furthermore, while DHS's risk analysis was largely
based on population and population density in previous years, a DHS
official told us that legislative language directed DHS to look at threats
to infrastructure, which was partly why DHS added the asset-based
analysis.
^8Congressional Research Service, Risk Management and Critical
Infrastructure Protection: Assessing, Integrating, and Managing Threats,
Vulnerability, and Consequences, (Washington, D.C.: February 2005).
Table 1: Description of Asset-Based and Geographic Risk Computations in
Fiscal Year 2006
Component Description
Asset-based risk
Threat DHS used information from the intelligence community, such
as communications intercepts and assessments of the
abilities of adversaries to carry out various types of
attacks. This information was evaluated on two main
criteria, the intent and capability of the group making the
threat. The strategic intent of an adversary is based on the
"chatter factor" and "attractiveness," which is partly
determined by how closely the results of a type of attack
align with high-level objectives of an adversary. We learned
that information used in this component for fiscal year 2006
was based on the terrorist group viewed as having the
"greatest capabilities" across all attack scenarios. How the
variables were calculated to get a measurement of threat to
a particular asset type was not specified.
Vulnerability To measure the vulnerability of an asset type, DHS used
internal subject matter experts who analyzed the general
attributes of an asset type against various terrorist attack
scenarios. These subject matter experts conducted site
vulnerability analyses on a sample of sites from the asset
type to catalog attributes for the generic asset. Experts
evaluated vulnerability by attack scenario and asset type
pairs (e.g., nuclear explosion against a chemical plant) and
assigned an ordinal relative value (1, 2, or 3) to the pair
based on 10 major criteria (e.g. electronic detection,
access control, etc.).
Consequences The mode of attack associated with the greatest likelihood
of success was used to assess the consequence that would
result from such an attack on the asset type. DHS used four
categories of consequences--human health, economic,
strategic mission, and psychological--for this assessment,
which were identified in the National Strategy for
Infrastructure Protection. These categories were weighted
and then summed. Details about what data were used to
calculate or simulate consequence were not specified.
Geographic risk
Threat To measure threat to a geographic area, DHS used counts of
data from seven variables--total of intelligence community
reports, total of FBI investigations, total of
DHS/Immigration and Customs Enforcement (ICE)
investigations, total of suspicious incidents, total of ICE
I-94 information for specific countries, total of
international visitors from specific countries, and total of
vessels from specific countries. Weights were assigned to
each variable, then summed.
Vulnerability DHS used total of international visitors and miles of
designated Waste Isolation Pilot Plant (WIPP) route to
assess the vulnerability of a geographic area. Details on
how the two were computed to achieve a measure of
vulnerability for a given area were not specified.
Consequences DHS used three of the four categories used to assess
consequences to asset types to assess the consequences to a
geographic area. DHS did not factor economic consequences to
urban areas in fiscal year 2006. According to a DHS
official, it did not have a UASI-specific economic measure
in fiscal year 2006, but has added it to the model for
fiscal year 2007 using gross metropolitan product data. In
fiscal year 2006, DHS used various population types,
population density, total of defense industrial base
facilities, total of military installations, and total of
large gatherings/special events to measure consequence. A
description of how the variables were computed to achieve a
measure of consequence for a given area was not specified.
Source: GAO analysis of DHS data.
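As one illustration of the weighted-count construction table 1 describes for geographic threat, the sketch below applies weights to counts for the seven variables; the weights and counts are hypothetical, since DHS's actual values are not disclosed in this report.

```python
# Hypothetical illustration of the geographic threat computation described in
# table 1: counts from seven variables are weighted and then summed. The
# weights and counts below are invented; DHS's actual weights are not specified.

THREAT_WEIGHTS = {  # hypothetical weights per variable
    "intelligence_community_reports": 0.25,
    "fbi_investigations": 0.20,
    "ice_investigations": 0.15,
    "suspicious_incidents": 0.15,
    "ice_i94_records_countries_of_interest": 0.10,
    "international_visitors_countries_of_interest": 0.10,
    "vessels_countries_of_interest": 0.05,
}

def geographic_threat(counts):
    """Weighted sum of raw counts for a single urban area."""
    return sum(THREAT_WEIGHTS[name] * counts.get(name, 0) for name in THREAT_WEIGHTS)

example_counts = {  # invented counts for one hypothetical urban area
    "intelligence_community_reports": 12,
    "fbi_investigations": 8,
    "ice_investigations": 5,
    "suspicious_incidents": 40,
    "ice_i94_records_countries_of_interest": 1500,
    "international_visitors_countries_of_interest": 9000,
    "vessels_countries_of_interest": 30,
}
print(geographic_threat(example_counts))
```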
In calculating asset-based and geographic risk, DHS made a number of
policy and analytic judgments because of uncertainties in estimating risk
of terrorism. There are inherent uncertainties associated with estimating
the risk of a terrorist attack, due to various factors such as limited
information on actual attacks; limited information on goals, capabilities,
and adaptability of terrorist groups; and differences in views about how
to combine data about threat, vulnerability, and consequences into a risk
methodology. Given uncertainties, policy and analytic judgments are
required to inform the estimation process. For example, there are a number
of judgments involved in estimating asset-based and geographic risk scores
with various implications and limitations. Table 2 describes some of the
judgments DHS made in estimating risk.
Table 2: Judgments Used in Estimating Asset-Based and Geographic Risk in
Fiscal Year 2006 and Potential Implications and Limitations
Judgment Potential Implications and Limitations
Asset-based risk
Identifying critical assets--38 DHS assessed risk scores for generic types
asset types for fiscal year of assets, such as bridges. Alternatively,
2006. different risk models assess the threat
and vulnerability of a specific asset,
such as the Golden Gate Bridge, and factor
in consequences from an attack to that
specific asset. While determining which
assets are critical can be a subjective
judgment, there may also be a wide
variance regarding the criticality of
assets within a particular asset type.
Determining threat to assets The capabilities of various terrorist
from intelligence data on groups are constantly changing, and there
chatter, attractiveness of is no known method for predicting future
assets as targets, and motivations of adversaries.
strategic intent and
capabilities of terrorist
groups to attack assets.
Estimating vulnerability of DHS noted the limitation of this approach
assets to attack from internal in determining vulnerability for generic
subject matter experts who asset types and would have liked to have
assigned values using various conducted site visits for all assets
attack scenarios--pairing of instead of a sample for each asset type.
each of the 38 asset types to Details about what information internal
14 attack scenarios for fiscal subject matter experts used to assign a
year 2006 value for vulnerability was not specified.
DHS officials told us that using an
ordinal value (1, 2, or 3) to measure
vulnerability did not allow DHS to assess
the differences in magnitude between the
asset-scenario pairs with different
values. Therefore, DHS used cardinal
values (0-100) for fiscal year 2007
analyses.
Determining consequences of This method does not account for multiple
attack in terms of human or simultaneous attacks on assets because
health, economic, strategic of lack of data and DHS' inability to
mission, and psychological and compute these scenarios. DHS officials
assigning weights to each stated the current model does not address
component. simultaneous multiple attacks.
DHS officials reported many challenges to
modeling consequences from multiple or
simultaneous attacks on assets including
answering modeling questions, such as who
should determine what combination of
location or mode of attacks was most
likely (e.g. vehicle-borne improvised
explosive device at a mall, plus a suicide
bomber at a federal building).
Additionally, according to agency
officials, even if DHS were able to select
the most likely multi-attack, it is very
difficult to estimate the potential
interdependencies and consequences. DHS
continues to devise a way to integrate
these issues into its risk model. DHS
acknowledges the uncertainty of
consequence values used in the model, but
does not know of available databases for
consequence information for all
asset-scenario pairs. However when data
are available, DHS uses them, such as with
its use of EPA's database of "worst-case"
scenarios from chemical releases.
DHS did conduct sensitivity analysis for
the consequence weights and told us that
risk results did not change much under
different assumptions about weights and
that this may have been due to the fact
that the consequences were positively
correlated.
Geographic risk
Geographic threat was based on In general, we know very little about how
information from the DHS estimated geographic risk and
intelligence community, such as judgments regarding what parameters are
reports, and numbers of FBI used in assessing threat, vulnerability,
investigations, ICE and consequence were not specified. RAND
investigations, and ICE I-94 has indicated limitations in using simple
data. indicators such as counts of data to
Vulnerability was assessed in assess risk, although there is no
relation to total international theoretical and empirical basis for
visitors and miles of deciding what counts should be included
designated WIPP routes. and in what proportion.
Three types of consequences
were assessed--human health,
strategic mission, and
psychological--and data on
population and other factors
were used.
Source: GAO analysis of DHS data.
The results of the asset-based and geographic risk calculations were
combined to determine a total risk score for a candidate area. Combining
these scores involved (1) determining the values of parameters; (2)
normalizing the values; (3) applying weighting factors of 0.5 for asset and 1.0 for
geographic; and (4) summing the values. Before adding the two estimates of
risk, DHS made a judgment to weight geographic risk twice as much as
asset-based risk since the potential loss of lives within an area was
factored into how geographic risk was calculated, according to DHS
officials. In determining the appropriate weights, DHS reported that it
conducted limited sensitivity analysis for the weights applied to the
asset and geographic risk scores, but that it would have been a useful
tool to help inform decision makers about eligible candidate areas. During
our review, we conducted sensitivity analysis for the weights assigned to
asset-based and geographic risk estimates, which took an analyst a few
hours to complete. By varying the weights, DHS could identify a subset of
candidate areas that fall in and out of the cutoff point for UASI grant
eligibility, which could help justify the decision to select 35 urban areas. DHS has
approached the National Infrastructure Simulation and Analysis Center to
conduct work to identify sources of uncertainty, which could help better
inform analytic judgments.^9
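The weight sensitivity analysis discussed above can be sketched as follows: recombine normalized asset-based and geographic risk scores under alternative weights and observe which candidate areas move in or out of the top-35 eligibility set. All scores and the alternative weights tested below are hypothetical.

```python
# Hypothetical sketch of the weight sensitivity analysis discussed above:
# recombine normalized asset-based and geographic risk scores under
# alternative weights and check which candidate areas move in or out of the
# eligibility set. Scores, alternative weights, and the cutoff size are
# illustrative only.

import random

random.seed(0)
# (normalized asset-based score, normalized geographic score) per candidate area
candidates = {f"Area {i:02d}": (random.random(), random.random()) for i in range(90)}

CUTOFF_SIZE = 35  # number of eligible areas selected in fiscal year 2006

def eligible_set(asset_weight, geographic_weight):
    combined = {
        name: asset_weight * asset + geographic_weight * geo
        for name, (asset, geo) in candidates.items()
    }
    ranked = sorted(combined, key=combined.get, reverse=True)
    return set(ranked[:CUTOFF_SIZE])

baseline = eligible_set(asset_weight=0.5, geographic_weight=1.0)
for alt_asset_weight in (0.25, 0.75, 1.0):
    alternative = eligible_set(alt_asset_weight, 1.0)
    dropped = sorted(baseline - alternative)
    added = sorted(alternative - baseline)
    print(f"asset weight {alt_asset_weight}: {len(added)} areas enter, "
          f"{len(dropped)} leave the top {CUTOFF_SIZE}")
```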
DHS used essentially the same risk assessment methods for fiscal year
2007. According to DHS officials, the most significant change to the model
is in how threat was assessed. In fiscal year 2006, DHS used counts of
data from the intelligence community to estimate threat to asset types and
geographic areas. In fiscal year 2007, DHS's Homeland Infrastructure
Threat and Risk Analysis Center--a joint unit of the Office of
Intelligence and Analysis and the Office of Infrastructure
Protection--will assess current and trend threat data to assign a single
threat value for each asset type and geographic area using a tiered
approach. Other changes were in response to stakeholder feedback. For
example, DHS expanded the number of asset types in its assessment from 38
in fiscal year 2006 to 47 in fiscal year 2007, based on the feedback
provided to DHS from users, such as states and local governments.
Identifying Eligible Urban Areas
In applying its risk assessment to determine the urban areas that were
eligible to receive UASI grants, DHS first had to determine the geographic
boundaries or footprint of candidate urban areas within which data were
collected to estimate risk. Table 3 identifies the footprints for eligible
urban areas in fiscal year 2006. It used data from various sources to
calculate risk scores; the sources included information from federal
agencies; proprietary data on assets; and intelligence data on threats,
suspicious incidents, and other indicators of threats. Appendix III
further describes the data sources used by DHS to assess risk.
^9The National Infrastructure Simulation and Analysis Center is a virtual
center that includes national laboratories, such as Sandia, Los Alamos,
and Argonne National Laboratories.
On the basis of comments from state and local governments, DHS chose to
redefine the footprint for fiscal year 2006. DHS took several steps to
identify this footprint; these included:
Identifying areas with population greater than 100,000 persons and
areas (cities) that had any reported threat data during that past
year. For fiscal year 2006, DHS started with a total of 266
cities.
Combining cities or adjacent urban counties with shared boundaries
to form single jurisdictions. For fiscal year 2006, this resulted
in 172 urban areas.
Drawing a buffer zone around identified areas. A 10-mile buffer
was then drawn from the border of that city/combined entity to
establish candidate urban areas.^10 This area was used to
determine what information was used in the risk analysis, and
represents the minimum area that had to be part of the state/urban
areas defined grant application areas.
In fiscal year 2005, the footprint was limited to city boundaries (and did
not include the 10-mile buffer zone). According to DHS, for fiscal year
2006, it considered other alternatives such as a radius from a city
center, although such a solution created apparent inequities among urban
areas. DHS incorporated buffer zones at the suggestion of stakeholders,
although this action made the analysis more difficult,
according to a DHS official. In addition, DHS officials told us the steps
taken to determine the footprint were based on the "best fit," as compared
with other alternatives. DHS did not provide details on what criteria this
comparison was based on.
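To illustrate the footprint steps listed above, the sketch below filters hypothetical candidate cities, merges adjacent footprints, and draws a 10-mile buffer using the shapely geometry library; the cities, coordinates, and the assumption of a projected coordinate system measured in miles are invented for illustration.

```python
# Hypothetical illustration of the fiscal year 2006 footprint steps described
# above: filter candidate cities, merge cities with shared boundaries, and
# draw a 10-mile buffer around each combined area. Coordinates are invented
# and assumed to be in a projected coordinate system whose units are miles.

from shapely.geometry import box
from shapely.ops import unary_union

cities = {
    # name: (population, had_threat_report, footprint as a simple rectangle)
    "City A": (850_000, True, box(0, 0, 12, 10)),
    "City B": (140_000, False, box(12, 0, 20, 9)),   # shares a boundary with City A
    "City C": (60_000, True, box(40, 40, 44, 43)),   # kept because of threat reporting
    "City D": (50_000, False, box(80, 80, 83, 82)),  # dropped: small and no threat data
}

BUFFER_MILES = 10

# Step 1: keep cities with population over 100,000 or any reported threat data.
candidates = [geom for pop, threat, geom in cities.values() if pop > 100_000 or threat]

# Steps 2 and 3: merge touching geometries into combined areas, then buffer each
# combined area by 10 miles to form the candidate urban-area footprint.
merged = unary_union(candidates)
footprints = list(merged.geoms) if merged.geom_type == "MultiPolygon" else [merged]
buffered = [footprint.buffer(BUFFER_MILES) for footprint in footprints]

for area in buffered:
    print(f"Footprint area: {area.area:,.0f} square miles")
```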
Table 3: Footprint of Urban Areas Eligible for UASI Grants in Fiscal Year
2006
State  Eligible urban area  Geographic area captured in the data count  Previously designated urban areas included
AZ Phoenix Area^a Chandler, Gilbert, Glendale, Phoenix, AZ
Mesa, Peoria, Phoenix,
Scottsdale, Tempe, and a
10-mile buffer extending
from the border of the
combined area.
CA Anaheim/Santa Ana Anaheim, Costa Mesa, Garden Anaheim, CA; Santa
Area Grove, Fullerton, Huntington Ana, CA
Beach, Irvine, Orange, Santa
Ana, and a 10-mile buffer
extending from the border of
the combined area.
Bay Area Berkeley, Daly City, San Francisco, CA;
Fremont, Hayward, Oakland, San Jose, CA;
Palo Alto, Richmond, San Oakland, CA
Francisco, San Jose, Santa
Clara, Sunnyvale, Vallejo,
and a 10-mile buffer
extending from the border of
the combined area.
Los Angeles/Long Burbank, Glendale, Los Angeles, CA;
Beach Area Inglewood, Long Beach, Los Long Beach, CA
Angeles, Pasadena, Santa
Monica, Santa Clarita,
Torrance, Simi Valley,
Thousand Oaks, and a 10-mile
buffer extending from the
border of the combined area.
Sacramento Area^a Elk Grove, Sacramento, and a Sacramento, CA
10-mile buffer extending
from the border of the
combined area.
San Diego Area^a Chula Vista, Escondido, and San Diego, CA
San Diego, and a 10-mile
buffer extending from the
border of the combined area.
CO Denver Area Arvada, Aurora, Denver, Denver, CO
Lakewood, Westminster,
Thornton, and a 10-mile
buffer extending from the
border of the combined area.
DC National Capital National Capital Region and National Capital
Region a 10-mile buffer extending Region, DC
from the border of the
combined area.
FL Fort Lauderdale Area Fort Lauderdale, Hollywood, N/A
Miami Gardens, Miramar,
Pembroke Pines, and a
10-mile buffer extending
from the border of the
combined area.
Jacksonville Area Jacksonville and a 10-mile Jacksonville, FL
buffer extending from the
city border.
Miami Area Hialeah, Miami, and a Miami, FL
10-mile buffer extending
from the border of the
combined area.
Orlando Area Orlando and a 10-mile buffer Orlando, FL
extending from the city
border.
Tampa Area^a Clearwater, St. Petersburg, Tampa, FL
Tampa, and a 10-mile buffer
extending from the border of
the combined area.
GA Atlanta Area Atlanta and a 10-mile buffer Atlanta, GA
extending from the city
border.
HI Honolulu Area Honolulu and a 10-mile Honolulu, HI
buffer extending from the
city border.
IL Chicago Area Chicago and a 10-mile buffer Chicago, IL
extending from the city
border.
IN Indianapolis Area Indianapolis and a 10-mile Indianapolis, IN
buffer extending from the
city border.
KY Louisville Area^a Louisville and a 10-mile Louisville, KY
buffer extending from the
city border.
LA Baton Rouge Area^a Baton Rouge and a 10-mile Baton Rouge, LA
buffer extending from the
city border.
New Orleans Area New Orleans and a 10-mile New Orleans, LA
buffer extending from the
city border.
MA Boston Area Boston, Cambridge, and a Boston, MA
10-mile buffer extending
from the border of the
combined area.
MD Baltimore Area Baltimore and a 10-mile Baltimore, MD
buffer extending from the
city border.
MI Detroit Area Detroit, Sterling Heights, Detroit, MI
Warren, and a 10-mile buffer
extending from the border of
the combined area.
MN Twin Cities Area Minneapolis, St. Paul, and a Minneapolis, MN;
10-mile buffer extending St. Paul, MN
from the border of the
combined entity.
MO Kansas City Area Independence, Kansas City Kansas City, MO
(MO), Kansas City (KS),
Olathe, Overland Park, and a
10-mile buffer extending
from the border of the
combined area.
St. Louis Area St. Louis and a 10-mile St. Louis, MO
buffer extending from the
city border.
NC Charlotte Area Charlotte and a 10-mile Charlotte, NC
buffer extending from the
city border.
NE Omaha Area^a Omaha and a 10-mile buffer Omaha, NE
extending from the city
border.
NJ Jersey City/Newark Elizabeth, Jersey City, Jersey City, NJ;
Area Newark, and a 10-mile buffer Newark, NJ
extending from the border of
the combined area.
NV Las Vegas Area^a Las Vegas, North Las Vegas, Las Vegas, NV
and a 10-mile buffer
extending from the border of
the combined entity.
NY Buffalo Area^a Buffalo and a 10-mile buffer Buffalo, NY
extending from the city
border.
New York City Area New York City, Yonkers, and New York, NY
a 10-mile buffer extending
from the border of the
combined area.
OH Cincinnati Area Cincinnati and a 10-mile Cincinnati, OH
buffer extending from the
city border.
Cleveland Area Cleveland and a 10-mile Cleveland, OH
buffer extending from the
city border.
Columbus Area Columbus and a 10-mile Columbus, OH
buffer extending from the
city border.
Toledo Area^a Oregon, Toledo, and a Toledo, OH
10-mile buffer extending
from the border of the
combined area.
OK Oklahoma City Area^a Norman, Oklahoma City, and a Oklahoma City, OK
10-mile buffer extending
from the border of the
combined area.
OR Portland Area Portland, Vancouver, and a Portland, OR
10-mile buffer extending
from the border of the
combined area.
PA Philadelphia Area Philadelphia and a 10-mile Philadelphia, PA
buffer extending from the
city border.
Pittsburgh Area Pittsburgh and a 10-mile Pittsburgh, PA
buffer extending from the
city border.
TN Memphis Area Memphis and a 10-mile buffer Memphis, TN
extending from the city
border.
TX Dallas/Fort Arlington, Carrollton, Dallas, TX; Fort
Worth/Arlington Area Dallas, Fort Worth, Garland, Worth, TX;
Grand Prairie, Irving, Arlington, TX
Mesquite, Plano, and a
10-mile buffer extending
from the border of the
combined area.
Houston Area Houston, Pasadena, and a Houston, TX
10-mile buffer extending
from the border of the
combined entity.
San Antonio Area San Antonio and a 10-mile San Antonio, TX
buffer extending from the
city border.
WA Seattle Area Seattle, Bellevue, and a Seattle, WA
10-mile buffer extending
from the border of the
combined area.
WI Milwaukee Area Milwaukee and a 10-mile Milwaukee, WI
buffer extending from the
city border.
^10 Buffer zone extensions were considered for chemical plants (25 miles)
and nuclear power plants (50 miles). According to DHS officials, these
distances were selected based on plume effects influenced by research
conducted by the Department of Energy.
Source: DHS.
^aSustainment area: an urban area that received UASI funding in fiscal
year 2005, but was not deemed eligible to apply through the fiscal year
2006 risk assessment. However, DHS extended eligibility to these areas for
one additional grant cycle.
On the basis of the risk assessment and a policy decision, DHS determined
which urban areas were eligible to apply for UASI grants in fiscal year
2006. DHS estimated risk for 172 urban areas, but in determining eligible
urban areas, it limited its analysis to 90 candidate areas, based on a
200,000 population threshold and/or reports of credible threats. For each
of the 90 candidate urban areas, DHS combined the asset-based and
geographic risk scores to produce a total relative risk score. These
relative risk scores were plotted in order, and a cutoff point was then
selected that determined the number of urban areas eligible to apply for
grants in fiscal year 2006 and defined the nation's most at-risk areas
(see also app. I, page 20).
According to DHS officials, the Secretary of Homeland Security selected a
cut point that resulted in 35 urban areas, which accounted for 85 percent
of total estimated risk. A senior DHS official also told us that decision
makers may bring other sensitive information--outside the risk model--to
the table, but exactly what that information was or what priority that
information held over other DHS goals was unclear. Further, DHS
extended eligibility to 11 sustainment areas--urban areas that
participated in the program in fiscal year 2005, but were not identified
as eligible through the risk analysis process in fiscal year 2006. This
policy decision was made in order to foster long-term planning for program
participants across fiscal years. According to DHS, any urban area not
identified as eligible through the risk analysis process for two
consecutive years will not be eligible for continued funding under the
UASI program, but will continue to be eligible to receive funding from
other DHS programs.
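To illustrate the mechanics of such a cumulative cutoff, the following
sketch (written in Python with hypothetical scores, not DHS's data or
actual model) shows how candidate areas could be ranked by combined
relative risk and selected until they account for a given share, here 85
percent, of total estimated risk.

    # Illustrative sketch only: hypothetical scores, not DHS data; the 85
    # percent share mirrors the cumulative share of risk DHS reported for
    # the 35 areas.
    def select_eligible(scores, cumulative_share=0.85):
        """Rank areas by total relative risk and return those that together
        account for the given share of total estimated risk."""
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        total = sum(scores.values())
        eligible, running = [], 0.0
        for area, risk in ranked:
            eligible.append(area)
            running += risk
            if running / total >= cumulative_share:
                break
        return eligible

    # Hypothetical candidate areas with combined relative risk scores.
    candidates = {"Area A": 40.0, "Area B": 25.0, "Area C": 20.0,
                  "Area D": 10.0, "Area E": 5.0}
    print(select_eligible(candidates))  # ['Area A', 'Area B', 'Area C']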
DHS officials did not know the extent to which, if at all, the change in
the definition of the footprint area between fiscal years 2005 and 2006
influenced estimates of risk. According to DHS officials, it would be very
difficult to pinpoint the source of changes in risk analysis outcomes in
fiscal year 2006, since there were changes made to the urban area's
footprint, the structure of the model, and the data inputs (e.g., new
annual threat data for geographic risk). However, DHS officials believe
that the change in footprint in 2006 was associated with changes in
relative risk of many urban areas. For example, by defining the footprint
to extend beyond city limits, the fiscal year 2006 risk assessment
captured additional information, such as a nuclear power plant outside a
city boundary or a suburban population, that had not previously been
accounted for in several urban areas. As a consequence of the change in
the footprint,
DHS officials concluded that the relative risk of New York City and the
National Capital Region declined compared to those of other urban areas.
DHS could not determine how much of the decline was due only to the change
in the footprint versus other components of the risk methodology that
changed. While, as of November 2006, DHS expected to use the same
definition of an urban area footprint for fiscal year 2007, it had yet to
determine how eligibility for UASI funding would be decided.
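The effect of extending a footprint beyond a city boundary can be
illustrated geometrically. The following sketch uses the shapely library
and made-up planar coordinates measured in miles; it shows only how a
buffer around a hypothetical city can capture an asset, such as a nuclear
power plant, that lies outside the city limits, and does not represent
DHS's actual geospatial processing.

    # Illustrative sketch only: made-up coordinates on a flat plane, in miles.
    from shapely.geometry import Point, Polygon

    city = Polygon([(0, 0), (0, 10), (10, 10), (10, 0)])  # hypothetical city
    footprint = city.buffer(10)                            # city plus 10-mile buffer

    plant = Point(25, 5)  # hypothetical plant 15 miles beyond the city border
    print(city.contains(plant))             # False: outside the city limits
    print(footprint.contains(plant))        # False: outside the 10-mile buffer
    print(city.buffer(50).contains(plant))  # True: inside the 50-mile extension
                                            # considered for nuclear power plants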
Appendix III: Data Sources Used in DHS's Fiscal Year 2006 Risk Analysis
In assessing risk for the UASI grant determination process in fiscal year
2006, DHS applied 57 types of data variables from sources such as (1)
federal agencies; (2) state, territory, and local stakeholders; (3)
private proprietary data; as well as (4) data compiled by DHS. Some data
variables were populated from a combination of sources. Data variables
from DHS and other federal government data sources made up 36 asset-based
and geographic data variables. Private proprietary data sources
contributed to 22 asset-based and geographic variables, of which 7 were
populated exclusively with data from private proprietary sources. (See
table 4 for details.) DHS officials told us that the National Asset
Database (NADB) was not a data source for risk analysis since the database
is not populated with relevant attributes. Our review of the list of
variables and sources for the risk methodology that DHS provided us also
revealed that the NADB did not appear among the sources.
DHS considered all data obtained for the risk model from the sources
identified as reliable for the purposes of estimating risk, although DHS
did not systematically test the reliability of the data used. This
includes the intelligence data, which DHS officials acknowledged they had
accepted from the source agencies. DHS considered these data to be valid
and reliable in the sense that DHS believed they appropriately measure the
risk constructs for which they are collected (i.e., the data have face
validity). According to DHS officials, to identify any data-related
problems such as the validity of data used and any duplicative values, DHS
made over 100 analytical runs of the fiscal year 2006 risk assessment
model. These analyses revealed errors created by using buffer zones, which
resulted in some individual assets being attributed to more than one urban
area.
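A check for this type of duplication can be sketched simply. The following
example uses hypothetical attributions, not DHS data, to show how assets
credited to more than one urban area could be identified.

    # Illustrative sketch only: hypothetical attributions, not DHS data.
    from collections import defaultdict

    attributions = [
        ("Asset 1", "Urban Area A"),
        ("Asset 2", "Urban Area A"),
        ("Asset 2", "Urban Area B"),  # falls inside two overlapping buffers
        ("Asset 3", "Urban Area C"),
    ]

    areas_by_asset = defaultdict(set)
    for asset, area in attributions:
        areas_by_asset[asset].add(area)

    duplicates = {a: sorted(v) for a, v in areas_by_asset.items() if len(v) > 1}
    print(duplicates)  # {'Asset 2': ['Urban Area A', 'Urban Area B']}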
Most of the data used by DHS in fiscal year 2006 were timely and appeared
reliable. Our review of the data sources listed in table 4 showed that
most were less than 2 years old, with most dating from 2005 or 2006. All
data supplied by private proprietary sources were less than 2 years old,
while data from federal sources on some asset-based variables were from
2001 or 2002.
We performed a limited test on the reliability of the data sources, given
the time constraints in conducting this review. We attempted to determine,
as a result of prior or ongoing GAO work, whether any reliability
assessments have been conducted on any of the data sets DHS obtained from
proprietary sources, and if so, what the results were. GAO policy
requires us to review the reliability of data whenever we use data from
sources other than GAO to support our analyses and conclusions. Of the 57
data variables, we identified five data sources that were used in past
GAO work and found that one of the sources had been questioned by GAO
analysts, although our past work was not directly related to the specific
type of data provided to DHS. Specifically, the provider of DHS's data
for the transoceanic cable landings asset type did not meet GAO's
reliability standards; our past work found internal control problems,
such as the absence of mechanisms for the data providers to perform
verifications.
For fiscal year 2007, DHS reported that it would apply 69 types of
asset-based and geographic data variables from these sources. Of these 69
variables, 38 were populated exclusively with data from a single source,
and 24 asset-based variables were refined in fiscal year 2007 through the
addition of data sources. Also, as we discussed in appendix II, DHS's
Office of Intelligence and Analysis performed threat reviews and provided
the Office of Grants and Training with a single threat value for each
urban area and asset type. This is in contrast to fiscal year 2006, when
DHS used total counts of threats and suspicious incidents. Data supplied
directly by state and local governments for fiscal year 2007 analyses were
current as of August 2006, except where otherwise noted.
Table 4: DHS Sources of Data Used in UASI Risk Analysis Model
                                    Source                           Used in
Type of data              DHS   Other     State/   Private   Fiscal year  Fiscal year
                                federal   local                   2006         2007
Critical assets (assessed for threat, vulnerability,
and consequences)
Aqueducts X - X
Arenas X - X
Casinos X - X
Chemical Manufacturing Facilities X^a X X
City Road Bridges X^b X X X
Colleges and Universities X X X X
Commercial Airports X X X X
Commercial Overnight Shipping X X^c X X
Facilities
Convention Centers X X X X
Dams X X^a X X X
Electricity Generation Facilities X X X X
Electricity Substations X X^d X X
Enclosed Shopping Malls X X X X
Federal Office Buildings X - X
Ferry Terminals - Buildings X X X X
Financial Facilities X X X X
Hospitals X X X X
Hotels X^a X^c - X
Hotels/Casinos X^a X -
Levee X X X
Liquefied Natural Gas Terminals X^a X X
Maritime Port Facilities X^c X X
Mass Transit Commuter Rail and X X X
Subway Stations
National Health Stockpile Sites X X X X
National Monuments and Icons X X X
Natural Gas Compressor Stations X X^a X X
Non Power Nuclear Reactors X^c X X
Nuclear Power Plants X^c X X X
Nuclear Research Labs X^c X X X
Offshore Petroleum Import X^c - X
Terminals
Petroleum Pumping Stations X^a X X
Petroleum Refineries X^a X X
Petroleum Storage Tank Farms X X X X
Pharmaceutical Plants X - X
Postal Shipping Hubs X^a - X
Potable Water Treatment X^b X X X
Facilities
Primary And Secondary Schools X X X X
Railroad Bridges X^b X X X
Railroad Passenger Stations X X X X
Railroad Tunnels X X X X
Road Commuter Tunnels X X X X
Sewage Treatment Facilities X^d - X
Stadiums X X X X
Tall Commercial Buildings X^c X X
Telecommunications Telephone X X X X
Hotels
Theme Parks X X X X
Trans Oceanic Cable Landings X^a X X
Under Water Mass Commuter Tunnels X - X
Asset vulnerability
Fiscal Year 2006--ordinal value X^c X -
(e.g.: 1, 2, 3)
Fiscal Year 2007--cardinal value X^c - X
(e.g.: 0 - 100)
Asset threat
Fiscal Year 2006--
Strategic Intent X^c X -
Chatter X^c X -
Attractiveness X^c X -
Capabilities X^c X -
Fiscal Year 2007--
Threat to Infrastructure (value X - X
between 0 and 1) ^e
Geographic consequence parameters
Human Health (Population and X^c X X
Population Density)
Human Health (Commuter X X X
Population)
Human Health (Visitor Population) X X X
Human Health (Military X - X
Population)
Economic (Gross Metropolitan X^c - X
Product)
Defense Industrial Base X X X
Military Bases (Counts) X X X
Levees (Counts) X - X
State Capitals (Yes/No--UASI X^a - X
only)
Special Events X^c X X
Geographic vulnerability
Miles of International X - X
Border/Coastline
Total International Visitors X X X
ICE form I-94 Visitors from
Standard Industrial X - X
Classification Code (through
City)
ICE form I-94 Visitors from
Standard Industrial X - X
Classification Code (destination
City)
Miles of WIPP Routes X X X
Geographic threat
Fiscal Year 2006--
Intelligence Community Reports X^c X -
FBI Counts X^c X -
ICE Investigations X^c X -
Suspicious Incidents X^c X -
ICE form I-94 Visitors from
Standard Industrial X^c X -
Classification Code (through
City)
ICE form I-94 Visitors from
Standard Industrial X^c X -
Classification Code (destination
City)
Fiscal Year 2007--
Threat to Geographic Area (value X - X
between 0 and 1)^e
Source: GAO analysis of DHS data.
Note: DHS provided us information on the sources of data used in the risk
model on November 8, 2006, and, at the time of our review, indicated that
the list for fiscal year 2007 was subject to change. In addition, the
fiscal year 2007 data used have a publication date of 2006, unless
otherwise noted.
^aData sources with a publication date prior to 2006.
^bData sources with a publication date of either 2001 or 2002.
^cData sources with a publication date not specified.
^dDenotes that publication dates for the variable were not provided for
all sources.
^eDHS considered the variables used in the fiscal year 2006 risk model to
assign a threat value between 0 and 1 for fiscal year 2007.
Appendix IV: DHS's Approach to Assessing Effectiveness for Fiscal Year 2006
Fiscal year 2006 marked the first time that eligible urban areas completed
and submitted an investment justification to formally request UASI
funding, which DHS used to assess the anticipated effectiveness of the
risk mitigation investments urban areas proposed. The investment
justification included up to 15 "investments" or proposed solutions to
address homeland security needs identified by the states and urban areas
through their strategic planning process. DHS and the states collaborated
to identify and select peer reviewers who evaluated, discussed, and scored
the investment justifications submitted by the 46 eligible urban areas.
Reviewers on each of the 17 panels assigned scores for six investment
justifications, and according to DHS officials, these scores were
averaged to determine a final effectiveness score for each urban area.
Purpose and Goals of the Effectiveness Assessment
Given the uncertainties in estimating terrorism risk, DHS introduced the
effectiveness assessment as an additional tool to inform DHS leaders when
making allocation decisions. Specifically, the investment justifications
allowed DHS to consider how the eligible urban areas planned to spend the
grant money. While one identified goal of the UASI program is to address
the needs of high-threat, high-density urban areas, DHS officials
determined that it would be more useful for urban areas to suggest
solutions for how to meet their self-identified needs within the
investment justifications. In addition, DHS officials told us the emphasis
on effectiveness was meant to avoid the potential bias that could have
occurred from self-reported needs. The Interim National Preparedness Goal,
which DHS described as a common planning framework to better understand
preparedness levels, shape priorities, and focus expenditures, was in
place for the first time for the fiscal year 2006 grant cycle. DHS
reported that designing funding criteria that incorporated both risk and
effectiveness was done to ensure that urban areas' expenditures were in
alignment with the national priorities established by the Interim National
Preparedness Goal. In particular, for the new effectiveness assessment the
states and urban areas requested fiscal year 2006 HSGP funding by
submitting applications in support of their Homeland Security Strategies
and related program planning documents.^11 In addition, according to
DHS, the new effectiveness assessment added a degree of competition to the
grant determination process, which was a change from fiscal year 2005,
when urban areas did not have to justify their planned use of the grant
before they received the funding.^12
Rather than determining the effectiveness of the urban areas'
applications itself, DHS decided to use peer reviewers, who were homeland
security professionals and managers from various fields, to make this
assessment.
DHS reported that involving subject matter experts from federal, state,
and local government agencies was done to ensure a fair and equitable peer
review process. To learn best practices for distributing competitive
grants and conducting peer reviews, DHS met with officials who run other
competitive grant programs (Assistance to Firefighter Grants, Transit
Security Grant Program, and the National Science Foundation). According to
DHS, this approach to evaluating anticipated effectiveness seeks to
recognize applicants for proposing relevant, innovative, and reasonable
investments that will directly affect our nation's preparedness.
^11States and UASI areas were required to maintain a Homeland Security
Strategy, which was meant to (a) provide a blueprint for comprehensive,
enterprisewide planning for homeland security efforts and (b) provide a
strategic plan for the use of related federal, state, local and private
resources within the state or urban area before, during, and after
threatened or actual domestic terrorist attacks, major disasters, and
other emergencies.
^12In fiscal year 2006, as in fiscal year 2005, the states were required
to notify DHS how they spent the funds. Within 60 days of the grant award,
state administrative agencies were required to submit a prioritization of
investments based upon the final grant award amounts and a certification
that funds had been passed through to local units of government.
Preparing the Investment Justification
DHS assessed effectiveness only for the applications submitted by the 46
eligible urban areas. Aside from the 11 sustainment areas, DHS stated that
it did not allow areas that fell below the risk cut point to apply for a
UASI grant because it did not want to set false expectations and create
excessive work for candidate areas that were not going to receive funding.
DHS provided states and urban areas with guidance that included
instructions on completing the investment justification, the criteria peer
reviewers would use to score the investment justifications, and an
overview of how risk and effectiveness scores would be used to determine
UASI allocations. DHS allowed each urban area to propose up to 15
investments, and for each investment, applicants were required to answer a
total of 17 detailed questions across four sections: background,
regionalization, impact, and funding and implementation plan. DHS
instructed urban areas to build investments that supported their state's
Enhancement Plan, a program management plan to help states identify
strengths and weaknesses within their homeland security programs and
capabilities.^13 DHS cited this guidance as an example of how it
encouraged states and urban areas to utilize the results of strategic
planning efforts. At the time urban areas were completing their
applications, DHS reported it was still determining how it would use the
risk and effectiveness scores when allocating the UASI grants and had not
yet set the weights that would be applied to each score. Therefore,
applicants did not know how much the effectiveness assessment would
influence their grant amount. At the time of our review, DHS had not
announced whether applicants would be provided with this information
prior to submitting their fiscal year 2007 applications. In addition, in
fiscal year 2006, applicants did not have
access to the outcomes of the risk analysis or to specific threats to
assets or their area for consideration when preparing the investment
justifications.
Forming the Peer Review Panels
DHS engaged the states and territories in identifying and selecting the
peer reviewers who would evaluate the investment justifications for their
anticipated effectiveness. DHS provided some guidelines on what state
officials should consider when nominating peer reviewers, and requested
information, such as professional experience, about those nominated. DHS
compiled a list of eligible peer reviewers from nominations made by the
states and territories, and made its recommendations to the states based
on the following high-level criteria:
o the extent of the nominees' familiarity across multiple homeland
security disciplines and their length of tenure,
o the nominees' demonstrated experience managing an integrated
homeland security program or initiative,
o the nominees' familiarity with the HSGP (which was considered a
benefit, but not required), and
o whether or not the nominees represented the State Administrative
Agency (if so, these nominees were prioritized).
DHS allowed the State Administrative Agencies to make the final selection
of peer reviewers from their state or territory, who included homeland
security professionals and managers from a variety of disciplines, such as
officials from law enforcement, fire service, emergency management, state
homeland security, and public health. DHS arranged 17 panels to include
one facilitator, one note-taker, and up to seven peer reviewers,
representing states or territories, urban areas, and federal agencies. DHS
reported arranging the panels to ensure a diverse mix of backgrounds and
experience, and to avoid potential conflicts of interest by:
o including representatives from the eastern, western, and central
geographic regions, and from large- and small-population states;
o preventing reviewers from scoring their own state, territory, or
urban area; and
o avoiding having reviewers score neighboring states, territories, or
urban areas, where possible.
^13According to DHS, in 2005 states conducted a Program and Capability
Review and from this created an Enhancement Plan, which is meant to
prioritize focus areas and develop high-level initiatives to address the
most critical needs. In addition, the Enhancement Plan is the foundation
for building an investment justification to request fiscal year 2006 HSGP
funding.
Overall, in DHS's view, the peer review panel process mitigated potential
bias by requiring panelists to engage in discussion, justify their scores,
and consider multiple perspectives.
Reviewing and Scoring the Investment Justifications
When scoring an urban area's investment justification, peer reviewers
conducted an individual assessment of the applications, and subsequently
discussed scoring in peer review panels during a week-long conference.
Each peer reviewer was responsible for reviewing six investment
justifications, which included roughly 60 investments, on an individual
basis over the course of 2 1/2 weeks, and then submitted their scores,
along with explanatory comments, to DHS. Specific scoring criteria were
developed by DHS for the peer reviewers to use and were provided to states
and urban areas about a month prior to the March 2, 2006, HSGP application
deadline. To score each individual investment, the reviewers evaluated the
responses to the 17 questions, comparing them to detailed criteria and the
state's Enhancement Plan to ensure the proposed investments were in
alignment. Peer reviewers also scored the overall submission, so DHS
provided them with criteria for considering the investment justification
as a whole, through which DHS sought to reward innovative,
forward-leaning approaches. The
scoring criteria are summarized in table 5.
Table 5: Factors Peer Reviewers Considered When Scoring Investment
Justifications in Fiscal Year 2006
Individual Investment

Background

Section description: Applicants were asked to summarize the investment,
its purpose, and how it will support the Enhancement Plan, state/urban
area homeland security strategies, and national priorities and target
capabilities. Includes four questions with multiple criteria for each
question (a total of 14 criteria for the section).

Example question: Provide a summary description of this Investment and
its purpose.

Example criteria:

o Articulates clear end result of using fiscal year 2006 HSGP funds;
o Explains how outcomes relate to the purpose.

Regionalization

Section description: Applicants were asked to describe the investment's
demographic and geographic area, and the urban area's plans for regional
collaboration, stakeholder engagement, and an implementation approach to
support the investment. Includes three questions with multiple criteria
for each question (a total of 14 criteria for the section).

Example question: Explain how the state/urban area is organizing to
implement this Investment over the identified geographic area(s).

Example criteria:

o Discusses regional partnerships;
o Discusses mitigating duplication of effort.

Impact

Section description: Applicants were asked to describe anticipated
impacts of the investment, how requested funds will help achieve the
impacts, how the investment will decrease or mitigate risk, and what the
potential risks of not funding the investment would be. Includes three
questions with multiple criteria for each question (a total of 11
criteria for the section).

Example question: Discuss how the implementation of this Investment will
decrease or mitigate risk.

Example criteria:

o Targets specific consequences, vulnerabilities, and threats;
o Provides a rationale of choices.

Funding and implementation plan

Section description: Applicants were asked to describe the investment's
funding plan; describe the planned implementation and oversight approach
of the management team; provide an implementation timeline with
milestones; identify potential challenges to effective implementation and
how they will be addressed and mitigated; and describe the planned
duration and long-term sustainability plans of the investment after
fiscal year 2006 HSGP funds are expended. Includes seven questions with
multiple criteria for each question (a total of 27 criteria for the
section).

Example question: Identify potential challenges to the effective
implementation of this Investment (e.g., stakeholder buy-in,
sustainability, aggressive timelines).

Example criteria:

o Describes necessary steps required for successful implementation and
describes potential challenges;
o Explains why the identified implementation challenges are challenges
to this Investment.
Overall investment justification submission
Criteria:
o Overall relevance to implementation of the Interim National
Preparedness Goal;
o Connection to both the spirit and scope of the Enhancement Plan;
o Extent to which the individual investments relate to each other to
portray a complete picture of plans for the homeland security program;
o Innovativeness of the proposed solutions to address needs;
o Overall feasibility and reasonableness of proposed solutions.
Source: GAO analysis of DHS documents.
After peer reviewers submitted preliminary scores based on their
individual review, DHS identified the questions that received the greatest
range of scores. Then the reviewers participated in a week-long
conference, where the panels of peer reviewers discussed and scored each
individual investment and the investment justification submission as a
whole. Each panel had a facilitator, whose role according to DHS was to
focus the discussions on those questions that received the greatest range
of scores, ensure that the scoring criteria were consistently applied, and
to help the panel develop feedback for the states, territories, and urban
areas. In addition, subject matter experts were on call to answer
questions that arose. During the conference, peer reviewers could revise
their initial scores if desired, based on panel discussions. DHS computed
the final scoring of the investments and the whole investment
justification submission and then combined them to determine an overall
effectiveness score. Specifically, peer reviewers provided scores for each
of the investments in their assigned investment justifications based on
evaluation criteria, and DHS told us it averaged the reviewers' scores
together for each urban area.^14 DHS reported it selected the median
score as the final total "investment score" for each urban area. In
addition, according to DHS, peer reviewers provided a score for each
overall investment justification submission they reviewed, and the panels
discussed these scores and determined a final "overall investment
justification" score for each urban area. DHS told us it decided to give
these two scores equal weights--0.5 to the total of investment scores, and
0.5 to the overall investment justification score--and averaged them
together to determine one final effectiveness score. While officials told
us they discussed alternative weights, they did not have any data
indicating that other weights would be more appropriate than those chosen.
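Because the report describes these aggregation steps only at a high
level, the following sketch shows one plausible reading using hypothetical
scores: the panel's reviewer scores are averaged for each investment, a
median summarizes the investments, and the result is combined with the
overall investment justification score using the equal weights DHS
reported. The scale and exact aggregation are assumptions, not DHS's
actual calculation.

    # Illustrative sketch only: hypothetical scores on an assumed 0-100 scale.
    from statistics import mean, median

    def effectiveness_score(reviewer_scores, overall_ij_score,
                            w_investments=0.5, w_overall=0.5):
        """reviewer_scores: one list per reviewer, one score per investment."""
        # Average the panel's reviewers for each investment (consensus was
        # not required; scores within each panel were averaged).
        per_investment = [mean(s) for s in zip(*reviewer_scores)]
        # Summarize the investments with a median (one reading of the
        # report's description of the final total "investment score").
        investment_score = median(per_investment)
        # Combine with the overall investment justification score, using
        # the equal weights DHS reported.
        return w_investments * investment_score + w_overall * overall_ij_score

    panel = [[80, 70, 90, 60],   # hypothetical: three reviewers, four investments
             [75, 65, 85, 55],
             [85, 75, 95, 65]]
    print(effectiveness_score(panel, overall_ij_score=72.0))  # 73.5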
At the end of the panel conference DHS used a survey to gather feedback,
and 80 percent of the 102 peer reviewers responded and provided comments.
The following include some of the preliminary survey results that DHS
reported:
o Eighty-three percent of survey respondents agreed or strongly
agreed that the fiscal year 2006 HSGP peer review resulted in
objective, consistent, and defensible scores and feedback.
o Ninety-six percent of respondents agreed or strongly agreed that
each panel included balanced representation from different
regions, disciplines, and backgrounds.
o Sixty-nine percent of respondents disagreed or strongly
disagreed that the level of effort necessary for the review
process was clearly communicated, and 78 percent disagreed or
strongly disagreed that panelists were given sufficient time to
review, score, and return scoring sheets to DHS prior to the panel
convention.
At the time of our review, DHS planned to continue to use a peer review
process to assess effectiveness, but did not indicate whether it would be
making changes to the process for fiscal year 2007.
^14DHS reported that a consensus on final scores was not required.
Instead, reviewers' scores within each panel were averaged.
Appendix V: UASI Grant Allocation Approach for Fiscal Year 2006
In fiscal year 2006, DHS used a new method to determine the amounts of
UASI grants to each of the 46 eligible urban areas, based primarily on the
risk and effectiveness assessments, but final allocation decisions were
made by the Secretary of Homeland Security. DHS reported that the aim of
considering both factors--risk and effectiveness--is to allocate and apply
HSGP resources to generate the highest return on investment and, as a
result, to strengthen national preparedness. The risk and effectiveness
scores did not automatically translate into funding amounts, but rather,
according to DHS, the scores informed the decisions made by DHS officials.
While all eligible urban areas that applied for UASI grants received
funding, DHS set priorities to determine how much each urban area would
receive. When making funding decisions, DHS prioritized those areas
estimated to have the highest risk of a successful terrorist attack, while
still rewarding those areas that offered effective ways to address
homeland security needs. As a result, the risk assessment was given a
greater weight than the effectiveness assessment when allocating funds.
DHS Allocation Tool Used to Categorize Urban Areas and Set Priorities
DHS established funding priorities before making allocation decisions. For
example, DHS officials told us the Secretary of Homeland Security selected
an approach that considered both the risk and effectiveness assessments
when making allocation decisions, rather than using the outcomes of only
one of the assessments. This approach combined the two assessments by
using a graphical tool--a two-by-two matrix--to create four categories
that would be used to set funding priorities (Figure 4 illustrates the
two-by-two matrix used by DHS). The four funding categories were: Category
I--higher risk, higher effectiveness; Category II--higher risk, lower
effectiveness; Category III--lower risk, higher effectiveness; and
Category IV--lower risk, lower effectiveness.
To create these four categories, DHS made judgments that affected the
category in which urban areas fell. For example, dividing lines were drawn
on the horizontal axis for effectiveness scores and the vertical axis for
risk scores to create the four categories. Specifically, DHS officials
told us they calculated a "natural inflection point" among the risk
rankings of the 46 eligible urban areas, thereby determining the dividing
line on the risk axis. DHS reported that about a third of the urban areas
were above the dividing line and therefore considered "higher risk" and
about two-thirds were below the line and thus, "lower risk." DHS officials
told us they selected the median of the effectiveness scores as the
midpoint on the horizontal axis, and those areas to the right of this
point were considered "higher effectiveness" and those to the left "lower
effectiveness." Each of the 46 eligible urban areas was plotted into one
of the four categories according to its combination of risk and
effectiveness scores.
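The categorization step can be sketched as follows. The scores and the
risk dividing line in this example are hypothetical; the effectiveness
dividing line follows the report's description of using the median.

    # Illustrative sketch only: hypothetical scores, made-up risk cut point.
    from statistics import median

    def categorize(areas, risk_cut, effectiveness_cut):
        """Assign each area to one of the four funding categories in figure 4."""
        result = {}
        for name, (risk, eff) in areas.items():
            if risk >= risk_cut and eff >= effectiveness_cut:
                result[name] = "I (higher risk, higher effectiveness)"
            elif risk >= risk_cut:
                result[name] = "II (higher risk, lower effectiveness)"
            elif eff >= effectiveness_cut:
                result[name] = "III (lower risk, higher effectiveness)"
            else:
                result[name] = "IV (lower risk, lower effectiveness)"
        return result

    areas = {"Area A": (90, 80), "Area B": (85, 40),
             "Area C": (30, 75), "Area D": (20, 35)}
    eff_cut = median(eff for _, eff in areas.values())
    print(categorize(areas, risk_cut=60, effectiveness_cut=eff_cut))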
Figure 4: DHS Allocation Tool Used in Fiscal Year 2006 UASI Funding
Determining the Final Allocations
According to DHS, it considered many different distributions of funding to
each of the four categories, and decided to give Category I the highest
funding priority and Category IV the lowest funding priority. The figure
above illustrates the funding priorities it reported making, in which each
circle represents a hypothetical urban area and the size of the circle
corresponds to the relative amounts of the grant awards (i.e., a larger
circle indicates a greater allocation amount). DHS conducted what it
described as an optimization process to produce many possible options of
funding amounts to each category. DHS told us that once the allotments to
categories were decided, DHS used a formula to determine the grant award
for each urban area. DHS stated that it decided to prioritize the outcomes
of the risk analysis over the effectiveness assessment, and so it made the
policy decision to give each urban area's risk score a weight of 2/3 and
the effectiveness score a weight of 1/3 in the formula. DHS
officials did not indicate whether or not they considered other weights
for the risk and effectiveness scores. DHS reported that some stakeholders
expressed frustration that the effectiveness assessment was not assigned a
greater weight, since the peer review process required considerable time
and effort. As was previously described in appendix IV, at the time urban
areas were completing their applications, DHS had not yet determined the
weights that would be applied to the risk and effectiveness scores. DHS
officials expect that risk and effectiveness scores will both factor into
allocation decisions for fiscal year 2007, but they do not currently know
whether the weights given to each will change.
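DHS did not describe the formula itself, so the following sketch shows one
plausible way such weights could translate into dollar amounts: a
hypothetical category allotment is split in proportion to each area's
weighted combination of normalized risk and effectiveness scores. The
proportional-split approach and the scores are assumptions for
illustration only.

    # Illustrative sketch only: the 2/3 and 1/3 weights come from the
    # report; the proportional-split formula and the scores are assumptions.
    def allocate(category_allotment, areas, w_risk=2/3, w_eff=1/3):
        """Split a category's allotment in proportion to weighted scores."""
        weighted = {name: w_risk * risk + w_eff * eff
                    for name, (risk, eff) in areas.items()}
        total = sum(weighted.values())
        return {name: round(category_allotment * score / total)
                for name, score in weighted.items()}

    # Hypothetical Category I areas with normalized 0-100 risk and
    # effectiveness scores.
    category_one = {"Area A": (90, 80), "Area B": (70, 95)}
    print(allocate(100_000_000, category_one))
    # {'Area A': 52525253, 'Area B': 47474747}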
DHS officials told us they presented funding options to the Secretary of
Homeland Security, who made the final decision about funding allocations.
The official from the Office of Grants and Training we spoke to did not
provide additional details about the information presented to the
Secretary to inform his decision, and did not know what other goals or
data may have factored into the allocation decision. DHS also reported
that it determined it needed to treat two urban areas differently from the
other urban areas when making funding decisions because it considered them
to be outliers in the risk analysis. DHS officials told us these areas
have consequences so great that they cannot be appropriately accounted for
in the risk model. DHS did not specify what methods it used to determine
the amount of these two UASI grants.
All of the 46 eligible urban areas that applied for a fiscal year 2006
UASI grant received funding. DHS reported that 70 percent of UASI funding
went to the higher-risk urban areas in Categories I and II of the
two-by-two matrix, and 45 percent of available funding went to the five
urban areas with the highest relative risk estimates. The total amount of
UASI funds DHS allocated in fiscal year 2006 decreased by 14 percent from
fiscal 2005, but individual funding percentage changes varied among the 46
grantees. Among the 46 urban areas, 15 experienced an increase in funding
and 28 saw a decrease. Three of the 35 areas
did not receive funding in fiscal year 2005, but were identified as
eligible to apply for funding through the risk assessment in fiscal year
2006. The total amount awarded to these three urban areas was $23,620,000.
The table below describes the allocations to each urban area in fiscal
years 2005 and 2006, and illustrates the percentage change between years.
Table 6: Percent Change in UASI Funding between Fiscal Year 2005 and
Fiscal Year 2006
Urban area^a Fiscal year 2005 Fiscal year 2006 Percent change in
allocation allocation funds from fiscal
year 2005 to 2006
Eligible areas through risk assessment
New recipients in fiscal year 2006
FL - Ft. Lauderdale 0 $9,980,000 -
Area
FL - Orlando Area 0 $9,440,000 -
TN -Memphis Area 0 $4,200,000 -
Increased funding
NJ -Jersey City/Newark $19,172,120 $34,330,000 79%
Area
NC -Charlotte Area $5,479,243 $8,970,000 64%
GA -Atlanta Area $13,117,499 $18,660,000 42%
WI -Milwaukee Area $6,325,872 $8,570,000 35%
FL - Jacksonville Area $6,882,493 $9,270,000 35%
MO - St. Louis Area $7,040,739 $9,200,000 31%
CA -Los Angeles/Long $69,235,692 $80,610,000 16%
Beach Area
IL - Chicago Area $45,000,000 $52,260,000 16%
MO - Kansas City Area $8,213,126 $9,240,000 13%
MI - Detroit $17,068,580 $18,630,000 9%
FL - Miami Area $15,828,322 $15,980,000 1%
Reduced funding
OR - Portland Area $10,391,037 $9,360,000 (10%)
TX -Houston Area $18,570,464 $16,670,000 (10%)
PA -Philadelphia Area $22,818,091 $19,520,000 (14%)
MD - Baltimore $11,305,357 $9,670,000 (14%)
CA -Bay Area $33,226,729 $28,320,000 (15%)
OH - Cincinnati Area $5,866,214 $4,660,000 (21%)
WA - Seattle Area $11,840,034 $9,150,000 (23%)
IN - Indianapolis Area $5,664,822 $4,370,000 (23%)
MN - Twin Cities Area $5,763,411 $4,310,000 (25%)
TX -San Antonio Area $5,973,524 $4,460,000 (25%)
HI - Honolulu Area $6,454,763 $4,760,000 (26%)
MA - Boston Area $26,000,000 $18,210,000 (30%)
OH - Cleveland Area $7,385,100 $4,730,000 (36%)
CA -Anaheim/Santa Ana $19,825,462 $11,980,000 (40%)
Area
DC -National Capital $77,500,000 $46,470,000 (40%)
Region
NY -New York City $207,563,211 $124,450,000 (40%)
OH - Columbus Area $7,573,005 $4,320,000 (43%)
TX - Dallas/Fort $24,355,870 $13,830,000 (43%)
Worth/Arlington Area
PA -Pittsburgh Area $9,635,991 $4,870,000 (49%)
LA -New Orleans Area $9,305,180 $4,690,000 (50%)
CO - Denver Area $8,718,395 $4,380,000 (50%)
Total funding for 35 $749,100,346 $642,520,000 (14%)
eligible areas
Sustainment areas^b
Increased funding
KY -Louisville Area $5,000,000 $8,520,000 70%
NE -Omaha Area $5,148,300 $8,330,000 62%
CA -Sacramento Area $6,085,663 $7,390,000 21%
FL - Tampa Area $7,772,791 $8,800,000 13%
Reduced funding
NV -Las Vegas Area $8,456,728 $7,750,000 (8%)
OK - Oklahoma City $5,570,181 $4,102,000 (26%)
Area
OH - Toledo Area $5,307,598 $3,850,000 (27%)
LA -Baton Rouge Area $5,226,495 $3,740,000 (28%)
CA -San Diego Area $14,784,191 $7,990,000 (46%)
NY -Buffalo Area $7,207,995 $3,710,000 (49%)
AZ -Phoenix Area $9,996,463 $3,920,000 (61%)
Total funding for 11 $80,556,405 $68,102,000 (15%)
sustainment areas
Total UASI funding $829,656,751 $710,622,000 (14%)
allocated to 46 urban
areas
Source: GAO analysis.
Notes:
a. For a description of the cities, counties, and other geographic
areas included in each urban area, see appendix II, table 3.
b. Sustainment area: an urban area that received UASI funding in
fiscal year 2005, but was not deemed eligible to apply through the
fiscal year 2006 risk assessment. However, DHS extended
eligibility to these areas for one additional grant cycle.
DHS Actions after the Fiscal Year 2006 UASI Grants Were Awarded
After DHS awarded the fiscal year 2006 UASI grants it took additional
steps to provide information about the grant determination process and to
gather feedback from stakeholders. These steps included providing award
letters that summarized the risk and effectiveness assessments for each
urban area, requiring states to conduct grant reporting activities, and
hosting an HSGP after-action conference.
o DHS provided individual award letters that included basic
descriptions of the risk and effectiveness assessments. The award
letter, which announced the amount of the urban area's fiscal year
2006 UASI award, also provided high-level feedback. For example,
counts of asset information and geographic attributes DHS used to
estimate relative risk were included in the letter. It also
described whether DHS's estimate of relative risk placed the urban
area in the (1) top 25 percent, (2) top 50 percent, (3) bottom 50
percent, or (4) bottom 25 percent, compared to the other eligible
urban areas. The letter did not provide the urban areas with their
specific risk score or ranking, however. Summary information was
also provided on the results of the effectiveness assessment,
including which investments were anticipated to be the most and
least effective. The award letter did not explain how the risk and
effectiveness assessments were used by DHS to determine final
grant allocation amounts.
o Through its grant reporting process, DHS gathered additional
information about how the fiscal year 2006 UASI grants were to be
spent. Once it allocated the UASI funds, DHS allowed the recipient
urban areas to decide how to spend the grant under some conditions
with specific reporting requirements. ^15 According to DHS's
fiscal year 2006 grant guidance, grants were to be awarded to the
respective State Administrative Agencies, which were required to
notify DHS within 60 days of the award date as to how the grant
funds were allocated.^16 DHS also reported that grant recipients
would be monitored periodically to ensure that the program goals,
objectives, timeliness, budgets, and other related program
criteria were being met. Officials from DHS's Office of Grants and
Training reported that DHS plans to ask grant recipients how they
spent their fiscal year 2006 funds. DHS officials told us that
they plan to consider this information when making decisions for fiscal
year 2008 UASI allocations.
o DHS convened a Homeland Security Grant Program After-Action
conference. At the July 2006 conference DHS gathered feedback on
the UASI grant award process. The conference included working groups
on homeland security planning, the HSGP guidance and application,
the risk assessment, and the effectiveness assessment. DHS
officials told us that the conference provided a feedback loop
intended to bolster stakeholder support and promote transparency.
The state and local partners who participated in the working
groups at the conference produced 32 substantive recommendations
to improve upon the HSGP process for fiscal year 2007 and beyond.
For example, one of the risk assessment working group's
recommendations was that DHS should provide detailed briefings to
state and local partners on the core components of the risk
methodology used in the fiscal year 2006 process as one step to
improve the transparency of the risk analysis process. The
effectiveness assessment working group recommended eliminating the
overall investment justification score, as it believed it was not
beneficial and was not a true representation of the quality of the
application. DHS reported that state and local partners agreed the
overall fiscal year 2006 planning process was the most effective
and constructive thus far, and that the process helped to
standardize the focus of state and local programs around key
homeland security capabilities.
According to DHS officials, stakeholder feedback on the fiscal year 2006
UASI grant process has been obtained and is being considered and
incorporated into the fiscal year 2007 process where appropriate. DHS
stated it will continue to regularly seek stakeholder input and feedback
to ensure that state and local governments are fully informed and that the
process proceeds in a collaborative fashion in fiscal year 2007. For
example, DHS reported plans to convene stakeholder meetings to receive
input on how to make specific grant programs more user-friendly and
transparent, including a midterm review during the HSGP application
process.
^15DHS required that conditions established by peer reviewers be met
before it funded an investment with a score below a certain threshold.
^16Subsequent information on actual expenditures was to be reported every
6 months through the Biannual Strategy Implementation Report.
Appendix VI: GAO Contact and Staff Acknowledgments
GAO Contact
William O. Jenkins, Jr., Director, GAO Homeland Security and Justice
Issues Team, (202) 512-8777 ([email protected])
Acknowledgments
In addition to the contact named above, the following individuals from
GAO's Homeland Security and Justice Team also made contributions to this
report: William Sabol, Assistant Director; Chris Keisling, Assistant
Director; John Vocino, Analyst-in-Charge; Leslie Sarapu; Lacy Vong; and
Kathryn Godfrey. Also contributing were Charles Bausell, Jr., Economist;
David Alexander, GAO Applied Research and Methodology Team; and Frances
Cook, GAO Office of General Counsel.
(440574)
*** End of document. ***