Managing for Results: Strengthening Regulatory Agencies' Performance
Management Practices (Letter Report, 10/28/1999, GAO/GGD-00-10).
Pursuant to a congressional request, GAO provided information on
performance management and measurement practices that could position
regulatory agencies to more effectively implement the Government
Performance and Results Act of 1993.
GAO noted that: (1) GAO gathered information from 23 federal and state
organizations that GAO or other credible sources identified as using or
planning to use a variety of useful practices to enhance specific
aspects of their performance management and measurement processes; (2)
GAO has grouped these practices into the following five categories: (a)
restructure the organization's management approach to become more
performance-oriented; (b) establish relationships outside of the
organization to enhance performance; (c) refine performance goals,
measures, and targets to better translate activities into results; (d)
strengthen analytical capabilities and techniques to better meet
performance management information needs; and (e) assess
performance-based management efforts on a continuous basis to identify
areas for improvement; (3) the organizations, although they had
different missions, sizes, and organizational structures, said they
consistently recognized that these practices are important in their
efforts to develop a stronger results orientation; (4) the organizations
used the practices to varying degrees; (5) thus, the practices did not
appear to be a function of any particular organizational characteristic;
(6) GAO believes the practices would be readily transferable to the
federal financial institution regulatory agencies or other governmental
agencies seeking to improve their implementation of the Results Act; (7)
in addition, the practices are consistent with those identified in GAO's
previous reports; and (8) these reports, for example, described
approaches agencies were taking to address analytical and technical
challenges in measuring program performance, align employee performance
with organizational missions and goals, and address the influence of
external factors in developing performance measures.
United States General Accounting Office
GAO
Report to the Chairman, Committee on Banking and
Financial Services, House of Representatives
October 1999
GAO/GGD-00-10
MANAGING FOR RESULTS
Strengthening Regulatory Agencies' Performance
Management Practices
Contents
Letter

Appendix I: Practice Category 1: Restructure Management Approach
Appendix II: Practice Category 2: Establish Relationships Outside of the Organization
Appendix III: Practice Category 3: Refine Performance Goals, Measures, and Targets
Appendix IV: Practice Category 4: Strengthen Analytical Capabilities and Techniques
Appendix V: Practice Category 5: Continue Improving Performance-Based Management

Bibliography
Related GAO Products
Tables
Table IV.1: Florida Measure Definitions

Figures
Figure I.1: Example of Mission Statement Used to Develop Strategic Goals-U.S. Coast Guard
Figure I.2: Example of Relating Efforts to Outcomes-U.S. Coast Guard
Figure II.1: Example of a Survey for Client Input on Agency Performance-Florida Department of Banking and Finance
Figure II.2: Matrix Showing Crosscutting Program Relationships-Nuclear Regulatory Commission
Figure III.1: Constructing National Highway Traffic Safety Administration's Program Logic Model
Figure III.2: Decisionmaking Tiers
Figure III.3: Examples of Performance Comparisons
Figure III.4: Examples of Targets Set Using Baseline Data-U.S. Coast Guard
Figure IV.1: National Highway Traffic Safety Administration Matrix to Define Problems and Strategies
Abbreviations

BCI     Border Coordination Initiative
DOT     Department of Transportation
EPA     Environmental Protection Agency
FAA     Federal Aviation Administration
FDA     Food and Drug Administration
FDBF    Florida Department of Banking and Finance
FDI     Florida Department of Insurance
FGFFC   Florida Game and Freshwater Fish Commission
FHCA    Florida Agency for Health Care Administration
FSIS    Food Safety and Inspection Service
INS     Immigration and Naturalization Service
MDES    Minnesota Department of Economic Security
MDOC    Minnesota Department of Corrections
MDOR    Minnesota Department of Revenue
MDOT    Minnesota Department of Transportation
MNA     Mission Needs Analysis Process
MOE     Measures of Effectiveness
MOEA    Minnesota Office of Environmental Assistance
MPCA    Minnesota Pollution Control Agency
NHTSA   National Highway Traffic Safety Administration
NOAA    National Oceanic and Atmospheric Administration
NRC     Nuclear Regulatory Commission
NSF     National Science Foundation
OPPAGA  Office of Program Policy Analysis and Government Accountability
OSFI    Office of the Superintendent of Financial Institutions
OSHA    Occupational Safety and Health Administration
PBPM    planning, budgeting, and performance management
TDOB    Texas Department of Banking
B-281899
October 28, 1999
The Honorable James A. Leach
Chairman, Committee on Banking
and Financial Services
House of Representatives
Dear Mr. Chairman:
As part of your oversight of federal agencies
that are responsible for regulating financial
institutions,1 you expressed concern over the
progress these agencies have made in implementing
the Government Performance and Results Act of 1993
(Results Act). At your request, in September and
November 1998, we provided you with our
observations on how the financial institution
regulatory agencies could improve their annual
performance plans.2 Subsequently, on November 10,
1998, you requested that we undertake a
performance management and measurement "best
practices"3 study to identify model approaches
that might help these agencies in their efforts to
better implement the Results Act.
This report summarizes information provided
in a May 26, 1999, briefing to representatives
from the federal financial institution regulatory
agencies, your office, and other federal
representatives. In the briefing, we presented
performance management and measurement practices
that could position the agencies to more
effectively implement the Results Act. Details on
these practices are presented in appendixes I
through V. Additionally, a bibliography and a list
of related GAO products are included at the end of
this report.
Results in Brief
We gathered information from 23 federal and state
organizations that we or other credible sources
identified as using or planning to use a variety
of useful practices to enhance specific aspects of
their performance management and measurement
processes. We have grouped these practices into
the following five categories, which are also
detailed in appendixes I through V:
- Restructure the organization's management approach to become more performance-oriented.
- Establish relationships outside of the organization to enhance performance.
- Refine performance goals, measures, and targets to better translate activities into results.
- Strengthen analytical capabilities and techniques to better meet performance management information needs.
- Assess performance-based management efforts on a continuous basis to identify areas for improvement.
Each appendix describes individual practices
within each category and provides examples of how
one or more of the state and federal organizations
we reviewed said they have implemented or plan to
implement these practices.
The organizations, although they had different
missions, sizes, and organizational structures,
said they consistently recognized that these
practices are important in their efforts to
develop a stronger results orientation. The
organizations used the practices to varying
degrees. Thus, the practices did not appear to be
a function of any particular organizational
characteristic. We believe the practices would be
readily transferable to the federal financial
institution regulatory agencies or other
governmental agencies seeking to improve their
implementation of the Results Act. In addition,
the practices are consistent with those identified
in our previous reports, which are listed at the
end of this report. These reports, for example,
described approaches agencies were taking to
address analytical and technical challenges in
measuring program performance, align employee
performance with organizational missions and
goals, and address the influence of external
factors in developing performance measures.
Background
Congress passed the Results Act as part of a
legislative framework to instill performance-based
management in the federal government. The Results
Act establishes a management system to set agency
goals for program performance and to measure
results against those goals. In enacting the Act,
Congress and the administration realized that the
transition to results-oriented management would
not be easy. For that reason, the Act provided for
a phased approach to implementation.
Implementing the Results Act in a regulatory
environment is particularly challenging. In the
past, regulatory agencies have cited numerous
barriers to their efforts to establish results-
oriented goals and measures. These barriers
included problems in obtaining data to demonstrate
results, accounting for factors outside of the
agency's control that affect results, and dealing
with the long time periods often needed to see
results.
Over the past several years, we have issued
reports that identified practices for improving
Results Act implementation in federal agencies.
These reports have focused on, among other things,
overcoming barriers specific to regulatory
agencies, improving the usefulness of annual
performance plans to decisionmakers, and measuring
program results that are under limited federal
control. These reports point out the depth and
scope of management practices needed to
successfully implement performance-based
management as envisioned under the Results Act.
At your request, during the last year, we
reviewed the annual performance plans of the
federal financial institution regulatory agencies.
Our review identified several ways in which the
agencies could improve their plans, including
making performance goals more results-oriented;
more clearly linking performance goals and
measures with strategic goals; more fully
describing crosscutting efforts with other
agencies; more fully explaining how strategies and
resources would be used to achieve agency goals;
and providing more details on data verification
and validation efforts and data limitations.4
Scope and Methodology
To identify practices potentially useful to the
federal financial institution regulatory agencies,
we first gathered views from these agencies to
identify issues they believed we should consider
in our work. Overall, the agencies were interested
in the specific activities, organizational
support, and incentives that would make up a
successful performance management approach, and
they wanted us to use specific examples whenever
possible. To develop this information, we selected
several government organizations-mostly regulatory
in nature-that our past work or other credible
sources identified as having, or planning to
implement, all or parts of performance management
and measurement practices that other agencies
might find useful.
At the federal level, the organizations we
reviewed included the Federal Aviation
Administration, the Food and Drug Administration,
the U.S. Customs Service, the U.S. Coast Guard,
the National Highway Traffic Safety
Administration, the Environmental Protection
Agency, the Occupational Safety and Health
Administration, the Food Safety and Inspection
Service, the National Science Foundation, and the
Nuclear Regulatory Commission. In addition, we
reviewed the performance management approach of
Canada's Office of the Superintendent of Financial
Institutions.
At the state level, the organizations we reviewed
included the following five Florida state
agencies: the Office of Program Policy Analysis and Government Accountability; the Department of
Banking and Finance; the Department of Insurance;
the Game and Freshwater Fish Commission; and the
Agency for Health Care Administration. We also
reviewed the following six Minnesota state
agencies: the Department of Economic Security, the
Pollution Control Agency, the Department of
Transportation, the Office of Environmental
Assistance, the Department of Corrections, and the
Department of Revenue. Lastly, we reviewed the
Texas Department of Banking.
We collected data on the performance management
and measurement efforts of these organizations. We
did not directly verify the accuracy of these
data. Because of the recent adoption of the
practices, we were unable to determine the extent
to which the organizations' emphasis on results
was directly traceable to specific practices.
However, we did ask the organizations to review
our results for accuracy and completeness. Each
organization agreed with our characterization of
the information discussed in this report. To
supplement the organizational information, we
reviewed available public and private sector
performance management and measurement literature.
From the organizations we studied, supplemented by
our literature review, we identified
organizational performance management and
measurement practices that appeared useful for the
federal financial institution regulatory agencies.
As part of our methodology, we also asked several
government performance management experts to
review these practices and our interpretation of
the information. These experts generally concurred
with our observations and characterizations.
We did our work between September 1998 and May
1999 in accordance with generally accepted
government auditing standards.
We are providing copies of this report to
Representative John J. LaFalce, Ranking Minority
Member of this Committee; the Honorable John D.
Hawke, Jr., Comptroller of the Currency; the
Honorable Alan Greenspan, Chairman, Board of
Governors of the Federal Reserve System; the
Honorable Donna Tanoue, Chairman, Federal Deposit
Insurance Corporation; the Honorable Norman E.
D'Amours, Chairman, National Credit Union
Administration; and the Honorable Ellen Seidman,
Director, Office of Thrift Supervision. We will
also make copies available to others on request.
This report was prepared under the direction of
Kane Wong, Assistant Director. Key contributors to
this assignment were Sharon Caudle and Patrick
Ward. Please contact me at (202) 512-8678 or Mr.
Wong at (415) 904-2000 if you or your staff have
any questions.
Sincerely yours,
Richard J. Hillman
Associate Director, Financial Institutions
and Markets Issues
_______________________________
1For purposes of this report, the federal
financial institution regulatory agencies include
the Federal Deposit Insurance Corporation, the
Federal Reserve Board, the National Credit Union
Administration, the Office of the Comptroller of
the Currency, and the Office of Thrift
Supervision.
2The Results Act: Observations on FDIC's Annual
Performance Plan (GAO/GGD-98-190R, Sept. 15,
1998), The Results Act: Observations on the
Federal Reserve's 1998-99 Biennial Performance
Plan (GAO/GGD-99-9R, Nov. 9, 1998), The Results
Act: Observations on NCUA's Annual Performance
Plan (GAO/GGD-98-192R, Sept. 15, 1998), The
Results Act: Observations on OCC's Annual
Performance Plan for Calendar Year 1998 (GAO/GGD-
98-189R, Sept. 15, 1998), and The Results Act:
Observations on OTS' Annual Performance Plan (GAO/GGD-98-191R, Sept. 29, 1998).
3"Best practices" are generally recognized as the
processes, practices, and systems identified in
organizations that could provide models for other
organizations.
4GAO/GGD-98-190R, GAO/GGD-99-9R, GAO/GGD-98-192R,
GAO/GGD-98-189R, and GAO/GGD-98-191R.
Appendix I
Practice Category 1: Restructure Management Approach
Summary
In the first practice category,1 the organizations
said they restructured their management approach
to become more performance-oriented. Practices in
this area included the following:
1. Strengthen the organizations' performance-
based management approach.
2. Enhance organization ownership and
coordination of performance management efforts.
3. Redesign responsibility and accountability
structures.
Practice 1: Strengthen Management Approach
As a first practice, the organizations said they
strengthened their performance-based management
approach. The characteristics of this practice
included the following:
- Make implementation of performance-based management and achievement of outcomes top organizational priorities.
- Conduct a comprehensive assessment of the organizations' internal and external environments.
- Establish a high-level performance management support capability.
- Develop core performance-based management competencies through skill development and operational pilots.
Make Implementation and Achievement Top Priorities
The organizations said their top management was committed to the successful implementation of performance-based management and the achievement of agreed-upon outcomes, making both top agency priorities. Top management buy-in
and commitment, a high level of involvement, and
consistency in leadership characterized the
organizations. According to the organizations, top
managers were involved in all aspects of
performance-based management, from developing a
performance monitoring and evaluation system to
identifying and assessing key measures. For
example:
- The Federal Aviation Administration (FAA)
said that to demonstrate support, the agency head
needed to attend key meetings, support requests
for the time and resources to work on strategic
planning, and be willing to talk with managers and
staff about performance issues and processes.
- To show high-level commitment, the U.S.
Customs Service (Customs Service) said the agency
put an Assistant Commissioner in charge of the
performance management redesign in its trade
compliance program.
The organizations we studied also said they used
several methods of communication to explain the
purpose, processes, implementation strategies, and
staff responsibilities for performance management
and measurement. For example:
- The Minnesota Office of Environmental
Assistance (MOEA) said that the agency stressed
outcomes in major policy reports. MOEA shared
performance information with staff and detailed
how the information supported major agency
decisions.
- To set the tone for major changes in its
management approach, the Minnesota Pollution
Control Agency (MPCA) said it developed
descriptions of how the organization would be
changing its management approach by, for example,
developing shared goals, using environmental
outcomes, creating alliances with others, and
becoming a learning organization.
In addition, the organizations said top management
ensured that the organizations were accountable
for the implementation of performance-based
management by rigorously tracking and evaluating
action items designed to implement strategic plans
and meet performance expectations. Variances
between actual performance and expected
performance targets were promptly identified and
acted upon. For example:
- The Food Safety and Inspection Service (FSIS) said line managers (1) were involved in preparing plans for their own areas and (2) regularly discussed with top management the activities under way to implement the plans.
- The National Science Foundation (NSF) and the
Minnesota Department of Economic Security (MDES)
said they used management meetings to discuss
performance management. NSF said it had a
permanent slot on the Director's agenda to address
Results Act implementation. MDES said the agency
ensured that performance management issues were
discussed regularly at management meetings.
Conduct a Comprehensive Assessment of Internal and External Environments
As a first step toward developing a tightly
integrated and comprehensive strategic management
approach, the organizations said they performed
comprehensive internal and external assessments.
These assessments generally included identifying
customers and stakeholders and assessing the
agency's mission, vision statement, and operating
principles. For example:
- FAA said its overall assessment process was
to examine FAA's legislation, define mission
areas, and define measures for the intended
outcomes.
- At the beginning of its performance
management efforts, MDES said it conducted a self-
assessment using the Minnesota Quality Award
criteria, which were based directly on Malcolm
Baldrige award criteria. According to MDES, this
effort allowed the agency to systematically assess
core processes and, at the same time, more
directly engage agency leadership in performance-
based management.
- The Food and Drug Administration (FDA) said
that after passage of the FDA Modernization Act of
1997, it conducted an assessment of its external
and internal environments to identify challenges
the agency would face over the next several years.
According to the agency, the assessment included
identifying FDA's statutory requirements and
public expectations, evaluating environmental
factors affecting the agency's future actions, and
reviewing current program performance.
Establish a High-Level Organizational Capability
The organizations said they recognized that even
when performance-based management had the
involvement and support of top management, there
was still a need for a central point to facilitate
and support performance management. Therefore,
they established central units or individuals to
help coordinate the organizations' performance
management efforts, ensure consistency across the
organizations, and provide training on
methodologies and approaches. These individuals or
units were to act as internal consultants, and
they were not responsible for developing plans,
goals, targets, or measures. Those aspects were
clearly the responsibility of program managers.
For example:
- The Customs Service said its Office of Budget
Formulation and Planning served as the facilitator
for Results Act implementation, with function
managers and process owners developing their own
performance plans.
- The Minnesota Department of Transportation
(MDOT) said it used its Data Services Office to
facilitate the agency's performance measurement
efforts. In addition, the office acted as a
clearinghouse for best practices in measurement
and a source for benchmarking data.
Develop Core Performance-Based Management Competencies
The organizations said they recognized that to
successfully implement results-oriented
organizational strategies, they needed managers
and staff to be competent in at least the basics
of performance management. According to the
organizations, the competencies were needed for
two purposes. The first purpose was to understand
the rationale of performance management and how
measurement could be used. The second purpose was
to go beyond understanding and actually put
performance management and measurement to use in
directly improving organizational and program
performance. The organizations said they sought to
build the necessary competencies through training,
on-the-job activities, and the use of pilot
efforts in lead programs or organizational units.
The organizations said they offered in-depth
training on performance management. For example:
- The National Highway Traffic Safety Administration (NHTSA) said it conducted a 3-day "train the trainer" seminar on Results Act-related areas early in its Results Act efforts. NHTSA also joined the Consortium for Continuous Improvement, which the agency found to be a useful source of training materials.
- MDES said that it used a consultant to
provide a 3-day training session for all managers
and supervisors on various topics, such as quality
concepts and performance requirements. Later, the
agency provided 40 hours of similar training for
the remaining 1,500 employees.
- The Florida Department of Banking and Finance
(FDBF) said it provided performance measurement
training as a part of new employee orientations
and provided supervisors with training in
performance management basics. FDBF also said it
conducted yearly refresher training for staff.
According to the organizations, pilot efforts
within the organizations were used to motivate and
provide "lessons learned" for other parts of the
organizations or to focus on specific performance
management tasks. For example:
- FAA said one FAA line of business2 with more
experience in performance management served as a
model for the rest of the agency's lines of
business for Results Act implementation.
- MDOT said it used divisions "ahead of the
curve" in its performance management efforts. The
agency said these divisions were more practiced in
using performance data to make decisions and
support funding requests, and they motivated other
divisions to improve their performance management
efforts.
- The Environmental Protection Agency (EPA)
said its Region 10 was leading a pilot to
establish baselines for water quality and measure
the effects of EPA activities on dairy farmer
behavior and water quality.
Practice 2: Enhance Ownership and Commitment
As a second practice, the organizations said they
enhanced the ownership of and commitment to their
performance management efforts across their
organizations. The characteristics of this
practice included the following:
- Have management and staff fully participate in performance management decisionmaking and agree on program mission areas, goals, measures, and targets.
- Develop a well-defined and tightly focused mission statement to direct agency efforts.
- Develop a "family" of plans to implement strategies across the organization.
- Align the organizational structure and coordinate management interrelationships.
Have Management and Staff Fully Participate
The organizations said they ensured that managers
and staff at all levels extensively participated
in the development of goals, targets, and
measures, seeking to secure agreement between
management and staff on the program mission areas,
goals, targets, and measures that would be used
for detailed planning and program management. In
addition, the managers and staff actively
participated in the implementation and tracking of
performance results. For example:
- FAA said that its unions and key managers
used a National Partnership Council to address
issues of common concern, and that the Council
participated in the development of the agency's
strategic plan. In addition, FAA said all of its
employee associations received the plan for
comments. FAA program managers also had to agree
on agency mission areas and measures. Further, the
agency said there was a conscious effort to focus
on the most important results or outcomes,
resulting from a long process of negotiation with
lines of business to agree on the goals and
measures.
- The Texas Department of Banking (TDOB) said
it used an iterative process to share a draft
strategic plan with field staff, obtain comments,
and then redistribute the draft. TDOB's process
was repeated three times before final agreement
was reached. The agency also said it included key
divisional staff on a strategic planning task
force.
- Canada's Office of the Superintendent of
Financial Institutions (OSFI) said the agency's
Performance Measure Advisory Committee-tasked with
developing a performance measurement framework-met
with directors and executives to understand what
measures would be useful to them.
- The Florida Agency for Health Care
Administration (FHCA) said it made extensive use
of workgroups, using them to develop measures,
educate staff, enhance interdivisional
communication, and obtain feedback from the
legislature on FHCA's measures.
Use a Well-Defined and Tightly Focused Mission Statement
The organizations said they built mission
statements that were focused, yet would encompass
all of the programs within the organizations. The
organizations said they found that a clear,
concise mission statement formed the foundation
for a coordinated, balanced set of strategic
goals, performance measures, and strategies to
implement the goals. Without such a mission
statement, the organizations said they found it
difficult to develop an appropriate hierarchy of
goals and strategies across the organizations and
to clearly relate the associated outputs and
outcomes to the organizations' missions. For
example:
- The Nuclear Regulatory Commission (NRC) said
it started its Results Act efforts with the
agency's mission statement, using it to formulate
"what if" questions in linking strategic goals to
its mission. That is, the agency said it assessed
whether pursuing a particular strategic goal would
lead to fulfilling its mission.
- TDOB said it reduced its mission statement
from two paragraphs to one sentence by focusing on
key outcomes and removing any discussion of goals
and strategies. The new mission statement was "to
promote a stable state banking and financial
services environment and provide the public with
convenient, safe, and competitive financial
services."
- FAA said its mission statement-to provide a
safe, secure, and efficient global aerospace
system that contributes to national security and
the promotion of U.S. aerospace safety-translated
into its three mission-based strategic goals on
safety, security, and system efficiency.
- As shown in figure I.1, the U.S. Coast Guard
said it translated its mission statement-to
protect the people, the environment, and the
maritime security of the United States-into five
strategic goals that described the outcomes the
agency sought to achieve or influence over the
long term. These goals included the major outcome
areas of safety, protection of natural resources,
mobility, maritime security, and national defense.
The Coast Guard said that the agency's programs,
policies, facilities, processes, procedures,
activities, and requirements all should ultimately
be linked to achieving the agency's mission,
vision, and strategic goals. Figure I.2 shows a
diagram used by the Coast Guard to illustrate the
relationship of effort-what the Coast Guard
does-to outcomes-why the Coast Guard undertakes
its efforts. The Coast Guard said it had five
traditional roles that are categorized into
specific mission areas for organizational and
administrative purposes. Activities performed in
each mission area were to contribute to one or
more of the agency's strategic goals, leading to
accomplishment of the Coast Guard's mission.
Figure I.1: Example of Mission Statement Used to
Develop Strategic Goals-U.S. Coast Guard
Source: U.S. Coast Guard.
Figure I.2: Example of Relating Efforts to
Outcomes-U.S. Coast Guard
Source: U.S. Coast Guard.
Design a Family of Plans Across the Organization
The organizations said they saw a need to direct
and coordinate performance management and
measurement efforts within each of their
organizations, and found that using a family of
plans encouraged the direct linkage of strategic
goals and measures to operational and support
goals, measures, and related activities. By using
performance plans at all levels of an
organization, each level's goals could be
carefully integrated with those of the other
levels, allowing all of the organization's
strategies and activities to be oriented toward
achieving the principal strategic goals. The
organizations also said that a family of plans
helped prevent the occurrence of contradictory
goals across the organization. In addition, they
said a hierarchy of plans was developed to
recognize that different management levels need
different performance information, and that goals,
objectives, and measures become more meaningful if
they relate to the appropriate level of
responsibility and control. The use of measures
targeted at different decisionmaking tiers is
discussed further in practice 7.
Following are examples of organizations' use of a
family of plans:
- The Coast Guard said it had a complete family
of plans illustrating how planning efforts at
various levels of the Coast Guard were intended to
be linked. The agency said the family of plans was
designed to facilitate the communication of vital
information for decisionmaking purposes between
the Coast Guard's operational and logistics
components in the field and at headquarters.
According to the agency, the family of plans
started with the Coast Guard 2020-a broad internal
and external environmental scan-which outlined
probable challenges and opportunities that the
Coast Guard may face in the coming decades. The
strategic plan served as the implementation
vehicle for Coast Guard 2020 and was developed to
provide focus and alignment for the development of
business plans and specialized plans. The
performance plan showed how the agency intended to
translate the resources it has, and those it was
requesting, into performance outcomes for a
specific budget year. Other parts of the family of
plans included (1) the performance report that
detailed the annual level of performance actually
achieved; (2) the annual budget that detailed the
resources needed to fund operations and logistics
activities; (3) specialized plans for major
capital assets, such as workforce and systems; (4)
business plans for headquarters strategies,
measures, objectives, and resources; and (5)
regional strategic assessments that were
assessments of risk, threat, opportunity, and
demand as well as resource requirements and major
issues from area and district commanders for
incorporation into headquarters business plans and
the agency strategic plan.
- FAA said it used a linkage across several
primary documents, as follows: (1) FAA's strategic
plan coordinated with the DOT plan, (2) FAA's
annual performance plan, (3) FAA's line of
business and performance plans, and (4) FAA's
performance agreement with the DOT Secretary.
According to FAA, the agency's lines of business
prepared their own business plans, complete with
their own performance measures. The plans
described the work and work-related activities
that each major organizational unit would
undertake in the next several years.
- NRC said it developed operating plans for
each of its programs, covering program
commitments, significant information technology
initiatives, program assumptions, self-assessments
and evaluations, and a summary of quarterly
resource changes. The program commitments included
various items, such as planned accomplishments,
resources, milestones, measures, status,
activities, and quarterly targets.
- MDOT said it operated from a family of plans
that included a statewide strategic plan, an
agencywide strategic plan, district strategic
plans, and an upcoming agencywide business plan.
Change Organizational Structure and/or Management Interrelations
The organizations we studied said they followed
the principle that "form follows function," or in
this case, form follows the intended results
described in the mission statement and the
strategic goals. The organizations said they
evaluated whether their organizational structures
aided or hindered meeting performance
expectations, and made the necessary changes. For
example:
- Using a business process improvement
methodology, the Customs Service said it
identified its core and mission support processes
and managed through those processes. Its outcome-
oriented core processes included trade compliance
(the commercial importation of merchandise),
passenger processing (the processing of passengers
entering and leaving the United States by vehicle,
vessel, and air), and outbound processing (the
commercial exportation of merchandise). Mission
support processes included information and
technology, financial, and human resources
processes. The agency said it assigned a process
owner to each of its processes to ensure
accountability.
- FDBF said it defined separate program areas
by first identifying the ultimate outcomes desired
by the agency, such as consumer protection or bank
safety and soundness.
- The Minnesota Department of Corrections
(MDOC) said the agency organized its divisions by
customer, such as adult facilities and juvenile
facilities, with each division required to relate
its efforts to the agencywide outcome goals.
- MPCA said it was restructuring its divisions
so that it could attack environmental problems
more holistically and avoid duplication. Instead
of being organized around four "media"
divisions-Air Quality, Ground Water and Solid
Waste, Hazardous Waste, and Water Quality-the
agency was becoming geographically organized,
allowing different district offices to focus their
resources on the most serious problems faced in
their area.
The organizations said they also recognized that
coordination among managers was important in
achieving performance goals and, therefore,
crafted operating procedures and activities to
foster this coordination, especially when several
program or functional areas shared similar goals.
For example:
- FAA said the success of the FAA plan required
assigning both accountability and coordination
within the organization. FAA lead and support
organizations were designated for each goal and
project. According to FAA, each lead organization
called together managers of key supporting
organizations to discuss interrelationships, what
is required by whom and when, and how the goal or
project would be accomplished.
- FSIS said that the performance management efforts helped program managers work together by laying out common goals and describing how managers should interact with each other to meet the goals. This provided a structure for implementing the goals and identified the assignments and tasks needed to carry them out.
Practice 3: Redesign Responsibility and Accountability Structures
As a third practice, the organizations said they
redesigned organizational responsibility and
accountability structures. The characteristics of
this practice included the following:
- Target daily activities and projects to support strategies to implement goals.
- Integrate performance goals, measures, and costs into budget structures and decisionmaking.
- Link performance appraisals to performance accountability and responsibility.
Target Daily Activities to Implement Performance Goals
The organizations said the family of plans
previously mentioned was to align strategic goals
and lower-level organizational goals. As a further
alignment step, the organizations said they
clearly linked organizational strategies-which
were to reflect daily activities and projects-to
strategic goals. According to the organizations,
this approach encouraged an outcome orientation
down to the task level in the organization,
thereby building a clearly linked hierarchy of
goals, objectives, measures, and implementation
activities. Performance management became a clear
part of how the organizations were to manage day
to day in such areas as securing and justifying
resources and defining accountability. For
example:
- EPA said it used more detailed strategic plan
objectives with subobjectives for annual
performance goals. Shorter term activities
indicated the contribution of lower levels of the
organization to the strategic goals.
- FSIS said its strategic plan was coordinated
with an agency database containing 26 activities
that were used to track, on a monthly basis,
specific implementation tasks assigned to staff
for the agency's new meat and poultry safety
program.
- NHTSA said each program office had defined
milestones for activities and projects supporting
strategies for the annual performance plan, such
as completing rulemaking by a certain date.
- The Minnesota Department of Revenue (MDOR)
said the agency used information from each of its
functional areas in its daily management to
compare various measures to current targets as
well as the results achieved in prior years.
According to the agency, the performance
information was contained in reports available to
all employees over the agency's network, all
updated at specific times. For example, the agency said the report on the number of days taken to process refunds for paper tax returns was updated weekly. The report provided information on the
year-to-date count for specific processing times,
including the target of processing the returns
within 90 days if received by April 1, and 120
days if received after April 1. The report
contrasted this current information with the
processing time performance of the past 2 calendar
years.
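To make the reporting logic above concrete, the following minimal sketch (ours, not MDOR's system; the record layout and function names are assumptions for illustration) shows how year-to-date counts could be tallied against the 90- and 120-day processing targets:

    from datetime import date

    # Hypothetical illustration of the MDOR-style refund report described
    # above: paper returns received by April 1 should be processed within
    # 90 days; returns received later, within 120 days.
    APRIL_1 = date(1999, 4, 1)

    def allowed_days(received):
        # The target processing time depends on when the return arrived.
        return 90 if received <= APRIL_1 else 120

    def year_to_date_summary(returns):
        # Count returns processed within and beyond their target.
        on_time = late = 0
        for received, processed in returns:
            days_taken = (processed - received).days
            if days_taken <= allowed_days(received):
                on_time += 1
            else:
                late += 1
        return {"on_time": on_time, "late": late}

    # Example: one return inside the 90-day window, one beyond 120 days.
    sample = [
        (date(1999, 3, 15), date(1999, 5, 1)),   # 47 days: on time
        (date(1999, 4, 20), date(1999, 9, 30)),  # 163 days: late
    ]
    print(year_to_date_summary(sample))  # {'on_time': 1, 'late': 1}

A report built this way can place the year-to-date counts next to the same counts for the past 2 calendar years, which is the comparison the agency described.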
Integrate Performance Management Into Budgetary Processes
The organizations said they recognized that one of
the biggest incentives for performance-based
management was seeing information on results
integrated into budgetary structures and
decisionmaking. The organizations said they
aligned their budgets with program activities,
which, in turn, were tied to program goals,
targets, and measures. The organizations said they
were very specific in defining the direct and
indirect costs of the activities that produced
outputs, especially those that could be linked to
intermediate or final outcomes. For example:
- TDOB said it used performance measurement to
allocate and prioritize the use of its limited
resources. When TDOB did not meet targets in one
area, the agency said it could divert resources
from an area where the agency had exceeded its
performance targets.
- The Coast Guard said it was using activity-
based costing to identify direct and indirect
costs in program areas and developing standard
rates for time and for fully loaded costs.
According to the Coast Guard, this will enable it
to identify all costs to the activity level and
link them to outcome areas. The agency said it was
also refining its resource allocation process to
align its assets with mission requirements.
- NRC said it was implementing an integrated
and disciplined system to improve its processes
for planning, budgeting, and performance
management (PBPM). According to NRC, the PBPM
system was to establish a process for defining
agency goals, develop cost-effective strategies to
achieve those goals, determine the resources
needed to implement the agency's strategic
direction, and measure and assess the agency's
progress.
- FAA said it was in the process of developing
a cost accounting system that linked outputs to
activities and costs. FAA said it already had a
formal agency needs assessment process in place,
called the Mission Needs Analysis Process (MNA).
This formal process, for example, called for an
assessment of the need for new systems and
equipment to meet the mission and strategic goals
of the agency that related to future National
Airspace System capital requirements. According to
FAA, each organization within the agency was to
conduct its own assessments to establish its
mission needs. In addition, FAA said it was
developing a cost and performance management
system to track the costs incurred in performing
services, to allow management to measure the
effectiveness of its workforce, and to rank return
on assets.
- The Customs Service said it also was developing a resource allocation model to identify the effect of increases in operational staff on requirements for support staff. According to the agency, the model was to be used to develop a budget and a basis for requesting funding. The model allowed the user to perform a "what-if" analysis, such as changing the amount of overtime, minimum staffing constraints, or staffing allocations.
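The kind of "what-if" analysis the Customs Service example describes can be pictured with a minimal sketch; the support-staff ratio, overtime factor, and minimum-staffing floor below are invented for illustration and are not the agency's actual model:

    # Illustrative what-if staffing calculation: how do changes in
    # operational staff and overtime affect support-staff requirements?
    def required_support_staff(operational_staff,
                               support_ratio=0.25,        # assumed ratio
                               overtime_hours_per_fte=0.0,
                               hours_per_fte=2080.0,      # one work year
                               min_support_staff=10):     # assumed floor
        # Overtime adds fractional full-time equivalents of operational work.
        effective_ops = operational_staff * (
            1 + overtime_hours_per_fte / hours_per_fte)
        return max(min_support_staff, round(effective_ops * support_ratio))

    # What-if: add 200 operational staff and 100 overtime hours per FTE.
    print(required_support_staff(1000))                                # 250
    print(required_support_staff(1200, overtime_hours_per_fte=100.0))  # 314

Varying one input at a time, as in the two calls above, is the essence of the what-if use described: the model turns a staffing assumption into a support-staff requirement that can back a budget request.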
Use Performance Agreements and Appraisals
The organizations said they incorporated
performance management into performance agreements
and appraisals by clearly specifying performance
responsibilities and accountabilities. The
organizations said they did this to (1) increase
the visibility and importance of performance
management results and (2) encourage managers and
staff to pay attention to performance information
and outcomes. In addition to the performance
agreements and appraisals, the organizations said
they established a firm link, at the individual
and/or unit level, between the performance
expectations and incentives, both monetary and nonmonetary.
Nonmonetary incentives included recognition
awards, regular performance reports to the
organization, and managerial flexibility in
exchange for more performance accountability. For
example:
- NSF said agency staff were determined that
whatever performance management system was
implemented, it would be consistent with normal
program management and a useful management tool.
As a result, senior executive service performance
objectives contained at least one Results Act-
related element.
- FAA said that each month its line of business managers reported on their progress in
reaching the goals set forth in the performance
agreement between the FAA head and the Department
of Transportation (DOT) Deputy Secretary. The
agency said the performance agreement contained
measures used in the performance plan and specific
projects and milestones that were not in the
performance plan. FAA also said it was piloting,
at the senior executive service level, a program
tying incentives, such as bonuses, to key
performance measures in the strategic plan.
Agreements with executive-level staff were to
specify how the executives would contribute to
FAA's three primary goals, weighted by the areas
in which the executives had responsibilities.
- NHTSA said it used an employee performance
management program that focused on results, not
behaviors and characteristics, and provided a
linkage between individual performance results and
required outcomes of organizational performance.
According to NHTSA, outcomes in the employee's
performance plan were desired end-results that the
employee and the organization were trying to
achieve. The employee performance plan was to link
the work of the employee to the organization's
goals. NHTSA said the outcomes were generally
beyond the employee's ability to control, but
defined what the employee and the organization
were working toward. The employee performance plan
contained performance targets (equivalent to
performance standards) for each outcome. The
agency said the targets were those activities that
the employee should accomplish by the end of the
performance period that contributed to the
attainment of the outcome or desired end-result.
The employee was accountable for the performance
targets, not the outcomes per se.
_______________________________
1 The 5 practice categories in appendixes I
through V contain 12 practices that are numbered
consecutively throughout the appendixes.
2Within FAA, there are six lines of business: Air
Traffic Services, Research and Acquisition,
Regulation and Certification, Airports, Civil
Aviation Security, and Commercial Space
Transportation.
Appendix II
Practice Category 2: Establish Relationships Outside of the Organization
Summary
In the second practice category, the organizations
said they established relationships outside of the
organization for the purpose of enhancing
performance. They then worked through these
relationships to implement specific programs.
Practices in this area included the following:
4. Establish results-oriented collaborative relationships with regulated entities and program delivery partners.
5. Establish partnerships with other organizations involved in implementing crosscutting programs.
Practice 4: Establish Results-Oriented Collaborative Relationships
In the fourth practice, the organizations said
they established, at a program level, results-
oriented collaborative relationships with
regulated institutions and program delivery
partners. Specific practices included the
following:
- Work with regulated entities in identifying the organization's mission, performance goals, targets, and measures.
- Involve regulated entities in the prevention aspect of performance.
- Build consensus among program delivery partners on performance targets, measures, and data use.
- Obtain periodic input from regulated entities and delivery partners on the organization's performance management efforts.
Involve Regulated Entities in Identifying Mission, Goals, Targets, and Measures
The organizations said they recognized the need
for the entities they regulated to be involved in
developing the organizations' mission, goals, and
targets, as well as selecting performance measures
and helping determine their use in program
operations. According to the organizations, the
consultative process was meaningful and involved
extensive dialogue with the regulated entities.
For example:
- FAA held annual sessions with its
stakeholders, called "Challenger Sessions," in
which FAA discussed its mission and goals with
representatives from industry groups, such as
airlines and manufacturers, as well as user
groups, such as pilots. The agency said the
stakeholders also played a role in defining FAA's
strategic goals and strategies. For example, FAA
said that the agency established air traffic
control preferred routes to minimize conflicts in
congested airspace. However, the FAA routes often
differed significantly from the routes that
airline pilots and flight planners would prefer to
optimize their operations on the basis of their
own objectives and constraints. In response, FAA
said it began working closely with airlines to
share air traffic information so that
collaborative decisions could be made.
- The Coast Guard said it conducted regional
strategic assessments that examined the demand for
agency services and attempted to anticipate
current and future demands for services and
resources. These assessments examined partnerships
with such entities as ports and waterways.
- The Customs Service said it worked with
industries and communities to achieve outcomes. To
illustrate, the agency said it worked with local
community and Mexican government officials to
measure delays entering the United States from
Mexico and develop strategies to reduce the delays
but still ensure compliance.
- MDOR said the agency validated its initial
strategic objectives with over 70 customers and
external stakeholders and used focus groups as
part of its target-setting processes, thereby
getting customer feedback on what constituted
acceptable targets.
Involve Regulated Entities in the Prevention Aspect of Performance
The organizations said they took the approach that
regulated entities should join with the regulators
in the prevention of problems and the
"coproduction" of performance results. The
organizations said they recognized that the
regulated entities themselves should assume more
responsibility for identifying and addressing
performance issues. According to the
organizations, this approach was to allow them to
target resources more effectively. For example:
- FAA said the agency wanted to reduce the
fatal commercial aviation accident rate by 80
percent, but FAA could not do so solely through
its role as regulator and enforcer. The agency
said it worked with the aviation community, analyzing data to agree on the top accident categories and then conducting a detailed root-cause analysis (a short sketch of this ranking step follows this list).
had been instrumental in helping stakeholders
throughout the aviation community-both the
regulators and the regulated entities-reach
agreement on the key accident categories on which
to focus. FAA said that most important was the
subsequent stakeholder commitment to effective
implementation of intervention strategies to
prevent future accidents in these categories.
- FSIS said the agency's new meat and poultry
food safety program provided a more specific and
critical approach to the control of
microbiological hazards in foods than the
approaches provided by traditional inspection and
quality control. The program required meat and
poultry slaughter and processing plants to adopt a
system of process controls to prevent chemical,
physical, and biological food safety hazards.
According to FSIS, the program allowed FSIS
inspectors to use a combination of a scientific
approach for monitoring meat company management
practices and traditional carcass-by-carcass
inspections. FSIS said its employees analyzed
company meat and poultry safety practices,
monitored company records, and conducted
laboratory tests to ensure that company testing
was accurate.
- The Occupational Safety and Health
Administration (OSHA) said the agency's pilot
Maine Top 200 Program encouraged employers to
identify hazards themselves and take corrective
action before the hazards led to injury and
illness. According to the agency, this pilot
program enabled OSHA to focus on workplaces where
the largest number of serious injuries and
illnesses occurred.
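The data-analysis step referenced in the FAA example above can be illustrated with a short sketch that ranks accident categories by frequency so stakeholders can agree on the top categories to target; the category names and counts below are invented, not FAA data:

    from collections import Counter

    # Invented accident records; in practice these would come from
    # historical accident databases the stakeholders agree to use.
    accidents = [
        "loss of control", "controlled flight into terrain", "loss of control",
        "runway incursion", "controlled flight into terrain", "loss of control",
        "weather", "runway incursion", "loss of control",
    ]

    # Rank categories by count, with each category's share of the total,
    # so the top candidates for root-cause analysis stand out.
    counts = Counter(accidents)
    total = sum(counts.values())
    for category, n in counts.most_common():
        print(f"{category:35s} {n:2d}  {100.0 * n / total:5.1f}%")

The detailed root-cause analysis would then focus on the categories at the top of this ranking.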
Build Consensus Among Program Delivery Partners on Targets, Measures, and Data Use
Similar to their approach with regulated entities,
the organizations said they recognized the need
for the involvement of the organizations on which
they relied in delivering program products and
services, such as state regulatory agencies. The
organizations said they saw the delivery partners
as key stakeholders and sought to establish
agreement on program mission and objectives, the
most useful performance measures, and the
potential use of performance measures. For
example:
- EPA said its Office of Enforcement and
Compliance Assurance initiated the agency's
National Performance Measures Strategy in January
1997 to develop an enhanced set of performance
measures for EPA's enforcement and compliance
assurance program. EPA said it consulted with
state environmental agencies on the measures,
which resulted in several key ideas. For example,
the states believed that EPA's enforcement and
compliance assurance program should (1) place more
emphasis on the use of outcomes and environmental
indicators to measure performance and (2) reduce
its emphasis on outputs as a measure of
performance.
- OSHA said it relied on 23 states and 2 U.S.
territories to operate their own OSHA-approved
occupational safety and health programs. According
to OSHA, these states were integral partners in
OSHA's mission of ensuring the safety and health
of workers. In developing its strategic plan, OSHA
said it directly involved these organizations in
reviewing the plan. The agency said this approach
clarified the role of states and how OSHA's
strategic plan applied to state programs.
Obtain Periodic Input on Performance Management
Efforts
On an ongoing basis, the organizations said they
secured input from their regulated entities and
delivery partners on the effectiveness of program
operations. In addition, they gathered information
from these partners on program mission statements
and goals. For example:
- MDES said it conducted quarterly customer
satisfaction surveys through its workforce
centers, with some centers going even further and
conducting focus groups with staff and clients.
MDES said it used the survey data to update its
mission, values, and goals.
- FDBF said it obtained feedback from its
customers-the banks it regulated-by conducting
short surveys on areas such as the bank
examination process, examination reporting, and
examination team competency. A portion of the
survey is shown in figure II.1.
Figure II.1: Example of a Survey for Client Input
on Agency Performance-Florida Department of
Banking and Finance
Source: Florida Department of Banking and Finance.
- MDOR said the agency conducted postaudit
customer surveys to obtain information on the
organization's audit process. The agency said the
surveys solicited responses on how auditors
treated the agency's customers, not the outcome of
the audit, and revealed that most employers do
want to comply with tax laws.
Practice 5: Establish Rigorous Partnerships Across
Organizations
Under the fifth practice, the organizations said
they sought to identify other organizations
involved in implementing crosscutting programs,
then establish rigorous partnerships with those
organizations. Specific practices included the
following:
- Identify the contributions of each
organization and develop common outcomes.
- Establish a leadership role to coordinate
cooperative efforts across common goal areas.
- Use tools to facilitate common data sharing
across partner organizations.
Identify Crosscutting Relationships, Common
Outcomes, and Agency Contributions
The organizations said they recognized the
importance of interactions among agencies involved
in similar or related programs. They identified
organizations with related programs, their
relationship with those organizations, and common
outcome expectations. For example, FSIS said it
included a specific goal to establish effective
working partnerships with other public health
agencies and stakeholders to support the
President's National Food Safety Initiative, which
called for a reduction in foodborne illnesses. The
agency said specific activities were to include
(1) collaborative monitoring with the Centers
for Disease Control and Prevention, FDA, and
state public health
departments; (2) establishing cooperative
agreements with states for risk assessment; (3)
developing standard operating procedures for
coordination in cases of foodborne illness
outbreaks and other food safety emergencies; and
(4) coordinating strategies with the Department of
Health and Human Services, the United States
Department of Agriculture, and private sector
groups to expand communication of food safety
information to the general public.
According to the organizations, their crosscutting
efforts also involved coordinating their program
activities and linking their goals and strategies
with those of the partner organizations. For
example, NRC's annual performance plan identified
areas of mutual interest with other agencies, the
related NRC programs, and the crosscutting
strategies NRC would use to address the shared
responsibilities with these other agencies. NRC
presented most of this information in the form of
a matrix in its annual plan. A portion of that
matrix, describing part of NRC's interaction with
the Department of Energy, is shown in figure II.2.
Figure II.2: Matrix Showing Crosscutting Program
Relationships-Nuclear Regulatory Commission
Source: U.S. Nuclear Regulatory Commission.
The organizations said they involved crosscutting
partners, jointly and equally, in developing
common or consistent outcome goals, associated
measures, targets, and data collection procedures
as well as in determining the types of decisions
they would be used to make. The organizations said
they also used interagency agreements and
performance partnerships to clarify each partner's
roles and responsibilities. For example:
- NRC said that in most instances, it had or
was developing memorandums of understanding or
other agreements with other agencies to ensure
that areas of mutual interest and cooperation were
treated in a consistent, coordinated, and
complementary way that avoided unnecessary
duplication or conflict.
- MDOC said that it completed literature
reviews and other secondary research to ensure
that the agency was measuring many of the same
indicators as other criminal justice agencies. As
a result, the agency said it had the capability to
benchmark with other agencies.
In addition, the organizations said they worked to
identify, to the maximum extent possible, each
agency's contribution to the common outcome
expectations and whether those contributions were
unique or common to other organizations. The
organizations said they also took other
organizations' efforts into consideration when
making resource plans. According to the
organizations, working with other agencies also
helped the organizations to leverage resources by
combining project efforts. For example:
- NHTSA said it developed a beneficial
relationship in situations where another agency
could regulate something NHTSA could not. For
example, drawstrings on children's jackets
were getting caught on school bus doors, and NHTSA
worked with the Consumer Product Safety
Commission, which provided guidelines on
children's clothing.
- The Coast Guard said the agency evaluated
where it fit in a crosscutting program logic model
(described in practice 6) and the level of agency
outputs or outcomes that might be appropriate. In
one case, the Coast Guard said the National
Oceanic and Atmospheric Administration (NOAA)
determined fishing regions and was interested in
the intended outcome of protecting fish habitats.
To contribute to NOAA's intended outcome, the
Coast Guard produced an output-patrolling miles of
ocean. The agency said the Coast Guard was not
responsible for protecting fish habitats but
could support NOAA in doing so.
Establish a Leadership
Role to Coordinate
Cooperative Efforts
The organizations said they recognized that
successfully implementing crosscutting efforts
among "equal" partners could be difficult without
specific leadership. To remedy this, the
organizations said they established one lead
organization or a specific leadership role among
the partners. For example:
- The Customs Service said it and the
Immigration and Naturalization Service (INS)
developed the Border Coordination Initiative (BCI)
to increase cooperation on the southwest border
for the interdiction of drugs, illegal aliens, and
other contraband. The agency said the BCI was
intended to be a comprehensive, integrated border
management system at the southwest border,
achieving the mission of both the Customs Service
and INS. According to the Customs Service, the
agencies established a joint Office of Border
Coordination, comprising two border coordinators
representing INS and the Customs Service, which
managed the BCI and was responsible for overseeing
border operations.
- FSIS said its strategic plan focused on
national and international food safety and was
targeted at all food safety efforts at the federal
and state levels. Relying on crosscutting efforts,
FSIS said it was working with federal and state
officials to optimize the food safety resources
available at all levels. The agency planned to
exercise leadership through memorandums of
understanding and cooperative agreements.
Use Tools to Facilitate
Common Data Sharing
The organizations said that to assist in
crosscutting efforts, the partner organizations
increased the usefulness of their common data
sharing by establishing common data definitions
and information systems. Common data definitions
were to help ensure that data used for common
purposes would be consistently defined, collected,
calculated, and interpreted. In addition, the
organizations said they (1) identified existing
information systems within each organization that
might serve common interests and (2) shared
information with their partner organizations (a
schematic sketch follows the examples below). For
example:
- NHTSA said it recognized the need to share
information with other agencies and establish
information systems to do so. For example, the
agency said that the Department of Health and
Human Services asked NHTSA for information
regarding the use of drugs and driving. According
to the agency, information sharing such as this
bolstered NHTSA's requests for additional
performance data funding.
- FDA said the agency's focus on outcomes was
tied to building information systems linked with
other agencies with the same goal, such as the
Centers for Disease Control and Prevention,
thereby making it
easier to integrate and share performance
information with these agencies.
- The Customs Service said that it and INS,
collaborating on the previously mentioned BCI,
worked to develop common data definitions and
data collection procedures.
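The following is a minimal sketch of a shared
data dictionary of the kind described above. The
field names, definitions, and validation rule are
hypothetical illustrations, not any agency's
actual schema.

    # Hypothetical agreed definitions for fields shared
    # by partner agencies.
    COMMON_DEFINITIONS = {
        "seizure_event": {
            "definition": "one interdiction resulting in seized goods",
            "unit": "count",
            "collected_by": ["Customs", "INS"],
        },
        "border_crossing": {
            "definition": "one vehicle or pedestrian entry at a port",
            "unit": "count",
            "collected_by": ["Customs", "INS"],
        },
    }

    def validate_record(record):
        """Reject records using fields outside the agreed definitions."""
        unknown = set(record) - set(COMMON_DEFINITIONS)
        if unknown:
            raise ValueError(f"undefined fields: {sorted(unknown)}")
        return record

    validate_record({"seizure_event": 12, "border_crossing": 48000})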
Appendix III
Practice Category 3: Refine Performance Goals,
Measures, and Targets
Summary
In the third category, the organizations said they
refined their performance goals, measures, and
targets to better translate activities into
results. The practices included the following:
6. Establish a rationale of how individual
programs delivered results.
7. Select meaningful goals and measures
consistent with the rationale.
8. Set appropriate targets for performance
goals.
Practice 6: Establish
a Rationale of
How the Program
Delivers Results
Under the sixth practice, the organizations said
they sought to establish a rationale of how their
individual programs delivered results. Specific
practices included the following:
- Take a holistic or "systems" approach.
- Build a program logic model that described
how activities translated to outcomes.
- Expand program assessments and evaluation
efforts to validate model linkages and rationale.
Look at Programs Using
a Holistic or "Systems" Approach
The organizations said they recognized that
performance management and measurement efforts
could not be viewed from a limited perspective,
such as that of an individual program, but must be
seen in terms of the operation as a whole. The
organizations said they concentrated on the
relationships and interactions of whole systems,
as opposed to managing parts of a system. The
organizations said they took a holistic approach
to identify and evaluate factors that affected
their outcomes, determine appropriate strategic
goals, and assess how a change in one goal might
affect another. For example:
- OSHA said that the agency's strategic
planning process recognized that all of the
agency's strategic goals would be interrelated,
and that working in one goal area would affect the
work in another area.
- FDA said that the agency was improving its
efforts to describe total program relationships
and systems. For instance, FDA said it was moving
from setting only measurable performance goals to
setting goals whose measures were important as
strategic points in an overall system to respond
to the agency's mission.
- MPCA said that the agency was dealing with
pollution problems in a more holistic way. For
example, air pollution resulting from mobile
sources-cars, trains, airplanes, and boats-touched
a number of MPCA program areas. In the past, the
agency said it focused on specific media, such as
air and water. The holistic approach, using
environmental impact data, was designed to help
decisionmakers focus on the environment as a
whole.
- OSFI said that it used the image of a scale
to demonstrate the interplay of its five strategic
objectives. On one side of the balance were
"safeguarding from undue loss" (representing the
agency's identification of specific risks and
trends and intervention in a timely manner) and
"public confidence" (evaluation of systemwide
risks and promotion of the adoption of sound
business and financial practices). On the other
side were "competition" (the agency's due regard
for the regulated institutions' need to compete
effectively) and "cost effectiveness" (maintaining
full and open dialogue with stakeholders on the
costs and benefits of the agency's work). The
balance's fulcrum was "quality," representing
OSFI's objective of carrying out its mission with
quality people, processes, and technology. OSFI
said that it recognized some objectives would
counterbalance others. For example, if greater
emphasis was placed on "safeguarding from undue
loss" by introducing tougher rules for financial
institutions, there might be an adverse impact on
the ability of financial institutions to be
innovative and competitive.
Develop Basic Program Logic Models
The organizations said they sought to develop a
better understanding of how their programs worked
so that they could select appropriate performance
goals and measures. To do so, they said they
described the logic or rationale of how individual
programs used inputs, such as resources and
workload, in program components, such as
activities and processes, to produce outputs. In
turn, those outputs were connected to intermediate
and final outcomes. These descriptions, often
called program logic models, were not necessarily
the more extensive models that might be used in
more comprehensive program evaluations, but they
were concise descriptions of the basic flow from
inputs to outcomes. The organizations said they
did not necessarily intend to describe causality,
but to develop a description of a reasonable
correlation or association showing how inputs were
converted to outputs and outcomes. According to
the organizations, the exercise of developing
logic models could help internal and external
stakeholders (1) see the progression from outputs
to end outcomes, (2) see how changes in program
components and outputs might better affect
outcomes, and (3) better understand their
contributions to desired results (a minimal
sketch follows the examples below). For example:
- NHTSA said the agency was using program
impact models that showed linkages between inputs,
outputs or process, intermediate outcomes, and end
outcomes. NHTSA's annual performance plan
described the linkages to support the presentation
of outcome and intermediate goals and measures in
the body of the plan. Output measures, which were
included in the plan's appendix, were to be used
internally for departmental budget justifications
and management decisions. A program logic model
using components from NHTSA's annual performance
plan is shown in figure III.1.
Figure III.1: Constructing National Highway
Traffic Safety Administration's Program Logic
Model
Source: NHTSA.
- The Coast Guard said the agency's Marine
Safety area's business plan followed the program
logic model from inputs to outputs to outcomes. In
addition, the agency said that in its strategic
planning efforts, the agency used a basic model of
internal processes leading to activities, which
led to outcomes. Management goals reflected
internal processes, while activities reflected
performance goals. In this model, the Coast Guard
noted that its performance outcomes were largely
affected by where the Coast Guard decided to
intervene in a problem, by its particular
strategies and activities, and by its internal
management goals.
- OSHA said it was also using formal program
logic models in developing performance measures.
The agency viewed measures of activities, such as
the number of inspections, as intermediate
measures leading to the desired outcome of fewer
workplace accidents. The agency said that OSHA's
model flowed from performance goals to outcome
goals to strategic goals to Department of Labor
outcome goals and indicated the basic assumptions
and strategies that were to be used to achieve
desired outcomes. For example, implementing safety
and health program promotional strategies would
lead to employers' having an effective safety and
health program; leading to better safety and
health programs; leading to changing the
workplace culture to increase awareness of and
commitment to safety and health; and, finally,
leading to
reducing workplace injuries, illnesses, and
fatalities.
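The basic flow the organizations describe can be
sketched as a simple data structure. The example
below is a minimal illustration; the stage labels
paraphrase the OSHA example above and are not any
agency's actual model.

    from dataclasses import dataclass

    @dataclass
    class LogicModel:
        """A basic flow from inputs to end outcomes."""
        inputs: list
        activities: list
        outputs: list
        intermediate_outcomes: list
        end_outcomes: list

        def describe(self):
            # Print each stage of the model in order.
            for stage in ("inputs", "activities", "outputs",
                          "intermediate_outcomes", "end_outcomes"):
                print(f"{stage}: {'; '.join(getattr(self, stage))}")

    # Paraphrased, OSHA-style stages; not official language.
    sketch = LogicModel(
        inputs=["inspector hours", "program budget"],
        activities=["promote safety and health programs"],
        outputs=["inspections conducted"],
        intermediate_outcomes=["employers adopt effective programs"],
        end_outcomes=["fewer injuries, illnesses, and fatalities"],
    )
    sketch.describe()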
Expand Program Assessment and Evaluation Efforts
The organizations said they expanded their
evaluation efforts to explore and confirm the
linkages and rationale within their program logic
models. These efforts included questioning and
testing the assumptions in the program logic
model, the processes and activities selected to
implement the models, and the way in which program
impacts might be separated from external factors.
- The Coast Guard said it planned to use
program evaluation to examine the agency's
activities of interaction, influence, and impact
after the agency developed a program policy. The
agency's view was that program evaluation should
serve performance management in validating the
program logic model and its rationale, and help
determine if performance-based management was
worthwhile.
- FDA emphasized that a program evaluation
function was needed to encourage program managers
to think from inputs to the end of the logic model
continuum.
Practice 7: Select Goals
and Measures Consistent With the Program
Rationale
Under the seventh practice, the organizations said
they sought to select vital program and support
goals, as well as performance measures, that were
consistent with the program rationale they had
developed. Specific practices included the
following:
- Select vital program and support goals
directly or indirectly from a program logic model.
- Use rigorous criteria to assess and select
the most important measures.
- Develop a comprehensive suite of measures
representing total program performance.
Select Vital Program and Support Goals From
Program Logic Models
According to the organizations, program logic
models formed a foundation for selecting vital
program performance goals, including management
and support goals. The organizations selected
outcome goals, measures, and targets that
reflected the influence of external factors and
the agency's varying levels of influence and
control at different points in the program logic
model. For example:
- The Florida Game and Freshwater Fish
Commission (FGFFC) said the agency tried to use
only intermediate measures where the link to
desired outcomes was established. As an example,
FGFFC said it measured the number of cases
successfully closed-referred to the state's
attorney-instead of reduction in crime rates
because the agency (1) had more control over the
number of cases closed and (2) believed, from
experience, that closed cases generally resulted
in conviction and therefore a reduction in the
crime rate.
- MPCA said that in cases where the agency
could not develop or obtain data on an
environmental outcome measure, it tracked changes
in behavior it believed led to the desired
outcome. For example, tracking mercury reduction
required a long time frame, so the agency said it
would track behaviors that led to decreases in
the release of mercury into the environment.
- Florida's Office of Program Policy Analysis
and Government Accountability (OPPAGA) stated
that agencies could consider, as an intermediate
measure, detailing the characteristics of a
healthy regulated population and then explaining
how the agency contributed to the development and
maintenance of a healthy population.
The organizations said they considered
administrative or management support goals as
supportive of, and thus subordinate to, strategic
program goals. For example:
- NRC said it selected strategic goals on the
basis of their importance to the agency's mission
areas and made support efforts subordinate to the
strategic areas.
- FAA said the agency distinguished between
goals that were directly aimed at achieving the
agency's mission and other goals, such as
administrative or management support goals, that
enabled or supported the agency in achieving its
mission.
Use Rigorous Criteria to Select the Most Important
Measures
The organizations said they used rigorous criteria
to assess and select the actual measures. The
selection criteria-such as availability, accuracy,
validity, potential adverse consequences, balance,
and relevance-recognized that meaningful
performance-based management required the use of a
manageable number of useful measures. According to
the organizations, tracking more measures resulted
in an increased data collection burden.
Organizations also said that not carefully
screening measures resulted in measures that were
similar to others or that might be irrelevant to
program results and operational needs. According
to the organizations, the result might be a large
volume of measures that would overwhelm those
measures considered truly important for
decisionmaking and guiding organizational
operations.
In addition, the organizations said they used the
criteria to regularly review and modify the
measures over time. The selection criteria also
recognized the importance of selecting a suite of
measures, as discussed later, reflecting a balance
of measures across the logic model and for
different decisionmaking tiers. The organizations
said the logic models also allowed them to define
what activity information-workload and
process-would best support movement toward the
ultimate outcomes.
The following are examples of the organizations
using criteria to select the most important
measures:
- FSIS said its strategic plan focused
primarily on its fundamental public health mission
by limiting the number of goals and accompanying
measures and tying them to the primary goal of
protecting public health.
- FGFFC said it selected only measures that
were easily explainable to the legislature and did
a "reality check" with all measures to ensure they
would be useful for both internal and external
decisionmaking.
- Several organizations said they took a
customer-based focus. MDES recommended identifying
customers for each measure, ensuring that the
measures were clear, understandable, meaningful,
and measurable. MPCA and MDOT said they primarily
focused on outcomes and measures from the citizen
or customer point of view, and they secondarily
focused on outputs, such as the number of tests
conducted.
- OSFI said that before implementing a measure,
the agency emphasized having a full understanding
of the impact of the measure on stakeholders as
well as the potential repercussions of
communicating the measure.
- The Customs Service said the agency's fiscal
year 2000 annual performance plan had 31
performance measures, down from 61 in 1999, and
focused on the most meaningful and vital few
measures. At one time, the agency said it used the
number of inspections to measure the performance
of its inspectors, but dropped the measure because
it forced inspectors into "body counts." Other
measures, such as the number of indictments, were
dropped because the Customs Service had no control
over them.
Develop a Comprehensive
Suite of Measures
Representing Total
Program Performance
The organizations said they recognized that a
suite of measures spanning the logic models and
addressing different decisionmaking tiers within
the organization best measured program and
overall organizational performance.
Thus, the organizations chose performance measures
from the different stages in the model, covering
inputs, outputs, outcomes, and activities or
processes. According to the organizations, the
suite of measures, representing the
interrelationship of multiple measures, helped the
organizations take a broader perspective of
program performance. For example, EPA said it
categorized its measures into three types. These
types included the following: (1) program output
measures that represented actions taken by the
agency, such as the number of permits
issued/revised or the number of enforcement
actions; (2) program outcome measures that
reflected the direct results that led to
environmental improvements, such as the decreased
use of higher risk pesticides or reduced emissions
of toxics from manufacturing facilities; and (3)
core environmental indicators that represented
ultimate results, such as increases in the number
of rivers and lakes in healthy ecological
condition or decreases in the number of people
with air quality-related illnesses/deaths.
In addition, the organizations said that
developing a suite of measures allowed them to
select a subset of measures targeted at the
different needs of specific audiences. For
example, the measures could meet the information
needs of both internal managers, who needed
information for operational control from the
bottom of the organization to the top, and
external policymakers, who were interested in
measures evaluating the program's success. Figure
III.2 is a diagram highlighting the types of
information needed to make decisions at different
levels within an organization and how the
granularity increases from the top to the bottom.
That is, the most aggregated information is at the
top level, and the most detailed information is at
the bottom. Figure III.2 also describes the timing
and focus of information at these different
levels.
Figure III.2: Decisionmaking Tiers
Source: GAO summary of decisionmaking tiers.
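One minimal way to represent such a suite is to
tag each measure by logic-model stage and
decisionmaking tier so that a subset can be
selected for a given audience. In the sketch
below, the measure names paraphrase the EPA
examples above, and the tier labels are
hypothetical.

    # Hypothetical suite: each measure is tagged by
    # logic-model stage and decisionmaking tier.
    measures = [
        {"name": "permits issued or revised",
         "stage": "output", "tier": "operational"},
        {"name": "reduced toxic emissions from facilities",
         "stage": "outcome", "tier": "management"},
        {"name": "rivers and lakes in healthy condition",
         "stage": "end outcome", "tier": "policy"},
    ]

    def suite_for(tier):
        """Select the subset of measures aimed at one tier."""
        return [m["name"] for m in measures if m["tier"] == tier]

    print(suite_for("policy"))
    # ['rivers and lakes in healthy condition']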
The organizations said they used the
decisionmaking information to select measures and,
in situations where final outcome information was
not available or would be delayed, to provide
justification for the use of output or
intermediate outcome information. For example:
- For each of its five strategic objectives,
OSFI said the agency was developing a suite of
performance measures designed to evaluate progress
in reaching its objectives. For the strategic
objective of safeguarding depositors from undue
loss, the agency established the following
measures: (1) the Level of Intervention Index,
which measured the level of OSFI intervention and
tracked individual financial institutions as their
financial condition changed; (2) the Loss Recovery
Index, which measured the amounts depositors of
liquidated institutions could expect to receive;
(3) the Risk Exposure Index, which was a composite
measure of OSFI's assessment of the level of risk
facing the financial industry at a given time; and
(4) the Intervention Effectiveness Measure, which
measured OSFI's effectiveness in identifying
problem institutions and intervening in a timely
way to address regulatory concerns. In each of
these areas, OSFI said it identified the measures
as in either an early or advanced stage of
development or maturity. For example, the agency
considered its employee satisfaction measure of
quality to be advanced. However, another
quality measure, the extent to which OSFI staff
met identified core competencies, was less mature
and was in an earlier stage of development.
- For each of its agencywide goals, MDOR said
it used several different measures to evaluate the
agency's success in achieving its goals. For
example, one of its goals was that "everyone is
paying what is owed, no more, no less." To
evaluate progress toward achieving this goal, MDOR
said it measured voluntary compliance rates,
nonfiler discovery rates, tax filing accuracy, use
and sales tax compliance levels, and the number of
corporate audits completed. The agency said this
mix of measures provided a complete performance
picture.
- The Coast Guard said the agency's
aeronautical engineers identified critical
dimensions, or key issues, concerning Coast Guard
aircraft. These dimensions were reliability,
maintainability, supportability, and
affordability. Using these dimensions, the agency
said it created a system of Measures of
Effectiveness (MOE) to track performance against
aviation system goals. Daily, all air stations
were to document when an aircraft was available to
fly a mission. The MOE produced index information
on (1) the percentage of time that aircraft at
Coast Guard air stations were available to perform
missions, (2) how often air stations had needed
aircraft parts, (3) how much unit effort it took
to generate each flight hour, and (4) an overall
maintenance effort index. The agency said that
the suite of measures could be evaluated to help
identify common causes and trends and was used
to determine workforce levels, program flight
hours, maintenance costs, and budgetary
considerations.
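An availability index of the kind the Coast
Guard describes reduces to a simple percentage.
In the sketch below, the station names and daily
values are invented for illustration.

    # Invented daily mission-readiness reports for two
    # hypothetical air stations.
    daily_reports = [
        {"station": "Air Station A", "available": True},
        {"station": "Air Station A", "available": False},
        {"station": "Air Station B", "available": True},
        {"station": "Air Station B", "available": True},
    ]

    def availability_rate(reports):
        """Percentage of reports with mission-ready aircraft."""
        ready = sum(1 for r in reports if r["available"])
        return 100.0 * ready / len(reports)

    print(f"{availability_rate(daily_reports):.1f}%")  # 75.0%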
Practice 8: Set Appropriate Targets for
Performance Goals
Under the eighth practice, the organizations said
they used different methods to set appropriate
targets for their performance goals. Specific
practices included the following:
- Use different types of performance
comparisons to match performance goals.
- Provide multiyear and subgoal performance
targets.
- Use targets for distinct populations or
comparison categories that are meaningful to the
organization.
- Use baselines to set realistic but
challenging targets.
Use Different Types of Comparisons
The organizations said they used a diversity of
performance comparisons, depending on the goal, to
set performance targets. The comparisons included
(1) predefined performance specifications, (2)
future performance levels or changes in levels to
be achieved at a later date, (3) best practice
benchmarks from other organizations, and (4)
program implementation milestones. For example,
the Coast Guard said it based its performance
targets on historical performance, trend analysis,
and improvements currently under way. In addition,
some targets were based on a defined performance
level, such as an absolute readiness index score
required by the Department of Defense. Figure
III.3 provides examples of several types of
performance comparisons.
Figure III.3: Examples of Performance Comparisons
Sources: FDA, the Coast Guard, NRC, FAA, and OSHA.
The organizations said they also recognized that
because of the nature of some goals, they could
not always set absolute targets. For example, in
some cases, the organizations said they did not
have a baseline or benchmark to set a target, such
as when a measure was new, a baseline had never
been established, or benchmarks were not readily
available. When this occurred, the organizations
said they either set a preliminary target or
directional target, or stated that a baseline
would be set with the initial collection of data.
For example:
- FDA said that the targets for its strategic
investment goals were a series of milestones for
achieving the desired capability, such as
developing modeling techniques to assess human
exposure and dose response to certain foodborne
pathogens. For program result or outcome goals,
the agency said targets were quantitative or
productivity goals. For example, one target was to
have 80 percent of the domestic seafood industry
operating preventative controls for safety.
- FAA said it gathered data to set a baseline
for assessing future needs and desired
accomplishments, such as FAA's efforts to reduce
delays, accidents, and incidents.
- The Customs Service said the agency often set
targets as a certain percentage increase in a
measure, rather than setting a specific number as
a target. For example, instead of setting a target
of 91 percent, the agency said the target would be
set as a certain percentage change over the
previous year. Once baseline information was
available, the agency would set specific target
numbers (a small worked illustration follows the
examples below).
- NHTSA and MPCA said that they did not set
specific targets in some cases. For example, NHTSA
said it had a difficult time deciding on
appropriate targets in the face of unpredictable
trends and many external factors, and ended up
agreeing on a target of "no increase." For some
measures, MPCA said the agency did not set a
specific target, but only indicated whether it
wanted the measure to increase or decrease.
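The Customs Service's percentage-change approach
reduces to simple arithmetic. The sketch below is
a worked illustration using invented figures, not
actual Customs data.

    def pct_change_target(prior_year_value, pct_increase):
        """Next year's target as a change over the prior year."""
        return prior_year_value * (1 + pct_increase / 100.0)

    # Invented figures: a 3 percent improvement over a
    # prior-year compliance level of 88 percent.
    print(round(pct_change_target(88.0, 3), 1))  # 90.6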
Use Multiyear and Subgoal Targets
The organizations said they faced situations where
they could not identify specific annual targets.
In these cases, the organizations established
multiyear goals and targets and conducted annual
progress checks. In addition, they set targets at
the subgoal level that were to cumulatively reach
an overarching goal over time. For example:
- FAA said it used 3-year or even 10-year
baselines in such areas as commercial aviation
fatal accidents because there were so few of them
in any given year. An increase or decrease of even
one or two accidents could skew the data.
- NHTSA said it set a long-term goal of
reducing highway fatalities and injuries by 20
percent by the year 2008 and tied in annual and
multiyear efforts. The target was set by using an
analysis of factors the agency could influence,
and how it could influence them, to estimate the
cumulative effect of reaching goals with various
interventions (intermediate outcomes). The agency
said it developed subgoal targets that were
expected to produce the 20-percent reductions by
the year 2008. The agency thus worked on
intermediate outcomes at the subgoal level and
tied that to an overall outcome goal. NHTSA also
said that the agency tried to avoid straight-line
annual targets, preferring floating targets that
the agency could reassess each year, depending on
progress towards the final outcome goal.
- FAA said that the agency also used varying
targets. For instance, FAA's strategic plan
contained a safety mission goal of reducing the
United States aviation fatal accident rates by 80
percent from 1996 levels by the year 2007.
According to the agency, the fiscal year 2000
performance plan had several performance goals to
achieve this overall goal with either annual or
multiyear targets. For example, performance goals
for the safety mission goal included (1) reducing
the fatal aviation accident rate for commercial
air carriers from a 1994-96 baseline of 0.037
fatal accidents per 100,000 flight hours; (2)
decreasing the rate of air shipment hazardous
materials incidents by the year 2000 from a 1998
base; (3) by the year 2007, reducing (by a to be
determined percentage from baseline levels) the
rate of airport accidents/incidents that result in
injury to persons or damage to aircraft; and (4)
reducing the rate of operational errors and
deviations by 10 percent from 1994 baselines. The
agency said it had or was developing a fiscal year
2000 target for each performance goal, such as
reducing the fatal accident rate for commercial
air carriers to 0.033 per 100,000 flight hours.
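The rate-based targets FAA describes rest on
straightforward arithmetic: accidents divided by
flight hours, scaled to a standard base. In the
sketch below, the accident and flight-hour counts
are invented to reproduce the 0.037 baseline
cited above.

    def fatal_accident_rate(fatal_accidents, flight_hours):
        """Fatal accidents per 100,000 flight hours."""
        return fatal_accidents * 100_000 / flight_hours

    # 37 fatal accidents over 100 million flight hours.
    print(fatal_accident_rate(37, 100_000_000))  # 0.037

    # An 80 percent reduction from that baseline by 2007.
    print(round(0.037 * (1 - 0.80), 4))  # 0.0074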
Use Distinct Populations or Comparison Categories
The organizations, with input from stakeholders,
said they carefully selected performance
categories down to the lowest, meaningful level of
disaggregation of data and used these categories
to set appropriate targets. For example, the
categories included geographical areas, workload
or customer groups, or types of services. The
organizations said they sought to set targets at
the lowest, most disaggregated level so that they
would be meaningful to managers and staff at the
activity level within the organization. In
addition, the organizations said that setting
goals at the appropriate level of disaggregation
aided decisionmakers in evaluating how
successfully a program was working with respect
to different categories. For example:
- OSHA and FDA said that their agencies set
performance targets for different industries. For
example, OSHA said the agency set targets for
nursing homes, logging, food processing, and
shipyards to focus agency efforts on the most
hazardous industries and workplaces. FDA said it
set targets for such industries as the domestic
seafood industry and foreign food establishments.
- MDOR said it segmented its client population
into those with minor offenses and those who were
considered repeat offenders, and the agency
developed measures for each category.
- MDOT noted that it had its divisions
categorize or segregate roads by type and develop
measures specific to each type of road. The
agency also set targets for different
geographical areas within the state.
Use Baselines to Set Realistic But Challenging
Targets
The organizations said they used baselines to set
performance targets that were realistic but that
they expected to challenge the organization to
continually improve. The organizations said they
used the baselines as the lowest acceptable
performance expectation and set targets at higher
levels. For example:
- OSHA recommended selecting a "typical" year
and developing averages, rolling averages, or
other statistical measures to set the challenging
targets (a brief sketch follows figure III.4).
- FDA said it planned to reduce the percentage
of food and color additive petitions under review
for more than 360 days to 20 percent in fiscal
year 2000. The baseline data were 44 percent in
fiscal year 1997.
- The Coast Guard said the agency used actual
information to develop a trend line and set a
target that was based on planned strategies. For
example, figure III.4 shows a graphic from the
Coast Guard's fiscal year 2000 annual performance
plan illustrating a goal and target to reduce the
passenger vessel casualty rate.
Figure III.4: Examples of Targets Set Using
Baseline Data-U.S. Coast Guard
Source: U.S. Coast Guard.
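OSHA's suggestion of averages or rolling
averages, noted above, can be sketched in a few
lines of arithmetic. The injury-rate values below
are invented for illustration.

    def rolling_average(values, window=3):
        """Trailing averages over the given window size."""
        return [round(sum(values[i - window + 1:i + 1]) / window, 2)
                for i in range(window - 1, len(values))]

    # Invented annual injury rates per 100 workers.
    injury_rates = [8.4, 8.1, 7.9, 7.6, 7.7]
    print(rolling_average(injury_rates))  # [8.13, 7.87, 7.73]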
Appendix IV
Practice Category 4: Strengthen Analytical
Capabilities and Techniques
Summary
In this practice category, the organizations said
they built on and strengthened their analytical
capabilities and techniques to better meet
performance management information needs. The
practices included the following:
9. Ensure that data resources and analytical
capabilities were sufficient to provide
performance management information.
10. Target analysis at regulatory intervention.
11. Account for external and contextual factors.
Practice 9: Ensure Adequate Data Resources and
Analytical Capabilities
In the ninth practice, the organizations said they
sought to ensure that their data resources and
analytical capabilities were sufficient to provide
information necessary for formulating and
assessing strategies. Specific practices included
the following:
- Ensure staff analytical capabilities met
performance management needs.
- Develop a data infrastructure and information
systems to generate useful performance data.
- Use specific tools to support the use of
performance information.
- Ensure the quality, timeliness, and
continuity of performance information.
Ensure Staff Capabilities
Meet Performance
Management Needs
The organizations said they assessed their
internal analytical capacity to deal with
performance management-both in terms of staff
available to do analysis and the types of
analytical skills possessed by the staff. They
recognized that analytical capabilities were
critical in assessing risks, aggregating and
disaggregating performance information in a
meaningful fashion, and allocating appropriate
resources to activities linked to strategic goals.
For example:
- FAA found that it was critical to have
statistical and financial analysis experts
available to provide data needed by managers to
change practices and make decisions. According to
the agency, FAA analysis units also provided
information to other FAA units for performance
measurement.
- The Customs Service said it hired a
statistician to oversee sampling efforts related
to trade compliance measurement and assessment.
According to the agency, compliance measurement
and assessment helped the Customs Service identify
problems and make a specific evaluation of the
risk of noncompliance posed by an individual
company and the industry overall.
Develop Data Resource Infrastructures and
Supporting Information Systems
The organizations said they did not firmly
establish data sources and data collection
processes until they had identified their program
rationale and accompanying goals, measures, and
targets. Once they knew what data they would need,
they said they either used existing data resources
and information systems or upgraded or built new
systems to collect the necessary performance
information and conduct quicker and better data
analyses. For example:
- NHTSA said that even though the agency had
tracked performance data for several years-even
producing a data book to support agency
decisionmaking and performance management-the
agency was reassessing its existing data. NHTSA
said it wanted to look at existing data and
evaluate which factors led to desirable outcomes
and then develop interventions to achieve
intermediate outcomes.
- NRC said it initiated the development of an
agencywide, integrated financial and resource
management system called STARFIRE, which was to
serve as the single authoritative source for
financial and resource information and support the
alignment of agency resources with program
outputs, strategies, and strategic goals.
- MDOC said it was using the development of a
new data system as an opportunity to include
routine collection of more useful performance
measures and not just a new means to collect the
same information that had always been collected.
MDOC said the system was intended to increase its
ability to provide accurate, descriptive
statistics and to support program evaluations.
- MDOT said it was integrating different
management systems within the agency to ensure
consistency in measurement through common
definitions and measurement timing.
Use Specific Tools to Support the Use of
Performance Information
To enhance their performance measurement efforts,
the organizations said they used different tools
to help support the understanding, communication,
tracking, and reporting of performance
information. They said they used performance
measure definitions to identify-for both external
and internal audiences and partners-the measure
being used, its purpose, its source, how it would
be calculated, how it would be verified and
validated, who was responsible for its collection
and analysis, and any limitations it might have.
According to the organizations, they also used
specific definitions to remove ambiguity regarding
what was being measured and to ensure that
measurements could be replicated. For example:
- MDOC said it developed a glossary with
performance measurement terms and agency acronyms
that accompanied its performance report and used
extensive footnoting to better explain measures
and data to the reader.
- The Coast Guard said that for each of its
performance goals, it included a description of
why the agency acts, key factors, strategies,
coordination, analysis and evaluation-including a
graphic of the target, information for the past
several years, the trend line, and key
initiatives.
- Florida state agencies said they were
required to provide specific performance measure
information as part of their performance
management. The measures and their definitions are
shown in table IV.1.
Table IV.1: Florida Measure Definitions
Measure information      Measure definition
Measure number           Unique alpha/numeric
                         identifier assigned to
                         each measure
Type of measure          Input, output, or outcome
Name of responsible      Individual responsible
individual(s)            for creating/developing
                         and defining the measure
Purpose of measure       Brief description of what
                         information the measure
                         will provide
Definition of measure    Thorough description of
                         the measure that includes
                         defining each word in the
                         measure and what will be
                         included
Formula description      If a formula is used to
                         calculate the measure,
                         the formula is written
                         out and each aspect of
                         the formula is defined
Measurement period       Time period that will be
                         covered by the measure
Relationship to          How the measure relates
mission                  to the unit's overall
                         mission
Output/Outcome           If applicable, the
relationship             relationship between
                         output measures and
                         outcome measures;
                         identification of any
                         associated measures that
                         relate to the measure
Reporting                Who the measure is
requirements             reported to and the
                         frequency of reporting
                         (e.g., monthly or
                         quarterly)
Source: Florida Agency for Health Care
Administration.
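For illustration, a measure definition following
table IV.1 could be captured as a structured
record. In the sketch below, the field names
follow the table, and the sample values are
hypothetical, not an actual Florida measure.

    from dataclasses import dataclass

    @dataclass
    class MeasureDefinition:
        measure_number: str          # unique alpha/numeric identifier
        measure_type: str            # input, output, or outcome
        responsible_individual: str  # who defines the measure
        purpose: str                 # what information it provides
        definition: str              # thorough description
        formula: str                 # written-out calculation, if any
        measurement_period: str      # time period covered
        mission_relationship: str    # link to the unit's mission
        reporting: str               # to whom and how often reported

    example = MeasureDefinition(
        measure_number="OUT-01",
        measure_type="outcome",
        responsible_individual="program manager",
        purpose="track timeliness of license reviews",
        definition="percentage of reviews completed within 90 days",
        formula="reviews completed in 90 days / total reviews x 100",
        measurement_period="fiscal year",
        mission_relationship="supports timely, fair regulation",
        reporting="quarterly to the division director",
    )
    print(example.measure_number, example.measure_type)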
In addition, the organizations said they used
performance measure databases to track
performance. For example:
- FDA and EPA said they collected information
in a performance database. FDA said the agency
maintained a performance goal database showing
inputs and outputs, dollars associated with
specific goal clusters, and the part of the agency
associated with the goals. FDA said it also used
a data warehouse to track goals and resources
over time and link FDA goals and measures to other
legislative requirements, such as FDA's
modernization act and the Chief Financial Officers
Act. According to FDA, this approach was designed
to allow the agency to maintain just one set of
performance information, instead of several sets.
EPA said its database was to be used by EPA goal
teams and national program managers.
- OSHA said it was developing a database for
strategic plan measures and, when completed, the
database would be available to all OSHA employees
through its Intranet.
- MOEA and MDES said they were using data
matrices as part of their performance management
efforts. MOEA
said it used a set of matrices that described the
linkages between the agency's environmental
outcomes and measures and the agency's primary and
secondary staff responsible for collecting and
evaluating data and information for the measures.
The agency said the matrices had been a useful
tool at all levels in the organization to
understand staff accountability. MDES said the
agency's data matrix listed the customer(s), the
customer needs, what the agency would measure, the
type of measurement, and how data would be
collected for each service that the agency
provided.
- MDOR said the agency set up a performance
measure database that included the measure
definition, the purpose of the measure, the
measure owner, and up-to-date trend data.
According to the agency, this standardized format
allowed the agency to use one set of data to
generate reports for each program area. MDOR also
said the agency used this database to generate an
executive summary of measures that listed
agencywide performance measures for each of the
agency's four goals.
Ensure the Quality,
Timeliness, and Continuity
of Performance Information
The organizations said they supported their data
analysis capabilities with accessible, high-
quality, and timely data. They said they made a
senior manager or team responsible for data
resources and used data quality evaluative or
auditing functions to assess data integrity and
its sufficiency for supporting the program logic
models. According to the organizations, the
evaluative and auditing functions assessed the
choice of measures, the collection and processing
of data, the quality of information, the
interpretation and explanation of results, and the
relevance and adequacy for decisionmaking. For
example:
- The Customs Service said that when field
staff realized management was using performance
data to make decisions, they began providing
accurate information and explanations for any
incorrect data. The agency said it required each
office to establish a data quality function,
responsible for verification and validation, that
would be inspected annually. The Customs Service
also said it established specific measure owners
who were responsible for handling measures in
their own process areas.
- FGFFC said it had a key person in each
division or functional area responsible for
developing measures, educating staff, and
gathering performance information.
Practice 10: Target
Analysis at Regulatory Intervention
Under the tenth practice, the organizations said
they targeted analytical techniques for their
regulatory intervention efforts. Specific
practices included the following:
- Use risk management techniques to target
resources for maximum results.
- Take a balanced approach to regulatory
efforts by targeting actions before, during, and
after a problem event.
Use Risk Management Techniques to Target
Resources for Maximum
Results
The regulatory organizations said they developed
risk management techniques to target the largest
regulatory problems, attempting to achieve maximum
results for their resource allocation. For
example:
- The Coast Guard, OSHA, and NRC said they were
targeting risk areas. The Coast Guard said the
agency used risk analysis to define risk groups
and levels of risk and then put investments toward
prevention in high-risk areas. OSHA noted that the
agency changed its enforcement strategy to target
high-hazard workplaces, focusing on preventing
accidents. NRC said it was developing
recommendations for improving NRC's inspection,
assessment, and enforcement processes to focus on
the most important safety issues.
- The Customs Service said it formalized a
Trade Compliance Risk Management Process that
collected data and information, analyzed and
assessed risk, prescribed action, and tracked and
reported information. The agency said that it
then used these data to analyze historical
compliance trends for various industries,
specific commodities, and certain importers and,
by applying definitions of significance and
materiality, to focus on areas with the greatest
potential risk.
the source of risk was an importer's lack of
knowledge, complex trade laws, or willful
disregard for importer laws would result in
different, specific action plans to assign
resources and address the risk.
- OSFI said it developed a risk exposure index
as a composite measure of the agency's assessment
of the level of risk facing the financial industry
at a given time, weighted by the condition of the
institution, the value of assets, and the type of
institution. Over time, OSFI said it planned to
compare and track the impact of major events on
overall system risk.
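A composite index of the kind OSFI describes can
be sketched as an asset-weighted average of
institution-level risk scores. The institutions,
scores, and weights below are invented for
illustration.

    # Invented institutions, risk scores (higher is
    # riskier), and asset sizes in billions of dollars.
    institutions = [
        {"name": "Bank A", "risk_score": 2.0, "assets": 50.0},
        {"name": "Bank B", "risk_score": 4.0, "assets": 10.0},
        {"name": "Bank C", "risk_score": 1.0, "assets": 40.0},
    ]

    def risk_exposure_index(insts):
        """Asset-weighted average risk score for the industry."""
        total = sum(i["assets"] for i in insts)
        weighted = sum(i["risk_score"] * i["assets"] for i in insts)
        return weighted / total

    print(round(risk_exposure_index(institutions), 2))  # 1.8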
Take a Balanced Approach
to Regulatory Efforts
The organizations said they carefully balanced
their regulatory efforts to address both
prevention and mitigation. According to the
agencies, they did this by analyzing how they
influenced and directly impacted the entities they
regulated and by targeting regulatory actions
before, during, and after a potential or actual
problem event. The organizations said they also
supplemented their goals and measures related to
mitigation with goals and measures designed to
track and reduce risk factors, thus seeking to
anticipate and prevent problem events where
possible. For example:
- NHTSA said the agency sought to achieve
progress in the following two intermediate outcome
areas: (1) reducing the occurrence of crashes and
(2) mitigating the consequences of crashes. NHTSA
said programs used performance measures to help
achieve the intermediate outcomes, which, in turn,
influenced the outcomes. The agency said it used
a matrix, shown in figure IV.1, that addressed
three time phases-precrash, crash, and
postcrash-and showed where each of its programs
had an impact. The
matrix was a tool to use in defining problems and
posing strategies. For example, the agency said
NHTSA's National Advanced Driving Simulator was a
specific effort to conduct research on driver
performance and behavior during the precrash
sequence of events. For postcrash events, the
Intelligent Vehicle Initiative was targeted at
understanding the causes of highway collisions,
and the Emergency Medical Services effort was
designed to enhance the comprehensive emergency
medical service systems to care for victims of
crashes.
Figure IV.1: National Highway Traffic Safety
Administration Matrix to Define Problems and
Strategies
Source: NHTSA.
- The Coast Guard said the agency targeted its
activities to (1) improve operational methods for
a quicker response-mitigation-and (2) prevent
problems from occurring-prevention. The agency
said its focus on prevention was preceded by a
careful analysis of cause and effect and of what
the agency could influence. For example,
the Coast Guard said it developed an analysis that
recommended preventing Alaskan commercial fishing
industry disasters before they occurred, as well
as preparing to react to them if they should
occur. The analysis described the fatal events and
recommended strategies to prevent or reduce the
seriousness of these events. The prevention
efforts revolved around critical factors, such as
vessel stability and hull integrity, skipper and
crew training and licensing, avoidance of harsh
sea and weather conditions, preventing falls
overboard, and safer diving practices. Instead of
saving lives after a vessel casualty, the Coast
Guard said it could take action in these areas to
improve the prevention of fatalities.
- The Customs Service said the agency found
that it could no longer depend on enforcement and
interdiction efforts because the volume of
transactions required an ever-increasing amount of
resources. Realizing that most importer errors
were not deliberate, but the result of trying to
follow complex requirements, the agency said it
began to focus on accountability management, where
the agency had an obligation to inform importers
about compliance, and thus prevent compliance
problems. However, when importers continued to
violate requirements, the agency said it pursued
appropriate enforcement actions. In addition, the
agency said it examined measures that could
provide some data on the effectiveness of its
border interdiction efforts, such as drug
transport costs. The assumption was that if the
agency could disrupt and dismantle drug
transportation attempts, then drug suppliers would
find it too costly to operate. In this case, the
agency said measures of disruption would
complement measures of drug seizures.
Practice 11: Account for External and Contextual
Factors
Under the eleventh practice, the organizations
said they sought to account for factors beyond
their control that might have an impact on their
efforts to achieve outcomes. Specific practices
included the following:
- Identify and track external and contextual
factors to explain their influence on program
results.
- Use smaller units of analysis to better
understand program effects.
- Use statistical techniques and program
evaluation to adjust for and isolate the influence
of external factors.
Identify and Track External
and Internal Contextual
Factors
The organizations said they grappled with the
issues of responsibility and accountability for
performance in those areas affected by external
factors, such as the state of the economy or
internal changes in program operations and
technological support. They said they identified
these factors and tracked them over time,
analyzing their impact on specific performance
goals and targets. The organizations said they
used this information to help internal and
external stakeholders understand the influence of
these factors on program results. In addition,
this information was used to refine the rationale
of the program logic models. For example:
- NHTSA said it found that discussing the
influence of external factors in the performance
plan helped overcome agency fears of being held
accountable for measures over which the agency had
limited control. The agency said it examined the
effect of external factors using internal and
external resources, such as local law enforcement
studies. In addition, tracking external factors
lent credibility to other data, making it easier
to explain the impact of the agency. The external
factors were discussed in detail in the agency's
strategic plan.
- The Florida Department of Insurance (FDI)
said it identified the contributing factors that
could create a demand for services, such as
insurance or fire marshal services. These factors,
such as arson, defective products, acts of
violence, and inadequately trained emergency
personnel, provided the fire marshal with a
starting point for identifying strategies for
influencing or controlling specific factors.
- The Coast Guard said it analyzed key factors
related to its goals. For example, the Coast Guard
said it identified several key factors that
increased the difficulty of successfully
responding to mariners in distress, such as
untimely distress notification, severe weather,
poor communication, and poor information about the
distress. The agency said its strategies were
targeted at preventing the distress, but also at
maximizing the survival chances by addressing
these key factors.
- FAA said the agency determined that weather
was a factor in 40 percent of aviation accidents
and 50 percent of aviation fatalities; therefore,
FAA focused on influencing the impact of the
weather conditions. For example, the agency said
it had strategies to invest in an integrated
terminal weather system and a weather and radar
processor. These would give air traffic
controllers instant access to current weather
data. FAA said the agency was also implementing
and improving existing weather sensors, targeting
a weather research program to demonstrate storm
growth, and strengthening aviation delay
forecasting technology.
� FGFFC said the agency tracked external
factors related to boating accidents, such as
alcohol use, so that it could thoughtfully
analyze the factors contributing to an accident,
even though those factors were beyond the
agency's control.
� TDOB said it tracked explanatory measures,
such as the number of state-chartered banks in
Texas and the total assets they represent, which
provided the agency with the opportunity to
explain the influence of external factors on
agency performance measures. The agency said it
also included a separate section in the strategic
plan that discussed relevant external and internal
factors.
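The following is a minimal, hypothetical sketch,
written in Python, of how an analyst might track
an explanatory measure alongside a performance
measure over time; the institution and finding
counts are invented for illustration and are not
drawn from TDOB or any other agency.

  # Illustrative sketch only: all figures invented.
  years = [1995, 1996, 1997, 1998]
  banks = [430, 442, 455, 470]      # explanatory measure
  findings = [120, 118, 131, 129]   # performance measure

  prev_b = prev_f = None
  for yr, b, f in zip(years, banks, findings):
      note = ""
      if prev_b is not None:
          # Year-over-year change in each series helps
          # stakeholders see whether swings in results
          # track the size of the regulated base.
          note = (f"  banks {100*(b-prev_b)/prev_b:+.1f}%,"
                  f" findings {100*(f-prev_f)/prev_f:+.1f}%")
      print(f"{yr}: {b} banks, {f} findings{note}")
      prev_b, prev_f = b, f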
Use Smaller Units of Analysis
The organizations said they recognized that
program effects were often difficult to evaluate
if analyzed on a national or statewide basis. In
some cases, they said they used a smaller unit of
analysis that was more clearly defined and in
which program efforts and impacts could be more
clearly identified and understood. For example
(see the sketch after these examples):
� Florida's OPPAGA recommended that agencies
divide the populations they regulate into smaller
segments and evaluate their efforts within each
segment, where they were better able to control
for external factors.
� FGFFC said it conducted detailed observations
on a smaller population, such as a portion of a
river, attempting to identify outputs that
contribute to outcomes in a habitat. The agency
said it was able to carefully measure its efforts,
detailing all of the inputs and outputs, such as
number and location of officers, weather
conditions, accidents, and violations. In doing
this, the agency said it could take a snapshot of
its impact and use it for future planning.
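A minimal, hypothetical sketch, written in
Python, of computing a measure within smaller,
better-defined segments rather than statewide;
the inspection records and segment names are
invented for illustration.

  # Illustrative sketch only: records are invented.
  from collections import defaultdict

  checks = [
      ("upper river", True), ("upper river", True),
      ("upper river", False), ("lower river", True),
      ("lower river", False), ("lower river", False),
  ]

  totals = defaultdict(lambda: [0, 0])  # [passed, total]
  for segment, passed in checks:
      totals[segment][1] += 1
      totals[segment][0] += passed  # True counts as 1

  for segment in sorted(totals):
      p, n = totals[segment]
      print(f"{segment}: {p}/{n} passed ({100 * p / n:.0f}%)")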
Use Statistical Techniques
and Program Evaluation
The organizations said they used statistical
techniques, such as normalization, ratios, or
trend analysis, to better identify the influence
of external factors. For example, using
calculations that normalized information as a
standard rate, such as "per capita," provided a
standard that could be compared from year to year.
In addition, the organizations said they used
program evaluations to examine long-term outcomes
and to isolate program effects from other
factors. For example (a brief sketch of
normalization follows these examples):
� FAA said it accounted for the growth of the
airline industry by normalizing its measures,
stating them in terms of "per flight hours." In
addition, FAA said the agency planned to use logic
models to capture a uniform picture of external
factors and share them with stakeholders.
� OSHA said it found that the decline in
occupational injury and illness rates in the early
to mid-1990s was attributable to legislative
reforms motivated by increases in workers'
compensation payments and a growing awareness of
workplace hazards among unions, employers, and the
insurance industry. According to OSHA, it found
such factors as employment shifts into low-hazard
industries and underreporting of injury and
illness rates were not contributory. OSHA said its
reform efforts during this period affected the
agency's inspection strategy and resulted in a
renewed emphasis on outreach, partnering, and
working cooperatively with employers to address
workplace hazards. According to the agency, the
new approach complemented market influences
affecting industry, namely, escalating costs for
workers' compensation programs and the dawning
realization that corrective action was needed to
reduce workplace accidents. OSHA said its reforms
reinforced and supported industry initiatives and
contributed to the decline in occupational injury
and illness rates.
� MOEA said the agency contracted with a
consultant to evaluate the effect of municipal
solid waste management on resource conservation
and greenhouse gas emissions, gauging the impact
of the agency's efforts on its outcome goals. The
agency said it also conducted an internal study
of the economic benefits of recycling.
� MPCA said the agency discovered that its
"intuitive" linkages regarding mercury
contamination were incorrect when tested by an
independent institute. MPCA adjusted the agency's
strategies accordingly.
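A minimal, hypothetical sketch, written in
Python, of the normalization technique described
above: a raw count is divided by an exposure
measure so that the resulting rate can be
compared from year to year. The accident counts
and flight hours are invented for illustration
and are not FAA data.

  # Illustrative sketch only: all figures invented.
  data = {
      1997: {"accidents": 46, "flight_hours": 15_800_000},
      1998: {"accidents": 48, "flight_hours": 17_100_000},
  }

  for year in sorted(data):
      d = data[year]
      # A rate per 100,000 flight hours stays comparable
      # across years even as industry activity grows.
      rate = d["accidents"] / d["flight_hours"] * 100_000
      print(f"{year}: {d['accidents']} accidents, "
            f"{rate:.3f} per 100,000 flight hours")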
Appendix V
Practice Category 5: Continue Improving
Performance-Based Management
Summary
In this category, the organizations said they
continuously assessed their performance-based
management efforts and results to identify areas
for improvement. The specific practice in this
area was as follows:
12. Continuously assess and strengthen
performance-based management efforts.
Practice 12: Continuously Assess and Strengthen
Performance-Based Management
The organizations said they continuously assessed
their performance-based management approach and
results, and modified their approach as necessary.
In addition, they said they recognized that the
evolution of an organization's performance
management efforts over time involved changes
across the other four practice areas. For example,
as previously discussed, the leading organizations
said they regularly reviewed their measures,
goals, and targets and adjusted them as necessary.
� FHCA said it started with key measures that
were simple and easy to understand. Once
performance measurement was accepted in the
agency and processes were well established, the
agency could then experiment with more
sophisticated measures.
� FDA said the agency changed its performance
plan presentation to show the integration between
initiatives and performance goals, more clearly
stating how FDA planned to close the gap between
strategic priorities and current performance. FDA
also said it provided additional information on
baselines and contexts for the agency's
performance goals.
The organizations said they also sought to improve
their management approach by moving on to more
sophisticated methodologies, such as the balanced
scorecard, once initial experience was gained.1
For example (a brief sketch of a balanced measure
set follows these examples):
� FDBF said the agency used simple program
logic chains at the beginning of its performance
management efforts. Later, the agency considered
other methodologies, such as the balanced
scorecard. Over time, FDBF said it looked to
further develop and reinforce performance
management capabilities.
� The Customs Service, NRC, MDES, OSHA, MDOT,
and MDOR said they were either using, or planning
to use, all or part of a balanced scorecard
approach, building on their past performance
management efforts to create a balanced set of
measures.
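A minimal, hypothetical sketch, written in
Python, of organizing measures into the four
scorecard categories summarized in the footnote
at the end of this appendix; the measure names
are invented and do not represent any agency's
actual scorecard.

  # Illustrative sketch only: measure names invented.
  scorecard = {
      "financial performance": ["cost per exam"],
      "customer knowledge": ["customer satisfaction"],
      "internal business processes": ["exam cycle time"],
      "learning and growth": [],  # gap: no measure yet
  }

  for category, measures in scorecard.items():
      shown = ", ".join(measures) or "NO MEASURES YET"
      print(f"{category}: {shown}")

  # The set is "balanced" only when every category
  # has at least one measure.
  print("balanced:", all(scorecard.values()))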
In many cases, the organizations cautioned that
they were just at the beginning of a journey to
successfully implement performance-based
management. For example:
� MDES said that agencies must be willing to
change many management and operational systems to
better align them for performance. Changing those
systems might take up to 10 years to accomplish.
� MDOT said the agency spent 5 years developing
a process for establishing meaningful measures and
then continuously improving them. In its early
performance management efforts, the agency began
by focusing on what it could achieve when it had
direct control over events. Later, MDOT said
it adopted more extensive in-depth analysis of
cause-and-effect linkages and the influence of
external factors.
Overall, the organizations' experiences indicated
that a strong performance-based management
approach was under constant review and refinement.
The end result was not just intended to be a
written strategic or performance plan, but a
results-oriented culture within the organization
and among stakeholders.
_______________________________
1The balanced scorecard approach is summarized in
Robert S. Kaplan and David P. Norton, The Balanced
Scorecard: Translating Strategy Into Action,
Harvard Business School Press, Boston, 1996. The
scorecard emphasizes the use of a balanced set of
measures across the four categories of financial
performance, customer knowledge, internal business
processes, and learning and growth.
Bibliography
Aristigueta, Maria P. "Fine-Tuning Performance
Measurement as a Management Strategy: Insights
from the States." 1998 National Public Sector
Productivity On-Line Conference, Nov. 1998.
Atkinson, Anthony A., Waterhouse, John H., and
Wells, Robert B. "A Stakeholder Approach to
Strategic Performance Measurement," Sloan
Management Review. Spring 1997, pp. 25-37.
Australian National Audit Office. Better Practice
Principles for Performance Information. Internet
address: http://www.anao.gov.au.
Berman, Evan M. "Measuring Productivity,"
Productivity in Public and Nonprofit
Organizations. Sage Publications, Thousand Oaks,
CA, 1998, pp. 51-76.
Berman, Evan M. and West, Jonathan P.
"Productivity Enhancement Efforts in Public and
Nonprofit Organizations," Public Productivity &
Management Review. Vol. 22, No. 2, Dec. 1998, pp.
207-219.
Brizius, Jack A. and Campbell, Michael D. Getting
Results. Council of Governors' Policy Advisors,
Washington, D.C., 1991.
Buckman, James F. and Holter, Suzanne W.
"Alignment: The Path to Performance Improvement,"
Journal of Strategic Performance Measurement. Vol.
2, No. 6, Dec. 1998, pp. 40-46.
Campbell, Wilson and Fountain, Jay. "Service
Efforts and Accomplishments-Lessons Learned." 1998
National Public Sector Productivity On-Line
Conference, Nov. 1998.
Center for Accountability and Performance.
Performance Measurement: Concepts and Techniques,
American Society for Public Administration.
Washington, D.C., undated.
Crookall, Paul and Ingstrup, Ole. "The Citizen as
Customer: An International Survey of Well-
Performing Public Service Organizations." 1998
National Public Sector Productivity On-Line
Conference, Nov. 1998.
Ernst & Young Center for Business Innovation.
Perspectives on Business Innovation, Issue 2:
Measuring Business Performance. Cambridge, MA,
1998.
Fischer, Richard J. "An Overview of Performance
Measurement," Benchmarks of Performance, Special
Section, reprint of Sept. 1994 issue of Public
Management, pp. S2-S8.
Frigo, Mark L. and Krumwiede, Kip R. "Balanced
Scorecards: A Rising Trend in Strategic
Performance Measurement," Journal of Strategic
Performance Measurement. Vol. 3, No. 1, Feb.-Mar.
1999, pp. 42-48.
Glover, Mark. A Practical Guide for Measuring
Program Efficiency and Effectiveness in Local
Government, The Innovation Groups. Tampa, FL,
1994.
Grifel, Stuart S. "Organizational Culture: Its
Importance in Performance Measurement," Benchmarks
of Performance. Special Section, reprint of Sept.
1994 issue of Public Management, pp. S19-S20.
Hacker, Maria E. and Brotherton, Paul A.
"Designing and Installing Effective Performance
Measurement Systems." Internet address:
http://www.balancedscorecard.com.
Hatry, Harry P. "Where the Rubber Meets the Road:
Performance Measurement for State and Local Public
Agencies," Using Performance Measurement to
Improve Public and Nonprofit Programs. Kathryn E.
Newcomer, Editor, New Directions for Evaluation,
Number 75, Fall 1997, Jossey-Bass Publishers, San
Francisco, pp. 31-44.
Hatry, Harry P. Draft manuscript, Performance
Measurement for Results-Focused Agencies, Aug.
1998.
Hatry, Harry, Gerhart, Craig, and Marshall,
Martha. "Eleven Ways to Make Performance
Measurement More Useful to Public Managers,"
Benchmarks of Performance. Special Section,
reprint of Sept. 1994 issue of Public Management,
pp. S15-S18.
Julnes, Patria de Lancer. "Lessons Learned About
Performance Measurement: Rebuilding Procrustes'
Bed." 1998 National Public Sector Productivity On-
Line Conference, Nov. 1998.
Kaplan, Robert S. and Norton, David P. The
Balanced Scorecard. Harvard Business School Press,
Boston, 1996.
Madigan, James M. "Measures Matrix Chart: A
Holistic Approach to Understanding Operations,"
Quality Management Journal. Oct. 1993, pp. 77-86.
National Academy of Public Administration. Center
for Improving Government Performance, Helpful
Practices in Implementing Government Performance.
Internet address:
http://www.napawash.org/NAPA/NewNAPAHome.
National Academy of Public Administration. Center
for Improving Government Performance, Improving
Performance Across Programs: Thinking About the
Issue-Taking the First Steps. National Academy of
Public Administration, Washington, D.C., undated.
National Academy of Public Administration. Center
for Improving Government Performance, Planning for
Results. National Academy of Public
Administration, Washington, D.C., undated.
Newcomer, Kathryn E. "Using Performance
Measurement to Improve Programs," Using
Performance Measurement to Improve Public and
Nonprofit Programs. Kathryn E. Newcomer, Editor,
New Directions for Evaluation, Number 75, Fall
1997, Jossey-Bass Publishers, San Francisco, pp. 5-
14.
Organisation for Economic Co-Operation and
Development. Performance Measurement in
Government: Issues and Illustrations. Organisation
for Economic Co-Operation and Development, Paris,
1994.
Organisation for Economic Co-Operation and
Development. Performance Management in Government:
Performance Measurement and Results-Oriented
Management. Organisation for Economic Co-Operation
and Development, Paris, 1994.
Organisation for Economic Co-Operation and
Development. In Search of Results: Performance
Management Practices. Organisation for Economic Co-
Operation and Development, Paris, 1997.
Osborne, David and Gaebler, Ted. "The Art of
Performance Measurement," Accountability for
Performance. David Ammons, Editor, 1995, ICMA,
Washington, D.C., pp. 33-43.
Patton, Michael Q. Utilization-Focused Evaluation.
3rd Edition, Sage Publications, Thousand Oaks, CA,
1997.
Pautke, Robert W. and Redman, Thomas C. "Creating
and Aligning High-Quality Data Resources: What
Senior Leaders Need to Know," Journal of Strategic
Performance Measurement. Vol. 2, No. 6, Dec. 1998,
pp. 33-38.
Plantz, Margaret C., Greenway, Martha T., and
Hendricks, Michael. "Outcome Measurement: Showing
Results in the Nonprofit Sector," Using
Performance Measurement to Improve Public and
Nonprofit Programs. Kathryn E. Newcomer, Editor,
New Directions for Evaluation, Number 75, Fall
1997, Jossey-Bass Publishers, San Francisco, pp.
15-30.
Poister, Theodore H. "Productivity Monitoring:
Systems, Indicators, and Analysis," Accountability
for Performance. David Ammons, Editor, 1995, ICMA,
Washington, D.C., pp. 98-120.
Poister, Theodore H. and Strieb, Gregory D.
"Strategic Management in the Public Sector,"
Public Productivity and Management Review. Vol.
22, No. 3, Mar. 1999, pp. 308-325.
Rossi, Peter H. and Freeman, Howard E. Evaluation:
A Systematic Approach. 5th Edition, Sage
Publications, Newbury Park, CA, 1993.
Scheirer, Mary Ann. "Designing and Using Process
Evaluation," Handbook of Practical Program
Evaluation. Joseph S. Wholey, Harry P. Hatry, and
Kathryn E. Newcomer, Editors, 1994, Jossey-Bass
Publishers, San Francisco, pp. 40-68.
Schneiderman, Arthur M. "Why Balanced Scorecards
Fail," Journal of Strategic Performance
Measurement. Special Edition, Jan. 1999, pp. 6-11.
Sparrow, Malcolm K. Imposing Duties. Praeger
Publishers, Westport, CT, 1994.
State of Arizona. Strategic Planning and
Performance Measurement Handbook, 1998. Internet
address:
http://www.state.az.us/ospb/StatPlan.html.
Stenzel, Catherine and Stenzel, Joe. "From the
Editors," Journal of Strategic Performance
Measurement. Special Edition, Jan. 1999, pp. 3-5.
Swiss, James E. "Performance Monitoring Systems,"
Accountability for Performance. David Ammons,
Editor, 1995, ICMA, Washington, D.C., pp. 67-97.
Texas State Auditor's Office, Legislative Budget
Board, Governor's Office of Budget and Planning.
Guide to Performance Measurement for State
Agencies, Universities, and Health-Related
Institutions. Austin, TX, 1995.
Truitt, Bruce. "Measuring the Unmeasurable." Paper
presented at the Managing for Results conference,
Austin, TX, Nov. 1-3, 1995.
Van Wart, Montgomery and Berman, Evan.
"Contemporary Public Sector Productivity Values,"
Public Productivity and Management Review. Vol.
22, No. 3, Mar. 1999, pp. 326-347.
Waller, Carl. "Emerging Issues in Performance
Management," Journal of Strategic Performance
Measurement. Special Edition, Jan. 1999, pp. 37-
40.
Walters, Jonathan. Measuring Up. Governing Books,
Washington, D.C., 1998.
Weiss, Carol H. Evaluation Research. Prentice-
Hall, Englewood Cliffs, NJ, 1972.
Wholey, Joseph S. "Assessing the Feasibility and
Likely Usefulness of Evaluation," Handbook of
Practical Program Evaluation. Joseph S. Wholey,
Harry P. Hatry, and Kathryn E. Newcomer, Editors,
1994, Jossey-Bass Publishers, San Francisco, pp.
15-39.
Wholey, Joseph S. "Trends in Performance
Measurement: Challenges for Evaluators,"
Evaluation for the 21st Century. Eleanor Chelimsky
and William R. Shadish, Editors, 1997, Sage
Publications, Thousand Oaks, CA, pp. 124-133.
Wholey, Joseph S. "Performance-Based Management:
Responding to the Challenges," Public Productivity
and Management Review. Vol. 22, No. 3, Mar. 1999,
pp. 288-307.
Related GAO Products
Performance Plans: Selected Approaches for
Verification and Validation of Agency Performance
Information (GAO/GGD-99-139, July 30, 1999).
Managing for Results: Opportunities for Continued
Improvements in Agencies' Performance Plans
(GAO/GGD/AIMD-99-215, July 30, 1999).
Agency Performance Plans: Examples of Practices
That Can Improve Usefulness to Decisionmakers
(GAO/GGD/AIMD-99-69, Feb. 26, 1999).
Managing for Results: Measuring Program Results
That Are Under Limited Federal Control (GAO/GGD-99-
16, Dec. 11, 1998).
Managing for Results: An Agenda to Improve the
Usefulness of Agencies' Annual Performance Plans
(GAO/AIMD/GGD-98-228, Sept. 8, 1998).
The Results Act: Assessment of the Governmentwide
Performance Plan for Fiscal Year 1999
(GAO/AIMD/GGD-98-159, Sept. 8, 1998).
Performance Management: Aligning Employee
Performance With Agency Goals at Six Results Act
Pilots (GAO/GGD-98-162, Sept. 4, 1998).
Program Evaluation: Agencies Challenged by New
Demand for Information on Program Results (GAO/GGD-
98-53, Apr. 24, 1998).
The Results Act: An Evaluator's Guide to Assessing
Agency Annual Performance Plans, Version 1
(GAO/GGD-10.1.20, Apr. 1998).
Executive Guide: Measuring Performance and
Demonstrating Results of Information Technology
Investments (GAO/AIMD-98-89, Mar. 1998).
Agencies' Annual Performance Plans Under the
Results Act: An Assessment Guide to Facilitate
Congressional Decisionmaking, Version 1
(GAO/GGD/AIMD-10.1.18, Feb. 1998).
Managing for Results: Critical Issues for
Improving Federal Agencies' Strategic Plans
(GAO/GGD-97-180, Sept. 16, 1997).
Managing for Results: Using the Results Act to
Address Mission Fragmentation and Program Overlap
(GAO/AIMD-97-146, Aug. 29, 1997).
Managing for Results: Regulatory Agencies
Identified Significant Barriers to Focusing on
Results (GAO/GGD-97-83, June 24, 1997).
The Government Performance and Results Act: 1997
Governmentwide Implementation Will Be Uneven
(GAO/GGD-97-109, June 2, 1997).
Managing for Results: Analytic Challenges in
Measuring Performance (GAO/HEHS/GGD-97-138, May
30, 1997).
Performance Budgeting: Past Initiatives Offer
Insights for GPRA Implementation (GAO/AIMD-97-46,
Mar. 27, 1997).
Measuring Performance: Strengths and Limitations
of Research Indicators (GAO/RCED-97-91, Mar. 21,
1997).
Executive Guide: Effectively Implementing the
Government Performance and Results Act (GAO/GGD-96-
118, June 1996).
*** End of Document ***