Government Performance: Lessons Learned for the Next
Administration on Using Performance Information to Improve
Results (24-JUL-08, GAO-08-1026T).
Over the past 15 years, legislative and executive branch reform
efforts have attempted to shift the focus of federal government
management from a preoccupation with activities to the results or
outcomes of those activities. Based on over a decade of work in
this area, GAO has found a transformation in the capacity of the
federal government to manage for results, including an
infrastructure of outcome-oriented strategic plans, performance
measures, and accountability reporting that provides a solid
foundation for improving the performance of federal programs.
However, agencies have made less progress in getting their
managers to use performance information in their decision
making. GAO was asked to testify on the preliminary results of
ongoing work looking at (1) trends in federal managers' use of
performance information to manage, both governmentwide and at the
agency level; (2) how agencies can encourage greater use of
performance information to improve results; and (3) lessons
learned from prior management reforms for the next
administration. Our statement is based on prior GAO reports and
surveys we conducted in 1997, 2000, 2003, and 2007. For the
results of our 2007 survey, see e-supplement GAO-08-1036SP. GAO
will be issuing a report at a later date that will explore the
use of performance results in management decision making at
selected agencies.
-------------------------Indexing Terms-------------------------
REPORTNUM: GAO-08-1026T
ACCNO: A83047
TITLE: Government Performance: Lessons Learned for the Next
Administration on Using Performance Information to Improve
Results
DATE: 07/24/2008
SUBJECT: Accountability
Data collection
Data integrity
Decision making
Executive agencies
Government information
Information resources management
Internal controls
Lessons learned
Performance appraisal
Performance management
Performance measures
Program evaluation
Program management
Regulatory agencies
Reporting requirements
Strategic planning
Surveys
Assessments
******************************************************************
** This file contains an ASCII representation of the text of a **
** GAO Product. **
** **
** No attempt has been made to display graphic images, although **
** figure captions are reproduced. Tables are included, but **
** may not resemble those in the printed version. **
** **
** Please see the PDF (Portable Document Format) file, when **
** available, for a complete electronic file of the printed **
** document's contents. **
** **
******************************************************************
GAO-08-1026T
This text file was formatted by the U.S. Government Accountability
Office (GAO) to be accessible to users with visual impairments, as part
of a longer term project to improve GAO products' accessibility. Every
attempt has been made to maintain the structural and data integrity of
the original printed product. Accessibility features, such as text
descriptions of tables, consecutively numbered footnotes placed at the
end of the file, and the text of agency comment letters, are provided
but may not exactly duplicate the presentation or format of the printed
version. The portable document format (PDF) file is an exact electronic
replica of the printed version. We welcome your feedback. Please E-mail
your comments regarding the contents or accessibility features of this
document to [email protected].
This is a work of the U.S. government and is not subject to copyright
protection in the United States. It may be reproduced and distributed
in its entirety without further permission from GAO. Because this work
may contain copyrighted images or other material, permission from the
copyright holder may be necessary if you wish to reproduce this
material separately.
Testimony:
Before the Subcommittee on Federal Financial Management, Government
Information, Federal Services, and International Security, Committee on
Homeland Security and Governmental Affairs, U.S. Senate:
United States Government Accountability Office: GAO:
For Release on Delivery:
Expected at 2:30 p.m. EDT:
Thursday, July 24, 2008:
Government Performance:
Lessons Learned for the Next Administration on Using Performance
Information to Improve Results:
Statement of Bernice Steinhardt:
Director, Strategic Issues:
GAO-08-1026T:
GAO Highlights:
Highlights of GAO-08-1026T, a testimony to the Subcommittee on Federal
Financial Management, Government Information, Federal Services, and
International Security, Committee on Homeland Security and Governmental
Affairs, U.S. Senate.
Why GAO Did This Study:
Over the past 15 years, legislative and executive branch reform efforts
have attempted to shift the focus of federal government management from
a preoccupation with activities to the results or outcomes of those
activities. Based on over a decade of work in this area, GAO has found
a transformation in the capacity of the federal government to manage
for results, including an infrastructure of outcome-oriented strategic
plans, performance measures, and accountability reporting that provides
a solid foundation for improving the performance of federal programs.
However, agencies have made less progress in getting their managers to
use performance information in their decision making.
GAO was asked to testify on the preliminary results of ongoing work
looking at (1) trends in federal managers' use of performance
information to manage, both governmentwide and at the agency level; (2)
how agencies can encourage greater use of performance information to
improve results; and (3) lessons learned from prior management reforms
for the next administration. Our statement is based on prior GAO
reports and surveys we conducted in 1997, 2000, 2003, and 2007. For the
results of our 2007 survey, see e-supplement GAO-08-1036SP. GAO will be
issuing a report at a later date that will explore the use of
performance results in management decision making at selected agencies.
What GAO Found:
According to GAO surveys, since 1997 significantly more federal
managers report having performance measures for the programs they
manage. However, despite having more performance measures available,
federal managers' reported use of performance information in management
decision making has not changed significantly, as shown below.
Figure: Percentage of Federal Managers Who Reported Using Information
Obtained from Performance Measurement for Various Management Activities
to a "Great" or "Very Great" Extent:
[See PDF for image]
This figure is a horizontal multiple bar graph depicting the following
data:
Setting program priorities:
1997: 65.8%;
2007: 58.1%.
Allocating resources:
1997: 62.5%;
2007: 59%.
Adopting new program approaches or changing work processes[A]:
1997: 66.1%;
2007: 53%.
Coordinating program efforts with other internal or external
organizations:
1997: 56.8%;
2007: 50.5%.
Refining program performance measures:
1997: 51.5%;
2007: 46.3%.
Setting new or revising existing performance goals:
1997: 58.5%;
2007: 52.1%.
Setting individual job expectations for the government employees I
manage or supervise:
1997: 60.8%;
2007: 61.9%.
Rewarding government employees I manage or supervise[A]:
1997: 52.6%;
2007: 60.9%.
Developing and managing contracts[B]:
2007: 40.5%.
Source: GAO.
Notes: Percentages are based on those respondents answering on the
extent scale.
[A] There is a statistically significant difference between 1997 and
2007 surveys.
[B] This question was not asked in 1997.
[End of figure]
For the collection of performance information to be considered more
than a meaningless paperwork exercise, it must be useful to and used by
federal decision makers at all levels--including Congress. To reach this
state, GAO believes that the next administration should promote three
key practices that we have identified in our work over the last 10
years: (1) demonstrate leadership commitment to results-oriented
management; (2) develop a clear "line of sight" linking individual
performance with organizational results; and (3) build agency capacity
to collect and use performance information. In addition to encouraging
agencies to employ these practices, the next administration should: (1)
adopt a more strategic and crosscutting approach to overseeing
governmentwide performance; (2) improve the relevance of performance
information to Congress; and (3) build agency confidence in assessments
for use in decision making.
To view the full product, including the scope and methodology, click on
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-08-1026T]. For more
information, contact Bernice Steinhardt at (202) 512-6806 or
[email protected].
[End of section]
Mr. Chairman and Members of the Committee:
I am pleased to be here today to discuss the results of our 2007 Survey
on Performance and Management Issues and lessons learned over the past
15 years through legislative and executive efforts to improve the
management and performance of the federal government. Recent events,
such as lead paint in imported children's products, tainted meat,
predatory mortgage lending, contract fraud, and national disasters like
Hurricane Katrina and the attacks of September 11, 2001, raise
questions among the American people about the capacity of the federal
government to meet their most pressing needs. Additionally, the
nation's long-term fiscal imbalance drives the need for federal
agencies to allocate increasingly scarce resources in the most
efficient and effective way possible. The next administration can
continue to bring a greater focus on improving the performance of
federal programs and ensuring that federal funds are allocated
effectively by building on the strengths of prior performance
improvement initiatives.
Over the past 15 years, various reform efforts have attempted to shift
the focus of federal government management from a preoccupation with
activities to the results or outcomes of those activities. Congress
enacted the Government Performance and Results Act of 1993 (GPRA)
[Footnote 1] to inform congressional and executive decision making by
providing objective information on the relative effectiveness and
efficiency of federal programs and spending. That same year, the
Clinton administration launched the National Performance Review (NPR),
which was intended to make the government "work better and cost less."
The current administration has also attempted to resolve long-standing
federal management weaknesses through its five governmentwide
management priorities under the President's Management Agenda (PMA),
which was first announced in 2001.[Footnote 2] A central element in the
Performance Improvement Initiative of the PMA is the Office of
Management and Budget's (OMB) Program Assessment Rating Tool (PART),
which was created in 2002 and serves as a diagnostic tool that is
intended to provide a consistent approach for evaluating federal
programs as part of the executive budget formulation process. Through
PART, OMB has sought to create better ties between program performance
and the allocation of resources. Prior to these efforts, our work on
performance measurement in the federal government showed that federal
agencies generally lacked the infrastructure needed to manage and
report on the results of federal programs in a way that was transparent
to Congress and the American people.
Based on over a decade of work in this area, we can say that there has
been a transformation in the capacity of the federal government to
manage for results. This capacity includes an infrastructure of outcome-
oriented strategic plans, performance measures, and accountability
reporting that has significantly increased over time and provides a
solid foundation for improving the performance of federal programs.
[Footnote 3] However, we have found that further work is needed to
integrate information about program performance into federal managers'
decision making and to sustain continued progress.
You asked us to discuss: (1) the trends in federal managers' reported
use of performance information governmentwide and at the agency level
as identified through four surveys we conducted over the past 10 years;
(2) how agencies can encourage greater use of performance information
to improve federal program management; and (3) lessons learned to be
considered by the next Congress and administration for future
performance improvement initiatives.
In summary, our surveys show that, while significantly more federal
managers have performance measures for their programs and some
agencies have shown greater use of performance information, overall the use of
performance information in management decision making has not changed
over the last 10 years. To remedy this situation, the next
administration should focus its efforts on ensuring that performance
information is both useful and used. The next administration should
promote three key practices that we have identified in our work over
the last decade to ensure that the performance information gathered is
used in making management decisions: (1) demonstrate leadership
commitment to results-oriented management; (2) create a clear "line of
sight" linking individual performance with organizational results; and
(3) build agency capacity to collect and use performance information.
Our statement is based on survey data collected in response to your
request that we examine the extent to which federal agency managers are
using performance information and how selected agencies could improve
their use of performance information to achieve results. We will be
issuing a report at a later date that addresses both these questions,
including an analysis of practices at selected agencies. Our survey,
which included
a random, stratified, governmentwide sample of federal managers at the
GS-13 level and above, was conducted from October 2007 through January
2008, and is comparable to surveys we conducted in 1997, 2000, and
2003. Our 2000 and 2007 surveys included a larger sample of government
managers--over 4,000 in 2007--that allowed for analysis of individual
agency-level results. Significant differences are reported at the 95
percent confidence interval. In reporting federal managers' positive
responses to survey questions asking about the extent to which a
condition or practice was present (ranging in five categories from "no"
to "very great" extent), we are reporting responses that indicated to a
"great" or "very great" extent. Concurrently with this statement, we
are issuing an electronic supplement that shows the responses to all
survey items.[Footnote 4] In addition to the survey results, we also
drew from our extensive prior work on GPRA, PART, transformational
change, and performance management. We conducted our work from March
2007 to July 2008, in accordance with generally accepted government
auditing standards. Those standards require that we plan and perform
the audit to obtain sufficient, appropriate evidence to provide a
reasonable basis for our findings and conclusions based on our audit
objectives. We believe that the evidence obtained provides a reasonable
basis for our findings and conclusions based on our audit objectives.
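As a rough illustration of the kind of comparison described above, the sketch below applies a standard two-proportion z-test to check whether the difference between two survey percentages is statistically significant at the 95 percent level. This is a simplified model only: it ignores the stratification and weighting of GAO's actual survey design, and the sample sizes in the example are hypothetical placeholders, not the real survey strata counts.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Simple (unweighted) two-proportion z-test.

    p1, p2 are sample proportions (0-1); n1, n2 are sample sizes.
    Returns the z statistic and whether the difference is significant
    at the 95 percent level (|z| > 1.96). Note: GAO's actual analysis
    accounts for a stratified, weighted sample design, which this
    pooled-variance sketch does not.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, abs(z) > 1.96

# Hypothetical example: a measure falling from 66.1% to 53.0%
# across two survey waves (sample sizes are illustrative only).
z, significant = two_proportion_z(0.661, 1300, 0.530, 4000)
print(round(z, 2), significant)
```

With a gap that large and samples in the thousands, the test flags the change as significant; small shifts of a few percentage points, as seen in several survey items, would not clear the 1.96 threshold.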
Governmentwide Use of Performance Information in the Past 10 Years
Remains Unchanged Although Some Agencies Show Improvements:
Based on federal managers' responses to our four governmentwide surveys
conducted over the past 10 years, performance planning and measurement
have gradually become a part of agencies' cultures.
particular, as shown in figure 1, significantly more federal managers
today report having the types of performance measures called for by
GPRA and PART than they did 10 years ago.[Footnote 5]
Figure 1: Percentage of Federal Managers Reporting Having Performance
Measures to a "Great" or "Very Great" Extent:
[See PDF for image]
This figure is a horizontal multiple bar graph depicting the following
data:
Output measures[A]:
1997: 37.8%;
2007: 54.2%.
Efficiency measures[A]:
1997: 25.9%;
2007: 44.1%.
Customer service measures[A]:
1997: 31.5%;
2007: 41.6%.
Quality measures[A]:
1997: 30.9%;
2007: 40.2%.
Outcome measures[A]:
1997: 31.8%;
2007: 48.9%.
Source: GAO.
[A] There is a statistically significant difference between 1997 and
2007 surveys.
[End of figure]
However, unless federal managers use performance data to make
management decisions and to inform policymakers, the benefits of
collecting performance information cannot be realized, and real
improvements in management and program results are less likely to be
achieved. We have found that despite having more performance measures,
the extent to which managers make use of this information to improve
performance has remained relatively unchanged. As shown in figure 2,
seven of the nine categories of management activities we asked about
showed no significant change over the past 10 years.
Figure 2: Percentage of Federal Managers Who Reported Using Information
Obtained from Performance Measurement for Various Management Activities
to a "Great" or "Very Great" Extent:
[See PDF for image]
This figure is a horizontal multiple bar graph depicting the following
data:
Setting program priorities:
1997: 65.8%;
2007: 58.1%.
Allocating resources:
1997: 62.5%;
2007: 59%.
Adopting new program approaches or changing work processes[A]:
1997: 66.1%;
2007: 53%.
Coordinating program efforts with other internal or external
organizations:
1997: 56.8%;
2007: 50.5%.
Refining program performance measures:
1997: 51.5%;
2007: 46.3%.
Setting new or revising existing performance goals:
1997: 58.5%;
2007: 52.1%.
Setting individual job expectations for the government employees I
manage or supervise:
1997: 60.8%;
2007: 61.9%.
Rewarding government employees I manage or supervise[A]:
1997: 52.6%;
2007: 60.9%.
Developing and managing contracts[B]:
2007: 40.5%.
Source: GAO.
Notes: Percentages are based on those respondents answering on the
extent scale.
[A] There is a statistically significant difference between 1997 and
2007 surveys.
[B] This question was not asked in 1997.
[End of figure]
In particular, despite efforts through GPRA and PART to help government
better inform resource allocation decisions with performance
information, over the past decade, there has been no significant shift
in the percentage of managers reporting that they use information obtained from
performance measurement when allocating resources. In addition,
contract management remains the management activity with the least
reported use of performance information, despite recommendations for
better management of federal contracts from Congress and GAO and
efforts to improve contract management through the PMA Competitive
Sourcing Initiative.[Footnote 6] In 2007, 41 percent of managers
reported that they use performance information when developing and
managing contracts, a 3 percentage point increase from 2000, when we
first asked the question. Given the growing fiscal imbalance, the
government must get the best return it can on its investment in goods
and services by improving its development, management, and assessment
of contracts; using performance information in these activities can
help to focus contract management on results.[Footnote 7]
Notably, there were two areas relating to managers' use of
performance information in management decision making that did change
significantly between 1997 and 2007. First, there was a significant
decrease in the percentage of managers who reported that their
organizations used performance information when adopting new program
approaches or changing work processes. Performance information can play
a valuable role in highlighting the need to take a closer look at the
effectiveness of existing approaches and processes. Such an examination
could lead to identifying needed changes to bring about performance
improvements. Second, there was a significant increase in the
percentage of managers who reported that they reward the employees they
manage or supervise based on performance information. We believe this
is an important development that can play a role in getting managers to
pay attention to their performance; we will discuss this in more detail
later in this statement.
While in general there has been little change in federal managers'
reported use of performance information governmentwide, agency level
comparisons between 2000 and 2007 reveal that some agencies have made
notable progress. For example, over the last 7 years, the Nuclear
Regulatory Commission (NRC) showed a significant increase in positive
responses to eight questions related to use of performance information
in management activities. At the same time, the Department of Defense
(DOD) showed no change in its responses to questions related to the use
of performance information, and the Small Business Administration (SBA)
reported significantly lower use of performance information in 2007
than in 2000 on two questions.
As seen in table 1, the range of use also varied considerably among
agencies, with Forest Service (FS) and Department of the Interior
(Interior) managers among the lowest users, and the Social Security
Administration (SSA) and National Aeronautics and Space Administration
(NASA) among the highest.
Table 1: Agencies with Lowest and Highest Percent of Federal Managers
Who Reported Using Performance Information for Various Management
Activities:
Percentages shown are of managers responding to a "great" or "very
great" extent.
Setting program priorities:
Lowest percent (agency): 43 (Interior);
Highest percent (agency): 78 (SSA);
Governmentwide percent: 58.
Allocating resources:
Lowest percent (agency): 39 (Interior);
Highest percent (agency): 70 (NASA);
Governmentwide percent: 59.
Adopting new program approaches or changing work processes:
Lowest percent (agency): 30 (FS);
Highest percent (agency): 71 (NSF);
Governmentwide percent: 53.
Coordinating program efforts with other internal or external
organizations:
Lowest percent (agency): 28 (FS);
Highest percent (agency): 62 (VA);
Governmentwide percent: 50.
Refining program performance measures:
Lowest percent (agency): 28 (FS);
Highest percent (agency): 66 (Education);
Governmentwide percent: 46.
Setting new or revising existing performance goals:
Lowest percent (agency): 33 (FS);
Highest percent (agency): 73 (Energy);
Governmentwide percent: 52.
Setting individual job expectations for the government employees I
manage or supervise:
Lowest percent (agency): 44 (FS);
Highest percent (agency): 79 (SSA);
Governmentwide percent: 62.
Rewarding government employees I manage or supervise:
Lowest percent (agency): 47 (FEMA);
Highest percent (agency): 78 (NASA);
Governmentwide percent: 61.
Developing and managing contracts:
Lowest percent (agency): 24 (FS);
Highest percent (agency): 70 (NASA);
Governmentwide percent: 41.
Source: GAO.
Notes: Percentages are based on those respondents answering on the
extent scale. Education = Department of Education. Energy = Department
of Energy. FEMA = Federal Emergency Management Agency. NSF = National
Science Foundation. VA = Department of Veterans Affairs.
[End of table]
The PART has been used by the current administration to increase the
government's focus on improving program performance results.
Specifically, OMB includes an assessment of whether programs use
performance information for program management as one element of its
overall program assessment. In judging agency progress on the
Performance Integration Initiative of the PMA, OMB also considers
whether PART findings and performance information are used consistently
to justify funding requests, management actions, and legislative
proposals. However, of the federal managers familiar with PART,
[Footnote 8] a minority--26 percent--indicated that PART results are
used in management decision making, and 14 percent viewed PART as
improving performance.
Key Practices for Improving Government through the Use of Performance
Information:
As our survey results show, despite legislative and administration
efforts to focus federal management decisions on the achievement of
results and maximize the use of federal funds, changing the way federal
managers make decisions is not simply a matter of making program
performance information available. Based on our work on management
reform efforts as well as analysis of federal managers' responses to
our surveys over the past 10 years, we have identified three key
practices that can contribute to greater attention to results when
making management decisions. Regardless of the form of future
initiatives, the next administration should take steps to ensure that
agencies emphasize these practices to make sure that performance
information is used in management decision making:
1. demonstrate leadership commitment to results-oriented management;
2. create a clear "line of sight" linking individual performance with
organizational results; and:
3. build agency capacity to collect and use performance information.
Demonstrate Leadership Commitment to Results-Oriented Management:
Perhaps the single most important element in successfully implementing
organizational change is the demonstrated, sustained commitment of top
leaders.[Footnote 9] Leaders can demonstrate their support for results-
oriented management and facilitate the use of performance information
by agency managers through frequent and effective communication of
performance information.[Footnote 10] On our survey, we found a
positive relationship between agency managers who reported that
performance information is effectively communicated on a routine basis
and managers' reported use of performance information in key management
activities--in other words, greater communication of performance
information is associated with greater use. Leaders can communicate
performance information in their organizations by promoting the use of
visual tools such as poster displays, performance scorecards, and
intranet sites. In prior reviews, officials have told us that
publicizing performance information can inspire a greater sense of
ownership on the part of employees in their unit's performance; it can
also spur competition between units. Additionally, we found that
frequently reporting performance information can help to identify
program problems before they escalate, identify the factors causing the
problems, and modify services or processes to try to address problems.
Leaders can play a key role in this process by following up on problems
identified during discussions of performance information and by holding
managers accountable for addressing the problems.
Figure 3: Percentage of Federal Managers Who Reported Top Leadership
Demonstrated Commitment to Results-Oriented Management to a "Great" or
"Very Great" Extent:
[See PDF for image]
This figure is a vertical bar graph depicting the following data:
Percentage of Federal Managers Who Reported Top Leadership Demonstrated
Commitment to Results-Oriented Management to a "Great" or "Very Great"
Extent:
Year: 1997;
Percent: 56.9%.
Year: 2007;
Percent: 67.1%.
Note: There is a statistically significant difference between 1997 and
2007 surveys.
[End of figure]
From 1997 to 2007, we saw a significant increase in the percentage of
managers--from 57 to 67 percent--who reported that top leadership
demonstrates a strong commitment to achieving results (see fig. 3). Our
survey results confirm the relationship between leadership commitment
to results-oriented management and managers' reported use of
performance information in key management activities, such as
developing program strategy and making decisions about funding or
allocating resources.[Footnote 11] Similarly, managers who believed
their immediate supervisor paid attention to the use of performance
information in decision making also perceived that managers at their
level made greater use of performance information. Regarding the
contribution of PART to improving this practice, 37 percent of federal
managers familiar with PART reported that upper management has paid
greater attention to performance and achieving results. Of all the
items we asked about concerning the effects of PART, this item received
the greatest degree of endorsement from federal managers.
Create a Clear "Line of Sight" Linking Individual Performance with
Organizational Results:
To be successful, governmentwide performance improvement initiatives
must ensure that all employees involved in the process understand the
rationale for making the changes and their role and responsibility in
the process. Performance management systems are a vital tool for
managing and directing such organizational transformations because they
create a "line of sight" showing how team, unit, and individual
performance can contribute to overall organizational results.
Additionally, performance management systems can be used to hold
employees accountable for achieving results and to incorporate results
into management and employee decision making.[Footnote 12]
Over the past 10 years, we found positive trends in federal managers'
responses to several questions relating to how agencies are managing
their employees, which agencies can build upon to further emphasize the
importance of managing by results (see fig. 4). Specifically, we saw a
statistically significant increase--from 53 percent in 1997 to 61
percent in 2007--in the percentage of federal managers that reported
using performance information when rewarding government employees they
manage. Additionally, from 1997 to 2007, a significantly higher
percentage of federal managers reported that employees in their agency
receive positive recognition for helping the agency accomplish its
strategic goals.
Figure 4: Percentage of Federal Managers Indicating Performance
Information Plays a Role in Recognizing or Rewarding Individuals to a
"Great" or "Very Great" Extent:
[See PDF for image]
This figure is a horizontal multiple bar graph depicting the following
data:
Percentage of Federal Managers Indicating Performance Information
Plays a Role in Recognizing or Rewarding Individuals to a "Great" or
"Very Great" Extent:
Employees in my agency receive positive recognition for helping the
agency accomplish its strategic goals[A]:
1997: 26.2%;
2007: 42%.
I use performance information when rewarding staff I manage or
supervise[A]:
1997: 52.6%;
2007: 60.9%.
Source: GAO.
[A] There is a statistically significant difference between 1997 and
2007 surveys.
[End of figure]
At the same time, an increasing proportion of senior executives report
that they are being held more accountable for results. In recent years,
Congress and the administration modernized the performance appraisal
and pay systems for senior executives by requiring a clearer link
between individual performance and pay.[Footnote 13] Specifically,
agencies are allowed to raise Senior Executive Service (SES) base pay
and total compensation caps if their performance appraisal systems are
certified by the Office of Personnel Management (OPM) with concurrence
by the Office of Management and Budget (OMB) as, among other things,
linking performance for senior executives to the organization's goals
and making meaningful distinctions based on relative performance.
In our past work on performance management and pay issues, we have
reported that performance-based pay cannot be simply overlaid on most
organizations' existing performance management systems.[Footnote 14]
Rather, as a precondition to effective pay reform, individual
expectations must be clearly aligned with organizational results,
communication on individual contributions to annual goals must be
ongoing and two-way, meaningful distinctions in employee performance
must be made, and cultural changes must be undertaken. Most important,
leading organizations have recognized that effective performance
management systems create a "line of sight" showing how unit and
individual performance can contribute to overall organizational goals
and can help them drive internal change and achieve external results.
[Footnote 15] Effective performance-management systems that hold
executives accountable for results can help provide continuity during
times of leadership transition, such as the upcoming change in the
administration, by maintaining a consistent focus on organizational
priorities.
Interestingly, since our 2003 survey, SES responses regarding
accountability have increased significantly. Between 2003 and 2007,
the percentage of SES who responded that managers/supervisors at their
level are held accountable for accomplishing agency strategic goals
rose by 14 percentage points. Over the same period, the percentage of
SES who reported that they are held accountable for the results of the
programs, operations, or projects for which they are responsible rose
by 12 percentage points (see fig. 5). There was no significant change
from 2003 to 2007 in non-SES responses to either of these questions.
Figure 5: Percentage of Federal Managers Who Reported That They Were
Held Accountable for the Results of the Program/Operations/Projects for
Which They Are Responsible to a "Great" or "Very Great" Extent:
[See PDF for image]
This figure is a multiple vertical bar graph depicting the following
data:
Percentage of Federal Managers Who Reported That They Were Held
Accountable for the Results of the Program/Operations/Projects for
Which They Are Responsible to a "Great" or "Very Great" Extent:
All[A]:
1997: 54.6%;
2000: 62.7%;
2003: 70.7%;
2007: 71.9%.
SES[A]:
1997: 61.6%;
2000: 66.2%;
2003: 69.6%;
2007: 81.4%.
Non-SES[A]:
1997: 54.1%;
2000: 62.4%;
2003: 70.8%;
2007: 71.2%.
Source: GAO.
[A] There is a statistically significant difference between 1997 and
2007 surveys.
[End of figure]
As we have previously reported, it is important to ensure that managers
have the authority to implement changes to the programs for which they
are held accountable.[Footnote 16] Our 2007 survey results, however,
indicate a growing gap between senior executives' perceptions of their
accountability for program performance as opposed to their decision-
making authority (see fig. 6). In 2007, 81 percent of senior executives
reported that they are held accountable for the results of the programs
for which they are responsible, while 62 percent reported that they
have the decision-making authority they need to help the agency achieve
its strategic goals, a 19 percentage point difference. Managers'
ability to effect change within their organization is limited if they
do not have the decision-making authority to help the agency accomplish
its strategic goals.
Figure 6: Comparison of SES Responses Regarding Accountability and
Decision-Making Authority:
[See PDF for image]
This figure is a combination vertical bar and line graph depicting the
following data:
Year: 1997;
Managers at my level have the decision-making authority they need to
help the agency accomplish its strategic goals[A]: 50.6%;
Managers at my level are held accountable for the results of the
programs they are responsible for[A,B]: 61.6%.
Year: 2000;
Managers at my level have the decision-making authority they need to
help the agency accomplish its strategic goals[A]: 56%;
Managers at my level are held accountable for the results of the
programs they are responsible for[A,B]: 66.2%.
Year: 2003;
Managers at my level have the decision-making authority they need to
help the agency accomplish its strategic goals[A]: 57.5%;
Managers at my level are held accountable for the results of the
programs they are responsible for[A,B]: 69.6%.
Year: 2007;
Managers at my level have the decision-making authority they need to
help the agency accomplish its strategic goals[A]: 61.5%;
Managers at my level are held accountable for the results of the
programs they are responsible for[A,B]: 81.4%.
Source: GAO.
[A] There is a statistically significant difference between 1997 and
2007.
[B] There is a statistically significant difference between 2003 and
2007.
[End of figure]
Build Agency Capacity to Collect and Use Performance Information:
While agencies can require managers to collect and report performance
information, this does not ensure that managers have the knowledge or
experience necessary to use the information or will trust the
information they are gathering. The practice of building analytical
capacity to use performance information and to ensure its quality--both
in terms of staff trained to do the analysis and availability of
research and evaluation resources--is critical to using performance
information in a meaningful fashion and plays a large role in the
success of government performance improvement initiatives.
Managers must understand how the performance information they gather
can be used to provide insight into the factors that impede or
contribute to program successes; assess the effect of the program; or
help explain the linkages between program inputs, activities, outputs,
and outcomes. In earlier work, we found a positive relationship between
agencies providing training and development on setting program
performance goals and the use of performance information when setting
or revising performance goals.[Footnote 17] While our survey found a
significant increase in training since 1997, only about half of our
survey respondents in 2007 reported receiving any training that would
assist in strategic planning and performance assessment. We previously
recommended that OMB ensure that agencies are making adequate
investments in training on performance planning and measurement, with a
particular emphasis on how to use performance information to improve
program performance.[Footnote 18] However, OMB has not yet implemented
our recommendation.
In addition to building agency capacity by educating staff on how to
use performance information, it is also important to ensure that the
information gathered meets users' needs for completeness, accuracy,
consistency, timeliness, validity, and ease of use. Our survey results
indicate that those federal managers who felt they had sufficient
information on the validity of the performance data they use to make
decisions were more likely to report using performance information in
key management activities. Interestingly, this question regarding
managers' perception of the validity of performance data was more
strongly associated with managers' reported use of performance
information than it was with any other question on the survey.
Additionally, we found a significant relationship between federal
managers reporting that managers at their level are taking steps to
ensure that performance information is useful and appropriate and their
reported use of performance information in key management activities.
Getting buy-in from managers by involving them in the selection and
development of measures for their programs can help increase their
confidence in the data collected and the likelihood that they will use
the information gathered in decision making.
Lessons Learned from Prior Performance Improvement Initiatives:
Regardless of the form, future governmentwide initiatives to improve
performance should take into consideration key lessons learned that we
have identified through our work. First, the next administration should
promote the three key practices we found that facilitate the use of
performance information by all levels of agency management. Beyond
this, the next administration can better focus its efforts to improve
performance by (1) adopting a more strategic and crosscutting approach
to overseeing performance; (2) improving the relevance of performance
information to Congress; and (3) building agency confidence in
assessments for use in decision making.
Adopt a More Strategic and Crosscutting Approach to Overseeing
Governmentwide Performance:
Given the time and effort required to assess agency and program
performance, taking a more crosscutting, strategic approach to such
assessments may better use limited resources. Additionally, focusing
decision makers' attention on the most pressing policy and program
issues and on how related programs and tools affect broader outcomes
and goals may better capture their interest throughout the process. The
current administration's PART initiative focuses on individual
programs, which aligns with OMB's agency-by-agency budget reviews, but
has been used infrequently to address crosscutting issues or to look at
broad program areas in which several programs or program types address
a common goal. Crosscutting analysis looking at broad program areas is
necessary to determine whether a program complements and supports other
related programs, whether it is duplicative and redundant, or whether
it actually works at cross-purposes to other initiatives. While OMB has
reported on a few crosscutting assessments in recent budget requests,
[Footnote 19] we have suggested that OMB adopt this approach more
widely and develop a common framework to evaluate all programs--
including tax expenditures and regulatory programs--intended to support
common goals.[Footnote 20]
We have previously reported GPRA could provide OMB, agencies, and
Congress with a structured framework for addressing crosscutting
program efforts.[Footnote 21] OMB, for example, could use the provision
of GPRA that calls for OMB to develop a governmentwide performance plan
to integrate expected agency-level performance. Unfortunately, this
provision has not been implemented fully. OMB issued the first and only
such plan in February 1998 for fiscal year 1999. Without such a
governmentwide focus, OMB is missing an opportunity to assess and
communicate the relationship between individual agency goals and
outcomes that cut across federal agencies and more clearly relate and
address the contributions of alternative federal strategies. The
governmentwide performance plan also could help Congress and the
executive branch address critical federal performance and management
issues, including redundancy and other inefficiencies in how the
government does business. It could also provide a framework for any
restructuring efforts.
In addition to the annual performance plan, a governmentwide strategic
plan could identify long-term goals and strategies to address issues
that cut across federal agencies.[Footnote 22] Such a plan for the
federal government could be supported by a set of key national outcome-
based indicators of where the nation stands on a range of economic,
environmental, safety/security, social, and cultural issues. A
governmentwide strategic plan combined with indicators could help in
assessing the government's performance, position, and progress, and
could be a valuable tool for governmentwide reexamination of existing
programs, as well as proposals for new programs. Further, it could
provide a cohesive perspective on the long-term goals of the federal
government and provide a much needed basis for fully integrating,
rather than merely coordinating, a wide array of federal activities.
Improve the Relevance of Performance Information to Congress:
In order for performance improvement initiatives to hold appeal beyond
the executive branch, and to be useful to the Congress for its decision
making, garnering congressional buy-in on what to measure and how to
present this information is critical.[Footnote 23] In a 2006 review,
congressional committee staff told us that although OMB uses a variety
of methods to communicate the PART assessment results, these methods
cannot replace the benefit of early consultation between Congress and
OMB about what they consider to be the most important performance
issues and program areas warranting review.[Footnote 24] However, a
mechanism to systematically incorporate a congressional perspective and
promote a dialogue between Congress and the President in the PART
review process is missing. This lack of consultation has led to
several areas of disagreement between OMB and Congress about this
executive branch tool, and as a result most congressional staff we
spoke with do not use the PART information. Most congressional staff
reported that they would more likely use the PART results to inform
their deliberations if OMB (1) consulted them early in the PART process
regarding the selection and timing of programs to assess, (2) explained
the methodology and evidence used or to be used to assess programs, and
(3) discussed how the PART information can best be communicated and
leveraged to meet their needs.
OMB has recently taken some steps to more succinctly report agency
performance information. In 2007, OMB initiated a pilot program that
explores alternative approaches to performance and accountability
reporting, including a "highlights report" summarizing key performance
and financial information. However, more work could be done to better
understand congressional information needs and communication
preferences. We have reported previously that congressional staff
appreciate having a variety of options for accessing the information
they need to address key policy questions about program performance or
to learn about "hot" issues.[Footnote 25] In a case study we conducted
on FAA's communication of performance, budgeting, and financial
information with Congress, congressional committee staff from the House
Transportation and Infrastructure Committee were interested in better
using technology to gain additional agency data in a timely manner. For
example, staff reported that agencies could create a "For Congress"
page on their Web site to serve as a single repository of data for
congressional requesters. In future initiatives, OMB could explore
alternative communication strategies and data sources to better meet
congressional needs and interest and ensure that the valuable data
collected for performance improvement initiatives is useful and used.
Additionally, Congress could consider whether a more structured
oversight mechanism is needed to permit a coordinated congressional
perspective on governmentwide performance issues. Just as the executive
branch needs a vehicle to coordinate and address programs and
challenges that span multiple departments and agencies, Congress might
need to develop structures and processes that better afford a
coordinated approach to overseeing agencies and tools where
jurisdiction crosses congressional committees. We have previously
suggested that one possible approach could involve developing a
congressional performance resolution identifying the key oversight and
performance goals that Congress wishes to set for its own committees
and for the government as a whole. Such a resolution could be developed
by modifying the annual congressional budget resolution, which is
already organized by budget function.[Footnote 26] This may involve
collecting the input of authorizing and appropriations committees on
priority performance issues for programs under their jurisdiction and
working with crosscutting committees such as the Senate Committee on
Homeland Security and Governmental Affairs, the House Committee on
Oversight and Government Reform, and the House Committee on Rules. This
year, Congress issued its budget resolution for fiscal year 2009
containing a section directing Committees of the House of
Representatives and the Senate to review programs' performance within
their jurisdiction for waste, fraud, and abuse and report
recommendations annually to the appropriate Committee on the Budget.
[Footnote 27]
Build Agency Confidence in Assessments for Use in Decision Making:
As the primary focal point for overall management in the federal
government, OMB plays a critical role in the planning and
implementation of the President's initiatives. During the current
administration, OMB has reported that it has reviewed over 1,000, or 98
percent, of all federal programs through its PART initiative. Moreover,
through its PMA and PART initiatives, OMB has set the tone of
leadership at the top by holding agencies accountable for their
implementation of recommendations intended to improve program
management. However, regardless of the mechanism that the next
administration employs to oversee agency and program performance, OMB's
efforts could be enhanced by building agency confidence in the
credibility and usefulness of its assessments for management decision
making. To build this confidence, OMB could further its efforts to
increase OMB examiners' knowledge of the programs they are assessing
and agency knowledge about how to develop and use the information
gathered for PART.
Our survey results indicate that concerns exist among federal managers
regarding the quality of OMB's assessments. Specifically, managers
responding to our survey expressed concerns that OMB examiners may be
spread too thinly and do not have sufficient knowledge of the programs
they are reviewing necessary for accurate assessments. On our survey,
the suggested improvement to PART with the highest level of endorsement
from federal managers familiar with PART was to ensure that OMB's
examiners have an in-depth knowledge of the programs they review.
Seventy percent of respondents indicated that this was a high to very
high priority for improving PART. For example, one respondent told us
that "the PART reviewer does not have time to try to understand [their]
program" and another stated that "some PART reviewers are not familiar
with their agency mission and scope." These responses echo previous
statements officials have given us regarding PART, in particular that
PART assessments can be thoughtful when OMB is knowledgeable about a
program and has enough time to complete the reviews, but that
assessments are less useful when OMB staff are unfamiliar with programs
or have too many PART assessments to complete. By taking a more
targeted, strategic approach as we previously recommended, OMB could
allow examiners time to conduct more in-depth assessments of selected
programs and build their knowledge base about the programs.
OMB can also help to facilitate implementation of future initiatives by
offering training to agency officials on the reporting requirements of
the initiatives and how the information gathered for these efforts
might be incorporated into management decision making. As we previously
mentioned, it is important to build agency capacity in terms of the
capability of staff to analyze and use performance information in their
decision making. Nearly half of managers familiar with PART indicated
that agency-level training on developing acceptable performance
measures for PART as well as training on how to use performance
measures identified as a result of the PART process should be high to
very high priorities for improving PART. One survey respondent
commented that "PART is a great concept but poorly understood by many
in federal service; more training and interaction among managers
[working on PART] could lead to substantial improvements in performance
and overall efficiencies." Another survey respondent emphasized that
training needed to be provided to field offices "so field supervisors
and front-line employees understand how their work outcomes/outputs
roll up to highest levels in government goals and initiatives."
Building agency officials' familiarity with and confidence in the
performance assessments being conducted will be critical to improving
the integration and use of the information gathered in management
decision making.
Conclusions:
Each new administration has the opportunity to learn from and build
upon the experiences of its predecessors. While the last decade has
seen the creation of an infrastructure for government performance
improvement efforts, and a more results-oriented culture in the federal
government, we still see more that can be done to make this
transformation more widespread among federal agencies. Adopting the key
practices we have highlighted--demonstrating leadership commitment to
performance, aligning individual performance with the goals of the
organization, and building the capacity to use information--would be an
important first step, and OMB can play an important role in fostering
these practices across government. OMB could also adopt some of these
practices in its own engagement with agencies--particularly, by helping
to provide the training and development that both OMB analysts and
agency program managers will need to make sure that any OMB-led
performance review is useful and used.
Beyond this, Congress and the administration can help bring a more
strategic approach to how government performance is monitored and
measured. As we have noted repeatedly in our work, a governmentwide
strategic plan, underpinned by a set of key national indicators (KNI),
would, in defining outcomes shared by multiple agencies and programs,
help keep sight of how well agency programs are working collectively to
produce intended results. Whatever performance improvement initiatives
the next administration adopts, it will be vital to engage the Congress
in helping to identify the meaningful measures of success, as well as
the form in which performance information will be useful to Congress
itself in carrying out its oversight, legislative, and appropriations
roles.
Mr. Chairman, this concludes my statement. I would be pleased to
respond to any questions you or other members of the committee may have
at this time.
GAO Contacts and Acknowledgments:
For further information on this testimony, please contact Bernice
Steinhardt at (202) 512-6806 or [email protected], or Elizabeth Curda
at (202) 512-4040 or [email protected]. Contact points for our Offices of
Congressional Relations and Public Affairs may be found on the last
page of this testimony. Individuals making key contributions to this
testimony were Matt Barranca, Thomas Beall, Laura Craig, Scott
Doubleday, Daniel Dunn, Catherine Hurley, Stuart Kauffman, Alison
Keller, Anna Maria Ortiz, Mark Ramage, Kaitlin Riley, Jerry Sandau, and
Katherine Hudson Walker.
[End of section]
Appendix I: Objectives, Scope and Methodology:
A Web-based questionnaire on performance and management issues was
administered to a stratified random probability sample of 4,412 persons
from a population of approximately 107,326 mid-level and upper-level
civilian managers and supervisors working in the 24 executive branch
agencies covered by the Chief Financial Officers (CFO) Act of 1990. The
sample was drawn from the Office of Personnel Management's (OPM)
Central Personnel Data File (CPDF) as of March 2007, using file
designators indicating performance of managerial and supervisory
functions. In reporting the questionnaire data, when we use the term
"governmentwide" and the phrase "across the federal government," we are
referring to these 24 CFO Act executive branch agencies, and when we
use the terms "federal managers" and "managers," we are referring to
both managers and supervisors. The questionnaire was designed to obtain
the observations and perceptions of respondents on various aspects of
such results-oriented management topics as the presence and use of
performance measures, hindrances to measuring performance and using
performance information, and agency climate. In addition, the
questionnaire included a section requesting respondents' views on the
Office of Management and Budget's (OMB) Program Assessment Rating Tool
(PART) and the priority that should be placed on various potential
improvements to it.
With the exception of the section of the questionnaire asking about
OMB's PART, most of the items on the questionnaire were asked in three
earlier surveys. The earliest survey was conducted between November
1996 and January 1997 as part of the work we did in response to a
Government Performance and Results Act (GPRA) requirement that we
report on implementation of the act. The second survey, conducted
between January and August 2000, and the third survey, conducted
between June and August 2003, were designed to update the results from
each of the previous surveys.[Footnote 28] The 2000 survey, unlike the
other two surveys, was designed to support analysis of the data at the
department and agency level as well as governmentwide.
Similar to the three previous surveys, this survey covered the CFO Act
agencies and the sample was stratified by whether the manager or
supervisor was Senior Executive Service (SES) or non-SES. The
management levels covered general schedule (GS), general management
(GM), or equivalent schedules at levels comparable to GS/GM-13 through
career SES or equivalent levels of executive service. Similar to our
2000 and 2003 surveys, we incorporated special pay plans, for example,
Senior Foreign Service executives, into the population and the sample
to ensure at least a 90 percent coverage of all managers and
supervisors at or comparable to the GS/GM-13 through career SES level
at the departments and agencies we surveyed.
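The stratified design described above can be illustrated with a brief
sketch. The code below is not GAO's sampling procedure; the frame
composition, stratum sizes, and allocation (900 SES, 3,512 non-SES)
are hypothetical, chosen only so the draw totals the reported 4,412
sampled managers.

```python
import random

# Illustrative stratified random draw (not GAO's actual code).
random.seed(42)  # reproducibility for this sketch only

# Hypothetical sampling frame: each manager tagged with a stratum.
# The SES stratum size (7,000) is an assumption for illustration.
frame = (
    [{"id": i, "stratum": "SES"} for i in range(7_000)]
    + [{"id": i, "stratum": "non-SES"} for i in range(7_000, 107_326)]
)

# Hypothetical allocation: oversample the small SES stratum so its
# estimates are precise enough to report on their own.
allocation = {"SES": 900, "non-SES": 3_512}  # totals 4,412

sample = []
for stratum, n in allocation.items():
    members = [p for p in frame if p["stratum"] == stratum]
    sample.extend(random.sample(members, n))

print(len(sample))  # 4412
```

Oversampling a small stratum in this way is a standard design choice
when separate estimates are wanted for that group.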
One purpose of this survey was to update the information gathered at
the departmental and agency level for the survey done in 2000. Similar
to the design of the 2000 survey, stratification was also done by the
24 CFO Act agencies with an additional breakout of five selected
agencies from their departments--Forest Service, Centers for Medicare
and Medicaid Services (CMS), Federal Aviation Administration (FAA),
Internal Revenue Service (IRS), and Federal Emergency Management Agency
(FEMA). The first four agencies were selected for breakout in our 2000
survey on the basis of our previous work, at that time, identifying
them as facing significant managerial challenges. FEMA, which was an
independent agency at the time of our 2000 survey, became part of the
Department of Homeland Security (DHS) when the department was created.
The intent of this survey was to cover the same set of entities
examined in the 2000 survey with the addition of DHS, which was created
in 2003, in order to examine possible change in managerial perceptions
of performance measurement and use over time at the department and
agency level between 2000 and 2007. The PART section was included to
obtain feedback from managers that would help inform the transition and
management agenda of the next administration.
Most of the items on the questionnaire were closed-ended, meaning that,
depending on the particular item, respondents could choose one or more
response categories or rate the strength of their perception on a 5-
point extent scale ranging from "to no extent" at the low end of the
scale to "to a very great extent" at the high end. For the PART
questions about improvement priorities, the 5-point scale went from "no
priority" to "very great priority." On most items, respondents also had
an option of choosing the response category "no basis to judge/not
applicable."
We sent an e-mail to members of the sample that notified them of the
survey's availability on the GAO Web site and included instructions on
how to access and complete the survey. Members of the sample who did
not respond to the initial notice were sent up to four subsequent
reminders asking them to participate in the survey. The survey was
administered from October 2007 through January 2008.
During the course of the survey, we deleted 199 persons from our sample
who had either retired, separated, died, or otherwise left the agency
or had some other reason that excluded them from the population of
interest. We received useable questionnaires from 2,943 sample
respondents, or about 70 percent of the remaining eligible sample. The
eligible sample includes 42 persons whom we were unable to locate and
therefore could not ask to participate in the survey. The
response rate across the 29 agencies ranged from about 55 percent to 84
percent.
The overall survey results are generalizable to the population of
managers as described above at the CFO Act agencies. The responses of
each eligible sample member who provided a useable questionnaire were
weighted in the analyses to account statistically for all members of
the population. All results are subject to some uncertainty or sampling
error as well as nonsampling error. As part of our effort to reduce
nonsampling sources of error in survey results, we (1) checked and
edited the survey data for responses that failed to follow
instructions and (2) verified the programs used in our analyses. In
general,
percentage estimates in this report for the entire 2007 sample have
confidence intervals ranging from about ±1 to ±6 percentage points at
the 95 percent confidence level. Percentage estimates in this report
for individual agencies have confidence intervals that range from ±3
to ±18 percentage points. An online e-supplement, GAO-08-1036SP, shows the
questions asked on the survey with the weighted percentage of managers
responding to each item.
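As a simplified illustration of how weighted percentage estimates and
their confidence intervals are produced, the sketch below computes a
design-weighted percentage with a normal-approximation 95 percent
interval. The respondent records and weights are invented, and real
survey variance estimation would account for stratification and
unequal weighting rather than using this crude simple-random-sample
formula.

```python
import math

# Hypothetical respondent data: (answered "great/very great extent"?,
# sampling weight). Weights stand in for the number of population
# members each respondent represents.
respondents = [
    (True, 30.0), (False, 30.0), (True, 12.5), (True, 12.5),
    (False, 30.0), (True, 30.0), (False, 12.5), (True, 30.0),
]

# Design-weighted estimate of the percentage answering "yes."
total_weight = sum(w for _, w in respondents)
p_hat = sum(w for yes, w in respondents if yes) / total_weight

# Crude standard error treating the sample as simple random;
# a real design-based analysis would adjust for the weights.
n = len(respondents)
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

print(f"estimate {p_hat:.1%}, 95% CI [{ci[0]:.1%}, {ci[1]:.1%}]")
```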
As part of our analyses of the 2007 survey data, we identified a set of
nine items from the questionnaire that inquired about uses of
performance information that we identified in a previous GAO report.
[Footnote 29] Using those items we developed an index that reflected
the extent to which managers' perceived their own use of performance
information for various managerial functions and decisions as well as
that of other managers in the agency. To obtain this overall index
score of reported use of performance information, we computed an
average score for each respondent across the nine items we identified.
By using this average index score, which yields values in the same
range as the 5-point extent scale used on each item, we were able to
qualitatively characterize index score values using the same response
categories used for the items that make up the index.[Footnote 30] We
refer to this index as the "core uses index" in that it indicates
managers' perceptions about the extent to which performance information
is used across a core set of management decision-making areas.
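The core uses index calculation can be sketched as follows. The nine
item scores are hypothetical, and the labels for the intermediate
scale points (beyond the endpoints named in the text) are assumptions.

```python
# Labels for the 5-point extent scale. Only the endpoints are named in
# the text; the middle labels are assumed for illustration.
EXTENT_LABELS = {
    1: "to no extent",
    2: "to a small extent",
    3: "to a moderate extent",
    4: "to a great extent",
    5: "to a very great extent",
}

def core_uses_index(item_scores):
    """Average of one respondent's nine 1-5 extent ratings."""
    if len(item_scores) != 9:
        raise ValueError("expected nine item scores")
    return sum(item_scores) / len(item_scores)

# Hypothetical respondent: nine item ratings averaging to ~3.56,
# which rounds to the "great extent" category.
score = core_uses_index([4, 3, 5, 4, 2, 3, 4, 4, 3])
label = EXTENT_LABELS[round(score)]
print(score, label)
```

Because the average stays on the same 1-to-5 scale as the items, the
index value can be described with the items' own response categories,
as the appendix notes.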
Because a complex sample design was used in the current survey as well
as the three previous surveys, and different types of statistical
analyses are being done, the magnitude of sampling error will vary
across the particular surveys, groups, or items being compared due to
differences in the underlying sample sizes and associated variances.
The number of participants in the current survey is slightly larger
than in the 2000 survey (2,510) and much larger than in the 1996-1997
survey (905) and the 2003 survey (503), both of which were designed to
obtain
governmentwide estimates only. Consequently, in some instances, a
difference of a certain magnitude may be statistically significant. In
other instances, depending on the nature of the comparison being made,
a difference of equal or even greater magnitude may not achieve
statistical significance. We note throughout the report when
differences are significant at the .05 probability level. Also, when
interpreting observed shifts in individual agency responses between
the 2007 survey and the earlier 2000 survey, it should be kept in mind
that components of some agencies, and all of the Federal Emergency
Management Agency (FEMA), became part of the Department of Homeland
Security (DHS).
We conducted our work from March 2007 to July 2008 in accordance with
generally accepted government auditing standards. Those standards
require that we plan and perform the audit to obtain sufficient,
appropriate evidence to provide a reasonable basis for our findings and
conclusions based on our audit objectives. We believe that the evidence
obtained provides a reasonable basis for our findings and conclusions
based on our audit objectives.
[End of section]
Footnotes:
[1] Pub. L. No. 103-62 (Aug. 3, 1993).
[2] In addition to budget and performance integration, the other four
priorities under the PMA are strategic management of human capital,
expanded electronic government, improved financial performance, and
competitive sourcing.
[3] GAO, Results-Oriented Government: GPRA Has Established a Solid
Foundation for Achieving Greater Results, [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO-04-38] (Washington, D.C.: Mar.
10, 2004).
[4] GAO, Government Performance: 2007 Federal Managers Survey on
Performance and Management Issues, an E-supplement to [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO-08-1026T], [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO-08-1036SP] (Washington, D.C.:
July 24, 2008).
[5] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-04-38]. When
discussing federal managers' responses to survey questions, we report
the percentage of federal managers who responded from a great to very
great extent.
[6] We asked managers about their use of performance information when
developing and managing contracts for the first time in 2000.
[7] GAO, Federal Acquisitions and Contracting: Systemic Challenges Need
Attention, [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-07-1098T]
(Washington, D.C.: July 17, 2007).
[8] In our discussion of questions relating to PART, the data include
the responses of federal managers who indicated they had a low,
moderate, or extensive level of knowledge of the details of OMB's PART
initiative and exclude those with no knowledge. Twenty-three percent
of respondents indicated having a low to extensive level of knowledge.
[9] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-04-38].
[10] GAO, Managing for Results: Enhancing Agency Use of Performance
Information for Management Decision Making, [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO-05-927] (Washington, D.C.: Sept.
9, 2005).
[11] We measured managers' use of performance information in key
management activities by developing a core uses index derived from nine
questions on the 2007 federal managers' survey. These questions
inquired about uses of performance information in management activities
and decision making that can lead to improved results as identified in
our 2005 report Enhancing Agency Use of Performance Information for
Management Decision Making [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO-05-927]. For a complete list of
the practices used in
this index see app. I. This index was then used in various analyses,
including a ranking of the 24 Chief Financial Officers (CFO) Act
agencies and five components that participated in our survey on their
use of performance information. Throughout this testimony, when we
refer to "managers' use of performance information in key management
activities" we are referring to their reported use of performance
information according to this index.
[12] GAO, Results-Oriented Cultures: Creating a Clear Linkage between
Individual Performance and Organizational Success, [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO-03-488] (Washington, D.C.: Mar.
14, 2003).
[13] See Section 1322 of the Chief Human Capital Officers Act of 2002,
Title XIII of the Homeland Security Act of 2002, Pub. L. No. 107-296
(Nov. 25, 2002), and section 1125(a) (2) of the National Defense
Authorization Act for Fiscal Year 2004, Pub. L. No. 108-136 (Nov. 24,
2003).
[14] GAO, Human Capital: Symposium on Designing and Managing Market-
Based and Performance-Oriented Pay Systems, [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO-05-832SP] (Washington, D.C.: July
27, 2005).
[15] GAO, Human Capital: Senior Executive Performance Management Can Be
Strengthened to Achieve Results, [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO-04-614] (Washington, D.C.: May
26, 2004).
[16] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-04-38].
[17] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-04-38].
[18] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-04-38].
[19] For the fiscal year 2006 President's budget request, OMB conducted
two crosscutting assessments on Community and Economic Development and
Rural Water. In addition, OMB recently announced two new PMA
initiatives aimed at improving the performance of federal credit
programs and health information quality and transparency across the
major relevant federal agencies.
[20] GAO, 21st Century Challenges: How Performance Budgeting Can Help,
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-07-1194T]
(Washington, D.C.: Sept. 20, 2007).
[21] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-04-38].
[22] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-04-38].
[23] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-07-1194T].
[24] GAO, Performance Budgeting: OMB's Performance Rating Tool Presents
Opportunities and Challenges for Evaluating Program Performance,
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-04-550T] (Washington,
D.C.: Mar. 11, 2004).
[25] GAO, Managing for Results: Views on Ensuring the Usefulness of
Agency Performance Information to Congress, [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO/GGD-00-35] (Washington, D.C.:
Jan. 26, 2000).
[26] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-07-1194T].
[27] Concurrent Resolution on the Budget for Fiscal Year 2009, H.R.
Rep. 110-659, at 45-46 (2008).
[28] For information on the design and administration of the three
earlier surveys, see GAO, The Government Performance and Results Act:
1997 Governmentwide Implementation Will Be Uneven, [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO/GGD-97-109] (June 2, 1997);
Managing for Results: Federal Managers' Views on Key Management Issues
Vary Widely Across Agencies, [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO-01-592] (May 25, 2001); and
Results-Oriented Government:
GPRA Has Established a Solid Foundation for Achieving Greater Results,
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-04-38] (Mar. 10,
2004).
[29] See GAO, Managing for Results: Enhancing Agency Use of Performance
Information for Management Decision Making, [hyperlink,
http://www.gao.gov/cgi-bin/getrpt?GAO-05-927] (Sept. 9, 2005). See the
online e-supplement GAO, Government Performance: 2007 Federal Managers
Survey on Performance and Management Issues, an E-supplement to
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-08-1026T],
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-08-1036SP]
(Washington, D.C.: July 24, 2008) for the wording of the items. The
nine items constituting the index are questions 8a, 8c, 8d, 8e, 8k, 8m,
10d, 10m, and 11b.
[30] For example, index score values between 1 and 2.99 were viewed as
covering the two categories of "small" or "to no extent," while values
of 3 to 3.99 fit the category "moderate extent" and values between 4
and 5 encompassed the categories of "great" or "very great" extent.
[End of section]
GAO's Mission:
The Government Accountability Office, the audit, evaluation and
investigative arm of Congress, exists to support Congress in meeting
its constitutional responsibilities and to help improve the performance
and accountability of the federal government for the American people.
GAO examines the use of public funds; evaluates federal programs and
policies; and provides analyses, recommendations, and other assistance
to help Congress make informed oversight, policy, and funding
decisions. GAO's commitment to good government is reflected in its core
values of accountability, integrity, and reliability.
Obtaining Copies of GAO Reports and Testimony:
The fastest and easiest way to obtain copies of GAO documents at no
cost is through GAO's Web site [hyperlink, http://www.gao.gov]. Each
weekday, GAO posts newly released reports, testimony, and
correspondence on its Web site. To have GAO e-mail you a list of newly
posted products every afternoon, go to [hyperlink, http://www.gao.gov]
and select "E-mail Updates."
Order by Mail or Phone:
The first copy of each printed report is free. Additional copies are $2
each. A check or money order should be made out to the Superintendent
of Documents. GAO also accepts VISA and Mastercard. Orders for 100 or
more copies mailed to a single address are discounted 25 percent.
Orders should be sent to:
U.S. Government Accountability Office:
441 G Street NW, Room LM:
Washington, D.C. 20548:
To order by Phone:
Voice: (202) 512-6000:
TDD: (202) 512-2537:
Fax: (202) 512-6061:
To Report Fraud, Waste, and Abuse in Federal Programs:
Contact:
Web site: [hyperlink, http://www.gao.gov/fraudnet/fraudnet.htm]:
E-mail: [email protected]:
Automated answering system: (800) 424-5454 or (202) 512-7470:
Congressional Relations:
Ralph Dawn, Managing Director, [email protected]:
(202) 512-4400:
U.S. Government Accountability Office:
441 G Street NW, Room 7125:
Washington, D.C. 20548:
Public Affairs:
Chuck Young, Managing Director, [email protected]:
(202) 512-4800:
U.S. Government Accountability Office:
441 G Street NW, Room 7149:
Washington, D.C. 20548:
*** End of document. ***