Performance Budgeting: Past Initiatives Offer Insights for GPRA
Implementation (Letter Report, 03/27/97, GAO/AIMD-97-46).

Pursuant to a legislative requirement, GAO reviewed the implementation
of the Government Performance and Results Act of 1993 (GPRA), focusing
on key design elements and approaches of GPRA as compared with those of
past initiatives which also sought to link resources with results, a
concept generally termed performance budgeting.

GAO noted that: (1) in its overall structure, focus, and approach, GPRA
incorporates critical lessons learned from previous efforts, but many of
the same issues encountered in previous initiatives remain and will
likely pose significant challenges if GPRA is to achieve its aim of
better linking resource decisions to performance levels; (2) where past
efforts failed to link executive branch performance planning and
measurement with congressional resource allocation processes, GPRA
requires explicit consultation between the executive and legislative
branches on agency strategic plans; (3) past initiatives' experiences
suggest that efforts to link resources with results must begin in the
planning phase with some fundamental understanding about program goals;
(4) where past initiatives devised unique performance information
formats often unconnected to the structures used in congressional budget
presentations, GPRA requires agencies to plan and measure performance
using the "program activities" listed in their budget submissions; (5)
where past initiatives were generally unprepared for the difficulties
associated with measuring the outcomes of federal programs and often
retreated to simple output or workload measures, GPRA states a
preference for outcome measurement while recognizing the need to develop
a range of measures; (6) GAO's discussions with selected legislative
staff and agency officials revealed fundamental differences in
perspectives and expectations that are often a necessary consequence of
the system of separated powers; (7) past initiatives often foundered
because no mechanism existed to reconcile or even to address these
legitimate, but at times competing, views; (8) GPRA, through required
consultations and formal, public documents, is intended to encourage an
explicit and periodic exchange of views between the branches; (9) GPRA
differs from prior initiatives in two important respects; (10) past
performance budgeting initiatives were typically implemented
governmentwide within a single annual budget cycle; (11) GPRA, in
contrast, defines a multiyear and iterative governmentwide
implementation process that incorporates pilot tests and formal
evaluations of key concepts; (12) GPRA will face an operating
environment unknown to its predecessors: persistent efforts to constrain
spending; and (13) past initiatives demonstrate that performance
budgeting is an evolving concept that cannot be viewed in simple
mechanistic terms.

--------------------------- Indexing Terms -----------------------------

 REPORTNUM:  AIMD-97-46
     TITLE:  Performance Budgeting: Past Initiatives Offer Insights for 
             GPRA Implementation
      DATE:  03/27/97
   SUBJECT:  Planning programming budgeting
             Congressional/executive relations
             Budget decision units
             Zero-base budgeting
             Budget cuts
             Reporting requirements
             Strategic planning
IDENTIFIER:  Planning, Programming, and Budgeting System
             


Cover
================================================================ COVER


Report to Congressional Committees

March 1997

PERFORMANCE BUDGETING - PAST
INITIATIVES OFFER INSIGHTS FOR
GPRA IMPLEMENTATION

GAO/AIMD-97-46

Performance Budgeting

(935220)


Abbreviations
=============================================================== ABBREV

  BAPA - Budget and Accounting Procedures Act
  BOB - Bureau of the Budget
  DOD - Department of Defense
  GDP - gross domestic product
  GPRA - Government Performance and Results Act
  MBO - Management by Objectives
  NAPA - National Academy of Public Administration
  OMB - Office of Management and Budget
  PFP - Program and Financial Plan
  PM - Program Memoranda
  PPBS - Planning-Programming-Budgeting-System
  ZBB - Zero-Base Budgeting

Letter
=============================================================== LETTER


B-275095

March 27, 1997

The Honorable Fred Thompson
Chairman
The Honorable John Glenn
Ranking Minority Member
Committee on Governmental Affairs
United States Senate

The Honorable Dan Burton
Chairman
The Honorable Henry A.  Waxman
Ranking Minority Member
Committee on Government Reform and Oversight
House of Representatives

Since 1950, the federal government has attempted several
governmentwide initiatives designed to better align spending
decisions with expected performance--what is commonly referred to as
"performance budgeting."\1

Consensus exists that all of these efforts, whether launched by the
legislative or executive branch, failed to shift the focus of the
federal budget process from its longstanding concentration on the
items of government spending to the results of its programs. 

In 1993, the Congress enacted the Government Performance and Results
Act (GPRA) to improve the effectiveness, efficiency, and
accountability of federal programs by having agencies focus their
management practices on program results.  Through better information
on the effectiveness of federal programs and spending, GPRA seeks to
help federal managers improve program performance; it also seeks to
make performance information available for congressional
policy-making, spending decisions, and program oversight.  With
regard to spending decisions, GPRA aims for a closer and clearer
linkage between resources and results.  In this sense GPRA can be
seen as the most recent event in a now almost 50-year cycle of
federal government efforts to improve public sector performance and
to link resource allocations to performance expectations. 

GPRA mandates that GAO review the implementation of the Act's many
requirements and comment on the prospects for compliance by federal
agencies as governmentwide implementation begins in 1997.  This
report is one component of GAO's response to that statutory
mandate.\2 Specifically, this report compares and contrasts the key
design elements and approaches of GPRA with those of past initiatives
which also sought to link resources with results, a concept generally
termed performance budgeting.\3 A principal hypothesis of our work
was that understanding past initiatives can aid the Congress in
anticipating future implementation challenges for GPRA. 

In addition to an extensive literature review of past initiatives and
GPRA, we convened panels of agency officials and legislative staff
involved in GPRA implementation, as well as academic and other
experts familiar with GPRA and some of the prior initiatives. 
Panelists were asked to comment on a set of challenges we identified
for GPRA implementation from our review.  Throughout this report, we
refer to these panelists by their affiliation with a particular
branch of government or as "experts" due to their background in
budgeting and public administration.  Although not necessarily
complete or generalizable, the views expressed by the panelists cover
a broad range of perspectives reflecting multiple congressional
committee jurisdictions and a wide range of executive departments and
agencies.  Lastly, we discussed this report with senior officials
from the Office of Management and Budget (OMB).  They proposed
several technical adjustments, which we have incorporated as
appropriate. 


--------------------
\1 In this report, we use the term "performance budgeting" to refer
generally to the process of linking expected results to budget
levels, but not to any particular approach.  As discussed in the body
of this report, both the concept and techniques of performance
budgeting have evolved considerably since 1950. 

\2 For additional discussion of GPRA implementation issues, see
Executive Guide:  Effectively Implementing the Government Performance
and Results Act (GAO/GGD-96-118, June 1996); Managing for Results: 
Achieving GPRA's Objectives Requires Strong Congressional Role
(GAO/T-GGD-96-79, Mar.  6, 1996); GPRA Performance Reports
(GAO/GGD-96-66R, Feb.  14, 1996); and Managing for Results:  Status
of the Government Performance and Results Act (GAO/T-GGD/AIMD-95-193,
June 27, 1995). 

\3 As discussed in the body of this report, GPRA requires performance
budgeting pilots that present the anticipated levels of outputs and
outcomes that would result from varying spending levels. 


   RESULTS IN BRIEF
------------------------------------------------------------ Letter :1

In its overall structure, focus, and approach, GPRA incorporates
critical lessons learned from previous efforts.  Nevertheless, many
of the same issues encountered in previous initiatives remain and
will likely pose significant challenges if GPRA is to achieve its aim
of better linking resource decisions to performance levels. 

  -- Where past efforts failed to link executive branch performance
     planning and measurement with congressional resource allocation
     processes, GPRA requires explicit consultation between the
     executive and legislative branches on agency strategic plans. 
     Past initiatives' experiences suggest that efforts to link
     resources with results must begin in the planning phase with
     some fundamental understanding about program goals.  The
     challenge for those implementing GPRA will be to ensure that
     consultations are substantive and address the sometimes
     conflicting and competing goals of federal programs and the
     differing expectations of participants. 

  -- Where past initiatives devised unique performance information
     formats often unconnected to the structures used in
     congressional budget presentations, GPRA requires agencies to
     plan and measure performance using the "program activities"
     listed in their budget submissions.\4 However, program activity
     structures vary throughout the federal government, and the
     extent to which current structures can support both GPRA
     performance planning needs and congressional budget
     decision-making is also likely to vary. 

  -- Where past initiatives were generally unprepared for the
     difficulties associated with measuring the outcomes of federal
     programs and often retreated to simple output or workload
     measures, GPRA states a preference for outcome measurement while
     recognizing the need to develop a range of measures, including
     output and nonquantitative measures.\5 Focusing on outcomes
     shifts the definition of accountability from the traditional
     focus on inputs, processes, and projects to a perspective
     centered on the results of federal programs.  However, the
     difficulties associated with selecting appropriate measures and
     establishing relationships between activities and results will
     continue to make it difficult in many cases to judge whether
     changes in funding levels will affect the outcomes of federal
     programs. 

Our discussions with selected legislative staff and agency officials
revealed fundamental differences in perspective and expectations that
are often a necessary consequence of our system of separated powers. 
For example, legislative staff concentrated on their oversight role
and stressed near-term program performance, consistency over time in
information presentations, and accountability.  Conversely, executive
agency officials stressed long-term goals, adaptability to changing
needs, and flexibility in execution.  Past initiatives often
foundered because no mechanism existed to reconcile or even to
address these legitimate but at times competing views.  GPRA, through
required consultations and formal, public documents, is intended to
encourage an explicit and periodic exchange of views between the
branches; nevertheless the inherent challenges posed by our system of
checks and balances will inevitably and appropriately remain. 

GPRA differs from prior initiatives in two important respects. 
First, past performance budgeting initiatives were typically
implemented governmentwide within a single annual budget cycle. 
GPRA, in contrast, defines a multiyear and iterative governmentwide
implementation process that incorporates pilot tests and formal
evaluations of key concepts.  In this manner, GPRA increases the
potential for integration of planning, budgeting, and performance
measurement while guarding against the unreasonably high expectations
that plagued earlier initiatives.  Second, GPRA will face an
operating environment unknown to its predecessors:  persistent
efforts to constrain spending.  This restrictive budgetary climate
can create an imperative for linking performance information to
resource decisions but will likely intensify existing differences in
expectations between executive and legislative branches. 

Past initiatives demonstrate that performance budgeting is an
evolving concept that cannot be viewed in simple mechanistic terms. 
The process of budgeting is inherently an exercise in political
choice--allocating scarce resources among competing needs and
priorities--in which performance information can be one, but not the
only, factor underlying decisions.  GPRA is based on the premise that
budget decisions should be more clearly informed by expectations
about program performance.  Ultimately this goal of linking resources
with results implies both risks and rewards.  The risk lies in
expecting too much too soon--for example, that discrete outcomes can
be associated with specific resource commitments, or that performance
information can quickly provide solutions to today's budgetary
pressures.  But rewards exist as well.  GPRA holds the potential to
more explicitly infuse performance information into budgetary
deliberations, thereby changing the terms of the debate from simple
inputs to expected and actual results. 


--------------------
\4 The term "program activity" refers to the listings of projects and
activities in the Appendix portion of the Budget of the United States
Government.  Program activity structures are intended to provide a
meaningful representation of the operations financed by a specific
budget account. 

\5 GPRA defines outcome measures as an assessment of the results of a
program activity compared to its intended purpose.  Output measures,
conversely, refer to the tabulation, calculation, or recording of
activity or effort, such as checks processed or students enrolled. 


   FEDERAL INITIATIVES HAVE TAKEN
   VARYING APPROACHES TO
   PERFORMANCE BUDGETING
------------------------------------------------------------ Letter :2

At the federal level, interest in performance budgeting has led to
numerous initiatives since World War II, including four that were
governmentwide in scope:  (1) reforms flowing from the first Hoover
Commission in its efforts to downsize the post-World War II
government, (2) Planning-Programming-Budgeting-System (PPBS) begun in
1965 by President Johnson, (3) Management by Objectives (MBO)
initiated in 1973 by President Nixon, and (4) Zero-Base Budgeting
(ZBB) initiated in 1977 by President Carter.  Each of these efforts
established unique procedures for linking resources with results. 
The following discussion briefly summarizes and relates each of these
initiatives; appendixes II through V provide additional background
information. 

First championed in 1949 by the Hoover Commission, a federal
"performance budget" was intended to shift the focus away from the
inputs of government to its functions, activities, costs, and
accomplishments.  Rather than emphasizing items of expenditure--for
example, salaries, rent, and supplies--a performance budget was to
describe the expected outputs resulting from a specific function or
activity--for example, weapons, training, insurance claims,
construction projects, or research activities.  Consistent with the
Commission's recommendations, the Congress enacted the Budget and
Accounting Procedures Act of 1950 (BAPA), which, among other things,
required the President to present in his budget submission to the
Congress the "functions and activities" of the government, ultimately
institutionalized as a new budget presentation:  "obligations by
activities." These presentations were intended to describe the major
programs, projects, or activities associated with each federal budget
request--in a sense, the "performance budget" of a government which
at that time was primarily involved in directly providing specific
goods and services.\6 Workload and unit cost information began to
appear in the President's Budget, associated with the "obligations by
activities" presentations, providing a means of publicly reporting
the outputs of federal spending. 

The Planning-Programming-Budgeting-System (PPBS), mandated
governmentwide by President Johnson in 1965, assumed that different
levels and types of performance could be arrayed, quantified, and
analyzed to make the best budgetary decisions.  In essence, PPBS
introduced a decision-making framework to the executive branch budget
formulation process by presenting and analyzing choices among
long-term policy objectives and alternative ways of achieving them. 
Multiyear planning was to be based on an agency's "program
structure," which was to provide a coherent statement of a national
need, an agency's directive to fill that need, and the activities
planned to meet it.  Performance was generally defined as agency
outputs, with an agency's program structure linking outputs to
long-term objectives.  Systems analysis and other sophisticated
analytical tools were an intrinsic part of PPBS, with measurement
seen as an essential means to better understand federal outputs,
benefits, and costs. 

Management by Objectives, which was primarily a federal management
improvement initiative, ultimately sought to link agencies' stated
objectives to their budget requests.  Initiated by President Nixon in
1973, MBO put in place a process to hold agency managers responsible
for achieving agreed-upon outputs and outcomes.  Agency heads would
be accountable for achieving presidential objectives of national
importance; managers within an agency would be held accountable for
objectives set jointly by supervisors and subordinates.  Performance
was primarily defined as agency outputs and processes, but efforts
were also made to define performance as the results of federal
spending--what would today be called "outcomes."

Zero-Base Budgeting (ZBB) was an executive branch budget formulation
process introduced into the federal government in 1977 by President
Carter.  Its main focus was on optimizing accomplishments available
at alternative budgetary levels.  Under ZBB agencies were expected to
set priorities based on the program results that could be achieved at
alternative spending levels, one of which was to be below current
funding.  In developing budget proposals, these alternatives were to
be ranked against one another sequentially, from the lowest-level
organizations up through the department, without reference to a past
budgetary base.  In concept, ZBB sought a clear and precise link
between budgetary resources and program results. 
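
Because ZBB's ranking and funding mechanics were procedural, they can
be outlined in simple terms.  The following is a purely illustrative
sketch, not a reconstruction of any agency's actual ZBB submission;
the decision units, funding increments, expected results, ranks, and
spending ceiling are all hypothetical.

  # Illustrative sketch only: hypothetical decision packages, each an
  # increment of funding for a decision unit, with the results expected
  # from that increment and a rank assigned during sequential review.
  packages = [
      {"unit": "Field Inspections", "increment": "minimum",
       "cost": 8_000_000, "results": "12,000 inspections", "rank": 1},
      {"unit": "Research Grants", "increment": "minimum",
       "cost": 5_000_000, "results": "40 grants awarded", "rank": 2},
      {"unit": "Field Inspections", "increment": "current services",
       "cost": 2_000_000, "results": "4,000 additional inspections", "rank": 3},
      {"unit": "Research Grants", "increment": "enhanced",
       "cost": 2_500_000, "results": "25 additional grants", "rank": 4},
  ]

  def fund_to_ceiling(packages, ceiling):
      """Fund ranked packages in order until the ceiling is exhausted,
      without reference to a prior-year budgetary base."""
      funded, total = [], 0
      for pkg in sorted(packages, key=lambda p: p["rank"]):
          if total + pkg["cost"] <= ceiling:
              funded.append(pkg)
              total += pkg["cost"]
      return funded, total

  funded, total = fund_to_ceiling(packages, ceiling=15_000_000)
  for pkg in funded:
      print(pkg["unit"], "--", pkg["increment"], "--", pkg["results"])
  print("Total funded:", total)

In practice, as discussed later in this report, the volume of such
packages and the unfamiliarity of their format limited congressional
use of ZBB information.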


--------------------
\6 The Hoover Commission's recommendation for a performance budget
was first made a statutory requirement applicable to the Department
of Defense through the National Security Act Amendments of 1949. 
President Truman's fiscal year 1951 budget, released in January of
1950, was subsequently hailed as the "first performance budget," as
it applied the Commission's concepts governmentwide.  However, BAPA,
which was enacted in the fall of 1950 and effectively
institutionalized the President's 1951 budget presentation, did not
contain the phrase "performance budget." During final debate, some
members argued that the term was redundant with the phrase "functions
and activities" and could be restrictive of future budget
presentations. 


      THE LEGACY OF PAST
      INITIATIVES:  THE EVOLUTION
      OF PERFORMANCE BUDGETING
---------------------------------------------------------- Letter :2.1

Past initiatives, although generally perceived as having fallen far
short of stated goals, contributed to the evolution of
performance-based measurement and budgeting in the federal
government.  Many concepts first introduced by these initiatives were
absorbed into the federal government and persisted long after their
origins in PPBS, MBO, or ZBB had been forgotten.

  -- Hoover Commission reforms ultimately led to permanent changes in
     the President's budget presentations and a greater inclusion of
     performance information in the narrative summaries associated
     with each budget account.  The "obligations by activities"
     presentations established in response to the Commission's
     performance budgeting recommendations continue today, although
     they are now referred to as "obligations by program activity"
     or, more informally, "program activities."

  -- PPBS and MBO fostered exploration of difficult performance
     measurement issues, ultimately demonstrating the inherent
     limitations of analysis in a political environment and the often
     complex and uncertain relationship between federal activities,
     outputs, and outcomes. 

  -- ZBB illustrated the usefulness of defining and presenting
     alternative funding levels and expanded participation of program
     managers in the budget process. 

When viewed collectively the past initiatives suggest two common
themes.  First, any effort to link plans and budgets--that is, to
link the responsibility of the executive to define strategies and
approaches with the legislative "power of the purse"--must explicitly
involve both branches of our government.  PPBS and ZBB faltered in
large part because they intentionally attempted to develop
performance plans and measures in isolation from congressional
oversight and resource allocation processes.  Since goals,
objectives, and activities were not jointly discussed and agreed
upon, there was no consensus on what performance should be, how to
measure it, or how to integrate performance information with resource
decisions. 

Second, the concept of performance budgeting has evolved and will
likely continue to evolve.  Past initiatives illustrate a progression
from the straightforward efficiency notion implicit in the Hoover
Commission recommendations through the increasingly complex and
mechanistic processes of PPBS and ZBB.  Budgeting is the process of
making choices, and all of these initiatives sought to improve the
rationality of budget choices by focusing on the results of
activities--however those results might be defined.  This history
suggests that no single definition of performance budgeting
encompasses the range of past and present needs and interests of
federal decisionmakers.  One commentator has summarized this reality
as follows:

     "To a student of politics and of legislative bodies, it
     [performance budgeting] means .  .  .  a presentation and review
     of budget requests in such a manner as to emphasize issues and
     make possible more effective choices.  To a top administrator,
     it .  .  .  also [means] greater flexibility and discretion in
     his operations, plus better control and accountability with
     regard to his subordinates.  Down the line of an agency, it may
     mean a single source for funds, an enlargement of authority,
     flexibility, and responsibility in the use of funds.  .  .  . 
     To the accountant, it means accrual accounting, cost accounting,
     segregation of capital from operating accounts, working capital
     funds, and many other techniques."\7

In other words, the multiplicity of definitions reflects the
differences in the roles various participants play in the budget
process.  And, given the complexity and enormity of the federal
budget process, performance budgeting at the federal level will need
to encompass a variety of perspectives in its efforts to link
resources with results.\8


--------------------
\7 Frederick C.  Mosher, Program Budgeting:  Theory and Practice with
Particular Reference to the U.S.  Department of the Army (New York: 
American Book-Stratford Press, 1954), pp.  80-81. 

\8 This observation is not limited to the federal government.  See
Performance Budgeting:  State Experiences and Implications for the
Federal Government (GAO/AFMD-93-41, February 17, 1993); Using
Performance Measures in the Federal Budget Process, prepared by the
Congressional Budget Office, July 1993; Joint Staff Report: 
Performance Budgeting, prepared by the Joint Budget Committee and
Office of State Planning and Budgeting, State of Colorado, September
20, 1995; and Budgeting for Results:  Perspectives on Public
Expenditure Management, prepared by the Organisation for Economic
Co-operation and Development, 1995. 


      THE FUTURE OF PERFORMANCE
      BUDGETING:  THE GOVERNMENT
      PERFORMANCE AND RESULTS ACT
---------------------------------------------------------- Letter :2.2

As the current federal initiative seeking to link resources to
results, GPRA seeks to involve all participants, directly ties plans
and measures to budget presentations, and centers attention on
outcome performance measurement.  GPRA requires all federal agencies
to set strategic goals in consultation with the Congress and key
stakeholders; develop plans for program activities; measure
performance; and annually report to the President and the Congress on
the degree to which goals were met.  Appendix VI contains additional
information on GPRA's purposes and requirements. 

GPRA can be seen as melding the best features of its predecessors. 
Its required connection to budget presentations harkens back to BAPA;
its interest in performance measurement and cross-agency comparisons
reflects PPBS; and its concern with outcomes and outputs emulates
MBO.  In performance budgeting terms, GPRA avoids the mechanistic
approaches of previous efforts, notably PPBS and ZBB.  The Senate
committee report on GPRA\9 emphasized that although "this Act
contains no provision authorizing or implementing a performance
budget," it was imperative that the "Congress develop a clear
understanding of what it is getting in the way of results from each
dollar spent." Recognizing that "it is unclear how best to present
[performance] information and what the results will be," GPRA
requires pilot projects to develop alternative forms of performance
budgets. 

As one observer has noted, "there is no magic bullet that will
replace budget judgement and budget policies with science."\10 Past
initiatives demonstrate that any link between performance information
and resource allocation decisions is unlikely to be straightforward. 
The implicit presumptions of PPBS and ZBB--that systematic analysis
of options could substitute for political judgment--ultimately proved
unsustainable.  GPRA recognizes that decisionmakers, rather than
budget systems, must provide the judgments needed within a public
sector context.  That is, in a political process, performance
information can be one, but not the only, factor in budgetary
choice; performance
information can change the terms of debate, but not necessarily the
ultimate decision. 

Finally, GPRA should be seen as part of a series of critical
managerial and financial reform efforts currently underway in the
federal government that share common goals of better management and
accountability for results.  For example, the Chief Financial
Officers Act and efforts by the Federal Accounting Standards Advisory
Board seek to increase public confidence in government through
improved financial reporting.  These efforts will, among other
things, help achieve improved cost accounting and reliability of
data, essential steps in accurately matching resources to program
performance. 


--------------------
\9 S.  Rep.  No.  103-58 (1993). 

\10 John Mikesell, Fiscal Administration, 4th ed.  (Belmont, CA: 
Wadsworth, 1995), p.  190. 


   KEY DESIGN ELEMENTS OF GPRA
   INCORPORATE LESSONS FROM THE
   PAST, BUT IMPLEMENTATION
   CHALLENGES REMAIN
------------------------------------------------------------ Letter :3

In its structure, focus, and approach, GPRA incorporates important
lessons from past federal performance budgeting initiatives.  For
example,

  -- by requiring consultation between the executive and legislative
     branches on overall agency goals and missions, GPRA addresses
     past failures to link planning and goal setting processes with
     the congressional budget process;

  -- by requiring use of program activities in agency budget requests
     as the basis for performance planning and measurement, GPRA
     enhances prospects for effective links with the budget; and

  -- by emphasizing a range of performance measures that strive
     toward but do not initially demand outcomes, GPRA provides a
     realistic framework for the expectations and capabilities of
     performance measurement in the federal environment. 

Nevertheless, many of the challenges which confronted earlier efforts
remain unresolved and will likely affect early GPRA implementation
efforts.  Agency officials, legislative staff, and other experts we
met with recognized these continuing concerns and emphasized the need
to adjust expectations as new approaches and capabilities are
developed and tried. 


      GPRA EMPHASIZES FORMAL
      STRATEGIC PLANNING
      INCORPORATING STAKEHOLDER
      CONSULTATION
---------------------------------------------------------- Letter :3.1

Where most past initiatives did not link performance information
developed within the executive branch with congressional processes,
GPRA provides that agencies must consult with cognizant congressional
committees, and other stakeholders, as strategic planning efforts
progress.  This requirement is GPRA's most fundamental change and
perhaps its most significant challenge, because any effort to link
resources and results must encompass some fundamental understanding
of the goals of a particular program.  However, discussions between
agencies and the Congress on strategic planning are likely to
underscore the competing and conflicting goals of many federal
programs as well as the sometimes different expectations among the
various stakeholders in the legislative and executive branches.  In
addition, the federal government's increasing reliance on third
parties--principally, state and local governments and
contractors--further complicates efforts to reach consensus on
program goals.  And, significantly, executive branch officials and
legislative staff we spoke with seemed to approach strategic planning
consultations with very different expectations. 

For the most part, past initiatives defined planning processes as
internal agency activities, with limited external visibility and
virtually no external involvement.  Not surprisingly, where
initiatives were in practice confined within the executive branch,
legislative oversight and budget decision-making were ultimately
unaffected; the Congress resorted to traditional information sources,
which agencies quickly reemphasized.  For example, in PPBS, executive
agencies did not provide the Congress with information on alternative
program choices or even on the basis for decisions to pursue a
particular program course, often despite requests from the
Congress.\11 During MBO, presidential objectives approved by the
administration were made public, but congressional involvement in
determining these objectives was not sought.  Although some ZBB
decision packages were made available to the Congress, differences in
format and the sheer volume of paperwork limited congressional
interest and discouraged use.\12

GPRA's premise of joint legislative and executive involvement in
strategic planning is new.  GPRA requires a formal document based in
part on consultations with the Congress and other interested
stakeholders.  The Senate report on GPRA indicates that strategic
plans are to be the basic foundation for a recurring process of
goal-setting and performance measurement tied to the agency's program
activities and that goals must be clear and precise in order to
maintain a consistent direction.  The Senate report on GPRA
recognizes that shifts in political philosophy may alter priorities
and means of achieving objectives but assumes that legislatively
determined missions and goals would remain largely unchanged from
year to year.  GPRA strategic planning was viewed as fundamentally
different from previous efforts, requiring that agency missions and
goals be connected to day-to-day operations. 

Past governmentwide initiatives suggest that achieving GPRA's
strategic planning consultation goals will be difficult, particularly
given the changes in emphasis and approach established by GPRA.  For
example, reaching a reasonable level of consensus on clear and
precise strategic goals will almost certainly encounter political
hurdles.  Competing and/or ambiguous goals in many federal programs
are often a by-product of the process of consensus building;
strategic planning which is seen as merely rekindling old conflicts
may not be well-received within the political process.  Furthermore,
the federal government's continued and, in recent years, expanded
reliance on state and local governments and other third parties to
deliver federally funded services--some of the stakeholders that
would likely be part of the consultation process--adds extra
complications to the prospect of reaching consensus.  For example,
applying PPBS to programs requiring participation by federal, state,
and local governments was seen as a major implementation problem. 

Discussions with legislative and executive branch staff confirmed the
above concerns and also suggested that these officials may be
approaching strategic planning from fundamentally differing
perspectives.  Agency officials viewed strategic plans as a
potentially useful means to a dialogue with congressional committees
but were skeptical that consensus on strategic goals could be
reached, especially given the often conflicting views among an
agency's multiple congressional stakeholders.  Some noted that
achieving consensus may result in rhetorical rather than substantive
plans and doubted the capacity of such plans to inform congressional
decision-making.  Legislative staff characterized some of the early
strategic plans as lacking in substance and requisite detail.  One
staff member expressed concern that agency strategic plans would be
used to present political agendas and justifications for the status
quo, rather than real assessments of need and value provided by
specific program activities.  Another legislative staff member suggested
that the broader focus of the strategic planning process could hinder
traditional congressional oversight and control processes. 

Some experts we contacted suggested that the expectations for
strategic planning must be lowered, particularly for the initial
attempts at congressional consultation.  Specifically, they urged
agencies and the Congress to seek a "reasonable degree" of consensus
on draft strategic plans and allow several iterations to refine plans
and demarcate lines of conflict and agreement.  In the opinion of
these experts, establishing an ongoing dialogue between the branches
will be more important than seeking immediate consensus. 


--------------------
\11 In a congressional hearing discussing the availability of PPBS
information for the Congress, the Bureau of the Budget (BOB), the
predecessor to OMB, took the position that information used to
develop the budget was internal to the executive branch.  However,
BOB stated that budget requests and legislative justifications should
incorporate evaluation data and cost estimates that arose from PPBS
analysis. 

\12 Some ZBB pilots were congressionally directed prior to
governmentwide implementation, but results were similar to the Carter
initiative:  congressional use was hindered by large volumes of
information in unfamiliar formats. 


      GPRA PERFORMANCE PLANNING
      AND MEASUREMENT REQUIRES
      DIRECT LINKAGE WITH THE
      BUDGET
---------------------------------------------------------- Letter :3.2

Where past initiatives tended to devise unique structures to capture
performance information that ultimately proved difficult to link to
congressional budget presentations, GPRA requires agencies to plan
and measure performance using the same structures which form the
basis for the agency's budget request:  program activities.  This
critical design element of GPRA aims at assuring a simple,
straightforward link among plans, budgets, and performance
information and the related congressional oversight and resource
allocation processes.  However, the suitability of agencies' current
program activity structures for GPRA purposes is likely to vary
widely and require modification or the use of crosswalks. 
Discussions with agency officials and legislative staff suggest that
both are well aware of potential challenges in implementing this GPRA
requirement but, again, tend to view the need for and benefits of
adjustments to program activity structures from very different
perspectives. 

As discussed previously, the "program structures" (PPBS) and
"decision units" (ZBB) of earlier performance budgeting initiatives
were not intended, at least initially, to explicitly connect to
either an agency's organizational structure or congressional budget
justifications.\13 Attempts to crosswalk PPBS program structures to
budget presentations proved unduly cumbersome, and subsequent efforts
to align these structures with the federal budget were ultimately
unsuccessful.  Similarly, under ZBB, crosswalks were needed between
decision units and budget structures, and decision unit
consolidations obscured the analysis of alternative spending levels
and performance that was ZBB's presumed hallmark.  Congressional
interest in both initiatives quickly waned as plans and performance
information could not be directly linked to familiar oversight and
budget structures.  In the end, structural incompatibilities meant
that resources were not linked to the new results information. 

GPRA's required use of program activities appearing in the
President's Budget as the basis for performance planning and
measurement is intended to establish the direct budgetary link absent
in earlier initiatives.  But this goal is dependent on the capacity
of the current program activity structures to meet GPRA's needs. 
That is, where the success of earlier initiatives hinged on the
extent to which unique planning structures could link to congressional
processes, current program activity structures useful to
congressional resource allocation processes must prove their
suitability for planning and measurement purposes.  Subject to
clearance by OMB\14 and generally resulting from negotiations between
agencies and appropriations subcommittees, program activity
structures differ from agency to agency and, within an agency, from
budget account to budget account.  Program activities, like budget
accounts, may represent programmatic, process, organizational, or
other orientations\15 and, similarly, their suitability for GPRA
planning and measurement purposes will also vary.  For example,
during ZBB, some agencies used their program activities as the basis
for consolidated decision units; one agency that did so found that
the process orientation of its program activities (e.g., regulatory
development) rendered ZBB rankings meaningless. 

Under GPRA, when program activity structures present challenges to
performance planning and measurement objectives, agencies have
options.  GPRA allows agencies to consolidate, aggregate, or
disaggregate program activity structures for performance planning
purposes, where needed.  This approach would, of course, require
subsequent crosswalks, though these would presumably be less
burdensome than those of prior initiatives.  Agencies may also
attempt to renegotiate program
activities with their appropriations subcommittees and OMB.  Program
activities, however, serve specific functions and may prove resistant
to frequent or substantial change.  For example, program activities
(1) provide a relatively consistent structure for OMB and the
Congress, allowing comparison of current spending to estimates of
future needs, and (2) often form the basic unit of congressional
oversight for determining reprogramming thresholds.\16
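
The mechanics of such a crosswalk are straightforward to illustrate.
The following sketch is purely illustrative; the goal names, program
activity names, and dollar figures are hypothetical and do not
correspond to any actual agency budget.

  # Illustrative sketch only: a hypothetical planning structure
  # "crosswalked" to the program activities of a budget account.
  crosswalk = {
      "Goal 1: Safer workplaces": ["Enforcement", "Compliance assistance"],
      "Goal 2: Better injury data": ["Statistics", "Survey operations"],
  }

  # Obligations by program activity from the budget submission
  # (hypothetical figures, dollars in thousands).
  obligations = {
      "Enforcement": 120_000,
      "Compliance assistance": 45_000,
      "Statistics": 30_000,
      "Survey operations": 15_000,
  }

  def resources_by_goal(crosswalk, obligations):
      """Aggregate budget-account obligations to the planning structure."""
      return {
          goal: sum(obligations[activity] for activity in activities)
          for goal, activities in crosswalk.items()
      }

  print(resources_by_goal(crosswalk, obligations))
  # {'Goal 1: Safer workplaces': 165000, 'Goal 2: Better injury data': 45000}

Even a simple mapping of this kind must be maintained as program
activities are renegotiated, one reason such crosswalks proved
cumbersome under earlier initiatives.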

Agency officials we spoke with confirmed the varying suitability of
their program activity structures for GPRA purposes.  One agency
successfully worked through the performance planning process using
its existing program activities; another agency found it necessary to
devise a separate planning structure and then link back to program
activities using a crosswalk.  This second agency had a program
activity structure that reflected its organizational units--a
structure useful for traditional accountability purposes but less
useful for outcome planning.  Still other agencies separated
performance planning from program activity structures, believing it
necessary to first establish appropriate program goals, objectives,
and measures before considering the link to the budget.  These
agencies planned to rely on GPRA's provision to aggregate,
disaggregate, or consolidate program activities. 

Our discussions with agency officials and legislative staff
highlighted a potential tension over the use of program activities as
a basis for agencies' performance planning and measurement.  Some
agency officials saw program activity structures as secondary to
strategic planning; thus, where current program activity structures
proved unsuitable for planning purposes, these officials viewed
change in the program activity structure as inevitable and
appropriate.  Legislative staff generally viewed these structures as
fundamental to congressional oversight of agency activities; thus,
change was viewed with apprehension and concern.  Legislative staff
were generally comfortable with existing structures and questioned
whether changes would frustrate congressional oversight.  Agency
officials generally saw a need to be flexible in using program
activities as a planning mechanism, and considered it likely and
desirable to change program activity structures to better align with
GPRA goals and objectives; however, they noted that changes could
prove difficult and time-consuming to negotiate with the Congress. 
In addition, agency officials were not convinced that changes to
program activities would necessarily achieve GPRA's purposes,
particularly when competing or unclear goals existed or when agency
goals and objectives were likely to change over time. 

The experts we met with generally agreed that the program activity
requirement of GPRA would likely constitute a significant
implementation challenge.  One expert expressed the tension between
legislative and executive branch officials as a difference in the
purpose and role of the program activity structure.  Congressional
interests emphasize oversight and control, thus necessitating detail
and continuity in the structure.  Agencies, however, use program
activities for managerial purposes, thus seeking less detail in favor
of more flexibility.  Another expert noted that GPRA does not define
how to aggregate, disaggregate, or consolidate program activity
structures. 


--------------------
\13 In fact, under PPBS, budget presentations were ultimately
expected to conform to the new program structures.  PPBS guidance
noted that over time it "may be necessary and desirable for the
program by activity portion .  .  .  to be brought into line with the
program structure developed."

\14 OMB Circular A-11, "Preparation and Submission of Budget
Estimates," requires that an agency's program activities must be
useful for the analysis and evaluation of budget estimates, be
related to the administrative operations of the agency, and have
accounting support. 

\15 For a discussion of this point, see Budget Account Structure:  A
Descriptive Overview (GAO/AIMD-95-179, September 18, 1995). 

\16 Reprogramming is the shifting of funds within an appropriation to
purposes different from those contemplated at the time the
appropriation was requested and provided.  Several appropriations
subcommittees use program activity structures to establish
reprogramming thresholds.  If an agency needs to shift funds from one
activity to another above the defined threshold, it is expected to
notify the subcommittee. 
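
As a purely illustrative sketch of the notification rule described in
this footnote, the check reduces to a simple comparison; the dollar
threshold shown is an assumed figure, since actual thresholds vary by
subcommittee and account.

  # Illustrative sketch only: hypothetical reprogramming check.
  REPROGRAMMING_THRESHOLD = 500_000  # assumed threshold, in dollars

  def requires_notification(amount_shifted, threshold=REPROGRAMMING_THRESHOLD):
      """Return True if a proposed shift of funds between program
      activities exceeds the threshold, requiring subcommittee notice."""
      return amount_shifted > threshold

  print(requires_notification(750_000))   # True: notify the subcommittee
  print(requires_notification(200_000))   # False: below the threshold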


      GPRA PERFORMANCE REPORTING
      EMPHASIZES OUTCOMES
---------------------------------------------------------- Letter :3.3

GPRA performance reporting allows agencies to use a range of
performance measures but contains a specific emphasis on
outcomes--the actual results of a program activity compared to its
intended purpose.  Past initiatives struggled with a variety of
approaches, ultimately finding it more practical to measure agency
processes and outputs than outcomes.  Agency officials implementing
GPRA affirmed the value of outcome measurement and were also
exploring alternative approaches due to the inherent challenge of
outcome measurement in a federal environment marked by entitlement
programs and other programs performed by nonfederal actors. 
Legislative staff questioned the validity and usefulness of outcome
data in decision-making and perceived a potential for loss of needed
detail.  Taken together, the views of executive and legislative
officials suggest GPRA will be challenged to identify performance
measures that are both outcome-based and useful for traditional
accountability purposes.\17

Past initiatives struggled with performance reporting.  Taken
together, their experiences reflect a slow refinement of the
objectives and awareness of the difficulties of performance
measurement within the federal government.  Efforts spurred by the
Hoover Commission centered on identifying the activities to be
performed and their costs, most commonly described as unit cost and
workload analysis.  Under PPBS, the purposes and uses of analysis
were expanded to include a decision-making component; hence, not only
were outputs and their costs analyzed, but PPBS expected that such
analysis could define the most urgent national goals and determine
the most effective and efficient means of reaching these goals.  But
agencies which attempted to gather this performance-oriented data
found the process to be far more difficult than expected, and
officials reported that several years would be required to develop
the information and collection systems envisioned by PPBS.  Agency
officials reporting on their experiences under PPBS also noted
situations where it was difficult to relate programs to a stated
outcome or to separate out other influences that might affect
ultimate outcomes.  For example, the Upward Bound program was
designed to increase skills and motivation for low-income high school
children.  However, PPBS officials had no way to isolate the
program's effect from other environmental influences which might also
have contributed to the success or failure of different program
participants. 

While subsequent initiatives reduced their expectations regarding the
use of performance measurement and analysis, they continued to
encounter similar difficulties.  Under MBO, in contrast to PPBS
experiences, presidential objectives and related agency programs were
to be determined first, with discussion of appropriate measures to
follow.  This approach recognized that some presidential
objectives, such as achieving cooperation with other countries or
successfully negotiating international economic treaties, did not
lend themselves to scientific analysis.  ZBB decision packages were
expected to include the outputs or accomplishments expected from a
program.  However, these performance measures were very quickly
overwhelmed by the need to present decision packages within budget
deadlines.  ZBB allowed the use of proxy measures of performance and
even indicated that decision packages were expected to be ranked with
or without the benefit of performance information.  In fact, a
subsequent analysis of ZBB efforts found that fewer than half of the
decision packages examined had quantifiable accomplishments,
workload, or unit cost information. 

While acknowledging the inherent difficulties of performance
measurement, GPRA requires that agencies establish performance
indicators to be used in measuring the relevant outputs and outcomes
of each program activity.  The Senate report on GPRA indicates that
sponsors understood the importance of measurement to any
performance-based initiative and that outcomes are the most desirable
performance indicator.  However, GPRA also accepts that measurable
outcomes may not always be possible--that causal links between
federal efforts and desired outcomes may never be established--and
encourages the development and use of a range of related indicators,
such as quantity, quality, timeliness, and cost, to approximate
outcomes.

Executive officials we spoke with were strongly supportive of
performance measurement, including outcome measurement, but raised
concerns about the use of this information, particularly as a vehicle
for congressional oversight.  These officials saw value in defining
outcomes for planning purposes and, given the difficulties inherent
in outcome measurement, were also testing various approaches,
including identifying intermediate performance measurements,\18 using
multiple measures to reflect different stakeholders' interests, and
applying nonquantitative measures.

But executive officials were concerned that in today's federal
environment, full or ultimate program outcome was typically not under
the control of a single federal agency, complicating responsibility
determinations and resulting resource allocation decisions.  In some
cases, outcomes can only be achieved over many years; in other cases,
federal activities are but one, and often a small, component of total
public and private sector interventions in a given program area; and
in still other cases, intended results cut across the activities of
several agencies.  In each of these cases, any individual agency
outcome measurement is often incomplete and therefore of limited
value to budgetary decisions.  Moreover, the increasing role of state
and local governments as well as of other third parties as the
delivery agents for federally financed activities means that, in
achieving many federal outcomes, the efforts of nonfederal
actors--and their objectives and concerns--are critical factors in
performance measurement.
spending, in which federal actions are typically a function of
statutory eligibility determinations, further clouds the ability to
hold agencies accountable for outcomes by shifting attention from
broad goals (e.g., assuring a certain standard of living) to specific
processes (e.g., ensuring correct and timely payments to
individuals). 

Legislative staff also expressed concerns regarding the use of
outcome measurement for oversight purposes, but principally in terms
of the completeness, validity, and reliability of the data for
decision-making.  In particular, legislative staff were reluctant to
have outcome information substitute for the more detailed information
they customarily receive, indicating that such a substitution could
lead to less informed, rather than better informed, legislative
decision-making.  One official described an agency's strategic plan
as outcome-based, but with little discussion of the activities
planned to meet the established agency goals; others expressed
frustration that an agency's goals defined in its GPRA plans can be
very different from those negotiated in congressional oversight and
resource allocation processes.  Finally, legislative staff also
expressed strong interest in congressional involvement in measurement
questions.  Although concerned about the added burden for
congressional staff, legislative staff felt that the Congress should
take an active interest in what is measured and how it is measured. 
GPRA performance information, augmented by audited financial data,
was seen as most useful for the Congress, but the staff emphasized
that the quality of this information would need to be greatly
improved. 

Experts we spoke with encouraged agencies to identify a range of
measures and indicated that this approach is particularly useful for
programs with multiple or conflicting goals.  Nonquantitative
measures were also cited as important for activities such as research
and development, and one expert urged the use of multiyear measures
where goals could not be realistically achieved in a single year.\19
One expert cautioned against agencies identifying outcomes too
quickly, indicating that such a practice risked rhetoric over
measurement and would not be useful in holding agencies to a level of
performance.  Similarly, social indicators--such as poverty rates or
mortality statistics--should be used only where it is evident that
federal actions have the capacity to affect the indicator.


--------------------
\17 For a discussion of related issues within foreign and state
governments, see Managing for Results:  Experiences Abroad Suggest
Insights for Federal Management Reforms (GAO/GGD-95-120, May 2, 1995)
and Managing for Results:  State Experiences Provide Insights for
Federal Management Reforms (GAO/GGD-95-22, Dec.  21, 1994). 

\18 Intermediate outcomes are outcomes that occur between outputs
(delivery of products or services) and the achievement of the
ultimate purposes of a program (reducing pollution and improving
health, for example).  Intermediate outcomes might include client
satisfaction, actions taken by other levels of government, or actions
by those in the private sector. 

\19 See, for example, Managing for Results:  Key Steps and Challenges
in Implementing GPRA in Science Agencies (GAO/T-GGD/RCED-96-214, July
10, 1996). 


   GPRA DEFINES A PHASED,
   ITERATIVE IMPLEMENTATION
   PROCESS THAT ENHANCES PROSPECTS
   OF INTEGRATION WITH THE BUDGET
------------------------------------------------------------ Letter :4

Unlike past initiatives, GPRA's implementation design enhances
prospects for a fuller integration of performance information with
budgeting.  As noted above, past initiatives were generally attempted
within a single annual budget cycle and tended to lack processes for
addressing implementation problems.  In contrast, GPRA posits a
multiyear, iterative implementation process, built around periodic
publicly available products, that will allow agencies and the
Congress the opportunity to refine performance planning, measurement,
and reporting, and to modify, as needed, current budget processes and
presentations. 

Past initiatives tended to take an "instant implementation" approach
that limited their capacity to address challenges as they arose.  At
their outset, these initiatives generally gave agencies little time
for complex implementation tasks.  For example, PPBS gave agencies 10
weeks to develop requisite program structures--a task which the
Department of Defense, the originator of PPBS, took 10 years to
accomplish.  ZBB similarly imposed numerous changes to executive
branch budget formulation processes within a single budget cycle,
with guidance that agencies believed was inadequate on key requirements. 
Given this abbreviated implementation process and the fact that cost
estimates and decision packages developed under these initiatives
were not routinely made available to the Congress, it is not
surprising that congressional budget decision-making was unaffected. 

In contrast, GPRA defines a 7-year implementation time frame, from
initial pilots to first governmentwide performance reports, and
incorporates feedback mechanisms such as required evaluations of key
concepts before governmentwide implementation.\20 Once key
requirements have been phased in, successive iterations of agencies'
strategic plans, performance plans, and performance reports will
allow opportunities for needed refinements.  In addition, GPRA's
products, which will be part of the public record, are to be made
available routinely to the Congress in time to allow for the
information to be integrated with congressional budget and oversight
processes.  For example, the Senate report on GPRA states that its
plans and reports can give the Congress the ability to identify where
planned resources do not appear adequate to achieve intended results
and then to make realignments as appropriate. 

GPRA's implementation approach also provides for 2-year pilot
projects of alternative performance budget approaches in at least
five agencies.  During the second year of these pilots (fiscal year
1999), performance-based budget presentations for each of the
designated agencies are to be included in the President's Budget
submission to the Congress.  The pilots' aim is to test possible
approaches and develop capabilities toward realizing the potential of
performance budgeting, and to present varying levels of performance,
including outcome-related performance, resulting from different
budgeted amounts.  GPRA also requires OMB to evaluate the results of
the pilots by March 31, 2001, and assess whether legislation
requiring performance budgets should be proposed. 

The Senate report on GPRA said that the performance budgeting pilots
are to begin "only after agencies had sufficient experience in
preparing strategic and performance plans, and several years of
collecting performance data." In this context, and recognizing the
importance of concentrating on governmentwide GPRA implementation in
1998, OMB indicates that these pilots will be delayed for at least a
year.  As envisioned under GPRA, performance budgeting will require
the ability to calculate the effects on performance of marginal
changes in cost and funding.  According to OMB, very few agencies
currently have this capability, and the delay will give time for its
development. 


--------------------
\20 OMB called for changes in the quality and quantity of performance
information in agency budget submissions earlier than required by
GPRA.  For further discussion, see Office of Management and Budget: 
Changes Resulting from the OMB 2000 Reorganization
(GAO/GGD/AIMD-96-50, Dec.  29, 1995). 


   GPRA WILL FACE UNIQUE CONFLICTS
   ARISING FROM BUDGETARY
   PRESSURES
------------------------------------------------------------ Letter :5

In one critical dimension, GPRA will face an environment unknown to
previous performance budgeting initiatives:  sustained, real declines
in discretionary spending.\21 Past efforts faced budget-related
tensions, but nothing comparable to that which will likely form the
initial operating environment for GPRA.  Both implementation
challenges and opportunities will likely arise from different
expectations regarding the appropriate role for GPRA within this
period of declining resources.  To executive officials we spoke with,
performance information was seen as essential to justify and improve
current program performance; to legislative staff, performance
information was expected to prove valuable as a government downsizing
tool. 

As GPRA is implemented governmentwide, total discretionary spending
is projected to decline in real terms, continuing the pattern of the
last 6 years.  This constitutes a unique implementation environment
when compared to past initiatives.  Hoover Commission recommendations
were implemented as the federal government shifted from a wartime
bureaucracy; PPBS faced the competing spending tensions of the
Vietnam war and an ambitious social agenda; ZBB was instituted as
federal deficits reached what were then postwar highs and the economy
experienced unusually high inflation.  While all of these concerns
affected consideration and passage of the budget, federal spending
during each of these initiatives generally continued to experience
real increases, particularly for discretionary spending. 

Budgetary constraints will likely raise implementation issues for
both agencies and the Congress.  Experts we spoke with noted that in
implementing GPRA all participants will need to build capacity to
develop and use performance information.  For agencies, this will
mean acquiring necessary resources and skilled personnel, and
developing the management leadership needed to sustain a
performance-based organization.  Similarly, the Congress will need to
expand its capacity to actively participate in strategic planning,
effectively communicate results-based expectations, and manage its
use of performance information provided by agencies.  Generally,
executive officials did not see resource availability as a
significant concern; they tended to view GPRA as the "right thing to
do" and believed that needed resources would be found.  However, they
were concerned about the potential burden of expanded performance
measurement requirements, noting that GPRA's requirements could be
especially onerous if, as some expected, they were layered on top of
existing information requirements. 

In our discussions with executive officials and legislative staff,
both agreed that declining budgets provided new incentives to use
performance information as a key input to decision-making, but each
had differing expectations as to how this should be done.  In effect,
each had differing views on what constituted appropriate and
effective "use." Executive officials believed that GPRA can be used
to present budgetary requirements more effectively in
performance-based terms, for example, to the Congress.  In addition,
they noted that GPRA can be useful within the executive branch to
identify ways to streamline operations and to make necessary budget
reductions; its principal value was internal and management oriented,
stemming from its ability to clarify missions and performance
expectations.  However, they also noted that current budgetary
pressures and apprehension about use of GPRA information could
increase levels of defensiveness among agency staff. 

Legislative officials agreed that GPRA should aid in presenting
budgetary requirements in performance-based terms.  They saw GPRA as
encouraging both agencies and the Congress to revisit current
functions and activities in relation to their articulated mission and
to identify poorly performing or overlapping program activities. 
Legislative staff added that, given the difficult budget choices
facing the nation, terminating programs based on GPRA performance
information was a far more defensible practice than instituting
across-the-board reductions in all spending--all too often the only
other alternative.  These staff expressed concern that as agencies
and the Congress search for ways to reduce federal spending,
conflicts over agency missions and program goals are more likely to
surface, leading to agency "repackaging" of information to obscure
poor performance.  And they questioned whether agencies could provide
valid and accurate performance data. 

Other experts saw potential for use of GPRA in the budget process but
expressed caution.  These experts noted that the GPRA process can
allow agencies and the Congress to renegotiate program goals, thus
forcing rigor into federal budgeting and management processes.  One
expert emphasized that agencies would need to see GPRA information
used in decision-making if they were to continue to invest in the
initiative.  However, if GPRA's exclusive result is to terminate
programs, the initiative could suffer a loss of support within the
executive branch.  Experts also stated that GPRA information would
need to be used outside of the budget process. 


--------------------
\21 Real discretionary spending refers to outlays that are controlled
through annual appropriations and adjusted for inflation.  The
President's fiscal year 1998 budget projects a decline in real
discretionary spending from 1998 through 2002. 


   OBSERVATIONS
------------------------------------------------------------ Letter :6

While GPRA has incorporated critical lessons from the past, the
Congress and the executive branch will face certain challenges in
their efforts to connect resources to results in the federal
government.  These challenges cannot be addressed by either the
executive or legislative branch alone; all those involved in the
resource allocation process must play a part.  In particular, efforts
to implement GPRA must address the following issues: 

  -- The Congress and the executive branch will need to explore what
     can be expected of a performance budgeting system.  GPRA can
     inform the budget process and change the nature of its dialogue
     by more routinely introducing performance information into
     decision-making.  But, GPRA cannot be expected to eliminate
     conflict inherent in the political process of resource
     allocation, and final decisions will appropriately take into
     account many factors, including performance. 

  -- The Congress and the executive branch must acknowledge that it
     takes time to develop goals, outcomes, and measures that are
     valid and acceptable to a range of stakeholders.  All
     participants must take full advantage of the iterative planning
     and reporting processes defined by GPRA.  Immediate expectations
     regarding budgetary impact and the ease of performance
     measurement must be tempered with long-term involvement and
     commitment to achieving GPRA's purposes. 

  -- The Congress and the executive branch must recognize the
     difficulties associated with devising a system that integrates
     performance and budget information.  GPRA provides for such
     integration through the program activity structure of the
     federal budget.  Both the budget and GPRA processes must be
     better aligned, requiring adjustments and accommodations.  In
     some cases, agencies may need to develop effective crosswalks
     between strategic plans and the budget; in other cases, agencies
     and the Congress may decide to change the program activity
     structure in the budget.  Improved financial reporting and
     auditing as required by the Chief Financial Officers Act will
     further strengthen the cost basis and reliability of data
     underlying the link between performance information and the
     budget. 

Over the longer term, GPRA can become a powerful tool for the hard
budgetary choices that the Congress and the administration will face
in the coming years.  In addition to focusing attention on the
performance of individual program activities, GPRA can be used to
address one of the more intractable problems of the federal
government--that of duplicative programs that cut across federal
missions and agencies.  The Congress and the administration could use
GPRA as the vehicle to devise a framework that compares and
integrates decisions that affect related programs.  In this manner,
GPRA's focus on governmentwide performance can offer an important
alternative to across-the-board reductions and better inform choices
among competing budgetary claims. 


---------------------------------------------------------- Letter :6.1

We are sending copies of this report to the Chairmen and Ranking
Minority Members of the Senate Committee on the Budget; House
Committee on the Budget; Senate Committee on Appropriations; House
Committee on Appropriations; Subcommittee on Government Management,
Information and Technology, House Committee on Government Reform and
Oversight; the Director of the Office of Management and Budget; and
other interested parties.  We will also make copies available to
others on request. 

The major contributors to this letter were Michael J.  Curro, Carolyn
L.  Yocom, and Linda F.  Baker.  If you have any questions, I can be
reached at (202) 512-9573. 

Paul L.  Posner
Director, Budget Issues


OBJECTIVES, SCOPE, AND METHODOLOGY
=========================================================== Appendix I

The specific objective of our work was to compare and contrast the
key design elements and approaches of GPRA with those of similar past
initiatives in order to identify potential challenges for GPRA
implementation.  To identify past federal performance budgeting
initiatives, we used the following criteria:  (1) the initiative
occurred after World War II, (2) the initiative was implemented
governmentwide, and (3) the initiative asserted (either initially or
ultimately) a relationship between performance information and the
federal budget process.  Based upon these criteria, we identified
four prior federal initiatives:  federal performance budgeting
initiatives derived from the first Hoover Commission; the
Planning-Programming-Budgeting-System (PPBS); Management by
Objectives (MBO); and Zero-Base Budgeting (ZBB).  Our work did not
address performance budgeting initiatives that were limited to a few
programs or agencies, nor did we address initiatives that were
planned but never fully implemented.  For example, this approach
excluded the end-results budgeting efforts in the Forest Service
during the 1980s and President Ford's Presidential Management
Initiatives. 

To collect information on GPRA and on the four prior federal
initiatives, we used a qualitative research design.  In making our
review of each prior initiative, we conducted extensive literature
searches, including pertinent legislative histories, hearings, and
committee prints.  For GPRA, we collected information on its
legislative history as well as other relevant information including
OMB guidances, selected pilot performance plans and reports, and
available reviews of GPRA implementation efforts to date.  We
compiled information on the context, implementation approach, and
results of each of the prior initiatives.  To compare and contrast
these analysis results with GPRA, we summarized our findings for each
initiative, then compiled a set of observations relevant to GPRA
design and implementation.  From this work we identified a set of
potential challenges for GPRA implementation as well as relevant
observations based on past initiatives. 

To compare the results of our analysis with GPRA implementation
experiences to date, we contacted selected individuals in the
executive and legislative branches and other experts from outside
government.  We selected these individuals based on their knowledge,
experience, and interest in GPRA.  We asked them to review the
identified challenges and observations and participate in one of
three panels:  (1) an executive panel of individuals with direct
responsibility for implementing GPRA and representing agencies
covering a range of functions and program types (e.g., regulatory,
direct service provision, grant administration, research and
development); (2) a legislative panel composed of staff from
authorizing, budget, and appropriations committees in the House of
Representatives; and (3) a panel of individuals from the National
Academy of Public Administration (NAPA), academia, and former
government officials with expertise in GPRA or prior performance
budgeting initiatives. 

We asked the panelists to review our observations and indicate the
extent to which the challenges we identified held true for the
programs and/or budgets under their purview or within their
experience.  We also asked panelists to discuss what approaches had
been used or might be considered to mitigate these concerns.  To
assure maximum candor, individuals were informed that there would be
no attribution of their comments to them or their organizations. 

We conducted our work in Washington, D.C., between October 1996 and
March 1997.  We requested comments on a draft of this product from
the Director of OMB.  On March 3, 1997, we met with designated OMB
officials and discussed and incorporated changes based upon their
comments. 


THE FIRST HOOVER COMMISSION
========================================================== Appendix II

CONTEXT

After World War II, America was left with a wartime organizational
bureaucracy and a huge national debt that exceeded the gross domestic
product (GDP).  Reorganization planning evolved as a systematic means
of reducing federal spending while allaying concerns that such
reductions would cause a return to the depression of the 1930's.  The
President and the Congress explored various reorganization efforts,
the most effective and well known being the Commission on the
Organization of the Executive Branch, more commonly referred to as
the first Hoover Commission, established by law in 1947. 

The Declaration of Policy in the act creating the first Hoover
Commission (61 Stat.  246, July 7, 1947) focused on promoting
economy, efficiency and improved services in the executive branch of
government.  The Commission was charged with examining the structural
reorganization of departments and agencies and the President's
managerial authorities; it published 19 reports with over 270
recommendations in the spring of 1949.  With estimates of the number
of implemented recommendations being as high as 196, the first Hoover
Commission is considered to have been highly successful. 

One recommendation deemed successfully implemented was that for
performance budgeting, which the Commission defined as follows: 

     "Under performance budgeting, attention is centered on the
     function or activity--on the accomplishment of the
     purpose--instead of on lists of employees or authorizations of
     purchases .  .  .  .  this method of budgeting concentrates
     congressional action and executive direction on the scope and
     magnitude of the different Federal activities.  It places both
     accomplishment and cost in a clear light before the Congress and
     the public."

Performance budgets as prescribed by the Hoover Commission were to
provide more comprehensive and intelligible information to the
President, the Congress, and the public.  And, the Commission
recommended that attention should shift away from government
inputs--items of expense, lists of federal employees--to government
outputs--its accomplishments, activities, and their related costs. 

IMPLEMENTATION APPROACHES

Both the executive and legislative branches of government made
efforts to implement a performance budget.  In the executive branch,
initial work on a performance budget began in 1949 when the Bureau of
the Budget (BOB) began preparation for the 1951 budget.\1 BOB issued
a statement to the Congress about the unique nature of the 1951
budget presentation, pledging its support for a performance-type
budget suggested by the Hoover Commission and others: 

     "While the 1951 budget may be described as the first performance
     budget, it will be far from perfect, and we hope that we can
     improve it immeasurably in later years."

The 1951 budget submission was a distinct change from prior
Presidential budgets.  One of the more significant changes made was
in the "obligations by activities" section of the budget.  This
section (1) listed the programs or activities embedded
within a budget account, (2) separated operating and capital
expenses, and (3) established breakouts for grants and other fixed
charges.  Prior to the 1951 budget, less than 45 percent of
all budget accounts contained obligation by activity subdivisions;
after the 1951 budget, all accounts did.  The 1951 budget also
included narrative statements on program and performance for each
account.  Narrative statements varied in their approach, some
presenting workload and unit cost information and others simply
describing activities within the budget account.  Finally, the 1951
budget replaced detailed lists of civilian positions and salaries
that accompanied each account with summary information on employment
levels. 

Most executive agencies charged with implementation had high
expectations for performance budgets as a means of better defining,
presenting, and executing the budget.  Performance budgets were
expected to align programs and activities in a uniform manner and
assist managers in making trade-offs between--and within--particular
programs.  Agencies also viewed performance budgets as correcting
budgeting and accounting weaknesses and improving the administration
and oversight of programs.  And, some agencies saw the submission of
budgets on a program and functional basis as a simplification of the
federal budget. 

However, some agencies did provide more cautionary statements
regarding the implementation of performance budgeting.  In
particular, agencies expressed concern regarding whether--or how--to
define different functions and activities consistently.  Agencies
also noted that the requirements for performance budgeting were
adding to rather than substituting for their current budget and
reporting requirements.  Agency comments regarding the requirements
for performance budgeting were mixed, with some expressing concern
that requirements were too rigid and others stating that requirements
were very generally and broadly defined. 

Congressional efforts to enact performance budgeting requirements
were contained in two laws.  The first was the National Security Act
Amendments of 1949 (63 Stat.  578, August 10, 1949) which set
performance budgeting requirements for the newly created Department
of Defense (DOD) specifically patterned after the Hoover Commission's
recommendation for a performance budget.  That act added Title IV,
the "Promotion of Economy and Efficiency Through Establishment of
Uniform Budgetary and Fiscal Procedures and Organizations," to the
National Security Act of 1947 and statutorily mandated the
implementation of a performance budget similar in form to the
President's fiscal year 1951 budget.  New Section 403, "Performance
Budget," stated: 

     "The budget estimates of the Department of Defense shall be
     prepared, presented, and justified, where practicable .  .  . 
     so as to account for, and report, the cost of performance of
     readily identifiable functional programs and activities, with
     segregation of operating and capital programs.  .  .  ."

And, as far as practicable, the Defense budget estimates and
authorized programs were to be presented in a comparable form and
follow a uniform pattern.  The use of a performance budget was
expected to correct weaknesses in budget formulation and presentation
as well as to improve the administration and management of authorized
programs.  And, BOB expected that a uniform pattern of accounts would
allow comparisons across the services that were currently difficult
to obtain. 

A second law, the Budget and Accounting Procedures Act of 1950 (BAPA,
64 Stat.  832, September 12, 1950), ultimately provided a less
prescriptive definition of performance budgeting for governmentwide
application.  Early versions of this bill had contained detailed
definitions of performance budgeting very similar to that of Title IV
legislation.  However, during congressional deliberations, the
specific language for performance budgeting was removed from the
bill.  The conference report notes that the term performance budget
was considered surplusage--words in a statute which add nothing to
the force and legal effect of the statute--and might result in an
interpretation more restrictive than intended by the Congress.  BOB
also supported BAPA's less prescriptive language, arguing that (1)
the executive branch was already implementing performance budgeting
and (2) specific performance budgeting language would appear too
rigid and make it difficult to proceed with future budgeting
improvements.  Thus, the final enacted version of BAPA did not
contain the term "performance budgeting." Instead, the final language
stated in part: 

     "The Budget shall set forth in such form and detail as the
     President may determine-- (a) functions and activities of the
     Government;"

RESULTS

The Congress considered that the Hoover Commission recommendation for
performance budgeting was instituted on a governmentwide basis with
the passage of BAPA.  The second Hoover Commission, established on
July 10, 1953 (67 Stat.  184), noted that performance budgeting was
first used generally in the budget for fiscal year 1951.  Reflecting
on the implementation of performance budgeting, the second Commission
observed that many programs did not have adequate cost information
and suggested that budget activities and organization patterns be
made consistent, that accounts be established to reflect this pattern,
and that budget classifications, organization, and accounting
structures be synchronized. 

DOD performance budgeting efforts in the 1950's did work towards a
consistent presentation of budget accounts that led to the current
budget structure of DOD.  Comptrollers were established in DOD and
the Services with the aim of enhancing the development of adequate
budget preparation and review.  Each Service was required to develop
similar systems, which allowed for some general comparisons between
the services, and standard classifications of cost categories were
developed.\2

Although it did not specifically mention performance budgeting, BAPA
is generally credited with advancing several important changes to
federal budget practices.  The statute institutionalized efforts to
report sub-account level information to the Congress through the
obligations by activity sections, now termed program activities.\3 A
greater amount of performance information was placed into the
President's budget, primarily output-based workload and unit cost
information.  BAPA also required additional coordination between
agencies, created management devices such as working capital funds,
delineated responsibilities for budgeting and accounting between the
executive and legislative branches, and emphasized the need for a
close relationship between accounting, management, and programming
activities. 

Despite the successes cited, concerns remained that the budget did
not adequately link programs with their costs.  The report of the
second Hoover Commission summarized these concerns as follows: 

     "The installation of performance budgeting in the Federal
     agencies has met with varying degrees of success.  .  .  . 
     performance budgeting has encountered practical difficulties
     greater than originally contemplated and in some cases created
     congressional dissatisfaction with respect to program
     classification and accounting support."

In 1954, Arthur Smithies, a noted chronicler and analyst of the budget,
clarified this issue by distinguishing between a performance budget
and a program budget. 

     "Congressmen themselves are dissatisfied with the present form
     of the budget.  They feel they have lost something by the
     performance budget and have not gained much .  .  .  .  Unless
     the performance budget can evolve into a true program budget,
     the Congress may decide to revert to the old system and console
     itself with the fiction that it has no programmatic
     responsibilities .  .  .  .  While the preparation of a
     meaningful program budget is a task of immense difficulty, and
     may never be wholly successful, there can be little doubt that
     further progress in that direction is both feasible and
     desirable."


--------------------
\1 The executive branch acted on Hoover Commission recommendations to
change the President's budget prior to legislative enactment of the
Budget and Accounting Procedures Act of 1950.  In fact, the executive
branch argued that congressional action was not necessary, since the
budget presentation was already being changed. 

\2 While DOD budgets continue to reflect the performance budgeting
requirements developed, the term performance budget was repealed when
the National Security Act Amendments and other statutes were codified
in 1962.  The legislative history of this codification (P.L.  87-651,
September 7, 1962) notes that its passage was not intended to make
any substantive change to the law, but to bring up to date Title 10
of the U.S.  Code. 

\3 In fact, GPRA requires that annual performance plans cover each
program activity set forth in the budget of an agency. 


PLANNING-PROGRAMMING-BUDGETING-SYSTEM
(PPBS)
========================================================= Appendix III

CONTEXT

In January of 1965, President Johnson described the nation's economic
performance as "a creditable record of achievement." From 1961 to
1964 the economy had been growing in real terms at an average annual
rate of over 5 percent.  Average annual inflation was just over 1
percent during this period, while unemployment was roughly constant
at 5 percent.  There was some concern about annual federal deficits,
which in 1962 reached $7 billion, or 1.3 percent of GDP. 

A Planning-Programming-Budgeting-System (PPBS) was seen as a means of
building upon the Nation's economic strength by modernizing the
management tools used in the federal government.  Proponents of PPBS
believed that efficiencies and improvements in government operations
could be achieved through a common approach for (a) establishing
long-range planning objectives, (b) analyzing the costs and benefits of
alternative programs which would meet these objectives, and
(c) translating programs into budget and legislative proposals and
long-term projections.  President Johnson considered PPBS a technique
for controlling federal programs and budgets, rather than "having
them control us."

Furthermore, an earlier introduction of a PPBS-type system in DOD in
1961 was deemed a significant improvement over previous budget
practices.  Prior to PPBS, the DOD system was highly decentralized
and resource formulation and allocation processes across the services
were duplicative, inequitable, and limited to consideration of a
single budget year.  Initially termed a "program package-program
element" system, DOD's PPBS activities provided a means of evaluating
and deciding among major alternative methods of accomplishing
military missions.  Planning horizons were also extended with the
development of a 5-year defense plan. 

On August 25, 1965, President Johnson announced his intention to
introduce PPBS on a governmentwide basis, asserting that three major
objectives would be achieved: 

     "(1) It will help us find new ways to do jobs faster, to do jobs
     better, and to do jobs less expensively.  (2) It will insure a
     much sounder judgment through more accurate information,
     pinpointing those things that we ought to do more, spotlighting
     those things that we ought to do less.  (3) It will make our
     decision-making process as up-to-date, I think, as our
     space-exploring program."

IMPLEMENTATION APPROACHES

There were distinct differences between DOD approaches and the
subsequent governmentwide implementation of PPBS.  DOD implementation
involved several hundred analysts and over 10 years of
contractor-assisted development efforts.  DOD introduced three key
phases of activity for implementing PPBS:  (1) reviewing
requirements, (2) formulating and reviewing programs extending
several years into the future, and (3) developing annual budget
estimates.  The first two phases were continual, year-round efforts
that resulted in a 5-year program plan for the entire defense
establishment.  In phase three, the budget year requirements
established in the 5-year program plan were separated out into an
annual budget request. 

In contrast to this phased approach used at DOD, governmentwide
implementation of PPBS was expected to be accomplished in less than 6
months.  On October 12, 1965, less than 2 months after the formal
announcement of PPBS, the Bureau of the Budget (BOB) issued Bulletin
66-3 which provided agency guidance and instructions for implementing
PPBS.  Overall, 22 executive departments and establishments were
required to implement PPBS, and 17 smaller agencies were encouraged to
do so. 
Bulletin 66-3 gave agencies 10 days to designate an official
responsible for their PPBS system and to report their choice to BOB. 
Within the next 20 days, agencies were to make tentative decisions on
their broad program categories.  Agency instructions, procedures, or
regulations regarding PPBS implementation were to be forwarded to BOB
within the next 2 months.  A final Program Structure, approved by the
director of the agency, was expected by February 1, 1966. 

Program Structures were the basic foundation of the PPBS system,
designed to provide a coherent statement of a national need, an
agency's authority to fill that need, and the activities planned to
meet that need.  BOB expected agencies to categorize all operations
and activities in output oriented terms reflecting each agency's
objectives.  Three subdivisions of activities were available within
the Program Structure:  (1) program categories, defined as activities
with similar broad missions, (2) program subcategories, defined as
subdivisions of narrower objectives, and (3) program elements,
defined as the specific products (e.g., goods and services)
contributing to agency objectives.  For example, in an education
Program Structure, a program category might be secondary
education; subcategories might include college preparatory and
vocational activities; and program elements might include facilities,
books, and teachers. 

Three documents were expected to provide data on Program Structures: 
Program and Financial Plans (PFP), Program Memoranda (PM), and Special
Studies.  The PFP were similar to the DOD 5-year plan, containing
multiyear descriptions of program objectives and accomplishments in
quantitative nonfinancial terms related to the universe of need.  PM
were expected to describe agency program categories, summarize PFP
data, and delineate recommended programs.  Agencies were to
illustrate how they would achieve national needs, showing costs and
effectiveness of alternative objectives, program types, and levels of
operation.  Furthermore, PM were to include any assumptions and
uncertainties regarding the costs and criteria used to support agency
recommendations and estimates.  Special Studies were expected to vary
greatly in scope and were carried out in response to agency top
management or BOB inquiries, or at the initiative of analytic staff. 

Contrary to expected time frames, PPBS implementation proceeded
slowly--even after several years of effort.  In November of 1966,
President Johnson issued a memorandum to Cabinet members and agency
heads stating that too many agencies had been slow in establishing
PPBS and that PPBS had not been used to make top management
decisions.  The President urged personal participation of agency
heads and instructed the Director of BOB to review and report on
agency progress in implementing PPBS.  Nevertheless, fully 2 years
into implementation, agency directors and former BOB officials
testified that implementation was proceeding more slowly than hoped. 
Some agencies characterized their efforts as in the beginning stages
or as requiring several more years before achieving notable results. 
Others reported that new information systems had to be developed or
devised in order to track data on a program or mission basis. 

As originally designed, PPBS information systems were not expected to
correlate to the President's budget submission to the Congress. 
Instead, agency operating budgets--used to allocate resources and
control day-to-day operations--were expected to conform gradually
with PFP.  Hence, BOB did not expect changes to the President's
budget or to the internal submission of annual budget requests to
BOB.  Bulletin No.  66-3, the first guidance on implementing PPBS,
specifically noted: 

     "The introduction of the Planning, Programming, and Budgeting
     system will not, by itself, require any changes in the form in
     which budget appropriation requests are sent to Congress. 
     Further, this Bulletin is not to be interpreted to set forth
     changes in the format of annual budget submissions to the Budget
     Bureau."

However, to affect resource allocation decisions made within the
executive branch, PPBS reports were timed to occur with the BOB
budget preparation schedule.  PM and Special Studies were expected to
be used during the BOB spring review of the budget, when agencies and
BOB would develop initial estimates of budgetary need; PFP were
expected to be used during the fall as agencies developed annual
budget requests for BOB.  The result was two tracks of budget
information:  one which addressed the new PPBS requirements and one
which addressed the existing BOB requirements for submitting the
President's budget to the Congress. 

This separation between Program Structures and the President's budget
created an implementation burden that later BOB bulletins tried to
address, primarily by devising a more concrete link between PPBS and
the budget.  In July 1967, a second BOB bulletin (No.  68-2) directed
agencies to provide a crosswalk--or a reconciliation--between their
PPBS and appropriations structures.  The crosswalk was to be
sufficient to ensure that the budget submission was consistent with
the intent of the program decisions.  In 1968, the Congress requested
and received an accompanying commentary to BOB's third bulletin (No. 
68-9); the commentary noted that the then-current "two-track system"
of program and appropriation structures was confusing and causing an
undue burden.  Agencies were asked to consider changing their PPBS
program structures so as to avoid crosswalks and integrate PPBS and
appropriations structures. 

Subsequent BOB guidances made procedural changes to the PPBS system,
primarily limiting the scope and magnitude of reporting requirements
for agencies and increasing staff hiring and training.  Although
originally allowed to include unlimited program proposals without
regard to agency budget levels, PFP became limited
to budgeted activities.  Noting that many PM lacked analysis of major
alternatives, policy decisions, or strategies directed towards
specific outputs, BOB dramatically reduced its requests for major
policy issues presented in PM documents.  Further, BOB provided
agencies extra preparation time for PM, and pledged assistance with
the analysis and review of major policy issues.  Lastly, during the
first 2 years of implementation, almost 900 PPBS-specific positions
were created, of which almost 400 were filled through new hires. 
Four years into implementation, over 4,500 staff had attended PPBS
training sessions. 

RESULTS

While DOD continues to use PPBS procedures today, the governmentwide
initiative begun with such great promise in 1966 was formally
discontinued in 1971 with remarkably little comment.  Some observers
and participants faulted the implementation process, contrasting
DOD's 10 years of preparation with a significantly shorter
governmentwide implementation period.  A former agency official
charged that PPBS was implemented indiscriminately, with agencies
lacking the capability to perform PPBS activities, and BOB lacking
the competence to guide them.  Others said PPBS failed to garner the
support it needed because it affected the balance of power
between the executive and legislative branches. 

PPBS participants and observers cited many problems developing
measures and analysis techniques, as well as incorporating results
into decision-making practices.  Congressional hearings reviewed
executive approaches to estimating, measuring, and valuing benefits,
ultimately recommending the use of standard interest rates and
discount policies.  A GAO report cited several obstacles to relating
output measures to program benefits; for example, the report noted
that the increased use of grants meant that program outputs could not
be obtained due to a "rather loose and intermittent" federal control
over grantees' program performance.\1 Some members of Congress
questioned the broader purposes and accomplishments of PPBS as a
decision-making tool, particularly in light of the impact of
assumptions on analysis results; they further noted that their lack
of access to PPBS documents placed them at a disadvantage in
considering resource allocation questions.  Some agencies cautioned
that PPBS analysis could not substitute for inherently political
decisions such as the allocation of resources among different
priorities (e.g., health v.  education); others asserted that
decisions for certain federal functions--such as foreign
affairs--could not be relegated to systems analysis.  Other observers
found PPBS unrealistic because it attempted to improve
decision-making without recognizing the differing goals and interests
of the decisionmakers. 

Over 3 years into PPBS implementation, the Joint Economic Committee
of the Congress published a compendium of papers on the analysis and
evaluation of public expenditures in PPBS.\2 In this compendium, an
Assistant Director for Program Evaluation at BOB noted that
expectations for PPBS needed to be constrained by certain realities
of the federal environment, namely

  -- Governments operate with limited resources, and the demand for
     these resources always exceeds the available supply. 

  -- Past resource commitments place heavy constraints on current
     budgets, providing limited control over resource allocation. 

  -- Workable program measurement techniques are difficult to
     achieve, particularly given the complexity and size of the
     federal government. 

  -- Implementation of new ideas can be slowed by the size of
     government, the inherent uncertainties of its tasks, and the
     high degree of coordination needed. 

  -- Often there are political and moral claims made on the federal
     government which do not necessarily reflect an interest in cost
     effectiveness or efficiency. 

  -- The resource allocation process in government is not well linked
     to planning, as these activities serve different needs and
     respond to different time frames. 

  -- Once a budget is established, there is minimal accountability
     for performance.\3

Although it failed as a governmentwide performance budgeting
initiative, PPBS is credited with instituting improvements in federal
program management.  PPBS allowed agencies to reappraise their
missions and functions; accumulate better information on inputs,
outputs, and their relationship to objectives; and increase top
officials' interest in planning, budgeting, and performance. 
Furthermore, decisionmakers increased the use of systems analysis,
recognizing its value as a means of better understanding outputs,
benefits, and costs.  Finally, PPBS left a long-standing legacy of
increases in the amount and quality of program evaluation in the
federal government. 

Despite the immense implementation difficulties--a truncated
start-up, significant increases in paperwork, problems measuring
program benefits and costs, and complex crosswalks to link program
and budget structures--few individuals argued against the goals of
PPBS.  Some argued for its continuation, asserting that the goals and
purposes of PPBS were critical to improving government operations. 
At a congressional hearing in 1970, one former HEW official
summarized this view in the following manner. 

     ".  .  .  Rekindle the spluttering flame of PPB[S] .  .  .  . 
     In my judgment PPB[S] is absolutely right in concept.  It
     requires more sustained support from the Congress, the White
     House, and the BOB.  It requires patience.  Its message and
     value is care in considering what the Government has done and
     might do.  New initials will be needed but the job must be
     done."\4


--------------------
\1 Survey of Progress in Implementing the
Planning-Programming-Budgeting System in Executive Agencies
(B-115398, July 29, 1969). 

\2 The Analysis and Evaluation of Public Expenditures:  the PPB
system, a compendium of papers submitted to the Subcommittee on
Economy in Government of the Joint Economic Committee, Congress of
the United States, U.S.  Government Printing Office, Washington: 
1969. 

\3 "The Status and Next Steps for Planning, Programming, and
Budgeting," by Jack W.  Carlson, Assistant Director for Program
Evaluation, BOB. 

\4 Statement of William Gorham, formerly Assistant Secretary for
Program Coordination at HEW, in a hearing before the Subcommittee on
Economy in Government of the Joint Economic Committee, Congress of
the United States, 91st Congress, Second Session, June 2, 1970. 


MANAGEMENT BY OBJECTIVES (MBO)
========================================================== Appendix IV

CONTEXT

During the 1960's, a bipartisan consensus developed that federal
management needed improvement.  A study requested by President
Johnson in 1966 and carried out by the Heineman Task Force criticized
the federal government's management of the new Great Society
programs.  The Task Force recommended strengthening the management
responsibilities of the then-Bureau of the Budget (BOB).  In 1970
President Nixon proposed changing BOB into a new Office of Management
and Budget (OMB), with the new agency expected to give greater
attention to federal management issues. 

To gain greater administrative control over major executive branch
departments and agencies, President Nixon proposed a new
governmentwide initiative:  Management by Objectives (MBO).  MBO was
a popular management technique used in the private sector and had
also been implemented at the Department of Health, Education, and
Welfare during the President's first term.  MBO was intended to
centralize goal-setting decisions while at the same time allowing
managers to choose how to achieve the goals.  It focused on tracking
progress toward goals previously agreed upon between a supervisor and
subordinate. 

IMPLEMENTATION APPROACHES

President Nixon formally initiated MBO in an April 18, 1973,
memorandum to 21 agencies, which included the 11 cabinet departments
and constituted about 95 percent of the budget and federal employees. 
President Nixon stated:  "I am now asking each department and agency
head to seek a sharper focus on the results which the various
activities under his or her direction are aimed at achieving.  .  . 
.  This conscious emphasis on setting goals and then achieving
results will substantially enhance federal program performance." A
follow-up memo to the MBO department heads from the Director of OMB
further explained that the new initiative aimed at better
communication, faster identification of problems, and greater
accountability of managers to supervisors.  Ultimately, the OMB
Director stated, MBO would lay the groundwork for the President to
decentralize more responsibility to the agencies. 

In his April 1973 letter, the President asked each agency to propose
the 10 or 15 most important objectives--referred to as "presidential
objectives"--to be accomplished in the coming year; the goal was to
identify 100 presidential objectives.  Different agencies were given
different deadlines, varying between 2 and 8 weeks, to submit
proposals.  Subsequently, agencies were told that their search for
objectives need not be limited to their proposals to the President. 
Agencies were encouraged to identify additional objectives, to track
progress towards achieving them, and to use MBO in all aspects of
their operations. 

OMB was to play a key role in implementing MBO.  As part of MBO
implementation, a new position within OMB was created:  the
"management associate." Thirty management associates with varying
backgrounds, some with government experience and some without, were
hired.  Their responsibilities would include providing day-to-day
assistance to the departments in preparing objectives, tracking
progress, working closely with OMB budget examiners, and providing
technical assistance to agency staff and OMB top management to help
implement the initiative.  In addition, staff were specially selected
to implement MBO at the agencies and were generally located between
the Office of the Secretary and program managers. 

OMB statements emphasized that the initiative was to be conducted
with a minimum of paperwork.  Face-to-face meetings were to be held
roughly every 2 months between top OMB and agency staff.  The
meetings were to focus on agency progress in achieving objectives,
problems requiring top management attention, and any changes to
objectives.  Some existing OMB requirements were eliminated as a way
of encouraging agency acceptance of the new initiative. 

OMB gave agencies some guidelines on their proposals for presidential
objectives.  In proposing presidential objectives, agencies were to
consider the importance to the President's agenda, measurability, and
the ability to achieve the objective without additional resources and
within 1 year.  Agencies were to identify objectives on their
own--that is, without intervention by OMB--and were asked to develop
action plans with specific milestones for accomplishing objectives. 
All objectives were to be linked to the organizational units that
would be held accountable for achieving them.  If circumstances
warranted, objectives could be changed during the year.  OMB would
review agencies' proposed presidential objectives as well as track
progress toward achieving them.  In its first year, no explicit
connection of MBO to the budget process was attempted. 

MBO fell far short of expectations during its first year.  Although
20 of the 21 MBO agencies had identified presidential objectives and
18 had progress tracking systems in place by the end of the first
year, many other important implementation steps were not achieved. 
For example, management conferences were held, although not as often
as originally planned, with 4 to 6 months passing between conferences
for some agencies.  Despite OMB's intention to address this problem,
scheduled meetings continued to be canceled frequently.  And, as MBO
reviews were increasingly done at the staff level, rather than at the
OMB and agency head level, MBO paperwork increased.  At OMB, tensions
initially developed between the new management associates and OMB's
budget examiners; these tensions eased to some extent as the management
associates found that monitoring agency objectives was not a
full-time task, especially given the associates' lack of control over
agency actions.  Increasingly the management associates became
involved in non-MBO tasks such as doing special studies.  Most
importantly, presidential involvement in MBO also faltered during
1974, affecting agency implementation and acceptance of MBO. 

In the second year of MBO, an attempt was made to re-emphasize MBO by
linking objectives with agency budget submissions.  In a February
1974 meeting, OMB informed agency heads that their 1976 budget
requests were to be based on their presidential and agency
(secretarial) objectives.  OMB hoped that this would increase the
permanence of MBO and encourage more explicit statements of the
purposes for which money was to be spent.  In June 1974, OMB asked
the 21 MBO agencies to identify selected objectives in the letters
transmitting their budget requests to OMB; these objectives were to
be discussed in depth in the budget justifications.  Agencies were
told to "be prepared to provide" outlay estimates and "preliminary"
schedules of milestones upon request, but were not required to
include action plans.  In August 1974, President Nixon resigned and,
shortly after taking office, President Ford endorsed agencies'
proposed 1975 presidential objectives.  These were the last
presidential objectives requested under MBO. 

RESULTS

Although certainly affected by President Nixon's resignation, the MBO
initiative suffered from its initial separation from existing budget
formulation processes and from problems in identifying and measuring
objectives.  Efforts in the second year to tie the MBO initiative to
the budget's priority-setting processes were quickly overwhelmed by
the initiative's early demise.  The President's request that agencies focus on
results and express those results in measurable terms did not make
the practice of performance measurement any easier.  For various
reasons agencies found this difficult to do.  Not surprisingly, as
initially submitted to OMB, agencies' objectives were often vaguely
worded (e.g., "the abolition of crime in society" or "to make the
U.S.  Merchant Marine the most competitive in the world") and not
easily measurable.  In addition, agency objectives often dealt with
matters not achievable within a single year (such as finding a cure
for cancer) or were beyond the control of agency managers (such as
improving water quality), making accountability problematic. 

Despite these issues and its brief life as a formal initiative,
proponents believe that MBO had positive results in both the short
and long term.  For the administration that proposed it, the MBO
initiative enhanced its ability to explain the President's agenda to
the public--for example, the emphasis on transferring more federal
power to cities and states.  Some OMB staff and agency officials
found MBO valuable as an internal agency management process, helping
to clarify goals and associated activities.  To some extent, the
basic concepts of MBO--negotiating goals and holding subordinates
accountable for achieving them--have survived in federal management
practices.  In addition, the potential of MBO as a tool for
articulating presidential agendas and linking them with the budget
was later confirmed by a similar initiative under President Bush;
this initiative included publishing presidentially approved
objectives, the resources needed to achieve them, and relevant
accomplishments in the President's Budget.  And issues raised during
MBO concerning the difficulties inherent in identifying and measuring
federal outcomes would remain for later initiatives to address. 


ZERO-BASE BUDGETING (ZBB)
=========================================================== Appendix V

CONTEXT

In the mid-1970's, the annual deficit was a matter of public debate. 
By 1977 the annual deficit had been above $50 billion for 2 years,
reaching a post-World War II high of $73.8 billion for fiscal 1976. 
A general sense existed that federal spending was out of control,
with much of it no longer subject to annual appropriations but driven
by permanent entitlement programs and multiyear budgetary authority. 

During 1976, the Congress and candidate Jimmy Carter responded to
the new budget situation.  Beginning in the spring, the Congress held
hearings on proposals for so-called "sunset" legislation that would
have required periodic zero-base reviews of all federal programs by
their congressional authorizing committees.  Sunset proposals,
however, did not become law.  While campaigning for the presidency,
Jimmy Carter promised to balance the budget within his first term and
to reform the federal budgeting system, which he characterized as
"inefficient, chaotic, and virtually uncontrollable by either the
President or the Congress." To these ends he had promised to
introduce zero-base budgeting (ZBB), which he had used as Governor of
Georgia and which also had been discussed in sunset hearings.  In
fall 1976, congressional appropriations committees asked selected
independent agencies to pilot test the applicability of ZBB concepts
to legislative decision-making. 

Used in private industry as well as in some state and local
governments, ZBB in theory required expenditure proposals to compete
for funding on an equal--starting from "zero"--basis.  ZBB calls for a
detailed identification and evaluation of all activities, together
with the alternatives and spending necessary to achieve desired plans
and goals.  Where federal budgeting in recent years had made
incremental changes to an accepted base of past spending, ZBB in
contrast sought to look below the base, evaluating the efficiency and
effectiveness of current operations and comparing the needs of one
program against the needs of other programs that might be of higher
priority.  ZBB also looked to a greater involvement of program
managers in budgeting as a way to identify new efficiencies and to
incorporate better analysis into budget decision-making. 

IMPLEMENTATION APPROACHES

On February 14, 1977, shortly after his inauguration, President
Carter issued a memorandum to the heads of executive departments and
agencies mandating use of zero-base budgeting for all fiscal year
1979 agency budget requests.  The memorandum mandated that a new ZBB
budget process would replace--not simply accompany or link to--the
existing executive branch budget formulation process for all budget
proposals in the immediately upcoming budget cycle.  Consistent with
ZBB theory's emphasis on a close link between planning and budgeting,
federal planning and budgeting under ZBB were to be done
at the same time, in a single process.  In contrast to its
implementation of PPBS and MBO, OMB did not create a special staff
for ZBB.  Federal managers and budgeteers were expected to
implement the new initiative.  ZBB would not affect budget materials
provided to congressional appropriations or authorizing committees,
nor would it change the form of the President's Budget. 

Formal implementation steps were taken within 2 months of the
memorandum.  On March 21, 1977, OMB sent agencies draft ZBB
guidelines for comment, issuing final guidance on April 19 as
Bulletin 77-9.  In effect, agencies were given a lead time of about 6
months before final budget submissions were due to OMB.  Agencies
were to set up their own ZBB systems using the steps outlined in the
Bulletin as a framework.  Among other new requirements, agencies were
asked to identify the "decision units" for which budget requests
would be made.  A decision unit was to be

  -- "at an organizational or program level at which the manager
     makes major decisions on the amount of spending and the scope,
     direction, or quality of work to be performed."

  -- "not so low in the structure as to result in excessive paperwork
     and review .  .  .  [nor] so high as to mask important
     considerations and prevent meaningful review of the work being
     performed."

  -- "normally .  .  .  included within a single account, be
     classified in only one budget subfunction, and to the extent
     possible, reflect existing program and organizational structures
     that have accounting support."

In all cases, the guidance stated, the identification of the decision
units was to be determined by the information needs of top
management.  Budget requests for each decision unit were to be
prepared by their managers, who would (1) identify alternative
approaches to achieving the unit's objectives, (2) identify several
alternative funding levels, including a "minimum" level normally
below current funding, (3) prepare "decision packages" according to a
prescribed format for each unit, including budget and performance
information, and (4) rank the decision packages against each other in
a series of steps, beginning with program managers and proceeding up
the hierarchy.  The results of the ZBB process would be agency budget
justifications and rankings, with the latter required to be submitted
to OMB but not to the Congress.  With OMB's approval, agencies could
consolidate decision units as a means to minimize paperwork and the
review burden on top management. 
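
The decision-package mechanics described above can be pictured with a
short sketch.  The following Python fragment is an illustration only:
ZBB was a paper process, not software, and the decision unit, funding
levels, and priority rankings shown are hypothetical rather than
drawn from any agency's actual submission.

  # Illustrative sketch of ZBB decision units and packages.
  # All names, dollar amounts, and priorities are hypothetical.
  from dataclasses import dataclass, field

  @dataclass
  class DecisionPackage:
      unit: str        # decision unit the package belongs to
      level: str       # "minimum", "current", or "enhanced"
      funding: float   # dollars requested at this level
      priority: int    # rank assigned by the reviewing manager

  @dataclass
  class DecisionUnit:
      name: str
      objective: str
      packages: list = field(default_factory=list)

  def consolidated_ranking(units):
      """Merge every unit's packages into one agencywide ranking,
      highest priority first, mirroring the consolidated ranking
      submitted to OMB under Bulletin 77-9."""
      all_packages = [p for u in units for p in u.packages]
      return sorted(all_packages, key=lambda p: p.priority)

  # Hypothetical example: one unit with three alternative funding levels.
  grants = DecisionUnit("Grants Administration",
                        "process grant awards within 60 days")
  grants.packages = [
      DecisionPackage("Grants Administration", "minimum", 8.0e6, priority=1),
      DecisionPackage("Grants Administration", "current", 10.0e6, priority=3),
      DecisionPackage("Grants Administration", "enhanced", 11.5e6, priority=7),
  ]
  for pkg in consolidated_ranking([grants]):
      print(pkg.level, pkg.funding, pkg.priority)

In practice, these rankings were prepared and reviewed on paper at
successive levels of the agency hierarchy before submission to OMB. 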

The guidance also required agencies to set objectives and performance
indicators at the beginning of their ZBB process.  Top and program
managers were to set objectives as "explicit statements of intended
output, clearly related to the basic need for which the program or
organization exists." Concurrently they were to identify the key
indicators to be used in measuring performance and results.  These
should be "measures of effectiveness, efficiency, and workload for
each decision unit.  These measures can often be obtained from
existing evaluation and workload measurement systems." Indirect or
proxy indicators could be used if these systems did not exist or were
under development.  A "lack of precise identification and
quantification of such objectives," however, would "not preclude the
development and implementation of zero-base budgeting procedures."

Despite considerable variation in how agencies implemented ZBB, some
patterns can be discerned.  Some agencies tended to associate their
decision units with their account structure or, within their account
structure, with their program activities.  Some agencies did not
identify minimums below current funding, and many identified minimums
as an arbitrary percentage of current funding, generally between 75
and 90 percent.  Agencies also made use of the option to consolidate
decision units and often set initial decision units at high
organizational levels (e.g., the division level or higher).  Lastly,
one study of several agencies found that fewer than half the decision
packages examined had quantifiable accomplishments, workload, or unit
cost information. 

The next year, on May 5, 1978, OMB issued Circular A-115, which
revised some aspects of the ZBB process.  Addressing problems with
objectives and performance information, OMB now urged agencies to use
the results of their performance evaluations in analyzing alternative
methods of accomplishing objectives and in analyzing anticipated
accomplishments identified with each level of performance.  The
circular also strengthened language dealing with the
objective-setting requirement.  The guidance on selecting decision
units, preparing rankings, and consolidation was clarified.  A
requirement to train staff before they participated in the ZBB
process was also added.  In other respects, however, ZBB requirements
were unchanged.  For example, no provision was made for a separate
planning phase, and the requirement to prepare decision packages for
all budget requests, including those for mandatory programs,
remained. 

The budget that resulted from agencies' and OMB's first year of ZBB
efforts disappointed some observers.  Few significant budgetary
actions were identified as resulting from ZBB, and some questioned
the utility of the many hours spent by program managers, budgeteers,
and top managers on ZBB.  In the following year, agency budget
justifications to OMB continued to be prepared using ZBB, but agency
budget justifications to the Congress continued to be prepared as in
the past, largely without reference to agencies' ZBB information. 

As the Carter presidency proceeded, less and less attention was
devoted in the Budget Message to the role and claimed achievements of
ZBB.  On August 7, 1981, in the first year of the succeeding
administration, OMB rescinded Circular A-115, which had required agencies to
have ZBB systems.  Some ZBB requirements, however, survived beyond
the formal life of the initiative.  Requirements for agencies to
identify "decision units" and prepare consolidated rankings remained
until May 1986.  A requirement to identify three funding levels
lasted even longer, remaining until 1994, as did an OMB option to
request that the agency present a "consolidated" ranking of "program
elements and related funding levels."

RESULTS

In one sense, ZBB was successfully implemented:  all agencies
submitted the required paperwork on time.  By the end of ZBB's first
budget year, agencies had prepared about 25,000 internal decision
packages and submitted about 10,000 of these to OMB.  But in
essential ways federal ZBB had not been an exercise in zero-basing a
budget.  The widespread use of arbitrarily chosen percentages to
identify alternative funding levels, rather than analysis based on
program knowledge and performance information, precluded genuine
zero-basing, as did consolidation and selection of initial decision
units at high levels in the organization. 

From the beginning, paperwork burden for federal managers constituted
a significant implementation problem.  One study estimated that
paperwork increased, on average, 229 percent in ZBB's first year.  In
addition to the ZBB packages, agencies had to prepare separate budget
materials, often using different categories, for OMB, appropriations,
and authorizing committees.  Preparing crosswalks between these added
to agency burden. 

Agencies believed that inadequate time had been allowed to implement
the new initiative.  The requirement to compress planning and
budgeting functions within the timeframes of the budget cycle had
proven especially difficult, affecting program managers' ability to
identify alternative approaches to accomplishing agency objectives. 
Some agency officials also believed that the performance information
needed for ZBB analysis was lacking.  Available information concerned
processes and activities, not how well these processes and activities
performed.  Agencies also questioned the need to prepare and rank
decision packages for programs whose spending levels were outside
their control.  For example, the Department of Health, Education, and
Welfare did not identify minimum levels for social security and other
programs where it believed spending was uncontrollable; Treasury
stated it saw little use in ranking decision packages for interest on
the debt since the interest would have to be paid in any case. 
Paperwork, other burdens, and technical difficulties were compounded
by agency perceptions that OMB had not used the results of agencies'
ZBB efforts in its budget decision-making. 

In Congress, the results of the congressionally requested ZBB pilots,
made public in June 1977, cast doubt on ZBB's suitability as a
potential tool for congressional decision-making.  One major thrust
of the pilots had been to see whether ZBB rankings--comparing
priorities of "decision packages" against one another--could be used
by appropriators to identify the impact of budget cuts.  The results
of the pilots were not encouraging.  In one pilot, the agency had
failed to set minimums below current funding for over one-third of
its decision units and refused to rank its decision packages because
the process-oriented program activity structure of the agency's
budget was too interdependent to permit meaningful ranking.  The lack
of cost accounting information needed to identify alternative funding
levels was also cited as a technical problem.  Finally, the level of
burden and paperwork was a problem for both agencies and
appropriators.  In one typical case, 362 pages were needed for an
agency's ZBB-based budget justification versus 72 pages for its
non-ZBB justification. 

The results of the congressional pilots were largely consistent with
later agency experiences.  No mechanism existed, however, to
incorporate lessons learned from the congressional pilots into
executive branch ZBB implementation.  By the time OMB sent agencies a
survey on October 11, 1977, seeking their views on implementation
problems and proposed solutions, the gaps between ZBB's initial
promise and its first year results were becoming apparent. 

Despite implementation problems and the relatively short period
during which all of its elements were required, federal ZBB has been credited
with some positive results.  Some participants in the budget process
as well as other observers attributed certain program efficiencies,
arising from the consideration of alternatives, to ZBB. 
Interestingly, ZBB established within federal budgeting a requirement
to present alternative levels of funding linked to alternative
results--a requirement that lasted until 1994. 


OVERVIEW OF THE GOVERNMENT
PERFORMANCE AND RESULTS ACT
========================================================== Appendix VI

GPRA seeks to promote greater public confidence in the institutions
of government through better reporting and accounting for the
outcomes of federal programs.  As stated in the act, the goals of
GPRA are to

     "(1) improve the confidence of the American people in the
     capability of the Federal Government, by systematically holding
     Federal agencies accountable for achieving program results;

     (2) initiate program performance reform with a series of pilot
     projects in setting program goals, measuring program performance
     against those goals, and reporting publicly on their progress;

     (3) improve Federal program effectiveness and public
     accountability by promoting a new focus on results, service
     quality, and customer satisfaction;

     (4) help Federal managers improve service delivery, by requiring
     that they plan for meeting program objectives and by providing
     them with information about program results and service quality;

     (5) improve congressional decisionmaking by providing more
     objective information on achieving statutory objectives, and on
     the relative effectiveness and efficiency of Federal programs
     and spending; and

     (6) improve internal management of the Federal Government."\1

To carry out these broad purposes, GPRA establishes a system of
interrelated plans and reports that provides the basis for linking
federal resources and results, with requirements and new concepts
piloted before governmentwide application. 

STRATEGIC PLANS

GPRA requires each agency to develop strategic plans covering a
period of at least 5 years.  Agencies' strategic plans must include
the agency's mission statement; identify long-term general goals,
including outcome-related goals and objectives; and describe how the
agency intends to achieve these goals through its activities and
through its human, capital, information, and other resources.  Under
GPRA, agency strategic plans are the starting point for agencies to
set annual program goals and to measure program performance in
achieving those goals.  To this end, strategic plans are to include a
description of how long-term general goals will be related to annual
performance goals as well as a description of the program evaluations
used in establishing goals.  As part of the strategic planning
process, agencies are required to consult with the Congress as well
as solicit the views of other stakeholders.  Agencies' first
strategic plans are to be submitted to the Director of OMB and the
Congress by the end of fiscal year 1997.  Strategic plans must be
updated at least every 3 years. 

ANNUAL PERFORMANCE PLANS

GPRA also requires each agency to prepare an annual performance plan
that includes the performance indicators that will be used to measure
"the relevant outputs, service levels, and outcomes of each program
activity" in an agency's budget.  The annual performance plan is to
provide the direct link between strategic goals outlined in the
agency's strategic plan and what managers and employees do
day-to-day.  When an agency believes it is not possible to express a
measurable goal for a program activity, the agency may seek OMB's
authorization to use a nonquantifiable goal.  In addition, GPRA
allows agencies to aggregate, disaggregate, or consolidate program
activities for purposes of performance planning.  These plans are
also to be used by OMB to develop an overall federal performance plan
for the federal budget, which is to be submitted each year to the
Congress with the President's budget.  The first annual performance
plans are to be submitted to OMB in the fall of 1997, with the first
overall federal performance plan due for fiscal year 1999. 
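
Because the annual performance plan is keyed to the "program
activities" in an agency's budget, its content can be pictured as a
set of goals, each pairing a program activity with an indicator and a
target.  The sketch below illustrates that structure only; the
activity names, indicators, and targets are hypothetical, and GPRA
itself prescribes documents, not software.

  # Illustrative sketch of an annual performance plan's structure.
  # Program activities, indicators, and targets are hypothetical.
  from dataclasses import dataclass

  @dataclass
  class PerformanceGoal:
      program_activity: str   # budget "program activity" the goal covers
      indicator: str          # output, service-level, or outcome measure
      target: float           # annual target set in the performance plan

  # A hypothetical annual performance plan: goals keyed to the program
  # activities listed in the agency's budget submission.
  fy1999_plan = [
      PerformanceGoal("Inspection and Enforcement",
                      "facilities inspected", 4500),
      PerformanceGoal("Inspection and Enforcement",
                      "percent of violations corrected", 85.0),
      PerformanceGoal("Research and Analysis", "studies published", 12),
  ]
  for goal in fy1999_plan:
      print(goal.program_activity, "--", goal.indicator, ":", goal.target)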

ANNUAL PERFORMANCE REPORTS

Ultimately, GPRA will require that each agency prepare an annual
report on program performance for the previous fiscal year.  In each
report, agencies are to review and discuss performance compared with
the performance goals established in annual performance plans.  When
a goal is not met, agencies are to explain the reasons the goal was
not met, the plans and schedules for meeting the goal, and, if the
goal was impractical or infeasible, the reasons for that and the
actions recommended.  Actions needed to accomplish a goal could include
legislative, regulatory, or other actions or, when the agency found a
goal to be impractical or infeasible, a discussion of whether the
goal ought to be modified.  The report is also to include the summary
findings of program evaluations completed during the fiscal year
covered by the report.  Agencies' first performance reports for
fiscal year 1999 are due to the President and the Congress no later
than March 31, 2000.\2
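
The comparison GPRA requires in the annual report--actual performance
measured against each goal in the annual performance plan, with an
explanation whenever a goal is not met--can likewise be sketched in a
few lines.  Again, this is an illustration only; the indicator,
figures, and explanation are hypothetical.

  # Illustrative sketch of an annual performance report entry.
  def report_entry(indicator, target, actual, explanation=None):
      """Compare actual results with the goal set in the annual
      performance plan (assumes a higher value is better, a
      simplification for illustration)."""
      met = actual >= target
      entry = {"indicator": indicator, "target": target,
               "actual": actual, "met": met}
      if not met:
          # GPRA requires reasons, plans, and schedules when a goal is missed.
          entry["explanation"] = explanation or "explanation required"
      return entry

  # Hypothetical fiscal year 1999 result for one indicator.
  print(report_entry("facilities inspected", target=4500, actual=4210,
                     explanation="hiring freeze reduced inspector staff"))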

MANAGERIAL FLEXIBILITY

In crafting GPRA, the Congress also recognized that managerial
accountability for results is linked to managers having sufficient
flexibility, discretion, and authority to accomplish desired results. 
GPRA authorizes agencies to apply for managerial flexibility waivers
in their annual performance plans beginning with fiscal year 1999. 
The authority of agencies to request waivers of administrative
procedural requirements and controls is intended to provide federal
managers with more flexibility to structure agency systems to better
support program goals.  The nonstatutory requirements that OMB can
waive under GPRA generally involve the allocation and use of
resources, such as restrictions on shifting funds among items within
a budget account.  Agencies must report in their annual performance
reports on the use and effectiveness of any GPRA managerial
flexibility waivers that they receive. 

IMPLEMENTATION APPROACH: 
PHASING-IN AND PILOTING OF
REQUIREMENTS

GPRA calls for phased implementation, as described above, beginning
with selected pilot projects in performance goals and managerial
flexibility in fiscal years 1994 through 1996.  These pilots are
expected to develop experience with GPRA processes and concepts
before implementation begins governmentwide in 1997.  As of March
1997, 68 pilot projects for performance planning and performance
reporting were under way in 28 agencies.  OMB also is required to
select at least five agencies from among the initial pilot agencies
to pilot managerial accountability and flexibility for fiscal years
1995 and 1996; however, OMB did not do so.  GAO is required to report
on governmentwide readiness for implementation by June 1, 1997; OMB
is required to report on the costs, benefits, and usefulness of the
performance planning and measurement pilots by May 1, 1997,
identifying any recommended changes in GPRA requirements. 

GPRA also requires OMB to select at least five agencies, at least
three of which have had experience developing performance plans
during the initial GPRA pilot phase, to test performance budgeting
for fiscal years 1998 and 1999.  Performance budgets to be prepared
by the pilot agencies are intended to provide the Congress with
information on the direct relationship between proposed program
spending and expected program results and the anticipated effects of
varying spending levels on results.  OMB is required to report on
these pilots by March 31, 2001.  OMB's report is to assess the
feasibility of performance budgeting, recommend whether legislation
requiring performance budgets should be proposed, and identify any
other recommended changes to GPRA requirements. 

--------------------
\1 P.L.  103-62, sec.  2. 

\2 For fiscal years 2000 and 2001, agencies' reports are to include
performance data beginning with fiscal year 1999.  For each
subsequent year, agencies are to include performance data for the
year covered by the report and 3 prior years. 


*** End of document. ***