Military Readiness: DOD Needs to Develop a More Comprehensive Measurement
System (Letter Report, 10/27/94, GAO/NSIAD-95-29).
The Defense Department's (DOD) definition and indicators for measuring
readiness provide valuable information, but this information is limited
and cannot signal an impending change in readiness. Moreover, the
Status of Resources and Training System, which measures whether
individual service units possess the required resources and are trained
to do their wartime missions, was never intended to provide the
comprehensive assessment of overall military readiness that has become
increasingly important in today's national security environment. To
supplement data reported in DOD's system and facilitate readiness
assessments at the unit level, the military commands independently
monitor many additional indicators. These indicators are generally not
reported to higher command levels. GAO visited 39 military commands and
other DOD agencies and compiled a list of more than 650 such indicators.
Military commanders and outside defense experts said that many of these
indicators are not only critical to a comprehensive readiness assessment
at the unit level but also have some predictive value. The indicators do
require, however, some refinement to improve their usefulness.
--------------------------- Indexing Terms -----------------------------
REPORTNUM: NSIAD-95-29
TITLE: Military Readiness: DOD Needs to Develop a More
Comprehensive Measurement System
DATE: 10/27/94
SUBJECT: Combat readiness
Evaluation methods
Personnel evaluation systems
Military training
Defense contingency planning
Military forces
Military operations
Training utilization
Defense capabilities
IDENTIFIER: JCS Status of Resources and Training System
Desert Shield
Desert Storm
Army Readiness Management System
Air Force ULTRA Computer Model
**************************************************************************
* This file contains an ASCII representation of the text of a GAO *
* report. Delineations within the text indicating chapter titles, *
* headings, and bullets are preserved. Major divisions and subdivisions *
* of the text, such as Chapters, Sections, and Appendixes, are *
* identified by double and single lines. The numbers on the right end *
* of these lines indicate the position of each of the subsections in the *
* document outline. These numbers do NOT correspond with the page *
* numbers of the printed product. *
* *
* No attempt has been made to display graphic images, although figure *
* captions are reproduced. Tables are included, but may not resemble *
* those in the printed version. *
* *
* A printed copy of this report may be obtained from the GAO Document *
* Distribution Facility by calling (202) 512-6000, by faxing your *
* request to (301) 258-4066, or by writing to P.O. Box 6015, *
* Gaithersburg, MD 20884-6015. We are unable to accept electronic orders *
* for printed documents at this time. *
**************************************************************************
Cover
================================================================ COVER
Report to the Ranking Minority Member, Committee on Armed Services,
House of Representatives
October 1994
MILITARY READINESS - DOD NEEDS TO
DEVELOP A MORE COMPREHENSIVE
MEASUREMENT SYSTEM
GAO/NSIAD-95-29
Military Readiness
Abbreviations
=============================================================== ABBREV
CINC - commanders in chief
DOD - Department of Defense
JCS - Joint Chiefs of Staff
SORTS - Status of Resources and Training System
Letter
=============================================================== LETTER
B-258015
October 27, 1994
The Honorable Floyd D. Spence
Ranking Minority Member
Committee on Armed Services
House of Representatives
Dear Mr. Spence:
This report addresses your concerns that declining defense budgets
are increasing the potential for a return to the days of "hollow
forces" that prevailed during the 1970s. More specifically, you
asked that we conduct a review to determine (1) whether the
definition and indicators of readiness adequately reflect the many
complex components that contribute to overall military readiness and
(2) whether there are current readiness indicators that can predict
positive or negative changes in readiness.
BACKGROUND
------------------------------------------------------------ Letter :1
During the past several years, service chiefs and commanders in chief
(CINC) have expressed concerns about the effect on current and future
readiness of (1) the level of current military operations, (2)
contingency operations, (3) the shifting of funds to support these
operations, and (4) personnel turbulence. Related to these concerns
is a question about the ability of the Department of Defense's (DOD)
readiness reporting system to provide a comprehensive assessment of
overall readiness.
DOD's current system for reporting readiness to the Joint Chiefs of
Staff (JCS) is the Status of Resources and Training System (SORTS).
This system measures the extent to which individual service units
possess the required resources and are trained to undertake their
wartime missions. SORTS was established to provide the current
status of specific elements considered essential to readiness
assessments, that is, personnel and equipment on hand, equipment
condition, and the training of operating forces. SORTS' elements of
measure, "C" ratings that range from C-1 (best) to C-4 (worst), are
probably the most frequently cited indicators of readiness in the
military.
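To make the structure of a SORTS unit report more concrete, the
sketch below (in Python) models the four measured resource areas and
their C-ratings. It is purely illustrative: the field names are
hypothetical, and the rule of taking the worst area rating as the
overall rating is an assumption made for the example, not a
description of SORTS' actual reporting rules.

# Purely illustrative sketch of a SORTS-style unit report. Field names
# are hypothetical; the "overall = worst area rating" rule is an
# assumption for the example, not SORTS' actual rule.
# C-ratings range from 1 (best) to 4 (worst).
unit_report = {
    "personnel_on_hand": 1,
    "equipment_on_hand": 2,
    "equipment_condition": 1,
    "training": 3,
}

def overall_c_rating(report):
    # The weakest measured area drives the assumed overall rating.
    return max(report.values())

print("C-%d" % overall_c_rating(unit_report))  # prints C-3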
RESULTS IN BRIEF
------------------------------------------------------------ Letter :2
The DOD definition and indicators for measuring readiness provide
valuable information, but this information is limited and cannot
signal an impending change in readiness. Moreover, the SORTS system
was never intended to provide the comprehensive assessment of overall
military readiness that has become increasingly important in today's
national security environment. For example, SORTS measures only
individual service readiness; there are no indicators currently
available to measure joint readiness.\1 Nor does SORTS address all
the factors that JCS considers critical to a comprehensive readiness
assessment, such as operating tempo and morale.
To supplement data reported in DOD's system and facilitate readiness
assessments at the unit level, the military commands independently
monitor numerous additional indicators. These indicators are
generally not reported to higher command levels. We visited 39
military commands and other DOD agencies and compiled a list of over
650 such indicators. Discussions with military commanders in all
four services and outside defense experts revealed that many of these
indicators are not only critical to a comprehensive readiness
assessment at the unit level but also have some degree of predictive
value. The indicators do require, however, some further refinement
to improve their usefulness.
--------------------
\1 Joint readiness is the level of preparedness of combatant commands
and joint task forces to integrate ready combat and support units
into an effective joint and combined operating force.
DOD'S CURRENT APPROACH TO
MEASURING READINESS HAS
LIMITATIONS
------------------------------------------------------------ Letter :3
According to JCS and DOD officials, the definition and measures of
readiness that are currently available in SORTS are no longer
adequate in today's national security environment. Specifically,
SORTS does not (1) address all the factors that JCS considers
critical, (2) provide a warning of impending decreases in readiness,
or (3) provide data on joint readiness. In addition, SORTS includes
subjective assessments of training proficiency.
Figure 1 shows those elements reported under SORTS and all the
elements that JCS believes would make up a more comprehensive
assessment.
Figure 1: Factors Important to
a Comprehensive Readiness
Assessment
(See figure in printed
edition.)
Information reported under SORTS is a snapshot in time and does not
predict impending changes. Units report readiness monthly or, for
some units, upon a change of status. These reports provide
commanders and JCS with status information only for that point in
time. Commanders have stated that in today's environment of force
reductions and increasing commitments, there is a need for indicators
that can predict readiness changes.
Some elements of SORTS are not based on objective data. The C-rating
for training, for example, is based on a commander's subjective
assessment of the number of additional training days the unit needs
to reach a C-1 status. This assessment may be based on any number of
factors, including completion of required or scheduled training or
personal observation. In the past, we have found that Army training
assessments have not been reliable. For example, in 1991 we reported
that training readiness assessments of active Army units may have
been overstated.\2 We reported that the information provided to
higher commands and JCS was of limited value because the assessments
(1) were based on training conducted primarily at home stations
rather than on results of more realistic exercises conducted at
combat training centers and (2) may not have adequately considered
the effect that the loss of key personnel had on proficiency.
Likewise, in our reviews pertaining to the Persian Gulf War, we noted
that readiness reports for Army support forces and National Guard
combat forces were often inflated or unreliable.\3 For example, in a
September 1991 report, we noted that when three Army National Guard
combat brigades were mobilized for Operation Desert Shield, their
commanders were reporting readiness at the C-2 and C-3 levels, which
meant that no more than 40 days of post-mobilization training would
be needed for the brigades to be fully combat ready. However, on the
basis of their independent assessment of the brigades' proficiency,
active Army officials responsible for the brigades' post-mobilization
training developed training plans calling for over three times the
number of days that the readiness reports stated were needed.
Finally, SORTS does not provide data with which commanders can
adequately assess joint readiness. There is no clear definition of
areas of joint readiness that incorporates all essential elements,
such as individual service unit readiness, the deployability of
forces, or en route and theater infrastructure support.\4
The need for joint readiness information was demonstrated by the
Persian Gulf War and reaffirmed by contingency operations in Somalia
and Bosnia. Officials at four joint commands told us that SORTS, the
primary source of readiness data, was inadequate for assessing joint
readiness. Although the Joint Staff recently developed its first
list of joint mission tasks, it has not developed the training
conditions for conducting joint exercises or the criteria for evaluating
them. It may be several years before JCS completes these efforts.
--------------------
\2 Army Training: Evaluations of Units' Proficiency Are Not Always
Reliable (GAO/NSIAD-91-72, Feb. 15, 1991).
\3 National Guard: Peacetime Training Did Not Adequately Prepare
Combat Brigades for Gulf War (GAO/NSIAD-91-263, Sept. 24, 1991) and
Operation Desert Storm: Army Had Difficulty Providing Adequate
Active and Reserve Support Forces (GAO/NSIAD-92-67, Mar. 10, 1992).
\4 Recently published findings of a DOD Defense Science Board task
force support this.
DOD EFFORTS TO IMPROVE
READINESS ASSESSMENTS
------------------------------------------------------------ Letter :4
Recognizing the limitations of SORTS and the need for more reliable
readiness information, DOD and the services have initiated actions to
improve readiness assessments.
In June 1994 the Defense Science Board Readiness Task Force, which is
composed of retired general officers, issued its report to the
Secretary of Defense on how to maintain readiness. The Task Force
identified major shortcomings in assessing joint readiness and noted
that while the services have increased their commitment to joint and
combined training since Operation Desert Storm, such training
requires greater emphasis. The Task Force recommended improvements
in the measurement of joint readiness, stating that "real readiness
must be measured by a unit's ability to operate as part of a joint or
combined task force."
More recently, DOD created the Senior Readiness Oversight Council to
evaluate and implement the recommendations of the Readiness Task
Force and to develop new ways to measure combat readiness. The
Council, whose membership includes high-level military and civilian
officials, is focusing on three main ways to improve readiness: (1)
developing better analytical tools for determining the relationship
of resources to readiness and predicting the potential impact of
budget cuts on readiness, (2) developing analytical tools for
measuring joint readiness, and (3) taking advantage of computer
simulation to improve readiness, especially joint readiness.
The Army implemented its Readiness Management System in June 1993.
This system allows the Army to project for 2 years the status of
elements reported under SORTS. The system integrates the reported
SORTS data with other databases that contain future resource
acquisition and distribution information. The Army can, for example,
compare a unit's reported equipment shortages with planned
acquisition and distribution schedules, and the system can then
forecast when those shortages will be alleviated and the unit's
readiness posture improved.
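The system's internal logic is not detailed here, but the forecasting
step described above can be illustrated with a minimal sketch in
Python. The function and data below are hypothetical and assume only
that a unit's reported shortage and a planned delivery schedule are
available; the sketch simply finds the scheduled delivery that erases
the shortage.

from datetime import date

# Hypothetical illustration of the forecasting step described above:
# given a unit's reported equipment shortage and the planned delivery
# schedule for that item, estimate when the shortage will be erased.
def forecast_shortage_alleviation(shortage_qty, planned_deliveries):
    # planned_deliveries is a list of (delivery_date, quantity) pairs.
    remaining = shortage_qty
    for delivery_date, quantity in sorted(planned_deliveries):
        remaining -= quantity
        if remaining <= 0:
            return delivery_date
    return None  # the schedule never covers the shortage

# Example: a unit reports a shortage of 12 items against quarterly
# scheduled deliveries; the forecast is the delivery that closes it.
deliveries = [(date(1995, 1, 15), 4), (date(1995, 4, 15), 4),
              (date(1995, 7, 15), 6)]
print(forecast_shortage_alleviation(12, deliveries))  # 1995-07-15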
In September 1993, the Air Force began to develop a computer model,
called ULTRA, to forecast readiness. ULTRA is intended to measure
four major elements: (1) the ability to deploy the right forces in a
timely manner to achieve national objectives; (2) the ability to
sustain operations; (3) personnel end strength, quality, and
training; and (4) the availability of facilities. If
successful, the system will allow the Air Force to estimate the
effect that various levels of funding have on readiness. The project
is still under development, and the Air Force estimates it will be
about 2 years before the system will provide credible, widely
accepted forecasts.
A MORE COMPREHENSIVE ASSESSMENT
OF READINESS IS POSSIBLE
------------------------------------------------------------ Letter :5
To supplement data currently reported in SORTS and facilitate
readiness assessments at the unit level, the military commands in all
four services independently monitor literally hundreds of additional
indicators. These indicators are generally not reported to higher
command levels. Military commanders and outside defense experts
agreed that many of the indicators are not only critical to a
comprehensive readiness assessment at the unit level but also have
some degree of predictive value regarding readiness changes within
the services.
We compiled a list of over 650 indicators that 28 active and reserve
service commands were monitoring in addition to SORTS. To further
refine these indicators, we asked the commands to rate the indicators
in three areas: (1) the importance of the indicator for assessing
readiness, (2) the degree of value the indicator has as a predictor
of readiness change, and (3) the quality of the information the
indicator provides.
Table 1 shows the readiness indicators that service officials told us
were either critical or important to a more comprehensive assessment
of readiness and that also have some predictive value. The
indicators that are shaded are those rated highest by at least
one-half of the commands visited.
Table 1: Readiness Indicators
Critical or Important to
Predicting Readiness
(See figure in printed
edition.)
(See figure in printed
edition.)
(See figure in printed
edition.)
\a Indicators especially critical for the reserve components.
\b Data should also be maintained on individuals with Combat Training
Center experience.
\c Readiness Task Force commented that maintenance backlogs should be
purged of irrelevant items to make this a more useful indicator.
\d Readiness Task Force commented that on-hand and programmed
purchase of precision-guided munitions should be specifically
monitored.
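The tabulation behind table 1 is not spelled out here; as a purely
illustrative sketch, the Python code below tallies hypothetical
command ratings and keeps the indicators that at least one-half of
the rating commands judged critical or important. The indicator names
are drawn from the discussion in this report, but the ratings and the
screening rule are invented for the example.

# Hypothetical command ratings for a handful of indicators; the
# screen keeps indicators rated "critical" or "important" by at least
# half of the rating commands. Ratings and the rule are illustrative.
ratings = {
    "personnel tempo": ["critical", "critical", "important",
                        "supplementary"],
    "crew turnover": ["important", "critical", "critical", "important"],
    "maintenance backlog": ["supplementary", "supplementary",
                            "important", "supplementary"],
}

def screen(ratings, threshold=0.5):
    selected = []
    for indicator, votes in ratings.items():
        high = sum(1 for v in votes if v in ("critical", "important"))
        if high / len(votes) >= threshold:
            selected.append(indicator)
    return selected

print(screen(ratings))  # ['personnel tempo', 'crew turnover']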
We asked the Defense Science Board Task Force on Readiness to examine
the indicators presented in table 1. Task Force members agreed with
the commands' ratings and said that the indicators are an excellent
beginning for developing a more comprehensive readiness measurement
system. The Task Force suggested four additional indicators: (1)
the use of simulators to improve individual and crew proficiency on
weapon systems; (2) the quality of recruits enlisted by the services;
(3) equipment readiness based on fully mission capable rates rather
than on mission capable rates, which permit a weapon system to be
reported as mission capable even though it cannot fully perform its
mission; and (4) the extent to which readiness-related information in
DOD is automated. In commenting on a draft of this report, DOD
pointed out that it is useful to know whether a system having a
multimission capability can perform parts of the mission; therefore,
it believes that both fully mission capable and mission capable rates
are useful indicators. Also, DOD said that the extent to which
readiness-related information is automated is not an indicator of
readiness but that it might be helpful in obtaining an understanding
of automation requirements. We agree with DOD's position on these
two issues.
As table 1 shows, some indicators are supported more by commanders of
one service than by the others. For example, information on
commitments and deployments (Training, item 15) and deployed
equipment (Logistics, item 17) were assessed as critical by Marine
Corps commanders because of the manner in which the Corps' forces and
equipment are deployed. They were not listed as critical by any of
the commands from the other services.
By examining a group or series of indicators, one may gain a broader
insight than is possible from a single indicator. To illustrate,
changes in the extent of borrowed manpower (Personnel, item 7) may be
related to proficiency on weapon systems (Training, item 12) or crew
turnover (Personnel, item 8). Also, table 1 identifies indicators
that, because of restricted training time and opportunities, are
especially critical to the reserve components.
Several of the indicators that commanders rated as critical to
readiness assessments relate to major readiness concerns recently
expressed by service chiefs and CINCs. For example, while in the
midst of downsizing, U.S. military forces are being called upon for
operational contingencies--delivering humanitarian aid in Iraq,
Bosnia, Rwanda, and Somalia and enforcing "no-fly" zones in Bosnia
and Iraq, to name just a few. Unusually high operating tempos
required for these contingencies have exacerbated the turbulence
inherent in a major downsizing of U.S. forces. Several senior
service leaders have raised concerns about the impact of this
situation on morale, retention, and the ability to maintain readiness
for traditional warfighting missions. Among the indicators suggested
by some of the command officials we interviewed were personnel tempo,
a measure of the frequency and number of personnel deployed on
assigned missions, and crew turnover, a measure of personnel turnover
within weapon system crews. Similarly, the services reported that they
were required to shift funds from operations and maintenance
appropriations to support contingency operations, and, according to
officials of each of the services, some scheduled training exercises
were canceled and others were postponed. Several commanders
suggested readiness indicators related to operating tempo, funding
levels, and individual/unit proficiency.
Related to predictive capability is the ability to conduct trend
analyses based on the most important indicators.
Assuming that relevant data is available, the services can identify
trends in the additional indicators over time. However, no criteria
are currently available to assess the meaning of a trend in terms of
its impact on readiness. During our visits to the military commands,
we noted an unevenness in the availability of historical data,
depending on the indicator being monitored. Also, the commands
reported that there is unevenness in the quality of the data
available for measurement. Some indicators that were rated high in
importance were rated low in quality.
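No method of trend analysis is prescribed here. As one simple
illustration, the Python sketch below fits a least-squares line to a
series of monthly observations of an indicator and reports the slope;
the data and the choice of a linear fit are assumptions made for the
example.

# Illustrative only: fit a least-squares line to monthly observations
# of an indicator (here, a hypothetical mission capable rate) and
# report the slope as a simple measure of the trend.
def linear_trend(values):
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var  # change per observation period

monthly_mc_rate = [0.84, 0.83, 0.81, 0.80, 0.78, 0.77]  # six months
print("%+.3f per month" % linear_trend(monthly_mc_rate))  # declining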
RECOMMENDATIONS
------------------------------------------------------------ Letter :6
We recommend that the Secretary of Defense direct the Under Secretary
of Defense for Personnel and Readiness to develop a more
comprehensive readiness measurement system to be used DOD-wide. We
recommend that, as part of this effort, the Under Secretary
review the indicators we have identified as being critical to
predicting readiness and select the specific indicators most
relevant to a more comprehensive readiness assessment,
develop criteria to evaluate the selected indicators and prescribe
how often the indicators should be reported to supplement SORTS
data, and
ensure that comparable data is maintained by all services to allow
the development of trends in the selected indicators.
AGENCY COMMENTS AND OUR
EVALUATION
------------------------------------------------------------ Letter :7
In written comments on a draft of our report, DOD generally agreed
with our findings and recommendation (see app. I). The Department
said that it plans to address the issue of using readiness indicators
not only to monitor force readiness but also to predict force
readiness. In response to our recommendation, DOD said that it is
developing a specification for a readiness prediction system and that
it has already used the indicators presented in our report as input
to that process.
DOD did not agree with our assessment of the overall value of SORTS
information and the reliability of training ratings contained in
SORTS. First, DOD said that it did not agree that SORTS information
provided to higher commands and JCS is of limited value. We agree
that SORTS provides valuable information on readiness. Nevertheless,
the system does have several limitations. The matters discussed in
the report are not intended as criticisms of SORTS but rather as
examples of limitations that are inherent in the system. For
example, C-ratings represent a valuable snapshot of readiness at a
point in time, but by design they do not address long-term readiness
or signal impending changes in the status of resources. Second, DOD
said that
it did not agree that SORTS may not adequately consider the effect
that the loss of key personnel has on proficiency. DOD may have
misinterpreted our position on this issue. Although SORTS recognizes
the loss of key personnel, it does not always consider the impact of
replacing key personnel with less experienced personnel. Lastly, DOD
cited a number of factors that it believes make it infeasible to base
training readiness on the results of combat training center
exercises. This report does not propose that DOD take this course of
action. Our reference to the fact that training readiness is based
primarily on training conducted at home stations rather than on the
results of more realistic exercises conducted at combat training
centers is intended only to illustrate how the reliability of SORTS
training information can be affected.
SCOPE AND METHODOLOGY
------------------------------------------------------------ Letter :8
To assess the adequacy of the current definition and indicators of
readiness, we examined military service and JCS regulations, reviewed
the literature, and interviewed officials from 39 DOD agencies,
including active and reserve service commands, defense civilian
agencies, unified commands, and the Joint Staff (see app. II). To
identify indicators that are being monitored to supplement SORTS
data, we asked the 39 agencies to identify all the indicators they
use to assess readiness and operational effectiveness.\5 After
compiling and categorizing the indicators by type, that is,
personnel, training, and logistics, we asked the commands to rate the
indicators' significance, predictive value, and quality. Indicator
significance was rated as either critical, important, or
supplementary. The commands' opinions of predictive value were
provided on a five-point scale ranging from little or none to very
great. The quality of the indicator was rated on a three-point
scale--low, medium, and high.
We asked the Defense Science Board's Task Force on Readiness to (1)
review and comment on the indicators that the commands rated the
highest in terms of their importance and predictive value and (2)
identify additional indicators that, in their judgment, were also
critical to a comprehensive readiness assessment.
We conducted our review from May 1993 to June 1994 in accordance with
generally accepted government auditing standards.
--------------------
\5 Of the 39 DOD agencies, 28 monitored additional readiness
indicators.
---------------------------------------------------------- Letter :8.1
As agreed with your office, unless you publicly announce this
report's contents earlier, we plan no further distribution until 30
days from its issue date. At that time, we will send copies to the
Chairmen of the Senate and House Committees on Armed Services and on
Appropriations; the Subcommittee on Military Readiness and Defense
Infrastructure, Senate Armed Services Committee; and the Subcommittee
on Readiness, House Armed Services Committee; and to the Secretaries
of Defense, the Army, the Navy, and the Air Force. Copies will also
be made available to others on request.
Please contact me at (202) 512-5140 if you or your staff have any
questions concerning this report. Major contributors to this report
are listed in appendix III.
Sincerely yours,
Mark E. Gebicke
Director, Military Operations
and Capabilities Issues
(See figure in printed edition.)
Appendix I
COMMENTS FROM THE DEPARTMENT OF
DEFENSE
============================================================== Letter
(See figure in printed edition.)
(See figure in printed edition.)
(See figure in printed edition.)
(See figure in printed edition.)
(See figure in printed edition.)
LOCATIONS VISITED
========================================================== Appendix II
ARMY
Secretary of the Army
Washington, D.C.
4th Infantry Division (Mechanized)
Fort Carson, Colorado
18th Airborne Corps
Fort Bragg, North Carolina
24th Infantry Division
Fort Stewart, Georgia
Corps Support Command
18th Airborne Corps
Fort Bragg, North Carolina
Headquarters, Forces Command
Fort McPherson, Georgia
Headquarters, Training and Doctrine Command
Fort Monroe, Virginia
National Guard Bureau
Washington, D.C.
U.S. Army Reserve Command
Atlanta, Georgia
NAVY
Secretary of the Navy
Washington, D.C.
Carrier Air Wing Three
Norfolk, Virginia
Destroyer Squadron Two
Norfolk, Virginia
Naval Air Force
U.S. Atlantic Fleet
Norfolk, Virginia
Naval Air Reserve Force
New Orleans, Louisiana
Naval Reserve Force
New Orleans, Louisiana
Naval Surface Force
U.S. Atlantic Fleet
Norfolk, Virginia
Naval Surface Reserve Force
New Orleans, Louisiana
Submarine Force
U.S. Atlantic Fleet
Norfolk, Virginia
Submarine Squadron Eight
Norfolk, Virginia
U.S. Atlantic Fleet
Norfolk, Virginia
AIR FORCE
Secretary of the Air Force
Washington, D.C.
1st Tactical Fighter Wing
Langley Air Force Base, Virginia
375th Air Wing
Scott Air Force Base, Illinois
Air Combat Command
Langley Air Force Base, Virginia
Air Force Reserve
Washington, D.C.
Air Mobility Command
Scott Air Force Base, Illinois
MARINE CORPS
Office of the Inspector General
Washington, D.C.
Headquarters, Marine Forces Atlantic
Norfolk, Virginia
Marine Reserve Force
Fleet Marine Force
U.S. Marine Corps Reserve
New Orleans, Louisiana
Second Force Service Support Group
Camp Lejeune, North Carolina
Second Marine Air Wing
Marine Corps Air Station
Cherry Point, North Carolina
Second Marine Division
Camp Lejeune, North Carolina
Second Marine Expeditionary Force
Camp Lejeune, North Carolina
Second Surveillance, Reconnaissance, Intelligence Group
Camp Lejeune, North Carolina
UNIFIED COMMANDS
Commander in Chief, Special Operations Command
MacDill Air Force Base, Florida
Commander in Chief, Central Command
MacDill Air Force Base, Florida
Commander in Chief, Pacific Command
Camp Smith, Hawaii
Commander in Chief, U.S. Atlantic Command
Norfolk Naval Base, Virginia
OTHER
Office of the Joint Chiefs of Staff
Washington, D.C.
MAJOR CONTRIBUTORS TO THIS REPORT
========================================================= Appendix III
NATIONAL SECURITY AND
INTERNATIONAL AFFAIRS DIVISION,
WASHINGTON, D.C.
Norman J. Rabkin, Associate Director
Charles J. Bonanno, Jr., Assistant Director
NORFOLK REGIONAL OFFICE
Ray S. Carroll, Jr., Evaluator-in-Charge
James E. Lewis, Evaluator (Data Analyst)
James K. Mahaffey, Site Senior
Robert C. Mandigo, Jr., Site Senior
Jeffrey C. McDowell, Evaluator
Jeffrey L. Overton, Jr., Site Senior
Susan J. Schildkret, Evaluator
Lester L. Ward, Site Senior