[House Hearing, 110 Congress]
[From the U.S. Government Publishing Office]
AVIATION SAFETY: CAN NASA DO
MORE TO PROTECT THE PUBLIC?
=======================================================================
HEARING
BEFORE THE
COMMITTEE ON SCIENCE AND TECHNOLOGY
HOUSE OF REPRESENTATIVES
ONE HUNDRED TENTH CONGRESS
FIRST SESSION
__________
OCTOBER 31, 2007
__________
Serial No. 110-70
__________
Printed for the use of the Committee on Science and Technology
Available via the World Wide Web: http://www.science.house.gov
----------
U.S. GOVERNMENT PRINTING OFFICE
38-535 PDF WASHINGTON : 2008
For sale by the Superintendent of Documents, U.S. Government Printing
Office Internet: bookstore.gpo.gov Phone: toll free (866) 512-1800;
DC area (202) 512-1800 Fax: (202) 512-2104 Mail: Stop IDCC,
Washington, DC 20402-0001
COMMITTEE ON SCIENCE AND TECHNOLOGY
HON. BART GORDON, Tennessee, Chairman
JERRY F. COSTELLO, Illinois          RALPH M. HALL, Texas
EDDIE BERNICE JOHNSON, Texas         F. JAMES SENSENBRENNER JR., Wisconsin
LYNN C. WOOLSEY, California          LAMAR S. SMITH, Texas
MARK UDALL, Colorado                 DANA ROHRABACHER, California
DAVID WU, Oregon                     ROSCOE G. BARTLETT, Maryland
BRIAN BAIRD, Washington              VERNON J. EHLERS, Michigan
BRAD MILLER, North Carolina          FRANK D. LUCAS, Oklahoma
DANIEL LIPINSKI, Illinois            JUDY BIGGERT, Illinois
NICK LAMPSON, Texas                  W. TODD AKIN, Missouri
GABRIELLE GIFFORDS, Arizona          JO BONNER, Alabama
JERRY MCNERNEY, California           TOM FEENEY, Florida
LAURA RICHARDSON, California         RANDY NEUGEBAUER, Texas
PAUL KANJORSKI, Pennsylvania         BOB INGLIS, South Carolina
DARLENE HOOLEY, Oregon               DAVID G. REICHERT, Washington
STEVEN R. ROTHMAN, New Jersey        MICHAEL T. MCCAUL, Texas
JIM MATHESON, Utah                   MARIO DIAZ-BALART, Florida
MIKE ROSS, Arkansas                  PHIL GINGREY, Georgia
BEN CHANDLER, Kentucky               BRIAN P. BILBRAY, California
RUSS CARNAHAN, Missouri              ADRIAN SMITH, Nebraska
CHARLIE MELANCON, Louisiana          PAUL C. BROUN, Georgia
BARON P. HILL, Indiana
HARRY E. MITCHELL, Arizona
CHARLES A. WILSON, Ohio
C O N T E N T S
October 31, 2007
Page
Witness List..................................................... 2
Hearing Charter.................................................. 3
Opening Statements
Statement by Representative Bart Gordon, Chairman, Committee on
Science and Technology, U.S. House of Representatives.......... 5
Written Statement............................................ 6
Statement by Representative Ralph M. Hall, Minority Ranking
Member, Committee on Science and Technology, U.S. House of
Representatives................................................ 7
Written Statement............................................ 8
Prepared Statement by Representative Mark Udall, Chairman,
Subcommittee on Space and Aeronautics, Committee on Science and
Technology, U.S. House of Representatives...................... 8
Prepared Statement by Representative Tom Feeney, Minority Ranking
Member, Subcommittee on Space and Aeronautics, Committee on
Science and Technology, U.S. House of Representatives.......... 9
Prepared Statement by Representative Jerry F. Costello, Member,
Committee on Science and Technology, U.S. House of
Representatives................................................ 10
Prepared Statement by Representative Brad Miller, Chairman,
Subcommittee on Investigations and Oversight, Committee on
Science and Technology, U.S. House of Representatives.......... 11
Prepared Statement by Representative Daniel Lipinski, Member,
Committee on Science and Technology, U.S. House of
Representatives................................................ 11
Prepared Statement by Representative Harry E. Mitchell, Member,
Committee on Science and Technology, U.S. House of
Representatives................................................ 11
Panel 1:
Dr. Michael D. Griffin, Administrator, National Aeronautics and
Space Administration (NASA)
Oral Statement............................................... 12
Written Statement............................................ 15
Mr. James E. Hall, Managing Partner, Hall and Associates, LLC;
Former Chairman, National Transportation Safety Board (NTSB)
Oral Statement............................................... 18
Written Statement............................................ 19
Biography.................................................... 24
Discussion
Release of NASA Report......................................... 25
Reasons for Not Releasing Parts of the Report.................. 25
Information About the Data That Was Released................... 27
Confidentiality of Information About Pilots and Commercial
Information.................................................. 28
Getting the Information to the Public.......................... 30
Disciplinary Action for Responsible Party...................... 31
NASA Survey and Confidentiality................................ 32
Releasing Information and Why Was the Survey Ended?............ 33
Airline Safety Compared to Other Safety Concerns............... 35
Responsibility for Public Statement............................ 36
Why Wasn't NASA Information Made Public and Why Didn't It Live
Up to NASA's Standards?...................................... 37
State of Current Space Shuttle Mission......................... 37
Quality of Data................................................ 38
The Responsibility for the $11 Million......................... 40
Data Recovery, Peer Review, and Avoidance of Requests.......... 41
Panel 2:
Dr. Robert S. Dodd, Safety Consultant and President, Dodd &
Associates, LLC
Oral Statement............................................... 43
Written Statement............................................ 44
Biography.................................................... 46
Dr. Jon A. Krosnick, Frederic O. Glover Professor in Humanities
and Social Sciences, Stanford University
Oral Statement............................................... 49
Written Statement............................................ 52
Biography.................................................... 64
Captain Terry L. McVenes, Executive Air Safety Chairman, Air Line
Pilots Association, International
Oral Statement............................................... 93
Written Statement............................................ 95
Biography.................................................... 96
Discussion
NAOMS Survey and Methodology................................... 96
Survey Methodology and Confidentiality......................... 98
Why Didn't the FAA Continue the Project?....................... 99
Best Organization to Operate NAOMS............................. 100
Termination of Program......................................... 101
Appendix 1: Answers to Post-Hearing Questions
Dr. Michael D. Griffin, Administrator, National Aeronautics and
Space Administration (NASA).................................... 106
Mr. James E. Hall, Managing Partner, Hall and Associates, LLC;
Former Chairman, National Transportation Safety Board (NTSB)... 110
Dr. Robert S. Dodd, Safety Consultant and President, Dodd &
Associates, LLC................................................ 112
Dr. Jon A. Krosnick, Frederic O. Glover Professor in Humanities
and Social Sciences, Stanford University....................... 115
Captain Terry L. McVenes, Executive Air Safety Chairman, Air Line
Pilots Association, International.............................. 121
Appendix 2: Additional Material for the Record
Exhibit 1. Rempel, W. and Freed, D. (1991, February 3). Danger on
the Ground, Too; Safety: Near-misses have occurred on runways
and taxiways, federal records show. Pilots were sometimes lost
or controllers moved planes into another's path. Los Angeles
Times. Retrieved 2007, from http://factiva.com/................ 124
Exhibit 2. Brazil, J. (1994, December 11). FAA's Safety Response
Record Hits Turbulence, Over the past decade, the agency has
been slow to heed safety warnings--sometimes acting only after
fatal crashes, according to a Times study. Los Angeles Times.
Retrieved 2007, from http://factiva.com/....................... 126
Exhibit 3. Statler, I. and Maluf, D.A. (2003). ``NASA Aviation
System Monitoring and Modeling Project,'' SAE Aerospace and
Aerospace Conference........................................... 132
Exhibit 4. National Aviation Operational Monitoring Service
(NAOMS) Fact Sheet............................................. 137
Exhibit 5. Connell, L. (1999, May 11). Welcome and NAOMS
Introduction. Presented at the Workshop 1 on the Concept of the
National Aviation Operational Monitoring Service (NAOMS)....... 138
Exhibit 6. Dodd, R. (2000, March 1). NAOMS Concept, Rationale and
Field Trial Development. Presented at the NAOMS Workshop 2..... 148
Exhibit 7. Connors, M. and Connell, L. (2003, December 18).
Future Directions. Prepared for Meeting 1, NAOMS Status and
Results Review................................................. 169
Exhibit 8. Rosenthal, L., Krosnick, K., Cwi, J., Connell, L.,
Dodd, R., and Connors, M. (2003, April 9). National Aviation
Operations Monitoring Service (NAOMS). Prepared for Detailed
Program Overview; Results to Date for FAA Senior Management.... 172
Exhibit 9. NAOMS (2003, August 5). National Operations Monitoring
Service (NAOMS). Prepared for NAOMS Overview and Status to FAA-
JIMDAT......................................................... 278
Exhibit 10. FAA (2007, February). R&D Activities. National
Aviation Research Plan......................................... 304
Exhibit 11. National Aviation Operational Monitoring Service
(NAOMS) Air Carrier Pilot Survey (Ver AC-July 15, 2003)........ 306
Exhibit 12. NAOMS document request from Chairman Brad Miller,
Subcommittee on Investigations and Oversight, Committee on
Science and Technology to Dr. Michael Griffin, Administrator,
National Aeronautics and Space Administration (NASA) (2007,
October 19).................................................... 372
Exhibit 13. Response to Chairman Miller's October 19, 2007
request from William W. Burner, III, Assistant Administrator
for Legislative and Intergovernmental Affairs, NASA (2007,
October 22).................................................... 375
Exhibit 14. NAOMS document safety request from Chairman Bart
Gordon, Committee on Science and Technology, Chairman Brad
Miller, Subcommittee on Investigations and Oversight, and
Chairman Mark Udall, Subcommittee on Space and Aeronautics, to
Dr. Michael Griffin, Administrator, NASA (2007, October 22).... 378
Exhibit 15. Response to Committee October 19, 2007 and October
22, 2007 requests from Dr. Michael Griffin, Administrator, NASA
(2007, October 29)............................................. 382
AVIATION SAFETY: CAN NASA DO MORE TO PROTECT THE PUBLIC?
----------
WEDNESDAY, OCTOBER 31, 2007
House of Representatives,
Committee on Science and Technology,
Washington, DC.
The Committee met, pursuant to call, at 1:35 p.m., in Room
2318 of the Rayburn House Office Building, Hon. Bart Gordon
[Chairman of the Committee] presiding.
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Hearing Charter
COMMITTEE ON SCIENCE AND TECHNOLOGY
U.S. HOUSE OF REPRESENTATIVES
Aviation Safety: Can NASA Do
More to Protect the Public?
Wednesday, October 31, 2007
1:30 p.m.-3:30 p.m.
2318 Rayburn House Office Building
Purpose
The Committee will hold a hearing on NASA policy regarding the
agency's management of the National Aviation Operations Monitoring
Service (NAOMS). NAOMS has been in the press because NASA refused to
release the data to an Associated Press (AP) reporter, offering the
rationale that release of the information, because it relates to
safety, might undermine the flying public's confidence in the aviation
system. NASA's refusal to release this data has been widely condemned
in editorials in many of the Nation's papers. NASA's
Administrator Michael Griffin has formally distanced himself from that
rationale, but he has not yet made it clear when or even whether NASA
will publicly release this data.
Witnesses
Panel 1
Dr. Michael Griffin, Administrator, National Aeronautics and Space
Administration (NASA)
Mr. Jim Hall, Managing Partner, Hall and Associates, LLC, and Former
Chairman, National Transportation Safety Board (NTSB)
Panel 2
Dr. Robert S. Dodd, Safety Consultant and President, Dodd & Associates,
LLC
Dr. Jon A. Krosnick, Frederic O. Glover Professor in Humanities and
Social Sciences, Stanford University
Captain Terry McVenes, Executive Air Safety Chairman, Air Line Pilots
Association
Background
On October 29, Administrator Griffin sent a letter to the Committee
indicating that the data was being provided to the Committee, but
noting that ``NASA believes that the data contains both confidential
commercial data and information that could compromise anonymity that
should be redacted prior to public release.'' Staff have been unable to
find a NASA staffer or a staffer at Battelle (the contractor on the
project) who can articulate what commercially sensitive information
resides in these databases. As to anonymity, Battelle indicated that
all personally identifying information was stripped from the data
within 24 hours of conducting a survey. It is unclear what data should
be removed prior to public release, and this may be a question for NASA.
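    For illustration only, the sketch below (written in Python, with
entirely hypothetical field names, since the actual NAOMS database
schema has not been made public) shows the general kind of redaction
step at issue: direct identifiers are dropped outright, and fields that
could indirectly identify a respondent are coarsened.

    import csv

    # Hypothetical field names for illustration; the real NAOMS schema
    # has not been published.
    IDENTIFYING_FIELDS = {"pilot_name", "phone_number", "certificate_number"}

    def redact(in_path, out_path):
        """Drop direct identifiers and coarsen fields that could let a
        knowledgeable reader trace an event back to one respondent."""
        with open(in_path, newline="") as fin, \
             open(out_path, "w", newline="") as fout:
            reader = csv.DictReader(fin)
            kept = [f for f in reader.fieldnames
                    if f not in IDENTIFYING_FIELDS]
            writer = csv.DictWriter(fout, fieldnames=kept)
            writer.writeheader()
            for row in reader:
                for f in IDENTIFYING_FIELDS:
                    row.pop(f, None)         # remove direct identifiers
                if "interview_date" in row:  # keep year-month, not the day
                    row["interview_date"] = row["interview_date"][:7]
                writer.writerow(row)

    Any real redaction would also have to weigh combinations of
remaining fields (carrier, aircraft type, date, airport) that together
could single out one pilot, which is presumably the concern NASA has
raised.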
The concern NASA expressed in its initial FOIA rejection letter
was that public release of the data might undermine the public's
confidence in flying. However, other safety data systems are already
open to the public and include plenty of details that could have far
more impact on public confidence than data contained in a spreadsheet. The
best known is the Aviation Safety Reporting System (ASRS) which
includes numerous stories about near misses in the air and on the
ground. The bottom line is that when planes have actually crashed,
people keep going right to the airport. The Committee asked NASA to
provide all records of the aviation industry expressing concerns that
their commercial interests could be damaged or objecting to the impact
on the flying public's attitudes if NAOMS data were made publicly
available, and NASA could find no responsive records.
In addition to the FOIA issue, the hearing will provide an
opportunity for the Committee to learn about aviation safety data
sources and the rationale behind launching NAOMS in the first place.
All other data systems involve voluntary self-reporting tied to either
incidents that have happened or else data that has been filtered by
private parties to strip information out of the report prior to being
turned over to the government. FAA collects most of these data sources;
NASA manages the Aviation Safety Reporting System (ASRS) for FAA. If it
had been rolled out operationally, NAOMS would have integrated
continuous survey data from pilots, ground controllers, ground crews,
and cabin crews to create a complete picture of what is happening in
the air safety system nationally. This information would not be driven
by adverse events and would have a statistical rigor that the self-
reporting anecdotal systems lack. As a result, safety experts could
mine the NAOMS data for insights into new safety threats as they
emerge.
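    As a purely illustrative sketch of the kind of statistic such a
monitor could produce (all numbers below are hypothetical, and NAOMS's
actual estimation and weighting procedures have not been released), a
survey-based system converts events recalled per flight hour into a
system-wide annual estimate with a sampling-error bound:

    import math

    # Hypothetical responses: (events recalled in the recall period,
    # flight hours flown in that period). Not actual NAOMS data.
    responses = [(1, 180.0), (0, 220.0), (2, 195.0), (0, 160.0), (0, 210.0)]

    events = sum(e for e, _ in responses)
    hours = sum(h for _, h in responses)
    rate = events / hours                    # events per flight hour

    # Extrapolate to an assumed 18 million system-wide flight hours per
    # year, with a crude Poisson error bound; a real survey design would
    # also weight for sampling probability and adjust for recall bias.
    SYSTEM_HOURS_PER_YEAR = 18_000_000
    estimate = rate * SYSTEM_HOURS_PER_YEAR
    half_width = 1.96 * (math.sqrt(events) / hours) * SYSTEM_HOURS_PER_YEAR
    print(f"estimated events per year: {estimate:,.0f} +/- {half_width:,.0f}")

    The value of such continuous estimates is that they come with
quantifiable uncertainty, which incident-driven, self-reported systems
cannot offer.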
The aviation system is changing due to new information and
communications technologies that are being introduced into the system.
It is also anticipated that the national airspace system will have to
handle up to three times as much demand by 2025 compared to 2000. The
voluntary reporting systems of the past may not be good enough to keep
the skies over the United States safe, and they certainly do not
represent what could be achieved with improved data systems. NAOMS was to be
that proactive, forward-looking tool to identify problems tied to
increasing demands on capacity and unexpected problems with the
introduction of new technologies.
NASA spent three years developing and field testing the NAOMS
survey with support from Battelle and several distinguished
subcontractors who were experts in survey methodology or aviation
safety. Then NASA ran a survey of commercial pilots for almost four
years. Over 24,000 pilots responded to the survey. Another 4,000 general
aviation pilots were surveyed during a span of several months over
2002-2003. The contractor also began work to roll out a survey of air
traffic controllers, but it was never implemented in the field. After
spending more than $8 million to develop this tool and begin to put it
in place, NASA shut it down before it became operational. The project
enjoyed unusual success in gathering responses from pilots, but the
project also ran up against competing priorities within the agency, as
well as a lack of interest at the FAA.
In shutting the project down, NASA has done absolutely nothing
either to publicize the methodology and the goal it hoped to achieve
or to release any analytical products that give insights into air safety
trends. This was true until the AP reporter pushed to get the materials
out. Only then did the top managers for this project at NASA begin to
try to put some sort of report together. NASA says a technical report
will be released by the end of the year, but prior to a week ago, the
report was described by both NASA counsel and NASA researchers to
Committee staff as something that would represent analytical insights
drawn from the data with recommendations for improving air safety. It
appears that NASA has moved the goal posts even on this belated work
product.
The reasons that NAOMS was needed have not changed. The national
air transportation system appears safe at the moment, but new
technologies and stresses will produce exactly the situation that NAOMS
was designed to help address.
To help the Committee sort through some of this, we will receive
testimony from Dr. Michael Griffin, the NASA Administrator. The
Committee will also take testimony from Mr. Jim Hall (former head of
the National Transportation Safety Board and member of the 1997
White House Commission on Aviation Safety and Security--the Gore Commission), Dr.
Robert Dodd (aviation safety expert who managed the NAOMS project under
contract to Battelle), Dr. Jon Krosnick (the Stanford survey
methodology expert who helped design the survey), and a representative
of the Air Line Pilots Association (ALPA), Captain Terry McVenes. ALPA actually opposes
release of the raw data, but they do favor analysis of that
information. NASA has also ``handed off'' the NAOMS methodology to ALPA
(though it has been redesigned as a web-based, not a phone-based, survey)
so that they can administer the survey to their members. However, ALPA
has told Committee staff that they have not decided what questions they
would ask, whom they would ask, or even when to run a survey.
They have done nothing with NAOMS to date.
Chairman Gordon. I want to welcome all of you, and I
especially want to welcome our witnesses to today's hearing.
You have made yourselves available to testify on relatively short
notice, and I appreciate your willingness to assist the
Committee in carrying out our oversight responsibilities on
this important issue.
It was important that we met as soon as possible to get to
the bottom of what has been going on and what NASA intends to
do from this point forward. America's air transportation system
is critical both to our nation's economic vitality and to our
quality of life.
However, it is no secret that the system faces increasing
stresses as air traffic demand continues to grow, demand that
is expected to increase by a factor of two or three by the year
2025. And those stresses make it even more important that all
necessary steps are taken to maintain air safety. It is the
right thing to do, and the American public expects it.
Our citizens want to be sure that the government and the
aviation industry are doing all that can be done to keep the
air transportation system safe. That is why both the public and
Members of Congress alike have such a strong reaction to
reports that NASA has been withholding an aviation safety
survey database compiled with taxpayer dollars. NASA's
explanation for its refusal to release the data was both
troubling and unconvincing.
Specifically, NASA has stated the data can't be released
because, and I quote, ``It could materially affect the public
confidence in, and the commercial welfare of air carriers.''
Well, as I have said before, NASA needs to focus on
maintaining and increasing the safety of the flying public, not
protecting the commercial air carriers. And if NASA
accomplishes that and if we have a safe traveling environment,
then the commercial air carriers' situation will
certainly be enhanced. Dr. Griffin has indicated that he
agrees, and he will testify today that NASA will publicly
release the NAOMS data.
While we need to clarify just exactly what will be released
and when, and I hope it will be soon, I am pleased that he is
taking that action, as his usual candor dictates. If scheduling
this hearing helped bring about the change of direction at
NASA, I think that it has been a constructive exercise of our
oversight responsibilities.
However, the issue we have to consider today goes beyond
simply the release of the data NASA is withholding. We also
have a question of priorities. As former NTSB Chairman Jim Hall
will testify, and again, I quote, ``A true safety culture
requires transparency and consistent vigilance.''
Numerous individuals familiar with this project have told us
that it was envisioned as a long-term,
continuing data collection and analysis effort to identify
aviation accident precursors and safety trends. And several of
our witnesses today will testify that it has the potential to
provide information and insights unobtainable from existing
data sources.
Therefore, by most accounts, the project appeared to be a
promising avenue for ensuring that our nation's air
transportation system would retain its impressive safety record
in the coming years. Yet whether it was due to shifting
priorities, budgetary constraints, cultural differences between
agencies, or something else, the project has largely been cast
adrift by NASA and the FAA.
I hope that one outcome of today's hearing will be the
reconsideration of the NAOMS project by NASA and the FAA.
However, I think we in Congress also need to take a close look
at NASA's overall aviation safety program to make sure that it
still addresses the most relevant safety questions facing the
Nation's air transportation system.
That is going to be one of the focuses of today's hearing
and of our oversight in the coming months. Maintaining and improving aviation
safety is an important task for the Federal Government to
accomplish, working in partnership with the aviation industry.
The stakes are high, and we need to get it right.
We have a lot to do and to cover today, so I again welcome
our witnesses to today's hearing, and I now yield to my good
friend and colleague, Ranking Member Ralph Hall.
[The prepared statement of Chairman Gordon follows:]
Prepared Statement of Chairman Bart Gordon
Good afternoon. I'd like to welcome all of our witnesses to today's
hearing. You have made yourselves available to testify on relatively
short notice, and I appreciate your willingness to assist the Committee
in carrying out our oversight on this important issue.
It was important that we meet as soon as possible to get to the
bottom of what has been going on, and what NASA intends to do from this
point forward. America's air transportation system is critical both to
our nation's economic vitality and to our quality of life.
However, it's no secret that the system faces increasing stresses
as air travel demand continues to grow--demand that is expected to
increase by a factor of two to three by 2025. And those stresses make
it even more important that all necessary steps are taken to maintain
air safety. It's the right thing to do, and the American public expects
it.
Our citizens want to be sure that the government and the aviation
industry are doing all that can be done to keep the air transportation
system safe. That's why both the public and Members of Congress alike
had such a strong reaction to reports that NASA has been withholding an
aviation safety survey data base compiled with taxpayer dollars. NASA's
explanation for its refusal to release the data was both troubling and
unconvincing.
Specifically, NASA was saying the data can't be released because it
``could materially affect the public confidence in, and the commercial
welfare of the air carriers. . .''
Well, as I've said before, NASA needs to focus on maintaining and
increasing the safety of the flying public, not on protecting the
commercial air carriers. Dr. Griffin has indicated that he agrees, and
he will testify today that NASA will publicly release the NAOMS data.
While we need to clarify just exactly what will be released and
when--and I hope it will be soon--I am pleased that he is taking that
action. If scheduling this hearing helped bring about this change of
direction at NASA, I think that it has been a constructive exercise of
our oversight responsibilities.
However, the issues we have to consider today go beyond simply the
release of the data NASA is withholding. We also have a question of
priorities. As former NTSB Chairman Jim Hall will testify today: ``A
true safety culture requires transparency and constant vigilance.''
Numerous individuals familiar with the NAOMS project have told us
that it was envisioned as a long-term, continuing data collection and
analysis effort to identify aviation accident precursors and safety
trends. And several of our witnesses today will testify that it has the
potential to provide information and insights unobtainable from
existing data sources.
Thus, by most accounts, NAOMS appeared to be a promising avenue for
ensuring that our nation's air transportation system would retain its
impressive safety record in the coming years. Yet whether it was due to
shifting priorities, budgetary constraints, cultural differences
between agencies, or something else--NAOMS has largely been cast adrift
by NASA and the FAA.
I hope that one outcome of today's hearing will be a
reconsideration of the NAOMS project by NASA and the FAA. However, I
think we in Congress also need to take a close look at NASA's overall
aviation safety program to make sure that it is still addressing the
most relevant safety questions facing the Nation's air transportation
system.
That is going to be one of the focuses of this committee's
oversight in the coming months.
Maintaining and improving aviation safety is an important task for
the Federal Government to accomplish--working in partnership with the
aviation industry.
The stakes are high, and we need to get it right.
We have a lot to cover today, so I again want to welcome our
witnesses to today's hearing, and I now yield to my good friend and
colleague, Ranking Member Ralph Hall.
Mr. Hall of Texas. Mr. Chairman, I thank you. Today's
hearing on NASA's National Aviation Operations Monitoring
Service, NAOMS, is timely, especially considering the
amount of scrutiny this program has received in the press.
Several issues have arisen that bring into question the manner
in which NASA closed out NAOMS, whether it achieved its
original goals and the agency's refusal to provide raw survey
data to the press in response to a Freedom of Information Act
request. I am optimistic that, by the hearing's conclusion,
after we hear from these very capable witnesses, we will
have a clear understanding regarding these and other
pressing issues.
And I do want to associate myself with NASA Administrator
Mike Griffin's public statement that lays out the agency's
philosophy on the treatment of research data. Like him, I
believe NASA ought to be in the business of putting information
in front of the public, not withholding it. That being said,
every care should be taken to protect the identities of survey
respondents. NAOMS has promised pilots complete confidentiality
to ensure their candid participation, and most folks believe
that ought not to be breached.
If information is disclosed that may allow respondents to
be identified, there will be a serious chilling effect in
future survey efforts funded by the Federal Government, whether
we are talking about pilots or other citizen groups who provide
our government meaningful insight into a whole host of
activities. In the case of NAOMS, we should be cognizant of
striking a balance between transparency and confidentiality.
I have the greatest faith in the Administrator. I have been
through half a dozen or so administrators since I have been up
here, and I think none surpasses him in background or
ability. He is a pilot, he is young, he is agile, and he is a
lot of other things that are good for NASA. And I am just
really proud of him and honored to have him come before this
committee.
NASA should release the data, but, you know, to help us all
gain a better understanding of what it is telling us, they
ought to provide information, whether in the form of analysis,
methodology, or reports, to give us a clear sense of context.
But it is also important that the data be scrubbed, I think, to
ensure errors are eliminated. Get the errors out of there.
I want to thank our witnesses for taking time from their
busy schedules to appear before us this afternoon and
acknowledge their hard work and preparation. All of us
appreciate your willingness to be here, and Mr. Hall from
Tennessee, we certainly welcome you and thank you, sir.
I yield back my time.
[The prepared statement of Mr. Hall of Texas follows:]
Prepared Statement of Representative Ralph M. Hall
Mr. Chairman, today's hearing on NASA's National Aviation
Operations Monitoring Service (NAOMS) is timely, especially considering
the amount of scrutiny this program has received in the press. Several
issues have arisen that bring into question the manner in which NASA
closed out NAOMS, whether it achieved its original goals, and the
agency's refusal to provide raw survey data to the press in response to
a Freedom of Information Act request. I am optimistic that, by the
hearing's conclusion, we'll all have a clear understanding regarding
these and other pressing issues.
I do want to associate myself with NASA Administrator Mike
Griffin's public statement that lays out the agency's philosophy on the
treatment of research data. Like him, I believe NASA ought to be in the
business of putting information in front of the public, not withholding
it. That being said, every care should be taken to protect the
identities of survey respondents. NAOMS promised pilots complete
confidentiality to ensure their candid participation, and that ought
not be breached. If information is disclosed that may allow respondents
to be identified, there will be a serious chilling effect in future
survey efforts funded by the Federal Government, whether we're talking
about pilots or other citizen groups who provide our government
meaningful insight into a whole host of activities. In the case of
NAOMS, we should be cognizant of striking a balance between
transparency and confidentiality.
NASA should release the survey data, but to help all of us gain a
better understanding of what it is telling us, they should also provide
information, whether in the form of analysis, methodology, or reports,
to give us a clear sense of context. It's also important that the data
be scrubbed to ensure errors are eliminated.
I want to thank our witnesses for taking time from their busy
schedules to appear before us this afternoon, and acknowledge their
hard work and preparation. All of us appreciate your willingness to be
here.
Thank you, Mr. Chairman.
Chairman Gordon. Thank you, Mr. Hall from Texas.
If there are additional Members who wish to submit
opening statements, your statements will be added to the
record. Without objection, so ordered.
[The prepared statement of Mr. Udall follows:]
Prepared Statement of Chairman Mark Udall
Good afternoon. I am disappointed that we have had to convene
today's hearing. But NASA's stated rationale for refusing to release
publicly information from the taxpayer-funded National Aviation
Operations Monitoring Service (NAOMS) aviation safety survey is
unsupportable and required congressional scrutiny. The safety of the
public has to be our first priority, especially with more and more
Americans flying every year.
Specifically, in its response to the Associated Press's request for
release of the NAOMS aviation safety survey data, NASA stated that:
``Release of the requested data, which are sensitive and safety-related
could materially affect the public confidence in, and the commercial
welfare of, the air carriers and general aviation companies whose
pilots participated in the survey.''
NASA's response in effect seems to be saying that it sees its job
as putting the commercial interests of the aviation industry above the
public's right to aviation safety information.
That response is unacceptable. It's certainly not in accordance
with the National Aeronautics and Space Act of 1958, which created NASA
and established objectives for the agency--one of which is ``the
improvement of the usefulness, performance, speed, safety, and
efficiency of aeronautical and space vehicles,'' while directing NASA
to operate in a manner that will ``provide for the widest practicable
and appropriate dissemination of information concerning its activities
and the results thereof.''
The NASA Administrator has since distanced himself from the
language in NASA's response to the FOIA request, saying that he regrets
``the impression that NASA was in any way trying to put commercial
interests ahead of public safety. That was not and will never be the
case.''
I'd like to hear the Administrator reiterate that stance at today's
hearing. And although I am glad that he has now agreed to release at
least some of the NAOMS data publicly so that it can be used to help
maintain and hopefully improve the safety of the Nation's airways, I
feel strongly that all the NAOMS data should be made publicly available
as soon as possible.
I intend to be vigilant to ensure that this release actually occurs
in a timely manner.
Former National Transportation Safety Board Chairman Jim Hall, who is one
of our witnesses today, got it right in his prepared testimony when he
wrote that ``It is difficult to overemphasize the importance of
transparency and accountability in aviation. It is the single greatest
reason why you are so safe when you get on an airplane today.'' I
wholeheartedly agree. We need to work hard to expand that transparency
and accountability--not restrict it. And that is why all the
information from the study must be released--and soon.
Yet, the struggle over the fate of the NAOMS data is not the only
issue that needs attention at today's hearing. We also need to decide
where we should go from here. We will hear from a number of witnesses
here today about the value of a comprehensive, ongoing survey and
analysis approach to aviation safety trend analysis and accident
precursor identification--the approach exemplified by the NAOMS
project.
As Chairman of the Space and Aeronautics Subcommittee, I have
oversight responsibility for both NASA's aeronautics and aviation R&D
programs and FAA's aviation R&D programs.
I intend to make sure that the government is taking all necessary
steps to have the aviation safety data sources and analysis tools that
will be needed to maintain air safety in the coming years.
Based on testimony we will hear today, there appears to be a great
deal of merit to the NAOMS approach, and we need to assess whether NASA
and FAA should reinstitute the project. Given its potential value and
the modest amounts of funding required to make effective use of the
NAOMS methodology relative to the more than $30 billion spent on NASA
and FAA annually, I think the burden of proof should be on those who
want to walk away from the investment made to date in the NAOMS
project.
I am aware that a number of FAA officials have indicated that the
FAA is not interested in NAOMS and would rather develop a new aviation
safety information system combining data from multiple existing safety
and performance data bases. Making as effective use as possible of
existing data bases is a worthy objective, and one that quite frankly
FAA should have been doing all along. However, FAA's own documentation
states that it doesn't envision completing more than ``the Phase 1 pre-
implementation activities, including concept definition'' for the
proposed new combined Aviation Safety Information Analysis and Sharing
(ASIAS) system until 2013 at the earliest.
That's an unacceptably long time to wait, when it appears that NASA
and FAA could be generating useful safety trend and accident precursor
information--which will help keep the flying public safe--from a
restarted NAOMS initiative almost immediately.
It also doesn't address the question of whether NAOMS could provide
additional valuable insights into the safety status and trends for the
Nation's air transportation system beyond those available from existing
data bases.
These issues go beyond what we are likely to have time to consider
today, so I intend to have the Space and Aeronautics Subcommittee
pursue them in the coming months.
Mr. Chairman, we can take pride in the overall safety record of
America's air transportation system. However, we dare not rest on our
laurels. We need to be vigilant to ensure that all is being done that
should be done to maintain and improve that safety record--and the
information gained from the taxpayer-funded NAOMS study is very
important to our work. This hearing is an important step in meeting our
safety oversight responsibilities, and I am glad we are holding it.
[The prepared statement of Mr. Feeney follows:]
Prepared Statement of Representative Tom Feeney
When this hearing was first scheduled, allegations of cover up and
document destruction swirled in the air. So I initially thought--how
did the Science and Technology Committee obtain jurisdiction over Sandy
Berger's escapades at the National Archives? Alas, that topic remains
untouched.
Originally, the Full Committee was to spend today examining the
environmental and safety aspects of nanotechnology--a timely and
thoughtful topic given nanotechnology's current and future importance.
Such a hearing would continue this committee's serious treatment of
serious issues.
But like a cop on the beat, the powers-that-be have apparently
given this committee a quota of ``oversight'' tickets to write.
Infractions must be found and highlighted with great drama. So the
nanotechnology hearing was relegated to a subcommittee and replaced
with today's festivities. But to paraphrase Gertrude Stein, the trouble
with today's hearing is that ``when you get there, there isn't any
there there.''
Here's today's kerfuffle in a nutshell. Starting in fiscal year
1998, NASA funded a research project--the National Aviation Operations
Monitoring Service (NAOMS)--that attempted to use telephone survey data
to provide a representative picture of aviation system safety. Over
eight years, $11.3 million (0.00867582 percent of NASA's budget over
this period) was spent on this non-peer reviewed research.
Unfortunately, NAOMS failed to yield worthwhile information.
Instead, it painted a picture of the aviation system with anomaly rates
(such as engine failures) that bore no relationship with reality. It's
as if the public were polled and the data suggested a 75 percent
approval rate for today's Congress. Any politician would know that
something was terribly wrong with that survey's methodology.
Four months ago, the Associated Press made a Freedom of Information
Act (FOIA) request for the raw and rather stale data collected in the
NAOMS study. NASA denied that request and used some inarticulate
reasoning.
When this matter was brought to NASA Administrator Mike Griffin's
attention, he promptly responded with his characteristic pattern of
integrity, candor, and action. Griffin has vowed to bring openness and
transparency to NASA. In that type of environment, participants feel
empowered to acknowledge and address problems--a behavior that could
have averted the Challenger and Columbia tragedies. Thus, Griffin
promptly acknowledged that NASA should have better handled this FOIA
request and vowed to correct the matter.
And so he has. Griffin has determined that this data should be
released and will do so once confidential information is redacted
(survey participants were promised confidentiality in return for their
candor). Furthermore, he has cautioned about properly interpreting the
data since the survey methodology appears to be quite flawed.
In the wake of the Columbia Accident Investigation Board's finding
of a NASA culture discouraging openness and frankness, one would think
Administrator Griffin would be commended for his leadership. After all,
leaders set examples. Here he has promptly responded to a concern,
acknowledged an error, and outlined corrective actions. Isn't this the
type of conduct to be encouraged?
But that would deviate from today's script and ruin the planned
drama. So like the abusive spouse who enjoys publicly brow-beating his
partner, the Majority will undoubtedly pummel NASA's finest
Administrator in recent memory. No acknowledgement of error or
corrective action will satisfy the belittling and rampaging spouse.
Undoubtedly at another forum, today's inquisitors will bemoan how
skilled, accomplished, and decent people eschew public service. Or how
today's Congress avoids addressing issues of genuine concern to the
governed.
[The prepared statement of Mr. Costello follows:]
Prepared Statement of Representative Jerry F. Costello
Mr. Chairman, I am pleased that the Committee is pursuing this
issue, as the reports surrounding NASA's NAOMS program and its refusal
to release initial data have been troubling.
As Chairman of the Aviation Subcommittee of the Committee on
Transportation and Infrastructure, I firmly believe that safety must be
our top priority. As Ranking Member of the Subcommittee last Congress,
I called for a comprehensive hearing on aviation safety and since
becoming Chairman, I have held numerous subsequent hearings that have
highlighted the importance of this issue.
What concerns me regarding NASA's handling of the NAOMS study is
that regardless of the initial findings of the study, this information
has the ability to help improve transportation safety, and that should
be our priority, not the possible adverse effects the information may
or may not have on the industry. In addition, this situation has been
handled poorly by NASA, and it fits into a pattern of reluctance to
release information--particularly regarding safety--and concerns that
NASA officials are too close to, and too quick to protect, the
interests of industry.
Again, Mr. Chairman, I'd like to commend you for calling this
hearing, I am very interested in learning the findings of this study,
and how we can use the information to help ensure the safety of all air
travelers.
[The prepared statement of Mr. Miller follows:]
Prepared Statement of Representative Brad Miller
The purpose of today's hearing is to look at the National
Aeronautics and Space Administration's (NASA) management of the
National Aviation Operations Monitoring Service (NAOMS), and to examine
how, in the absence of a system such as NAOMS, NASA plans on monitoring
air safety in the future.
Every year more planes are in the air, and each year brings new
challenges to aviation safety. The purpose of NAOMS was to identify
problems with both increasing demand and the introduction of new
technologies. Instead of reacting to aviation disasters, NAOMS would
have been able to identify emerging safety problems. The program
appears to be a cost-effective and scientifically valid way of looking
at airline safety. More important, I would like to know what NASA is
going to do to ensure Americans' safety in the absence of NAOMS.
I am glad that NASA and Administrator Griffin have voiced a
willingness to release the data gathered under the NAOMS project.
Analysis of this data could be a key tool in understanding what is
happening at U.S. airports. I understand that there is some concern over
the release of proprietary commercial data and the anonymity of survey
participants. It is my strong hope that NASA will take realistic
precautions to ensure anonymity, but not let that become an excuse not
to release the data in a timely manner.
[The prepared statement of Mr. Lipinski follows:]
Prepared Statement of Representative Daniel Lipinski
Thank you, Mr. Chairman.
This is a very timely subject and one that is extremely important
to the residents of the 3rd District of Illinois. Chicago is a key
national and international aviation hub and collaboration is key to
ensuring the continued safety and vitality of the aviation industry. At
Midway International Airport in my District, working collaboratively, we
brought new safety upgrades online that will greatly enhance the
safety of the flying public and everyone who works at the airport. And
through additional collaboration, such as the sharing of informative
data findings from your report, we can work to further improve the
safety of our nation's aviation industry.
This issue hits especially close to home for me. Many remember the
tragic accident in 2005 when an aircraft skidded off the runway at
Midway Airport into a passing car, killing a young boy. That is why, as
a Member of the T&I Committee's Subcommittee on Aviation, I worked hard
to incorporate necessary funding into this year's FAA reauthorization
bill that will make our runways safer and increase aviation safety
inspectors by more than one-third. I also sought to ensure the
accelerated implementation of the Next Generation Air Transportation
system, which will allow our air traffic control system to meet two to
three times the amount of current demand and keep pace with the ever-
increasing number of flights.
[The prepared statement of Mr. Mitchell follows:]
Prepared Statement of Representative Harry E. Mitchell
Thank you, Mr. Chairman.
Like most Americans, I was stunned last week to hear that NASA had
refused to release the results of an $11 million survey of airline
pilots on potential safety lapses in our nation's aviation network. .
.because the information ``could undermine public confidence in the
airlines and could affect the airlines' profits.''
The idea that the Federal Government would put private profits
ahead of the flying public's safety is outrageous and inexcusable.
The only thing more shocking about this awful decision is where it
came from. We're talking about NASA--the agency that houses some of the
best and brightest minds on Earth.
But it shouldn't take a rocket scientist to figure out that safety
comes first.
Aviation is serious business in my district. One of the Nation's
largest airlines is headquartered in Tempe, and Phoenix Sky Harbor is
now the eighth busiest in the country. We depend on aviation. . .and we
depend on the Federal Government to keep our skies safe.
NASA's survey reportedly contains information. . .from pilots. .
.about runway incursions, wildlife strikes, and near collisions. These
are real risks. If pilots have concerns about them, we need to know.
And if NASA wants to tell us that its survey methodology was
flawed. . .and, therefore, the results of its survey are inconclusive.
. .then we need to know how they were able to waste $11 million
in taxpayer dollars creating and conducting it.
Is it really asking too much for us to expect NASA to know a thing
or two about scientific methodology?
The flying public deserves an explanation.
They deserve to know how this happened. . .but more importantly,
what is being done to correct the situation, and what steps are being
taken to ensure that something like this never happens again.
I look forward to hearing from our witnesses.
I yield back.
Chairman Gordon. At this time I would like to recognize our
first panel. First we have Dr. Michael Griffin, who is the
Administrator of the National Aeronautics and Space
Administration, and I will concur with Mr. Hall's accolades,
even the youthfulness. And we also have Mr. Jim Hall, who is a
Managing Partner at Hall and Associates and is also the Former
Chairman of the National Transportation Safety Board.
Welcome to you both.
And Administrator Griffin, we will begin with you.
Panel 1:
STATEMENT OF DR. MICHAEL D. GRIFFIN, ADMINISTRATOR, NATIONAL
AERONAUTICS AND SPACE ADMINISTRATION (NASA)
Dr. Griffin. Thank you, Mr. Gordon, Mr. Hall for your kind
statements. I only wish I were still young, but, oh, well. It
is all a matter of relativity here. Mr. Hall is my hero. He is
still on the right side of the dais.
So, thank you, Mr. Chairman, Members of the Committee for
the opportunity to appear here today to discuss aviation safety
and the NAOMS Project. When I was made aware last week that
NAOMS pilot survey data had been withheld under a Freedom of
Information Act request initiated by the AP, I asked Dr. Lisa
Porter, our Associate Administrator for Aeronautics Research, to investigate the
matter. And I hope to provide you with the information that
will address the questions and the concerns that have been
raised by you and others in the past several days.
Let me start by making three points clear up front. First,
the survey results that we can legally release will be
released. Period. Second, the contractor and NASA maintain master
copies of all NAOMS survey results, and we have instructed the
NAOMS project management team and the contractor, Battelle, to
retain all records related to the project. Battelle provided
the same direction to its subcontractors. Also, sir, your staff
has this data.
Third, the NAOMS Project had from its inception a planned
and finite duration. It was not terminated early. It was, in
fact, extended, and it was not terminated early to provide
funds for the Moon Mars Program or anything else.
Quite simply, the NAOMS Project began in 1998, with the
goal of developing methods to facilitate a data-driven approach
to aviation systems safety analysis. To accomplish this goal
required the generation of data that are statistically
meaningful and representative of the system. The NAOMS Project
Team developed a survey methodology to acquire that data. The
survey methodology development took about two years to
complete.
The actual data collection using that methodology began in
April of '01, and ended in December of '04. During that time
the project team surveyed approximately 24,000
commercial airline pilots and 5,000 general aviation pilots. In
early '05, it was determined that the number of survey results
collected were sufficient to evaluate whether the NAOMS survey
methodology indeed produced statistically meaningful and
representative data.
NASA's Aviation Safety and Security Program leadership then
directed the NAOMS Project to complete the assessment of its
survey methodology and transfer it to industry and government
decision-makers and provided the FY 2005 funding to do that.
It is worth noting that the 2004 review of NASA's aerospace
technology enterprise by the National Academies concluded at
that time that there was not a compelling argument for
continued independent data collection in the NAOMS Project. In
fact, quoting from that report, the ``NAOMS Project seems to be
developing a methodology to establish trends in aviation safety
performance that are already available through other sources
within industry and government.''
In 2006, the Aviation Safety Program of NASA's Aeronautics
Research Mission Directorate provided additional funding to
complete the transition and to document the results. The
transition of the survey methodology has now been successfully
completed, but the documentation has taken longer to complete
than anticipated. That will be completed by the end of this
year.
Now, it has been widely reported that NAOMS funding was cut
or prematurely ended. That is not the case. When the project
originated in 1998, it was intended to continue until 2004, as
indicated in project briefings that were provided to various
government and industry audiences when it began. Copies of
these briefings have been provided to Committee staff for the
record.
As I previously mentioned, funding was extended through
'06, to allow for transition of the methodology and final
documentation. And the total amount that we have now spent on
this effort has been $11.3 million.
Now, with all that said, the overarching goal of
trying to develop methodologies that enable data-driven safety
analyses is one that we at NASA continue to embrace in the
current Aviation Safety Program, and we do so in close
partnership with the FAA, industry, and academia.
In order to significantly reduce the accident rate to meet
the expected growth of the next generation air transportation
system, it is imperative to develop a robust safety information
system that discovers safety precursors before accidents occur.
Accomplishing this requires the ability to combine and analyze
enormous amounts of data from varied sources to detect and act
on new safety threats.
To address this challenge, NASA and FAA are combining their
separate and unique skills and resources under clearly-defined
roles and responsibilities. NASA is focused on the development
of advanced analysis algorithms that can be implemented in a
comprehensive system that the FAA can utilize to effectively
analyze a wide variety of safety data.
In order to ensure that the technology is effectively
transitioned between the organizations, a program plan has been
developed and is being executed. The initial response to this
approach from the stakeholder community has been very positive.
The FAA's Research, Engineering and Development Advisory
Committee (REDAC) Safety Subcommittee reported at its recent
meeting in October of '07 that it ``believes significant
progress has been made over the past year'' in defining the
program and its execution. The Safety Subcommittee credited the
leadership of both FAA and NASA for ``driving a well-
integrated plan that will form the basis for proactive risk
identification and assessment in the future.''
There has been a lot of speculation in the press regarding
what the NAOMS survey might reveal about the safety of the
National Airspace System. Several briefings were given to
other government agencies and industry organizations by members
of the NAOMS Project Team, and some of those presentations
included some analyses that were based upon extrapolating the
survey results to estimate absolute numbers of
events that would occur within a given time period. When this
was done, for many of these events the numbers were
significantly higher than reported by other means such as the
Aviation Safety Reporting System, or ASRS, which NASA manages by
statute.
However, no attempt was made to validate the NAOMS
extrapolation methodology, and indeed, given the results for
some cases such as engine failure events that are highly public
and carefully documented affairs, there may be a reason to
question the validity of the methodology itself. It is
interesting to note here that in NASA's own Safety Reporting
System, the NSRS, 40 percent of the events which are reported
are later found to be either overstated,
unverifiable, or not significant enough to require follow-up.
While some analysis of the survey results was presented to
NASA, other government agencies and other personnel,
unfortunately none of the research conducted in the NAOMS
Project, including the underlying survey methodology, has been
peer reviewed to date. Accordingly, any
product of the NAOMS Project, including the survey methodology,
the resulting data, and any analysis of that data should not be
considered at this stage as having been
validated.
So in plain speaking, when I said we will release whatever
data can legally be released, we will do that, but we do not
certify that data.
There has been considerable attention in the press to the
supposed destruction of NAOMS data. In fact, Battelle, the
prime contractor, maintains master copies of all survey data on
CDs and other back-up media in its Mountain View facility.
NASA's Ames Research Center at Moffett Field also has copies
of this data.
We had directed Battelle to recover or to ensure the secure
destruction of any copies of survey results that might be held
at locations outside Mountain View. This includes copies held
by present or past Battelle NAOMS subcontractors. The purpose
of that request was to ensure compliance with NASA's data
security requirements as part of the contract close-out
process, because the contract was scheduled to end in October
of '07. This request in no way jeopardized the security of the
master copies, which remain secure at Battelle and at Ames.
To ensure that no destruction of survey results occurs,
however, including those held by subcontractors, after the
concerns about data destruction were
raised by this committee, NASA directed the NAOMS Project
Management Team and Battelle to retain all records related to
the NAOMS Project, and Battelle provided the same direction to
its subcontractors. We have provided all this information to
the Committee.
Finally, let me focus on the Freedom of Information Act
request. Under federal law we at NASA are required to protect
confidential commercial information that is voluntarily
provided to the agency and would not customarily be released to
the public. That is the law. In preparing our response to the
AP Freedom of Information Act appeal, the characterization of
the requested data by Ames researchers raised concerns that the
data likely contained confidential commercial information. This
characterization was the basis for withholding the data under
Exemption 4.
Now, considerable attention has been focused on one
sentence in the final determination letter suggesting the data
was being withheld because it could ``affect public confidence
in, and the commercial welfare of, air carriers and general
aviation companies.'' Now, I have already made it clear that I
do not agree with the way this was written, and I regret any
impression that NASA was or would in any way try to put
commercial interests ahead of public safety. That was not and
will never be the case.
As for our plans for the data, I have directed that all
NAOMS data not containing confidential commercial information
or information that could compromise the anonymity of
individual pilots be released as soon as possible. But at
present we are concerned that a knowledgeable person could
identify a specific individual or trace specific events back to
a specific individual, and we must protect against that, and
against the compromise of any proprietary commercial
information.
We will receive a written report from Battelle by the end of
this year that will include a description of the methodology,
the approach, the field trials, et cetera. We will make this
report available to any interested party. We intend to continue
to emphasize the importance of peer review of all research
results, whether conducted by NASA researchers or by
contractors funded by NASA. Peer review is critical to the
achievement of technical excellence.
Let me conclude by thanking you for this opportunity to
appear before you to discuss the NAOMS issue and to answer your
questions. Thank you.
[The prepared statement of Dr. Griffin follows:]
Prepared Statement of Michael D. Griffin
Mr. Chairman and Members of the Committee, thank you for this
opportunity to appear before you today to discuss the National Aviation
Operations Monitoring Service (NAOMS) project, and the issue concerning
the release of data obtained by various researchers pursuant to that
project. When I was made aware last week that NAOMS pilot survey data
had been withheld under a Freedom of Information Act request initiated
by the Associated Press, I asked Dr. Lisa Porter, Associate
Administrator for Aeronautics Research, to investigate the matter. I
hope to provide you with information that will address the questions
and concerns that have been raised by you and others during the past
several days.
What is NAOMS?
There has been some confusion regarding what NAOMS actually is. The
NAOMS project began in 1998 with an overarching goal of developing
methods to facilitate a data-driven approach to aviation system safety
analysis. Accomplishing this goal requires the generation of data that
are statistically meaningful and representative of the system. The
NAOMS project team decided to develop a survey methodology to acquire
such data. The survey methodology development took roughly two years to
complete. The actual data collection using the methodology began in
April 2001 and ended in December 2004. During that time, the project
team surveyed approximately 24,000 commercial airline pilots and
approximately 5,000 general aviation pilots.
In early 2005, it was determined that the amount of data collected
was sufficient to evaluate whether the NAOMS survey methodology indeed
produced statistically meaningful and representative data. NASA's
Aviation Safety and Security Program leadership thus directed the NAOMS
project to complete the assessment of its survey methodology and
transfer it to industry-government decision-makers (Commercial Aviation
Safety Team [CAST] and Air Line Pilots Association [ALPA]), and
provided FY 2005 funding to do so. It is worth noting that the 2004
Review of NASA's Aerospace Technology Enterprise by the National
Academies concluded that there was not a compelling argument for
continued independent data collection in the NAOMS project. In FY 2006,
the Aviation Safety Program of the Aeronautics Research Mission
Directorate (ARMD) provided additional funding to complete the
transition and to document the results. The transition of the survey
methodology has been successfully completed, but the documentation has
taken longer to complete than anticipated. The documentation will be
completed by the end of this year.
Why was funding for NAOMS cut?
It has been widely reported that NAOMS funding was cut or
prematurely shut down. That is not the case. When the project
originated in 1998, it was intended to continue until 2004, as
indicated in project briefings that were provided to various government
and industry audiences when the project began. (These briefings have
been provided to the Committee for the record. Later briefings
indicated an extension to 2005.) As I previously mentioned, funding was
extended through 2006 to allow for transition of the methodology and
final documentation. The total amount we spent on this effort was
$11.3M.
That said, the overarching goal of trying to develop methodologies
that enable data-driven system safety analyses is one that NASA
continues to embrace in its current Aviation Safety Program, in close
partnership with the FAA, industry, and academia. In order to
continually and significantly reduce the accident rate to meet the
expected growth of the Next Generation Air Transportation System
(NextGen), it is imperative to develop a robust safety information
system that discovers safety precursors before accidents occur.
Accomplishing this requires the ability to combine and analyze vast
amounts of data from many varied sources to detect and act on new
safety threats.
NASA and the FAA are combining their unique skills and resources
under clearly defined roles and responsibilities to address this
challenge. NASA is focused on the development of advanced analysis
algorithms that can be implemented in a comprehensive system that the
FAA can utilize to effectively analyze a wide variety of safety data.
In order to ensure that the technology is effectively transitioned
between organizations, a program plan has been developed and is being
executed. The initial response to this approach from the stakeholder
community has been very positive. The FAA Research Engineering and
Development Advisory Committee (REDAC) Safety Subcommittee recently
reported out to the REDAC in October 2007 that it ``believes
significant progress has been made over the past year'' in defining the
program and its execution. The Subcommittee credited the leadership of
both the FAA and NASA for ``driving a well integrated plan that will
form the basis for proactive risk identification and assessment in the
future.''
What do the data show?
There has been much speculation in the press regarding what the
data will reveal about the safety of our national airspace system.
Several briefings were given to other government and industry
organizations by members of the NAOMS project team, and some of those
presentations included some analyses that were based upon extrapolation
methods to estimate absolute numbers of events occurring within a given
time period. For many of these events, the numbers were significantly
higher than reported by other means, such as the Aviation Safety
Reporting System (ASRS). However, there was no attempt made to validate
the extrapolation methodology. Indeed, given the results for some
examples such as engine failure events, there may be reason to question
the validity of the methodology.
While some analysis of the data was presented to NASA and other
government personnel, unfortunately, none of the research conducted in
the NAOMS project, including the survey methodology, has been peer-
reviewed to date. Accordingly, any product of the NAOMS project,
including the survey methodology, the data, and any analysis of that
data, should not be viewed or considered at this stage as having been
validated.
Did NASA destroy any data?
There has been considerable attention in the press to the supposed
destruction of NAOMS data. Battelle Memorial Institute, the prime
contractor, maintains master copies of all NAOMS survey results on
compact discs and other backup media in its Mountain View, Calif.,
facility. NASA's Ames Research Center at Moffett Field, Calif., also
maintains copies of this data.
NASA had directed Battelle to recover, or ensure secure destruction
of, any copies of the NAOMS data that might be held at locations
outside of Mountain View. This includes copies held by present or past
Battelle NAOMS subcontractors. The purpose of this request was to
ensure compliance with NASA data security requirements as part of the
contract close-out process, because the contract is scheduled to end in
October 2007. This request in no way jeopardized the security of the
master copies, which remain secure at Battelle and the Ames
Research Center.
To ensure that no destruction of data, including data held by
subcontractors, occurred after concerns about data destruction were
raised by this committee, NASA directed the NAOMS project management
team and Battelle to retain all records related to the NAOMS project.
Battelle provided the same direction to its subcontractors.
Dissemination of research results
One of the most important NASA principles is to ensure the
dissemination of research results to the widest practical and
appropriate extent. This principle has received particular focus during
the restructuring of ARMD. The emphasis on open dissemination is
clearly stated in ARMD's fully and openly competed NASA Research
Announcements as well as in the Space Act Agreements that it
establishes with commercial organizations for collaborative research.
Furthermore, all of ARMD's project plans include documentation and
publication of results as deliverables. We firmly believe in the
importance of the peer-review process, which is essential for ensuring
technical excellence.
Why did NASA reject the FOIA request?
Under federal law, NASA is required to protect confidential
commercial information that is voluntarily provided to the agency and
would not customarily be released to the public. In preparing the
response to the Associated Press' Freedom of Information Act appeal,
the characterization of the requested data by Ames researchers raised
concerns that the data likely contained confidential commercial
information. This characterization was the basis for withholding the
data under Exemption 4.
Considerable attention has been focused on one sentence in the
final determination letter suggesting the data was being withheld
because it could ``affect the public confidence in, and the commercial
welfare of, the air carriers and general aviation companies.'' I have
already made clear that I do not agree with the way it was written. I
regret any impression that NASA was in any way trying to put commercial
interests ahead of public safety. That was not and never will be the
case.
NASA plans
I have directed that all NAOMS data that does not contain
confidential commercial information, or information that could
compromise the anonymity of individual pilots, be released as soon as
possible. The release of this data will be accompanied with the proviso
that neither the methodology nor the results have received the level of
peer review required of a NASA research project. Therefore, the survey
methodology and the data should not be considered to have been
verified.
NASA will receive a final report from Battelle by December 31, 2007,
that will include a comprehensive description of the methodology,
including approach, field trials, etc. NASA will make this report
available to any interested party.
We intend to continue to emphasize the importance of peer-review of
all research results, whether conducted by NASA researchers or
contractors funded by NASA. Peer-review is critical to the achievement
of technical excellence.
Concluding remarks
Let me conclude by thanking you again for this opportunity to
appear before you to discuss NAOMS and to answer your questions.
Chairman Gordon. Thank you, Dr. Griffin, for your candor
once again, and Mr. Hall, you are recognized.
STATEMENT OF MR. JAMES E. HALL, MANAGING PARTNER, HALL AND
ASSOCIATES, LLC; FORMER CHAIRMAN, NATIONAL TRANSPORTATION
SAFETY BOARD (NTSB)
Mr. Hall. Thank you, Mr. Chairman, Representative Hall, and
distinguished Members of this committee. I have provided
extended testimony that I would like to submit for the record
if it pleases the Chairman.
Chairman Gordon. No objection.
Mr. Hall. And it is, I think, significant that this meeting
is being held on the eighth anniversary of the EgyptAir
accident that occurred during my watch at the NTSB. I
appreciate the opportunity to speak on aviation safety. Can
NASA do more to protect the public? This is one of the issues
that was addressed 10 years ago by the 1996 White House
Commission on Aviation Safety and Security, which I had the
privilege to serve on. The commission was prompted in large
part by the tragic aviation accidents of that year, ValuJet and
TWA 800.
Before I begin, however, I would like to share with this
committee the most important thing I learned in my seven years
at the NTSB, and that is that the culture of aviation safety
has been built upon constant critical self-examination. Open
and transparent information
flow is the key to aviation safety. With openness in mind, the
members of the 1996 commission felt that we needed to get ahead
of events in a rapidly changing environment to be able to
improve the safety and security of aviation before, not after,
another tragic accident occurred.
Notable safety recommendations issued by the commission
included the establishment of standards for continuous safety
improvement; a target rate of 80 percent was set for the
reduction of fatal accidents. And the commission urged NASA,
which has considerable expertise and resources in the area of
safety research, to expand its involvement in the promotion of
aviation safety.
It is this last point, the extremely important safety
research function, that brings us here today. Since the
commission
met, we have seen a 65 percent reduction in fatal accidents.
While this is certainly welcome news, there are dangerous
trends in the aviation industry that stand to jeopardize that
progress. These include air traffic controller and pilot
staffing levels, the number of runway incursions, the dramatic
increase we will see in general aviation, the development and
implementation of NextGen, UAVs and the explosion in passenger
levels, which the Chairman referred to and which is estimated
to reach 2.3 billion by the year 2027.
More work indeed remains, which makes it all the more
frustrating that NASA withheld results obtained from what I
first believed was an $8.5 million taxpayer-funded national
survey of almost 24,000 pilots. This survey reportedly states
that runway incursions, wildlife strikes, and near collisions
occur at a rate at least twice as high as is commonly thought.
As justification for its denial of a FOIA request, a NASA
spokesman cited the potentially harmful effects on the
commercial welfare of the air carriers and public confidence in
aviation.
Such action, I believe, runs counter to the safety culture
mentality that the government and industry have worked to
create over the past 10 years. As the Government
Accountability Office has observed, transparency forms the
fundamental basis
for any safety program. If we don't know something is broken,
we cannot fix it.
It is difficult to overemphasize the importance of
transparency and accountability in aviation. I know each one
of you Members probably flies weekly. I believe that
transparency and accountability are the single greatest
reason you are so safe when you get on an airplane today. The
history of transparency began with the Wright Brothers, who
assisted in the investigation of the first fatal aviation
accident in 1908, and used the results to incorporate changes
to their flying machine in order to save lives.
This open process has resulted in numerous important
advances in aviation. NTSB investigations and recommendations
have led to the advent of the Traffic Alert and Collision
Avoidance System, commonly known as TCAS, the Low-Level Wind
Shear Alert System, anti-collision systems, and ground
proximity warning systems, to name but a few.
To repeat, information flow is the key to safety. In its
investigations into the two Shuttle accidents in 1986 and 2003,
NASA itself noted that a decline in transparency and
accountability among management and not simply a lack of
adequate funding for safety was a root cause of both disasters.
Furthermore, because major aviation accidents are now such
a rarity, our ability to identify risks and maintain or
increase safety now depends primarily on our ability to fully
analyze incidents and trends. A true safety culture requires
transparency and constant vigilance. This vigilance is required
of all involved in the aviation industry, but its absence is
probably most glaring when it is the fault of government, the
servants of the American people.
NASA needs to release this information and fulfill its
responsibilities as envisioned by the 1996 White House
Commission. To do otherwise, I believe, flies in the face of
aviation history, responsible government, and common sense.
Thank you, Mr. Chairman.
[The prepared statement of Mr. Hall follows:]
Prepared Statement of James E. Hall
Good afternoon Mr. Chairman and Members of the Committee:
Thank you for allowing me the opportunity today to speak on the
subject of Aviation Safety: Can NASA Do More to Protect the Public? My
name is Jim Hall, and for more than seven years I served as Chairman of
the National Transportation Safety Board (NTSB). I also had the honor
to serve as a Commissioner on the 1996 White House Commission on
Aviation Safety and Security.
As you know, the NTSB is an independent federal agency charged by
Congress with investigating every civil aviation accident in the United
States as well as significant accidents in the other modes of
transportation--railroad, highway, marine, and pipeline. Since its
inception in 1967, the NTSB has investigated more than 124,000 aviation
accidents and over 10,000 surface transportation accidents, and has
also assisted many foreign governments with their own investigations.
In its issuance of more than 12,000 recommendations in all
transportation modes to more than 2,200 recipients, the Board has
established a solid reputation for diligence and impartiality. From
1994 to 2001, I headed this organization that serves as the ``eyes and
ears'' of the American people at aviation and other transportation
accidents across the country and around the world. Now, as a
transportation safety and security consultant, I continue my commitment
to promoting safety in our nation's transportation system.
Today I would like to put the current aviation safety environment
in a historical context. Ten years ago we were confronted with a
special situation of change and risk in the aviation industry. In
response, the Commission on Aviation Safety and Security was formed,
which I will discuss in a moment. I believe that today we face a
similar situation, what I like to call ``the next generation of
risks.''
The Gore Commission
In 1996, the Federal Government initiated a decade-long overhaul of
aviation safety that began with the establishment of the White House
Commission on Aviation Safety and Security, headed by Vice President Al
Gore. The Gore Commission, as it would come to be called, was formed
for three major reasons.
On May 11, 1996, ValuJet flight 592 crashed in the Everglades after
an in-flight fire caused by transported oxygen canisters, killing all
110 people on board. In the resulting NTSB investigation, we found
airline contractors and ValuJet--an airline that had been formed just
three years prior to the flight 592 crash--negligent in several areas,
including oversight and mishandling of hazardous materials. We also
determined that if previous recommendations issued in 1988 regarding fire
detection and extinguishing systems had been adopted, flight 592 would
likely not have crashed. It was, therefore, a largely preventable and
tragic loss of life.
The second major reason for the formation of the Gore Commission
was an incident occurring only two months after the ValuJet crash. On
July 17, 1996, Trans World Airlines Flight 800 experienced an in-flight
breakup following an explosion of the center wing fuel tank (CWT)
shortly after takeoff from John F. Kennedy Airport in New York City,
killing all 230 people on-board. After an extensive 17-month
investigation, we determined the source of the explosion to be an
ignition of the flammable fuel/air mixture in the tank, an ignition
most likely caused by a short circuit outside of the fuel tank. The
NTSB issued specific recommendations on wiring and design as well as
broader management of the aging aircraft fleet. In the period
immediately following the crash, concerns of possible security problems
led President Clinton to call for an immediate report on aviation
security within 45 days.
The third reason that led to the Gore Commission was the general
feeling that aviation--an industry that generated $300 billion annually
and employed close to one million Americans--was undergoing profound
changes. In the ten years prior to 1996, the Federal Aviation
Administration (FAA) had certified twenty new aircraft models and the
number of passengers flying in the United States exceeded more than a
half billion. New digital technology was being developed to improve
communication and navigation. Sixty new airlines, such as ValuJet, had
started operations since 1992. The commercial airline fleet was both
quickly aging and in the midst of rapid replacement of aircraft. The
domestic market faced the possibility of increased competition from
foreign carriers. To add to this, the FAA predicted that by 2007, more
than 800 million passengers would fly in the United States.
In this setting, and in light of two very public and tragic
accidents, the Gore Commission was created with three specific
mandates: to examine security threats and ways to address them; to
analyze overall changes in the industry and the appropriate adaptation
of government regulation to these changes; and to look at technological
changes in the air traffic control system. All of us involved at the
time felt that we needed to ``get ahead'' of events in a rapidly
changing environment, to improve the safety and security of aviation
before--not after--another tragic accident occurred.
Over six months I and the fellow members of the commission--which
included the Secretary of Transportation, two retired Air Force
generals, the Director of the FBI, and several scientists--conducted dozens
of site visits in the U.S. and abroad, held six public meetings, and
co-sponsored an International Conference on Aviation Safety and
Security attended by over 700 representatives from sixty-one countries.
From our findings we issued some fifty-one separate recommendations
covering a variety of issues from safety to security to the
notification of family members following an incident.
Notable safety recommendations issued by the Commission included:
the establishment of standards for continuous safety improvement (a
target rate of 80 percent was set for the reduction of fatal
accidents); extension of FAA oversight to aviation contractors; the
simplification of Federal Aviation Regulations; an emphasis on human
factor safety research and training; and an extension of whistleblower
statutory protection to the aviation industry. To be sure, not every
recommendation made was subsequently enacted, nor was every possible
safety item individually addressed--no commission can claim perfection
in this respect. Nevertheless, many recommendations were in fact
adopted and perhaps even more significantly, the Presidential attention
shown to the issue sent a message to both government and industry
leaders that the establishment of a safety culture was not an option.
It is therefore no coincidence that in the ten year period following
the commission, the industry successfully reduced fatal accidents by 65
percent, 15 percent shy of the national goal, but noteworthy
nonetheless.
This reduction was due not only to the actions of the airlines but
to government efforts as well. The Commission charged the FAA,
Department of Transportation (DOT), and NTSB to be more vigorous in
their certification, regulation, and investigative functions. It also
urged the expansion of research, and specifically noted the need for
the National Aeronautics and Space Administration (NASA), ``which has
considerable expertise and resources in the area of safety research, to
expand its involvement in the promotion of aviation safety.''
As a result of the Commission's recommendation, NASA launched its
$500 million Aviation Safety Program (AvSP), a partnership with the
Department of Defense (DOD), FAA, and the aviation industry to focus on
accident prevention, accident mitigation, and aviation system
monitoring and modeling. It is this last point, the extremely important
safety research function, which brings us here today. Given a rapidly
changing environment and a new set of risks, the attempt on the part of
NASA to suppress safety data is a grave and dangerous challenge to the
safety culture that has developed over the last century of aviation
history, due to lessons learned from past accidents and incidents.
The Next Generation of Risks
The 65 percent reduction in fatal accidents over the past ten years
is certainly welcome news, but while many advances have been made,
there are dangerous trends in the aviation industry that stand to
jeopardize this progress.
We are currently in the middle of an air traffic controller
staffing crisis. Fueled in part by the lack of a contract, this crisis
has industry-wide consequences including: more and longer flight
delays, combined radar and tower control positions, and an increased
use of mandatory overtime resulting in an exhausted, stressed-out, and
burned-out workforce. According to the National Air Traffic Controllers
Association (NATCA), there were 856 retirements in fiscal year 2007
(7.4 percent of the total experienced controller workforce), leaving
the country with a 15-year low in the number of fully certified
controllers and a surplus of new hires--many with no air traffic
control experience or education. Total controller attrition in FY07 was
1,558, nearly wiping out any net gains in total staffing made by the
FAA's hiring efforts. In fact, the agency estimates it will lose about
70 percent of the air traffic controller workforce over the next 10
years.
Air Traffic Controllers are not the only ones retiring. Pilot
staffing levels are dangerously low as a result of retiring baby-
boomers and an explosion of new airlines and increased airline fleets
in Asia and the Middle East, raising similar concerns of an influx of
inexperienced and insufficiently trained pilots. In 2009, airlines will
have to fill 20,000 openings due to retirements and other factors. Some
airlines facing pilot shortages are lowering experience requirements to
the FAA minimum.
Other operational and technological areas present potentially
problematic trends as well. Runway incursions, which have been on the
NTSB's Most Wanted Safety Improvement list since 2001, totaled over
1,300 between fiscal years 2003 and 2006. Among the aviation safety
community, the Tenerife incursion accident that killed 583 people in
the Canary Islands in 1977 stands as a sober reminder of the importance
of getting this number down. The April 25, 2006 crash of an unmanned
aerial vehicle (UAV) in Nogales, Arizona, and the resulting NTSB
investigation and 22 recommendations illustrate the potential problems
with the growing expansion of drone flights in the U.S. General
aviation and the air ambulance fleet have also increased in the last
ten years; however, the FAA does not collect actual flight activity data
for general aviation operators and air taxis, instead using an annual
survey to query a sample of registered aircraft owners.
Several new aircraft types will emerge in the years ahead, ranging
from the jumbo Airbus A380 that seats more than 500 passengers--a jet
so large as to raise safety concerns in its own right--to very light
jets that might transport six or fewer passengers. As many as four to
five hundred new very light jets are scheduled to be introduced into
American airspace each year starting in 2008.
The Next Generation Air Transportation System (NextGen), a major
and much-needed technology upgrade for the air traffic control system
scheduled for completion in 2025, will only add to the variables that
need to be factored in aviation safety, especially if NextGen is not
adequately funded, implemented, or regulated.
Overshadowing all these developments is a major growth in demand
for air travel. In fiscal year 2006, over 740 million passengers flew
in American skies. That figure is projected to reach one billion by
2015 and close to 2.3 billion by 2027. These numbers are absolutely
staggering. On January 1, 2007, federal regulations on the number of
planes able to use J.F.K. Airport ended, and traffic has increased by
some 20 percent. Congestion and resulting delays may be inconvenient,
but they also increase the potential for mishaps. As a Government
Accountability Office (GAO) report released in February of this year noted,
``although the system remains extraordinarily safe, if the current
accident rate continues while air traffic potentially triples in the
next 20 years, this country would see nine fatal commercial accidents
each year, on average.''
I am not suggesting that nothing is being done to address these
issues. I think individuals such as Marion Blakey, former
administrator of the FAA, and Bobby Sturgell, current Acting
Administrator of the FAA, have taken strong steps to address safety
concerns. And yet, to again cite the GAO study, ``FAA's approaches to
safety require that the agency obtain accurate and complete data to
monitor safety trends, fully implement its safety programs, and assess
their effectiveness to determine if they are focused on the greatest
safety risk. FAA has made progress in this area but more work remains
[italics added].''
The Withholding of NASA's Data
More work indeed remains, which makes it all the more frustrating
that NASA withheld results obtained from an $8.5 million taxpayer-
funded national survey of almost 24,000 pilots. This survey reportedly
states that runway incursions, wildlife strikes, and near collisions
occur at a rate at least twice as high as is commonly thought. As
justification for its denial of a Freedom of Information Act request,
NASA cited the potentially harmful effects on the commercial welfare of
the air carriers and general aviation companies.
Such an action runs exactly counter to the safety culture mentality
the government and industry have worked to create over the past ten
years. As the GAO observed, transparency forms the fundamental basis
for any safety program. If we don't know something is broken, we cannot
fix it. If we do not know that runway incursions are actually occurring
at a much higher level, then we cannot take steps and assign the
resources to deal with them.
It is difficult to overemphasize the importance of transparency and
accountability in aviation. It is the single greatest reason why you
are so safe when you get on an airplane today. The history of
transparency began with the Wright Brothers, who assisted in the
investigation of the first fatal aviation accident and used the results
to incorporate changes to their flying machine in order to save lives.
In September 1908, five years after the Wrights' historic flight,
Orville and Lt. Thomas Selfridge were conducting an aerial
demonstration for the Army at Fort Myer, Virginia, when their airplane
stopped responding to controls and crashed, injuring Orville and
killing Lt. Selfridge. The Wright Brothers' commitment to objective
scrutiny and constant improvement set an historic precedent and has led
to a safety culture in aviation that is built on fact finding, analysis
and open sharing of information to advance aviation and save lives.
This open process has resulted in numerous important advances in
aviation. In the modern era, NTSB investigations and recommendations
have led to smoke detectors in airplane lavatories, floor level
lighting strips to lead passengers to emergency exits, anti-collision
systems, and ground proximity warning devices, to name but a few.
The industry often very clearly responds to the efforts of safety
research even before investigations are completed. On September 8,
1994, USAir flight 427, a Boeing 737, crashed while on approach to
Pittsburgh, Pennsylvania. After 80,000 hours of investigation, the NTSB
had not yet completed its final report but had issued several
recommendations. In response, Boeing and the FAA began developing and
certifying several modifications to the 737 main rudder power control
unit (PCU) servo valve. The FAA proposed an Airworthiness Directive to
require the installation of newly designed PCUs within two years. Most
airlines began providing training to pilots on the recognition,
prevention, and recovery of aircraft attitudes normally not associated
with air carrier flight operations.
On October 31, 1994, an American Eagle ATR-72 crashed in Roselawn,
Indiana. Seven days after the crash, we issued recommendations covering
the operation of those aircraft in
icing conditions. Thanks to a then-state-of-the-art flight recorder, we
were able to learn within days that the French-built ATR's upset was
initiated by a rapid deflection of the right aileron. The NTSB deduced
that this deflection was caused by the accumulation of a substantial
amount of ice on the wings during the 30 minutes the plane was in a
holding pattern. Within a week of the accident, the NTSB issued urgent
safety recommendations to the FAA to restrict the operation of ATRs in
icing conditions until a fix could be developed to counteract the
phenomenon the accident aircraft encountered. Within a month, following
test flights in the United States and France, the FAA effectively
grounded the aircraft in icing conditions. A redesign of the wing anti-
icing boots was developed, and the modified airplanes returned to the
skies.
One of the keys to the Roselawn investigation was the fact that the
flight data recorder (FDR) was recovered and that it recorded some 98
parameters, giving investigators ample information with which they
could quickly establish the cause of the accident and the most
appropriate fix. This contrasts with the FDR on-board flight 427 the
previous month, which recorded only 11 parameters and in so small part
delayed the release of the final investigation report by over four
years. In a sense, NASA's refusal to release their safety data is
tantamount to denying investigators access to black boxes. Both actions
seriously impede the ability to determine potentially critical safety
concerns.
Information flow is the key to safety, whether to the investigator
actually assembling pieces on the ground or to the analyst compiling
survey data back in the office. In its investigations into the two
Shuttle accidents in 1986 and 2003, NASA itself noted that a decline in
transparency and accountability among management--and not simply a lack
of adequate funding for safety--was a root cause of both incidents.
The investigation into the Challenger explosion specifically
faulted management isolation and a failure to provide full and timely
information. The final report of the Columbia Accident Investigation
Board (CAIB) noted that for both the Columbia and Challenger accidents,
``there were moments when management definitions of risk might have
been reversed were it not for the many missing signals--an absence of
trend analysis, imagery data not obtained, concerns not voiced,
information overlooked or dropped from briefings.'' The Chairman of the
CAIB, retired Navy Admiral Harold Gehman, pointed out that NASA tends to
initially follow safety procedures quite well, but then loses its
diligence as time progresses. Columbia investigation board member Air
Force Major General John Barry stated that ``there is still evidence of
a silent safety program with echoes of Challenger.'' Safety and silence
are simply incompatible.
The culture of aviation safety has been built on constant critical
self-examination, in an open environment, with full sharing of all the
facts and analysis. That we are safer today than yesterday does not
mean that we cannot be safer tomorrow. It also doesn't mean that our
gains are not perishable. For example, on July 2, 1994, USAir flight
1016 crashed in Charlotte, North Carolina. We determined that the
causal factor was something we hadn't seen in the United States in
almost a decade: wind shear. Wind shear detection equipment and
improved pilot training had all but eliminated this hazard and yet more
sophisticated weather detection equipment--Terminal Doppler Radar--had
fallen years behind schedule due to procurement and design problems.
Furthermore, because we have made major accidents such a rarity,
our ability to identify risks, and maintain or increase safety now
depends primarily on our ability to fully analyze incidents and trends.
In the absence of a major fatality accident or without a complete
picture of runway incursions, wildlife strikes, and near-misses, we may
be lulled into a false sense of security--only to have that eventually
broken by a catastrophic loss of life. A true safety culture requires
transparency and constant vigilance.
This vigilance is required of all involved in the aviation
industry, but its absence is perhaps most glaring when it is the fault
of government, the servants of the American people. As Chairman of the
NTSB, I followed the dictum of Benjamin Franklin, who said, ``The man
who does things makes many mistakes, but he never makes the biggest
mistake of all--doing nothing.'' I never wanted the American people to
think that, when a need was identified--as it was in any number of
safety-sensitive issues--we did nothing. Let us then not shrink from
action but rather call on NASA to release its information, the denial
of which flies in the face of aviation history, responsible government,
and common sense.
Conclusion
We are clearly facing a new generation of risks. New technology,
new planes, personnel shortages, and a massive projected increase in
air travel mean that new hazards are approaching. Before we push the
panic button, however, we should remember that we have been in this
situation before. In 1996, we projected an increase of 220 million
passengers in the next ten years and identified a host of technological
and operational concerns that would compound this development. In
response the President formed a commission and its recommendations--
though not perfect and not all implemented--contributed to a
substantial reduction in fatal accidents. Today in 2007, we are
forecasting an increase of 260 million passengers in the next eight
years and an increase of 1.5 billion in the next twenty. We have
personnel shortages looming or already underway and have committed
ourselves to new technology. In fact, the only major difference between
1996 and 2007 was 1996's dramatic and tragic loss of 340 lives in two
accidents.
Congress, government agencies, and the aviation industry must once
again come together to address the rapidly changing aviation
environment. We must stay ahead of events instead of waiting for
another crash. Steps must be taken to prevent a deterioration of our
nation's aviation safety culture, a deterioration that NASA's denial of
transparency plainly represents. Only in such a manner can we adapt to
a growing and diversifying industry while maintaining a rigid adherence
and commitment to the safety of all who fly in our nation's airspace.
Biography for James E. Hall
Jim Hall is a leading expert on crisis management and government
relations, and transportation safety and security, having served
government and private clients for more than 35 years.
Hall was nominated by President Clinton to the National
Transportation Safety Board in 1993, became the Board's Chairman in
1994 and led the Board through January 2001.
During his chairmanship, Hall worked tirelessly to improve safety
in all modes of transportation in the U.S. and abroad. He visited more
than 30 nations as Chairman, and oversaw a period of unprecedented
activity as the NTSB investigated numerous major aviation, rail,
pipeline and maritime accidents in the U.S. and assisted in many
international accident investigations. Among the major investigations
the NTSB conducted while Hall was Chairman were the aviation cases of
USAir 427, TWA 800, and EgyptAir 990, the Olympic Pipeline accident in
Bellingham, Wash., the Amtrak crash in Bourbonnais, Ill., and a
Carnival Cruise Line accident near Miami. In 1996, President Clinton
named Hall to the White House Commission on Aviation Safety and
Security.
Under Hall's leadership, the NTSB issued landmark safety studies on
commuter airlines, the air tour industry, the performance and use of
child restraint systems, personal watercraft, transit bus operations,
passive grade railway crossings, and the dangers posed to children by
passenger-side airbags in automobiles.
Hall began his career in Washington serving as counsel to the
Senate Subcommittee on Intergovernmental Relations and a member of the
staff of Senator Albert Gore, Sr. He maintained a private legal
practice in Chattanooga, Tennessee, before serving in the cabinet of
Tennessee Governor Ned McWherter. Hall served as Director of the
state's Planning Office for five years, and then returned to
Washington, D.C., to serve as Chief of Staff for Senator Harlan Mathews
before being appointed to the NTSB.
Today, Hall serves as an adviser to governments and private clients
on transportation safety and security, crisis management and government
relations. He is a frequent speaker at industry events, an oft-quoted
expert source by television and print reporters, and an author of
numerous Op-Ed pieces. Hall has appeared on virtually every major
television news program, including ``60 Minutes,'' the ``Today'' show,
``Nightline,'' ``Larry King Live,'' ``Fox & Friends,'' and ``BBC
News,'' and his columns have appeared in publications such as the New
York Times and USA Today. In 2002, the U.S. Forest Service named Hall
to co-chair a blue-ribbon safety review of the operations of
firefighting aircraft after three such aircraft crashed that summer.
Hall is a University of Tennessee Trustee and serves as Chairman of
the Enterprise Center in Chattanooga and on the Boards of Directors of
the Chattanooga Metropolitan Airport Authority and the Tennessee River
Gorge Trust. Hall has also served on the National Academy of
Engineering's Committee on Combating Terrorism, as Co-Chairman of the
Blue Ribbon Report on Aerial Fire Fighting Safety, and on the Aviation
Institute Advisory Board of George Washington University.
Hall has given congressional testimony before numerous House and
Senate committees, including the House Committee on Transportation and
Infrastructure (Aviation and Railroad Subcommittees), and the Senate
Committee on Commerce, Science and Transportation (Transportation and
Surface Transportation/Merchant Marine Subcommittees).
Hall graduated from the University of Tennessee in 1967 with a
Baccalaureate of legal letters degree. He served as a commissioned
officer in the U.S. Army from 1967 to 1973, receiving the Bronze Star
for Meritorious Service in Vietnam in 1969.
Discussion
Chairman Gordon. Thank you, Mr. Hall. At this point we will
open it for our first round of questions, and the Chair
recognizes himself.
Let me first state that I think, by any measure you might
take, particularly the number of miles flown, the United States
has the safest air transportation system in the world. I fly,
as Mr. Hall says, almost every week, often
with my wife and my daughter. I don't intend to change those
flight plans in any way, so our discussion today is not about
safety versus non-safety. It is about safety and more safety.
And so we should
make that very clear.
And let me also say that, you know, 24,000 commercial
pilots and 5,000 private pilots, that to me sounds like an
unprecedented sample for a survey, and so that is an enormous
amount of data that I think should be made available. And
although I recognize NASA's interest in a particular
methodology, I think that oftentimes some of the most important
discoveries in America have been offshoots of such
information.
Release of NASA Report
So I would ask you, Director Griffin, now that your lawyers
have had this Freedom of Information request for over a year,
why can't this material be released today?
Dr. Griffin. When we look at the material, despite the
certifications that I know you have heard from the contractor
involved, the data, in fact, could not today be legally
released in its fullness.
Chairman Gordon. And why is that?
Dr. Griffin. Because it does contain specific comments that
identify certain airlines. It notes accidents, incidents, or
occurrences that cite specific timeframes, specific airports,
and specific makes and models of airplanes. If I look at that
data, I can reconstruct for you----
Chairman Gordon. Dr. Griffin----
Dr. Griffin.--and so we are going to delete those fields.
We are asking our contractor to delete those fields and to
render the data back to us in a form that is not identifiable,
as they were originally required to do.
Chairman Gordon. Director, I only have five minutes. I am
sorry.
You were nice enough to provide the information to us, but
we couldn't find it. We have asked your lawyers to point us in
the direction of that information. They couldn't do it. Have
you seen the specific information?
Reasons for Not Releasing Parts of the Report
Dr. Griffin. I have seen examples of specific information
which would not be----
Chairman Gordon. In this report?
Dr. Griffin. In this report which would not be releasable.
Chairman Gordon. Okay. Well, it would have been helpful if
your lawyers had shown us, because we specifically asked that.
But let me also--I want to put up a slide if I could, please,
from your contractor. Apparently there is a program that is
supposed to scrub it, and within NASA's own information it
says, ``participant confidentiality is assured.'' So apparently
you have already done this.
Dr. Griffin. Well, no. That information is not, as it
stands, correct.
Chairman Gordon. Even though it has a NASA logo on it?
Dr. Griffin. I am sorry. It is not correct. Okay. It is
possible to look at this data, and if one knows anything about
aviation, in some cases to go back and identify the
participants, and that can't be allowed.
Chairman Gordon. So NASA was premature in certifying its
confidentiality?
Dr. Griffin. Correct.
Chairman Gordon. All right. Well, let me ask you this. You
are familiar with the Aviation Safety Reporting System.
Dr. Griffin. Very much so.
Chairman Gordon. Okay. Let me just--I want to read to you
one section of that that is from March of 2004. And this is up
on the Internet. This is available for everybody. ``After two
previous,'' and I am quoting. ``After two previous red-eyes,
this being the third red-eye in a row, the last 45 minutes of
flight I fell asleep and so did the first officer, missed all
calls from the air traffic control.'' That was the quote. This
is a report made by an aircraft crew member who slept through
their descent clearance, 60 miles southeast of Denver. Once
they were awakened by the frantic calls from air traffic
control, they executed a successful landing.
Now, this is just one of thousands of the reports that
identify the airport, sometimes the approximate time, aircraft,
runway numbers. This material is public.
Dr. Griffin. That is true.
Chairman Gordon. So why should your survey not be public?
I mean, have they not done what they said they were going to
do and scrubbed it to at least this extent?
Dr. Griffin. When we look at the data, we do not at this
point believe the data has been scrubbed sufficiently to assure
confidentiality of the participants and to protect confidential
commercial information according to the standard to which we
are held. As soon as we can do that, we will release the data.
Now----
Chairman Gordon. Are you going to have a standard higher
than this ASRS?
Dr. Griffin. I wouldn't say so.
Chairman Gordon. Okay. So the information that I just read
to you that is already public, you would not say that has to be
scrubbed. Does your data have to be scrubbed to, I don't know,
a greater level of detail?
Dr. Griffin. I don't know that I would characterize it as a
greater or lesser level of detail, but we do need to remove
specific references to airlines, specific references to
incidents and timeframes such that pilot identity could be
reconstructed. We think that that would be a relatively
straightforward process to delete certain of the fields which
convey that information, and we believe the initial release of
the data could occur by the end of this year.
Chairman Gordon. And so you are going to do it by fields,
so it will be by a computer program?
Dr. Griffin. Right. Certain of the fields will be----
Chairman Gordon. Okay. Well, it seems like that is what has
already been done here, and if it is going to be done by a
computer program, why can't you do it today, or tomorrow----
Dr. Griffin. I think you----
Chairman Gordon. ----before the end of the year?
Dr. Griffin.--maybe, when you look at that viewgraph,
there may be some confusion between anonymizing the data to
satisfy Privacy Act considerations and rendering the data such
that no one knowledgeable in the field of aviation could go
back and reconstruct it.
Chairman Gordon. Well, isn't that the same thing?
Dr. Griffin. I am not trying----
Chairman Gordon. If the AP asked for this under the Freedom
of Information Act, then wouldn't you have assumed it would be
made public record? And so it is the same thing, the same level
of caution?
And you folks had a year to do this already.
Dr. Griffin. I don't think we have had a year since the
original submission of the FOIA request.
In any case, I am not defending it. I stated for the record,
and I will state for the record again that I believe the FOIA
determination that we should not release the data was
incorrect, okay? We will release the data. As we set out to
look at the data, to verify whether we could release it or not,
we found that the data had not, in fact, been correctly
scrubbed to remove identifying information. And if it had been, I
would have released it on the spot, but it has not, and so
until and unless I can verify that it has been correctly
scrubbed, it will not be released.
Information About the Data That Was Released
Chairman Gordon. Okay. I don't want to infringe on my time.
You have never given me a reason not to trust your statement in
any way. Let me just tell you that we have asked your lawyers
specifically to provide us that information, to point us some
place. They have not; you know, you have given us data.
Dr. Griffin. Yes, sir.
Chairman Gordon. So all you have got to do is say, look here,
look there. And so it would give me a greater level of
confidence if your folks could tell us where and could give us
one example. Then we could feel more comfortable that you need
this additional time.
Dr. Griffin. Yes, sir. Let me then take that request for
the record, and we will provide you with at least a couple of
examples----
Chairman Gordon. Okay.
Dr. Griffin.--where specific identifying information is
included that would allow pilot, participant identities to be
compromised. They do exist, and we will provide those for you.
Chairman Gordon. And I would hope they would be of greater
clarity than what is already public record on the ASRS.
Dr. Griffin. They are extraordinarily clear.
Chairman Gordon. Thank you----
Dr. Griffin. I will provide that.
[The information follows:]
Material for the Record
One way, but not the only way, in which the identity of a
NAOMS survey respondent can potentially be determined is by combining
the free-text fields (pilots' open-ended responses and clarifications)
with data from other parts of the survey and/or external (exogenous)
data sources.
The availability of exogenous databases and sophisticated search
technology makes the likelihood of implicit identification greater, and
it is correspondingly more difficult to ensure that adequate
protections have been implemented.
The following two examples cite free-text field responses to
Question ER 1 of Section B (Safety Related Events), which asked pilots
how many times in the past 60 days an aircraft, on which they were a
crew member, was diverted to an alternate airport, and asked them to
provide the cause for the diversion.
Example I (Case ID 90P0001): the pilot responded, ``Earthquake in
Seattle.'' A web search reveals the only seismic event that diverted
flights from the Seattle-Tacoma Airport during the survey period: a
magnitude 6.8 earthquake on February 28, 2001. During the period of
closure and reduced operations that day, approximately 100 arriving
flights were diverted; the exact number, together with airline and
flight identifiers, could be obtained from Federal Aviation
Administration (FAA), airline, and/or airport databases.
This single response has reduced the number of candidate
respondents from over 60,000 (the number of air-carrier
certificated pilots listed in the Airmen's Registry) to
approximately 200.
This profile can be further refined using non-redacted
NAOMS data from other questions:
- From Section A, we can determine the pilot's flight
profile for the past 60 days (e.g., the number of hours
and flight legs flown; the makes, models, and series of
aircraft flown; whether flights were passenger or cargo;
whether the pilot flew as captain or first officer;
whether the pilot flies for a small, medium, or large
operator; and the pilot's total commercial flight
hours).
- From Section B, if the pilot gave a positive
response to any reportable safety event, an individual
could cross-reference the pilot profile to event
reports (FAA, airline) from the defined interview
window (i.e., the 60 days preceding February 28th) to
match an individual's name to the profile. If not, the
profile may still match a name on airline duty rosters
or other exogenous databases.
Example II (Case ID 90C2001): the pilot's stated cause for
diversion was, ``American 587 crashed at JFK. R was en-route to JFK at
the time and was diverted to Philadelphia.'' Again, this free-text
field response provides a specific event (the crash of an Airbus A300-
600 into Belle Harbor at 9:17 AM local time on November 12, 2001) for
which there are detailed records of diverted flights. The pilot has
also specified the alternate airport (Philadelphia), further limiting
the field of possible flights. As before, the respondent's profile
could be refined by the non-redacted NAOMS data. When cross-correlated
with exogenous databases, the refined profile might again lead to the
identification of a NAOMS survey respondent.
Chairman Gordon.--very much, sir. And I now recognize Mr.
Hall.
Confidentiality of Information About Pilots and Commercial
Information
Mr. Hall of Texas. Dr. Griffin, I will get right down to
the basis of this, and we are talking about confidentiality at
this time. When, and you can give me a yes or no answer on this
I think, when pilots were surveyed, were they led to believe
that their responses would be confidential?
Dr. Griffin. They were promised confidentiality. Yes.
Mr. Hall of Texas. So if their responses were released, do
you think it would have had a chilling effect on their future
participation in FAA or NASA surveys, and would airline safety
ultimately be hurt by disclosing this data if fewer pilots
contributed to other surveys and reporting systems?
Or let me go on a little bit--be a little more personal
with that. As a pilot yourself with many years of flying
experience, would this data give you pause as to whether it is
really anonymous, or would it worry you that your input could
be traced back to you? And would that have a chilling effect on
you?
Dr. Griffin. Well, in its present form some of the examples
can be traced back to pilots, and some named individual
airlines. That can't be allowed. If the data is properly
rendered untraceable, then I think it must be released and
should be released and will be released as I have stated
several times.
So if it were properly anonymized, I have no concern.
Mr. Hall of Texas. And they do that by cross-referencing
flight routes, times, and carriers?
Dr. Griffin. We need to delete the fields that contain that
information. So that will be done. Now, the major concern I
would have over this data at this point is that somebody might
put too much credence in it. It is simply not credible to
believe that the aviation community is experiencing nearly four
times the number of engine failures that are being documented
by the FAA. That is not credible to believe. If it is true, it
is going to require some very strong justification, and we will
pursue that. The community will pursue that, but it is not
credible at this point.
So I would not want the flying public to believe the data
in the form that it appears today.
Mr. Hall of Texas. Mr. Hall, wake up.
Mr. Hall. Yes, sir.
Mr. Hall of Texas. NASA's survey is but one source of data
that can be used to measure safety trends in the National Airspace
System. What other sources can be used to monitor system
safety, and how useful are they?
Mr. Hall. Well, the ASRS system, which NASA has used for
years, is, of course, I think very useful in that it is a
voluntary program. That is why I am a
little confused on--in regard to some of the comments from the
Administrator. NASA has run this program for the Federal
Government for a number of years. So they are familiar with how
to put a program together and maintain confidentiality.
There are other programs that are run by the FAA and, of course,
NTSB has gotten into trying to look at as many incidents as
possible in providing information. But aviation safety benefits
from having, as I mentioned in my statement, sir, a very open
system and a system where there is a whole lot of information
and that information is constantly in the public for analysis
and review.
Mr. Hall of Texas. Dr. Griffin, you state that NASA will
release this survey data so long as it doesn't compromise the
anonymity of the pilots. Does the data not contain confidential
commercial information?
Dr. Griffin. Well, by the time we release it, it will not
contain confidential commercial information. Some of the data
that we have today does, and we are not legally allowed to do
that by statute.
You know, there have been a number of comparisons made to
the Aviation Safety Reporting System, the ASRS, which NASA does
manage by statute, and this survey. One of the primary
differences between ASRS and this survey was that ASRS is
managed by aviation specialists. When reports are made, the
aviation specialists can contact the submitter of the report
and ask follow-up questions. They are knowledgeable about
aviation safety.
This survey was conducted by telephone polling surveyors,
who had no knowledge at all of aviation
or aviation safety. They had no domain expertise, and it is
precisely that which has led to some of the problems that we
are here discussing today.
Mr. Hall of Texas. I think my time is up, Mr. Chairman. I
thank you. I yield back if I have any.
Chairman Gordon. Thank you, Mr. Hall. Let me once again
state that this hearing is not a matter of a safe system versus
an unsafe system. It is a matter of a very safe system that we
want to keep making, you know, the continuing model for the
entire world.
Now we will recognize Mr. Costello, the Chairman of the
Aviation Subcommittee of the Transportation and Infrastructure
Committee.
Mr. Costello. Thank you, Mr. Chairman, and thank you for
calling this hearing today as well as the Chairman of the
Subcommittee, Chairman Miller. Welcome, Dr. Griffin, Mr. Hall.
Mr. Hall, it is good to see you in the Science Committee room for a
change as opposed to the T and I room.
But let me--we can go through a whole long list of
questions. Let us cut to the chase and get down to why we are
here.
Getting the Information to the Public
We talk about scrubbing the report in order for it to be
released without breaking anyone's confidence or a commit----.
You are saying that you can release the information possibly by
the end of the year. Is that correct?
Dr. Griffin. Yes, sir.
Mr. Costello. How long will it take if it is a priority in
the agency? And I would hope that you would acknowledge that the
agency made a huge mistake in how it responded to the AP and to
the media. Your spokesperson did, in fact, unless you are
refuting this, say to the news media that, if we release the
data, it could have an adverse effect on the industry. Is that
correct, Dr.
Griffin?
Dr. Griffin. We did say that, and as I have now said
several times, that was the wrong thing to have said. I
apologize that anyone in my agency did say that.
Mr. Costello. So you know that it was a mistake to say
that. You know that it has created a lot of controversy. You
know that people in the aviation industry and the traveling
public, because I have heard from my constituents, and I have
heard from complete strangers to me at airports as I am flying,
what is going on with this report, and why won't you release
it to the public?
If it is a priority to us, shouldn't it be a priority to
your agency to scrub this and get it out to the public
immediately?
Dr. Griffin. It is a priority. I have a Shuttle mission in
the air right now, and I have done little else this past week
except work on this issue. I regret----
Mr. Costello. I would hope that there are other people in
the agency that you could assign this to as opposed to you
handling this personally.
Dr. Griffin. Well, we have had quite a number of people
working on it. We do consider it to be a priority, and we
consider it to be an important one. Now, the fact that people
at NASA misspoke concerning the reasons behind the denial of
the FOIA request does not mean that we can compromise our
statutory requirements----
Mr. Costello. And no one is asking you----
Dr. Griffin.--on FOIA.
Mr. Costello.--compromise a statutory requirement.
Dr. Griffin. Right.
Mr. Costello. What we are saying is get this done and get
it out to the public, and my question to you is do you have
people today and this evening and around the clock working on
this project to scrub it to get it out to the public?
Dr. Griffin. The people who have to work on this project to
scrub the data and get it out to the public are at the Battelle
Institute. They have been directed to do that. I hope
that they are doing that with all deliberate speed, and we will
be verifying that. When Battelle has finished scrubbing it, the
quality of the scrub must be judged by government officials,
who will then do that as quickly as possible, and we will get
it out to you.
Mr. Costello. So have you directed Battelle to work on this
around the clock? Have you given them a deadline?
Dr. Griffin. I have not directed them to work on it around
the clock. We have directed them to work on it.
Mr. Costello. Isn't it reasonable for us to expect you
to give them a deadline? They are working for you.
Dr. Griffin. They are, and we have asked them to complete
it by the end of the year. That is what we are asking. That is
two months away.
Mr. Costello. And if you told them June of '08, they would
complete it in June of '08. Isn't that correct?
Dr. Griffin. You are asking for more detail than I have. It
is a significant amount of data processing. We will do it as
soon as we can, and we are trying for the end of the year.
Mr. Costello. Dr. Griffin, you have acknowledged that the
agency misspoke. They created this uproar with the American
people and with the Congress and with everyone in this room. It
is your responsibility to clean this up.
Dr. Griffin. That is correct.
Mr. Costello. If I were in your shoes, I would be directing
Battelle to work 24 hours a day, seven days a week to get this
thing cleaned up so it can be released to the public.
Disciplinary Action for Responsible Party
Last and final question that I have, the person who
misspoke representing the agency, have you identified who that
is?
Dr. Griffin. Yes, sir.
Mr. Costello. Have you taken any disciplinary action
against that person?
Dr. Griffin. It is not a matter of discipline. People make
mistakes. This was a mistake.
Mr. Costello. My question, we all understand it was a
mistake. Has there been any disciplinary action taken?
Dr. Griffin. No.
Mr. Costello. Thank you, Mr. Chairman.
Chairman Gordon. The gentleman from Wisconsin, former
Chairman of the Aviation Subcommittee on this committee, as
well as the Full Committee, Mr. Sensenbrenner, is recognized.
Mr. Sensenbrenner. Thank you very much.
NASA Survey and Confidentiality
Dr. Griffin, first of all, I think we all want to see what
the results of the survey are. Secondly, I think we all agree
that certain things have to be kept confidential. That was what
was represented to the people who were asked to respond to the
survey, and they responded candidly based upon the
representation of confidentiality.
I guess what I would like to know is the survey was
finished in 2005, and we are almost at the end of 2007. That is
two and one-half years, more or less, since the survey was
finished. Why is there this gap in time? Who dropped the
ball?
Dr. Griffin. We at NASA did not manage this project to its
conclusion well. We did not. Because of that I have instituted
a look at other projects that we are doing in various classes
of research at NASA to make sure that we are not doing the same
thing elsewhere.
Mr. Sensenbrenner. Which NASA center of ``excellence''
supervised Battelle and this survey?
Dr. Griffin. This particular project was supervised out of
the Ames Research Center.
[The information follows:]
Material for the Record
NASA Ames Research Center agrees that a more timely report on NAOMS
should have been provided. The NAOMS contractor team consisted of a
small group of individuals who supported a few related projects. The
NASA NAOMS project management officials decided to allow the contractor
team to defer preparing a timely report in order to conduct other
activities in support of the NAOMS project, notably the transition of
the NAOMS survey methodology, as well as to address priorities in other
projects they were supporting. In the process, attention was diverted
from the final report, resulting in an inordinately lengthy delay.
The NAOMS contractor completed the survey collection in December
2004. In FY 2005, the NASA NAOMS project management officials
prioritized project resources to enable the transfer of the NAOMS
methodology to a new host organization. This transfer required adapting
the NAOMS data collection methodology from a computer-aided telephone
interview to a web-based format. Throughout FY 2005 and FY 2006, the
NAOMS contractor team was thus directed to develop the new methodology,
in collaboration with NASA researchers, and to transfer the methodology
to the Air Line Pilots Association (ALPA), under the auspices of the
Joint Implementation Measurement and Data Team (JIMDAT, the evaluation
arm of the Commercial Aviation Safety Team).
By early FY 2007, the NAOMS project team had not completed the
transition of the methodology to ALPA nor had the contractor completed
its final report. By this time, the contractor was needed to support
the Aviation Safety Program priority to develop safety data mining
tools. The NASA NAOMS project management officials, therefore, directed
the contractor to focus on this priority and provided an extension to
the contractor for producing a final report on NAOMS. Proper attention
is now being given to producing this report, and measures will be taken
to ensure that this kind of delay on contract deliverables does not
happen in the future.
Mr. Sensenbrenner. Okay. Have you found out why the Ames
Research Center didn't follow up and have a timely report?
Dr. Griffin. I have not.
Mr. Sensenbrenner. Will you do it and let us know?
Dr. Griffin. I will take that for the record. We will find
out what their rationale was for taking so long to allow this
report to be generated, and we will answer back to you.
Mr. Sensenbrenner. Okay. Well, let me say that this appears
to be a mess of NASA's own causing, and you are the agency
head, and I would hope that we don't hear from you again on
another mess of NASA's own causing.
You know, I would point out that in about two and a half
years we are going to have a census in this country, and one of
the things the Census Bureau represents to every American, or
everybody who is in this country, is that their responses will
be confidential. And that is in order to get a candid response
on not only how many people are here but the housing questions
and the other things that are asked on the census form.
Any government agency that gets itself caught in a pickle
like NASA is in is going to reduce the confidence of the
American public that responses that are supposed to be kept
confidential will indeed be kept confidential. Sir, you dug
yourself into a hole. I can't say that you are not digging
yourself deeper into the hole from what I have heard at this
hearing, but I think it is important more than just for your
agency but the government as a whole that you start working
yourself out of that hole.
Thank you, and I yield back the balance of my time.
Chairman Gordon. The gentleman from Colorado, the Chairman
of the Space and Aeronautics Subcommittee, Mr. Udall is
recognized.
Mr. Udall. Thank you, Mr. Chairman. Welcome, Dr. Griffin.
Releasing Information and Why Was the Survey Ended?
I would like to start by echoing what Chairman Gordon said
today. We are all disappointed we have had to convene the
hearing, but the fact that NASA refused to release the
taxpayer-funded aviation safety survey data, and the rationale
that NASA gave for refusing to release this information, are
unacceptable, and the matter obviously required Congressional
scrutiny.
I think we all agree the safety of the public has to be our
first priority, especially with more and more Americans flying
every year.
I am glad that you have now agreed to release at least some
of the survey data publicly so that it can be used to help
maintain and hopefully improve the safety of the Nation's
airways, but I feel strongly that all of the data should be
made publicly available as soon as possible.
I also have some concerns about why the study was ended.
Several witnesses here today have affirmed the value of a
comprehensive, ongoing survey and analysis approach to aviation
safety trend analysis and accident precursor identification,
which is the approach exemplified by the NAOMS Project. There
appears, I think we would all agree, to be a great deal of
merit to the NAOMS approach, and we need to assess whether NASA
and the FAA should reinstitute the project.
Doctor, if I could just leave aside for a moment the issues
of peer review and survey methodology, which our second panel
will be addressing, I have to say that I am troubled by your
testimony on the NAOMS project. At one point in the testimony
you state that the project was not shut down prematurely and
that the transition of the survey methodology to industry and
government decision-makers was successfully completed.
However, later in your testimony you say that any product
of the NAOMS Project including the survey methodology should
not be viewed or considered at this stage as having been
validated. Basically, at least to this Member, you are saying
that NASA didn't complete a critically-important R&D task, the
validation of the survey methodology before it transitioned
NAOMS out of NASA.
Later Captain McVenes will testify that the Aviation
Committee had plans to work with NASA to help determine if the
survey data were reliable, but funding for NAOMS ran out, and
that is when the A-L-P-A, ALPA, stepped in to help keep the
project alive.
This doesn't appear to be the normal way R&D programs
should be run, and I think that the Space and Aeronautics
Subcommittee will need to take a closer look at NASA's
aeronautics programs and its aviation safety programs in
particular in the coming months. But in the spirit of openness
and dialogue here, I would see if you care to respond to those
comments.
Dr. Griffin. Well, we certainly agree--could not agree more
that the aviation safety information leading to trending
analysis and accident factor identification before the fact is
crucial. We are working on exactly those things in concert with
the FAA, again, in a program that has been reviewed by the
FAA's own safety subcommittee and which we have submitted for
review to the National Academy. So we agree with that.
NASA, however, is not the entity responsible or even
allowed to take on the job of operational aviation safety. We
do research, and we are doing that. And we expect to continue
to do it, because we do believe it is important.
Now, as I said in my testimony earlier, the National
Academy in its 2004 review specifically stated that they did
not see a reason for the NAOMS Project to continue. We agree.
We have transitioned our other projects of that type to a joint
FAA-NASA arrangement that I think is working well, and when
NAOMS was scheduled, as it was, to end in 2004, with follow-up
reporting to be done in 2005, we allowed that to occur as had
been planned.
So I don't think there are--I don't think there is any evil
intent there. There was no intent to abrogate our
responsibilities. In fact, our intent was to execute them as
best we could with our FAA partner. What was not done here was
to bring the project to a timely conclusion, to assess the
data, to issue a report, to publish that report in peer review
journals, and to release the data to the public in a timely
way, properly anonymized. That was not done, and we are going
to have to do it.
Mr. Udall. The spirit in which I offer my remarks is as
follows. I think this situation, of course, is one that we have
great concern about on the Committee, but I think we should
take advantage of the clear opportunity here to make our system
safer and to take this data, 24,000 responses, that is very,
very significant, and apply it and use it in a way that has
some utility in the coming months and the coming years.
I see my time has expired. Thank you for being here today
again.
Chairman Gordon. Dr. Ehlers, thanks for your being prompt
today. I am sorry that I overlooked that earlier and Dr. Ehlers
of Michigan is recognized for five minutes.
Mr. Ehlers. That is quite all right. I am used to being
overlooked. I hope you all feel sorry for me.
Airline Safety Compared to Other Safety Concerns
Actually, I am going to take a somewhat different tack
and also do some criticism but not of you, Dr. Griffin.
Your situation reminds me very much of a quote from Harry
Truman when he left the Presidency. His comment was, ``This job
was not so great. I spent all of my time trying to persuade
people to do things they should have had sense enough to do in
the first place.'' Your situation reminds me a bit of that, and
I agree with the comment made that you have more important
things to do than to deal with this particular problem, and it
is unfortunate that it developed and entwined you in it.
But as the son of a preacher, I have to give a little
sermon here, and I have been warned never to insult the media,
but I am going to anyway. Because it has always puzzled me why
the media are so obsessed with aviation safety when it is the
safest mode of transportation in this country. I remember some
years ago, when I was new in the Congress, there was a low-
cost airline that had an airplane crash in the Everglades
because some attendant or some mechanic had loaded some oxygen
units on the plane which shouldn't have been there. Day after
day, month after month this was headlines in the newspapers,
and I pointed out repeatedly that the same day that airplane
crashed, more people were killed in automobile accidents in
this country than were killed in that airplane. Every day after
that more people were killed on the highways than were killed
in that plane crash. Yet headlines day after day.
The safety is better than any other mode of transportation.
We should recognize that and participate in it. I don't fault
you whatsoever for things that may have gone wrong in this. You
were caught in an unfortunate situation in responding to a FOIA
request, which is a no no. But nevertheless, I think your
motives here were very good.
I would also point out that if people are so concerned
about safety, there is an immediate problem you can tackle with
traffic accidents, and that is drunk drivers. We have had a
number of drunk drivers kill individuals while we are sitting
here in this session, more than were killed by airplanes. And
it goes on year after year. In fact, so far--or--in any given
year more individuals are killed by drunk drivers than were
killed among our troops in the entire Iraq War up to this day.
That is every year that happens, and yet we spend all this time
on aviation safety.
I am not opposed to making airlines and airplanes as
safe as they should be. I am a would-be pilot myself, and I
certainly want a safe airplane and safe air traffic control
system. But let us get over this obsession and let us recognize
that our goal is to improve what is already very good and not
get obsessed about little incidents that occur when we have
much bigger problems to try to tackle in the aviation sector.
So I beg your pardon for the sermon, Mr. Chairman, but I
just have to say these things once in awhile. Let us get stuff
in perspective, and the world is not going to rise or fall, and
the aviation industry is not going to rise or fall on the
results of this survey. I doubt if we will learn much different
than we have learned from the previous surveys. It is all good.
Let us all do it, but let us not overstate it.
Thank you very much. I yield back.
Chairman Gordon. Thank you, Dr. Ehlers. I hope you feel
better.
The gentleman from Louisiana, Mr. Melancon, is recognized
for five minutes. Melancon passes and let me see, Mr. Mitchell
from Arizona, also on the Transportation Committee, is
recognized.
Mr. Mitchell. Thank you very much, Mr. Chairman. This is
for Dr. Griffin.
You know, the airline business in my particular district is a
very, very serious business. One of the Nation's largest
airlines is located in Tempe and Phoenix Sky Harbor is the
eighth largest or busiest airport in the country. We depend on
aviation, and we depend upon the Federal Government to keep our
skies safe.
Responsibility for Public Statement
Now, I was stunned as most people were, and I think this is
why we are here, because of the statement that came out that
you have heard many times before--the affect of public--the
reason the report was not released is because of the affect
that it might have on the public confidence and so on.
Now you are telling us that you don't agree with that
statement that was made last week. But, Dr. Griffin, you are
the Administrator of NASA. How could this statement be released
without first being reviewed and agreed upon as NASA's stance
on this particular issue?
Dr. Griffin. The delegated FOIA official released the
response in the form of a letter and included a statement that
I believed to have been mistaken. I try to review everything
that I believe will be significant before it goes out, but I
don't have enough hours in the day to review every single thing
that goes out of NASA, and sometimes mistakes are made. This
was one, and when that occurs, as the agency head, I pay the
price for it.
Mr. Mitchell. But you have the time now to come and----
Dr. Griffin. Obviously I have had to make the time, because
we did make a mistake, and the mistake rests on my shoulders,
and I apologize for it, and I have before, and I will again.
The language that was used was inappropriate. We will not
repeat it. We will correct the error. We will de-identify the
data, and we will release it.
Mr. Mitchell. Well, besides this particular statement I
would hope that you would have a better review of what comes
out of your office, because you may be back here again the way
things seem, doing the same thing you are doing now.
Why Wasn't NASA Information Made Public and Why Didn't It Live
Up to NASA's Standards?
You know, you said that NASA is interested in getting this
safety information out, and my question is why has NASA refused
to produce it to the Associated Press for a year? Now, my
understanding is the study started in April of 2001, ended in
December of 2004. Why does it take a hearing in Congress and
public pressure to get this information made public?
Dr. Griffin. As I said earlier, the only way that I can
answer that question is to admit, as I have, that we did not
manage that project well. We did not bring it to a timely
conclusion. We did not publish the data and the report's
conclusions in an appropriate way, and we will fix it, and we
will try not to do it again.
Mr. Mitchell. The next part of this question is that you
stated this survey was not conducted under proper NASA
standards. So it seems like there have been a lot of mistakes
here, and this is one of them: you say it wasn't under NASA's
normal review.
Why would NASA invest over $11 million in a project like
this if it didn't follow NASA standards?
Dr. Griffin. We did not manage the project well. We did not
supervise our contractor appropriately. We made a mistake.
Mr. Mitchell. You know, all of this reflects on NASA's
credibility.
Dr. Griffin. Yes, sir, it does.
Mr. Mitchell. I yield back.
Dr. Griffin. I deeply regret the situation, and I will
look, and we are now looking to make sure that this does not
occur again.
Chairman Gordon. Thank you. We are going to be having votes
in about 20 minutes, so I am going to, I want everybody to have
their say. I will be stricter than usual on the five minutes,
and if you want to be briefer than usual, then that would be
good, too.
So, Mr. Bonner from Alabama is recognized.
State of Current Space Shuttle Mission
Mr. Bonner. Thank you, Mr. Chairman, and I could probably
spend the next five minutes trying to think of some creative
way to ask the same question that has been asked repeatedly to
get a different answer, but instead, if I might, I would like
to ask, take advantage of this opportunity that we don't often
have to ask Dr. Griffin how the Shuttle mission is going.
Because I think a lot of people are interested. We have
followed that with great interest over the years, and I think
it would be great to hear from you on how it is going at this
point.
Dr. Griffin. It is going extremely well. We have an
unfortunate rip in one of the solar arrays, not a huge rip, but
a rip, and that is important to repair before the crew returns.
And so we are going to extend the mission an extra couple of
days to do that. But other than that it is going extremely
well.
Mr. Bonner. Do you feel personally responsible for that
solar rip?
Dr. Griffin. You know, I am an ex-program manager, and my
belief is if lightning strikes your payload, it is your fault.
So, yes, I feel responsible for that rip and for repairing it,
and we are going to fix it.
Mr. Bonner. Thank you very much. Thank you, Mr. Chairman.
Chairman Gordon. Thank you, Mr. Bonner, and the Chairman of
our Oversight Committee, Mr. Miller, is recognized.
Quality of Data
Mr. Miller. Thank you, Mr. Chairman. Dr. Griffin, good
afternoon. The next panel includes Dr. Robert Dodd, who was a
principal investigator for the NAOMS Project. Have you
discussed your testimony with Dr. Dodd at all? Have you
reviewed his testimony?
Dr. Griffin. No, I have not----
Mr. Miller. Okay.
Dr. Griffin.--met your next two witnesses.
Mr. Miller. His prepared testimony says that the NAOMS Team
made an extraordinary effort to clean and validate the data
collected through the survey. The resulting data is of good
quality and ready for meaningful analysis. You disagree with
Dr. Dodd?
Dr. Griffin. I do disagree with that statement.
Mr. Miller. Okay.
Dr. Griffin. The self-assertion by the purveyors of the
data that the data is okay does not make it okay.
Mr. Miller. Okay. Well, I mean, I understand a concern for
methodology, but there does need to be an extraordinary concern
for methodology here. Dr. Dodd's statement of the purpose of
NAOMS was to help identify risks that could result in losses, to
evaluate the impact of new technology, and to provide insights
into how well safety enhancements are working out. In other
words, to provide results upon which we could act.
And your testimony is that the overarching goal was
developing methods to facilitate a data-driven approach to
aviation system safety analysis, and that in early 2005 you
determined that the amount of data collected was sufficient to
evaluate whether the NAOMS survey methodology was statistically
useful. There were 29,000 survey results. I would hope that that
would be enough to be representative. And then you have said in
your testimony that it was not prematurely ended. It sounds from
your testimony like the
purpose of the project was to develop a methodology.
It seems like $11.3 million is a lot for methodology. That
ought to buy you a lot of methodology. Was it your purpose to
do the things that Dr. Dodd said, which is to have information
that you could use?
Dr. Griffin. Well, from NASA's perspective the purpose was
to develop and validate methodologies and then to transition
the work to the agencies with operational responsibility.
Mr. Miller. Okay. And when did that transfer happen?
Dr. Griffin. The transfer of methodology and data to the
Air Line Pilots Association, which had expressed some interest
in a web-based version of the survey, occurred in 2004, 2005,
and 2006. NASA has briefed the results of the study to the FAA,
among other government agencies.
Mr. Miller. So it has been analyzed to that extent?
Dr. Griffin. It has been analyzed to that extent, and that
analysis revealed substantial concerns. For example, if you
extrapolate the rates of certain events revealed by the survey,
you get a non-credible answer.
For example, pilots were asked how often they had to land
an airplane at an unscheduled airport in order to deal with an
unruly passenger. We accumulated those statistics. If those
statistics are extrapolated forward, it yields a result that
four times a day a transport aircraft is landing because the
crew has to deal with an unruly passenger.
Now, I recall since 9-11 that that has happened maybe two
or three times.
Mr. Miller. Okay.
Dr. Griffin. If we had people landing four times per day to
deal with an unruly passenger, it would be on the nightly news
every night. That is not happening. So it causes us to suspect
the quality of this data.
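For the record, the arithmetic behind the extrapolation Dr. Griffin
describes can be sketched briefly in Python. Every number below is
invented solely to show how a per-respondent survey rate scales to a
fleet-wide daily figure; none is an actual NAOMS result.

    # Illustrative sketch of the rate extrapolation described above.
    # All numbers are invented for illustration; none are NAOMS results.

    events_reported = 240     # hypothetical diversions reported by respondents
    pilots_surveyed = 29_000  # survey sample size cited in the hearing
    recall_days = 60          # each pilot reported on the prior 60 days
    active_pilots = 60_000    # approximate air-carrier pilot population

    # Per-pilot, per-day event rate implied by the survey responses.
    rate_per_pilot_day = events_reported / (pilots_surveyed * recall_days)

    # Scaled to the whole pilot population, halved so that the two pilots
    # crewing each flight do not count the same diversion twice.
    events_per_day = rate_per_pilot_day * active_pilots / 2

    print(f"Implied fleet-wide diversions per day: {events_per_day:.1f}")

Whether such an extrapolated figure is credible is exactly the kind of
validation question at issue in this hearing.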
Mr. Miller. All right. Dr. Griffin, I understand that the
Office of Management and Budget has an office of survey experts.
Survey methodology is not unusual; it is widely used in the
Federal Government and in the social sciences. Survey experts
and statisticians there review the methodology of all surveys
used by the Federal Government. Was this survey reviewed by that
office at OMB?
Dr. Griffin. I don't know. I was not at NASA when that work
was done, and so I don't know if it was reviewed by the OMB at
that time or not.
Mr. Miller. All right. You still cite as correct the
refusal to provide the information under FOIA on the ground that
it would reveal confidential commercial information.
exception is that that is to protect the confidentiality of
information provided by a business entity that might be
confidential for business reasons. Market information,
financial information, et cetera.
It is hard to see how this survey data provided by pilots
would meet that exception. What kinds of confidential
commercial information did this survey produce?
Dr. Griffin. Well, the exemption that you refer to is, of
course, correct as you state, but it is not the only one. In
the case where information is voluntarily provided and when
that information would not be customarily provided to the
public, then we also have an obligation to protect that
information.
Chairman Gordon. The gentleman's time has expired.
Mr. Inglis is recognized.
Mr. Inglis. I pass, Mr. Chairman.
Chairman Gordon. The gentleman, Mr.--thank you. And is
there--Mr. Lipinski. Excuse me. Mr. Chandler is next, then Mr.
Lipinski and----
The Responsibility for the $11 Million
Mr. Chandler. Dr. Griffin, I have been listening to the
testimony, and it sounds to me like you may
believe you all made a mistake.
Dr. Griffin. I have admitted it several times.
Mr. Chandler. I think that has come out in this hearing.
And I understand that has to do with the handling of the FOIA
request.
Dr. Griffin. Yes, sir.
Mr. Chandler. But I also just--it just came across my mind
that maybe you believe that this entire process has been
mishandled, and you have made a mistake in the entire survey
process and in not overseeing what is a pretty enormous project.
Is that the case as well?
Dr. Griffin. I have--this is not an enormous project by
NASA's standards.
Mr. Chandler. No, but in this particular instance it is a
pretty important project.
Dr. Griffin. But when we spend $11 million of the
taxpayers' money it should be done well, and I have stated--I
regret to state it, but I have stated that by my standards we
did not manage this project well. We did not manage our
contractor well.
Mr. Chandler. And you are also saying that at the end of
all of this and when this data is, in fact, released, there is
going to be reason to not have much confidence in the ultimate
data. Is that correct?
Dr. Griffin. I have been a pilot for decades. Anyone who
knows anything about aviation is going to look at this data and
have a lot of questions about it because it is on its face--on
its face, when you look at it, you can extract from it
conclusions which are not credible.
Mr. Chandler. Well, what I am hearing you say is we have
just thrown $11 million down a rat hole.
Dr. Griffin. I hope that is not the case, and I believe
that we should be able to get much that is useful from this
data, but there will be cause to question it by knowledgeable
aviation experts.
Chairman Gordon. Would my friend yield to Mr. Lipinski so
we can try to finish this panel?
Mr. Chandler. Sure. Thank you, Mr. Chairman.
Chairman Gordon. Thank you.
Mr. Lipinski. I will try to make this quick, although this
is very important. Airline safety is critical. I have an
airport in my district, and O'Hare Airport is in very close proximity.
just want to zero in, Dr. Griffin, I have a lot of respect for
you. Today you are on the hot seat, deservedly so with this
project.
You talk about how this project was not managed well. To me, I
look and see a project that ran six years, cost $11 million, and
produced no results. It could mean one of two things: either
complete incompetence--this project had so many problems with it
that you couldn't get anything good out of it--or, and I could
use the words cover up, there is some reason that this was
stopped. There was something that, for some reason, someone did
not want to show up.
When this stopped, were there plans for anyone else to be
surveyed after you did the airline pilots? Was there anyone
else after that?
Dr. Griffin. No. There was not. This project----
Mr. Lipinski. Was that the end?
Dr. Griffin.--in the original material, which has been
submitted to this committee, documenting this project, it was
intended that the project be ended in 2004. For purposes of
transition, and simply because things have gone slower than they
should have, this project has continued to the present day. But
there has been no cover up. There is no
desire to conceal anything.
Mr. Lipinski. Okay. I am very----
Chairman Gordon. Mr. Lipinski, would you mind yielding to
Ms.----
Mr. Lipinski. Yes, I will yield.
Chairman Gordon.--Richardson to, for her concluding
statement?
Ms. Richardson. Thank you, Mr. Chairman. I will just be
very brief or as quick as I possibly can.
Dr. Griffin, I represent the California area, and we have
had several reported incidents at LAX Airport, and I
also represent the Long Beach Airport. I have the following
questions, and if you can't answer them within the time we have
provided, you can provide them to this committee.
Data Recovery, Peer Review, and Avoidance of Requests
Number one, who decided, and when, that destruction of the
data would be requested? In your statement you say that that
didn't happen, and so my question to you would be, if it didn't
happen, then why was it requested that the subcontractor--why
were they directed to recover data? It just doesn't make sense.
If they weren't required to destroy it, then they should now be
required to recover it.
Chairman Gordon. If the gentlelady would go ahead and read
her questions, then Dr. Griffin can respond for the record, if
that is okay.
Dr. Griffin. We will take them for the record. Yeah.
Ms. Richardson. The second thing is if the project was
initiated in 1998, started collection in April of 2001, and
ended that in 2004, I find it really hard to understand,
number one, why in seven, eight years you failed to complete a
peer review, why we now suddenly question the methodology. I
come from the private sector. I don't know of anyone who
manages a project where you don't look at the data--how the data
is being collected, how it is being presented, how you are going
to use it, what should be included, what should not be included.
And we finally wake up eight years later? I have never--I don't
know of a system where we do this and operate it this way.
And then finally, I would say really the continued
avoidance of requests is just unprofessional. I am a new Member
here, but I will tell you what I call it. I don't call it a
mistake. I call it negligence, and I really think that NASA is
liable, and if something happens, this is a very serious issue,
and I really resent that we are here today even having this
discussion. This is something that could have been dealt with,
I believe, if you really wanted it to be. And for me, to say two
months is completely unacceptable. These are computer programs;
you either make it a priority or you don't, and it seems to me
today it is not a priority to you.
Thank you.
[The information follows:]
Material for the Record
NASA Ames Contracting Officer issued the phase-out instruction to
Battelle Inc. on September 10, 2007, via a new task request; this task
instruction was made in preparation for task phase-out scheduled for
October 31, 2007. Per this task instruction, written in order to
properly disposition sensitive government information, the Ames
Contracting Officer instructed Battelle Inc. to collect, inventory,
archive, and transfer the complete set of data to the government. Once
Battelle Inc. completed this transfer, and the NASA project management
officials verified the completeness of the data set, Battelle Inc. was
instructed to securely dispose of all data. This instruction was to
ensure that the data set was NASA-owned and to prevent the potential
for unauthorized use of the data.
NASA received a letter, dated October 22, 2007, jointly signed by
Chairman Gordon, House Science and Technology Committee, Chairman
Udall, House Space and Aeronautics Subcommittee, and Chairman Miller,
House Investigations and Oversight Subcommittee, directing that NASA
halt any destruction of records related to the NAOMS project. To comply
with the direction, the Ames Contracting Officer directed the
contractor to halt the phase-out process until further notice. This
action was done via a task modification dated November 5, 2007.
NASA Ames Research Center agrees that the methodology should have
been peer-reviewed much earlier in its development. While the survey
was approved by the OMB in accordance with the Paperwork Reduction Act,
and briefed to stakeholders in two workshops, the work was not peer-
reviewed.
From 1998 to 2004, the NAOMS project team gave approximately 17
PowerPoint briefings to various audiences, mainly government and
industry personnel. However, none of the research conducted in the
NAOMS project has been peer-reviewed to date. PowerPoint briefings to
stakeholders, while having some value, do not constitute peer review.
Accordingly, no product of the NAOMS project, including the survey
methodology, the survey data, and any analysis of those data, should be
viewed or considered at this stage as having been validated.
It should be noted that NASA's assertion that none of the results from
the NAOMS project can be considered validated does not mean that NASA
is drawing conclusions about the validity of the survey data; we are
simply stating that no such conclusions can be credibly drawn.
In order to rectify this situation as best as possible, NASA has
asked the National Academies to conduct an independent assessment of
the contractor's final report as well as of the survey results that are
to be publicly released. The National Academies' assessment will be
made available to the public as soon as it is completed.
Mr. Hall of Texas. Mr. Chairman.
Chairman Gordon. Yes, Mr. Hall.
Mr. Hall of Texas. Could I make an inquiry of----
Chairman Gordon. Certainly.
Mr. Hall of Texas. Mike, would you mind staying around
during the second panel where we might respond to anything else
that might happen? You know, something may come up as to
whether or not we have handled immigration well, you know, the
whole Congress might get indicted on that. We may have some
questions on why we don't have an appropriations bill for the
first time in history. A lot of us haven't handled things well,
and you have said that you haven't, you acknowledged it. Please
stay around, if you would, for this next--to where we can
inquire of you for some answers if we need them. Would you?
Dr. Griffin. Of course. Yes, sir.
Mr. Hall of Texas. Thank you.
Chairman Gordon. Mr. Hall of Tennessee, thank you for being
here. Dr. Griffin, you are a good Administrator of NASA, the
buck stops with you. It is unfortunate you have to spend this
time. I hope the message goes out to those folks that work for
you that they should not put you in this position in the
future.
We will take a recess to go vote and then come back for our
second panel shortly.
[Recess.]
Chairman Gordon. As a courtesy to our witnesses if everyone
would come back and be ready to go we will get started when Mr.
Hall arrives.
I have been informed that Mr. Hall is on his way, and we
are going to assume that it is his pleasure that we do not hold
you up any more than necessary, so we will go ahead and
proceed.
We don't have control over when votes occur. Sorry to hold
you up. This is an important hearing, and we do want to
proceed.
So at this time I will introduce our second panel of
witnesses. Dr. Robert S. Dodd is a safety consultant and
President of Dodd and Associates, LLC. Next, Dr. Jon Krosnick
is the Frederic O. Glover Professor in Humanities and Social
Sciences at Stanford University, and our last witness on this
second panel is Dr. or rather, excuse me, Captain Terry
McVenes, who is the Executive Air Safety Chairman of the Air
Line Pilots Association.
Welcome to all of you. As you know, we hope that you will
submit your full testimony and summarize it in five minutes if
that is possible. If not, we do not want to interfere with a
good hearing today.
And so, Dr. Dodd, the microphone is yours.
Panel 2:
STATEMENT OF DR. ROBERT S. DODD, SAFETY CONSULTANT AND
PRESIDENT, DODD & ASSOCIATES, LLC
Dr. Dodd. Thank you. Good afternoon, Mr. Chairman and
Members of the Committee. My name is Dr. Robert Dodd, and I
appreciate the opportunity to address the Committee on the
NAOMS Project.
For seven years I served as the principal investigator for
NAOMS. I consider myself extremely fortunate to have been
involved with NAOMS. This was a unique project based on
thorough preparation and outstanding science.
NASA managers provided the research team with the support
and leadership needed to design and conduct an exceptional
project. The research team itself was composed of an extremely
well-qualified and knowledgeable group of scientists whose
commitment to the project was unparalleled.
Finally and most importantly, I must acknowledge the
commitment and effort of the hundreds of professional and
general aviation pilots who helped design the survey and the
29,000 pilots who donated over 14,000 hours of their time to
tell us about their safety experiences in an effort to improve
the safety of the Nation's air transportation system.
When I learned that NASA had decided the data collected by
NAOMS would not be released to the public, I was disappointed
and perplexed. I have seen many reasons cited for why NASA
decided these data should not be released. The press reported
that NASA was concerned that the data might frighten airline
passengers, and this would have a negative effect on the well-
being of the airlines.
Other aviation organizations claim that the NAOMS data were
soft data and voluntarily submitted. The implication was that
the NAOMS data were somehow of limited or no value because they
originated with pilots who were voluntarily responding to a
survey.
Finally, there are press reports that stated NAOMS data
were not needed because current FAA oversight systems provided
an adequate picture of the safety performance of the aviation
system. I don't agree with these perspectives.
I believe the American public understands and accepts that
travel by commercial airlines in the United States is the
safest mode of travel in the world. Major air carrier crashes
are thankfully rare events. I don't believe based on my
experience that the NAOMS data contained any information that
would increase the passengers' fear of flying.
NAOMS data, which were collected to help ensure that U.S.
airline safety remains the best in the world, should be released
so they can be used for their intended purpose.
I would like to encourage the Committee to consider why a
program like NAOMS is currently not operating. In most other
aspects of public health and safety, U.S. Government and
industry organizations routinely use surveys to identify and
understand risks. Many of these programs have been in existence
for years and are essential to effective oversight and
evaluations of the Nation's safety and health programs.
A program like NAOMS can help identify risks by obtaining
information from those who should know, the people operating
the system. It can also help evaluate the safety impact of new
technologies as they are introduced. This is an important
consideration in light of all the changes occurring in the
aviation system on a daily basis and especially when we
consider the new technologies such as the air traffic control
overhaul, which is going to be coming shortly.
Finally, an NAOMS-like program can provide quick insight
into how well safety enhancements and improvements are working,
a capability difficult to duplicate with today's aviation
safety oversight systems.
In closing, I believe that NAOMS should be restarted and
operated by an independent and unbiased organization. Such a
program should receive funding directly from Congress to ensure
its budget remains adequate to fulfill its mission.
I appreciate the opportunity to comment on this important
program.
[The prepared statement of Dr. Dodd follows:]
Prepared Statement of Robert S. Dodd
Good afternoon Mr. Chairman, Members of the Committee. My name is
Dr. Robert Dodd and I appreciate the opportunity to address the
Committee on the National Aviation Operations Monitoring Service,
also known as NAOMS.
Between February 1998 and March 2005, a period of seven years, I
served as the principal investigator for the NAOMS project. I
participated in all aspects of the survey including its design,
application, data analysis and project management, often in
collaboration with Mr. Loren Rosenthal, the Battelle Project Manager
for NAOMS. Battelle was the prime contractor for the project.
I consider myself extremely fortunate to have been involved with
NAOMS. This was a unique project based on thorough preparation and
outstanding science. NASA managers provided the research team with the
support and leadership needed to design and conduct an absolutely
outstanding project. The research team itself was composed of an
extremely well qualified and knowledgeable group of scientists whose
commitment to the project was unparalleled. Finally and most
importantly, I must acknowledge the commitment and effort of the
hundreds of professional and general aviation pilots who helped us
design the survey and the 24,000 pilots who donated over 12,000 hours
of their time to tell us about their safety experiences in an effort to
improve the safety of the Nation's air transportation system.
I was disappointed and perplexed when I learned that NASA decided
the data collected by the NAOMS survey would not be released to the
public. While I know that the most notable denial was that issued to
the Associated Press, the Johns Hopkins University Center for Injury
Research and Policy, a reputable safety research organization in
addition to be a leading scholarly institution, was also denied.
Many different reasons were cited for NASA's refusal to release
these data to the public. The press reported that NASA was concerned
that the data might ``frighten airline passengers'' and this would have
``a negative effect on the well being of the airlines.'' Press reports
also indicated that other aviation organizations claimed that the NAOMS
data were ``soft data'' and voluntarily submitted. The implication was
that the NAOMS data were somehow of limited, or no value, because they
originated with pilots voluntarily responding to a survey. Finally,
there were press reports that stated NAOMS data were not needed because
current FAA oversight systems provided an adequate picture of the
safety performance of the National Airspace System.
I find these arguments without merit.
I believe the American public understands and accepts that travel
by commercial airlines in the United States is the safest mode of
travel in the world. Major air carrier crashes are thankfully rare
events. When a major crash occurs, it receives exceptional press
coverage throughout the world, usually with images of destruction and
chaos. Yet passengers continue to fly. I don't believe that the NAOMS
data contained any information that could compare with the image of a
crashed air carrier airplane or would increase passengers' fear of
flying.
I also don't believe the argument that NAOMS data are somehow
limited or of no value because they are derived from a survey has
merit. All data used for analysis, no matter their origin, have
limitations and errors. Based on my experience, most if not all the
databases used by the FAA for safety oversight and analysis contain
errors and have limitations. This is why knowledgeable scientists and
experts are involved in turning these data into useful information for
decision makers. NAOMS data are no different in this regard. The NAOMS
team made an extraordinary effort to clean and validate the data
collected through the survey. The resulting data is of good quality and
ready for meaningful analysis. Why would anyone decide that additional
information, especially when it deals with the safety of the traveling
public, should be hidden?
Finally, the belief that the NAOMS data are not needed because
current safety oversight systems are adequate is untrue. Not all
airlines have Flight Operational Quality Assurance (FOQA) programs or
participate in the Aviation Safety Action Program (ASAP), a pilot based
voluntary reporting system. Further, current safety oversight systems
do not do a good job of measuring safety errors in the general aviation
fleet, among small commercial operators, or among maintenance
technicians, all of which have a direct influence on airline safety. A
program like NAOMS can provide a unique oversight capability for all of
the aviation system.
In closing I would like to encourage the Committee to consider why
a program like NAOMS is not currently operating. In most other aspects
of public health and safety, U.S. Government and industry organizations
routinely use surveys to identify and understand risks to public safety
and health. Many of these programs have been in existence for years and
are central to the evaluation and oversight of the Nation's health and
safety.
A program like NAOMS can:
1. Help identify risks before they result in losses by
obtaining information from those who are in the best position
to know, the people operating the system.
2. Help evaluate the impact of new technology, an important
consideration in light of all the changes occurring in the
National Airspace System including the overhaul of the air
traffic control system.
3. Provide quick insight into how well safety enhancements and
improvements are working, a capability difficult to duplicate
with today's oversight systems.
I believe NAOMS should be reinstituted and operated by an
independent and unbiased organization. Such a program should receive
funding directly from Congress to ensure its budget remains adequate to
fulfill its mission.
Thank you for the opportunity to comment on this important issue.
Biography for Robert S. Dodd
WORK EXPERIENCE
Johns Hopkins University School of Public Health, Baltimore, MD; 1/
2004-Present
Adjunct Faculty
I teach a course at the Johns Hopkins University Bloomberg School
of Public Health titled Transportation Research, Public Policy and
Politics. This graduate-level course is intended to
provide an overview of the significant role of national politics on
transportation safety policy in the United States. Using case studies
of notable safety enhancement efforts in aviation, highway, rail and
maritime transportation, the students are introduced to the significant
roles and interactions of lobbyists, industry associations,
politicians, and federal agencies in transportation safety research and
subsequent safety improvement rule-making. Through lectures, readings
and a field trip, students learn that transportation safety and injury
prevention improvements often require significant efforts to
successfully navigate the path from research findings to interventions
that improve the traveling public's safety and health.
Dodd & Associates, LLC, Gambrills, MD; 6/1998-Present
Owner
Dodd & Associates, LLC is a consulting company that specializes in
transportation safety research and analysis. As owner, I serve as the
senior research scientist and manager. Our business focus includes
transportation safety research, data analysis, research design, survey
research, transportation injury control assessments, safety program
design, safety training, safety audits and analysis, and OSHA
compliance assessments.
I serve as a research scientist on research projects for the
Federal Government and private clients. In many of the projects, I have
served as the principal investigator. Consequently, I am usually
responsible for developing project proposals and the research protocol,
project work plans and time lines, managing project participants,
writing the final reports and presenting the findings to the client and
other organizations as required. I am knowledgeable about government
contracting and grant procedures as a result of my extensive experience in
managing such programs both as a contract and grant recipient.
A sample of projects include:
Principal Investigator, National Aviation Operations Monitoring
Service (NAOMS): Multi-year, multi-million dollar survey study that
collected information on safety incidents from over 22,000 airline
pilots and 4,000 small-airplane pilots. The study was conducted for NASA. I
oversaw experimental development, testing and application of the
project research plan and survey. The surveys were conducted via
telephone and achieved an 80 percent response rate. The project is now
complete and papers are being written for peer review journals.
Principal Investigator, Wide Area Augmentation System (WAAS): This
research project was designed to quantify, in dollars saved, the potential reduction in crashes associated with the planned introduction of the WAAS navigation system. The WAAS
is a satellite-based navigation system developed by the Federal
Aviation Administration (FAA) to provide precision approach capability
to the majority of airports in the continental United States. The
project was conducted for the FAA and resulted in a report for FAA use.
Co-Principal Investigator, Evaluation of the Use of Common
Denominators for Cross Modal Transportation Safety Evaluation: I served
as a co-principal investigator with Professor Susan Baker on a Johns
Hopkins University research project to evaluate the feasibility of
using common exposure measures for cross-modal evaluations in
transportation safety evaluations. This study was sponsored by the
Bureau of Transportation Statistics which is part of the Department of
Transportation.
Audit Team Leader, Patient Transport System Operational Safety
Audits: I lead a team of experts who evaluate the safety of patient
transport operations (both ground and air) for medical transport
services. We have completed over 65 audits to date. The audits focus on patient safety, occupational safety, and transport operations.
Records Management Systems, Incorporated, Fairfax, VA; 3/1996-6/1998
Senior Research Scientist
I served as a senior research scientist for RMS, a government
contractor supporting the Federal Aviation Administration's (FAA)
Office of Aviation Safety. I conducted safety research, assisted in the
design of database and safety analysis systems for the FAA's National
Aviation Safety Data Analysis System (NASDAC) and helped develop safety
programs. I participated in strategic planning, helped design research
protocols and project management plans, and participated in industry
meetings for the FAA.
A key component of NASDAC's mission at that time was the evaluation
and integration of aviation data safety systems into a common access
point for analysis. These data systems were owned and operated by the
FAA, the National Transportation Safety Board (NTSB), the National
Aeronautics and Space Administration (NASA), the British Civil Aviation
Authority (CAA), and private data sources such as AirClaims. As the
primary analyst supporting NASDAC's mission, I became very familiar with these data sources. My familiarity came from using these data for analytical projects and evaluating the databases for accuracy, structure, relevance to current safety issues, and much more. Through
this experience, I became expert in the strengths and limitations of
these data sets.
Battelle Memorial Institute, Columbus, OH; 5/1990-3/1996
Principal Research Scientist
I supported Battelle's transportation group conducting research and
participating as a Battelle representative in meetings and conferences
held in Washington, D.C. I also supported the FAA-funded Aviation Safety Reporting System (ASRS), a voluntary aviation incident reporting
system, by conducting analysis of the data contained in the ASRS
database. I conducted analysis, generated reports, and presented
findings of interest to both government and industry organizations.
Johns Hopkins University School of Public Health, Baltimore, MD; 8/
1988-5/1990
Research Assistant
I was a teaching and research assistant while a full-time doctoral
student. As such, I assisted professors with research activities, including database design, development, and analysis. I also assisted in teaching courses.
National Transportation Safety Board, Washington, DC; 7/1986-8/1988
Transportation Safety Specialist
I was a transportation safety specialist and worked in the safety
studies division. I was responsible for conducting targeted research
investigations of specific transportation safety issues, writing
summary reports and generating corrective recommendations. I assisted
in crash investigations and statistical evaluations. I also
participated in industry meetings, wrote speeches for individual Board
members and made public presentations. I left this position to return
to school for my doctorate.
Air Line Pilots Association, Herndon, VA; 6/1980-7/1986
Staff Safety Engineer
As a staff member of the Engineering and Air Safety Department, I
supported pilot safety committees and worked on safety issues involving
crash survival, airport design and airport safety. Part of my duties
involved responding to FAA Notices of Proposed Rule-making (NPRM) for
safety regulation rule changes. I also worked closely with the FAA and
NTSB on a broad variety of air carrier safety issues. I also managed
safety committees for the Association and participated in industry working
groups sponsored by the Society of Automotive Engineers, National Fire
Protection Association, American Association of Airport Executives and
similar organizations.
Freeway Airport Inc., Mitchellville, MD; 12/1978-6/1980
Flight Instructor
As a charter pilot and flight instructor I was responsible for
conducting air taxi flights for customers and training primary,
advanced and instrument pilots.
EDUCATION
Johns Hopkins University School of Public Health, Baltimore, MD;
Doctorate, 5/1992; Major: Public Health; Minor: Behavioral
Science
Relevant Course Work, Licensures and Certifications:
This course of study was research-oriented and predominantly
quantitative and led to a Doctor of Science (Sc.D.) degree. It included
study of statistics, epidemiology, experimental design, survey design
and application, database design, transportation safety and research
methodology. The main focus was transportation injury prevention and
occupational safety, with secondary study in the behavioral sciences.
This focus included injury coding and outcome measurement, and
observational study design. My thesis evaluated occupant crash survival
and was titled ``Factors Related to Occupant Crash Survival in
Emergency Medical Service Helicopters.''
University of Southern California, Los Angeles, CA; Master's Degree,
12/1981; Major: Safety
Relevant Course Work, Licensures and Certifications:
This degree program used an interdisciplinary systems approach to
the theory and application of modern transportation safety practice.
The curriculum included study in management, technology application,
human factors, accident investigation, risk management, system safety,
environment and communications. Focus areas for my specific course of
study included: structural safety and failure analysis, accident
investigation, human factors, system safety engineering, statistical
analysis, and experimental design in safety research.
University of Maryland, College Park, MD; Bachelor's Degree, 12/1978;
128 Semester Hours; Major: General Studies
Relevant Course Work, Licensures and Certifications:
This course of study led to an independent studies degree with the
main focus on the life sciences, including courses in microbiology,
zoology, physiology, chemistry, and anatomy.
AFFILIATIONS
Association of Air Medical Services, Member, Board of Directors
American Society of Safety Engineers, Professional Member
American Public Health Association, Professional Member
PROFESSIONAL PUBLICATIONS
Scott A, Krosnick J, Dodd R, et al., Comparing Telephone Interviews
with Self-Administered Mailed Questionnaires: Results from a
Field Experiment Assessing Reporting Accuracy. Public Opinion
Quarterly, submitted.
Baker S, Grabowski J, Dodd R, et al., EMS Helicopter Crashes: What
Influences Fatal Outcome? Annals of Emergency Medicine, April
2006 (Vol. 47, Issue 4, Pages 351-356).
Enders J, Dodd R, Fickeisen F, Continuing Airworthiness Risk
Evaluation, Flight Safety Digest, Flight Safety Foundation,
Sept-Oct 1999, Arlington, VA.
Enders J, Dodd R, et al., A Study of Airport Safety With Respect to
Available Approach and Landing Aids, Flight Safety Digest,
Flight Safety Foundation, Nov. 1995.
Baker SP, Lamb M, Dodd R, Crashes of Instructional Flights, Analysis of
Cases and Remedial Approaches, FAA Grant Report #93-G-045,
Johns Hopkins Center for Injury Research and Policy, Baltimore,
MD, Oct. 1994.
Dodd R, The Cost-Effectiveness of Air Medical Helicopter Crash Survival
Enhancements, Air Medical Journal, 13:7, July 1994.
Baker SP, Lamb MW, Li G, Dodd R, Human Factors in Crashes of Commuter
Airplanes, Aviation, Space, and Environmental Medicine, 1993 May;
64(5):417.
Dodd R, Occupant Survival In Emergency Medical Service Helicopter
Crashes, Transportation Research Record of the National
Research Council, 1992.
Dodd R, ASRS: An Underused Resource, The Journal of Air Medical
Transport, Vol. 10, No. 10, Oct. 1991.
Eldredge D, Dodd R, Mangold S, Categorization and Classification of
Flight Management System Incidents Reported to The Aviation
Safety Reporting System, Battelle Memorial Institute, Columbus,
OH, Contract No. DRTS-57-89-D00086, June 1991.
Dodd R, Reporting Accident Rates per 100,000 Patient Transports: A Responsible Technique, letter to the editor, The Journal of Air
Medical Transport, Vol. 10, No. 2, Feb.
ADDITIONAL INFORMATION
Adjunct Faculty, Johns Hopkins University Bloomberg School of
Public Health, Center for Injury Research and Evaluation
John W. Hill Safety Scholarship, University of Georgia
William Haddon Fellowship in Injury Control, Insurance
Institute for Highway Safety
Graduate Research Award Program, Public-Sector Aviation
Issues, Transportation Research Board, National Academy of Sciences
Outstanding Performance Award, National Transportation Safety
Board
At-Large Member, Board of Directors, Association of Air
Medical Services
Chair of the Safety Committee, Association of Air Medical
Services
Airline transport rated multi-engine pilot (ATP-ME)
Chairman Gordon. Thank you very much, Dr. Dodd.
Our next witness, please proceed.
STATEMENT OF DR. JON A. KROSNICK, FREDERIC O. GLOVER PROFESSOR
IN HUMANITIES AND SOCIAL SCIENCES, STANFORD UNIVERSITY
Dr. Krosnick. Thank you. Mr. Chairman, thank you for the
opportunity to testify today. I am a Professor at Stanford
University with expertise in psychology and political science,
and I have devoted most of my career to the study and use of
survey methodology. I have conducted more than 100 surveys and
have carried out research to identify best practices in the
design of surveys. I have written more than 350 research papers
and received 65 grants and contracts to support my research,
mostly from the Federal Government.
I have written a textbook in this area, and as an expert on
survey methods, I have advised many federal agencies on how to
conduct their surveys, including the GAO, the IRS, the CIA, the
NIH, NOAA, EPA, the Census Bureau, the Bureau of Labor
Statistics, CDC, and others.
I am here to thank and congratulate NASA and to offer my
praise to them for a job well done to the highest standards of
excellence so far in their work on NAOMS. There are many data
collection systems in place to track air safety problems, and
NAOMS is a terrific addition to this array.
In my opinion NAOMS has been a great success, and NASA
deserves to be very proud of this success and deserves the
thanks of this Congress and of all Americans.
As you know NAOMS was designed to measure the frequency of
the precursors of aviation accidents through statistically-
reliable scientific surveys of pilots. You might imagine that
information on these events can be collected reliably by
machines, by black boxes on aircraft, by computers in the air
traffic control system, and by video cameras watching airport
operations.
But imagine the gigantic volume of information that would
be collected by such systems in just one day and imagine trying
to wade through that mountain of information to try to identify
safety-compromising events. And that mountain would not even
include the many experiences and events that occur during
interactions between people without a machine record.
This is why NAOMS was conceived as it was: to use the eyes
and ears of the people actually operating the aviation system
to track what they experience and convey the resulting
information to policy-makers. For decades the Federal
Government has sponsored many longstanding and recurring survey
projects to collect information used to promote public welfare.
The unemployment rate is measured through surveys, the
inflation rate is measured through surveys, and federal
agencies regularly conduct surveys to measure much, much more.
Surveys are a mainstay of the Federal Government and have
been shown to provide valuable scientific measurements of the
experiences of our nation's huge population quickly,
accurately, and inexpensively as compared to other ways to
learn the same information.
Loren Rosenthal's vision of NAOMS is shown on this slide,
which was presented by NASA in many public meetings. The NAOMS
Project was to involve the design and implementation of surveys
not only of pilots but also of air traffic controllers, flight
attendants, and mechanics every week of every year to measure
how many of various specific accident precursors they had
witnessed while working during the past 60 days.
As you can see from this diagram in the upper right, this
was to be a permanent monitoring system. I was privileged to be
asked to serve as a consultant to the team of superb
professionals who have carried out the work done on NAOMS to
date. As I watched the team do its work over a period of years,
I saw a great deal about how it was done.
I look forward to answering your questions, but in the
remaining opening moments I have I would like to set the record
straight on five important misunderstandings that have found
their way into the public discussion of NAOMS during the past
week.
First, some people have claimed that the NAOMS methodology
was not peer reviewed. This is incorrect. The survey methods
used in NAOMS have been peer reviewed and widely accepted in
the field for more than 40 years. And the NAOMS Team used peer
reviewed and well-established evaluation techniques to select
the best standard methods for use in the NAOMS surveys.
Furthermore, survey research experts at the White House
Office of Management and Budget must review every federal
survey project to assure that the methods to be used are
optimal, and they reviewed and approved the NAOMS methodology.
And prior to that approval process the NAOMS Team had held
dozens of meetings, workshops, and consultations around the
country with aviation experts, interested parties, and social
scientists to describe the project's methodology and get
reviews, comments, and suggestions.
Second, some people have said that NAOMS was not shut down
prematurely. This is incorrect. The slide up on the screen
shows you that initial NAOMS funding was intended to pay for
surveys to be done not only of pilots but of air traffic
controllers, flight attendants, and mechanics. But the funding
for NAOMS was ended before that work was initiated.
Third, some people have said that the NAOMS Project was
designed simply to test the feasibility of a method, not to
implement that method in a long-term survey monitoring system.
This is incorrect. We determined that the method was viable and
effective after a field trial involving 635 pilots. You don't
do 24,000 interviews of pilots to test the feasibility of a
method. You do that many interviews after you know the method
is feasible and ready for prime time.
Fourth, some people have said that if the NAOMS data were
released to the public, individual pilots or airlines would be
identifiable. This is incorrect. The overwhelming majority of
NAOMS data cannot be linked to any pilot or airline because the
system was set up to assure that from the start. The very small
number of instances in which a pilot mentioned a specific
airline or event date spontaneously can easily be removed from
the public data set and made available to analysts only through
Census Data Centers, which the Federal Government created
exactly for the purpose of allowing researchers to use highly-
confidential government data for research purposes while
protecting anonymity.
Lastly, some people have said NAOMS data cannot be used to
compute the rates at which events happened because multiple
respondents might have reported the same event, leading to
overestimates. This is incorrect. NAOMS was designed
intentionally to collect multiple reports of the same event,
and NAOMS was also designed to implement a statistical
procedure to recognize this multiple reporting when translating
the results of the surveys into computation of event counts.
My best guess as to why you heard earlier that event rates in the survey are too high is that this correction is not being implemented properly.
Thus, these five criticisms of NAOMS are unfounded, and for
these many reasons I believe that NASA deserves terrific praise
for initiating NAOMS and for carrying out the work done so far
so well. The method offers a new way to complement existing
streams of data on aviation safety and it is relatively cheap
and quick compared to the other methods being implemented.
So in closing I want to thank NASA for the decision to make
existing NAOMS data available to the public, along with
complete documentation on exactly how the data were collected,
but most importantly I want to urge NASA and this committee to
restart NAOMS data collection where they left off. There is
much left on the diagram on the screen to be done, and if NASA
gets to work doing it, there will almost certainly be terrific
benefits for this nation. And this committee can take some
credit for those benefits if that comes about.
NASA did a great job with NAOMS already, and they have a
unique position of trust, objectivity, and scientific expertise
in the aviation world that will allow them to carry out this
work with efficiency and credibility. I hope they will choose to
continue this important work in the future.
Thank you very much.
[The prepared statement of Dr. Krosnick follows:]
Prepared Statement of Jon A. Krosnick
Thank you very much for the invitation to submit this statement and
to testify before the Committee as it explores the history of NASA's
National Aviation Operations Monitoring Service (NAOMS).
Currently at Stanford, I am the Frederic O. Glover Professor of
Humanities and Social Sciences, Professor of Communication, Professor
of Political Science, Professor of Psychology (by courtesy), and
Associate Director of the Institute for Research in the Social
Sciences.
As a member of the team that developed NAOMS, my role was as an
expert on survey research methodology and questionnaire design.
My Qualifications and Experience
While I have been a Professor at the Ohio State University and now
at Stanford University, a great deal of my research has involved the
collection and analysis of survey data, and many of my publications
have been designed to identify best practices in survey methodology.
As my curriculum vitae outlines (see Appendix A of this statement),
I have published five books and am currently completing another, The
Handbook of Questionnaire Design (Oxford University Press). I have
published 107 journal articles and book chapters in peer-reviewed
publications. I have presented 252 papers reporting my research
findings at research conferences around the world, where presentations
were selected through a peer review process. I have received 65 grants
and contracts supporting my research and am currently overseeing active
grants and contracts totaling more than $10 million.
I have served as a consultant to the following federal agencies on
survey research issues: The Government Accountability Office (GAO), the
Internal Revenue Service (IRS), the Central Intelligence Agency (CIA),
the National Institutes of Health (NIH), the National Oceanic and
Atmospheric Administration (NOAA), the Environmental Protection Agency
(EPA), the Bureau of the Census, the Bureau of Labor Statistics (BLS),
the Centers for Disease Control and Prevention (CDC), and the National
Cancer Institute (NCI). I have advised these agencies on how to
implement best practices in the survey research they conduct.
I currently serve as co-principal investigator of the American
National Election Study (ANES), the academic world's leading survey
study of voting and elections, which is supported by a $7.6 million
grant from the National Science Foundation. This project began in 1948
with a national survey of a representative sample of American voters,
and the same sort of survey has been conducted every two years since
then. The data from the ANES are made public at no charge to all
interested investigators around the world. As co-principal
investigator, my responsibilities include all decisions about
methodology for the collection of the survey data and all decisions
regarding the design of the questionnaires used.
I also serve on the Board of Overseers of the General Social
Survey, which is the Nation's preeminent survey study of trends in
Americans' social and political attitudes and behavioral experiences.
Since the early 1970s, this study has involved annual or biennial surveys of representative national samples of American adults, interviewed in their homes for hours, to document a wide range of their opinions and experiences. Like the ANES, the GSS has been funded by the National Science Foundation, and the study's data are made available for free to all interested researchers around the world.
The NAOMS Vision
The impetus for NAOMS was a commitment made in the 1990s by the
Federal Government to reduce the risk of commercial airplane crashes by
a specific targeted amount within ten years. Once that target was set,
federal agencies looked for ways to assess whether that goal would be
achieved and realized they had none. Simply tracking plane crashes
would not be sufficient, because they happen extremely rarely and
therefore do not indicate the amount of underlying risk posed by the
many small events that, when cumulated, can increase the risk of an
accident. Consequently, some alternative monitoring system was needed.
The Federal Aviation Administration, other agencies, and private
sector organizations (e.g., commercial airlines) have been collecting
some information on the frequency with which some risk-elevating events
have been occurring. But the array of event types being tracked was
more limited than is needed for thoroughly tracking the functioning of
the entire air travel system. Some anecdotal information has also been
collected, but this information could not be used to calculate
statistically reliable risk levels. Therefore, a new system for
collecting information on the frequency of precursors to accidents was
needed.
NAOMS was designed to serve this purpose and to collect the needed
information via high quality scientific and reliable surveys of people
around the world who were watching the operation of the aviation system
first-hand and who knew what was happening in the field. Indeed this
use of the survey method was in keeping with many other long-term
federally funded survey projects that provide valuable information to
monitor public risk, identify sources of risk that could be minimized,
identify upward or downward trends in specific risk areas, to call
attention to successes, identify areas needing improvement, and thereby
save lives while promoting commerce in the Nation.
As originally conceived by Battelle Project Manager Loren
Rosenthal, NAOMS was to be a multifaceted survey project building on
the Aviation Safety Reporting System (ASRS). For many years, ASRS has
been a successful system for collecting anecdotal information from
pilots about some of the risk-elevating events they witnessed. Each
time an event occurs, a pilot can choose to fill out a form describing
it briefly and mail the form to NASA's ASRS office in Mountain View,
California. An aviation expert then telephones the reporting pilot to conduct an interview gathering detailed information about the event. A
subset of this information is then entered anonymously into a database
that NASA maintains. And when important insights about risks have been
obtained through this system, NASA has sent out reports to the aviation
community.
ASRS has successfully collected information that has had observable
positive effects enhancing public safety. Pilots have come to trust it
and NASA generally (because nothing undesirable has occurred to a pilot
as the result of filing an ASRS report), and ASRS has had the
flexibility to collect data on whatever events pilots deem worth
reporting.
But this flexibility also constitutes a significant limitation of
ASRS. Because pilots voluntarily choose to file reports on
events, their choices about when to report and what to report are
uncontrolled. Consequently, many safety-related events go unreported to
ASRS. And as a result, it is impossible to use ASRS to track trends in
event rates over time. Therefore, NAOMS was envisioned to complement
ASRS by producing accurate measurements of rates and trends in rates of
a wide array of types of events.
Every week of every year, NAOMS was to collect information
from a representative sample of pilots flying commercial aircraft. The
pilots would be asked to report the number of each of a series of
different specific events that they had witnessed during a specific
recent time period (e.g., the last 60 days). These counts could then be
used to calculate the rates at which the events had occurred during
that period throughout the entire air travel system.
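Purely to make the arithmetic concrete, the short Python sketch below shows one simple way such counts could be converted into an exposure-based rate. The function name and all figures are hypothetical illustrations, not the actual NAOMS estimation procedure or actual NAOMS data, and (as discussed later in this statement) the real calculation requires further corrections.

```python
# Hypothetical sketch: turning surveyed event counts into an
# exposure-based rate. None of these figures are actual NAOMS data.

def events_per_100k_hours(total_reported_events, total_flight_hours):
    """Rate of events per 100,000 flight hours flown by respondents
    during the recall period (e.g., the last 60 days)."""
    return 100_000 * total_reported_events / total_flight_hours

# Example: respondents report 42 events across 250,000 flight hours.
rate = events_per_100k_hours(42, 250_000)
print(f"{rate:.1f} events per 100,000 flight hours")  # 16.8
```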
NAOMS had the potential to succeed especially because ASRS had
already been successful. The trust that the community of commercial
pilots had developed in NASA through its running of ASRS meant that
these pilots could most likely be counted on to participate in NAOMS
surveys at a high rate without concern about retribution. That is, the
pilots could be expected to provide accurate and honest reports of
event frequencies, because they already knew that NASA (through ASRS)
was capable of compiling and reporting such data in a trustworthy and
safety-enhancing way.
But NAOMS was envisioned to go well beyond ASRS, by tapping the
knowledge and experiences of other professionals participating in the
air travel system and observing risk-elevating events. Specifically,
the original plan for NAOMS included collecting survey data every week
of every year from general aviation pilots, helicopter pilots, air
traffic controllers, flight attendants, and mechanics, as shown in the
following timeline that was presented by NASA at various public
meetings describing the project:
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Thus, the plan was to design and implement a ``permanent survey''
data collection operation to generate ongoing data to track event rates
into the future.
NAOMS Resembled Many Other Federal Surveys
This use of survey methodology in NAOMS was consistent with the
conduct of surveys by many organizations in the public and private
sectors to track rates of events over time and to inform decision-
making and organizational practices. Survey methodology is a highly
developed science that can utilize reports of people's experiences to
document events occurring around the Nation and around the world
quickly and cheaply. In fact, each year, billions of dollars are spent
conducting surveys around the world. The U.S. Federal Government is one
of the largest producers of such data. For decades, survey data have
been routinely collected and used by many federal agencies to track
contemporary life in America in a wide array of domains and to provide
valuable information for policy-making and policy implementation.
A small subset of the survey research projects that have been
funded by the U.S. government continuously, beginning in the years
shown and sponsored by the agencies in parentheses, includes:
Survey of Income and Program Participation (Census
Bureau) 1984-
Consumer Expenditure Surveys (Census Bureau) 1968-
Annual Housing Surveys (Census Bureau) 1973-
Survey of Consumer Attitudes and Behavior (National
Science Foundation) 1953-
Health and Nutrition Examination Surveys (National
Center for Health Statistics) 1959-
National Health Interview Surveys (National Center
for Health Statistics) 1970-
American National Election Studies (National Science
Foundation) 1948-
Panel Study of Income Dynamics (National Science
Foundation) 1968-
General Social Survey (National Science Foundation)
1972-
National Longitudinal Survey (Bureau of Labor
Statistics) 1964-
Behavioral Risk Factor Surveillance System (Centers
for Disease Control and Prevention) 1984-
Monitoring the Future (National Institute on Drug
Abuse) 1975-
Continuing Survey of Food Intake by Individuals
(Department of Agriculture) 1985-
National Aviation Operations Monitoring Service
(National Aeronautics and Space Administration) 2002-
National Survey of Drinking and Driving (National
Highway Traffic Safety Administration) 1991-
National Survey of Family Growth (National Center for
Health Statistics) 1973-
National Survey of Fishing, Hunting, and Wildlife-
Associated Recreation (Census Bureau) 1991-
National Survey of Child and Adolescent Well-Being
(Department of Health and Human Services) 1997-
Survey of Earned Doctorates (Science Resources
Statistics Program, National Science Foundation) 1958-
National Survey on Drug Use and Health (Department of
Health and Human Services) 1971-
Youth Risk Behavior Surveillance System (Department
of Health and Human Services) 1990-
National Crime Victimization Survey (Bureau of
Justice Statistics) 1973-
Schools and Staffing Survey (National Center for
Educational Statistics) 1987-
Educational Longitudinal Survey (National Center for
Educational Statistics) 2002-
Current Employment Statistics Survey (Bureau of Labor
Statistics) 1939-
Just a few of the many other major surveys sponsored by federal
agencies over the years include:
National Survey of Distracted and Drowsy Driving
(National Highway Traffic Safety Administration)
National Survey of Veterans (Department of Veterans
Affairs)
National Survey of Children's Health (Health
Resources and Services Administration's Maternal and Child
Health Bureau)
National Survey of Recent College Graduates (Science
Resources Statistics Program, National Science Foundation)
National Survey of Speeding and Other Unsafe Driving
Actions (National Highway Traffic Safety Administration,
Department of Transportation)
Survey data form the basis of many important government policy-
making decisions. For example, economists in the Federal Reserve and
other agencies pay close attention to the federal unemployment and
inflation rates, both of which are calculated using data from national
surveys. The many other federal agencies listed above collect survey
data because those data are used in on-going decision-making.
Decades of research have shown that the reliability and validity of
optimally-collected survey data are generally quite high, and that
respondents can be relied upon to provide quite accurate descriptions
of their past experiences, behaviors, and opinions. Most visibly,
surveys conducted just before U.S. presidential elections predict the
actual election vote results very closely (see, e.g., Visser, P.S., Krosnick, J.A., Marquette, J., & Curtin, M., 1996, Mail surveys for election forecasting? An evaluation of the Columbus Dispatch poll, Public Opinion Quarterly, 60, 181-227; Visser, P.S., Krosnick, J.A., Marquette, J., & Curtin, M., 2000, Improving election forecasting: Allocation of undecided respondents, identification of likely voters, and response order effects, in P. Lavrakas & M. Traugott (Eds.), Election polls, the news media, and democracy, New York, NY: Chatham House). Even when there is error in such survey measurements (and there
is), the error is not huge in percentage point terms (bearing in mind
that a small shift in percentages can change the winner of a close
election). For example, since 1936, the percentage of votes won by the winner has correlated .85 with the Gallup Poll's pre-election prediction of that percentage, a nearly perfect association.\1\ Likewise, since 1948, the American National Election Study surveys' post-election measurements of the proportions of votes won by the winning presidential candidate have correlated .92 with official government vote counts, again nearly perfect.
---------------------------------------------------------------------------
\1\ Correlations can range from 1 (meaning a perfect positive match between the variables) through 0 (meaning no linear relation between the variables) to -1 (meaning a perfect inverse relation between the variables).
---------------------------------------------------------------------------
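For readers unfamiliar with the statistic, the Python sketch below shows how such a correlation coefficient (Pearson's r) is computed. The vote percentages are invented for illustration; they are not actual poll or election figures.

```python
# Illustrative computation of a correlation coefficient (Pearson's r).
# The percentages below are invented, not actual poll or election data.
import statistics

def pearson_r(xs, ys):
    """Covariance of x and y divided by the product of their
    standard deviations; ranges from -1 through 0 to +1."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

predicted = [52.1, 48.9, 55.3, 50.2, 46.8]  # hypothetical predictions (%)
actual = [51.4, 49.7, 54.1, 50.9, 47.5]     # hypothetical results (%)
print(round(pearson_r(predicted, actual), 3))
```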
Equally striking are the results of the Monthly Survey of Consumer
Attitudes and Behavior, conducted continuously by the University of
Michigan's Survey Research Center since 1970. Each month, a
representative national sample of American adults has been asked what
they expect to happen to the unemployment and inflation rates in the
future (as well as many other topics), and their aggregated answers
have predicted later changes in actual unemployment and inflation
remarkably well (correlations of .80 and .90, respectively, between
1970 and 1995). This is testimony not only to the aggregated wisdom of
the American public but also to the ability of scientific surveys to
measure that wisdom accurately.
A high level of accuracy can be achieved if optimal procedures are
implemented to conduct a survey, and departures from such procedures
can significantly compromise the accuracy of a survey's findings.
Necessary features include drawing a representative sample of the
population, taking extensive steps to collect data from as many sampled
people as possible, optimizing the choice of survey mode to achieve
accurate measurements, asking questions that are easily comprehensible
and do not entail biased wording or format, weighting results to
correct for unequal sampling probabilities, and much more.
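As one illustration of the weighting step just mentioned, the following Python sketch applies inverse-probability weights to correct for unequal sampling probabilities. The strata, selection probabilities, and counts are hypothetical, not drawn from NAOMS.

```python
# Hypothetical sketch of inverse-probability weighting, one of the
# design features listed above. Strata and numbers are invented.

# Suppose one stratum of pilots was sampled at 1 in 200 and another
# at 1 in 100; each respondent's weight is the inverse of his or her
# selection probability.
respondents = [
    {"events": 3, "selection_prob": 1 / 200},
    {"events": 1, "selection_prob": 1 / 200},
    {"events": 2, "selection_prob": 1 / 100},
]

weighted_events = sum(r["events"] / r["selection_prob"] for r in respondents)
weighted_count = sum(1 / r["selection_prob"] for r in respondents)
print("weighted mean events per pilot:", weighted_events / weighted_count)
# (3*200 + 1*200 + 2*100) / (200 + 200 + 100) = 2.0
```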
Survey Methods Development in NAOMS
When I was brought onto the research team, I was told that the
project was committed not just to designing and conducting surveys, but
to doing so with the best possible practices to assure the most
accurate data possible. Thus, rather than simply using intuition and
budget limitations as guidelines for making methodological decisions,
the project set out to design practices that would optimize data
accuracy.
To this end, we conducted a series of studies, including a large-
scale field trial, to answer a series of questions with regard to the
first survey we developed for air carrier pilots:
What risk-elevating events should we ask the pilots
to count?
How shall we gather the information from pilots--
written questionnaires, telephone interviews, or face-to-face
interviews?
How far back in the past can we ask pilots to
remember without reducing the accuracy of their recollections?
In what order should the events be asked about in the
questionnaire?
What events? The goal of the NAOMS survey was to collect
information on as many different sorts of risk-elevating events as
possible. To begin generating a comprehensive list of such events, we
conducted a series of focus group discussions with professionals who
were active in the air traffic system, including air carrier pilots,
general aviation pilots, helicopter pilots, and air traffic
controllers. In each of these group discussions, we asked participants
to generate as comprehensive a list of risk-inducing events as they
could during a two-hour period. These exercises revealed a coherent and
repeatedly-occurring list of events that seemed quite suitable for
tracking by NAOMS surveys.
In addition, we consulted with industry and government safety
groups, including members of CAST, the FAA, and the analysts who
conducted telephone interviews of pilots submitting reports to ASRS. We
also reviewed the contents of aviation event databases, such as the
ASRS, NAIMS, and BTS databases. In the end, we chose to track a set of
events that was faithful to those pinpointed by these data-gathering
exercises.
What mode? At the time that NAOMS was launched, it was widely
recognized in the survey research community that face-to-face
interviewing was the optimal way to collect accurate and honest data
from respondents. Although most surveys at that time were being
conducted by telephone, the Federal Government's most important and
visible surveys continued to rely on face-to-face interviewing. When a
competent, committed, and professional interviewer meets face-to-face
with a respondent, the respondent develops a sense of trust in and
rapport with the interviewer, inspiring the respondent to devote the
cognitive effort needed to generate accurate responses and the
confidence that his/her identity will be protected, so that honest
reports can be provided without fear of retribution.
We therefore decided to explore the viability of face-to-face
interviewing of pilots for NAOMS. However, we recognized that such
interviewing would be costly and logistically challenging, so we also
explored the viability of two alternative modes: telephone interviewing
and paper-and-pencil questionnaires. At the time we initiated NAOMS,
the published survey methodology literature did not offer clear
guidance about the quality of data to be expected from these two latter
modes. We therefore designed a ``field trial'' to compare the three
modes of data collection.
At the start of the field trial, a sample of licensed pilots was
selected to be interviewed face-to-face. But it quickly became clear
that because of the ongoing mobility of the pilots, it would be
practically impossible to coordinate schedules with them to allow
interviewers to meet with them and conduct interviews at anything
approaching a reasonable cost. Therefore, face-to-face interviewing was
abandoned. Consequently, the field trial focused on comparing telephone
interviewing and paper questionnaires mailed to respondents using a
method developed by Professor Don Dillman (a long-time consultant to
the U.S. Census Bureau) to assure high response rates.
Pilots were randomly assigned to be interviewed in one of these
modes, and the survey research group at Battelle's Center for Public
Health Research and Evaluation conducted the data collection. The cost
per interview was $60 for each mailed questionnaire completed, as
compared to $75 for each telephone interview completed. But according
to all indicators of data quality, we got what we paid for: the
telephone interviews yielded superior data. For example, the response
rate for the mail questionnaires was 73 percent, and the response rate
for the telephone interviews was 81 percent. Whereas pilots never
failed to answer a question during a telephone interview, respondents
failed to answer 4.8 percent of the questions on the paper
questionnaires. Respondents reported significantly more confidence in
the accuracy of their answers during the telephone interviews than of
their answers on the paper questionnaires. And a built-in accuracy
check showed that the telephone responses were 30 percent more accurate
than the paper responses. We therefore chose to conduct the survey via
telephone interviews.
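Purely to tabulate the comparison being described, the short Python sketch below collects the field-trial figures reported above and computes the cost difference per completed interview. The layout and variable names are mine; the numbers are those given in the text.

```python
# Field-trial mode comparison, using the figures reported in the text.
modes = {
    "mail":  {"cost": 60, "response_rate": 0.73, "item_nonresponse": 0.048},
    "phone": {"cost": 75, "response_rate": 0.81, "item_nonresponse": 0.0},
}

for name, m in modes.items():
    print(f"{name}: ${m['cost']} per complete, "
          f"{m['response_rate']:.0%} response rate, "
          f"{m['item_nonresponse']:.1%} unanswered items")

premium = modes["phone"]["cost"] - modes["mail"]["cost"]
print(f"telephone premium: ${premium} per completed interview")
```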
How far back in the past could pilots remember accurately? Our goal
was to collect information on as many events as possible without
compromising the accuracy of recollections. The longer the time period
that pilots were asked to describe, the more rare events could be
detected, with no added cost. But if the recall period addressed in the
questionnaire was short, then we would have had to increase the number
of pilots interviewed considerably in order to detect rare events. A
comprehensive review of the existing scholarly literature did not
provide clear guidance on what the optimal recall period would be for
NAOMS pilots, so we built into the field trial a manipulation designed
to identify this optimal recall period.
Specifically, we randomly assigned some pilots to report on the
events they witnessed during the last week and others to report on the
last two weeks, the last four weeks, the last two months, the last four
months, or the last six months. We found that the most accurate reports
were provided for the two-month recall period, so we selected that
period for the final questionnaire. During the initial months of NAOMS
main study data collection, respondents were randomly assigned to be
asked about either the last 30 days, the last 60 days, or the last 90
days. But eventually, all pilots were asked about the last 60 days.
What order of questions? Once we had specified a list of events to
be addressed, we had to specify the order in which to ask about these
events. If the order is optimized, it can make respondents' reporting
process easier and their reports more accurate. And if order is not
optimized, it can increase the difficulty of the task for the
respondents, decrease their enjoyment of it, thereby decrease their
motivation to provide accurate reports, and in the end, reduce the
accuracy of the reports they do provide.
Optimizing question order begins with the recognition that more
complete and accurate recollection occurs when question order matches
the way that information is organized in people's long-term memories.
That is, psychologists believe that clusters of related pieces of
information are stored together in memory. Asking a person to go to a
specific location in memory and retrieve all the needed information
there before moving on to retrieving information from a different
location is preferable to asking people to jump around from place to
place in memory, question by question (e.g., Barsalou, 1988; DeNisi &
Peters, 1996; Raaijmakers & Shiffrin, 1981; Sudman, Bradburn, &
Schwarz, 1996; Tulving, 1972).
According to this logic, memories of similar safety-compromising
events are likely to be stored together in clusters in pilots'
memories. So once a pilot begins retrieving memories from a particular
cluster, it is easiest and most efficient to recall all other memories
in that cluster, rather than jumping to another cluster. Therefore, our
questionnaire grouped together questions asking about events that were
stored near one another in pilots' memories.
Identifying each respondent's memory organization scheme at the
start of each interview is not practical. However, it was possible to
assess the most common type or types of mental organizations used by
pilots and tailor our questionnaire design to those types. We conducted
a series of studies using a series of methods drawn from cognitive
psychology to identify pilots' memory organizations, and the results of
these studies clearly pointed to a memory organization that applied
well across pilots and that we showed could be used to enhance the
accuracy of recollections. In fact, our testing indicated that using
the memory organization we identified to order questions enhanced
recall accuracy by 25 percent or more over other orders we tested.
Questionnaire pretesting. Once a survey questionnaire is designed,
it is important to pretest it in various ways to assure that
respondents understand the questions and can answer them. To test
understandability and answerability, we conducted a series of tests.
One test was built into the field trial, whereby we asked respondents
to comment on and evaluate the understandability of the questions and
to identify any questions that were not sufficiently clear and
understandable. We also conducted cognitive think-aloud pretest
interviews using a technique pioneered by researchers at the National
Center for Health Statistics. This involved having pilots listen to the
questions, restate them in their own words, and think aloud while
answering the questions. These pretests were used to identify instances
in which question wording needed improvement.
Field trial results. The field trial involved collecting data from
about 600 pilots, and this allowed us to evaluate the performance of
the methodology fully. The results produced by the field trial
documented that the methodology worked well. We achieved a very high
response rate, and tests indicated high validity of the data. Thus, at
the conclusion of the field trial, we had evidence sufficient to
conclude that the method was well-designed and suitable for generating
reliable data.
Peer reviewing. Questions have been raised recently about whether
the NAOMS methodology was subjected to a peer review process. In fact,
peer review did occur. The research plan for NAOMS was presented at
many public meetings and private meetings with stakeholder
organizations, with experts involved in aviation, and with social science researchers. In all of these meetings, details of the rationale for NAOMS and its methodology were described. The attendees asked
questions, made comments, and offered suggestions. In addition,
multiple meetings were held with large groups of NASA staff and FAA
staff to provide details on the NAOMS plan and accomplishments and to
acquire feedback.
As far as I understand, NASA did not request or suggest to the
NAOMS project team that any additional peer review occur. If such a
request had been made, we would have been happy to implement additional
review processes. However, the lack of such a request was not
surprising to me or unusual in the context of federal survey design and
data collection. I have been involved in many federal survey projects,
and I have advised federal agencies on many others. The vast majority
of these projects involved less peer review than NAOMS carried out. In
fact, the only federally funded survey studies I know of that have
routinely involved elaborately structured peer review processes are ones
that were conducted by the government for use in litigation. These peer
review processes rarely yielded significant changes in the survey
process. I therefore do not believe that any additional peer review of
the NAOMS methodology would have been significantly beneficial or
caused any significant changes in procedure.
An important reason for this is that in my role as a professor, I
am responsible for keeping fully informed about the state of the survey
methodology literature and new developments in survey techniques. By
reading printed and online publications and attending conferences to
hear presentations, I stay abreast of the field's understanding of best
practices. Consequently, I was called upon regularly to evaluate our
methodology vis-a-vis common practices in the field of survey research
and the views of my professional peers on design issues. Thus, the
views of my peers were regularly a focus during our planning process.
Summary. The methods we used to develop the NAOMS questionnaire
were state of the art. Indeed, the preliminary studies we conducted
constitute valuable contributions to the scholarly literature on
optimal survey design, producing findings pointing to best practices
and identifying new methods for future tests intended to optimize
survey designs. Thus, NASA can be very proud of what it accomplished
during this phase of the project.
My View of NAOMS
It was a privilege and an honor for me to have been asked to serve
as a methodology expert on the NAOMS project. And it was a pleasure to
work with the research team that carried out the project. Robert Dodd
(now of the NTSB), Loren Rosenthal and Joan Cwi (of Battelle Memorial
Institute), and Mary Conners and Linda Connell (of NASA) were
consummate professionals who worked wonderfully together, even through
times of tough decision-making. And the work done by the team was of
superb quality.
Because NAOMS was so well conceived, I looked forward to
continuation of the project and the development of a large publicly
available database for the study of air travel safety. In our public
meetings with interested parties, we presented the following slides to
illustrate the widespread use of surveys by federal agencies and the
common practices for running these surveys over long time periods and
distributing the data.
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Thus, we intended to set up such a long-term data collection and
dissemination system for NAOMS.
When I heard that interviewing of air carrier pilots had been
terminated and then that all funding for NAOMS had been stopped, I was
surprised. As far as I knew, the project had been conducted according
to best practices, and nothing that happened during that period
suggested anything to the contrary.
In my view, NAOMS was intelligently conceived and excellently
implemented. Thus, as far as it went, NAOMS deserves a great deal of praise from NASA and from all Americans. Indeed, NASA and the Federal Government should be very proud of what they accomplished with
NAOMS, because its success is just what all government agencies hope
for when setting out to do good for this nation.
My belief in the value of NAOMS for this country led me to write an
op-ed essay published in the New York Times in 2006 just after I got
the news of discontinued funding. I wrote that essay with the goal of
calling attention to the great success of NAOMS and perhaps to lead to
a reconsideration of its termination.
At the very least, I hoped that a way could be devised to allow
researchers to have access to the data that were collected via
approximately 24,000 interviews with air carrier pilots over a period
of years.
These data can be useful in a number of ways. First, they can
document the frequency with which various types of events were
occurring. According to our interviews with pilots early on in the
project, they thought that NAOMS would be valuable partly because it
would call attention to surprisingly high frequencies of some low-risk
events that could be easily reduced or eliminated.
Second, the NAOMS data can be compared to data on the frequency of
similar events collected by other data sources. For example, ASRS and
the FAA collect data that can be used to compute event rates that can be compared directly to rates for some of the events asked about in the NAOMS
questionnaire. If the NAOMS questionnaires yield different rates than
these other reporting systems, that would highlight potential
opportunities to explore the sources of those discrepancies, which
might yield improvements in measurement methods and a clearer
understanding of what measurement procedures are most accurate.
Third, the NAOMS data can be used to compute trends over time in
event rates. This was of course the primary intended purpose of NAOMS
when it was originally envisioned. Thus, NAOMS could be used to gauge
whether changes in the air travel system during the years of data
collection were successful in reducing risk. Because NAOMS data were
collected both before and after September 11, 2001, it would be
possible to see how the changes in practices that occurred at that time
translated into changes in event frequencies.
Fourth, the NAOMS questionnaires are designed in ways that allow
analysts to assess some of the conditions under which particular types
of events are most likely to occur. For example, it is possible to
explore whether some types of events occurred more on aircraft flown by
pilots with less total career flying experience or by pilots with more
than a certain amount of experience. It is possible to explore whether
some types of events occurred more on some types of aircraft than on
others. Such findings could be used to inspire further research to
identify the reasons for the observed relations and then perhaps to
change aviation practices to enhance safety.
Fifth, the NAOMS data would allow researchers to conduct studies
for optimizing survey methods generally. Not only is this possible by
publishing reports of the field trial and preliminary studies done to
prepare the NAOMS questionnaire and methodology, but the main study
data can be used for this purpose in multiple ways. For example, it
would be possible to compare the findings of data collected from pilots
asked about events they witnessed during the last 30, 60, or 90 days to
see how length of the recall period affected the accuracy of their
recollections. This would be useful information to inform survey
designers generally interested in optimizing recall questions. Also, it
would be possible to explore how survey non-response is related to
survey results, addressing a particularly hot topic in the survey
methodology literature at the moment.
For all of these reasons, I believe that the existing NAOMS data
should be made publicly available right away so that analysts can learn
everything that can be learned from the data, to make the most of the
$8.4 million that NASA spent on the project. I believe that the model
for making these data public should be the ASRS. NASA has been very
successful in setting up a system for fully publicly disseminating the
terrifically valuable information provided by pilots through the ASRS
reporting system, and a comparable dissemination system can be created
for NAOMS data as well.
Documenting the NAOMS Data in Detail
In order to allow the dissemination of these data to yield the most
positive benefits, it is essential that NASA provide extensive and
detailed documentation of the procedures by which the study was
designed and the procedures by which the main data were collected. This
includes descriptions of sampling, of respondent recruiting, of
locating potential respondents, of training interviewers, of releasing
cases for interviewing at particular times, and more. The full array of
electronic files documenting all phases of the data collection should
be made public while protecting the identities of the individuals who
were interviewed.
In addition, NASA should help analysts use the data by providing
written guidelines on how to properly analyze the data in light of the
study design. No one knows the design complexities better than the
NAOMS research staff. So they should write documentation to help
analysts understand the origins of and potential uses of the data set.
One illustration of the complexity of analyzing these data involves the issue of multiple reporting of the same event. One
potential use of NAOMS data is to calculate the rates at which
particular risk-increasing events happened during particular time
periods. NAOMS was designed to yield such estimates, but calculation of
them must be done carefully.
Consider, for example, bird strikes. An analyst might be tempted to
simply count up the number of times that pilots who were interviewed
during a particular time period (e.g., calendar year 2003) reported
experiencing a bird strike. Then, the analyst might be tempted to
multiply this total by the ratio of the total number of licensed pilots
during that time period divided by the number of pilots who completed
interviews in the survey to yield a projected total number of bird
strikes that occurred to the entire population of pilots.
However, multiple pilots witnessed each bird strike, and each bird
strike could have been reported by each of those pilots. Specifically,
a collision of a bird with an airplane would have been witnessed by two
pilots on aircraft with two cockpit crew members and by three pilots on
aircraft with three cockpit crew members. Thus, each bird strike had
twice the probability of being reported by two-crew aircraft pilots and
three times the probability of being reported by three-crew aircraft
pilots. So in order to calculate the number of events accurately, the
observed total number of events must be adjusted downward to account
for this multiple reporting.
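A minimal Python sketch of this kind of downward adjustment follows. The event counts and the sampling ratio are invented, and this is only an illustration of the logic described above, not the actual NAOMS correction procedure.

```python
# Hypothetical sketch of the downward adjustment for multiple reporting.
# Counts and the sampling ratio are invented; this illustrates the
# logic described above, not the actual NAOMS correction.

def adjusted_event_total(reports_by_crew_size, population_ratio):
    """Project reported events to the full pilot population, dividing
    each crew-size stratum by its crew size, since an event on an
    n-crew aircraft could be reported by each of its n pilots."""
    return sum(population_ratio * reported / crew_size
               for crew_size, reported in reports_by_crew_size.items())

# e.g., 40 bird strikes reported by pilots of two-crew aircraft, 9 by
# pilots of three-crew aircraft, 1 licensed pilot in 50 interviewed:
print(adjusted_event_total({2: 40, 3: 9}, population_ratio=50))
# 50 * (40/2 + 9/3) = 1150 projected bird strikes
```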
NAOMS was designed knowing that this sort of calculation must be
carried out. The questionnaire collected information necessary to
implement corrections for this multiple reporting. Providing
information to analysts about how to do this computation would be a
valuable public service. With substantial documentation accompanying
the data, analysts can be sure to calculate statistics properly by
taking into account all such analytic considerations.
In addition to providing this documentation immediately, I would
strongly recommend to NASA that they assemble and fund a ``swat'' team
of suitable experts to conduct all possible analyses with the NAOMS
data and issue an initial report of their findings as quickly as
possible. Subsequent reports can then be issued later as additional
analyses are conducted.
I assume that this ``swat team's'' effort should build on the work
that NASA has done already in constructing a final report on the data,
which they planned to release later this year. I have not seen a draft
of that report and don't know anything about its contents. But if it is
not completely comprehensive in addressing all issues that the data can
address and completely comprehensive in fully documenting all
procedural details of how the data were collected, I would recommend
that its scope be expanded accordingly, with proper government funding
to permit it to be done as well as all of the rest of NAOMS has been
done to date.
The Future of NAOMS
One might imagine that the book has been closed on NAOMS and that
clean-up activity is all that remains on this project. But I believe
that to think of NAOMS in these terms would forgo a wonderful
opportunity for NASA, for this government, and for this country.
NAOMS data are not being generated by any other source. And from
all indications, the NAOMS data that were collected are reliable and
valid. Furthermore, our team's public meetings with stakeholders
indicated considerable enthusiasm for the sorts of data that NAOMS was
intended to provide.
Therefore, I believe, the vision of a multi-faceted NAOMS data
collection and monitoring service was and is terrifically positive for
everyone who flies on planes, everyone who works in the commercial
aviation system, everyone who manufactures airplanes, and everyone who
monitors and helps to optimize aeronautics in America.
Consequently, I recommend restarting NAOMS data collection where it
left off and bringing its potential fully into being. Doing so would
be a great service by this government to this country.
There has been some discussion recently of the notion that NASA has
prepared NAOMS to be handed off to another organization to continue the
data collection in the future. Two organizations that have been
mentioned in this regard are the Air Line Pilots Association (ALPA) and
the Commercial Aviation Safety Team (CAST).
I believe that such a hand-off would be unwise, untenable, and
unlikely to lead to successful continuation of NAOMS data collection.
Within the aviation safety community, NASA is uniquely qualified to
carry out this work in an optimal form, for several reasons.
First, NASA has built up a unique credibility and trust in the
aviation safety community by running ASRS successfully over the years.
No other agency has the trust of all interested parties inside and
outside of government the way NASA does. This trust will enhance the
likelihood that pilots, air traffic controllers, flight attendants, and
mechanics will agree to participate in survey interviews. NASA's
reputation for scientific excellence is especially important to allow
NAOMS data to earn the trust that they deserve.
Second, NASA has the scientific credibility and third-party
objectivity to be able to collect data at a distance from those who run
airlines, manufacture aircraft, and fly on those aircraft. If the data
collection were to be run by any interested party, their values might
be perceived, rightly or wrongly, to have influenced the results they
obtain and/or distribute. This is a context in which government
oversight and management of an information collection system run by a
private sector contractor with considerable expertise is the best way
to allow that system to be most effective and most helpful to all who
can benefit from it.
Most importantly, I have not heard of any commitment by ALPA, CAST,
or any other private sector organization to provide funds to initiate
and maintain continued NAOMS data collection using the same
high-quality methodology that NASA developed. The benefits of ASRS data
are obvious to all who use that growing data set of anecdotes.
Considerable added value can and should be created by making a
long-term commitment, through appropriate funding, to allow NASA to
restart NAOMS
data collection from pilots, air traffic controllers, flight
attendants, and mechanics.
The Members of this committee fly on commercial airlines, as do huge
numbers of your constituents and as do I. I believe that we all
deserve to fly on the safest possible system. NASA's efforts in
building and carrying out NAOMS offer the opportunity to significantly
enhance our safety by watching carefully what happens in real time and
documenting risk-elevating events in ways that enable minimization of
them. As the aviation system grows and changes in the coming years,
keeping a close eye on its functioning can only increase public
confidence in air travel. I therefore urge this committee to please
take this opportunity to do what I believe your constituents would
want: to reactivate this valuable system under NASA's roof.
Conclusion
The U.S. Federal Government in general and NASA in particular have
a great deal to be proud of regarding NAOMS. NAOMS was intended to fill
a hole by creating an ongoing pipeline of valuable information for the
public and for the private sector to enhance the welfare of all
Americans. It has succeeded in doing so and can continue to do so in
the future. Thank you for taking this opportunity to consider
assuring that this happens.
Biography for Jon A. Krosnick
Education
A.B., Harvard University (in Psychology, Magna Cum Laude), 1980.
M.A., University of Michigan (in Social Psychology, with Honors), 1983.
Ph.D., University of Michigan (in Social Psychology), 1986.
Employment
2006- , Research Professor, Survey Research Laboratory, University of
Illinois.
2005- , Senior Fellow, Institute for the Environment, Stanford
University.
2004- , Frederic O. Glover Professor in Humanities and Social
Sciences, Stanford University.
2004- , Professor, Department of Communication, Stanford University.
2004- , Professor, Department of Political Science, Stanford
University.
2004- , Professor, Department of Psychology (by courtesy), Stanford
University.
2004- , Associate Director, Institute for Research in the Social
Sciences, Stanford University.
2004- , Director, Methods of Analysis Program in the Social Sciences,
Stanford University.
2004-2006, Visiting Professor, Department of Psychology, The Ohio State
University.
2003-2004, Visiting Professor, Department of Communication, Stanford
University.
1986-2004, Assistant to Associate to Full Professor, Departments of
Psychology and Political Science, The Ohio State University.
1987-1989, Adjunct Research Investigator, Survey Research Center,
Institute for Social Research, University of Michigan.
1987-1989, Lecturer, Survey Research Center Summer Program in Survey
Research Techniques, University of Michigan.
1986-1987, Visiting Scholar, Survey Research Center, Institute for
Social Research, University of Michigan.
1985, Lecturer, Department of Psychology, The Ohio State University.
1982-1985, Research Assistant, Center for Political Studies and Survey
Research Center, Institute for Social Research, University of
Michigan.
1980-1981, Senior Research Assistant, Department of Psychology, Harvard
University.
1979-1981, Senior Research Assistant, Department of Behavioral
Sciences, School of Public Health, Harvard University.
Honors
1976, Bausch and Lomb Science Award.
1982, National Institute of Mental Health Graduate Training Fellowship.
1984, Phillip Brickman Memorial Prize for Research in Social
Psychology.
1984, American Association for Public Opinion Research Student Paper
Award.
1984, National Institute of Mental Health Graduate Training Fellowship.
1984, Pi Sigma Alpha Award for the Best Paper Presented at the 1983
Midwest Political Science Association Annual Meeting.
1984, Elected Departmental Associate, Department of Psychology,
University of Michigan, recognizing outstanding academic
achievement.
1990, Invited Guest Editor, Social Cognition (Special issue on
political psychology, Vol. 8, #1, May)
1993, Brittingham Visiting Scholar, University of Wisconsin.
1995, Erik H. Erikson Early Career Award for Excellence and Creativity
in the Field of Political Psychology, International Society of
Political Psychology.
1996-1997, Fellow, Center for Advanced Study in the Behavioral
Sciences, Stanford, California.
1998, Elected Fellow, American Psychological Association.
1998, Elected Fellow, Society for Personality and Social Psychology.
1998, Elected Fellow, American Psychological Society.
2001-2007, Appointed University Fellow, Resources for the Future,
Washington, DC.
2003, Prize for the Best Paper Presented at the 2002 Annual Meeting of
the American Political Science Association, Section on
Elections, Public Opinion, and Voting Behavior.
Selected Invited Addresses
2003, Invited Address, Midwestern Psychological Association Annual
Meeting, Chicago, Illinois.
2004, Invited Address, Distinguished Lecture Series Sponsored by the
Departments of Psychology and Political Science, University of
California, Davis, California.
2004, Keynote Lecture, International Symposium in Honour of Paul
Lazarsfeld, Katholieke Universiteit Leuven (Belgium).
2005, Invited Address, Joint Program in Survey Methodology
Distinguished Lecture Series, University of Maryland, College
Park, Maryland.
2005, Invited Address, ``Climate Change: Science Action,''
Conference Hosted by the Yale School of Forestry and
Environmental Studies, Aspen, Colorado.
2005, Invited Commentator, ``Science for Valuation of EPA's Ecological
Protection Decisions and Programs,'' a U.S. Environmental
Protection Agency Science Advisory Board Workshop, Washington,
DC.
2006, Invited Address, ``The Wonderful Willem Saris and his
Contributions to the Social Sciences.'' Farewell Symposium for
Willem Saris, University of Amsterdam, Amsterdam, the
Netherlands.
2006, Invited Workshop, ``The State of Survey Research.'' Annual Summer
Meeting of the Society for Political Methodology, Davis,
California.
2006, Invited Keynote Address, ``Recent Lessons Learned About
Maximizing Survey Measurement Accuracy in America: One Surprise
After Another.'' 2006 Survey Research Methodology Conference,
Center for Survey Research, Academia Sinica, Taipei, Taiwan.
2006, Invited Address, ``Review of Nonresponse Analysis Across Multiple
Surveys.'' Conference on ``Sample Representativeness:
Implications for Administering and Testing Stated Preference
Surveys,'' Resources for the Future, Washington, D.C.
2006, Invited Address, ``Introduction to Survey Issues in Ecological
Valuation.'' Meeting of the U.S. Environmental Protection
Agency Scientific Advisory Board Committee on Valuing the
Protection of Ecological Systems and Services (CVPESS),
Washington, D.C.
2006, Invited Address, ``Gas Pumps and Voting Booths: Energy and
Environment in the Midterm Elections.'' First Wednesday
Seminar, Resources for the Future, Washington, D.C.
2006, Invited Address, ``What Americans Believe and Don't Believe about
Global Warming: Attitude Formation and Change in Response to a
Raging Scientific Controversy.'' National Science Foundation
Speaker Series, Washington, D.C.
2006, Invited Address, ``Moving Survey Data Collection to the Internet?
Surprising Ways that Mode, Sample Design and Response Rates
Affect Survey Accuracy.'' New York Chapter of the American
Association for Public Opinion Research, Fordham University,
New York, New York.
2006, Invited Address, ``Climate change: What Americans Really Think.''
Conference entitled ``A Favorable Climate for Climate Action,''
sponsored by the Sustainable Silicon Valley, Santa Clara
University, Santa Clara, California.
2006, Invited Lecture, ``What Americans Really Think About Climate
Change: Attitude Formation and Change in Response to a Raging
Scientific Controversy.'' Brown Bag Series, National Oceanic
and Atmospheric Administration, Silver Spring, Maryland.
2007, Invited Lecture, ``What Americans Really Think About Climate
Change: Attitude Formation and Change in Response to a Raging
Scientific Controversy.'' Education And Outreach Colloquium,
Earth Sciences Division, NASA Goddard Space Flight Center,
Greenbelt, Maryland.
2007, Inaugural Lecture, ``The Brave New World of Survey Research: One
Surprise After Another.'' Survey Research Institute First
Annual Speaker Series, Cornell University, Ithaca, New York.
2007, Inaugural Lecture, ``What Americans Really Think About Climate
Change: Attitude Formation and Change in Response to a Raging
Scientific Controversy.'' National Centers for Coastal Ocean
Science/Center for Sponsored Coastal Ocean Research Ecosystem
Science Seminar Series & NOS Science Seminar Series, National
Oceanic and Atmospheric Administration, Silver Spring,
Maryland.
2007, Plenary Speaker, ``What Americans Really Think About Climate
Change: Attitude Formation and Change in Response to a Raging
Scientific Controversy.'' Annual Ocean and Coastal Program
Managers' Meeting, Sponsored by the Office of Ocean and Coastal
Resource Management in partnership with the National Estuarine
Research Reserve Association, National Oceanic and Atmospheric
Administration, Washington, DC.
2007, Oral Testimony on Assembly Bill 372 (to revise the order in which
the names of candidates for an office must appear on the
ballot) before the Nevada State Legislature, Carson City,
Nevada.
2007, Invited Lecture, ``What Americans Really Think About Climate
Change: Attitude Formation and Change in Response to a Raging
Scientific Controversy.'' The White House Office of Science and
Technology Policy, Washington, D.C.
2007, Invited Lecture, ``What Americans Really Think About Climate
Change: Attitude Formation and Change in Response to a Raging
Scientific Controversy.'' Workshop on Climate Science and
Services: Coastal Applications for Decision Making through Sea
Grant Extension and Outreach. NOAA Coastal Services Center,
Charleston, South Carolina.
2007, Invited Lecture, ``Climate Change: What Americans Think.''
Capitol Hill Briefing Sponsored by the Environment and Energy
Study Institute, Cannon House Office Building, Washington, D.C.
Broadcast live on C-SPAN.
2007, Invited Lecture, ``The Impact of Candidate Name Order on Election
Outcomes.'' The Carter Center, Atlanta, Georgia.
2007, Invited Lecture, ``What Americans Really Think About Climate
Change: Attitude Formation and Change in Response to a Raging
Scientific Controversy.'' Google, Mountain View, California.
2007, Invited Lecture, ``Climate Change: What Americans Really Think.''
The Commonwealth Club, San Francisco, California.
2007, Invited Address, ``Representativeness of Online Panels.'' Time-
Warner 2007 Research Conference, New York, New York.
2007, Invited Lecture, ``What the Public Knows.'' News Executives
Roundtable: Covering Climate Change, Stanford, California.
2007, Invited Address, ``The Top Ten Signs of an Excellent Survey
Vendor.'' Intuit Corporate Customer & Market Insight Offsite,
Palo Alto, California.
2007, Invited Lecture, ``What Americans Really Think About Climate
Change.'' Association of Science-Technology Centers Conference,
Los Angeles, California.
Editorial Board Member
1989-2000, 2006- , Journal of Personality and Social Psychology
1990-1994, Journal of Experimental Social Psychology
1997-2003, Basic and Applied Social Psychology
1988-1991, 1994-2002, Public Opinion Quarterly
1998-2005, Media Psychology
2006- , Sociological Methodology
External Grants and Contracts
1977, CBS Research Grant, to support development and evaluation of a
mass media promotional campaign for sound recordings.
1984, Society for the Psychological Study of Social Issues Doctoral
Dissertation Grant-in-aid.
1984, CBS Research Grant, to support literature review/research on the
causes of heavy television viewing among children and
adolescents.
1985, CBS Research Grant, to support empirical research on the effect
of television viewing on alcohol use among children and
adolescents.
1985, CBS Research Grant, to support empirical research on the causes
of heavy television viewing among children and adolescents.
1987-1989, National Institute on Aging Research Grant, to study changes
in political orientations over the life span (with Duane F.
Alwin).
1987, National Association of Broadcasters Research Grant, to study the
causes of heavy television viewing among children and
adolescents.
1988, Society for the Psychological Study of Social Issues Grant-in-
Aid, to support research on the causes of heavy television
viewing among children and adolescents.
1990-1992, National Science Foundation, The information processing
consequences of attitude importance.
1991, National Science Foundation Research Experience for
Undergraduates Grant Supplement, The information processing
consequences of attitude importance.
1992, Society for the Psychological Study of Social Issues Grant-in-
Aid, to support research on the impact of the Gulf War on the
constituents of presidential evaluations.
1992, National Science Foundation Research Experience for
Undergraduates Grant Supplement, The information processing
consequences of attitude importance.
1994, National Science Foundation, Explaining the surprising accuracy
of mail surveys.
1995, National Science Foundation Research Experience for
Undergraduates Grant Supplement, Explaining the surprising
accuracy of mail surveys.
1995, U.S. Department of the Interior/Minerals Management Service/
University of California Coastal Marine Institute, Testing and
calibrating the measurement of nonmarket values for oil spills
via the contingent valuation method (with Michael Hanemann).
1995, Electric Power Research Institute/Industrial Economics,
Elicitation of public perceptions regarding the potential
ecological effects of climate change (part I).
1996, Electric Power Research Institute/Industrial Economics,
Elicitation of public perceptions regarding the potential
ecological effects of climate change (part II).
1997, National Science Foundation, Formation and change of public
beliefs about global warming.
1997, National Oceanic and Atmospheric Administration/U.S.
Environmental Protection Agency/Resources for the Future,
Formation and change of public beliefs about global warming:
Wave II of survey interviewing.
1998, 1999, 2000, 2001, Robert Dodd and Associates/The Battelle
Memorial Institute/National Aeronautics and Space
Administration, National Aviation Operations Monitoring System
questionnaire development.
2000, 2001, Resources for the Future, American public opinion on the
environment.
2001, 2002, Columbus Airport Authority, The dynamics and causes of
airport customer satisfaction.
2002, Time-sharing Experiments for the Social Sciences (TESS) grant
(funded by the National Science Foundation), Social
desirability and reports of voter turnout (with Allyson L.
Holbrook).
2003, National Science Foundation, Social and psychological mechanisms
of the relation between age and openness to attitude change
(with Penny Visser).
2003, New York Academy of Medicine/W. K. Kellogg Foundation, Engaging
the community in terrorism preparedness planning.
2003, Decade of Behavior 2000-2010 Distinguished Lecture Program Grant
to feature Richard E. Petty at the 2003 annual meeting of the
American Association for Public Opinion Research.
2004, National Science Foundation, Optimizing the number of points on
rating scales.
2004, The Bureau of Labor Statistics, U.S. Department of Labor, Refining
the categorization of jobs in the biotechnology industry.
2005, National Science Foundation, 2005 Summer Institute in Political
Psychology.
2005, National Science Foundation, Survey Research Methodology
Optimization for the Science Resource Statistics Program.
2005, National Science Foundation, American National Election Studies
2005-2010 (with Arthur Lupia).
2006, American Psychological Association, The psychology of voting and
election campaigns: A proposal for a stand-alone conference
(with Wendy Wood, Arthur Lupia, and John Aldrich).
2006, National Science Foundation, Agenda-setting workshop in the area
of e-science: Development of the next generation of cybertools
applied to data collections in the social and behavioral
sciences (with Arthur Lupia).
2006, National Science Foundation, Development of a computer network
for experimental and non-experimental data collection via the
Internet from a nationally representative sample of American
households.
2006, National Science Foundation and the Department of Homeland
Security, Expansion of the American National Election Study:
Gauging the public's attitudes on terrorism and homeland
security (with Arthur Lupia).
2007, National Science Foundation, 2007 Summer Institute in Political
Psychology.
2007, National Science Foundation, Survey Research Methodology
Optimization for the Science Resource Statistics Program.
2007, National Science Foundation, Survey Research Methodology
Optimization for the Science Resource Statistics Program
(Supplement).
2007, National Science Foundation, Research Experience for
Undergraduates Supplement for the American National Election
Study.
2007, National Science Foundation, The Impact of Polls on Political
Behavior.
2007, National Science Foundation, American National Election Studies
Supplement to Support Additional Pretesting of Questionnaire
Items.
2007, National Science Foundation, American National Election Studies
Supplement to Support a Conference on Methodology for Coding
Open-ended Question Responses.
Books
Weisberg, H., Krosnick, J.A., & Bowen, B. (1989). Introduction to
survey research and data analysis. Chicago: Scott, Foresman.
Krosnick, J.A. (Ed.). (1990). Thinking about politics: Comparisons of
experts and novices. New York: Guilford Press (Book version of
a special issue of Social Cognition, Volume 8, Number 1, 1990).
Petty, R.E., & Krosnick, J.A. (Eds.). (1995). Attitude strength:
Antecedents and consequences. Hillsdale, NJ: Erlbaum.
Weisberg, H., Krosnick, J.A., & Bowen, B. (1996). Introduction to
survey research, polling, and data analysis. Thousand Oaks, CA:
Sage.
Carson, R.T., Conaway, M.B., Hanemann, W.M., Krosnick, J.A., Mitchell,
R.C., Presser, S. (2004). Valuing oil spill prevention: A case
study of California's central coast. Dordrecht, The
Netherlands: Kluwer Academic Publishers.
Krosnick, J.A., & Fabrigar, L.R. (forthcoming). The handbook of
questionnaire design. New York: Oxford University Press.
Journal Articles and Book Chapters
Krosnick, J.A. (1978). One approach to the analysis of drumset playing.
Percussive Notes, Spring-Summer, 143-149.
Judd, C.M., Krosnick, J.A., & Milburn, M.A. (1981). Political
involvement and attitude structure in the general public.
American Sociological Review, 46, 660-669.
Krosnick, J.A., & Judd, C.M. (1982). Transitions in social influence at
adolescence: Who induces cigarette smoking? Developmental
Psychology, 18, 359-368.
Judd, C.M., & Krosnick, J.A. (1982). Attitude centrality, organization,
and measurement. Journal of Personality and Social Psychology,
42, 436-447.
Krosnick, J.A. (1982). Teaching percussion: Growing with your students.
National Association of College Wind and Percussion Instructors
Journal, Summer, 4-7.
Judd, C.M., Kenny, D.A., & Krosnick, J.A. (1983). Judging the positions
of political candidates: Models of assimilation and contrast.
Journal of Personality and Social Psychology, 44, 952-963.
McAlister, A.L., Krosnick, J.A., & Milburn, M.A. (1984). Causes of
adolescent cigarette smoking: Tests of a structural equation
model. Social Psychology Quarterly, 47, 24-36.
Iyengar, S., Kinder, D.R., Peters, M.D., & Krosnick, J.A. (1984). The
evening news and presidential evaluations. Journal of
Personality and Social Psychology, 46, 778-787.
Reprinted in Peplau, L.A., Sears, D.O., Taylor, S.E., & Freedman,
J.L. (Eds.) (1988), Readings in social psychology: Classic and
contemporary contributions. Englewood Cliffs, NJ: Prentice
Hall.
Alwin, D.F., & Krosnick, J.A. (1985). The measurement of values in
surveys: A comparison of ratings and rankings. Public Opinion
Quarterly, 49, 535-552.
Reprinted in Singer, E., & Presser, S. (Eds.) (1989). Survey
research methods: A reader. Chicago: University of Chicago
Press.
Reprinted in Bartholomew, D. (Ed.) (2006). Measurement. Oxford, UK:
The Bardwell Press.
Schuman, H., Ludwig, J., & Krosnick, J.A. (1986). The perceived threat
of nuclear war, salience, and open questions. Public Opinion
Quarterly, 50, 519-536.
Krosnick, J.A., & Alwin, D.F. (1987). An evaluation of a cognitive
theory of response order effects in survey measurement. Public
Opinion Quarterly, 51, 201-219.
Krosnick, J.A. (1988). Attitude importance and attitude change. Journal
of Experimental Social Psychology, 24, 240-255.
Krosnick, J.A., & Schuman, H. (1988). Attitude intensity, importance,
and certainty and susceptibility to response effects. Journal
of Personality and Social Psychology, 54, 940-952.
Krosnick, J.A. (1988). The role of attitude importance in social
evaluation: A study of policy preferences, presidential
candidate evaluations, and voting behavior. Journal of
Personality and Social Psychology, 55, 196-210.
Krosnick, J.A., & Alwin, D.F. (1988). A test of the form-resistant
correlation hypothesis: Ratings, rankings, and the measurement
of values. Public Opinion Quarterly, 52, 526-538.
Judd, C.M., & Krosnick, J.A. (1989). The structural bases of
consistency among political attitudes: The effects of political
expertise and attitude importance. In A.R. Pratkanis, S.J.
Breckler, & A.G. Greenwald (Eds.), Attitude Structure and
Function. Hillsdale, NJ: Erlbaum.
Krosnick, J.A. (1989). Attitude importance and attitude accessibility.
Personality and Social Psychology Bulletin, 15, 297-308.
Krosnick, J.A. (1989). Question wording and reports of survey results:
The case of Louis Harris and Aetna Life and Casualty. Public
Opinion Quarterly, 53, 107-113.
Reprinted in Bulmer, H. (Ed.), Questions. Thousand Oaks, CA: Sage
Publications.
Krosnick, J.A., & Alwin, D.F. (1989). Aging and susceptibility to
attitude change. Journal of Personality and Social Psychology,
57, 416-425.
Krosnick, J.A. (1990). Government policy and citizen passion: A study
of issue publics in contemporary America. Political Behavior,
12, 59-92.
Krosnick, J.A. (1990). Expertise in political psychology. Social
Cognition, 8, 1-8. (also in J. Krosnick (Ed.), Thinking about
politics: Comparisons of experts and novices. New York:
Guilford, 1990, pp. 1-8).
Krosnick, J.A. (1990). Lessons learned: A review and integration of our
findings. Social Cognition, 8, 154-158. (also in J. Krosnick
(Ed.), Thinking about politics: Comparisons of experts and
novices. New York: Guilford, 1990, pp. 154-158).
Krosnick, J.A., Li, F., & Lehman, D. (1990). Conversational
conventions, order of information acquisition, and the effect
of base rates and individuating information on social
judgments. Journal of Personality and Social Psychology, 59,
1140-1152.
Krosnick, J.A., & Milburn, M.A. (1990). Psychological determinants of
political opinionation. Social Cognition, 8, 49-72. (also in J.
Krosnick (Ed.), Thinking about politics: Comparisons of experts
and novices. New York: Guilford, 1990, pp. 49-72).
Krosnick, J.A., & Sedikides, C. (1990). Self-monitoring and self-
protective biases in the use of consensus information to
predict one's own behavior. Journal of Personality and Social
Psychology, 58, 718-728.
Krosnick, J.A., & Kinder, D.R. (1990). Altering the foundations of
support for the president through priming. American Political
Science Review, 84, 497-512.
Reprinted in J.T. Jost and J. Sidanius (Eds.) (2004). Political
psychology: Key readings. New York, NY: Psychology Press.
Alwin, D.F., & Krosnick, J.A. (1991). Aging, cohorts, and the stability
of sociopolitical orientations over the life span. American
Journal of Sociology, 97, 169-195.
Alwin, D.F., & Krosnick, J.A. (1991). The reliability of survey
attitude measurement: The influence of question and respondent
attributes. Sociological Methods and Research, 20, 139-181.
Judd, C.M., Drake, R.A., Downing, J.W., & Krosnick, J.A. (1991). Some
dynamic properties of attitude structures: Context induced
response facilitation and polarization. Journal of Personality
and Social Psychology, 60, 193-202.
Krosnick, J.A. (1990). Americans' perceptions of presidential
candidates: A test of the projection hypothesis. Journal of
Social Issues, 46, 159-182.
Krosnick, J.A. (1991). Response strategies for coping with the
cognitive demands of attitude measures in surveys. Applied
Cognitive Psychology, 5, 213-236.
Krosnick, J.A. (1991). The stability of political preferences:
Comparisons of symbolic and non-symbolic attitudes. American
Journal of Political Science, 35, 547-576.
Krosnick, J.A. (1992). The impact of cognitive sophistication and
attitude importance on response order effects and question
order effects. In N. Schwarz and S. Sudman (Eds.), Order
effects in social and psychological research (pp. 203-218). New
York: Springer-Verlag.
Krosnick, J.A., & Abelson, R.P. (1992). The case for measuring attitude
strength in surveys. Pp. 177-203 in J. Tanur (Ed.), Questions
about questions: Inquiries into the cognitive bases of surveys.
New York: Russell Sage.
Krosnick, J.A., Betz, A.L., Jussim, L.J., & Lynn, A.R. (1992).
Subliminal conditioning of attitudes. Personality and Social
Psychology Bulletin, 18, 152-162.
Lehman, D.R., Krosnick, J.A., West, R.L., & Li, F. (1992). The focus of
judgment effect: A question wording effect due to hypothesis
confirmation bias. Personality and Social Psychology Bulletin,
18, 690-699.
Krosnick, J.A., & Berent, M.K. (1993). Comparisons of party
identification and policy preferences: The impact of survey
question format. American Journal of Political Science, 37,
941-964.
Krosnick, J.A., & Brannon, L.A. (1993). The impact of the Gulf War on
the ingredients of presidential evaluations: Multidimensional
effects of political involvement. American Political Science
Review, 87, 963-975.
Krosnick, J.A., & Brannon, L.A. (1993). The media and the foundations
of Presidential support: George Bush and the Persian Gulf
conflict. Journal of Social Issues, 49, 167-182.
Krosnick, J.A., Boninger, D.S., Chuang, Y.C., Berent, M.K., & Carnot,
C.G. (1993). Attitude strength: One construct or many related
constructs? Journal of Personality and Social Psychology, 65,
1132-1149.
Krosnick, J.A., Berent, M.K., & Boninger, D.S. (1994). Pockets of
responsibility in the American electorate: Findings of a
research program on attitude importance. Political
Communication, 11, 391-411.
Krosnick, J.A., & Smith, W.A. (1994). Attitude strength. In V.S.
Ramachandran (Ed.), Encyclopedia of human behavior. San Diego,
CA: Academic Press.
Ostrom, T.M., Bond, C., Krosnick, J.A., & Sedikides, C. (1994).
Attitude scales: How we measure the unmeasurable. In S. Shavitt
& T.C. Brock (Eds.), Persuasion: Psychological insights and
perspectives. Boston, MA: Allyn and Bacon.
Rahn, W.M., Krosnick, J.A., & Breuning, M. (1994). Rationalization and
derivation processes in survey studies of political candidate
evaluation. American Journal of Political Science, 38, 582-600.
Berent, M.K., & Krosnick, J.A. (1995). The relation between political
attitude importance and knowledge structure. In M. Lodge & K.
McGraw (Eds.), Political judgment: Structure and process. Ann
Arbor, MI: University of Michigan Press.
Boninger, D.S., Krosnick, J.A., & Berent, M.K. (1995). The origins of
attitude importance: Self-interest, social identification, and
value-relevance. Journal of Personality and Social Psychology,
68, 61-80.
Boninger, D.S., Krosnick, J.A., Berent, M.K., & Fabrigar, L.R. (1995).
The causes and consequences of attitude importance. In R.E.
Petty and J.A. Krosnick (Eds.), Attitude strength: Antecedents
and consequences. Hillsdale, NJ: Erlbaum.
Fabrigar, L.R., & Krosnick, J.A. (1995). Attitude importance and the
false consensus effect. Personality and Social Psychology
Bulletin, 21, 468-479.
Fabrigar, L.R., & Krosnick, J.A. (1995). Attitude measurement and
questionnaire design. In A.S.R. Manstead & M. Hewstone (Eds.),
Blackwell encyclopedia of social psychology. Oxford: Blackwell
Publishers.
Fabrigar, L.R., & Krosnick, J.A. (1995). Voting behavior. In A.S.R.
Manstead & M. Hewstone (Eds.), Blackwell encyclopedia of social
psychology. Oxford: Blackwell Publishers.
Krosnick, J.A., & Petty, R.E. (1995). Attitude strength: An overview.
In R.E. Petty and J.A. Krosnick (Eds.), Attitude strength:
Antecedents and consequences. Hillsdale, NJ: Erlbaum.
Krosnick, J.A., & Telhami, S. (1995). Public attitudes toward Israel: A
study of the attentive and issue publics. International Studies
Quarterly, 39, 535-554.
Reprinted in Israel Affairs, vol. 2 (1995/1996).
Reprinted in G. Sheffer (Ed.) (1997). U.S.-Israeli relations at the
crossroads (Israeli history, politics, and society). London:
Frank Cass & Co., Ltd.
Wegener, D.T., Downing, J., Krosnick, J.A., & Petty, R.E. (1995).
Measures and manipulations of strength-related properties of
attitudes: Current practice and future directions. In R.E.
Petty and J.A. Krosnick (Eds.), Attitude strength: Antecedents
and consequences. Hillsdale, NJ: Erlbaum.
Weisberg, H.F., Haynes, A.A., & Krosnick, J.A. (1995). Social group
polarization in 1992. In H.F. Weisberg (Ed.), Democracy's
feast: Elections in America. Chatham, NJ: Chatham House.
Krosnick, J.A., Narayan, S.S., & Smith, W.R. (1996). Satisficing in
surveys: Initial evidence. In M.T. Braverman & J.K. Slater
(Eds.), Advances in survey research (pp. 29-44). San Francisco:
Jossey-Bass.
Miller, J.M., & Krosnick, J.A. (1996). News media impact on the
ingredients of presidential evaluations: A program of research
on the priming hypothesis. In D. Mutz & P. Sniderman (Eds.),
Political persuasion and attitude change. Ann Arbor, MI:
University of Michigan Press.
Narayan, S., & Krosnick, J.A. (1996). Education moderates some response
effects in attitude measurement. Public Opinion Quarterly, 60,
58-88.
Visser, P.S., Krosnick, J.A., Marquette, J., & Curtin, M. (1996). Mail
surveys for election forecasting? An evaluation of the Columbus
Dispatch poll. Public Opinion Quarterly, 60, 181-227.
Krosnick, J.A., & Fabrigar, L.R. (1997). Designing rating scales for
effective measurement in surveys. In L. Lyberg, P. Biemer, M.
Collins, L. Decker, E. DeLeeuw, C. Dippo, N. Schwarz, and D.
Trewin (Eds.), Survey Measurement and Process Quality. New
York: Wiley-Interscience.
Miller, J.M., & Krosnick, J.A. (1997). The anatomy of news media
priming. In S. Iyengar and R. Reeves (Eds.), Do the media
govern? Politicians, voters, and reporters in America. Thousand
Oaks, CA: Sage.
Carson, R.T., Hanemann, W.M., Kopp, R.J., Krosnick, J.A., Mitchell,
R.C., Presser, S., Ruud, P.A., & Smith, V.K., with Conaway, M.,
& Martin, K. (1997). Temporal reliability of estimates from
contingent valuation. Land Economics, 73, 151-163.
Carson, R.T., Hanemann, W.M., Kopp, R.J., Krosnick, J.A., Mitchell,
R.C., Presser, S., Ruud, P.A., & Smith, V.K., with Conaway, M.,
& Martin, K. (1998). Referendum design and contingent
valuation: The NOAA panel's no-vote recommendation. Review of
Economics and Statistics, 80, 335-338.
Miller, J.M., & Krosnick, J.A. (1998). The impact of candidate name
order on election outcomes. Public Opinion Quarterly, 62, 291-
330.
Visser, P.S., & Krosnick, J.A. (1998). The development of attitude
strength over the life cycle: Surge and decline. Journal of
Personality and Social Psychology, 75, 1388-1409.
Krosnick, J.A. (1999). Maximizing questionnaire quality. In J.P.
Robinson, P.R. Shaver, & L.S. Wrightsman (Eds.), Measures of
political attitudes. New York: Academic Press.
Krosnick, J.A. (1999). Survey research. Annual Review of Psychology,
50, 537-567.
Bassili, J.N., & Krosnick, J.A. (2000). Do strength-related attitude
properties determine susceptibility to response effects? New
evidence from response latency, attitude extremity, and
aggregate indices. Political Psychology, 21, 107-132.
Holbrook, A.L., Krosnick, J.A., Carson, R.T., & Mitchell, R.C. (2000).
Violating conversational conventions disrupts cognitive
processing of attitude questions. Journal of Experimental
Social Psychology, 36, 465-494.
Holbrook, A.L., Bizer, G.Y., & Krosnick, J.A. (2000). Political
behavior of the individual. In A.E. Kazdin (Ed.), Encyclopedia
of psychology. Washington, DC, and New York, NY: American
Psychological Association and Oxford University Press.
Krosnick, J.A., Holbrook, A.L., & Visser, P.S. (2000). The impact of
the Fall 1997 debate about global warming on American public
opinion. Public Understanding of Science, 9, 239-260.
Miller, J.M., & Krosnick, J.A. (2000). News media impact on the
ingredients of presidential evaluations: Politically
knowledgeable citizens are guided by a trusted source. American
Journal of Political Science, 44, 301-315.
Visser, P.S., Krosnick, J.A., & Lavrakas, P. (2000). Survey research.
In H.T. Reis & C.M. Judd (Eds.), Handbook of research methods
in social psychology. New York: Cambridge University Press.
Visser, P.S., Krosnick, J.A., Marquette, J., & Curtin, M. (2000).
Improving election forecasting: Allocation of undecided
respondents, identification of likely voters, and response
order effects. In P. Lavrakas & M. Traugott (Eds.), Election
polls, the news media, and democracy. New York, NY: Chatham
House.
Bizer, G.Y., & Krosnick, J.A. (2001). Exploring the structure of
strength-related attitude features: The relation between
attitude importance and attitude accessibility. Journal of
Personality and Social Psychology, 81, 566-586.
Holbrook, A.L., Krosnick, J.A., Visser, P.S., Gardner, W.L., &
Cacioppo, J.T. (2001). Attitudes toward presidential candidates
and political parties: Initial optimism, inertial first
impressions, and a focus on flaws. American Journal of
Political Science, 45, 930-950.
Krosnick, J.A. (2002). Is political psychology sufficiently
psychological? Distinguishing political psychology from
psychological political science. In J. Kuklinski (Ed.),
Thinking about political psychology. New York: Cambridge
University Press.
Krosnick, J.A. (2002). The challenges of political psychology: Lessons
to be learned from research on attitude perception. In J.
Kuklinski (Ed.), Thinking about political psychology. New York:
Cambridge University Press.
Krosnick, J.A. (2002). The causes of no-opinion responses to attitude
measures in surveys: They are rarely what they appear to be. In
R.M. Groves, D.A. Dillman, J.L. Eltinge, & R.J.A. Little
(Eds.), Survey nonresponse. New York: Wiley.
Krosnick, J.A., Holbrook, A.L., Berent, M.K., Carson, R.T., Hanemann,
W.M., Kopp, R.J., Mitchell, R.C., Presser, S., Ruud, P.A.,
Smith, V.K., Moody, W.R., Green, M.C., & Conaway, M. (2002).
The impact of ``no opinion'' response options on data quality:
Non-attitude reduction or an invitation to satisfice? Public
Opinion Quarterly, 66, 371-403.
Krosnick, J.A., & McGraw, K.M. (2002). Psychological political science
vs. political psychology true to its name: A plea for balance.
In K.R. Monroe (Ed.), Political psychology. Mahwah, NJ:
Erlbaum.
Swait, J., Adamowicz, W., Hanemann, M., Diederich, A., Krosnick, J.A.,
Layton, D., Provencher, W., Schkade, D., & Tourangeau, R.
(2002). Context dependence and aggregation in disaggregate
choice analysis. Marketing Letters, 13, 195-205.
Anand, S., & Krosnick, J.A. (2003). The impact of attitudes toward
foreign policy goals on public preferences among presidential
candidates: A study of issue publics and the attentive public
in the 2000 U.S. Presidential election. Presidential Studies
Quarterly, 33, 31-71.
Chang, L., & Krosnick, J.A. (2003). Measuring the frequency of regular
behaviors: Comparing the `typical week' to the `past week.'
Sociological Methodology, 33, 55-80.
Holbrook, A.L., Green, M.C., & Krosnick, J.A. (2003). Telephone vs.
face-to-face interviewing of national probability samples with
long questionnaires: Comparisons of respondent satisficing and
social desirability response bias. Public Opinion Quarterly,
67, 79-125.
Krosnick, J.A., Anand, S.N., & Hartl, S.P. (2003). Psychosocial
predictors of heavy television viewing among preadolescents and
adolescents. Basic and Applied Social Psychology, 25, 87-110.
Visser, P.S., Krosnick, J.A., & Simmons, J. (2003). Distinguishing the
cognitive and behavioral consequences of attitude importance
and certainty: A new approach to testing the common-factor
hypothesis. Journal of Experimental Social Psychology, 39, 118-
141.
Bizer, G.Y., Krosnick, J.A., Holbrook, A.L., Wheeler, S.C., Rucker,
D.D., & Petty, R.E. (2004). The impact of personality on
cognitive, behavioral, and affective political processes: The
effects of need to evaluate. Journal of Personality, 72, 995-
1028.
Bizer, G.Y., Visser, P.S., Berent, M.K., & Krosnick, J.A. (2004).
Importance, knowledge, and accessibility: Exploring the
dimensionality of strength-related attitude properties. In W.E.
Saris & P.M. Sniderman (Eds.), Studies in public opinion:
Gauging attitudes, nonattitudes, measurement error and change.
Princeton, NJ: Princeton University Press.
Krosnick, J.A., Miller, J.M., & Tichy, M.P. (2004). An unrecognized
need for ballot reform: Effects of candidate name order. In
A.N. Crigler, M.R. Just, and E.J. McCaffery (Eds.), Rethinking
the vote: The politics and prospects of American election
reform. New York, NY: Oxford University Press.
Miller, J.M., & Krosnick, J.A. (2004). Threat as a motivator of
political activism: A field experiment. Political Psychology,
25, 507-523.
Anand, S., & Krosnick, J.A. (2005). Demographic predictors of media use
among infants, toddlers, and preschoolers. American Behavioral
Scientist, 48, 539-561.
Holbrook, A.L., Berent, M.K., Krosnick, J.A., Visser, P.S., & Boninger,
D.S. (2005). Attitude importance and the accumulation of
attitude-relevant knowledge in memory. Journal of Personality
and Social Psychology, 88, 749-769.
Holbrook, A.L., & Krosnick, J.A. (2005). Meta-psychological vs.
operative measures of ambivalence: Differentiating the
consequences of perceived intra-psychic conflict and real
intra-psychic conflict. In S.C. Craig & M.D. Martinez (Eds.),
Ambivalence and the structure of public opinion. New York, NY:
Palgrave Macmillan.
Krosnick, J.A, Judd, C.M., & Wittenbrink, B. (2005). Attitude
measurement. In D. Albarracin, B.T. Johnson, & M.P. Zanna
(Eds.), Handbook of attitudes and attitude change. Mahwah, NJ:
Erlbaum.
Schaeffer, E.M., Krosnick, J.A., Langer, G.E., & Merkle, D.M. (2005).
Comparing the quality of data obtained by minimally balanced
and fully balanced attitude questions. Public Opinion
Quarterly, 69, 417-428.
Fabrigar, L.R., Krosnick, J.A., & MacDougall, B.L. (2006). Attitude
measurement: Techniques for measuring the unobservable. In M.C.
Green, S. Shavitt, & T.C. Brock (Eds.), Persuasion:
Psychological insights and perspectives. Thousand Oaks, CA:
Sage Publications.
Krosnick, J.A., Chang, L., Sherman, S.J., Chassin, L., & Presson, C.
(2006). The effects of beliefs about the health consequences of
cigarette smoking on smoking onset. Journal of Communication,
56, 518-537.
Krosnick, J.A., Holbrook, A.L., Lowe, L. & Visser, P.S. (2006). The
origins and consequences of democratic citizens' policy
agendas: A study of popular concern about global warming.
Climatic Change, 77, 7-43.
Krosnick, J.A., Holbrook, A.L., & Visser, P.S. (2006). Optimizing brief
assessments in research on the psychology of aging: A pragmatic
approach to survey and self-report measurement. In National
Research Council, When I'm 64. Committee on Aging Frontiers in
Social Psychology, Personality, and Adult Developmental
Psychology. Laura L. Carstensen and Christine R. Hartel,
editors. Board on Behavioral, Cognitive, and Sensory Sciences,
Division of Behavioral and Social Sciences and Education.
Washington, DC: The National Academies Press.
Visser, P.S., Bizer, G.Y., & Krosnick, J.A. (2006). Exploring the
latent structure of strength-related attitude attributes. In M.
Zanna (Ed.), Advances in Experimental Social Psychology. New
York, NY: Academic Press.
Cornell, D.G., Krosnick, J.A., & Chang, L. (2006). Student reactions to
being wrongly informed of failing a high-stakes test: The case
of the Minnesota Basic Standards Test. Educational Policy, 20,
718-751.
Holbrook, A.L., Krosnick, J.A., Moore, D., & Tourangeau, R. (2007).
Response order effects in dichotomous categorical questions
presented orally: The impact of question and respondent
attributes. Public Opinion Quarterly, 71, 325-348.
Malhotra, N., & Krosnick, J.A. (2007). The effect of survey mode on
inferences about political attitudes and behavior: Comparing
the 2000 and 2004 ANES to internet surveys with non-probability
samples. Political Analysis, 15, 286-323.
Malhotra, N., & Krosnick, J.A. (2007). Retrospective and prospective
performance assessments during the 2004 election campaign:
Tests of mediation and news media priming. Political Behavior,
29, 249-278.
Malhotra, N., & Krosnick, J.A. (2007). Procedures for updating
classification systems: A study of biotechnology and the
standard occupational classification system. Journal of
Official Statistics, 23, 409-432.
Schneider, D., Tahk, A., & Krosnick, J.A. (2007). Reconsidering the
impact of behavior prediction questions on illegal drug use:
The importance of using proper analytic methods in social
psychology. Social Influence, 2, 178-196.
Holbrook, A.L., Krosnick, J.A., & Pfent, A.M. (in press). Response
rates in surveys by the news media and government contractor
survey research firms. In J. Lepkowski, B. Harris-Kojetin, P.J.
Lavrakas, C. Tucker, E. de Leeuw, M. Link, M. Brick, L. Japec,
& R. Sangster (Eds.), Telephone survey methodology. New York:
Wiley.
Iyengar, S., Hahn, K.S., Krosnick, J.A., & Walker, J. (in press).
Selective exposure to campaign communication: The role of
anticipated agreement and issue public membership. Journal of
Politics.
Visser, P.S., Holbrook, A.L., & Krosnick, J.A. (in press). Knowledge
and attitudes. In W. Donsbach & M.W. Traugott (Eds.), Handbook
of public opinion research. Thousand Oaks, CA: Sage
Publications
Other Publications
Telhami, S., & Krosnick, J.A. (1989). American sentiment on Israeli-
Palestinian fight: No favorites; Just make peace. Op-ed article
in The Los Angeles Times, March 14, 1989. (Reprinted in the
Columbus Dispatch, March 17, 1989)
Krosnick, J.A. (1990). The uses and abuses of public opinion polls: The
case of Louis Harris and Associates. Chronicles, 14, 47-49.
Krosnick, J.A. (1990). The impact of satisficing on survey data
quality. In Proceedings of the Bureau of the Census 1990 Annual
Research Conference (pp. 835-845). Washington, D.C.: U.S.
Government Printing Office.
Smith, W.R., Culpepper, I.J., & Krosnick, J.A. (1992). The impact of
question order on cognitive effort in survey responding. In
Proceedings of the Sixth National Conference on Undergraduate
Research. Minneapolis, MN: University of Minnesota Press.
Krosnick, J.A., & Hermann, M.G. (1993). Report on the 1991 Ohio State
University Summer Institute in Political Psychology. Political
Psychology, 14, 363-373.
Carson, R.T., Hanemann, W.M., Kopp, R.J., Krosnick, J.A., Mitchell,
R.C., Presser, S., Ruud, P.A., & Smith, V.K. (1994).
Prospective interim lost use value due to DDT and PCB
contamination in the Southern California Bight. La Jolla, CA:
Natural Resource Damage Assessment.
Carson, R.T., Conaway, M.B., Hanemann, W.M., Krosnick, J.A., Martin,
K.M., McCubbin, D.R., Mitchell, R.C., Presser, S. (1995). The
value of preventing oil spill injuries to natural resources
along California's central coast. La Jolla, CA: Natural
Resource Damage Assessment.
Krosnick, J.A., Visser, P.S., & Holbrook, A.L. (1998). American opinion
on global warming: The impact of the Fall 1997 debate.
Resources, 133, 5-9.
Krosnick, J.A. (2000). The threat of satisficing in surveys: The
shortcuts respondents take in answering questions. Survey
Methods Newsletter, 20, 4-8.
Krosnick, J.A. (2000). Americans are ready for the debacle to end.
Newsday, December 7, A63-A66.
Krosnick, J.A. (2001). The psychology of voting. The Psychology Place.
http://www.psychplace.com/editorials/krosnick/krosnick1.html
Green, M.C., & Krosnick, J.A. (2001). Comparing telephone and face-to-
face interviewing in terms of data quality: The 1982 National
Election Studies Method Comparison Project. In D. O'Rourke
(Ed.), Health survey research methods. Hyattsville, Maryland:
Department of Health and Human Services. DHHS Publication No.
(PHS) 01-1013.
Silver, M.D., & Krosnick, J.A. (2001). Optimizing survey measurement
accuracy by matching question design to respondent memory
organization. In Federal Committee on Statistical Methodology
Research Conference, 2001. NTIS: PB2002-100103.
http://www.fcsm.gov/01papers/Krosnick.pdf
Krosnick, J.A. (2003). Introduction. In G.R. Walden, Survey research
methodology, 1990-1999: An annotated bibliography. Westport,
Connecticut: Greenwood Press.
Krosnick, J.A. (2003). AAPOR in Nashville: The program for the 58th
annual conference. AAPOR News, 31, 1, 3.
Krosnick, J.A. (2003). Response rates, Huffington, and More:
Reflections on the 58th annual conference. AAPOR News, 31, 1,
4-5.
Krosnick, J.A. (2003). Proceedings of the fifty-eighth annual
conference of the American Association for Public Opinion
Research. Public Opinion Quarterly.
Fiorina, M., & Krosnick, J.A. (2004). The Economist/YouGov Internet
Presidential poll. http://www.economist.com/media/pdf/Paper.pdf
Krosnick, J.A. (2006). What pilots could tell us. Op-ed essay in The
New York Times, August 30, 2006.
Krosnick, J.A. (2006). Are we really safer in the skies today? Aviation
Law Prof Blog, September 5.
http://lawprofessors.typepad.com/aviation/
Krosnick, J.A. (2006). In the voting booth, bias starts at the top. Op-
ed in The New York Times, November 4, 2006.
Krosnick, J.A. (2006). In the voting booth, name order can sway an
election. Opinion essay in the ``Perspective'' section of The
San Jose Mercury News, November 26, 2006.
Book Reviews
Krosnick, J.A. (1987). Review of Political Cognition: The 19th Annual
Carnegie Symposium on Cognition, edited by R.R. Lau and D.O.
Sears. American Political Science Review, 81, 266-268.
Krosnick, J.A. (1988). Review of The Choice Questionnaire, by Peter
Neijens. Public Opinion Quarterly, 52, 408-411.
Krosnick, J.A. (1993). Review of Measurement Errors in Surveys, edited
by P.P. Biemer, R.M. Groves, L.E. Lyberg, N.A. Mathiowetz, & S.
Sudman. Public Opinion Quarterly, 57, 277-280.
Krosnick, J.A. (1994). A new introduction to survey methods: Review of
Questionnaire Design, Interviewing and Attitude Measurement, by
A.N. Oppenheim. Contemporary Psychology, 39, 221-222.
Krosnick, J.A. (1997). Review of Thinking About Answers: The
Application of Cognitive Processes to Survey Methodology, by S.
Sudman, N.M. Bradburn, and N. Schwarz, and Answering Questions:
Methodology for Determining Cognitive and Communicative
Processes in Survey Research, edited by N. Schwarz and S.
Sudman. Public Opinion Quarterly, 61, 664-667.
Krosnick, J.A. (1998). Review of What Americans Know about Politics and
Why It Matters, by M.X. Delli-Carpini and S. Keeter. The Annals
of the American Academy of Political and Social Science, 559,
189-191.
Presentations
Milburn, M.A., & Krosnick, J.A. (1979). Social psychology applied to
smoking and drug abuse prevention. Paper presented at the New
England Psychological Association Annual Meeting, Framingham,
Massachusetts.
Krosnick, J.A., McAlister, A.L., & Milburn, M.A. (1980). Research
design for evaluating a peer leadership intervention to prevent
adolescent substance abuse. Paper presented at the American
Psychological Association Annual Meeting, Montreal, Canada.
McAlister, A.L., Gordon, N.P., Krosnick, J.A., & Milburn, M.A. (1982).
Experimental and correlational tests of a theoretical model for
smoking prevention. Paper presented at the Society for
Behavioral Medicine Annual Meeting, Chicago, Illinois.
Kinder, D.R., Iyengar, S., Krosnick, J.A., & Peters, M.D. (1983). More
than meets the eye: The impact of television news on
evaluations of presidential performance. Paper presented at the
Midwest Political Science Association Annual Meeting, Chicago,
Illinois.
Krosnick, J.A. (1983). The relationship of attitude centrality to
attitude stability. Paper presented at the American
Sociological Association Annual Convention, Detroit, Michigan.
Alwin, D.F., & Krosnick, J.A. (1984). The measurement of values: A
comparison of ratings and rankings. Paper presented at the
American Association for Public Opinion Research Annual
Meeting, Delavan, Wisconsin.
Schuman, H., Ludwig, J., & Krosnick, J.A. (1984). Measuring the
salience and importance of public issues over time. Paper
presented at the American Association for Public Opinion
Research Annual Meeting, Delavan, Wisconsin.
Krosnick, J.A. (1984). Attitude extremity, stability, and self-report
accuracy: The effects of attitude centrality. Paper presented
at the American Association for Public Opinion Research Annual
Meeting, Delavan, Wisconsin.
Krosnick, J.A. (1984). The influence of consensus information on
predictions of one's own behavior. Paper presented at the
American Psychological Association Annual Meeting, Toronto,
Canada.
Krosnick, J.A., & Alwin, D.F. (1986). An evaluation of a cognitive
theory of response order effects in survey measurement. Paper
presented at the American Association for Public Opinion
Research Annual Meeting, St. Petersburg, Florida.
Krosnick, J.A. (1986). A new look at question order effects in surveys.
Paper presented at the Symposium on Cognitive Sciences and
Survey Research, Ann Arbor, Michigan.
Krosnick, J.A. (1987). The role of attitude importance in social
evaluation: A study of policy preferences, presidential
candidate evaluations, and voting behavior. Paper presented at
the Midwest Political Science Association Annual Meeting,
Chicago, Illinois.
Krosnick, J.A., Schuman, H., Carnot, C., Berent, M., & Boninger, D.
(1987). Attitude importance and attitude accessibility. Paper
presented at the Midwest Psychological Association Annual
Meeting, Chicago, Illinois.
Krosnick, J.A., & Sedikides, C. (1987). Self-monitoring and self-
protective biases in use of consensus information to predict
one's own behavior. Paper presented at the Midwest
Psychological Association Annual Meeting, Chicago, Illinois.
Krosnick, J.A., Stephens, L., Jussim, L.J., & Lynn, A.R. (1987).
Subliminal priming of affect and its cognitive consequences.
Paper presented at the Midwest Psychological Association Annual
Meeting, Chicago, Illinois.
Krosnick, J.A., & Alwin, D.F. (1987). Satisficing: A strategy for
dealing with the demands of survey questions. Paper presented
at the American Association for Public Opinion Research Annual
Meeting, Hershey, Pennsylvania.
Judd, C.M., & Krosnick, J.A. (1987). The structural bases of
consistency among political attitudes: The effects of political
expertise and attitude importance. Paper presented at the
American Psychological Association Annual Meeting, New York,
New York.
Krosnick, J.A., & Milburn, M.A. (1987). Psychological determinants of
political opinionation. Paper presented at the American
Political Science Association Annual Meeting, Chicago,
Illinois.
Krosnick, J.A. (1987). The role of attitude importance in social
evaluation: A study of policy preferences, presidential
candidate evaluations, and voting behavior. Paper presented at
the Society for Experimental Social Psychology Annual Meeting,
Charlottesville, Virginia.
Krosnick, J.A. (1988). Psychological perspectives on political
candidate perception: A review of research on the projection
hypothesis. Paper presented at the Midwest Political Science
Association Annual Meeting, Chicago, Illinois.
Krosnick, J.A., Boninger, D.S., Berent, M.K., & Carnot, C.G. (1988).
The origins of attitude importance. Paper presented at the
Midwest Psychological Association Annual Meeting, Chicago,
Illinois.
Krosnick, J.A., Carnot, C.G., Berent, M.K., & Boninger, D.S. (1988). An
exploration of the relations among dimensions of attitude
strength. Paper presented at the Midwest Psychological
Association Annual Meeting, Chicago, Illinois.
Krosnick, J.A., Li, F., & Ashenhurst, J. (1988). Order of information
presentation and the effect of base-rates on social judgments.
Paper presented at the Midwest Psychological Association Annual
Meeting, Chicago, Illinois.
Krosnick, J.A., Berent, M.K., Carnot, C.G., & Boninger, D.S. (1988).
Attitude importance and recall of attitude-relevant
information. Paper presented at the Midwest Psychological
Association Annual Meeting, Chicago, Illinois.
Krosnick, J.A., & Carnot, C.G. (1988). A comparison of two theories of
the origins of political attitude strength. Paper presented at
the Midwest Psychological Association Annual Meeting, Chicago,
Illinois.
Krosnick, J.A., & Alwin, D.F. (1988). The stability of political
attitudes across the life span. Paper presented at the American
Association for Public Opinion Research Annual Meeting,
Toronto, Canada.
Krosnick, J.A., & Carnot, C.G. (1988). Identifying the foreign affairs
attentive public: A comparison of competing theories. Paper
presented to the Mershon Center Seminar on Foreign Policy
Decision Making, The Ohio State University, Columbus, Ohio.
Alwin, D.F., & Krosnick, J.A. (1988). The reliability of attitudinal
survey data. Paper presented at the International Conference on
Social Science Methodology, Dubrovnik, Yugoslavia.
Alwin, D.F., & Krosnick, J.A. (1988). Aging, cohort stability, and
change in socio-political attitudes: Exploring the
generational-persistence model. Paper presented at the
International Society of Political Psychology Annual Meeting,
Secaucus, New Jersey.
Krosnick, J.A., & Kinder, D.R. (1988). Altering the foundations of
popular support for the president through priming: Reagan, the
Iran-Contra affair, and the American public. Paper presented at
the American Political Science Association Annual Meeting,
Washington, D.C.
Krosnick, J.A., & Weisberg, H.F. (1988). Liberal/conservative
ideological structures in the mass public: A study of attitudes
toward politicians and social groups. Paper presented at the
American Political Science Association Annual Meeting,
Washington, D.C.
Krosnick, J.A. (1988). Government policy and citizen passion: A study
of issue publics in contemporary America. Paper presented at
the Shambaugh Conference on Communication, Cognition, Political
Judgment, and Affect, Iowa City, Iowa.
Berent, M.K., Krosnick, J.A., & Boninger, D.S. (1989). Attitude
importance and the valenced recall of relevant information.
Paper presented at the Midwest Psychological Association Annual
Meeting, Chicago, Illinois.
Betz, A., & Krosnick, J.A. (1989). Can people detect the affective tone
of subliminally presented stimuli? Paper presented at the
Midwest Psychological Association Annual Meeting, Chicago,
Illinois.
Krosnick, J.A., & Berent, M.K. (1989). Age-related changes in peer and
parental influence on heavy television viewing among children
and adolescents. Paper presented at the Midwest Psychological
Association Annual Meeting, Chicago, Illinois.
Alwin, D.F., & Krosnick, J.A. (1989). The reliability of attitudinal
survey data. Paper presented at the American Association for
Public Opinion Research Annual Meeting, St. Petersburg,
Florida.
Krosnick, J.A. (1989). The implications of social psychological
findings on compliance for recruiting survey respondents. Paper
presented at the American Association for Public Opinion
Research Annual Meeting, St. Petersburg, Florida.
Telhami, S., & Krosnick, J.A. (1989). Public attitudes and American
policy toward the Arab-Israeli conflict. Paper presented at the
International Society of Political Psychology Annual Meeting,
Israel.
Krosnick, J.A., & Alwin, D.F. (1989). Symbolic versus non-symbolic
political attitudes: Is there a distinction? Paper presented at
the American Political Science Association Annual Meeting,
Atlanta, Georgia.
Krosnick, J.A. (1989). The impact of cognitive sophistication and
attitude importance on response order effects and question
order effects. Paper presented at the conference entitled Order
effects in social and psychological research, Nags Head
Conference Center, Kill Devil Hills, North Carolina.
Krosnick, J.A. (1990). The impact of satisficing on survey data
quality. Paper presented at the Annual Research Conference of
the Bureau of the Census, U.S. Department of Commerce,
Washington, D.C.
Krosnick, J.A. (1990). New perspectives on survey questionnaire
construction: Lessons from the cognitive revolution. Invited
presentation at the 1990 Technical Conference of the United
States General Accounting Office, College Park, Maryland.
Krosnick, J.A. (1990). Americans' perceptions of presidential
candidates: A test of the projection hypothesis. Paper
presented at the Midwest Political Science Association Annual
Meeting, Chicago, Illinois.
Krosnick, J.A., & Berent, M.K. (1990). The impact of verbal labeling of
response alternatives and branching on attitude measurement
reliability in surveys. Paper presented at the American
Association for Public Opinion Research Annual Meeting,
Lancaster, Pennsylvania.
Krosnick, J.A., & Alwin, D.F. (1990). The stability of political
preferences: Comparisons of symbolic and non-symbolic
attitudes. Paper presented at the International Society of
Political Psychology Annual Meeting, Washington, D.C.
Krosnick, J.A. (1990). Confounding of attitude objects with attitude
measurement techniques in studies of political attitude
stability. Paper presented at the Summer Institute in Survey
Research Techniques, University of Michigan.
Fabrigar, L.R., & Krosnick, J.A. (1991). The effect of question order
and attitude importance on the false consensus effect. Paper
presented at the Midwestern Psychological Association Annual
Meeting, Chicago, Illinois.
Berent, M.K., & Krosnick, J.A. (1991). Attitude measurement
reliability: The impact of verbal labeling of response
alternatives and branching. Paper presented at the Midwestern
Psychological Association Annual Meeting, Chicago, Illinois.
Lehman, D.R., Krosnick, J.A., West, R.L., & Li, F. (1991). The focus of
judgment effect: A question wording effect due to hypothesis
confirmation bias. Paper presented at the American Association
for Public Opinion Research Annual Meeting, Phoenix, Arizona.
Krosnick, J.A., Boninger, D.S., Chuang, Y.C., & Carnot, C.G. (1991).
Attitude strength: One construct or many related constructs?
Paper presented at the Nags Head Conference on Attitude
Strength, Nags Head, North Carolina.
Krosnick, J.A. (1991). Research on attitude importance: A summary and
integration. Paper presented at the Nags Head Conference on
Attitude Strength, Nags Head, North Carolina.
Krosnick, J.A., & Berent, M.K. (1991). Memory for political
information: The impact of attitude importance on selective
exposure, selective elaboration, and selective recall. Paper
presented at the Society for Experimental Social Psychology
Annual Meeting, Columbus, Ohio.
Krosnick, J.A., & Brannon, L.A. (1992). The impact of war on the
ingredients of presidential evaluations: George Bush and the
Gulf conflict. Paper presented at the Conference on the
Political Consequences of War, The Brookings Institution,
Washington, D.C.
Berent, M.K., & Krosnick, J.A. (1992). The relation between attitude
importance and knowledge structure. Paper presented at the
Midwest Political Science Association Annual Meeting, Chicago,
Illinois.
Smith, W.R., Culpepper, I.J., & Krosnick, J.A. (1992). The impact of
question order on cognitive effort in survey responding. Paper
presented at the Sixth National Conference on Undergraduate
Research, University of Minnesota, Minneapolis, Minnesota.
Krosnick, J.A., & Brannon, L.A. (1992). The impact of war on the
ingredients of presidential evaluations: George Bush and the
Gulf conflict. Paper presented at the American Association for
Public Opinion Research Annual Meeting, St. Petersburg,
Florida.
Narayan, S.S., & Krosnick, J.A. (1992). Response effects in surveys as
a function of cognitive sophistication. Paper presented at the
Midwestern Psychological Association Annual Meeting, Chicago,
Illinois.
Boninger, D.S., Krosnick, J.A., & Berent, M.K. (1992). Imagination,
perceived likelihood, and self-interest: A path toward attitude
importance. Paper presented at the Midwestern Psychological
Association Annual Meeting, Chicago, Illinois.
Culpepper, I.J., Smith, W., & Krosnick, J.A. (1992). The impact of
question order on satisficing in attitude surveys. Paper
presented at the Midwestern Psychological Association Annual
Meeting, Chicago, Illinois.
Berent, M.K., & Krosnick, J.A. (1992). Attitude importance, information
accessibility, and attitude-relevant judgments. Paper presented
at the Midwestern Psychological Association Annual Meeting,
Chicago, Illinois.
Krosnick, J.A., & Brannon, L.A. (1992). The impact of war on the
ingredients of presidential evaluations: George Bush and the
Gulf conflict. Paper presented at the International Society of
Political Psychology Annual Meeting, San Francisco, California.
Rahn, W.M., Krosnick, J.A., & Breuning, M. (1992). Rationalization and
derivation processes in political candidate evaluation. Paper
presented at the American Political Science Association Annual
Meeting, Chicago, Illinois.
Krosnick, J.A., & Brannon, L.A. (1992). Effects of knowledge, interest,
and exposure on news media priming effects: Surprising results
from multivariate analysis. Paper presented at the Society for
Experimental Social Psychology Annual Meeting, San Antonio,
Texas.
Berent, M.K., & Krosnick, J.A. (1993). Attitude importance and
selective exposure to attitude-relevant information. Paper
presented at the Midwestern Psychological Association Annual
Meeting, Chicago, Illinois.
Fabrigar, L.R., & Krosnick, J.A. (1993). The impact of personal and
national importance judgments on political attitudes and
behavior. Paper presented at the Midwestern Psychological
Association Annual Meeting, Chicago, Illinois.
Miller, J.M., & Krosnick, J.A. (1993). The effects of candidate ballot
order on election outcomes. Paper presented at the Midwestern
Psychological Association Annual Meeting, Chicago, Illinois.
Narayan, S.S., & Krosnick, J.A. (1993). Questionnaire and respondent
characteristics that cause satisficing in attitude surveys.
Paper presented at the Midwestern Psychological Association
Annual Meeting, Chicago, Illinois.
Narayan, S.S., & Krosnick, J.A. (1993). Response effects in surveys as
a function of cognitive sophistication. Paper presented at the
American Psychological Society Annual Meeting, Chicago,
Illinois.
Smith, W.R., & Krosnick, J.A. (1993). Need for cognition, prior
thought, and satisficing in attitude surveys. Paper presented
at the Midwestern Psychological Association Annual Meeting,
Chicago, Illinois.
Smith, W.R., & Krosnick, J.A. (1993). Cognitive and motivational
determinants of satisficing in surveys. Paper presented at the
American Psychological Society Annual Meeting, Chicago,
Illinois.
Berent, M.K., & Krosnick, J.A. (1994). Attitude importance and
selective exposure to attitude-relevant information. Paper
presented at the Midwest Political Science Association Annual
Meeting, Chicago, Illinois.
Fabrigar, L.R., & Krosnick, J.A. (1994). The impact of attitude
importance on consistency among attitudes. Paper presented at
the Midwestern Psychological Association Annual Meeting,
Chicago, Illinois.
Krosnick, J.A. (1994). Survey methods and survey results: Overturning
conventional wisdom. Paper presented to the American Marketing
Association, Columbus Chapter.
Krosnick, J.A., & Fabrigar, L.R. (1994). Attitude recall questions: Do
they work? Paper presented at the American Association for
Public Opinion Research Annual Meeting, Danvers, Massachusetts.
Miller, J.M., & Krosnick, J.A. (1994). Does accessibility mediate
agenda-setting and priming? Paper presented at the Midwestern
Psychological Association Annual Meeting, Chicago, Illinois.
Smith, W.R., & Krosnick, J.A. (1994). Sources of non-differentiation
and mental coin-flipping in surveys: Tests of satisficing
hypotheses. Paper presented at the American Association for
Public Opinion Research Annual Meeting, Danvers, Massachusetts.
Visser, P.S., & Krosnick, J.A. (1994). Mail surveys for election
forecasting? An evaluation of the Columbus Dispatch Poll. Paper
presented at the Midwestern Psychological Association Annual
Meeting, Chicago, Illinois.
Visser, P.S., Krosnick, J.A., & Curtin, M. (1994). Mail surveys for
election forecasting? Paper presented at the American
Association for Public Opinion Research Annual Meeting,
Danvers, Massachusetts.
Krosnick, J.A., & Brannon, L.A. (1995). News media priming and the 1992
U.S. presidential election. Paper presented at the American
Political Science Association Annual Meeting, Chicago,
Illinois.
Krosnick, J.A., & Cornet, P.J. (1995). Attitude importance and attitude
change revisited: Shifts in attitude stability and measurement
reliability across a presidential election campaign. Paper
presented at the American Psychological Society Annual Meeting,
New York, New York.
Krosnick, J.A., & Fabrigar, L.R. (1995). Designing rating scales for
effective measurement in surveys. Invited address at the
International Conference on Survey Measurement and Process
Quality, Bristol, England.
Krosnick, J.A., Narayan, S.S., & Smith, W.R. (1995). The causes of
survey satisficing: Cognitive skills and motivational factors.
Paper presented at the Midwest Association for Public Opinion
Research, Chicago, Illinois.
Miller, J.M., Fabrigar, L.R., & Krosnick, J.A. (1995). Contrasting
attitude importance and collective issue importance: Attitude
properties and consequences. Paper presented at the Midwestern
Psychological Association Annual Meeting, Chicago, Illinois.
Miller, J.M., & Krosnick, J.A. (1995). Ballot order effects on election
outcomes. Paper presented at the Midwest Political Science
Association Annual Meeting, Chicago, Illinois.
Miller, J.M., & Krosnick, J.A. (1995). Mediators and moderators of news
media priming: It ain't accessibility, folks. Paper presented
at the International Society of Political Psychology Annual
Meeting, Washington, D.C.
Narayan, S.S., & Krosnick, J.A. (1995). Education moderates response
effects in surveys. Paper presented at the American Association
for Public Opinion Research Annual Meeting, Ft. Lauderdale,
Florida.
Smith, W.R., & Krosnick, J.A. (1995). Mental coin-flipping and non-
differentiation in surveys: Tests of satisficing hypotheses.
Invited address at the Midwestern Psychological Association
Annual Meeting, Chicago, Illinois.
Visser, P.S., & Krosnick, J.A. (1995). The relation between age and
susceptibility to attitude change: A new approach to an old
question. Paper presented at the Midwestern Psychological
Association Annual Meeting, Chicago, Illinois.
Visser, P.S., & Krosnick, J.A. (1995). Mail surveys win again: Some
explanations for the superior accuracy of the Columbus Dispatch
poll. Paper presented at the American Association for Public
Opinion Research Annual Meeting, Ft. Lauderdale, Florida.
Ankerbrand, A.L., Krosnick, J.A., Cacioppo, J.T., & Visser, P.S.
(1996). Candidate assessments and evaluative space. Paper
presented at the Midwestern Psychological Association Annual
Meeting, Chicago, Illinois.
Bizer, G.Y., & Krosnick, J.A. (1996). Attitude accessibility and
importance revisited. Paper presented at the Midwestern
Psychological Association Annual Meeting, Chicago, Illinois.
Krosnick, J.A. (1996). Linking survey question structure to data
quality: The impact of no-opinion options. Paper presented at
the conference on ``Quality Criteria in Survey Research,''
sponsored by the World Association for Public Opinion Research,
Cadenabbia, Italy.
Krosnick, J.A., & Brannon, L.A. (1996). News media priming during the
1992 U.S. presidential election campaign. Paper presented at
the International Society of Political Psychology Annual
Meeting, Vancouver, British Columbia.
Miller, J.M., Fabrigar, L.R., & Krosnick, J.A. (1996). The roles of
personal importance and national importance in motivating issue
public membership. Paper presented at the Midwest Political
Science Association Annual Meeting, Chicago, Illinois.
Miller, J.M., & Krosnick, J.A. (1996). Can issue public membership be
triggered by the threat of a policy change? Paper presented at
the International Society of Political Psychology Annual
Meeting, Vancouver, British Columbia.
Krosnick, J.A., & Visser, P.S. (1996). Changes in political attitude
strength through the life cycle. Paper presented at the Society
for Experimental Social Psychology Annual Meeting, Sturbridge,
Massachusetts.
Miller, J.M., & Krosnick, J.A. (1997). The impact of policy change
threat on issue public membership. Paper presented at the
Midwest Political Science Association Annual Meeting, Chicago,
Illinois.
Ankerbrand, A.L., Krosnick, J.A., Cacioppo, J.T., Visser, P.S., &
Gardner, W. (1997). Attitudes toward political candidates
predict voter turnout. Paper presented at the Midwestern
Psychological Association Annual Meeting, Chicago, Illinois.
Ankerbrand, A.L., & Krosnick, J.A. (1997). Response order effects in
dichotomous questions: A social desirability explanation. Paper
presented at the American Psychological Society Annual Meeting,
Washington, DC.
Krosnick, J.A. (1997). Miraculous accuracy in political surveys: The
keys to success. Presentation in the Federation of Behavioral,
Psychological, and Cognitive Sciences Seminar on Science and
Public Policy, Library of Congress, Washington, D.C.
Krosnick, J.A. (1997). Non-attitudes and no-opinion filters. Paper
presented at the Conference on no opinion, instability, and
change in public opinion research. University of Amsterdam, the
Netherlands.
Krosnick, J.A. (1997). Attitude strength. Paper presented at the
Conference on no opinion, instability, and change in public
opinion research. University of Amsterdam, the Netherlands.
Bizer, G.Y., & Krosnick, J.A. (1998). The relation between attitude
importance and attitude accessibility. Paper presented at the
Midwestern Psychological Association Annual Meeting, Chicago,
Illinois.
Holbrook, A., Krosnick, J.A., Carson, R.T., & Mitchell, R.C. (1998).
Violating conversational conventions disrupts cognitive
processing of survey questions. Paper presented at the American
Association for Public Opinion Research Annual Meeting, St.
Louis, Missouri.
Krosnick, J.A. (1998). Applying stated preference methods to assessing
the value of public goods. Paper presented at the National
Oceanic and Atmospheric Administration Application of Stated
Preference Methods to Resource Compensation Workshop,
Washington, DC.
Krosnick, J.A. (1998). Implications of psychological research on
justice and compensation for handling of natural resource
damage cases. Paper presented at the National Oceanic and
Atmospheric Administration Application of Stated Preference
Methods to Resource Compensation Workshop, Washington, DC.
Krosnick, J.A. (1998). Acquiescence: How a standard practice in many
survey organizations compromises data quality. Paper presented
at the conference on ``Quality Criteria in Survey Research,''
sponsored by the World Association for Public Opinion Research,
Cadenabbia, Italy.
Krosnick, J.A., Lacy, D., & Lowe, L. (1998). When is environmental
damage Americans' most important problem? A test of agenda-
setting vs. the issue-attention cycle. Paper presented at the
International Society of Political Psychology Annual Meeting,
Montreal, Quebec, Canada.
Visser, P.S., Krosnick, J.A., Marquette, J., & Curtin, M. (1998).
Improving election forecasting: Allocation of undecided
respondents, identification of likely voters, and response
order effects. Paper presented at the American Association for
Public Opinion Research Annual Meeting, St. Louis, Missouri.
Krosnick, J.A. (1998). The impact of science on public opinion: How
people judge the national seriousness of global warming and
form policy preferences. Paper presented at the American
Political Science Association Annual Meeting, Boston,
Massachusetts.
Krosnick, J.A. (1998). Response choice order and attitude reports: New
evidence on conversational conventions and information
processing biases in voting and in election forecasting polls.
Paper presented at the Society of Experimental Social
Psychology Annual Meeting, Lexington, Kentucky.
Krosnick, J.A. (1998). The impact of the Fall 1997 debate about global
warming on American public opinion. Paper presented at
Resources for the Future, Washington, D.C.
Krosnick, J.A. (1998). What the American public believes about global
warming: Results of a national longitudinal survey study. Paper
presented at the Amoco Public and Government Affairs and
Government Relations Meeting, Woodruff, Wisconsin.
Krosnick, J.A. (1998). What the American public believes about global
warming: Results of a national longitudinal survey study. Paper
presented in the Second Annual Carnegie Lectures on Global
Environmental Change, Carnegie Museum of Natural History,
Pittsburgh, Pennsylvania.
Green, M.C., & Krosnick, J.A. (1999). Survey satisficing: Telephone
interviewing increases non-differentiation and no opinion
responses. Paper presented at the Midwestern Psychological
Association Annual Meeting, Chicago, Illinois.
Green, M.C., & Krosnick, J.A. (1999). Comparing telephone and face-to-
face interviewing in terms of data quality: The 1982 National
Election Studies Method Comparison Project. Paper presented at
the Seventh Annual Conference on Health Survey Research
Methods, Williamsburg, Virginia.
Holbrook, A.L., Krosnick, J.A., Carson, R.T., & Mitchell, R.C. (1999).
Violating conversational conventions disrupts cognitive
processing of attitude questions. Paper presented at the
American Association for Public Opinion Research Annual
Meeting, St. Petersburg, Florida.
Krosnick, J.A. (1999). What happens when survey respondents don't try
very hard? The notion of survey satisficing. Paper presented at
the National Center for Social Research, London, United
Kingdom.
Krosnick, J.A. (1999). Satisficing: A single explanation for a wide
range of findings in the questionnaire design literature. Paper
presented at Linking the Path: A Conference for Analysts,
Researchers, and Consultants, sponsored by the Gallup
Organization, Lincoln, Nebraska.
Krosnick, J.A. (1999). Methodology for the NAOMS Survey. Presentation
at the Workshop on the Concept of the National Aviation
Operations Monitoring System (NAOMS), Sponsored by the National
Aeronautics and Space Administration, Alexandria, Virginia.
Krosnick, J.A. (1999). Refining measurement of public values for
policy-making: A test of contingent valuation procedures. Paper
presented at the American Political Science Association Annual
Meeting, Atlanta, Georgia.
Krosnick, J.A. (1999). The threat of satisficing in surveys: The
shortcuts respondents take in answering questions. Paper
presented at the National Center for Social Research Survey
Methods Seminar on Survey Data Quality, London, England.
Krosnick, J.A. (1999). Optimizing questionnaire design: How to maximise
data quality. Paper presented at the National Center for Social
Research Survey Methods Seminar on Survey Data Quality, London,
England.
Krosnick, J.A. (1999). The causes and consequences of no-opinion
responses in surveys. Paper presented at the International
Conference on Survey Nonresponse, Portland, Oregon.
Miller, J.M., & Krosnick, J.A. (1999). The impact of threats and
opportunities on political participation. Paper presented at
the Midwest Political Science Association Annual Meeting,
Chicago, Illinois.
O'Muircheartaigh, C., Krosnick, J.A., & Helic, A. (1999). Middle
alternatives, acquiescence, and the quality of questionnaire
data. Paper presented at the American Association for Public
Opinion Research Annual Meeting, St. Petersburg, Florida.
Bizer, G.Y., & Krosnick, J.A. (2000). The importance and accessibility
of attitudes: Helping explain the structure of strength-related
attitude attributes. Paper presented at the Midwestern
Psychological Association Annual Meeting, Chicago, Illinois.
Holbrook, A.L., Krosnick, J.A., Visser, P.S., Gardner, W.L., &
Cacioppo, J.T. (2000). The formation of attitudes toward
presidential candidates and political parties: An asymmetric
nonlinear process. Paper presented at the American
Psychological Society Annual Meeting, Miami, Florida.
Holbrook, A.L., Krosnick, J.A., Visser, P.S., Gardner, W.L., &
Cacioppo, J.T. (2000). The formation of attitudes toward
presidential candidates and political parties: An asymmetric,
nonlinear, interactive process. Paper presented at the American
Political Science Association Annual Meeting, Washington, D.C.
Krosnick, J.A. (2000). Peering into the future of thinking and
answering: A psychological perspective on internet survey
respondents. Paper presented at Survey Research: Past, Present,
and Internet, the 2000 Nebraska Symposium on Survey Research,
University of Nebraska, Lincoln, Nebraska.
Krosnick, J.A. (2000). The present and future of research on survey
non-responses: Reflections on Portland '99 and beyond.
Roundtable presentation at the American Association for Public
Opinion Research Annual Meeting, Portland, Oregon.
Holbrook, A.L., Krosnick, J.A., Moore, D.W., & Tourangeau, R. (2000).
Response order effects in Gallup surveys: Linguistic structure
and the impact of respondent ability, motivation, and task
difficulty. Paper presented at the American Association for
Public Opinion Research Annual Meeting, Portland, Oregon.
Miller, J.M., Krosnick, J.A., & Lowe, L. (2000). The impact of policy
change threat on financial contributions to interest groups.
Paper presented at an invited conference, Political
Participation: Building a Research Agenda, Center for the Study
of Democratic Politics, Princeton University, Princeton, New
Jersey.
Miller, J.M., & Krosnick, J.A. (2000). Attitude change outside the
laboratory: News media ``priming'' turns out not to be priming
after all. Paper presented at the Society of Experimental
Social Psychology Annual Meeting, Atlanta, Georgia.
Saris, W., & Krosnick, J.A. (2000). The damaging effect of acquiescence
response bias on answers to agree/disagree questions. Paper
presented at the American Association for Public Opinion
Research Annual Meeting, Portland, Oregon.
Visser, P.S., & Krosnick, J.A. (2000). Exploring the distinct
mechanisms through which strength-related attitude attributes
confer resistance to attitude change. Paper presented at the
Society for Personality and Social Psychology Annual Meeting,
Nashville, Tennessee.
Bizer, G.Y., & Krosnick, J.A. (2001). Need to evaluate and need for
cognition predict political attitudes and behavior. Paper
presented at the Midwestern Psychological Association, Chicago,
Illinois.
Krosnick, J.A. (2001). Who shapes public policy? Presentation made at
the Annual Conference of the Ohio Farm Bureau Federation,
Columbus, Ohio.
Krosnick, J.A., & Bizer, G.Y. (2001). Exploring the structure of
strength-related attitude features: The relation between
attitude importance and attitude accessibility. Paper presented
at the Society for Personality and Social Psychology Annual
Meeting, San Antonio, Texas.
Krosnick, J.A., Visser, P.S., & Holbrook, A.L. (2001). Real-time
attitude change outside the laboratory: The case of the 1997
national debate on global warming. Paper presented at the
Society for Personality and Social Psychology Annual Meeting,
San Antonio, Texas.
Krosnick, J.A., & Miller, J.M. (2001). An unrecognized need for ballot
reform: Effects of candidate name order. Paper presented at the
conference entitled Election Reform: 2000 and Beyond, sponsored
by the USC-Caltech Center for the Study of Law and Politics and
the Jesse M. Unruh Institute of Politics, University of
Southern California, Los Angeles, California.
Miller, J.M., & Krosnick, J.A. (2001). What motivates political
cognition and behavior? Paper presented at the Midwest
Political Science Association Annual Meeting, Chicago,
Illinois.
Green, M.C., Krosnick, J.A., & Holbrook, A.L. (2001). Experimental
comparisons of the quality of data obtained from face-to-face
and telephone surveys. Paper presented at the American
Association for Public Opinion Research Annual Meeting,
Montreal, Canada.
Silver, M.D., & Krosnick, J.A. (2001). An experimental comparison of
the quality of data obtained in telephone and self-administered
mailed surveys with a listed sample. Paper presented at the
American Association for Public Opinion Research Annual
Meeting, Montreal, Canada.
Chang, L., & Krosnick, J.A. (2001). The representativeness of national
samples: Comparisons of an RDD telephone survey with matched
Internet surveys by Harris Interactive and Knowledge Networks.
Paper presented at the American Association for Public Opinion
Research Annual Meeting, Montreal, Canada.
Chang, L., & Krosnick, J.A. (2001). The accuracy of self-reports:
Comparisons of an RDD telephone survey with Internet surveys by
Harris Interactive and Knowledge Networks. Paper presented at
the American Association for Public Opinion Research Annual
Meeting, Montreal, Canada.
O'Muircheartaigh, C., & Krosnick, J.A. (2001). A cross-national
comparison of middle alternatives, acquiescence, and the
quality of questionnaire data. Paper presented at the American
Association for Public Opinion Research Annual Meeting,
Montreal, Canada.
Marquette, J., Green, J., & Krosnick, J.A. (2001). Experimental
analysis of the accuracy of pre-election vote choice reports.
Paper presented at the American Association for Public Opinion
Research Annual Meeting, Montreal, Canada.
Holbrook, A.L., Krosnick, J.A., Carson, R.T., & Mitchell, R.C. (2001).
Violating conversational conventions disrupts cognitive
processing of attitude questions. Paper presented at the 2001
Fifth Tri-Annual UC Berkeley Invitational Choice Symposium,
Pacific Grove, California.
Krosnick, J.A. (2001). Americans' perceptions of the health risks of
cigarette smoking: A new opportunity for public education.
Paper presented at the invited conference ``Survey Research on
Household Expectations and Preferences,'' Institute for Social
Research, University of Michigan, Ann Arbor, Michigan.
McCready, W., Skitka, L., & Krosnick, J.A. (2001). Using a web-enabled
national panel to conduct social psychological experiments.
Workshop presented at the Society of Experimental Social
Psychology Annual Meeting, Spokane, Washington.
Krosnick, J.A., Courser, M., Mulligan, K., & Chang, L. (2001).
Exploring the determinants of vote choices in the 2000
Presidential election: Longitudinal analyses to document
causality. Paper presented at the American Political Science
Association Annual Meeting, San Francisco, California.
Silver, M.D., & Krosnick, J.A. (2001). Optimizing survey measurement
accuracy by matching question design to respondent memory
organization. Paper presented at the Federal Committee on
Statistical Methodology Research Conference, Arlington,
Virginia.
Krosnick, J.A., Courser, M., Mulligan, K., & Chang, L. (2002).
Exploring the causes of vote choice in the 2000 Presidential
election: Longitudinal analyses to document the causal
determinants of candidate preferences. Paper presented at a
conference entitled ``Assessing the Vitality of Electoral
Democracy in the U.S.: The 2000 Election,'' The Mershon Center,
Ohio State University, Columbus, Ohio.
Miller, J.M., & Krosnick, J.A. (2002). Mediators and moderators of news
media agenda-setting. Paper presented at the Midwest Political
Science Association Annual Meeting, Chicago, Illinois.
Shaeffer, E.M., Krosnick, J.A., & Holbrook, A.L. (2002). Assessing the
efficacy of object rankings following ratings. Paper presented
at the Midwestern Psychological Association Annual Meeting,
Chicago, Illinois.
Lampron, S., Krosnick, J.A., Petty, R.E., & See, M. (2002). Self-
interest, values, involvement, and susceptibility to attitude
change. Paper presented at the Midwestern Psychological
Association Annual Meeting, Chicago, Illinois.
Krosnick, J.A. (2002). Comments on Baruch Fischhoff's ``Environmental
Risk: What's Worth Knowing--and Saying?'' Paper presented at
the 2nd Annual Public Policy Symposium, ``Responding to
Contemporary Environmental Risks.'' Sponsored by the Ohio State
University Environmental Policy Initiative, Fisher College of
Business, Ohio State University, Columbus, Ohio.
Thomas, R.K., Uldall, B.R., & Krosnick, J.A. (2002). More is not
necessarily better: Effects of response categories on
measurement stability and validity. Paper presented at the
American Association for Public Opinion Research Annual
Meeting, St. Petersburg, Florida.
Uldall, B.R., Thomas, R.K., & Krosnick, J.A. (2002). Reliability and
validity of web-based surveys: Effects of response modality,
item format, and number of categories. Paper presented at the
American Association for Public Opinion Research Annual
Meeting, St. Petersburg, Florida.
Shook, N., Krosnick, J.A., & Thomas, R.K. (2002). Following the storm:
Public opinion changes and political reactions in surveys.
Paper presented at the American Association for Public Opinion
Research Annual Meeting, St. Petersburg, Florida.
Chang, L., & Krosnick, J.A. (2002). Comparing self-administered
computer surveys and auditory interviews: An experiment. Paper
presented at the American Association for Public Opinion
Research Annual Meeting, St. Petersburg, Florida.
Silver, M.D., & Krosnick, J.A. (2002). Optimizing survey measurement
accuracy by matching question design to respondent memory
organization. Paper presented at the American Association for
Public Opinion Research Annual Meeting, St. Petersburg,
Florida.
Krosnick, J.A., Visser, P.S., Holbrook, A.L., & Berent, M.K. (2002).
Challenging the common-factor model of strength-related
attitude attributes: Contrasting the antecedents and
consequences of attitude importance and attitude-relevant
knowledge. Paper presented at the General Meeting of the
European Association of Experimental Social Psychology, San
Sebastian, Spain.
Krosnick, J.A., Miller, J.M., & Tichy, M.P. (2002). An unrecognized
need for ballot reform: Effects of candidate name order. Paper
presented at the International Society for Political Psychology
Annual Meeting, Berlin, Germany.
Chang, L., & Krosnick, J.A. (2002). RDD telephone vs. Internet survey
methodology for studying American presidential elections:
Comparing sample representativeness and response quality. Paper
presented at the American Political Science Association Annual
Meeting, Boston, Massachusetts.
Bizer, G.Y., Krosnick, J.A., Holbrook, A.L., Petty, R.E., Rucker, D.D.,
& Wheeler, S.C. (2002). The impact of personality on electoral
behavior and cognition: A study of need for cognition and need
to evaluate. Paper presented at the American Political Science
Association Annual Meeting, Boston, Massachusetts.
Krosnick, J.A., Visser, P.S., & Holbrook, A.L. (2002). Social
psychology under the microscope: Do classic experiments
replicate when participants are representative of the general
public rather than convenience samples of college students?
Paper presented at the Society of Experimental Social
Psychology Annual Meeting, Columbus, Ohio.
Visser, P.S., Krosnick, J.A., & Simmons, J. (2002). Distinguishing the
cognitive and behavioral consequences of attitude importance
and certainty. Paper presented at the Society of Experimental
Social Psychology Annual Meeting, Columbus, Ohio.
Chang, L., & Krosnick, J.A. (2002). RDD telephone vs. Internet survey
methodology for studying American presidential elections:
Comparing sample representativeness and response quality.
Invited presentation at Westat, Rockville, Maryland.
Chang, L., & Krosnick, J.A. (2002). Comparing the quality of data
obtained from telephone and Internet surveys: Field and
laboratory experiments. Invited paper presented at the FCSM
Statistical Policy Seminar ``Challenges to the Federal
Statistical System in Fostering Access to Statistics.''
Bethesda, Maryland.
Lampron, S.F., Krosnick, J.A., Shaeffer, E., Petty, R.E., & See, M.
(2003). Different types of involvement moderate persuasion
(somewhat) differently: Contrasting outcome-based and value-
based involvement. Paper presented at the Society for
Personality and Social Psychology Annual Meeting, Los Angeles,
California.
Visser, P.S., & Krosnick, J.A. (2003). Attitude strength: New insights
from a life-course development perspective. Paper presented at
the Society for Personality and Social Psychology Annual
Meeting, Los Angeles, California.
Krosnick, J.A. (2003). Basic methodological work for and in repeated
cross-sectional and longitudinal surveys: A few thoughts. Paper
presented at the National Science Foundation Workshop on
Repeated Cross-sectional and Longitudinal Surveys, Arlington,
Virginia.
Pfent, A.M., & Krosnick, J.A. (2003). Rationalization of presidential
candidate preferences. Paper presented at the Midwestern
Psychological Association Annual Meeting, Chicago, Illinois.
Holbrook, A.L., & Krosnick, J.A. (2003). Meta-psychological and
operative measures of psychological constructs: The same or
different? Paper presented at the Midwestern Psychological
Association Annual Meeting, Chicago, Illinois.
Krosnick, J.A., Visser, P.S., & Holbrook, A.L. (2003). Social
psychology under the microscope: Do classic experiments
replicate when participants are representative of the general
public rather than convenience samples of college students?
Invited presentation at the Midwestern Psychological
Association Annual Meeting, Chicago, Illinois.
Saris, W.E., Krosnick, J.A., & Shaeffer, E.M. (2003). Comparing the
quality of agree/disagree and balanced forced choice questions
via an MTMM experiment. Paper presented at the Midwestern
Psychological Association Annual Meeting, Chicago, Illinois.
Anand, S., & Krosnick, J.A. (2003). Satisficing in attitude surveys:
The impact of cognitive skills and motivation on response
effects. Paper presented at the Midwestern Psychological
Association Annual Meeting, Chicago, Illinois.
Bizer, G.Y., Krosnick, J.A., Holbrook, A.L., Petty, R.E., Rucker, D.D.,
& Wheeler, S.C. (2003). The impact of personality on political
beliefs, attitudes, and behavior: Need for cognition and need
to evaluate. Paper presented at the American Psychological
Society Annual Meeting, Atlanta, Georgia.
Holbrook, A.L., Pfent, A., & Krosnick, J.A. (2003). Response rates in
recent surveys conducted by non-profits and commercial survey
agencies and the news media. Paper presented at the American
Association for Public Opinion Research Annual Meeting,
Nashville, Tennessee.
Shaeffer, E.M., Langer, G.E., Merkle, D.M., & Krosnick, J.A. (2003). A
comparison of minimal balanced and fully balanced forced choice
items. Paper presented at the American Association for Public
Opinion Research Annual Meeting, Nashville, Tennessee.
Pfent, A., Krosnick, J.A., & Courser, M. (2003). Rationalization and
derivation processes in presidential elections: New evidence
about the determinants of citizens' vote choices. Paper
presented at the American Association for Public Opinion
Research Annual Meeting, Nashville, Tennessee.
Krosnick, J.A., Visser, P.S., & Holbrook, A.L. (2003). How to
conceptualize attitude strength and how to measure it in
surveys: Psychological perspectives. Paper presented at the
American Association for Public Opinion Research Annual
Meeting, Nashville, Tennessee.
Chang, L., & Krosnick, J.A. (2003). Comparing data quality in telephone
and internet surveys: Results of lab and field experiments.
Invited paper presented at the American Statistical Association
Annual Meetings, San Francisco, California.
Pfent, A., & Krosnick, J.A. (2003). Post-decisional dissonance
reduction by a new method: Rationalization of political
candidate choices illuminates the basic dynamics of decision-
making. Paper presented at the Society of Experimental Social
Psychology Annual Meeting, Boston, Massachusetts.
Krosnick, J.A., & Fabrigar, L.R. (2003). ``Don't know'' and ``no
opinion'' responses: What they mean, why they occur, and how to
discourage them. Invited paper presented at the Basel Workshop
on Item Non-response and Data Quality in Large Social Surveys,
University of Basel, Basel, Switzerland.
Krosnick, J.A. (2003). Comments on theories of persuasion. Invited
discussant at the conference entitled ``Integrating Message
Effects and Behavior Change Theories in Cancer Prevention,
Treatment, and Care,'' Annenberg Public Policy Center,
Annenberg School for Communication, University of Pennsylvania,
Philadelphia, Pennsylvania.
Krosnick, J.A. (2003). Survey methodology--scientific basis.
Presentation at the National Aviation Operations Monitoring
Service Working Group Meeting #1, Seattle, Washington.
Krosnick, J.A. (2003). Survey methodology--NAOMS design decisions.
Presentation at the National Aviation Operations Monitoring
Service Working Group Meeting #1, Seattle, Washington.
Krosnick, J.A. (2004). Survey methodology--scientific basis.
Presentation at the National Transportation Safety Board,
Washington, DC.
Krosnick, J.A. (2004). Survey methodology--NAOMS design decisions.
Presentation at the National Transportation Safety Board,
Washington, DC.
Krosnick, J.A. (2004). Public uses of the news media. Presentation as a
part of the symposium ``Politics and the media,'' Social
Sciences Resource Center, Stanford Libraries, Stanford
University, Stanford, CA.
Krosnick, J.A. (2004). Peering into the minds of respondents: The
cognitive and social processes underlying answers to survey
questions. Invited keynote lecture at the International
Symposium in Honour of Paul Lazarsfeld, Katholieke Universiteit
Leuven (Belgium).
Krosnick, J.A., Shook, N., & Thomas, R.K. (2004). Public opinion change
in the aftermath of 9/11. Paper presented at the American
Association for Public Opinion Research Annual Meeting,
Phoenix, Arizona.
Holbrook, A.L., & Krosnick, J.A. (2004). Vote over-reporting: A test of
the social desirability hypothesis. Paper presented at the
American Association for Public Opinion Research Annual
Meeting, Phoenix, Arizona.
Chang, L., & Krosnick, J.A. (2004). Assessing the accuracy of event
rate estimates from national surveys. Paper presented at the
American Association for Public Opinion Research Annual
Meeting, Phoenix, Arizona.
Shaeffer, E.M., Lampron, S.F., Krosnick, J.A., Tompson, T.N., Visser,
P.S., & Hanemann, W.M. (2004). A comparison of open vs. closed
survey questions for valuing environmental goods. Paper
presented at the American Association for Public Opinion
Research Annual Meeting, Phoenix, Arizona.
Holbrook, A.L., Berent, M.K., Krosnick, J.A., Visser, P.S., & Boninger,
D.S. (2004). Attitude importance and the accumulation of
attitude-relevant knowledge in memory. Paper presented at the
American Political Science Association Annual Meeting, Chicago,
Illinois.
Chang, L., & Krosnick, J.A. (2004). Measuring the frequency of regular
behaviors: Comparing the `typical week' to the `past week.'
Paper presented at the American Political Science Association
Annual Meeting, Chicago, Illinois.
Krosnick, J.A. (2004). What do Americans want government to do about
global warming? Evidence from national surveys. Invited
presentation at the ``Workshop on Global Warming: The
Psychology of Long Term Risk,'' Cooperative Institute for
Climate Science, Woodrow Wilson School of Public and
International Affairs, Princeton University, Princeton, New
Jersey.
Krosnick, J.A., & Malhotra, N. (2004). The causes of vote choice in the
2004 American Presidential Election: Insights from the 2004
YouGov surveys. Paper presented at the conference ``The 2004
American Presidential Election: Voter Decision-Making in a
Complex World,'' Stanford University, Stanford, California.
Krosnick, J.A., Visser, P.S., & Holbrook, A.L. (2004). The impact of
social psychological manipulations embedded in surveys on
special populations. Paper presented at the Pacific Chapter of
the American Association for Public Opinion Research Annual
Meeting, San Francisco, California.
Krosnick, J.A. (2005). The future of the American National Election
Studies. Roundtable: The political psychology of surveys. Paper
presented at the Midwest Political Science Association
Annual Meeting, Chicago, Illinois.
Malhotra, N., & Krosnick, J.A. (2005). What motivated Americans' views
of the candidates and vote preferences across the 2004
presidential campaign? Paper presented at the American
Association for Public Opinion Research Annual Meeting, Miami,
Florida.
Garland, P., Krosnick, J.A., & Clark, H.H. (2005). Does question
wording sometimes send unintended signals about expected
answers? Paper presented at the American Association for Public
Opinion Research Annual Meeting, Miami, Florida.
Callegaro, M., De Keulenaer, F., Krosnick, J.A., & Daves, R. (2005).
Interviewer effects in an RDD telephone pre-election poll in
Minneapolis 2001: An analysis of the effects of interviewer
race and gender. Paper presented at the American Association
for Public Opinion Research Annual Meeting, Miami, Florida.
Krosnick, J.A., & Rivers, D. (2005). Web survey methodologies: A
comparison of survey accuracy. Paper presented at the American
Association for Public Opinion Research Annual Meeting, Miami,
Florida.
Holbrook, A.L., & Krosnick, J.A. (2005). Vote over-reporting: Testing
the social desirability hypothesis in telephone and Internet
surveys. Paper presented at the American Association for Public
Opinion Research Annual Meeting, Miami, Florida.
Anand, S., Krosnick, J.A., Mulligan, K., Smith, W., Green, M., & Bizer,
G. (2005). Effects of respondent motivation and task difficulty
on nondifferentiation in ratings: A test of satisficing theory
predictions. Paper presented at the American Association for
Public Opinion Research Annual Meeting, Miami, Florida.
Rivers, D., & Krosnick, J.A. (2005). Comparing major survey firms in
terms of survey satisficing: Telephone and internet data
collection. Paper presented at the American Association for
Public Opinion Research Annual Meeting, Miami, Florida.
Krosnick, J.A. (2005). Thought piece on survey participation. Paper
presented at the conference entitled ``New Approaches to
Understanding Participation in Surveys,'' Belmont Conference
Center, Elkridge, Maryland.
Malhotra, N., & Krosnick, J.A. (2005). Pilot test of new procedures for
identifying new and emerging occupations and their places in
the SOC: A study of biotechnology. Paper presented at the U.S.
Bureau of Labor Statistics, Washington, DC.
Holbrook, A.L., & Krosnick, J.A. (2005). Do survey respondents
intentionally lie and claim that they voted when they did not?
New evidence using the list and randomized response techniques.
Paper presented at the American Political Science Association
Annual Meeting, Washington, DC.
Malhotra, N., & Krosnick, J.A. (2005). The determinants of vote choice
in the 2004 U.S. Presidential Election. Paper presented at the
American Political Science Association Annual Meeting,
Washington, DC.
Krosnick, J.A. (2005). Effects of survey data collection mode on
response quality: Implications for mixing modes in cross-
national studies. Paper presented at the conference ``Mixed
Mode Data Collection in Comparative Social Surveys,'' City
University, London, United Kingdom.
Krosnick, J.A., & Malhotra, N. (2006). The impact of presidential job
performance assessments on vote choices in 2004. Paper
presented at the conference ``The Wartime Election of 2004,''
Ohio State University, Columbus, Ohio.
Rabinowitz, J.L., & Krosnick, J.A. (2006). Investigating the
discriminant validity of symbolic racism. Paper presented at
the annual meeting of the Society for Personality and Social
Psychology, Palm Springs, California.
Krosnick, J.A. (2006). An evaluation framework: Total survey error in
research practice. Paper presented at the Survey Methods
Symposium sponsored by Central Market Research and Insights,
Microsoft, Redmond, Washington.
Krosnick, J.A. (2006). Data quality from phone vs. internet surveys.
Paper presented at the Survey Methods Symposium sponsored by
Central Market Research and Insights, Microsoft, Redmond,
Washington.
Krosnick, J.A. (2006). The distinguishing characteristics of frequent
survey participants. Paper presented at the annual meeting of
the Midwest Political Science Association, Chicago, Illinois.
Krosnick, J.A. (2006). An overview of the mission of the American
National Election Studies. Presentation at the annual meeting
of the Midwest Political Science Association, Chicago,
Illinois.
Krosnick, J.A. (2006). The use of the internet in valuation surveys.
Presentation at the workshop ``Morbidity and Mortality: How Do
We Value the Risk of Illness and Death?'', sponsored by the
U.S. Environmental Protection Agency, the National Center for
Environmental Research, and the National Council on Economic
Education, Washington, DC.
Krosnick, J.A. (2006). What the American public thinks about climate
change: Findings from a new Stanford/ABC/Time Magazine Survey.
Presentation at the ``California Climate Change Policy
Workshop,'' sponsored by the Woods Institute for the
Environment, California State Capitol Building, Sacramento,
California.
Holbrook, A.L., & Krosnick, J.A. (2006). Vote over-reporting: A test of
the social desirability hypothesis. Paper presented at the
American Psychological Association Annual Meeting, New Orleans,
Louisiana.
Bannon, B., Krosnick, J.A., & Brannon, L. (2006). News media priming:
Derivation or rationalization? Paper presented at the American
Political Science Association Annual Meeting, Philadelphia,
Pennsylvania.
Malhotra, N., Krosnick, J.A., & Thomas, R. (2006). The effect of polls
on political behavior. Paper presented at the American
Political Science Association Annual Meeting, Philadelphia,
Pennsylvania.
Krosnick, J.A. (2006). Doing social psychology that's relevant and
valued and valuable. Paper presented at the Society of
Experimental Social Psychology Annual Meeting, Philadelphia,
Pennsylvania.
Krosnick, J.A. (2006). Overview of the American National Election
Studies: Lessons learned about the causes of voter turnout and
candidate choice. Paper presented at the conference ``The
Psychology of Voting and Election Campaigns,'' Social Science
Research Institute, Duke University, Durham, North Carolina.
Krosnick, J.A. (2006). What Americans really think about climate
change. Presentation to the Stanford Women's Club of the East
Bay, Contra Costa County Library, Orinda, California.
Krosnick, J.A. (2006). The impact of survey mode and the merging of
face-to-face recruitment with Internet data collection. Paper
presented at the 2006 Federal Committee on Statistical
Methodology Statistical Policy Seminar, ``Keeping Current: What
We Know--What We Need to Learn.'' Washington, DC.
Krosnick, J.A. (2006). Comparisons of the accuracy of information
obtained by face-to-face, telephone, Internet, and paper and
pencil data collection. Paper presented at the Pacific Chapter
of the American Association for Public Opinion Research Annual
Meeting, San Francisco, California.
Bizer, G.Y., Krosnick, J.A., Holbrook, A.L., Wheeler, S.C., Rucker,
D.D., & Petty, R.E. (2007). The impact of personality on
political beliefs, attitudes, and behavior: Need for cognition
and need to evaluate. Paper presented at the Society for
Personality and Social Psychology Annual Meeting, Memphis,
Tennessee.
Sargent, M.J., Rabinowitz, J., Shull, A., & Krosnick, J.A. (2007).
Support for government efforts to promote racial equality:
Effects of antigroup affect and perceptions of value violation.
Paper presented at the Society for Personality and Social
Psychology Annual Meeting, Memphis, Tennessee.
Krosnick, J.A. (2007). Americans' beliefs about global climate change:
New national survey findings. Paper presented at the American
Association for the Advancement of Science Annual Meeting, San
Francisco, California.
Krosnick, J.A. (2007). Comparisons of survey modes and a new hybrid.
Paper presented at the American Association for the Advancement
of Science Annual Meeting, San Francisco, California.
Garland, P., & Krosnick, J.A. (2007). The impact of race on evaluations
of artistic products: Evidence of 'ownership' bias among
prejudiced whites. Paper presented at the National Conference
of Black Political Scientists, Burlingame, California.
Lupia, A., & Krosnick, J.A. (2007). Remaking the American National
Election Studies. Paper presented at the National Conference of
Black Political Scientists, Burlingame, California.
Krosnick, J.A. (2007). What Americans really think about climate
change: Attitude formation and change in response to a raging
scientific controversy. Presentation sponsored by the
California Research Bureau at the California State House,
Sacramento, California.
Harbridge, L., & Krosnick, J.A. (2007). Presidential approval and gas
prices: The Bush presidency in historical context. Paper
presented at the American Association for Public Opinion
Research annual meeting, Garden Grove, California.
Krosnick, J.A., & Smith, T. (2007). Proposing questionnaire design
experiments for the General Social Survey. Paper presented at
the American Association for Public Opinion Research annual
meeting, Garden Grove, California.
Cote, F., Tahk, A., & Krosnick, J.A. (2007). Comparing the validity of
public predictions of changes in the economy: RDD telephone
data vs. volunteer samples completing paper and pencil
questionnaires. Paper presented at the American Association for
Public Opinion Research annual meeting, Garden Grove,
California.
Schneider, D., Krosnick, J.A., & Ophir, E. (2007). Ballot order effects
in California from 1976 to 2006. Paper presented at the
American Association for Public Opinion Research annual
meeting, Garden Grove, California.
O'Muircheartaigh, C., Krosnick, J.A., & Dennis, J.M. (2007). Face-to-
face recruitment of an Internet survey panel: Lessons from an
NSF-sponsored demonstration project. Paper presented at the
American Association for Public Opinion Research annual
meeting, Garden Grove, California.
Malhotra, N., & Krosnick, J.A. (2007). The effect of survey mode and
sampling on inferences about political attitudes and behavior:
Comparing the 2000 and 2004 ANES to Internet surveys with non-
probability samples. Paper presented at the American
Association for Public Opinion Research annual meeting, Garden
Grove, California.
Krosnick, J.A., Malhotra, N., & Miller, L. (2007). Survey mode in the
21st Century: Probability vs. non-probability samples of a
nation's population. Paper presented at the conference entitled
``Cyberinfrastructure and National Election Studies: The
Wivenhoe House Conference.'' University of Essex, Colchester,
UK.
Pasek, J., & Krosnick, J.A. (2007). Trends over time in America:
Probability/telephone vs. non-probability/internet. Paper
presented at the conference entitled ``Cyberinfrastructure and
National Election Studies: The Wivenhoe House Conference.''
University of Essex, Colchester, UK.
Krosnick, J.A. (2007). Methods and results from the New Scientist
Survey on Climate Change Policy. Presentation at the National
Press Club, Washington, DC.
Krosnick, J.A. (2007). The ANES Recompetition and its Implications for
the GSS recompetition. Presentation at the American
Sociological Association annual meeting, New York, New York.
Harder, J., & Krosnick, J.A. (2007). Causes of voter turnout: A social
psychological perspective. Paper presented at the American
Psychological Association annual meeting, San Francisco,
California.
Schneider, D., Berent, M.K., Thomas, R., & Krosnick, J.A. (2007).
Measuring customer satisfaction and loyalty: Improving the 'net
promoter' score. Paper presented at the World Association for
Public Opinion Research annual meeting, Berlin, Germany.
Cobb, C., & Krosnick, J.A. (2007). The impact of postdoc appointments
on science and engineering career outcomes and job
satisfaction. Paper presented at the conference ``Using Human
Resource Data,'' Science Resources Statistics Workshop,
Washington, DC.
Off-Campus Academic Colloquia
1985--State University of New York at Stony Brook, Department of
Political Science; Princeton University, Department of
Sociology; Princeton University, Department of Politics;
University of California at Berkeley, Department of Sociology;
Yale University, Department of Sociology; Yale University,
Department of Political Science; Ohio State University,
Department of Psychology; University of Southern California,
Annenberg School for Communication.
1986--University of Michigan, Department of Sociology.
1987--Yale University, Department of Psychology; Yale University,
Department of Political Science; University of Michigan,
Department of Sociology.
1988--University of Minnesota, Department of Political Science.
1990--University of Florida, Department of Psychology; University of
Florida, Bureau of Economic and Business Research; Denison
University, Department of Psychology.
1991--University of Michigan, Summer Institute in Survey Research
Techniques.
1992--University of Michigan, Summer Institute in Survey Research
Techniques; University of Michigan, Department of
Communication.
1993--University of Wisconsin, Departments of Psychology, Sociology,
and Political Science; University of Michigan, Summer Institute
in Survey Research Techniques.
1994--Yale University, Department of Psychology; University of
Michigan, Research Center for Group Dynamics; Cornell
University, Peace Studies Center.
1995--University of Michigan, Summer Institute in Survey Research
Techniques; University of Minnesota, Department of Political
Science.
1996--University of Pennsylvania, Annenberg School for Communication;
University of Chicago, Center for Decision Research; Purdue
University, Department of Psychology.
1997--Stanford University, Department of Psychology; University of
California-Berkeley, Institute of Governmental Studies;
University of California-Berkeley, Institute of Personality and
Social Research; University of California-Irvine, Department of
Social Sciences; University of California-Los Angeles,
Institute for Social Science Research; University of
California-Santa Barbara, Department of Psychology; University
of California-Santa Cruz, Board of Psychology; Center for
Advanced Study in the Behavioral Sciences; London School of
Economics and Political Science, Methodology Institute.
1998--Arizona State University, Department of Psychology; London School
of Economics and Political Science, Methodology Institute;
University of Amsterdam, Department of Psychology; Carnegie
Mellon University, Center for the Integrated Study of the Human
Dimensions of Global Change, Department of Engineering and
Public Policy.
1999--University of Chicago, American Politics Workshop, Department of
Political Science; Indiana University, Departments of Political
Science and Psychology; University of Minnesota, Departments of
Political Science and Psychology.
2000--University of California, Los Angeles, Department of Political
Science; University of Southern California, Jesse M. Unruh
Institute of Politics; University of Michigan, Institute for
Social Research, Survey Research Center.
2001--The William and Flora Hewlett Foundation, Menlo Park, California;
London School of Economics and Political Science, Methodology
Institute; Resources for the Future, Washington, DC.
2002--University of Colorado-Boulder, Department of Psychology;
University of Florida-Gainesville, Department of Psychology;
Stanford University, Department of Communication; University of
Chicago, Harris School of Public Policy; Uppsala University
(Sweden), Department of Government; University of North
Carolina, Department of Political Science; University of
Chicago, Political Psychology Workshop, Departments of
Psychology and Political Science; Pitzer College, Department of
Political Science.
2003--University of Illinois at Chicago, College of Urban Planning and
Public Affairs; University of Illinois at Chicago, Survey
Research Laboratory; Stanford University, Social Psychology
Research Seminar (April); Stanford University, Social
Psychology Research Seminar (October); Stanford University,
Department of Psychology Colloquium Series.
2004--Harvard University, Research Workshop in American Politics,
Department of Government; Stanford University, Organizational
Behavior Seminar, Graduate School of Business; Stanford
University, Marketing Seminar, Graduate School of Business;
Stanford University, American Empirical Seminar, Stanford
Institute for the Quantitative Study of Society; University of
California, Davis, Distinguished Lecture Series, Departments of
Psychology and Political Science.
2005--The Rand Organization, Santa Monica, California.
2006--Harvard University, Department of Psychology; Duke University,
Social Science Research Institute; University of North
Carolina, Chapel Hill, Department of Political Science;
University of Florida, Department of Psychology; University of
Florida, Department of Political Science; University of
California, Santa Barbara, Department of Psychology.
2007--The Rand Organization, Santa Monica, California.
Consulting and Court Testimony
Socio-Environmental Studies Laboratory, National Institutes of Health,
Washington, D.C.
National Oceanic and Atmospheric Administration, Washington, D.C.
Environmental Protection Agency, Washington, D.C.
National Aeronautics and Space Administration (Robert Dodd and
Associates/The Battelle Memorial Institute), Mountain View,
California.
Center for Survey Methods Research, U.S. Bureau of the Census,
Washington, D.C.
Office of Survey Methods Research, U.S. Bureau of Labor Statistics,
Washington, D.C.
Leadership Analysis Group, U.S. Central Intelligence Agency, McLean,
Virginia.
United States Government Accountability Office, Washington, DC.
Centers for Disease Control and Prevention, Atlanta, Georgia.
National Cancer Institute, Rockville, Maryland.
Center for Human Resource Research, Ohio State University, Columbus,
Ohio.
Office of Lake County Prosecuting Attorney, Painesville, Ohio.
The Attorney General of the State of Ohio, Columbus, Ohio.
Centre for Comparative Social Surveys, City University, London, United
Kingdom.
Rand Corporation, Santa Monica, California.
Stanford University Alumni Association, Stanford, California.
SRI International, Arlington, Virginia.
The Attorney General of Oklahoma.
Office of Social Research, CBS Inc., New York, New York.
ABC News, New York, New York.
Home Box Office, New York, New York.
Google, Mountain View, California.
Pfizer, Inc., New York, New York.
American Civil Liberties Union of Northern California/Brad Seligman/
Howard, Rice, Nemerovski, Canady, Falk, & Rabkin, San
Francisco/Berkeley, California.
Beau Townsend Ford Dealership, Dayton, Ohio.
United States Trotting Association, Columbus, Ohio.
Berlex Laboratories, Inc., Wayne, New Jersey.
YouGov, London, United Kingdom.
MJ Research, Waltham, Massachusetts.
Empire Blue Cross/Blue Shield, New York, New York.
Momentum Market Intelligence, Portland, Oregon.
Central Market Research and Insights, Microsoft, Redmond, Washington.
The Urban Institute, Washington, D.C.
Industrial Economics, Cambridge, Massachusetts.
Healthcare Research Systems, Columbus, Ohio.
Survey Research Center, University of Maryland, College Park, Maryland.
Center for Human Resource Research, Columbus, Ohio.
Washington State University, Pullman, Washington.
Turner Research, Jacksonville, Florida.
NuStats, Austin, Texas.
Kaiser Family Foundation, Menlo Park, California.
Achievement Associates, Darnestown, Maryland.
The Saguaro Seminar: Civic Engagement in America, Harvard University,
Cambridge, Massachusetts.
Donald McTigue, Esq., Columbus, Ohio.
Thompson Coburn LLP, St. Louis, Missouri.
Shook, Hardy, & Bacon LLP, Kansas City, Missouri.
Arnold and Porter LLP, New York, New York.
Bradley W. Hertz, Esq., Los Angeles, California.
Larson King LLP, Minneapolis, Minnesota.
Paul, Hastings, Janofsky, and Walker, LLP, San Francisco, California.
Carr, Korein, Tillery, LLP, Chicago, Illinois.
Milberg, Weiss, Bershad, Hynes, and Lerach, LLP, New York, New York.
Bourgault & Harding, Las Vegas, Nevada.
Akin Gump Strauss Hauer & Feld, LLP, Washington, DC.
McManemin and Smith, PC, Dallas, Texas.
Zimmerman Reed, PLLP, Minneapolis, Minnesota.
Spolin Silverman, Cohen, and Bertlett LLP, Santa Monica, California.
Righetti Wynne P.C., San Francisco, California.
Blackwell Sanders Peper Martin LLP, Kansas City, Missouri.
Davis Wright Tremaine LLP, Seattle, Washington.
Storch Amini & Munves, P.C., New York, New York.
Twomey Law Office, Epsom, New Hampshire.
Righetti Law Firm, P.C., San Francisco, California.
Dostart Clapp Gordon & Coveney LLP, San Diego, California.
Wynne Law Firm, Greenbrae, California.
Lorens and Associates, San Diego, California.
Arias, Ozzello & Gignac, LLP, Los Angeles, California.
Keller Grover, LLP, San Francisco, California.
Law Offices of Kevin T. Barnes, Los Angeles, California.
Cohelan & Khoury, San Diego, California.
Law Offices of Joseph Antonelli, West Covina, California.
Short Courses on Questionnaire Design
Internal Revenue Service, Washington, DC.
United States General Accounting Office, Washington, DC.
Office of Management and Budget, The White House, Washington, DC.
United States Government Accountability Office, Washington, DC.
Science Resources Statistics Program, National Science Foundation,
Washington, DC.
National Opinion Research Center, Chicago, Illinois.
Survey Research Laboratory, University of Illinois at Chicago, Chicago,
Illinois.
Center for AIDS Prevention Studies, Department of Epidemiology and
Biostatistics, University of California, San Francisco,
California.
Monitor Company, Cambridge, Massachusetts.
American Association for Public Opinion Research Annual Meeting, St.
Louis, Missouri.
American Association for Public Opinion Research Annual Meeting,
Portland, Oregon.
American Association for Public Opinion Research Annual Meeting, Miami,
Florida.
New York Chapter of the American Association for Public Opinion
Research, New York, New York.
Office for National Statistics, London, United Kingdom.
Market Strategies, Southfield, Michigan.
Total Research Corporation, Princeton, New Jersey.
Pfizer, Inc., New York, New York.
Worldwide Market Intelligence Conference, IBM, Rye, New York.
American Society of Trial Consultants Annual Meeting, Williamsburg,
Virginia.
American Society of Trial Consultants Annual Meeting, Westminster,
Colorado.
American Society of Trial Consultants Annual Meeting, Memphis,
Tennessee.
American Marketing Association Advanced Research Techniques Forum,
Vail, Colorado.
Satisfaction Research Division, IBM, White Plains, New York.
American Marketing Association Marketing Effectiveness Online Seminar
Series.
Faculty of Education, University of Johannesburg, Johannesburg, South
Africa.
Odum Institute, University of North Carolina, Chapel Hill, North
Carolina.
Google, Mountain View, California.
Eric M. Mindich Encounters with Authors, Harvard University, Cambridge,
Massachusetts.
RTI International, Research Triangle Park, North Carolina.
BC Stats, Province of British Columbia Ministry of Labour and Citizens'
Services, Victoria, British Columbia, Canada.
Alphadetail, San Mateo, California.
Chairman Gordon. Thank you, Doctor, and Captain McVenes,
you are recognized.
STATEMENT OF CAPTAIN TERRY L. MCVENES, EXECUTIVE AIR SAFETY
CHAIRMAN, AIR LINE PILOTS ASSOCIATION, INTERNATIONAL
Captain McVenes. Mr. Chairman, Mr. Hall, Members of the
Committee, good afternoon, and thank you for the opportunity to
outline the Air Line Pilots Association's views on aviation
safety and the role that we play in protecting the traveling
public.
ALPA is the world's largest pilot union. We represent more
than 60,000 pilots at 42 airlines in the United States and
Canada. ALPA was founded in 1931, and for more than 76 years
now ALPA has had a tremendous impact on improving aviation
safety. Today ALPA continues to be the world's leading aviation
safety advocate, protecting the safety interests of our
passengers, our fellow crew members, and cargo around the
world.
Over the past 10 years the U.S. aviation industry has seen
a 65 percent decrease in the accident rate, and as a result,
the U.S. safety record is the envy of the rest of the world.
Much of our success is due to the collaborative approach that
has taken place among airline managements, labor, and the FAA
in voluntary collection and analysis of de-identified safety-
related data. By analyzing recorded data that is obtained
during routine flight operations and receiving written reports
from the front-line employees in a confidential and non-
punitive environment, we can not only see what is happening out
there but also why it is happening.
Today these stand-alone programs at individual airlines are
reaching their maturity, and that is a reflection of the
dynamic nature of any data collection effort. It has to adapt
to changes in the environment, and in this case, the changes in
the aviation industry.
As safety professionals continue to see value in these
programs and work with them in more detail, it has become clear
that even more can be learned by sharing safety information
among various stakeholders in the industry. The FAA and the
airline industry, including ALPA, continue to work together on
developing a formalized process in which safety information can
be accessed through secure networks under mutually-agreeable
rules of engagement.
ALPA has been working closely with the FAA, NASA, and the
airlines to develop a process that will make the safety
information available to decision-makers to help them in their
efforts to manage risks. This process is also invaluable in the
sharing of accident and incident prevention strategies across
the entire industry.
Again, though, I would point out that as time goes on, the
industry continues to refine our processes for maximizing the
safety benefits that the traveling public receives from
collecting data while at the same time protecting those
employees and the airlines that bring the data to the table in
the first place.
NASA, especially through the Aviation Safety Reporting
System or ASRS, has always been an important player in aviation
safety. Its human factors research in particular has provided
great value to our industry. The NAOMS survey was part of the
early effort to provide more information to help all of us
improve aviation safety. And this first survey was a test of
the process and methodology, and we understand that the data
extracted from this survey were summarized, and those summaries
were shared with government and industry.
But as in any first test the data didn't correlate very
well with data from other sources, possibly due to the mix of
general aviation and airline operations. The aviation community
had plans to further analyze those discrepancies and determine
if the data was reliable, but the funding for NAOMS ran out,
and that is when ALPA stepped in to help keep that project
alive as part of our involvement with the Commercial Aviation
Safety Team or CAST. And while we have been working with CAST
to modify that survey, we did not receive any collected data
from NASA.
So what should we do with the data now? Well, there are
several solutions that are available. We have heard some of
them this afternoon. The one that makes a lot of sense is to
provide NASA with the necessary resources so it can complete
its peer review of the data, then analyze that data, while at
the same time maintain the confidentiality and protective
provisions that apply to voluntarily supplied safety
information.
Other solutions may also exist, but regardless of the
solution, it is important to keep in mind that raw data
distributed without appropriate analysis and scrutiny to ensure
its validity can lead to unintended consequences. Incomplete or
inaccurate conclusions can be reached if the collection method
is flawed or if people looking at the data aren't familiar with
aviation or the context of how that information was provided.
No one knows and understands the data better than the
stakeholders that provided the data in the first place. That is
why it is so important that those stakeholders work closely
with the analysts of the data, and this will ensure accurate
and meaningful conclusions can be reached. Just as importantly,
if raw data is simply dumped onto the general public without
the quality controls I have mentioned, it would undermine the
confidence of the pilots and the airline community that had
voluntarily and confidentially supplied data and other sources.
We have to make sure that that confidentiality remains secure.
Now, as an airline captain, one who represents the safety
interests of 60,000 other airline pilots, I am concerned that
this could very well erode the very programs that have driven
the excellent safety record of airline travel that the public
has come to rely on.
Thank you, and again, for the opportunity to testify today,
and I will be pleased to address any questions you may have.
[The prepared statement of Captain McVenes follows:]
Prepared Statement of Captain Terry L. McVenes
Good afternoon and thank you for the opportunity to outline the Air
Line Pilots Association's views on aviation safety and the role we play
in protecting the traveling public. ALPA is the world's largest pilot
union, representing more than 60,000 pilots who fly for 42 airlines in
the U.S. and Canada. ALPA was founded in 1931, and for more than 76
years, ALPA has had a tremendous impact on improving aviation safety.
Today, ALPA continues to be the world's leading aviation safety
advocate, protecting the safety interests of our passengers, fellow
crew members, and cargo around the world.
Over the past 10 years, the U.S. aviation industry has seen a 65
percent decrease in the accident rate, and as a result, the U.S. safety
record is the envy of the rest of the world. Much of our success is due
to the collaborative approach that has taken place among airline
managements, labor, and the FAA in the voluntary collection and
analysis of de-identified safety related data. By analyzing recorded
data obtained during routine flight operations and receiving written
reports from the front line employees in a confidential and non-
punitive environment, we can not only see what is happening, but also
why it is happening. Today, these stand-alone safety programs at
individual airlines are reaching their maturity. That is a reflection
of the dynamic nature of any data collection effort--it must adapt to
changes in the environment; in this case, the changes in the aviation
industry.
As safety professionals continue to see value in these programs and
work with them in more detail, it has become clear that even more can
be learned by sharing safety information among the various stakeholders
in the industry. The FAA and the airline industry, including ALPA,
continue to work together on developing a formalized process in which
safety information can be accessed through secure networks under
mutually agreeable rules of engagement. ALPA has been working closely
with the FAA, NASA, and the airlines to develop a process that will
make this safety information available to decision-makers to help them
in their efforts to manage risk. This process is also invaluable in the
sharing of accident- and incident-prevention strategies across the
industry. Again, though, I would point out that as time goes on, the
industry continues to refine our processes for maximizing the safety
benefits that the traveling public receives from collecting data while
at the same time protecting those employees and airlines that bring the
data to the table.
NASA, especially through the Aviation Safety Reporting System
(ASRS) program, has always been an important player in aviation safety.
Its human factors research, in particular, has provided great value to
our industry. The National Aviation Operations Monitoring Service
(NAOMS) survey was part of the early effort to provide more information
to help all of us improve aviation safety. This first survey was a test
of the process and methodology. We understand that the data extracted
from this survey were summarized and those summaries were shared with
the government and industry. As in any first test, the data didn't
correlate very well with data from other sources, possibly due to the
mix of general aviation and airline operations. The aviation community
had plans to further analyze those discrepancies and determine if the
data were reliable, but funding for NAOMS ran out. That is when ALPA
stepped in to help keep the project alive as a part of our involvement
with the Commercial Aviation Safety Team (CAST). While we have been
working with CAST to modify the survey, we did not receive any of the
collected data from NASA.
What should happen to the data now? Several solutions are
available. One that makes a lot of sense is to provide NASA with the
necessary resources so that it can complete a peer review of the data
and then analyze the data, while at the same time maintain the
confidentiality and protective provisions that apply to voluntarily
supplied safety information. Other solutions may also exist.
Regardless of the solution, it is important to keep in mind that
raw data, distributed without appropriate analysis and scrutiny to
ensure its validity, can lead to unintended consequences. Incomplete or
inaccurate conclusions can be reached if the collection method is
flawed or if people looking at the data aren't familiar with aviation
or the context of how that information was provided. No one knows and
understands the data better than the stakeholders that provide the data
in the first place. That is why it is so important that those
stakeholders work closely with the analysts of the data. This will
ensure accurate and meaningful conclusions can be reached.
Just as importantly, if raw data are simply distributed to the
general public without the quality controls I've mentioned, it would
undermine the confidence that pilots and the airline community have
that voluntarily and confidentially supplied safety data will remain
secure. As an airline captain, and one who represents the safety
interests of 60,000 other airline pilots, I'm concerned that this could
very well erode the very programs that have driven the excellent safety
record of airline travel that the public has come to rely on.
Thank you, again for the opportunity to testify today. I will be
pleased to address any questions that you may have.
Biography for Terry L. McVenes
Capt. Terry McVenes serves as the Executive Air Safety Chairman for
the Air Line Pilots Association, International, representing ALPA
pilots in airline safety and engineering matters arising within the
industry. His responsibilities include oversight of more than 600
safety representatives from 42 airlines in the United States and
Canada, as well as budgetary and management supervision of more than
200 projects within the ALPA safety structure.
Capt. McVenes chairs the Steering and Oversight Committee for the
ALPA International safety structure and is a former member of the
Operations Committee and MMEL Working Group. He represents ALPA pilots
on the FAA's Voluntary Aviation Safety Information Sharing Aviation
Rule-making Committee and serves as its co-chairman. He has spoken at
many international forums on a wide variety of aviation safety topics.
He has also authored numerous articles on aviation safety, which have
appeared in national and international publications.
Prior to his current appointment, Capt. McVenes served as Executive
Air Safety Vice Chairman, Chairman of the Central Air Safety Committee
for U.S. Airways, and Chairman of the Aircraft Evaluation Committee. He
coordinated the establishment of the Aviation Safety Action Program
(ASAP) at U.S. Airways and served as a member of the FOQA Monitoring
Team. He has participated in numerous accident and incident
investigations and was a member of several line safety audit teams.
Capt. McVenes also served as a member of the Airbus Integration Team
and the Fuel Awareness and Conservation Team.
Capt. McVenes began his airline career in 1978 with Rocky Mountain
Airways in Denver, Colo., flying the DHC-6 (Twin Otter) and DHC-7 (Dash
7) aircraft. In March 1985, he was hired by Pacific Southwest Airlines
(PSA), which later merged into US Airways. He is rated on the DHC-7,
BAe-146, FK-28, DC-9, MD-80, A-320, and B-737. He currently is a
captain on the A320 for U.S. Airways and has more than 17,000 hours of
flying time.
Prior to his airline career, Capt. McVenes was employed as an
engineer for the Boeing Company in Seattle, Wash. He holds a Bachelor
of Science degree in aerospace engineering from the University of
Colorado and the certificate of aviation safety management from the
University of Southern California.
Discussion
NAOMS Survey and Methodology
Chairman Gordon. Thank you, Captain McVenes.
Dr. Krosnick, is it fair to summarize a portion of your
testimony by saying that when the methodology and the program
was set up, the NAOMS Program, that it was set up in a way that
the confidentiality of the material would be protected?
And if that was the case, and I think that, again, NASA
certified that when they said that they set it up by saying, we
have no means for--anyway--they assured us in their report that
that would be the case. So how long should it take them to get
that information to us?
Dr. Krosnick. I would think less than a week to assure that
any incidental open-ended responses in the file don't happen to
mention an airport or an airline. And the Director mentioned
the idea of eliminating fields in the data set. I would think
the normal federal procedure would be to redact words rather
than entire fields of data.
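[As a minimal sketch of the word-level redaction Dr. Krosnick
describes--the term list and example text below are hypothetical
placeholders, not the actual NAOMS redaction list:]

    import re

    # Hypothetical list of identifying terms; the real list of
    # airports and carriers in NAOMS responses is not public.
    IDENTIFYING_TERMS = ["O'Hare", "LAX", "Delta", "United Airlines"]

    _pattern = re.compile(
        r"\b(?:" + "|".join(re.escape(t) for t in IDENTIFYING_TERMS) + r")\b",
        re.IGNORECASE,
    )

    def redact_words(response: str) -> str:
        """Blank out identifying terms inside an open-ended answer,
        keeping the rest of the field intact (word-level redaction
        rather than deleting the whole field)."""
        return _pattern.sub("[REDACTED]", response)

    print(redact_words("Go-around at LAX behind a Delta heavy."))
    # -> Go-around at [REDACTED] behind a [REDACTED] heavy.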
Chairman Gordon. Well, I would hope that NASA would hear
your testimony and that the end of the year is a worst-case
scenario and next week is a best-case scenario.
Also, Dr. Krosnick, the purpose of the NAOMS was to go
beyond the event-driven or so-called action-response syndrome
to aviation safety and develop a statistically valid database
for safety-related events for decision-makers. It was
specifically designed to overcome the shortcomings of the
voluntary anecdotal Aviation Safety Reporting System, which
couldn't be used to tell anyone how often certain events
occurred.
Is that accurate?
Dr. Krosnick. Yes. That is exactly right. The ASRS system
relies on pilots to voluntarily choose to fill out a
form and mail it in when they feel an event has occurred that
merits that. And certainly plenty of forms are filled out and
mailed in every year, but because it is voluntary, there is
every reason to believe that many events that occur do not get
reported through that system.
So the purpose of NAOMS was to assure, with a
representative sample of pilots interviewed every week of every
year, that it would be possible to count up events in
many categories that never get described in reports to ASRS.
Chairman Gordon. And was it successful in doing so?
Dr. Krosnick. Well, we can't quite answer that question,
can we? What we know is that we designed--I should say the team
designed with my help a superb methodology and implemented it
with the approval of OMB, which is a pretty tough critic of
survey methods in the Federal Government, and so we can believe
in the method, but when the data come back, the next step is to
analyze those data fully, write reports, have those reports
peer reviewed, and proceed ahead with assessments of validity,
which we would have loved to do if the funding hadn't been shut
down early.
Chairman Gordon. Well, it seems to me that this was an
extraordinarily high percentage of return. And you mentioned,
what did you say, was it 40,000 commercial pilots?
Dr. Krosnick. Twenty-four thousand commercial pilots
interviewed.
Chairman Gordon. Right. I understand that, but how many are
there in total?
Dr. Krosnick. Oh, in the population?
Chairman Gordon. Yes, sir.
Dr. Krosnick. I will defer to Bob Dodd on that.
Chairman Gordon. Or maybe Captain McVenes. Approximately
what number of commercial pilots are there?
Captain McVenes. There are probably roughly 100,000
commercial pilots.
Dr. Krosnick. That is the number that we worked with.
Chairman Gordon. So, you know, to me a fourth that
responded voluntarily is an incredible number and should be----
Dr. Krosnick. Well, if you don't mind, let us be careful
about that.
Chairman Gordon. Okay.
Dr. Krosnick. It is actually not 24,000 pilots who were
interviewed. It is 24,000 interviews were conducted. So we drew
statistical samples of very small numbers of pilots to be
interviewed each week.
Chairman Gordon. How many would you say, how many different
pilots would have been interviewed?
Dr. Krosnick. About 8,000 a year.
Chairman Gordon. Which is still an exceptionally large
sampling.
Dr. Krosnick. Yeah. Much bigger than most surveys.
Absolutely.
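[The round figures in this exchange can be checked with simple
arithmetic. The sketch below uses only the numbers cited by the
witnesses--24,000 interviews, about 8,000 pilots per year, and
roughly 100,000 commercial pilots--and is illustrative only, not
drawn from the NAOMS records:]

    # Back-of-the-envelope check of the sampling figures cited above.
    total_interviews = 24_000      # interviews conducted over the project
    interviews_per_year = 8_000    # distinct pilots interviewed per year
    population = 100_000           # rough count of U.S. commercial pilots

    fielding_years = total_interviews / interviews_per_year   # ~3 years
    interviews_per_week = interviews_per_year / 52             # ~154 per week
    annual_coverage = interviews_per_year / population         # ~8 percent

    print(fielding_years, round(interviews_per_week), annual_coverage)
    # -> 3.0 154 0.08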
Chairman Gordon. And was it intended to be a continuing
permanent database or just a short-term experiment?
Dr. Krosnick. Well, the slide that I showed earlier that
NASA displayed at all the public meetings that we did early on
indicated that it was planned to be a permanent monitoring
system.
Chairman Gordon. Well, then I hope that we get it up and
running. I think it--again, let me, once again state that the
United States of America has the safest air transportation
system in the world, and I think part of that reason as Mr.
Hall said earlier, was because of the transparency, of
continuing to try to do things better, better, better, better,
and this is just one more effort to raise that extraordinarily
high bar or I won't say raise it any higher but keep it there.
I thank you, and Mr. Hall is recognized.
Survey Methodology and Confidentiality
Mr. Hall of Texas. Thank you, Mr. Chairman. Captain
McVenes, you said regardless of the solution it is important to
keep in mind that raw data distributed without appropriate
analysis and scrutiny to ensure its validity can lead to
unintended consequences. Actually, sir, we have heard from
several researchers that commercial and general aviation pilots
were very receptive and even were very eager to share their
experiences and views with NASA researchers in part because
they were told that they would be anonymous and would be
protected.
So how confident are you that releasing the data with
confidential information removed as described by Administrator
Griffin will not hinder pilots from participating in future
surveys?
Captain McVenes. Well, the confidentiality piece is so very
important.
Mr. Hall of Texas. Very important.
Captain McVenes. Because it is what makes that transparency
happen. It makes people want to report knowing that that
information is going to be used pro-actively in a safety-
related type of activity as opposed to some other activity of
any sort of sensationalism or whatever it may be. So that is
why it is very important to keep that flow of information
coming, and the reason that we have been successful as an
industry to get a lot of voluntary participation in these
programs, whether it is the NAOMS survey or the individual
programs that are going on at our airlines, is because that
information is used pro-actively. It is not used in a punitive
type of environment. It is used for safety purposes. And that
is why that is so important.
Mr. Hall of Texas. And Dr. Dodd, you and Dr. Krosnick were
shaking your heads indicating that you agree with his----
Dr. Dodd. That is correct.
Dr. Krosnick. Yeah. I think it is very important that
respondent confidentiality----
Mr. Hall of Texas. Yeah. It certainly makes sense.
Dr. Krosnick.--never be compromised.
Mr. Hall of Texas. Sure.
Dr. Krosnick. And the good news is for everyone here that
the survey data were collected in a way so that no one could
identify the pilots. In other words, the data are in electronic
files that do not have the identities of the pilots in them.
And so there are 24,000 rows of numbers indicating the answers
that they gave to statistical questions but not in any way
indicating their name, phone number, or identity in any other
way.
So that is the good news.
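[As a sketch of what one row of such a de-identified file could
look like--the field names below are hypothetical illustrations,
not the actual NAOMS survey layout. Note the absence of any name,
phone number, or employer field:]

    # Hypothetical shape of one de-identified NAOMS record.
    record = {
        "interview_id": 17,        # arbitrary row number, not traceable
        "week": "2001-W14",        # sampling week of the interview
        "flight_hours": 62,        # recall-period exposure measure
        "engine_shutdowns": 0,     # counts of safety events...
        "altitude_deviations": 1,  # ...as reported by the respondent
    }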
Mr. Hall of Texas. And I think the Chairman in his inquiry
touched on this: in your testimony you state that NAOMS was
always envisioned to continue operating. Was this planned to be
continued at NASA or at another government agency like the FAA?
How was it? Who did finish it that last year?
Dr. Krosnick. Well, in the--all of the work on NAOMS to
date has been done by NASA, and so my understanding is that
there was a planned attempt at a hand-off of the methodology to
ALPA. The plan as you have heard already from the Chairman was
to switch from telephone interviewing, which we had determined
to be the most reliable way to make these measurements, over to
Internet data collection, where respondents could go to a
website and answer questionnaires there.
A test of that methodology was carried out by NASA, and as
I understand it, it was unfortunately quite a failure, in that
hardly any pilots participated in the Internet
version of the survey. And I am not surprised by that, because
our research methods literature suggests that respondents of
this sort are far more likely to participate if the telephone
call method is used.
So my personal concern at the moment is the only plans I
have heard for ALPA possibly to pick up this project are with
this methodology, which has already been shown to be not feasible.
But more importantly I guess I share perhaps the implication of
what you are suggesting, and that is that I don't know that
this is an operation that can work effectively outside of
government. And I think it is particularly important to
recognize, as I said in my comments, that NASA is really
trusted by pilots, as I am sure Capt. McVenes will acknowledge,
because the ASRS has been so successful in collecting very
detailed information that is made public and that reveals a lot
about the details of bad things that go on. We heard earlier a
transcript of a pilot talking about falling asleep in the
cockpit. That is a pretty scary story, and that is on the
Internet for anyone to read.
And so, you know, the benefit of that information being
revealed to the public seems clearly to outweigh the
possibility that someone could get in trouble, because NASA has
successfully protected people from that. And I
believe NASA has the trust and credibility with pilots to
continue to do that.
Why Didn't the FAA Continue the Project?
Mr. Hall of Texas. Doctor, thank you. I will ask any of the
three of you, do you all know why FAA didn't pick up the
project? Why didn't they pick the project up?
Dr. Dodd. Well, I don't think we originally planned for FAA
to pick up the project, and what Dr. Krosnick was addressing is
key to that issue, and that is NASA has a reputation among the
pilot community of protecting their identity.
The Aviation Safety Reporting System, which you have heard
referenced a number of times today, is a program that has been
in existence for 30 years. During that time not one pilot's
confidentiality has been compromised. There has never been any
reverse engineering where somebody has gone into the report and
been able to identify who the reporter was, and because of
that reputation NASA was chosen to be the primary and best
government agency to do this work.
The FAA's mission is slightly different, and of course, the
FAA is responsible for enforcement and certification of pilots.
And because of that, pilots may be unwilling to voluntarily
report issues that might result in them getting a slap on the
hand, if you will, or what we call a certificate action.
So historically surveys run by the FAA among the pilot
community don't have a very high response rate, which is one of
the metrics that we use to evaluate how well we are doing with
the survey.
As an aside, with this particular survey that NASA did with
NAOMS, we had an 85 percent acceptance rate; that is, 85
percent of the pilots contacted agreed to do the survey. That is
exceptionally high response rate and gives us confidence that
the pilots were willing to meaningfully engage in the process.
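[The acceptance rate Dr. Dodd cites is simply the share of
contacted pilots who agreed to be interviewed. The contact counts
below are hypothetical placeholders used only to show the
computation; the actual contact totals were not given in
testimony:]

    # Acceptance (response) rate = pilots who agreed / pilots contacted.
    contacted = 200          # hypothetical contact pool
    agreed = 170             # hypothetical number who said yes
    acceptance_rate = agreed / contacted
    print(f"{acceptance_rate:.0%}")   # -> 85%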
Mr. Hall of Texas. Thank you. My time really has expired. I
yield back any time I have or don't have.
Chairman Gordon. Not much, Mr. Hall.
The Space and Aeronautics Subcommittee Chairman, Mr. Udall,
is recognized.
Best Organization to Operate NAOMS
Mr. Udall. Thank you, Mr. Chairman. Again, I want to thank
the panel for your compelling and insightful testimony.
Dr. Dodd, if I could focus on part of your testimony to
start with, you stated--I want to get this right--that NAOMS
should be reinstituted and operated by an independent and
unbiased organization. Should NAOMS be operated by NASA, or
some other organization? What would you recommend?
Dr. Dodd. I think there are a number of suitable
organizations, and it would depend on a number of issues. I
think Dr. Krosnick's observation that this is inherently a
government type of activity is absolutely correct. So I would
not hazard to recommend what agencies might be appropriate. I
think NASA at the working staff level did an outstanding job
with this project, and they have the technical expertise to do
that. So I certainly would have no objections from NASA
continuing to do this work.
So certainly that would be one agency that fits my
definition.
Mr. Udall. Do the other panelists care to comment on that
question?
Dr. Krosnick. Yeah. I agree, of course, that NASA is
suitable, as I have suggested already in my comments. From the
extensive learning I have benefited from about airline
operations and the industry, and from my many hours talking
with pilots, it is hard for me to identify another organization
that the practitioners would have the same confidence in. To some degree
there is specialization in these federal agencies, and the FAA
is particularly good at collecting large electronic databases
from machines.
NASA's specialty in this area has been activities like ASRS
where humans are reporting their experiences. So to some degree
NAOMS fits very nicely under the NASA umbrella. And I agree
with Dr. Dodd that NASA has done a wonderful job with this
project and has earned the recognition that they deserve, I
think, by that quality of work, and why not let them continue.
Captain McVenes. And certainly from our perspective, you
know, having NASA continue with the project is something we
certainly wouldn't object to. Our role in the whole thing was
to keep it alive in whatever way we could. And to Mr. Hall's
point as to, you know, why the FAA didn't take it over
directly, they were indirectly involved with it through their
work with the Commercial Aviation Safety Team, as well as the
rest of the industry. We were trying to utilize that group as a
way to keep this thing going and involve all the stakeholders,
including the FAA.
Termination of Program
Mr. Udall. If I might return to you, Captain, for a second,
final question, and editorialize briefly. My colleague,
Congressman Lipinski, asked a question in the earlier round,
with the first panel: what, why did NASA stop? What was
underway here? It is curious, but setting that aside, I want to thank you,
Captain, for your willingness to testify on such short notice
in front of the Committee.
Captain McVenes. My pleasure.
Mr. Udall. And I was struck by one of your statements which
read, NASA has always been an important player in aviation
safety. Its human factors research, in particular, has
provided great value to our industry. At some committee
hearings that I have chaired earlier this year we have heard
numerous concerns raised about the cutbacks in the NASA human
factors R&D programs in recent years, particularly in the
applied areas.
Have you heard these same concerns raised, and do you share
them, and after the Captain is finished, if the other two
panelists would care to comment, I would sure appreciate it.
Captain McVenes. Yeah. We were concerned. I know we wrote
several letters from our president to the various groups here
in Washington to try to change the mind of those that
controlled the purse strings over that, because we saw a great
value in human factors research that was going on, especially
as it applied to some of the automation changes that were
taking place in our aircraft, and so that is why we were very
interested in trying to keep that alive as best we could.
This is an important part of aviation, especially the
future of aviation: as we continue to evolve with new
technologies, we need to understand what the human element's
role is in how we fly our airplanes. And NASA played a very big
role in that in the past, and unfortunately, they are not doing
it as much anymore as they--we feel they should be.
Dr. Krosnick. I agree, and if you look at the slide that is
still up on the screen, you will see that was an ambitious work
plan for a great deal of research to be carried out over a long
period of time, and the budget that was established for that
work in the beginning was appropriately generous to cover the
cost at an A-plus quality level.
But that budget shrank regularly during the years and
contracted for reasons we were not informed about, such that in
the end there was not money available to pay for most of the
work to be done. And it was that sort of choking off of the
project that accounts for the incompletion of the work.
And I think, you know, you are perhaps pointing to a larger
issue at NASA about changing priorities and changing budgets in
a way that Bob is actually even more informed about than I am.
Dr. Dodd. The only additional----
Mr. Udall. Mr. Chairman, if I might use Mr. Hall's
remaining time for Dr. Dodd to comment.
Dr. Dodd. Very quickly. While we were involved with the
NAOMS Project, we saw the NASA human factors program having
funding removed year by year, and we saw it at the local level
and saw the pain that it caused among the staff at NASA Ames.
The other thing I want to point out is that aviation is an
incredibly labor-intensive activity. I won't go through all the
activities that are involved with it, but human factors is key.
It is usually human error that is associated with most of our
major problems, and we need to continue to fund that research
and keep that focus, because it is not going to go away.
Mr. Udall. Thank you. Thank you, Mr. Chairman.
Chairman Gordon. Sir, your time has expired, and now the
Vice Chairman of the Science and Technology Committee, Mr.
Lipinski, is recognized for five minutes.
Mr. Lipinski. Thank you, Chairman Gordon. I want to thank
all of our witnesses for their testimony today, especially
Captain McVenes. As Mr. Udall said, I know you did this on
short notice. We appreciate that.
Dr. Krosnick, I am not sure if you remember 14 years ago, I
think it was, I did the summer program, political psychology,
at Ohio State University.
Dr. Krosnick. That is why you look familiar. There we go.
Mr. Lipinski. And so I have known you for, going back many
years there, and I certainly have a great deal of respect for
your work.
I wanted to sort of keep going down the line of what I
started on earlier with the first panel. I asked them to put
the timeline up there. I ask you, Dr. Krosnick, because I think
I--actually I heard this earlier. I was in my office listening
to the testimony, and when I heard this, I decided I had to
come and run back here to ask some questions.
Where did the process stop in this timeline?
Dr. Krosnick. I think Dr. Dodd is the best person to
describe it.
Mr. Lipinski. Okay.
Dr. Dodd. We basically--2003 is when we really had the
plug pulled on us. We--one of the things I should clarify is
that NASA had a five-year program from a budgeting point of
view for this project and many others, and so when you hear
NASA saying that there was an end point for this particular
project, it was because of a five-year budgeting exercise and
that the project was not continued outside of that budget for
the next cycle.
It stopped in 2003, essentially, as far as continuing
development. In 2002, we were getting ready for air traffic
controllers. We did three focus groups with 15 air traffic
controllers each, and we had about a year and a half
development cycle planned for the development of the air
traffic controller questionnaire. We briefed NASA. They were
very receptive to the idea, and at that point is when we
stopped ATC development, and it was because the funding
clearly was going to be cut back at that point. And so
we dropped that out of the plan at that point. We didn't have
the money for the development. We focused on continuing the
pilot survey.
Mr. Lipinski. That is even more information than I was
aware of, but it fits perfectly into my question, and none of
you, I believe, can answer this but I have questions that I
asked Dr. Griffin, and I think he was mistaken about there not
being--the program not being interrupted at a certain point,
that it had, you know, gone its full course.
Certainly there have been issues involving air traffic
controllers and the FAA. That is a major issue, something that
we have been dealing with, trying to get dealt with in the
Transportation Committee. It is a big labor issue, and it seems
to me that what Dr. Dodd just said seems to fit with possibly
when it was time to actually go and do the survey of the air
traffic controllers, that is where this stopped.
And so I really would like to, Mr. Chairman, I think that
is an important point to look at because what it comes down to
is safety is the most important thing, and the whole purpose of
this was for safety. It is $11 million and six years that were
spent on it, but what can we do to improve safety?
I am not going to say the sky is falling literally, but as
you said, Mr. Chairman, it is just trying to make a safe system
even safer, and I just want to leave out there--you know, I
don't know if Dr. Dodd or anyone else has any other comments on
that--the possibility that there got to be a place where the
FAA perhaps did not want to go ahead with the survey of the air
traffic controllers at that time, and that is where this
stopped.
Now, if anyone wants to add anything to that, or we can
just leave it there. Do any of the witnesses want to add
anything?
Dr. Krosnick. I really can't comment because I don't know
what the FAA's decision-making was on that, or senior NASA
management's as far as funding decisions. I am sure that there
were probably other issues as part of that process, and other
than that I can't comment.
Mr. Lipinski. Thank you. I will leave it at that, Mr.
Chairman.
Chairman Gordon. Thank you, Mr. Lipinski, and my thanks to
our witnesses today for their time and expertise. And Dr.
Krosnick, I hope you will make yourself and your expertise
available to NASA if they need you to help do this final, you
know, cleaning of this list, if there is any need.
And if there is no objection, the record will remain open
for additional statements from the Members and for answers to
any follow-up questions the Committee may ask of the witnesses.
Without objection, so ordered.
[Whereupon, at 4:05 p.m., the Committee was adjourned.]
Appendix 1:
----------
Answers to Post-Hearing Questions
Answers to Post-Hearing Questions
Responses by Michael D. Griffin, Administrator, National Aeronautics
and Space Administration (NASA)
Questions submitted by Chairman Bart Gordon
Q1. Please provide a full discussion of the transfer of the NAOMS
methodology to the Air Line Pilots Association (ALPA), including
revisions in the questionnaire, ALPA's contribution to the cost of the
transfer and revisions, and whether any peer review was conducted on
either the original survey methodology or the revised methodology as
transferred to ALPA. If the NAOMS methodology was not peer-reviewed,
please describe what process was used to validate the methodology prior
to transfer to ALPA.
A1. Transfer of the NAOMS Methodology to ALPA
The NAOMS team adapted the computer-assisted, telephone interview
process (original survey methodology) to a web-based, data collection
mode (revised survey methodology) using commercial, off-the-shelf
(COTS) software (ILLUME by DatStat Inc.). NASA conducted testing during
the months of February and March 2006 to compare the web-based survey
process with the original computer-assisted telephone survey process.
The NAOMS team purchased a one-year license starting in December
2006 from DatStat to apply ILLUME technology to a web-based survey that
replicated the functionality of the original computer-assisted
telephone survey. NASA transferred this license to ALPA in January
2007.
The NAOMS team provided training sessions for the ALPA team on the
NAOMS web-based survey methodology.
NASA has asked the National Academy of Sciences to assess the NAOMS
survey methodology, and to the extent possible, to assess the potential
utility of the survey responses.
Revisions in the Questionnaire
There were revisions to the content of the questionnaire associated
with adaptation for web-based surveys. Modifications were made to the
computer-assisted telephone interface model to adapt the questions so
they would capture the same data via the web-based interface model. In
addition, some questions were modified to simplify the telephone survey
questions for the web-based survey application.
ALPA's Contribution to the Cost of the Transfer
ALPA is estimated to have contributed approximately one work-year
equivalent to support the transfer of the web-based survey methodology.
Peer-Review of the NAOMS Methodology
From 1998 to 2004, the NAOMS project team gave approximately 17
PowerPoint briefings to various audiences, mainly government and
industry personnel. (These briefings have been provided to the House
Committee on Science and Technology at their request.) However, none of
the research conducted in the NAOMS project has been peer-reviewed to
date. PowerPoint briefings to stakeholders, while having some value, do
not constitute peer review. Accordingly, no product of the NAOMS
project, including the survey methodology, the survey data, and any
analysis of those data, should be viewed or considered at this stage as
having been validated.
It should be noted that NASA's assertion that none of the results from
the NAOMS project can be considered validated does not mean that NASA
is drawing conclusions about the validity of the survey data; we are
simply stating that no such conclusions can be credibly drawn. That
said, comparisons of some of the results reported in the briefings
prepared by the NAOMS project team to event rates that are known with
reasonable certainty, such as engine failures, have led NASA to
conclude that there is reason to question the results presented by the
NAOMS project team in their various briefings.
In order to rectify this situation as best as possible, NASA has
asked the National Academy of Sciences to assess the NAOMS survey
methodology, and to the extent possible, to assess the potential
utility of the survey responses.
Q2. Please provide a breakout by fiscal year by recipient of the $11.3
million you stated in your testimony was spent on the NAOMS project.
A2. Battelle was the prime contractor for the NAOMS project; $11.23
million is the total full cost of the project. The costs break out by
fiscal year as follows:
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
Question submitted by Representative Daniel Lipinski
Q1. I have major concerns that it is going to be difficult to make
improvements in the aviation industry if the agencies cannot work
collaboratively and trust each other's work. Dr. Griffin, could you
comment on your working relationship with the FAA?
A1. A solid collaborative working relationship between NASA and the
Federal Aviation Administration (FAA) is critical to the successful
outcome of the Nation's vision for the Next Generation Air
Transportation System (NextGen). The working relationship between NASA
and the FAA has traditionally been solid and continues to strengthen at
all levels. As part of the Joint Planning and Development Office
(JPDO), a multi-agency organization focused on developing the NextGen,
the FAA and NASA have formed a strong partnership with a common goal of
a greatly improved future air transportation system for the Nation.
Both the NASA and FAA Administrators are members of a Senior Policy
Committee (SPC) that oversees the work of the JPDO. Among its key
activities, the Committee works to provide policy guidance, resolve
major policy issues, and identify and align resource needs. The
partnership to bring about NextGen encompasses not only safety research
but also air traffic management and environmental research.
Participation of both Administrators on the SPC demonstrates at the
highest level within each agency a relationship that is committed to a
future aviation system that is responsive to the mobility needs of the
public.
To further ensure that a strong working relationship between NASA
and FAA is promoted at all levels, Dr. Lisa Porter, the NASA Associate
Administrator for Aeronautics, meets regularly with senior management
of the FAA to have open and frank discussions on matters on which
the two agencies are jointly working. For example, during FY 2007, Dr. Porter
and Mr. Nicholas Sabatini, the FAA Associate Administrator for Aviation
Safety, held joint meetings to monitor the progress of technologies
that were being developed by NASA and implemented by the FAA into what
has become the Aviation Safety Information and Analysis Sharing (ASIAS)
system. At the beginning of FY 2008, the ASIAS system successfully
transitioned from NASA to the FAA and the aviation industry as a means
to share a wide variety of safety data pertaining to the national air
transportation system. Going forward, NASA continues to develop
advanced methods and algorithms for analyzing multiple and varied
sources of safety data in order to enable the ability to discover
safety precursors before accidents occur. In addition, NASA will
continue to work collaboratively with the FAA and industry to
transition these new methods into the evolving NextGen.
With regard to air traffic management research, NASA Aeronautics,
the FAA Air Traffic Organization (ATO), and the JPDO are working
collaboratively to establish a process to transfer technologies from
fundamental research and development (R&D) into implementation for the
NextGen. This process, which ensures research is sufficient and
appropriate to enable NextGen, has top-level commitment from Dr. Porter
and Ms. Victoria Cox, Vice President for Operations Planning Services,
ATO. A coordinating committee that includes both FAA and NASA
representatives oversees four research transition teams that are
organized around the NextGen Concept of Operations framework. This
framework connects the FAA's Operational Evolution Partnership elements
with the NASA research. Teams are collaboratively working to plan near-
term R&D transition in areas such as surface management and long-term
transition in areas such as dynamic airspace allocation.
As NextGen evolves to handle the projected growth in the national
air transportation system, environmental concerns, including the
expected increase in noise and air pollution from a variety of
emissions, pose a significant hurdle that must be overcome. The future
aircraft fleet will need to include technology advancements that enable
the growth in the air transportation system without additional impact
on the environment. NASA and the FAA have a long history of
collaborative work in this area. A variety of predictive tools
developed at NASA have been incorporated into the FAA Environmental
Management System and used to inform regulatory decisions. In addition,
over the last year, the FAA and NASA have worked together on the
development of the Goals and Objectives for the Energy & Environment
portion of the National Plan for Aeronautics R&D. Both agencies
continue to work closely to ensure that fundamental technology
developed at NASA can be transitioned to the future fleet.
Finally, NASA and the FAA actively participate in each other's
advisory/review committees with representatives of each agency
engaging, in an advisory role, in determining the strategic directions
of the research of the other. For example, Dr. Porter serves on the
FAA's Research and Development Advisory Committee (REDAC) which reviews
and then advises the FAA senior management on the relevance and
progress of their research and development activities. Further
strengthening this collaboration across multiple technical areas and
management levels, representatives from each of the three NASA Research
Programs in Aeronautics serve as members on subcommittees to the FAA
REDAC. In a similar fashion, and at the request of NASA, the FAA has
provided representatives to participate on NASA review panels to assess
the technical quality, performance, and relevance of NASA research
programs. For two of the NASA programs, the designated leads of the
review panels were FAA representatives. In addition, NASA researchers
serve on various technical committees, such as the Radio Technical
Commission for Aeronautics (RTCA) special committees that provide
advice to the FAA on technical matters. NASA also makes use of FAA
subject matter experts to help evaluate proposals received from
universities and industry via the NASA Research Announcement process.
These examples of interagency participation on advisory committees, and
other joint activities across all levels, demonstrate a working
relationship between NASA and the FAA based on trust and respect for
each agency's talent and integrity, particularly at the senior leadership level.
Continued commitment to such a partnership is critical to the future
success of NextGen.
Questions submitted by Representative Russ Carnahan
Q1. Dr. Griffin, news reports have indicated that NASA Associate
Administrator, Thomas S. Luedtke, said that revealing the findings
could damage the public's confidence in airlines and affect airline
profits. Do you believe that it is more important to keep the American
people in the dark about the basic reality of where we are in terms of
airline safety than to paint an honest portrait for our constituents?
A1. The Associated Press (AP) requested the survey results from this
project through the Freedom of Information Act (FOIA). NASA made a
determination not to release the survey results, using an exemption
available under the FOIA. I stated earlier, both to the public and in
Congressional testimony, that I do not agree with the way the FOIA
exemption was explained and regret any impression that NASA was in any
way putting commercial interests ahead of public safety. That was not,
and never will be, the case.
Q2. In Mr. Luedtke's final denial letter to the AP regarding its
request for the survey results, he wrote that ``release of the
requested data, which are sensitive and safety-related, could
materially affect the public confidence in, and the commercial welfare
of, the air carriers and general aviation companies whose pilots
participated in the survey . . .'' This seems to indicate that the
results portrayed a fairly dire assessment of air travel safety--was
Mr. Luedtke assuming the worst-case scenario, or is this a legitimate
doomsday scenario?
A2. Mr. Luedtke's determination not to release the survey results was
neither. The determination had nothing to do with the survey results
themselves, as no final report or conclusions had been produced.
Rather, Mr. Luedtke's letter
articulated NASA's determination that the raw survey responses
contained information protected by the Freedom of Information Act
(FOIA) Exemption 4, which incorporates Trade Secrets Act protection for
confidential commercial information. This exemption requires the
protection of confidential commercial information that is voluntarily
provided to the Agency and would not customarily be released to the
public. Confidential commercial information is defined very broadly and
includes company information: 1) relating to its business, including
processes, operations and statistical data; 2) which is obtained from
someone outside the government; and, 3) which is not generally released
to the public.
In response to the FOIA request from the AP, NASA cited concerns
for ``public confidence'' and for the ``commercial welfare'' of air
carriers as the supporting basis for the exemption cited in denying the
request for the data. That sentence in the denial letter, though drawn
from case law, was a mistake, as NASA Administrator Griffin has made
clear. The intent was better explained in the following sentences of
the NASA response, which noted that the airlines and aviation industry
may have a commercial interest in this data. The denial does not
reflect any conclusions drawn from the data. NASA regrets any
impression that the Agency was in any way
trying to put commercial interests ahead of public safety. That was
not, and never will be, the case.
Answers to Post-Hearing Questions
Responses by James E. Hall, Managing Partner, Hall and Associates, LLC;
Former Chairman, National Transportation Safety Board (NTSB)
Question submitted by Representative Daniel Lipinski
Q1. The FAA has responded to the stories on NAOMS by pointing out how
safe the skies have been in recent years. At the same time, congestion
at airports has been growing, we have had several near-miss collisions
at airports just this year, and the projections are that aviation
traffic will keep growing. Are the safety systems in place today
adequate to meet the emerging challenges in aviation?
A1. I am pleased that the question asked for comments on the adequacy
of our nation's current safety structure to address rising challenges,
particularly when many falsely believe that our past safety successes
are sufficient to guarantee continued success in the future. In short,
Rep. Lipinski, the answer to your question is no.
The FAA is correct in pointing out that the skies have been safer
in recent years. In the ten-year period following the 1996 Gore
Commission, the airline industry successfully reduced fatal accidents
by 65 percent. It is certainly safer to fly today than it was ten years
ago. However, there are two major reasons why this success, while
laudable, should not lead us to conclude that all is well in the
aviation industry.
1. Safety Requires Constant Vigilance
The ten-year reduction in fatal accidents was the product of
substantial changes--most of which were recommended by the Gore
Commission--in the way the FAA, NTSB, DOT, airlines, and others handled
safety and regulation. These changes occurred largely in response to
two high-profile accidents and the general trends of rapid expansion in
the industry, technological and aircraft design development, and large
projected increases in passenger volume.
In other words, while prior to 1996 we had an aviation safety
framework--and though overall aviation safety had increased in the
preceding 40 years--that framework was deemed no longer adequate to
meet future challenges. The current safety of the skies that the FAA
cites is therefore due to the historical commitment in our nation's
safety culture to resist complacency and satisfaction with existing
safety frameworks. This commitment to constant vigilance and
improvement should continue to be reaffirmed.
2. Nine Fatal Accidents Per Year: The Next Generation of Risks
Today there are dangerous trends in the aviation industry that
could pose serious safety risks if we do in fact regress to
complacency. As you note in your question, near-miss incidents are
still a major concern and congestion and volume are soaring. Near-
misses are illustrative of the new challenges facing aviation safety.
Because we have reduced the number of major mishaps and fatalities we
must analyze such close-calls and nonfatal incidents in order to see if
hidden dangers lurk beneath the surface of seemingly positive
statistics. This is why the denial of the NAOMS data was so
particularly distressing.
    Airport congestion and volume, for their part, are but some
examples of what I call the ``Next Generation of Risks,'' which also
includes dramatic shortages of air traffic controllers and pilots and
lagging technology upgrades. Perhaps the most significant statistic I
can find
in response to your question is that cited in the February 2007 GAO
study (Federal Aviation Administration: Challenges Facing the Agency in
Fiscal Year 2008 and Beyond, GAO-07-490T), which stated that:
``although the system remains extraordinarily safe, if the
current accident rate continues while air traffic potentially
triples in the next 20 years, this country would see nine fatal
commercial accidents each year, on average.''
Nine fatal accidents and hundreds or thousands of deaths per year
would not only represent an annual tragedy and dramatic reversal of
historical safety trends, but would also severely affect the confidence
of the flying public--ironically, the very reason NASA initially
provided for withholding the NAOMS data.
Clearly, we do not currently have the safety system necessary for
the next generation of risks. The FAA estimates it will lose about 70
percent of the air traffic controller workforce over the next 10 years.
In 2009, airlines will have to fill 20,000 pilot openings due to
retirements and other factors. Passenger volume is projected to reach
one billion by 2015 and close to 2.3 billion by 2027. Numerous other
potential dangers to aviation safety also exist, but perhaps the
greatest threat is the idea that because we are safe now, there is no
cause to worry or even think about future hazards. Nothing, in fact,
could be more dangerous to the aviation traveling public.
I applaud the Committee's past and recent attention to aviation
safety and I urge the Members to continue to exercise their vital
oversight role as a driving force behind safety improvement and reform.
Chairman Gordon, thank you again for the opportunity to be of service
to yourself, the Committee, and the Congress. Please do not hesitate to
contact me if I may be of any further assistance.
Answers to Post-Hearing Questions
Responses by Robert S. Dodd, Safety Consultant and President, Dodd &
Associates, LLC
Questions submitted by Chairman Mark Udall
Q1. In his testimony at the hearing, NASA Administrator Griffin
compared the NAOMS project with the existing Aviation Safety Reporting
System (ASRS), stating that ``One of the primary differences between
ASRS and this survey was that ASRS is managed by aviation specialists.
When a report is made, the aviation specialists can contact the
submitter of the report and ask follow-up questions. They are
knowledgeable about aviation safety. This (NAOMS) survey was conducted
by telephone polling surveyors who have no knowledge or had no
knowledge at all as to aviation or aviation safety. They had no domain
expertise and it is precisely that which has led to some of the
problems that we are discussing today.''
Q1a. Do you agree or disagree with Administrator Griffin's
characterization? Why?
A1a. I do not agree with the Administrator's characterization. There
are two implicit assumptions in his statement that are in error. One
relates to interviewer expertise; the other is that NAOMS and ASRS are
comparable systems.
First, the Administrator implied that having knowledgeable aviation
interviewers is preferred because it allows the interviewer to conduct
follow-up questions with the interview subject to capture additional
information, or perhaps answer questions if the interview subject was
confused about a particular question. While on the surface this may
appear to be the preferred approach, it is in reality the wrong
approach.
It is vitally important in survey research that the questions be
applied in the same way for each interview subject. This is a basic and
fundamental characteristic of any quality survey. The way questions are
asked matters and can influence how an interview subject responds.
Consequently, questionnaires must be carefully designed AND
interviewers trained to conduct the interview in the same way each and
every time.
NAOMS interviewers were not allowed to deviate in any way from the
prepared questionnaire. The questions were designed to be clear. For
the vast majority of questions, pilots were not confused and did not
ask for clarification. For the few questions where pilots did ask for
clarification, the NAOMS team prepared scripted responses in advance
for the interviewers to use; those scripts were the only acceptable
responses if a pilot asked a question.
NAOMS interviewers were professional interviewers who had extensive
experience in conducting interviews. They were trained to conduct the
NAOMS interview over three separate sessions lasting a total of 12
hours. Each interviewer was then certified to conduct the interviews
through simulated interviews with NAOMS aviation experts posing as
pilots.
NAOMS interviewers were not aviation experts and this was
preferred. The NAOMS team did not want the interviewers to offer
impromptu responses to pilots if they asked questions about the survey
or a particular question. The goal was for each question to be asked
the same way each time it was applied. Aviation knowledge was not
required for this to occur. What was required was professional and
disciplined interviewers experienced in conducting telephone
interviews. NAOMS interviews were also randomly monitored so managers
could ensure this basic tenet was being followed.
    The second assumption that appeared in the Administrator's
statement is that NAOMS and ASRS are in some way comparable. The
programs are similar in that they both collect data on aviation
incidents from pilots but they are very different in their design and
goals.
The ASRS is a voluntary reporting system where the PILOT INITIATES
the contact with NASA to report an incident they experienced. NAOMS is
a voluntary reporting system where the PILOT IS ASKED to voluntarily
provide information on incidents he or she may have experienced. ASRS
is designed to collect information on a SPECIFIC INCIDENT while NAOMS
is designed to collect information on the frequency of occurrence of a
BROAD RANGE OF INCIDENTS.
ASRS data cannot be used to estimate the frequency of safety events
in the National Airspace System (NAS) but ASRS reports are very useful
in understanding why a particular event occurred. NAOMS, on the other
hand, was designed to provide accurate estimates of how often events
occurred and to measure changes in event rates over time but NAOMS was
not designed to collect data on why events occurred. NAOMS was designed
to provide a method for the ongoing systematic collection, analysis and
interpretation of the frequency of incidents in the NAS for use in
assisting the planning, implementation and evaluation of the Nation's
air safety system.
Q1b. If the NAOMS project were to be restarted, would there be any
changes that you think should be made either to the methodology or
implementation of the project?
A1b. The biggest issue that would need to be addressed would be the
establishment of an advisory board and working group. The advisory
board would address strategic issues such as funding, operating
agreements among organizations and oversight of the program. The
working group would provide guidance on survey methodology and
application, data analysis, publications and recommendations to the
aviation industry. Both of these organizations would need to support
the program and believe in its value. NASA's NAOMS staff tried to
engage the aviation community and encouraged the development of a
working group. This was not successful. That lack of success may have
been related to fear of the results, lack of confidence in survey
methods, and lack of certainty about how the data might be used. In any
event, this was, in my opinion, the key factor that doomed NAOMS to
failure.
The second largest issue that would have to be addressed would be
establishment of a reliable funding stream for NAOMS not subject to the
variations in agency budget cycles. (It is assumed that NAOMS would be
operated by a government entity.) Ideally, NAOMS should receive funding
directly from Congress until it was fully accepted and integrated into
the Nation's aviation safety oversight system. Reduced funding once the
system was operating would compromise data quality and usefulness. This
would likely happen if NAOMS were part of an agency's budget.
The last issue that would need to be addressed would be revisions
to the NAOMS survey process. The questionnaire for the Air Carrier
pilots is mature and well vetted but it should be reviewed for
acceptance by the working group and modified accordingly (without
compromising technical accuracy). The General Aviation pilot survey
would require more work to ensure it was measuring safety incidents as
intended. Finally, development work would have to be initiated to
include other aviation safety stakeholders like maintenance
technicians, air traffic controllers and others.
Questions submitted by Representative Daniel Lipinski
Q1. Peggy Gilligan, the Deputy Associate Administrator for Aviation
Safety at the FAA, recently cast doubt on the survey by questioning
NASA's methodology. For example, she is quoted as stating that the
answers in the study were not sufficiently detailed. Further, Dr.
Griffin's testimony highlights inconsistencies in the study as compared
to surveys conducted in other ways and also calls into question the
validity of the methodology. Dr. Dodd, your testimony explains that the
process was meticulously designed and very thorough. Could you
elaborate on your work on the survey and explain why others may call
the study into question?
A1. It is difficult for me to respond to Question One since the
criticisms of the NAOMS questions and study methodology are offered in
the abstract, without specific citations or examples. It should be noted
that both FAA and NASA management had numerous opportunities to review
and comment on the NAOMS questions, the program design and the
associated methodology. These opportunities were afforded the FAA and
NASA through two industry workshops, numerous industry briefings, and
program reviews. Critical comments and questions were offered by NASA
and FAA and the NAOMS team was responsive.
    I think the FAA and NASA criticisms, however, highlight two
failings of the NAOMS team and the aviation industry at large. Ideally,
detailed
criticisms should have been vetted and discussed within the context of
a vibrant and engaged industry working group so that such concerns
could have been addressed while the program was operating. This type of
procedure would have resulted in a stronger product. It didn't happen
because no ongoing, functional working group was ever successfully
established. The failure was NASA's inability to establish an engaged
working group that was supportive of the project; NASA staff tried, but
the aviation industry was not supportive.
The second failure was not having the NAOMS questionnaire and
underlying methodology reviewed and critiqued by survey methodology
experts not affiliated with the NAOMS team or the aviation community.
This demonstrated a certain naivete on the part of the NAOMS team. Such
review should have been accomplished by experts who could comment
knowledgeably on survey program development and design, questionnaire
development, data security and respondent anonymity and other issues.
This wasn't done and consequently, the NAOMS team continues to respond
to criticisms of the survey design and methodology by organizations not
well versed in such issues.
While NASA and FAA are certainly entitled to their opinion, their
organizational expertise does not lie in survey research. Criticisms of
the NAOMS methodology should be considered within the context of the
background and knowledge of those offering the criticism.
Q2. How would NAOMS data be used by an aviation safety expert to
improve safety in the skies?
A2. NAOMS was modeled after a public health epidemiologic surveillance
system. The Centers for Disease Control and Prevention (CDC) states
that a surveillance system is ``the ongoing systematic collection,
analysis
and interpretation of . . . data for use in the planning,
implementation and evaluation of public health practice.'' \1\ In the
case of NAOMS, ``public health practice'' could be replaced with the
term ``aviation system safety.''
---------------------------------------------------------------------------
\1\ Thacker SB, Berkelman RL, Public Health Surveillance in the
United States, Epidemiological Review, 1988; 10:164-90.
---------------------------------------------------------------------------
NAOMS was designed to accomplish two different tasks. First, it was
designed to reliably track aviation safety incident trends. This was
accomplished by asking a routine set of questions that remained
constant over time. If a ``statistically valid'' increase in a
particular response (trend) was noted, then the appropriate safety
experts in government and industry would determine if the trend was of
concern. If so, then an appropriate supplemental investigation would be
initiated to determine why the trend was changing.
Tracking trends would allow safety experts to recognize changes in
the aviation system before losses occurred. Additionally, NAOMS event
trending would give aviation safety experts the ability to measure the
positive effects of safety enhancements. If a particular safety
enhancement was working, reported events (trends) associated with that
issue should decrease.
In addition to the ability to accurately measure and track safety
incident trends, NAOMS was also designed to collect information on
targeted or special topics. These would have been small focused data
collection efforts on particular topics of interest. The NAOMS
questionnaire was designed to be flexible so questions could be added
to evaluate a particular topic such as the introduction of a new
technology or new procedure. Data would be collected for a specific
period of time (determined by the need) and evaluated. Once the data
collection and associated evaluation was completed, the data collection
for that topic would stop and questions for a new topic added if
needed. The ability to trend data over time, and to evaluate specific
issues relatively quickly,\2\ is a very powerful combination for safety
oversight.
---------------------------------------------------------------------------
\2\ The NAOMS team estimated that a special topic section could be
added to the questionnaire in about three months.
---------------------------------------------------------------------------
The NAOMS team envisioned NAOMS trend analysis to be an automated
and ongoing process. Evaluation of the trends would be done regularly
with exceedance limits set so notification of meaningful changes would
be automatic. Manual review of the results would occur monthly. The
industry working group and other interested parties would receive
regular updates and immediate notice if worrisome trends emerged.
Regular meetings of the working group were envisioned for review of the
data. Publication of annual reports summarizing the data collected over
the previous year was also planned.
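    To make the envisioned exceedance-limit monitoring concrete, the
following is a minimal sketch in Python of one way such automated
flagging could work. The three-standard-deviation rule, the
twelve-month baseline window, and all names here are illustrative
assumptions, not the NAOMS implementation.

    # Illustrative sketch only: flag months whose event rate exceeds
    # a control limit derived from the preceding baseline months.
    from statistics import mean, stdev

    def flag_exceedances(monthly_rates, k=3.0, baseline=12):
        alerts = []
        for i in range(baseline, len(monthly_rates)):
            window = monthly_rates[i - baseline:i]
            limit = mean(window) + k * stdev(window)
            if monthly_rates[i] > limit:      # meaningful change
                alerts.append((i, monthly_rates[i], limit))
        return alerts

    # A stable rate with a jump in the final month triggers an alert.
    rates = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1, 1.9,
             2.0, 2.1, 2.0, 1.9, 3.5]
    print(flag_exceedances(rates))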
NAOMS was designed to be an early warning system and a method by
which to collect targeted safety information quickly, reliably and
cheaply. NAOMS was never designed to replace current safety initiatives
but to supplement current information systems and provide capabilities
currently not available.
Answers to Post-Hearing Questions
Responses by Jon A. Krosnick, Frederic O. Glover Professor in
Humanities and Social Sciences, Stanford University
Questions submitted by Chairman Mark Udall
Q1. In his testimony at the hearing, NASA Administrator Griffin noted
that a ``2004 Review of NASA's Aerospace Technology Enterprise by the
National Academies concluded that there was not a compelling argument
for continued independent data collection in the NAOMS project.'' He
went on to quote the Review as stating that the ``NAOMS Project seems
to be developing a methodology to establish trends in aviation safety
performance that are already available through other sources within
industry and government.'' Do you agree with the National Academies
assessment? If not, why not?
A1. The National Academies Panel spent about one hour with the NAOMS
team during a longer visit it paid to Mountain View, California, to
collect information on an array of projects in addition to NAOMS.
Panel did not receive a detailed briefing on the NAOMS methodology, its
development, or the data that had been collected. Thus, the panel was
limited in its ability to fully understand the project.
In its written report, the Panel stated the following:
        ``NAOMS consists of a longitudinal survey of aircraft
operators, gather[ing] information about safety-related experiences
of pilots, cabin crews, and maintenance operators for both
general aviation and air carriers. . . . It provides
statistically reliable results about the frequency of
occurrence of safety-related incidents.'' (An Assessment of
NASA's Aeronautics Technology Programs, 2004, p. 100).
``The NAOMS approach is built on research and implementation
of national surveys such as those of the Bureau of Labor
Statistics. The NAOMS sampling methods have been grounded in
sound interview polling science.''
Thus, the Panel believed that the NAOMS project was intended to
include data collection not only from pilots but also from flight
attendants and mechanics. And the panel recognized that the NAOMS
methodology was well established and credible.
The Panel did not conclude that NAOMS should be terminated.
Instead, they recommended that ``NASA should combine the National
Aviation Operations Monitoring Service methodology and resources with
the Aviation Safety Reporting System program data to identify aviation
safety trends.''
The Panel did express concern about the issue of potential
redundancy with other data sources but mentioned only one instance of
such overlap: engine shutdowns, which are tracked by the FAA. The Panel
did not provide a thorough analysis of the extent of such redundancy.
In fact, there was a very small degree of such overlap, and it was
intentionally designed into the NAOMS data collection system. The
purpose of this overlap was to allow for cross-validation of the NAOMS
measurements. That is, we expected to find similar rates and trends in
the NAOMS data as would be seen in the FAA data on engine shutdowns, as
long as the NAOMS survey question wording exactly matched the
specifications of the records being kept by the FAA. If we were to see
such correspondence across data sources, that would be reassuring about
the validity of the NAOMS data. Building questionnaires with such a
plan for validation is a normal part of designing a new questionnaire-
based measurement system.
If NAOMS were to reveal levels of and trends in event rates that
corresponded closely with rates and trends of the same events as
measured in other ways, questions addressing these events could then
have been removed from the NAOMS questionnaires. But if NAOMS rates and
trends turned out to be very different from those produced by different
data sources, this would merit further investigation. The discrepancy
could be attributable to inadequacy in either or both measurement
methods, and it would be worthwhile to investigate both possibilities.
For example, NAOMS event rates may be considerably higher than
those yielded by voluntary or mandatory airline or government reporting
systems because people must take the initiative to report events via
the latter systems, and if some people accidentally or intentionally
fail to report some events, the registered rates in the administrative
records will be misleadingly low. Much as we might hope that employees
will fully and properly participate in all voluntary and mandatory
reporting systems, it is possible that they do not. This possibility
should not be disregarded when comparing NAOMS event rates to rates of
the same events monitored in other ways.
In sum, the NAS Panel did note redundancy between NAOMS and other
record-keeping systems, but only a very small proportion of events
measured by the NAOMS questionnaires were being tracked with other
methods. Indeed, the purpose of NAOMS was to track reliable trends in
types of events not being measured in any other way.
Q2. In his testimony at the hearing, NASA Administrator Griffin
compared the NAOMS Project with the existing Aviation Safety Reporting
System (ASRS), stating that ``One of the primary differences between
ASRS and this survey was that ASRS is managed by aviation specialists.
When reports are made, the aviation specialists can contact the
submitter of the report and ask follow-up questions. They are
knowledgeable about aviation safety. This [NAOMS] survey was conducted
by telephone polling surveyors, who have no knowledge or had no
knowledge at all as to aviation or aviation safety. They had no domain
expertise, and it is precisely that which has led to some of the
problems that we are here discussing today.''
Q2a. Do you agree or disagree with Administrator Griffin's
characterization? Why?
A2a. Dr. Griffin was correct when he said that the ASRS is ``managed''
by aviation specialists. This was true for NAOMS as well.
Dr. Griffin was not quite correct in saying that ``the aviation
specialists can contact the submitter of the report and ask follow-up
questions.'' The managers of the ASRS program do not contact event
reporters.
Instead, retired pilots and other air travel professionals are
employed by ASRS as interviewers. These individuals routinely telephone
pilots who submit reports to ASRS to debrief them and acquire details
about the event not provided by the written report. This is a key
feature of the ASRS data gathering system: its focus is not on
quantitative trends but rather is on gathering rich qualitative
information about the events that pilots choose to report.
In contrast, NAOMS is not designed to collect such rich contextual
information. Rather, NAOMS is designed simply to count events and track
trends. It is therefore not necessary for telephone interviewers to
have expertise in aviation, because their task is simply to read aloud
well designed and technically correct questions to pilots and record
the counts of events that the pilots report. NAOMS' question wordings
were crafted through an extensive process of pretesting to assure that
they would be clear and understandable as administered in this fashion
and would not require aviation expertise from the interviewers.
In fact, it would be undesirable for the interviewers to engage in
any conversation with the survey respondents about the events they
report--doing so would violate one of the central premises of high
quality, objective survey data collection: interviewers must read the
exact same question in exactly the same way to all respondents and
provide no feedback on the answers provided, so as to minimize any
potential for interviewer-induced bias.
Nonetheless, the NAOMS interviewers did receive some training in
aviation matters from an experienced pilot before they began conducting
the NAOMS interviews. The purpose of this training was to clarify the
meanings of the questions and terminology in the questionnaire, so that
the interviewers could competently handle any unexpected interchanges
with respondents on technical issues.
Furthermore, there is no factual basis for Dr. Griffin's claim that
lack of domain expertise among the interviewers ``has led to some of
the problems that we are here discussing today.'' Because the job of
the interviewers was to read the questions and record the answers
accurately, lack of domain expertise could not have accounted for any
of Dr. Griffin's concerns about the data.
Q2b. If the NAOMS Project were to be restarted, would there be any
changes that you think should be made to either the methodology or
implementation of the Project?
A2b. If the NAOMS data collection were to be restarted, I would
recommend the following:
1) Conduct thorough analysis of the data collected already by
NAOMS, in comparison with other databases tracking some of the
same events, to assess the quality of the NAOMS data.
2) Restart telephone interviewing of air carrier pilots using
the same interviewing methodology as was being used when data
collection was suspended.
3) Draw samples of air carrier pilots to be interviewed from
the full population of licensed pilots. The FAA maintains an
updated list of this population, so the samples should be drawn
from this list. A subset of this list has been made available
to the public, but because that public subset is only partial,
the NAOMS sample should be drawn from the full FAA list.
4) An external advisory committee should be formed to oversee
and advise on all data collection activities, following the
example set by most major survey data collection projects
funded by the Federal Government. This committee should be
composed of a mixture of aviation and survey research experts.
Ultimately, all design decisions regarding implementation of
NAOMS data collection should be made by the project's Principal
Investigator(s), based upon the advice of the advisory
committee.
5) The data that are collected each month should be released
in electronic files accompanied by full written documentation
of the data collection procedures as soon as possible after
each month's interviewing is completed.
6) All releases of data should be accompanied by written
documentation telling analysts how to properly compute event
rates and over-time trends. Because the design of the survey is
complex, such documentation will be useful to help assure that
the public does not draw unfounded inferences from the data.
7) The Principal Investigator of NAOMS should issue monthly
reports documenting rates and trends in the recently collected
data, modeled after the press releases put out by the
Conference Board and the University of Michigan's Survey
Research Center documenting their monthly surveys measuring
consumer confidence.
8) Data collection from general aviation pilots should be
restarted using the procedures that NAOMS employed prior to
data collection suspension.
        9) Data collection from air traffic controllers, flight
attendants, and mechanics should be initiated after preparatory
design work is completed. This preparatory work
should include focus groups and other data collections to build
a list of events to ask about, experimental studies to document
optimal recall period lengths for these professionals, and
studies to document the predominant organization of events in
these professionals' memories. Data should be collected from
these individuals via telephone interviewing.
10) In keeping with the National Academy of Sciences
recommendation, it would be desirable to coordinate NAOMS data
analysis with ASRS data analysis. Whenever possible, trends in
ASRS reports for an event should be compared with NAOMS trends
of the same event to explore comparability. Likewise, NAOMS
rates should be compared with rates generated using any other
data sources tracking a small number of events measured by both
NAOMS and other record-keeping systems.
Questions submitted by Representative Daniel Lipinski
Q1. Peggy Gilligan, the Deputy Associate Administrator for Aviation
Safety at the FAA, recently cast doubt on the survey by questioning
NASA's methodology. For example, she is quoted as stating that the
answers in the study were not sufficiently detailed. Further, Dr.
Griffin's testimony highlights inconsistencies in the study as compared
to surveys conducted in other ways and also calls into question the
validity of the methodology. Dr. Krosnick, your testimony explains that
the process was meticulously designed and very thorough. Could you
elaborate on your work on the survey and explain why others might call
the study into question?
A1. I was invited to help with the development of NAOMS because the
project sought to design surveys of the highest quality to produce the
most accurate measurement possible according to best practices of
survey research used throughout the Federal Government.
I served as an advisor to the team that carried out the work.
Specifically, I attended numerous project planning meetings and public
dissemination meetings (at which I made presentations on the science
behind the survey component of the project and the findings of our
pretest studies). I designed a series of pretesting studies to
ascertain (1) the optimal length of time to include in the period that
respondents would be asked to describe, (2) the order in which the
questions should be asked, and (3) whether the data should be collected
by telephone interviewing, face-to-face interviewing, or paper and
pencil questionnaires. I oversaw the analysis of data collected in
those studies and oversaw the process of writing reports describing
their findings. I also participated in the design and implementation of
focus groups held with air carrier pilots, air traffic controllers, and
general aviation pilots to build lists of safety-related events that
they witnessed while working. I oversaw the process of conducting
cognitive think-aloud pretesting interviews with air carrier pilots to
assure that the questionnaires were understandable. And I provided
advice on most other aspects of the study design.
My goal in providing this advice was to be sure that NAOMS design
decisions would yield the most accurate possible measurements.
Many observers have raised concerns about the reliability of the
NAOMS data. These include administrators at the FAA and administrators
at NASA.
Some expressions of concern have addressed the procedures used to
collect the NAOMS data. These concerns were articulated prior to the
public release of a full report by Battelle describing the procedures
used to collect the data and the rationales for those procedures (as
far as I know, that report has not yet been publicly released). It
therefore strikes me as premature for anyone to offer opinions about
inadequacies in the NAOMS procedures.
For example, Dr. Griffin expressed concern that the NAOMS
interviewers were not aviation experts and were not tasked with
collecting detailed information about safety-related events through
conversational interviewing. As I explained above, this approach to
interviewing is appropriate for ASRS but not for NAOMS. Standard
practice in high quality survey interviewing involves reading the same
questions identically to all respondents and not offering any
additional comments or improvising conversation with the respondents,
so as to minimize the potential for such improvised conversation to
bias respondents' answers. Thus, concerns about lack of aviation
experience among the interviewers are misplaced.
Other expressions of concern have focused on the rates of events
documented using the NAOMS data. For example, during his testimony, Dr.
Griffin mentioned that NAOMS indicated that diversions to alternate
airports occurred at implausibly high rates. Some other NAOMS critics
have similarly articulated concerns that NAOMS rates vastly exceeded
rates of the same events documented by other monitoring mechanisms.
I believe that there are at least two possible reasons for these
expressions of concern. First, the NAOMS surveys were designed to yield
multiple measurements of the same event, and any rate calculations must
be made adjusting for this multiple registering of single events. In
Appendix A of this letter, I explain how statistical calculations must
be implemented to correct for this inherent aspect of NAOMS data
collection.
I am concerned that this sort of calculation correction was not
implemented properly by people who have analyzed the NAOMS data to
date. If so, this would lead to the misleading impression of event
rates much higher than really occurred and much higher than other data
sources might indicate.
A second possible reason for concern about NAOMS rates is
inadequate attention to the details of the wording of the NAOMS
questions and the measurement being made by other data sources.
Consider, for example, Dr. Griffin's testimony that NAOMS data
indicated that four times per day, a transport aircraft was landed at
an unscheduled airport in order to deal with an unruly passenger. Dr.
Griffin said that to his knowledge, that has happened a total of two or
three times since September 11, 2001.
If such a discrepancy were really present between the NAOMS data
and administrative records of such events, it would be a basis for
concern about the accuracy of one or both of those streams of data. But
in fact, the discrepancy Dr. Griffin pointed to is an illusion.
    The NAOMS survey did not ask the pilots to report how many
times they had to land an airplane at an unscheduled airport in order
to deal with an unruly passenger. Instead, the NAOMS question asked:
``During the last 60 days, how many times did an in-flight aircraft on
which you were a crew member expedite landing or divert to an alternate
airport due to a passenger disturbance?'' Notice that this question
combines diversions with expedited landings. It is therefore not
appropriate to compare the total number of NAOMS reports of events in
this category with another measuring system's assessment of the number
of times that unruly passengers caused diversions to alternate
airports. Of course, the NAOMS question will yield higher rates than
the other monitoring system will.
These are two of the possible reasons for unfounded concerns about
the accuracy of NAOMS data: incorrect computation of statistics using
the data, and insufficient attention to the details of the survey
question wordings and the types of events tracked by other monitoring
systems. Mistakes of the sort outlined above would cause the illusory
appearance of implausibly high event rates in the NAOMS survey.
Assuming that such calculation and interpretation mistakes have
been made and have led to a misperception that NAOMS event rates are
unusually high, it is understandable that people observing those rates
might take them to be dangerous if released publicly, for at least two
reasons. First, as the NASA FOIA appeal reply outlined, reports of
event rates much higher than have been recognized might cause public
concern about the safety of flying and impact the economic viability of
commercial airlines. Based on my knowledge of the risk perception and
decision-making literatures generated by social and behavioral
scientists, I believe that releasing such numbers in the context of a
public report about NAOMS is very unlikely to increase public fear of
flying or decrease airline passenger loads. But it is certainly
possible. So an observer within NASA or the FAA might fear negative
consequences of releasing high rates based on NAOMS data.
Second, staff within the FAA may perceive that event rates higher
than those yielded by their own monitoring systems could call those
monitoring systems into question. And in fact, that is just what higher
rates from NAOMS should do, in my opinion. Staff members who wish to
protect the appearance of integrity of those systems might prefer that
such concern not be raised in the public mind. But in my opinion, every
measurement system is potentially subject to error, so it is always
preferable to track important events using multiple measuring tools and
to check their correspondence. Rather than assuming that one measuring
tool is necessarily correct and the other is inaccurate, a discrepancy
should inspire scrutiny of the implementation of both methods. Such
scrutiny may lead to the detection of flaws in either or both measuring
systems, which can in the end inspire repairs that enhance accuracy of
assessments in the future.
    In sum, I believe that some observers may be motivated to criticize
NAOMS because of the perception that NAOMS yielded implausibly high event
rates. After careful and proper statistical calculations are
implemented, accompanied by careful attention to the exact wordings of
the NAOMS questions, these rates may turn out to be considerably lower
and may match rates of events tracked using other monitoring systems.
Q2. Dr. Krosnick, you are a renowned expert on survey methodology and
statistical analysis brought in as a subcontractor on the NAOMS
project. Did the process used to develop the NAOMS survey instrument
seem inadequate in any way? Did you lack expert feedback--peer review--
as the methodology of the project went forward?
A2. I believe that the NAOMS development process was indeed consistent
with best practices in survey methodology. Indeed, in some ways, the
preparatory design work exceeded that done for many major, long-
standing federally funded and administered survey research projects.
And the very high response rates that typified NAOMS are evidence of
little if any non-response bias in the resulting data. In sum, I
believe that NASA did an excellent job of funding top-level
methodological work and that Battelle and its subcontractors did their
work to the highest standards of excellence.
The NAOMS project implemented survey methods that have been
extensively peer reviewed and have been widely accepted as standard
practice in the industry for decades. The tailoring of implementation
of those procedures to the NAOMS context was also done using pretesting
procedures that have well-established status in the methods literature.
As I mentioned during my oral testimony, the project sought peer
commentary and suggestions by social scientists and aviation experts at
many public and private briefing meetings. These conversations yielded
useful suggestions that influenced the design of NAOMS. In addition,
the White House Office of Management and Budget reviewed the NAOMS
procedure in order to approve its data collection. OMB routinely
evaluates federal survey project methodology and makes suggestions for
improvement, and we benefited from this process as well.
The only potentially valuable form of peer review that was not
implemented but might have been helpful would have entailed forming a
committee of peer reviewers who were paid to critique the methodology
as harshly as possible and to suggest alternative methods to implement
the survey. I believe that such a procedure would most likely have
yielded few if any suggestions of changes to the methodology that was
employed. But it could have been done prior to NAOMS' suspension and
could still be implemented today.
Appendix A:
Explanation of Inflated Probabilities
of Event Occurrences in NAOMS Data
    This Appendix explains why each event that occurs during the course
of air travel has an inflated probability of being reported in the
NAOMS survey, by design, because it is witnessed by multiple people. It
also explains how a correction for this aspect of the survey design
must be implemented in order to properly generate estimates of event
rates.
    Consider, for example, the NAOMS question asking pilots to report
the number of times a bird hit a plane on which they were working.
Each such bird strike would be witnessed by at least two pilots (the
pilot and co-pilot) and could have been witnessed by three pilots (on
aircraft with a third working cockpit crew member). Thus, the
probability that each bird strike would be reported by some respondent
in the survey was twice or three times as high as would have occurred
if only one person had witnessed each event. And for events that
involve two aircraft at the same time (e.g., a near miss), between four
and six cockpit crew members will witness the event.
Some observers have asserted that such inflated probabilities can
be ignored, because the relatively small number of pilots interviewed
each month relative to the total population of pilots means that the
chances that the same event will be reported in the survey by two
different respondents is extremely small. That is true, but it is
irrelevant to the multiple-counting issue: each event nonetheless has
twice or three times the probability of being reported by someone.
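    As a simple numerical illustration (the figures are assumed,
matching the hypothetical example that follows, not NAOMS data): with
a sampling fraction f, an event witnessed by w pilots is reported by
at least one sampled respondent with probability 1 - (1 - f)^w, which
is roughly w x f when f is small--hence the doubled or tripled chance.

    # Illustrative arithmetic only, not NAOMS data or code.
    f = 1000 / 10000                  # assumed sampling fraction
    for w in (1, 2, 3):               # number of pilot witnesses
        p = 1 - (1 - f) ** w          # prob. some witness is sampled
        print(w, round(p, 3))         # 0.1, 0.19, 0.271 -- about w * f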
To illustrate how the calculation of event rates must be done,
imagine that there are 10,000 total active air carrier pilots and that
1,000 of them were interviewed and asked to describe events that
occurred during some or all of January, 2001.\1\ Imagine further that
during these interviews, the total number of January bird strikes
reported by all respondents was 50.
---------------------------------------------------------------------------
    \1\ Remember that NAOMS interviews were conducted on almost every
day of each year, and each respondent was asked to report the total
number of events of each type that he or she witnessed during the past
60 days.
Therefore, some respondents will have made reports for periods
including all of January, 2001. And other respondents will have made
reports for periods including only part of that month. Therefore, the
data from different respondents must be integrated carefully, to
recognize the fact that some people's reports included a mixture of
days in January and days in other months.
---------------------------------------------------------------------------
To calculate the total number of bird strikes that occurred in
January, it might be tempting to divide the number 50 by the sampling
fraction (1,000/10,000), which would equal 500. But this would be
incorrect.
To calculate the total number of events properly, it would be
necessary to use information from the NAOMS questionnaires about the
type of aircraft flown by each pilot who reported a bird strike to
infer whether that bird strike was most likely witnessed by only two
pilots or three pilots (this can be determined by type of aircraft).
Then each bird strike report must be divided by the total number of
pilots who would most likely have witnessed it (two or three)--so some
bird strikes would contribute one-half to the total and others would
contribute one-third to the total. Then the resulting fractions could
be added up across respondents, divided by 1,000 and multiplied by
10,000 to yield an estimate of the total number of bird strikes that
occurred during January.
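    The following is a minimal sketch in Python of the calculation just
described, using the hypothetical figures from this example (50 reports
from 1,000 of 10,000 pilots); the split between two- and three-pilot
cockpits is an assumed illustration of inferring the likely number of
witnesses from aircraft type.

    # Sketch only: weight each report by 1/witnesses, then scale by
    # the sampling fraction (all figures assumed, from the example).
    CREW = {"two-pilot": 2, "three-pilot": 3}  # by aircraft type

    def estimate_total(reports, n_sampled=1000, n_population=10000):
        weighted = sum(1 / CREW[kind] for kind in reports)
        return weighted * n_population / n_sampled

    # 50 raw reports: 30 from two-pilot cockpits, 20 from three-pilot.
    reports = ["two-pilot"] * 30 + ["three-pilot"] * 20
    print(estimate_total(reports))    # ~216.7, versus the naive 500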
    Another calculation method would involve dividing the total number
of bird strikes reported to have happened during January, 2001, by the
total number of hours that the interviewed sample of pilots said they
flew during that month or by the total number of flight legs that the
interviewed sample of pilots said they flew during that month. These
rates could then be multiplied by the total number of flight hours
flown by all pilots during the month or by the total number of legs
flown by all pilots during that month, respectively. But again, these
numbers would be inappropriately high, because they would be inflated
due to the doubled or tripled probability of reporting the same event
by multiple witnesses. So again, each respondent's report of an event
should be counted as either one-half or one-third (depending on whether
two or three cockpit crew were working on the aircraft); these
fractions should then be summed, and the total should be multiplied by
the total number of hours or legs flown by the entire population of
pilots during the month of interest.
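    The same witness correction carries over to the per-hour (or
per-leg) method; in the sketch below, the 60,000 sample flight hours
and 600,000 population flight hours are assumed purely for
illustration.

    # Sketch only: weighted reports form a rate per sample flight
    # hour, which is then scaled to hours flown by all pilots.
    def estimate_by_hours(crew_sizes, sample_hours, population_hours):
        weighted = sum(1 / c for c in crew_sizes)   # 1/2 or 1/3 each
        return weighted / sample_hours * population_hours

    crew_sizes = [2] * 30 + [3] * 20                # same 50 reports
    print(estimate_by_hours(crew_sizes, 60000, 600000))  # ~216.7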
Answers to Post-Hearing Questions
Responses by Captain Terry L. McVenes, Executive Air Safety Chairman,
Air Line Pilots Association, International
Question submitted by Representative Daniel Lipinski
Q1. When Battelle ran this project, the names of survey participants
were removed from their records within 24 hours of the conclusion of
their survey. Is that the kind of step that you would endorse in any
other survey of this kind?
A1. Pilots participated in the survey with the understanding that it
would be completely confidential. That premise was key to getting open
and honest reporting. As I understand it, follow-up questioning wasn't
part of the methodology used in the survey, and the names of the
participants weren't germane to the type of questions asked.
Consequently, it is my belief that future surveys of this type should
also de-identify the participants, as was done in the NAOMS survey.
This would promote confidence in the confidentiality of the process as
well as provide the industry with quality safety information.
Appendix 2:
----------
Additional Material for the Record
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]