[Senate Report 118-89]
[From the U.S. Government Publishing Office]
Calendar No. 192
118th Congress  }                                    {       Report
                }          SENATE                    {
1st Session     }                                    {       118-89
======================================================================
TRANSPARENT AUTOMATED GOVERNANCE
ACT
__________
R E P O R T
OF THE
COMMITTEE ON HOMELAND SECURITY AND
GOVERNMENTAL AFFAIRS
UNITED STATES SENATE
TO ACCOMPANY
S. 1865
TO DIRECT AGENCIES TO BE TRANSPARENT WHEN USING
AUTOMATED AND AUGMENTED SYSTEMS TO INTERACT WITH
THE PUBLIC OR MAKE CRITICAL DECISIONS, AND FOR OTHER PURPOSES
August 22, 2023.--Ordered to be printed
Filed, under authority of the order of the Senate of July 27, 2023
U.S. GOVERNMENT PUBLISHING OFFICE
WASHINGTON : 2023
-----------------------------------------------------------------------------------
COMMITTEE ON HOMELAND SECURITY AND GOVERNMENTAL AFFAIRS
GARY C. PETERS, Michigan, Chairman
THOMAS R. CARPER, Delaware           RAND PAUL, Kentucky
MAGGIE HASSAN, New Hampshire         RON JOHNSON, Wisconsin
KYRSTEN SINEMA, Arizona              JAMES LANKFORD, Oklahoma
JACKY ROSEN, Nevada                  MITT ROMNEY, Utah
ALEX PADILLA, California             RICK SCOTT, Florida
JON OSSOFF, Georgia                  JOSH HAWLEY, Missouri
RICHARD BLUMENTHAL, Connecticut      ROGER MARSHALL, Kansas
David M. Weinberg, Staff Director
Lena C. Chang, Director of Governmental Affairs
Michelle M. Benecke, Senior Counsel
Evan E. Freeman, Counsel
William E. Henderson III, Minority Staff Director
Christina N. Salazar, Minority Chief Counsel
Kendal B. Tigner, Minority Professional Staff Member
Laura W. Kilbride, Chief Clerk
_______
Mr. Peters, from the Committee on Homeland Security and Governmental
Affairs, submitted the following
R E P O R T
[To accompany S. 1865]
[Including cost estimate of the Congressional Budget Office]
The Committee on Homeland Security and Governmental
Affairs, to which was referred the bill (S. 1865) to direct
agencies to be transparent when using automated and augmented
systems to interact with the public or make critical decisions,
and for other purposes, having considered the same, reports
favorably thereon with an amendment, in the nature of a
substitute, and recommends that the bill, as amended, do pass.
CONTENTS

I. Purpose and Summary
II. Background and Need for the Legislation
III. Legislative History
IV. Section-by-Section Analysis of the Bill, as Reported
V. Evaluation of Regulatory Impact
VI. Congressional Budget Office Cost Estimate
VII. Changes in Existing Law Made by the Bill, as Reported
I. PURPOSE AND SUMMARY
S. 1865, the Transparent Automated Governance Act, requires
the Director of the Office of Management and Budget (OMB) to
issue guidance to agencies requiring them to notify individuals
when they are interacting with, or subject to critical
decisions made using, certain artificial intelligence (AI) or
other automated systems. OMB's guidance would also direct
agencies to institute an appeal process for individuals who
believe an adverse critical decision impacting them was made in
error using such a system. These processes would involve
alternative human review of the decision.
II. BACKGROUND AND NEED FOR THE LEGISLATION
Agencies across the federal government are already using AI
and other automated systems to interact with and make--or
assist in making--critical decisions about members of the
public, and deployment of these systems is expected to continue
to grow.\1\
---------------------------------------------------------------------------
\1\David Freeman Engstrom et al., Government by Algorithm:
Artificial Intelligence in Federal Administrative Agencies,
Administrative Conference of the United States (Feb. 19, 2020).
---------------------------------------------------------------------------
AI systems can allow government agencies to provide more
efficient services, automating routine tasks and drawing new
insights from existing data sets. The federal government is
already using AI, and the opportunities for new efficiencies
will increase as AI capabilities continue to improve. For
example, the Social Security Administration is developing AI
tools with the goal of improving the accuracy and efficiency of
formal adjudications, and agencies like the Federal
Communications Commission and the Consumer Financial Protection
Bureau are increasingly using AI and machine learning tools to
streamline their processing and analysis of public comments.\2\
---------------------------------------------------------------------------
\2\Id.
---------------------------------------------------------------------------
While AI use is widespread and continues to grow, there is
no comprehensive information on the use cases and contexts in
which government agencies use these technologies.\3\ These
systems are not monitored or evaluated in any comprehensive or
standardized way, and most agencies do not have dedicated
governance structures to oversee policies in this space.\4\ As
a result, governments are leaving the door open to risks posed
by AI technologies that do not work as intended, such as lack
of accuracy, bias in decision-making, and breaches of
privacy.\5\ These systems also pose deep accountability
challenges if bias or privacy violations are detected. This is
because, as these systems become more complex, it is not always
possible to accurately describe how an algorithm reached a
particular output.\6\ When the stakes are high, these
problematic outputs can cause significant, and sometimes life-
threatening, harms, as discussed below.
---------------------------------------------------------------------------
\3\Id.
\4\National AI Advisory Committee (NAIAC), Year 1 Report (May 2023)
(www.ai.gov/wp-content/uploads/2023/05/NAIAC-Report-Year1.pdf).
\5\Id.; National Institute of Standards and Technology, Artificial
Intelligence Risk Management Framework (AI RMF 1.0) (Jan. 2023)
(nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf).
\6\Id.
---------------------------------------------------------------------------
These harms are not hypothetical; they are already
occurring both at the federal level and in states across the
country. For example, U.S. Customs and Border Protection (CBP)
began to require migrants to use a mobile application with facial
recognition technology in order to apply for asylum at the
U.S.-Mexico border. However, the application failed to register
many people with darker skin tones, effectively barring them from
exercising their right to request entry into the United
States.\7\ In another example, algorithms deployed across at
least a dozen states to decide who is eligible for Medicaid
benefits erroneously stripped critical assistance from
thousands of Americans who relied on disability benefits.\8\ In
at least one state, when the applicants tried to understand how
their benefits were determined, they were told the formula
could not be disclosed because it was a ``trade secret.''\9\
---------------------------------------------------------------------------
\7\Facial recognition bias frustrates Black asylum applicants to
US, advocates say, The Guardian (Feb. 8, 2023).
\8\What happens when an algorithm cuts your healthcare, The Verge
(Mar. 21, 2018); What happened when a `wildly irrational' algorithm
made crucial healthcare decisions, The Guardian (July 2, 2021).
\9\What happens when an algorithm cuts your healthcare, The Verge
(Mar. 21, 2018).
---------------------------------------------------------------------------
As the above examples show, transparency and the
opportunity for members of the public to seek help correcting
harms are critical steps to ensuring that agencies are using AI
and other automated systems with purpose, forethought, and
care, and that individuals are not left at the whim of
erroneous decision-making assisted by these systems. Through
required notice and an opportunity for redress, the Transparent
Automated Governance Act would increase transparency regarding the
federal government's use of artificial intelligence and other
automated systems when these systems interact with or make
critical decisions about members of the public.
III. LEGISLATIVE HISTORY
Senator Gary Peters (D-MI) introduced S. 1865, the
Transparent Automated Governance Act, on June 7, 2023, with
original cosponsors Senator Mike Braun (R-IN) and Senator James
Lankford (R-OK). The bill was referred to the Committee on
Homeland Security and Governmental Affairs.
The Committee considered S. 1865 at a business meeting on
June 14, 2023. At the business meeting, Senator Peters offered
a substitute amendment to the bill, as well as a modification
to that amendment. The Peters amendment, as modified, added a
definition for artificial intelligence. It also changed an OMB
consultation with other agencies from a required consultation
to a suggested consultation and removed specific mentions of
the National Institute of Standards and Technology, the Office
of Science and Technology Policy, and academia from the
consultation list. The provision now recommends OMB consult
with the Government Accountability Office (GAO), the General
Services Administration (GSA), other agencies with relevant
expertise, the private sector, and the nonprofit sector. The
Committee adopted the modification to the Peters substitute
amendment and the Peters substitute amendment, as modified, by
unanimous consent, with Senators Peters, Hassan, Sinema, Rosen,
Padilla, Ossoff, Blumenthal, Paul, Lankford, Romney, Scott, and
Hawley present.
The bill, as amended by the Peters substitute amendment as
modified, was ordered reported favorably by roll call vote of
10 yeas to 1 nay, with Senators Peters, Hassan, Sinema, Rosen,
Padilla, Ossoff, Lankford, Romney, Scott, and Hawley voting in
the affirmative, and Senator Paul voting in the negative.
Senators Carper, Blumenthal, Johnson, and Marshall voted yea by
proxy, for the record only.
IV. SECTION-BY-SECTION ANALYSIS OF THE BILL, AS REPORTED
Section 1. Short title
This section establishes the short title of the bill as the
``Transparent Automated Governance Act'' or the ``TAG Act.''
Section 2. Definitions
This section defines the terms ``agency,'' ``artificial
intelligence,'' ``augmented critical decision process,''
``automated system,'' ``critical decision,'' ``Director,''
``plain language,'' and ``transparent automated governance
guidance'' for the purposes of this Act.
Section 3. Transparent automated governance guidance
Subsection (a) directs the Director of OMB, within 270 days,
to issue guidance requiring agencies to disclose when they use
certain automated systems to interact with, or to help make a
critical decision about, a member of the public.
Subsection (b) requires that the guidance include: (1) the
identification of additional agency actions that qualify as
critical decisions, if appropriate; (2) a list of automated
systems that may be used in augmented critical decision processes
but are not subject to the Act's requirements; (3) how agencies
must provide plain language notice at the time and place of an
individual's interaction with certain automated systems; (4) the
proper contents of the plain language description of the
automated system; (5) examples of the plain language description
for the automated system; (6) how agencies must provide plain
language notice to individuals when they receive a critical
decision made using an automated system; (7) the proper contents
of the plain language description of the critical decision; (8)
examples of the plain language description for the critical
decision; (9) how an agency must provide an appeals process when
an individual receives an adverse critical decision made using an
automated system; (10) how agencies shall provide for alternative
review of adverse critical decisions made using an automated
system, including review by an individual; and (11) criteria for
agency information collection regarding issues that arise during
the use of these systems.
Subsection (c) requires OMB to provide 180 days for public
comment on a preliminary version of the guidance from
subsection (a) as described in subsection (b).
Subsection (d) requires OMB to consider consulting with
GSA, GAO, the private sector, and the nonprofit sector,
including experts in privacy, civil rights, and civil
liberties, when developing this guidance.
Subsection (e) allows for the guidance required by section
104 of the AI in Government Act of 2020 (40 U.S.C. 11301 note)
to satisfy the requirements of the Act if it meets all the
requirements of subsection (b).
Subsection (f) requires OMB to update this guidance every
two years.
Section 4. Agency implementation
Subsection (a) requires agencies to implement the guidance
provided by OMB in section 3 within 270 days.
Subsection (b) requires the Comptroller General to review
agency compliance with the Act and submit a report to the
Senate Homeland Security and Governmental Affairs Committee and
the House Oversight and Accountability Committee within 2 years
and then biannually thereafter.
Section 5. Sunset
The Act sunsets 10 years after enactment.
V. EVALUATION OF REGULATORY IMPACT
Pursuant to the requirements of paragraph 11(b) of rule
XXVI of the Standing Rules of the Senate, the Committee has
considered the regulatory impact of this bill and determined
that the bill will have no regulatory impact within the meaning
of the rules. The Committee agrees with the Congressional
Budget Office's statement that the bill contains no
intergovernmental or private-sector mandates as defined in the
Unfunded Mandates Reform Act (UMRA) and would impose no costs
on state, local, or tribal governments.
VI. CONGRESSIONAL BUDGET OFFICE COST ESTIMATE
S. 1865 would require the Office of Management and Budget
(OMB) to provide guidance to federal agencies on how to adopt
secure artificial intelligence (AI) programs. The guidance
would inform agencies how to notify the public that they are
interacting with a federal system that uses artificial
intelligence to make decisions regarding benefits or
eligibility for federal programs. It also would instruct
agencies how to provide a process for members of the public to
appeal those AI-generated decisions. The bill would require the
Government Accountability Office (GAO) to report to the
Congress on the effectiveness of these efforts.
The Administration already has issued some orders and
memorandums concerning the creation of federal AI programs that
CBO expects will satisfy most of the requirements of the bill.
Thus, the costs of implementing S. 1865 would stem mainly from
the need for OMB to provide any additional guidance that might be
necessary and for GAO to publish the required report.
CBO estimates those costs would be less than $500,000; any
spending would be subject to the availability of appropriated
funds.
The CBO staff contact for this estimate is Aldo Prosperi.
The estimate was reviewed by Christina Hawley Anthony, Deputy
Director of Budget Analysis.
Phillip L. Swagel,
Director, Congressional Budget Office.
VII. CHANGES IN EXISTING LAW MADE BY THE BILL, AS REPORTED
This legislation would make no change in existing law,
within the meaning of clauses (a) and (b) of subparagraph 12 of
rule XXVI of the Standing Rules of the Senate, because this
legislation would not repeal or amend any provision of current
law.