[Congressional Bills 118th Congress]
[From the U.S. Government Publishing Office]
[S. 5539 Introduced in Senate (IS)]
118th CONGRESS
2d Session
S. 5539
To require systematic review of artificial intelligence systems before
deployment by the Federal Government, and for other purposes.
_______________________________________________________________________
IN THE SENATE OF THE UNITED STATES
December 16, 2024
Mr. Welch (for himself and Mr. Luján) introduced the following bill;
which was read twice and referred to the Committee on Homeland Security
and Governmental Affairs
_______________________________________________________________________
A BILL
To require systematic review of artificial intelligence systems before
deployment by the Federal Government, and for other purposes.
Be it enacted by the Senate and House of Representatives of the
United States of America in Congress assembled,
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Trustworthy By Design Artificial
Intelligence Act of 2024'' or the ``TBD AI Act of 2024''.
SEC. 2. DEFINITIONS.
In this Act:
(1) Artificial intelligence system.--The term ``artificial
intelligence system'' means a machine-based system that can,
for a given set of machine-defined or human-defined objectives,
make or inform predictions, recommendations, or decisions
influencing real or virtual environments. Artificial
intelligence systems use machine and human-based inputs to--
(A) perceive real and virtual environments;
(B) abstract such perceptions into models through
analysis in an automated manner; and
(C) use model inference to formulate options for
information or action.
(2) Director.--The term ``Director'' means the Director of
the National Institute of Standards and Technology.
(3) Federal agency.--The term ``Federal agency'' means any
Federal department, agency, or organization.
SEC. 3. GUIDELINES FOR EVALUATION OF TRUSTWORTHINESS OF ARTIFICIAL
INTELLIGENCE SYSTEMS.
(a) Development Required.--
(1) In general.--Not later than 1 year after the date of
the enactment of this Act, the Director shall develop and
release a set of guidelines for evaluation of the
trustworthiness of artificial intelligence systems.
(2) Use of existing guidelines or elements.--
(A) In general.--In carrying out paragraph (1), the
Director may use existing guidelines, best practices,
or elements of guidelines or best practices that the
Director identifies from other sources.
(B) Annotation.--If the Director uses guidelines,
best practices, or elements of guidelines or best
practices pursuant to subparagraph (A) of this
paragraph in order to carry out paragraph (1), the
Director shall clearly annotate such use in a central
location and make such annotations available online to
the public.
(3) Periodic updates.--The Director shall, on a periodic
basis but not less frequently than annually, update the
guidelines developed pursuant to paragraph (1).
(b) Components Covered.--The guidelines developed pursuant to
subsection (a) for evaluation of artificial intelligence systems shall
cover the following:
(1) The models used for the artificial intelligence
systems.
(2) The data used and activities conducted in training the
artificial intelligence systems, including the collection and
filtering of data.
(3) The processes and techniques applied after initial
training to enhance the capabilities of the artificial
intelligence systems, such as fine-tuning, reinforcement
learning, and other post-training optimization methods.
(4) Content generated by the artificial intelligence
systems.
(5) The hardware systems used by the artificial
intelligence systems.
(6) Interactions between humans and the artificial
intelligence systems that are expected to arise during intended
or reasonably foreseeable use of the artificial intelligence
systems.
(7) Risks presented by anthropomorphic artificial
intelligence systems.
(c) Elements of Trustworthiness.--
(1) In general.--The guidelines developed pursuant to
subsection (a) shall cover trustworthiness with respect to the
following:
(A) Validity and reliability.
(B) Safety.
(C) Security.
(D) Resiliency.
(E) Transparency and accountability.
(F) Explainability and interpretability.
(G) Privacy.
(H) Fairness and bias.
(I) Such other matters relating to safety,
security, or trustworthiness as the Director considers
appropriate.
(2) Protected classes.--The guidelines developed pursuant
to subsection (a) shall specifically highlight and consider
accuracy and bias risks relating to protected classes under
Federal law.
(d) Applicability.--The Director shall ensure that the guidelines
developed pursuant to subsection (a) include an assessment of the
trustworthiness of all components covered under subsection (b) with
respect to all elements under subsection (c), but account for
circumstances in which certain assessment methods or recommendations
may not be applicable to certain components or elements.
(e) Limitation Relating to Synthetic Content.--Under the guidelines
developed pursuant to subsection (a), the Director shall identify
appropriate mechanisms to manage the risks from relying on synthetic
content or content created by artificial intelligence systems to
improve a dataset or model, or to meet evaluation guidelines.
(f) Developing Robust Guidelines.--The Director shall ensure that
the guidelines developed pursuant to subsection (a) are developed in a
manner that encourages transparency, cooperation, and collaboration
with developers or evaluators of artificial intelligence systems,
academia, and civil society sufficient to permit independent
verification of the elements set forth under subsection (c)(1).
(g) Iterative Evaluation.--The Director shall ensure that the
guidelines developed pursuant to subsection (a) cover how best to
evaluate the trustworthiness of artificial intelligence systems
iteratively, throughout the design, development, and deployment
lifecycle of an artificial intelligence system.
(h) Report to Congress.--
(1) In general.--The Director shall submit to Congress a
report on any expected barriers to implementing and adhering to
the guidelines developed pursuant to subsection (a), especially
with respect to transparency, cooperation, or collaboration
barriers with developers of artificial intelligence systems.
(2) Form.--The report under paragraph (1) shall be
submitted in unclassified form, but may include a classified
appendix, if necessary.
SEC. 4. FEDERAL DEPLOYMENT OF ARTIFICIAL INTELLIGENCE SYSTEMS.
(a) Covered Use Defined.--In this section, with respect to an
artificial intelligence system, the term ``covered use''--
(1) means use in any automated decision making; and
(2) does not include any use that the Director exempts from
any portion of the guidelines issued under section 3(a),
including any use that--
(A) is subject to evaluation by existing national
security assessments; or
(B) is an edge case, especially a time-sensitive or
emergency case, as determined by the Director.
(b) Existing Artificial Intelligence Systems.--With respect to any
artificial intelligence system deployed for a covered use that is in
use by a Federal agency before the date of enactment of this Act, the
Federal agency shall, not later than 2 years after the effective date
of this section, evaluate the artificial intelligence system
deployment to ensure the artificial intelligence system meets the
guidelines developed under section 3(a) or cease using the artificial
intelligence system.
(c) New Artificial Intelligence Systems.--The head of each Federal
agency shall ensure that each new acquisition or integration of an
artificial intelligence system by the Federal agency for a covered use
meets the guidelines developed under section 3(a) prior to deployment
of the artificial intelligence system.
(d) Labeling.--
(1) In general.--The head of each Federal agency shall
identify as compliant each artificial intelligence system
deployment for a covered use by the Federal agency that meets
the guidelines developed under section 3(a), and shall make
publicly available, in accordance with all applicable
classification requirements and national security restrictions,
documentation of the evaluation status and compliance details
for each such deployment.
(2) Noncompliant deployments.--Not later than 2 years after
the effective date of this section, the head of each Federal
agency that deploys an artificial intelligence system that is
not compliant or has not been evaluated pursuant to paragraph
(1) shall--
(A) make publicly available documentation of each
such deployment; and
(B) with respect to each such deployment, report--
(i) the status of the evaluation;
(ii) progress made towards compliance;
(iii) a clear, specific justification for
any delay; and
(iv) any barriers to compliance, including
resource constraints.
(3) Report to congress.--
(A) In general.--Not later than 3 years after the
effective date of this section, the head of each
Federal agency shall submit to Congress a report of
each deployment described in paragraph (2).
(B) Form.--The report under subparagraph (A) shall
be submitted in unclassified form, but may include a
classified appendix, if necessary.
(e) Chief AI Officers.--
(1) In general.--Not later than 120 days after the
effective date of this section, the head of each Federal agency
shall designate a Chief Artificial Intelligence Officer, with
responsibility for the management, governance, acquisition, and
oversight processes of the Federal agency relating to
artificial intelligence, including implementation of the
guidelines developed under section 3(a).
(2) Full-time employee.--To the extent practicable, each
Chief Artificial Intelligence Officer designated under
paragraph (1) shall be a full-time employee of the Federal
agency on the date of the designation.
(3) Seniority.--With respect to the Chief Artificial
Intelligence Officer of any agency described in section 901(b)
of title 31, United States Code, the Chief Artificial
Intelligence Officer shall be an executive with a position
classified above GS-15 of the General Schedule or the
equivalent.
(f) Effective Date.--This section shall take effect on the date
that the guidelines developed pursuant to section 3(a) are released.