<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="billres.xsl"?>
<!DOCTYPE bill PUBLIC "-//US Congress//DTDs/bill.dtd//EN" "bill.dtd">
<bill bill-stage="Introduced-in-Senate" dms-id="A1" public-private="public" slc-id="S1-LIP24795-MR7-1N-J1L"><metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
<dublinCore>
<dc:title>118 S5539 IS: Trustworthy By Design Artificial Intelligence Act of 2024</dc:title>
<dc:publisher>U.S. Senate</dc:publisher>
<dc:date>2024-12-16</dc:date>
<dc:format>text/xml</dc:format>
<dc:language>EN</dc:language>
<dc:rights>Pursuant to Title 17 Section 105 of the United States Code, this file is not subject to copyright protection and is in the public domain.</dc:rights>
</dublinCore>
</metadata>
<form>
<distribution-code display="yes">II</distribution-code><congress>118th CONGRESS</congress><session>2d Session</session><legis-num>S. 5539</legis-num><current-chamber>IN THE SENATE OF THE UNITED STATES</current-chamber><action><action-date date="20241216">December 16, 2024</action-date><action-desc><sponsor name-id="S422">Mr. Welch</sponsor> (for himself and <cosponsor name-id="S409">Mr. Luján</cosponsor>) introduced the following bill; which was read twice and referred to the <committee-name committee-id="SSGA00">Committee on Homeland Security and Governmental Affairs</committee-name></action-desc></action><legis-type>A BILL</legis-type><official-title>To require systematic review of artificial intelligence systems before deployment by the Federal Government, and for other purposes.</official-title></form><legis-body><section id="S1" section-type="section-one"><enum>1.</enum><header>Short title</header><text display-inline="no-display-inline">This Act may be cited as the <quote><short-title>Trustworthy By Design Artificial Intelligence Act of 2024</short-title></quote> or the <quote><short-title>TBD AI Act of 2024</short-title></quote>.</text></section><section id="id348368f80d6b47bf841d3057bcc26e26"><enum>2.</enum><header>Definitions</header><text display-inline="no-display-inline">In this Act:</text><paragraph commented="no" display-inline="no-display-inline" id="id1a3dec150ae342538a12015d5881e991"><enum>(1)</enum><header>Artificial intelligence system</header><text display-inline="yes-display-inline">The term <term>artificial intelligence system</term> means a machine-based system that can, for a given set of machine-defined or human-defined objectives, make or inform predictions, recommendations, or decisions influencing real or virtual environments. 
Artificial intelligence systems use machine and human-based inputs to—</text><subparagraph id="iddff79800db6b433f9e70a791a7bec277"><enum>(A)</enum><text>perceive real and virtual environments;</text></subparagraph><subparagraph id="id18ac7c7289db45d2956c2ea65d2b04b8"><enum>(B)</enum><text>abstract such perceptions into models through analysis in an automated manner; and</text></subparagraph><subparagraph id="id9bb1033ef22842c4b9883650ad0f631f"><enum>(C)</enum><text>use model inference to formulate options for information or action.</text></subparagraph></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id4278fd97ae11415ab780aa43891c8b93"><enum>(2)</enum><header>Director</header><text display-inline="yes-display-inline">The term <term>Director</term> means the Director of the National Institute of Standards and Technology.</text></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id4525d76f3192439584942f13d3ba2dcf"><enum>(3)</enum><header>Federal agency</header><text>The term <term>Federal agency</term> means any Federal department, agency, or organization.</text></paragraph></section><section commented="no" display-inline="no-display-inline" id="id10a5d06b87bb4e188312b603070ec62d"><enum>3.</enum><header>Guidelines for evaluation of trustworthiness of artificial intelligence systems</header><subsection commented="no" display-inline="no-display-inline" id="id3f1f7455bb704351820eea5c8802e063"><enum>(a)</enum><header>Development required</header><paragraph commented="no" display-inline="no-display-inline" id="id7d5647ea853c448fb753b5caff2175c2"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">Not later than 1 year after the date of the enactment of this Act, the Director shall develop and release a set of guidelines for evaluation of the trustworthiness of artificial intelligence systems.</text></paragraph><paragraph id="id1535c404874a42258f27e858b9a73639"><enum>(2)</enum><header>Use 
of existing guidelines or elements</header><subparagraph commented="no" display-inline="no-display-inline" id="id84ede8ddc9d7410c85345e4658b5d0c3"><enum>(A)</enum><header>In general</header><text display-inline="yes-display-inline">In carrying out paragraph (1), the Director may use existing guidelines, best practices, or elements of guidelines or best practices that the Director identifies from other sources.</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id1f29cc1f8fc34fa598b0b046faceed55"><enum>(B)</enum><header>Annotation</header><text>If the Director uses guidelines, best practices, or elements of guidelines or best practices pursuant to subparagraph (A) of this paragraph in order to carry out paragraph (1), the Director shall clearly annotate such use in a central location and make such annotations available online to the public.</text></subparagraph></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id57b6b6995f3f467eaefb185bbb1153e7"><enum>(3)</enum><header>Periodic updates</header><text>The Director shall, on a periodic basis but not less frequently than annually, update the guidelines developed pursuant to paragraph (1).</text></paragraph></subsection><subsection id="id7f6ebf57630a466ca6d60269b2841148"><enum>(b)</enum><header>Components covered</header><text>The guidelines developed pursuant to subsection (a) for evaluation of artificial intelligence systems shall cover the following:</text><paragraph commented="no" display-inline="no-display-inline" id="id0e83e6744ab642ef92660949da2a378f"><enum>(1)</enum><text>The models used for the artificial intelligence systems.</text></paragraph><paragraph id="id56056298c94a4b30b03238e8fa8aa6d5"><enum>(2)</enum><text>The data used and activities conducted in training the artificial intelligence systems, including the collection and filtering of data.</text></paragraph><paragraph id="ida2078ae51ffc467ba750d9c79e3afbfe"><enum>(3)</enum><text>The 
processes and techniques applied after initial training to enhance the capabilities of the artificial intelligence systems, such as fine-tuning, reinforcement learning, and other post-training optimization methods. </text></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id2d01bbe38ea54d328452ecf30a96a93c"><enum>(4)</enum><text display-inline="yes-display-inline">Content generated by the artificial intelligence systems.</text></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id3dce595c99d34cfab46a4c47b76b5ad6"><enum>(5)</enum><text display-inline="yes-display-inline">The hardware systems used by the artificial intelligence systems.</text></paragraph><paragraph id="idcc9b2b27888c4179a78d0fbdd29a52d8"><enum>(6)</enum><text>Interactions between humans and the artificial intelligence systems that are expected to arise during intended or reasonably foreseeable use of the artificial intelligence systems. </text></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id2c43af6682ce442eb8d979d5c8107879"><enum>(7)</enum><text>Risks presented by anthropomorphic artificial intelligence systems.</text></paragraph></subsection><subsection id="id0e5dd2be55e14aa49073289106a113c3"><enum>(c)</enum><header>Elements of trustworthiness</header><paragraph commented="no" display-inline="no-display-inline" id="id8615f5c5277e495f9c775791e38f88a5"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">The guidelines developed pursuant to subsection (a) shall cover trustworthiness with respect to the following:</text><subparagraph commented="no" display-inline="no-display-inline" id="idcc476de93d7b4c758376ea8eb15d1c47"><enum>(A)</enum><text display-inline="yes-display-inline">Validity and reliability.</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="ida823cfb09d3f42709145faba6422eb83"><enum>(B)</enum><text 
display-inline="yes-display-inline">Safety.</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id032c82b4d1e14e6b9a13d0f73d76b094"><enum>(C)</enum><text display-inline="yes-display-inline">Security.</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id0b52ea7bbef0485d907f62014a07c8b3"><enum>(D)</enum><text display-inline="yes-display-inline">Resiliency.</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id3db52279edb245199163e7009d8678f2"><enum>(E)</enum><text display-inline="yes-display-inline">Transparency and accountability.</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id188536c079cb4fa387882fbb4ba9b048"><enum>(F)</enum><text display-inline="yes-display-inline">Explainability and interpretability.</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id76edaa209ac54f0a8a8b383ca2a2d6da"><enum>(G)</enum><text display-inline="yes-display-inline">Privacy.</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id70611094712944c3bbc9137fbb1498af"><enum>(H)</enum><text display-inline="yes-display-inline">Fairness and bias.</text></subparagraph><subparagraph id="idd652be006c9a4effaa14ed041d2c803f"><enum>(I)</enum><text>Such other matters relating to safety, security, or trustworthiness as the Director considers appropriate.</text></subparagraph></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id8ba7af938a95432aa516e443077b4a01"><enum>(2)</enum><header>Protected classes</header><text>The guidelines developed pursuant to subsection (a) shall specifically highlight and consider accuracy and bias risks relating to protected classes under Federal law.</text></paragraph></subsection><subsection id="id623c60baf0a749ec8ef31a6c2877d3be"><enum>(d)</enum><header>Applicability</header><text>The Director shall ensure that the 
guidelines developed pursuant to subsection (a) are developed so that they include an assessment of the trustworthiness of all components covered under subsection (b) with regard to all elements under subsection (c), but account for circumstances in which certain assessment methods or recommendations may not be applicable to certain components or elements.</text></subsection><subsection id="id8b475924da924acda27a96573f9e0daa"><enum>(e)</enum><header>Limitation relating to synthetic content</header><text>Under the guidelines developed pursuant to subsection (a), the Director shall identify appropriate mechanisms to manage the risks from relying on synthetic content or content created by artificial intelligence systems to improve a dataset or model, or to meet evaluation guidelines.</text></subsection><subsection id="id20ac6ecb55cd445e96f89d157dc3f631"><enum>(f)</enum><header>Developing robust guidelines</header><text>The Director shall ensure that the guidelines developed pursuant to subsection (a) are developed in such a manner that encourages transparency, cooperation, and collaboration with developers or evaluators of artificial intelligence systems, academia, and civil society sufficient to independently verify the elements set forth under subsection (c)(1).</text></subsection><subsection commented="no" display-inline="no-display-inline" id="id0154c8dcc6154d5f87f726b552905743"><enum>(g)</enum><header>Iterative evaluation</header><text display-inline="yes-display-inline">The Director shall ensure that the guidelines developed pursuant to subsection (a) cover how best to evaluate the trustworthiness of artificial intelligence systems iteratively, throughout the design, development, and deployment lifecycle of an artificial intelligence system.</text></subsection><subsection id="id62df6be327904900873981d53cd01951"><enum>(h)</enum><header>Report to Congress</header><paragraph commented="no" display-inline="no-display-inline"
id="idf522afc7509b410ea15b8917bf625206"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">The Director shall submit to Congress a report on any expected barriers to implementing and adhering to the guidelines developed pursuant to subsection (a), especially with respect to transparency, cooperation, or collaboration barriers with developers of artificial intelligence systems. </text></paragraph><paragraph id="idcfa3d5b42f6048cb8b41f6e12a7f7a55"><enum>(2)</enum><header>Form</header><text>The report under paragraph (1) shall be submitted in unclassified form, but may include a classified appendix, if necessary.</text></paragraph></subsection></section><section id="idaf278b8bd8f946348bc392eae7cc20a3"><enum>4.</enum><header>Federal deployment of artificial intelligence systems</header><subsection commented="no" display-inline="no-display-inline" id="id279de0e66bd140baa26e9c43ac7f87e9"><enum>(a)</enum><header>Covered use defined</header><text>In this section, with respect to an artificial intelligence system, the term <term>covered use</term>—</text><paragraph commented="no" display-inline="no-display-inline" id="id2c3720f98d47470bb8faa27a480937ad"><enum>(1)</enum><text display-inline="yes-display-inline">means use in any automated decision making; and</text></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id3dac83b8475241589e5693ff4754744a"><enum>(2)</enum><text display-inline="yes-display-inline">does not include any use that the Director exempts from any portion of the guidelines issued under section 3(a), including any use that—</text><subparagraph commented="no" display-inline="no-display-inline" id="id8c99725b20584c1883ff0372ab16ac41"><enum>(A)</enum><text display-inline="yes-display-inline">is subject to evaluation by existing national security assessments; or</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" 
id="id80e228ea1f2e4d7381177291e7f6e7b8"><enum>(B)</enum><text>is an edge case, especially a time sensitive or emergency case, as determined by the Director.</text></subparagraph></paragraph></subsection><subsection id="id9f16bb6ea1174fb5b78f72e2e348c91f"><enum>(b)</enum><header>Existing artificial intelligence systems</header><text>With respect to any artificial intelligence system deployed for a covered use that is in use by a Federal agency before the date of enactment of this Act, the Federal agency shall evaluate the artificial intelligence system deployment to ensure the artificial intelligence system meets the guidelines developed under section 3(a) not later than 2 years after the effective date of this section or cease using the artificial intelligence system.</text></subsection><subsection id="id8de987cbfc2144e9af5cc714162e545c"><enum>(c)</enum><header>New artificial intelligence systems</header><text>The head of each Federal agency shall ensure that each new acquisition or integration of an artificial intelligence system by the Federal agency for a covered use meets the guidelines developed under section 3(a) prior to deployment of the artificial intelligence system.</text></subsection><subsection id="idf506fe8747614dc38c9a25a979ae8598"><enum>(d)</enum><header>Labeling</header><paragraph commented="no" display-inline="no-display-inline" id="id4e93238d3a124b1aa34ad36c7587afa9"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">The head of each Federal agency shall identify as compliant, and make publicly available, in accordance with all applicable classification requirements and national security restrictions, the documentation of evaluation status and compliance details for, each artificial intelligence system deployment for a covered use by the Federal agency that meets the guidelines developed under section 3(a).</text></paragraph><paragraph commented="no" display-inline="no-display-inline" 
id="idb9a674c3887a468893996a9ceee91933"><enum>(2)</enum><header>Noncompliant deployments</header><text>Not later than 2 years after the effective date of this section, the head of each Federal agency that deploys an artificial intelligence system that is not compliant or is not evaluated pursuant to paragraph (1), shall—</text><subparagraph commented="no" display-inline="no-display-inline" id="id4b30a5138a1b45c0aa602fd45d4b4c32"><enum>(A)</enum><text display-inline="yes-display-inline">make publicly available documentation of each such deployment; and</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id362fb35c87f2426397b284b3d4916390"><enum>(B)</enum><text display-inline="yes-display-inline">with respect to each such deployment, report—</text><clause commented="no" display-inline="no-display-inline" id="id5b78aa0103814e84894f0d618421c8f7"><enum>(i)</enum><text display-inline="yes-display-inline">the status of the evaluation;</text></clause><clause commented="no" display-inline="no-display-inline" id="idf83af0f3a33d4c80b042365d02c8ec51"><enum>(ii)</enum><text display-inline="yes-display-inline">progress made towards compliance;</text></clause><clause commented="no" display-inline="no-display-inline" id="id36c3c6d6734a44ce9ff217b004944d09"><enum>(iii)</enum><text display-inline="yes-display-inline">a clear, specific justification for any delay; and</text></clause><clause commented="no" display-inline="no-display-inline" id="id8c93e454569546d48c22365fd6835025"><enum>(iv)</enum><text display-inline="yes-display-inline">any barriers to compliance, including resource constraints.</text></clause></subparagraph></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id7061ed9fe2184a4ca5aa56d21ce2d9c4"><enum>(3)</enum><header>Report to Congress</header><subparagraph commented="no" display-inline="no-display-inline" id="idc83e9b5367dc45e3a1b4103cd09934a1"><enum>(A)</enum><header>In general</header><text 
display-inline="yes-display-inline">Not later than 3 years after the effective date of this section, the head of each Federal agency shall submit to Congress a report of each deployment described in paragraph (2). </text></subparagraph><subparagraph id="id78A16C223BD949D3B1F753724282EBE9"><enum>(B)</enum><header>Form</header><text>The report under subparagraph (A) shall be submitted in unclassified form, but may include a classified appendix, if necessary.</text></subparagraph></paragraph></subsection><subsection commented="no" display-inline="no-display-inline" id="id32307c0cf83f44db92e4d772ed5c6636"><enum>(e)</enum><header>Chief AI Officers</header><paragraph commented="no" display-inline="no-display-inline" id="id5b3c7520166048ed80ec9f477396d504"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">Not later than 120 days after the effective date of this section, the head of each Federal agency shall designate a Chief Artificial Intelligence Officer, with responsibility for the management, governance, acquisition, and oversight processes of the Federal agency relating to artificial intelligence, including implementation of the guidelines developed under section 3(a).</text></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id5b9cb4fd6e7a44c1979cff70dc3694ec"><enum>(2)</enum><header>Full-time employee</header><text>To the extent practicable, each Chief Artificial Intelligence Officer designated under paragraph (1) shall be a full-time employee of the Federal agency on the date of the designation.</text></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id81def6c278a048f6949c400a19242468"><enum>(3)</enum><header>Seniority</header><text>With respect to the Chief Artificial Intelligence Officer of any agency described in section 901(b) of title 31, United States Code, the Chief Artificial Intelligence Officer shall be an executive with a position classified above GS–15 of the General 
Schedule or the equivalent. </text></paragraph></subsection><subsection commented="no" display-inline="no-display-inline" id="idb30fc4f7651d4e23bb4e4812fb5c2fc0"><enum>(f)</enum><header>Effective date</header><text>This section shall take effect on the date that the guidelines developed pursuant to section 3(a) are released. </text></subsection></section></legis-body></bill> 

