<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="billres.xsl"?>
<!DOCTYPE bill PUBLIC "-//US Congress//DTDs/bill.dtd//EN" "bill.dtd">
<bill bill-stage="Introduced-in-Senate" dms-id="A1" public-private="public" slc-id="S1-BAG24561-TV1-L7-YW4"><metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
<dublinCore>
<dc:title>118 S4230 IS: Secure Artificial Intelligence Act of 2024</dc:title>
<dc:publisher>U.S. Senate</dc:publisher>
<dc:date>2024-05-01</dc:date>
<dc:format>text/xml</dc:format>
<dc:language>EN</dc:language>
<dc:rights>Pursuant to Title 17 Section 105 of the United States Code, this file is not subject to copyright protection and is in the public domain.</dc:rights>
</dublinCore>
</metadata>
<form>
<distribution-code display="yes">II</distribution-code><congress>118th CONGRESS</congress><session>2d Session</session><legis-num>S. 4230</legis-num><current-chamber>IN THE SENATE OF THE UNITED STATES</current-chamber><action><action-date date="20240501">May 1, 2024</action-date><action-desc><sponsor name-id="S327">Mr. Warner</sponsor> (for himself and <cosponsor name-id="S384">Mr. Tillis</cosponsor>) introduced the following bill; which was read twice and referred to the <committee-name committee-id="SSCM00">Committee on Commerce, Science, and Transportation</committee-name></action-desc></action><legis-type>A BILL</legis-type><official-title>To improve the tracking and processing of security and safety incidents and risks associated with artificial intelligence, and for other purposes.</official-title></form><legis-body><section id="id51a724052ed942ef9502b33540f28769" section-type="section-one"><enum>1.</enum><header>Short title</header><text display-inline="no-display-inline">This Act may be cited as the <quote><short-title>Secure Artificial Intelligence Act of 2024</short-title></quote> or the <quote><short-title>Secure A.I. 
Act of 2024</short-title></quote>.</text></section><section id="id6fc079ceb7da4a50b354302e844d14e1" section-type="subsequent-section"><enum>2.</enum><header>Definitions</header><text display-inline="no-display-inline">In this Act:</text><paragraph id="idd631439913e74ab189f46f424d4dd670"><enum>(1)</enum><header>Artificial intelligence safety incident</header><text>The term <term>artificial intelligence safety incident</term> means an event that increases the risk that operation of an artificial intelligence system will—</text><subparagraph commented="no" display-inline="no-display-inline" id="id9395c61862c541a9a775c6b721516ea3"><enum>(A)</enum><text display-inline="yes-display-inline">result in physical or psychological harm; or</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="ide5f40ffe3e8e4e199df90b140f71265f"><enum>(B)</enum><text display-inline="yes-display-inline">lead to a state in which human life, health, property, or the environment is endangered.</text></subparagraph></paragraph><paragraph id="id13dcb8c1dfe14fc0990f8d99ad6c88ff"><enum>(2)</enum><header>Artificial intelligence security incident</header><text>The term <term>artificial intelligence security incident</term> means an event that increases—</text><subparagraph commented="no" display-inline="no-display-inline" id="id7f769361c49e4da1bb02f7507d85a9ab"><enum>(A)</enum><text display-inline="yes-display-inline">the risk that operation of an artificial intelligence system occurs in a way that enables the extraction of information about the behavior or characteristics of an artificial intelligence system by a third party; or</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id71dc5a3d34084af992338e67083055fe"><enum>(B)</enum><text display-inline="yes-display-inline">the ability of a third party to manipulate an artificial intelligence system in order to subvert the confidentiality, integrity, or availability of an artificial 
intelligence system or adjacent system.</text></subparagraph></paragraph><paragraph id="idca99d38e5a14474391ade8c92b5b20c8"><enum>(3)</enum><header>Artificial intelligence security vulnerability</header><text>The term <term>artificial intelligence security vulnerability</term> means a weakness in an artificial intelligence system that could be exploited by a third party to subvert, without authorization, the confidentiality, integrity, or availability of an artificial intelligence system, including through techniques such as—</text><subparagraph id="id2a3a1eddd3184a978654b1296b0960ad"><enum>(A)</enum><text>data poisoning;</text></subparagraph><subparagraph id="id6643d09f15264a1f9ad18e639c05215c"><enum>(B)</enum><text>evasion attacks;</text></subparagraph><subparagraph id="idc04ee9feb155439baf1ba5e81826a7f9"><enum>(C)</enum><text>privacy-based attacks; and</text></subparagraph><subparagraph id="idd3a052c52b764256b1ed483513a67493"><enum>(D)</enum><text>abuse attacks.</text></subparagraph></paragraph><paragraph id="id9bab9ad9bf484e31b7eb9d69f7060a89"><enum>(4)</enum><header>Counter-artificial intelligence</header><text>The term <term>counter-artificial intelligence</term> means techniques or procedures to extract information about the behavior or characteristics of an artificial intelligence system, or to learn how to manipulate an artificial intelligence system, in order to subvert the confidentiality, integrity, or availability of an artificial intelligence system or adjacent system. 
</text></paragraph></section><section id="idb23dc89be1cc4d53b70e0db1c75fcbcd"><enum>3.</enum><header>Voluntary tracking and processing of security and safety incidents and risks associated with artificial intelligence</header><subsection commented="no" display-inline="no-display-inline" id="idcbdb0eae24fd43d99f936ee4a2f662fe"><enum>(a)</enum><header display-inline="yes-display-inline">Processes and procedures for vulnerability management</header><text display-inline="yes-display-inline">Not later than 180 days after the date of the enactment of this Act, the Director of the National Institute of Standards and Technology shall—</text><paragraph id="id417aa381de384ac0a201541398b69d7d"><enum>(1)</enum><text>initiate a process to update processes and procedures associated with the National Vulnerability Database of the Institute to ensure that the database and associated vulnerability management processes incorporate artificial intelligence security vulnerabilities to the greatest extent practicable; and</text></paragraph><paragraph id="ida2c4e59d12554b6fbb601bf43d9d8829"><enum>(2)</enum><text>identify any characteristics of artificial intelligence security vulnerabilities that make utilization of the National Vulnerability Database inappropriate for their management and develop processes and procedures for vulnerability management for those vulnerabilities.</text></paragraph></subsection><subsection id="id2818065f9f6c4cd390c32072de521b2d"><enum>(b)</enum><header>Voluntary tracking of artificial intelligence security and artificial intelligence safety incidents</header><paragraph commented="no" display-inline="no-display-inline" id="idf8fbabca0dc54cdf8be9e102ac2ec3de"><enum>(1)</enum><header>Voluntary database required</header><text display-inline="yes-display-inline">Not later than 1 year after the date of the enactment of this Act, the Director of the Institute, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency, 
shall—</text><subparagraph id="idee35c3c1c44c46cb9f3274ec96fab42d"><enum>(A)</enum><text>develop and establish a comprehensive, voluntary database to publicly track artificial intelligence security and artificial intelligence safety incidents; and</text></subparagraph><subparagraph id="idf3f1ab73735e45bd8552d8ecbda6ed80"><enum>(B)</enum><text>in establishing the database under subparagraph (A)—</text><clause id="id396d0d760dfd4dd4aa0dc9025a154c9c"><enum>(i)</enum><text>establish mechanisms by which private sector entities, public sector organizations, civil society groups, and academic researchers may voluntarily share information with the Institute on confirmed or suspected artificial intelligence security or artificial intelligence safety incidents, in a manner that preserves confidentiality of any affected party;</text></clause><clause id="idf9becece7a6b4b22a6c3979e6d54fd2c"><enum>(ii)</enum><text>leverage, to the greatest extent possible, standardized disclosure and incident description formats;</text></clause><clause id="idfdfc90ec3bc94eda85275dea79c52bd6"><enum>(iii)</enum><text>develop processes to associate reports pertaining to the same incident with a single incident identifier; </text></clause><clause id="idc193d60de9a94f4f8a4b03e461f5a979"><enum>(iv)</enum><text>establish classification, information retrieval, and reporting mechanisms that sufficiently differentiate between artificial intelligence security incidents and artificial intelligence safety incidents; and </text></clause><clause id="id5011f2cf99d84463b123ded0c1af8234"><enum>(v)</enum><text>create appropriate taxonomies to classify incidents based on relevant characteristics, impact, or other relevant criteria.</text></clause></subparagraph></paragraph><paragraph id="id0628571053d24898b84540b1ad22cd67"><enum>(2)</enum><header>Identification and treatment of material artificial intelligence security or artificial intelligence safety risks</header><subparagraph commented="no" 
display-inline="no-display-inline" id="idc49f163fc6584dac9cea6dcff9d1e6ae"><enum>(A)</enum><header>In general</header><text display-inline="yes-display-inline">Upon receipt of relevant information on an artificial intelligence security or artificial intelligence safety incident, the Director of the Institute shall determine whether the described incident presents a material artificial intelligence security or artificial intelligence safety risk sufficient for inclusion in the database developed and established under paragraph (1).</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id573eccbf8ca84b3b99937d8a963df140"><enum>(B)</enum><header>Priorities</header><text display-inline="yes-display-inline">In evaluating a reported incident pursuant to paragraph (1), the Director shall prioritize for inclusion in the database cases in which a described incident—</text><clause id="id32ad274617904632bbdbfabc8be2146d"><enum>(i)</enum><text>describes an artificial intelligence system used in critical infrastructure or safety-critical systems;</text></clause><clause id="ide63d1d0c09be4f93b505b1cdd05680f8"><enum>(ii)</enum><text>would result in a high-severity or catastrophic impact to the people or economy of the United States; or</text></clause><clause id="idbc7340ead71c48fcaf5248bc95ee4e0f"><enum>(iii)</enum><text>includes an artificial intelligence system widely used in commercial or public sector contexts.</text></clause></subparagraph></paragraph><paragraph id="id32e864f4f15a4e2d950aa3ccfe9988ad"><enum>(3)</enum><header>Reports and anonymity</header><text>The Director shall populate the voluntary database developed and established under paragraph (1) with incidents based on public reports and information shared using the mechanism established pursuant to subparagraph (B)(i) of such paragraph, ensuring that any incident description sufficiently anonymizes those affected, unless those who are affected have consented to their names being 
included in the database.</text></paragraph></subsection></section><section id="id9607aa2764114aea8edcde756006dbf2"><enum>4.</enum><header>Updating processes and procedures relating to Common Vulnerabilities and Exposures Program and evaluation of consensus standards relating to artificial intelligence security vulnerability reporting</header><subsection commented="no" display-inline="no-display-inline" id="idceb152a53641427a93a08878ab6a1a82"><enum>(a)</enum><header display-inline="yes-display-inline">Definitions</header><text>In this section:</text><paragraph commented="no" display-inline="no-display-inline" id="id33540ff12680442db0dedd2867e11b17"><enum>(1)</enum><header>Common Vulnerabilities and Exposures Program</header><text>The term <term>Common Vulnerabilities and Exposures Program</term> means the reference guide and classification system for publicly known information security vulnerabilities sponsored by the Cybersecurity and Infrastructure Security Agency. </text></paragraph><paragraph commented="no" display-inline="no-display-inline" id="idecc742d48bbe45b2b9948b6a5af95fbc"><enum>(2)</enum><header>Relevant congressional committees</header><text>The term <term>relevant congressional committees</term> means—</text><subparagraph commented="no" display-inline="no-display-inline" id="id7c60fc5911ae43db9d93ffac98a573ba"><enum>(A)</enum><text display-inline="yes-display-inline">the Committee on Homeland Security and Governmental Affairs, the Committee on Commerce, Science, and Transportation, the Select Committee on Intelligence, and the Committee on the Judiciary of the Senate; and</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id8150c7424ebf494aaa002113e8142943"><enum>(B)</enum><text>the Committee on Oversight and Accountability, the Committee on Energy and Commerce, the Permanent Select Committee on Intelligence, and the Committee on the Judiciary of the House of 
Representatives.</text></subparagraph></paragraph></subsection><subsection commented="no" display-inline="no-display-inline" id="id39cad5ca193a47c9a193deee28afedf1"><enum>(b)</enum><header display-inline="yes-display-inline">In general</header><text display-inline="yes-display-inline">Not later than 180 days after the date of enactment of this Act, the Director of the Cybersecurity and Infrastructure Security Agency shall—</text><paragraph id="id5203b19f7d094710b861693bf5209438"><enum>(1)</enum><text>initiate a process to update processes and procedures associated with the Common Vulnerabilities and Exposures Program to ensure that the program and associated processes identify and enumerate artificial intelligence security vulnerabilities to the greatest extent practicable; and</text></paragraph><paragraph id="idd32a2229f72f472187bfd3ddb3d78f4c"><enum>(2)</enum><text>identify any characteristics of artificial intelligence security vulnerabilities that make utilization of the Common Vulnerabilities and Exposures Program inappropriate for their management and develop processes and procedures for vulnerability identification and enumeration for those artificial intelligence security vulnerabilities.</text></paragraph></subsection><subsection id="idc529df1866d54272b4defa8ddf28a9ce"><enum>(c)</enum><header>Evaluation of consensus standards</header><paragraph commented="no" display-inline="no-display-inline" id="idcd027c54ba28473fb5919a32abde2653"><enum>(1)</enum><header display-inline="yes-display-inline">In general</header><text>Not later than 30 days after the date of enactment of this Act, the Director of the Cybersecurity and Infrastructure Security Agency shall initiate a multi-stakeholder process to evaluate whether existing voluntary consensus standards for vulnerability reporting effectively accommodate artificial intelligence security vulnerabilities.</text></paragraph><paragraph commented="no" display-inline="no-display-inline" 
id="id6e31e3334187400780cf6156f5d0cc06"><enum>(2)</enum><header>Report</header><subparagraph commented="no" display-inline="no-display-inline" id="ida098ee58e9734d25b4b5ede37532ac46"><enum>(A)</enum><header>Submission</header><text display-inline="yes-display-inline">Not later than 180 days after the date on which the evaluation under paragraph (1) is carried out, the Director shall submit a report to the relevant congressional committees on the sufficiency of existing vulnerability reporting processes and standards to accommodate artificial intelligence security vulnerabilities. </text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="ide9f401939a594f6ead6d1674a7cac068"><enum>(B)</enum><header>Post-report action</header><text display-inline="yes-display-inline">If the Director concludes in the report submitted under subparagraph (A) that existing processes do not sufficiently accommodate reporting of artificial intelligence security vulnerabilities, the Director shall initiate a process, in consultation with the Director of the National Institute of Standards and Technology and the Director of the Office of Management and Budget, to update relevant vulnerability reporting processes, including the Department of Homeland Security Binding Operational Directive 20–01, or any subsequent directive. 
</text></subparagraph></paragraph></subsection><subsection id="idb4d441f2abee4dec8d695b804b726f2b"><enum>(d)</enum><header>Best practices</header><text>Not later than 90 days after the date of enactment of this Act, the Director of the Cybersecurity and Infrastructure Security Agency shall, in collaboration with the Director of the National Security Agency and the Director of the National Institute of Standards and Technology and by leveraging efforts of the Information and Communications Technology Supply Chain Risk Management Task Force to the greatest extent practicable, convene a multi-stakeholder process to encourage the development and adoption of best practices relating to addressing supply chain risks associated with training and maintaining artificial intelligence models, which shall ensure consideration of supply chain risks associated with—</text><paragraph id="idd6b3c95c8f174f6388e1f9cbeebcbfe6"><enum>(1)</enum><text>data collection, cleaning, and labeling, particularly the supply chain risks of reliance on remote workforce and foreign labor for such tasks;</text></paragraph><paragraph id="id5d672a1a50a14f4c953ec8f1d9b0fdc5"><enum>(2)</enum><text>inadequate documentation of training data and test data storage, as well as limited provenance of training data; </text></paragraph><paragraph id="id9f0100450ed0451d895ed971d7ff50c6"><enum>(3)</enum><text>human feedback systems used to refine artificial intelligence systems, particularly the supply chain risks of reliance on remote workforce and foreign labor for such tasks;</text></paragraph><paragraph id="id6ad33cb544814736a346edb8d00b87e2" commented="no" display-inline="no-display-inline"><enum>(4)</enum><text>the use of large-scale, open-source datasets, particularly the supply chain risks to repositories that host such datasets for use by public and private sector developers in the United States; and</text></paragraph><paragraph id="id8854fc078d3e44199873ce51eba33b35" commented="no" 
display-inline="no-display-inline"><enum>(5)</enum><text>the use of proprietary datasets containing sensitive or personally identifiable information. </text></paragraph></subsection><subsection display-inline="no-display-inline" commented="no" id="id8F2FC7DB59CD41DF906D4B8F75035429"><enum>(e)</enum><header>Rule of construction</header><text>To the extent practicable, the Director shall examine the reporting requirements pursuant to the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (division Y of <external-xref legal-doc="public-law" parsable-cite="pl/117/103">Public Law 117–103</external-xref>) and the amendments made by that division and ensure that the requirements under this section are not duplicative of requirements set forth in that division and the amendments made by that division. </text></subsection></section><section id="id418ce2c6f64440c8bba9e31e4216b9cb"><enum>5.</enum><header>Establishment of Artificial Intelligence Security Center</header><subsection commented="no" display-inline="no-display-inline" id="id85465d2fc3ed4f61a99ae8320a209ba8"><enum>(a)</enum><header>Establishment</header><text display-inline="yes-display-inline">Not later than 90 days after the date of the enactment of this Act, the Director of the National Security Agency shall establish an Artificial Intelligence Security Center within the Cybersecurity Collaboration Center of the National Security Agency.</text></subsection><subsection commented="no" display-inline="no-display-inline" id="ida580b0c4198f4243868bc891ade10079"><enum>(b)</enum><header>Functions</header><text display-inline="yes-display-inline">The functions of the Artificial Intelligence Security Center shall be as follows:</text><paragraph id="ide24d6ccc81274ba1ba2c5c768207cfb0"><enum>(1)</enum><text>Making available a research test-bed to private sector and academic researchers, on a subsidized basis, to engage in artificial intelligence security research, including through the provision of 
access in a secure environment to proprietary third-party models with the consent of the vendors of the models.</text></paragraph><paragraph id="id04d127f21c544bef978cdbb0ed9940e2"><enum>(2)</enum><text>Developing guidance to prevent or mitigate counter-artificial intelligence techniques.</text></paragraph><paragraph id="id588da3013fcd4cd884d06fbe8e76f216"><enum>(3)</enum><text>Promoting secure artificial intelligence adoption practices for managers of national security systems (as defined in section 3552 of title 44, United States Code) and elements of the defense industrial base. </text></paragraph><paragraph id="id8E0D60111FAA4FE7A0A396767972E15F"><enum>(4)</enum><text>Coordinating with the Artificial Intelligence Safety Institute within the National Institute of Standards and Technology.</text></paragraph><paragraph commented="no" display-inline="no-display-inline" id="id068543b74e1640f6bfdd26e660431d36"><enum>(5)</enum><text>Such other functions as the Director considers appropriate.</text></paragraph></subsection><subsection commented="no" display-inline="no-display-inline" id="id8c6e4b69174343cc8097b7a2c55b23cb"><enum>(c)</enum><header>Test-Bed requirements</header><paragraph id="id4df2baa38dc24940b49d8b6dba339f1f"><enum>(1)</enum><header>Access and terms of usage</header><subparagraph commented="no" display-inline="no-display-inline" id="idbd7ce37f04694b5c9ab85b48f44a9894"><enum>(A)</enum><header>Researcher access</header><text display-inline="yes-display-inline">The Director shall establish terms of usage governing researcher access to the test-bed made available under subsection (b)(1), with limitations on researcher publication only to the extent necessary to protect classified information or proprietary information concerning third-party models provided through the consent of model vendors.</text></subparagraph><subparagraph commented="no" display-inline="no-display-inline" id="id99c05146aa5f4588919634e422aa51cf"><enum>(B)</enum><header>Availability to 
Federal agencies</header><text display-inline="yes-display-inline">The Director shall ensure that the test-bed made available under subsection (b)(1) is also made available to other Federal agencies on a cost-recovery basis.</text></subparagraph></paragraph><paragraph commented="no" display-inline="no-display-inline" id="ide2656fbb34c8480d8d2b3bb4ddd2623c"><enum>(2)</enum><header>Use of certain infrastructure and other resources</header><text>In carrying out subsection (b)(1), the Director shall leverage, to the greatest extent practicable, infrastructure and other resources provided under section 5.2 of the Executive Order dated October 30, 2023 (relating to safe, secure, and trustworthy development and use of artificial intelligence).</text></paragraph></subsection><subsection id="idc245a30e2d3c41c9a33921ce164d0866" commented="no"><enum>(d)</enum><header>Access to proprietary models</header><text>In carrying out this section, the Director shall establish such mechanisms as the Director considers appropriate, including potential contractual incentives, to ensure the provision of access to proprietary models by qualified independent, third-party researchers, provided that commercial model vendors have voluntarily provided models and associated resources for such testing.</text></subsection></section></legis-body></bill>