<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="billres.xsl"?>
<!DOCTYPE bill PUBLIC "-//US Congress//DTDs/bill.dtd//EN" "bill.dtd">
<bill bill-stage="Introduced-in-Senate" dms-id="A1" public-private="public" slc-id="S1-LIP24391-DGK-TC-K6C">
<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
<dublinCore>
<dc:title>118 S4495 IS: Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment for Artificial Intelligence Act</dc:title>
<dc:publisher>U.S. Senate</dc:publisher>
<dc:date>2024-06-11</dc:date>
<dc:format>text/xml</dc:format>
<dc:language>EN</dc:language>
<dc:rights>Pursuant to Title 17 Section 105 of the United States Code, this file is not subject to copyright protection and is in the public domain.</dc:rights>
</dublinCore>
</metadata>
<form>
<distribution-code display="yes">II</distribution-code>
<congress>118th CONGRESS</congress><session>2d Session</session>
<legis-num>S. 4495</legis-num>
<current-chamber>IN THE SENATE OF THE UNITED STATES</current-chamber>
<action>
<action-date date="20240611">June 11, 2024</action-date>
<action-desc><sponsor name-id="S380">Mr. Peters</sponsor> (for himself and <cosponsor name-id="S384">Mr. Tillis</cosponsor>) introduced the following bill; which was read twice and referred to the <committee-name committee-id="SSGA00">Committee on Homeland Security and Governmental Affairs</committee-name></action-desc>
</action>
<legis-type>A BILL</legis-type>
<official-title>To enable safe, responsible, and agile procurement, development, and use of artificial intelligence by the Federal Government, and for other purposes.</official-title>
</form>
<legis-body id="H464AB54E82EC4CF49AAEAED7C359C1C2">
<section id="S1" section-type="section-one"><enum>1.</enum><header>Short title</header><text display-inline="no-display-inline">This Act may be cited as the <quote><short-title>Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment for Artificial Intelligence Act</short-title></quote> or the <quote><short-title>PREPARED for AI Act</short-title></quote>.</text></section> <section commented="no" display-inline="no-display-inline" id="idbc8e43dbe684484984310d84e9f99f83"><enum>2.</enum><header>Definitions</header><text display-inline="no-display-inline">In this Act:</text>
<paragraph id="id72c547253d7a481196b62d36ce8c6dbc"><enum>(1)</enum><header>Adverse incident</header><text>The term <term>adverse incident</term> means any incident or malfunction of artificial intelligence that directly or indirectly leads to—</text> <subparagraph id="ide4c4c90b469b4ece9a905e9b926aff2c"><enum>(A)</enum><text>harm impacting rights or safety, as described in section 7(a)(2)(D);</text></subparagraph>
<subparagraph id="idd99f65cd529c43dd8863aa53e3805a47"><enum>(B)</enum><text>the death of an individual or damage to the health of an individual;</text></subparagraph> <subparagraph id="id5e7d02e95cb84728a2319021ab70f2dd"><enum>(C)</enum><text>material or irreversible disruption of the management and operation of critical infrastructure, as described in section 7(a)(2)(D)(i)(II)(cc);</text></subparagraph>
<subparagraph id="id43044898cffd47a38e9927356c8754c8"><enum>(D)</enum><text>material damage to property or the environment;</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id6755e24c9c7f4cd1ba62c63302d119ea"><enum>(E)</enum><text>loss of a mission-critical system or equipment;</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="idef8664e55b8d44d5a840c852fddac5e6"><enum>(F)</enum><text>failure of the mission of an agency;</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id722f608cb1ce4d44b8f63ef3ae2f75ae"><enum>(G)</enum><text display-inline="yes-display-inline">the denial of a benefit, payment, or other service to an individual or group of individuals who would have otherwise been eligible;</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id139b97f0671445b2ae160886d9abb340"><enum>(H)</enum><text>the denial of an employment, contract, grant, or similar opportunity that would have otherwise been offered; or</text></subparagraph> <subparagraph id="idf0847ec0b7ec4b4bb3c142d13b1dbdc1" commented="no" display-inline="no-display-inline"><enum>(I)</enum><text>another consequence, as determined by the Director with public notice. </text></subparagraph></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="id5e5c606e4f8f46198f816d6e7a8f4106"><enum>(2)</enum><header>Agency</header><text display-inline="yes-display-inline">The term <term>agency</term>— </text> <subparagraph commented="no" display-inline="no-display-inline" id="idf8f19664460e48acbc5fc09f8497b4ff"><enum>(A)</enum><text display-inline="yes-display-inline">has the meaning given that term in section 3502(1) of title 44, United States Code; and</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id2cf318618d4841a58574e67d81536d96"><enum>(B)</enum><text>includes each of the independent regulatory agencies described in section 3502(5) of title 44, United States Code.</text></subparagraph></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id44fb9c54b4b44c8284d0a53a1272753b"><enum>(3)</enum><header>Artificial intelligence</header><text>The term <term>artificial intelligence</term>—</text>
<subparagraph commented="no" display-inline="no-display-inline" id="id15cb86050dd8455f8da3ee9b6bc75a15"><enum>(A)</enum><text display-inline="yes-display-inline">has the meaning given that term in section 5002 of the National Artificial Intelligence Initiative Act of 2020 (<external-xref legal-doc="usc" parsable-cite="usc/15/9401">15 U.S.C. 9401</external-xref>); and</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="idea23bfae9ddb449fac6807d44311d50c"><enum>(B)</enum><text display-inline="yes-display-inline">includes the artificial systems and techniques described in paragraphs (1) through (5) of section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (<external-xref legal-doc="public-law" parsable-cite="pl/115/232">Public Law 115–232</external-xref>; <external-xref legal-doc="usc" parsable-cite="usc/10/4061">10 U.S.C. 4061</external-xref> note prec.). </text></subparagraph></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="idd9f5e987484f4e299a1a90b770a1ae69"><enum>(4)</enum><header>Biometric data</header><text display-inline="yes-display-inline">The term <term>biometric data</term> means data resulting from specific technical processing relating to the unique physical, physiological, or behavioral characteristics of an individual, including facial images, dactyloscopic data, physical movement and gait, breath, voice, DNA, blood type, and expression of emotion, thought, or feeling. </text></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id9a1a091288c7408c8f9dd6921006f2a6"><enum>(5)</enum><header>Commercial technology</header><text>The term <term>commercial technology</term>—</text>
<subparagraph commented="no" display-inline="no-display-inline" id="id9fc232fb7a6c458aba755de48fca9c7a"><enum>(A)</enum><text display-inline="yes-display-inline">means a technology, process, or method, including research or development; and </text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id297aa85fdc8c41d89bb701f283c07fa7"><enum>(B)</enum><text display-inline="yes-display-inline">includes commercial products, commercial services, and other commercial items, as defined in the Federal Acquisition Regulation, including any addition or update thereto by the Federal Acquisition Regulatory Council.</text></subparagraph></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="id8c0ddc6f90c343c580a13c49b2210147"><enum>(6)</enum><header>Council</header><text>The term <term>Council</term> means the Chief Artificial Intelligence Officers Council established under section 5(a).</text></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id2910512171ae48bd90b189e212fd4935"><enum>(7)</enum><header>Deployer</header><text>The term <term>deployer</term> means an entity that operates or provides artificial intelligence, whether developed internally or by a third-party developer.</text></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="id4977057767614fa182b05b87340b0c00"><enum>(8)</enum><header>Developer</header><text>The term <term>developer</term> means an entity that designs, codes, produces, or owns artificial intelligence.</text></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id6b9397a153e84637b36195c17bc55271"><enum>(9)</enum><header>Director</header><text display-inline="yes-display-inline">The term <term>Director</term> means the Director of the Office of Management and Budget.</text></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="id3a68e05cf80c4ff6b7230f0bd889850a"><enum>(10)</enum><header>Impact assessment</header><text>The term <term>impact assessment</term> means a structured process for considering the implications of a proposed artificial intelligence use case.</text></paragraph> <paragraph id="id23444bf5e4e34d7fa7ccb4c4b8b7b077"><enum>(11)</enum><header>Operational design domain</header><text>The term <term>operational design domain</term> means a set of operating conditions for an automated system.</text></paragraph>
<paragraph id="idfc194c13d9db4e9a8addece9546e81ad"><enum>(12)</enum><header>Procure or obtain</header><text>The term <term>procure or obtain</term> means—</text> <subparagraph id="id0e492826fe3b43a19ba4fddff4a87378"><enum>(A)</enum><text>to acquire through contract actions awarded pursuant to the Federal Acquisition Regulation, including through interagency agreements, multi-agency use, and purchase card transactions;</text></subparagraph>
<subparagraph id="idc68932fc832a40a0bbfc2cca6bc4b314"><enum>(B)</enum><text>to acquire through contracts and agreements awarded through other special procurement authorities, including through other transactions and commercial solutions opening authorities; or</text></subparagraph> <subparagraph id="idb43a83b28e2747a39d4544f54ea3dbcd"><enum>(C)</enum><text>to obtain through other means, including through open source platforms or freeware.</text></subparagraph></paragraph>
<paragraph id="ide8557c7837254934aa890e8d62e973ed"><enum>(13)</enum><header>Relevant congressional committees</header><text>The term <term>relevant congressional committees</term> means the <committee-name committee-id="SSGA00">Committee on Homeland Security and Governmental Affairs of the Senate</committee-name> and the <committee-name committee-id="">Committee on Oversight and Accountability of the House of Representatives</committee-name>.</text></paragraph> <paragraph id="idf0d5288c1af8415caa21725bdae6ccae"><enum>(14)</enum><header>Risk</header><text>The term <term>risk</term> means the combination of the probability of an occurrence of harm and the potential severity of that harm.</text></paragraph>
<paragraph id="id35e54b1301ea40878935eeb4162d2c88"><enum>(15)</enum><header>Use case</header><text>The term <term>use case</term> means the ways and context in which artificial intelligence is operated to perform a specific function.</text></paragraph></section> <section id="id89a6ba649e62402c941ffb423b4b64cf"><enum>3.</enum><header>Implementation of requirements</header> <subsection commented="no" display-inline="no-display-inline" id="id39c1b551c37343ea9cf5dfeb13914228"><enum>(a)</enum><header>Agency implementation</header><text display-inline="yes-display-inline">Not later than 1 year after the date of enactment of this Act, the Director shall ensure that agencies have implemented the requirements of this Act. </text></subsection>
<subsection commented="no" display-inline="no-display-inline" id="idbc87cea78cbf417192588953ebe854d5"><enum>(b)</enum><header>Annual briefing</header><text display-inline="yes-display-inline">Not later than 180 days after the date of enactment of this Act, and annually thereafter, the Director shall brief the relevant congressional committees on implementation of this Act and related considerations.</text></subsection></section> <section commented="no" display-inline="no-display-inline" section-type="subsequent-section" id="idb1bcddc9e0ca4baaaa826d3fd921ae40"><enum>4.</enum><header display-inline="yes-display-inline">Procurement of artificial intelligence</header> <subsection id="id48af4d899f6e49118f3e00396c28b022"><enum>(a)</enum><header>Government-Wide requirements</header> <paragraph id="idddf19aeb7672432cbe37a681414a2a0c"><enum>(1)</enum><header>In general</header><text>Not later than 1 year after the date of enactment of this Act, the Federal Acquisition Regulatory Council shall review Federal Acquisition Regulation acquisition planning, source selection, and other requirements and update the Federal Acquisition Regulation as needed to ensure that agency procurement of artificial intelligence includes—</text>
<subparagraph id="id6a1913799c344111b7756d552de3ea96"><enum>(A)</enum><text>a requirement to address the outcomes of the risk evaluation and impact assessments required under section 8(a);</text></subparagraph> <subparagraph id="id8f5a2c81e4bf4eb6831a7635ad166ff8"><enum>(B)</enum><text>a requirement for consultation with an interdisciplinary team of agency experts prior to, and throughout, as necessary, procuring or obtaining artificial intelligence; and</text></subparagraph>
<subparagraph id="id3fe9b99990604b58b2b373765f400f02"><enum>(C)</enum><text>any other considerations determined relevant by the Federal Acquisition Regulatory Council.</text></subparagraph></paragraph> <paragraph id="id06d7dea911f04ee6b76177f052290a44"><enum>(2)</enum><header>Interdisciplinary team of experts</header><text>The interdisciplinary team of experts described in paragraph (1)(B) may—</text>
<subparagraph id="id6031004395794c44ab8b50fba4a1545a"><enum>(A)</enum><text>vary depending on the use case and the risks determined to be associated with the use case; and</text></subparagraph> <subparagraph id="ideee72ba2461e482e8ddda010f05e264e"><enum>(B)</enum><text>include technologists, information security personnel, domain experts, privacy officers, data officers, civil rights and civil liberties officers, contracting officials, legal counsel, customer experience professionals, and others.</text></subparagraph></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="idcab3b7181a6c4d2e8e3ed3e860480397"><enum>(3)</enum><header>Acquisition planning</header><text>The acquisition planning updates described in paragraph (1) shall include considerations for, at minimum, as appropriate depending on the use case—</text> <subparagraph id="id5893c3cf6a554234abb076a4ea343899"><enum>(A)</enum><text>data ownership and privacy;</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id847301899f2c46d09e9a21c61689bad1"><enum>(B)</enum><text>data information security;</text></subparagraph> <subparagraph id="id27fda495630f4a91834575ad8809f9d6"><enum>(C)</enum><text>interoperability requirements;</text></subparagraph>
<subparagraph id="ide5385a21c2c24a59985923fb295df423"><enum>(D)</enum><text>data and model assessment processes;</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id2474709a89f248a7b776a384c8cba471"><enum>(E)</enum><text>scope of use;</text></subparagraph>
<subparagraph id="id31f7b475eedd4228a388089e5ff24e53"><enum>(F)</enum><text>ongoing monitoring techniques;</text></subparagraph> <subparagraph id="idc175eec14b9245c8ab348913e46377a5"><enum>(G)</enum><text>type and scope of artificial intelligence audits;</text></subparagraph>
<subparagraph id="id6b1c08b855784dbb89b5ff5cabdd5fbc"><enum>(H)</enum><text>environmental impact; and</text></subparagraph> <subparagraph id="id9eee989069a84a36a898ac465d2387a4"><enum>(I)</enum><text>safety and security risk mitigation techniques, including a plan for how adverse event reporting can be incorporated, pursuant to section 5(g).</text></subparagraph></paragraph></subsection>
<subsection id="id6ba74ebe653d47d1867316f73837ae19"><enum>(b)</enum><header>Requirements for high risk use cases</header>
<paragraph id="id1af29dc6b0b74047ad7d871e1237d1a2"><enum>(1)</enum><header>In general</header>
<subparagraph id="idbdacab49cdf04239b312cb83a0ec9cdd"><enum>(A)</enum><header>Establishment</header><text>Beginning on the date that is 1 year after the date of enactment of this Act, the head of an agency may not procure or obtain artificial intelligence for a high risk use case, as defined in section 7(a)(2)(D), prior to establishing and incorporating certain terms into relevant contracts, agreements, and employee guidelines for artificial intelligence, including—</text> <clause id="id489daae0a86c4bbaadbcfea39393b040"><enum>(i)</enum><text>a requirement that the use of the artificial intelligence be limited to its operational design domain;</text></clause>
<clause id="idc7ddb4480e20497e8c86b10bc841c0af"><enum>(ii)</enum><text>requirements for safety, security, and trustworthiness, including—</text> <subclause id="id164b87716bf345909a46371987474158"><enum>(I)</enum><text>a reporting mechanism through which agency personnel are notified by the deployer of any adverse incident;</text></subclause>
<subclause id="id54b7aa7d827146b18db318d7839e200d"><enum>(II)</enum><text>a requirement, in accordance with section 5(g), that agency personnel receive from the deployer a notification of any adverse incident, an explanation of the cause of the adverse incident, and any data directly connected to the adverse incident in order to address and mitigate the harm; and</text></subclause> <subclause id="id1f3204fb144449d1b937933e16f8ea01"><enum>(III)</enum><text>that the agency has the right to temporarily or permanently suspend use of the artificial intelligence if—</text>
<item id="ida028b36bd937436ea04b7d63d19be6d6"><enum>(aa)</enum><text>the risks of the artificial intelligence to rights or safety become unacceptable, as determined under the agency risk classification system pursuant to section 7; or</text></item> <item id="id6beb7e4ec35d41a7a5f8d19cd71b9985"><enum>(bb)</enum><text>on or after the date that is 180 days after the publication of the most recently updated version of the framework developed and updated pursuant to section 22A(c) of the National Institute of Standards and Technology Act (<external-xref legal-doc="usc" parsable-cite="usc/15/278h-1">15 U.S.C. 278h–1(c)</external-xref>), the deployer is found not to comply with such most recent update;</text></item></subclause></clause>
<clause id="id354c50d86e6e40ee9412762e45deec00"><enum>(iii)</enum><text>requirements for quality, relevance, sourcing and ownership of data, as appropriate by use case, and applicable unless the head of the agency waives such requirements in writing, including—</text> <subclause id="id2831083aa65d439b97344ca3dc7931f1"><enum>(I)</enum><text>retention of rights to Government data and any modification to the data including to protect the data from unauthorized disclosure and use to subsequently train or improve the functionality of commercial products offered by the deployer, any relevant developers, or others; and</text></subclause>
<subclause id="id0a2699a160ab4e2794948ecd3f524cd9"><enum>(II)</enum><text>a requirement that the deployer and any relevant developers or other parties isolate Government data from all other data, through physical separation, electronic separation via secure copies with strict access controls, or other computational isolation mechanisms; </text></subclause></clause> <clause id="id8810fc529078453cae8624894565433b"><enum>(iv)</enum><text>requirements for evaluation and testing of artificial intelligence based on use case, to be performed on an ongoing basis; and</text></clause>
<clause commented="no" display-inline="no-display-inline" id="id12513bf81332494c8be08b5be48307c4"><enum>(v)</enum><text>requirements that the deployer and any relevant developers provide documentation, as determined necessary and requested by the agency, in accordance with section 8(b).</text></clause></subparagraph> <subparagraph id="ida540bca4913a4840ad60eef36d60c030"><enum>(B)</enum><header>Review</header><text>The Senior Procurement Executive, in coordination with the Chief Artificial Intelligence Officer, shall consult with technologists, information security personnel, domain experts, privacy officers, data officers, civil rights and civil liberties officers, contracting officials, legal counsel, customer experience professionals, and other relevant agency officials to review the requirements described in clauses (i) through (v) of subparagraph (A) and determine whether it may be necessary to incorporate additional requirements into relevant contracts or agreements.</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id546e276314d04dcaa1a4f2b13671563e"><enum>(C)</enum><header>Regulation</header><text>The Federal Acquisition Regulatory Council shall revise the Federal Acquisition Regulation as necessary to implement the requirements of this subsection.</text></subparagraph></paragraph> <paragraph id="idbfd3716ba2e641cfb16c7fdb8a7c799c"><enum>(2)</enum><header>Rules of construction</header><text>This Act shall supersede any requirements that conflict with this Act under the guidance required to be produced by the Director pursuant to section 7224(d) of the Advancing American AI Act (<external-xref legal-doc="usc" parsable-cite="usc/40/11301">40 U.S.C. 11301</external-xref> note).</text></paragraph></subsection></section>
<section id="idbe3a5d969e0d410491d17f0d91bfb4a2"><enum>5.</enum><header>Interagency governance of artificial intelligence</header>
<subsection commented="no" display-inline="no-display-inline" id="idecbcfc79a6314b3cb66213dbdb5c9ad2"><enum>(a)</enum><header>Chief artificial intelligence officers council</header><text>Not later than 60 days after the date of enactment of this Act, the Director shall establish a Chief Artificial Intelligence Officers Council.</text></subsection> <subsection commented="no" display-inline="no-display-inline" id="id96ec29e55014437f87c87705549dba12"><enum>(b)</enum><header>Duties</header><text>The duties of the Council shall include—</text>
<paragraph commented="no" display-inline="no-display-inline" id="id42f003f6e47b47b387a2dfde0b6ea3cc"><enum>(1)</enum><text display-inline="yes-display-inline">coordinating agency development and use of artificial intelligence in agency programs and operations, including practices relating to the design, operation, risk management, and performance of artificial intelligence;</text></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="idf3e6d933da1a48afacf5e6d8ab0c28f8"><enum>(2)</enum><text display-inline="yes-display-inline">sharing experiences, ideas, best practices, and innovative approaches relating to artificial intelligence; and</text></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="idf83972bd9f1347beaefab12ac2c33d4d"><enum>(3)</enum><text display-inline="yes-display-inline">assisting the Director, as necessary, with respect to—</text> <subparagraph commented="no" display-inline="no-display-inline" id="id6c0347d84ff44dcba56cd8603b2542cc"><enum>(A)</enum><text display-inline="yes-display-inline">the identification, development, and coordination of multi-agency projects and other initiatives, including initiatives to improve Government performance;</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="idda0c17fec976438f91c5873865e25509"><enum>(B)</enum><text display-inline="yes-display-inline">the management of risks relating to developing, obtaining, or using artificial intelligence, including by developing a common template to guide agency Chief Artificial Intelligence Officers in implementing a risk classification system that may incorporate best practices, such as those from—</text> <clause commented="no" display-inline="no-display-inline" id="idd0f450fa59314277ac10dc05ceead5ba"><enum>(i)</enum><text display-inline="yes-display-inline">the most recently updated version of the framework developed and updated pursuant to section 22A(c) of the National Institute of Standards and Technology Act (<external-xref legal-doc="usc" parsable-cite="usc/15/278h-1">15 U.S.C. 278h–1(c)</external-xref>); and</text></clause>
<clause commented="no" display-inline="no-display-inline" id="id8868f0eb552648bf957df144f2b2d8aa"><enum>(ii)</enum><text display-inline="yes-display-inline">the report published by the Government Accountability Office entitled <quote>Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities</quote> (GAO–21–519SP), published on June 30, 2021;</text></clause></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id560bb636c67243eaa52a6973100de36a"><enum>(C)</enum><text display-inline="yes-display-inline">promoting the development and use of efficient, effective, common, shared, or other approaches to key processes that improve the delivery of services for the public; and</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="ide2301692f177423a88c635537d02c97b"><enum>(D)</enum><text display-inline="yes-display-inline">soliciting and providing perspectives on matters of concern, including from and to—</text> <clause commented="no" display-inline="no-display-inline" id="id53dd5d1280ab4fe4a30672fe4e9573ac"><enum>(i)</enum><text display-inline="yes-display-inline">interagency councils;</text></clause>
<clause commented="no" display-inline="no-display-inline" id="id95bdcd7c14b44ebc8dfae26607d3d31a"><enum>(ii)</enum><text display-inline="yes-display-inline">Federal Government entities;</text></clause> <clause commented="no" display-inline="no-display-inline" id="id9e70643505b5491f8354e0af04783657"><enum>(iii)</enum><text display-inline="yes-display-inline">private sector, public sector, nonprofit, and academic experts;</text></clause>
<clause commented="no" display-inline="no-display-inline" id="id3b3fbd5f6028486b9f0d7d2324aadeb1"><enum>(iv)</enum><text display-inline="yes-display-inline">State, local, Tribal, territorial, and international governments; and</text></clause> <clause commented="no" display-inline="no-display-inline" id="idec225d9028e043d1a1ea0bd5be2ddfaf"><enum>(v)</enum><text display-inline="yes-display-inline">other individuals and entities, as determined relevant by the Council.</text></clause></subparagraph></paragraph></subsection>
<subsection commented="no" display-inline="no-display-inline" id="ided2e7ed5db0c4602838dd5a7c6763241"><enum>(c)</enum><header>Membership of the Council</header>
<paragraph commented="no" display-inline="no-display-inline" id="id07ca05f1507a4e08a187c260372c8824"><enum>(1)</enum><header>Co-chairs</header><text display-inline="yes-display-inline">The Council shall have 2 co-chairs, which shall be—</text> <subparagraph commented="no" display-inline="no-display-inline" id="idefba229e94774096ba371c268c390eea"><enum>(A)</enum><text display-inline="yes-display-inline">the Director; and</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="ida0cee3dae9c74697b89c6eb9920e7329"><enum>(B)</enum><text display-inline="yes-display-inline">an individual selected by a majority of the members of the Council.</text></subparagraph></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id43031f0c00994282b4aadfecda1451fe"><enum>(2)</enum><header>Members</header><text>Other members of the Council shall include—</text>
<subparagraph commented="no" display-inline="no-display-inline" id="id0ed447a8a4f24c16a7e871afbc664051"><enum>(A)</enum><text display-inline="yes-display-inline">the Chief Artificial Intelligence Officer of each agency; and</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id0d2a74e2ae2446e9b2076bc2ec8a1fbe"><enum>(B)</enum><text>the senior official for artificial intelligence of the Office of Management and Budget.</text></subparagraph></paragraph></subsection>
<subsection commented="no" display-inline="no-display-inline" id="id36e9774a1b99414bb355df960f88a10e"><enum>(d)</enum><header>Standing committees; working groups</header><text display-inline="yes-display-inline">The Council shall have the authority to establish standing committees, including an executive committee, and working groups.</text></subsection> <subsection commented="no" display-inline="no-display-inline" id="id33ce2d42124e48cfb3838776b3d13cc3"><enum>(e)</enum><header>Council staff</header><text>The Council may enter into an interagency agreement with the Administrator of General Services for shared services for the purpose of staffing the Council. </text></subsection>
<subsection commented="no" display-inline="no-display-inline" id="id54414e36bccc42f5aab0d776fa31834e"><enum>(f)</enum><header>Development, adaptation, and documentation</header>
<paragraph commented="no" display-inline="no-display-inline" id="idae682d61b6114e3e87225a08ae5af85e"><enum>(1)</enum><header>Guidance</header><text display-inline="yes-display-inline">Not later than 90 days after the date of enactment of this Act, the Director, in consultation with the Council, shall issue guidance relating to—</text> <subparagraph commented="no" display-inline="no-display-inline" id="id51e258d75e434191b1c0fd70bf15f37a"><enum>(A)</enum><text display-inline="yes-display-inline">developments in artificial intelligence and implications for management of agency programs;</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="idda1c995521d748e2923b97212325e42e"><enum>(B)</enum><text display-inline="yes-display-inline">the agency impact assessments described in section 8(a) and other relevant impact assessments as determined appropriate by the Director, including the appropriateness of substituting pre-existing assessments, including privacy impact assessments, for purposes of an artificial intelligence impact assessment;</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id6c54020590a84a3fbdd01d1e6237f060"><enum>(C)</enum><text display-inline="yes-display-inline">documentation for agencies to require from deployers of artificial intelligence; </text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id36acc906ab0f4b2a89379fd73cd83a5c"><enum>(D)</enum><text display-inline="yes-display-inline">a model template for the explanations for use case risk classifications that each agency must provide under section 8(a)(4); and</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id4d5b015c579e4e71ac4db2737d358a9b"><enum>(E)</enum><text>other matters, as determined relevant by the Director.</text></subparagraph></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="ide3e9600e11e9415e81d6c9006b3ef4b6"><enum>(2)</enum><header>Annual review</header><text display-inline="yes-display-inline">The Director, in consultation with the Council, shall periodically, but not less frequently than annually, review and update, as needed, the guidelines issued under paragraph (1).</text></paragraph></subsection> <subsection commented="no" display-inline="no-display-inline" id="id93f51653b42840a0b3f94184207e529a"><enum>(g)</enum><header display-inline="yes-display-inline">Incident reporting</header> <paragraph commented="no" display-inline="no-display-inline" id="ide6d72a46303d4340bc7df05051d4adda"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">Not later than 180 days after the date of enactment of this Act, the Director, in consultation with the Council, shall develop procedures for ensuring that—</text>
<subparagraph commented="no" display-inline="no-display-inline" id="idfb1bc6c744164bb79c72c3cfe22b88bf"><enum>(A)</enum><text display-inline="yes-display-inline">adverse incidents involving artificial intelligence procured, obtained, or used by agencies are reported promptly to the agency by the developer or deployer, or to the developer or deployer by the agency, whichever first becomes aware of the adverse incident; and</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id20a5826c58ce452594c6b89288cde318"><enum>(B)</enum><text display-inline="yes-display-inline">information relating to an adverse incident described in subparagraph (A) is appropriately shared among agencies.</text></subparagraph></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="id748276e4e2d04fb184c4da58a349a832"><enum>(2)</enum><header>Single report</header><text display-inline="yes-display-inline">Adverse incidents also qualifying for incident reporting under section 3554 of title 44, United States Code, or other relevant laws or policies, may be reported under such other reporting requirement and are not required to be additionally reported under this subsection.</text></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id945f4b76af2b4d14ad556b2ff8e24179"><enum>(3)</enum><header>Notice to deployer</header> <subparagraph commented="no" display-inline="no-display-inline" id="id5bdce1a355e8429ba5635b2200df0f06"><enum>(A)</enum><header>In general</header><text display-inline="yes-display-inline">If an adverse incident is discovered by an agency, the agency shall report the adverse incident to the deployer and the deployer, in consultation with any relevant developers, shall take immediate action to resolve the adverse incident and mitigate the potential for future adverse incidents.</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="idda495cb65f1c49f7887a61e5a6eb98d3"><enum>(B)</enum><header>Waiver</header>
<clause commented="no" display-inline="no-display-inline" id="id1930ba98b12a43c8afcc672faaed81fc"><enum>(i)</enum><header>In general</header><text display-inline="yes-display-inline">Unless otherwise required by law, the head of an agency may issue a written waiver that waives the applicability of some or all of the requirements under subparagraph (A), with respect to a specific adverse incident.</text></clause> <clause commented="no" display-inline="no-display-inline" id="id831828e197b244309964b1bf475a8a9c"><enum>(ii)</enum><header>Written waiver contents</header><text>A written waiver under clause (i) shall include justification for the waiver.</text></clause>
<clause commented="no" display-inline="no-display-inline" id="idb6c78a0ebb194616b1fbd9eb0ef8b693"><enum>(iii)</enum><header>Notice</header><text>The head of an agency shall forward advance notice of any waiver under this subparagraph to the Director, or the designee of the Director.</text></clause></subparagraph></paragraph></subsection></section> <section commented="no" display-inline="no-display-inline" id="id5a5171616ea748c6a13ba7577ec687ae"><enum>6.</enum><header>Agency governance of artificial intelligence</header> <subsection id="id193af092768a46fdba934c93f7d74f98"><enum>(a)</enum><header>In general</header><text>The head of an agency shall—</text>
<paragraph id="idbf70cb6a27b34ec0a7c82843064af7b9"><enum>(1)</enum><text>ensure the responsible adoption of artificial intelligence, including by—</text> <subparagraph commented="no" display-inline="no-display-inline" id="idee28aeddb7624ff091a74fa4776a8275"><enum>(A)</enum><text display-inline="yes-display-inline">articulating a clear vision of what the head of the agency wants to achieve by developing, procuring or obtaining, or using artificial intelligence;</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id2637e6c24ce0449dbeb715db6bda5e26"><enum>(B)</enum><text display-inline="yes-display-inline">ensuring the agency develops, procures, obtains, or uses artificial intelligence that follows the principles of trustworthy artificial intelligence in government set forth under Executive Order 13960 (85 Fed. Reg. 78939; relating to promoting the use of trustworthy artificial intelligence in Federal Government) and the principles for safe, secure, and trustworthy artificial intelligence in government set forth under section 2 of Executive Order 14110 (88 Fed. Reg. 75191; relating to the safe, secure, and trustworthy development and use of artificial intelligence);</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id4d16713151b846f6bcfafe9ef4dbf546"><enum>(C)</enum><text display-inline="yes-display-inline">testing, validating, and monitoring artificial intelligence and the use case-specific performance of artificial intelligence, among others, to—</text>
<clause commented="no" display-inline="no-display-inline" id="idefa7e6056bef4044b34f3ff964d621eb"><enum>(i)</enum><text display-inline="yes-display-inline">ensure all use of artificial intelligence is appropriate to and improves the effectiveness of the mission of the agency;</text></clause> <clause commented="no" display-inline="no-display-inline" id="idc2021cb0ef2f4908ac0b4069cf163d82"><enum>(ii)</enum><text display-inline="yes-display-inline">guard against bias in data collection, use, and dissemination;</text></clause>
<clause commented="no" display-inline="no-display-inline" id="idee655b1bd6434f259f42f41bac01d715"><enum>(iii)</enum><text display-inline="yes-display-inline">ensure reliability, fairness, and transparency; and</text></clause> <clause commented="no" display-inline="no-display-inline" id="id41451afe008b42dfa79b7fb44a27c050"><enum>(iv)</enum><text display-inline="yes-display-inline">protect against impermissible discrimination;</text></clause></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id169ae99944714966aa87ecc0704aeab5"><enum>(D)</enum><text>developing, adopting, and applying a suitable enterprise risk management framework approach to artificial intelligence, incorporating the requirements under this Act; </text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id6f953ded28c14e98a6c8345320b549fb"><enum>(E)</enum><text>continuing to develop a workforce that—</text>
<clause commented="no" display-inline="no-display-inline" id="id6d1b33b66dfb4f09b510f9d4b3f50338"><enum>(i)</enum><text display-inline="yes-display-inline">understands the strengths and weaknesses of artificial intelligence, including artificial intelligence embedded in agency data systems and operations;</text></clause> <clause commented="no" display-inline="no-display-inline" id="id195a500897de4655867aa9300f81ae5a"><enum>(ii)</enum><text display-inline="yes-display-inline">is aware of the benefits and risks of artificial intelligence;</text></clause>
<clause commented="no" display-inline="no-display-inline" id="id1730043ef32f49a0ba32382efa1b6bd4"><enum>(iii)</enum><text display-inline="yes-display-inline">is able to provide human oversight for the design, implementation, and end uses of artificial intelligence; and</text></clause> <clause commented="no" display-inline="no-display-inline" id="idb148e29811d54f869458c686f132a528"><enum>(iv)</enum><text display-inline="yes-display-inline">is able to review and provide redress for erroneous decisions made in the course of artificial intelligence-assisted processes; and</text></clause></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id977c73a1a88c4a06ab7c4c77ec853a9b"><enum>(F)</enum><text>ensuring implementation of the requirements under section 8(a) for the identification and evaluation of risks posed by the deployment of artificial intelligence in agency use cases;</text></subparagraph></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id8b2b7492ee1d4df4953df04ede475804"><enum>(2)</enum><text display-inline="yes-display-inline">designate a Chief Artificial Intelligence Officer, whose duties shall include—</text>
<subparagraph commented="no" display-inline="no-display-inline" id="ide78eedf79e4346d29d8d862633faecfd"><enum>(A)</enum><text display-inline="yes-display-inline">ensuring appropriate use of artificial intelligence;</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id8445d87ca91e434ab5ad3d9f55ca4356"><enum>(B)</enum><text display-inline="yes-display-inline">coordinating agency use of artificial intelligence;</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="idae7af5709f6b4d0ba1a47fd09f18462b"><enum>(C)</enum><text display-inline="yes-display-inline">promoting artificial intelligence innovation; </text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="ide0ea39b190cd46d2b128d4fa8c08c362"><enum>(D)</enum><text display-inline="yes-display-inline">managing the risks of use of artificial intelligence;</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id2b8cfe777cf040a5a98a61b6177e031d"><enum>(E)</enum><text display-inline="yes-display-inline">supporting the head of the agency with developing the risk classification system required under section 7(a) and complying with other requirements of this Act; and</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="idbb125621d965444d938ff008c24d7df4"><enum>(F)</enum><text>supporting agency personnel leading the procurement and deployment of artificial intelligence to comply with the requirements under this Act; and</text></subparagraph></paragraph>
<paragraph id="id305e0519fbbe4b8db0232a7a4520fc3d"><enum>(3)</enum><text>form and convene an Artificial Intelligence Governance Board, as described in subsection (b), which shall coordinate and govern artificial intelligence issues across the agency.</text></paragraph></subsection> <subsection commented="no" display-inline="no-display-inline" id="id4b9a9ed166614f0dbfe23c85c60574fc"><enum>(b)</enum><header>Artificial Intelligence Governance Board</header> <paragraph commented="no" display-inline="no-display-inline" id="id1a41af244d2a40cf898be15bbd32134a"><enum>(1)</enum><header>Leadership</header><text display-inline="yes-display-inline">Each Artificial Intelligence Governance Board (referred to in this subsection as <quote>Board</quote>) of an agency shall be chaired by the Deputy Secretary of the agency or equivalent official and vice-chaired by the Chief Artificial Intelligence Officer of the agency. Neither the chair nor the vice-chair may assign or delegate these roles to other officials.</text></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="id299b14a648424f2f8d9c9e73537dacdd"><enum>(2)</enum><header>Representation</header><text display-inline="yes-display-inline">The Board shall, at a minimum, include representatives comprised of senior agency officials from operational components, if relevant, program officials responsible for implementing artificial intelligence, and officials responsible for information technology, data, privacy, civil rights and civil liberties, human capital, procurement, finance, legal counsel, and customer experience.</text></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id26e43e345d404c4bb2fdf7e240a37a45"><enum>(3)</enum><header>Existing bodies</header><text>An agency may rely on an existing governance body to fulfill the requirements of this subsection if the body satisfies or is adjusted to satisfy the leadership and representation requirements of paragraphs (1) and (2).</text></paragraph></subsection>
<subsection commented="no" display-inline="no-display-inline" id="id283194563fa64a86a4115f306ef6e55a"><enum>(c)</enum><header>Designation of Chief Artificial Intelligence Officer</header><text>The head of an agency may designate as Chief Artificial Intelligence Officer an existing official within the agency, including the Chief Technology Officer, Chief Data Officer, Chief Information Officer, or other official with relevant or complementary authorities and responsibilities, if such existing official has expertise in artificial intelligence and meets the requirements of this section.</text></subsection> <subsection id="idc4cea272c78e4ef693bb0ea70fa35c4e"><enum>(d)</enum><header>Effective date</header><text display-inline="yes-display-inline">Beginning on the date that is 120 days after the date of enactment of this Act, an agency shall not develop or procure or obtain artificial intelligence prior to completing the requirements under paragraphs (2) and (3) of subsection (a).</text></subsection></section>
<section id="id6ea14021d8754d6e8afc105e99c09bf8"><enum>7.</enum><header>Agency risk classification of artificial intelligence use cases for procurement and use</header>
<subsection commented="no" display-inline="no-display-inline" id="id72255a3434234717b2564ef03b98390c"><enum>(a)</enum><header>Risk classification system</header>
<paragraph commented="no" display-inline="no-display-inline" id="idafe8467dd2e34011aa7af973f8ea482c"><enum>(1)</enum><header>Development</header><text display-inline="yes-display-inline">The head of each agency shall be responsible for developing, not later than 1 year after the date of enactment of this Act, a risk classification system for agency use cases of artificial intelligence, without respect to whether artificial intelligence is embedded in a commercial product.</text></paragraph> <paragraph id="idbfb0b000e1054a129acad16868ae30a6"><enum>(2)</enum><header>Requirements</header> <subparagraph commented="no" display-inline="no-display-inline" id="id99b236068ab246659f04b88dff9bbfac"><enum>(A)</enum><header>Risk classifications</header><text display-inline="yes-display-inline">The risk classification system under paragraph (1) shall, at a minimum, include unacceptable, high, medium, and low risk classifications. </text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id498c90b503724543b007dac939857f86"><enum>(B)</enum><header>Factors for risk classifications</header><text display-inline="yes-display-inline">In developing the risk classifications under subparagraph (A), the head of the agency shall consider the following:</text> <clause id="ide17b378691fc4c6e9786802ec648b92f"><enum>(i)</enum><header>Mission and operation</header><text>The mission and operations of the agency.</text></clause>
<clause commented="no" display-inline="no-display-inline" id="id9ca3d79ae5a6494bb4ddc89f616f0aab"><enum>(ii)</enum><header display-inline="yes-display-inline">Scale</header><text>The seriousness and probability of adverse impacts.</text></clause> <clause id="idb0a325c9fedf4985a13f40ef735c164c"><enum>(iii)</enum><header>Scope</header><text>The breadth of application, such as the number of individuals affected.</text></clause>
<clause id="idb0158e2d013d4c12805da924b08b6932"><enum>(iv)</enum><header>Optionality</header><text>The degree of choice that an individual, group, or entity has as to whether to be subject to the effects of artificial intelligence.</text></clause> <clause id="id53600d844feb4f84aed4ecfdca5b5d4c"><enum>(v)</enum><header>Standards and frameworks</header><text>Standards and frameworks for risk classification of use cases that support democratic values, such as the standards and frameworks developed by the National Institute of Standards and Technology, the International Organization for Standardization, and the Institute of Electrical and Electronics Engineers.</text></clause></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id72b3c46737e14ccaba1be073c8dac3b9"><enum>(C)</enum><header>Classification variance</header>
<clause commented="no" display-inline="no-display-inline" id="id01766e2d96d3427ab9f447ff8cd82026"><enum>(i)</enum><header>Certain lower risk use cases</header><text display-inline="yes-display-inline">The risk classification system may allow for an operational use case to be categorized under a lower risk classification, even if the use case is a part of a larger area of the mission of the agency that is categorized under a higher risk classification. </text></clause> <clause commented="no" display-inline="no-display-inline" id="id4dbb48a129044be3a29275c1e17d6f36"><enum>(ii)</enum><header>Changes based on testing or new information</header><text>The risk classification system may allow for changes to the risk classification of an artificial intelligence use case based on the results from procurement process testing or other information that becomes available.</text></clause></subparagraph>
<subparagraph id="id8cd186a69a1248d381702517ba207b91"><enum>(D)</enum><header>High risk use cases</header>
<clause commented="no" display-inline="no-display-inline" id="id8ffb76e0da844490b98c518108f6a67c"><enum>(i)</enum><header>In general</header><text display-inline="yes-display-inline">High risk classification shall, at a minimum, apply to use cases for which the outputs of the system—</text> <subclause id="id168da6f70a0d46d2a99e6d7f3a2bf381"><enum>(I)</enum><text>are presumed to serve as a principal basis for a decision or action that has a legal, material, binding, or similarly significant effect, with respect to an individual or community, on—</text>
<item id="idfaa451b80e4f4244bc368f3c2ec94228"><enum>(aa)</enum><text>civil rights, civil liberties, or privacy;</text></item> <item id="idd2e0e9ef284c414d980aa9e7fef4257d"><enum>(bb)</enum><text>equal opportunities, including in access to education, housing, insurance, credit, employment, and other programs where civil rights and equal opportunity protections apply; or</text></item>
<item id="idddef8068a3c844c6a37993dd4b6db06a"><enum>(cc)</enum><text>access to or the ability to apply for critical government resources or services, including healthcare, financial services, public housing, social services, transportation, and essential goods and services; or</text></item></subclause> <subclause id="id6f7265629bd34298bad1f9be39e1646d"><enum>(II)</enum><text>are presumed to serve as a principal basis for a decision that substantially impacts the safety of, or has the potential to substantially impact the safety of—</text>
<item id="idf86e6413c3cb45e99916882a9f535298"><enum>(aa)</enum><text>the well-being of an individual or community, including loss of life, serious injury, bodily harm, biological or chemical harms, occupational hazards, harassment or abuse, or mental health;</text></item> <item id="id623e53ec00c14311a355715cb65c3884"><enum>(bb)</enum><text>the environment, including irreversible or significant environmental damage;</text></item>
<item id="id1ee87eb0108841e6a07b62bd3fe8d1f7"><enum>(cc)</enum><text>critical infrastructure, including the critical infrastructure sectors defined in Presidential Policy Directive 21, entitled <quote>Critical Infrastructure Security and Resilience</quote> (dated February 12, 2013) (or any successor directive) and the infrastructure for voting and protecting the integrity of elections; or</text></item> <item id="id2910cd40a4c0485dbc97bd07328ba684"><enum>(dd)</enum><text>strategic assets or resources, including high-value property and information marked as sensitive or classified by the Federal Government and controlled unclassified information.</text></item></subclause></clause>
<clause id="id8290b547abef486794431a80e2e9dbf3"><enum>(ii)</enum><header>Additions</header><text>The head of each agency shall add other use cases to the high risk category, as appropriate.</text></clause></subparagraph> <subparagraph id="ida6f0b516027e4fb5968db4b045863a71" commented="no" display-inline="no-display-inline"><enum>(E)</enum><header>Medium and low risk use cases</header><text>If a use case is not high risk, as described in subparagraph (D), the head of an agency shall have the discretion to define the risk classification.</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="idfbbd611010444e4cbbf9806eea8ddaa8"><enum>(F)</enum><header>Unacceptable risk</header><text>If an agency identifies, through testing, adverse incident, or other means or information available to the agency, that a use or outcome of an artificial intelligence use case is a clear threat to human safety or rights that cannot be adequately or practicably mitigated, the agency shall identify the risk classification of that use case as unacceptable risk.</text></subparagraph></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id8ea699b949ea45caa15f4dd4a5f7395c"><enum>(3)</enum><header>Transparency</header><text>The risk classification system under paragraph (1) shall be published on a public-facing website, with the methodology used to determine different risk levels and examples of particular use cases for each category in language that is easy to understand to the people affected by the decisions and outcomes of artificial intelligence.</text></paragraph></subsection>
<subsection id="id94691f71b3aa4b82b3421d175a1d77a0"><enum>(b)</enum><header>Effective date</header><text display-inline="yes-display-inline">This section shall take effect on the date that is 180 days after the date of enactment of this Act, on and after which an agency that has not complied with the requirements of this section may not develop, procure or obtain, or use artificial intelligence until the agency complies with such requirements.</text></subsection></section> <section id="id101aaa12577645bdb8678b10e29dc7d6"><enum>8.</enum><header>Agency requirements for use of artificial intelligence</header> <subsection id="id7ecba9e53c5440c686ce81072ec7a617"><enum>(a)</enum><header>Risk evaluation process</header> <paragraph commented="no" display-inline="no-display-inline" id="id1f332de2cf664248a50a0337db81779b"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">Not later than 180 days after the effective date in section 7(b), the Chief Artificial Intelligence Officer of each agency, in coordination with the Artificial Intelligence Governance Board of the agency, shall develop and implement a process for the identification and evaluation of risks posed by the deployment of artificial intelligence in agency use cases to ensure an interdisciplinary and comprehensive evaluation of potential risks and determination of risk classifications under such section.</text></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="idcb43702ed9d84b55b7b63baa1787e889"><enum>(2)</enum><header>Process requirements</header><text>The risk evaluation process described in paragraph (1), shall include, for each artificial intelligence use case—</text> <subparagraph commented="no" display-inline="no-display-inline" id="idabb7e3df30f34400b44492acfa7a51f9"><enum>(A)</enum><text display-inline="yes-display-inline">identification of the risks and benefits of the artificial intelligence use case;</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id6ebf79dd9bcb462aad560ec151bbccbf"><enum>(B)</enum><text display-inline="yes-display-inline">a plan to periodically review the artificial intelligence use case to examine whether risks have changed or evolved and to update the corresponding risk classification as necessary;</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="idb4820f74a14a426482dbbeda98e275eb"><enum>(C)</enum><text>a determination of the need for targeted impact assessments to further evaluate specific risks of the artificial intelligence use case within certain impact areas, which shall include privacy, security, civil rights and civil liberties, accessibility, environmental impact, health and safety, and any other impact area relating to high risk classification under section 7(a)(2)(D) as determined appropriate by the Chief Artificial Intelligence Officer; and</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="idb128d91151e14b439f62e81c92c28502"><enum>(D)</enum><text>if appropriate, consultation with and feedback from affected communities and the public on the design, development, and use of the artificial intelligence use case.</text></subparagraph></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id920af5f2788c4f809445134ed5a8221c"><enum>(3)</enum><header>Review</header> <subparagraph commented="no" display-inline="no-display-inline" id="ide999bae402644fb8a5618ec4a56aac6c"><enum>(A)</enum><header>Existing use cases</header><text display-inline="yes-display-inline">With respect to each use case that an agency is planning, developing, or using on the date of enactment of this Act, not later than 1 year after such date, the Chief Artificial Intelligence Officer of the agency shall identify and review the use case to determine the risk classification of the use case, pursuant to the risk evaluation process under paragraphs (1) and (2). </text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id510ebe6ad68142efa53e51e402510926"><enum>(B)</enum><header>New use cases</header>
<clause commented="no" display-inline="no-display-inline" id="ida571a6c8f1f5497391b19d44238358ed"><enum>(i)</enum><header>In general</header><text display-inline="yes-display-inline">Beginning on the date of enactment of this Act, the Chief Artificial Intelligence Officer of an agency shall identify and review any artificial intelligence use case that the agency will plan, develop, or use and determine the risk classification of the use case, pursuant to the risk evaluation process under paragraphs (1) and (2), before procuring or obtaining, developing, or using the use case. </text></clause> <clause commented="no" display-inline="no-display-inline" id="id6038cc31fabc4652b40d218a40261dcb"><enum>(ii)</enum><header>Development</header><text>For any use case described in clause (i) that is developed by the agency, the agency shall perform an additional risk evaluation prior to deployment in a production or operational environment.</text></clause></subparagraph></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="ide43761b50ac3489399979ef0372061a8"><enum>(4)</enum><header>Rationale for risk classification</header><text display-inline="yes-display-inline">Risk classification of an artificial intelligence use case shall be accompanied by an explanation from the agency of how the risk classification was determined, which shall be included in the artificial intelligence use case inventory of the agency, and written referencing the model template developed by the Director under section 5(f)(1)(D).</text></paragraph></subsection> <subsection id="id9dc61e09a26a42d29f2d2c5b88e88396"><enum>(b)</enum><header>Model card documentation requirements</header> <paragraph commented="no" display-inline="no-display-inline" id="idf751bc783ab9416fb21539c0a097d8f6"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">Beginning on the date that is 180 days after the date of enactment of this Act, any time during developing, procuring or obtaining, or using artificial intelligence, an agency shall require, as determined necessary by the Chief Artificial Intelligence Officer, that the deployer and any relevant developer submit documentation about the artificial intelligence, including—</text>
<subparagraph id="ide76bb677b8724eecb8974d6368337618"><enum>(A)</enum><text>a description of the architecture of the artificial intelligence, highlighting key parameters, design choices, and the machine learning techniques employed;</text></subparagraph> <subparagraph id="idfd1347e96b38444fbabf6f914d6af7d3"><enum>(B)</enum><text>information on the training of the artificial intelligence, including computational resources utilized;</text></subparagraph>
<subparagraph id="id99449befb86c42a385fc20ddd9e82727"><enum>(C)</enum><text>an account of the source of the data, size of the data, any licenses under which the data is used, collection methods and dates of the data, and any preprocessing of the data undertaken, including human or automated refinement, review, or feedback;</text></subparagraph> <subparagraph id="id6ec76a6bbf1b45dfbb7a22350cf691ee"><enum>(D)</enum><text>information on the management and collection of personal data, outlining data protection and privacy measures adhered to in compliance with applicable laws;</text></subparagraph>
<subparagraph id="id1760d7284a914bbd903da1aa7daefdc1"><enum>(E)</enum><text>a description of the methodologies used to evaluate the performance of the artificial intelligence, including key metrics and outcomes; and</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id65337cbe28ea40ee9955276a802ab9d9"><enum>(F)</enum><text>an estimate of the energy consumed by the artificial intelligence during training and inference.</text></subparagraph></paragraph>
<paragraph id="ida23722797dba42f890cd8a93563531ff"><enum>(2)</enum><header>Additional documentation for medium and high risk use cases</header><text>Beginning on the date that is 270 days after the date of enactment of this Act, with respect to use cases categorized as medium risk or higher, an agency shall require that the deployer of artificial intelligence, in consultation with any relevant developers, submit (including proactively, as material updates of the artificial intelligence occur) the following documentation:</text> <subparagraph id="id82c301caa0ef4ca7a43c83fcb18fe526"><enum>(A)</enum><header>Model architecture</header><text>Detailed information on the model or models used in the artificial intelligence, including model date, model version, model type, key parameters (including number of parameters), interpretability measures, and maintenance and updating policies.</text></subparagraph>
<subparagraph id="idf8930eae80b343889880dd8a90d2d94c"><enum>(B)</enum><header>Advanced training details</header><text>A detailed description of training algorithms, methodologies, optimization techniques, computational resources, and the environmental impact of the training process.</text></subparagraph> <subparagraph id="idf9a84526149e4f29b630ede77cbb9cd9"><enum>(C)</enum><header>Data provenance and integrity</header><text>A detailed description of the training and testing data, including the origins, collection methods, preprocessing steps, and demographic distribution of the data, and known discriminatory impacts and mitigation measures with respect to the data.</text></subparagraph>
<subparagraph id="idf50aa6af9fab4ad2a5e33ce55b108c01"><enum>(D)</enum><header>Privacy and data protection</header><text>Detailed information on data handling practices, including compliance with legal standards, anonymization techniques, data security measures, and whether and how permission for use of data is obtained.</text></subparagraph> <subparagraph id="idf4b61af131704951b4608f75bd0845d4"><enum>(E)</enum><header>Rigorous testing and oversight</header><text>A comprehensive disclosure of performance evaluation metrics, including accuracy, precision, recall, and fairness metrics, and test dataset results.</text></subparagraph>
<subparagraph id="id939d3e354aad47c88b97cd088e215b7b"><enum>(F)</enum><header>NIST artificial intelligence risk management framework</header><text>Documentation demonstrating compliance with the most recently updated version of the framework developed and updated pursuant to section 22A(c) of the National Institute of Standards and Technology Act (<external-xref legal-doc="usc" parsable-cite="usc/15/278h-1">15 U.S.C. 278h–1(c)</external-xref>). </text></subparagraph></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id1e0094566aa54fd39f933b89c69f261e"><enum>(3)</enum><header>Review of requirements</header><text>Not later than 1 year after the date of enactment of this Act, the Comptroller General shall conduct a review of the documentation requirements under paragraphs (1) and (2) to—</text>
<subparagraph commented="no" display-inline="no-display-inline" id="id73893b7140f945c09a082ab04a570cec"><enum>(A)</enum><text display-inline="yes-display-inline">examine whether agencies and deployers are complying with the requirements under those paragraphs; and</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id1222b99fd9374066ab5ad5418e5e4d60"><enum>(B)</enum><text display-inline="yes-display-inline">make findings and recommendations to further assist in ensuring safe, responsible, and efficient artificial intelligence.</text></subparagraph></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="ida94d25384af543eca039714c226c1229"><enum>(4)</enum><header>Security of provided documentation</header><text>The head of each agency shall ensure that appropriate security measures and access controls are in place to protect documentation provided pursuant to this section.</text></paragraph></subsection> <subsection commented="no" display-inline="no-display-inline" id="id510783383efc4063ab55358e57ad5283"><enum>(c)</enum><header>Information and use protections</header><text>Information provided to an agency under subsection (b)(3) is exempt from disclosure under section 552 of title 5, United States Code (commonly known as the <quote>Freedom of Information Act</quote>) and may be used by the agency, consistent with otherwise applicable provisions of Federal law, solely for—</text>
<paragraph id="id5e864d2da3f34d2980807133cae34801"><enum>(1)</enum><text>assessing the ability of artificial intelligence to achieve the requirements and objectives of the agency and the requirements of this Act; and</text></paragraph> <paragraph id="idb0b02d36b7154b7c8c663e4067e8638a"><enum>(2)</enum><text>identifying—</text>
<subparagraph id="idde99a627fc77409ebd75755c5c46cc07"><enum>(A)</enum><text>adverse effects of artificial intelligence on the rights or safety factors identified in section 7(a)(2)(D);</text></subparagraph> <subparagraph id="id688590c6fccc469e8c27e0b09faddb23"><enum>(B)</enum><text>cyber threats, including the sources of the cyber threats; and</text></subparagraph>
<subparagraph id="id7bc1b7552a55414089ac503dd2a93343"><enum>(C)</enum><text>security vulnerabilities.</text></subparagraph></paragraph></subsection> <subsection id="id488ab28fd99e4495811bffb493563fd2"><enum>(d)</enum><header>Pre-Deployment requirements for high risk use cases</header><text>Beginning on the date that is 1 year after the date of enactment of this Act, the head of an agency shall not deploy or use artificial intelligence for a high risk use case prior to—</text>
<paragraph id="ide522c324cbf042dfb466eedea930b5ef"><enum>(1)</enum><text>collecting documentation of the artificial intelligence, source, and use case in agency software and use case inventories;</text></paragraph> <paragraph id="id57313aad18be44138490f9d13b0f26e0"><enum>(2)</enum><text>testing of the artificial intelligence in an operational, real-world setting with privacy, civil rights, and civil liberty safeguards to ensure the artificial intelligence is capable of meeting its objectives;</text></paragraph>
<paragraph id="id11bd200243254c45941bcb006aac7886"><enum>(3)</enum><text>establishing appropriate agency rules of behavior for the use case, including required human involvement in, and user-facing explainability of, decisions made in whole or part by the artificial intelligence, as determined by the Chief Artificial Intelligence Officer in coordination with the program manager or equivalent agency personnel; and</text></paragraph> <paragraph id="id4257de3c26314f9791c13efe265dd5f7"><enum>(4)</enum><text>establishing appropriate agency training programs, including documentation of completion of training prior to use of artificial intelligence, that educate agency personnel involved with the application of artificial intelligence in high risk use cases on the capacities and limitations of artificial intelligence, including training on—</text>
<subparagraph id="id9c0b0aa242974d08b97d0d86ebd7707b"><enum>(A)</enum><text>monitoring the operation of artificial intelligence in high risk use cases to detect and address anomalies, dysfunctions, and unexpected performance in a timely manner to mitigate harm;</text></subparagraph> <subparagraph id="id5555e924d6924b7e9b446fa320174535"><enum>(B)</enum><text>lessening reliance or over-reliance on the output produced by artificial intelligence in a high risk use case, particularly if artificial intelligence is used to make decisions impacting individuals;</text></subparagraph>
<subparagraph id="id57c0ff42320844a2b27c86ceca73ad8b"><enum>(C)</enum><text>accurately interpreting the output of artificial intelligence, particularly considering the characteristics of the system and the interpretation tools and methods available;</text></subparagraph> <subparagraph id="ideb0a36a0169547138371017a799836e1"><enum>(D)</enum><text>when to not use, disregard, override, or reverse the output of artificial intelligence;</text></subparagraph>
<subparagraph id="id314dff9cd84f47488ac050fe726808ad"><enum>(E)</enum><text>how to intervene or interrupt the operation of artificial intelligence;</text></subparagraph> <subparagraph id="idc266549e7d7c493482b72a94f68affd0"><enum>(F)</enum><text>limiting the use of artificial intelligence to its operational design domain; and</text></subparagraph>
<subparagraph id="id0766a6a1d2364ac1a16d60a5221e24a5"><enum>(G)</enum><text>procedures for reporting incidents involving misuse, faulty results, safety and security issues, and other problems with use of artificial intelligence that does not function as intended.</text></subparagraph></paragraph></subsection> <subsection id="id4d4f12027f984edda08b9fb68081d6bf"><enum>(e)</enum><header>Ongoing monitoring of artificial intelligence in high risk use cases</header><text>The Chief Artificial Intelligence Officer of each agency shall—</text>
<paragraph commented="no" display-inline="no-display-inline" id="ida1dfba31f43542fbaeb9b75f2cc7a775"><enum>(1)</enum><text display-inline="yes-display-inline">establish a reporting system, consistent with section 5(g), and suspension and shut-down protocols for defects or adverse impacts of artificial intelligence, and conduct ongoing monitoring, as determined necessary by use case;</text></paragraph> <paragraph id="ida7bc38e09c464c9cb7a58e89a570e341"><enum>(2)</enum><text>oversee the development and implementation of ongoing testing and evaluation processes for artificial intelligence in high risk use cases to ensure continued mitigation of the potential risks identified in the risk evaluation process;</text></paragraph>
<paragraph id="idbf6a432cd1ae45d4bf6e62115bfda85e"><enum>(3)</enum><text>implement a process to ensure that risk mitigation efforts for artificial intelligence are reviewed not less than annually and updated as necessary to account for the development of new versions of artificial intelligence and changes to the risk profile; and</text></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="idb6418dc9a0b048fe9bbe2ecca8a39ec2"><enum>(4)</enum><text>adhere to pre-deployment requirements under subsection (d) in each case in which a low or medium risk artificial intelligence use case becomes a high risk artificial intelligence use case.</text></paragraph></subsection>
<subsection id="id5e7faac160f244e7b4f801178862d8cf"><enum>(f)</enum><header>Exemption from requirements for select use cases</header><text>The Chief Artificial Intelligence Officer of each agency—</text> <paragraph commented="no" display-inline="no-display-inline" id="id54ebbdabb0e546b0b5f33983eaeba6a4"><enum>(1)</enum><text display-inline="yes-display-inline">may designate select, low risk use cases, including current and future use cases, that do not have to comply with all or some of the requirements in this Act; and</text></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="idfe0ff6c36b2947e5bf0c1a56c046bb5a"><enum>(2)</enum><text display-inline="yes-display-inline">shall publicly disclose all use cases exempted under paragraph (1) with a justification for each exempted use case.</text></paragraph></subsection> <subsection id="id45b80bb8e06046828f2b94287d89e53b"><enum>(g)</enum><header>Exception</header><text>The requirements under subsections (a) and (b) shall not apply to an algorithm software update, enhancement, derivative, correction, defect, or fix for artificial intelligence that does not materially change the compliance of the deployer with the requirements of those subsections, unless determined otherwise by the agency Chief Artificial Intelligence Officer.</text></subsection>
<subsection id="idcf3ac1ca7e224cba972aeba5f690db25"><enum>(h)</enum><header>Waivers</header>
<paragraph commented="no" display-inline="no-display-inline" id="idf743d183cef943a98bef6f04813a2bfc"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">The head of an agency, on a case by case basis, may waive 1 or more requirements under subsection (d) for a specific use case after making a written determination, based upon a risk assessment conducted by a human with respect to the specific use case, that fulfilling the requirement or requirements prior to procuring or obtaining, developing, or using artificial intelligence would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.</text></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id012c12a24d7d4c3dac36f1f4c4f028a4"><enum>(2)</enum><header>Requirements; limitations</header><text display-inline="yes-display-inline">A waiver under this subsection shall be—</text>
<subparagraph commented="no" display-inline="no-display-inline" id="iddb5d1affd8c44ff7adbc722a3861d543"><enum>(A)</enum><text display-inline="yes-display-inline">in the national security interests of the United States, as determined by the head of the agency; </text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="id193e712c5d254e56ad5e0a9f09d7505d"><enum>(B)</enum><text display-inline="yes-display-inline">submitted to the relevant congressional committees not later than 15 days after the head of the agency grants the waiver; and</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="id1236625b5d2d4604beb3ea5d6c20ad91"><enum>(C)</enum><text>limited to a duration of 1 year, at which time the head of the agency may renew the waiver and submit the renewed waiver to the relevant congressional committees.</text></subparagraph></paragraph></subsection> <subsection id="id314db77c787a483987c767880ddc126e"><enum>(i)</enum><header>Infrastructure security</header><text>The head of an agency, in consultation with the agency Chief Artificial Intelligence Officer, Chief Information Officer, Chief Data Officer, and other relevant agency officials, shall reevaluate infrastructure security protocols based on the artificial intelligence use cases and associated risks to infrastructure security of the agency.</text></subsection>
<subsection id="id6204f708c92f4ab191c62db4ca199e2f"><enum>(j)</enum><header>Compliance deadline</header><text>Not later than 270 days after the date of enactment of this Act, the requirements of subsections (a) through (i) of this section shall apply with respect to artificial intelligence that is already in use on the date of enactment of this Act.</text></subsection></section> <section commented="no" display-inline="no-display-inline" id="ide79462cfdeb24ee599c1948ecd0f37b6"><enum>9.</enum><header>Prohibition on select artificial intelligence use cases</header><text display-inline="no-display-inline">No agency may develop, procure or obtain, or use artificial intelligence for—</text>
<paragraph id="ida88a06d07eb14078a06713d1f80274a8"><enum>(1)</enum><text>mapping facial biometric features of an individual to assign corresponding emotion and potentially take action against the individual;</text></paragraph> <paragraph id="id7551624b53844e0db634a0dadc125d3d"><enum>(2)</enum><text>categorizing and taking action against an individual based on biometric data of the individual to deduce or infer race, political opinion, religious or philosophical beliefs, trade union status, sexual orientation, or other personal trait;</text></paragraph>
<paragraph id="id31ebebc51ebd4e6d9039cab4b31e0c8e"><enum>(3)</enum><text>evaluating, classifying, rating, or scoring the trustworthiness or social standing of an individual based on multiple data points and time occurrences related to the social behavior of the individual in multiple contexts or known or predicted personal or personality characteristics in a manner that may lead to discriminatory outcomes; or</text></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="id672030be31014549b97a255259ccd94c"><enum>(4)</enum><text>any other use found by the agency to pose an unacceptable risk under the risk classification system of the agency, pursuant to section 7.</text></paragraph></section>
<section id="id69584996faba49928236f549e814a834"><enum>10.</enum><header>Agency procurement innovation labs</header>
<subsection id="id608efe28157348d88a572495feabb257"><enum>(a)</enum><header>In general</header><text>An agency subject to the Chief Financial Officers Act of 1990 (<external-xref legal-doc="usc" parsable-cite="usc/31/901">31 U.S.C. 901</external-xref> note; <external-xref legal-doc="public-law" parsable-cite="pl/101/576">Public Law 101–576</external-xref>) that does not have a Procurement Innovation Lab on the date of enactment of this Act should consider establishing a lab or similar mechanism to test new approaches, share lessons learned, and promote best practices in procurement, including for commercial technology, such as artificial intelligence, that is trustworthy and best-suited for the needs of the agency. </text></subsection> <subsection commented="no" display-inline="no-display-inline" id="idb8c2a4ed216d4611821cba4f4233db86"><enum>(b)</enum><header>Functions</header><text display-inline="yes-display-inline">The functions of the Procurement Innovation Lab or similar mechanism should include—</text>
<paragraph id="id02f4d3308c814ddeadfc03dfb5ba49f4"><enum>(1)</enum><text>providing leadership support as well as capability and capacity to test, document, and help agency programs adopt new and better practices through all stages of the acquisition lifecycle, beginning with project definition and requirements development;</text></paragraph> <paragraph id="id7d4644010090483482f6c46437f453c6"><enum>(2)</enum><text>providing the workforce of the agency with a clear pathway to test and document new acquisition practices and facilitate fresh perspectives on existing practices;</text></paragraph>
<paragraph id="ide81378998ae245d4a5e77471e14b06a5"><enum>(3)</enum><text>helping programs and integrated project teams successfully execute emerging and well-established acquisition practices to achieve better results; and</text></paragraph> <paragraph id="id353c54e0877846bebff491d5ad545d6c"><enum>(4)</enum><text>promoting meaningful collaboration among offices that are responsible for requirements development, contracting officers, and others, including financial and legal experts, that share in the responsibility for making a successful procurement.</text></paragraph></subsection>
<subsection id="id93c9499487d948c3b324085a0a8c3154"><enum>(c)</enum><header>Structure</header><text>An agency should consider placing the Procurement Innovation Lab or similar mechanism as a supporting arm of the Chief Acquisition Officer or Senior Procurement Executive of the agency and shall have wide latitude in structuring the Procurement Innovation Lab or similar mechanism and in addressing associated personnel staffing issues. </text></subsection></section> <section id="id37240b63c87a4a7abb7ee2a9849fae19"><enum>11.</enum><header>Multi-phase commercial technology test program</header> <subsection id="idce0e1c8311dc469cb2f785e818aca874"><enum>(a)</enum><header>Test program</header><text>The head of an agency may procure commercial technology through a multi-phase test program of contracts in accordance with this section.</text></subsection>
<subsection id="id2b78e97c4dc0475fa6fce2a538dda32b"><enum>(b)</enum><header>Purpose</header><text>A test program established under this section shall—</text> <paragraph commented="no" display-inline="no-display-inline" id="idedceb868a2224b04afdff8b7c27a26ce"><enum>(1)</enum><text display-inline="yes-display-inline">provide a means by which an agency may post a solicitation, including for a general need or area of interest, for which the agency intends to explore commercial technology solutions and for which an offeror may submit a bid based on existing commercial capabilities of the offeror with minimal modifications or a technology that the offeror is developing for commercial purposes; and</text></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="id67157d9024d947258134e364202cb418"><enum>(2)</enum><text display-inline="yes-display-inline">use phases, as described in subsection (c), to minimize government risk and incentivize competition.</text></paragraph></subsection> <subsection id="id7ec00fc485dd411498f4b72dea59dab2"><enum>(c)</enum><header>Contracting procedures</header><text>Under a test program established under this section, the head of an agency may acquire commercial technology through a competitive evaluation of proposals resulting from general solicitation in the following phases:</text>
<paragraph id="idd3efd0d001af49f78bd4bca66c7ee628"><enum>(1)</enum><header>Phase 1 (viability of potential solution)</header><text>Selectees may be awarded a portion of the total contract award and have a period of performance of not longer than 1 year to prove the merits, feasibility, and technological benefit the proposal would achieve for the agency.</text></paragraph> <paragraph id="id6106713ace474ebb9b3a5185e42a4f2f"><enum>(2)</enum><header>Phase 2 (major details and scaled test)</header><text>Selectees may be awarded a portion of the total contract award and have a period of performance of not longer than 1 year to create a detailed timeline, establish an agreeable intellectual property ownership agreement, and implement the proposal on a small scale.</text></paragraph>
<paragraph id="id25df1c6c1a2649f09e1bc3fbc649cc8f"><enum>(3)</enum><header>Phase 3 (implementation or recycle)</header>
<subparagraph commented="no" display-inline="no-display-inline" id="id1c476ddb9c5a4b23be4eaa454638e78c"><enum>(A)</enum><header display-inline="yes-display-inline">In general</header><text>Following successful performance on phase 1 and 2, selectees may be awarded up to the full remainder of the total contract award to implement the proposal, depending on the agreed upon costs and the number of contractors selected.</text></subparagraph> <subparagraph id="idb769f30cf46f4f14a3ceb1335f83ce91"><enum>(B)</enum><header>Failure to find suitable selectees</header><text>If no selectees are found suitable for phase 3, the agency head may determine not to make any selections for phase 3, terminate the solicitation and utilize any remaining funds to issue a modified general solicitation for the same area of interest.</text></subparagraph></paragraph></subsection>
<subsection id="ide112915db60249ba843b460af28aa3be"><enum>(d)</enum><header>Treatment as competitive procedures</header><text>The use of general solicitation competitive procedures for a test program under this section shall be considered to be use of competitive procedures as defined in section 152 of title 41, United States Code.</text></subsection> <subsection id="idec97752d698d4b209fb6a55dd2ccca4e"><enum>(e)</enum><header>Limitation</header><text>The head of an agency shall not enter into a contract under the test program for an amount in excess of $25,000,000.</text></subsection>
<subsection id="ida5c0d63cb01048a9809e78bc88090836"><enum>(f)</enum><header>Guidance</header>
<paragraph commented="no" display-inline="no-display-inline" id="id95c0bec872584c77beef560c6f8e6f5c"><enum>(1)</enum><header display-inline="yes-display-inline">Federal Acquisition Regulatory Council</header><text>The Federal Acquisition Regulatory Council shall revise the Federal Acquisition Regulation as necessary to implement this section, including requirements for each general solicitation under a test program to be made publicly available through a means that provides access to the notice of the general solicitation through the System for Award Management or subsequent government-wide point of entry, with classified solicitations posted to the appropriate government portal.</text></paragraph> <paragraph id="ide68345cd291e4901a9d4da621e1a822c"><enum>(2)</enum><header>Agency procedures</header><text>The head of an agency may not award contracts under a test program until the agency issues guidance with procedures for use of the authority. The guidance shall be issued in consultation with the relevant Acquisition Regulatory Council and shall be publicly available.</text></paragraph></subsection>
<subsection id="idb9d1b465310a4e49a2600808f4479134"><enum>(g)</enum><header>Sunset</header><text>The authority for a test program under this section shall terminate on the date that is 5 years after the date the Federal Acquisition Regulation is revised pursuant to subsection (f)(1) to implement the program. </text></subsection></section> <section id="id6cc3760bc16949078cd45a996ef048ac"><enum>12.</enum><header>Research and development project pilot program</header> <subsection id="id88d90da996bd46af811f559227fe2c91"><enum>(a)</enum><header>Pilot program</header><text>The head of an agency may carry out research and prototype projects in accordance with this section.</text></subsection>
<subsection id="idb24bc86b11d1477db70354f4c7f8730c"><enum>(b)</enum><header>Purpose</header><text>A pilot program established under this section shall provide a means by which an agency may—</text> <paragraph id="id5d820cc2dd154444be84c0d29e4fcca5"><enum>(1)</enum><text>carry out basic, applied, and advanced research and development projects; and</text></paragraph>
<paragraph id="idca81dcca9c97414094e8faf3371e4ef5"><enum>(2)</enum><text>carry out prototype projects that address—</text> <subparagraph id="idcb2b1edc63bf4b1daf14760c67090638"><enum>(A)</enum><text>a proof of concept, model, or process, including a business process;</text></subparagraph>
<subparagraph id="idd543a7a353e542b8876d24b2ca3b32cb"><enum>(B)</enum><text>reverse engineering to address obsolescence;</text></subparagraph> <subparagraph id="id340b74867cbb478586079c80cec5a399"><enum>(C)</enum><text>a pilot or novel application of commercial technologies for agency mission purposes;</text></subparagraph>
<subparagraph id="id0a76513b97b347bb9467ac3ecf8aecf4"><enum>(D)</enum><text>agile development activity;</text></subparagraph> <subparagraph id="id3bf1cda20b72418f848c0182d73688a0"><enum>(E)</enum><text>the creation, design, development, or demonstration of operational utility; or</text></subparagraph>
<subparagraph id="id10c3b363ae014c0585a1ab678b1af60c"><enum>(F)</enum><text>any combination of items described in subparagraphs (A) through (E).</text></subparagraph></paragraph></subsection> <subsection id="idb1fe13c34d6c4aa494d145f7096d8b41"><enum>(c)</enum><header>Contracting procedures</header><text>Under a pilot program established under this section, the head of an agency may carry out research and prototype projects—</text>
<paragraph id="id0912a505ee2a4e52ac6166f5a7dcd80d"><enum>(1)</enum><text>using small businesses to the maximum extent practicable;</text></paragraph> <paragraph id="id9d1f38bc5afd488fba790c98a13835c1"><enum>(2)</enum><text>using cost sharing arrangements where practicable;</text></paragraph>
<paragraph id="id70b734851edd433bbbbf314133694d38"><enum>(3)</enum><text>tailoring intellectual property terms and conditions relevant to the project and commercialization opportunities; and</text></paragraph> <paragraph id="idbde89ebc37c440af858bd8fe4ae3eed9"><enum>(4)</enum><text>ensuring that such projects do not duplicate research being conducted under existing agency programs.</text></paragraph></subsection>
<subsection id="id833cdd68fc3b4e4f899ed87a4bca528f"><enum>(d)</enum><header>Treatment as competitive procedures</header><text>The use of research and development contracting procedures under this section shall be considered to be use of competitive procedures, as defined in section 152 of title 41, United States Code.</text></subsection> <subsection id="id60ce995210d545e0bb97ad08933f9296"><enum>(e)</enum><header>Treatment as commercial technology</header><text>The use of research and development contracting procedures under this section shall be considered to be use of commercial technology, as defined in section 2.</text></subsection>
<subsection id="id63f9ff4eb93b433c8e2d82ccc238117b"><enum>(f)</enum><header>Follow-On Projects or Phases</header><text>A follow-on contract provided for in a contract opportunity announced under this section may, at the discretion of the head of the agency, be awarded to a participant in the original project or phase if the original project or phase was successfully completed.</text></subsection> <subsection id="ideab81d4a69964b43ba3831222b3bb6d9"><enum>(g)</enum><header>Limitation</header><text>The head of an agency shall not enter into a contract under the pilot program for an amount in excess of $10,000,000.</text></subsection>
<subsection id="idea2c7b788ba449bd8581666e72d1e046"><enum>(h)</enum><header>Guidance</header>
<paragraph id="idb367486cad7247cf8b96935e83456165"><enum>(1)</enum><header>Federal acquisition regulatory council</header><text>The Federal Acquisition Regulatory Council shall revise the Federal Acquisition Regulation research and development contracting procedures as necessary to implement this section, including requirements for each research and development project under a pilot program to be made publicly available through a means that provides access to the notice of the opportunity through the System for Award Management or subsequent government-wide point of entry, with classified solicitations posted to the appropriate government portal.</text></paragraph> <paragraph id="id9e17fa46752742978592a5c481d8b1f9"><enum>(2)</enum><header>Agency procedures</header><text>The head of an agency may not award contracts under a pilot program until the agency, in consultation with the relevant Acquisition Regulatory Council issues and makes publicly available guidance on procedures for use of the authority.</text></paragraph></subsection>
<subsection id="id617baa1d6d6c496db02b2b1fa4eca428"><enum>(i)</enum><header>Reporting</header><text>Contract actions entered into under this section shall be reported to the Federal Procurement Data System, or any successor system.</text></subsection> <subsection id="idba2b26a1f74c49d483d5a7520467f180"><enum>(j)</enum><header>Sunset</header><text>The authority for a pilot program under this section shall terminate on the date that is 5 years from the date the Federal Acquisition Regulation is revised pursuant to subsection (h)(1) to implement the program.</text></subsection></section>
<section id="ida2a8969314e2452ca8e1b1b5c940df5c"><enum>13.</enum><header>Development of tools and guidance for testing and evaluating artificial intelligence</header>
<subsection id="idc67cfd0431bf48f9be28d42240db551d"><enum>(a)</enum><header>Agency report requirements</header><text>In a manner specified by the Director, the Chief Artificial Intelligence Officer shall identify and annually submit to the Council a report on obstacles encountered in the testing and evaluation of artificial intelligence, specifying—</text> <paragraph id="idb3989705cc7346a1957a6972a657e0cb"><enum>(1)</enum><text>the nature of the obstacles;</text></paragraph>
<paragraph id="id14562907030045a9bb516193ed9e084e"><enum>(2)</enum><text>the impact of the obstacles on agency operations, mission achievement, and artificial intelligence adoption;</text></paragraph> <paragraph id="idd6e13d480ff841ca9df8cad9f536d9c3"><enum>(3)</enum><text>recommendations for addressing the identified obstacles, including the need for particular resources or guidance to address certain obstacles; and</text></paragraph>
<paragraph id="idcfcf796835a94a6ab73992a0b84b3f87"><enum>(4)</enum><text>a timeline that would be needed to implement proposed solutions.</text></paragraph></subsection> <subsection id="id39fcf0a1861a4fdcb27cbb2c20a19274"><enum>(b)</enum><header>Council review and collaboration</header> <paragraph id="id1db0f7f9c30047c7b90a72399a2ccdbf"><enum>(1)</enum><header>Annual review</header><text>Not less frequently than annually, the Council shall conduct a review of agency reports under subsection (a) to identify common challenges and opportunities for cross-agency collaboration.</text></paragraph>
<paragraph id="id9e0da62d978342b2b707a81c1ac1e79a"><enum>(2)</enum><header>Development of tools and guidance</header>
<subparagraph commented="no" display-inline="no-display-inline" id="ida9d89fe7e9e54762b37bbfe8b8082a73"><enum>(A)</enum><header>In general</header><text display-inline="yes-display-inline">Not later than 2 years after the date of enactment of this Act, the Director, in consultation with the Council, shall convene a working group to—</text> <clause commented="no" display-inline="no-display-inline" id="id2be9a79efde44fffa678f483d2235231"><enum>(i)</enum><text display-inline="yes-display-inline">develop tools and guidance to assist agencies in addressing the obstacles that agencies identify in the reports under subsection (a);</text></clause>
<clause commented="no" display-inline="no-display-inline" id="idaaf1035e1e874a118b605d0feffa4842"><enum>(ii)</enum><text display-inline="yes-display-inline">support interagency coordination to facilitate the identification and use of relevant voluntary standards, guidelines, and other consensus-based approaches for testing and evaluation and other relevant areas; and</text></clause> <clause commented="no" display-inline="no-display-inline" id="id89a46e617b7943ebab6583cba9fdb6d9"><enum>(iii)</enum><text>address any additional matters determined appropriate by the Director.</text></clause></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="idf86325dc7cc54e5c8fffaf3ac8e3f059"><enum>(B)</enum><header>Working group membership</header><text display-inline="yes-display-inline">The working group described in subparagraph (A) shall include Federal interdisciplinary personnel, such as technologists, information security personnel, domain experts, privacy officers, data officers, civil rights and civil liberties officers, contracting officials, legal counsel, customer experience professionals, and others, as determined by the Director.</text></subparagraph></paragraph> <paragraph id="ide6fd0320dcdd4577a2a073c1d909ba55"><enum>(3)</enum><header>Information sharing</header><text>The Director, in consultation with the Council, shall establish a mechanism for sharing tools and guidance developed under paragraph (2) across agencies.</text></paragraph></subsection>
<subsection id="id17da479f89b3421ab2a914533ec3397c"><enum>(c)</enum><header>Congressional reporting</header>
<paragraph commented="no" display-inline="no-display-inline" id="id77f6c9b0da98465aa899b7041db82fa2"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">Each agency shall submit the annual report under subsection (a) to relevant congressional committees.</text></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="idbee867ad8d434cdcac07c4f3f2c1f0e4"><enum>(2)</enum><header>Consolidated report</header><text>The Director, in consultation with the Council, may suspend the requirement under paragraph (1) and submit to the relevant congressional committees a consolidated report that conveys government-wide testing and evaluation challenges, recommended solutions, and progress toward implementing recommendations from prior reports developed in fulfillment of this subsection.</text></paragraph></subsection>
<subsection id="ide4bf9a2682f2461d90dbb2684bcdd607"><enum>(d)</enum><header>Sunset</header><text>The requirements under this section shall terminate on the date that is 10 years after the date of enactment of this Act.</text></subsection></section> <section id="id55880113633846b7994f6960a69f3ea3"><enum>14.</enum><header>Updates to artificial intelligence use case inventories</header> <subsection id="idf888c3ea13ab4bfeae89fddd6db9b4e7"><enum>(a)</enum><header>Amendments</header> <paragraph commented="no" display-inline="no-display-inline" id="id9da70354d3b84b7688be533302a1b191"><enum>(1)</enum><header>Advancing American AI Act</header><text display-inline="yes-display-inline">The Advancing American AI Act (<external-xref legal-doc="public-law" parsable-cite="pl/117/263">Public Law 117–263</external-xref>; <external-xref legal-doc="usc" parsable-cite="usc/40/11301">40 U.S.C. 11301</external-xref> note) is amended—</text>
<subparagraph commented="no" display-inline="no-display-inline" id="ide5d0510cd6b14e42a6c90370189945ad"><enum>(A)</enum><text display-inline="yes-display-inline">in section 7223(3), by striking the period and inserting <quote>and in section 5002 of the National Artificial Intelligence Initiative Act of 2020 (<external-xref legal-doc="usc" parsable-cite="usc/15/9401">15 U.S.C. 9401</external-xref>).</quote>; and</text></subparagraph> <subparagraph commented="no" display-inline="no-display-inline" id="idc8fc63a7823449778a9dc961cc1b0f58"><enum>(B)</enum><text display-inline="yes-display-inline">in section 7225, by striking subsection (d).</text></subparagraph></paragraph>
<paragraph commented="no" display-inline="no-display-inline" id="id3e8635e356f34312a86c4cccb7ce0df3"><enum>(2)</enum><header>Executive order 13960</header><text display-inline="yes-display-inline">The provisions of section 5 of Executive Order 13960 (85 Fed. Reg. 78939; relating to promoting the use of trustworthy artificial intelligence in Federal Government) that exempt classified and sensitive use cases from agency inventories of artificial intelligence use cases shall cease to have legal effect.</text></paragraph></subsection> <subsection id="id3af8d4fc99eb47bfadece7b53bab120c"><enum>(b)</enum><header>Compliance</header> <paragraph commented="no" display-inline="no-display-inline" id="idbeecc034a7e64f0ab782a852525f24b2"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">The Director shall ensure that agencies submit artificial intelligence use case inventories and that the inventories comply with applicable artificial intelligence inventory guidance.</text></paragraph>
<paragraph id="id485344ff34524371b46e027a46433adb"><enum>(2)</enum><header>Annual report</header><text>The Director shall submit to the relevant congressional committees an annual report on agency compliance with artificial intelligence inventory guidance.</text></paragraph></subsection> <subsection id="id1189c790241f42389658f2ec8bb28c1a"><enum>(c)</enum><header>Disclosure</header> <paragraph commented="no" display-inline="no-display-inline" id="idbbe0466047bb40b18eca32d8160e33eb"><enum>(1)</enum><header>In general</header><text display-inline="yes-display-inline">The artificial intelligence inventory of each agency shall publicly disclose—</text>
<subparagraph id="id3268b123da824eb8855fa856c8468e06"><enum>(A)</enum><text>whether artificial intelligence was developed internally by the agency or procured externally, without excluding any use case on the basis that the use case is <quote>sensitive</quote> solely because it was externally procured;</text></subparagraph> <subparagraph id="idf935ff369c744660922b7c4725030bd9"><enum>(B)</enum><text>data provenance information, including identifying the source of the training data of the artificial intelligence, including internal government data, public data, commercially held data, or similar data;</text></subparagraph>
<subparagraph id="ide4d1f151c6db43cbb15e510fed368ddd"><enum>(C)</enum><text>the level of risk at which the agency has classified the artificial intelligence use case and a brief explanation for how the determination was made;</text></subparagraph> <subparagraph id="id92401c2b30e446e196c7d9f568c4ee51"><enum>(D)</enum><text>a list of targeted impact assessments conducted pursuant to section 7(a)(2)(C); and</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="ideeeb5b6d68a2499498e97085fb46c0c2"><enum>(E)</enum><text display-inline="yes-display-inline">the number of artificial intelligence use cases excluded from public reporting as being <quote>sensitive</quote>.</text></subparagraph></paragraph> <paragraph id="id9d06e8349e1246f6b3a5fb6859983cd9"><enum>(2)</enum><header>Updates</header> <subparagraph commented="no" display-inline="no-display-inline" id="idd07140b3382545ce81ba039770d161d7"><enum>(A)</enum><header>In general</header><text display-inline="yes-display-inline">When an agency updates the public artificial intelligence use case inventory of the agency, the agency shall disclose the date of the modification and make change logs publicly available and accessible.</text></subparagraph>
<subparagraph commented="no" display-inline="no-display-inline" id="idbc1cda53a6524d98b2cfcb868f9cfaaf"><enum>(B)</enum><header>Guidance</header><text>The Director shall issue guidance to agencies that describes how to appropriately update artificial intelligence use case inventories and clarifies how sub-agencies and regulatory agencies should participate in the artificial intelligence use case inventorying process.</text></subparagraph></paragraph></subsection> <subsection id="id57b1ae63b2bb46e2a40e5127a3473b6e"><enum>(d)</enum><header>Congressional reporting</header><text>The head of each agency shall submit to the relevant congressional committees a copy of the annual artificial intelligence use case inventory of the agency, including—</text>
<paragraph commented="no" display-inline="no-display-inline" id="idfdd3295d78ce4f8f962f541f89fcd014"><enum>(1)</enum><text display-inline="yes-display-inline">the use cases that have been identified as <quote>sensitive</quote> and not for public disclosure; and</text></paragraph> <paragraph commented="no" display-inline="no-display-inline" id="idc89a82c37bab45d3851bc84f5bf8e6ed"><enum>(2)</enum><text>a classified annex of classified use cases.</text></paragraph></subsection>
<subsection id="id6105a76ab04445b8814247716a20fc75"><enum>(e)</enum><header>Government trends report</header><text>Beginning 1 year after the date of enactment of this Act, and annually thereafter, the Director, in coordination with the Council, shall issue a report, based on the artificial intelligence use cases reported in use case inventories, that describes trends in the use of artificial intelligence in the Federal Government.</text></subsection> <subsection id="id19d2ab06429940d5b67639f854a9e12b"><enum>(f)</enum><header>Comptroller General</header> <paragraph commented="no" display-inline="no-display-inline" id="ideac37f950a4b4294a05e5c171ce3e8a2"><enum>(1)</enum><header>Report required</header><text display-inline="yes-display-inline">Not later than 1 year after the date of enactment of this Act, and annually thereafter, the Comptroller General of the United States shall submit to relevant congressional committees a report on whether agencies are appropriately classifying use cases.</text></paragraph>
<paragraph id="id69531173e78b4eaab57442aed31db1fb"><enum>(2)</enum><header>Appropriate classification</header><text>The Comptroller General of the United States shall examine whether the appropriate level of disclosure of artificial intelligence use cases by agencies should be included on the High Risk List of the Government Accountability Office.</text></paragraph></subsection></section> </legis-body> </bill> 