<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="billres.xsl"?>
<!DOCTYPE bill PUBLIC "-//US Congress//DTDs/bill.dtd//EN" "bill.dtd">
<bill bill-stage="Introduced-in-House" dms-id="HAAFAA8F83B374DDE8D86A09F29B48257" public-private="public" key="H" bill-type="olc"><metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
<dublinCore>
<dc:title>118 HR 9720 IH: AI Incident Reporting and Security Enhancement Act</dc:title>
<dc:publisher>U.S. House of Representatives</dc:publisher>
<dc:date>2024-09-20</dc:date>
<dc:format>text/xml</dc:format>
<dc:language>EN</dc:language>
<dc:rights>Pursuant to Title 17 Section 105 of the United States Code, this file is not subject to copyright protection and is in the public domain.</dc:rights>
</dublinCore>
</metadata>
<form>
<distribution-code display="yes">I</distribution-code><congress display="yes">118th CONGRESS</congress><session display="yes">2d Session</session><legis-num display="yes">H. R. 9720</legis-num><current-chamber>IN THE HOUSE OF REPRESENTATIVES</current-chamber><action display="yes"><action-date date="20240920">September 20, 2024</action-date><action-desc><sponsor name-id="R000305">Ms. Ross</sponsor> (for herself, <cosponsor name-id="O000019">Mr. Obernolte</cosponsor>, and <cosponsor name-id="B001292">Mr. Beyer</cosponsor>) introduced the following bill; which was referred to the <committee-name committee-id="HSY00">Committee on Science, Space, and Technology</committee-name></action-desc></action><legis-type>A BILL</legis-type><official-title display="yes">To direct the Director of the National Institute of Standards and Technology to update the national vulnerability database to reflect vulnerabilities to artificial intelligence systems, study the need for voluntary reporting related to artificial intelligence security and safety incidents, and for other purposes.</official-title></form><legis-body id="H8D6F919E36CE42A0ABAE19D456158EA9" style="OLC"><section id="H0EC7365B1A9041849A5CD1E1960FFAFB" section-type="section-one"><enum>1.</enum><header>Short title</header><text display-inline="no-display-inline">This Act may be cited as the <quote><short-title>AI Incident Reporting and Security Enhancement Act</short-title></quote>. 
</text></section><section id="HAE7A0FBBE4CE45B586CEE49E659D6CFF"><enum>2.</enum><header>Activities to support voluntary vulnerability and incident tracking associated with artificial intelligence</header><subsection id="H9336CFC1768547B5B5F88C85FA2CEE15"><enum>(a)</enum><header>Update to national vulnerability database</header><text>Subject to the availability of appropriations, the Director of the National Institute of Standards and Technology, in coordination with industry stakeholders, standards development organizations, and appropriate Federal agencies, as appropriate, shall carry out the following:</text><paragraph id="H6801BDA4B0FA4D2F9F9A8AAE0D486128"><enum>(1)</enum><text>Establish or identify common definitions and any characteristics of artificial intelligence security vulnerabilities that make utilization of the National Vulnerability Database inappropriate for the management of such vulnerabilities, and develop processes and procedures for vulnerability management of such vulnerabilities.</text></paragraph><paragraph id="H5BABA646B8F24E69BCAF09D1582A8016"><enum>(2)</enum><text>Support the development of standards and guidance for technical vulnerability management processes related to artificial intelligence.</text></paragraph><paragraph id="H64D72FE9EB4D40DD9FA60CBA3FD47235"><enum>(3)</enum><text>Consistent with paragraphs (1) and (2), as appropriate, initiate a process to update the Institute’s processes and procedures associated with the National Vulnerability Database to ensure such Database and associated vulnerability management processes incorporate artificial intelligence security vulnerabilities to the greatest extent practicable.</text></paragraph></subsection><subsection id="H4036A1E73A114AC7AAF9DA1E5811155A"><enum>(b)</enum><header>Assessing voluntary tracking of substantial artificial intelligence security and safety incidents</header><paragraph id="H9815ADCD696047918850C041E4DBF84D" commented="no"><enum>(1)</enum><header>In 
general</header><text>Subject to the availability of appropriations, the Director of the National Institute of Standards and Technology, in consultation with the Director of the Cybersecurity and Infrastructure Security Agency of the Department of Homeland Security, shall convene a multi-stakeholder process to consider the development of a process relating to the voluntary collection, reporting, and tracking of substantial artificial intelligence security incidents and substantial artificial intelligence safety incidents.</text></paragraph><paragraph id="HBCE951D4AD4B483285B04DAA68A4A1EC"><enum>(2)</enum><header>Activities</header><text>In carrying out paragraph (1), the Director of the National Institute of Standards and Technology shall convene appropriate representatives of industry, academia, nonprofit organizations, standards development organizations, civil society groups, Sector Risk Management Agencies, and appropriate Federal departments and agencies to carry out the following:</text><subparagraph id="H111A091A4815464CA5E930C50F8A7CB8"><enum>(A)</enum><text>Establish common definitions and characterizations for relevant aspects of substantial artificial intelligence security incidents and substantial artificial intelligence safety incidents, which may include the following:</text><clause id="HCEA207A474C54F8AAD1EDDEE5AC2F420"><enum>(i)</enum><text>Classifications that sufficiently differentiate between the following:</text><subclause id="H5A8BCA35CF9D40C7BC606D4AE3C19E2D"><enum>(I)</enum><text>Artificial intelligence security incidents.</text></subclause><subclause id="H85F5644E9D794A3FAE9CA32D95A4CC68"><enum>(II)</enum><text>Artificial intelligence safety incidents.</text></subclause></clause><clause id="H60D80EF189A0430DAABC594E833E1F28"><enum>(ii)</enum><text>Taxonomies to classify incidents referred to in clause (i) based on relevant characteristics, impacts, or other appropriate criteria.</text></clause></subparagraph><subparagraph 
id="HAF5FD415AD6E4D18B8BB485E6E83031E"><enum>(B)</enum><text>Assess the usefulness and cost-effectiveness of an effort to voluntarily track substantial artificial intelligence security incidents and substantial artificial intelligence safety incidents.</text></subparagraph><subparagraph id="H8EA559255348493187BD7DF075A0F989"><enum>(C)</enum><text>Identify and provide guidelines, best practices, methodologies, procedures, and processes for tracking and reporting substantial artificial intelligence security incidents and substantial artificial intelligence safety incidents across different sectors and use cases.</text></subparagraph><subparagraph id="HE387AF4330F347E8AF73A1C01A32B16F" commented="no"><enum>(D)</enum><text display-inline="yes-display-inline">Support the development of standardized reporting and documentation mechanisms, including automated mechanisms, that would help provide information, including public information, regarding substantial artificial intelligence security incidents and substantial artificial intelligence safety incidents.</text></subparagraph><subparagraph id="HB0321475F54A438184F4E3175AE53CB3"><enum>(E)</enum><text>Support the development of norms for reporting of substantial artificial intelligence security incidents and substantial artificial intelligence safety incidents, taking into account when it is appropriate to publicly disclose such incidents.</text></subparagraph></paragraph><paragraph id="H9B36302442B24DCF89CAF1AE3D45236C"><enum>(3)</enum><header>Report</header><text display-inline="yes-display-inline">Not later than three years after the date of the enactment of this Act, the Director of the National Institute of Standards and Technology shall submit to Congress a report on a process relating to the voluntary collection, reporting, and tracking of substantial artificial intelligence security incidents and substantial artificial intelligence safety incidents under paragraph (1). 
Such report shall include the following:</text><subparagraph id="H23D86AD732AC49C39EBB71ABE916CBBD"><enum>(A)</enum><text>Findings from the multi-stakeholder process referred to in such paragraph.</text></subparagraph><subparagraph id="H0F0FC1D214FF40528941F222CDAE6DA0"><enum>(B)</enum><text>An assessment of and recommendations for establishing reporting and collection mechanisms by which industry, academia, nonprofit organizations, standards development organizations, civil society groups, and appropriate public sector entities may voluntarily share standardized information regarding substantial artificial intelligence security incidents and substantial artificial intelligence safety incidents.</text></subparagraph></paragraph></subsection><subsection id="H169D84C85082455AB2B427493C9E2E15"><enum>(c)</enum><header>Limitation</header><text>Nothing in this section provides the Director of the National Institute of Standards and Technology with any enforcement authority that was not in effect on the day before the date of the enactment of this section.</text></subsection><subsection id="HDA514C7E02A74C0FA454692DDB43F56C"><enum>(d)</enum><header>Definitions</header><text>In this section:</text><paragraph id="HE9E807FEE8CB4A10AD23AD038371A24C"><enum>(1)</enum><header>Artificial intelligence</header><text>The term <quote>artificial intelligence</quote> has the meaning given such term in section 5002 of the National Artificial Intelligence Initiative Act of 2020 (<external-xref legal-doc="usc" parsable-cite="usc/15/9401">15 U.S.C. 
9401</external-xref>).</text></paragraph><paragraph id="H41A963C55A5E4614855D5CFC350621C7" commented="no"><enum>(2)</enum><header>Artificial intelligence security vulnerability</header><text display-inline="yes-display-inline">The term <quote>artificial intelligence security vulnerability</quote> means a weakness in an artificial intelligence system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source.</text></paragraph><paragraph id="HB8AEF75BEB9240EEB18B30E1784E2014"><enum>(3)</enum><header>Artificial intelligence system</header><text>The term <quote>artificial intelligence system</quote> has the meaning given such term in section 7223 of the Advancing American AI Act (<external-xref legal-doc="usc" parsable-cite="usc/40/11301">40 U.S.C. 11301</external-xref> note; as enacted as part of title LXXII of division G of the James M. Inhofe National Defense Authorization Act for Fiscal Year 2023; <external-xref legal-doc="public-law" parsable-cite="pl/117/263">Public Law 117–263</external-xref>).</text></paragraph><paragraph id="H5D62E83B3D6D4D6CBE028C742FDC348B"><enum>(4)</enum><header>Sector Risk Management Agency</header><text>The term <quote>Sector Risk Management Agency</quote> has the meaning given such term in section 2200 of the Homeland Security Act of 2002 (<external-xref legal-doc="usc" parsable-cite="usc/6/650">6 U.S.C. 
650</external-xref>).</text></paragraph><paragraph id="H133C45A932834A18ADCD3D1D6F7D1AB4" commented="no"><enum>(5)</enum><header>Threat source</header><text display-inline="yes-display-inline">The term <quote>threat source</quote> means any of the following:</text><subparagraph id="H11DBB957B6144B6D91D5708DDE607D2C" commented="no"><enum>(A)</enum><text>An intent and method targeted at the intentional exploitation of a vulnerability.</text></subparagraph><subparagraph id="H34618559A5164D808F15220BB150EA9D" commented="no"><enum>(B)</enum><text>A situation and method that may accidentally trigger a vulnerability.</text></subparagraph></paragraph></subsection></section></legis-body></bill> 