[116th Congress Public Law 258]
[From the U.S. Government Publishing Office]
IDENTIFYING OUTPUTS OF GENERATIVE ADVERSARIAL NETWORKS ACT
Public Law 116-258
116th Congress
An Act
To direct the Director of the National Science Foundation to support
research on the outputs that may be generated by generative adversarial
networks, otherwise known as deepfakes, and other comparable techniques
that may be developed in the future, and for other
purposes. <<NOTE: Dec. 23, 2020 - [S. 2904]>>
Be it enacted by the Senate and House of Representatives of the
United States of America in Congress assembled, <<NOTE: Identifying
Outputs of Generative Adversarial Networks Act. 15 USC 9101 note. 15 USC
9101.>>
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Identifying Outputs of Generative
Adversarial Networks Act'' or the ``IOGAN Act''.
SEC. 2. FINDINGS.
Congress finds the following:
(1) Gaps currently exist in the underlying research needed
to develop tools that detect videos, audio files, or photos that
have manipulated or synthesized content, including those
generated by generative adversarial networks. Research on
digital forensics is also needed to identify, preserve, recover,
and analyze the provenance of digital artifacts.
(2) The National Science Foundation's focus to support
research in artificial intelligence through computer and
information science and engineering, cognitive science and
psychology, economics and game theory, control theory,
linguistics, mathematics, and philosophy, is building a better
understanding of how new technologies are shaping the society
and economy of the United States.
(3) The National Science Foundation has identified the ``10
Big Ideas for NSF Future Investment'' including ``Harnessing the
Data Revolution'' and the ``Future of Work at the Human-
Technology Frontier'', with artificial intelligence as a
critical component.
(4) The outputs generated by generative adversarial networks
should be included under the umbrella of research described in
paragraph (3) given the grave national security and societal
impact potential of such networks.
(5) Generative adversarial networks are not likely to be
utilized as the sole technique of artificial intelligence or
machine learning capable of creating credible deepfakes. Other
techniques may be developed in the future to produce similar
outputs.
SEC. 3. <<NOTE: 15 USC 9102.>> NSF SUPPORT OF RESEARCH ON
MANIPULATED OR SYNTHESIZED CONTENT AND
INFORMATION SECURITY.
The Director <<NOTE: Consultation.>> of the National Science
Foundation, in consultation with other relevant Federal agencies, shall
support merit-reviewed and competitively awarded research on manipulated
or synthesized content and information authenticity, which may include--
(1) fundamental research on digital forensic tools or other
technologies for verifying the authenticity of information and
detection of manipulated or synthesized content, including
content generated by generative adversarial networks;
(2) fundamental research on technical tools for identifying
manipulated or synthesized content, such as watermarking systems
for generated media;
(3) social and behavioral research related to manipulated or
synthesized content, including human engagement with the
content;
(4) research on public understanding and awareness of
manipulated and synthesized content, including research on best
practices for educating the public to discern authenticity of
digital content; and
(5) <<NOTE: Coordination.>> research awards coordinated
with other Federal agencies and programs, including the Defense
Advanced Research Projects Agency and the Intelligence Advanced
Research Projects Activity, with coordination enabled by the
Networking and Information Technology Research and Development
Program.
SEC. 4. <<NOTE: 15 USC 9103.>> NIST SUPPORT FOR RESEARCH AND
STANDARDS ON GENERATIVE ADVERSARIAL NETWORKS.
(a) In General.--The Director of the National Institute of Standards
and Technology shall support research for the development of
measurements and standards necessary to accelerate the development of
the technological tools to examine the function and outputs of
generative adversarial networks or other technologies that synthesize or
manipulate content.
(b) Outreach.--The Director of the National Institute of Standards
and Technology shall conduct outreach--
(1) to receive input from private, public, and academic
stakeholders on fundamental measurements and standards research
necessary to examine the function and outputs of generative
adversarial networks; and
(2) to consider the feasibility of an ongoing public and
private sector engagement to develop voluntary standards for the
function and outputs of generative adversarial networks or other
technologies that synthesize or manipulate content.
SEC. 5. REPORT ON FEASIBILITY OF PUBLIC-PRIVATE PARTNERSHIP TO
DETECT MANIPULATED OR SYNTHESIZED CONTENT.
Not later than 1 year after the date of enactment of this Act, the
Director of the National Science Foundation and the Director of the
National Institute of Standards and Technology shall jointly submit to
the Committee on Science, Space, and Technology of the House of
Representatives, the Subcommittee on Commerce, Justice, Science, and
Related Agencies of the Committee on Appropriations of the House of
Representatives, the Committee on Commerce, Science, and Transportation
of the Senate, and the
Subcommittee on Commerce, Justice, Science, and Related Agencies of the
Committee on Appropriations of the Senate a report containing--
(1) the Directors' findings with respect to the feasibility
for research opportunities with the private sector, including
digital media companies, to detect the function and outputs of
generative adversarial networks or other technologies that
synthesize or manipulate content; and
(2) <<NOTE: Recommendations.>> any policy recommendations
of the Directors that could facilitate and improve communication
and coordination between the private sector, the National
Science Foundation, and relevant Federal agencies through the
implementation of innovative approaches to detect digital
content produced by generative adversarial networks or other
technologies that synthesize or manipulate content.
SEC. 6. <<NOTE: 15 USC 9104.>> GENERATIVE ADVERSARIAL NETWORK
DEFINED.
In this Act, the term ``generative adversarial network'' means,
with respect to artificial intelligence, the machine learning process of
attempting to cause a generator artificial neural network (referred to
in this paragraph as the ``generator'') and a discriminator artificial
neural network (referred to in this paragraph as a ``discriminator'') to
compete against each other to become more accurate in their function and
outputs, through which the generator and discriminator create a feedback
loop, causing the generator to produce increasingly higher-quality
artificial outputs and the discriminator to increasingly improve in
detecting such artificial outputs.
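(For illustration only, and not part of the statutory text: the sketch below shows, in minimal form, the generator/discriminator feedback loop that the definition above describes. It is written in Python and assumes the PyTorch library; the network sizes, stand-in dataset, and training hyperparameters are arbitrary assumptions chosen for brevity, not a definitive implementation.)

    # Minimal GAN training loop: the generator learns to produce artificial
    # outputs, the discriminator learns to detect them, and each improves
    # against the other -- the feedback loop described in section 6.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 64

    # Generator: maps random noise to an artificial output.
    G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                      nn.Linear(128, data_dim))
    # Discriminator: scores how likely an input is to be real rather
    # than generated.
    D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                      nn.Linear(128, 1))

    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    real_data = torch.randn(256, data_dim)  # stand-in for a real dataset

    for step in range(1000):
        real = real_data[torch.randint(0, 256, (32,))]
        noise = torch.randn(32, latent_dim)
        fake = G(noise)

        # Discriminator step: learn to separate real samples from
        # generated ones (fake is detached so this step does not
        # update the generator).
        d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
                  loss_fn(D(fake.detach()), torch.zeros(32, 1)))
        opt_D.zero_grad()
        d_loss.backward()
        opt_D.step()

        # Generator step: learn to produce samples the discriminator
        # scores as real, closing the feedback loop.
        g_loss = loss_fn(D(fake), torch.ones(32, 1))
        opt_G.zero_grad()
        g_loss.backward()
        opt_G.step()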
Approved December 23, 2020.
LEGISLATIVE HISTORY--S. 2904 (H.R. 4355):
---------------------------------------------------------------------------
HOUSE REPORTS: No. 116-268 (Comm. on Science, Space, and Technology)
accompanying H.R. 4355.
SENATE REPORTS: No. 116-289 (Comm. on Commerce, Science, and
Transportation).
CONGRESSIONAL RECORD, Vol. 166 (2020):
Nov. 18, considered and passed Senate.
Dec. 8, considered and passed House.