[House Hearing, 115 Congress]
[From the U.S. Government Publishing Office]







             GAME CHANGERS: ARTIFICIAL INTELLIGENCE PART I

=======================================================================

                                HEARING

                               BEFORE THE

                            SUBCOMMITTEE ON
                         INFORMATION TECHNOLOGY

                                 OF THE

                         COMMITTEE ON OVERSIGHT
                         AND GOVERNMENT REFORM
                        HOUSE OF REPRESENTATIVES

                     ONE HUNDRED FIFTEENTH CONGRESS

                             SECOND SESSION

                               __________

                           FEBRUARY 14, 2018

                               __________

                           Serial No. 115-65

                               __________

Printed for the use of the Committee on Oversight and Government Reform





[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]





         Available via the World Wide Web: http://www.fdsys.gov
                       http://oversight.house.gov
                       
                                   ______
		 
                     U.S. GOVERNMENT PUBLISHING OFFICE 
		 
30-296 PDF                WASHINGTON : 2018                 
                       
                       
                       
                       
                       
                       
                       
                       
                       
                       
                       
                       
                       
                       
                       
                       
                       
                       
                       
              Committee on Oversight and Government Reform

                  Trey Gowdy, South Carolina, Chairman
John J. Duncan, Jr., Tennessee       Elijah E. Cummings, Maryland, 
Darrell E. Issa, California              Ranking Minority Member
Jim Jordan, Ohio                     Carolyn B. Maloney, New York
Mark Sanford, South Carolina         Eleanor Holmes Norton, District of 
Justin Amash, Michigan                   Columbia
Paul A. Gosar, Arizona               Wm. Lacy Clay, Missouri
Scott DesJarlais, Tennessee          Stephen F. Lynch, Massachusetts
Blake Farenthold, Texas              Jim Cooper, Tennessee
Virginia Foxx, North Carolina        Gerald E. Connolly, Virginia
Thomas Massie, Kentucky              Robin L. Kelly, Illinois
Mark Meadows, North Carolina         Brenda L. Lawrence, Michigan
Ron DeSantis, Florida                Bonnie Watson Coleman, New Jersey
Dennis A. Ross, Florida              Stacey E. Plaskett, Virgin Islands
Mark Walker, North Carolina          Val Butler Demings, Florida
Rod Blum, Iowa                       Raja Krishnamoorthi, Illinois
Jody B. Hice, Georgia                Jamie Raskin, Maryland
Steve Russell, Oklahoma              Peter Welch, Vermont
Glenn Grothman, Wisconsin            Matt Cartwright, Pennsylvania
Will Hurd, Texas                     Mark DeSaulnier, California
Gary J. Palmer, Alabama              Jimmy Gomez,California
James Comer, Kentucky
Paul Mitchell, Michigan
Greg Gianforte, Montana

                     Sheria Clarke, Staff Director
                    William McKenna, General Counsel
           Troy Stock, Technology Subcommittee Staff Director
                Sarah Moxley, Senior Professional Member
                    Sharon Casey, Deputy Chief Clerk
                 David Rapallo, Minority Staff Director
                                 ------                                

                 Subcommittee on Information Technology

                       Will Hurd, Texas, Chairman
Paul Mitchell, Michigan, Vice Chair  Robin L. Kelly, Illinois, Ranking 
Darrell E. Issa, California              Minority Member
Justin Amash, Michigan               Jamie Raskin, Maryland
Blake Farenthold, Texas              Stephen F. Lynch, Massachusetts
Steve Russell, Oklahoma              Gerald E. Connolly, Virginia
Greg Gianforte, Montana              Raja Krishnamoorthi, Illinois



























                            C O N T E N T S

                              ----------                              
                                                                   Page
Hearing held on February 14, 2018................................     1

                               WITNESSES

Dr. Amir Khosrowshahi, Vice President and Chief Technology 
  Officer, Artificial Intelligence Products Group, Intel
    Oral Statement...............................................     4
    Written Statement............................................     7
Dr. Charles Isbell, Executive Associate Dean and Professor, 
  College of Computing, Georgia Institute of Technology
    Oral Statement...............................................    22
    Written Statement............................................    25
Dr. Oren Etzioni, Chief Executive Officer, Allen Institute for 
  Artificial Intelligence
    Oral Statement...............................................    31
    Written Statement............................................    33
Dr. Ian Buck, Vice President and General Manager, Tesla Data 
  Center Business, NVIDIA
    Oral Statement...............................................    45
    Written Statement............................................    47

 
             GAME CHANGERS: ARTIFICIAL INTELLIGENCE PART I

                              ----------                              


                      Wednesday, February 14, 2018

                  House of Representatives,
            Subcommittee on Information Technology,
              Committee on Oversight and Government Reform,
                                                   Washington, D.C.
    The subcommittee met, pursuant to call, at 2:23 p.m., in 
Room 2154, Rayburn House Office Building, Hon. Will Hurd 
[chairman of the subcommittee] presiding.
    Present: Representatives Hurd, Amash, Kelly, Lynch, 
Connolly, and Krishnamoorthi.
    Also Present: Representative Massie.
    Mr. Hurd. The Subcommittee on Information Technology will 
come to order. And, without objection, the chair is authorized 
to declare a recess at any time.
    Welcome to the first hearing in a series of hearings on 
artificial intelligence. This series is an opportunity for the 
subcommittee to take a deep dive into artificial intelligence. 
And today's hearing is an opportunity to increase Congress' 
understanding of artificial intelligence, including its 
development, uses, and the potential challenges and advantages 
of government adoption of artificial intelligence.
    We have four experts on the matter whom I look forward to 
hearing from today. And in the next hearing we do, in March, I 
believe, we will hear from government agencies about how they 
are or should be adopting artificial intelligence into their 
operations, how they will use AI to spend taxpayer dollars 
wisely and make each individual's interactions with the 
government more efficient, effective, and secure.
    It is important that we understand both the risks and 
rewards of artificial intelligence. And in the third hearing, 
in April, we will discuss the appropriate roles of both the 
public and private sectors as artificial intelligence matures.
    Artificial intelligence is a technology that transcends 
borders. We have allies and adversaries, both nation-states and 
individual hackers, who are pursuing artificial intelligence 
with all they have, because dominance in artificial 
intelligence is a guaranteed leg up in the realm of geopolitics 
and economics.
    At the end of this series, it is my goal to ensure that we 
have a clear idea of what it takes for the United States to 
remain the world leader when it comes to artificial 
intelligence. Thoughtful engagement by legislators is key to 
this goal, and I believe that this committee will be leaders on 
this topic.
    So what is artificial intelligence? Hollywood's portrayal 
of artificial intelligence is not accurate. Instead, many of us 
are already using it every single day, from song 
recommendations in Spotify to digital assistants that tell us 
the weather.
    And while these consumer applications are important, I am 
most excited about the possibility of using artificial 
intelligence in the government to defend our infrastructure and 
have better decisionmaking because of the analytics that 
artificial intelligence can run.
    In an environment of tightening resources, artificial 
intelligence can help us do more for less money and help to 
provide better citizen-facing services.
    I thank the witnesses for being here today and look forward 
to hearing and learning from you so that we can all benefit 
from the revolutionary opportunities AI provides us.
    As always, I am honored to be exploring these issues in a 
bipartisan fashion, I think the IT Subcommittee is a leader on 
doing things in a bipartisan way, with my friend and ranking 
member, the Honorable Robin Kelly from the great State of 
Illinois.
    Ms. Kelly. Thank you. Welcome to the witnesses. Thank you, 
Chairman Hurd, and welcome to all of our witnesses today, and 
Happy Valentine's Day.
    Artificial intelligence, or AI, has the capacity to improve 
how society handles some of its most difficult challenges.
    In medicine, the use of AI has the potential to save lives 
and detect illnesses early. One MIT study found that using 
machine-learning algorithms reduced human errors by 85 percent 
when analyzing the cells of lung cancer patients. And earlier 
this month, Wired magazine reported hospitals have now begun 
testing software that can check the images of a person's eye 
for signs of diabetic eye disease, a condition that if 
diagnosed too late can result in vision lost.
    In some communities around the country, self-driving cars 
are already operating on the road and highways. That makes me 
nervous. Investment by major car companies in self-driving cars 
makes it increasingly likely that they will become the norm, 
not the exception on our Nation's roads.
    But there is a lot of uncertainty revolving around 
artificial intelligence. AI is no longer the fantasy of science 
fiction and is increasingly used in everyday life. As the use 
of AI expands, it is critical that this powerful technology is 
implemented in an inclusive, accessible, and transparent 
manner.
    In its most recent report on the future of AI, the National 
Science and Technology Council issued a dire assessment of the 
state of diversity within the AI industry. The NSTC found that 
there was a, quote, ``lack of gender and racial diversity in 
the AI workforce,'' and that this, quote, ``mirrors the lack of 
diversity in the technology industry and the field of computer 
science generally.'' According to the NSTC, in the field of AI 
improving diversity, and I quote, ``is one of the most critical 
and high priority challenges.''
    The existing racial and gender gaps in the tech industry 
add to the challenges the AI field faces. Although women 
comprise approximately 18 percent of computer science graduates 
in the Nation, only 11 percent of all computer science 
engineers are female. African Americans and Hispanics account 
for just 11 percent of all employees in the technology sector, 
despite making up 27 percent of the total population in this 
country.
    Lack of AI workforce diversity can have real cost on 
individuals' lives. The increasing use of AI to make 
consequential decisions about people's lives is spreading at a 
fast rate. Currently, AI systems are being used to make 
decisions by banks about who should receive loans, by 
government about whether someone is eligible for public 
benefits, and by courts about whether a person should be set 
free.
    However, research has found considerable flaws and biases 
can exist in the algorithms that support AI systems, calling 
into question the accuracy of such systems and its potential 
for unequal treatment of some Americans. For AI to be accurate, 
it requires accurate data and learning sets to draw 
conclusions. If the data provided is biased, the conclusions 
will likely be biased. A diverse workforce will likely account 
for this and use more diverse data and learning sets.
    Within the industry, the use of black box algorithms are 
exacerbating the problems of bias. Two years ago, ProPublica 
investigated the use of computerized risk prediction tools that 
were used by some judges in criminal sentencing and bail 
hearings.
    The investigation revealed that the algorithm the systems 
relied upon to predict recidivism was not only inaccurate, but 
biased against African Americans who were, quote, ``twice as 
likely as Whites to be labeled a higher risk but not actually 
reoffend.''
    Judges were using misinformation derived from black box 
software to make life-changing decisions on whether someone is 
let free or receives a harsher sentence than appropriate.
    Increasing the transparency of these programs and ensuring 
a diverse workforce is engaged on developing AI will help 
decrease bias and make software more inclusive. Increasing 
diversity among the AI workforce helps avoid the negative 
outcomes that can occur when AI development is concentrated 
among certain groups of individuals, including the risk of 
biases in AI systems.
    As we move forward in this great age of technological 
modernization, I will be focused on how the private sector, 
Congress, and regulators can work together to ensure that AI 
technologies continue to innovate successfully and socially 
responsibly.
    I want to thank our witnesses for testifying today and look 
forward to hearing your thoughts on how we can achieve this 
goal.
    And, again, thank you, Mr. Chair.
    Mr. Hurd. I recognize the distinguished gentleman from 
Kentucky, Mr. Massie, is here. He is not a member of the 
subcommittee, so I ask unanimous consent that he is able to 
fully participate in this hearing. Without objection, so 
ordered.
    Now I am pleased to announce and introduce our witnesses. 
Our first one, Dr. Amir Khosrowshahi, is vice president and 
chief technology officer of the Artificial Intelligence 
Products Group at Intel.
    Welcome.
    Dr. Charles Isbell is executive associate dean of the 
College of Computing within the Georgia Institute of 
Technology.
    Dr. Oren Etzioni is the chief executive officer at the 
Allen Institute for Artificial Intelligence.
    And Dr. Ian Buck is vice president and general manager of 
Accelerated Computing at NVIDIA.
    Welcome to you all.
    And pursuant to committee rules, all witnesses will be 
sworn in before you testify. So please rise and raise your 
right hand.
    Do you solemnly swear or affirm that the testimony you are 
about to give is the truth, the whole truth, and nothing but 
the truth, so help you God?
    Thank you.
    Please let the record reflect that all witnesses answered 
in the affirmative.
    In order to allow time for discussion, please limit your 
testimony to 5 minutes. Your entire written statement will be 
made part of the record.
    And as a reminder, the clock in front of you shows your 
remaining time. The light will turn yellow when you have 30 
seconds left, and when it turns red your time is up. And please 
remember to also push the button to turn on your microphone 
before speaking.
    And now it is a pleasure to recognize Dr. Khosrowshahi for 
your initial 5 minutes.

                       WITNESS STATEMENTS

                 STATEMENT OF AMIR KHOSROWSHAHI

    Mr. Khosrowshahi. Good afternoon, Chairman Hurd, Ranking 
Member Kelly, and members of the House Committee on Oversight 
and Government Reform, Subcommittee on Information Technology.
    My name is Amir Khosrowshahi, and I am the vice president 
and chief technology officer of Intel Corporation's Artificial 
Intelligence Products Group.
    We're here today to discuss artificial intelligence, a term 
that was an aspirational concept until recently. While 
definitions of artificial intelligence vary, my work at Intel 
focuses on applying machine-learning algorithms to real world 
scenarios to offer benefits to people and organizations.
    Thanks to technological advancements, AI is now emerging as 
a fixture in our daily lives. For instance, speech recognition 
features, recommendation engines, and bank fraud detection 
systems all utilize AI.
    These features make our lives more convenient, but AI 
offers society so much more. For example, AI healthcare 
solutions will revolutionize patient diagnosis and treatment.
    Heart disease kills one in four people in the United 
States. It is difficult for doctors to accurately diagnose 
disease, because different conditions present similar symptoms. 
That's why doctors mainly have had to rely on experience and 
instinct to make diagnoses. More experienced doctors tend to 
diagnose correctly three out of four times, those with less 
experience, however, just half the time, as accurate as the 
flipping of a coin. Patients suffer due to this information 
gap.
    Recently, researchers using AI accurately spotted the 
difference between the two types of heart disease 9 out of 10 
times. In this regard, AI democratizes expert diagnoses for 
patients and doctors everywhere in the world.
    AI is also contributing positively to agriculture. The 
population is growing, and by 2050 we will need to produce at 
least 50 percent more food to feed everyone. This will become 
increasingly challenging as societies will need to produce more 
food with less land to grow crops.
    Thankfully, AI applications provide tools to improve crop 
yields and quality, while also reducing consumption of 
resources like water and fertilizer.
    These are just a few examples of how AI is helping our 
communities. However, as we continue to harness the benefits of 
AI for societal good, governments will play a major role. We 
are in the early days of innovation of a technology that can do 
tremendous good. Governments should make certain to encourage 
this innovation and they should be wary of regulation that will 
stifle its growth.
    At the Federal level, the United States Government can play 
an important role in enabling the further development of AI 
technology in a few ways.
    First, since data fuels AI, the U.S. Government should 
embrace open data policies. To realize AI's benefits, 
researchers need to have access to large datasets. Some of the 
most comprehensive datasets are currently owned by the Federal 
Government. This data is a taxpayer-funded resource which, if 
made accessible to the public, could be utilized by researchers 
to train algorithms for future AI solutions.
    The OPEN Government Data Act makes all nonsensitive U.S. 
Government data freely available and accessible to the public. 
Intel supports this bill and calls for its swift passage.
    Second, the U.S. Government can help prepare an AI 
workforce. Supporting universal STEM education is a start, but 
Federal funding for basic scientific research at universities 
by agencies like the National Science Foundation is important 
to both train graduate-level scientists and contribute to our 
scientific knowledge base.
    Current Federal funding levels are not keeping pace with 
the rest of the industrialized world. I encourage lawmakers to 
consider the tremendous returns on investment to our economy 
that funding science research produces.
    In addition to developing the right talent to develop AI 
solutions, governments will have to confront labor 
displacement. AI's emergence will displace some workers, but 
too little is known about the types of jobs and industries that 
would be most affected.
    Bills like H.R. 4829, the AI JOBS Act, help bridge that 
information gap by calling for the Labor Department to study 
the issue and to work with Congress on recommendations. Intel 
supports this bill as well and encourages Congress to consider 
it in committee.
    AI promises many societal benefits, and government and 
industry should work together to harness them, and also to set 
up guidelines to encourage ethical deployment of AI and to 
prevent it from being used in improper ways that could harm the 
public.
    I cannot stress enough how important it is that lawmakers 
seize the opportunity to enable AI innovation. As U.S. 
lawmakers consider what to do in response to the emergence of 
AI, I encourage you to use a light touch. Legislating or 
regulating AI too heavily will only serve to disadvantage 
Americans, especially as governments around the world are 
pouring resources into tapping into AI's potential.
    Thank you again for the opportunity to testify today. The 
government will play an important role in enabling us to 
harness AI's benefits while preparing society to participate in 
an AI-fueled economy. Determining whether or how existing legal 
and public policy frameworks may need to be altered will be an 
iterative process. Intel stands ready to be a resource as you 
consider these issues.
    Thank you.
    [Prepared statement of Mr. Khosrowshahi follows:]
    
    
    
    
   [GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
   
    Mr. Hurd. Thank you, Dr. Khosrowshahi.
    Dr. Isbell, you are now recognized for 5 minutes.

                  STATEMENT OF CHARLES ISBELL

    Mr. Isbell. Chairman Hurd, Ranking Member Kelly, and 
distinguished members of the subcommittee, my name is Dr. 
Charles Isbell. I am a professor and executive associate dean 
for the College of Computing at Georgia Tech. I would like to 
thank you for the opportunity to appear before the 
subcommittee.
    As requested by the subcommittee, my testimony today will 
focus on the potential for artificial intelligence and machine 
learning to transform the world around us and how we might 
collectively best respond to this potential.
    There are many definitions of AI. My favorite one is that 
it is the art and science of making computers act the way they 
do in the movies. In the movies, computers are often 
semimagical and anthropomorphic. They do things that if humans 
did them, we would say they required intelligence.
    As noted by the chairman, if that is AI, then we already 
see AI in our everyday lives. We use the infrastructure of AI 
to search more documents than any human could possibly read in 
a lifetime, to find the answers to a staggering variety of 
questions, often expressed literally as questions. We use that 
same infrastructure to plan optimal routes for trips, even 
altering our routes on the fly in the face of changes in 
traffic.
    We let computers finish our sentences, sometimes 
facilitating a subtle shift from prediction of our behavior to 
influence over our behavior. And we take advantage of these 
services by using computers on our phones or home speakers to 
interpret a wide variety of spoken commands.
    All of this is made possible because AI systems are 
fundamentally about computing and computing methods for 
automated understanding and reasoning, especially ones that 
leverage data to adapt their behavior over time.
    That AI is really computing is an important point to 
understand. What has enabled many of the advances in AI is the 
stunning increase of computational power, combined with the 
ubiquity of that computing.
    That AI also leverages data is equally important. The same 
advances in AI are also due, in large part, to the even more 
stunning increase in the availability of data, again made 
possible by ubiquity, in this case of the internet, social 
media, and relatively inexpensive sensors, including cameras, 
GPS, microphones, all embedded in devices we carry with us, 
connected to computers that are, in turn, connected to one 
another.
    By leveraging computing and data, we are moving from robots 
that assemble our cars to cars that almost drive themselves. 
One can be skeptical, as I am, that we will in the near future 
create AI that is as capable as humans are in performing a wide 
variety of the sort of general tasks that humans grapple with 
every day simultaneously. But it does seem that we are making 
strong progress toward being able to solve a lot of very hard 
individual tasks as well as humans.
    We may not replace all 3 million truck drivers and taxi cab 
drivers, nor all 3 million cashiers in the United States, but 
we will increasingly replace many of them. We may soon trust 
the x-ray machine itself to tell us whether we have a tumor as 
much as we trust the doctor. We may not automate away 
intelligence analysts, but AI will shape and change their 
analysis.
    So AI exists and is getting better. It is not the AI of 
science fiction, neither benevolent intelligence working with 
humans as we traverse the galaxy, nor malevolent AI that seeks 
humanity's destruction. Nonetheless, we are living every day 
with machines that make decisions that if humans made them we 
would attribute to intelligence.
    As noted by the ranking member, it is worth noting that 
these machines are making decisions for humans and with humans. 
Many AI researchers and practitioners are engaged in what we 
might call interactive AI. The fundamental goal there is to 
understand how to build intelligent agents that must live and 
interact with large numbers of other intelligent agents, some 
of whom may be human.
    Progress towards this goal means that we can build 
artificial systems that work with humans to accomplish tasks 
more effectively, can respond more robustly to changes in the 
environment, and can better coexist with humans as long-lived 
partners.
    But as with any partner, it is important that we understand 
what our partner is doing and why. To make the most of this 
emerging technology, we will need a more informed citizenry, 
something we can accomplish by requiring that our AI partners 
are more transparent on the one hand and that we are more savvy 
on the other.
    By transparency, I mean something relatively simple. An AI 
algorithm should be inspectable. The kind of data the algorithm 
uses to build its model should be available. It is useful to 
know that your medical AI was trained to detect heart attacks 
mostly in men.
    The decisions that the system makes should be explainable 
and understandable. In other words, as we deploy these 
algorithms, each algorithm should be able to explain its output 
and its decisions: This applicant was assigned higher risk 
because is not only more useful, but is less prone to abuse 
than just this applicant was assigned a higher risk.
    To understand such machines, much less to create them, we 
have to strive for everyone to not only be literate, but to be 
compurate. That is, they must understand computing and 
computational thinking and how it fits into problem-solving in 
their everyday lives.
    I am excited by these hearings. Advances in AI are central 
to our economic and social future. The issues that are being 
raised here are addressable and can be managed with thoughtful 
support for robust funding and basic research in artificial 
intelligence, as noted by my colleague, support for ubiquitous 
and equitable computing education throughout the pipeline, in 
K-12 and beyond, and the developing standards for the proper 
use of intelligent systems.
    I thank you very much for your time and attention today, 
and I look forward to working with you in your efforts to 
understand how we can best develop these technologies to create 
a future where we are partners with intelligent machines.
    Thank you.
    [Prepared statement of Mr. Isbell follows:]
    
    
    
 [GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
         
    Mr. Hurd. Thank you, sir.
    Dr. Etzioni, you are now up for 5 minutes.

                   STATEMENT OF OREN ETZIONI

    Mr. Etzioni. Good afternoon, Chairman Hurd and Ranking 
Member Kelly, distinguished members of the committee. Thank you 
for the opportunity to speak with you today about the nature of 
AI and the role of the Federal Government.
    My name is Oren Etzioni. I am the CEO of the Allen 
Institute for Artificial Intelligence, which is backed by Paul 
Allen. We call ourselves AI2. Founded in 2014, AI2 is a 
nonprofit research institute whose mission is to improve lives 
by conducting high-impact research and engineering in the field 
of AI for the common good.
    The goal of my brief remarks today is to help demystify AI 
and cut through a lot of the hype on the subject. And I'm 
delighted to talk to you in particular, Chairman, with a 
computer science degree. But it's really important to me to 
make sure that my remarks are understandable by everybody and 
that we don't confuse science fiction with the real science and 
Hollywood and hype with what's actually going on.
    What we do have are these very narrow systems that are 
increasingly sophisticated, but they're also extremely 
difficult to build. We need to work to increase the supply of 
people who can do this. And that's going to be achieved through 
increased diversity, but also through immigration.
    And so, so many of us are immigrants to this country. At 
AI2, we have 80 people who come from literally all over the 
world, from Iran, from Israel, from India, et cetera, et 
cetera. We need to continue to welcome these people so we can 
continue to build these systems.
    I have a number of thoughts, but I actually want to address 
the issue that came up just in the conversation now about 
transparency and bias and certainly the concerns that we have 
about these database systems generating unfairness.
    Obviously, we want the systems to be fair, and obviously, 
we want them to be transparent. Unfortunately, it's not as easy 
as it sounds. These are complex statistical models that are 
ingesting enormous amounts of data, millions and billions of 
examples, and generating conclusions.
    So we have to be careful. And I think the phrase ``light 
touch'' is a great one here. We have to be very careful that we 
don't legislate transparency, but rather that we attempt to 
build algorithms that are more favored, more desired, because 
they're more transparent.
    I think legislating transparency or trying to do that would 
actually be a mistake, because ultimately consider the 
following dilemma. Let's say you have a diagnostic system 
that's highly transparent and 80 percent accurate. You've got 
another diagnostic system that's making a decision about a key 
treatment. It's not as transparent, okay, that's very 
disturbing, but it's 99 percent accurate. Which system would 
you want to have diagnosing you or your child?
    That's a real dilemma. So I think we need to balance these 
issues and be careful not to rush to legislate what's complex 
technology here.
    While I'm talking about legislation and regulation and the 
kinds of decisions you'll be making, I want to emphasize that I 
believe that we should not be regulating and legislating about 
AI as a field. It's amorphous. It's fast-moving. Where does 
software stop and AI begin? Is Google an AI system? It's really 
quite complicated.
    Instead, I would argue we should be thinking about AI 
applications. Let's say self-driving cars. That's something 
that we should be regulating, if only because there's a 
patchwork of municipal and State regulations that are going to 
be very confusing and disjointed, and that's a great role for 
the Federal Government.
    The same with AI toys. If Barbie has a chip in it and it's 
talking to my child, I want to be assured that there are some 
guidelines and some regulations about what information Barbie 
can take from my child and share publicly. So I think that if 
we think about applications, that's a great role for 
regulation.
    And then the last point I want to make is that we need to 
remember that AI is a tool. It's not something that's going to 
take over. It's not something that's going to make decisions 
for us, even in the context of criminal justice. It's a tool 
that's working side by side with a human.
    And so long as we don't just rubber stamp its decisions but 
rather listen to what it has to say but make our own decisions 
and realize that maybe AI ought to be thought of as augmented 
intelligence rather than artificial intelligence, then I think 
we're going to be in great shape.
    Thank you very much.
    [Prepared statement of Mr. Etzioni follows:] 
    
    
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]
    
    Mr. Hurd. Dr. Buck, you're on the clock, 5 minutes.

                     STATEMENT OF IAN BUCK

    Mr. Buck. Thank you, Chairman Hurd, Ranking Member Kelly, 
and distinguished members of the committee. I appreciate your 
invitation to give testimony today on this important subject of 
AI.
    My name is Ian Buck. I'm the vice president and general 
manager of Accelerated Computing at NVIDIA. Our company is 
headquartered in Silicon Valley and has over 11,000 employees.
    In 1999, NVIDIA invented a new type of processor called the 
graphics processing unit, or the GPU. It was designed to 
accelerate computer graphics for games by processing millions 
of calculations at the same time.
    Today, GPUs are used for many applications, including 
virtual reality, self-driving cars, AI, and high-performance 
computing. In fact, America's fastest supercomputer, at Oak 
Ridge National Labs, uses 18,000 NVIDIA GPUs for scientific 
research.
    Our involvement with AI began about 7 years ago, when 
researchers started using our processors to simulate human 
intelligence. Up until that time, computer programs required 
domain experts to manually describe objects or features.
    Those systems took years to develop and many were never 
accurate enough for widespread adoption. Researchers discovered 
that they could teach computers to learn with data in a process 
we call training.
    To put that in context, to teach a computer how to 
accurately recognize vehicles, for example, you need about 100 
million data points and images and an enormous amount of 
computation. Without GPUs, training such a system would take 
months. Today's GPU-based systems can do this in about a day.
    The world's leading technology companies have aggressively 
adopted AI. Google and Microsoft's algorithms now recognize 
images better than humans. Facebook translates over 2 billion 
language queries per day. Netflix uses AI to personalize your 
movie recommendations. And all those systems rely on thousands 
of GPUs.
    My job is to help companies like these bring intelligent 
features to billions of people.
    But AI's impact isn't just limited to tech companies. Self-
driving cars, as was mentioned, surgical robots, smart cities 
that can detect harmful activities, even solving fusion power, 
AI holds the best promise to solve these previously unsolvable 
problems.
    Here's a short list of problems for which I think AI could 
help.
    First, cyber defense. We need to protect government data 
centers and our citizens from cyber attack. The scale of the 
problem is mind-boggling, and we're working with Booz Allen 
Hamilton to develop faster cybersecurity systems and train 
Federal employees in AI.
    Second, as was mentioned, healthcare. Nearly 2 million 
Americans die each year from disease. We could diagnose them 
earlier and develop more personalized treatments. The National 
Cancer Institute and Department of Energy are using AI to 
accelerate cancer research.
    Third, waste, fraud, and abuse. The GAO reported that 
agencies made $144 billion in improper payments in fiscal 2016. 
The commercial sector is already using AI to reduce such costs. 
PayPal uses AI to cut their fraud rate in half, saving 
billions. And Google used AI to lower the cost of its data 
centers by 40 percent.
    Fourth, defense platform sustainment costs. Maintenance 
costs are a huge challenge for the DOD, typically equaling 50 
percent or more of the cost of a major platform, totaling over 
$150 billion annually. GE is already using AI to detect 
anomalies and perform predictive maintenance on gas turbines, 
saving them $5 million per plant each year.
    These are complex problems that require innovative 
solutions. AI can help us better achieve these results in less 
time and at lower cost.
    For the role of government, I have three recommendations.
    First, fund AI research. The reason we have neural networks 
today is because the government funded research for the first 
neural network in 1950. America leads the world in autonomous 
machine vehicle technology because DARPA funded self-driving 
car competitions over a decade ago.
    While other governments have aggressively raised their 
research funding, the U.S. research has been relatively flat. 
We should boost research funding through agencies like the NSF, 
NIH, and DARPA. We also need faster supercomputers, which are 
essential for AI research.
    Second, drive agency adoption of AI. Every major Federal 
agency, just like every major tech company, needs to invest in 
AI. Each agency should consult with experts in the field who 
understand AI and recruit or train data scientists.
    Three, open access to data. Data is the fuel that drives 
the AI engine. Opening access to vast sources of data available 
to the Federal Government would help develop new AI 
capabilities so we can eliminate more mundane tasks and enable 
workers to focus on problem-solving.
    In closing, AI is the biggest economic and technological 
revolution to take place in our lifetime. By some estimates, AI 
will add $8 trillion to the U.S. economy by 2035. The bottom 
line is we cannot afford to allow other countries overtake us.
    And I thank you for your consideration. I look forward to 
answering your questions.
    [Prepared statement of Mr. Buck follows:] 
    
[GRAPHIC(S) NOT AVAILABLE IN TIFF FORMAT]


    Mr. Hurd. I thank all of you.
    Now it's a pleasure to recognize the gentleman from 
Kentucky for 5 minutes for his first line of questions.
    Mr. Massie. To the doctor from Intel, I don't want to try 
to pronounce your name. Help me out with that.
    Mr. Khosrowshahi. Khosrowshahi.
    Mr. Massie. Khosrowshahi.
    You said that AI was aspirational, but now it's a reality. 
Where did we cross the threshold? In the '90s, I worked at the 
AI lab at MIT. I worked on the hardware, because the software 
problem was too hard. And it seemed like you could solve 
certain engineering problems in the software, but it still 
feels that way to me.
    What milestone did we cross, what threshold?
    Mr. Khosrowshahi. So I hear this a lot, that people studied 
neural networks in the '90s and they're kind of curious what 
has changed. And so let me just put it into a broader context. 
The history of AI goes back to the 1930s. The individuals who 
started the field, John von Neumann and Alan Turing, they were 
also the first people to build computers.
    So the history of AI and computing has been tightly 
intertwined. So computing, as Dr. Isbell mentioned, is really 
critical. Compute power has dramatically increased since your 
time to today.
    Another, the next change is data. And the algorithms 
potentially have not changed so much. They might look very 
familiar to you. But there has been actually a remarkable 
amount of innovation in the space of machine learning, which is 
a dominant form of AI, and in neural networks that Ian 
mentioned that is the state of the art today.
    And invariably, these things change with time. The state of 
the art in AI changes with time. But the three things that are 
different today are computing power, data, and innovation in 
algorithms.
    Mr. Massie. This next question I'd like to ask all four of 
you.
    If there were going to be an XPRIZE for AI, what is the 
next big milestone? What's the sword in the stone that somebody 
should try to pull out and if they do they deserve a big 
reward?
    Dr. Etzioni.
    Mr. Etzioni. I would observe that every time we build one 
of these systems, whether it's in medicine or self-driving cars 
or speech recognition, we're kind of starting from scratch. We 
have to train them with these millions or hundreds of millions 
of examples. We have to set the architecture by hand, et 
cetera, et cetera, et cetera.
    If we could build, as Charles was alluding to, more general 
systems, which is something that we're very far from being able 
to do today, a system that can work across multiple tasks 
simultaneously without being retrained by hand every time, that 
would be a major breakthrough.
    Mr. Massie. So, Dr. Buck, what would it be for you? Maybe 
driving from New York to L.A.?
    Mr. Buck. I think we've had our XPRIZE in self-driving cars 
with the work that DARPA did to kick off the industry 
innovation. There's a huge market for the first car company to 
really come up with a mass-produced self-driving vehicle.
    I think AI at this point has the opportunity to 
revolutionize individual fields, and some could benefit from an 
XPRIZE, certainly healthcare. I think if we can identify an 
opportunity to do personalized medicine, to look at the 
genomics data that we've been able to get flooded with, with 
new instruments, and apply AI to understanding the NED 
treatments that are going to solve diseases, many of them just 
need to be detected earlier. If we could find them early, we 
could treat them. If we wait until the symptoms surface with 
today's technology, it's sadly too late.
    And if I had to add one more, I think there are huge 
opportunities for AI to improve our infrastructure, 
transportation, and just apply it to real modern problems 
today.
    Kansas City is doing a great project right now on detecting 
potholes with AI. They're actually gathering all the data from 
the weather data, the traffic information, and trying to 
predict when a pothole is going to form on a particular road. 
They are now up to 75 percent accurate within about 5 to 10 
feet. So they can go out there ahead of time and treat that 
road and tar it up before they have to tear it up to fix a 
pothole.
    There are so many different applications of AI, I think 
those XPRIZES would be fun to watch.
    Mr. Massie. Dr. Isbell.
    Mr. Isbell. So I think there's sort of two answers to this.
    One, all of us have said in one form or another that AI is 
interesting in the context of a specific domain, and so there's 
an XPRIZE for every domain.
    But the more general question, I think, the answer is in 
the AI lab from the 1990s. I was also in the AI lab in the 
1990s, and my adviser was Rod Brooks. As you might recall, at 
the time he was building a system called Cog, and the goal of 
Cog was to build----
    Mr. Massie. I remember Cog.
    Mr. Isbell. Yes. I was probably sitting in the back when he 
announced it with you.
    The interesting thing about Cog was the idea was that they 
were going to build a 3-year-old. And I think that the general 
problem of intelligence is a difficult one, and the real XPRIZE 
is being able to build someone we would recognize as 
sophisticated as a 3-, 4-, or 5-year-old.
    Mr. Massie. Okay. Just a speed round here, if you'll 
indulge me. All four of you, I'll start here on the left.
    Since you mentioned the 3-year-old goal that Professor 
Brooks had, how far away is AI from passing the Turing test, 
the classic Turing test, where if you were talking to this 
being, sentient being in the computer, you wouldn't be able to 
recognize it as not a human? How many years away are we?
    You go first.
    Mr. Khosrowshahi. Twenty-plus.
    Mr. Massie. Twenty-plus.
    Dr. Isbell.
    Mr. Isbell. I assume the day after I die, because that's 
how these things usually work.
    Mr. Massie. Or the day after your funding runs out.
    Mr. Etzioni. I should caution that the Turing test as it's 
set up is kind of a test of human gullibility. I'm afraid that 
we'll pass it much sooner than is said. But if your question is 
about true human-level intelligence, I agree it's 20, 25 years 
and beyond, effectively beyond the foreseeable future.
    Mr. Massie. It's definitely easier to fool somebody than it 
is to convince them they've been fooled, right?
    Dr. Buck.
    Mr. Buck. I agree with my colleagues. It's equivalent to 
worrying about the overpopulation of Mars at this moment.
    Mr. Massie. But it's the question. So what's your guess?
    Mr. Buck. Oh, decades.
    Mr. Massie. Decades. Okay.
    Thank you very much.
    Mr. Hurd. The gentlelady from Illinois is recognized.
    Ms. Kelly. Thank you.
    A few of you talked about the investment that needs to be 
made in this and made into some of the agencies. So what amount 
of money per year do you think the Federal Government should 
invest in some of the science agencies and foundations that you 
were referring to? Because it's easy to say we should invest, 
but what's your realistic----
    Mr. Etzioni. None of us are a policy or budgeting expert, 
as you can see from the few seconds of silence, but----
    Ms. Kelly. We're silent, too, so don't worry.
    Mr. Etzioni. Let me suggest that much more than China. We 
have a substantially larger economy. We should be investing a 
lot more.
    Ms. Kelly. Do you know what China is investing?
    Mr. Etzioni. I don't know the exact numbers, but it's 
certainly in the billions, according to their recently released 
blueprint.
    Ms. Kelly. Anybody else?
    Mr. Khosrowshahi. So I don't know the numbers exactly, but 
funding for NSF I think is on the order of billions. And this 
money is highly leveraged. And funding graduate students 
studying at AI universities is a really good way to spend the 
money to accelerate innovation in AI.
    And we do this at our company. We invest heavily in 
university programs, many grad students, many labs. And we've 
seen a lot of return in this specific area. So money well 
spent.
    So $3 billion versus $6 billion, the extra $3 billion will 
be hugely effective in spurring innovation in AI.
    Ms. Kelly. I was going to ask you, since your company is 
big in this area, how are you spurring on diversity, more 
women, more people of color?
    Mr. Khosrowshahi. It is actually a prime directive that 
comes from our CEO. So it's something that he is very focused 
on. We have diversity requirements in our hiring. Everyone 
knows these requirements in our hiring process. We focus on it.
    And in our field in particular, we've seen firsthand--I 
have--that additional diversity benefits in many ways. So we 
discuss bias, transparency, having diversity in the scientific 
demographics within our company. We have different ideas 
presented. Sometimes these issues that you brought up are 
highly nuanced and they surprise me.
    And so, again, that's a directive from our CEO.
    Ms. Kelly. Thank you.
    Dr. Isbell, you talked about increasing diversity, but 
starting in K through 12. What do you think schools need to do 
K through 12 to spur interest or what resources do they have to 
have?
    Mr. Isbell. So two short answers to that. I'll answer the 
first one first.
    They have to connect what AI and what computing can do to 
the lives of the people who are in school. That's the single 
most important thing.
    One thing that you just heard is that every dollar you 
spend on AI has a multiplying effect. And it's true, because it 
connects to all these domains, whether it's driving or whether 
it's history, whether it's medicine, whatever it is. And just 
connect that what you're doing will help you to do whatever 
problem you want to solve.
    But the main limiting factor fundamentally is teachers. We 
simply do not have enough of them. You asked me how much money 
you should spend. Whatever number you come up with, it's 10 
times whatever you will come up with is the right answer.
    But even if you spent all of that money, we are not going 
to be able to have enough teachers who are going to be able to 
reach enough tenth-graders in the time that we're going to need 
in order to develop the next-generation workforce. It simply 
isn't possible.
    What we're going to have to do is use technology to make 
that happen. We're going to have to make it so that Dr. Etzioni 
can reach 10,000 people instead of 40 people at a time and can 
work with people who are local to the students in order to help 
them to learn. That's the biggest, I think, resource for 
bringing people in who are young.
    Ms. Kelly. Thank you.
    Mr. Etzioni. May I just add something real quick?
    It's not just the number of teachers, but it's teacher 
training. My kids went to fancy private schools in Seattle that 
had classes called tech, and I was really disappointed to learn 
that they were teaching them features of PowerPoint because the 
teacher did not know how to program. So we need to have 
educational programs for the teachers so that they can teach 
our kids.
    And believe me, 8-year-old, 10-year-old, what a great time 
to learn to write computer programs. And it will also help at 
least with gender diversity and other kinds of diversity, 
because at that point kids are less aware of these things and 
they'll figure out, hey, I can do this.
    Ms. Kelly. Also, we talked about not getting the immigrant 
community. I serve on the board of trustees of my college, and 
that's something that we talked about. And they shared that the 
amount of foreign students has gone down drastically, because 
they don't feel as welcome in the country, and it's in 
engineering and the STEM fields that that has happened.
    So I think my time is about up. Oh, I can keep going.
    One thing I wanted to ask, what are the biases you have 
seen because of the lack of diversity?
    Mr. Buck. I think biases are a very important topic. 
Inherently, there's nothing biased about AI in itself as a 
technique. The bias comes from the data that is presented to 
it, and it is the job of a good data scientist to understand 
and grapple with that bias.
    You're always going to have more data samples from one 
source than another source. It's inevitable. So you have to be 
aware of those things and seek them out. And a good data 
scientist never rests until they've looked at every angle to 
discover that bias.
    It was talked about in our panel, in our testimonies. The 
think I'd add is that an important part of it, to detect bias, 
is where did it come from?
    Traceability is a term that's used a lot in developing AI 
systems. As you're going through and learning better neural 
networks, inserting more data, you're recording the process and 
development.
    So when you get out to a production system, you can then go 
back and find out why did it make that incorrect judgment and 
find out where was that bias inserted in the AI process and 
recreate it.
    It's very important for self-driving cars, and I think it's 
going to be important for the rest of AI.
    If you don't mind me going back to your previous question, 
I also think it's important that the committee recognize that 
AI is a remarkably open technology. Literally anyone can go 
buy, on a PC, download some open source software. They can rent 
an AI supercomputer in the cloud for as little as $3 and get 
started learning how to use AI. There's online courses from 
Coursera, Udacity. Industry, too. NVIDIA has an industry 
program called the Deep Learning Institute to help teach.
    So those technologies are remarkably accessible and open, 
and I think that goes to your diversity, making it available. 
It inspires students, kids with ideas of how they can take data 
and apply these technologies. There's more and more courses 
coming online. And I think that will inspire the next wave of 
AI workers.
    Mr. Isbell. If I can just add to that.
    I think the first round of bias comes from all of our 
beliefs, including myself. The sort of fundamental thing we 
want to believe is that the technology is itself unbiased and 
must be and that it is no more biased than a hammer or a 
screwdriver. But we'll point out that both hammers and 
screwdrivers are actually biased and they can only be used in 
certain ways and under certain circumstances.
    The second set of bias comes from the data that you choose, 
which is exactly what Dr. Buck said.
    I'll give you an example. When I was sitting in an AI lab 
apparently across the hall from you, a lot of the original work 
in vision was being done, particularly in face recognition.
    A good friend of mine came up to me at one point and told 
me that I was breaking all of their face recognition software, 
because apparently all the pictures they were taking were of 
people with significantly less melanin than I have.
    And so they had to come up with ways around the problem of 
me. And they did, and got their papers published, and then they 
made better algorithms that didn't depend upon the assumptions 
that they were making from the data that they had.
    This is not a small thing. It can be quite subtle, and you 
can go years and years and decades without even understanding 
that you are injecting these kind of biases just in the 
questions that you're asking, the data that you're given, and 
the problems that you're trying to solve.
    And the only way around that is to, from the very 
beginning, train people to think through, in the way that Dr. 
Buck said, to think about their data, where it's coming from, 
and to surface the assumptions that they are making in the 
development of their algorithms and their problem choices.
    Mr. Etzioni. Bias is a very real issue, as you're saying, 
as we're all saying. But we have to be a little bit careful not 
to hold our database system to an overly high standard. So we 
have to ask, what are we comparing the behavior of the systems 
to? And currently, humans are making these decisions, and the 
humans are often racist, they're often sexist. They're biased 
in their own way.
    We know, you talked about the case with a judicial 
decision. We have studies that show that when the justices are 
hungry, you really don't want them to rule at that point. You 
want them to go to lunch.
    So my perspective is let's definitely root out the bias in 
our systems, but let's also think about these collaborative 
systems where humans are working together with the AI systems, 
and the AI system might suggest to the person, hey, maybe it's 
time for a snack, or you're overlooking this factor.
    If we insist on building bias-free technology or figuring 
out how to build bias-free technology, we're going to fail. We 
need to build technology and systems that are better than what 
we have today.
    Mr. Hurd. Ranking Member, we need an XPRIZE for that, you 
know, to figure out when I'm hangry and make better decisions.
    Ms. Kelly. My last question is, those of you representing 
companies, do you have internship programs? How do you reach 
out into the community?
    Mr. Buck. Certainly. I think the most exciting work is 
happening in our research institutions and even at the 
undergrad and earlier levels.
    We're a huge proponent of interns. Myself, I was an intern 
at NVIDIA when I started at the company and worked my way up to 
be a general manager.
    So I'm a huge proponent of interns. They bring fresh ideas, 
new ways of thinking, new ways of programming. They teach us a 
lot about what our technology can do.
    Mr. Khosrowshahi. If I'm allowed to comment on your last 
question.
    So we talked about bias, but this line of thinking applies 
to everything. So transparency. I heard accountability. Humans 
are largely not transparent in their decisionmaking. This is 
something that's been studied exhaustively by people like 
Daniel Kahneman.
    So I think it's very interesting to hear this firsthand, 
but we have to be concerned about humans as well as machines. 
And when they interoperate, that's even more challenging.
    But, again, humans are biased, humans are transparent. And 
this is something to be cognizant of in your decisionmaking. I 
just wanted to stress that.
    Ms. Kelly. Thank you.
    Mr. Hurd. One of the reasons we do these kinds of hearings 
is to get some of the feedback from the smart people that are 
doing this.
    And, Dr. Buck, for example, we continue to do our FITARA 
Scorecards looking at how the Federal Government implements 
some of these rules. One of the questions we're going to start 
asking our Federal CIOs is, what are you doing to introduce 
artificial intelligence into your operations?
    So, Federal CIOs, if you're watching, friends at FedScoop, 
make sure you let them know that's going to be coming on the 
round six, I think, of the FITARA Scorecard.
    Where to start? So, yes, basic research. It is important. 
What kind of basic research? Do we need basic research into 
bias? Do we need basic research into some aspect of neural 
networks? Like, what kind of basic research should we be 
funding to start seeing that, to raise our game?
    And all these questions are open to all of you all, so if 
you all want to answer, just give me a sign, and I'll start.
    But, Dr. Buck, do you have some opinions?
    Mr. Buck. Certainly. As data science in general becomes 
more important to understanding the root cause of bias and how 
it is introduced and understood, I think it is a very important 
basic research understanding.
    A lot of this work has been done. It can be dusted off and 
continued. I think it will be increasingly important as AI 
becomes more of the computational tool for changing all the 
things that we're doing.
    Industry will tackle a lot of the neural network design. 
You have some of the smartest people in the world here in the 
U.S. building newer, smarter neural networks. They're largely 
focused on consumer use cases: speech recognition, translation, 
self-driving vehicles.
    I feel like the science applications of AI, how AI can 
assist in climate and weather simulations, how AI can assist in 
healthcare and drug discovery, are still early. And it is an 
area that has less of a commercial application but obviously 
really important to this country.
    You have some amazing computational scientists at the DOE 
labs that are starting to look at this. I think they also 
recognize the opportunity that AI can assist in simulation or 
improve the accuracy or get to the next level of discovery. I 
think there are some real opportunities there.
    And we're starting to see that conversation happen within 
the science community. Any more encouragement and, of course, 
funding to help amplify it would be greatly appreciated.
    Mr. Etzioni. I think you make a great point. There is the 
investment from Google, Intel, and Facebook. But there is so 
much basic research that they won't do.
    And I also can't emphasize enough how primitive the state 
of AI is. Sure, we've made a lot of strides forward, but----
    Mr. Hurd. Not to interrupt, but give me some. What are 
examples of basic research they won't do that we should be 
doing?
    Mr. Etzioni. Common sense. Something that you and I and 
every person knows and AI does not. That a finger has five 
hands. That people typically look to their left and their right 
before they cross the street.
    There's an infinite set of information that machines don't 
have. As a result, they really struggle to understand natural 
language. So we've seen success where the signal is very 
limited, like in a game of Go or in speech recognition.
    But all you have to do is turn to Alexa or Siri and realize 
just how little our AI programs understand and how little can 
we have a conversation with them.
    So I think research into natural language processing, into 
commonsense knowledge, into more efficient systems that use 
less training data, all of these are very, very challenging 
fundamental problems. And I could go on and on.
    Mr. Hurd. Gentlemen.
    Mr. Isbell. So I have very strong opinions about this, but 
I will try to keep it short.
    I think if I were going to pick one--I'm going to give you 
two answers--and if I was going to pick one thing to focus on 
that I don't think we're doing enough of, it is long-lived AI.
    That is, a lot of the work that we're doing are systems 
that solve a specific problem for a specific relatively short 
period of time is why it ends up looking like supervised 
learning as opposed to something like long-term decisionmaking.
    But if you think about what makes human beings so 
interesting, there are two things. One is that we depend upon 
each other, and the other is that we learn and we live for a 
really long time, not measured in minutes or hours but measured 
in decades.
    The problem of reading is hard. It takes human beings 6, 7, 
8 years to learn how to read. We need to understand what it 
means to build systems that are going to have to survive. Not 
just figure out how to turn the car now, but have to figure out 
how to live with other intelligent beings for 10, 20, or 30 
years. That's, I think, a sort of truly difficult problem.
    But having said that, I'll back off and say, I think the 
answer is you trust your agencies who talk to the community. 
NSF has a long list of things that they believe are important 
to invest in AI and other things as well and the get that by 
having ongoing communications and conversations with a large 
community. It creates a kind of market, as it were, of what the 
interesting ideas are.
    And I trust them. I listen to them. I talk to them. They're 
the mechanism that sort of aggregates what people are 
believing.
    And then, in some sense, what you can do or what government 
can do or what these agencies can do is to push us a little bit 
in one direction or another by giving incentives for thinking 
about a problem that people aren't necessarily thinking of.
    But, in general, I trust the people who are doing the work.
    Mr. Hurd. Dr. Khosrowshahi.
    Mr. Khosrowshahi. So we've been talking about high-level 
aspects of AI, decisionmaking and so forth. But in some of our 
testimonies we mentioned that there is a substrate for 
computation that enables AI. You have lots of data, need a 
while to compute.
    We're at an interesting point in time where we're having 
rapid innovation in AI, lots of successes. It's being driven by 
availability of data and compute. The amount of data is 
increasing really, really rapidly, and the compute has to 
commensurately increase in power.
    So that will require basic research and innovation at the 
silicon level, at the hardware level, which is what Intel does. 
We have fabs. We build the hardware from glass.
    So areas such as silicon photonics, analog computing, 
quantum computing, low-powered computing, all of these areas 
are potentially great investment NSF funding opportunities for 
you.
    And I'd like to also mention the landscape for getting AI 
systems to work involves so many different things. It requires 
machine learning, teachers, and so forth. But it requires 
things that seem prosaic but are really important, reliable 
software systems that are accountable, scalable, robust, and so 
forth.
    Again, that comes from investing in STEM and computer 
science in early stages of someone's career development.
    Mr. Hurd. So we've talked about bias as a potential 
challenge that we have to deal with as we explore and evolve in 
the world with AI. Another way you can manipulate a learning 
algorithm is by loading it up with bad data.
    What are some of the other challenges and other threats to 
artificial intelligence that we should be thinking about at the 
same time that we think about bias and integrity of the data 
that's involved in learning? Anyone.
    Dr. Buck.
    Mr. Buck. I'll emphasize that it's easy to say we have lots 
of data. It's actually quite challenging to organize that data 
in a meaningful way. The Federal Government has vast sources of 
data. It is very unstructured.
    Mr. Hurd. Very aware.
    Mr. Buck. And that is a challenge. We just spent a decade 
talking about big data. And as far as I can tell, we've largely 
collected data, not really done much with it.
    You now have a tool that can take all that data you've 
collected and really have some meaningful insights, to make a 
new discovery in healthcare, to save enormous amounts of money 
by finding inefficiencies or, worst, waste or fraud. But that 
data needs to be aggregated, cleaned up, labeled properly, and 
identified.
    I certainly would make sure that not only that the Federal 
Government has an AI policy but also has a sister data policy 
as well to organize and make that data actionable and 
consumable by AIs, whether within the Federal Government or 
make them available to the larger research community.
    I am sure there are dozens, if not thousands, of Ph.D.'s 
waiting to happen if they just had some of the more interesting 
Federal data to really make those kinds of discoveries.
    Mr. Hurd. Well, Dr. Buck, one of the first things this 
committee looked at was the DATA Act. And, shocker, the Federal 
Government was actually ahead of the game in trying to make 
sure that we're taking on that data and adding some structure 
to it. Implementation of that, as you have pointed out, is a 
bit tricky. So any tools that you all have to help with that 
would be great.
    Other concerns?
    Dr. Isbell.
    Mr. Isbell. So I'll add one. I agree with everything that 
Dr. Buck said and what other people have said before. Data is 
the problem. But one real issue is we typically build AI 
systems that don't worry about adversaries.
    So this ties back into the notion of long-lived AI systems. 
So we're building a system that's going to determine whether 
you have a tumor, whether you have a heart attack, whether you 
should get a mortgage, but we're not spending a lot of energy--
some people are thinking about this--we're not spending a lot 
of energy figuring out what happens when we send these things 
into the wild, we deploy them, and other people know that 
they're out there and they're changing their behavior in order 
to fool them.
    And how do we make them change over time is an arm's race. 
You can think about this security. It's easy to think of. We 
could think of something even simpler, like spam. I get all 
this terrible mail. I build a system that learns what my spam 
is. The people who are sending spam figure out what the rules 
are and what's going on there, and then they change what they 
do. And it just keeps escalating.
    And so this notion that you're going to have to not just 
solve the problem in front of you but solve the problem as it's 
going to change on the next round, the round after that, and 
the round after that, I think that's a real limitation of the 
kind of way that we build systems, freeze them, and then deploy 
them.
    And I'm not saying that that's all people do and that no 
one is thinking about it. But I do think, because we tend to 
think in this sort of a transactional way about AI, we 
sometimes don't think through the consequences of having long-
term systems.
    Mr. Khosrowshahi. I'd like take a slightly different tone. 
So we have talked in our testimonies about bias, privacy, 
transparency, assurances of correctness, adversarial agents 
trying to take advantage of weaknesses in the system.
    So one thing that I've seen in this past year that I 
haven't seen in the past 10 years is these things are discussed 
at academic conferences. Companies like Intel, my team, 
actually these are some of the top priorities, these issues 
that you raise. They're discussed. They're attracting some of 
the best minds in the field.
    I just introduced the idea of transparency literally months 
ago. And it's a really interesting area. It's highly nuanced. 
Humans are a tribal, multi-agent society. There are times when, 
if people have more information, the overall performance of the 
system goes down. It's very nonintuitive. Things can happen. 
Academics are pouring a lot of effort into this area.
    So I'm just very, very optimistic that the things we've 
enumerated today are being addressed, and we should just 
amplify them. So the government can play a big role in 
investing in things like academic research.
    It is quite different to me--I don't know if you guys 
concur--but the last major machine learning conference, NIPS, 
was really eye-opening to me, that there is a workshop on 
transparency, there is a workshop on bias, there is a workshop 
on diversity in the demographics of the AI community.
    So we are definitely on a very positive and virtuous track, 
and I'm asking government to just amplify this however it can.
    Mr. Hurd. The distinguished gentleman from the Commonwealth 
of Virginia is now recognized.
    Mr. Connolly. Thank you, Mr. Chairman.
    And thank you to our panel.
    Dr. Etzioni, from here, I had a little trouble reading what 
was underneath your name. And I thought for a minute it said 
alien AI. I thought, wow, we really are getting diverse in the 
panels we are putting together here. Alien AI.
    Mr. Etzioni. I come in peace.
    Mr. Connolly. Yeah. Thank God.
    So we were reminded rather dramatically last September with 
the Equifax hack that compromised information on 145 million 
Americans as to the risks of devastating cyber attacks and the 
absolute need for creating shields and protective measures, 
both for the government and for the private sector.
    According to the 2016 report from the NSTC, the National 
Science and Technology Council, AI has important applications 
in cybersecurity and is expected to play an increasing role for 
both defensive and offensive cyber measures.
    Dr. Khosrowshahi--and I'm from now on going to say the 
doctor from Intel--how can AI be most useful in defending 
against cyber attacks?
    Mr. Khosrowshahi. So I'll suggest a few ways, and I guess 
we'll have other opinions.
    So cybersecurity, of course, is a major issue broadly in 
computing, as well as in AI, and as well at Intel. It is one of 
our primary focuses.
    So in terms of addressing cyber attacks using AI, cyber 
attacks are intentionally devious and nefarious, obscure. And 
these kinds of actions are really well suited to the latest 
state of the art in AI, machine learning.
    That is algorithms can take large corpora of data--these 
are inputs from whatever the type of cyber attack you're 
experiencing--and they can build a model of the cyber attack 
and a response, essentially.
    And the response can have very low latency. It can study 
the statistics of the attack, potentially it's a novel attack, 
build a model, and respond very quickly.
    So that's one way we can address cybersecurity, is with 
better models to defend against it.
    Another way--another thing that we can--it's not in answer 
to your question--but when we build models, it's good to know 
the set of possible attacks, because a researcher, a data 
scientist, is very cognizant of building robust models that are 
resistant to adversarial events.
    So as we get knowledge of cybersecurity issues in this 
area, AI, we build in security and defense against cyber 
attacks into the models such that adversarial actions do not 
perturb or give erroneous results.
    Mr. Connolly. Presumably also one of the advantages of AI 
would be early detection. I mean, part of the problem of cyber, 
certainly from the Federal Government's point of view, but 
apparently in the private sector as well, is when we finally 
realize we have been compromised, it's too late.
    Mr. Khosrowshahi. That's right.
    Mr. Connolly. And AI has the potential for early detection 
and diversion, preemption, protective walls, whatever.
    Mr. Khosrowshahi. That's right. The nature of these attacks 
could be so devious that the smartest human security experts 
could not identify them. So can either augment our human 
security experts or we can have systems that are early 
detectors that can just flag this is a potential threat. And 
these systems are really well suited for doing this, latency 
and learning very quickly.
    Mr. Connolly. Anyone else on the panel is more than welcome 
to comment.
    Dr. Etzioni.
    Mr. Etzioni. I just wanted to add that at the root of the 
Equifax hack was human error, several human errors. So 
something you might want to think about is, what are the 
incentives that we have in place to avoid that? What are the 
consequences that people at Equifax face--and not to pick on 
them--for making those mistakes with our data?
    I think if we put the right incentive structure in place, 
it's not a technical solution, but it'll help people to be more 
watchful, and they should be.
    Mr. Connolly. Yeah.
    Mr. Buck. The statistics here are alarming. And the rate of 
attacks are growing exponentially way faster than we can expect 
a human operator, even with the tools they have today, to keep 
up.
    This is a very hot topic in the startup community. There 
are many startups trying to apply AI to this problem. It's a 
natural fit.
    AI is, by nature, pattern matching. It can identify 
patterns and call out when things don't match that pattern. 
Malware is exactly that way. Suspicious network traffic is that 
way.
    One startup we work with, they're claiming the top AI 
software is only able to capture about 62 percent of the 
potential threats that are out there. But by applying AI, they 
can shorten the time to discovery and get to 90-plus percent 
accurate malware detection, and the false error rate, get it 
down to less than 0.1 percent where normally it's 2 percent.
    It's an opportunity to increase the throughput of our 
detector systems and make them much more rapidly responsive.
    Mr. Connolly. So why aren't we doing it? Is it the cost?
    Mr. Buck. The AI just needs to be developed. It is in the 
process of being developed by those startup companies. It's not 
as talked about in application as maybe video analytics or ad 
placement, but it is certainly active.
    Mr. Connolly. Well, you put your finger on two things, 
among others. But one is the exponential growth in the volume 
of attacks. I talk to some Federal agencies, and I'm stunned at 
the numbers. I mean, I know of one Federal agency, not a big 
one, where the cyber attacks or attempted attacks are in the 
hundreds of millions a year.
    And you're absolutely right. I mean, this particular 
agency, its mission isn't cyber. It's got a very human mission. 
And it's trying to put together through Band-Aids and other 
measures some protection. And it does raise questions about the 
ability of, in this case, the Federal Government to protect 
itself.
    Mr. Buck. I'm seeing a sea change in that as well. Not just 
are we looking to protect our firewalls and the data coming 
into our firewalls, but the data traffic behind the firewall.
    Assume you are attacked, for the sake of argument, and look 
at the traffic that's inside your firewall to detect it. 
Because as was mentioned before, in many cases you may already 
be compromised and you don't know it.
    So it's important to look at both, the front line as well 
as behind the lines, in understanding your network traffic and 
your security.
    Mr. Connolly. And the second thing this conversation I 
think underscores, and we had testimony yesterday from the 
intelligence community, but the idea that the Russians are not 
going to continue their attacks and attempts to distort our 
electoral process is naive. All 17 intelligence agencies in the 
United States Government testified to the fact that it is an 
ongoing threat and the midterm elections will be a target.
    So in a democracy, that's the very heart of how we 
function. How do we protect ourselves? And I think maybe we've 
got one tool, maybe a very critical tool, in terms of 
artificial intelligence. But trying to get that out to the 
myriad localities, over 10,000 localities in the United States, 
is going to be a different kind of challenge.
    I thank you, Mr. Chairman.
    Mr. Hurd. Mr. Lynch, you are now recognized.
    Mr. Lynch. Thank you, Mr. Chairman. I appreciate that.
    Dr. Etzioni, in your written testimony you state, and I 
quote here, ``We can and should regulate AI applications.'' 
Obviously, as more and more AI systems are used to collect more 
and more sensitive and personal data on Americans, there are 
palpable and real privacy concerns.
    What are the ways in which you think that the regulations 
that you anticipate would serve to protect the private 
information of Americans?
    Mr. Etzioni. So I think that there are some principles that 
I can talk about. And, frankly, you and your staff are probably 
better qualified to think through specific regulations.
    But a principle that I would really advocate is identifying 
when AI is involved. And that's something that we can regulate 
so that the bots, at least the homegrown ones, state that 
they're AI. We had Intel inside. We should have AI inside.
    Most recently we've seen that there are examples of fake 
pornography, superimposed celebrities on top of bodies and 
things like that. If we can't trust the integrity of our 
pornography--obviously I'm joking.
    Mr. Lynch. Thanks for making that clear.
    Mr. Etzioni. But the point is we should label when AI is 
being used. And, likewise, we should be clear when we have AI 
systems in our homes. Alexa, AI Barbie, the Roomba vacuuming 
our floor, they naturally also vacuum up a huge amount of data, 
some of it from our kids, if Barbie is talking to our kids. We 
should have regulations about where that information can go.
    Mr. Lynch. So the proliferation of AI, I just see it 
proceeds at a velocity far exceeding the ability of Congress to 
keep up with it, and that's true with many technologies. And 
oftentimes we rely heavily on the private sector to look at 
those ways that, if AI is being broadly used, how we might 
develop a protocol that would prevent that private information 
from just getting out there.
    And we have, in a very narrow sense, the Equifax situation 
where we have the names, addresses, Social Security numbers of 
150 million Americans out there, just gone. So they basically 
burnt the entire Social Security number system as a reliable 
and secure indicia. So that's gone. And it's just because one 
company was very lazy about protecting data.
    And so I'm just concerned. I have similar concerns about AI 
being out there and these bots. And we've got some pretty 
creative hackers out there, Russians and others, that have been 
able to access some very, very sensitive information. At one 
point they swept every bit of data from any individual who had 
applied for a high-level security clearance in this country.
    And so I could just see if there are, as you say, not 
necessarily household appliances, but other forms of AI 
operating a higher level, if those are hacked, it just 
increases the magnitude of our vulnerability exponentially.
    And I'm just trying to think in advance, as this is all 
happening in real-time, how do we protect the people who 
elected us? We're all for innovation, but I think with the 
appropriate safeguards in place.
    Mr. Etzioni. The thing that I would like to highlight, 
though, is that you're right, those are some scary realities. 
But they are realties. They're often instigated from the 
outside. So maintaining our strategic edge.
    And that's why I emphasize regulating applications as 
opposed to the AI field and AI research itself. If we adopt an 
overly defensive, dare I even say in a reactionary posture, 
we're just going to lose.
    So this is a very competitive global business. And staying 
ahead, which we're all trying to do in various ways for 
education, et cetera, is essential.
    Mr. Lynch. Okay. Thank you.
    I assume my time has expired, Mr. Chairman. I yield back.
    Mr. Hurd. Dr. Isbell, did you have a response to that 
question?
    Mr. Isbell. I just want to add something. I think it's 
important to recognize here everything that you brought up are 
deep concerns. But AI is a secondary problem there. The primary 
problem there is that we are sharing our data constantly.
    Every one of you has a cell phone, possibly two of them, 
you have a watch, which is pinging all the WiFi hotspots 
everywhere you go. Each one of those devices has a unique ID. 
That unique ID is not you, but that unique ID is with you all 
the time. I can figure out with very little effort who you are, 
where you are, where you come from.
    By the way, I've deployed systems myself, this is 10 or 15 
years' old worth of technology, where I can predict what button 
you're going to press on your remote control after just 
observing you for one weekend.
    We are creatures of habit. We are sharing our data in our 
cars, our phones, everything that we do. The data itself, even 
if it's anonymized, is giving amazing amounts of information 
about us as individuals. That's the primary problem.
    The secondary problem is the AI, the machine learning, the 
technology, which can look at it very quickly and bring 
together the obvious connections even though you've tried to 
hide them.
    But the first thing I think to think about is it's not the 
AI, because computers are just fast, that's just going to 
happen. It's the fact that we are sharing data, and we've given 
very little thought to what it means to protect ourselves from 
the data we are willingly giving to everyone around us. And I 
don't have an answer, but that, in some sense, is the root 
problem.
    Mr. Lynch. Mr. Chairman, if I could.
    The ability of AI to aggregate the data, make sense of it, 
and give it direction and a purpose and a use, that's the magic 
of AI. The data's out there. And you're right, that's a 
problem. But I'm worried about weaponizing that raw data that's 
out there and how do we control that.
    But thank you. I think you offered a very good 
clarification. Thanks.
    Mr. Khosrowshahi. Let me make a short comment.
    So I liked to balance the discussion and present a slightly 
dissenting view to Dr. Etzioni.
    Well-intentioned efforts, such as labeling robots and other 
devices that employ AI, it could have unintended consequences. 
You have in the State of California, my State, we now know that 
asparagus and coffee cause cancer. So we are going to have 
labels on every piece of food and every building that this 
thing causes cancer. And these signs are becoming 
uninformative.
    So I would just be wary of unnecessary regulation or 
imposing regulation on a very young and rapidly moving field, 
because I can immediately see that it can have some adverse 
consequences.
    We talked about transparency. To use your example, would 
you want something that is labeled and worse performing or 
unlabeled and better performing, to use your example.
    And just in general, our view at Intel is that legislation 
should be based on principles, not on regulation that mandates 
certain kinds of technology. So we are self-regulating.
    This field is wonderful, that it does a lot of high-minded 
academics who are now leaders in business, and there is a 
strong impetus to be good stewards of this technology to do 
good. And we have lots of things that we can impose on 
ourselves to self-regulate to potentially address some of the 
adverse conditions that you mentioned. Not all of them. Perhaps 
some of them do need legislation.
    Mr. Hurd. I've got some final questions. And this first 
question is for everyone. And I know you all have all spent 
your adult lives trying to answer this question, and so I 
recognize this before I ask.
    And, Dr. Buck, I've got to give some kudos to your team 
that was out at the Consumer Electronics Show. They were very 
helpful in helping me understand some of the nuance of 
artificial intelligence. And if artificial intelligence was 
based on Fortran 77, I'd be your guy. That's my background 
experience.
    But I understand how to introduce antivirus software into 
your system. I understand how you introduce CDM into a network. 
When we ask all the Federal CIOs how are you thinking about 
introducing artificial intelligence into the networks, the 
first question I'm probably going to get is, well, it's really 
hard.
    And so my question is simple. And we've all been saying 
that AI is interesting because it's domain specific and I 
recognize how broad this question is. But how do we introduce 
AI into a network, into a system, into an agency?
    Mr. Buck. That's a great question. And AI can seem like 
rocket science. And first off, having this conversation is the 
first step. Explaining what it is and understanding it so they 
can comprehend it is, obviously, the first step.
    And where I've seen it work most successfully is in 
meaningful simple pilot projects. Project Maiden, which is a 
project with DOD, where they're using AI to help with 
reconnaissance so that airmen are not staring at TV screens for 
8 hours a day waiting for something to happen. They're letting 
the AI do the mundane parts of the job so our soldiers can do 
the decisionmaking.
    That kind of application of AI is well established. People 
know how to do it. You don't need to invent a new neural 
network to do it. It's the same work that's being done 
elsewhere. But by creating these pilot projects inside of these 
agencies, they are dramatically improving the lives of the 
people that work there.
    Mr. Hurd. So do we believe we're at a point now where the 
agencies can be the ones that are involved in training the 
algorithm. Okay, you find an algorithm, you figure out what 
dataset you need to train it. And do you expect the person at 
Department of Interior to be the one training that, or is it 
folks that are providing that service?
    Mr. Buck. You can do it both ways. I've definitely seen 
public partnerships where agencies are going outside for 
consulting to help apply AI technology to a specific problem. 
Or in some cases the neural networks are well established. 
Image recognition is where AI started. It is a well-established 
technique. The networks are open source. The software is open 
source and public.
    So I think if you find those use cases off the bat that are 
well published and, as was spoken, in these AI conferences well 
shared. The beauty about AI is that it's incredibly open, it's 
being done in the open source community, it's all being 
published. And it takes very little work to take one of those 
established workflows and apply it. And then the next step is 
to share that success.
    Mr. Hurd. Dr. Khosrowshahi.
    Mr. Khosrowshahi. So AI has changed over the last 80 years 
and it almost surely will change. We talked about neural 
networks. Five years from now, almost surely--I'm on TV--but I 
guarantee it's going to be something different.
    But the underpinnings are you have data, you have model, 
you have inferences. You have data that has statistical 
distribution, whether it's images, whether it's a car driving 
down the road collecting video in the U.S. or Canada or 
wherever, different statistics. You build models, the models 
try to understand the statistics of the data, and then you can 
ask the model questions. Is this a cat or a dog? Is there a 
stop sign approaching me? That's basically what AI is today.
    So if you just take these simple underpinnings and then 
apply them to whatever public policy or application CIOs want 
to insert into their business workloads and so forth, just 
understanding that basic element. There's going to be some 
data, it will have some statistical properties, maybe it will 
be difficult for a human to understand them. A machine could be 
better and faster, more robust, more power efficient than the 
brain. And then it can perform inferences.
    And whether or not you choose to rely on these inferences 
depends on how good the model is, how much assurances of 
correctness you have. I mean, the landscape of AI is so vast 
and it's touching so many different things. And it's still, I 
would again stress, that it's very early on. We don't have 
artificial agents making decisions for us almost anywhere.
    So even in finance, you would expect automated trading 
systems. It's not there yet. We're still in the very early 
stages. There is not widespread adoption in the industry. It 
will get there, but it's still early on.
    But, again, the AI, the underpinnings and the applications, 
there's this model data inference. You can stick it in anywhere 
where that works.
    Mr. Isbell. So in the interest of time, I'll keep this 
short.
    I want to distinguish between at least two different 
things. One is face recognition and that class of things versus 
shared decisionmaking. I think the answer for things like face 
recognition, relatively straightforward. At the risk of 
oversimplifying, it's like asking the question, how can we 
integrate the internet? How can we integrate telephones? It's 
relatively straightforward. It's well understood, it's very 
clear, and you can ask yourself how to use the screw driver.
    The shared decisionmaking is what's difficult. That 
requires that the domain experts are part of the fundamental 
conversations. The research question from my point of view is 
figuring out how to be able to use humans in order to train the 
systems that we have when they don't understand machine 
learning and AI, but they do understand their domain. How do 
you get those people to talk to one another?
    I'm not worried about the deployment of face recognition. 
I'm worried about how I'm going to get an intelligence analyst 
to understand enough about what it is they are doing so that 
they can communicate to a system that will work with them in 
order to make decisions.
    That's where the difficult problem is, but it's really no 
different than just trying to understand what it is they 
actually do. The problem is, the thing that we know, is that 
people are terrible at telling you what it is that they do. You 
can't ask them and they tell you. You have to watch them, 
observe them, model them, and give them feedback. It's an 
iterative, ongoing process.
    Mr. Etzioni. I wonder if an approach would be to focus on 
outcomes and metrics and grand challenges. And if you ask for 
those rather than demanding AI and then they have to resort to 
AI to satisfy those mandates, that might work.
    Mr. Hurd. One minute for all four of you all to answer 
these two questions.
    What datasets in the government do you want access to or 
should the AI community of people that are working on these 
challenges get access to? And what skill sets should our kids 
in college be getting in order to make sure that they can 
handle the next phase when it comes to artificial intelligence?
    Mr. Isbell. All of them. And the skills that the students 
need in college, they need to understand computing. There 
shouldn't be a single person who graduates with a college 
degree who hasn't taken three or four classes in computing at 
the upper division level. They need to understand statistics. 
And they need to understand what it means to take unstructured 
data and turn it into structured data that they can construct 
problems around.
    Mr. Khosrowshahi. So on the datasets, things like NOAA, 
weather data, things that are not sensitive have private 
information, those would be the first. And there's a vast trove 
of this. This would be immediately useable by academics.
    But on the skill set side, if I were to pick one, it would 
be computer science. I would invest as much as possible in 
teaching computer science K through 12, especially in high 
school.
    Mr. Hurd. Dr. Etzioni.
    Mr. Etzioni. Research funded by NIH, by NSF, DARPA, et 
cetera, is often not available under open access. Journals keep 
it behind pay walls. That's changing way too slowly.
    So the dataset that I would like everybody, human and 
machine, to have access to is the data and the articles that 
you and we as taxpayers paid for. I think that's incredibly 
informant.
    As far as the skill sets, I would say that everybody in 
college should be able to write a simple computer program and 
to do a simple analysis. And we can get there, and, remarkably, 
it's not required.
    Mr. Hurd. Dr. Buck, last word.
    Mr. Buck. I certainly would love to see all the datasets. I 
certainly also would like to see access to the problems around 
healthcare. And I know those are sensitive topics, but the 
problem is too important, the opportunity is too great, and it 
is where I feel like AI will truly save lives. If we could 
figure out to make that data available, it would be an amazing 
achievement.
    In terms of education, I believe that data science is 
becoming a science again. And I also feel like training a 
neural network is not that hard. I think it can be done at the 
junior high level.
    And the access to technology is available today. And I 
think we should start teaching students what this tool can do. 
Because it really is a tool and will inspire new applications 
that will come from the interns, the undergrads, the college 
students. That's what makes this fun.
    Mr. Hurd. Well, gentlemen, I think my colleagues would 
agree with me on this, this has been a helpful conversation. 
There is a lot packed into your all's testimony that's going to 
help us to continue to do our work on the Oversight Committee 
and to look at opening up some of these datasets. How do we 
double down on NSF funding? How do we focus on getting more? I 
think every kid in middle school should have access to a coding 
class. And we're working on that stuff down in the great State 
of Texas.
    And many of these points that you make, we're going to be 
talking to folks in the government, in early March, in the 
second series of this AI series. We intended to invite GSA, 
NSF, DOD, DHS and to continue this conversation about how they 
are introducing and looking at artificial intelligence and what 
more support they need from Congress.
    So, again, I want to thank you all and the witnesses for 
appearing before us today.
    The hearing record will remain open for 2 weeks for any 
member to submit a written opening statement or questions for 
the record.
    And if there's no further business, without objection, the 
subcommittee stands adjourned.
    [Whereupon, at 3:54 p.m., the subcommittee was adjourned.]

                                 [all]