[Senate Hearing 118-209]
[From the U.S. Government Publishing Office]


                                                        S. Hrg. 118-209

                      AVOIDING A CAUTIONARY TALE:
                       POLICY CONSIDERATIONS FOR
                        ARTIFICIAL INTELLIGENCE
                             IN HEALTH CARE

=======================================================================

                                HEARING

                               BEFORE THE

                    SUBCOMMITTEE ON PRIMARY HEALTH 
                        AND RETIREMENT SECURITY

                                 OF THE

                    COMMITTEE ON HEALTH, EDUCATION,
                          LABOR, AND PENSIONS

                          UNITED STATES SENATE

                    ONE HUNDRED EIGHTEENTH CONGRESS

                             FIRST SESSION

                                   ON

           EXAMINING POLICY CONSIDERATIONS FOR ARTIFICIAL INTELLIGENCE 
                                IN HEALTH 
                                  CARE

                               __________

                            NOVEMBER 8, 2023

                               __________

 Printed for the use of the Committee on Health, Education, Labor, and 
                                Pensions
                                
[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]                                


        Available via the World Wide Web: http://www.govinfo.gov
        
                             __________

                   U.S. GOVERNMENT PUBLISHING OFFICE                    
54-522 PDF                  WASHINGTON : 2024                    
          
-----------------------------------------------------------------------------------         
        
          COMMITTEE ON HEALTH, EDUCATION, LABOR, AND PENSIONS

                 BERNIE SANDERS (I), Vermont, Chairman
PATTY MURRAY, Washington             BILL CASSIDY, M.D., Louisiana, 
ROBERT P. CASEY, JR., Pennsylvania       Ranking Member
TAMMY BALDWIN, Wisconsin             RAND PAUL, Kentucky
CHRISTOPHER S. MURPHY, Connecticut   SUSAN M. COLLINS, Maine
TIM KAINE, Virginia                  LISA MURKOWSKI, Alaska
MAGGIE HASSAN, New Hampshire         MIKE BRAUN, Indiana
TINA SMITH, Minnesota                ROGER MARSHALL, M.D., Kansas
BEN RAY LUJAN, New Mexico            MITT ROMNEY, Utah
JOHN HICKENLOOPER, Colorado          TOMMY TUBERVILLE, Alabama
ED MARKEY, Massachusetts             MARKWAYNE MULLIN, Oklahoma
                                     TED BUDD, North Carolina

                Warren Gunnels, Majority Staff Director
              Bill Dauster, Majority Deputy Staff Director
                Amanda Lincoln, Minority Staff Director
           Danielle Janowski, Minority Deputy Staff Director
                                 ------                                

         SUBCOMMITTEE ON PRIMARY HEALTH AND RETIREMENT SECURITY

                   ED MARKEY, Massachusetts, Chairman
PATTY MURRAY, Washington             ROGER MARSHALL, M.D., Kansas, 
TAMMY BALDWIN, Wisconsin                 Ranking Member
CHRISTOPHER S. MURPHY, Connecticut   RAND PAUL, M.D., Kentucky
MAGGIE HASSAN, New Hampshire         SUSAN M. COLLINS, Maine,
TINA SMITH, Minnesota                LISA MURKOWSKI, Alaska
BEN RAY LUJAN, New Mexico            MIKE BRAUN, Indiana
JOHN HICKENLOOPER, Colorado          MARKWAYNE MULLIN, Oklahoma
BERNIE SANDERS (I), Vermont, (ex     TED BUDD, North Carolina
    officio)                         BILL CASSIDY, M.D., Louisiana, (ex 
                                         officio)
                           
                           C O N T E N T S

                              ----------                              

                               STATEMENTS

                      WEDNESDAY, NOVEMBER 8, 2023

                                                                   Page

                           Committee Members

Markey, Hon. Ed, Chairman, Subcommittee on Primary Health and 
  Retirement Security, Opening statement.........................     1
Marshall, Hon. Roger, Ranking Member, U.S. Senator from the State 
  of Kansas, Opening statement...................................     3

                               Witnesses

Huberty, Christine, Supervising Attorney, Greater Wisconsin 
  Agency on Aging Resources, Madison, WI.........................     5
    Prepared statement...........................................     6
Inglesby, Thomas, Director, Johns Hopkins Center for Health 
  Security, Baltimore, MD........................................     8
    Prepared statement...........................................    10
Mandl, Kenneth D., Harvard Professor and Director, Computational 
  Health Informatics Program, Boston Children's Hospital, Boston, 
  MA.............................................................    17
    Prepared statement...........................................    19
Sale, Keith, Vice President and Chief Physician Executive of 
  Ambulatory Services, The University of Kansas Health System, 
  Kansas City, KS................................................    20
    Prepared statement...........................................    22

                          ADDITIONAL MATERIAL

Marshall, Hon. Roger:
    Exploring Congress' Framework for the Future of AI, submitted 
      by Sen. Cassidy............................................    42
    American College of Surgeons, Statement submitted for the 
      Record.....................................................    60
Markey, Hon. Edward J.:
    National Nurses United, Written Statement for AI Forum: 
      Workforce..................................................    62
    National Nurses United, Stakeholders Statement for the Record    66
    Premier Inc., Statement submitted for the Record.............    67
Huberty, Christine:
    NH Predict Outcome...........................................    72
    Premier's Advocacy Roadmap for the 118th Congress: Artificial 
      Intelligence in Healthcare.................................    75
 
                      AVOIDING A CAUTIONARY TALE:
                       POLICY CONSIDERATIONS FOR
                        ARTIFICIAL INTELLIGENCE
                             IN HEALTH CARE

                              ----------                              


                      Wednesday, November 8, 2023

                                       U.S. Senate,
    Subcommittee on Primary Health and Retirement Security,
       Committee on Health, Education, Labor, and Pensions,
                                                    Washington, DC.

    The Subcommittee met, pursuant to notice, at 2:45 p.m., in 
room 430, Dirksen Senate Office Building, Hon. Edward Markey, 
Chairman of the Subcommittee, presiding.

    Present: Senators Markey [presiding], Baldwin, Murphy, 
Hassan, Smith, Lujan, Hickenlooper, Marshall, and Braun.

                  OPENING STATEMENT OF SENATOR MARKEY

    Senator Markey. Thank you all so much for being here. The 
Senate, Health, Education, Labor, and Pensions Subcommittee on 
Primary Health and Retirement Security will come to order. 
Thank you all for joining us today for the hearing, ``Avoiding 
a Cautionary Tale, Policy Considerations for Artificial 
Intelligence in Health Care.''

    Thank you to Ranking Member Marshall for your continued 
partnership, your staff's continued partnership on the 
Subcommittee. We are hearing more and more about the promise of 
artificial intelligence in health care, the potential for 
innovation to reduce the red tape facing patients and 
providers, to identify patterns, improve patient outcomes, and 
cure disease.

    But we have heard grand promises from big tech before. In 
2012, Mark Zuckerberg compared social media to the printing 
press and explained that Facebook was built to make the world 
more open and more connected.

    But here is the unfortunate truth. Big tech made big 
promises for innovation, democracy, and community, but instead 
unleashed big problems on the American people without solutions 
that were attached by big tech. And our young people have 
suffered the most.

    In 2021, 1 in 3 high school girls seriously considered 
suicide, and at least 1 in 10 high school girls attempted 
suicide that year, 2021. Among LGBTQ youth, the number was more 
like one in five attempted suicide in 2021. And as U.S. Surgeon 
General Dr. Vivek Murthy concluded in a CDC report earlier this 
year, there is significant evidence that big tech's predatory 
practices contributed significantly to this youth mental health 
crisis.

    That is why I am working to pass my bipartisan Children and 
Teens Online Privacy Protection Act with Senator Cassidy to 
ensure children and teenagers and their parents have the tools 
they need when kids are searching and scrolling and connecting 
online.

    Fast forward 10 years from when Mark Zuckerberg made his 
rose colored promise and look at our approach to artificial 
intelligence, and I have concerns, because when we talk about 
the promises of AI, we need to also talk about its risks.

    We have learned time and again that left to self-regulate, 
big tech puts profit over people almost every time. We cannot 
afford to repeat that mistake by not regulating artificial 
intelligence now. The risks are too great.

    Unregulated experimentation involving artificial 
intelligence may fuel our next pandemic. Humans insert human 
bias and discrimination into algorithms that can supercharge 
existing inequalities in our health care system, jeopardize our 
privacy, and misdiagnose or mistreat patients.

    Big tech's access to sensitive patient information without 
guardrails exposes people to their most personal information 
being shared, or even worse, weaponized back against them. 
Automated review processes will speed up insurance reviews and 
denials, leaving patients scrambling to get the health coverage 
they need to avoid choosing between their care and bankruptcy.

    In the middle of all of this, health workers are on the 
front lines of implementing this powerful technology without 
proof of safety, reliability, effectiveness, or equity. Workers 
are seeing health systems replace conversations on retaining 
and paying the workforce with extending and replacing them 
using artificial intelligence.

    We don't need big tech treating our health care system like 
a lab to experiment on patients and workers. We need a health 
care system that prioritizes people over heart rhythms, over 
bots run by algorithms.

    Our artificial intelligence must be paired with a voice for 
workers in determining their own working conditions, more 
treatments, and cures for all patients, and better access to 
health care. Otherwise, we are innovating for the sake of 
profit, and that isn't really innovation at all. It is greed.

    We can act now to prevent the next cautionary tale. We can 
pass my legislation, the Artificial Intelligence and 
Biosecurity Risk Assessment Act, with Senator Budd, and the 
Securing Gene Synthesis Act with Representative Eshoo to 
require the U.S. Department of Health and Human Services to 
identify and respond to biosecurity threats involving AI.

    We can stop corporations from implementing technologies on 
patients and workers without their knowledge and without 
appropriate testing to prevent harm, discrimination, or 
interference with their clinical judgments. We can guarantee 
that workers and patients have a voice in whether and how 
artificial intelligence is used. We can guarantee civil rights 
protection in the utilization of artificial intelligence.

    We can protect young people from big tech's targeting and 
tracking and pass a comprehensive privacy bill of rights for 
teenagers and children in our Country. And we have to guarantee 
that wherever artificial intelligence is used, it prioritizes 
people over profits.

    But I have learned in my many years serving on the 
telecommunications committee, I was Chairman in the House 
during, and I am the author of all of the bills moving us from 
analog to digital America, from narrowband to broadband. Those 
are all my bills breaking down all the monopolies.

    What I learned was the only time you really get things for 
the little guy is when the big guys want something. So, in AI 
right now, the big guys want something, and we got to make sure 
we put in all the protections for the little guys in our 
society, and we have got to do it simultaneously, not 
sequentially.

    Not after the big guys get what they need. That is what 
this hearing is really all about in the health care sector. We 
welcome everyone. And I turn to recognize Ranking Member 
Marshall for an opening statement.

                 OPENING STATEMENT OF SENATOR MARSHALL

    Senator Marshall. Well, thank you, Mr. Chairman. I 
certainly appreciate those comments. Artificial intelligence 
and machine learning have great potential to revolutionize 
health care by developing new cures, improving health care 
delivery, and reducing administrative burdens, as well as 
overall health care spending.

    We hope someday, someday, very soon, AI and machine 
learning will allow our clinical workforce to go back to 
practicing medicine. Those of us in medicine, whether we are a 
physician, a nurse, a counselor, we all long to spend more face 
to face time with our patients and less on medical records and 
administrative burden.

    Other opportunities for AI include developing better 
standards of care, increasing timely access to care, and 
perhaps most importantly, discovering innovative treatments, 
which includes monitoring disease progression and the 
effectiveness of those treatments. But all that being said, my 
biggest concern we hope to address today is AI's application in 
biosecurity and how it could be used to enable bioterrorism.

    After all, AI can help us prepare or react to the next 
pandemic, or it could also be used intentionally or 
unintentionally to develop novel pathogens, viruses, 
bioweapons, or chemical weapons. As I have always said, those 
closest to the industry know the challenges. They understand 
the opportunities and the risks the best.

    They also know the most practical and impactful solutions 
as we look for guardrails that protect Americans, but at the 
same time promote innovation. Today, we are asking our 
witnesses to describe these risks and benefits as best they see 
them. And if we are going to write rules surrounding AI, let's 
be careful not to destroy innovation or allow those who would 
harm us to get ahead of us.

    After all, artificial intelligence and machine learning 
have been making remarkable discoveries and improving health 
care for some five decades without much Government 
interference.

    I would like to quote Ranking Member Cassidy, who has done 
extensive research and written in a wonderful white paper on 
this. Senator Cassidy says, ``we must strike the right balance 
for America, from the earliest ages of developing new products 
through deployment of an AI system or solution solving complex 
problems.''

    Mr. Chairman, I have two articles here I would like to 
submit for the record. First is the white paper from Dr. 
Cassidy entitled, Exploring Congress Framework for the Future 
of AI. The Oversight and Legislative Role of Congress Over the 
Integration of AI in Health, Education, and Labor.

    [The following information can be found on page 42 in 
Additional Material.]

    Also, a second document from the American College of 
Surgeons, a statement to this Committee regarding avoiding--
regarding their statement and thoughts on this, Mr. Chair.

    Senator Markey. Without objection, so ordered.

    [The following information can be found on page 60 in 
Additional Material.]

    Senator Marshall. Thank you, and I yield back.

    Senator Markey. Thank you, Ranking Member Marshall. And now 
I turn to recognize Senator Baldwin, who has a special guest to 
the Committee who she is going to introduce.

    Senator Baldwin. Thank you so much, Chairman Markey and 
Ranking Member Marshall. I am so proud to welcome a 
constituent, Christine Huberty, to our Subcommittee hearing 
today.

    Ms. Huberty comes--currently serves as the Lead Benefits 
Specialist, Supervising Attorney at the nonprofit Greater 
Wisconsin Agency on Aging Resources, and it is located in 
Madison, Wisconsin.

    In this role, she provides free legal assistance to 
Northern Wisconsin residents over the age of 60 who need 
assistance in accessing their benefits, including Medicare, 
Medicaid, Social Security, and SNAP. She also provides support 
related to issues with housing and consumer law.

    As you will hear in her testimony, Ms. Huberty has been 
fighting on behalf of Wisconsinites who have had critical 
health services denied by big insurance companies using AI.

    Ms. Huberty, I want to thank you for your advocacy on 
behalf of Wisconsin seniors, and for making this trip to 
Washington, DC. Your testimony highlights the need for us to 
act to address the use of AI. It is simply not right for 
patients to have their care dictated by an algorithm.

    Welcome to the Subcommittee, and I look forward to your 
testimony.

    Senator Markey. Whenever you are comfortable, Ms. Huberty, 
you may begin with your opening statement.

 STATEMENT OF CHRISTINE HUBERTY, SUPERVISING ATTORNEY, GREATER 
        WISCONSIN AGENCY ON AGING RESOURCES, MADISON, WI

    Ms. Huberty. Thank you, Mr. Chairman, and Members of the 
Subcommittee. My name is Christine Huberty, and I have served 
as an Attorney at the Greater Wisconsin Agency on Aging 
Resources since 2015.

    As an advocate for senior residents of Wisconsin, part of 
my job is to provide legal assistance to those aged 60 and over 
who are experiencing health care coverage denials. The purpose 
of my testimony today is to share how the use of AI in health 
care causes patient harm and administrative burdens.

    On May 25th of this year, Jim, age 81, was hospitalized for 
pneumonia secondary to COVID-19. Jim had a history of COPD and 
was at the time undergoing chemotherapy for B-cell lymphoma. 
Jim's doctors recommended that he transfer from the hospital to 
a skilled nursing facility for short term rehab.

    His doctors prescribed at least 30 days of daily therapies 
in order to return to his prior level of functioning. Jim's 
insurance provider, however, relied on technology that said he 
should only need 14.2 to 17.8 days at the rehab facility. Jim 
received a denial on day 16, with coverage ending 2 days later, 
just as the algorithm predicted.

    Jim went home on day 25, not because he was well enough, 
but because he feared the mounting out-of-pocket costs. Jim's 
doctors and therapists did not agree with the algorithm's 
predicted discharge date, nor did they agree with Jim's own 
decision to return home. AI directed Jim's care.

    The subcontractors using the algorithm argue that the 
predicted discharge date is used as a guide only, and medical 
reviewers, humans, make all final denial decisions. If that is 
the case, then humans who had no contact with Jim ignored the 
following in his medical records. He was unable to safely 
swallow by himself and in fact had a choking episode just days 
after he was admitted. His oxygen saturation remained at unsafe 
levels.

    He was at risk of falling and lacked the strength and 
activity tolerance to participate in chemotherapy. He could not 
climb the three stairs necessary to get into his home. He 
required assistance of at least one, if not two, people with 
getting in and out of bed toileting, bathing, and dressing.

    Most egregiously, they ignored the direct words, currently 
not safe to return home with wife. Jim's family helped him 
appeal twice, which was ultimately successful, meaning the 
algorithm got it wrong and a human did not catch the mistake 
until it was challenged. In Wisconsin alone, our agency has 
seen the frequencies of these denials multiply from 1 to 2 per 
year to 1 to 2 per week.

    In 2023, 30.8 million people were enrolled in Jim's type of 
insurance nationally. This means that use of an algorithm for 
this one narrow patient experience is churning out hundreds of 
thousands of incorrect denials that go largely unchallenged. If 
Jim had stayed in the facility the full length of time that his 
doctors advised, it would have cost him over $3,600 due to that 
denial.

    Additionally, Jim's health suffered as a result of his 
early discharge, and members of his family needed to take time 
off work to provide care. Patients may be reimbursed 
financially, but they cannot go back in time and get the care 
that they needed.

    Insurance companies bank on patients not appealing, or in 
many cases with our elderly clients, dying in the process. I am 
only able to share Jim's story because he had family advocating 
for him.

    On his own, Jim may have remained in the facility, drained 
his assets on care, and been forced to take Medicaid, which 
shifts cost to the state. If Jim had returned home on his own, 
most likely he would have been quickly readmitted to the 
hospital or died. He certainly would not have been able to 
navigate the appeals process by himself from his hospital bed.

    Using an algorithm to guide discharges also negatively 
affects the facilities, who must submit almost daily updates to 
the subcontractors regarding that predicted date and provide 
hundreds of pages of medical records when a patient appeals. 
Often, nurses and therapists are called to testify at Federal 
hearings.

    As a result, many facilities are refusing to take patients 
whose insurance uses this predictive technology due to the 
administrative burdens it creates. This means that in rural 
areas, patients need to travel hundreds of miles for the care 
they need only to be met with network restrictions when they 
get there.

    It is unrealistic to eliminate AI completely from the 
health care system, I understand. However, this algorithm alone 
has been used for years to direct patient care with devastating 
consequences. If the machine itself can't be dismantled, then 
patients should at a minimum, have a clear view of its moving 
parts.

    When the algorithm gets it wrong, patients need to be 
compensated, and both the insurance companies and their 
subcontractors must be penalized. I want to thank you for the 
opportunity to speak about this important issue, and I welcome 
any additional questions you have. Thank you.

    [The prepared statement of Ms. Huberty follows.]

                prepared statement of christine huberty
    Dear Mr. Chairman and Members of the Subcommittee:

    My name is Christine J. Huberty and I have served as an attorney at 
the Greater Wisconsin Agency on Aging Resources (GWAAR) since 2015. The 
Elder Law and Advocacy Center at GWAAR provides free legal services to 
adults over age 60 under Title IIIB of the Older Americans Act. As an 
advocate for senior residents of Wisconsin, part of my job is to 
provide legal assistance to individuals experiencing healthcare 
coverage denials. The purpose of my testimony today is to share how the 
use of Artificial Intelligence (AI) in healthcare causes patient harm 
and administrative burdens.

    On May 25, 2023, Jim, age 81, was hospitalized for pneumonia 
secondary to COVID-19. Jim had a history of COPD, and was at the time 
undergoing chemotherapy for B-cell lymphoma. Prior to getting COVID-19, 
Jim lived with his spouse, was independent in all activities of daily 
living, and did not need supplemental oxygen. Therefore, Jim's doctors 
recommended that he transfer from the hospital to a Skilled Nursing 
Facility (SNF) for short-term rehabilitation. His doctors and 
therapists recommended daily skilled therapies for 30 days.

    Jim's insurance provider contracts with a company that used 
proprietary technology to compare his care needs with millions of other 
patients. This technology said Jim should only need 14.2-17.8 days at a 
SNF. \1\ Jim received a denial on day 16, with coverage ending 2 days 
later, just as the algorithm predicted. Jim went home on day 25 not 
because he was well enough, but because he was afraid of the mounting 
out-of-pocket costs. Jim's doctors and therapists did not agree with 
the algorithm's predicted discharge date, nor did they agree with Jim's 
own decision to return home so soon. AI directed Jim's care.
---------------------------------------------------------------------------
    \1\  naviHealth nH Predict Outcome Tool (attached).

    The subcontractors using the algorithm argue that the predicted 
length of stay is used as a guide only, and medical reviewers (humans) 
make all final denial decisions. This may be the case, but if so, these 
---------------------------------------------------------------------------
humans ignored things in Jim's medical records such as:

          He was unable to safely swallow by himself, and in 
        fact had a choking episode just days after he was admitted;

          His oxygen saturation remained at unsafe levels;

          He was at risk of falling and lacked the strength and 
        activity tolerance to participate in chemotherapy;

          He could not climb the three stairs required to get 
        into his home;

          He required assistance of at least one if not two 
        people with getting in and out of bed, toileting, bathing, and 
        dressing; and

          The direct words: ``Currently not safe to return home 
        with wife.''

    Throughout Jim's medical records, the reasoning for discharge was 
not because it was medically appropriate, but because his insurance 
denied coverage based on the algorithm. Jim's family helped him appeal 
twice, which was ultimately successful. Meaning, the algorithm got it 
wrong, and a human did not catch the mistake until it was challenged.

    Some reports show that only 1 percent of denials are appealed, with 
75 percent of those overturned. \2\ Our agency, which serves Wisconsin 
only, has seen the number of these denials increase from 1-2 per year 
to 1-2 per week, with a 90 percent success rate with appeals. In 2023, 
30.8 million people were enrolled in Jim's type of insurance 
nationally. \3\ This means that use of an algorithm for this one narrow 
patient experience is churning out hundreds of thousands of incorrect 
denials that go largely unchallenged, leaving patients and their 
families to suffer. When I called Jim's family for permission to share 
his story, they told me they knew of four other individuals this had 
happened to in the past 2 years. None of those cases reached our 
agency.
---------------------------------------------------------------------------
    \2\  Office of Inspector General, Medicare Advantage Appeal 
Outcomes and Audit Findings Raise Concerns About Service and Payment 
Denials (Sept. 2018). https://oig.hhs.gov/oei/reports/oei-16-00410.pdf
    \3\  KFF, Medicare Advantage in 2023: Enrollment Update and Key 
Trends (Aug. 2023). https://www.kff.org/Medicare/issue-brief/Medicare-
advantage-in--2023-enrollment-update-and-key-trends/

    If Jim had stayed in the SNF the full length of time his doctors 
advised, it would have cost him over $3,600 due to the denial. Even 
more troubling is that Jim's health suffered as a result of his early 
discharge, and several members of his family needed to take time off 
---------------------------------------------------------------------------
from their own jobs to help provide care.

    I am only able to share Jim's story because he had family 
advocating for him. On his own, Jim may have remained in the facility, 
drained his assets, and been forced to take Medicaid, which then shifts 
the costs to the state. Insurance providers often cite potential 
eligibility for Medicaid as a reason for a denial in medical records. 
It is not unrealistic to imagine that if Jim had returned home on his 
own when he did, he would have been quickly readmitted to the hospital 
or died. He certainly would not have been able to navigate the appeals 
process by himself from his hospital bed.

    The effects of the use of the algorithm to guide discharges not 
only causes patient harm, but also negatively affects the facilities, 
which must submit near daily updates to the subcontractors regarding 
the predicted discharge date, and provide hundreds of pages of medical 
records when a patient appeals. Often, nurses and therapists are called 
to testify at Federal hearings. This is on top of an already 
understaffed, overworked, and underpaid care system. As a result, many 
facilities are refusing to take patients whose insurance uses this 
predictive technology due to the administrative burdens it creates. 
This means that in rural areas, patients need to travel hundreds of 
miles for the care they need, only to be met with network restrictions 
when they get there. Also, if a patient is readmitted to the hospital 
after being discharged from the SNF too soon, the facility is the one 
penalized. \4\
---------------------------------------------------------------------------
    \4\  JAMA Network, Skilled Nursing Facility Performance and 
Readmission Rates Under Value-Based Purchasing (Feb. 2022). https://
jamanetwork.com/journals/jamanetworkopen/fullarticle/2789442; CMS, The 
Skilled Nursing Facility Value-Based Purchasing (SNF VBP) Program. 
https://www.cms.gov/Medicare/quality/nursing-home-improvement/value-
based-purchasing

    Meanwhile, neither the insurance provider nor its subcontractors 
suffer negative consequences. The burden is on the patient to prove why 
the algorithm got it wrong. If the appeal makes it to the Federal 
hearing stage, a judge will order the insurance company pay what it was 
supposed to pay in the first place, and the practice continues. 
Insurance companies rely on patients not appealing, or in many of our 
---------------------------------------------------------------------------
cases with elderly clients, dying in the process.

    It is unrealistic to eliminate AI from the healthcare system. 
However, this algorithm has been used for years to direct patient care 
with devastating effects. If the machine itself cannot be dismantled, 
then patients should have, at a minimum, a clear view of its moving 
parts. Additionally, when it is obvious that the algorithm got it wrong 
and issued an incorrect denial, patients need to be compensated, and 
insurance companies and their subcontractors must be penalized.

    I want to thank you for the opportunity to speak about this 
important issue and I welcome any additional questions you may have.
                                 ______
                                 
    Senator Markey. Thank you very much. Our next witness is 
Dr. Thomas Inglesby. Dr. Inglesby is a Professor at Johns 
Hopkins University and the Director of the Johns Hopkins Center 
for Health Security.

    Dr. Inglesby chaired the Centers for Disease Control and 
Prevention Center for Preparedness and Response's Board of 
Scientific Counselors.

    He has advised the Department of Health and Human Services, 
and he has also a--served as a Senior Adviser on the White 
House COVID-19 Rapid Response Team. Welcome, Dr. Inglesby. 
Whenever you feel comfortable, please begin.

 STATEMENT OF THOMAS INGLESBY, DIRECTOR, JOHNS HOPKINS CENTER 
               FOR HEALTH SECURITY, BALTIMORE, MD

    Dr. Inglesby. Thank you. Chairman Markey, Ranking Member 
Marshall, and distinguished Members of the Subcommittee, it is 
my pleasure to appear before you to discuss the use of 
artificial intelligence in health care.

    My name is Tom Inglesby. I am Director of the Johns Hopkins 
Center for Health Security and Professor in the Department of 
Environmental Health and Engineering in the Johns Hopkins 
Bloomberg School of Public Health.

    I am also a medical doctor with a background of providing 
care for patients with HIV, and the opinions expressed here are 
my own and do not necessarily reflect the views of Johns 
Hopkins University.

    AI offers great potential benefits for health care and 
public health. In health care, it could drive earlier disease 
diagnosis. It could reduce medical errors, lead to more 
efficient, less invasive surgeries.

    In public health, it could improve disease surveillance and 
perhaps provide earlier indicators of outbreaks, even making it 
possible to contain smaller outbreaks before they become 
epidemics. However, to realize these benefits, it is vital to 
address potentially very serious risks.

    AI developers could inadvertently introduce biases into 
health care related models. Models could fail to protect 
privacy, leading to the public sharing of patients' sensitive 
health care data. Training data could include serious 
inaccuracies, leading to misleading results that are difficult 
to detect.

    These are among important risks that Congress will need to 
assess, and where needed, create legislative remedies. My 
testimony focuses on two high consequence risks related to AI 
and the biological sciences that I believe deserve top priority 
for attention and strong governance.

    First, the potential for AI to accelerate or simplify the 
creation of dangerous viruses that are now extinct, or 
dangerous viruses that only exist within research laboratories. 
And second, the potential for AI to enable, accelerate, or 
simplify the creation of entirely new biological constructs 
that could start a new pandemic.

    The Executive Order on AI signed last week launched a 
series of important strong actions to address and minimize 
biosecurity risks posed by AI. In addition, several 
foundational AI and protein design model developers have 
already taken important steps to reduce biosecurity risks, 
which I highly commend, but more action is needed.

    To that end, I recommend Congress take three immediate 
steps to further protect against possible high consequence 
biological risks emanating from future generation AI models. 
First, Congress should provide HHS with the authority and 
resources to require anyone purchasing synthetic nucleic acids 
in the U.S. to purchase only from a nucleic acid provider that 
conducts sequence and customer screening irrespective of 
funding source.

    This would go--this would build on but go further than the 
requirements of the Executive Order that was signed last week, 
which covered only federally funded entities. And this would 
help establish uniform protection against the risks of 
synthesizing highly dangerous viruses in the U.S. and give the 
U.S. a platform to advocate for strong international screening 
standards.

    Second, Congress should commission a rapid risk assessment 
to identify whether the Executive Order signed last week will 
adequately address high end biological risks or whether 
additional Congressional action is needed to prevent those 
threats.

    I want to commend Chairman Markey and Senator Budd for 
their leadership on the Artificial Intelligence and Biosecurity 
Risk Assessment Act and recommend taking this additional step 
in light of the Executive Order.

    Third, Congress should require entities developing products 
with significant dual use risks to evaluate and red team their 
models, identify significant risks, and address them. Congress 
should also task an agency with auditing these high risk dual 
use models and submitting a report to Congress with 
recommendations for new authorities that will be needed by the 
agency to take any appropriate remedial actions.

    It will be important to conduct red teaming evaluations and 
audits before future dual use, high end risk bio models are 
made wholly open source on the internet, because once that 
occurs, they cannot be recalled. We only have one chance to get 
things right for each new open source model release.

    If taken now, these measures taken together will reduce the 
risk of high consequence, malicious, and accidental events 
derived from AI that could trigger future pandemics, which 
would likely also broadly derail the beneficial uses of 
powerful AI models.

    Congress should pursue these measures in a manner that will 
allow AI developers and scientists to continue to vigorously to 
pursue the many very positive uses of AI to improve human 
health. Thank you again for the opportunity to testify, and I 
look forward to your questions.

    [The prepared statement of Dr. Inglesby follows.]

                   prepared statement of tom inglesby
                              Introduction
    Chairman Markey, Ranking Member Marshall, and distinguished Members 
of the Committee, it is my pleasure to appear before you today to 
discuss the potential benefits and challenges related to artificial 
intelligence (AI) use in health care and public health. In order to 
harness the great promise that AI holds for benefits in health care and 
public health, AI risks (including privacy, data integrity, and bias) 
all need to be rigorously addressed.

    Within the realm of AI models working in the biological sciences, I 
want to urge this Committee to place high priority on establishing 
strong governance over the highest potential dual-use risks of AI and 
biosecurity (AIxBio), which I judge to be: (1) the potential for AI to 
accelerate or simplify the reintroduction of particularly dangerous 
extinct viruses or dangerous viruses that only exist now within 
research labs; and (2) the potential for AI to enable, accelerate, or 
simplify the creation of entirely new biological constructs that could 
start a new pandemic. Taken together, AI foundation models like large 
language models (LLMs), and AI biological design tools (BDTs), such as 
models focused on protein design or immune evasion, could now or in the 
foreseeable future be misused to purposefully create such threats. We 
should start working to guard against these risks today.

    My name is Tom Inglesby. I am Director of the Johns Hopkins Center 
for Health Security and Professor in the Department of Environmental 
Health and Engineering in the Johns Hopkins Bloomberg School of Public 
Health, with a Joint Appointment in the Johns Hopkins School of 
Medicine. I'm also a medical doctor with a background caring for 
patients with HIV, and I worked on the COVID pandemic response, 
including on resolving challenges around access to diagnostic testing 
for COVID. The opinions expressed herein are my own and do not 
necessarily reflect the views of Johns Hopkins University.

    For 25 years, our Center's mission has been to protect people's 
health from major epidemics and disasters and build resilience to those 
challenges. Our Center is comprised of researchers and experts in 
science, medicine, public health, law, social sciences, economics, and 
national security--all focused on our mission to protect people's 
health from epidemics and disasters and ensure that communities are 
resilient to major challenges. Our team conducts independent research 
and analyzes how scientific and technological innovations can 
strengthen health security. Our Center founded the bipartisan Capitol 
Hill Steering Committee on Pandemic Preparedness and Health Security in 
2020, in collaboration with Members of the House and Senate, as well as 
former Administration officials, as an educational forum to discuss new 
topics, technologies, and ideas that can improve domestic health 
security now and in the future. The Steering Committee has held over 20 
sessions in the last 3 years intended to be of value to congressional 
offices working on pandemic and biosecurity challenges.

    Today, I was asked to provide comments on how we can guard against 
potential harms of AI while at the same time working to ensure that AI, 
where implemented, is done so in ways that will improve patient 
experience and outcomes. In my testimony below, I provide my views on 
the enormous potential benefits of AI in health care and the 
substantial potential risks that need to be addressed before and while 
realizing those benefits. Prior to offering those views, I want to give 
my top line recommendations as to what Congress should be doing at this 
time to address the greatest AIxBio risks.

    To that end, I recommend that Congress now build on the strong 
foundation provided by the October 30 Executive Order titled: Safe, 
Secure, and Trustworthy Development and Use of Artificial Intelligence 
(EO no.14110). I recommend that congressional actions related to this 
include:

          (1) Providing the Department of Health and Human Services 
        (HHS) with the authority and resources to require anyone 
        purchasing synthesized nucleic acids, regardless of the funding 
        source, to purchase only from a provider or manufacturer that 
        screens both orders and customers in a way that reduces the 
        highest potential dual-use risks of AIxBio. \1\
---------------------------------------------------------------------------
    \1\  (requiring that all federally funded entities conducting life-
sciences research purchase synthetic nucleic acids only from providers 
or manufacturers that adhere to the screening framework developed by 
NIST). Safe, Secure, and Trustworthy Development and Use of Artificial 
Intelligence, 88 Fed. Reg. 75191 (Nov. 1, 2023), Sec.  4.4(b)(iii).

          (2) Commissioning a rapid risk assessment to identify whether 
        EO #14110 as written will adequately address high-end 
        biological risks or whether congressional action is needed in 
---------------------------------------------------------------------------
        the near-term to ensure prevention of those threats.

          (3) Requiring entities developing models with significant 
        dual-use risks to red-team and evaluate their models, and task 
        an agency with: (1) auditing those models; and (2) submitting a 
        report to Congress with recommendations for new authorities 
        that will be needed by the agency to take any appropriate 
        remedial action should red-teaming, evaluations, or audits 
        fail.

    If taken now, these measures will reduce the risk of malicious and 
consequential misuse of AI-enabled biology while allowing AI developers 
and scientists to pursue beneficial uses of AI to improve the human 
condition.

Medical and Public Health Benefits of AI and Recognition of Other Risks 
                             in Health Care
    AI holds great promise for benefits in health care and public 
health. Potential benefits include earlier disease diagnoses, allowing 
doctors to intervene earlier in the course of an illness; reduced 
medical errors; more efficient or less invasive surgeries; lowering of 
administrative burdens on clinicians to allow more time with patients; 
and faster response times to patient questions. Researchers and 
companies may be able to create or use AI tools to help them accelerate 
development of vaccines and medicines and to significantly advance 
personalized medicine. AI may be able to improve disease surveillance 
and perhaps even provide earlier indicators of new outbreaks or 
epidemics. It will place stronger diagnostic and clinical tools in the 
hands of providers in the field or those in clinics far from more 
advanced health care systems. \2\ AI could also assist with more 
careful monitoring of drug safety and help to improve, and potentially 
greatly accelerate, clinical trials of new medicines.
---------------------------------------------------------------------------
    \2\  World Health Organization (WHO), Ethics and Governance of 
Artificial Intelligence for Health, WHO (June 28, 2021), https://
www.who.int/publications/i/item/9789240029200; IBM Education, How Can 
Artificial Intelligence Benefit Healthcare?, IBM (July 11, 2023), 
https://www.ibm.com/blog/the-benefits-of-ai-in-healthcare/.

    To realize these benefits, policymakers, companies, and health 
systems will need to take great care in implementing consequential AI 
systems, and all parties will need to address a series of risks and 
potentially serious challenges. For instance, developers could 
inadvertently introduce biases into the models that are being developed 
in AI health care systems. Policymakers and firms will need to ensure 
that privacy is protected so that individual patient information is not 
inappropriately accessed or shared publicly. This includes addressing 
cybersecurity issues in AI, such as the potential for offensive cyberAI 
to outstrip cyberAI's defensive capabilities, using lessons learned 
from cyber governance. \3\ The quality and integrity of the training 
data for AI systems will need to be high - inaccuracies or skews in the 
data that AI systems are being trained on could lead to inaccurate or 
misleading results that could be damaging and hard to detect. \4\
---------------------------------------------------------------------------
    \3\  Louis Columbus, Defensive Vs. Offensive AI: Why Security Teams 
are Losing the AI War, VENTUREBEAT (Jan. 3, 2023, 10:07 AM), https://
venturebeat.com/security/defensive-vs-offensive-ai-why-security-teams-
are-losing-the-ai-war/.
    \4\  World Health Organization (WHO), Ethics and Governance of 
Artificial Intelligence for Health, WHO (June 28, 2021), https://
www.who.int/publications/i/item/9789240029200.

    There are additional legal and ethical risks associated with AI. 
When implementing the technology, it will be vital to ensure that AI is 
not used as a substitute for investment in and development of core 
health functions. \5\ Many have identified these and other challenges, 
and it's good to see that U.S.-based companies are trying to work with 
the government to find feasible ways of effectively mitigating the 
range of potential AI risks to health care. It will be important for 
Congress to regularly assess the extent to which AI developers and 
health care systems are addressing these risks, and to consider 
legislative remedies to address any clear gaps.
---------------------------------------------------------------------------
    \5\  World Health Organization (WHO), WHO Issues First Global 
Report on Artificial Intelligence (AI) in Health and Guiding Principles 
for Its Design and Use, WHO (June 28, 2021), https://www.who.int/news/
item/28-06-2021-who-issues-first-global-report-on-ai-in-health-and-six-
guiding-principles-for-its-design-and-use.
---------------------------------------------------------------------------
                 The Need for Strong AIxBio Governance
    One area of risk that deserves special and immediate attention is 
the potential for AI systems to create high-consequence biosecurity and 
biosafety risks. Leaders from the AI technology field have identified 
those risks as among their highest priority concerns, as have 
government officials and outside research groups focused on the 
establishment of AI governance systems. \6\
---------------------------------------------------------------------------
    \6\  See, e.g., Diane Bartz, U.S. Senators Express Bipartisan Alarm 
About AI, Focusing on Biological Attack, REUTERS (July 25, 2023,10:23 
PM), https://www.reuters.com/technology/us-senators-express-bipartisan-
alarm-about-ai-focusing-biological-attack-2023-07-25/; Congresswoman 
Anna G. Eshoo, Eshoo Urges NSA & OSTP to Address Biosecurity Risks 
Caused by AI, CONGRESSWOMAN ANNA G. ESHOO (Oct. 25, 2022), https://
eshoo.house.gov/media/press-releases/eshoo-urges-nsa-ostp-address-
biosecurity-risks-caused-ai; The White House, Fact Sheet: President 
Biden Issues Executive Order on Safe, Secure, and Trustworthy 
Artificial Intelligence, WHITE HOUSE (Oct. 30, 2023), https://
www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-
sheet-president-biden-issues-executive-order-on-safe-secure-and-
trustworthy-artificial-intelligence/; Nuclear Threat Initiative (NTI), 
Report Launch: The Convergence of Artificial Intelligence and the Life 
Sciences, NTI (Oct. 30, 2023), https://www.nti.org/events/report-
launch-the-convergence-of-artificial-intelligence-and-the-life-
sciences/.

    Signed last week, EO #14110 represents the strongest action on AI 
that any government has taken thus far. It sets out a series of high-
level principles and priorities that broadly commit the country's AI 
path to: developing safe and secure AI systems; responsible innovation 
and competition; a commitment to supporting workers; advancing equity 
around AI; the protection of privacy and civil liberties; responsible 
---------------------------------------------------------------------------
Federal use of AI; and strong global leadership.

    As part of this overall approach, the EO identifies a series of 
specific risks the executive branch will work to address, including the 
risk that AI systems could substantially lower the barrier of entry to 
design, synthesize, acquire, or use biological weapons. It details a 
series of important steps the executive branch will take in the months 
ahead to develop guidance, identify new industry norms, and evaluate 
potential risks in order to protect against AI being deliberately 
misused for this purpose.

    The EO directs the National Institute of Standards and Technology 
(NIST) to develop guidelines and best practices, with the aim of 
promoting consensus industry standards for safe and secure systems that 
include benchmarks for evaluating and auditing AI capabilities to cause 
harm, as well as guidance for AI developers regarding red-teaming 
practices and testing processes and environments. It also directs the 
Department of Energy to implement tools and testbeds for evaluating 
AIxBio capabilities and to develop guardrails that reduce these risks.

    The EO directs the Department of Commerce to require companies with 
frontier dual-use foundation AI models (models that could potentially 
lower barriers for designing/synthesizing bioweapons) to report 
activities related to the production of those models, the protection of 
key model characteristics, and the results of red-teaming tests.

    The EO also directs the Office of Science and Technology Policy 
(OSTP) to establish a framework that encourages providers of synthetic 
nucleic acid sequences to implement comprehensive nucleic acid 
procurement screening mechanisms. As part of that effort, OSTP will 
need to establish criteria and mechanisms for identifying sequences 
that pose a risk to national security and determine methodologies for 
verifying performance of screening, including customer screening 
approaches. Six months after the creation of this framework, all 
agencies that fund life sciences work will establish that their funding 
recipients procure nucleic acid sequences from manufacturers that 
adhere to this framework.

    My Center, along with other biosecurity-focused researchers and 
experts, as well as industry leaders from the companies that conduct 
nucleic acid synthesis, have been calling for the development of a 
framework to require those who procure nucleic acid sequences to 
purchase them from companies that are verified to be carefully 
screening orders and customers in order to deter and detect any 
potentially malicious actors. I'm very glad that the EO makes progress 
on this issue for those entities receiving Federal funding.

    I believe that this series of EO actions, taken together, are 
appropriate, important, strong actions that are needed to better 
assess, evaluate, test for, and diminish biological risks posed by new 
AI models. AI foundation models, LLMs, and AI biological design tools--
such as those that help to design and predict structures of proteins, 
design viral vectors, or predict the properties of pathogens, host-
pathogen interactions, or immune-system evasion--could be misused by 
accelerating the synthesis/manufacture of extinct or eradicated highly 
transmissible viruses, or by helping to design novel biological 
constructs capable of epidemic or pandemic spread. While more 
evaluation and study of these risks are clearly needed, preliminary 
evidence suggests that AI models could in the foreseeable future 
accelerate, simplify, or enable the creation of these risks. Early 
technical studies from nongovernmental research teams that I've been 
briefed on are quite worrying. As these assessments are ongoing, we 
need a governance process that will address risks identified during 
red-teaming exercises and other evaluations.

    Beyond this EO, I have been encouraged by other developments to 
address these risks. I highly commend many of the AI companies for 
making voluntary commitments to pre-release internal and external 
security testing of their AI systems, which includes testing by 
independent experts to guard against biosecurity risks. \7\ The first 
step in addressing risk is to identify it, and many of the companies 
developing frontier models have made progress in the past year in 
trying to understand the biosecurity risks that their models may pose 
and addressing those risks. \8\
---------------------------------------------------------------------------
    \7\  The White House, Fact Sheet: Biden-.Harris Administration 
Secures Voluntary Commitments from Leading Artificial Intelligence 
Companies to Manage the Risks Posed by AI, WHITE HOUSE (July 21, 2023), 
https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/
21/fact-sheet-biden-harris-administration-secures-voluntary-
commitments-from-leading-artificial-intelligence-companies-to-manage-
the-risks-posed-by-ai/.
    \8\  See, e.g., Diane Bartz, U.S. Senators Express Bipartisan Alarm 
About AI, Focusing on Biological Attack, REUTERS (July 25, 2023,10:23 
PM), https://www.reuters.com/technology/us-senators-express-bipartisan-
alarm-about-ai-focusing-biological--attack-2023-07-25/ (Anthropic 
warning Senators about biological risks during congressional 
testimony); Anthropic, Frontier Threats Red Teaming for AI Safety, 
ANTHROPIC (July 26, 2023), https://www.anthropic.com/index/frontier-
threats-red-teaming-for-ai-safety (Anthropic developing red-teaming 
tests to guard against biosecurity risks).

    I'm also encouraged by the Institute for Protein Design's 
community-wide effort to develop new voluntary guidelines for 
researchers to follow as they apply AI to protein research. Such 
commitments can help establish community standards and encourage 
ethical behavior on the part of individual scientists by, for example, 
creating an obligation to report any concerning research practices. \9\
---------------------------------------------------------------------------
    \9\  Institute for Protein Design (IPD), Results from our Summit on 
Responsible AI, IPD (Oct. 31, 2023), https://www.ipd.uw.edu/2023/10/
responsible-ai-summit/.

    Strong governance will also require international collaboration. 
That is why I'm very pleased to see that the U.S. and 27 other 
countries recognized the special risks that AI poses in biotechnology 
in the recently signed Bletchley Declaration by Countries Attending the 
AI Safety Summit. \10\ I'm further encouraged that at least two 
Artificial Intelligence Safety Institutes have already been stood up--
one in the UK and one at NIST in the U.S. Department of Commerce--to 
provide testing environments for researchers to evaluate emerging AI 
risks, such as those at the intersection of AI and biotechnology.
---------------------------------------------------------------------------
    \10\  The Prime Minister's Office, The Bletchley Declaration by 
Countries Attending the AI Safety Summit, 1-2 November 2023, PRIME 
MINISTER'S OFFICE (Nov. 1, 2023), https://www.gov.uk/government/
publications/ai-safety-summit-2023-the-bletchley-declaration/the-
bletchley-declaration-by-countries-attending-the-ai-safety-summit--1-2-
november--2023.
---------------------------------------------------------------------------
                            Recommendations
    Congress should ensure that as the U.S. government acts to mitigate 
the risks of AIxBio, it set as its highest priority the reduction of 
the two most consequential biological risks, which I argue are: (1) the 
potential for AI to accelerate or simplify the reintroduction of 
particularly dangerous extinct viruses or dangerous viruses that only 
exist now within research labs; and (2) the potential for AI to enable, 
accelerate, or simplify the creation of entirely new biological 
constructs that could start a pandemic.

    While I am encouraged by recent actions being taken by the U.S. 
government, industry developers of powerful AI technologies, and 
researchers in the field, there are series of steps that I think will 
be important for Congress to attend to in the time ahead to ensure that 
these two most consequential biological risks are addressed. They 
include:

          (1) Providing HHS with the authority and resources to require 
        anyone purchasing synthesized nucleic acids, regardless of the 
        funding source, to purchase only from a provider or 
        manufacturer that screens both orders and customers in a way 
        that reduces the highest potential dual-use risks of AIxBio.

    Our increasing ability to automate scientific experiments, cheaply 
synthesize nucleic acids, and autonomously generate biological 
constructs will likely speed up development of drugs and devices to 
protect and prolong human health and allow the advent of enormously 
powerful medical tools that will protect millions of American lives, 
such as personalized medicine. \11\ But we must ensure at the same time 
that these new powers are not used maliciously to cause great harm. 
Certain AI models will likely help to accelerate the transition across 
the ``digital-to-physical'' boundary--they may also enable digitally 
designed threats to turn into physical biological risk. They could be 
used to help malicious actors create highly dangerous and transmissible 
pathogens. Without a strong screening framework in place and required 
of all companies, such actors could exploit companies that do not 
screen customers or orders, or they could find gaps in screening 
programs that are weak or insufficient to guard against exploitation. 
\12\
---------------------------------------------------------------------------
    \11\  Kanika Jain, Synthetic Biology and Personalized Medicine, 22 
MED. PRINC. PRAC. 209 (2013), https://doi.org/10.1159/000341794.
    \12\  The Hon. Mark Dybul et al., Biosecurity in the Age of AI: 
Chairperson's Statement, HELENA (July 2023), https://
www.helenabiosecurity.org.

    In order to secure the digital-to-physical frontier, it will be 
critical to implement mandatory screening policies for gene synthesis 
providers and manufacturers. EO #14110 requires that all federally 
funded entities conducting life sciences research must purchase 
synthetic nucleic acids from gene synthesis providers or manufacturers 
that adhere to a gene synthesis screening framework to be developed by 
OSTP. \13\ This is an excellent initial step, but Congress should 
further provide HHS--as by far the largest government funder of life 
sciences research--with the authority and resources to expand this 
requirement to all U.S. purchasers of synthetic nucleic acids, not just 
those receiving Federal funding. There is broad public support for 
this--a recent poll found that 61 percent of Americans of all political 
affiliations support such an expansion, while only 12 percent do not. 
\14\ My understanding is that the EO's screening requirements were 
applied only to federally funded entities because the authority to 
regulate the purchases by other entities in this manner does not 
currently exist within the executive branch. That suggests that action 
by Congress is vital. Congress should also give HHS the authority and 
resources to set up verification mechanisms to ensure that 
manufacturers and purchasers comply with screening requirements.
---------------------------------------------------------------------------
    \13\  Sec.  4.4(b)(iii).
    \14\  Artificial Intelligence Policy Institute (AIPI), Vast 
Majority of U.S. voters of All Political Affiliations Support President 
Biden's Executive Order on AI, AIPI (Oct. 30, 2023), https://
theaipi.org/poll-biden-ai-executive-order-10--30/.

    While Congress works to ensure that U.S. gene synthesis providers 
follow OSTP's framework, the executive branch should focus on promoting 
the adoption of similar standards internationally. Around 60 percent of 
the gene synthesis market sits outside of North America. \15\ Not only 
does this mean that malicious actors within the U.S. can access 
international providers, but as COVID-19 demonstrated, borders are not 
a protection against disease--a gene synthesis-driven outbreak abroad 
could have terrible impact in the U.S.. It is therefore crucial that 
the executive branch works to create a widely adopted international 
agreement that requires all gene synthesis providers globally to adhere 
to rigorous screening standards. The framework that will be developed 
as part of this EO will provide a vital starting point for such an 
agreement.
---------------------------------------------------------------------------
    \15\  (though the market share of the U.S. is expected to increase 
in coming years). Global Market Insights (GMI), Gene Synthesis Market--
By Method (Solid-phase Synthesis), By Services (Antibody DNA 
Synthesis), By Application (Vaccine Development) By End-use (Academic 
and Research Institutes, Biopharmaceutical Companies,) & Forecast 
2023--2032, GMI (May 2023), https://www.gminsights.com/industry-
analysis/gene-synthesis-market.

          (2) Commissioning a rapid report to identify whether EO 
        #14110 as written will adequately address high-end biological 
        risks or whether congressional action is needed in the near 
---------------------------------------------------------------------------
        term to prevent those threats.

    Although EO #14110 requires studies and reports on AIxBio risks, 
\16\ those studies and reports (1) are not required to be reported to 
Congress; (2) will not include any new legislative recommendations; and 
(3) do not clearly prioritize high-end biological risks.
---------------------------------------------------------------------------
    \16\  Sec. Sec.  4.4(a), 4.6.

    For example, the EO requires the Department of Homeland Security 
(DHS) to submit a report to the president on the potential for AI to be 
misused to enable the development or production of chemical, 
biological, radiological, and nuclear (CBRN) threats. It also requires 
the Department of Defense (DOD) to commission a report on biosecurity 
risks from AI. These are important actions for the executive branch to 
take. However, given the fast-moving nature of this technology and 
Congress's role in ensuring that the executive branch has the tools and 
resources it needs to appropriately govern, Congress should commission 
a rapid report to identify whether EO #14110 as written will adequately 
address high-end biological risks or whether congressional action is 
---------------------------------------------------------------------------
needed in the near term to ensure prevention of those threats.

    The need for this focus on high-end risks is akin to the important 
focus that is warranted around the governance of enhanced potential 
pandemic pathogen (ePPP) research. The U.S. government should carefully 
scrutinize research that can reasonably be anticipated to create novel 
pandemic threats, lest we face the devastating consequences of an 
accident or deliberate misuse. Similarly, we should advance 
cautiously--and with full awareness of the relevant risks--as we fund 
and promote the creation of advanced AI models. In prior work on other 
issues related to biological threats, I have seen efforts that have 
neglected or paid insufficient attention to high-end biological risks, 
and I fear that the same thing could happen in this context.

    Commissioning a rapid report on high-end biological risks posed by 
AI would provide timely clarity to Congress as it considers how to 
ensure the country is harnessing the incredible transformative power 
that AI promises in health care, public health, and broader society 
while guarding against its greatest risks. It would be logical for the 
Administration for Strategic Preparedness and Response (ASPR) to have 
responsibility for such a report given its responsibilities around 
genome synthesis screening and assessment of risks related to ePPP 
research.

          (3) Requiring entities developing models with significant 
        dual-use risks to red-team and evaluate their models, and task 
        an agency with: (1) auditing those models; and (2) submitting a 
        report to Congress with recommendations for new authorities 
        that will be needed by the agency to take any appropriate 
        remedial action should red-teaming, evaluations, or audits 
        fail.

    Just as EO #14110 establishes a safety program at HHS that provides 
for remedial action if it finds harms or unsafe health care practices 
involving AI, \17\ so too should Congress establish a program that 
provides for remedial action in the event that red-teamers demonstrate 
AI models enable high-end biological risks, evaluations identify high-
end biological risks, or audits find that a company did not provide 
accurate information regarding high-end biological risks. What is 
currently required by the EO in the area of high-end biological risks 
is that companies developing or intending to develop dual-use 
foundation models must report relevant technical information to the 
Federal Government, including red-teaming performance related to AIxBio 
risks. \18\ However, the question that Congress should address is: what 
happens in the event of failures? What can the government do if tests 
show that a model is too dangerous to release safely?
---------------------------------------------------------------------------
    \17\  The White House, Fact Sheet: President Biden Issues Executive 
Order on Safe, Secure, and Trustworthy Artificial Intelligence, WHITE 
HOUSE (Oct. 30, 2023), https://www.whitehouse.gov/briefing-room/
statements-releases/2023/10/30/fact-sheet-president-biden-issues-
executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
 
    \18\  Sec.  4.2(i).

    EO no.14110 does not actually require companies to conduct red-
teaming tests, evaluations, or audits. Instead, the EO simply requires 
that if a company voluntarily opts to red-team its dual-use foundation 
model, the results of those tests must be reported. \19\ Moreover, the 
EO does not require individuals or groups that may develop AI systems 
in the future to report the same activities required of companies in 
the EO. \20\ Accordingly, Congress should develop legislation to 
require all entities (not just companies) developing models with high-
end, dual-use biological risks \21\ to red-team, evaluate, and audit 
their models.
---------------------------------------------------------------------------
    \19\  Id.
    \20\  Compare Sec.  4.2(i) with Sec.  4.2(ii). I suspect that this 
is because individuals or groups, such as academic institutions, are 
not currently developing frontier AI models. However, this could shift 
in the future, such as if the National AI Research Resource (NAIRR) 
provides independent AI researchers and students with significantly 
expanded access to computational resources. Accordingly, a 
capabilities-based requirement rather than an entity-based requirement 
seems warranted.
    \21\  Potentially subject to be defined by the actions taken in the 
EO. See Sec.  4.2(b).

    Additionally, while NIST is tasked with developing auditing 
standards in the EO, it's unclear whether any U.S. government agency 
would have the authority to require entities to grant the government 
permission to audit those models, by which I mean the assessment of 
developers' red-teaming efforts as well as an evaluation of frontier 
models by the government itself. Nor is it clear by what authority the 
U.S. government could take remedial action should its evaluation, or 
that of the developers, find a model dangerous. Congress should 
therefore task an agency with: (1) auditing those models as described 
above, as the agency deems necessary; and (2) submitting a report to 
Congress with recommendations for new authorities that will be needed 
by the agency to take any appropriate remedial action such as pausing 
development until safety measures can be implemented, cessation of 
development, or directing the developer to face other consequences if 
red-teaming, evaluations, or audits fail. In conducting these 
evaluations, agencies should of course consider both the most extreme 
risks posed by advanced models as well as their potential benefits, 
both in detecting and flagging pandemic threats and in mitigating them 
---------------------------------------------------------------------------
through vaccine and drug design.

    One of the most concerning risks of AI models is that if they 
become wholly open source and available on the internet, they cannot be 
recalled. \22\ That is why red-teaming, evaluations, and audits will be 
so important to conduct before future dual-use, high-end risk bio 
models are made open source--we will only have one chance to get it 
right for each release.
---------------------------------------------------------------------------
    \22\  See, e.g., the leak of Meta's Llama model.

    It will also be important for Congress to consider how to support 
the development of a skilled workforce able to sufficiently red-team 
frontier dual-use foundation models for the highest-consequence 
biological risks. Providing these authorities will ensure that the AI 
systems that could be used to design new effective pharmaceuticals, 
make breakthroughs in fundamental biology, and give doctors powerful 
new diagnostic tools do not create new pandemic risks that both 
endanger the public and threaten to undermine AI's great potential 
benefit.
                               Conclusion
    In order to harness the great promise that AI holds for benefits in 
health care and public health, AI risks (including privacy, data 
integrity, bias) will all need to be rigorously addressed. Within the 
realm of AI models working in the biological sciences, there are two 
high-consequence risks that deserve top priority for attention and 
strong governance: (1) the potential for AI to accelerate or simplify 
the reintroduction of particularly dangerous extinct viruses or 
dangerous viruses that only exist now within research labs; and (2) the 
potential for AI to enable, accelerate, or simplify the creation of 
entirely new biological constructs that could start a new pandemic.

    While I am encouraged by recent actions taken by the U.S. 
government, industry developers of powerful AI technologies, and 
researchers in the field, I outline above three steps that I think will 
be important for Congress to attend to in the time ahead to ensure that 
these high-consequence risks are addressed. If taken now, these 
measures will help to reduce the risk of malicious and consequential 
misuse of AI-enabled biology while allowing AI developers and 
scientists to pursue beneficial uses of AI to broadly improve medicine, 
public health, and patient outcomes.
                                 ______
                                 
    Senator Markey. Thank you, doctor. Our next witness is Dr. 
Kenneth Mandl. He is the Director of the Computational Health 
Informatics Program, excuse me, at Boston Children's Hospital, 
and he is a Professor of Pediatrics and Biometric Informatics 
at Harvard Medical School.

    Dr. Mandl is also, importantly, Co-Chairing the National 
Academy of Medicine's Digital Health Action Collaborative, 
which is working to facilitate the adoption of an AI code of 
conduct to ensure responsible and equitable use of AI in health 
care and in research. Welcome, doctor. Whenever you are ready, 
please begin.

STATEMENT OF KENNETH D. MANDL, HARVARD PROFESSOR AND DIRECTOR, 
  COMPUTATIONAL HEALTH INFORMATICS PROGRAM, BOSTON CHILDREN'S 
                      HOSPITAL, BOSTON, MD

    Dr. Mandl. Thank you, Subcommittee Chairman Markey, and 
Ranking Member Marshall, and Members of the Subcommittee----

    Senator Markey. Could you just move in a little closer and 
move the microphone a little closer, please.

    Dr. Mandl. Of course. It is with a deep sense of 
responsibility and privilege that I offer my testimony, as a 
Professor of Biomedical Informatics and Pediatrics, and 
Director of a Computational Health Informatics Program. I do 
Co-Chair the National Academy of Medicine's Digital Health 
Action Collaborative, but I am not speaking on behalf of the 
Academy today.

    With the release of sophisticated large language models 
like ChatGPT, AI will transform health care delivery sooner 
than anticipated. These emerging intelligences assimilate vast 
amounts of information and demonstrate remarkable empathy and 
profound reasoning.

    But they are flawed, can produce inaccurate responses, 
hallucinate, and the precision of their answers changes over 
time and based on the precise wording of prompts. Consider AI 
in the doctor's office.

    The $48 billion high tech investment in electronic health 
records digitized medical information. But these systems also 
introduced complex and distorted clinical workflows, turning 
MDs into documentation clerks, contributing to physician 
burnout, and exacerbating the shortage of primary care 
providers.

    An early application of clinical AI attempts to alleviate 
this self-inflicted problem by placing a microphone in the 
doctor's office and generating clinical visit notes in real 
time just from the overheard doctor, patient dialog, allowing 
doctors to face their patients instead of being turned away and 
crouched over a computer keyboard.

    But soon, AI may produce not only the note, but also 
recommend diagnostics and treatments. Some AI systems may 
operate independently of physicians, potentially democratizing 
health care access and alleviating physician shortages, but as 
of now, with no oversight.

    What if the information is inaccurate? What if a drug 
company could whisper in the ear of your electronic health 
record, nudging that AI to favor their pills over a 
competitors? We must anticipate and manage a recalibration of 
responsibilities within health care delivery. How will tasks be 
allocated between human physicians and their AI colleagues?

    Will AI improve care and outcomes? Even as we speak, 
patients and doctors are tapping away at keyboards, using 
ChatGPT to navigate health care decisions. But here is the 
catch, there are no guardrails on this road yet.

    As we reshape health care around AI, let's remember that 
today we don't adequately even measure whether current medical 
practice is effective. For example, drugs are approved by the 
FDA with limited data obtained under conditions in a trial.

    Those conditions are controlled. But how do approved 
products fare in the wild, in the real world? Do they work like 
they are supposed to in the messiness of real life? That COVID 
test you just took, how accurate is it when you are not in a 
pristine lab but at your kitchen table? How well did that 
artificial hip you are about to get work in all the patients 
who had it before?

    The National Academy of Medicine's blueprint for learning 
health care system envisions not just treatment, but learning, 
and not just from clinical trials, but from the vast ocean of 
real world data. Each patient's experience informs the care of 
the next patient by connecting the dots among every visit, 
treatment, and outcome, but it has been slow in the making.

    The urgency of AI should compel us to accelerate a system 
that meticulously tracks the real world accuracy, safety, and 
effectiveness of not just AI, but also drugs, diagnostics, 
devices, procedures, and models of care. To realize the return 
on investment on our $48 billion that we have spent, we must 
demand that the data generated are available to support 
learning.

    Thanks to the highly bipartisan 21st Century Cures Act and 
a rule from the Office of the National Coordinator of Health 
Information Technology, all EHRs must this year, for the first 
time, provide a push button export for their data across what 
is called an API.

    Because each hospital office can produce data in the same 
format, the care delivery system becomes an interoperable data 
source in a federated network where the lion's share of data 
can remain safeguarded at the point of origin.

    These data cannot only drive the development of innovative 
AI, but also help evaluate AI innovations in real time. Let's 
learn from another cautionary tale. The HIPPA privacy rule 
passed in 2000, guaranteed patients the right to access their 
electronic health records, but without focused enforcement, 
nearly 20 years went by before this became possible at health 
system scale.

    If the CURES Act APIs are fully supported, we can avoid 
data monopolies and spark a free market of American innovation 
in AI, while moving us toward a high performing health system. 
Thank you for the opportunity to testify. I look forward to 
answering your questions.

    [The prepared statement of Dr. Mandl follows.]

                 prepared statement of kenneth d. mandl
    Subcommittee Chairman Markey, Ranking Member Marshall, and HELP 
Committee Chairman Sanders and Ranking Member Cassidy, thank you for 
holding this hearing today and for inviting me as a witness. It is with 
a deep sense of responsibility and privilege that I offer my testimony 
as a Professor of Biomedical Informatics and Pediatrics, and Director 
of a program in Computational Health. I also Co-Chair the National 
Academy of Medicine's Digital Health Action Collaborative.

    With the release of sophisticated large language models like 
ChatGPT, AI will transform health care delivery sooner than 
anticipated. These emerging intelligences assimilate vast amounts of 
information and demonstrate remarkable empathy and profound reasoning. 
But they are flawed, can produce inaccurate responses, hallucinate, and 
the precision of their answers changes over time and based on the 
precise wording of prompts.

    Consider AI in the doctor's office. The $48 billion HITECH 
investment in electronic health records digitized medical information. 
But these systems also introduced complex and distorted clinical 
workflows, turning MDs into documentation clerks, contributing to 
physician burnout and exacerbating the shortage in primary care 
providers.

    An early application of clinical AI attempts to alleviate this 
self-inflicted problem, placing a microphone in the office, and 
generating clinical visit notes in real time, just from the overheard 
doctor-patient dialog, allowing doctors to face their patients instead 
of being turned away, crouched over a computer keyboard.

    But soon, AI may produce not only the note, but also recommend 
diagnostics and treatments. Some AI systems may operate independently 
of physicians, potentially democratizing healthcare access and 
alleviating physician shortages. But as of now, with no oversight. What 
if the information is inaccurate? What if a drug company could whisper 
in the ear of your electronic health record, nudging that AI to favor 
their pills over a competitor's?

    We must anticipate and manage a recalibration of responsibilities 
within healthcare delivery. How will tasks be allocated between human 
physicians and their AI colleagues? And will using AI improve care and 
outcomes. As we speak, patients and doctors are tapping away at 
keyboards, using ChatGPT to navigate healthcare decisions. But here's 
the catch--there are no guardrails on this road yet.

    As we reshape healthcare around AI, let's remember that today we 
don't adequately measure whether medical practice is effective. For 
example, drugs are approved by the FDA with limited data obtained under 
controlled conditions in a trial.

    But, how do approved products fare in the wild, the real world? Do 
they work like they're supposed to in the messiness of real life? That 
COVID test you just took, how accurate is it when you're not in a 
pristine lab, but at your kitchen table? How well did that artificial 
hip you're about to get work in all the patients who had it before?

    The National Academy of Medicine's blueprint for a Learning 
Healthcare System envisions not just treatment, but learning, and not 
just from clinical trials but from the vast ocean of real-world data. 
Each patient's experience informs the care of the next patient by 
connecting the dots among every visit, treatment, and outcome.

    But it's been slow in the making.

    The urgency of AI should compel us to accelerate a system that 
meticulously tracks the real-world accuracy, safety, and effectiveness 
of not just AI, but also drugs, diagnostics, and devices, procedures, 
and models of care.

    To realize ROI on our $48 billion Federal investment we must demand 
that the data generated are available to support learning. Thanks to 
the highly bipartisan 21st Century Cures Act and a rule from the Office 
of the National Coordinator of Health Information Technology, all EHRs 
must, this year, for the first time, provide a push button export 
button for their data across what is called an API. Because each 
hospital or office can produce data in the same format, the care 
delivery system becomes an interoperable data source in a federated 
network where the lion's share of data can remain safeguarded at the 
point of origin. This data cannot only drive the development of 
innovative AI, but also help evaluate AI innovations in real time.

    Let's learn from another cautionary tale. The HIPAA privacy rule, 
passed in 2000, guaranteed patients the right to access their 
electronic health records. But, without focused enforcement, nearly 20 
years went by before this became possible at health system scale.

    If the Cures Act APIs are fully supported, we can avoid data 
monopolies and spark a free market of American innovation in AI, while 
moving us toward a high performing health system.

    Thank you for the opportunity to testify. I look forward to 
answering your questions.
                                 ______
                                 
    Senator Markey. Thank you, doctor. And our next witness 
will be introduced by Ranking Member Marshall.

    Senator Marshall. Well, thank you, Mr. Chairman. It is an 
honor to introduce our next witness here today, is Dr. Keith 
Sale. Dr. Sale is a practicing physician and currently serves 
as the Vice President and Chief Physician Executive for 
Ambulatory Services at the home of the No. 1 ranked basketball 
program in the Nation and a top 25 football program, as well as 
a top research institute in the country.

    Of course, that would be the University of Kansas Health 
System in Kansas City, Kansas. Dr. Sale's clinical interests 
include sinonasal disease, auri and vagus nerve stimulator 
implantation, though his practice includes the full scope of 
otolaryngology. When he is not seeing patients, he is a 
leading--partnership with industry to use AI to write clinician 
notes with physicians put in the electronic health record.

    Dr. Sale is the President-Elect of the American Academy of 
Oral Laryngeal Allergy. He is a National Physician Specialty 
Trade Association. He has also served as past President of the 
Kansas City Society of Otolaryngology and Ophthalmology. Thank 
you for agreeing to testify, Dr. Sale, and welcome.

  STATEMENT OF KEITH SALE, VICE PRESIDENT AND CHIEF PHYSICIAN 
  EXECUTIVE OF AMBULATORY SERVICES, THE UNIVERSITY OF KANSAS 
                 HEALTH SYSTEM, KANSAS CITY, KS

    Dr. Sale. Thank you for that introduction. Chair Markey, 
Chair Marshall, Committee Members, thank you for the 
opportunity to be here because it is truly an honor and a 
privilege. I would like to focus my testimony on what I think 
is possibly one of the best impacts that AI can have in health 
care, and that is addressing one of the most serious concerns 
that faces physicians.

    That is burnout. Burnout has become an increasing problem 
amongst our physicians and our medical staff, and it can impact 
us in ways when it comes to our ability to take care of 
patients and to manage the amount of patients that come through 
our doors on a daily basis. When you think about burnout and 
AI, I want to get back a little bit to where documentation 
started, right.

    If you go back 20 odd years or so when we started all of 
this, we were using tape recorders to dictate our notes about 
clinic visits. I would go in, I would meet with a patient and 
have a conversation.

    I would walk out of the room. I dictate a note. That note 
would then go to a transcriptionist at the end of the day who 
would get that note back to me. I would review the note and 
edit it and put it in the chart.

    That whole process was a two or 3 day process, all right. 
Fast forward 10 years, we have the EMR, right, so electronic 
medical record, the--theoretically the savior of medicine at 
that time. The challenge was it increased our documentation 
load because now I am the transcriptionist.

    I put in all that information personally at the time of the 
visit. I type in front of the patient and look at my keyboard 
and my screen instead of talking to the patient, so patient 
experience is impacted. At the end of the day, half of the 
documentation is now done, but I still have the other half to 
do.

    So now I am adding two, three, 4 hours at the end of my 
clinic day to get my documentation done. Fast forward 10 more 
years. The introduction of AI in health care and ambient 
documentation tools.

    We have now piloted two different tools in our 
organization. The current one allows me as a physician to take 
a device in the room. It records that conversation. It then 
takes that conversation and takes the history and investment 
plan and summarizes it based on that conversation.

    Puts it into a place where I can then review it within 
minutes of that encounter ending. I edit that information, and 
the editing part is really important because that is how the AI 
tool learns. It learns what my preferences are.

    It learns my techniques, my topics, my lingo, if you will, 
in otolaryngology, and allows that note to be more specific and 
more--especially specific and patient specific. I then can take 
that edited note, put it in the EMR, and it is done within 
minutes of seeing that patient.

    Now, fast forward into my clinic day and I, even though I 
love to say I get all of my notes done as soon as that patient 
walks through the door, I am usually behind a little bit, as 
most of us in clinical practice are.

    At the end of the day, now I have 30 to 45 minutes of time 
to go through interview notes and plunk them into the EMR. But 
as I have gotten more facile with this tool, I have been able 
to get through my notes faster.

    I have less editing, and the notes are better. There is 
more detail, there is more information, and the content is more 
effective for what I need, for my future visits, what my 
colleagues need to see from that visit, and then from what the 
patient needs, who can also now read those notes.

    I think there is a great opportunity for AI technology to 
assist and remove that burden of documentation and 
administrative tasks that have become commonplace in health 
care and are truly challenging our physicians and our health 
care workers as you try and keep up with the growing demand of 
patient care.

    When you talk about the things that I worry about in AI, 
and how it impacts health care, first and foremost was 
mentioned was privacy. And so, how do we make sure that the 
tool we are using now, much like the EMR tools we have, adhere 
to the HIPAA guidelines and criteria we have in place now?

    I think making sure that anything we build and put in place 
maintains those privacy standards is paramount. I think as we 
roll out and develop these tools, AI is a data consumption tool 
in my mind. I need as a physician to have the ability to input 
and guide what that tool uses and what it consumes to drive the 
decisions that I hopefully arrive from based on what it 
produces for me.

    But it is a tool. It is not something that should replace 
what I decide for--what I decide in practice or how I make 
decisions that affect my patients. So, ultimately it is 
designed to enhance my practice, not replace me in practice. I 
think there is an issue around data security and--as well.

    Making sure that as this information passes between 
different tools and whether it is my device to the EMR, there 
are protections in place, again, guided under HIPAA. Last, I 
think what is really unique about the current tool we are using 
is the traceability and track ability of the information.

    I can see in real time as I am editing my note where the AI 
tool achieved its information to create the note that it 
documented. I can go into that then and understand that why it 
said cholelithiasis instead of tonsillitis in my note, and I 
don't even do gallbladder surgery, so it doesn't belong there. 
I can go in and edit that.

    I know exactly where it came from because it is 
transparent, and I can track it through that AI's workflow. 
Ultimately, I think there is a great opportunity for AI to help 
us in health care, and to make our lives and our workflows 
better.

    I appreciate the time and your allowing me to testify 
today, and I look forward to your questions. Thank you.

    [The prepared statement of Dr. Sale follows.]

                    prepared statement of keith sale
                              Introduction
    Chairman Markey and Ranking Member Marshall, I am Dr. Keith Sale, 
Vice President and Chief Physician Executive of Ambulatory Services at 
The University of Kansas Health System and Associate Professor of 
Otolaryngology-Head and Neck Surgery at The University of Kansas School 
of Medicine. Located in the Kansas City metro area, The University of 
Kansas Health System is the only academic health system in Kansas, 
providing a full range of care to patients from every county in Kansas 
and Missouri, all 50 states and nearly 30 countries. The health system 
offers over 140 hospital and clinic locations, including its original 
campus in Kansas City, Kansas, which includes 1,300 beds and is 
supported by over 17,000 employees and 1,500 physicians. Thank you for 
the opportunity to present testimony to you and your colleagues on the 
Subcommittee on Primary Health and Retirement Security regarding the 
adoption of AI (Artificial Intelligence) and how it can transform the 
delivery of healthcare and more importantly, enhance patient care. In a 
changing healthcare environment, AI is one of many tools available to 
help the American healthcare system improve access and create better 
outcomes.

    Increasing patient care needs in America are overwhelming the 
healthcare workforce and persistent nursing and physician shortages 
continue to challenge our healthcare infrastructure. The Association of 
American Medical Colleges projects the United States will see a 
shortage of between 37,800 and 124,000 physicians within the next 12 
years \1\. In addition, by 2025 the United States is projected to see a 
shortage between 200,000 to 450,000 of registered nurses needed for 
direct patient care \2\. Simultaneously, healthcare systems face 
increased financial pressures that include insurance companies creating 
more barriers to delivering care like pre-authorizations and paying 
less for the care we provide and higher costs for medicines and 
equipment critical to patient care.
---------------------------------------------------------------------------
    \1\  Robeznieks, A. (2022, April 13). Doctor shortages are here-and 
they'll get worse if we don't act fast. American Medical Association. 
https://www.ama-assn.org/practice-management/sustainability/doctor-
shortages-are-here-and-they-ll-get-worse-if-we-don-t-act
    \2\  Gamble, M. (2022, May 12). U.S. faces deficit of 450,000 
nurses by 2025. Becker's Hospital Review. https://
www.beckershospitalreview.com/workforce/us-faces-deficit-of-450-000-
nurses-by-2025.html-oly--enc--id
---------------------------------------------------------------------------
                         The Opportunity of AI
    Healthcare systems continually evolve to match the ever-changing 
patient care environment. Before Electronic Medical Record (EMR) 
systems were widely implemented and before AI improvements, physicians 
and providers spent considerable time recording and transcribing notes 
from patient visits because detailed records from patient encounters 
maintained continuity for follow up visits and improved patient 
outcomes. However, each stage was duplicative of the original 
conversation and added time to the patient encounter completion. 
Historically, these notes could take days to get back into the 
patients' records. Today AI technology records the conversation between 
the doctor and patient during the appointment, summarizes the 
interaction, and downloads the conversation for review within minutes 
of patient encounter ending. This technology reduces the steps in 
documentation and directly captures the conversation in real time. 
Physicians can then edit notes to ensure accuracy and upload finalized 
clinical notes into the electronic medical record within minutes of 
completing a visit.
                     Patient and Physician Benefits
    As the complexity of patient care increases, the administrative 
burden has exploded, and patients now have unprecedented access to 
physicians and health care workers through EMR portals. AI automates 
routine and time-consuming tasks reducing the administrative burden and 
allowing physicians and providers to spend more time with patients 
focusing on better outcomes. Finding efficiencies for the 
administrative and documentation burden of healthcare may also allow 
physicians to see more patients and help address the capacity 
challenges resulting from the growing physician shortage. In addition, 
AI's reduction of administrative tasks and documentation may help 
mitigate the growing concern of physician burnout, much of which 
relates directly to documentation and administrative burden. Allowing 
providers to spend more time with direct patient care will help return 
the joy of practice to our physicians and providers, reduce 
administrative burdens, and thereby improve patient outcomes.
                        Importance of Oversight
    While AI holds immense potential, its implementation should be 
built upon clinical practice guidelines, be compliant with patient 
privacy standards, and be safeguarded from misuse. Physicians and 
healthcare professionals must be actively involved in the development 
and validation of AI tools to ensure they are driven by clinical 
guidelines and that they enhance rather than replace human expertise. 
Trained and licensed clinicians develop expertise through direct 
patient interactions that should not be fully replaced by AI. Rather, 
AI can be used to help clinicians sort through the growing volumes of 
healthcare data, present care options based on recommended best 
practices, and inform physicians about therapeutic options. AI will 
greatly expedite patient care, but human judgment will still need to 
determine if a final care plan is appropriate and in line with a 
patient's condition and expectations. To best utilize AI in healthcare 
requires access to vast volumes of clinical data, financial data, 
research data, and patient data, much of which is considered highly 
sensitive and personal information. Maintaining the privacy standards 
built around the Health Insurance Portability and Accountability Act 
(HIPAA) that currently exists to protect our patients' privacy is 
paramount. Continued observance of these standards will safeguard 
individual data and ensure that healthcare data is used responsibly and 
kept secure. While healthcare providers, patients, and technology 
companies contribute to this data pool, the question of data ownership 
may not be straightforward. Conversations about data ownership and use 
are essential to maintaining patient trust and preserving the sanctity 
of patient privacy. Importantly, HIPAA privacy and security standards 
will also have to keep up with current technology as well.

    In conclusion, the integration of AI and its consumption of 
healthcare data carries tremendous opportunities for improved patient 
care and outcomes and reduced physician and clinical team burnout. 
However, data privacy and management are equally significant and 
require careful consideration. As Congress navigates this complex 
landscape, it is essential to balance the promise of AI with safeguards 
to protect patient privacy and maintain data security. I urge this 
Committee to support initiatives, such as AI, which promote improved 
patient care while simultaneously easing the administrative burdens 
currently troubling our healthcare teams. Additionally, responsible 
data management and patient privacy must be at the core of AI 
integration into healthcare to protect our patients' rights and 
safeguard their privacy.

    Thank you for your attention and I am available to address any 
questions you may have.
                                 ______
                                 
    Senator Markey. Thank you, doctor, very much, now we will 
turn to questions from the Subcommittee Senators.

    Senator Marshall and I, we are part of a long tradition of 
partnering between Massachusetts and Kansas, going back to Dr. 
James Naismith inventing basketball at Springfield College, and 
then the University of Kansas stealing him away to be their--
and his rules to be the first basketball coach at University of 
Kansas.

    This partnership has a long, rich history in medicine and 
in basketball. And we are good at inventing things, but the 
application out of the University of Kansas has been much 
better than any Massachusetts college in the basketball field.

    We are hoping here that this partnership that we are 
creating can help us to get the correct formula, the correct 
rules, like the Naismith basketball rules, for AI. So let me go 
to you, Ms. Huberty.

    In your testimony, you included a powerful story of AI 
directing care for a patient by deciding what is covered by 
insurance, and that there are many more people who are 
currently experiencing this, who don't know to challenge these 
decisions. They are being made by AI about their health care.

    Ms. Huberty, what do stories like Jim's, that you told us 
here, tell us about insurance companies and companies 
developing artificial intelligence, and how they are 
incorporating patient experience versus their profit 
motivation? Can you talk about the lesson we should learn from 
that experience?

    Ms. Huberty. Sure. I do want to focus first just on the 
fact that this is not new technology that we are talking about 
in Jim's case. It has been around since I started as an 
attorney. I believe it was used beforehand.

    A lot of times when we are talking about ChatGPT, that is 
new innovations. We are just starting to get a sense of how it 
is affecting us. But the technology that affected Jim and has 
affected hundreds of residents in Wisconsin is not anything 
new.

    We have a long history of showing that this algorithm, this 
use of predictive technology, has shown time and time again 
that it is incorrect. They come to us in our agency. We appeal, 
we get it overturned.

    We see that so often, that number, that computer, that 
algorithm gets it wrong, and there wasn't enough human 
oversight.

    Senator Markey. Yes. And who should bear the burden of 
proving that the use of artificial intelligence won't harm 
patients? Where should that burden of proof lie?

    Ms. Huberty. Right now, I think that should be with those 
subcontractors that have developed and are using that AI.

    Senator Markey. Yes. I do agree with you, by the way, in 
terms of this being an old technology.

    When Al Gore was Vice President and I was the chairman of 
the telecommunications committee, when we were breaking down 
all of the monopolies in the mid-90's so we could have the 
digital revolution, the broadband revolution, not one home had 
broadband in February 1996 in America.

    I used to call these new technologies Al-Gore-rhythms, 
right. So, it is not a new word. It was obviously what the 
digital revolution was unfolding at that time, and we had to 
heed those warnings that we were hearing at that point.

    Bonnie Castillo, who is Executive Director of the National 
Nurses United, the Nation's largest union of registered nurses, 
noted in recent written testimony for an AI Insight forum on 
workforce that, ``health care workers should not be displaced 
or de-skilled, as this will inevitably come at the expense of 
both patients and of workers.''

    That is true, if not carefully implemented with Government 
oversight and worker input, AI can harm health care workers by 
making them feel like the art and science of health care is 
distilled to typing into an iPad, and that is all there will be 
to it.

    Dr. Mandl, your testimony noted how technological advances 
can contribute to health provider burnout. Can you speak to the 
danger of using AI in the health care settings to automate both 
tasks and clinical decisions without Government oversight and 
worker autonomy and input?

    Dr. Mandl. The worker autonomy----

    Senator Markey. Can you turn on your microphone, please.

    Dr. Mandl. The worker autonomy and input is very important. 
And there has to be early on training and education of our 
workforce so that they can understand what the issues are and 
understand how to work alongside AI tools, what their 
functionalities and limitations are.

    There is a risk today of using an AI tool without 
understanding its limitations, for example. There are 
ergonomics and workflow integration issues that are key. We 
heard today that documentation burden ballooned with electronic 
health record implementations. We have to design AI tools so 
that they improve the life and the work life of physicians 
while maintaining safety.

    Probably there is mental health support to provide to the 
workforce as well at a stressful moment when there may be 
workforce shocks as a result of AI, and the shared 
responsibility between physicians and AI, and we don't know 
where that is going to equilibrate. There have to be legal and 
ethical safeguards to protect health workers from liability 
associated with AI. It has to be clear who is responsible if 
the AI makes a decision that is incorrect.

    That is going to cause a lot of hesitancy and anxiety 
otherwise. We have to monitor, as I was talking, we have to 
have systems that are monitoring the output of AI and the 
diagnoses that are made, the treatment recommendations that are 
made, the claims denials that are made. Those can all be 
automated with data.

    We have an opportunity to move forward with getting the 
data flowing in the health care system so that we can monitor 
safety. And again, it is the same safety that we are talking 
about for devices, drugs, procedures, and AI.

    There can be a float all boats. And then of course, there 
are ethics and transparency. And we really need to understand 
how the AI algorithms were designed, what they were intended to 
do, and what they actually are doing.

    Senator Markey. We have to be able to get under the hood 
just to understand how there are biases built in. Is there harm 
that is inside of this ultimately human designed algorithm that 
then takes on a life of its own? What was that human input that 
ultimately led to the recommendations that will be made?

    Thank you, and I will be coming back again. But at this 
point, I would like to recognize the Senator from Minnesota, 
Senator Smith, for a round of questions.

    Senator Smith. Well, thank you very much, Mr. Chair. And 
thank you, Ranking Member, for deferring. I really appreciate 
that. And thanks to all of you for your testimony. It is super 
interesting. There is so many questions I could ask.

    Dr. Inglesby, I would like to start with you. Could you 
talk a bit--we know that AI was important in the way in which 
we developed the COVID-19 response--or vaccine, how we 
responded to COVID-19, the historic pace of that, of testing 
and treatments developed, and vaccines as well.

    Could you talk about how--kind of what are the lessons 
learned from that experience? And are there lessons learned as 
well for not only advancing treatments like the vaccine, but 
also preventing biosecurity risks, which we are talking about 
in this Committee hearing?

    Dr. Inglesby. Yes. Well, thank you so much for that 
question. I think what we have seen with vaccine development, 
new drug development, and AI tools is that AI can improve the 
speed and precision and efficiency of many processes involved 
in vaccine and drug development.

    They can start with the target and work backward to decide 
what will attack that target on that pathogen most efficiently. 
They can predict toxicity. They can improve the efficiency of 
laboratory practices.

    AI tools kind of across the board can take on different 
components of the vaccine drug development process and make 
them more powerful. But on the same--at the same time, those 
very processes could conceivably either inadvertently, 
accidentally, or deliberately be misused to identify things 
that could harm people on large scale, that could become 
products that, or kind of biological constructs that could not 
be controlled.

    That is my greatest concern, is that we need to set up 
guardrails, at least to begin with, that are focused on 
preventing pandemic risks, risks of things spreading in the 
environment, not being able to be controlled.

    I think the companies themselves have said the same things. 
If you look at what they are saying in the public in the last 
year, many of the leading companies have said they are 
concerned about setting up guardrails around biological risks, 
and that is one of the things that they are explicitly talking 
about.

    I think the Executive Order begins to do that and has many 
steps moving in that direction. What I would do, though, is I 
think Congress should seriously consider going a bit further 
than the Executive Order even now, because the role of 
Government still is setting--in the Executive Order, setting 
standards, creating a testing process, but in terms of 
requirements for audits, a Government audit of these companies, 
that does--it is not yet there.

    I think that is the next important step.

    Senator Smith. One of the things that I have been thinking 
about a lot is how do you overcome sort of the black box 
phenomenon of these AI models and how you get accountability 
around bias, for example.

    There is lots of questions around accountability. But how 
do you think about it as you as--from your perspective as a 
clinician, how do you think about that question of getting 
accountability in that sort of black box world where we are not 
exactly sure why or how the model comes up with its answer, 
let's say.

    Dr. Inglesby. Yes, I mean, I think that gets to the heart 
of the bias questions that people have been talking about here. 
And there are many sources of bias. Can be data bias. Can be 
the model itself and how the model collected the data.

    Senator Smith. Right.

    Dr. Inglesby. But one of the strongest things that people 
talk about in bias is getting rid of the black box, and the 
term interpretability is really--is the key concept around 
that.

    I think that is just another way of saying that in health 
care related AIs that will ultimately drive clinical care, we 
should be able to look under the hood and understand that 
process. And right now, with some tools we can and some tools 
we can't.

    Senator Smith. Some tools we just----

    Dr. Inglesby. But that could be--for health care 
indications of AI, that could become a standard which the 
Government insists upon. We have to be able to see how this--
go, reverse engineer it. Understand how it came up with its--
with process and recommendations.

    Senator Smith. Right. Right. That question of how decisions 
are made and what is programmed into the model, let's call it 
gets to the core questions of accountability. Ms. Huberty, I 
was thinking about your story of the man who was confronted 
with this prior authorization recommendation algorithm, which 
clearly was not being made in his--you know, the decision is 
not being made in his best interests.

    I mean, to be clear, I worry about humans and these big 
insurance companies also not correctly balancing the health 
risks of an individual with the marginal profit that they may 
incur by releasing somebody 7 days earlier or whatever it is.

    I know I am just out of time, Mr. Chair, but could you--
like how do you think about how we kind of get the right 
balance in these models?

    Ms. Huberty. Well, I think in these cases, there are humans 
involved in running those--the algorithms and adhering to those 
discharge dates.

    But even those humans involved have moral issues with those 
dates and how they are required to adhere to them within their 
own company. So I also just think the volume of it too.

    When you have so many of these denials running through that 
algorithm, the human oversight is only there when it is 
challenged. So only when there are appeals, do you have that 
really detailed and careful human oversight where they are 
looking at the medical records.

    I guess my recommendation is to slow down, to get more of 
the humans involved, have more of the treating physicians more 
involved as well, because the humans involved in those pieces 
never see the patients. They have no contact with them 
whatsoever.

    Senator Smith. Thank you very much.

    Senator Markey. Great. Thank you very much, Senator Smith. 
Senator Marshall is willing to forego his turn at this moment 
so that I can recognize Senator Hassan from New Hampshire for 
her round.

    Senator Hassan. Thank you very much. And thank you, Senator 
Marshall. Thank you, Mr. Chair, for this hearing. And thanks to 
our witnesses for being here. Dr. Inglesby, I wanted to start 
with a question for you.

    Artificial intelligence can be helpful when designing new 
tools to combat the threat of antimicrobial resistance. For 
example, researchers at the NIH have found that machine 
learning algorithms can quickly analyze patterns in 
antimicrobial resistance.

    This can obviously help public health authorities respond 
to outbreaks of resistant infections more quickly and 
efficiently. Artificial intelligence also has the potential to 
help doctors more precisely diagnose and treat an infection 
with the right antibiotic at the right dose.

    As an expert in health security, can you speak to the role 
that artificial intelligence plays in our fight against 
antimicrobial resistance?

    Dr. Inglesby. Yes. Well, thank you very much for the 
question. Very important set of issues around AMR. I think 
there are a number of ways that AI could help in the fight 
against antimicrobial resistance, and you have mentioned many 
of the major ways. The first is the design of new therapeutic 
approaches.

    We have talked about how new protein design tools, in the 
category of AI, biological design tools could accelerate the 
development of new therapies. But also, AI can help us with 
looking at the combination of therapies in ways that were not 
necessarily obvious by--through human judgment.

    Senator Hassan. Yes.

    Dr. Inglesby. Combinations of therapies. They can move from 
interpreting the sequences of pathogens and making predictions 
about resistance. And we begin to see that in experimental 
approaches. We just need kind of a strong data set to be able 
to move forward on that, but lots of potential.

    Senator Hassan. Well then--as a follow-up, how can Congress 
help support the use of AI to better predict and combat AMR?

    Dr. Inglesby. Yes, well, I think it depends on--depending 
on the category of approaches, I think new therapeutic approach 
is I think making sure that BARDA, HHS, and FDA are oriented 
around new AMR approaches and have the flexibility to make new 
therapeutics.

    There is a--there are a number of different approaches that 
BARDA has been pursuing around that. I think making sure that 
the data sets that are being developed around these microbes is 
robust.

    I think people talked about the federated approach, making 
sure that institutions across the country can work together, 
anonymize data, and randomized patient data, and develop the 
datasets we need to make those predictions.

    Senator Hassan. Well, thank you for that. I am going to 
move now to doctor--is it Mandl? Dr. Mandl, artificial 
intelligence has played an integral role in the widespread 
adoption of electronic health records.

    Algorithms can help physicians categorize and structure 
patient data, making it easier for health care providers to 
access and use. While this has the potential to boost 
productivity and allow providers to spend more time with their 
patients, we need standards in AI for medical settings in order 
to ensure that patients are receiving the best possible care 
and that their privacy is protected.

    How can Congress support the development and implementation 
of these kinds of standards?

    Dr. Mandl. Thank you very much for that question. The 
delivery of AI through electronic health records will clearly 
be a very important channel for how AI gets to the point of 
care.

    For one thing, I think it is very important in that 
context, so that we optimize innovation and excellence, to be 
modular in the way we integrate AI with electronic health 
records, to make sure that innovators can get to the point of 
care outside of the electronic health record, but within 
clinician workflows as well.

    We want to be sure that the innovation and that the 
decisions that lead to the kinds of outcomes, good and bad, 
that you are talking about are not all channeled through a 
small set of companies, but through the full power of American 
innovation. I refer to in my testimony application programing 
interfaces.

    Under the 21st Century Cures Act, there are actually 
methods to integrate outside technology with electronic health 
records so that we can move the data to where it needs to be 
and implement those standards widely.

    The importance on understanding how AI is working is going 
to be very heavily placed, I believe, on continuous monitoring. 
While understanding the algorithms and testing the algorithms 
is extremely important, until you know how they perform in the 
real world, you can't fully evaluate them.

    These large language models, no one understands. No one 
understands exactly how they work or exactly how they produce 
their output. So, we are poking the bear and testing. And so, 
there has to be interactive testing and measuring, and that is 
how we will begin to see what emerges.

    There has to be collaboration across multiple sectors so 
that we are all on the same team.

    Senator Hassan. That is very helpful. Thank you. And thank 
you again to all the witnesses. Thanks, Mr. Chair.

    Senator Markey. I want to make unanimous consent to enter 
into the record November 1st written statement for AI insight 
forum workforce by Bonnie Castillo. Without objection, so 
ordered.

    [The following information can be found on page 62 in 
Additional Material.]

    Senator Markey. Now I am going to recognize Senator 
Hickenlooper from Colorado to Chair. Both Senator Marshall and 
I now have to run over to make the roll call on the floor, and 
we will try to run over, make it, and come back. This is again 
how we get our 10,000 steps in. So, just to turn to Senator 
Hickenlooper. Thank you.

    Senator Hickenlooper. Great. Thank you, Mr. Chair. Dr. 
Inglesby, you spoke about the potential for AI models to assist 
malicious actors in creating highly transmissible pathogens.

    This is obviously all the more possible given that we do 
not currently require screening for all gene synthesis 
providers. Senator Budd and I have a bill called the Gene 
Synthesis Safety and Security Act, which would help us conduct 
critical oversight of the industry and protect against misuse 
of these types of products.

    If we do not enact Federal guardrails here, how would you 
assess our level of risk?

    Dr. Inglesby. Senator, first of all, I just want to commend 
your leadership on that Act and think that is a really crucial 
step that we need to take to reduce bio security risks.

    I think the Executive Order goes some distance toward--in 
the direction that your Act laid out, but I think Congress 
could go further in requiring that all of those ordering gene--
nucleic acids in the United States abide by the same rule, not 
just those who are federally funded.

    But to your point, I think the problem that your Act and 
the Executive Order has been trying to solve has been the 
possibility that malicious actors could order de novo nucleic 
acid--could order nucleic acids through--from a company in the 
United States and de novo or create viruses that are now 
extinct, such as smallpox or something along those lines, which 
would be, if released into the public, would--could create a 
pandemic.

    It is very clear--the industry is very in favor of 
regulation in this case, which is obviously quite unusual. But 
they have been very clear about that. Many of the best actors 
in the industry are already screening their customers and 
screening the sequences, but it is not a requirement.

    The good actors are at a disadvantage. The bad actors are 
not paying for that or doing that work. So, thank you for your 
leadership on that.

    Senator Hickenlooper. Of course. Thank you. Dr. Mandl, you 
wrote in your testimony that, and I quote, ``each patient's 
experience informs the care of the next patient by connecting 
the dots among every visit, treatment, and outcome.''

    In many ways, this is the highest ideal of how our health 
system, under the best circumstances, good circumstances, how 
it should work. And certainly, AI could be a great equalizer in 
terms of helping us to amass and analyze all and connect all 
those data points.

    How can we seize on this moment with our AI to catapult our 
ability to utilize real world data, but also building the 
guardrails that you have all been saying are necessary to 
ensure the security of the data.

    What is the No. 1 concern you have in mind in terms of the 
use of AI to manage this level of data, this amount of data, 
and how should we--how should we be working to address it?

    Dr. Mandl. Well, I think there is two sides to this. One is 
the actual use of the AI to look across vast amounts of data 
that no clinician could integrate in their head, and to do that 
potentially even in real time in the clinic when a patient is 
before us.

    There, we need the guardrails to make sure that the AI is 
acting in a way that is accurate and beneficial, that is 
improving the value of care. And there are multiple levels of 
that kind of measurement.

    The second place where I think AI can help us is simply by 
being a burning platform of sorts. If you look at the COVID 
pandemic, there were some failings but there were also some 
incredible successes at the community coming together and 
moving data to where it needs to be so that we could monitor 
the pandemic. And as the pandemic went on, we got better and 
better at it.

    The collaboration and the enthusiasm for it was very 
different than what happened before. I do think that the COVID 
pandemic receding, hopefully, permanently--we see some also 
receding of that enthusiasm for the kind of collaboration.

    I think that AI is the burning platform where we can 
actually try to move the data to where it needs to be to 
evaluate the health care system and to move toward a learning 
health care system, not just for AI, but for drugs, devices, 
procedures, surgeries, and that there is an incredible 
opportunity there if we seize the moment.

    Senator Hickenlooper. It would be an amazing concept to go 
from spending 18 percent of our GDP, down to maybe 8 or 10 
percent like the rest of the world. That is one way we could 
move in that direction.

    Dr. Mandl. Absolutely.

    Senator Hickenlooper. Thank you. I am out of time, but I 
have got other questions that I will submit to both of you in 
writing.

    Senator Braun.

    Senator Braun. Thank you, Mister--Senator Hickenlooper 
subbing in for Senator Markey.

    I ran a business for 37 years that had so little technology 
in it until I finally, after repeated kind of not wanting to 
spend the money on it, have been such a believer that if you 
use it practically, it not only makes things more efficient, it 
makes things a lot less expensive too.

    When I look to see that AI had come onto the scene, to me, 
there are so many practical ways that we can use it to sift 
through the mundaneness of how you do it without it. And all I 
can tell you is if you drag your feet on it, you are going to 
regret it because your competition in the real world is going 
to use it and you are going to regret it.

    Based on that, I want to define something that currently is 
being done by CMS and give it the tools to do it better. I am 
introducing on November 16th and looking for a good Democratic 
lead. We will get one, and I think this bill is going to go to 
town. It is called the Medicare Transaction Fraud Prevention 
Act.

    It will direct CMS to conduct a pilot program of enhanced 
oversight for two categories of historically high fraud. That 
would be diagnostic testing and durable medical equipment. By 
notifying beneficiaries in real time with suspicious purchase 
alerts, this bill utilizes a successful technique that is 
already employed by private industry like our leading credit 
card companies. It is that simple premise.

    I want to ask Dr. Inglesby, what do you think of that idea? 
We know how much fraud is endemic to so much that Government 
does. I would like to remind everyone that when we spent nearly 
$1 trillion on the extended unemployment benefits during the 
CARES Act, the estimate is anywhere from $100 to $250 billion 
was stolen by domestic and international fraudsters.

    When we are now borrowing $1 trillion every 6 months, and 
just 5 years ago it was $1 trillion annually, I think we need 
to start doing some things that give taxpayers a better value. 
What do you think of this idea as a bill?

    Dr. Inglesby. Well, and from what you have described--I 
haven't heard of this idea before. From what you described, it 
sounds like a very, very good idea. I am very in favor of tools 
we can use to decrease fraud at CMS.

    I think we use very sophisticated tools in the private 
sector to detect indicators of fraud or checks. So, if those 
tools can be used in a way that allow health care dollars to go 
to clinical care as opposed to some kind of fraud, I think I 
would be strongly in favor of that.

    Senator Braun. Thank you. Ms. Huberty, Hoosiers have been 
billed up to 20 times for like COVID tests, and this phantom 
billing of larger durable equipment like powered wheelchairs 
can involve huge co-payments to boot.

    What trends do you see in health care fraud, and how do you 
think that impacts seniors financially? And again, do you think 
a bill like mine would be the place to start where you weave it 
into the system to work and even address the larger stuff down 
the road?

    Ms. Huberty. I mean, everything that you have said 
absolutely is happening in terms of the billing fraud. There 
are programs. Wisconsin has a senior Medicare patrol program, 
and those are available nationwide to do just that, is to 
address those issues of fraud and detect and report those.

    A bill that would be--you know, that would focus in on 
that, extremely. We can avoid the wasteful spending and those 
fraudsters. I think to my testimony, though, what I am getting 
at is that the AI, those companies are actually committing 
fraud on the other end where they are taking Medicare dollars 
and not putting it back into the pockets of the patients by not 
offering the coverage that they said they were going to.

    Senator Braun. Thank you. And one final quick question for 
Dr. Sale. President Biden's Executive Order encourages 
innovation in health care services so long as AI models are 
tested robustly beforehand.

    The figure that we have talked about is way up there. What 
do you think, how would this impact a bill like this, taking 
into consideration what the Administration has put out there as 
a caveat to make sure it is robustly tested? Do you think this 
would be a good place to start?

    Dr. Sale. Thank you for the question. I do think robustly 
testing AI technology is important. I think we have been doing 
it in our own health system now for the better part of 2 years, 
trying to figure out how we can make ambient documentation 
support work and be successful for the rest of our physicians 
to use and use seamlessly.

    I think anything that will allow us to, as clinicians, to 
make sure that we have input and guidance into new tools that 
we are deploying with patient care, I think are really 
important. And in safeguarding how we charge for those 
resources, I think is also important.

    Senator Braun. Thank you very much. And like I say, this 
bill will be introduced here shortly, and we would love for all 
of you to weigh in on it beyond this kind of brief discussion 
of it.

    I think it is the place to start where we can build in what 
I think is going to be in areas like this, something that is 
going to completely change the landscape and it is going to 
save the Government a ton of money. Thank you. I yield back.

    Senator Hickenlooper. Great, thank you, now I turn over to 
Senator Lujan.

    Senator Lujan. Thank you, Mr. Chairman. And thank you all 
for being here today.

    The way I am looking at this is we need technology to help 
improve health outcomes, reduce health disparities, not 
exacerbate them, and it is clear that AI has the power to do 
both, which points me to the realization that AI is only as 
good as its inputs. If it is machine learning, it is going to 
learn based on what exists and what is done and all the fun 
stuff that gets put in its way. Well, it seems to me that AI 
has a diversity problem.

    What I want to illustrate here is a recent study from the 
Journal of the American Medical Association researchers 
reviewed it and said that of the 70 publications that compare 
the diagnostic decisions of doctors against AI models across 
several areas of medicine, most of the data used to train those 
AI algorithms came from just three states, California, New 
York, and Massachusetts.

    It seems that there is a diversity of data problem by 
population, by gender, by geography, and all the rest. Now, Dr. 
Mandl, do you agree that gathering data from a homogeneous 
patient population teaches the AI tool to serve only that 
population?

    Dr. Mandl. I do agree. And the ability to get data not just 
from the highest performing health systems that are wealthy 
enough to have teams in their IT departments that can extract 
data and make it available, but from the edges.

    We should be able to get data from all of the electronic 
health records out to the federally qualified health centers. 
And in order to do that, we need interoperability. And the 
interoperability should enable us to get data to train 
algorithms and to monitor algorithms. And there is another area 
that is a little more hidden where these algorithms are being 
developed that could limit diversity.

    In the large language models, the models are further 
trained after they are--been developed on the data, which 
already may lack diversity, they are trained with something 
called reinforcement learning with human feedback, where people 
tell the AI whether it was right or wrong when answering 
certain questions.

    We actually need a diversity of staff who are doing the 
reinforcement learning as well so that we get the right mix 
across multiple perspectives. So, the issue you bring up is 
extremely important. It is demonstrated over and again that 
lack of diversity in the data leads to bias conclusions that do 
not serve the full population well.

    Senator Lujan. As a follow-up, Dr. Mandl, is, is it 
important to include this at early stages or later stages? And 
if the answer is yes, why?

    Dr. Mandl. The early stages is much better so that the 
models are developed with less bias at the beginning. That bias 
can become entrenched and harder to fix later.

    Technically, far better to try to solve the problem early 
with diverse data and an appropriately diverse reinforcement 
learning staff, and--rather than just trying to correct the 
bias later. Absolutely.

    Senator Lujan. I appreciate that very much. Other examples 
that I have found with the help of the team, are that I trained 
mostly on chest x-rays from men will perform poorly when a 
clinician applies it to a female patient.

    An algorithm for diagnosing skin cancer on dermatologic 
photos will botch the diagnosis if the patient is dark skinned 
and most of the training images come from fair skinned. I think 
these are obvious things that are happening in this space.

    That technology, such as what I am wearing as well, has 
been proven that when you are trying to capture information 
from someone where those that were in the room developing that 
technology may have been one skin color versus another, maybe 
it was not obvious to those in the room that they should have 
included pigment awareness and challenges when they were trying 
to grab this technology.

    I am hopeful that we can be smarter about this and that 
this can be included so that the same problems that have been 
identified in the lack of diversity when it comes to clinical 
trials of drug treatments are not replicated now that AI is on 
the boom and on the build and all the rest.

    I have lots of other questions, Mr. Chairman. I will submit 
them into the record, but I thank you conversation.

    Senator Hickenlooper. Thank you, Senator Lujan.

    Senator Marshall.

    Senator Marshall. All right. Thank you, Chairman. Again, 
welcome to all of our witnesses today. I think the question I 
am going to start with is, is what should Congress not do right 
now with AI? What should we not do that would prevent 
innovation from continuing? What scares you, Dr. Sale?

    Dr. Sale. I think when you think about innovation in health 
care, we do innovation as part of our practice of medicine, and 
this has been ingrained in what we do, especially in the 
academic world where I live, right.

    It is all about how we move forward patient care and 
drive--and change and make improvements in patient care, and I 
think my fear would be if we somehow limit or restrain the 
ability to utilize this type of technology in health care.

    I think as I mentioned earlier in my testimony, there are a 
tremendous number of applications where AI is beneficial and 
can be beneficial in patient management, patient throughput, 
patient access, physician well-being, etcetera.

    I think if there is any fear that I have, it is that this 
technology would be actually removed or limited in some way. I 
think we want to be actively engaged as clinicians in 
developing that tool. I think that if there was any way that we 
would be somehow cut out of that process, I think that makes me 
nervous.

    But I think those are the two areas where if there is 
anything that the Government would do that would limit our 
access to or ability to participate in the development of this 
tool, I think that would be scary for me.

    Senator Marshall. Thank you. Let's go ahead to Dr. Mandl 
next.

    Dr. Mandl. I will say that I would avoid actions that would 
promote unregulatable monopolies, and I would be very cautious 
when designing specific regulations to recognize the extremely 
rapid change in this technology.

    It is not even enough to keep up with the medical 
literature. You have to be following releases and announcements 
on Twitter a couple of times a day to understand what is going 
on in this field, not reading your journals once a week or once 
a month.

    It is very important to recognize the fluidity and the 
rapid progress, and to develop evergreen approaches to 
monitoring this emerging----

    Senator Marshall. I hear you say this would be really hard 
to put--it is going to be hard to put guardrails around it 
because it is changing so fast.

    Dr. Mandl. It will be a challenge.

    Senator Marshall. Yes. Dr. Inglesby.

    Dr. Inglesby. Yes, that is--I think it is a really 
important question. I think what I would say is Congress should 
not take their eye off some of the most serious risks, because 
if those risks become a major problem, either in bias or in 
what I am worried about, particularly around life science, 
pandemic risks, or others, I think those kinds of developments 
could derail or really distract the AI companies, could 
distract the Government for a long time--if major problems 
emerge.

    What you--back to what you said early in this hearing was, 
I do think that the AI companies have extraordinary expertise, 
and it is going to be very important for the Government to stay 
close with them and not be at a distance and not kind of 
disengage. I think it is going to be require a very close 
partnership because a lot of the expertise.

    The great majority of expertise right now is in the 
industry and not within the Government. I do think the 
Government has to build its workforce of very smart AI talented 
people to be able to keep up.

    I think you are right, working with industry closely is 
going to be very important in order to both reap the many 
benefits, but also to develop systems that are reasonable and 
scaled to deal with the risks.

    Senator Marshall. Okay. Ms. Huberty, within your 
association, when you go for continuing education, when people 
in your profession talk about AI, what concerns have we not 
talked about today that you have, or any of your solutions? Go 
ahead.

    Ms. Huberty. Right. I would like to actually speak to the 
question that you asked the doctors, because we have been 
sitting here talking about what if, what if, what if. I am 
sitting here telling you that we have seen the negative 
consequences. We have seen the devastating effects of AI for 
years.

    I was here in May testifying before the Medicare Advantage 
Plan hearing. And so, I am sitting here saying, well, here is 
harm, here is proven harm from AI, so what are we going to do 
about it? My fear is that we are doing nothing.

    We aren't doing anything. So that is my contribution to 
that, is that we need to be doing something.

    Senator Marshall. Okay. Dr. Inglesby let's talk about viral 
gain of function just for a second--viral gain of function 
research. And certainly, AI could be used with this area, and 
it probably has been, whether you are trying to find and 
develop a protein spike that fits on a SARS virus.

    Maybe insert an HIV code from the HIV virus in to decrease 
people's level of immune reaction or put it a Furin cleavage 
site as well. One thing that scares me, though, is if Congress 
puts too many guardrails on it is we let our enemies do 
research and develop things that we won't be able to counter.

    It would be counterterrorism, if you would speak. Any just 
vague general thoughts on that? It is kind of a wild, outside 
the box question, sorry.

    Dr. Inglesby. I think it is very, very, very important. I 
think this last year and a half, there has been a lot of work 
between the Government and the scientific community around 
trying to develop the right policy that focuses only on the 
very highest risks around potential pandemic pathogen research.

    I do think that if the U.S. gets its house in order, it can 
then argue for kind of the similar standards around the world. 
In this case, I don't think other countries want to be creating 
new viruses. I don't think Governments are going to want to 
create viruses with pandemic risks that they are not aware of.

    They are going to want to have the same kind of 
understanding of what their science communities are doing. I 
think ultimately we should--all governments, in theory, should 
be moving toward the same kind of arrangements, which is not 
slowing science down, but being aware of that little area, that 
small area of science, which could pose extraordinary risks and 
just doing the right thing, working with industry.

    Senator Marshall. Doing the right thing is so important, 
right. We have all seen in health care, innovation, so many 
technologies come by our desk. There was a time when people 
thought, oh, my gosh, we shouldn't do MRI's because it could 
lead to overdiagnosis.

    Certainly, you don't want an obstetrician reading an MRI, 
but it didn't stop us from developing the MRI technology. As I 
think about these algorithms, at the end of the day, I think it 
comes down to people doing the right thing and that is teaching 
our medical students the right thing, that this is one more 
tool.

    It is no more important than a CBC or an X-ray, and it is 
no more important than a stethoscope. Do you remember that 
fourth year of medical school when you suddenly realized the 
most important tool you had in the toolbox was listening to a 
patient? If you can only have one thing, it would be listening 
to the patient. I just would implore you all that--to keep the 
patient first.

    As we teach our medical students that this is a tool. I 
tell people, I have seen one pregnant person with a virus. You 
have seen one pregnant person with this virus. The next 
pregnant patient may not obey the algorithm. There are more 
than two standard deviations outside the box.

    That is all algorithms are for the most part. Here is two 
standard deviations. Most people should be in the hospital 2.3 
days after being admitted with pneumonia unless they develop a 
blood clot.

    We still--it is going to come up to the person, people 
doing the right thing in our professions to protect. I would 
love to come back to doctor--what are your professions doing to 
protect the integrity of health care. But I do appreciate you 
all coming in. It has been great insight. Thank you.

    Senator Markey. I am going to ask a few more questions, if 
that is all right, Senator Marshall. Thank you. Back in the 
90's, when there would be a big headline, like once a week, 
insurance companies records hacked. We had public or hospital 
records hacked, made public, or you name it. Hacked, made 
public.

    I asked Joe Tucci, who was the CEO of EMC, of the 
Massachusetts, which was the leading data storage company in 
the world--Dell has now purchased it. I said, what is going on?

    He says, oh, we could have protected all those companies. 
We try to sell them our highest end security product, but they 
just don't want to buy it because it cost them too much money, 
so they would rather run the risk of having the data breach.

    I said, so the technology is there, the counter algorithm 
is there to fight against what becomes the state-of-the-art in 
terms of the criminals trying to break in? Oh, yes, yes, it is 
there but many companies or the executives just don't want to 
spend that extra money.

    They are hoping they retire before their company gets 
hacked, so they don't have to explain to the board of directors 
why they had to spend all that extra money. So, it was a big 
insight to me that, oh, yes, there is a battle that is going 
on, good versus evil, but good is in the battle too. It is 
just, are we going to have it deployed?

    Are we going to ask that be just maybe a little extra cost 
that has to get built into the system to protect against the 
deleterious aspects of any new technology? And it is that 
challenge, right, because profits would say, no. No, look at 
how much we max out if we just deployed this new technology 
without additional safeguards which could be built in.

    I introduced which Senator Budd, Republican on this 
Committee, the Artificial Intelligence and Biosecurity Risk 
Assessment Act, and the Strategy for Public Health Preparedness 
and Response to Artificial Intelligence Threats Act to direct 
the U.S. Department of Health and Human Services to prepare for 
AI biosecurity threats.

    In your testimony, you noted that President Biden's 
Executive Order is an essential step forward for AI oversight, 
but that there is more to be done. Dr. Inglesby, could you just 
tell us how important it is for Congress to play a role in 
regulating AI now?

    Dr. Inglesby. Yes. I am happy to do that. I think your Act 
really spoke to the importance of that. I think the Executive 
Order goes a long way in assigning responsibilities to this, 
Department of Energy, Commerce, HHS, but it doesn't require 
much yet of the companies. I think they are trying to 
understand the nature of the problem.

    But I think what your Act proposed and what I would also 
recommend is that, that you get an assessment from HHS. I think 
is the most logical place. HHS, ASPR I think would have the 
right expertise to give you a stronger sense of what are the 
risks of the creation of--AI helping to simplify or accelerate 
the creation of new, very serious biological risks, and what 
could be done.

    What authorities, if any, are needed to be able to deal 
with that. I think some are in sight now, which are I think 
Congress should be giving audit authority to an agency, whether 
it is Commerce or Energy or HHS, around AI risks.

    But I think such a risk assessment that is done rapidly 
aimed at Congressional leadership, which is a little bit 
different from what is now in the Executive Order, I think 
would be very valuable for leadership here to decide what they 
might want to do.

    Senator Markey. Yes. And again, that is the goal that 
Senator Budd and I have, just kind of moving this ball further 
down the line. And we see it in all kinds of areas with--in the 
automotive sector, the automotive industry, they want to sell 
you a new car, but they didn't want to have a mandatory 
seatbelt that was built in.

    That will be an extra cost. Not every consumer wants 
seatbelts. I know my father was a truck driver. He really 
didn't like seat belts. So, they were saying consumer choice. 
And then airbags. Well not every consumer wants an airbag, but 
it is a safety feature. Yes, but we will leave it up to the 
consumers to do it.

    The industry is trying to downsize their safety cost 
requirements until the consumers get a little bit of a taste of 
an airbag and a seatbelt, and then they are saying, I am not 
going to buy a car that doesn't have safety built in, right, 
from the get-go.

    We continue to have this conversation that coexist with the 
technological advance, but then as people catch up, they go, 
well, could we have a little more--could we have a child safety 
cap on top of that medicines? Is that too much of a cost to 
please ask you to build that in and so there is going to be 
some resistance.

    But you are just trying to balance it. You don't want to 
take away the good part of it, but you know that there is a 
sinister side to cyberspace. So, can I just come back to you, 
Dr. Sale, and I just heard that conversation that Senator 
Marshall was having about fourth year of medical school, which 
I will never know.

    My wife knows it as a physician and she keeps her maiden 
name because she says, I don't remember a Dr. Markey graduating 
from my medical school class. So, she keeps her own maiden name 
as Dr. Blumenthal. But in your testimony, you spoke about how 
AI allows you to spend more time with patients by greatly 
reducing the administrative burden of charting.

    However, some of the health care organizations may look at 
AI as a means to just cut costs by cutting their workforce. Dr. 
Sale, can you speak to how the success of artificial 
intelligence depends on actual health care providers being 
involved, as you were saying in your conversation with Dr. 
Marshall?

    Dr. Sale. Absolutely. Thank you for that. I would echo 
Senator Marshall's comment earlier how this is a tool, right.

    Much in line with the EMR, this is a chance for us to 
optimize our workflows, improve our efficiencies, add 
information and perform tasks that historically take away from 
our time with our patients, and add value back to our 
encounters so we can work with our patients more closely, 
listen to our patients, and really develop a more beneficial 
relationship with our patients so we can get when we are typing 
an information into an EMR.

    I think there is tremendous opportunity, I think, to 
continue to use this as a tool. I think it is important to 
remind our clinicians that is what it is and that you still 
have to play a role in this, because, right, what I fear 
sometimes is complacency or reliance, overreliance on this 
tool, right.

    You think about instances where in an EMR we have copy 
forwarded an error, right. And so, how do we avoid that with 
this kind of a tool? Because I think AI has the potential to 
propagate errors.

    Senator Markey. So can I--excuse me. So how should a nurse 
view this, as a threat to her employment or as an augmentation 
of her ability to help with her patient care?

    Dr. Sale. It is a great question. I would say if you were 
to ask my nurse, she would love to spend less time on the phone 
doing work that is beneath her level of licensure and doing 
menial tasks and chart review and chart things that could be 
done by AI and rather spend time with the patient doing 
education and training.

    I think most of our nursing staff and our clinicians would 
relish the opportunity to remove themselves from some of those 
administrative and documentation tasks that we have become 
overwhelmed with in our EMR world, and instead focus our time 
and efforts with our patients.

    Senator Markey. You don't--you don't view it as a threat?

    Dr. Sale. I don't really think it is going to replace 
clinician judgment or patient engagement. I think if anything, 
we have a nursing shortage, a physician shortage, an over a 
health care worker shortage that has been existing even pre-
pandemic and was exacerbated by the pandemic. I think that this 
is--and if anything helps us close some of the gaps that exist 
in our ability to take care of patients.

    Senator Markey. Okay, great. Thank you. Any other 
questions? Beautiful. So here is what I am going to do, finish 
up 1 minute apiece for each one of you in reverse order of the 
original testimony.

    The 1 minute you want the Subcommittee to remember as we 
are moving forward on legislation to deal with AI as it 
interacts with the health care system. Begin with you, Ms. 
Huberty. You have 1 minute.

    Ms. Huberty. Thank you so much. I think it is important to 
know that I have been here today to describe the actual patient 
harm that is in place due to this AI and sound the alarms for 
the points where the doctors cannot override the AI and it 
causes that harm, that patient harm.

    It has a ripple effects through the economy, not only for 
that person's medical bills, but also the facilities that can't 
keep up and that can't accept patients anymore either. I think 
I am here to say this is exactly what is happening, and this 
is--we can use this as a model, what can we do with this 
information now so that it doesn't happen with other AI 
technology in the future.

    Senator Markey. Great. Dr. Mandl.

    Dr. Mandl. I would like to reemphasize the importance of 
measurement, the importance of making data available so that we 
have AI trained on the full diversity of the American 
population, and so that we can monitor AI and its impacts, 
along with boosting tremendously the way we monitor drugs, 
devices, procedures.

    That we actually create a more efficient health care system 
as a byproduct. That is a--that is one important focus within 
this domain.

    Senator Markey. Okay. Great. And Dr. Inglesby.

    Dr. Inglesby. Yes. Thank you, Senator. I think I would just 
like to close by re-emphasizing the enormous potential benefit 
of AI in health care.

    But to get the full benefit of AI in health care and in 
public health, we need to now, at the start of this huge 
change, to address the risks not only of privacy, bias, data 
integrity, and beyond, but also focus on the very high end 
risks around AI and the biological sciences.

    I think a number of ideas and steps are already on the 
table, but Congress can go further with some immediate steps 
and with more information from the agencies. Thanks very much.

    Senator Markey. Dr. Sale, you have the final word.

    Dr. Sale. Thank you very much. First of all, it has been an 
honor and a pleasure to be here. I would say, while I 
acknowledge the large scale and big picture concerns around AI, 
I feel like there may be some small window opportunities for us 
to utilize this technology in ways that really help improve 
patient care, physician and practice--practitioner well-being, 
and can really actually improve our outcomes in health care, 
with mitigating that risk.

    I think that requires close collaboration with our 
physicians and our clinical workforce as we develop these tools 
and define their uses of application within health care. I 
think it encompasses mitigating risk with privacy and security 
of data.

    I think ultimately, with the goal in mind of improve 
patient care and avoiding physician and clinician replacement, 
but rather enhancement of the practice of medicine.

    Senator Markey. Beautiful. Thank you so much. And like Dr. 
Naismith, you have served the State of Kansas very well, so we 
thank you for your testimony. Although the best basketball 
player in the world right now plays for the Denver Nuggets, for 
Senator Hickenlooper's home team.

    [Laughter.]

    Senator Markey. And----

    Senator Marshall. Potentially, potentially.

    Senator Markey. I think it is an evidence based 
determination I am making on that----

    [Laughter.]

    Senator Hickenlooper. Until that young man from--Wembanyama 
down in San Antonio, he might quickly change the algorithm.

    [Laughter.]

    Senator Markey. We thank everyone who participated today, 
especially our witnesses who traveled here from Massachusetts, 
Kansas, Wisconsin, and Maryland.

    Your perspectives are essential for ensuring that we guard 
against the harms of artificial intelligence. We need to put 
people over profit, prioritize worker voices, and keep focused 
on how to best treat patients.

    I ask unanimous consent to enter into the record a 
statement from stakeholders outlining priorities for addressing 
AI in health care. Without objection.

    [The following information can be found on page 66 in 
Additional Material.]

    Senator Markey. For any Senators who wish to ask additional 
questions of our witnesses for the record, they will be due in 
10 days, November 22, 2023, at 5.00 p.m. And we thank everyone. 
And with that, this hearing is adjourned. Thank you.

                          ADDITIONAL MATERIAL

           exploring congress' framework for the future of ai
                              Introduction
    Artificial intelligence (AI) is a transformational tool, carrying 
enormous power and potential to improve life for every American. As a 
foundational enabling technology, AI can be adapted for nearly any use 
to solve a myriad of problems. Health care is a prime example of a 
field where AI can do enormous good, with the potential to help create 
new cures, improve care, and reduce administrative burdens and overall 
health care spending. AI is also increasingly being adopted by 
businesses, consequently reshaping work, the workplace, and the labor 
market. But greater use of AI also carries significant risks. Experts 
exploring how the technology may affect the education field, for 
example, raise well-founded concerns about how AI might be used as a 
low-quality shortcut by both students and teachers, even as the 
technology might provide more personalized learning for students and 
reduce teacher workload. Our challenge as policymakers is to weigh the 
tradeoffs inherent with any powerful technology and modify or create 
the legal frameworks needed to maximize technologies' benefits while 
minimizing risks.

    To assess and balance the benefits and risks that AI creates, we 
must first define the term. Defining AI is challenging since AI experts 
have not arrived at a static definition of the rapidly developing 
general-purpose technology. ``Artificial intelligence'' was coined in 
1955 when the primitive computers of that time were often referred to 
as ``thinking machines.'' This definition bears little resemblance to 
today's cutting-edge technology. \1\ The working definition of AI for 
this paper, synthesized from others' definitions, is computers, or 
computer-powered machines, exhibiting human-like intelligent 
capabilities. \2\ It is an umbrella term that encapsulates multiple 
distinct technologies and approaches. AI multiplies the availability of 
human-level intelligence that can be applied to solve problems. But 
like any technology, how it works, and the risks it creates, depends on 
how it is used.
---------------------------------------------------------------------------
    \1\  Stanford University. (n.d). Defining AI. https://
ai100.stanford.edu/2016-report/section-i-what-artificial-intelligence/
defining-ai
    \2\  The definitions from which this one is synthesized include the 
following: Oxford Languages: ``The theory and development of computer 
systems able to perform tasks that normally require human intelligence, 
such as visual perception, speech recognition, decisionmaking, and 
translation between languages.'' Oxford English Dictionary: ``The 
capacity of computers or other machines to exhibit or simulate 
intelligent behavior; the field of study concerned with this.'' https:/
/www.oed.com/view/ Technologist Marc Andreessen: ``The application of 
mathematics and software code to teach computers how to understand, 
synthesize, and generate knowledge in ways similar to how people do 
it.'' https://a16z.com/2023/06/06/ai-will-save-the-world/.

    As the U.S. Senate begins to consider legislation to address AI, we 
must account for the specific context in which AI's capabilities are 
applied. A sweeping, one-size-fits-all approach for regulating AI will 
not work and will stifle, not foster, innovation. \3\ To use an 
analogous example, there is no Federal department of software, nor 
should there be: software is regulated based on how it is used, whether 
in power plants, airplanes, or X-ray machines. Likewise, we must adapt 
our current frameworks to leverage the benefits and mitigate the risks 
of how AI is applied to achieve certain goals. And only if our current 
frameworks are unable to accommodate continually changing AI, should 
Congress look to create new ones or modernize existing ones.
---------------------------------------------------------------------------
    \3\  Adam Thierer. (June 21, 2023). The Most Important Principle 
for AI Regulation. R Street. https://www.rstreet.org/commentary/the-
most-important-principle-for-ai-regulation/.

    Congress' proactive consideration of AI's implications is 
encouraging--we need to pay attention to this fast-changing field to 
protect consumers and ensure that the U.S. maintains global 
technological leadership. However, Congress must be just as mindful of 
the risks of changes to the AI regulatory environment as we are to the 
risks from AI itself. Top-down, all-encompassing frameworks risk 
entrenching incumbent companies as the perpetual leaders in AI, 
imposing an artificial lid on the types of problems that dynamic 
innovators of the future could use AI to solve. Instead, we need 
robust, flexible frameworks that protect against mission-critical risks 
and create pathways for new innovation to reach consumers. As Ranking 
Member of the Senate Health, Education, Labor, and Pensions (HELP) 
Committee, I'm focused on making sure that we strike the right balance 
for Americans from the earliest stages of developing new products 
through deployment of an AI system or solution solving complex 
problems.
                Researching and Developing New Medicines
    AI holds enormous potential to improve the speed and success of 
creating new medicines. For decades, drug development has begun with a 
laborious ``discovery'' process--researchers running painstaking 
experiments to assess one-by-one whether individual molecules have 
potential to treat disease. This process typically takes up to 26 
months before clinical trials can begin. \4\ AI can help bring 
engineering principles to this guesswork-filled process, empowering 
researchers to predict which molecules make the best drug candidates, 
and increasingly design drugs to address specific targets, rather than 
discover them through slower, manual laboratory methods. \5\ It's been 
reported that the first drug designed entirely with AI has moved into 
clinical trials in China. \6\ Investors have estimated that even modest 
improvements reaped through AI could create an additional 50 novel 
therapies over a decade. \7\ Not only can AI help create new therapies 
for patients, it could also help lower the costs of the time-consuming, 
expensive drug development process. Some estimates have found that 
leveraging AI could reduce development costs for manufacturers by up to 
$54 billion annually. \8\
---------------------------------------------------------------------------
    \4\  Garurav Agrawal et al. (February 10, 2023) Fast to first-in-
human: Getting new medicines to patients more quickly. McKinsey & 
Company. https://www.mckinsey.com/industries/life-sciences/our-
insights/fast-to-first-in-human-getting-new-medicines-to-patients--
more-quickly.
    \5\  Vijay Pande. (Nov. 12, 2018) How to Engineer Biology. https://
a16z.com/2018/11/12/how-to-engineer-biology/.
    \6\  Jamie Smyth. (June 26, 2023) Financial Times, Biotech begins 
human trials of drug designed by artificial intelligence. https://
www.ft.com/
    \7\  Morgan Stanley. (Sept. 9, 2022). Why Artificial Intelligence 
Could Speed Drug Discovery. https://www.morganstanley.com/ideas/ai-
drug-discovery.
    \8\  Kevin Gawora. (December 7, 2020). Fact of the Week: Artificial 
Intelligence Can Save Pharmaceutical Companies Almost $54 Billion in 
R&D Costs Each Year. Information Technology & Innovation Foundation. 
https://itif.org/publications/2020/12/07/fact-week-artificial-
intelligence-can-save-pharmaceutical-companies-almost/.

    Our framework for preclinical and clinical investigation of new 
drugs, implemented by the Food and Drug Administration (FDA), is 
generally well-suited to adapt to the use of AI to research and develop 
new drugs. Indeed, FDA has done an admirable job facilitating the use 
of AI in early stage drug development: in 2021, over 100 drug 
applications submitted to the FDA included AI components. \9\ In May 
2023, FDA published two discussion papers on the use of AI in drug 
development and manufacturing, respectively. \10\ The agency is 
spearheading initiatives for industry, academia, patients, and global 
regulatory authorities to engage on how best to facilitate AI in this 
field. Congress should support continued growth in the use of AI for 
research and development, and encourage FDA to continue to spur the use 
of innovative approaches while ensuring that new technologies are 
properly validated and monitored. As AI leads drug development to 
become both more productive and more complex, FDA needs world-leading 
expertise to keep up. As drug developers use AI to design new 
medicines, FDA's need to leverage experts in critical fields like 
computer science, biostatistics, biomedical engineering, and others 
will only grow. Congress needs to work with FDA on implementing last 
year's user fee agreements, which included significant funding 
increases for new review staff. Congress should also explore how to 
help FDA address perennial challenges recruiting and retaining 
qualified staff, including through finding ways to use external sources 
to tap needed expertise and manage limited resources.
---------------------------------------------------------------------------
    \9\  Patrizia Cavazzoni, M.D. (May 10, 2023). FDA Releases Two 
Discussion Papers to Spur Conversation about Artificial Intelligence 
and Machine Learning in Drug Development and Manufacturing. Food and 
Drug Administration. https://www.fda.gov/news-events/fda-voices/fda-
releases-two-discussion-papers-spur-conversation-about-artificial-
intelligence-and-machine.
    \10\  Id.

    This can be assisted by FDA using AI to increase the speed and 
efficiency of the agency's review process. FDA (and other agencies, 
like the National Institutes of Health [NIH]) can play an important 
role as early adopters and customers for new AI-powered research and 
development tools. Such tools could unlock enormous benefits, freeing 
FDA experts to focus on the tasks most critical to public health.
                    Diagnosing and Treating Diseases
    Diagnostic and treatment applications of artificial intelligence 
are proliferating each year. \11\ They hold the potential to expand 
health care access, improve outcomes, and increase efficiency. However, 
FDA's framework for regulating medical devices was not designed for 
devices that incorporate evolving AI--Congress may need to consider 
targeted updates to provide predictability and flexibility for AI-
powered devices while ensuring that such devices are safe and effective 
for patients. Moreover, foundational questions about AI applications 
remain regarding the transparency of algorithm development, ongoing 
effectiveness of such applications, and who carries the liability if 
something goes wrong.
---------------------------------------------------------------------------
    \11\  Ben Leonard et al. (June 29, 2023). Big bets on health care 
AI. Politico. https://www.politico.com/newsletters/future-pulse/2023/
06/29/big-bets-on-health-care-ai
---------------------------------------------------------------------------
     Using AI-Enabled Tools to Detect, Diagnose, and Treat Disease
    Consumers, patients, and health care providers use AI-enabled 
products throughout the patient lifecycle. AI is used to detect the 
earliest signs of medical conditions in otherwise healthy people, 
accurately diagnose patients when they get sick, and treat deadly 
diseases. In 2022 alone, FDA authorized 91 AI-enabled medical devices, 
after authorizing a record 115 devices in 2021. \12\ Many of these 
devices leverage advances in sensor technology and imaging and data 
analytics to examine symptoms of a particular condition and use 
extensive datasets to inform diagnosis or treatment. \13\ These devices 
range from Apple's atrial fibrillation sensor built into its watch and 
image reconstruction algorithms used in radiology and cardiology to 
detect cancers and lesions to clinical decision support software to 
predict a patient's risk of developing sepsis.
---------------------------------------------------------------------------
    \12\  Elise Reuter. (November 7, 2022). 5 takeaways from the FDA's 
list of AI-enabled medical devices. MedTechDive. https://
www.medtechdive.com/news/FDA-AI-ML-medical-devices--5-takeaways/635908/
 
    \13\  Id.

    AI-enabled diagnostic tools synthesize large amounts of data and 
perform pattern analysis to help detect a diagnosable condition, like a 
tumor. \14\ Diagnostic AI tools are used across a variety of fields 
where the pattern-matching capabilities of AI can compare images from 
X-rays, CT scans, and other devices against massive data bases of 
similar images to identify outliers that may indicate a disease or 
condition. These tools have shown the capability to increase the 
accuracy and efficiency of diagnosing patients. One application that 
has demonstrated incredible effectiveness is the use of AI for early 
screening for signs of diabetic retinopathy. \15\ There are very few 
trained eye technicians who are able to expertly diagnose the condition 
compared to the vast number of diabetic patients who need screening. 
Automated analysis software that uses AI helps increase the accuracy of 
diagnosis and expand the number of clinicians who can do this important 
screening. More diagnoses are made earlier, helping more patients avoid 
blindness.
---------------------------------------------------------------------------
    \14\  U.S. Government Accountability Office. (September 29, 2022). 
Artificial Intelligence in Health Care: Benefits and Challenges of 
Machine Learning Technologies for Medical Diagnostics. https://
www.gao.gov/assets/gao--22--104629.pdf.
    \15\  SK Padhy et al. (July 2019). Artificial intelligence in 
diabetic retinopathy: A natural step to the future. Indian Journal of 
Ophthalmology. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6611318/
pdf/IJO--67--1004.pdf.

    The utility of AI-enabled devices depends on clinician adoption--no 
patients are better off if these tools sit on a shelf. To a greater 
degree than traditional devices, AI-enabled products raise novel 
questions about supplementing, or even supplanting the clinician's 
role: the same tool that could reduce error could also miss outlier 
symptoms. In order to best leverage the utility of AI-enabled devices, 
clinicians need to be effectively trained, including in how to reduce 
the risk of misdiagnosis and mistreatment. In order to have a robust 
and effective framework, standards to demonstrate clinical validity 
will need to be developed and testing to proper safety standards will 
need to be implemented. \16\
---------------------------------------------------------------------------
    \16\  See, Artificial Intelligence in Health Care.
---------------------------------------------------------------------------
                   Adapting the FDA Framework for AI
    AI poses two foundational challenges to FDA's current regulatory 
framework for medical devices. First, products that incorporate AI-
enabled software face varying degrees of premarket regulatory scrutiny, 
based on whether they meet the statutory definition of medical device 
or are subject to either a statutory carve-out or FDA's policy of 
enforcement discretion for certain products. Second, FDA's review of 
the safety and effectiveness of devices inherently applies to a 
specific product at a specific moment in time, meaning that FDA's 
review, and the statutory requirements it implements, was not designed 
for products that incorporate AI to improve over time.

    In light of these challenges, FDA is still figuring out how best to 
assess medical devices that use AI. It has attempted a pre-
certification pilot for software treated as medical devices that would 
certify software developers as opposed to the products themselves. FDA 
also published an attempt at an AI framework through guidance in 2019 
and subsequent action plans. \17\ Pursuant to policies enacted by 
Congress in December 2022, FDA has begun accepting predetermined change 
protocol plans in premarket product submissions where developers can 
outline anticipated modifications to avoid subsequent review and 
approval. Yet these efforts have presented more questions about how FDA 
will actually treat medical devices that integrate AI, and FDA (and 
others) acknowledge that Congress may need to consider updating the 
decades-old medical device framework. \18\
---------------------------------------------------------------------------
    \17\  Food and Drug Administration. (January 7, 2019). Developing a 
Software Precertification Program: A Working Model. https://
www.fda.gov/media/119722/download. (AI/ML)-Based Software as a Medical 
Device (SaMD) Action Plan, Food and Drug Administration (January 12, 
2021), https://www.fda.gov/media/161815/download.
    \18\  The Software Precertification (Pre-Cert) Pilot Program: 
Tailored Total Product Lifecycle Approaches and Key Findings, Food and 
Drug Administration (September 27, 2022), https://www.fda.gov/media/
161815/download; See also, Scott Gottlieb and Lauren Silvis, Regulators 
Face Novel Challenges as Artificial Intelligence Tools Enter Medical 
Practice, JAMA Forum (June 8, 2023), https://jama--network.com/
journals/jama-health-forum/fullarticle/2806091.
---------------------------------------------------------------------------
     Considerations for Transparency, Effectiveness, and Liability
    Ensuring that AI tools are trusted by all stakeholders is essential 
to support greater AI adoption and enable patients to receive maximum 
benefits. First, AI tools should be developed in a transparent way, so 
patients and providers can understand how they are meant to be applied 
to ensure appropriate use. One of the barriers to adoption of AI tools 
is a lack of understanding about how any given algorithm was designed. 
Improving transparency about how an AI product works will build 
stakeholder trust in such products.

    Second, any framework must build in a clear method to measure 
effectiveness so AI products can be further improved. AI algorithms are 
trained on data sets which may only represent a specific population. 
Algorithms may not be appropriate for different populations from ones 
they were trained on, which can create bias and decrease effectiveness. 
Effective algorithms must also leverage accurate data sets to ensure 
that the information being used to make determinations is properly 
collected and inputted. Congress may need to consider how to best 
ensure that AI-enabled products do not give undue weight to potential 
biases.

    Third, stakeholders need a clear understanding of potential 
liability around the use of AI. Like any medical device, failure of a 
product that incorporates AI could harm patients, such as through 
incorrect diagnoses (both false positives and false negatives). These 
risks are magnified with AI devices that are trained by additive data 
sets and evolve over time, and where later results may differ from 
earlier iterations. A predictable framework is needed to facilitate 
adoption of these tools, which requires determining where liability 
lies--the original developer, most recent developer, clinician, or 
other party.
                   Supporting Patients and Providers
    A burgeoning application of AI is in the development of clinical 
decision support algorithms, which use data sets of patient data and an 
individual patient's own medical record to alert a clinician through 
their electronic health record software of a diagnosis, treatment, or 
predicted likelihood of developing a condition that they may want to 
consider. Hospital systems across the country use internally developed 
clinical decision support algorithms based off of their own patient 
population and patient data.

    One leading electronic health record (EHR) vendor that developed an 
algorithm intended to predict whether a patient would develop sepsis 
came under scrutiny when the Journal of the American Medical 
Association found that it only accurately predicted the occurrence of 
sepsis 7 percent of the time. \19\ This highlighted the risk involved 
if clinicians rely too heavily on algorithms. In response, FDA proposed 
a guidance for industry in September 2022 asserting authority over 
these algorithms and requiring them to go through FDA review as medical 
devices. \20\
---------------------------------------------------------------------------
    \19\  Anand Habib et al. (June 21, 2021) The Epic Sepsis Model 
Falls Short-The Importance of External Validation. JAMA Internal 
Medicine. https://jamanetwork.com/journals/jamainternalmedicine/
article-abstract/2781313.
    \20\  U.S. Food and Drug Administration. (September 28, 2022) 
Clinical Decision Support Software; Guidance for Industry and Food and 
Drug Administration Staff. https://www.fda.gov/media/109618/download.

    AI interfaces that engage directly with patients are also promising 
enhanced care and improving outcomes by predicting and catching 
conditions early. \21\ For example, patient-facing chatbots have 
reduced emergency department visits at one health system by 5 percent, 
saving $1 million. \22\ Yet incorporating AI in patient care warrants 
caution. A recent study found that 60 percent of patients would be 
uncomfortable with a provider relying on AI when receiving care. \23\ 
Patients are understandably concerned about how AI could result in a 
less robust patient-provider relationship. As we move forward, 
integrating AI into patient care will require both effective products, 
as well as the much harder task of building trust with patients.
---------------------------------------------------------------------------
    \21\  Bill Siwicki. (June 22, 2023). Where AI is making a 
difference in healthcare now. Healthcare IT News. https://www.healthca-
reitnews.com/news/where-ai-making-difference-healthcare-now
    \22\  Id.
    \23\  Alec Tyson et al. (February 22, 2023). 60 percent of 
Americans Would Be Uncomfortable With Provider Relying on AI in Their 
Own Health Care. Pew Research Center. https://www.pewresearch.org/
science/2023/02/22/60-of-americans-would-be-uncomfortable-with--
provider-relying-on-ai-in-their-own-health-care/.
---------------------------------------------------------------------------
            Address Health Care Administration and Coverage
    Administrative activities are a significant component of the health 
care system. These activities are responsible for executing the 
operations of health care, including practice management, payment 
processing, engagement with regulators, and integrating new tools to 
improve health outcomes. Approximately 15-30 percent of all health care 
spending is spent on administrative activities. \24\ However, as health 
care has become more complex, administrative tasks take up an 
increasing part of providers' time, taking them away from patient care. 
Studies have found that physicians spend approximately 8.7 hours a week 
on administrative activities and must devote approximately 28 percent 
of a patient visit to administrative tasks, such as data entry into EHR 
systems, filling out health insurance claims forms and prior 
authorization requests, and scheduling appointments. \25\ As 
administrative tasks have become more time intensive, physicians have 
reported higher levels of burnout.
---------------------------------------------------------------------------
    \24\  Health Affairs. (October 6, 2022) The Role Of Administrative 
Waste In Excess U.S. Health Spending. https://www.healthaffairs.org/.
    \25\  Steffie Woolhandler and David Himmelstein. (2014) 
Administrative work consumes one-sixth of U.S. physicians' working 
hours and lowers their career satisfaction, International Journal of 
Health Services. https://pubmed.ncbi.nlm.nih.gov/. Fabrizio Toscano et 
al., How Physicians Spend Their Work Time: an Ecological Momentary 
Assessment, Journal of General Internal Medicine (August 17, 2019), 
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7661623/pdf/ Rebecca 
Pifer, Hurtling into the future: The potential and thorny ethics of 
generative AI in healthcare, Healthcare Dive (April 21, 2023).

    Administrative functions related to EHR use are a leading cause of 
burnout, leading to workforce shortages and a lower quality of care for 
patients. \26\
---------------------------------------------------------------------------
    \26\  Steffie Woolhandler and David Himmelstein. (2014) 
Administrative work consumes one-sixth of U.S. physicians' working 
hours and lowers their career satisfaction. International Journal of 
Health Services. https://pubmed.ncbi.nlm.nih.gov/. Scott Yates. 
(September 11, 2019). Physician Stress and Burnout, The American 
Journal of Medicine https://www.amjmed.com/action/

    AI has the potential to not only streamline health care 
administration by leveraging automation and analytical tools to reduce 
provider time on spent on administrative tasks, but also reduce 
potential mistakes, streamline management decisions, and improve claims 
management. One hospital system used AI to improve surgical scheduling 
and saw a 10 percent reduction in physician overtime and improved 
utilization of surgical suites by 19 percent. \27\ EHR systems are also 
leveraging AI tools to reply to patient messages and eventually 
summarize patient medical history and translate between languages and 
reading levels for patient materials. \28\ AI has also been used to 
improve claims management, by improving the speed by which claims can 
be reviewed and prepared. Some vendors have used AI to enable instant 
claims approval, reducing uncertainty and paperwork for patients. \29\
---------------------------------------------------------------------------
    \27\  Thomas Davenport and Randy Bean. (April 11, 2022). Clinical 
AI Gets the Headlines, but Administrative AI May Be a Better Bet, MIT 
Sloan Management Review. https://sloanreview.mit.edu/article/clinical-
ai-gets-the-headlines-but-administrative-ai-may-be-a-better-bet/.
    \28\  Rebecca Pifer. (April 21, 2023). Hurtling into the future: 
The potential and thorny ethics of generative AI in healthcare. 
HealthcareDive. https://www.healthcaredive.com/trendline/tech/
    \29\  PR Newswire (April 13, 2023). Google Cloud Unveils New AI-
enabled Claims Acceleration Suite to Streamline Health Insurance Prior 
Authorization and Claims Processing, Helping Experts Make Faster, More 
Informed Decisions. https://www.prnewswire.com/

    Health insurers can also leverage AI to great benefit, reducing the 
time, energy, and expenses dedicated to determining and managing health 
risks. AI can more accurately predict and measure an individual's risk 
and the specific type of care they need, reducing administrative 
burdens and saving time and money. \30\ AI can also drive health care 
savings by reducing long-term costs and unnecessary paperwork. Some 
estimates have found that greater uptake of AI could reduce national 
health care spending by of five to 10 percent. \31\
---------------------------------------------------------------------------
    \30\  Albert Pomales. (January 10, 2023). Using AI And Machine 
Learning To Improve The Health Insurance Process. https://
www.forbes.com/sites/forbesbusinesscouncil/2022/01/10/using-ai-and-
machine-learning-to-improve-the-health-insurance-process/
    \31\  Nikhil Sahni, George Stein, Rodney Zemmel, and David M. 
Cutler. (January 2023). The Potential Impact of Artificial Intelligence 
on Healthcare Spending, National Bureau of Economic Research. https://
www.nber.org/papers/w30857.

    However, we must also ensure that using AI for coverage decisions 
does not reduce needed care. One report found that a health insurer 
used an algorithm to batch claims that were denied by the thousands 
with a single click. \32\ Stakeholders later emphasized the need for 
greater regulatory oversight of using AI to review prior authorization 
requests. \33\ Steps should also be taken to ensure that AI is not 
overriding clinical judgment. Some patients have been unable to receive 
a provider opinion due to algorithms automatically deciding a treatment 
plan. \34\
---------------------------------------------------------------------------
    \32\  Patrick Rucker, Maya Miller, and David Armstrong. (March 25, 
2023). How Cigna Saves Millions by Having Its Doctors Reject Claims 
Without Reading Them. ProPublica. https://www.propublica.org/article/
cigna-pxdx-medical-health-insurance-rejection-claims.
    \33\  American Medical Association (June 14, 2023). AMA adopts 
policy calling for more oversight of AI in prior authorization. https:/
/www.ama-assn.org/press-center/press-releases/ama-adopts-policy-
calling-more-oversight-ai-prior-authorization.
    \34\  Casey Ross and Bob Herman. (July 11, 2023). How 
UnitedHealth's acquisition of a popular Medicare Advantage algorithm 
sparked internal dissent over denied care. https://www.statnews.com/
2023/07/11/Medicare-advantage-algorithm-navihealth-united--health-
insurance-coverage/

    While AI has the potential to streamline health care administration 
and address spending by optimizing provider resources and improving 
patient care, there are still questions about how patient information 
will be used to advance care and whether this may weaken patient 
privacy protections. Leveraging individual health data is essential to 
deliver specific care outcomes to a patient, but Congress must ensure 
that AI tools are not used to deny patients access to care or use 
patient information for purposes that a patient has not given consent 
for.
   Safeguarding Patient Privacy Throughout the Health Care Lifecycle
    The foundational requirement for developing an AI tool is a large 
data set upon which to train an algorithm to analyze information and 
make determinations and predict outcomes. The dataset can take many 
forms, including thousands of medical images accompanied by indications 
of whether and where cancerous tumors are present. After learning from 
enough images, the algorithm should be able to process a new image and 
alert a clinician as to whether cancer is indicated in the scan. To 
obtain such vast datasets, algorithm developers may affiliate with an 
institution that already has internal datasets, such as a hospital 
system or EHR vendor. These institutions are typically regulated as 
covered entities or business associates under the Health Insurance 
Portability and Accountability Act (HIPAA). Developers may also use 
health data collected via third-party applications. This information is 
not always protected by the HIPAA framework and raises questions about 
what protections the information may be entitled to. In many instances, 
patients and consumers have expectations for how their health 
information should be handled that may differ from existing 
requirements on those who collect health data. AI can be leveraged to 
enhance privacy protections by aggregating disparate data to anonymize 
personally identifiable information, though it can also be used to re-
identify previously de-identified health information. \35\ Congress 
needs to consider if changes are needed in how health information is 
protected when it falls outside the scope of HIPAA.
---------------------------------------------------------------------------
    \35\  Katharine Miller, De-Identifying Medical Patient Data Doesn't 
Protect Our Privacy, Stanford University Human-Centered Artificial 
Intelligence, July 19, 2021, https://hai.stanford.edu/news/de-
identifying-medical-patient-data-doesnt-protect-our-privacy/.
---------------------------------------------------------------------------
         Improving Student Learning and Transforming Education
    Educators, school officials, and researchers are debating the 
merits and shortcomings of utilizing this new technology in classrooms. 
Proponents posit that AI can revolutionize education by providing more 
personalized learning for students while reducing the workload for 
teachers. This technology might prove especially helpful in light of 
the COVID-19 pandemic, which resulted in years of lost learning and the 
largest decline in test scores seen on national assessments in decades. 
\36\ However, there are well-founded concerns around how AI might be 
used as a low-quality shortcut by both students and teachers, how to 
account for errors in AI's output, and how the underlying models and 
algorithms might not be setup to adequately serve all students.
---------------------------------------------------------------------------
    \36\  The National Assessment of Educational Progress. (June 2023). 
Reading and Mathematics Scores Decline during COVID-19 Pandemic. 
https://www.nationsreportcard.gov/highlights/ltt/2022/.

    School districts across the country have used Federal funds to 
provide tutoring to address student learning loss. Now, researchers are 
exploring whether AI can serve as a supplemental tutor during class 
time or at home to provide homework help. The rise of platforms such as 
Khan Academy's Khanmingo shows that the technology can provide 
customized responses to students' questions, guiding them through their 
thinking process to help them come up with an accurate answer. \37\ AI 
can help educators with routine tasks, like grading assessments and 
identifying trends in student outcomes, to reduce the ever-growing 
burdens on teacher time. For example, teachers are starting to use AI 
to assist in lesson planning, by aligning standards to activities, 
identifying strategies to engage all learners, and developing 
assessments. \38\ This can free up teachers' time to focus on 
activities that make a greater impact on learning outcomes, such as 
providing individualized instruction or whole-group remediation.
---------------------------------------------------------------------------
    \37\  Khanmigo. (n.d.) Khan Academy. https://www.khanacademy.org/
khan-labs--khanmigo.
    \38\  Jorge Valenzuela. (March 15, 2023). Using AI to Help Organize 
Lesson Plans. Edutopia. https://www.edutopia.org/article/ai-lesson--
plans/.

    AI can even be used to help support other school personnel, like 
security guards. School districts are starting to purchase and use AI-
powered robots that can surveil school grounds and notify security 
staff about intruders. \39\ While reliant on guidance from humans, 
these robots are equipped to video record interactions with intruders, 
transmit communications from safety staff, and even use flashing lights 
and lasers to disarm an individual. \40\ While these robots are a new, 
and expensive, development, it is a promising innovation that can 
improve school safety.
---------------------------------------------------------------------------
    \39\  Megan Tagami. (July 7, 2023) Your School's Next Security 
Guard May Be an AI-Enabled Robot. Wall Street Journal. https://
www.wsj.com/articles/this-schools-new-security-aide-has-360-degree-
vision-its-a-robot-a4f983b5.
    \40\  Ibid.

    Use of AI in post-secondary education, from workforce development 
to higher education, involves similar opportunities and potential 
concerns. A famous example of AI success in higher education is on 
student completion and success at Georgia State University. The 
institutional graduation rate stood at 32 percent and Pell students, 
those from low-income backgrounds, were graduating at a rates 10 
percentage points lower than non-Pell students. \41\ According to their 
report, in 2003, Georgia State University was the ``embodiment of these 
national failings.'' \42\ Now, the graduation rate is up and the 
racial, ethnic, and economic disparities are no longer predictors of 
success at Georgia State. The university successfully demonstrated the 
impact of analytics-based proactive advisement, using AI, from 
identifying students at-risk of not graduating to chatbots to provide 
customized communications in real-time. \43\
---------------------------------------------------------------------------
    \41\  Ibid.
    \42\  Georgia State University. (September 2020) Complete College 
Georgia. Carnegie Foundation. https://www.carnegiefoundation.org/
    \43\  Ibid.

    While these advances may be a bright spot for the future of 
education, results from a recent survey of teachers and administrators 
by the digital learning platform, Clever, show that there are more 
obstacles to overcome. Nearly half of survey respondents believed that 
``AI will make their jobs more challenging within 3 years'' and these 
challenges may stem from the lack of professional development preparing 
teachers to use these new technologies in the classroom. \44\ However, 
as with any new technology, like introduction of the internet or 
tablets in the classroom, there will be growing pains as teachers begin 
to grapple with and use AI in their classrooms. School leaders will 
need to take the lead in ensuring that their staff is appropriately 
trained, and best practices for use are developed and widely 
disseminated.
---------------------------------------------------------------------------
    \44\  PR Newswire. (June 21 2023). Half of Teachers Surveyed 
Believe AI Will Make Their Jobs More Challenging. https://
www.prnewswire.com/

    As localities consider if and how they will use AI in their 
classrooms, the country's largest school district, New York City Public 
Schools, has taken a decisive step by banning ChatGPT on all district 
devices and networks. \45\ One of the chief concerns shared by district 
leaders and teachers is how AI can enable students to cheat on 
assessments. \46\ In fact, the Department of Education recently 
released a report that raised both this concern and a more widespread 
issue--how AI can provide information that appears to be accurate but 
perpetuates misunderstandings. \47\
---------------------------------------------------------------------------
    \45\  Maya Yang. (January 6, 2023). New York City Schools Ban AI 
Chatbot That Writes Essays and Answers Prompts. The Guardian. https://
www.theguardian.com/us-news/2023/jan/06/new-york-city-schools-ban-ai-
chatbot-chatgpt.
    \46\  Ibid.
    \47\  Office of Educational Technology. (May 2023). Artificial 
Intelligence and the Future of Teaching and Learning. U.S. Department 
of Education. https://www..ed.gov/documents/ai-report/ai-report.pdf.

    While students are now able to use the internet and other 
technologies to help answer basic homework questions, recent 
advancements will enable students to use AI as a substitute for their 
own thinking for assignments aimed at building or testing their 
critical thinking skills. AI can be used to write essays, prepare an 
argument for debate, or construct proofs for complex math problems. If 
both AI's content and students' use of the technology is left 
unchecked, students may never fully develop the critical thinking 
skills needed to succeed in the workforce. Students must be taught to 
use AI to strengthen, rather than replace their critical thinking 
skills. For instance, students could be asked to critique the reasoning 
of an essay prepared by AI or submit their argument to AI and ask for 
probing questions to work through that might strengthen their logic. AI 
will either be a shortcut for students' critical thinking or an 
incredible sparring partner to strengthen them--what actions can we 
take to ensure it is the latter?
            Responsible Use of AI Can Improve the Workplace
    Human resources (HR) technology spending on AI tripled in 2021 as 
companies adjusted to remote work and staffing challenges. \48\ This 
year, H.R. technology ranks as the top spending priority for H.R. 
leaders, higher than staffing, total rewards, or learning and 
development. \49\ Employers are using AI to create efficiencies across 
the employee lifecycle, from recruiting, to interviewing, hiring, 
onboarding, upskilling, managing, promoting, and downsizing. Proponents 
argue AI can help firms make better employment-related decisions and 
enhance work for employees. To fill employment gaps, AI is facilitating 
connections between job seekers and potential employers, and helping 
employers attract, hire, and retain high-value employees, including 
those with untraditional backgrounds. When designed or used 
inappropriately, AI can lead to violations of Federal law or alter how 
work is done to the detriment of workers.
---------------------------------------------------------------------------
    \48\  Dondo, Jean. (2021, December 21). H.R. technology budget 
triples in 2021. HRD America. https://www.hcamag.com/us/specialization/
hr-technology/hr-technology-budget-triples-in-2021/320668.
    \49\  Feffer, Mark. (2023, March 16). H.R. Sees Technology as One 
Solution to Rising Costs. HCM Technology Report. https://
www.hcmtechnologyreport.com/hr-sees-technology-as-one-solution-to-
rising-costs/.

    For example, the use of AI to monitor and manage employees has 
often been cited as a cause of deteriorating workplace conditions. In 
certain cases, employees have expressed concerns that AI was 
inappropriately used to determine who is laid off. \50\ In addition, 
the digitalization of H.R. departments has often meant information on 
employee productivity, employee potential, and other metrics derived 
using AI played a role in adverse H.R. decisions. \51\ Meanwhile, some 
companies are deploying employee monitoring methods such as keystroke 
and eye tracking software, video monitoring or automated job 
interviews, and wearable tracking devices, which can raise concerns 
over employee privacy and dignity. \52\ The shift to remote work that 
occurred during the pandemic spurred adoption of these technologies, 
intensifying concerns. Companies are also using AI to ensure the safety 
and protection of their workers. For example, AI models are being 
developed for fire detection, limiting unauthorized access, and 
collision warnings for moving vehicles. \53\
---------------------------------------------------------------------------
    \50\  Nurski, L. and Hoffman, M. (2022, July 27). The impact of 
artificial intelligence on the nature and quality of jobs, Working 
Paper, Bruegel. https://www.bruegel.org/sites/default/files/2022-07/WP 
percent2014 percent202022.pdf.
    \51\  Verma, Pranshu. (2023, February 20). AI is starting to pick 
who gets laid off. The Washington Post. https://www.washingtonpost.com/
technology/2023/02/20/layoff-algorithms/.
    \52\  Lazar, Wendi, & Yorke, Cody. (2023, April 25). Watched while 
working: Use of monitoring and AI in the workplace increases. Reuters. 
https://www.reuters.com/legal/legalindustry/watched-while-working-use-
monitoring-ai-workplace-increases.
    \53\  Boesch, G. (2023, January 5). Top 18 applications of Computer 
Vision in security and surveillance. viso.ai. https://viso.ai/
applications/computer-vision-applications-in-surveillance-and-security/
 

    Another area of potential harm that has garnered ample attention by 
policymakers and regulators is discrimination. At the Federal level, 
Congress, the Department of Labor (DOL), the Equal Employment 
Opportunity Commission (EEOC), the National Labor Relations Board 
(NLRB), and the White House have each opined on the potential risk of 
AI to produce discriminatory employment decisions. \54\, \55\, \56\, 
\57\, \58\ Debates are just beginning about whether adequate 
protections are provided by technology-neutral Federal anti-
discrimination statutes, such as Title VII of the Civil Rights Act of 
1964, the Americans with Disabilities Act of 1990, and the Age 
Discrimination in Employment Act of 1967. \59\, \60\, \61\
---------------------------------------------------------------------------
    \54\  Senate Judiciary Subcommittee on Privacy, Technology, and the 
Law. (2023, July 25). Senate hearing on Regulating Artificial 
Intelligence Technology. CSPAN. https://www.c-span.org/video/
    \55\  Goldman, T. (2022, October 4). What the blueprint for an AI 
bill of rights means for workers. DOL Blog. https://blog.dol.gov/2022/
10/04/what-the-blueprint-for-an-ai-bill-of-rights-means-for-workers.
    \56\  U.S. Equal Employment Opportunity Commission. (2022, May 12). 
[Technical Guidance] The ADA and AI: Applicants and Employees. U.S. 
Equal Employment Opportunity Commission. https://www.eeoc.gov/laws/
guidance/americans-disabilities-act-and-use-software-algorithms-and-
artificial-intelligence.
    \59\  Abruzzo, J. A. (2022, October 31). Electronic Monitoring and 
Algorithmic Management of Employees Interfering with the Exercise of 
Section 7 Rights. National Labor Relations Board. https://www.nlrb.gov/
news-outreach/news-story/nlrb-general-counsel-issues-memo-on-unlawful-
electronic-surveillance-and.
    \58\  The U.S. Government. (2023, March 16). Blueprint for an AI 
bill of rights. The White House. https://www.whitehouse.gov/ostp/ai-
bill-of-rights/.
    \59\  Equal Employment Opportunity Commission . (1964). Title VII 
of the Civil Rights Act of 1964. U.S. EEOC. https://www.eeoc.gov/
statutes/title-vii-civil-rights-act--1964.
    \60\  Equal Employment Opportunity Commission . (1990). Americans 
with Disabilities Act of 1990. U.S. EEOC. https://www.eeoc.gov/
publications/ada-your-responsibilities-employer.
    \61\  Equal Employment Opportunity Commission . (1967). Age 
Discrimination in Employment Act of 1967. U.S. EEOC. https://
www.eeoc.gov/statutes/age-discrimination-employment-act--1967.

    Three AI challenges facing policymakers are working conditions, 
discrimination, and job displacement. AI is disrupting the labor market 
by automating some jobs and threatening to displace more. \62\ In one 
estimate, about two-thirds of jobs globally are exposed to partial AI 
automation, and about one-fourth of jobs could be replaced. \63\ Early 
estimates focus on potential job loss among low-skilled, low-income 
jobs. White-collar jobs are increasingly considered at risk, 
particularly with the rapid development of generative AI (i.e., AI 
systems using existing patterns within data sets to create new content, 
such as ChatGPT).
---------------------------------------------------------------------------
    \62\  Challenger, Gray & Christmas, Inc. (2023, June 1). Layoffs 
Jump in May on tech, retail, auto; TYD hiring lowest since 2016. 
Challenger Report May 2023. https://omscgcinc.wpenginepowered.com/wp-
content/uploads/2023/06/The-Challenger-Report-May23.pdf.
    \63\  Briggs, Joseph, Hatzius, Jan, Kodnani, Devesh, & 
Pierdomenico, Giovanni. (2023, March 26). The Potentially Large Effects 
of Artificial Intelligence on Economic Growth (Briggs/Kodnani). Goldman 
Sachs Economic Research. https://www.key4biz.it/

    As EEOC Commissioner Keith Sonderling notes, machine learning and 
natural language processing are the most pertinent iterations of AI in 
the employment context. \64\ Machine learning is a subfield of AI that 
allows computing systems to process large amounts of data to change the 
original programming, i.e. ``learn,'' without explicitly being 
programmed. At any point in the process, programmers may alter the 
model to push it to more accurate results or assess the system with 
evaluation data. \65\ Natural language processing is a set of 
computational techniques to analyze and produce written or oral 
language in a way that appears to be human. \66\ Chatbots are a common 
example.
---------------------------------------------------------------------------
    \64\  Ibid.
    \65\  Brown, Sara. (2021, April 21). Machine learning, explained. 
MIT Sloan. https://mitsloan.mit.edu/ideas-made-to-matter/machine-
learning-explained.
    \66\  Liddy, Elizabeth D. (2001). Natural Language Processing. 
SURFACE at Syracuse University. https://surface.syr.edu/cgi/
viewcontent.cgi

    AI's impact on work is far from understood, as the workplace, 
workers' preferences and expectations, and the technology itself is 
rapidly developing. AI's potential positive impact on work is less 
discussed, but may prove more significant. AI systems have been used to 
help workers look for a job, or upskill to a new one. AI education 
tools can be seamlessly integrated into an employee's workflow, and 
adjusted in real time as the economy changes. \67\ AI can increase 
workplace access for disabled employees. Examples include lip-reading 
recognition tools, image and facial expression recognition, and 
wearable technologies, such as robotic arms. AI tools can create more 
flexible scheduling, matching labor demands with worker availability, 
qualifications, and preferences. Flexible scheduling is particularly 
important for family caregivers. \68\ Research has indicated that AI 
often results in more diverse hires and less biased promotion 
decisions. \69\ Perhaps counterintuitively, the use of AI in the 
workplace has been correlated with greater employee satisfaction, 
giving actionable information on workplace stressors in real time and 
facilitating interactions with management. \70\, \71\
---------------------------------------------------------------------------
    \67\  Perara, Angela. (2022, October 8). Artificial Intelligence in 
HR: Using AI for identifying and hiring suitable candidates. Business-
Tech Weekly. https://www.businesstechweekly.com/hr-and-recruitment/
artificial-intelligence-ai-for-hiring/.
    \68\  Siddiqui, A. R. (2023, June 7). How ai is helping society 
break free from the 9-to-5 mold. Entrepreneur. https://
www.entrepreneur.com/leadership/how-ai-is-breaking-the-9-to-5-mold/
    \69\  Houser, Kimberly. (2020, July 12). Can AI Solve the Diversity 
Problem in the Tech Industry? Mitigating Noise and Bias in Employment 
Decision-Making. SSRN. https://papers.ssrn.com/sol3/papers.cfm
    \70\  Candelon, Francois, Khodabandeh, Shervin, & Lanne, Remi. 
(2022, November 4). A.I. empowers employees, not just companies. Here's 
how leaders can spread that message. FORTUNE. https://fortune.com/2022/
11/04/artificial-intelligence-ai-employee-empowerment/.
    \71\  Houser, Kimberly. (2020, July 12). Can AI Solve the Diversity 
Problem in the Tech Industry? Mitigating Noise and Bias in Employment 
Decision-Making. SSRN. https://papers.ssrn.com/sol3/papers.

    The U.S. Government has not adopted a centralized regulatory 
approach to AI in the employment context. Several states and 
localities--Maryland, Illinois, and New York City, for example--have 
enacted AI laws, and more local and state regulation is pending. \72\ 
Executive branch policy is beginning to address AI, to include 
technical assistance from the EEOC and a memo by NLRB General Counsel, 
but is still in its infant stages. Federal lawmakers have shown 
interest in regulating AI, but significant problems, including the 
novelty of the technology and the still undecided nature of its impact, 
remain.
---------------------------------------------------------------------------
    \72\  Zhu, K. (2023, August 3). The State of State AI laws: 2023. 
EPIC. https://epic.org/the-state-of-state-ai-laws-2023/.
---------------------------------------------------------------------------
                        AI and Job Displacement
    Technological unemployment has been a recurring fear since the 
manufacturing era, and is once again with the advent of AI. According 
to a Goldman Sachs study, globally 300 million full-time jobs could be 
at risk of automation. \73\ The World Economic Forum estimates that 85 
million jobs could be displaced by 2025 but 97 million new jobs may be 
generated by technology. \74\ Many economists argue robots are not 
replacing workers, but instead workplaces are integrating them into 
their ecosystem. \75\ Despite these fears, as adoption of AI increases 
across the private sector, the major workforce challenge most companies 
face is filling job vacancies.
---------------------------------------------------------------------------
    \73\  Goldman Sachs. (2023, April 05). Generative AI could raise 
global GDP by 7 percent. Goldman Sachs. https://www.goldmansachs.com/
intelligence/pages/generative-ai-could-raise-global-gdp-by-7-
percent.html.
    \74\  Schwab, Klaus, & Zahidi, Saadia. (2020, October 20). The 
Future of Jobs Report 2020. WeForum. https://www.weforum.org/reports/
the-future-of-jobs-report-2020/
    \75\  Dahlin, Eric. (2022, October 17). Are Robots Really Stealing 
Our Jobs? Perception versus Experience. Socius, 8. https://doi.org/.

    The potential automation of truck driving has often been predicted 
to threaten millions of U.S. jobs. According to the American Trucking 
Association, in 2022, 8.4 million Americans were employed in jobs that 
relate to trucking activity. \76\ Hearings on autonomous vehicles and 
trucking have focused on this risk. The Senate Commerce Committee 
reported the AV STARTAct (S. 1885) in 2017, but exempted vehicles 
weighing more than 10,000 pounds after pressure from the Teamsters 
Union. \77\ In 2021, the Departments of Transportation and Labor 
published a congressionally directed study on the impacts of automated 
trucking on the workforce, which acknowledged the potential for job 
displacement in the trucking industry but noted the lack of data would 
require further studies to generate a stronger prediction. \78\ A 2019 
Government Accountability Office (GAO) report noted widespread 
deployment of automated trucks could be years or decades away.
---------------------------------------------------------------------------
    \76\  American Trucking Association. (n.d.). Economics and industry 
data. American Trucking Associations. https://trucking.org/economics-
and-industry-data.
    \77\  DC Velocity Staff. (2017, October 4). Senate Committee caps 
weight limit on vehicles to be subject to AV laws. DCVelocity. https://
www.dcvelocity.com/articles/29203-senate-committee-caps-weight-limit-
on-vehicles-to-be-subject-to-av-laws.
    \78\  U.S. GAO. (2019, March). Automated Trucking Federal Agencies 
Should Take Additional Steps to Prepare for Potential Workforce 
Effects. U.S. Government Accountability Office (U.S. GAO). https://
www.gao.gov/assets/gao-19-161.pdf.

    Studies have suggested that the impact of automation on jobs may be 
less abrupt than is envisioned. \79\ A significant portion of job 
losses, for example, will take place through attrition, including 
retirement. In addition, studies comparing predictions of job loss and 
job creation due to technology fail to predict even the most common job 
titles over the coming decades. \80\ Sixty percent of today's workforce 
occupy jobs that did not exist in the 1940's. \81\ Increased demand for 
AI is predicted to generate job opportunities in engineering, software 
design, and programing. Industries such as finance and health care will 
experience job creation for high skilled roles including biologists, 
financial technology specialists, and geneticists. \82\ The 
Massachusetts Institute of Technology (MIT) Work of the Future report 
noted, ``[W]e anticipate that in the next two decades, industrialized 
countries will have more job openings than workers to fill them, and 
that robotics and automation will play an increasingly crucial role in 
closing these gaps.'' \83\
---------------------------------------------------------------------------
    \79\  Gmyrek, P., Berg, J., & Bescond, D. (2023, August). 
Generative AI and jobs: A global analysis of potential effects on job 
quantity and quality. ILO Working Paper 96. https://www.ilo.org/wcmsp5/
groups/public/--dgreports/--inst/documents/publication/wcms--
890761.pdf.
    \80\  Thierer, Adam. (2023 March). Can We Predict the Jobs & Skills 
Needed for the AI Era?. R Street. https://www.rstreet.org/wp-content/
uploads/2023/03/r-street-policy-study-no-278.pdf.
    \81\  The Economist. (2023, May 7). Your job is (probably) safe 
from artificial intelligence. https://www.economist.com/finance-and-
economics/2023/05/07/your-job-is-probably-safe-from-artificial-
intelligence.
    \82\  Schwab, Klaus, & Zahidi, Saadia. (2020, October 20). The 
Future of Jobs Report 2020. WeForum. https://www.weforum.org/reports/
the-future-of-jobs-report-2020/
    \83\  Autor, David, Mindell, David, & Reynolds, Elisabeth. (2020). 
The Work of the Future: Building Better Jobs in an Age of Intelligent 
Machines. MIT Work of the Future. https://workofthefuture.mit.edu/wp-
content/uploads/2021/01/2020-Final-Report4.pdf.

    Labor unions have expressed concern over various implications of 
AI, including recently at a White House listening session, where union 
leaders flagged safety, privacy, civil rights, and job loss as key risk 
areas. \84\ Concurrently, AI has become a central issue in current 
contract negotiations between the respective actors' and writers' labor 
unions and studios. \85\ The Screen Actors Guild has articulated the 
principal concern from the actors regarding AI is the risk of actors 
losing control over their likeness, specifically if their image or 
voice is used without their consent or without pay. \86\ Likewise, the 
Writers Guild of America is concerned with the greater utilization of 
AI-generated storylines or dialog, especially when it relates to 
credits that are linked to recognition pay. \87\ Automation was also a 
major concern of dockworkers during the West Coast labor negotiations, 
particularly the potential of job loss presented by container-handling 
and transporting equipment. \88\ This aspect was one of the last areas 
of agreement reached before the negotiations concluded. Other unions 
are positioning themselves to provide training and resources for 
workers entering new roles, or learning to work with technology in 
their current roles. AFL-CIO President Liz Shuler claimed AI will be 
``the next frontier for the labor movement,'' anticipating growing 
productivity will allow the union organization to be ``the center of 
gravity for working people as they transition to new and better jobs.'' 
\89\
---------------------------------------------------------------------------
    \84\  The U.S. Government. (2023, July 3). Readout of white house 
listening session with union leaders on Advancing Responsible 
Artificial Intelligence Innovation. The White House. https://
www.whitehouse.gov/briefing-room/statements-releases/2023/07/03/
readout-of-white-house-listening-session-with-union-leaders-on-
advancing-responsible-artificial-intelligence-innovation/.
    \85\  Patten, Dominic. (2023, July 10). SAG-AFTRA Strike Could 
Hinge On AI; Deep Divisions Remain Between Actors & Studios In Final 
Hours Of Talks. Deadline. https://deadline.com/2023/07/actors-strike-
ai-kim-kardashian-fran-drescher-contract-deadline-1235432142/.
    \86\  Webster, Andrew. (2023, July 13). Actors say Hollywood 
studios want their AI replicas--for free, forever. The Verge. https://
www.theverge.com/2023/7/13/23794224/sag-aftra-actors-strike-ai-image-
rights.
    \87\  Dalton, Andrew. (2023, July 13). AI is the wild card in 
Hollywood's strikes. Here's an explanation of its unsettling role. AP 
News. https://apnews.com/article/artificial-intelligence-hollywood-
strikes-explained-writers-actors-e872bd63ab52c3ea9f7d6e825240a202.
    \88\  Berger, Paul. (2023, April 20). West Coast Dockworkers Reach 
Tentative Deal on Port Automation. The Wall Street Journal. https://
www.wsj.com/articles/west-coast-dockworkers-reach-tentative-deal-on-
port-automation-b4b828fe.
    \89\  Kullgren, I. (2023, August 29). Unions must be at forefront 
of AI battle, AFL-CIO president says. Bloomberg Law. https://
news.bloomberglaw.com/us-law-week/unions-must-be-at-forefront-of-ai-
battle-afl-cio-president-says.

    Upskilling or educating workers to understand new technological 
advancements works to mitigate the negative impacts of new technology. 
For example, Senator Richard Durbin's (D-IL) Investing in Tomorrow's 
Workforce Act of 2021 would provide grants toward upskilling workers 
displaced due to automation. \90\ Senators Gary Peters (D-MI) and Mike 
Braun's (R-IN) AI Leadership Training Act would train Federal employees 
on AI. Tim Kaine (D-VA) and Senator Braun's JOBS Act, which would 
extend short term Pell Grants to workforce education programs, has been 
put forward as a response to automation caused by AI. \91\
---------------------------------------------------------------------------
    \90\  S. 1212--117th Congress (2021-2022) Investing in Tomorrow's 
Workforce Act of 2021. (2021, April 19) https://www.Congress.gov/
    \91\  Munhoz, Diego Areas. (2023, May 22). Congress Moves to Engage 
Workforce with AI, Not Fight Against It. Bloomberg Law. https://
news.bloomberglaw.com/daily

    AI itself may also be an answer to training workers for new tasks 
and jobs ahead. A Price Waterhouse Coopers (PwC) study found, ``AI 
allows those in training to go through naturalistic simulations in a 
way that simple computer-driven algorithms cannot. The advent of 
natural speech and the ability of an AI computer to draw instantly on a 
large data base of scenarios, means the response to questions, 
decisions or advice from a trainee can challenge in a way that a human 
cannot.'' \92\ Several companies are currently leveraging AI to 
identify learning opportunities for their workers and facilitate 
personalized and flexible upskilling. Through machine learning, AI can 
recommend and facilitate employee role pathways and learning sequences. 
AI-facilitated upskilling can be seamlessly integrated into an 
employee's workflow. \93\
---------------------------------------------------------------------------
    \92\  PricewaterhouseCoopers International Limited. (n.d.) No 
longer science fiction, AI and robotics are transforming healthcare. 
PWC. https://www.pwc.com/gx/en/industries/healthcare/publications/ai-
robotics-new-health/transforming-healthcare.html.
    \93\  H.R. Policy. (2023, January 31). HRPA Statement to EEOC: 
``Growing Opportunity for the U.S. Workforce in the Age of AI''. HR-
policy. https://www.hrpolicy.org/insight-and-research/resources/2023/
hr-workforce/public/02/hrpa-statement-to-eeoc-growing-opportunity-for-
the/.
---------------------------------------------------------------------------
                       AI and Working Conditions
    AI presents the opportunity for firms to derive meaningful data 
from workers and the workplace in ways not previously possible. This 
may translate to productivity gains and improved worker conditions. 
However, if not designed and implemented properly, AI may play a role 
in worsening workplace conditions by dehumanizing workers through 
inhospitable AI-driven management techniques, intruding on worker 
privacy, or increasing discrimination.

    The COVID-19 pandemic shifted many in-person roles to remote, some 
temporarily and some permanently. Remote work centered the discussion 
of employee monitoring as employers attempted to find ways to hold 
remote workers accountable. Data collected from such monitoring may 
contribute to employment decisions such as promotions, raises, 
demotion, or termination. However, there is concern these tools are 
simply an invasion of workers' privacy. Federal law is largely silent 
on the issue of worker surveillance in the workplace. \94\ Several 
states have passed laws limiting employer surveillance, particularly in 
rest and changing rooms, including in California, New York, and West 
Virginia. \95\ Nevertheless, U.S. employers have great discretion to 
monitor the workplace. Courts have upheld that employee monitoring is 
permitted if there is a valid business purpose. In Smyth v. Pillsbury 
Co., an employee claimed to be wrongfully terminated after sending 
inappropriate emails through the employer's email system. The court 
decided the plaintiff was not wrongfully terminated because there was 
not a reasonable expectation of privacy. \96\
---------------------------------------------------------------------------
    \94\  American Bar Association. (2018, January). How much employee 
monitoring is too much?. Americanbar. https://www.americanbar.org/news/
abanews/publications/youraba/2018/January
    \95\  Id.
    \96\  Smyth v. Pillsbury Co., 914 F. Supp.97 (E.D. Pa. 1996).

    Employer use of AI to streamline worker management has also come 
under scrutiny. Safety and health issues have been implicated by 
aggressive requirements imposed by AI systems on workers' movements, 
breaks, and other behaviors within the workplace. The labor movement 
has taken keen interest in the intersection of working conditions and 
---------------------------------------------------------------------------
technology.

    For example, testing of tracking technology on UPS delivery trucks 
drew strong push back from the Teamsters Union in 2020. \97\ UPS 
Teamsters United claimed UPS used worker surveillance systems to 
``harass and discipline [its] drivers.'' \98\ Advocates for such 
technologies claim they improve worker safety. For example, Amazon 
partnered with Netradyn to develop a driver information camera system 
that utilized telematics to ensure the safety of the driver and 
vehicle. \99\ However, the announcement received push back from the 
American Civil Liberties Union due to concerns of bias. \100\
---------------------------------------------------------------------------
    \97\  Scarpati, Jessica. (2023, March). Telematics. Techtarget. 
https://www.techtarget.com/searchnetworking/definition/telematics
    \98\  UPS Teamsters United. (n.d.). Protect Drivers From Cameras In 
UPS Trucks. UPS Teamsters for a democratic union. https://ups-
teamstersforademocraticunion.nationbuilder.com/sign--the--petition--
against--ups--cameras--in--trucks--today.
    \99\  Amazon. (n.d.). Amazon Netradyn Driver Information. Vimeo. 
https://vimeo.com/.
    \100\  Stanely, Jay. (2021, March 23). Amazon Drivers Placed Under 
Robot Surveillance Microscope. ACLU. https://www.aclu.org/news/privacy-
technology/amazon-drivers-placed-under-robot-surveillance-microscope.

    Many use cases of AI have contributed to improved working 
conditions and worker well-being. AI has the ability to reduce human 
error, as such creating a safer workplace. Marks & Spencer, a UK-based 
multinational retailer, reported a reduction of workplace incidents by 
80 percent when they introduced a computer vision technology at a 
distribution center because the technology identified and rectified 
unsafe behaviors. \101\ Integration of AI and other innovative 
technologies may ultimately improve workplace conditions, worker 
safety, and worker mobility. \102\ App-based food delivery companies 
use AI to organize and design the system of pick-ups, deliveries, and 
food recommendations. \103\ Through this system, drivers are able to 
maximize efficiency and profits. A study on the use of generative AI in 
the workplace found that workers who used the technology increased 
their productivity by 14 percent on average. It also found attrition 
rates plunged by 8.6 percent, suggesting lower stress levels among 
employees. \104\
---------------------------------------------------------------------------
    \101\  Healy, Charlotte. (2023, June 2). UK: AI's Impact on 
Workplace Safety. SHRM. https://www.shrm.org/resourcesandtools/hr-
topics/global-hr/pages/uk-ai-safety.aspx.
    \102\  Altman, Elizabeth J., Kiron, David, & Riedl, Christoph. 
(2023, April 13). Workforce ecosystems and AI. Brookings. https://
www.brookings.edu/articles/workforce-ecosystems-and-ai/.
    \103\  Ramesh, Raghav. (2018, May 2). How DoorDash leverages AI in 
its world-class on-demand logistics engine. Artificial Intelligence 
Conference. https://conferences.oreilly.com/artificial-intelligence/ai-
ny-2018/public/schedule/detail/65038.html.
    \104\  Brynjolfsson, Erik, Li, Danielle, & Raymond, Lindsey R. 
(2023, April). Generative AI at Work. NBER. https://www.nber.org/
system/files/working--papers/w31161/w31161.pdf.
---------------------------------------------------------------------------
                         AI and Discrimination
    The use of AI in employment decisions has become mainstream. Nearly 
80 percent of employers use some sort of AI or automation in the 
recruitment and hiring process. \105\ AI is often used to reach a 
specific candidate audience via targeted ads, to screen and rank 
applicants, and to analyze candidates' facial expressions or eye 
contact during a video interview. \106\ AI is also being used to track 
performance of employees by following log in times, computer usage, and 
online activity. \107\ Evidence suggests AI may have the potential to 
exacerbate biases in hiring. \108\ Data being inputted may reflect 
existing workplace biases and it is difficult to discern how an AI 
system's inputs translate into its outputs. \109\
---------------------------------------------------------------------------
    \105\  Brin, Dinah Wisenberg. (2019, March 22). Employers Embrace 
Artificial Intelligence for HR. SHRM. https://www.shrm.org/
resourcesandtools/hr-topics/global-hr/pages/employers-embrace-
artificial-intelligence-for-hr.aspx.
    \106\  Casimir, Lance, Kelley, Bradford J., & Sonderling, Keith E. 
(2022, August 11). The Promise and The Peril: Artificial Intelligence 
and Employment Discrimination. University of Miami Law Review. https://
repository.law.miami.edu/cgi/viewcontent.cgi
    \107\  Ibid.
    \108\  The U.S. Government. (2016, May). Big Data: A Report on 
Algorithmic Systems, Opportunity, and Civil Rights. Executive Office of 
the President. https://obamawhitehouse.archives.gov/sites/default/
files/microsites/ostp/2016--0504--data--discrimination.pdf.
    \109\  Rawashdeh, Samir. (2023, March 6). Artificial intelligence 
can do amazing things that humans can't, but in many cases, we have no 
idea how AI systems make their decisions. UM-Dearborn Associate 
Professor Samir Rawashdeh explains why that's a big deal. UM Dearborn. 
https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained.

    Title VII of the Civil Rights Act (Title VII) prohibits 
discrimination on the basis of race, color, religion, national origin, 
or sex in the employment context. According to the EEOC, which enforces 
Title VII, a business may be found to have violated Title VII for 
either disparate treatment or, more relevant to AI operators, disparate 
impact. Disparate treatment occurs ``when an employer or other person 
subject to the [Civil Rights] Act intentionally excludes individuals 
from an employment opportunity on the basis of race, color, religion, 
sex, or national origin'' (emphasis added). However, intent is not 
necessary to establish a claim of disparate impact, where the only 
concern is whether a facially neutral policy disproportionally excludes 
individuals within a protected class. \110\ Disparate impact is 
typically the focus of discrimination concerns regarding AI. \111\
---------------------------------------------------------------------------
    \110\  U.S. Equal Employment Opportunity Commission. (1988, August 
1). [Guidance] CM-604 theories of discrimination. U.S. Equal Employment 
Opportunity Commission. https://www.eeoc.gov/laws/guidance/cm-604-
theories-discrimination.
    \111\  New EEOC guidance on when the use of artificial intelligence 
in selection procedures may be discriminatory. FordHarrison. (2023, 
June 13). https://www.fordharrison.com/eeocs-guidance-on-artificial-
intelligence-hiring-and-employment-related-actions-taken-using-
artificial-intelligence-may-be-investigated-for-employment-
discrimination-violations.

    Employers are also prohibited from unlawfully discriminating based 
on age or disability under the Age Discrimination in Employment Act 
(ADEA). The ADEA prohibits employers and employment agencies from 
discriminating against workers 40 or older in job advertising, 
recruiting, hiring, and other job opportunities. \112\ In December 
2022, in one of the first AI-related charges filed with the EEOC, Real 
Women in Trucking filed a discrimination charge against Meta Platforms 
Inc. The group alleged Meta Platforms steered employment ads away from 
women and people over 55 years. After an investigation of a complaint 
by a man who could not complete an online application due to age 
restrictions, the Illinois Attorney General investigated several 
automated hiring platforms for discouraging older workers from 
applying. \113\
---------------------------------------------------------------------------
    \112\  Department of Labor. (n.d.). Age discrimination. U.S. 
Department of Labor. https://www.dol.gov/general/topic/discrimination/
    \113\  Ajunwa, Ifeoma. (2020, May 1). Protecting Workers' Civil 
Rights in the Digital Age. UNC School of Law. https://
scholarship.law.unc.edu/cgi/viewcontent.cgi

    The Americans with Disabilities Act (ADA) expressly bans pre-
employment assessments that tend to screen out individuals with a 
disability unless the test can be shown to be job-related and 
consistent with a business necessity. For example, an AI-powered 
personality test may ask or intuit an applicant's sense of optimism, 
and disqualify them based on their living with Major Depressive 
Disorder. \114\ Job applicants diagnosed with autism may be screened 
out from job opportunities based on video interviews assessed by AI 
trained to detect certain patterns, such as eye contact and pauses in 
speech. \115\ In addition, the ADA prohibits employers from inquiring 
into an applicant's disability during the application and interview 
processes. AI systems that determine a potential employee's disability 
status may violate the ADA. Advocates in favor of using of AI in the 
workplace, however, argue that with certain safeguards, the technology 
can speed up the hiring process while limiting discrimination and bias. 
\116\
---------------------------------------------------------------------------
    \114\  U.S. Equal Employment Opportunity Commission. (2022, May 
12). [Technical Guidance] The ADA and AI: Applicants and Employees. 
U.S. Equal Employment Opportunity Commission. https://www.eeoc.gov/
laws/guidance/americans-disabilities-act-and-use-software-algorithms-
and-artificial-intelligence.
    \115\  Landon, Oliver. (2022, April). AI video assessment. 
Employment autism. https://www.employmentautism.org.uk/blog/ai-video-
assessments.
    \116\  Sonderling, Keith E. (n.d.). How People Analytics Can 
Prevent Algorithmic Bias. IHRIM. https://www.ihrim.org
---------------------------------------------------------------------------
                               Conclusion
    As the U.S. Senate assesses the readiness of American regulatory 
frameworks for AI, as Ranking Member of the HELP Committee, I'm focused 
on ensuring that we are prepared for the continued deployment of AI. 
The insights of stakeholders that can describe the advantages and 
drawbacks of AI in our health care system, in the classroom, and in the 
workplace are critical as policymakers grapple with this topic. Please 
submit feedback and comments for ways to improve the framework in which 
these technologies are developed, reviewed, and used to HELPGOP--
[email protected] by Friday, September 22.
                      Questions for Consideration
    Health Care

    Supporting Medical Innovation:

          How can FDA support the use of AI to design and 
        develop new drugs and biologics?

          What updates to the regulatory frameworks for drugs 
        and biologics should Congress consider to facilitate innovation 
        in AI applications?

          How can FDA improve the use of AI in medical devices?

          What updates to the regulatory frameworks for medical 
        devices should Congress consider to facilitate innovation in AI 
        applications while also ensuring that products are safe and 
        effective for patients?

          How can Congress help FDA ensure that it has access 
        to the expertise required to review products that are developed 
        using AI or that incorporate AI?

          How can FDA better leverage AI to review product 
        submissions?

          How can FDA harness external expertise to support 
        review of products that are developed using AI or that 
        incorporate AI?

          What are the potential consequences of regulating AI 
        in the United States if it remains unregulated in other 
        countries?

    Medical Ethics and Protecting Patients:

          What existing standards are in place to demonstrate 
        clinical validity when leveraging AI? What gaps exist in those 
        standards?

          What practices are in place to mitigate bias in AI 
        decisionmaking?

          What should be the Federal role, if any, in 
        addressing social and/or political bias?

          How can AI be best adopted to not inappropriately 
        deny patients care?

          Is the current HIPAA framework equipped to safeguard 
        patient privacy with regards to AI in clinical settings? If 
        not, how not or how to better equip the framework?

          What standards are in place to ensure that AI 
        maintains respect and dignity for human life from conception to 
        natural death?

          Who should be responsible for determining safe and 
        appropriate applications of AI algorithms?

          Who should be liable for unsafe or inappropriate 
        applications of AI algorithms? The developer? A regulating 
        body? A third party or private entity?

    Education

    General Policy:

          What should the Federal role be in supporting AI in 
        education?

          What should the state role be in supporting AI in 
        education?

          What should be the local role in supporting AI in 
        education?

          Do these roles vary by the educational setting?

          What should be the Federal role in supporting and 
        ensuring safe and responsible use of AI with respect to the 
        workforce and the workplace?

          What should the state role be in supporting and 
        ensuring safe and responsible use of AI with respect to the 
        workforce and the workplace?

          What are the best practices currently being used to 
        ensure that AI systems are designed, developed, and deployed in 
        a manner that protects people's rights and safety?

    Practical Uses for AI in Education Settings:

          How is AI already being used in the classroom? Are 
        there any innovative models emerging?

          How is AI being used throughout school buildings or 
        on post-secondary campuses? What areas are advocates hopeful AI 
        can help in besides the classroom?

          How can AI be used to promote school safety? Are 
        there pilots in this area?

          How do we ensure kids can use AI without relying on 
        it? How can it be used to promote critical thinking, rather 
        than replace it? What part of the workflow can AI take over for 
        teachers? What part of the workflow should not be replaced by 
        AI?

          How can we ensure that AI is used effectively and 
        meaningfully in the classroom to support teachers and improve 
        learning, rather than becoming another burdensome new tech for 
        teachers to navigate?

    Fostering Students' Understanding of AI:

          How does AI impact what students need to be taught?

          What are the skills students need to use AI 
        responsibly and effectively?

          How does AI impact how student learning is assessed?

          What are the components of next-generation digital 
        literacy related to AI (e.g., algorithmic bias, ethics and 
        academic integrity, asking critical questions/spotting deep 
        fakes, etc.)?

    Preparing for AI in the Classroom:

          What do teachers/professors/instructors need to 
        understand about AI before using it?

          How can we incentivize and fund high quality 
        professional development for teachers and administrators in AI 
        and computer science?

          How could AI impact teacher preparation programs?

          What does refusal look like in a classroom? When can 
        and should teachers decline advice/recommendations from an AI 
        system?

          How should errors in AI's output be handled? How 
        should teachers be trained to spot and correct these? Students?

          Right now, schools are putting many of their AI 
        courses into their Career and Technical Education (CTE) 
        programs, but AI lacks industry-recognized credentials. How can 
        industry create meaningful credential development, recognizing 
        also that the curricula and assessments may need to be updated 
        frequently to reflect the changing technology?

    Design for AI Use in Schools and with Kids and Young Adults:

          What are the demonstrable steps taken during the 
        design process that give districts/teachers/parents confidence 
        that the AI is fit for use?

          How do foundational models that were not designed 
        with children or the classroom in mind come into play here?

          How is data that is collected during the use of these 
        programs in schools used by the AI?

          How is personally identifiable information managed, 
        stored, and used in accordance with FERPA?

          What protections are in place to keep AI from 
        ``learning'' the wrong things?

          How can policymakers and technologists work together 
        to build trust in responsibly developed AI? What does 
        responsible development look like?

    Higher Education Admissions:

          What is the current and future use of AI in college 
        admissions?

          What protections are put into place to ensure 
        admissions is not biased in decisionmaking?

          How will AI affect the admissions timelines, and 
        would it increase the response time from schools on their 
        admissions decisions?

    Degree or Credential Completion and Success:

          Are there lessons that can be learned from other 
        policy areas or program spaces about how to leverage AI to 
        improve the student experience and improve outcomes?

          How do we protect students from being just another 
        number and instead use AI to build social connections that lead 
        to student success?

    Labor

    Practical Uses for AI in the Workplace:

          What role does AI play in the workplace? Where is AI 
        most often deployed in the context of the workplace?

          What are the key areas companies anticipate making 
        investments in AI in the workplace context?

          What are the chief reasons employers deploy AI in the 
        workplace?

          What considerations do companies purchasing AI 
        software make to ensure it is safe and does not infringe on 
        human rights prior to implementing it in their systems?

          What do workers need to understand about AI in the 
        workplace?

          What do AI developers need to understand about AI in 
        the workplace?

          What steps do companies take when they become aware 
        of a safety or humans rights issue caused by the use of AI with 
        respect to workers?

          How are companies integrating AI into their remote 
        workforce?

    AI Standards

          What role will AI standards, such as the National 
        Institute of Standards and Technology AI Risk Management 
        Framework, play in regulatory and self-regulatory efforts?

          What do policymakers need to know about the 
        development of AI standards?

          What do employers need to know about the development 
        of AI standards?

          How can policymakers work with AI developers and 
        users to update and improve such standards as the technology 
        develops?

    AI and the Job Market

          What role will AI play in creating new jobs?

          What jobs are most at risk of experiencing 
        displacement due to AI?

          What is the rate of job displacement due to AI?

          What skillsets will become more important as AI is 
        adopted in the workplace

          How is AI being used to fill gaps in the labor 
        market?

          Should Congress be involved to mitigate job 
        displacement from AI? How will the market adapt if Congress 
        does not step in?

    AI and Working Conditions

          What are high-risk use cases of AI with respect to 
        working conditions?

          What are low-risk use cases of AI with respect to 
        working conditions?

          The General Counsel of the NLRB has taken a 
        particular interest in the use of AI in employee monitoring. 
        How are employers viewing this issue? How are they preparing in 
        the case they are brought before the Board for review?

          How is AI being used to promote safety in the 
        workplace?

          How is AI being used to promote accessibility in the 
        workplace?

          How is AI being used to increase flexibility in the 
        workplace, including for remote workers?

          What are the concerns regarding the use of AI and 
        worker privacy and dignity, including for remote workers?

          What is the impact of AI on worker productivity?

          What is the impact of AI on worker retention?

    AI and Workplace Bias

          What are high-risk use cases of AI with respect to 
        discrimination?

          What are low-risk use cases of AI with respect to 
        discrimination?

          Are the current technology-neutral Federal anti-
        discrimination laws sufficient to prevent discrimination in the 
        workplace?

                                 ______
                                 
             statement of the american college of surgeons
    On behalf of the more than 88,000 members of the American College 
of Surgeons (ACS), we thank you for convening the hearing entitled 
``Avoiding a Cautionary Tale: Policy Considerations for Artificial 
Intelligence in Health Care.'' The ACS is dedicated to improving the 
care of the surgical patient and to safeguarding standards of care in 
an optimal and ethical practice environment. As such, we understand the 
critical role that technology plays in achieving this mission, as well 
as the need for thoughtful policymaking to ensure that tools such as 
artificial intelligence (AI) are used with the utmost regard for 
patients' rights and safety. As we discuss below, it is essential that 
AI tools are trained and maintained with high quality, diverse, valid, 
and representative data; are regularly assessed for continued accuracy 
and reliability; that regulators engage clinical experts in the 
assessment of AI health tools; and that physicians' clinical judgment 
remains paramount.

    The ACS appreciates the Senate Health, Education, Labor, and 
Pensions (HELP) Primary Health and Retirement Security Subcommittee's 
attention to this critical issue and welcomes the opportunity to share 
some legislative and regulatory considerations for the use of AI in 
health care.
                     Ensuring Reliability Over Time
    AI can be a powerful tool for medical innovation, but it is 
critical to ensure that these tools remain accurate and reliable as 
they develop. The ACS supports efforts to expand the use of real-world 
evidence (RWE) in the development and maintenance of medical 
technology. RWE is clinical evidence regarding the use and the 
potential benefits or risks of a medical product derived from analysis 
of real-world data (RWD), data related to a patient's health status or 
delivery of care that can be collected from a variety of sources such 
as mobile devices, wearables, and sensors; patient generated data used 
in home-use settings; product and disease registries; claims and 
billing activities; electronic health records, and more. Such data can 
complement data that are collected through traditional means and 
enhance clinical decisionmaking.

    For the Food and Drug Administration (FDA) and other regulators, 
RWE is necessary for monitoring the safety of drugs, devices, and 
emerging technologies such as AI. As devices that use AI evolve, RWD 
will be reported back to the FDA regarding the product's safety, 
effectiveness, and potential risks. The true power of AI-based software 
lies in its ability to improve over time instead of remaining static. 
But this is problematic for regulation because the device that was 
approved or cleared may no longer be operating in a similar fashion as 
it learns. RWD is necessary to show that the AI-based device still 
functions appropriately and in the way that it was intended. RWD is 
also important for accurately training AI algorithms. These data should 
be high quality, diverse, valid, and representative of the uses for 
which it will be applied. Any regulatory framework should require that 
AI applications are assessed, maintained, and updated over their 
lifetime to ensure continued clinical safety and effectiveness, but 
also technological integrity. AI tools must be reviewed to make sure 
they are still valid, reliable, and accurate as they learn.

    AI health tools must be both (1) clinically and (2) technologically 
sound. Validity, reliability, and accuracy are required on both levels. 
The ACS believes that clinical experts, such as physician 
informaticists, are best positioned to determine whether data used in 
AI applications are the best quality and the most appropriate from a 
clinical perspective, and to monitor the technology for clinical 
validity as it evolves over time. The FDA should engage advisory groups 
for clinical and technical excellence that are conditionally or 
programmatically defined with cross specialty expertise, in order to 
ensure an AI tool is reliable and valid on multiple levels.

    In addition, physicians and specialty societies are well-equipped 
to assist the FDA as they consider what tools and/or information would 
be most useful in driving improvements and advancements in clinical 
care and the format in which the information should be expressed. 
Understanding where physicians see the benefits of AI in their 
practices is crucial to help build trust in the capabilities of the 
technology, leading to broader utilization. Likewise, understanding why 
physicians decide not to use or do not trust certain health 
technologies in their clinical practices would also be useful as 
regulators certify products for real-time use.
                     Validation of AI Health Tools
    Validation of digital health tools, including AI applications, is 
truly essential to physician trust, improving care delivery, and 
avoiding patient harm. There are many aspects to validation. Validation 
is necessary in terms of the technology/algorithm used, the patient 
population on which the device is trained, whether the outcomes are 
accurate and unbiased, and whether the tool is appropriate for the 
specific setting in which it is used. While the FDA is responsible for 
regulating many digital health tools, the FDA should work in 
collaboration with an appropriate specialty society, clinical expert, 
or physician informaticist to reinforce physician trust in the tool. 
Use and validation of digital health tools are two of the most critical 
areas for physicians to successfully realize the potential of these 
technologies. In the case of AI tools, it is especially important to 
emphasize that the data used to train algorithms is critical to their 
validity and reliability. The data should be high quality, diverse, 
valid, and representative of the uses for which it will be applied. 
While the data used to train the AI-based tool is important, it is 
equally important that up-to-date data are used to retrain such tools 
so that the algorithms themselves remain current, reliable, and valid. 
Additionally, Congress could take steps to create a government-
sponsored relationship with a synthetic patient environment, a free, 
open-source test bed that could be used to test the clinical and 
technical aspects of any AI application.

    At the facility level, institutions should have their own 
governance and structure for AI-based tools, including pathways for 
user feedback and timely responses to feedback as physicians have 
concerns or encounter issues. Liability risks and uncertainty about who 
is responsible for issues with certain algorithms, outputs, or user 
errors can hinder implementation of these tools. Before leveraging AI 
technology, institutions should be confident in the quality of the tool 
and its capabilities.

    Ultimately, digital health tools should reduce, not add to, a 
physician's cognitive burden. AI technology can enhance a physician's 
ability to gather, process, and exchange knowledge and ultimately 
improve patient care when the tool is developed using semantic data 
exchange standards in alignment with validated clinical workflows. This 
enables these tools to provide the right information at the right time 
and seamless incorporation into the clinical workflow.
                            Mitigating Bias
    It is critical to consider bias when designing, training, and using 
AI health tools. Various forms of bias based on race, ethnicity, 
gender, sexual orientation, socioeconomic status, and more can be 
perpetuated through the use of certain advanced digital health tools, 
especially those using AI. Bias can manifest in digital tools in 
various ways. For instance, if an AI algorithm is trained with data 
that fails to include all patient populations for which the tool is 
used, this would introduce inherent bias. Bias could also be 
unintentionally written into algorithms, leading to outputs that could 
have a biased impact on certain populations. The context in which the 
tool is used should also be considered when trying to avoid bias. If 
the tool were trained on a certain population for a specific purpose 
and is applied in a different setting with a different patient 
population with varying risk factors, this could also result in bias.

    While we will be unable to eliminate bias completely, steps can be 
taken to validate the quality of the data and reduce bias in AI 
algorithms. As discussed above, the need for trusted and complete data 
sources for AI tools is critically important, and ensuring the 
algorithms and data are properly validated is crucial. If the tool is 
not developed and trained with data that are representative of the 
patient population the physicians serve, the data outputs could be 
inaccurate or biased. To lower the risk of bias, the use of trusted and 
complete data sources in development and testing stages is extremely 
important. The data sources, methods of data collection, data quality, 
data completeness, whether the data are fit for purpose, and how the 
data are analyzed, must all be considered.

    In addition, building a framework through collaboration with 
stakeholders possessing clinical and technical expertise that guides 
the development and validation of algorithms can assist in reducing 
bias if done with a high level of rigor. The framework could include a 
checklist with certain steps that developers would have to complete to 
ensure algorithms have gone through rigorous testing and validation. By 
following the processes and validation criteria set forth by the 
framework, developers can ensure that the algorithms are free of 
significant bias and will output accurate predictions. This type of 
framework coupled with external validation that utilizes data across 
various practice settings and demographics, can also be applied 
periodically following the implementation of the tool, to ensure that 
as the algorithms take in real-time data, they are still achieving a 
high-level of accuracy.
                        Safe and Appropriate Use
    The FDA holds an important role in ensuring the safe and 
appropriate application of AI technology. Physicians can place greater 
trust in devices using digital technology if these devices have 
received FDA clearance or approval. FDA approval is also important for 
patient trust. Patients should know when they are receiving AI-informed 
care, and that it comes from validated instruments.

    However, the ACS believes strongly that AI tools should never 
replace a physician's clinical judgment; rather, the goal of these and 
other digital health tools is to enhance physicians' knowledge and 
augment their cognitive efforts. Medical care relies not only on 
science, but on the capabilities of the care team, the local resources, 
and the goals of the patient. Care is highly personalized and requires 
a physician-patient interface where the medical knowledge is 
contextualized and personalized in a trusted manner for each patient 
and physicians are empowered to make clinical decisions. As we assess 
AI applications, part of the assessment must evaluate the insertion of 
AI knowledge artifacts into a human workflow. It is the AI 
application's utility in the workflow that makes a difference in the 
informed nature of care, in the diagnosis, and in the treatment.
                           Concluding Remarks
    The ACS thanks the HELP Primary Health and Retirement Security 
Subcommittee for convening this important hearing on considerations for 
the use of AI in health care. In order to best serve patients and the 
physicians who care for them, it is essential that AI tools are trained 
and maintained with high quality, diverse, valid, and representative 
data; are regularly assessed for continued accuracy and reliability; 
that regulators engage clinical experts in the assessment of AI health 
tools; and that physicians' clinical judgment remains paramount. The 
ACS looks forward to continuing to work with lawmakers on these 
important issues. For questions or additional information, please 
contact Carrie Zlatos with the ACS Division of Advocacy and Health 
Policy at [email protected].
                                 ______
                                 
    national nurses united, written statement for ai insight forum: 
                               workforce
    Thank you, Majority Leader Schumer and Senators Heinrich, Rounds, 
and Young, for inviting me to participate in this important 
conversation about the impact of artificial intelligence (AI) on the 
workforce. My name is Bonnie Castillo, I'm a registered nurse and the 
Executive Director of National Nurses United, the nation's largest 
union and professional association of registered nurses, representing 
nearly 225,000 nurses across the country.

    Our members primarily work in acute care hospitals, where they are 
already experiencing the impacts of artificial intelligence and other 
data-driven technologies. The decisions to implement these technologies 
are made without the knowledge of either nurses or patients and are 
putting patients and the nurses who care for them at risk. AI 
technology is being used to replace educated registered nurses 
exercising independent judgment with lower cost staff following 
algorithmic instructions. However, patients are unique and health care 
is made up of non-routine situations that require human touch, care, 
and input. In my comments, I will demonstrate the risks that AI poses 
to patient care and to nursing practice and propose key legislative and 
regulatory steps that must be taken to utilize the precautionary 
principle--an idea at the center of public health analysis--in order to 
protect patients from harm.
AI and data-driven technologies have already been implemented at acute-
                   care hospitals around the country.
    The health care industry has been implementing various forms of 
artificial intelligence and other data driven technologies for a number 
of years. The nursing workforce is therefore uniquely situated to 
provide feedback and analysis on the impacts that these technologies 
have had on workers and on patients.

    Technologies that have already been implemented include the 
clinical decision support systems embedded in electronic health records 
(EHRs), acute-care hospital-at-home and remote patient monitoring 
schemes, virtual acute-care nursing, automated worker surveillance and 
management (AWSM) and staffing platforms that support gig nursing, and 
increasingly, emerging technologies like generative AI systems.

    Through our experiences working with and around these systems, it 
is clear to registered nurses that hospital employers have used these 
technologies in attempts to outsource, devalue, deskill, and automate 
our work. Doing so increases their profit margins at the expense of 
patient care and safety.

    Many of these technologies are ostensibly designed to improve 
patient care, but in fact they track the activities of health care 
workers and are designed to increase billing of patients and insurers. 
Automated monitoring technology feeds into algorithmic management 
systems that make unreasonable and inaccurate decisions about patient 
acuity, staffing, and care with the goal of lowering labor costs. As a 
result, nurses and other health care professionals are expected to work 
faster, accept more patients per nurse than is safe, and reduce nurses' 
use of independent professional skill and judgment. Tracking nurses is 
designed to facilitate routinization--breaking the holistic process of 
nursing into discrete tasks--with the goal of replacing educated 
registered nurses exercising independent judgment with lower-cost staff 
following algorithmic instructions.

    Employers generally assert that these powerful technologies are 
just updates of older technology that has long been in the workplace, 
such as treating computer-vision aided cameras the same as traditional 
security cameras, or EHRs as electronic versions of old paper medical 
records. However, these technologies are much more than modern 
iterations of well understood tools and are being introduced widely 
despite lack of robust research showing safety, reliability, 
effectiveness, and equity. Rather, AWSM technologies pull vast and 
diverse data from an entire ecosystem of monitoring equipment and 
process this information through opaque algorithms that then make 
clinical and employment decisions. There is no current method for 
evaluating AI and no requirement for external validation; it is clear 
to nurses that AI technologies are being designed to be a replacement 
for skilled clinicians as opposed to a tool that many clinicians would 
find helpful.

    A ``nursing shortage'' is often the justification for the 
deployment of this technology. However, the United States is not 
experiencing a nursing shortage, only a shortage of nurses willing to 
risk their licenses and the safety of their patients by working under 
the unsafe conditions the hospital industry has created. By 
deliberately refusing to staff our Nation's hospital units with enough 
nurses to safely and optimally care for patients, the hospital industry 
has driven nurses away from direct patient care. When we add the 
complete failure by the hospital industry to protect the health and 
safety of nurses and patients during the COVID pandemic, many nurses 
have made the difficult decision to stop providing hands-on nursing 
care to protect themselves, their nursing licenses, their families, and 
their patients.

    Except for a small handful of states, there are sufficient numbers 
of registered nurses to meet the needs of the country's patients, 
according to a 2017 U.S. Department of Health and Human Services report 
on the supply and demand of the nursing workforce from 2014 to 2030. 
\1\ Some states will even have surpluses. The report identifies an 
inequitable distribution of nurses across the country, rather than a 
nationwide shortage. In fact, there are 1.2 million RNs with active 
licenses that are not working as RNs across the United States, and the 
exodus of RNs from the hospital bedside is ongoing. \2\
---------------------------------------------------------------------------
    \1\  Health Resources and Services Administration. 2017. ``National 
and Regional Supply and Demand Projections of the Nursing Workforce: 
2014-2030.'' U.S. Department of Health and Human Services. https://
bhw.hrsa.gov/sites/default/files/ bureau-health-workforce/data-
research/ nchwa-hrsa-nursing-report.pdf.
    \2\  NNU has several recent reports on the industry-created 
staffing crisis and the failure to provide a safe and health work 
environment. See Protecting Our Front Line: Ending the Shortage of Good 
Nursing Jobs and the Industry-created Unsafe Staffing Crisis available 
at: https://www.nationalnursesunited.org/protecting-our-front-line-
report; Workplace Violence and COVID-19 in Health Care: How the 
Hospital Industry Created an Occupational Syndemic available at: 
https://www.nationalnursesunited.org/sites/default/files/nnu/documents/
1121--WPV--HS--Survey--Report--FINAL.pdf; and Deadly Shame: Redressing 
the Devaluation of Registered Nurse Labor Through Pandemic Equity 
available at: https://www.nationalnursesunited.org/campaign/
deadlyshame-report.
---------------------------------------------------------------------------
   AI and data-driven technologies are negatively impacting nursing 
practice and limiting the use of nurses' professional judgment. This is 
                  putting patients and nurses at risk.
    Registered nurses have extensive education and clinical experience 
that enables us to provide safe, effective, and equitable patient care. 
These standards of nursing care can only be accomplished through 
continuous in-person assessments of a patient by a qualified licensed 
registered nurse. Every time an RN interacts with a patient, we perform 
skilled assessments and evaluations of the patient's overall condition. 
These assessments are fundamental to ensuring that the patient receives 
optimal care. Health care is not one-size-fits-all. Nurses must be able 
to alter expected treatment plans based on the unique circumstances of 
the patient and the patient's wishes and values and to use their 
experience and nursing judgment to provide the best course of care. 
Indeed, we are ethically and legally required to do so. We should not 
be pressured by management to conform to decisions made by algorithms 
that are prone to racial and ethnic bias as well as other errors that 
arise when one applies information that may apply to a population but 
not to individual patients.

    We are already experiencing the degradation and devaluation of our 
nursing practice through the use of technologies that have been 
implemented in recent years. For example, health care employers are 
using EHRs to replace RN judgment by automating the creation of nursing 
care plans and assigning patient acuity levels. RNs develop the nursing 
skill and judgment necessary to accurately evaluate a patient and 
create an effective care plan through education and experience in the 
clinical setting. That human skill and judgment cannot be replaced by 
an algorithm without serious consequences for safe patient care.

    The highly skilled work of a registered nurse, by its very 
definition, cannot be automated. When hospital employers use technology 
to override and limit the professional judgment of nurses and other 
health care workers, patients are put at risk. In fact, patients have 
already been harmed by AWSM systems, including at least four deaths in 
the VA health care system linked to errors made by Cerner's electronic 
health records. \3\
---------------------------------------------------------------------------
    \3\  Rodriguez, S. (2023, March 21) VA Admits Oracle Cerner EHRM 
Issues Contributed to 4 Veteran Deaths. EHR Intelligence, Adoption and 
Implementation News. https://ehrintelligence.com/news/va-admits-oracle-
cerner-ehrm-issues-contributed-to-4-veteran-deaths. Accessed October 
28, 2023.

    One example that illustrates this risk can be found in efforts to 
decrease the incidence of sepsis, a complication from infection that 
carries a high degree of mortality. \4\ One AI Early Warning System 
(EWS) analyzed patient data with the goal of identifying patients with 
a substantial risk of developing sepsis. The EWS was widely implemented 
at hundreds of hospitals throughout the country. \5\ However, when this 
sepsis EWS underwent external validation, researchers found that the 
program missed over 67 percent of sepsis cases. \6\ The authors of this 
study concluded of the EWS that ``it appears to predict sepsis long 
after the clinician has recognized possible sepsis and acted on that 
suspicion.''
---------------------------------------------------------------------------
    \4\  Leng, Y., Gao, C., Li, F., Li, E., & Zhang, F. (2022). The 
Supportive Role of International Government Funds on the Progress of 
Sepsis Research During the Past Decade (2010-2019): A Narrative Review. 
Inquiry : a journal of medical care organization, provision and 
financing, 59, 469580221078513. https://doi.org/10.1177/
00469580221078513.
    \5\  Wong, A., Otles, E., Donnelly, J. P., Krumm, A., McCullough, 
J., DeTroyer-Cooley, O., Pestrue, J., Phillips, M., Konye, J., Penoza, 
C., Ghous, M., & Singh, K. (2021). External Validation of a Widely 
Implemented Proprietary Sepsis Prediction Model in Hospitalized 
Patients. JAMA Internal Medicine, 181(8), 1065-1070. https://doi.org/
10.1001/jamainternmed.2021.2626.
    \6\  Schertz, A. R., Lenoir, K. M., Bertoni, A. G., Levine, B. J., 
Mongraw-Chaffin, M., & Thomas, K. W. (2023). Sepsis Prediction Model 
for Determining Sepsis vs SIRS, qSOFA, and SOFA. JAMA Network Open, 
6(8), e2329729-e2329729. https://doi.org/10.1001/
jamanetworkopen.2023.29729.

    Employers are also using AI to side-step vital RN-to-RN 
communication during patient hand-off and transfer of duty and to 
automate patient assignments. Patient transfers are one of the most 
dangerous points in a patient's care. Disruptions in communication can 
lead to life-threatening errors and omissions. Our nurses report that 
AI-generated communication leaves out important information while 
overburdening nurses with information that is not essential, forcing 
nurses to waste precious time searching medical records for information 
that could have been completely and accurately communicated during a 
brief person-to-person interaction. The use of AI to automate patient 
transfers has resulted in patients being sent to the wrong level of 
care because an RN was not involved in comparing the patients' needs 
with the resources available on the unit. This automation has also 
resulted in situations where patients were transferred to a room, and 
---------------------------------------------------------------------------
the RN did not know that they were there.

    This removal of human communication puts both nurses and patients 
at risk. At one member's hospital in Michigan, the AI system's failure 
to relay basic information, such as the patient being positive for 
COVID or the patient having low white blood cell counts, have resulted 
in nurses needlessly exposing themselves to the virus or 
immunocompromised patients being placed on COVID or flu units.

    We have grave concerns about the fundamental limits on the ability 
of algorithms to meet the needs of individual patients, especially when 
those patients are part of racial or ethnic groups that are less well 
represented in the data. Nurses know that clinical algorithms can 
interfere with safe, therapeutic health care that meets the needs of 
each individual patient. While clinical algorithms may purport to be an 
objective analysis of the scientific evidence, in fact their 
development involves significant use of judgment by their creators and 
creates the opportunity for creator bias--from conflicts of interest, 
limited perspective on the lives of racial minorities, or implicit 
racial bias--to be introduced into the algorithm.

    Even under optimal conditions, clinical algorithms are based on 
population-level data and are not appropriate for every patient. In 
addition, the way clinical algorithms are implemented, regardless of 
how they are created, often inappropriately constrains the use of 
health care professionals' judgment, which can worsen the impact of a 
biased algorithm. It is essential that the use of race or ethnicity in 
clinical algorithms is scrutinized, including whether race or ethnicity 
are serving as proxies for other factors that should be identified 
explicitly. However, it will not be possible to eliminate the use of 
judgment or the need for individual assessment in care decisions. These 
judgments should be made at the bedside between the patient and their 
health care provider, not by a committee based on population-level 
data.
 The deployment of artificial intelligence should be subjected to the 
                     Precautionary Principle test.
    Nurses believe that we must approach any change in health care 
using the precautionary principle; the proposition that, as Harvard 
University Professor A. Wallace Hayes explains, ``When an activity 
raises threats of harm to human health or the environment, 
precautionary measures should be taken even if some cause-and-effect 
relationships are not fully established scientifically.''

    The deployment of artificial intelligence should be subjected to 
this precautionary principle test, especially when it comes to patient 
care. Policymakers must ensure that the burden of proof rests on 
healthcare employers to demonstrate that these technologies are safe, 
effective, and equitable under specific conditions and for the specific 
populations in which they are used, before they are tested on human 
beings. It is imperative that the usage and process of deployment be as 
transparent as possible, and that issues of liability are discussed 
early and often. As nurses, we believe it is unacceptable to sacrifice 
any human life in the name of technological innovation. Our first duty 
is to protect our patients from harm, and we vehemently oppose any risk 
to patient health or safety and quality of care inflicted by unproved, 
untested technology.

    Nothing about artificial intelligence is inevitable. How AI is 
developed and deployed is the result of human decisions, and the 
impacts of AI--whether it helps or harms health care workers and the 
patients we serve--depends on who is making those decisions. To 
safeguard the rights, safety, and well-being of our patients, the 
healthcare workforce and our society, workers and unions must be 
involved at every step of the development of data-driven technologies 
and be empowered through strengthened organizing and bargaining rights 
to decide whether and how AI is deployed in the workplace.

    NNU urges the Federal Government to pursue a regulatory framework 
that safeguards the clinical judgment of nurses and other health care 
workers from being undermined by AI and other data-driven technologies.

    NNU recommends that Congress take the following actions:

          1. All statutes and regulations must be grounded in the 
        precautionary principle. NNU urges Congress to develop 
        regulations that require technology developers and health care 
        providers to prove that AI and other data-driven digital 
        technologies are safe, effective, and therapeutic for both a 
        specific patient population and the health care workforce 
        engaging with these technologies before they are deployed in 
        real-world care settings. This goes beyond racial, gender, and 
        age-based bias. As each patient has unique traits, needs, and 
        values, no AI can be sufficiently fine-tuned to predict the 
        appropriate diagnostic, treatment, and prognostic for an 
        individual patient. Liability for any patient harm associated 
        with failures or inaccuracies of automated systems must be 
        placed on both AI developers and health care employers and 
        other end users. Patients must provide informed consent for the 
        use of AI in their treatment, including notification of any 
        clinical decision support software being used.

          2. Privacy is paramount in health care--Congress must 
        prohibit the collection and use of patient data without 
        informed consent, even in so-called deidentified form. There 
        are often sufficient data points to reidentify so-called de-
        identified patient information. Currently, health care AI 
        corporations institute gag clauses on users' public discussions 
        of any issues or problems with their products or cloak the 
        workings of their products in claims of proprietary 
        information. Such gag clauses must be prohibited by law. 
        Additionally, health care AI corporations and the health care 
        employers that use their products regularly claim that 
        clinicians' right to override software recommendations makes 
        them liable for any patient harm while limiting their ability 
        to fully understand and determine how they are used. Thus, 
        clinicians must have the legal right to override AI. For 
        nurses, this means the right to determine nurse staffing and 
        patient care based on our professional judgment.

          3. Patients' informed consent and the right to clinician 
        override are not sufficient protections, however. Nurses must 
        have the legal right to bargain over the employer's decision to 
        implement AI and over the deployment and effects of 
        implementation of AI in our workplace. In addition to statutes 
        and regulations codifying nurses' and patients' rights 
        directly, Congress needs to strengthen workers' rights to 
        organize, collectively bargain, and engage in collective action 
        overall. Health care workers should not be displaced or 
        deskilled as this will inevitably come at the expense of both 
        patients and workers. At the regulatory level, the Centers for 
        Medicare and Medicaid Services must require health care 
        employers to bargain over any implementation of AI with labor 
        unions representing workers as a condition of participation.

          4. Congress must protect workers from AI surveillance and 
        data mining. Congress must prohibit monitoring or data mining 
        of worker-owned devices. Constant surveillance can violate an 
        employee's personal privacy and personal time. It can also 
        allow management to monitor union activity, such as 
        conversations with union representatives or organizing 
        discussions, which chills union activity and the ability of 
        workers to push back against dangerous management practices. 
        The Federal Government must require that employers make clear 
        the capabilities of this technology and provide an explanation 
        of how it can be used to track and monitor nurses. 
        Additionally, Congress must prohibit the monitoring of worker 
        location, data, or activities during off time in devices used 
        or provided by the employer. Employers should be restricted 
        from collecting biometric data or data related to workers' 
        mental or emotional states. Finally, employers should be 
        prohibited from disciplining an employee based on data gathered 
        through AI surveillance or data mining, and AI developers and 
        employers should also be prohibited from selling worker data to 
        third parties.

    Thank you again for inviting me to participate in this discussion. 
These comments are by no means an exhaustive list of concerns. National 
Nurses United looks forward to future conversations on this topic, and 
to working with Congress to ensure that the Federal Government develops 
effective regulations that will protect nurses and patients from the 
harm that can be caused by artificial intelligence and data-driven 
technologies in health care.
                                 ______
                                 
                            National Nurses United,
                                            Washington, DC,
                                                  November 8, 2023.
Hon. Ed Markey, Chairman,
Hon. Roger Marshall, Ranking Member,
U.S. Senate Committee on Health, Education, Labor, and Pensions,
428 Senate Dirksen Office Building,
Washington, DC 20510.

    Dear Chairman Markey, Ranking Member Marshall, and Members of the 
Committee:

    In light of the Committee's hearing today on ``Avoiding a 
Cautionary Tale: Policy Considerations for Artificial Intelligence in 
Health Care,'' I write to you on behalf of National Nurses United, the 
nation's largest union and professional association of registered 
nurses (RNs) to discuss the ways that our nearly 225,000 members are 
already experiencing the impacts of artificial intelligence (AI) and 
data-driven technologies at the hospital bedside.

    The decisions to implement these technologies are often made 
without the knowledge of either nurses or patients, and are putting 
patients and the nurses who care for them at risk. AI technology is 
being used to replace educated registered nurses exercising independent 
judgment with lower-cost staff following algorithmic instructions. 
However, patients are unique and health care is made up of non-routine 
situations that require human touch, care, and input. AI poses 
significant risks to patient care and to nursing practice, and all 
legislative and regulatory steps taken must utilize the precautionary 
principle--an idea at the center of public health analysis--in order to 
protect patients from harm.

    NNU urges the Federal Government to pursue a regulatory framework 
that safeguards the clinical judgment of nurses and other health care 
workers from being undermined by AI and other data-driven technologies. 
NNU recommends that Congress take the following actions:

          All statutes and regulations must be grounded in the 
        precautionary principle. NNU urges Congress to develop 
        regulations that require technology developers and health care 
        providers to prove that AI and other data-driven digital 
        technologies are safe, effective, and therapeutic for both a 
        specific patient population and the health care workforce 
        engaging with these technologies before they are deployed in 
        real-world care settings.

          Privacy is paramount in health care--Congress must 
        prohibit the collection and use of patient data without 
        informed consent, even in so-called deidentified form, as there 
        are often sufficient data points to reidentify so-called de-
        identified patient information.

          Nurses must have the legal right to bargain over the 
        employer's decision to implement AI and over the deployment and 
        effects of implementation of AI in our workplace. In addition 
        to statutes and regulations codifying nurses' and patients' 
        rights directly, Congress needs to strengthen workers' rights 
        to organize, collectively bargain, and engage in collective 
        action overall.

          Congress must protect workers from AI surveillance 
        and data mining. Congress must prohibit monitoring or data 
        mining of worker-owned devices. Constant surveillance can 
        violate an employee's personal privacy and personal time. It 
        can also allow management to monitor union activity, such as 
        conversations with union representatives or organizing 
        discussions, which chills union activity and the ability of 
        workers to push back against dangerous management practices.

          Congress must prohibit the monitoring of worker 
        location, data, or activities during off time in devices used 
        or provided by the employer. Employers should be restricted 
        from collecting biometric data or data related to workers' 
        mental or emotional states.

    These comments are by no means an exhaustive list of concerns, and 
I am attaching to this letter recent testimony that was given by our 
Executive Director, Bonnie Castillo, RN, at Majority Leader Schumer's 
most recent AI Insight Forum. National Nurses United looks forward to 
future conversations on this topic, and to working with Congress to 
ensure that the Federal Government develops effective regulations that 
will protect nurses and patients from the harm that can be caused by 
artificial intelligence and data-driven technologies in health care.

            Sincerely,
                                           Amirah Sequeira,
                            National Government Relations Director,
                                            National Nurses United.
                                 ______
                                 
             premier inc, written statement for the record
    On behalf of Premier Inc. and the providers we serve, we thank the 
leadership of the Committee on Health, Education, Labor, and Pensions 
for their commitment to examining the ways in which technology can be 
leveraged in healthcare to reduce costs, improve quality and access, 
alleviate workforce shortages and advance health equity. Premier 
appreciates the opportunity to share our recommendations and insights 
related to the role of Artificial Intelligence (AI) in healthcare and 
looks forward to working with Congress on these issues.
                     I. Background on Premier Inc.
    Premier is a leading healthcare improvement company, uniting an 
alliance of more than 4,350 U.S. hospitals and approximately 300,000 
continuum of care providers to transform healthcare. With integrated 
data and analytics, collaboratives, supply chain solutions, consulting 
and other services, Premier enables better care and outcomes at a lower 
cost. Premier plays a critical role in the rapidly evolving healthcare 
industry, collaborating with members to co-develop long-term 
innovations that reinvent and improve the way care is delivered to 
patients nationwide. Headquartered in Charlotte, NC, Premier is 
passionate about transforming American healthcare.

    Premier is already leveraging AI to move the needle on cost and 
quality in healthcare, including:

          Stanson Health, a subsidiary of Premier, designs 
        technology to reduce low-value and unnecessary care. Stanson 
        leverages real-time alerts and relevant analytics to guide and 
        influence physician's decisions through clinical decision 
        support technology, providing higher-quality, lower-cost 
        healthcare. Stanson's mission is to measurably improve the 
        quality and safety of patient care while reducing the cost of 
        care by enabling context-specific information integrated into 
        the provider workflow.

          Premier's PINC AI Applied Sciences (PAS) is a trusted 
        leader in accelerating healthcare improvement through services, 
        data, and scalable solutions, spanning the continuum of care 
        and enabling sustainable innovation and rigorous research. 
        These services and real-world data are valuable resources for 
        the pharmaceutical, device and diagnostic industries, academia, 
        Federal and national healthcare agencies, as well as hospitals 
        and health systems. Since 2000, PAS researchers have produced 
        more than 1,000 publications which appear in 264 scholarly, 
        peer-reviewed journals, covering a wide variety of topics such 
        as population-based analyses of drugs, devices, treatments, 
        disease states, epidemiology, resource utilization, healthcare 
        economics and clinical outcomes.

          Conductiv, a Premier purchased services subsidiary, 
        harnesses AI to help hospitals and health systems streamline 
        contract negotiations, benchmark service providers and manage 
        spend based on historical supply chain data. Conductiv also 
        works to enable a healthy, competitive services market by 
        creating new opportunities for smaller, diverse suppliers and 
        helping hospitals invest locally across many different 
        categories of their business.

    Premier has thought critically about the potential legislative and 
regulatory framework for AI in healthcare and recently published an 
Advocacy Roadmap for AI in Healthcare. \1\ While Premier believes that 
AI can and should play a critical role in advancing healthcare and 
spurring innovation, Premier also believes that AI cannot and should 
not replace the practice of medicine
---------------------------------------------------------------------------
    \1\  See Appendix A.

    Additional detailed comments and recommendations, based on our 
depth of experience in using AI in healthcare, are included below.
      II. Protecting Patient Rights, Safety and National Security
    Premier supports the responsible development and implementation of 
AI tools across all segments of American industry--particularly in the 
healthcare industry--where numerous applications of this technology are 
already improving patient outcomes and provider efficiency. Premier 
sees a defined role for Congress in advancing clear statutory 
guidelines that will allow providers and payers to deploy AI technology 
to its full potential, while still protecting individual rights and 
safety.

    Premier strongly supports AI policy guardrails that include 
standards around transparency and trust, bias and discrimination, risk 
and safety, and data use and privacy.
                         Promoting Transparency
    Trust--among patients, providers, payers and suppliers--is critical 
to the development and deployment of AI tools in healthcare settings. 
To earn trust, AI tools must have an established standard of 
transparency. Recent policy proposals, including those proffered by the 
Office of the National Coordinator for Health Information Technology 
(ONC), suggest transparency can be achieved through a ``nutrition 
label'' model. This approach seeks to demystify the black box of an AI 
algorithm by listing the sources and classes of data used to train the 
algorithm. Unfortunately, some versions of the ``nutrition label'' 
approach to AI transparency fail to acknowledge that when an AI tool is 
trained on a large, complex dataset, and is by design intended to 
evolve and learn, the initial static inputs captured by a label do not 
provide accurate insights into an ever-changing AI tool. Further, 
overly intrusive disclosure requirements around data inputs or 
algorithmic processes could force AI developers to publicly disclose 
intellectual property or proprietary technology, which would stifle 
innovation.

    Premier recommends that AI technology in healthcare should be held 
to a standardized, outcomes-focused set of metrics, such as accuracy, 
bias, false positives, inference risks, recommended use and other 
similarly well-defined values. Outcomes, rather than inputs, are where 
AI technologies hold potential to drive health or harm. Thus, Premier 
believes it is essential to focus transparency efforts on the accuracy, 
reliability and overall appropriateness of AI technology outputs in 
healthcare to ensure that the evolving tool does not produce harm.
                            Mitigating Risks
    It is important to acknowledge potential concerns around biased or 
discriminatory outcomes resulting from the use of AI tools in 
healthcare, as well as potential concerns around patient safety. 
Fortunately, there are several best practices that Premier and others 
at the forefront of technology are already following to mitigate these 
risks. First, we reiterate Premier's recommendation for standardized, 
outcomes-based assessments of AI technologies' performance, which would 
hold AI developers and vendors responsible for monitoring for any 
biased outcomes. Performance reporting could incorporate results from 
disparity testing before and after technology deployment to ensure that 
bias stays out of the AI ``machinery.''

    Premier also supports the development of a standardized risk 
assessment, drawing on the extensive groundwork already laid by the 
National Institute of Standards and Technology (NIST) in the AI Risk 
Management Framework. An AI risk assessment should identify potential 
risks that the AI tool could introduce, potential mitigation 
strategies, detailed explanations of recommended uses for the tool and 
risks that could arise should the tool be used inappropriately. Premier 
urges Congress to consider a nuanced approach to risk level 
classification for the use of AI tools in healthcare. While there are 
some clinical applications of AI technology that could be considered 
high risk, it is certainly true that not all healthcare use cases carry 
the same level of risk. For example, the use of AI technology to reduce 
administrative burden or improve workflow in a hospital carries a much 
different level of risk and very different safety considerations than 
the use of AI technology to treat patients. Premier also supports the 
development of standardized intended use certifications or reporting 
requirements for AI technologies, which would prevent new systems from 
producing harmful outcomes due to use outside of the technology's 
design.

    Finally, Premier understands the importance of data standards, 
responsible data use and data privacy in the development and deployment 
of AI technology. Data standards should specifically focus on objective 
assessment of potential sources of bias or inaccuracy introduced 
through poor dataset construction, cleaning or use. These may include, 
but are not limited to, appropriately representative datasets, bias in 
data collection (e.g., subjectivity in clinical reports) or introduced 
by instrument performance or sensitivity (e.g., pulse oximetry devices 
producing inaccurate measurements of blood oxygen levels in patients 
with darker skin), bias introduced during curation (e.g., datasets with 
systemically introduced nulls and their correlation, such as failure to 
pursue treatment due to lack of ability to pay), and training and test 
data that is appropriately applicable to various patient subpopulations 
(e.g., data that sufficiently represents symptoms or characteristics of 
a condition for each age/gender/race of patient that the tool will be 
used to treat). Premier also supports the establishment of guidelines 
for proper data collection, storage and use that protect patient rights 
and safety. This is particularly important given the sensitivity of 
health data.
           III. Drug Research, Development and Manufacturing
    One critical area where we would highlight the transformative 
potential of AI is drug research, development and production. Congress 
and the Administration must work collaboratively to pre-empt 
uncertainty and responsibly govern the deployment of emerging 
technologies in these areas in a patient-centered manner. Premier 
specifically recommends timely legislative and/or regulatory guidance 
for the use of AI in clinical trials and drug manufacturing.
                Opportunities for AI in Clinical Trials
    Premier sees particular promise for the use of AI in streamlining 
processes and expanding patient access in clinical trials.

    Identifying trial participants: One of the biggest challenges 
facing health systems that seek to participate in or enroll patients in 
clinical trials is identifying and enrolling patients in a timely 
manner. Delays in meeting trial enrollment targets and timelines can 
increase the cost of the trial. AI tools have the ability to analyze 
the extensive universe of data available to healthcare systems in order 
to identify patients that may be a match for clinical trials that are 
currently recruiting. This application of natural language processing 
systems can make developing new drugs less expensive and more 
efficient, while also improving patient and geographical diversity in 
trials to address health equity.

    Generating synthetic data: AI, once trained on real-world data 
(RWD), has the capability to generate synthetic data and patient 
profiles that share characteristics with the target patient population 
for a clinical trial. This synthetic data can be used to simulate 
clinical trials to optimize trial designs, model the possible effects 
or range of results of a novel intervention, and predict the 
statistical significance and magnitude of effects or biases. 
Ultimately, synthetic patient data can help optimize trial design, 
improve safety and reduce cost for decentralized clinical trials. 
Further, synthetic control arms in clinical trials can help increase 
trial enrollment by easing patient fears that they will receive a 
placebo. To encourage continued innovation, clear guidance is needed 
from Congress and/or the Food and Drug Administration (FDA) on the 
process for properly obtaining consent from patients for the use of 
their RWD to produce AI-generated synthetic control arms in clinical 
trials.
               Opportunities for AI in Drug Manufacturing
    Premier sees potential for AI to transform at least three key 
segments of the drug manufacturing process: component supply chain, 
advanced process control, and quality monitoring.

    Supply chain visibility: Premier believes the application of AI can 
advance national security by helping build a more efficient and 
resilient healthcare supply chain. Specifically, AI can enable better 
demand forecasting for products and services, such as drug components, 
through analysis of historical and emerging clinical and patient data. 
As the COVID-19 pandemic demonstrated, the ability to understand and 
react to shortages poses a critical challenge to healthcare providers; 
AI enables better planning and response time to national or regional 
emergencies. AI can drive better inventory management by automating the 
monitoring and replenishment of inventory levels. Healthcare providers 
can leverage AI to better manage suppliers through faster more 
efficient contracting processes and by monitoring of supplier key 
performance metrics. As Premier works to combat drug shortages, the 
most effective remedies begin with supply chain visibility and reliable 
predictions that allow manufacturers to plan for and respond to 
shortages or disruptions--this crucial element of the drug 
manufacturing process presents a key value-add opportunity for AI 
technology.

    Advanced process control: Another significant value-add for AI in 
the drug manufacturing process is in the development and optimization 
of advanced process control systems (APCs). Process controls typically 
regulate conditions during the manufacturing process, such as 
temperature, pressure, feedback and speed. However, a recent report 
found that industrial process controls are overwhelmingly still 
manually regulated, and less than 10 percent of automated APCs are 
active, optimized and achieving the desired objective. These 
technologies are now ready to transform drug manufacturing on a 
commercial scale; however, challenges still remain to widespread 
adoption. Premier strongly believes that the FDA should issue clear 
guidance that supports the industry-wide transition to AI-powered APCs. 
Such technologies offer drug manufacturers the opportunity to assess 
the entire set of input variables and the effect of each on system 
performance and product quality, automating plant-wide optimization. 
This application of AI technology can transform the physical 
manufacturing of drugs and pharmaceuticals, leading to cost-savings and 
increased resiliency, transparency and safety in the drug supply chain.

    Quality monitoring: AI can also provide value-add to drug 
manufacturing in the field of quality monitoring and reporting. Current 
manufacturing processes provide an immense volume of data from imagers 
and sensors that, if processed and analyzed more quickly and 
efficiently, could transform approaches to safety and quality control. 
AI models trained on this data can be used to predict malfunctions or 
adverse events. AI can also perform advanced quality control and 
inspection tasks, using data feeds to quickly identify and correct 
product defects or catch quality issues with products on the 
manufacturing line. Taken together, these capabilities can improve both 
the accuracy and speed of inspections and quality control, helping 
companies to reliably meet regulatory requirements and avoid costly 
delays that disrupt the drug supply chain.
          IV. Training the Healthcare Workforce of the Future
    Premier believes technology can and should work alongside and learn 
from healthcare professionals, but current technology will not and 
should not replace the healthcare workforce.

    To ensure clinical validity and protect patients, Premier 
reiterates the importance of comprehensive risk assessments, 
recommended use, and trainings that combat automation bias and 
incorporate human decisionmaking into the use of AI technology in 
healthcare. The risks and safety concerns around AI technology are 
unique to each use case, and Premier supports the requirement of a risk 
assessment and mitigation plan specific to the level of risk associated 
with the use case. Premier also supports the development of 
standardized intended use certifications or reporting requirements for 
AI technologies, which would prevent new systems from producing harmful 
outcomes due to use outside of the technology's design.

    Premier acknowledges the risks of automation bias and fully 
automated decisionmaking processes. To reduce these risks, promote 
trust in AI technologies used in healthcare and achieve the goal of 
supporting the healthcare workforce through AI, Premier recommends that 
healthcare workforce training programs provide comprehensive AI 
literacy training. Healthcare workers deal with high volumes of 
incredibly nuanced data, research and instructions--a growing 
percentage of which may be supplied by AI. This is particularly true 
for applications of AI in drug development, where manufacturers and 
quality control specialists may be reviewing high volumes of AI-powered 
recommendations or insights and making rapid decisions that affect the 
safety of patients. By ensuring our healthcare workers understand how 
to evaluate the most appropriate AI use cases and appropriate 
procedures for evaluating the accuracy or validity of AI 
recommendations, we can maximize the advisory benefit of AI while 
mitigating the risk to patients and provider liability. Additionally, 
clear, risk-based guidance on which uses of AI technology in healthcare 
require human review and decisionmaking is essential.

    Additionally, watermarking or provenance data/systems for AI-
generated content were a component of the voluntary commitments 
recently announced by the Administration. Premier generally supports 
the development of similar metrics for scientific research or clinical 
decision support recommendations produced by AI technology. It is 
important that patients, scientists, drug manufacturers and medical 
professionals understand when decisions or recommendations are made by 
AI so they can consciously respond and evaluate the new information 
accordingly.

    Specifically, watermarking is one potential strategy to combat 
automation bias, a risk especially pertinent to the use of AI 
technology in healthcare. Automation bias refers to human overreliance 
on suggestions made by automated technology, such as an AI device. This 
tendency is often amplified in high-pressure settings that require a 
rapid decision. The issue of automation bias in a healthcare setting is 
discussed at length by the FDA in guidance on determining if a clinical 
decision support tool should be considered a medical device. Premier 
suggests that future guidance or standards for the use of AI should 
consider automation bias in risk assessments and implementation 
practices, such as workforce education and institutional controls, to 
minimize the potential harm that automation bias could have on patients 
and vulnerable populations, including to mitigate any potential risk of 
AI used in unintended settings or built on biased datasets. In the drug 
manufacturing process, it is important that workers evaluating a supply 
chain disruption prediction, optimization recommendation, or quality 
control report know that the data or recommendation is AI-generated and 
evaluate it effectively.
                             V. Conclusion
    In closing, Premier appreciates the opportunity to share comments 
on the topic of AI and its role in healthcare. If you have any 
questions regarding our comments, or if Premier can serve as a resource 
on these issues to the Committee in its policy development, please 
contact Mason Ingram, Director of Payer Policy, at Mason--
[email protected] or 334-318-5016.

[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
                                 ______
                                 
    [Whereupon, at 4:17 p.m., the meeting was adjourned.]

                                  [all]