Nurses Warn Patient Safety at Risk as AI Use Spreads in Health Care


As the use of artificial intelligence proliferates in the health care industry, Bay Area unionized nurses are calling for greater transparency and a say in how the technologies are deployed, in order to minimize risks to patients.

At a protest on Monday outside Kaiser Permanente’s San Francisco Medical Center, many in a crowd of an estimated 200 members of the California Nurses Association held red signs reading “Patients are not algorithms” and “Trust nurses, not AI.”

“All health care corporations need to make sure that the technology is tested, it’s valid, and it’s not harmful to patients,” said Michelle Gutierrez Vo, a president of CNA, which represents 24,000 nurses at Kaiser Permanente. “And before they deploy it, they need to sit down with nurses so that the nurses can review and make sure it’s congruent with patient safety.”

Gutierrez Vo and other nurses worry that without proper oversight and accountability, health care employers will use AI to replace nurses and other medical professionals for profit, to the detriment of patient care. The nurses are calling for health care organizations to hit pause on the rollout of new AI technologies.

This comes as state and federal regulators race to catch up with the explosive growth of generative AI tools, which experts say also have great potential to improve health care delivery.

Kaiser Permanente, one of the largest employers in San Francisco, Alameda and other Bay Area counties, has been an early adopter of AI. Company officials have said they rigorously test the tools they use for safety, accuracy and equity.

“Our physicians and care teams are always at the center of decision-making with our patients,” a Kaiser Permanente statement said in response to a KQED request for comment. “We believe that AI may be able to help our physicians and employees and enhance our members’ experience. As an organization dedicated to inclusiveness and health equity, we ensure the results from AI tools are correct and unbiased; AI does not replace human assessment.”

One program in use at 21 Kaiser hospitals in Northern California is the Advance Alert Monitor, which analyzes electronic health data to notify a nursing team when a patient’s health is at risk of serious decline. The program saves about 500 lives per year, according to the company.

But Gutierrez Vo said nurses have flagged problems with the tool, such as producing inaccurate alarms or failing to detect all patients whose health is quickly deteriorating.

“There’s just so much buzz right now that this is the future of health care. These health care corporations are using this as a shortcut, as a way to handle patient load. And we’re saying ‘No. You cannot do that without making sure these systems are safe,’” said Gutierrez Vo, a nurse with 25 years of experience at the company’s Fremont Adult Family Medicine clinic. “Our patients are not lab rats.”

The U.S. Food and Drug Administration has authorized some AI-powered tools before they go to market, but mostly without the comprehensive data required of new medicines. Last fall, President Joe Biden issued an executive order on the safe use of AI, which includes a directive to develop policies for AI-enabled technologies in health services that promote “the welfare of patients and workers.”

“It’s very good to have open discussions because the technology is moving at such a fast pace, and everyone is at a different level of understanding of what it can do and [what] it is,” said Dr. Ashish Atreja, Chief Information and Digital Health Officer at UC Davis Health. “Many health systems and organizations do have guardrails in place, but perhaps they haven’t been shared that widely. That’s why there’s a knowledge gap.”

UC Davis Health is part of a collaboration with other health systems to implement generative and other types of AI with what Atreja referred to as “intentionality” to support their workforce and improve patient care.

“We have this mission that no patient, no clinician, no researcher, no employee gets left behind in getting advantage from the latest technologies,” Atreja said.

Dr. Robert Pearl, a lecturer at the Stanford Graduate School of Business and a former CEO of The Permanente Medical Group (Kaiser Permanente), told KQED he agreed with the nurses’ concerns about the use of AI in their workplace.

“Generative AI is a threatening technology but also a positive one. What is the best for the patient? That has to be the number one concern,” said Pearl, author of “ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine,” which he said he co-wrote with the AI system.
