
Rapid AI adoption could cause medical errors, patient harm, WHO warns, urging oversight

Article by: Susan Kelly, Reporter

Source: https://www.healthcaredive.com/

The World Health Organization is warning that the “meteoric” rise of artificial intelligence tools in healthcare threatens the safety of patients if caution is not exercised.

The excitement that rapidly expanding platforms such as ChatGPT, Bard, BERT and others are generating over their potential to improve patient health is leading device developers and others to toss aside the caution that would normally be applied to new technologies, the organization said in a statement on Tuesday.

“Precipitous adoption of untested systems could lead to errors by health-care workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world,” the WHO warned.

Many of the AI systems being introduced in healthcare use so-called large language models (LLMs) to imitate human understanding, processing and communication. The WHO said the “meteoric” rise of the technologies, many still in the experimental stage, warrants a close look at the risks they pose to key values in healthcare and scientific research. Those values include transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.

“It is imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people’s health and reduce inequity,” the WHO said.

Those risks include concerns that the data used to train AI may be biased, leading the tools to generate misleading or inaccurate information, including responses that appear authoritative or plausible when they are wrong. Rigorous oversight is needed for the technologies to be used in safe, effective, and ethical ways, the WHO said.

“WHO proposes that these concerns be addressed, and clear evidence of benefit be measured before their widespread use in routine health care and medicine,” the organization said.

The organization’s warning echoes remarks last week from Food and Drug Administration Commissioner Robert Califf, who said in a speech that nimble regulation of large language models is needed to avoid the healthcare system being “swept up quickly by something that we hardly understand.”

Sam Altman, CEO of OpenAI, agreed with the sentiment that AI must be regulated in testimony he gave last week before a Senate subcommittee. “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he was quoted as saying. “We want to work with the government to prevent that from happening.”

Recent examples of new AI applications in medtech include Smith+Nephew’s planning and data visualization software for robotic surgery and BD’s new software to detect methicillin-resistant Staphylococcus aureus (MRSA).
