Hospital evaluation of AI predictive tools for bias is inconsistent, study finds

Hospitals with more financial resources and technical expertise were more likely to locally test their models, suggesting a potential digital divide that could impact patient care.

By Emily Olsen

Source: https://www.healthcaredive.com/

Dive Brief:

  • Most hospitals are using predictive artificial intelligence tools, but less than half are evaluating them for bias, according to a study published this month in Health Affairs.  
  • Sixty-five percent of U.S. hospitals reported using predictive models, including products for identifying high-risk patients or helping with appointment scheduling. But only 61% of those hospitals tested the models for accuracy using their own health systems’ data, and just 44% said they locally evaluated the models for bias. 
  • Hospitals with high operating margins, those that developed their own models and facilities that were part of health systems were more likely to report local accuracy and bias evaluation — suggesting a “growing digital divide” between high- and low-resource hospitals could threaten patient safety, lead study author Paige Nong said in a statement. 

Dive Insight:

AI has become an increasingly hot technology for the healthcare sector, boosting venture capital investment and spurring interest in tools that could stretch the overburdened provider workforce.

But accuracy and bias remain significant concerns. Models can replicate racial, ethnic or gender biases, potentially worsening existing health disparities. 

Similarly, tools that work well with one patient population might produce inaccurate results when deployed in other settings, highlighting the importance of testing models on providers’ own data. Health systems also need to keep monitoring their AI products after implementation, since shifts in the clinical environment or patient population can degrade a model’s performance over time. 

However, many hospitals aren’t adequately evaluating their AI models for bias and accuracy — which could pose a risk to patient care, according to the latest research, which analyzed survey responses from more than 2,400 hospitals. 

Access to financial resources could play a role in boosting hospitals’ ability to conduct local evaluations of their AI tools. Meanwhile, critical access hospitals, other rural hospitals and facilities that served areas with high levels of social disadvantage were less likely to use predictive models at all. 

Plus, hospitals that had the technical expertise to develop their own predictive models were more likely to test the products with their own data. Nearly 80% of hospitals in the study reported using models that came from their electronic health record vendor, while 59% said they used products built by third parties and 54% said they used self-developed tools. (Percentages exceed 100% because hospitals could report multiple sources.) 

“Many better-funded hospitals can design models tailored to their own patients, then conduct in-house evaluations of them. In contrast, hospitals with fewer resources are buying these products ‘off the shelf,’ which may not reflect the needs of local patients,” Nong, an assistant professor at the University of Minnesota School of Public Health, said in a statement. 

Hospitals’ likelihood of conducting local evaluations of their models was also affected by the AI’s intended use. Facilities that used models to predict health trajectories or risk for inpatients were the most likely to say they locally tested their AI, compared with hospitals that used the tools to follow up with outpatients or to automate billing.

Hospitals might see those tools as lower risk, even though outpatient models can also perpetuate bias and patients have raised concerns about AI for billing, the study’s authors wrote.
