STAT+: AI health care companies say they’ll keep humans in the loop. But what does that actually mean?
The “human in the loop” is meant to be a backstop to prevent flawed AI models from causing harm. But does it work?
Developers of artificial intelligence models slowly making their way into medicine have long parried ethical concerns with assurances that clinical staff must review the technology's suggestions before they are acted on. That "human in the loop" is meant to be a backstop preventing potential medical errors conjured up by a flawed algorithm from harming patients.
And yet, industry experts warn that there’s no standard way to keep humans in the loop, giving technology vendors significant latitude to market their AI-powered products as helpful professional tools rather than as autonomous decision-makers.
Health record giant Epic is piloting a generative AI feature that drafts responses to patients' email queries, but clinical staff must review the suggestions before they are sent out, the company has said. A flurry of startups sell AI-powered ambient documentation tools that rapidly transcribe and summarize patient visits and populate patients' medical charts, but they require doctors and nurses to OK the generated entries first. Products predicting health risks, like overdose or sepsis, show up as flags in medical record software, and it's up to clinicians to act on them.