The ‘model-eat-model world’ of clinical AI: How predictive power becomes a pitfall
New research shows how clinical AI models can become victims of their own success: once clinicians begin acting on their predictions, the models' accuracy can erode, producing inaccurate and potentially harmful results.
A growing number of AI tools are being used to predict everything from sepsis to strokes, with the hope of accelerating the delivery of life-saving care. But over time, the research suggests, these predictive models can become victims of their own success, sending their performance into a nosedive and generating inaccurate, potentially harmful results.
“There is no accounting for this when your models are being tested,” said Akhil Vaid, an instructor of data-driven and digital medicine at the Icahn School of Medicine at Mount Sinai and author of the new research, published Monday in the Annals of Internal Medicine. “You can’t run validation studies, do external validation, run clinical trials — because all they’ll tell you is that the model works. And when it starts to work, that is when the problems will arise.”
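The failure mode Vaid describes is a feedback loop: a model that successfully changes clinical practice also changes the very outcomes it was trained to predict, so the model then appears to fail on new data. The sketch below is a minimal, hypothetical simulation of that loop, not code from the study; the synthetic cohort, the 0.5 flagging threshold, and the 80% treatment-effect size are all assumptions chosen for illustration.

```python
# A minimal sketch (assumptions, not the study's code) of the feedback loop:
# a model flags high-risk patients, clinicians intervene, the intervention
# prevents the outcome, and the model's measured accuracy collapses even
# though the model itself never changed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n=5000):
    # Synthetic cohort: a single risk feature drives the true outcome.
    risk = rng.normal(size=n)
    outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(-2.0 * risk)))
    return risk.reshape(-1, 1), outcome

X_train, y_train = make_cohort()
model = LogisticRegression().fit(X_train, y_train)

# Evaluation 1: no one acts on the model yet, so validation looks strong.
X_new, y_new = make_cohort()
scores = model.predict_proba(X_new)[:, 1]
print("AUROC before anyone acts on the model:",
      round(roc_auc_score(y_new, scores), 3))

# Evaluation 2: clinicians now treat flagged patients, and treatment
# prevents the outcome 80% of the time (an assumed effect size). The
# observed labels shift, and the unchanged model suddenly looks broken.
flagged = scores > 0.5
treated_away = flagged & (rng.random(len(y_new)) < 0.8)
y_after = np.where(treated_away, 0, y_new)
print("AUROC after clinicians act on the predictions:",
      round(roc_auc_score(y_after, scores), 3))
```

In the second evaluation the model's measured performance nosedives, which is the trap the study points to: standard validation run on the post-deployment data would recommend retiring or retraining a model precisely because it is working.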