Opinion: Current large language models will not fix health care. Here’s what could
Today, AI in health care is splintered at best. There’s a better way.
Advancements in large language models (LLMs) such as ChatGPT and GPT-4 have generated substantial excitement. Many see these models as assistants or even potential replacements for time-intensive tasks, like patient-physician communication through the electronic health record. Designed to serve numerous downstream applications, these models convert data into representations that are useful for multiple tasks. As a result, they have been labeled “foundation models.”
Yet a core question remains: As exciting as it is to chat with an AI tool that has read more text than you will in your lifetime, will such models in their current state really transform health care? We think the answer is no. But one approach customized for medicine could.