
When Will We Trust AI for Clinical Decisions?

There are few things more complex than clinical decision making. We ask our physicians for diagnoses, prognoses, and treatments based on a limited set of known factors. They certainly can't model the action, reaction, and interaction of 42 million proteins within each of our 30 trillion human cells, account for the 60–85% of health-outcome determinants that lie outside healthcare and genetics, or keep up with the two million peer-reviewed medical journal articles published each year. Hopefully, one day AI will help them with that.
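
To get a feel for that scale, here is a quick back-of-the-envelope calculation using the two figures cited above (the numbers are the article's, not a biological measurement):

```python
# Rough scale of the problem, using the figures cited above.
proteins_per_cell = 42_000_000         # ~42 million proteins per cell
cells_in_body = 30_000_000_000_000     # ~30 trillion human cells

total_protein_instances = proteins_per_cell * cells_in_body
print(f"{total_protein_instances:.2e}")  # ~1.26e+21 protein instances
```

That is on the order of a sextillion moving parts before you even consider their interactions, which is exactly the kind of gap we hope AI will eventually help close.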

When will physicians and patients be able to trust AI to help?

Christina Jewett provides some insight in her New York Times article.

The F.D.A. has approved many new programs that use artificial intelligence, but doctors are skeptical that the tools really improve care or are backed by solid research.

Google has already drawn attention from Congress with its pilot of a new chatbot … designed to answer medical questions, raising concerns about patient privacy and informed consent.

She writes that physicians are being careful with AI, using it as a scribe, for occasional second opinions, and to draft various reports. Physicians don't yet trust the roughly 350 FDA-approved AI-powered solutions, which drives up healthcare costs through duplicated effort (AI and physician) and false positives. AI has shown some benefits, though, such as expediting stroke treatment by moving brain scans to the top of a radiologist's inbox when the model detects a possible stroke.
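
That stroke-triage workflow is, at its core, a priority queue: AI-flagged scans jump ahead of routine reads while everything else stays in arrival order. Here is a minimal sketch of the idea using Python's `heapq`; the `ai_stroke_score` input and the 0.5 threshold are hypothetical stand-ins for whatever the detection model actually outputs:

```python
import heapq

# Radiology worklist where AI-flagged stroke scans are read first.
# Lower priority number = read sooner; heapq always pops the smallest tuple.
worklist = []

def enqueue_scan(scan_id, ai_stroke_score, arrival_order):
    """Add a brain scan; suspected strokes jump the queue."""
    priority = 0 if ai_stroke_score >= 0.5 else 1  # hypothetical threshold
    heapq.heappush(worklist, (priority, arrival_order, scan_id))

def next_scan():
    """Return the next scan a radiologist should read."""
    return heapq.heappop(worklist)[2]

enqueue_scan("scan-001", ai_stroke_score=0.10, arrival_order=1)
enqueue_scan("scan-002", ai_stroke_score=0.92, arrival_order=2)  # flagged
print(next_scan())  # scan-002 -- the suspected stroke is read first
```

Note that the AI never makes the diagnosis here; it only reorders the work so a human sees the urgent case sooner, which is why this narrow use has earned more trust than fully automated decisions.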

Generative AI has already produced great benefits for software coders, generating first drafts of the desired code through standalone point solutions like ChatGPT. The promise is that one day generative AI will help doctors make sense of the numerous factors contributing to a health condition. We will also need physicians who can judge the credibility of what the AI produces.
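
To illustrate the coder workflow mentioned above, here is a minimal sketch of asking a large language model for a first draft of code. It assumes the OpenAI Python client with an API key in the environment; the model name and prompt are placeholders, and the draft still requires human review, which mirrors the point about physicians judging the AI's credibility:

```python
# Minimal sketch: requesting a first draft of code from an LLM.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "Draft a Python function that validates an email address."},
    ],
)

draft_code = response.choices[0].message.content  # a first draft, not a final answer
print(draft_code)
```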