
When Will We Trust AI for Clinical Decisions?

There are few things more complex than clinical decision making. We ask our physicians for diagnoses, prognoses, and treatments based on limited sets of known factors. They certainly can’t model the action, reaction, and interaction of 42 million proteins within each of our 30 trillion human cells, nor account for the 60-85% of an individual’s health outcomes determined by factors other than healthcare and genetics, nor keep up with the two million peer-reviewed medical journal articles published each year. Hopefully, one day AI will help them with that.

When will physicians and patients be able to trust AI to help?

Christina Jewett provides some insight in her New York Times article.

The F.D.A. has approved many new programs that use artificial intelligence, but doctors are skeptical that the tools really improve care or are backed by solid research.

Google has already drawn attention from Congress with its pilot of a new chatbot … designed to answer medical questions, raising concerns about patient privacy and informed consent.

She writes that physicians are being careful with AI, using it as a scribe, for occasional second opinions, and to draft various reports. Physicians don’t yet trust the 350 FDA-approved AI-powered solutions, which increases healthcare costs through duplicated effort (AI plus physician) and false positives. AI has shown some benefits, such as expediting stroke treatment by moving brain scans to the top of a radiologist’s inbox when the AI detects a possible stroke.
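
That triage feature is, at its core, a priority queue: AI-flagged scans jump ahead of routine reads. Here is a minimal sketch in Python; the function, scan IDs, and the 0.8 probability threshold are illustrative assumptions, not any vendor’s actual product.

```python
import heapq
import itertools

# Hypothetical worklist: scans flagged by an AI stroke detector are read
# before routine scans. Priority 0 = urgent, 1 = routine; the counter
# breaks ties so equal-priority scans keep their arrival order.
_arrival = itertools.count()

def enqueue(worklist, scan_id, stroke_probability, threshold=0.8):
    priority = 0 if stroke_probability >= threshold else 1
    heapq.heappush(worklist, (priority, next(_arrival), scan_id))

worklist = []
enqueue(worklist, "scan-001", stroke_probability=0.12)
enqueue(worklist, "scan-002", stroke_probability=0.91)  # flagged as possible stroke
enqueue(worklist, "scan-003", stroke_probability=0.05)

while worklist:
    _, _, scan_id = heapq.heappop(worklist)
    print(scan_id)  # scan-002 first, then scan-001 and scan-003 in arrival order
```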

Generative AI has produced great benefits for software coders, generating first drafts of desired code using standalone point solutions like ChatGPT. The promise is that one day generative AI will be able to help doctors make sense of the numerous factors contributing to a health condition. We will also need physicians to make sense of the credibility of the AI.


Complexity of Using Health Insurance

To encourage the use of preventive care such as shots and screenings, the Affordable Care Act of 2010 requires health plans to pay for these services without charging their members. Free is the incentive for us to do the right thing, like getting colonoscopies without copayments or dipping into our deductibles. The preventive care benefits cover 22 services for adults, 27 additional services for women, and 29 services for children.

Christine Rogers, 60, of Wake Forest, North Carolina, is insured by Cigna Healthcare through her job. Christine had an annual wellness visit that included typical blood tests as well as a depression screening and a discussion with a physician. Cigna was billed $487: a $331 wellness visit plus a separate $156 charge for what was billed as a 20- to 29-minute consultation with her physician. Her insurer paid $419.93, leaving Rogers with a $67.07 charge related to the consultation.

What is the catch?

Not all care provided during a wellness visit counts as no-cost preventive care under federal guidelines. If a health issue arises during a checkup that prompts discussion or treatment — say, an unusual mole or heart palpitations — that consult can be billed separately, and the patient may owe a copayment or deductible charge for that part of the visit.
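
Here is the arithmetic behind Rogers’s bill as a minimal sketch. The split assumes the $331 wellness line was covered in full as preventive care while the $156 consult line carried the member cost share; actual adjudication depends on the plan’s allowed amounts.

```python
# Figures from the post; the coverage split is an illustrative assumption.
preventive_billed = 331.00   # annual wellness visit, covered at 100% under the ACA
consult_billed = 156.00      # 20- to 29-minute consultation, billed separately
insurer_paid = 419.93

total_billed = preventive_billed + consult_billed    # $487.00
patient_owes = round(total_billed - insurer_paid, 2)

print(f"Total billed: ${total_billed:.2f}")
print(f"Insurer paid: ${insurer_paid:.2f}")
print(f"Patient owes: ${patient_owes:.2f}")  # $67.07, all tied to the consult
```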

Who is supposed to understand all the nuances of federal regulations? Our healthcare system is too complex. Gallup found that 38% of patients defer care because of cost. I wonder how many people defer free preventive care because they don’t believe it is actually free.


Kate Yoder asks a great question in her Wired article, Why Have Climate Catastrophes Toppled Some Civilizations but Not Others?

Henry Kissinger once said about the collapse of complex societies, “Every civilization that has ever existed has ultimately collapsed. So, as a historian, one has to live with a sense of the inevitability of tragedy.”

Kate offers insight into potential answers, mentioning Jared Diamond’s 2005 book Collapse: How Societies Choose to Fail or Succeed, which ties the Roman Empire’s troubles to the first bubonic plague after volcanoes lowered temperatures, and the Maya of Central America to a major drought.

This is a well-researched topic, which Kate references:

Every day we wake up to a society a little more complex than the day before. It may be helpful to learn from the previous advanced societies that were also very complex.


How Transparent Are AI Foundational Models?

Although AI has great potential to help humans strategize beyond the limits of our 86 billion neurons, it faces many challenges, including trust. Introducing the Foundation Model Transparency Index. Organizations investing in applications layered on these foundation models certainly need it.

From Sayash Kapoor on his site aisnakeoil.com; Sayash was recently included in the TIME 100 Most Influential People in AI:

Foundation models such as GPT-4 and Stable Diffusion 2 are the engines of generative AI. While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies like social media. How are these models trained and deployed? Once released, how do users actually use them? Who are the workers that build the datasets that these systems rely on, and how much are they paid? Transparency about these questions is important to keep companies accountable and understand the societal impact of foundation models.


The 2023 Nobel Prize awards go to those who expand human knowledge at the smallest of scales.

Anne L’Huillier, Pierre Agostini and Ferenc Krausz shared the 2023 Nobel Prize in Physics for producing laser pulses lasting mere attoseconds.

One attosecond is one-quintillionth of a second, or 0.000000000000000001 seconds. More attoseconds pass in the span of one second than there are seconds that have passed since the birth of the universe.
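
The comparison is easy to verify with quick arithmetic, using the commonly cited age of the universe of roughly 13.8 billion years:

```python
# One attosecond = 1e-18 s, so one second contains 1e18 attoseconds.
attoseconds_per_second = 1e18

# Age of the universe: ~13.8 billion years, converted to seconds.
seconds_per_year = 365.25 * 24 * 3600          # ~3.16e7 seconds
age_of_universe_s = 13.8e9 * seconds_per_year  # ~4.35e17 seconds

print(f"{attoseconds_per_second:.2e} attoseconds in one second")
print(f"{age_of_universe_s:.2e} seconds since the Big Bang")
print(attoseconds_per_second > age_of_universe_s)  # True: 1e18 > ~4.4e17
```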

Katalin Karikó and Drew Weissman share the Physiology or Medicine Nobel Prize for their work on mRNA (about 10 nanometers), which provides instructions to cells to make proteins. For scale, a typical atom is 0.1 to 0.5 nm in diameter, DNA molecules are about 2.5 nm wide, and a typical virus is about 100 nm wide. The work led to the development of Covid-19 vaccines (75-89 nm) administered to billions around the world.

Moungi G. Bawendi, Louis E. Brus, and Alexei I. Ekimov were awarded the 2023 Nobel Prize in Chemistry for the discovery and development of quantum dots (1.5-10 nm). These tiny nanoparticles are essential for a wide range of applications, including LED displays, solar cells, and biomedical imaging.

Expanding human knowledge of our attosecond and nanoscale world enables more innovation as well as more complexity to manage.


Interesting new Alzheimer’s research looks beyond amyloid plaque buildup in the brain.

Researchers conducted the most extensive analysis to date of the genomic, epigenomic, and transcriptomic changes in the brains of Alzheimer’s patients. By analyzing over 2 million cells from 400 postmortem samples, they offer insight into the interplay of four areas to help treat Alzheimer’s disease (a sketch of what this kind of single-cell analysis looks like follows the list).

Transcriptome – RNA-sequencing to analyze the gene expression patterns of 54 types of brain cells. They found impairments in the expression of genes involved in mitochondrial function, synaptic signaling, and protein complexes needed to maintain the structural integrity of the genome.

Epigenomics – the chemical modifications that affect gene usage within a given cell. They found these changes occur most often in microglia, the immune cells responsible for clearing debris from the brain.

Microglia – brain cells that make up 5 to 10 percent of the cells in the brain. They clear debris, act as immune cells that respond to injury or infection, and help neurons communicate with each other. They found that as Alzheimer’s disease progresses, more microglia enter inflammatory states, the blood-brain barrier begins to degrade, and neurons begin to have difficulty communicating with each other.

DNA damage – during memory formation, neurons create DNA breaks. These breaks are promptly repaired, but the repair process can become faulty as neurons age. They found that as more DNA damage accumulates in neurons, it gets more difficult to repair the damage, leading to genome rearrangements and 3D folding defects.
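
For a sense of what analyzing over 2 million cells involves, here is a minimal single-cell RNA-seq sketch using the scanpy library. The input file name is a placeholder, and this is a generic workflow, not the study’s actual pipeline:

```python
import scanpy as sc

# Placeholder input: a cells-by-genes expression matrix in AnnData format.
adata = sc.read_h5ad("postmortem_brain_cells.h5ad")

# Standard preprocessing: normalize library sizes, then log-transform.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Cluster cells so they can be annotated into cell types
# (the study distinguished 54 types of brain cells).
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)

# Rank genes that distinguish each cluster, e.g. to spot impaired
# mitochondrial or synaptic-signaling gene programs in patients.
sc.tl.rank_genes_groups(adata, groupby="leiden", method="wilcoxon")
```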


Transformative AI is really, really hard

The hype and doom surrounding generative AI models such as ChatGPT often leave out the work ahead before those predictions can come true. While these new large language models will be transformative, they will first have to address numerous technical, social, and economic challenges.

Arjun Ramani and Zhengdong Wang describe many of these challenges in Why transformative artificial intelligence is really, really hard to achieve on the site The Gradient.

They discuss the challenges AI faces in creating productivity gains in labor-intensive areas such as healthcare, education, and construction. The fine motor control of robots is not advancing as fast as large language models, which themselves still require millions of humans to annotate and train them. They remind us that a lot of process knowledge (which AI models will need) is not written down anywhere, citing Michael Polanyi’s observation “that we can know more than we can tell.”

There are many AI challenges, including data quality with common ontologies, securing data quantity, data sources (local vs. industry), model reproducibility, capturing inputs and outputs that might not be digitized, privacy, data rights, trust, transparency, and transformation of current processes. I have no doubt these will get addressed. How they get addressed will determine the promise and perils of AI.


Using Health Insurance

Navigating the rules of health insurance can be complex.

A majority of insured adults (58%) say they have experienced a problem using their health insurance in the past 12 months – such as denied claims, provider network problems, and pre-authorization problems.

This is from a recent Kaiser Family Foundation (KFF) survey of 3,605 U.S. adults with health insurance. It also found:

Nearly half of insured adults who had insurance problems were unable to satisfactorily resolve them, with some reporting serious consequences.

Yet the study found most viewed their health insurance favorably. Those with good health rated it higher than those reporting poor health.


Understanding ChatGPT and Microsoft’s Chatbot

Ben Dickson sheds light on what is behind ChatGPT and Microsoft’s Bing chatbot, named Sydney. These large language models (LLMs) will likely find their way into our daily routines. It is important for us to understand their strengths and weaknesses before one gets your coffee order wrong and eloquently explains to you why it was right.

His well-written post “To understand language models, we must separate ‘language’ from ‘thought’” describes what these LLMs, created through machine learning over massive data sets, do well and what they struggle with. He cites a recent paper, Dissociating language and thought in large language models: a cognitive perspective, which found:

LLMs show impressive (although imperfect) performance on tasks requiring formal linguistic competence, but fail on many tests requiring functional competence.


Leadership can be a lonely place, especially during the Covid-19 pandemic if you are a restaurant owner, school superintendent, healthcare leader, or wedding planner. It is impossible to process the complexity of the many knowns, unknowns, and uncertainties, forecast the future, and satisfy everyone affected. Yet that is what we ask leaders to do.

As an example, here are 15 Covid-19 forecasts healthcare leaders must get right. See more.
