
Understanding AI by understanding humans

Are humans underrated? Anthropic (maker of Claude) CEO Dario Amodei predicted in May:

AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years.

Last year, AI startup investor and author of AI Superpowers Kai-Fu Lee predicted AI will displace 50% of jobs by 2027.

Research on humans has begun to put sand in the gears of these bold predictions. Researchers are following entrepreneurs, marketing departments, and shameless blog writers in invoking AI to get attention. Yet improving our understanding of humans may be essential for our lifelong journeys living with AI.

Last week, I saw three studies that illustrate this shift.

Attention: How the Brain Filters Distractions to Stay Focused on a Goal

The Yale University study demonstrated how the human brain allocates limited perceptual resources to focus on goal-relevant information in dynamic environments. The brain prioritizes perceptual effort based on goals, filtering out distractions, and attention shifts rapidly and flexibly in response to changing visual demands. AI, by contrast, struggles with irrelevant information and requires precise language to be effective, as demonstrated in a clinical diagnosis study using chatbots: when physicians no longer filter the relevant information and describe it precisely (using long Latin-derived terms), chatbot effectiveness drops from 94 percent accuracy to 34 percent.

Attention is essentially the bi-directional flow of electrical pulses in neurons between perception and the mental models relevant to a goal-directed strategy. Agentic AI will need to learn attention: focusing on relevant inputs, shifting attention rapidly, adapting to changing perceptual inputs (learning), and inferring futures without requiring precise prompts to drive LLM token-prediction machines.

Learning: Why Children Learn Language Faster Than AI

Learning (a.k.a. self-correction) may be the most important type of inference for the survival of any form of life. The Max Planck Institute study found that even the smartest machines can’t match young minds at language learning: the researchers estimated that if a human learned language at the same rate as ChatGPT, it would take 92,000 years. They introduced a new framework and cited three key areas:

  • Embodied Learning: Children use sight, sound, movement, and touch to build language in a rich, interactive world.
  • Active Exploration: Kids create learning moments by pointing, crawling, and engaging with their surroundings.
  • AI vs. Human Learning: Machines process static data; children dynamically adapt in real-time social and sensory contexts.
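The 92,000-year comparison can be sanity-checked with a back-of-the-envelope calculation. The corpus size and words-per-day rate below are illustrative assumptions, not figures from the Max Planck study:

```python
# Rough sanity check of the ~92,000-year comparison.
# Assumptions (illustrative, not from the Max Planck study):
#   - LLM training corpus: ~1 trillion words
#   - a child hears roughly 30,000 words per day
corpus_words = 1e12
words_per_day = 30_000

years = corpus_words / (words_per_day * 365)
print(f"~{years:,.0f} years")  # ~91,324 years, the same order as the study's estimate
```

Different assumptions shift the exact number, but any plausible corpus size and hearing rate land in the tens of thousands of years.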

Next Action: Affordances in the brain, the human superpower AI hasn’t mastered

To achieve a goal, strategy inferences such as perception, imagining, deciding, and predicting must conclude with the next best action(s). A study by University of Amsterdam scientists discovered:

Our brains automatically understand how we can move through different environments—whether it’s swimming in a lake or walking a path—without conscious thought. These “action possibilities,” or affordances, light up specific brain regions independently of what’s visually present. In contrast, AI models like ChatGPT struggle with these intuitive judgments, missing the physical context that humans naturally grasp.

There is no doubt that AI and robots will improve next-best-action inferences when they get widely deployed. For now, they must rely on token prediction machines (e.g., ChatGPT, Claude, or Gemini) based on statistical representations of words or groups of pixels.

Photo Credit: Neuroscience News


Are we ready for Doctor AI?

ChatGPT, Gemini, Claude and Large Language Models (LLMs) are impressive with medical diagnoses, with ChatGPT-4 performing better than physicians at diagnosing illness in a small study. A closer look finds AI in medical diagnosis is another example of the cognitive dissonance of AI.

  • Thought – A paper by researchers at the University of Oxford found LLMs could correctly identify relevant conditions 94.9% of the time when directly presented with test scenarios.
  • Thought – Human participants using LLMs to diagnose the same scenarios identified the correct conditions less than 34.5% of the time.

What went wrong?

Looking back at transcripts, researchers found that participants both provided incomplete information to the LLMs and the LLMs misinterpreted their prompts. For instance, one user who was supposed to exhibit symptoms of gallstones merely told the LLM: “I get severe stomach pains lasting up to an hour. It can make me vomit and seems to coincide with a takeaway,” omitting the location of the pain, the severity, and the frequency.

It appears physicians know how to identify the relevant conditions and how to clearly state them to the chatbot. The Oxford study highlights a problem not with humans or even LLMs, but with the way we sometimes measure LLM performance.

  • Thought – LLMs can pass medical licensing tests, real estate licensing exams, or state bar exams.
  • Thought – LLMs can often provide poor personal medical, real estate, and legal advice.

The Cognitive Dissonance of AI

In psychology, cognitive dissonance is the discomfort of holding two or more contradictory thoughts. The term describes AI today. To leverage AI and thrive in our AI journeys, we need to live with the discomfort that comes with understanding the strengths and weaknesses of AI.

ChatGPT, Gemini, Claude:

Chatbots for advice:

Large Language Models:

AI Agents:

AI Reasoning Models:

Autonomous Vehicles:

Photo Credit: Author generated with ChatGPT. AI image generation is amazing, though it can be a struggle to get precisely what is wanted.


How Do You Hire a Gen AI Model?

Hilke Schellmann describes how we use AI-powered algorithms to screen resumes, process background checks, facilitate candidate online assessments, and conduct one-way interviews in the book The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now.

While the AI-powered algorithms for hiring humans may not work with Large Language Models (Gen AI), we do have insights from Melanie Mitchell, one of the best explainers of AI. Her bestselling book, Artificial Intelligence: A Guide for Thinking Humans, explains very well what AI can do and what it cannot do.

She recently cast doubt on LLM research that stated: “GPT-3 appears to display an emergent ability to reason by analogy, matching or surpassing human performance across a wide range of text-based problem types.”

She replicated the experiments using counterfactual tasks to stress-test claims of reasoning in large language models. While the advances of LLMs have been amazing, we need people like Melanie Mitchell to help make sense of the hype and sensational claims. Otherwise, how are we going to know how to hire our next assistant?


A Little Earth Day Optimism

The complexity of reducing the CO2 pumped into the atmosphere can feel overwhelming and even hopeless. While we must continue engaging in the many initiatives to make this happen, it is nice to read an optimistic story that could help us improve our future.

That dose of optimism is Jessica Rawnsley’s story “The Rise of the Carbon Farmer” in Wired. She describes the revival of regenerative agriculture, which keeps carbon in the soil rather than the atmosphere. It even improves soil health and yields.

By some counts, a third of the excess CO2 in the atmosphere started life in the soil, having been released not by burning fossil fuels but by changing how the planet’s land is used.

He (Patrick Holden) is one of a growing number of farmers shaking off conventional methods and harnessing practices to rebuild soil health and fertility—cover crops, minimal tilling, managed grazing, diverse crop rotations. It is a reverse revolution in some ways, taking farming back to what it once was.

Few things are more complex than managing health conditions. The healthcare system is very good at tracking the prescribing of medicines. It doesn’t track the deprescribing of medicines.

Seasons change, fashions change, US presidents change, but for many patients, prescriptions never do—except to become more numerous.

Among US adults aged 40 to 79 years, about 22% reported using 5 or more prescription drugs in the previous 30 days. Within that group, people aged 60 to 79 years were more than twice as likely to have used at least 5 prescription drugs in the previous month as those aged 40 to 59 years.

Over time, a drug’s benefit may decline while its harms increase, Johns Hopkins geriatrician Cynthia Boyd, MD, MPH, told JAMA. “There are a pretty limited number of drugs for which the benefit-harm balance never changes.”

Deprescribing requires shared decision-making that considers “what patients value and what patients prioritize.”

Deprescribing lacks proven clinical guidelines, and there is little time for a patient-physician discussion: the average visit lasts twelve minutes for new patients and seven for returning patients*.

* Topol, Eric, Deep Medicine, Basic Books, New York, 2019, p17


Self-driving Cars – Autonomous AI Challenge

A popular physics joke is that nuclear fusion is thirty years away and always will be.

While robo-taxis are picking up passengers today in San Francisco, Austin, and Phoenix, widespread deployment seems perpetually a few years away.

We’ve had military aircraft drones deployed since 1995, a modified autonomous Volkswagen vehicle won the 132-mile DARPA Grand Challenge in 2005, and six automakers announced in 2015 delivery plans for their self-driving vehicles between 2017 and 2020. One of those companies announced yesterday:

General Motors (GM) will slash spending in its self-driving car unit Cruise, after an accident last month seriously injured a pedestrian and prompted regulators to retract its operating permit for driverless cars in San Francisco.

In October, the company said it would no longer operate its vehicles without safety drivers behind the wheel.

The horrific accident in San Francisco highlighted a significant challenge for autonomous vehicles: “long tail” or edge cases. These instances sit at the end of the occurrence distribution curve and are often unique. Robo-taxis can operate perfectly for ten thousand miles and then break down with an edge case. AI requires many training examples to learn; humans are more flexible and leverage common sense to navigate these cases. Another challenge for autonomous vehicles is social acceptance. While humans have learned to live with over forty thousand deaths from car accidents per year, it is too early to know what will be accepted from autonomous vehicles.
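The "perfect for ten thousand miles, then an edge case" pattern can be made concrete with a simple rare-event model. The one-failure-per-10,000-miles rate below is an illustrative assumption, not a measured figure:

```python
# Simple rare-event model for "long tail" driving failures.
# Assumption (illustrative): an edge case the system cannot handle
# occurs about once per 10,000 miles.
p_per_mile = 1 / 10_000

def prob_at_least_one(miles: int) -> float:
    """Probability of hitting at least one edge case over `miles` of driving."""
    return 1 - (1 - p_per_mile) ** miles

# Short trips look flawless; fleet-scale mileage makes an edge case near-certain.
print(f"{prob_at_least_one(100):.1%}")      # 1.0% over a 100-mile trip
print(f"{prob_at_least_one(100_000):.1%}")  # 100.0% over 100,000 fleet miles
```

This is why a robo-taxi can feel flawless to any individual rider while a fleet still encounters failures it was never trained on.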

For more, see GM Slashes Spending on Robotaxi Unit Cruise, a Setback for Driverless Cars

Amazing Brain in Color

This is a photo of Purkinje neuron cells, which connect the brain and spinal cord to help control breathing, heart rate, balance, and more. Silas Busch from the University of Chicago captured this slightly eerie scene, noting it reminded him of people shuffling through the dark of night. The photo won first place this year in the National Institutes of Health Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative’s annual Show Us Your BRAINs! Photo and Video Contest.

More from NIH

Photo Credit: Silas Busch, The University of Chicago

Is clean, sustainable energy a few miles away?

Hot rock is everywhere, with temperatures rising hundreds of degrees Fahrenheit within the first few miles of the surface, … yet geothermal plants are built where naturally heated water can be easily tapped. 

Gregory Barber writes in Wired about a new “enhanced” geothermal system (EGS) built on wells drilled 7,000 feet down into completely dry rock at 375 degrees Fahrenheit, creating an artificial hot spring by pumping water into the well. The returned hot water drives turbines that generate 2 to 3 megawatts of electricity, enough to power a few thousand homes.
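That "few thousand homes" figure checks out against a rough estimate. The average per-home load below is an assumed US figure, not from the article:

```python
# Rough check: how many homes can a 2-3 MW geothermal plant supply?
# Assumption (illustrative): an average US home draws about 1.2 kW
# on a continuous-average basis (roughly 10,500 kWh per year).
plant_watts = 2.5e6      # midpoint of the 2-3 megawatt range
home_avg_watts = 1_200

homes = plant_watts / home_avg_watts
print(f"~{homes:,.0f} homes")  # ~2,083 homes, i.e., "a few thousand"
```

Peak demand is higher than the continuous average, so the practical number served would be somewhat lower, but the order of magnitude holds.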

Pre-eighteenth-century mills were powered by wind and water wheels until the steam engine made it possible for factories to locate anywhere. EGSs show promise in creating clean, sustainable energy that may help address climate change.


More Health Complexity – Molecules and Microbes

The complexity of human health doesn’t change each week. The complexity of our understanding does, when another tranche of peer-reviewed medical journal articles arrives. Two million articles are published each year*.

Each week, our world becomes more complex. Some complexity is human-made, like our Byzantine healthcare reimbursement system; some comes from discovering our existing realities, such as new information about molecules (DNA, immune proteins) and microbes.

Stanford Medicine-led study clarifies how ‘junk DNA’ influences gene expression – When the first whole-genome sequence was announced in 2000, researchers found about 20,000 genes, representing just 1-2% of the 3 billion base pairs. They called the remaining 98-99% of the genome non-coding DNA (a.k.a. junk DNA). This study shows how junk DNA regulates gene expression, acting like a chef who chooses which gene “recipe” to make.

Your “immune resilience” greatly impacts your health and lifespan

  • Immune resilience is the capacity to control inflammation and rapidly restore immune balance following a disease challenge. 
  • People with high levels of immune resilience live longer, resist diseases, and are more likely to survive diseases when they do develop. 

Over time, our immune resilience decreases as our immune systems are subjected to multiple respond-and-recover cycles.

How Many Microbes Does It Take to Make You Sick? – The concept of “infectious dose” suggests there are ways to stay safer from harm.

You may need to add “junk DNA,” “immune resilience,” and “infectious dose” to your “staying healthy” strategy. This is a great opportunity for an AI digital twin to help us make sense of our molecules and microbes in managing the complexity of health.

* Topol, Eric, Deep Medicine, Basic Books, New York, 2019, p138
