
Understanding AI by understanding humans

Are humans underrated? Dario Amodei, CEO of Anthropic (the maker of Claude), predicted in May:

AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years.

Last year, Kai-Fu Lee, AI startup investor and author of AI Superpowers, predicted that AI will displace 50% of jobs by 2027.

Research on humans has begun to put sand in the gears of these bold predictions. Researchers are following entrepreneurs, marketing departments, and shameless blog writers in invoking AI to get attention. Yet improving our understanding of humans may be essential for the lifelong journey of living with AI.

Last week, I saw three studies that illustrate this shift.

Attention: How the Brain Filters Distractions to Stay Focused on a Goal

A Yale University study demonstrated how the human brain allocates limited perceptual resources to focus on goal-relevant information in dynamic environments. The brain prioritizes perceptual effort based on goals, filtering out distractions, and attention shifts rapidly and flexibly in response to changing visual demands. AI, by contrast, struggles with irrelevant information and requires precise language to be effective, as demonstrated in a clinical diagnosis study using chatbots: when physicians no longer filter the relevant information and describe it precisely (in long Latin-derived terms), chatbot accuracy drops from 94 percent to 34 percent.

Attention is essentially the bi-directional flow of electrical pulses in neurons between perception and the mental models relevant to a goal-directed strategy. Agentic AI will need a similar capacity: to focus on relevant inputs, shift attention rapidly, adapt to new perceptual inputs (learning), and infer futures without requiring precise prompts to drive an LLM's token prediction.
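As an illustration of the gap, here is a minimal sketch (my own, not from the Yale study) of goal-conditioned filtering: a "goal" vector scores incoming perceptual feature vectors with scaled dot-product attention, and low-relevance inputs are dropped before further processing. The dimensions, threshold, and random inputs are arbitrary assumptions.

```python
# A minimal sketch (not from any of the cited studies) of goal-conditioned
# attention: a "goal" query vector scores incoming perceptual feature vectors,
# and low-relevance inputs are filtered out before further processing.
# Vector dimensions and the relevance threshold are illustrative assumptions.
import numpy as np

def goal_directed_attention(goal, inputs, threshold=0.15):
    """Score each input against the goal and keep only the relevant ones.

    goal:   (d,) vector representing the current objective
    inputs: (n, d) matrix, one perceptual feature vector per row
    """
    d = goal.shape[0]
    scores = inputs @ goal / np.sqrt(d)          # scaled dot-product relevance
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over the n inputs
    keep = weights >= threshold                  # drop distractors below threshold
    return inputs[keep], weights

rng = np.random.default_rng(0)
goal = rng.normal(size=8)
inputs = rng.normal(size=(5, 8))                 # five candidate percepts
relevant, weights = goal_directed_attention(goal, inputs)
print("attention weights:", np.round(weights, 2))
print("inputs kept:", relevant.shape[0], "of", inputs.shape[0])
```

The human version of this filtering is continuous and self-directed; current LLMs only approximate it when a precise prompt supplies the goal for them.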

Learning: Why Children Learn Language Faster Than AI

Learning (a.k.a. self-correction) may be the most important type of inference for the survival of any form of life. A Max Planck Institute study found that even the smartest machines can't match young minds at language learning: the researchers estimated that if a human learned language at the same rate as ChatGPT, it would take them 92,000 years (see the back-of-envelope sketch after this list). They introduced a new framework and cited three key areas:

  • Embodied Learning: Children use sight, sound, movement, and touch to build language in a rich, interactive world.
  • Active Exploration: Kids create learning moments by pointing, crawling, and engaging with their surroundings.
  • AI vs. Human Learning: Machines process static data; children dynamically adapt in real-time social and sensory contexts.
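The 92,000-year figure is easy to sanity-check. The corpus size and yearly word exposure below are rough assumptions of mine (on the order of a trillion training words for a large model, and roughly eleven million words heard per year by a child), not figures from the study; the point is only the order of magnitude.

```python
# Back-of-envelope check of the "92,000 years" comparison. The corpus size and
# the child's yearly word exposure are rough assumptions, not figures from the
# Max Planck study; they simply show the scale of the data gap.
llm_training_words = 1e12        # assume on the order of a trillion words of training text
child_words_per_year = 11e6      # assume a child hears roughly 11 million words a year

years_at_llm_scale = llm_training_words / child_words_per_year
print(f"{years_at_llm_scale:,.0f} years")   # ~90,909 years, the same order as 92,000
```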

Next Action: Affordances in the Brain: The Human Superpower AI Hasn't Mastered

To achieve a goal, strategic inferences such as perceiving, imagining, deciding, and predicting must conclude with the next best action(s). A study by University of Amsterdam scientists discovered:

Our brains automatically understand how we can move through different environments—whether it’s swimming in a lake or walking a path—without conscious thought. These “action possibilities,” or affordances, light up specific brain regions independently of what’s visually present. In contrast, AI models like ChatGPT struggle with these intuitive judgments, missing the physical context that humans naturally grasp.

There is no doubt that AI and robots will improve their next-best-action inferences once they are widely deployed. For now, they must rely on token prediction machines built on statistical representations of words or groups of pixels (e.g., ChatGPT, Claude, or Gemini).
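To make "token prediction machine" concrete, here is a toy bigram model: it predicts the next word purely from co-occurrence counts in a tiny made-up corpus. The corpus and the bigram approach are illustrative simplifications, not how ChatGPT, Claude, or Gemini work internally, but they show prediction driven by statistics alone, with no goals, bodies, or affordances.

```python
# A toy illustration of a "token prediction machine": a bigram model that
# predicts the next word purely from co-occurrence statistics, with no notion
# of goals, bodies, or affordances. The corpus and sampling are illustrative.
from collections import Counter, defaultdict
import random

corpus = "the path leads to the lake the lake is cold the path is dry".split()

# Count which word follows which (the "statistical representation").
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]   # sample in proportion to counts

random.seed(1)
print(predict_next("the"))   # e.g. "path" or "lake", whichever the statistics favor
```

The model can tell you which word tends to follow "the"; it cannot tell you whether the lake is swimmable or the path walkable, which is exactly the affordance judgment the Amsterdam study highlights.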

Photo Credit: Neuroscience News