Why AI needs a “body” to truly understand the world
April 1, 2026
In Neuron, a USC-led team of neuroscientists explains why, for better reasoning and greater safety, the next generation of AI needs a sense of “embodiment.”
Artificial Intelligence Research
By Mike McNulty
AI systems have made remarkable progress in recent years. Multimodal large language models (MLLMs) like ChatGPT and Gemini can analyze images, videos and text to describe scenes, answer complex questions and generate realistic content.
Yet despite these advances, they still fail at relatively straightforward tasks that most humans find effortless. Perhaps that’s because MLLMs are fundamentally limited by the very thing they don’t actually have: a physical, functional, neurobiological body.
A new paper published today in Neuron by researchers from USC, the University of California, Los Angeles and Google DeepMind outlines why today’s “disembodied” AI systems struggle when prompted with tasks requiring real-world understanding. The authors then explain why tomorrow’s AI systems must incorporate an “embodied” framework modeled on the ways human beings actually engage in and with the world around them.
“A longstanding paradox in AI is that systems excel at tasks that humans find difficult, but are stumped by the most basic tasks that humans find easy,” said lead author Akila Kadambi.
For example, humans can easily recognize a cluster of moving dots that represents a person’s walking gait. But AI systems struggle with this task, erroneously identifying the dots as constellations, for example.
“This paradox illustrates that, despite their tremendous processing speed and pattern-matching capabilities, AI systems do not truly understand the real world because they do not experience the real world,” Kadambi said.
Kadambi is a postdoctoral researcher in the USC Brain and Creativity Institute’s Center for the Neuroscience of Embodied Cognition under the direction of USC Chan Professor Lisa Aziz-Zadeh, and at the UCLA Department of Psychiatry and Biobehavioral Sciences in the David Geffen School of Medicine. She is also a research fellow at the UCLA-CDU Dana Center.
At the foundation of the human experience is embodiment — the continuous communication and feedback loops between brain, body and environment that not only maintain physiological homeostasis but also enable our brains to regulate, integrate and modulate information to carry out actions through three-dimensional space. Human learning therefore depends upon both external sensory information and internal bodily signals that help us make decisions, understand others and interpret social situations. Even simple interactions, according to Kadambi, depend upon this innate sense of “internal” embodiment.
“When a dinner guest asks you ‘Where is the salt?,’ your brain does far more than just parse language,” Kadambi said. “You subconsciously interpret their motivations, locate your body position within the environment, plan your hand’s physical action sequences and respond in context.”
In contrast, today’s AI architectures treat the salt question as a statistical pattern-matching problem. Answers are generated by optimizing the probability of a correct external response, without any underlying internal sense of empathy, relationship or intention. The authors argue this lack of internal embodiment limits AI’s capacity to reason about space, actions and social situations in the ways humans can.
More Human-Like AI Will Be Safer AI
In their paper, the neuroscientists argue that future AI architectures and algorithms must go beyond linguistic descriptions of internal embodiment, incorporating engineering that actually accounts for concepts like uncertainty, social connectivity and resource tradeoffs. Rather than behaving unpredictably in unfamiliar contexts or confidently generating false responses, AI with a stronger sense of internal embodiment would adapt more effectively to complex and ambiguous situations, deferring an answer or seeking more information instead of making the best (but wrong) guess.
The authors also emphasize that the future of embodied AI does not require a replication of human neurobiology. Instead, better simulation of the regulatory roles that internal states play in human reasoning and behavior will make AI more grounded, adaptable and aligned with real-world conditions.
Perhaps most importantly, they argue that AI with internal embodiment will ultimately be safer.
“Today’s AI systems have no built-in costs or disincentives,” Kadambi said. “But in biological systems, survival depends on maintaining internal stability and estimating risks to the individual and to the population, both of which naturally constrain behavior.”
The authors believe that embedding similar mechanisms into AI will serve as guardrails that prioritize empathic, prosocial responses over risky, antisocial ones.
As AI becomes increasingly integrated into daily life, the call is growing for more ethical, benevolent and human-centric behaviors. For artificial intelligence to better reflect human intelligence, according to Kadambi, it will need the very type of intelligence that, for now, only seems to be located within the human body.
“Embodiment may be what bridges that gap between raw processing power and more meaningful understanding.”
Article co-authors with Kadambi and Aziz-Zadeh are Antonio Damasio (USC Brain and Creativity Institute), Marco Iacoboni (Ahmanson-Lovelace Brain Mapping Center at UCLA) and Srini Narayanan (Google DeepMind). Their publication was supported in part by a Google Faculty Research Award titled “AI Tropes and the Human Mind: A Neuroscience Perspective” (PI: L. Aziz-Zadeh).