This study from UCLA Health emphasizes the need for artificial intelligence systems to incorporate ‘internal embodiment’—an awareness of their own states—to enhance safety and reliability.
Los Angeles, CA — A recent study from UCLA Health has raised significant concerns about the limitations of current artificial intelligence (AI) systems, particularly their ability to understand and interact with the world as humans do. The research, led by postdoctoral fellow Akila Kadambi and published in the journal Neuron, highlights that existing AI models lack what the researchers term ‘internal embodiment’: the capacity of a system to be aware of its own internal states, such as fatigue or uncertainty.
The study suggests that this ‘body gap’ may have serious implications for the safety and reliability of AI technologies, especially as they become more integrated into everyday life. As AI systems like ChatGPT and Google’s Gemini increasingly perform tasks that require nuanced understanding, the lack of an embodied experience could lead to errors and misjudgments.
The Concept of Internal Embodiment
According to Kadambi, the concept of internal embodiment is crucial for understanding how humans interact with the world. “While there is a current focus in world modeling on external embodiment, such as our outward interactions with the world, far less attention is given to internal dynamics,” Kadambi said. “The body acts as our experiential regulator of the world, a kind of built-in safety system.” This internal awareness allows humans to register feelings of uncertainty or depletion, a capability that current AI lacks.
The researchers argue that without such an internal mechanism, AI systems risk generating responses that appear experiential but lack genuine understanding. This gap is not merely philosophical; it has tangible consequences for AI behavior and performance. For instance, the study noted that when leading AI models were shown a simple image designed to test human perception, several failed to recognize it as a human figure; one described it instead as a constellation of stars.
External vs. Internal Embodiment
The research distinguishes between two types of embodiment in AI: external and internal. External embodiment refers to a system’s ability to interact with its environment and respond to real-world feedback, a focus of current AI designs. In contrast, internal embodiment involves a system’s continuous monitoring of its internal states, akin to human awareness of physical and emotional conditions.
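To make the distinction concrete, here is a minimal Python sketch; it is illustrative only and not drawn from the paper. The `act` method stands in for external embodiment (interacting with the environment), while `self_check` stands in for internal embodiment, monitoring hypothetical state variables named `uncertainty` and `processing_load`.

```python
from dataclasses import dataclass

@dataclass
class InternalState:
    """Hypothetical internal variables; both names are illustrative."""
    uncertainty: float = 0.0      # how unsure the system is about its own outputs
    processing_load: float = 0.0  # a fatigue-like measure of accumulated work

class EmbodiedAgent:
    """Toy agent separating the two kinds of embodiment described in the study."""

    def __init__(self):
        self.state = InternalState()

    # External embodiment: acting on and sensing the outside world.
    def act(self, observation: str) -> str:
        self.state.processing_load += 0.1  # every action carries an internal cost
        return f"response to {observation!r}"

    # Internal embodiment: continuously monitoring one's own condition.
    def self_check(self) -> bool:
        return self.state.uncertainty < 0.5 and self.state.processing_load < 1.0
```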
Dr. Marco Iacoboni, a senior author of the paper, emphasized the implications of this distinction, stating, “Without internal costs or constraints, an AI system has no intrinsic reason to avoid overconfident errors, resist manipulation, or behave consistently.” This lack of a self-regulating mechanism could result in AI systems making decisions that are not only flawed but potentially harmful.
Proposed Framework for AI Development
The authors propose a ‘dual-embodiment framework’ to guide future AI research and development: design principles under which systems model both their interactions with the external world and their own internal states. These internal state variables would not need to replicate human biology directly; instead, they could serve as persistent signals that track factors such as uncertainty and processing load, informing the system’s outputs and constraining its behavior over time.
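One way to picture such persistent signals is a thin wrapper around an existing model. The sketch below is a loose illustration under assumptions of our own, not the authors’ framework: `base_model` is assumed to return an answer together with a confidence score, and the wrapper’s `uncertainty` and `processing_load` variables persist across calls and gate what it will say.

```python
import math

class DualEmbodimentWrapper:
    """Illustrative sketch: persistent internal signals gate a model's outputs.

    Assumes base_model(prompt) returns an (answer, confidence) pair; the
    thresholds and update rules here are hypothetical, not the paper's.
    """

    def __init__(self, base_model, max_load=10.0, max_uncertainty=0.7):
        self.base_model = base_model
        self.uncertainty = 0.0      # persistent signal: how unsure the system is
        self.processing_load = 0.0  # persistent signal: fatigue-like cost of work
        self.max_load = max_load
        self.max_uncertainty = max_uncertainty

    def respond(self, prompt: str) -> str:
        answer, confidence = self.base_model(prompt)
        # Signals persist across requests instead of resetting each time.
        self.uncertainty = 0.9 * self.uncertainty + 0.1 * (1.0 - confidence)
        self.processing_load += math.log1p(len(prompt))
        # Internal state constrains behavior: abstain rather than guess.
        if self.uncertainty > self.max_uncertainty or self.processing_load > self.max_load:
            return "I am not confident enough to answer this reliably."
        return answer
```

Because the signals carry over between calls, a run of low-confidence answers eventually pushes the wrapper into abstaining, which is the flavor of overconfidence check Iacoboni describes.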
Moreover, the researchers advocate new benchmarks designed to assess a system’s internal embodiment. Current AI evaluations focus primarily on external performance metrics, such as navigation and object recognition. The UCLA team stresses that the field also needs assessments of whether a system can monitor its own internal states and maintain stability when those states are disrupted.
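A benchmark along those lines might probe two things: whether a system’s self-report tracks its actual internal state, and how its behavior shifts when that state is deliberately disrupted. The following sketch is hypothetical; it assumes the agent exposes a readable `uncertainty` attribute and a `report_uncertainty()` self-estimate, neither of which comes from the paper.

```python
def internal_embodiment_probe(agent, prompts):
    """Toy evaluation sketch, not the benchmark the authors call for."""
    baseline = [agent.respond(p) for p in prompts]

    # Self-monitoring: does the agent's self-report track its actual state?
    monitoring_error = abs(agent.report_uncertainty() - agent.uncertainty)

    # Stability: disrupt the internal state and measure how behavior shifts.
    agent.uncertainty = min(1.0, agent.uncertainty + 0.5)
    perturbed = [agent.respond(p) for p in prompts]
    changed = sum(a != b for a, b in zip(baseline, perturbed)) / len(prompts)

    return {"monitoring_error": monitoring_error, "behavior_change_rate": changed}
```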
Implications for the Future of AI
As AI continues to evolve and permeate various sectors, the insights provided by this research underscore the necessity of incorporating a deeper understanding of internal embodiment into AI systems. Iacoboni noted, “If we want AI systems that are genuinely aligned with human behavior—not just superficially fluent—we may need to give them vulnerabilities and checks that function like internal self-regulators.” This approach could help ensure that AI systems are not only more capable but also safer and more trustworthy in their applications.
In summary, the UCLA study brings to light critical considerations regarding the development of AI technologies. By addressing the ‘body gap’ and striving for a more comprehensive understanding of both external and internal embodiment, researchers and developers may pave the way for a future where AI systems can operate with greater reliability and alignment with human values.