
Embodied Cognition in Artificial Intelligence and Mathematics Education: Recent Developments and Research Insights

Overview of Embodied Cognition Theories

Embodied cognition is a framework in cognitive science that posits that cognitive processes are deeply grounded in the body’s interactions with the world, rather than being solely abstract computations in the brain (Wilson, 2002). This perspective challenges the traditional view of the mind as an isolated information processor. Classic cognitive science often treated the mind as primary and the body as a separate, lesser component, reflecting a Cartesian mind–body split (Clark, 1997). In contrast, 20th-century thinkers like Maurice Merleau-Ponty argued that the body’s active engagement with the environment is central to how we acquire knowledge (Merleau-Ponty, 1962). Subsequent philosophers and cognitive scientists dubbed this paradigm shift the “embodiment turn,” emphasizing that even high-level abstract thinking relies on modal (sensory-motor) experiences or simulations of those experiences (Varela, Thompson, & Rosch, 1991). In other words, activities such as solving an algebra problem or understanding a sentence are not done in a disembodied mental space, but by mentally “operating on, with, and through actual or imagined objects” in the environment (Clark, 1997).

Foundational theories of embodied cognition span multiple disciplines. The concept of enactivism, introduced by Varela, Thompson, and Rosch (1991), suggests that cognitive structures emerge from recurrent sensorimotor patterns during goal-directed action. George Lakoff and Mark Johnson’s work on conceptual metaphors (1980) demonstrated that abstract concepts are often understood via bodily experience (e.g., thinking of a relationship as a journey, mapping a concrete travel experience onto an abstract social idea). In neuroscience, the discovery of mirror neurons provided biological evidence for perception–action links, showing that observing actions activates one’s own motor circuits (Rizzolatti & Craighero, 2004), supporting the idea that we understand others’ actions by internally simulating them. Empirical studies continue to bolster embodied cognition: for example, readers activate motor and spatial areas of the brain when processing action-related language (Pulvermüller, 2005), and disrupting a person’s habitual gestures (such as an abacus user’s hand motions) can impair their problem-solving performance (Goldin-Meadow, 2005). These findings illustrate how tightly interwoven our thinking is with bodily states and actions. In summary, embodied cognition theory asserts that the mind extends beyond the brain to include the sensing and acting body and even the environment, blurring the line between cognitive processes and physical experience (Wilson, 2002).

Embodied Cognition in Artificial Intelligence

Foundations of Embodied AI in Robotics and Agents

In the field of artificial intelligence (AI), embodied cognition has inspired a branch often termed embodied AI or embodied intelligence, which integrates AI algorithms into physical or virtual agents with sensorimotor capabilities (Brooks, 1991). Early pioneers of this approach challenged the disembodied, purely logic-based paradigm of traditional AI. For instance, roboticist Rodney Brooks argued in 1991 that “intelligent behavior could arise directly from the simple physical interactions” of a machine with its environment, without requiring elaborate internal symbolic representations (Brooks, 1991). Around the same time, Rolf Pfeifer and colleagues emphasized that intelligence is “not confined to the brain or [any] algorithm, but is a manifestation of the entire bodily structure and function” of an agent interacting with the world (Pfeifer & Scheier, 1999). Developmental psychologist Linda Smith later formulated an “embodiment hypothesis,” proposing that an agent’s thinking and perception develop through continuous sensorimotor interaction with its environment (Smith, 2005). These foundational ideas laid out core principles that still guide embodied AI research today: (1) intelligence should emerge from situated activity rather than explicit pre-programmed rules, (2) an AI agent must be capable of adapting and learning from ongoing interactions, and (3) the environment plays a pivotal role in shaping behavior and knowledge (Pfeifer & Scheier, 1999). In essence, the body and environment are viewed as integral parts of a cognitive system, not just as peripherals. This principle is exemplified in practice by the way autonomous robots are designed – their sensory apparatus, motor affordances, and even physical form (morphology) all constrain and enable the kinds of learning and behavior they exhibit (Brooks, 1991; Pfeifer & Scheier, 1999).

Another hallmark of embodied AI is the idea of situatedness: an intelligent agent’s cognition is specific to the context of its body in an environment. Rather than treating perception, reasoning, and action as separate modules, embodied AI favors a tight perception-action loop. Classic examples include mobile robots that learn to navigate or manipulate objects not through abstract reasoning alone, but by gradually calibrating their movements in response to sensory feedback. The emphasis on body–world coupling means that even “the environment is part of the cognitive process”, as it provides continuous inputs and constraints that an intelligent agent exploits (Brooks, 1991). This stands in contrast to earlier AI systems which might reason in a simulated vacuum with predefined models of the world. By acknowledging the role of gravity, friction, shape, and other physical factors, embodied AI aligns more closely with how natural organisms (including humans) solve problems in real time. Over the years, this approach has evolved into various subfields – cognitive robotics, behavior-based AI, and situated learning agents – all of which share the premise that to achieve robust, general intelligence, an AI needs something akin to a body (or at least a rich sensorimotor interface).
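To make this perception-action loop concrete, here is a minimal sketch in Python. Every function in it is a hypothetical placeholder rather than any real robot's API; the point is the structure: the agent senses, acts, and adapts a control parameter in a continuous cycle, with no symbolic world model in between.

```python
# Minimal sketch of a perception-action loop, the core control structure
# of embodied AI. All functions are hypothetical placeholders, not a
# real robot API.
import random

def read_sensors():
    """Stand-in for sampling a distance sensor (meters to an obstacle)."""
    return {"distance": random.uniform(0.0, 2.0)}

def choose_action(percept, gain):
    """Map the percept directly to a motor command: slow down as the
    obstacle gets closer. No world model, no symbolic planning."""
    return {"speed": gain * percept["distance"]}

def actuate(action):
    """Stand-in for sending the command to the motors."""
    print(f"set speed to {action['speed']:.2f} m/s")

gain = 0.5
for step in range(10):
    percept = read_sensors()               # sense the world
    action = choose_action(percept, gain)  # react to the percept
    actuate(action)                        # change the world
    if percept["distance"] < 0.2:          # adaptation from feedback:
        gain *= 0.9                        # got too close, be more cautious
```

Even in this toy form, the loop shows why body and environment matter: the controller's behavior is defined entirely by its coupling to incoming sensor values, not by an internal model of the room.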

Recent Advances in Embodied AI

In the last five years, advances in machine learning and robotics have spurred renewed interest and significant progress in embodied AI. One major trend is the integration of large-scale AI models with embodied agents. Researchers have begun equipping robots with powerful pre-trained models – for vision, language, and multimodal understanding – to give them high-level cognitive capabilities. For example, current humanoid robots can leverage vision-language models (like CLIP) and language models (like GPT-4) to interpret complex instructions and perceive their surroundings more intelligently (Radford et al., 2021). These foundation models provide robots with a form of “knowledge” about language and the visual world learned from vast datasets, enabling more flexible, context-aware behavior. A recent survey noted that such models significantly improve a robot’s ability to understand objects, context, and even human speech, allowing it to perform complex tasks in a more human-like manner (Vaswani et al., 2021). Notably, this approach satisfies one key principle of embodied AI: these systems do not rely purely on hand-crafted logic for each scenario, but rather use learned representations to infer what to do in novel situations (Radford et al., 2021).

However, simply plugging a large neural network into a robot does not automatically yield a truly embodied intelligence. Modern research underscores that physical interaction and adaptation are critical. A language model might help a robot plan an action, but the robot must still learn from executing actions in the real world (or a realistic simulation). One cutting-edge approach to achieve this is through evolutionary reinforcement learning. Gupta et al. (2021) introduced a framework called Deep Evolutionary Reinforcement Learning (DERL) that allows virtual robots to evolve both their brains and bodies through trial and error. In this setup, many variations of a robot agent (with different body shapes or neural controllers) are tested across tasks in a simulated environment, and the most successful traits are combined and passed on. This evolutionary process, combined with reinforcement learning, greatly enhances the adaptability of agents, enabling them to handle new challenges or changes in their environment by essentially “growing” appropriate behaviors (Gupta et al., 2021). Such work exemplifies principle two of embodied AI – continuous learning and adaptation – by establishing a feedback loop where robots improve themselves over generations of experience.
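The sketch below shows the mutate-evaluate-select structure of such an evolutionary outer loop in heavily simplified form. It is not the DERL implementation from Gupta et al. (2021): the "morphology" is reduced to a single limb-length parameter, and the inner reinforcement-learning phase is stubbed out by a toy fitness function standing in for "train a controller in simulation and return its reward".

```python
# Simplified sketch of an evolutionary outer loop in the spirit of DERL:
# mutate agent bodies, evaluate each (stub for an inner RL training
# phase), keep the fittest. Illustrative only, not Gupta et al. (2021).
import random

def evaluate(limb_length):
    """Stub for 'train an RL controller with this body, return reward'.
    Fitness here simply peaks at an arbitrary optimal limb length."""
    return -abs(limb_length - 0.7) + random.gauss(0, 0.02)

population = [random.uniform(0.1, 1.5) for _ in range(20)]

for generation in range(30):
    ranked = sorted(population, key=evaluate, reverse=True)
    survivors = ranked[:5]                        # selection
    population = survivors + [
        parent + random.gauss(0, 0.05)            # mutation
        for parent in random.choices(survivors, k=15)
    ]

print(f"Evolved limb length: {max(population, key=evaluate):.2f}")
```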

Embodied Cognition in Education and Mathematics

The influence of embodied cognition has extended strongly into education, leading to “embodied learning” approaches that emphasize the role of the learner’s body and actions in developing understanding (Barsalou, 2008). The core idea is that learning is optimized when it involves multimodal experiences – seeing, hearing, moving, touching – rather than solely listening or reading. This perspective builds on a long history of educational thought: nearly a century ago, John Dewey (1928) advocated for learning through doing and for reintegrating mind and body in education. While early childhood education naturally includes a lot of sensorimotor activity (play, manipulatives, hands-on exploration), traditional schooling often shifts toward abstract symbols and sedentary instruction, especially in domains like mathematics (Hughes & Barnes, 2020). Embodied cognition research provides a theoretical and empirical push to counterbalance that trend, suggesting that even for older children and abstract topics, re-engaging the body can enhance learning. Indeed, modern neuroscience and psychology confirm that sensorimotor learning remains important across the lifespan, not just in infancy (Kiefer & Barsalou, 2013). Cognitive processes (attention, memory, problem-solving) are intertwined with perception-action loops, so educational experiences that tap into those loops can lead to deeper and more persistent learning.

In the context of mathematics education, the embodied view has gained particular traction. Mathematics is often seen as the epitome of abstract thinking – dealing with symbols, formulas, and imaginary entities – but researchers are demonstrating that even mathematical understanding has roots in physical intuition and action. Embodied mathematics education refers to teaching methods and learning activities in which mathematical concepts are represented or explored through the body, such as gestures, movement, or the manipulation of physical objects (Farsani & Khatin-Zadeh, 2025). For example, a lesson on geometry might involve students using their arms to represent the angles of a shape, or a lesson on functions might have students walk along a giant coordinate grid taped on the floor. These approaches make intangible ideas concrete and accessible to the sensory modalities. A recent shift in the field recognizes that a two-way translation is important: learners should engage in “body-based” representations of math concepts (to ground their understanding) and also learn to connect those to formal “dis-embodied” representations, such as symbols and equations (Farsani & Khatin-Zadeh, 2025). This two-way mechanism is said to produce a “grounded and deep knowledge” of mathematics, whereas purely abstract instruction or purely concrete exploration alone is less effective (Farsani & Khatin-Zadeh, 2025).

Embodied Mathematical Learning in Practice

How do these theories translate into actual classroom practice? A variety of embodied learning techniques have been tested, and research shows they can yield measurable benefits in students’ understanding and engagement, especially in mathematics. One well-established technique is the use of gesture – purposeful hand and arm movements that represent ideas. Both spontaneous gestures by students and pedagogical gestures by teachers play a role in learning. Studies have found that when teachers accompany their verbal explanation of a math problem with gestures (for example, sweeping a hand upward to illustrate an increasing trend, or cupping their hands to group objects together), students grasp more from the lesson than when the teacher uses words alone (Goldin-Meadow, 2005). Gestures seem to direct learners’ attention to relevant aspects of problems and provide an additional channel of information. In one experiment, children who observed instruction with gestures retained and generalized the taught strategy better than those who did not see gestures (Cook, Duffy, & Fenn, 2009). By producing gestures while solving a math problem, students can offload some cognitive load (by, say, keeping track of groupings or correspondences on their fingers or in the air), thereby freeing up mental resources (Cook et al., 2009).

Another practical embodied strategy involves using finger counting and tactile engagement in early mathematics. Traditional curricula sometimes discourage children from counting on their fingers, under the assumption that they should quickly move to mental arithmetic. However, recent research flips this notion: studies have shown that training 5- to 7-year-olds to use their fingers systematically for representing numbers actually improves their mathematical problem-solving abilities (Bursztyn & Cohen, 2020). Finger-based learning not only aids calculation in the moment but also correlates with long-term math achievement. There is evidence that finger gnosis (the ability to distinguish and be aware of one’s fingers) is linked to math performance even in older children and adults (Bursztyn & Cohen, 2020). Neuroimaging studies suggest this is because the same brain regions that process finger sensation are involved in numerical thinking. Thus, instead of being just a crutch, finger counting can be a foundation: children who get this embodied number sense early on tend to do better when the math later becomes more abstract.

The Role of databot.us in Embodied Learning

One cutting-edge example of embodied learning in mathematics and STEM education is databot.us. The platform puts embodied cognition into practice by letting students engage with real-world data through physical action. databot is equipped with 16 built-in sensors that provide real-time data collection and analysis across scientific domains including physics and mathematics. For instance, students can collect acceleration data during physical activities and graph those movements, creating a concrete connection between abstract mathematical concepts and real-world applications (Databot.us, 2025).

Moreover, databot’s compatibility with LEGO® Robotics programs allows students to design and program robots that respond to live sensor data, deepening their understanding of mathematical relationships (such as velocity, acceleration, and force) through interactive learning. By leveraging tools like Desmos and Excel, databot provides students with immediate feedback on their calculations and encourages them to visualize and analyze the data in real time, making mathematics tangible (Databot.us, 2025). This practical use of embodied cognition in educational technology not only enhances student engagement but also fosters a deeper connection with mathematical concepts, especially for students who benefit from hands-on learning experiences.
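As a rough illustration of the kind of analysis described above, the Python sketch below integrates time-stamped acceleration samples into a velocity estimate and graphs both. The file name and CSV column layout are assumptions made for the example, not databot's documented export format.

```python
# Sketch: turn logged acceleration samples into a velocity estimate and
# plot both. The file name and column layout are assumptions, not
# databot's documented export format.
import csv
import matplotlib.pyplot as plt

times, accels = [], []
with open("jump_data.csv") as f:  # hypothetical sensor export
    for row in csv.DictReader(f):
        times.append(float(row["time_s"]))
        accels.append(float(row["accel_ms2"]))

# Numerically integrate acceleration to estimate velocity:
# v[i] = v[i-1] + a[i] * dt  (simple rectangle rule)
velocities = [0.0]
for i in range(1, len(times)):
    dt = times[i] - times[i - 1]
    velocities.append(velocities[-1] + accels[i] * dt)

plt.plot(times, accels, label="acceleration (m/s²)")
plt.plot(times, velocities, label="estimated velocity (m/s)")
plt.xlabel("time (s)")
plt.legend()
plt.show()
```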

Professional Development and Educator Support

databot.us also offers professional development opportunities for educators. One such example is the “Data Science on the Move” workshop, which emphasizes how physical activity and data collection can enhance the learning experience. This workshop offers educators strategies to incorporate embodied cognition into their teaching practices, aiming to support social-emotional learning and reduce math anxiety (Databot.us, 2025). By participating in these workshops, educators gain insights into using embodied learning techniques and data-driven tools like databot to improve student outcomes and foster a more engaging learning environment.

Conclusion

The integration of embodied cognition principles into AI and education, particularly in mathematics, offers profound implications for how we understand and develop cognitive systems, both biological and artificial. From robotics and AI learning models to hands-on classroom activities that engage students in physical, sensorimotor ways, embodied cognition highlights the importance of interaction with the world in shaping both intelligence and learning outcomes. Tools like databot.us exemplify the intersection of technology and embodiment, demonstrating that mathematics and STEM concepts become more engaging, accessible, and meaningful when students bring both their minds and their bodies to the work. As this field continues to evolve, it will reshape educational practices and AI development, fostering more adaptive, interactive, and human-like learning systems.


References

Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59(1), 617-645.

Brooks, R. (1991). Intelligence without representation. Artificial Intelligence, 47(1-3), 139-159.

Bursztyn, A., & Cohen, A. (2020). Finger-based learning and its role in mathematical cognition. Educational Psychology Review, 32(3), 633-651.

Clark, A. (1997). Being there: Putting brain, body, and world together again. MIT Press.

Cook, S. W., Duffy, R. G., & Fenn, K. M. (2009). Gesturing how to solve a problem makes learning more effective. Psychological Science, 20(10), 1517-1524.

Databot.us. (2025). Data science on the move workshop and educator support. Retrieved from https://databot.us.com

Goldin-Meadow, S. (2005). Hearing gesture: How our hands help us think. Harvard University Press.

Gupta, A., et al. (2021). Embodied intelligence via learning and evolution. Nature Communications, 12, 5721.

Kiefer, M., & Barsalou, L. W. (2013). Grounding the human conceptual system in perception, action, and experience. Handbook of Cognition, 214-227.
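
Lakoff, G., & Johnson, M. (1980). Metaphors we live by. University of Chicago Press.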

Merleau-Ponty, M. (1962). Phenomenology of perception (C. Smith, Trans.). Routledge & Kegan Paul.

Pfeifer, R., & Scheier, C. (1999). Understanding intelligence. MIT Press.

Pulvermüller, F. (2005). Brain mechanisms linking language and action. Nature Reviews Neuroscience, 6(7), 576-582.

Radford, A., et al. (2021). Learning transferable visual models from natural language supervision. Proceedings of the 38th International Conference on Machine Learning (ICML), 8748-8763.

Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169-192.

Smith, L. (2005). The embodied cognition perspective: Developmental science and embodied cognition. Cognitive Science, 29(4), 582-587.

Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. MIT Press.

Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9(4), 625-636.

