
Smart Insoles Can Now Reconstruct Your Full Body Movement

Smart insoles can now recreate your entire body's movement, a breakthrough that could transform sports training and rehabilitation without bulky equipment.

AI Research
November 14, 2025
3 min read

Imagine wearing ordinary shoe insoles that can accurately reconstruct your entire body's movement—from walking and running to dancing and crouching. Researchers have developed a breakthrough method that uses pressure-sensing insoles to capture and recreate complex human motion, opening new possibilities for sports analytics, rehabilitation, and virtual reality without requiring bulky motion capture suits or cameras.

The key finding of the study, which introduces a method called Step2Motion, is that data from smart insoles alone can successfully reconstruct full-body human locomotion. The researchers demonstrated that their system accurately captures diverse movement styles, including walking, jogging, sidestepping, tiptoeing, crouching, and even dancing. This is the first general method to achieve full-body motion reconstruction using only insole sensor data.

The methodology combines pressure sensors and inertial measurement units (IMUs) embedded in standard shoe insoles. Each insole contains 16 pressure sensors that measure force distribution across the foot, along with accelerometers and gyroscopes that capture movement dynamics. The system uses a diffusion-based AI model that processes this multimodal data—pressure distribution, linear acceleration, angular rotation rates, and center of pressure location—to generate realistic 3D body poses.
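To make the input concrete, here is a minimal sketch of how one time step of insole data might be laid out as a feature vector. The function name, shapes, and ordering are assumptions for illustration, not the paper's actual data format:

```python
import numpy as np

N_PRESSURE = 16  # pressure sensors per insole, as described in the article

def insole_frame(pressure, accel, gyro, cop):
    """Concatenate one insole's readings into a flat feature vector
    (hypothetical layout: pressure, linear acceleration, angular rates,
    center of pressure)."""
    pressure = np.asarray(pressure, dtype=float)  # (16,) force distribution
    accel = np.asarray(accel, dtype=float)        # (3,) linear acceleration
    gyro = np.asarray(gyro, dtype=float)          # (3,) angular rotation rates
    cop = np.asarray(cop, dtype=float)            # (2,) center of pressure (x, y)
    assert pressure.shape == (N_PRESSURE,)
    return np.concatenate([pressure, accel, gyro, cop])  # (24,) per insole

# Both feet stacked give a 48-dimensional input per time step.
left = insole_frame(np.zeros(16), [0, 0, 9.8], [0, 0, 0], [0.1, 0.2])
right = insole_frame(np.zeros(16), [0, 0, 9.8], [0, 0, 0], [0.1, 0.2])
frame = np.concatenate([left, right])
print(frame.shape)  # (48,)
```

A sequence of such frames over time would then form the conditioning signal for the generative model.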

As shown in Figure 2 of the paper, the reconstruction process starts with random noise and progressively refines it into accurate motion sequences through multiple iterations. The system employs a specialized attention mechanism that allows it to focus on different sensor modalities depending on the body part being reconstructed. For example, during stationary actions like squatting, pressure distribution becomes the primary information source, while during walking, acceleration data becomes more important.
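The noise-to-motion refinement loop can be sketched in a few lines. This is a deliberately simplified stand-in: the `denoise_step` below just pulls the noisy sequence toward a conditioning target, whereas the real model is a learned neural denoiser with cross-attention over the sensor modalities. All names and shapes here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, cond, t):
    """Toy denoiser: nudges the noisy pose sequence toward the conditioning
    signal. In Step2Motion this role is played by a trained network, not a
    fixed linear update."""
    return x + 0.5 * (cond - x)

# Toy "motion": T frames of J joints in 3D, conditioned on a sensor-derived target.
T, J = 8, 22
target = np.zeros((T, J, 3))    # proxy for the sensor-conditioned motion
x = rng.normal(size=(T, J, 3))  # reconstruction starts from pure noise

for t in reversed(range(10)):   # progressive refinement over iterations
    x = denoise_step(x, target, t)

print(np.abs(x - target).max())  # residual shrinks toward zero
```

The point of the sketch is the control flow: each iteration starts from the previous estimate and refines it, which is the diffusion-model pattern the paper's Figure 2 depicts.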

The results show clear improvements over existing methods. In quantitative evaluations, Step2Motion achieved mean positional errors of 7.2-7.7 centimeters for full-body reconstruction and 9.9-10.7 centimeters for the legs specifically, and it maintained this accuracy across various activities, with leg velocity errors of 6.5-7.4 centimeters per second. Ablation tests showed that removing the multi-head cross-attention mechanism increased errors by 25-40%, demonstrating the importance of the specialized architecture.
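The "mean positional error" reported above is a standard motion-capture metric: the average Euclidean distance between predicted and ground-truth joint positions. A minimal sketch (joint count and units are assumptions for illustration):

```python
import numpy as np

def mean_positional_error(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth joint
    positions, in whatever units the inputs use (the paper reports
    centimeters)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy example: every joint prediction offset by 5 cm along one axis.
gt = np.zeros((100, 22, 3))   # frames x joints x xyz, in cm
pred = gt.copy()
pred[..., 0] += 5.0
print(mean_positional_error(pred, gt))  # 5.0
```

Under this metric, the reported 7.2-7.7 cm full-body error means each joint lands, on average, within roughly a hand's width of its true position.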

This technology matters because it addresses major limitations of current motion capture systems. Optical systems require expensive multi-camera setups and controlled environments, while IMU-based systems need specialized suits that restrict movement. The insole-based approach works seamlessly outdoors, requires no external equipment, and integrates naturally into regular footwear. This makes motion capture accessible for real-world applications like sports training, physical therapy, and immersive entertainment.

The system does have limitations, primarily stemming from drift in the inertial sensors and challenges in accurately predicting movements of body parts far from the feet, such as detailed arm motions during complex activities like dancing. The researchers note that integrating additional sensors or camera data could help mitigate these issues in future versions. Additionally, the current method relies on training data that includes paired insole readings and motion capture sequences, which limits its ability to generate entirely new motion patterns not seen during training.

Despite these limitations, Step2Motion represents a significant step toward practical, unconstrained motion capture that could transform how we monitor and analyze human movement in everyday settings.

About the Author

Guilherme A.


Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
