PyTorch, Python, C++, CNN, Computer Vision, Multi-Agent Systems
This project explores how human movement can shape the collective behavior of robotic swarms. Drawing from Laban Movement Analysis, I am building a real-time pose classification pipeline that identifies movement qualities (such as float vs. punch) from a dancer’s body.
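One way to separate qualities like float (sustained, light) from punch (sudden, strong) is to look at the dynamics of the pose over time rather than its static shape. The sketch below is a minimal, hypothetical illustration of that idea: it derives speed and acceleration statistics from a sequence of 2D joint positions and applies a simple threshold. The function names, the `(frames, joints, xy)` layout, and the threshold value are all assumptions for illustration, not the project's actual classifier.

```python
import numpy as np

def effort_features(landmarks, fps=30.0):
    """Simple Laban-inspired effort features from a pose sequence.

    landmarks: array of shape (T, J, 2) -- T frames, J joints, (x, y).
    Returns mean speed and mean acceleration magnitude; sustained
    motion yields low acceleration, sudden motion yields high.
    """
    vel = np.diff(landmarks, axis=0) * fps   # per-joint velocity, (T-1, J, 2)
    acc = np.diff(vel, axis=0) * fps         # per-joint acceleration, (T-2, J, 2)
    mean_speed = np.linalg.norm(vel, axis=-1).mean()
    mean_accel = np.linalg.norm(acc, axis=-1).mean()
    return mean_speed, mean_accel

def classify_quality(landmarks, accel_threshold=5.0):
    """Threshold stand-in for a learned classifier: high acceleration -> punch."""
    _, mean_accel = effort_features(landmarks)
    return "punch" if mean_accel > accel_threshold else "float"
```

In the full pipeline these features would be computed on landmarks streamed from the pose tracker and fed to a trained model; the threshold here only stands in for that learned decision boundary.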
The system integrates MediaPipe-based pose tracking with a machine learning classifier to label movement phrases. These classifications are then broadcast into a multi-agent swarm simulation, where 50 robots translate human dynamics into emergent group behaviors such as flocking, orbiting, or dispersing.
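The mapping from a movement label to a group behavior can be sketched with a toy steering model: each classified quality selects force parameters that pull the 50 agents together (flocking) or push them apart (dispersing). The behavior table, parameter names, and constants below are illustrative assumptions, not the project's actual control code.

```python
import numpy as np

# Hypothetical mapping from a classified movement quality to swarm
# parameters; the keys and values are illustrative only.
BEHAVIORS = {
    "float": {"cohesion": 0.05, "separation": 0.0},   # gentle flocking
    "punch": {"cohesion": -0.05, "separation": 0.2},  # rapid dispersal
}

def step(positions, velocities, quality, dt=0.1, damping=0.95):
    """Advance the swarm one tick under the behavior keyed by `quality`.

    positions, velocities: arrays of shape (N, 2), one row per agent.
    """
    params = BEHAVIORS[quality]
    center = positions.mean(axis=0)
    # Cohesion steers agents toward (or, when negative, away from) the centroid.
    velocities = velocities + params["cohesion"] * (center - positions) * dt
    if params["separation"]:
        # Separation adds a unit-length push directly away from the centroid.
        away = positions - center
        norms = np.linalg.norm(away, axis=1, keepdims=True) + 1e-9
        velocities = velocities + params["separation"] * (away / norms) * dt
    velocities = velocities * damping  # damping keeps the integration stable
    return positions + velocities * dt, velocities
```

In the real system the label would arrive over the broadcast channel each frame and swap the active parameter set; running repeated steps under "float" contracts the group while "punch" scatters it.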
By linking dance notation and robotics, the project creates a call-and-response choreography between human performers and machines. My goal is to test how movement qualities, rather than predefined gestures, can guide collective robotic action.