Pose estimation from depth images
This project considers a model-based, Cartesian control-theoretic approach for estimating human pose from a set of key feature points (key-points) detected using depth images obtained from a time-of-flight imaging device. The key-points represent positions of anatomical landmarks, detected and tracked over time by a probabilistic inference algorithm that is robust to partial occlusions and capable of resolving ambiguities in detection. The detected key-points are then used as input to a constrained, closed-loop inverse kinematics algorithm which not only estimates the pose of the articulated human model, but also provides feedback to the key-point detection module to resolve ambiguities or to supply estimates of undetected key-points. Based on a standard kinematic and mesh model of a human, constraints such as joint-limit avoidance and self-penetration avoidance are enforced within the closed-loop inverse kinematics framework. We demonstrate the effectiveness of the algorithm with experimental results of upper-body and whole-body pose reconstruction from a small set of detected key-points. On average, the proposed algorithm runs at approximately 10 frames per second for upper-body reconstruction and 5 frames per second for whole-body reconstruction on a standard 2.13 GHz laptop PC.
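The project's actual detection and inverse kinematics code are not shown here. As a rough illustration of the closed-loop inverse kinematics idea described above, the sketch below drives a toy 3-link planar chain toward a detected key-point position using a damped least-squares pseudoinverse, with joint limits enforced by simple clamping. The chain geometry, gains, and limits are all illustrative assumptions, not the project's human model or its constraint handling (which also includes self-penetration avoidance).

```python
# Minimal sketch of a constrained closed-loop inverse kinematics (CLIK) step.
# fk(), jacobian(), and the 3-link planar chain are illustrative stand-ins.
import numpy as np

LINK_LENGTHS = np.array([0.3, 0.25, 0.2])          # toy 3-link planar chain
Q_MIN = np.radians([-170.0, -120.0, -120.0])       # assumed joint limits
Q_MAX = np.radians([ 170.0,  120.0,  120.0])

def fk(q):
    """Planar forward kinematics: joint angles -> end-point position (2,)."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINK_LENGTHS * np.cos(angles)),
                     np.sum(LINK_LENGTHS * np.sin(angles))])

def jacobian(q):
    """Analytic 2x3 Jacobian of fk with respect to the joint angles."""
    angles = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        J[0, i] = -np.sum(LINK_LENGTHS[i:] * np.sin(angles[i:]))
        J[1, i] =  np.sum(LINK_LENGTHS[i:] * np.cos(angles[i:]))
    return J

def clik_step(q, x_target, k_gain=5.0, dt=0.02, damping=1e-2):
    """One CLIK update: q_dot = J^+ (K * error), using damped least squares
    for robustness near singularities and clamping for joint-limit avoidance."""
    error = x_target - fk(q)                        # Cartesian tracking error
    J = jacobian(q)
    # Damped least-squares pseudoinverse: J^T (J J^T + lambda^2 I)^-1
    J_pinv = J.T @ np.linalg.inv(J @ J.T + damping**2 * np.eye(2))
    q_dot = J_pinv @ (k_gain * error)
    q_new = np.clip(q + dt * q_dot, Q_MIN, Q_MAX)   # enforce joint limits
    return q_new, np.linalg.norm(error)

# Drive the chain's end point toward a detected key-point position.
q = np.zeros(3)
target = np.array([0.4, 0.3])
for _ in range(200):
    q, err = clik_step(q, target)
    if err < 1e-4:
        break
print("final error:", err, "joint angles (deg):", np.degrees(q))
```

In the full system, the same feedback structure is what lets the pose estimate flow back to the detection module: the model-predicted landmark positions from the current joint configuration can stand in for occluded or ambiguous key-points on the next frame.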