Tsuyoshi HIGASHIGUCHI Norimichi UKITA Masayuki KANBARA Norihiro HAGITA
This paper proposes a method for predicting individuality-preserving gait patterns. Physical rehabilitation can be performed with visual and/or physical instructions given by physiotherapists or exoskeletal robots. However, template-based rehabilitation may cause a patient discomfort and pain because of deviations from his/her natural gait. Our work addresses this problem by predicting an individuality-preserving gait pattern for each patient. In this prediction, the transition of gait patterns is modeled by associating the sequence of 3D skeletons in a gait with its continuous-valued gait features (e.g., walking speed or step width). In the space of the prediction model, the arrangement of the gait patterns is optimized so that (1) similar gait patterns are close to each other and (2) the gait features change smoothly between neighboring gait patterns. This model makes it possible to predict individuality-preserving gait patterns for each patient even if his/her gait patterns under various conditions are not available for prediction. The effectiveness of the proposed method is demonstrated quantitatively with two datasets.
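As a rough sketch of the arrangement idea described above (not the authors' implementation), the following Python toy optimizes 2D coordinates for a set of gait patterns with a stress-style cost whose two terms mirror criteria (1) and (2), and then predicts a new pattern by blending neighbors. All data shapes, the 2D model space, and the blending rule are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for the paper's inputs: N gait patterns, each a
# flattened 3D-skeleton sequence descriptor, paired with continuous
# gait features (e.g., walking speed, step width).
rng = np.random.default_rng(0)
N, D = 20, 60
patterns = rng.normal(size=(N, D))   # gait pattern descriptors (toy)
features = rng.uniform(size=(N, 2))  # (walking speed, step width) (toy)

def arrangement_cost(coords, alpha=1.0):
    """Stress-style cost on the 2D arrangement: (1) arrangement distances
    should reflect distances between gait patterns, and (2) they should
    also reflect distances between gait features, so that the features
    vary smoothly between neighboring points."""
    cost = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            d_xy = np.linalg.norm(coords[i] - coords[j])
            cost += (d_xy - np.linalg.norm(patterns[i] - patterns[j])) ** 2
            cost += alpha * (d_xy - np.linalg.norm(features[i] - features[j])) ** 2
    return cost

res = minimize(lambda x: arrangement_cost(x.reshape(N, 2)),
               rng.normal(size=N * 2), method="L-BFGS-B")
coords = res.x.reshape(N, 2)  # optimized arrangement of gait patterns

def predict_pattern(target_features, k=3):
    """Blend the k stored patterns whose gait features are closest to the
    target -- a crude stand-in for interpolation in the model space."""
    d = np.linalg.norm(features - target_features, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return (w[:, None] * patterns[idx]).sum(axis=0) / w.sum()

new_gait = predict_pattern(np.array([0.7, 0.3]))  # desired speed, step width
```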
Tsuyoshi HIGASHIGUCHI Toma SHIMOYAMA Norimichi UKITA Masayuki KANBARA Norihiro HAGITA
This paper proposes a method for evaluating physical gait motion based on a 3D human skeleton measured by a depth sensor. While similar methods measure and evaluate the motion of only a body part of interest (e.g., a knee), the proposed method comprehensively evaluates the motion of the full body. Gait motions with a variety of physical disabilities due to lesioned body parts are recorded and modeled in advance for gait anomaly detection. This detection is achieved by identifying lesioned parts from a set of pose features extracted from gait sequences. In experiments, the proposed features extracted from the full body allowed us to identify where a subject was injured with 83.1% accuracy using a model optimized for the individual. The superiority of the full-body features was validated in contrast to local features extracted from only a body part of interest (77.1% with lower-body features and 65% with upper-body features). Furthermore, the effectiveness of the proposed full-body features was also validated with a single universal model used for all subjects: 55.2%, 44.7%, and 35.5% with the full-body, lower-body, and upper-body features, respectively.
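To make the full-body-versus-local-features comparison concrete, here is a minimal Python sketch under stated assumptions: per-joint pose features (here, random placeholders) are classified into lesioned-part labels, and accuracy is compared between the full feature set and a lower-body subset. The feature layout, joint indices, and classifier choice are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed layout: for each gait sequence, a few summary pose features
# (e.g., joint-angle statistics) per joint; the label is the lesioned part.
rng = np.random.default_rng(1)
n_seq, n_joints, feats_per_joint = 200, 15, 4
X = rng.normal(size=(n_seq, n_joints * feats_per_joint))  # toy features
y = rng.integers(0, 5, size=n_seq)  # 5 lesioned-part classes (toy labels)

LOWER_BODY = list(range(9, 15))  # assumed hip/knee/ankle joint indices

def joint_subset(X, joints, fpj=feats_per_joint):
    """Select the feature columns belonging to the given joints."""
    cols = [j * fpj + k for j in joints for k in range(fpj)]
    return X[:, cols]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
full = cross_val_score(clf, X, y, cv=5).mean()
lower = cross_val_score(clf, joint_subset(X, LOWER_BODY), y, cv=5).mean()
print(f"full-body accuracy: {full:.3f}, lower-body accuracy: {lower:.3f}")
```

With real gait features, the reported gap (83.1% full body vs. 77.1% lower body) would be measured this way; on the random placeholders above, both scores hover around chance.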
Kittiya KHONGKRAPHAN Pakorn KAEWTRAKULPONG
A novel method is proposed for estimating the 3D relative positions of an articulated body from point correspondences in an uncalibrated monocular image sequence. It is based on a perspective camera model. Unlike previous approaches, the proposed method requires neither camera parameters nor a manual specification of the 3D pose in the first frame, nor the assumption that at least one predefined segment in every frame is parallel to the image plane. Instead, our work makes simpler assumptions: for example, that the actor stands vertically, parallel to the image plane, and that not all of his/her joints lie on a plane parallel to the image plane in the first frame. The input to our algorithm consists of a topological skeleton model and 2D position data for the joints of a human actor. Using the geometric constraints of the body parts in the skeleton model, the 3D relative coordinates of the model are obtained. This 2D-to-3D reconstruction is an ill-posed problem because its solution is not unique. We therefore introduce a technique based on the concept of multiple hypothesis tracking (MHT), with a motion-smoothness function between consecutive frames, to automatically find the optimal solution to this ill-posed problem. Since the candidate reconstruction configurations are obtained from a closed-form equation, our technique is very efficient. Highly accurate results were attained for both synthesized and real-world image sequences. We also compared our technique with both scaled-orthographic and existing perspective approaches; our method outperformed them, especially in scenes with strong perspective effects and difficult poses.
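The MHT-style disambiguation can be sketched as a beam search. In monocular reconstruction, each limb segment typically admits two depth solutions (tilted toward or away from the camera), so a frame with K segments yields up to 2^K candidate poses; keeping several hypotheses per frame and scoring transitions by motion smoothness selects a consistent sequence. The Python toy below illustrates only this selection step; the random per-segment depths stand in for the paper's closed-form perspective reconstruction, and the beam width is an arbitrary choice.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
T, K = 10, 4  # frames, limb segments (toy sizes)

def candidate_poses(frame_depths):
    """All sign assignments of the per-segment relative depths, i.e. the
    2^K reconstruction hypotheses for one frame."""
    return [frame_depths * np.array(s) for s in product((-1, 1), repeat=K)]

depths = rng.uniform(0.5, 1.0, size=(T, K))  # toy |depth| per segment

# Beam of (accumulated cost, pose sequence) hypotheses across frames.
beam = [(0.0, [c]) for c in candidate_poses(depths[0])]
for t in range(1, T):
    new_beam = []
    for cost, path in beam:
        for c in candidate_poses(depths[t]):
            smooth = np.sum((c - path[-1]) ** 2)  # motion-smoothness term
            new_beam.append((cost + smooth, path + [c]))
    beam = sorted(new_beam, key=lambda h: h[0])[:16]  # prune to best 16

best_cost, best_path = beam[0]  # smoothest hypothesis sequence
print(f"selected sequence smoothness cost: {best_cost:.3f}")
```

Because each per-frame candidate comes from a closed-form equation rather than an iterative fit, the cost of the whole procedure is dominated by this cheap enumeration and pruning, which is consistent with the efficiency claim above.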