This paper presents a vision-based human interface system that enables a user to move a target object in a 3D CG world by moving his hand. The system can interpret hand motions both in a frame fixed in the world and in a frame attached to the user. If the latter is chosen, the user can move the object forward by moving his hand forward even after changing his body position. In addition, the user does not have to make sure that his hand stays within the camera's field of view: the active camera system tracks the user to keep him in view. Moreover, the system does not need any camera calibration. The key to realizing a system with these features is a set of vision algorithms based on the multiple view affine invariance theory. We demonstrate an experimental system as well as the vision algorithms. Human operation experiments show the usefulness of the system.
Kang-Hyun JO, Kentaro HAYASHI, Yoshinori KUNO, and Yoshiaki SHIRAI, "Vision-Based Human Interface System with World-Fixed and Human-Centered Frames Using Multiple View Invariance," IEICE TRANSACTIONS on Information, vol. E79-D, no. 6, pp. 799-808, June 1996.
Abstract: This paper presents a vision-based human interface system that enables a user to move a target object in a 3D CG world by moving his hand. The system can interpret hand motions both in a frame fixed in the world and in a frame attached to the user. If the latter is chosen, the user can move the object forward by moving his hand forward even after changing his body position. In addition, the user does not have to make sure that his hand stays within the camera's field of view: the active camera system tracks the user to keep him in view. Moreover, the system does not need any camera calibration. The key to realizing a system with these features is a set of vision algorithms based on the multiple view affine invariance theory. We demonstrate an experimental system as well as the vision algorithms. Human operation experiments show the usefulness of the system.
URL: https://global.ieice.org/en_transactions/information/10.1587/e79-d_6_799/_p
@ARTICLE{e79-d_6_799,
author={Kang-Hyun JO and Kentaro HAYASHI and Yoshinori KUNO and Yoshiaki SHIRAI},
journal={IEICE TRANSACTIONS on Information},
title={Vision-Based Human Interface System with World-Fixed and Human-Centered Frames Using Multiple View Invariance},
year={1996},
volume={E79-D},
number={6},
pages={799-808},
abstract={This paper presents a vision-based human interface system that enables a user to move a target object in a 3D CG world by moving his hand. The system can interpret hand motions both in a frame fixed in the world and in a frame attached to the user. If the latter is chosen, the user can move the object forward by moving his hand forward even after changing his body position. In addition, the user does not have to make sure that his hand stays within the camera's field of view: the active camera system tracks the user to keep him in view. Moreover, the system does not need any camera calibration. The key to realizing a system with these features is a set of vision algorithms based on the multiple view affine invariance theory. We demonstrate an experimental system as well as the vision algorithms. Human operation experiments show the usefulness of the system.},
month={June}
}
TY - JOUR
TI - Vision-Based Human Interface System with World-Fixed and Human-Centered Frames Using Multiple View Invariance
T2 - IEICE TRANSACTIONS on Information
SP - 799
EP - 808
AU - Kang-Hyun JO
AU - Kentaro HAYASHI
AU - Yoshinori KUNO
AU - Yoshiaki SHIRAI
PY - 1996
JO - IEICE TRANSACTIONS on Information
VL - E79-D
IS - 6
JA - IEICE TRANSACTIONS on Information
Y1 - June 1996
AB - This paper presents a vision-based human interface system that enables a user to move a target object in a 3D CG world by moving his hand. The system can interpret hand motions both in a frame fixed in the world and in a frame attached to the user. If the latter is chosen, the user can move the object forward by moving his hand forward even after changing his body position. In addition, the user does not have to make sure that his hand stays within the camera's field of view: the active camera system tracks the user to keep him in view. Moreover, the system does not need any camera calibration. The key to realizing a system with these features is a set of vision algorithms based on the multiple view affine invariance theory. We demonstrate an experimental system as well as the vision algorithms. Human operation experiments show the usefulness of the system.
ER -