For fast egomotion of a camera, computing feature correspondence and motion parameters by global search becomes highly time-consuming. Therefore, the complexity of the estimation needs to be reduced for real-time applications. In this paper, we propose a compound omnidirectional vision sensor and an algorithm for estimating its fast egomotion. The proposed sensor has both multi-baselines and a large field of view (FOV). Our method uses the multi-baseline stereo vision capability to classify feature points as near or far features. After the classification, we can estimate the camera rotation and translation separately by using random sample consensus (RANSAC) to reduce the computational complexity. The large FOV also improves the robustness since the translation and rotation are clearly distinguished. To date, there has been no work on combining multi-baseline stereo with large FOV characteristics for estimation, even though these characteristics are individually important in improving egomotion estimation. Experiments showed that the proposed method is robust and produces reasonable accuracy in real time for fast motion of the sensor.
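The decoupling described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the disparity threshold, the 2-point minimal sample, and the Kabsch (SVD) rotation solver are all assumptions. The idea is that far features (small stereo disparity) are nearly insensitive to translation and so constrain rotation alone, letting RANSAC run on a much smaller model.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify_features(disparities, threshold=2.0):
    """Split feature indices by multi-baseline stereo disparity.
    Far features (small disparity) constrain rotation; near features
    (large disparity) constrain translation. Threshold is an assumption."""
    d = np.asarray(disparities, dtype=float)
    near = np.flatnonzero(d >= threshold)
    far = np.flatnonzero(d < threshold)
    return near, far

def _kabsch(P, Q):
    """Closed-form rotation R minimizing sum ||R p_i - q_i||^2 (SVD)."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

def ransac_rotation(prev_dirs, curr_dirs, iters=200, tol=1e-3):
    """Estimate camera rotation from unit bearing vectors of far
    features, using a 2-point minimal sample per RANSAC iteration.
    A rotation has 3 DOF and each correspondence gives 2 constraints,
    so two non-parallel bearings determine it."""
    best_R, best_count = np.eye(3), -1
    n = len(prev_dirs)
    for _ in range(iters):
        idx = rng.choice(n, size=2, replace=False)
        R = _kabsch(prev_dirs[idx], curr_dirs[idx])
        # Residual of each bearing under the candidate rotation.
        err = np.linalg.norm(prev_dirs @ R.T - curr_dirs, axis=1)
        count = int((err < tol).sum())
        if count > best_count:
            best_R, best_count = R, count
    return best_R
```

Once the rotation is fixed from the far features, the near-feature bearings can be derotated and a translation-only RANSAC run on them, which is the complexity reduction the abstract refers to.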
Trung Thanh NGO
Yuichiro KOJIMA
Hajime NAGAHARA
Ryusuke SAGAWA
Yasuhiro MUKAIGAWA
Masahiko YACHIDA
Yasushi YAGI
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Trung Thanh NGO, Yuichiro KOJIMA, Hajime NAGAHARA, Ryusuke SAGAWA, Yasuhiro MUKAIGAWA, Masahiko YACHIDA, Yasushi YAGI, "Real-Time Estimation of Fast Egomotion with Feature Classification Using Compound Omnidirectional Vision Sensor" in IEICE TRANSACTIONS on Information,
vol. E93-D, no. 1, pp. 152-166, January 2010, doi: 10.1587/transinf.E93.D.152.
Abstract: For fast egomotion of a camera, computing feature correspondence and motion parameters by global search becomes highly time-consuming. Therefore, the complexity of the estimation needs to be reduced for real-time applications. In this paper, we propose a compound omnidirectional vision sensor and an algorithm for estimating its fast egomotion. The proposed sensor has both multi-baselines and a large field of view (FOV). Our method uses the multi-baseline stereo vision capability to classify feature points as near or far features. After the classification, we can estimate the camera rotation and translation separately by using random sample consensus (RANSAC) to reduce the computational complexity. The large FOV also improves the robustness since the translation and rotation are clearly distinguished. To date, there has been no work on combining multi-baseline stereo with large FOV characteristics for estimation, even though these characteristics are individually important in improving egomotion estimation. Experiments showed that the proposed method is robust and produces reasonable accuracy in real time for fast motion of the sensor.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E93.D.152/_p
@ARTICLE{e93-d_1_152,
author={Trung Thanh NGO and Yuichiro KOJIMA and Hajime NAGAHARA and Ryusuke SAGAWA and Yasuhiro MUKAIGAWA and Masahiko YACHIDA and Yasushi YAGI},
journal={IEICE TRANSACTIONS on Information},
title={Real-Time Estimation of Fast Egomotion with Feature Classification Using Compound Omnidirectional Vision Sensor},
year={2010},
volume={E93-D},
number={1},
pages={152-166},
abstract={For fast egomotion of a camera, computing feature correspondence and motion parameters by global search becomes highly time-consuming. Therefore, the complexity of the estimation needs to be reduced for real-time applications. In this paper, we propose a compound omnidirectional vision sensor and an algorithm for estimating its fast egomotion. The proposed sensor has both multi-baselines and a large field of view (FOV). Our method uses the multi-baseline stereo vision capability to classify feature points as near or far features. After the classification, we can estimate the camera rotation and translation separately by using random sample consensus (RANSAC) to reduce the computational complexity. The large FOV also improves the robustness since the translation and rotation are clearly distinguished. To date, there has been no work on combining multi-baseline stereo with large FOV characteristics for estimation, even though these characteristics are individually important in improving egomotion estimation. Experiments showed that the proposed method is robust and produces reasonable accuracy in real time for fast motion of the sensor.},
keywords={},
doi={10.1587/transinf.E93.D.152},
ISSN={1745-1361},
month={January},}
TY - JOUR
TI - Real-Time Estimation of Fast Egomotion with Feature Classification Using Compound Omnidirectional Vision Sensor
T2 - IEICE TRANSACTIONS on Information
SP - 152
EP - 166
AU - Trung Thanh NGO
AU - Yuichiro KOJIMA
AU - Hajime NAGAHARA
AU - Ryusuke SAGAWA
AU - Yasuhiro MUKAIGAWA
AU - Masahiko YACHIDA
AU - Yasushi YAGI
PY - 2010
DO - 10.1587/transinf.E93.D.152
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E93-D
IS - 1
JA - IEICE TRANSACTIONS on Information
Y1 - 2010/01
AB - For fast egomotion of a camera, computing feature correspondence and motion parameters by global search becomes highly time-consuming. Therefore, the complexity of the estimation needs to be reduced for real-time applications. In this paper, we propose a compound omnidirectional vision sensor and an algorithm for estimating its fast egomotion. The proposed sensor has both multi-baselines and a large field of view (FOV). Our method uses the multi-baseline stereo vision capability to classify feature points as near or far features. After the classification, we can estimate the camera rotation and translation separately by using random sample consensus (RANSAC) to reduce the computational complexity. The large FOV also improves the robustness since the translation and rotation are clearly distinguished. To date, there has been no work on combining multi-baseline stereo with large FOV characteristics for estimation, even though these characteristics are individually important in improving egomotion estimation. Experiments showed that the proposed method is robust and produces reasonable accuracy in real time for fast motion of the sensor.
ER -