A non-linear extension of the generalized hyperplane approximation (GHA) method is introduced in this letter. Although GHA achieves high-confidence motion parameter estimation by applying a supervised learning scheme in the histogram of oriented gradients (HOG) feature space, its convergence range remains unstable because it approximates the non-linear regression function from the feature space to the motion parameter space with a linear plane. To extend GHA to a non-linear regression with a larger convergence range, we derive the theoretical equations and experimentally verify the extension's effectiveness and efficiency over GHA.
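The core idea, replacing a linear (hyperplane) regression from feature space to motion-parameter space with a non-linear one, can be illustrated with a quadratic feature lift. This is a generic sketch on toy data, not the letter's actual derivation; all dimensions and names are assumptions.

```python
import numpy as np

# Illustrative stand-ins: X plays the role of HOG-like feature vectors,
# Y the motion parameters that produced them (synthetic, not real data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))              # 200 samples of an 8-D feature
Y = np.tanh(X @ rng.normal(size=(8, 2)))   # 2 motion parameters, non-linear map

def lift(F):
    # Append all pairwise products to the features: the regression stays
    # linear in the lifted space but is quadratic in the original one.
    quad = np.einsum('ni,nj->nij', F, F).reshape(len(F), -1)
    return np.hstack([np.ones((len(F), 1)), F, quad])

# Least-squares fit of the lifted regression: lift(X) @ W ~ Y
W, *_ = np.linalg.lstsq(lift(X), Y, rcond=None)
Y_hat = lift(X) @ W
print(np.mean((Y_hat - Y) ** 2))  # training residual of the non-linear fit
```

Because the lifted basis contains the linear one, the fit can only tighten relative to a plain hyperplane, which is the intuition behind enlarging the convergence range.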
Trung Thanh NGO Yuichiro KOJIMA Hajime NAGAHARA Ryusuke SAGAWA Yasuhiro MUKAIGAWA Masahiko YACHIDA Yasushi YAGI
For fast egomotion of a camera, computing feature correspondences and motion parameters by global search becomes highly time-consuming, so the complexity of the estimation must be reduced for real-time applications. In this paper, we propose a compound omnidirectional vision sensor and an algorithm for estimating its fast egomotion. The proposed sensor has both multiple baselines and a large field of view (FOV). Our method uses the multi-baseline stereo capability to classify feature points as near or far. After this classification, we can estimate the camera rotation and translation separately by using random sample consensus (RANSAC), which reduces the computational complexity. The large FOV also improves robustness, since translation and rotation are clearly distinguished. To date, there has been no work combining multi-baseline stereo with a large FOV for egomotion estimation, even though each characteristic is individually important for improving it. Experiments showed that the proposed method is robust and achieves reasonable accuracy in real time for fast motion of the sensor.
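The separation described here, rotation from far features and translation from near ones, can be sketched as follows. This omits the RANSAC outlier rejection and the actual sensor model; all scene data, the motion, and the depth split are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the near/far classification: far points are
# almost unaffected by translation, so they isolate the rotation.
near = rng.uniform(1.0, 3.0, size=(30, 3))
far = rng.uniform(50.0, 100.0, size=(30, 3))

angle = 0.1  # assumed ground-truth motion, only for checking the sketch
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.2, -0.1, 0.05])
near2 = near @ R.T + t
far2 = far @ R.T + t

# Step 1: rotation from far-point bearing directions (Kabsch alignment);
# the small translation barely perturbs these bearings.
d1 = far / np.linalg.norm(far, axis=1, keepdims=True)
d2 = far2 / np.linalg.norm(far2, axis=1, keepdims=True)
U, _, Vt = np.linalg.svd(d1.T @ d2)
D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
R_est = Vt.T @ D @ U.T       # maps frame-1 bearings onto frame-2 bearings

# Step 2: translation from near points once the rotation is removed.
t_est = (near2 - near @ R_est.T).mean(axis=0)
```

In the paper's setting each step would run inside its own RANSAC loop, which is where the complexity reduction from solving two small problems instead of one joint one comes from.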
Iris FERMIN Atsushi IMIYA Akira ICHIKAWA
We introduce two probabilistic algorithms for determining the motion parameters of a planar shape without knowing the point-to-point correspondences a priori. If the target is limited to rigid objects, a Euclidean transformation can be expressed as a linear equation with six parameters: two translational parameters and four rotational parameters (the axis of rotation and the rotational speed about that axis). These parameters can be determined by applying the randomized Hough transform. One remarkable feature of our algorithms is that the translation and rotation parameters are calculated from points randomly selected from two image frames acquired at different times. The rotation parameters are estimated using one of two approaches, which we call the triangle search and the polygon search algorithms. Both methods focus on the intersection points between the boundary of the 2-D shape and circles whose centers are located at the shape's centroid and whose radii are generated randomly. The triangle search algorithm randomly selects three different intersection points in each image such that they form congruent triangles, and then estimates the rotation parameter from these two triangles. In contrast, the polygon search algorithm employs all the intersection points in each image (the intersection points in the two frames form two polygons) and estimates the rotation parameter with the aid of the vertices of these two polygons.
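The final step of the triangle search, recovering a planar rotation from two matched congruent triangles, can be sketched in its simplest form: three centroid-centered vertices whose mean angular offset gives the rotation. The correspondence search and randomized circle sampling of the actual algorithms are omitted, and the triangle coordinates are made up.

```python
import numpy as np

theta = 0.7  # assumed ground-truth rotation, for checking the sketch
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])

# A triangle of boundary intersection points, expressed relative to the
# shape centroid, and its congruent counterpart in the second frame.
tri1 = np.array([[1.0, 0.2], [-0.5, 0.9], [-0.3, -1.1]])
tri2 = tri1 @ R.T

# Each vertex's polar angle shifts by exactly the rotation angle, so the
# (wrapped) mean angular offset recovers it.
ang1 = np.arctan2(tri1[:, 1], tri1[:, 0])
ang2 = np.arctan2(tri2[:, 1], tri2[:, 0])
diff = np.angle(np.exp(1j * (ang2 - ang1)))  # wrap each offset to (-pi, pi]
theta_est = diff.mean()
```

Centering at the centroid is what decouples rotation from translation here, mirroring the role the centroid plays in both search algorithms.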
This paper presents a new method for solving the structure-from-motion problem from optical flow. It is well known that the problem can be simplified by a linearization technique; however, it has been pointed out that linearization reduces the accuracy of the computation. In this paper, we overcome this disadvantage by correcting the linearized solution in a statistically optimal way. Computer simulation experiments show that our method yields an unbiased estimator of the motion parameters that almost attains the theoretical bound on accuracy. Our method also allows the reliability of the reconstructed structure to be evaluated in the form of a covariance matrix. Real-image experiments demonstrate the effectiveness of our method.
Shin-Chung WANG Chung-Lin HUANG
This paper presents a modified disparity measurement for recovering depth and a robust method for estimating motion parameters. First, it uses phase correspondence to compute disparity, which requires less computation than previous methods that derive disparity from correspondence or from correlation. The modified disparity measurement uses a Gabor filter to analyze the local phase property and an exponential filter to analyze the global phase property. These two phases are added to form quasi-linear phases of the stereo image channels, which are used for stereo disparity estimation and recovery of the scene structure. Then, feature-based correspondence is used to find corresponding feature points in the temporal image pair. Finally, we combine the depth map and disparity motion stereo to estimate the 3-D motion parameters.
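The phase-correspondence idea can be illustrated in one dimension: filter both signals with a complex Gabor kernel and read the disparity off the local phase difference. The filter parameters, signal, and shift below are illustrative assumptions, not the paper's configuration, and the exponential (global-phase) filter is omitted.

```python
import numpy as np

def gabor_phase(signal, omega=0.5, sigma=8.0):
    # Local phase via convolution with a complex Gabor kernel
    # (parameters are illustrative only).
    x = np.arange(-24, 25)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega * x)
    return np.angle(np.convolve(signal, kernel, mode='same'))

n = np.arange(256)
true_disp = 4
left = np.sin(0.3 * n)                  # toy image row
right = np.sin(0.3 * (n - true_disp))   # same row shifted by the disparity

# Disparity = wrapped phase difference / local signal frequency,
# evaluated away from the convolution's boundary region.
omega_local = 0.3
dphi = gabor_phase(left) - gabor_phase(right)
dphi = np.angle(np.exp(1j * dphi))      # wrap to (-pi, pi]
disp = dphi[64:192] / omega_local
```

The appeal over correlation search is visible even in this sketch: once the filtering is done, the disparity at each pixel is a single arithmetic read-out rather than a search over candidate shifts.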