Akira KUBOTA Kazuya KODAMA Daiki TAMURA Asami ITO
Focal stacks (FS) have attracted attention as an alternative representation of the light field (LF). However, the problem of reconstructing the LF from its FS is considered ill-posed. Although many regularization methods have been discussed, no method has been shown to solve this problem perfectly. This paper shows that, in theory, the LF can be perfectly reconstructed from the FS through a filter bank for Lambertian scenes without occlusion, provided that the camera aperture used for acquiring the FS is a Cauchy function. Numerical simulations demonstrate that the filter bank allows perfect reconstruction of the LF.
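One property that makes the Cauchy aperture special, and which the filter-bank result relies on, is that Cauchy kernels are closed under convolution with their scale parameters adding, so blurs at different focus settings are related to one another by further Cauchy blurs. The sketch below numerically checks this semigroup property in 1-D; it is an illustration of the underlying mathematics, not the paper's reconstruction algorithm.

```python
import numpy as np

def cauchy(x, a):
    """Normalized Cauchy (Lorentzian) kernel with scale parameter a."""
    return (1.0 / np.pi) * a / (x**2 + a**2)

# Dense 1-D grid; the heavy tails of the Cauchy function require wide support.
x = np.arange(-2000, 2001, dtype=float)

h1 = cauchy(x, 1.0)   # defocus blur at one scale
h2 = cauchy(x, 2.0)   # additional blur

# Semigroup property: cauchy(., a) * cauchy(., b) = cauchy(., a + b),
# where * denotes convolution. Check it numerically:
h12 = np.convolve(h1, h2, mode="same")
h3 = cauchy(x, 3.0)

center = slice(1000, 3001)  # compare away from the truncated boundaries
print(np.max(np.abs(h12[center] - h3[center])))  # small (limited by truncation)
```

Because the relation between any two slices of the focal stack is itself a known Cauchy blur, the stack admits an exact linear (filter-bank) inverse for occlusion-free Lambertian scenes, which is the regime the paper addresses.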
Various haze removal methods based on the atmospheric scattering model have been presented in recent years. Most methods have targeted strong haze images, in which light is scattered equally in all color channels. This paper presents a haze removal method using near-infrared (NIR) images for relatively weak haze images. To recover the lost edges, the presented method first extracts edges from an appropriately weighted NIR image and fuses them with the color image. By introducing a wavelength-dependent scattering model, our method then estimates the transmission map for each color channel and recovers the color more naturally from the edge-recovered image. Finally, the edge-recovered and color-recovered images are blended. In this blending process, regions with high lightness, such as sky and clouds, where unnatural color shifts are likely to occur, are effectively estimated, and the optimal weighting map is obtained. Our qualitative and quantitative evaluations using 59 pairs of color and NIR images demonstrate that our method recovers edges and colors more naturally in weak haze images than conventional methods.
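The wavelength-dependent scattering model mentioned above can be sketched as the standard atmospheric scattering equation applied per color channel, with a channel-specific scattering coefficient. The toy example below synthesizes a weakly hazy image from this model and inverts it; the coefficient values and depth map are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

# Atmospheric scattering model, per color channel c:
#   I_c = J_c * t_c + A_c * (1 - t_c),   t_c = exp(-beta_c * d)
# In weak haze, shorter wavelengths scatter more strongly, so beta
# differs per channel (values here are illustrative only).

rng = np.random.default_rng(0)
J = rng.uniform(0.1, 0.9, size=(4, 4, 3))   # haze-free scene (toy data)
d = np.full((4, 4, 1), 2.0)                 # scene depth map
beta = np.array([0.08, 0.12, 0.20])         # R < G < B scattering coefficients
A = np.array([0.95, 0.95, 0.95])            # airlight

t = np.exp(-beta * d)                       # per-channel transmission map
I = J * t + A * (1.0 - t)                   # synthesize the hazy image

# Invert the model to recover the scene; clamping t avoids amplifying
# noise where transmission is near zero.
t_safe = np.maximum(t, 0.1)
J_rec = (I - A) / t_safe + A
print(np.max(np.abs(J_rec - J)))            # near zero for this noise-free toy
```

Estimating `t` separately per channel is what lets the color be recovered naturally under weak haze, where the common gray-scattering assumption breaks down.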
Akira KUBOTA Kazuya KODAMA Asami ITO
A pupil function of the aperture in an image capturing system is theoretically derived such that the all-in-focus image can be perfectly reconstructed through linear filtering of the focal stack. Perfect reconstruction filters are also designed based on the derived pupil function. The designed filters are space-invariant; hence, the presented method requires no region segmentation. Simulation results using synthetic scenes show the effectiveness of the derived pupil function and filters.
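To make the idea of space-invariant linear filtering of a focal stack concrete, the 1-D sketch below blurs a toy scene at several focus settings, averages the stack, and applies a single frequency-domain filter. Gaussian defocus and the Wiener-style regularized inverse are stand-in assumptions here; the paper instead derives a special pupil function for which the inversion is exact.

```python
import numpy as np

n = 256
f = np.fft.fftfreq(n)                               # normalized frequencies
x = np.zeros(n); x[60:120] = 1.0; x[160:200] = 0.5  # toy 1-D scene

# Focal stack: the same scene under several defocus blurs
# (Gaussian OTFs as a stand-in for the paper's derived pupil function).
sigmas = [1.0, 2.0, 3.0]
X = np.fft.fft(x)
stack = [np.fft.ifft(X * np.exp(-2 * (np.pi * s * f) ** 2)).real
         for s in sigmas]

# Average the stack and apply ONE space-invariant reconstruction filter:
# no region segmentation, no depth map.
H = np.mean([np.exp(-2 * (np.pi * s * f) ** 2) for s in sigmas], axis=0)
Y = np.fft.fft(np.mean(stack, axis=0))
x_rec = np.fft.ifft(Y * H / (H**2 + 1e-3)).real     # regularized inverse

print(np.linalg.norm(x_rec - x),                    # reconstruction error
      np.linalg.norm(np.mean(stack, axis=0) - x))   # error of the raw average
```

With a Gaussian pupil the combined OTF has near-zeros at high frequencies, so only regularized (approximate) inversion is possible; the point of the derived pupil function is to avoid exactly this failure mode.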
Kazuki KASAI Kaoru KAWAKITA Akira KUBOTA Hiroki TSURUSAKI Ryosuke WATANABE Masaru SUGANO
In this paper, we present an efficient and robust method for estimating the homography matrix that registers a captured camera image to a soccer field model. The presented method first detects reliable field lines in the camera image through clustering. By constructing a novel directional feature at the intersection points of the lines in both the camera image and the model, the method then finds matching pairs of these points between the image and the model. Finally, homography estimation and validation are performed using the obtained matching pairs, which reduces the required number of homography computations. Our method also uses possible intersection points outside the image for point matching, which effectively improves the robustness and accuracy of the homography estimation, as demonstrated in our experimental results.
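The paper's contribution is the line clustering and point-matching strategy; the final step, computing a homography from matched point pairs, is the standard Direct Linear Transform (DLT). The sketch below shows that step in isolation, with a made-up ground-truth homography and noise-free points for verification.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: fit H such that dst ~ H @ src in
    homogeneous coordinates, from >= 4 point correspondences, via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on vec(H).
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector = vec(H) up to scale
    return H / H[2, 2]              # fix the scale ambiguity

def apply_h(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return (q[0] / q[2], q[1] / q[2])

# Hypothetical ground-truth homography mapping model points to image points.
H_true = np.array([[1.2,  0.1,   5.0],
                   [0.05, 0.9,  -3.0],
                   [1e-4, 2e-4,  1.0]])

model_pts = [(0, 0), (100, 0), (100, 60), (0, 60), (50, 30)]
image_pts = [apply_h(H_true, p) for p in model_pts]

H_est = estimate_homography(model_pts, image_pts)
print(np.max(np.abs(H_est - H_true)))   # close to 0 for noise-free points
```

Note that DLT places no constraint on where the points lie, which is why intersection points falling outside the image frame can still contribute correspondences, as the method above exploits.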
This paper describes a method of free iris and focus image generation based on a transformation that integrates multiple differently focused images. First, we assume that objects are defocused according to a geometrical blurring model, and we relate the images acquired on certain imaging planes to the spatial information of the objects through convolution with a three-dimensional blur. Then, based on a spatial frequency analysis of the blur, we design three-dimensional filters that generate free iris and focus images from the acquired images. The method enables us to generate not only an all-in-focus image, corresponding to an ideal pinhole iris, but also various images that would be acquired with virtual irises of sizes different from the original one. To generate such an image from multiple differently focused images, especially very many images, conventional methods usually analyze the focused regions of each acquired image independently and construct a depth map; based on the map, the regions are then merged into the desired image with some effects. In general, however, it is difficult to perform such depth estimation robustly in all regions, so these methods cannot prevent the merged results from containing visible artifacts that severely degrade the quality of the generated images. In this paper, we propose a method of generating the desired images directly and robustly from very many differently focused images without depth estimation. Simulations of image generation are performed using synthetic images to study how certain parameters of the blur and the filter affect the quality of the generated images. We also introduce a pre-processing step that corrects the size of the acquired images, and a simple method for estimating the parameter of the three-dimensional blur. Finally, we show experimental results of free iris and focus image generation from real images.
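The abstract's transformation can be summarized, in notation of our own choosing, as a three-dimensional convolution that is inverted by linear filtering in the frequency domain:

```latex
\begin{align}
g(x,y,z) &= \iiint f(x',y',z')\, h(x-x',\, y-y',\, z-z')\, dx'\,dy'\,dz'
  = (f * h)(x,y,z), \\
G(\omega) &= F(\omega)\, H(\omega),
\end{align}
```

where $f$ is the spatial information of the scene, $h$ is the three-dimensional blur of the real iris, and $g$ is the stack of acquired images. A free iris and focus image $d = f * h_v$, for a virtual iris with blur $h_v$, then satisfies

```latex
\begin{align}
D(\omega) = F(\omega)\, H_v(\omega)
          = G(\omega)\, \frac{H_v(\omega)}{H(\omega)}
          = G(\omega)\, K(\omega),
\end{align}
```

so the three-dimensional filter $K = H_v / H$ generates the desired image directly from the acquired images, with no depth estimation, provided $H$ does not vanish on the support of $F$; guaranteeing this is the role of the spatial frequency analysis of the blur. The all-in-focus case corresponds to $h_v$ being an ideal pinhole blur.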