Farzin MATIN Yoosoo JEONG Hanhoon PARK
Multiscale retinex is one of the most popular image enhancement methods. However, its control parameters, such as the Gaussian kernel sizes, gain, and offset, must be tuned carefully according to the image content. In this letter, we propose a new method that optimizes these parameters using particle swarm optimization and a multi-objective function. The method iteratively evaluates the visual quality (i.e., brightness, contrast, and colorfulness) of the enhanced image with the multi-objective function while subtly adjusting the parameters. Experimental results show that the proposed method achieves better image quality, both qualitatively and quantitatively, than other image enhancement methods.
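The tuning loop can be sketched as a standard particle swarm optimizer. The quality function below is a hypothetical stand-in for the letter's multi-objective score (brightness, contrast, colorfulness), and the parameter bounds are assumptions, not values from the letter:

```python
import random

def pso_tune(objective, bounds, n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (maximizes `objective`)."""
    dim = len(bounds)
    rnd = random.Random(0)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical stand-in for the multi-objective quality score: peaks at a
# made-up "ideal" kernel size, gain, and offset (not the letter's function).
def quality(params):
    sigma, gain, offset = params
    return -((sigma - 80) ** 2 + (gain - 1.5) ** 2 + (offset - 10) ** 2)

best, score = pso_tune(quality, bounds=[(1, 300), (0.1, 5.0), (-50, 50)])
```

In an actual retinex pipeline, `quality` would render the enhanced image for each candidate parameter vector and combine the three quality terms into one score.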
In this letter, we propose a new no-reference blur estimation method that operates in the frequency domain. It computes the cumulative distribution function (CDF) of the Fourier transform spectrum of the blurred image and analyzes the relationship between the CDF's shape and the blur strength. From this analysis, we propose and evaluate six curve-shape metrics for estimating blur strength. We also employ an SVM-based learning scheme to improve the accuracy and robustness of the proposed metrics. In our experiments on Gaussian-blurred images, one of the six metrics outperformed the others, estimating standard deviation values between 0 and 6 with an average error of 0.31.
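A minimal 1-D illustration of the idea, assuming one plausible curve-shape metric (the area under the normalized cumulative spectrum); the letter's actual six metrics are not reproduced here:

```python
import math, random

def spectrum(x):
    """Naive DFT magnitude spectrum (positive frequencies, DC excluded)."""
    n = len(x)
    mags = []
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def cdf_area(x):
    """Area under the normalized cumulative spectrum: larger when energy is
    concentrated at low frequencies, i.e. when the signal is more blurred."""
    mags = spectrum(x)
    total = sum(mags) or 1.0
    cum = area = 0.0
    for m in mags:
        cum += m / total
        area += cum
    return area / len(mags)

def gaussian_blur(x, sigma):
    """Circular 1-D Gaussian smoothing."""
    radius = int(3 * sigma)
    kernel = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(kernel)
    kernel = [k / s for k in kernel]
    n = len(x)
    return [sum(kernel[j] * x[(i + j - radius) % n] for j in range(len(kernel)))
            for i in range(n)]

random.seed(0)
sharp = [random.random() for _ in range(64)]
blurred = gaussian_blur(sharp, 2.0)
```

Blurring pushes spectral energy toward low frequencies, so the CDF rises faster and its area grows with blur strength; mapping that area back to a standard deviation is what the letter's SVM-based scheme would learn.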
Speeded up robust features (SURF) can detect and describe scale- and rotation-invariant features at high speed by relying on integral images for image convolutions. However, matching SURF descriptors still takes a long time, which has been an obstacle to their use in real-time applications. Moreover, the matching time grows in proportion to both the number of features and the dimensionality of the descriptor. We therefore propose a fast matching method that rearranges the elements of SURF descriptors based on their entropies, divides the descriptors into sub-descriptors, and matches them to each other sequentially and analytically. Our results show that the matching time can be reduced by about 75% at the expense of a small drop in accuracy.
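A sketch of the matching scheme under stated assumptions (the letter's exact sub-descriptor split and rejection rule may differ): dimensions are reordered by entropy so the most discriminative ones are compared first, and a candidate is rejected as soon as its partial distance exceeds the best full distance found so far:

```python
import math

def entropy(values, bins=8):
    """Shannon entropy of one descriptor dimension across the dataset."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return 0.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / (hi - lo) * bins), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in hist if c)

def entropy_order(descriptors):
    """Dimension indices sorted by entropy, highest first."""
    dims = len(descriptors[0])
    ents = [entropy([d[i] for d in descriptors]) for i in range(dims)]
    return sorted(range(dims), key=lambda i: -ents[i])

def match(query, candidates, order, n_sub=4):
    """Sequential sub-descriptor matching with early rejection: accumulate
    the squared distance chunk by chunk and drop a candidate as soon as its
    partial distance exceeds the best full distance seen so far."""
    step = -(-len(order) // n_sub)          # ceil division
    best, best_d = -1, float('inf')
    for idx, cand in enumerate(candidates):
        d, rejected = 0.0, False
        for s in range(n_sub):
            for i in order[s * step:(s + 1) * step]:
                diff = query[i] - cand[i]
                d += diff * diff
            if d >= best_d:
                rejected = True
                break
        if not rejected:
            best, best_d = idx, d
    return best, best_d

# Toy 4-D "descriptors" (real SURF descriptors are 64- or 128-D).
candidates = [[0.0, 0.0, 1.0, 1.0], [1.0, 0.0, 0.0, 1.0], [0.5, 0.2, 0.9, 0.8]]
query = [1.0, 0.0, 0.0, 1.0]
```

Because high-entropy dimensions accumulate distance fastest, most non-matching candidates are rejected after only the first sub-descriptor, which is where the speedup comes from.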
Hanhoon PARK Hideki MITSUMINE Mahito FUJII
Speeded up robust features (SURF) can detect scale- and rotation-invariant features at high speed by relying on integral images for image convolutions. However, because the number of image convolutions grows greatly with the image size, a further means of reducing the feature detection time is needed. In this letter, we propose a method, called ordinal convolution, that reduces the number of image convolutions for fast feature detection in SURF, and we compare it with a previous method based on sparse sampling.
Hanhoon PARK Hideki MITSUMINE Mahito FUJII
This letter presents a novel edge-based blur metric that averages the ratios between the slopes and heights of edges. The metric computes the edge slopes more carefully than previous metrics, i.e., by averaging the edge gradients. The effectiveness of the proposed metric is confirmed by experiments on motion- and Gaussian-blurred real images and by comparison with existing edge-based blur metrics.
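The idea can be illustrated on a 1-D scan line; the exact edge detection and gradient-averaging scheme used in the letter may differ:

```python
def edge_blur_metric(signal, grad_thresh=0.01):
    """Edge-based blur estimate for a 1-D scan line: for each monotone edge
    run, divide the edge height by the slope obtained from the averaged
    gradients; the mean ratio approximates the average edge width."""
    grads = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    widths, i = [], 0
    while i < len(grads):
        if abs(grads[i]) > grad_thresh:
            sign = 1 if grads[i] > 0 else -1
            j = i
            while j < len(grads) and sign * grads[j] > grad_thresh:
                j += 1
            height = abs(signal[j] - signal[i])
            slope = sum(abs(g) for g in grads[i:j]) / (j - i)  # averaged gradient
            widths.append(height / slope)
            i = j
        else:
            i += 1
    return sum(widths) / len(widths) if widths else 0.0

# The same step edge spread over 2 samples (sharp) and 4 samples (blurred).
sharp = [0, 0, 0.5, 1, 1, 1]
blurred = [0, 0.25, 0.5, 0.75, 1, 1]
```

For a ramp edge of height h spanning w samples, the averaged gradient is h/w, so the ratio recovers w, the edge width: the metric returns 2.0 for the sharp edge and 4.0 for the blurred one.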
Hanhoon PARK Hideki MITSUMINE Mahito FUJII
In nearest neighbor distance ratio (NNDR) matching, a fixed distance ratio threshold sometimes yields an insufficient number of inliers or a huge number of outliers, which hampers robust tracking. In this letter, we propose adjusting the distance ratio threshold so as to maximize the number of inliers while keeping the ratio of outliers to inliers bounded. We verify the effectiveness of the proposed method by applying it to a model-based camera tracking system.
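A sketch of the threshold adaptation, assuming per-match inlier/outlier verdicts are available from a geometric verification step such as RANSAC; the letter's exact selection criterion and threshold grid may differ:

```python
def adaptive_nndr_threshold(ratios, is_inlier, max_outlier_ratio=0.3):
    """Scan candidate NNDR thresholds and keep the one that admits the most
    inliers while the outlier/inlier count ratio stays below a bound.
    `ratios[i]` is the d1/d2 distance ratio of match i; `is_inlier[i]` is
    the verdict of geometric verification (e.g. RANSAC reprojection test)."""
    grid = [t / 100 for t in range(40, 100, 5)]   # thresholds 0.40 .. 0.95
    best_t, best_inl = grid[0], 0
    for t in grid:
        accepted = [i for i, r in enumerate(ratios) if r < t]
        inl = sum(1 for i in accepted if is_inlier[i])
        out = len(accepted) - inl
        if inl > best_inl and out <= max_outlier_ratio * inl:
            best_t, best_inl = t, inl
    return best_t

# Hypothetical example: low-ratio matches verified as inliers, high-ratio as outliers.
ratios = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.85, 0.9]
verified = [True, True, True, True, False, False, False, False]
t = adaptive_nndr_threshold(ratios, verified)
```

Raising the threshold admits more inliers but eventually lets in too many outliers; the scan stops rewarding a larger threshold once the outlier bound is violated.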
Side match vector quantization (SMVQ) was originally developed for image compression and is also useful for steganography. SMVQ requires creating a state codebook for each block in both the encoding and decoding phases. Since the conventional method of state codebook generation is extremely time-consuming, this letter proposes a fast generation method. The proposed method is tens of times faster than the conventional one without loss of perceptual visual quality.
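The conventional state-codebook generation that the letter accelerates can be sketched as follows, assuming 4×4 blocks; the fast variant itself is not reproduced here:

```python
def side_match_prediction(upper_block, left_block):
    """Predict the current block's border from its causal neighbors: the
    bottom row of the upper block and the right column of the left block."""
    top = upper_block[-1]
    left_col = [row[-1] for row in left_block]
    return top, left_col

def state_codebook(codebook, top, left_col, size=8):
    """Conventional state-codebook generation: rank every codeword in the
    main codebook by its side-match distortion against the predicted border
    and keep the `size` best; doing this per block is what makes the
    conventional method slow."""
    def distortion(cw):
        d = sum((cw[0][j] - top[j]) ** 2 for j in range(len(top)))
        d += sum((cw[i][0] - left_col[i]) ** 2 for i in range(len(left_col)))
        return d
    return sorted(codebook, key=distortion)[:size]

# Toy main codebook of flat 4x4 blocks with intensities 0..15.
codebook = [[[v] * 4 for _ in range(4)] for v in range(16)]
upper = [[5] * 4 for _ in range(4)]
left = [[5] * 4 for _ in range(4)]
top, left_col = side_match_prediction(upper, left)
state = state_codebook(codebook, top, left_col)
```

With neighbors of intensity 5, the flat codeword of value 5 has zero side-match distortion and heads the state codebook.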
This letter proposes a new face sketch recognition method. Given a query sketch and face photos in a database, the proposed method first synthesizes pseudo-sketches by computing the locality sensitive histogram and dense illumination-invariant features from the resized face photos, then extracts discriminative features by computing histograms of averaged oriented gradients on the query sketch and pseudo-sketches, and finally finds the match with the shortest cosine distance in the feature space. It achieves accuracy comparable to the state of the art while being much more robust than existing face sketch recognition methods.
Sanghoon KANG Hanhoon PARK Jong-Il PARK
Image deformations caused by different steganographic methods are typically extremely small and highly similar, which makes their detection and identification difficult. Although recent steganalytic methods using deep learning have achieved high accuracy, they have been designed to detect stego images produced by specific steganographic methods. In this letter, a steganalytic method is proposed that uses hierarchical residual neural networks (ResNets), allowing both detection (i.e., classification between stego and cover images) and identification of four spatial steganographic methods (i.e., LSB, PVD, WOW, and S-UNIWARD). Experimental results show that using hierarchical ResNets achieves a classification rate of 79.71% in quinary classification, approximately 23% higher than that of a plain convolutional neural network (CNN).
In this letter, we propose a simple framework for accelerating a state-of-the-art histogram-based weighted median filter at no additional cost. It is based on determining the direction in which the filter is applied, which is chosen by measuring the local feature variation of the input image. Experiments with natural images verify that, depending on the input image, the filtering speed can be substantially increased simply by changing the filtering direction.
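One plausible way to choose the processing direction from local feature variation (the letter's exact measure is not specified here):

```python
def filtering_direction(image):
    """Pick the scan direction for a sliding-histogram filter: histogram
    updates are cheap when consecutive windows are similar, so scan along
    the axis with the smaller mean absolute intensity change."""
    h = sum(abs(image[y][x + 1] - image[y][x])
            for y in range(len(image)) for x in range(len(image[0]) - 1))
    v = sum(abs(image[y + 1][x] - image[y][x])
            for y in range(len(image) - 1) for x in range(len(image[0])))
    return 'row-major' if h <= v else 'column-major'

# Vertical stripes change rapidly along each row, so a column-major
# (top-to-bottom) scan keeps consecutive windows similar.
stripes = [[x % 2 for x in range(8)] for _ in range(8)]
direction = filtering_direction(stripes)
```

The measurement is a single pass over the image, which is why the speedup comes "at no additional cost" relative to the filtering itself.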
In this letter, we propose an improved single-image haze removal algorithm that uses image segmentation. It effectively resolves two common problems of conventional algorithms based on the dark channel prior: halo artifacts and incorrect estimation of the atmospheric light. The algorithm proceeds as follows. First, the input hazy image is over-segmented. The segmentation results are then used to improve the conventional dark channel computation, which relies on fixed local patches, and to estimate the atmospheric light accurately. Finally, an accurate transmission map is computed from the improved dark channel and atmospheric light, allowing us to recover a high-quality haze-free image.
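The segment-wise dark channel and atmospheric-light estimation can be sketched as follows; the over-segmentation itself and the transmission-map refinement in the letter are not reproduced:

```python
def dark_channel_segmented(image, labels):
    """Segment-wise dark channel: each pixel's dark channel value is the
    minimum over RGB within the pixel's segment (instead of a fixed patch,
    which causes halos at depth discontinuities). Assumes 8-bit intensities."""
    seg_min = {}
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            lab = labels[y][x]
            seg_min[lab] = min(seg_min.get(lab, 255), r, g, b)
    return [[seg_min[labels[y][x]] for x in range(len(row))]
            for y, row in enumerate(image)]

def estimate_atmospheric_light(image, dark, top_frac=0.001):
    """Atmospheric light: mean color of the brightest dark-channel pixels."""
    pixels = sorted(((dark[y][x], image[y][x])
                     for y in range(len(image))
                     for x in range(len(image[0]))), reverse=True)
    k = max(1, int(len(pixels) * top_frac))
    picked = [p for _, p in pixels[:k]]
    return tuple(sum(c[i] for c in picked) / k for i in range(3))

# Toy 2x2 image with a bright "sky" segment (0) and a dark segment (1).
image = [[(200, 210, 220), (200, 210, 220)], [(10, 20, 30), (10, 20, 30)]]
labels = [[0, 0], [1, 1]]
dark = dark_channel_segmented(image, labels)
airlight = estimate_atmospheric_light(image, dark)
```

Restricting the minimum to a segment keeps the dark channel from bleeding across object boundaries, and taking the atmospheric light from the brightest dark-channel segment avoids picking bright non-sky objects.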