
Keyword Search Result

[Keyword] blurring (14 hits)

Hits 1-14
  • Joint Extreme Channels-Inspired Structure Extraction and Enhanced Heavy-Tailed Priors Heuristic Kernel Estimation for Motion Deblurring of Noisy and Blurry Images

    Hongtian ZHAO  Shibao ZHENG  

     
    PAPER-Vision

      Vol:
    E103-A No:12
      Page(s):
    1520-1528

    Motion deblurring of noisy and blurry images is an arduous and fundamental problem in the image processing community. The problem is ill-posed because many different pairs of latent image and blur kernel can produce the same blurred image, and thus its optimization remains unsolved. To tackle it, we present an effective motion deblurring method for noisy and blurry images based on prominent structure and a data-driven heavy-tailed prior on the enhanced gradient. Specifically, we first employ denoising as a preprocessing step to remove noise from the input image, and then restore strong edges for accurate kernel estimation. The extreme-channel priors of the image (the dark channel prior and the bright channel prior) are exploited as sparse complementary knowledge to extract prominent structure. The extracted structure can be made close to the structure of the clear image by tuning the parameters of the extraction function. Next, a term integrating the enhanced interim image gradient with a heavy-tailed prior on the clear image is proposed and embedded into the image restoration model, which favors sharp images over blurry ones. Extensive experiments on both synthetic and real-life images verify the superiority of the proposed method over state-of-the-art algorithms, both qualitatively and quantitatively.
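    As a rough illustration of the extreme-channel priors mentioned above, the sketch below computes the dark and bright channels of an RGB image with local minimum/maximum filters. The patch size and the use of scipy.ndimage are assumptions; the paper's structure-extraction function builds on these channels but is not reproduced here.

```python
# Minimal sketch: dark/bright (extreme) channel computation for an RGB image.
# The patch size and scipy-based filtering are illustrative assumptions.
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def extreme_channels(rgb, patch=15):
    """Return (dark, bright) channels of an HxWx3 image with values in [0, 1]."""
    per_pixel_min = rgb.min(axis=2)                      # min over color channels
    per_pixel_max = rgb.max(axis=2)                      # max over color channels
    dark = minimum_filter(per_pixel_min, size=patch)     # local patch minimum
    bright = maximum_filter(per_pixel_max, size=patch)   # local patch maximum
    return dark, bright

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)
    dark, bright = extreme_channels(img)
    print(dark.shape, bright.shape)
```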

  • Accelerating Existing Non-Blind Image Deblurring Techniques through a Strap-On Limited-Memory Switched Broyden Method

    Ichraf LAHOULI  Robby HAELTERMAN  Joris DEGROOTE  Michal SHIMONI  Geert DE CUBBER  Rabah ATTIA  

     
    PAPER-Machine Vision and its Applications

      Publicized:
    2018/02/16
      Vol:
    E101-D No:5
      Page(s):
    1288-1295

    Video surveillance from airborne platforms can suffer from many sources of blur, such as vibration, low-end optics, and uneven lighting conditions. Many algorithms have been developed that aim to recover the deblurred image, but they often incur substantial CPU time, which is not always available on board. This paper shows how a “strap-on” quasi-Newton method can accelerate the convergence of existing iterative methods with little extra overhead while preserving the performance of the original algorithm, thus paving the way for (near) real-time applications using on-board processing.
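    One common way to realize such a strap-on accelerator is to treat one pass of the existing iterative deblurring method as a fixed-point map F and to solve the residual equation F(x) - x = 0 with a quasi-Newton method. The sketch below is a minimal, dense-matrix Broyden accelerator for a generic fixed-point map; the paper's limited-memory switched variant avoids storing the full inverse-Jacobian approximation, which this toy version does not.

```python
# Minimal sketch: "good" Broyden acceleration of a fixed-point iteration
# x <- F(x). F is a stand-in; a real deblurrer would supply one pass of its
# own iteration. The dense matrix H is for illustration only; the paper's
# method uses a limited-memory (and switched) variant instead.
import numpy as np

def broyden_accelerate(F, x0, max_iter=50, tol=1e-10):
    x = np.asarray(x0, dtype=float)
    r = F(x) - x                       # residual of the fixed-point equation
    H = -np.eye(x.size)                # inverse-Jacobian estimate; -I reproduces
                                       # a plain Picard step on the first iteration
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        s = -H @ r                     # quasi-Newton step
        x_new = x + s
        r_new = F(x_new) - x_new
        y = r_new - r
        Hy = H @ y
        # Sherman-Morrison ("good Broyden") update of the inverse Jacobian
        H += np.outer(s - Hy, s @ H) / (s @ Hy)
        x, r = x_new, r_new
    return x

if __name__ == "__main__":
    # Toy contraction: fixed point of F(x) = cos(x), applied component-wise
    print(broyden_accelerate(np.cos, np.zeros(3)))   # ~0.739 in each component
```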

  • Image Restoration with Multiple Hard Constraints on Data-Fidelity to Blurred/Noisy Image Pair

    Saori TAKEYAMA  Shunsuke ONO  Itsuo KUMAZAWA  

     
    PAPER

      Publicized:
    2017/06/14
      Vol:
    E100-D No:9
      Page(s):
    1953-1961

    Existing image deblurring methods that use a blurred/noisy image pair take a two-step approach: blur kernel estimation and image restoration. They achieve better and much more stable blur kernel estimation than single-image deblurring methods. In the image restoration step, however, they either do not exploit the information in the noisy image or require ad hoc tuning of interdependent parameters. This paper focuses on the image restoration step and proposes a new restoration method that uses a blurred/noisy image pair. In our method, the image restoration problem is formulated as a constrained convex optimization problem, where data-fidelity to the blurred image and data-fidelity to the noisy image are properly taken into account as multiple hard constraints. This offers (i) high-quality restoration when the blurred image also contains noise; (ii) robustness to estimation error in the blur kernel; and (iii) easy parameter setting. We also provide an efficient algorithm for solving our optimization problem based on the alternating direction method of multipliers (ADMM). Experimental results support our claims.
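    A hard data-fidelity constraint can be expressed, for example, as an l2-ball ||Ku - b|| <= eps around an observation, and it enters ADMM through the projection onto that set, which has a closed form. The sketch below shows this projection for a splitting variable; the variable names and the eps value are illustrative, and the constraint sets and splitting used in the paper may differ in detail.

```python
# Minimal sketch: projection onto a hard data-fidelity ball
# {z : ||z - b||_2 <= eps}, i.e. the proximity operator of the constraint's
# indicator function. Inside an ADMM scheme, such projections enforce fidelity
# to each observation (blurred and noisy) as a hard constraint.
import numpy as np

def project_l2_ball(z, b, eps):
    """Project z onto the l2 ball of radius eps centered at b."""
    d = z - b
    norm = np.linalg.norm(d)
    if norm <= eps:
        return z                        # already feasible
    return b + d * (eps / norm)         # scale back onto the ball's surface

if __name__ == "__main__":
    b = np.zeros(4)                     # stand-in for a flattened observation
    z = np.array([3.0, 0.0, 0.0, 4.0])  # current splitting variable
    print(project_l2_ball(z, b, eps=1.0))   # lies on the radius-1 sphere around b
```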

  • Shift-Variant Blind Deconvolution Using a Field of Kernels

    Motoharu SONOGASHIRA  Masaaki IIYAMA  Michihiko MINOH  

     
    PAPER

      Publicized:
    2017/06/14
      Vol:
    E100-D No:9
      Page(s):
    1971-1983

    Blind deconvolution (BD) is the problem of restoring sharp images from blurry images when convolution kernels are unknown. While it has a wide range of applications and has been extensively studied, traditional shift-invariant (SI) BD focuses on uniform blur caused by kernels that do not spatially vary. However, real blur caused by factors such as motion and defocus is often nonuniform and thus beyond the ability of SI BD. Although specialized methods exist for nonuniform blur, they can only handle specific blur types. Consequently, the applicability of BD for general blur remains limited. This paper proposes a shift-variant (SV) BD method that models nonuniform blur using a field of kernels that assigns a local kernel to each pixel, thereby allowing pixelwise variation. This concept is realized as a Bayesian model that involves SV convolution with the field of kernels and smoothing of the field for regularization. A variational-Bayesian inference algorithm is derived to jointly estimate a sharp latent image and a field of kernels from a blurry observed image. Owing to the flexibility of the field-of-kernels model, the proposed method can deal with a wider range of blur than previous approaches. Experiments using images with nonuniform blur demonstrate the effectiveness of the proposed SV BD method in comparison with previous SI and SV approaches.
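    The forward model behind the field-of-kernels idea is a shift-variant convolution in which each pixel of the blurry image is formed by its own local kernel. The sketch below implements that forward operator naively with NumPy; the kernel size and boundary handling are assumptions, and the paper's contribution is the variational-Bayesian inference of the latent image and the field, not this operator itself.

```python
# Minimal sketch: shift-variant (SV) convolution with a per-pixel field of
# kernels. kernels[y, x] is the k x k blur kernel acting at pixel (y, x).
# Reflected padding and the kernel size are illustrative choices.
import numpy as np

def sv_convolve(image, kernels):
    H, W = image.shape
    k = kernels.shape[-1]
    r = k // 2
    padded = np.pad(image, r, mode="reflect")
    out = np.empty_like(image, dtype=float)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]
            # Flip the local kernel so this is a true convolution.
            out[y, x] = np.sum(patch * kernels[y, x][::-1, ::-1])
    return out

if __name__ == "__main__":
    img = np.random.rand(32, 32)
    # Field of 5x5 kernels, one per pixel (uniform box blur here for brevity,
    # but in general the kernel varies from pixel to pixel).
    field = np.full((32, 32, 5, 5), 1.0 / 25.0)
    print(sv_convolve(img, field).shape)
```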

  • Blind Image Deconvolution Using Specified 2-D HPF for Feature Extraction and Conjugate Gradient Method in Frequency Domain

    Takanori FUJISAWA  Masaaki IKEHARA  

     
    PAPER-Image

      Vol:
    E100-A No:3
      Page(s):
    846-853

    Image deconvolution is the task of recovering image information that was lost to blur when a photograph was taken. In particular, performing image deconvolution without prior information about the blur kernel is called blind image deconvolution. This problem is severely ill-posed, and an additional operation such as extracting image features is required. Many blind deconvolution frameworks separate the problem into a kernel estimation problem and a deconvolution problem. In order to solve the kernel estimation problem, previous frameworks extract the image's salient features by preprocessing, such as edge extraction. The disadvantage of these frameworks is that the quality of the estimated kernel is degraded by regions with no salient edges. Moreover, the optimization in previous frameworks requires iterative calculation of convolutions, which incurs a heavy computational cost. In this paper, we present a blind image deconvolution framework that uses a specified high-pass filter (HPF) for feature extraction to estimate the blur kernel. The HPF-based feature extraction properly weights the image's regions in the optimization problem, so our method can estimate the kernel even in regions with no salient edges. In addition, our approach accelerates both the kernel estimation and deconvolution processes by utilizing a conjugate gradient method in the frequency domain. This eliminates costly convolution operations from these processes and reduces the execution time. Evaluation on 20 test images shows that our framework not only improves the quality of recovered images but also runs faster than conventional frameworks.
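    A conjugate gradient solver needs only applications of the system operator, and for (de)convolution those applications can be carried out with FFTs rather than spatial convolutions. The sketch below solves a Tikhonov-regularized non-blind deconvolution (K^T K + lam*I)x = K^T b with CG, applying K and K^T via the FFT; the Tikhonov regularizer and circular boundaries are simplifications relative to the paper's HPF-weighted kernel-estimation objective.

```python
# Minimal sketch: conjugate gradient (CG) deconvolution in which every
# operator application uses FFTs (circular convolution) instead of spatial
# convolutions. Solves (K^T K + lam * I) x = K^T b for a known kernel.
import numpy as np

def deconv_cg(blurred, kernel, lam=1e-2, iters=50):
    H, W = blurred.shape
    Kf = np.fft.fft2(kernel, s=(H, W))                 # kernel spectrum

    def A(x):                                          # x -> (K^T K + lam I) x via FFT
        Xf = np.fft.fft2(x)
        return np.real(np.fft.ifft2(np.conj(Kf) * Kf * Xf)) + lam * x

    b = np.real(np.fft.ifft2(np.conj(Kf) * np.fft.fft2(blurred)))   # K^T b
    x = np.zeros_like(blurred)
    r = b - A(x)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = A(p)
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < 1e-8:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    psf = np.ones((5, 5)) / 25.0
    Kf = np.fft.fft2(psf, s=img.shape)
    blurred = np.real(np.fft.ifft2(Kf * np.fft.fft2(img)))   # circular blur
    print(deconv_cg(blurred, psf).shape)
```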

  • Blind Image Deblurring Using Weighted Sum of Gaussian Kernels for Point Spread Function Estimation

    Hong LIU  BenYong LIU  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2015/08/05
      Vol:
    E98-D No:11
      Page(s):
    2026-2029

    Point spread function (PSF) estimation plays a paramount role in image deblurring, and traditionally it is solved by parameter estimation under a preassumed PSF shape model. In practice, however, the PSF shape is generally arbitrary and complicated. In this letter, a PSF is therefore assumed to be decomposable as a weighted sum of a certain number of Gaussian kernels, with the weight coefficients estimated in an alternating manner, and an l1 norm-based total variation (TVl1) algorithm is adopted to recover the latent image. Experiments show that the proposed method can achieve satisfactory performance on synthetic and realistic blurred images.
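    The PSF model in this letter is a weighted sum of Gaussian kernels. The sketch below builds such a PSF from given sigmas and weights; the specific sigmas, weights, and normalization are illustrative, and the letter estimates the weights in an alternating manner rather than fixing them.

```python
# Minimal sketch: composing a PSF as a weighted sum of Gaussian kernels.
# The sigmas and weights below are placeholders; in the letter the weight
# coefficients are estimated alternately during deblurring.
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def mixture_psf(size, sigmas, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # keep the PSF normalized
    return sum(wi * gaussian_kernel(size, si) for wi, si in zip(w, sigmas))

if __name__ == "__main__":
    psf = mixture_psf(size=15, sigmas=[0.8, 1.6, 3.2], weights=[0.5, 0.3, 0.2])
    print(psf.shape, np.isclose(psf.sum(), 1.0))
```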

  • Joint Deblurring and Demosaicing Using Edge Information from Bayer Images

    Du Sic YOO  Min Kyu PARK  Moon Gi KANG  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E97-D No:7
      Page(s):
    1872-1884

    Most images obtained with imaging sensors contain Bayer patterns and suffer from blurring caused by the lens. In order to convert a blurred Bayer-patterned image into a viewable image, demosaicing and deblurring are needed. Both have been major research areas in digital image processing for several decades. Despite their importance, their performance and efficiency are not satisfactory when the two are handled independently. In this paper, we propose a joint deblurring and demosaicing method in which edge direction and edge strength are estimated in the Bayer domain, and edge-adaptive deblurring and edge-oriented interpolation are then performed simultaneously using the estimated edge information. Experimental results show that the proposed method produces better image quality than conventional algorithms in both objective and subjective terms.
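    As a rough illustration of estimating edge information directly in the Bayer domain, the sketch below fills in the green samples of an RGGB mosaic and derives per-pixel edge strength and direction with Sobel filters. The RGGB layout, the bilinear fill, and the Sobel operator are illustrative stand-ins and not the paper's estimator.

```python
# Minimal sketch: edge strength/direction from the green plane of an RGGB
# Bayer mosaic. The bilinear fill of missing green samples and the Sobel
# gradients are illustrative stand-ins for the paper's estimator.
import numpy as np
from scipy.ndimage import convolve

def bayer_green_edges(bayer):
    H, W = bayer.shape
    green_mask = np.zeros((H, W), dtype=bool)
    green_mask[0::2, 1::2] = True        # G locations in an RGGB pattern
    green_mask[1::2, 0::2] = True
    green = np.where(green_mask, bayer, 0.0)
    # Fill missing green samples by averaging the four green neighbors.
    avg = convolve(green, np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / 4.0,
                   mode="mirror")
    green = np.where(green_mask, green, avg)
    # Sobel gradients -> edge strength and direction.
    sx = convolve(green, np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]), mode="mirror")
    sy = convolve(green, np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]), mode="mirror")
    return np.hypot(sx, sy), np.arctan2(sy, sx)

if __name__ == "__main__":
    mosaic = np.random.rand(64, 64)
    strength, direction = bayer_green_edges(mosaic)
    print(strength.shape, direction.shape)
```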

  • Feature Fusion for Blurring Detection in Image Forensics

    BenJuan YANG  BenYong LIU  

     
    LETTER-Image Processing and Video Processing

      Vol:
    E97-D No:6
      Page(s):
    1690-1693

    Artificial blurring is a typical operation in image forging. Most existing image forgery detection methods consider only a single feature of the artificial blurring operation. In this letter, we propose to adopt feature fusion, using multiple features of the artificial blurring operation in image tampering, to improve the accuracy of forgery detection. First, three feature vectors, namely the singular values of the grayscale image matrix, the correlation coefficients of a double blurring operation, and image quality metrics (IQM), are extracted and fused using principal component analysis (PCA); a support vector machine (SVM) classifier is then trained on the fused features extracted from training images or image patches containing artificial blurring. Finally, the same feature extraction and fusion procedures are carried out on a suspected image or image patch, which is then classified by the trained SVM as forged or non-forged. Experimental results show the feasibility of the proposed method for feature fusion and forgery detection in image tampering.
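    The detection pipeline described above is a standard fuse-then-classify pattern: concatenate the per-image feature vectors, reduce them with PCA, and train an SVM on the fused representation. The sketch below wires this up with scikit-learn on placeholder features; the feature extractors themselves (singular values, double-blurring correlations, IQM) are not implemented here.

```python
# Minimal sketch: PCA-based feature fusion followed by an SVM classifier.
# X stands in for concatenated per-image features (singular values,
# double-blurring correlation coefficients, IQM); the real extractors are
# not implemented here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))              # placeholder fused feature vectors
y = rng.integers(0, 2, size=200)            # 1 = forged (blurred), 0 = genuine

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])                   # train on labeled examples
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```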

  • Image Restoration with Multiple DirLOTs

    Natsuki AIZAWA  Shogo MURAMATSU  Masahiro YUKAWA  

     
    PAPER

      Vol:
    E96-A No:10
      Page(s):
    1954-1961

    A directional lapped orthogonal transform (DirLOT) is an orthonormal transform whose basis functions are allowed to be anisotropic while remaining symmetric, real-valued, and compactly supported. Due to this directional property, DirLOT is superior to existing separable transforms such as the DCT and DWT in expressing diagonal edges and textures. The goal of this paper is to further enhance the ability of DirLOT. To achieve this goal, we propose a novel image restoration technique using multiple DirLOTs. This paper generalizes the image denoising technique in [1] and expands the application of multiple DirLOTs by introducing a linear degradation operator P. The idea is to use multiple DirLOTs to construct a redundant dictionary. More precisely, the redundant dictionary is constructed as a union of symmetric orthonormal discrete wavelet transforms generated by DirLOTs. To select atoms from the dictionary that fit a target image, we formulate the image restoration problem as an l1-regularized least-squares problem, which can be solved efficiently by the iterative shrinkage/thresholding algorithm (ISTA). The proposed technique is beneficial in expressing multiple directions of edges and textures. Simulation results show that the proposed technique significantly outperforms the non-subsampled Haar wavelet transform for deblurring, super-resolution, and inpainting.
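    The restoration step above is an l1-regularized least-squares problem solved with ISTA. The sketch below is a generic ISTA loop for min_x 0.5*||Ax - b||^2 + lam*||x||_1, with a small dense matrix A standing in for the degradation operator composed with the redundant DirLOT dictionary; the step size is set from the operator norm.

```python
# Minimal sketch: ISTA for the l1-regularized least-squares problem
#     min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
# A is a small dense stand-in for "degradation operator P composed with the
# redundant DirLOT dictionary"; the real operators are applied implicitly.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, iters=500):
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 100))
    x_true = np.zeros(100)
    x_true[[3, 30, 77]] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.normal(size=40)
    x_hat = ista(A, b, lam=0.05)
    print("large coefficients near:", np.flatnonzero(np.abs(x_hat) > 0.1))
```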

  • Color Shrinkage for Color-Image Sparse Coding and Its Applications

    Takahiro SAITO  Yasutaka UEDA  Takashi KOMATSU  

     
    INVITED PAPER

      Vol:
    E94-A No:2
      Page(s):
    480-492

    As a basic tool for deriving a sparse representation of a color image from its atomic decomposition with a redundant dictionary, the authors have recently proposed a new kind of shrinkage technique, viz. color shrinkage, which utilizes inter-channel color dependence directly in the three-primary-color space. Among various color-shrinkage schemes, this paper presents in particular the soft color-shrinkage and the hard color-shrinkage, natural extensions of the classic soft-shrinkage and hard-shrinkage, respectively, and shows their advantages over existing shrinkage approaches in which the classic shrinkage techniques are applied after a color transformation such as the opponent color transformation. Moreover, this paper presents applications of our color-shrinkage schemes to color-image processing in the redundant tight-frame transform domain and shows their superiority over existing shrinkage approaches.
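    For reference, the classic (channel-wise) soft-shrinkage is shown below alongside a simple channel-coupled variant that shrinks the RGB coefficient vector at each position jointly. The joint rule is only a stand-in to illustrate exploiting inter-channel dependence; it is not the paper's soft or hard color-shrinkage formula.

```python
# Minimal sketch: classic per-channel soft-shrinkage versus a simple
# channel-coupled (joint) shrinkage of RGB transform coefficients. The joint
# rule below is a group-shrinkage stand-in, not the paper's color-shrinkage.
import numpy as np

def soft_shrink(c, t):
    """Classic soft-shrinkage, applied to each coefficient independently."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def joint_rgb_shrink(c, t):
    """Shrink the length-3 RGB coefficient vector at each position as a group."""
    norm = np.linalg.norm(c, axis=-1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norm, 1e-12), 0.0)
    return c * scale

if __name__ == "__main__":
    coeffs = np.random.randn(8, 8, 3)        # toy RGB transform coefficients
    print(soft_shrink(coeffs, 0.5).shape, joint_rgb_shrink(coeffs, 0.5).shape)
```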

  • On the Gaussian Scale-Space

    Taizo IIJIMA  

     
    INVITED PAPER

      Vol:
    E86-D No:7
      Page(s):
    1162-1164

    One of the most basic characteristics of an image is the blur that accompanies it. In 1962, I discovered, for the first time in the world, that this blur is of a Gaussian type. This paper outlines the historical circumstances of that discovery.

  • An Intelligent Image Interpolation Using Cubic Hermite Method

    Heesang KIM  Hanseok KO  

     
    PAPER-Image Processing, Image Pattern Recognition

      Vol:
    E83-D No:4
      Page(s):
    914-921

    This paper proposes an intelligent image interpolation method based on the Cubic Hermite procedure for improving digital images. Image interpolation has been used to create high-resolution effects in digitized image data, providing sharpness in high-frequency image data and smoothness in low-frequency image data. Most interpolation techniques proposed in the past are centered on determining pixel values using the relationship between neighboring points. As one of the more prevalent interpolation techniques, the Cubic Hermite procedure performs interpolation with a third-order polynomial fit using derivatives at the data points and adaptive smoothness parameters. Cubic Hermite interpolants can take many curve shapes, which effectively reduces the problems inherent in interpolation. This paper focuses on a method that intelligently determines the derivatives and adaptive smoothness parameters to contain the interpolation error, achieving significantly improved images. Derivatives are determined by taking a weighted sum of neighboring points, with a weighting function that decreases as the intensity difference between neighboring points increases. The smoothness parameter is obtained by training on an exemplar image to fit the Cubic Hermite function such that the interpolation error is minimized at each interpolation point. The simulations indicate that the proposed method achieves improved results over those of conventional methods in terms of error and image quality.
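    The Cubic Hermite interpolant between two samples p0 and p1 with derivatives m0 and m1 is p(t) = h00(t)p0 + h10(t)m0 + h01(t)p1 + h11(t)m1 for t in [0, 1], with the standard Hermite basis polynomials. The sketch below implements this 1-D interpolant with a derivative estimate whose weight decays with the intensity difference, as described above; the exponential weight is an illustrative stand-in for the paper's trained weighting and smoothness parameters.

```python
# Minimal sketch: 1-D Cubic Hermite interpolation with derivatives estimated
# as a weighted sum of neighbor differences, where the weight decays as the
# intensity difference grows. The exponential weight is an illustrative
# stand-in for the paper's trained weighting/smoothness parameters.
import numpy as np

def edge_weighted_slopes(y, alpha=5.0):
    """Estimate a derivative at each sample from its two neighbor differences."""
    d_left = np.diff(y, prepend=y[0])          # y[i] - y[i-1]
    d_right = np.diff(y, append=y[-1])         # y[i+1] - y[i]
    w_left = np.exp(-alpha * np.abs(d_left))   # small weight across strong edges
    w_right = np.exp(-alpha * np.abs(d_right))
    return (w_left * d_left + w_right * d_right) / (w_left + w_right)

def cubic_hermite(y, t):
    """Interpolate samples y (at integer positions) at fractional positions t."""
    y = np.asarray(y, dtype=float)
    m = edge_weighted_slopes(y)
    i = np.clip(np.floor(t).astype(int), 0, len(y) - 2)
    u = t - i
    h00 = 2 * u**3 - 3 * u**2 + 1              # Hermite basis polynomials
    h10 = u**3 - 2 * u**2 + u
    h01 = -2 * u**3 + 3 * u**2
    h11 = u**3 - u**2
    return h00 * y[i] + h10 * m[i] + h01 * y[i + 1] + h11 * m[i + 1]

if __name__ == "__main__":
    samples = np.array([0.0, 0.0, 1.0, 1.0, 0.2, 0.0])
    t = np.linspace(0, len(samples) - 1, 21)
    print(np.round(cubic_hermite(samples, t), 3))
```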

  • Blind Deconvolution Based on Genetic Algorithms

    Yen-Wei CHEN  Zensho NAKAO  Kouichi ARAKAKI  Shinichi TAMURA  

     
    LETTER-Neural Networks

      Vol:
    E80-A No:12
      Page(s):
    2603-2607

    A genetic algorithm (GA) is presented for the blind-deconvolution problem of image restoration without any a priori information about the object image or the blurring function. The restoration problem is modeled as an optimization problem whose cost function is minimized based on the mechanics of natural selection and natural genetics. The applicability of the GA to the blind-deconvolution problem is demonstrated.
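    As a rough illustration of casting restoration as cost minimization by a genetic algorithm, the sketch below evolves a population of candidate blur kernels to minimize a reblurring error against a known test image. For brevity the latent image is assumed known, unlike the fully blind setting of the letter, and the GA operators (truncation selection, blend crossover, Gaussian mutation) are illustrative choices.

```python
# Minimal sketch: a genetic algorithm evolving a blur kernel to minimize a
# reblurring cost. The latent image is assumed known here, unlike the fully
# blind setting of the letter; the GA operators are illustrative choices.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)

def normalize(k):
    k = np.clip(k, 0.0, None)
    return k / k.sum() if k.sum() > 0 else np.full_like(k, 1.0 / k.size)

def cost(kernel, latent, observed):
    return np.mean((convolve(latent, kernel, mode="mirror") - observed) ** 2)

def ga_estimate_kernel(latent, observed, ksize=5, pop=40, gens=100):
    population = [normalize(rng.random((ksize, ksize))) for _ in range(pop)]
    for _ in range(gens):
        costs = np.array([cost(k, latent, observed) for k in population])
        order = np.argsort(costs)
        parents = [population[i] for i in order[:pop // 2]]    # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.choice(len(parents), size=2, replace=False)
            child = 0.5 * (parents[a] + parents[b])            # blend crossover
            child += 0.02 * rng.normal(size=child.shape)       # Gaussian mutation
            children.append(normalize(child))
        population = parents + children
    costs = np.array([cost(k, latent, observed) for k in population])
    return population[int(np.argmin(costs))]

if __name__ == "__main__":
    latent = rng.random((32, 32))
    observed = convolve(latent, normalize(np.ones((5, 5))), mode="mirror")
    print(np.round(ga_estimate_kernel(latent, observed), 3))
```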

  • Pluralizing Method of Simple Similarity

    Yasuhiro AOKI  Taizo IIJIMA  

     
    PAPER-Classification Methods

      Vol:
    E79-D No:5
      Page(s):
    485-490

    For similarity methods to work well, the image must be blurred before being input. However, the relationship between the blurring operation and the similarity is not fully understood. To clarify this relationship, this paper investigates the effect of blurring by expressing a figure f(x) as a sum of higher-order derivatives of f(x, σ), and then mathematically formulates the simple similarity between figures in terms of the relation between visual patterns. By modifying this formulation, we propose a pluralized simple similarity that increases the allowance, from a viewpoint different from that of the multiple similarity method. The similarity maintains a higher allowance without any discernible loss of distinguishing power. We verify the effectiveness of the pluralized simple similarity through several experiments.
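    For reference, the simple similarity between a pre-blurred input pattern f and a reference pattern g is commonly taken as the squared normalized inner product S = <f, g>^2 / (||f||^2 ||g||^2); the sketch below computes it after a Gaussian pre-blur. The Gaussian scale sigma is an illustrative choice, and the pluralized simple similarity proposed in the paper is not reproduced here.

```python
# Minimal sketch: the classic simple similarity between a blurred input
# pattern f and a reference pattern g, taken here as the squared normalized
# inner product. The pre-blur scale sigma is an illustrative choice; the
# paper's pluralized simple similarity is not reproduced.
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_similarity(f, g, sigma=2.0):
    fb = gaussian_filter(f.astype(float), sigma)     # blur before matching
    gb = gaussian_filter(g.astype(float), sigma)
    num = np.vdot(fb, gb) ** 2
    den = np.vdot(fb, fb) * np.vdot(gb, gb)
    return float(num / den)                          # 1.0 iff patterns are parallel

if __name__ == "__main__":
    a = np.zeros((32, 32))
    a[8:24, 8:24] = 1.0                              # toy square pattern
    b = np.roll(a, 2, axis=1)                        # slightly shifted version
    print(round(simple_similarity(a, b), 3))
```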