
Keyword Search Result

[Keyword] inpainting (18 hits)

1-18 of 18 hits
  • Facial Mask Completion Using StyleGAN2 Preserving Features of the Person

    Norihiko KAWAI  Hiroaki KOIKE  

     
    PAPER

      Publicized: 2023/05/30
      Vol: E106-D No:10
      Page(s): 1627-1637

    Due to the global coronavirus outbreak, people increasingly wear masks even when photographed. As a result, photos uploaded to web pages and social networking services with the lower half of the face hidden are less likely to convey the attractiveness of the photographed person. In this study, we propose a method to complete facial mask regions using StyleGAN2, a type of Generative Adversarial Network (GAN). In the proposed method, a reference image of the same person without a mask is prepared separately from the target image of the person wearing a mask. After the mask region in the target image is temporarily inpainted, the face orientation and contour of the person in the reference image are changed to match those of the target image using StyleGAN2. The changed image is then composited into the mask region while correcting the color tone, producing a mask-free image that preserves the person's features.
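    Only the final compositing step is compact enough to illustrate here. The sketch below is a rough approximation under our own assumptions (simple per-channel mean/standard-deviation tone transfer rather than the paper's actual color correction); the function and argument names are hypothetical.

      import numpy as np

      def composite_with_tone_match(target, aligned_ref, mask):
          """target, aligned_ref: HxWx3 float arrays; mask: HxW bool, True inside the mask region."""
          out = target.copy()
          for c in range(3):
              src = aligned_ref[..., c][mask]
              ctx = target[..., c][~mask]                 # known (unmasked) context drives the tone
              matched = (src - src.mean()) / (src.std() + 1e-6) * ctx.std() + ctx.mean()
              out[..., c][mask] = matched                 # paste tone-corrected reference pixels
          return out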

  • Video Inpainting by Frame Alignment with Deformable Convolution

    Yusuke HARA  Xueting WANG  Toshihiko YAMASAKI  

     
    PAPER-Image Processing and Video Processing

      Publicized: 2021/04/22
      Vol: E104-D No:8
      Page(s): 1349-1358

    Video inpainting is the task of filling missing regions in videos. In this task, it is important to use information from other frames efficiently and to generate plausible results with sufficient temporal consistency. In this paper, we present a video inpainting method that jointly uses affine transformation and deformable convolutions for frame alignment. The former is responsible for frame-scale rough alignment and the latter performs pixel-level fine alignment. Our model depends neither on 3D convolutions, which limit the temporal window, nor on troublesome flow estimation. The proposed method achieves improved object removal results and better PSNR and SSIM values compared with previous learning-based methods.
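    As a small, hedged illustration of the pixel-level alignment building block named above (not the authors' model), torchvision's deformable convolution shifts the sampling locations of a reference-frame feature map by predicted offsets:

      import torch
      from torchvision.ops import deform_conv2d

      N, C, H, W, k = 1, 8, 32, 32, 3
      ref_feat = torch.randn(N, C, H, W)            # reference-frame features to be aligned
      weight = torch.randn(C, C, k, k)              # convolution kernel
      offset = torch.zeros(N, 2 * k * k, H, W)      # learned per-location (dy, dx) sampling offsets
      aligned = deform_conv2d(ref_feat, offset, weight, padding=1)
      print(aligned.shape)                          # torch.Size([1, 8, 32, 32])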

  • SEM Image Quality Assessment Based on Texture Inpainting

    Zhaolin LU  Ziyan ZHANG  Yi WANG  Liang DONG  Song LIANG  

     
    LETTER-Image Processing and Video Processing

      Publicized: 2020/10/30
      Vol: E104-D No:2
      Page(s): 341-345

    This letter presents an image quality assessment (IQA) metric for scanning electron microscopy (SEM) images based on texture inpainting. Inspired by the observation that the texture information of SEM images is quite sensitive to distortions, a texture inpainting network is first trained to extract texture features. Then the weights of the trained texture inpainting network are transferred to the IQA network to help it learn an effective texture representation of the distorted image. Finally, supervised fine-tuning is conducted on the IQA network to predict the image quality score. Experimental results on the SEM image quality dataset demonstrate the advantages of the presented method.
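    A minimal sketch of the weight-transfer idea, using toy PyTorch modules of our own rather than the paper's networks: the texture-inpainting encoder initializes the IQA network's feature extractor before supervised fine-tuning.

      import torch.nn as nn

      class TextureEncoder(nn.Sequential):
          def __init__(self):
              super().__init__(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

      class IQANet(nn.Module):
          def __init__(self):
              super().__init__()
              self.encoder = TextureEncoder()                     # same architecture as the inpainting encoder
              self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(32, 1))         # regresses the quality score

          def forward(self, x):
              return self.head(self.encoder(x))

      inpaint_encoder = TextureEncoder()                          # assume this was trained on texture inpainting
      iqa = IQANet()
      iqa.encoder.load_state_dict(inpaint_encoder.state_dict())   # transfer the learned texture representation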

  • Multiple Subspace Model and Image-Inpainting Algorithm Based on Multiple Matrix Rank Minimization

    Tomohiro TAKAHASHI  Katsumi KONISHI  Kazunori URUMA  Toshihiro FURUKAWA  

     
    PAPER-Image Processing and Video Processing

      Publicized: 2020/08/31
      Vol: E103-D No:12
      Page(s): 2682-2692

    This paper proposes an image inpainting algorithm based on multiple linear models and matrix rank minimization. Several inpainting algorithms have been previously proposed based on the assumption that an image can be modeled using autoregressive (AR) models. However, these algorithms perform poorly when applied to natural photographs because they assume that an image is modeled by a position-invariant linear model with a fixed model order. In order to improve inpainting quality, this work introduces a multiple AR model and proposes an image inpainting algorithm based on multiple matrix rank minimization with sparse regularization. In doing so, a practical algorithm is provided based on the iterative partial matrix shrinkage algorithm, with numerical examples showing the effectiveness of the proposed algorithm.

  • Inpainting via Sparse Representation Based on a Phaseless Quality Metric

    Takahiro OGAWA  Keisuke MAEDA  Miki HASEYAMA  

     
    PAPER-Image

      Vol: E103-A No:12
      Page(s): 1541-1551

    An inpainting method via sparse representation based on a new phaseless quality metric is presented in this paper. Since power spectra, which are phaseless features, of local regions within images represent their texture characteristics more successfully than their pixel values do, a new quality metric based on these phaseless features is derived for image representation. Specifically, the proposed method enables sparse representation of target signals, i.e., target patches including missing intensities, by monitoring the errors to which phase retrieval converges as the novel phaseless quality metric. This is the main contribution of our study. In this approach, the phase retrieval algorithm used in our method has the following two important roles: (1) derivation of the new quality metric, which can be computed even for images including missing intensities, and (2) conversion of phaseless features, i.e., power spectra, into pixel values, i.e., intensities. Therefore, this approach solves the existing problem of not being able to use better features or better quality metrics for inpainting. Results of experiments showed that the proposed method using sparse representation based on the new phaseless quality metric outperforms previously reported methods that directly use pixel values for inpainting.
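    As a small illustration of the phaseless feature itself (not the paper's full phase-retrieval metric), the local power spectrum of a patch discards phase and keeps only the magnitude information that describes its texture:

      import numpy as np

      def local_power_spectrum(patch):
          """Return the magnitude-squared 2-D DFT of a windowed image patch."""
          win = np.hanning(patch.shape[0])[:, None] * np.hanning(patch.shape[1])[None, :]
          return np.abs(np.fft.fft2(patch * win)) ** 2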

  • Quality Index for Benchmarking Image Inpainting Algorithms with Guided Regional Statistics

    Song LIANG  Leida LI  Bo HU  Jianying ZHANG  

     
    LETTER-Image Processing and Video Processing

      Publicized: 2019/04/01
      Vol: E102-D No:7
      Page(s): 1430-1433

    This letter presents an objective quality index for benchmarking image inpainting algorithms. Under the guidance of the masks of damaged areas, the boundary region and the inpainting region are first located. Then, statistical features are extracted from the boundary and inpainting regions respectively. For the boundary region, we utilize a Weibull distribution to fit the gradient magnitude histograms of the exterior and interior regions around the boundary, and the Kullback-Leibler divergence (KLD) is calculated to measure the boundary distortions caused by imperfect inpainting. Meanwhile, the quality of the inpainting region is measured by comparing naturalness factors between the inpainted image and the reference image. Experimental results demonstrate that the proposed metric outperforms the relevant state-of-the-art quality metrics.
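    A minimal sketch (not the authors' code) of the boundary term described above: fit Weibull distributions to the gradient magnitudes sampled from the exterior and interior regions around the inpainting boundary and compare them with the Kullback-Leibler divergence. The function and parameter names are illustrative.

      import numpy as np
      from scipy.stats import weibull_min

      def boundary_kld(grad_mag_exterior, grad_mag_interior, n_bins=64):
          """KLD between Weibull fits of two 1-D arrays of gradient magnitudes."""
          c_ext, _, s_ext = weibull_min.fit(grad_mag_exterior, floc=0)   # shape/scale, location fixed at 0
          c_int, _, s_int = weibull_min.fit(grad_mag_interior, floc=0)
          hi = max(grad_mag_exterior.max(), grad_mag_interior.max())
          x = np.linspace(1e-6, hi, n_bins)                              # common evaluation grid
          p = weibull_min.pdf(x, c_ext, scale=s_ext) + 1e-12
          q = weibull_min.pdf(x, c_int, scale=s_int) + 1e-12
          p, q = p / p.sum(), q / q.sum()
          return float(np.sum(p * np.log(p / q)))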

  • Wiener-Based Inpainting Quality Prediction

    Takahiro OGAWA  Akira TANAKA  Miki HASEYAMA  

     
    PAPER-Image Processing and Video Processing

      Publicized: 2017/07/04
      Vol: E100-D No:10
      Page(s): 2614-2626

    A Wiener-based inpainting quality prediction method is presented in this paper. The proposed method is the first that can predict inpainting quality both before and after intensities are lost, even when the inpainting method is unknown. When the target image does not yet include any missing areas, the proposed method estimates the importance of the intensities of all pixels, so that we can know which areas should not be removed. Since the same measure can also be derived for a corrupted image that already includes missing areas, the expected difficulty of reconstructing those missing pixels is predicted, i.e., we can know which missing areas can be successfully reconstructed. The proposed method focuses on the expected errors derived from the Wiener filter, which enables least-squares reconstruction, to predict the inpainting quality. The greatest advantage of the proposed method is that the same inpainting quality prediction scheme can be used in the above two different situations, and their results have common trends. Experimental results show that the inpainting quality predicted by the proposed method can be successfully used as a universal quality measure.
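    One way to make the idea concrete (a sketch under our own assumptions, not the paper's exact estimator): with a zero-mean patch model of covariance C, the least-squares (Wiener) estimate of the missing pixels from the known ones has expected squared error trace(C_mm - C_mk C_kk^{-1} C_km), which can serve as a per-region predictor of reconstruction difficulty.

      import numpy as np

      def wiener_expected_error(C, missing_idx):
          """C: patch covariance matrix; missing_idx: indices of missing pixels."""
          n = C.shape[0]
          m = np.asarray(missing_idx)
          k = np.setdiff1d(np.arange(n), m)
          C_mm, C_mk, C_kk = C[np.ix_(m, m)], C[np.ix_(m, k)], C[np.ix_(k, k)]
          err_cov = C_mm - C_mk @ np.linalg.solve(C_kk, C_mk.T)   # posterior error covariance
          return float(np.trace(err_cov))                         # larger value = harder to reconstruct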

  • A Fast Exemplar-Based Image Inpainting Method Using Bounding Based on Mean and Standard Deviation of Patch Pixels

    Jungmin SO  Baeksop KIM  

     
    PAPER-Image Processing and Video Processing

      Publicized: 2015/05/08
      Vol: E98-D No:8
      Page(s): 1553-1561

    This paper proposes an algorithm for exemplar-based image inpainting that produces the same result as Criminisi's original scheme but at a much smaller computational cost. The idea is to compute the mean and standard deviation of every patch in the image, and to use these values to decide whether a pixel-by-pixel comparison is necessary when searching for the best matching patch. Because of the missing pixels in the target patch, the corresponding pixels in a candidate patch must be omitted when computing the distance between patches. Thus, we first compute the range of the mean and standard deviation of a candidate patch with missing pixels, using the mean and standard deviation of the entire patch. Then we use this range to determine whether the pixel comparison should be conducted. Measurements with well-known images from the inpainting literature show that the algorithm can save a significant amount of computation without risking degradation of image quality.
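    The pruning principle can be sketched as follows (illustrative code, not the paper's implementation, which additionally bounds the candidate statistics over the known pixels from precomputed whole-patch values): skip the exact pixel-by-pixel comparison whenever a cheap lower bound on the SSD, built from patch means and standard deviations, already exceeds the best distance found so far.

      import numpy as np

      def ssd_lower_bound(mean_a, std_a, mean_b, std_b, n_pixels):
          # For any two patches a, b with n pixels:
          #   SSD(a, b) >= n * ((mean_a - mean_b)**2 + (std_a - std_b)**2)
          return n_pixels * ((mean_a - mean_b) ** 2 + (std_a - std_b) ** 2)

      def find_best_match(target, known, candidates):
          """target: patch with missing pixels; known: bool mask of valid pixels."""
          t = target[known]
          t_mean, t_std, n = t.mean(), t.std(), known.sum()
          best_dist, best_idx = np.inf, -1
          for idx, cand in enumerate(candidates):
              c = cand[known]                               # compare only over known pixels
              if ssd_lower_bound(t_mean, t_std, c.mean(), c.std(), n) >= best_dist:
                  continue                                  # bound already too large: skip exact SSD
              d = float(((t - c) ** 2).sum())
              if d < best_dist:
                  best_dist, best_idx = d, idx
          return best_idx, best_dist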

  • Exemplar-Based Inpainting Driven by Feature Vectors and Region Segmentation

    Jinki PARK  Jaehwa PARK  Young-Bin KWON  Chan-Gun LEE  Ho-Hyun PARK  

     
    LETTER-Image Processing and Video Processing

      Publicized: 2015/01/09
      Vol: E98-D No:4
      Page(s): 972-975

    A new exemplar-based inpainting method, driven by feature vectors, which effectively preserves global structures and textures in the restored region, is presented. Exemplars that belong to the source region are segmented based on their features. To express characteristics of exemplars such as the shapes of structures and the smoothness of textures, the Harris corner response and the variance of pixel values are employed as a feature vector. Experiments show improvements in restoration plausibility as well as a processing speedup.
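    A possible per-exemplar feature vector in the spirit of the one described above (illustrative; the exact computation may differ from the paper's):

      import numpy as np
      import cv2

      def exemplar_feature(patch_gray):
          """patch_gray: 2-D grayscale patch from the source region."""
          p = np.float32(patch_gray)
          harris = cv2.cornerHarris(p, blockSize=2, ksize=3, k=0.04)   # structure (corner) response
          return np.array([float(harris.mean()), float(p.var())])      # [structure, texture smoothness]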

  • Pixel and Patch Reordering for Fast Patch Selection in Exemplar-Based Image Inpainting

    Baeksop KIM  Jiseong KIM  Jungmin SO  

     
    LETTER-Image Processing and Video Processing

      Vol: E96-D No:12
      Page(s): 2892-2895

    This letter presents a scheme to improve the running time of exemplar-based image inpainting, first proposed by Criminisi et al. In exemplar-based image inpainting, a patch that contains unknown pixels is compared to all the patches in the known region in order to find the best match. This is very time-consuming and hinders the use of Criminisi's method in real time. We show that a simple bounding algorithm can significantly reduce the number of distance calculations, and thus the running time. The performance of the bounding algorithm is affected by the order in which patches are compared, as well as by the order of pixels within a patch. We present pixel and patch ordering schemes that improve the performance of bounding algorithms. Experiments with well-known images used in the inpainting literature show that the proposed reordering scheme can reduce the running time of the bounding algorithm by up to 50%.
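    The reordering idea can be sketched simply (names are ours, not the paper's): visit candidate patches in order of increasing mean difference from the target, so that a tight best-so-far distance is found early and the bound prunes more of the remaining comparisons.

      import numpy as np

      def order_candidates_by_mean(target_mean, candidate_means):
          """Return candidate indices sorted by closeness of their precomputed patch means."""
          return np.argsort(np.abs(np.asarray(candidate_means) - target_mean))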

  • A Depth-Guided Inpainting Scheme Based on Foreground Depth-Layer Removal for High Quality 2D to 3D Video Conversion

    Jangwon CHOI  Yoonsik CHOE  Yong-Goo KIM  

     
    LETTER-Image Processing and Video Processing

      Vol: E96-D No:11
      Page(s): 2483-2486

    This letter proposes a novel depth-guided inpainting scheme for high-quality hole filling in 2D-to-3D video conversion. The proposed scheme detects and removes foreground depth layers in an image patch, enabling appropriate patch formation using only disoccluded background information. This background-only patch formation helps avoid propagating wrong depths over the hole area, and thus improves the overall quality of the converted 3D video experience. Experimental results demonstrate that the proposed scheme provides visually much more pleasing inpainting results with better-preserved object edges than state-of-the-art depth-guided inpainting schemes.

  • Image Restoration with Multiple DirLOTs

    Natsuki AIZAWA  Shogo MURAMATSU  Masahiro YUKAWA  

     
    PAPER

      Vol: E96-A No:10
      Page(s): 1954-1961

    A directional lapped orthogonal transform (DirLOT) is an orthonormal transform whose basis functions are allowed to be anisotropic while remaining symmetric, real-valued, and compactly supported. Due to this directional property, DirLOT is superior to existing separable transforms such as the DCT and DWT in expressing diagonal edges and textures. The goal of this paper is to enhance the ability of DirLOT further. To achieve this goal, we propose a novel image restoration technique using multiple DirLOTs. This paper generalizes the image denoising technique in [1] and expands the application of multiple DirLOTs by introducing a linear degradation operator P. The idea is to use multiple DirLOTs to construct a redundant dictionary. More precisely, the redundant dictionary is constructed as a union of symmetric orthonormal discrete wavelet transforms generated by DirLOTs. To select atoms fitting a target image from the dictionary, we formulate the image restoration problem as an l1-regularized least-squares problem, which can be efficiently solved by the iterative shrinkage/thresholding algorithm (ISTA). The proposed technique is beneficial in expressing multiple directions of edges and textures. Simulation results show that the proposed technique significantly outperforms the non-subsampled Haar wavelet transform for deblurring, super-resolution, and inpainting.
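    A generic ISTA sketch for the l1-regularized least-squares problem min_x 0.5*||y - P D x||^2 + lam*||x||_1, with the degradation operator P and the redundant dictionary D treated as abstract linear operators; this illustrates the solver named in the abstract, not the authors' DirLOT implementation.

      import numpy as np

      def soft_threshold(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def ista(A, At, y, lam, step, n_iter=200):
          """A(x) applies P*D; At(r) applies its adjoint; step <= 1 / Lipschitz constant."""
          x = At(y) * 0.0                              # start from the zero coefficient vector
          for _ in range(n_iter):
              grad = At(A(x) - y)                      # gradient of the quadratic data term
              x = soft_threshold(x - step * grad, step * lam)
          return x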

  • Fast and Structure-Preserving Image Inpainting Based on Probabilistic Structure Estimation

    Takashi SHIBATA  Akihiko IKETANI  Shuji SENDA  

     
    PAPER-Image Synthesis

      Vol: E95-D No:7
      Page(s): 1731-1739

    This paper presents a novel inpainting method based on structure estimation. The method first estimates an initial image that captures the rough structure and colors in the missing region. This image is generated by probabilistically estimating the gradient within the missing region based on edge segments intersecting its boundary, and then by flooding the colors on the boundary into the missing region. The color flooding is formulated as an energy minimization problem, and is efficiently optimized by the conjugate gradient method. Finally, by locally replacing the missing region with local patches similar to both the adjacent patches and the initial image, the inpainted image is synthesized. The initial image not only serves as a guide to ensure the underlying structure is preserved, but also allows the patch selection process to be carried out in a greedy manner, which leads to substantial speedup. Experimental results show the proposed method is capable of preserving the underlying structure in the missing region, while achieving more than 5 times faster computational speed than the state-of-the-art inpainting method. Subjective evaluation of image quality also shows the proposed method outperforms the previous methods.

  • Image Inpainting Based on Adaptive Total Variation Model

    Zhaolin LU  Jiansheng QIAN  Leida LI  

     
    LETTER-Image

      Vol: E94-A No:7
      Page(s): 1608-1612

    In this letter, a novel adaptive total variation (ATV) model is proposed for image inpainting. The classical TV model is a partial differential equation (PDE)-based technique. While the TV model can preserve image edges well, it has some drawbacks, such as a staircase effect in the inpainted image and a slow convergence rate. By analyzing the diffusion mechanism of the TV model and introducing a new edge detection operator named difference curvature, we propose a novel ATV inpainting model. The proposed ATV model diffuses the image information smoothly and quickly; that is, it not only eliminates the staircase effect but also accelerates the convergence rate. Experimental results demonstrate the effectiveness of the proposed scheme.
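    A minimal sketch of plain PDE-based TV inpainting for orientation (the paper's ATV model additionally weights the diffusion using the difference-curvature operator): evolve u_t = div(grad(u)/|grad(u)|) inside the hole while keeping known pixels fixed.

      import numpy as np

      def tv_inpaint(image, mask, n_iter=500, dt=0.1, eps=1e-3):
          """image: 2-D array; mask: True where pixels are missing."""
          u = image.astype(float).copy()
          for _ in range(n_iter):
              ux = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
              uy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
              mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
              px, py = ux / mag, uy / mag                     # normalized gradient field
              div = ((np.roll(px, -1, axis=1) - np.roll(px, 1, axis=1)) / 2.0
                     + (np.roll(py, -1, axis=0) - np.roll(py, 1, axis=0)) / 2.0)
              u[mask] += dt * div[mask]                       # diffuse only inside the hole
          return u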

  • A Novel Bandelet-Based Image Inpainting

    Kuo-Ming HUNG  Yen-Liang CHEN  Ching-Tang HSIEH  

     
    PAPER-Image Coding and Processing

      Vol: E92-A No:10
      Page(s): 2471-2478

    This paper proposes a novel image inpainting method based on the bandelet transform. The technique performs image restoration on multi-resolution layers and mainly uses the geometric flow of the texture neighboring the damaged regions as the basis of restoration. By warping along the geometric flow, it aligns the textural variation with the domain axes, uses bandelet decomposition to separate unrelated textures into different bands, and then combines them with an affine search method to perform the restoration. The experimental results show that the proposed method simplifies the repair decision process and improves perceived (HVS) quality, so that the repaired results preserve strongly varying contours as well as textures with high-frequency variation, and can achieve state-of-the-art results.

  • A Visual Inpainting Method Based on the Compressed Domain

    Yi-Wei JIANG  De XU  Moon-Ho LEE  Cong-Yan LANG  

     
    LETTER-Image Processing and Video Processing

      Vol: E90-D No:10
      Page(s): 1716-1719

    Visual inpainting is an interpolation problem that restores an image or a frame with missing or damaged parts. Over the past decades, a number of computable models of visual inpainting have been developed, but most of these models operate in the pixel domain; little theoretical and computational work on visual inpainting has addressed the compressed domain. In this paper, a visual inpainting model in the discrete cosine transform (DCT) domain is proposed. DCT coefficients of the non-inpainting blocks are utilized to obtain block features, and those block features are propagated to the inpainting region iteratively. Experimental results with I-frames of MPEG-4 are presented to demonstrate the efficiency and accuracy of the proposed algorithm.
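    A small sketch of working in the compressed domain (our own toy example, not the paper's model): compute 8x8 block DCT coefficients, as carried by MPEG-4 I-frames, and read simple block features (DC level and AC energy) directly from the coefficients of non-inpainting blocks.

      import numpy as np
      from scipy.fft import dctn

      def block_dct_features(gray, block=8):
          """gray: 2-D array whose sides are divisible by `block`."""
          h, w = gray.shape
          feats = np.zeros((h // block, w // block, 2))
          for by in range(0, h, block):
              for bx in range(0, w, block):
                  coeffs = dctn(gray[by:by + block, bx:bx + block], norm='ortho')
                  dc = coeffs[0, 0]                                   # block DC level
                  ac_energy = float((coeffs ** 2).sum() - dc ** 2)    # remaining (AC) energy
                  feats[by // block, bx // block] = (dc, ac_energy)
          return feats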

  • Inpainting Highlights Using Color Line Projection

    Joung Wook PARK  Kwan Heng LEE  

     
    PAPER

      Vol: E90-D No:1
      Page(s): 250-257

    In this paper, we propose a novel method to inpaint highlights and remove specularity in images containing specular objects using color line projection. Color line projection projects a color that contains a surface reflection component onto the neighborhood of the diffuse color line by following the direction of the specular color line. We use two images captured with different exposure times, since the color in the highlight region is distorted and saturated toward the illumination color, so a clue to the original color in the highlight area must be sought across the two images. In the first step of the proposed procedure, the region corresponding to the highlight is generated and the clue to the original highlight color is acquired. In the next step, the color line is generated by a restricted region-growing method around the highlight region, and the color line is divided into the diffuse color line and the specular color line. In the final step, pixels near the specular color line are projected onto the neighborhood of the diffuse color line by color line projection, in which a modified random function is applied to inpaint the highlight realistically. One advantage of our method is that it finds the highlight region and the clue to its original color with ease. It also efficiently estimates the surface reflection component, which is used to remove specularity and to inpaint the highlight. The proposed method performs highlight inpainting and specular removal simultaneously once the color line is generated. In addition, color line projection with the modified random function makes the result more realistic. We show experimental results on real images as well as composites of the real images with the images modified by the proposed method.

  • Progressive Image Inpainting Based on Wavelet Transform

    Yen-Liang CHEN  Ching-Tang HSIEH  Chih-Hsu HSU  

     
    PAPER-Image Coding

      Vol: E88-A No:10
      Page(s): 2826-2834

    Existing automatic image inpainting methods emphasize either global or local techniques and do not let the merits of the two compensate for each other. In contrast, when artists repair an image, they first fix it from a global view and then focus on its local features. This paper proposes a progressive image inpainting method based on multi-resolution analysis. In damaged and defective areas, we imitate these artistic techniques to approach the effectiveness of image inpainting in human vision. First, we use the multi-resolution characteristics of the wavelet transform, from the lowest spatial-frequency layer to higher ones, to analyze the image progressively from global to local scales. Then, we utilize the variance of the energy of the wavelet coefficients within each image block to decide the priority of the inpainting blocks. Finally, we extract the multi-resolution features of each block and take into account the correlation among the horizontal, vertical, and diagonal directions to determine the inpainting strategy for filling image pixels, approximating high-quality image inpainting in human vision. In our experiments, the performance of the proposed method is superior to that of existing methods.
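    The block-priority idea can be sketched as follows (illustrative code with our own names, using a single-level Haar decomposition rather than the paper's full multi-resolution scheme): rank blocks by the variance of their wavelet detail-coefficient energy and inpaint high-variance blocks first.

      import numpy as np
      import pywt

      def block_priority(gray, wavelet='haar', block=8):
          """Rank blocks (at detail-coefficient resolution) by variance of detail energy."""
          _, (cH, cV, cD) = pywt.dwt2(gray.astype(float), wavelet)
          energy = cH ** 2 + cV ** 2 + cD ** 2            # per-coefficient detail energy
          h, w = energy.shape
          scores = []
          for by in range(0, h - block + 1, block):
              for bx in range(0, w - block + 1, block):
                  scores.append(((by, bx), float(energy[by:by + block, bx:bx + block].var())))
          scores.sort(key=lambda s: s[1], reverse=True)   # high variance first
          return [pos for pos, _ in scores]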