Keyword Search Result

[Keyword] background estimation (3 hits)

  • Hole-Filling Algorithm with Spatio-Temporal Background Information for View Synthesis

    Huu-Noi DOAN  Tien-Dat NGUYEN  Min-Cheol HONG  

     
    PAPER

    Publicized: 2017/06/14
    Vol: E100-D No:9
    Page(s): 1994-2004

    This paper presents a new hole-filling method that uses extrapolated spatio-temporal background information to obtain synthesized free-viewpoint views. A new background codebook for extracting reliable temporal background information is introduced. In addition, the paper estimates the local spatial background to distinguish background from foreground regions so that spatial background information can be extrapolated. Background holes are filled by combining the spatial and temporal background information, and exemplar-based inpainting with a new priority function is then applied to fill the remaining holes. The experimental results demonstrate that satisfactory synthesized views can be obtained with the proposed algorithm.
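
    A minimal, hypothetical sketch of the hole-filling pipeline described in the abstract, not the authors' implementation: a per-pixel temporal median stands in for the background codebook, a simple temporal-variance test stands in for the reliability check, and OpenCV's inpainting substitutes for exemplar-based inpainting with the new priority function. All names, thresholds, and parameters are assumptions.

        import numpy as np
        import cv2

        def fill_holes(synth_frame, hole_mask, past_frames):
            """synth_frame: HxWx3 uint8 synthesized view; hole_mask: HxW bool (True = hole);
            past_frames: list of HxWx3 uint8 earlier frames serving as the temporal source."""
            filled = synth_frame.copy()

            if past_frames:
                stack = np.stack(past_frames).astype(np.float64)
                # Per-pixel temporal median as a crude stand-in for the background codebook.
                temporal_bg = np.median(stack, axis=0).astype(np.uint8)
                # Trust the temporal background only where past frames agree (low variance);
                # the 10.0 threshold is an arbitrary assumption.
                reliable = stack.std(axis=0).mean(axis=2) < 10.0
                fill_from_bg = hole_mask & reliable
                filled[fill_from_bg] = temporal_bg[fill_from_bg]
                hole_mask = hole_mask & ~reliable

            # Remaining holes: OpenCV inpainting as a substitute for exemplar-based
            # inpainting driven by the paper's priority function.
            if hole_mask.any():
                mask8 = hole_mask.astype(np.uint8) * 255
                filled = cv2.inpaint(filled, mask8, 3, cv2.INPAINT_TELEA)
            return filled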

  • Privacy Protection for Social Video via Background Estimation and CRF-Based Videographer's Intention Modeling

    Yuta NAKASHIMA  Noboru BABAGUCHI  Jianping FAN  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2016/01/13
    Vol: E99-D No:4
    Page(s): 1221-1233

    The recent popularization of social network services (SNSs) such as YouTube, Dailymotion, and Facebook enables people to easily publish personal videos taken with mobile cameras. At the same time, this popularity has raised a new problem: video privacy. In such social videos, the privacy of people, i.e., their appearances, must be protected, but naively obscuring all people might spoil the video content. To address this problem, we focus on videographers' capture intentions. In a social video, some persons are usually essential to the content; they are intentionally captured by the videographer and are called intentionally captured persons (ICPs), while the others are accidentally framed in (non-ICPs). Videos containing the appearances of non-ICPs might violate their privacy. In this paper, we develop a system called BEPS, which adopts a novel conditional random field (CRF)-based method for ICP detection, as well as a novel approach that obscures non-ICPs and preserves ICPs using background estimation. BEPS reduces the burden of manually obscuring the appearances of non-ICPs before uploading a video to an SNS. Compared with conventional systems, BEPS has two main advantages: (i) it maintains the video content, and (ii) it is immune to failures of person detection; false positives in person detection do not violate privacy. Our experimental results validated these two advantages.
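
    A minimal, hypothetical sketch of the obscuring step described in the abstract, covering only the background-replacement part; the CRF-based ICP detection is outside this sketch. The per-pixel temporal median used as the estimated background and all function names are assumptions, not the BEPS implementation.

        import numpy as np

        def estimate_background(frames):
            """Per-pixel temporal median over a list of HxWx3 uint8 frames
            from a static or stabilized shot."""
            return np.median(np.stack(frames), axis=0).astype(np.uint8)

        def obscure_non_icps(frame, non_icp_mask, background):
            """Replace the pixels of accidentally framed-in persons (True in non_icp_mask)
            with the estimated background, leaving intentionally captured persons untouched."""
            out = frame.copy()
            out[non_icp_mask] = background[non_icp_mask]
            return out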

  • Extraction of Blood Vessels in Retinal Images Using Resampling High-Order Background Estimation

    Sukritta PARIPURANA  Werapon CHIRACHARIT  Kosin CHAMNONGTHAI  Hideo SAITO  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2014/12/12
    Vol: E98-D No:3
    Page(s): 692-703

    In retinal blood vessel extraction through background removal, the vessels in a fundus image which appear in a higher illumination variance area are often missing after the background is removed. This is because the intensity values of the vessel and the background are nearly the same. Thus, the estimated background should be robust to changes of the illumination intensity. This paper proposes retinal blood vessel extraction using background estimation. The estimated background is calculated by using a weight surface fitting method with a high degree polynomial. Bright pixels are defined as unwanted data and are set as zero in a weight matrix. To fit a retinal surface with a higher degree polynomial, fundus images are reduced in size by different scaling parameters in order to reduce the processing time and complexity in calculation. The estimated background is then removed from the original image. The candidate vessel pixels are extracted from the image by using the local threshold values. To identify the true vessel region, the candidate vessel pixels are dilated from the candidate. After that, the active contour without edge method is applied. The experimental results show that the efficiency of the proposed method is higher than the conventional low-pass filter and the conventional surface fitting method. Moreover, rescaling an image down using the scaling parameter at 0.25 before background estimation provides as good a result as a non-rescaled image does. The correlation value between the non-rescaled image and the rescaled image is 0.99. The results of the proposed method in the sensitivity, the specificity, the accuracy, the area under the receiver operating characteristic (ROC) curve (AUC) and the processing time per image are 0.7994, 0.9717, 0.9543, 0.9676 and 1.8320 seconds for the DRIVE database respectively.