
Keyword Search Result

[Keyword] face hallucination (2 hits)

  • Face Hallucination via Multi-Scale Structure Prior Learning

    Yuexi YAO, Tao LU, Kanghui ZHAO, Yanduo ZHANG, Yu WANG

    LETTER-Image
    Publicized: 2022/07/19
    Vol: E106-A No:1
    Page(s): 92-96

    Recently, deep-learning-based face hallucination methods have learned the mapping between low-resolution (LR) and high-resolution (HR) facial patterns by exploring facial structure priors. However, maintaining facial structure consistency when reconstructing face images at different scales remains a challenging problem. In this letter, we propose a novel multi-scale structure prior learning (MSPL) method for face hallucination. First, we propose a multi-scale structure prior block (MSPB). Considering the loss of high-frequency information in the LR space, the block processes the input image in three spaces of ascending scale, mapping the image into higher-dimensional spaces to extract multi-scale structural prior information. The feature maps are then returned to the original size by downsampling, and the multi-scale information is finally fused to restore the feature channels. On this basis, we propose a local detail attention module (LDAM) to focus on the local texture information of faces. Extensive face hallucination experiments on a public face dataset (LFW) verify the effectiveness of our method. (An illustrative code sketch of the MSPB/LDAM idea follows the result list.)

  • Face Hallucination by Learning Local Distance Metric

    Yuanpeng ZOU, Fei ZHOU, Qingmin LIAO

    LETTER-Image Processing and Video Processing
    Publicized: 2016/11/07
    Vol: E100-D No:2
    Page(s): 384-387

    In this letter, we propose a novel method for face hallucination that learns a new distance metric in the low-resolution (LR) patch space (the source space). Local patch-based face hallucination methods usually assume that the two manifolds formed by LR and high-resolution (HR) image patches have similar local geometry; however, this assumption does not hold well in practice. Motivated by metric learning in machine learning, we propose to learn a new distance metric in the source space under the supervision of the true local geometry in the target space (the HR patch space). The learned metric gives more freedom to the representation of local geometry in the source space, so the local geometries of the source and target spaces become more consistent. Experiments on two datasets demonstrate that the proposed method is superior to state-of-the-art face hallucination and image super-resolution (SR) methods. (An illustrative sketch of such supervised metric learning follows the result list.)
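
For the MSPL entry above, the following is a minimal PyTorch sketch of how a multi-scale structure prior block (MSPB) and a local detail attention module (LDAM) could be organized, assuming per-scale convolutional branches, bicubic rescaling, and a sigmoid spatial attention map. The scale factors, channel widths, and layer choices are illustrative assumptions, not the letter's reported architecture.

```python
# Hypothetical sketch only: module names follow the letter, but every
# hyperparameter and layer choice below is an assumption for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MSPB(nn.Module):
    """Extract structural priors in several up-scaled spaces, then fuse them."""

    def __init__(self, channels=64, scales=(2, 3, 4)):
        super().__init__()
        self.scales = scales
        # One small convolutional branch per up-scaled ("higher-dimensional") space.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            for _ in scales
        ])
        # Fuse the concatenated multi-scale priors back to the original width.
        self.fuse = nn.Conv2d(channels * len(scales), channels, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        priors = []
        for s, branch in zip(self.scales, self.branches):
            # Map features into a larger spatial space, extract the prior,
            # then downsample back to the input resolution.
            up = F.interpolate(x, scale_factor=s, mode='bicubic', align_corners=False)
            prior = branch(up)
            priors.append(F.interpolate(prior, size=(h, w), mode='bicubic', align_corners=False))
        return x + self.fuse(torch.cat(priors, dim=1))


class LDAM(nn.Module):
    """Spatial attention emphasizing local texture details (illustrative)."""

    def __init__(self, channels=64):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Reweight each spatial location by a learned detail map in [0, 1].
        return x * self.attn(x)
```

In a full network, such blocks would typically sit between a shallow feature extractor and an upsampling/reconstruction tail; the stacking depth, losses, and training details are not covered by this sketch.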
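
For the local distance metric entry above, here is a rough NumPy sketch of one way to supervise an LR-patch metric with HR-patch geometry: fit a linear map L so that d(x_i, x_j) = ||L(x_i - x_j)|| in the source space approximates the corresponding HR-patch distances, then use the learned metric for neighbor search. The linear metric form, the squared-error objective, and the gradient-descent fitting are assumptions for illustration, not the letter's exact formulation.

```python
# Illustrative sketch: learn a metric d(x_i, x_j) = ||L (x_i - x_j)||_2 in the
# LR patch space so that it mimics distances in the HR patch space.
# The objective and optimizer here are assumptions, not the letter's method.
import numpy as np


def learn_source_metric(lr_patches, hr_patches, n_iters=200, n_pairs=2000,
                        step=1e-3, seed=0):
    """lr_patches: (N, d_lr), hr_patches: (N, d_hr); rows are aligned LR/HR pairs."""
    rng = np.random.default_rng(seed)
    n, d_lr = lr_patches.shape
    L = np.eye(d_lr)  # start from the plain Euclidean metric in the source space
    for _ in range(n_iters):
        i = rng.integers(0, n, n_pairs)
        j = rng.integers(0, n, n_pairs)
        dx = lr_patches[i] - lr_patches[j]                      # (n_pairs, d_lr)
        target = np.linalg.norm(hr_patches[i] - hr_patches[j], axis=1)
        proj = dx @ L.T                                         # L applied to differences
        pred = np.linalg.norm(proj, axis=1) + 1e-8
        # Gradient of the squared error between source-metric and HR distances.
        resid = (pred - target) / pred                          # (n_pairs,)
        grad = (resid[:, None, None] * proj[:, :, None] * dx[:, None, :]).mean(axis=0)
        L -= step * grad
    return L


def nearest_neighbors(query_lr, lr_patches, L, k=5):
    """Find the k training LR patches closest to the query under the learned metric."""
    diffs = (lr_patches - query_lr) @ L.T
    dists = np.linalg.norm(diffs, axis=1)
    return np.argsort(dists)[:k]
```

In a neighbor-embedding style pipeline, the HR counterparts of the returned neighbors would then be combined (for example with least-squares reconstruction weights) to synthesize the HR patch; that step is standard patch-based hallucination and is not specific to this letter.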