
Author Search Result

[Author] Yanduo ZHANG (6 hits)

  • Face Hallucination via Multi-Scale Structure Prior Learning

    Yuexi YAO  Tao LU  Kanghui ZHAO  Yanduo ZHANG  Yu WANG  

     
    LETTER-Image
    Publicized: 2022/07/19 | Vol: E106-A No:1 | Page(s): 92-96

    Recently, deep-learning-based face hallucination methods have learned the mapping between low-resolution (LR) and high-resolution (HR) facial patterns by exploring facial structure priors. However, maintaining face structure consistency across reconstructions of face images at different scales remains a challenging problem. In this letter, we propose novel multi-scale structure prior learning (MSPL) for face hallucination. First, we propose a multi-scale structure prior block (MSPB). To compensate for the loss of high-frequency information in the LR space, we process the input image in three ascending-dimensional spaces at different scales, mapping the image into a high-dimensional space to extract multi-scale structural prior information. The feature maps are then restored to their original size by downsampling, and finally the multi-scale information is fused to restore the feature channels. On this basis, we propose a local detail attention module (LDAM) that focuses on the local texture information of faces. Extensive face hallucination experiments on a public face dataset (LFW) verify the effectiveness of our method.
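The multi-scale idea in this abstract can be illustrated with a toy sketch (this is a hypothetical stand-in for the convolutional MSPB branches, not the authors' implementation): a feature map is processed at three scales, every branch is brought back to the input resolution, and the branches are fused.

```python
import numpy as np

def avg_pool2(x):
    """Downsample a (C, H, W) feature map by 2 with average pooling."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour upsampling by 2 for a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def multi_scale_prior(feat):
    """Toy multi-scale prior extraction: process the map at full, half,
    and quarter resolution, restore each branch to the input size, and
    fuse by averaging."""
    s1 = feat                                               # full scale
    s2 = upsample2(avg_pool2(feat))                         # half scale
    s4 = upsample2(upsample2(avg_pool2(avg_pool2(feat))))   # quarter scale
    return (s1 + s2 + s4) / 3.0
```

In the real network each branch would apply learned convolutions before fusion; the sketch only shows the scale-split/restore/fuse skeleton.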

  • Multi-View Texture Learning for Face Super-Resolution

    Yu WANG  Tao LU  Feng YAO  Yuntao WU  Yanduo ZHANG  

     
    PAPER-Image Recognition, Computer Vision
    Publicized: 2021/03/24 | Vol: E104-D No:7 | Page(s): 1028-1038

    In recent years, single-image face super-resolution (SR) using deep neural networks has been well developed. However, most face images captured by cameras in real scenes show the same person from different views, and existing traditional multi-frame image SR requires alignment between images. Because multi-view face images contain texture information from different views, which can serve as an effective prior, how to use this multi-view prior to reconstruct frontal face images is challenging. To address this problem, we propose a novel face SR network based on multi-view face images, which focuses on obtaining more texture information from multi-view face images to aid the reconstruction of frontal face images. Within this network, we also propose a texture attention mechanism that transfers high-precision texture compensation information to the frontal face image for better visual quality. We conduct subjective and objective evaluations, and the experimental results show the great potential of multi-view face image SR. Comparison with other state-of-the-art deep learning SR methods shows that the proposed method has excellent performance.
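The texture-transfer step described above can be sketched as ordinary scaled dot-product attention (a hypothetical simplification, not the paper's exact mechanism): frontal-view features query side-view features, and each frontal position receives a weighted sum of side-view texture features as compensation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def texture_attention(frontal, side):
    """Toy texture attention. frontal: (N, C) flattened frontal-view
    features; side: (M, C) flattened side-view features. Each frontal
    position attends over all side-view positions and adds the weighted
    side-view texture as a residual compensation."""
    c = frontal.shape[1]
    scores = frontal @ side.T / np.sqrt(c)   # (N, M) similarities
    weights = softmax(scores, axis=1)        # attention over side view
    return frontal + weights @ side          # compensated frontal features
```

A real implementation would use learned query/key/value projections; the sketch keeps only the attention skeleton.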

  • Face Super-Resolution via Triple-Attention Feature Fusion Network

    Kanghui ZHAO  Tao LU  Yanduo ZHANG  Yu WANG  Yuanzhi WANG  

     
    LETTER-Image
    Publicized: 2021/10/13 | Vol: E105-A No:4 | Page(s): 748-752

    In recent years, face super-resolution (SR) based on deep neural networks has shown strong performance compared with traditional face SR algorithms. Among these methods, attention mechanisms are widely used in face SR because of their strong feature expression ability. However, existing attention-based face SR methods cannot fully mine the missing pixel information of low-resolution (LR) face images (the structural prior), and they consider only a single attention mechanism to exploit facial structure. Using multiple forms of attention can help enhance the feature representation. To solve this problem, we first propose a new pixel attention mechanism that can recover the structural details of lost pixels. We then design an attention fusion module to better integrate the different characteristics of the triple attention. Experimental results on the FFHQ dataset show that this method is superior to existing face SR methods based on deep neural networks.
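Pixel attention in general (this is a generic sketch, not the letter's specific module) gates every spatial position individually: a 1x1 convolution over channels followed by a sigmoid yields a per-pixel mask in (0, 1) that rescales the feature map.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pixel_attention(feat, w):
    """Toy pixel attention. feat: (C, H, W) feature map; w: (1, C)
    weights of a 1x1 convolution (a per-pixel linear map over channels).
    The sigmoid produces a (1, H, W) gate that rescales every pixel."""
    mask = sigmoid(np.tensordot(w, feat, axes=([1], [0])))  # (1, H, W)
    return feat * mask                                      # broadcast gate
```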

  • 2-D Frequency Estimation of Multiple Damped Sinusoids Using Subspace and Projection Separation Approaches

    Longting HUANG  Yuntao WU  Hing Cheung SO  Yanduo ZHANG  

     
    LETTER-Digital Signal Processing
    Vol: E94-A No:9 | Page(s): 1842-1846

    In this paper, a new method for 2-D frequency estimation of multiple damped sinusoids in additive white Gaussian noise is proposed. The key idea is to combine a subspace-based technique with a projection separation approach. The frequency parameters in the first dimension are estimated by a MUSIC-based method, and a set of projection separation matrices is then constructed from the estimated frequency parameters. In this way, the frequency parameters in the second dimension can be separated by the constructed projection separation matrices. Finally, each frequency parameter in the second dimension is estimated by multiple 1-D MUSIC-based methods. The estimated frequency parameters in the two dimensions are automatically paired. Computer simulations are included to compare the proposed algorithm with several existing methods.
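The 1-D MUSIC building block used in each dimension can be sketched as follows (a minimal single-channel version, not the paper's 2-D algorithm): build a Hankel data matrix, take the noise subspace of its covariance, and scan a frequency grid for peaks of the pseudospectrum.

```python
import numpy as np

def music_freq(x, p=1, L=20, grid=np.linspace(0.0, 0.5, 501)):
    """1-D MUSIC estimate of the dominant normalized frequency among p
    sinusoids in the sequence x. Builds an L x (len(x)-L+1) Hankel data
    matrix, takes the L-p smallest eigenvectors of its covariance as the
    noise subspace, and picks the grid frequency whose steering vector is
    most nearly orthogonal to that subspace."""
    n = len(x)
    X = np.array([x[i:i + n - L + 1] for i in range(L)])  # Hankel matrix
    R = X @ X.conj().T / X.shape[1]                        # covariance
    _, vecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    En = vecs[:, :L - p]             # noise subspace
    k = np.arange(L)
    best_f, best_p = 0.0, -np.inf
    for f in grid:
        a = np.exp(2j * np.pi * f * k)                    # steering vector
        pseudo = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
        if pseudo > best_p:
            best_f, best_p = f, pseudo
    return best_f

# Single damped sinusoid at normalized frequency 0.2 with mild damping
n = np.arange(64)
x = np.exp((-0.01 + 2j * np.pi * 0.2) * n)
print(music_freq(x))   # peaks near 0.2
```

The paper's method applies this kind of estimator along the first dimension, then separates the second-dimension components with projection matrices before re-running 1-D MUSIC on each.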

  • Face Super-Resolution via Hierarchical Multi-Scale Residual Fusion Network

    Yu WANG  Tao LU  Zhihao WU  Yuntao WU  Yanduo ZHANG  

     
    LETTER-Image
    Publicized: 2021/03/03 | Vol: E104-A No:9 | Page(s): 1365-1369

    Exploiting the structural information of facial images as a prior is a key issue in face super-resolution (SR). Although deep convolutional neural networks (CNNs) have powerful representation ability, accurately using facial structural information remains challenging. In this paper, we propose a new residual fusion network that utilizes multi-scale structural information for face SR. Unlike existing methods that increase network depth, a bottleneck attention module is introduced to extract fine facial structural features by exploring correlations among feature maps. Finally, hierarchical scales of structural information are fused to generate a high-resolution (HR) facial image. Experimental results show that the proposed network outperforms existing state-of-the-art CNN-based face SR algorithms.
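The channel half of a bottleneck attention module can be sketched like this (a generic squeeze-and-gate pattern, hypothetical and omitting the spatial branch): global average pooling squeezes the map to a channel vector, a two-layer bottleneck MLP produces per-channel gates, and the gates rescale the channels.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Toy bottleneck channel attention. feat: (C, H, W); w1: (C//r, C)
    and w2: (C, C//r) are the bottleneck MLP weights with reduction r.
    Returns the feature map rescaled by per-channel gates in (0, 1)."""
    squeeze = feat.mean(axis=(1, 2))                  # (C,) global pool
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))  # ReLU bottleneck
    return feat * gate[:, None, None]                 # rescale channels
```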

  • Multi Feature Fusion Attention Learning for Clothing-Changing Person Re-Identification

    Liwei WANG  Yanduo ZHANG  Tao LU  Wenhua FANG  Yu WANG  

     
    LETTER-Image
    Publicized: 2022/01/25 | Vol: E105-A No:8 | Page(s): 1170-1174

    Person re-identification (Re-ID) aims to match images of the same pedestrian identity across different camera views. Because pedestrians change clothes frequently over longer periods, while many current methods rely heavily on color appearance information or focus only on biometric features, the performance of these methods drops noticeably in clothing-changing scenarios. To relieve this dilemma, we propose a novel Multi Feature Fusion Attention Network (MFFAN), which learns fine-grained local features. We then introduce a Clothing Adaptive Attention (CAA) module, which integrates features of multiple granularities to guide the model to learn pedestrians' biometric features. Meanwhile, to fully verify the performance of our method on the clothing-changing Re-ID problem, we design a Clothing Generation Network (CGN) that can generate multiple pictures of the same identity wearing different clothes. Finally, experimental results show that our method exceeds the current best method by over 5% and 6% on the VCcloth and PRCC datasets respectively.