
Author Search Result

[Author] Keisuke MAEDA (2 hits)

  • Multi-Task Convolutional Neural Network Leading to High Performance and Interpretability via Attribute Estimation

    Keisuke MAEDA  Kazaha HORII  Takahiro OGAWA  Miki HASEYAMA  

     
    LETTER-Neural Networks and Bioengineering
    Vol: E103-A No:12  Page(s): 1609-1612

    A multi-task convolutional neural network that achieves both high performance and interpretability via attribute estimation is presented in this letter. Our method provides an interpretation of a CNN's classification results by outputting, in a middle layer, attributes that describe elements of objects and serve as the reason for the CNN's judgement. Furthermore, the proposed network uses the estimated attributes for the subsequent prediction of classes. Consequently, a novel multi-task CNN with improvements in both interpretability and classification performance is realized.
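    As a rough illustration of the architecture described in this abstract (not the authors' implementation), the following Python/PyTorch sketch shows a multi-task CNN whose middle-layer attribute estimates are returned as an interpretable output and also concatenated with the shared features for class prediction. The backbone layers, layer sizes, and names such as AttributeGuidedCNN are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the authors' code): a multi-task
# CNN whose intermediate attribute estimates also feed the final class prediction.
import torch
import torch.nn as nn

class AttributeGuidedCNN(nn.Module):
    def __init__(self, num_attributes: int = 16, num_classes: int = 10):
        super().__init__()
        # Shared convolutional backbone (illustrative, not the paper's exact layers).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Middle-layer head: attribute estimation provides the interpretable "reason".
        self.attribute_head = nn.Linear(64, num_attributes)
        # Class head consumes both shared features and the estimated attributes.
        self.class_head = nn.Linear(64 + num_attributes, num_classes)

    def forward(self, x):
        features = self.backbone(x)
        attributes = torch.sigmoid(self.attribute_head(features))
        logits = self.class_head(torch.cat([features, attributes], dim=1))
        return attributes, logits  # attributes are the interpretable output

# Training would combine an attribute loss (e.g. BCE) with a classification
# loss (e.g. cross-entropy) as a multi-task objective.
```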

  • Inpainting via Sparse Representation Based on a Phaseless Quality Metric

    Takahiro OGAWA  Keisuke MAEDA  Miki HASEYAMA  

     
    PAPER-Image
    Vol: E103-A No:12  Page(s): 1541-1551

    An inpainting method via sparse representation based on a new phaseless quality metric is presented in this paper. Since power spectra, which are phaseless features of local regions within images, represent texture characteristics more successfully than pixel values do, a new quality metric based on these phaseless features is derived for image representation. Specifically, the proposed method enables sparse representation of target signals, i.e., target patches including missing intensities, by monitoring the errors to which phase retrieval converges as the novel phaseless quality metric. This is the main contribution of our study. In this approach, the phase retrieval algorithm has two important roles: (1) derivation of the new quality metric, which can be computed even for images with missing intensities, and (2) conversion of phaseless features, i.e., power spectra, into pixel values, i.e., intensities. This approach therefore overcomes the existing limitation that such features and quality metrics could not be used for inpainting. Experimental results showed that the proposed method, which uses sparse representation based on the new phaseless quality metric, outperforms previously reported methods that directly use pixel values for inpainting.
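    The following NumPy sketch gives one plausible reading of the phaseless quality metric described in this abstract; it is not the authors' code. A Gerchberg-Saxton-style phase retrieval alternates between a target power-spectrum (magnitude) constraint and the observed intensities on the known pixels, and the converged residual is used as the quality score. Function and parameter names (phase_retrieval_error, n_iter, etc.) are assumptions.

```python
# Minimal sketch (illustrative assumptions, not the authors' code): score how well
# a target Fourier magnitude (power-spectrum feature) explains a patch with
# missing intensities, using the converged phase-retrieval error as the metric.
import numpy as np

def phase_retrieval_error(target_magnitude, known_pixels, known_mask, n_iter=50):
    """Alternate between the target Fourier magnitude and the known pixels;
    the remaining mismatch after convergence acts as the phaseless quality metric."""
    # Initialize missing intensities with the mean of the observed ones.
    estimate = np.where(known_mask, known_pixels, known_pixels[known_mask].mean())
    for _ in range(n_iter):
        spectrum = np.fft.fft2(estimate)
        # Enforce the target magnitude while keeping the current phase estimate.
        spectrum = target_magnitude * np.exp(1j * np.angle(spectrum))
        estimate = np.real(np.fft.ifft2(spectrum))
        # Re-impose the observed intensities on the known region.
        estimate = np.where(known_mask, known_pixels, estimate)
    # Residual between the final estimate's magnitude and the target magnitude.
    final_magnitude = np.abs(np.fft.fft2(estimate))
    return np.linalg.norm(final_magnitude - target_magnitude) / target_magnitude.size

# A sparse-coding inpainter could use this error, instead of a pixel-wise
# residual, to select dictionary atoms for each patch with missing intensities.
```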