
Author Search Result

[Author] Masafumi HAGIWARA (8 hits)

  • An Improved Anchor Shot Detection Method Using Fitness of Face Location and Dissimilarity of Icon Region

    Ji-Soo KEUM  Hyon-Soo LEE  Masafumi HAGIWARA  

     
    LETTER-Image

      Vol:
    E93-A No:4
      Page(s):
    863-866

In this letter, we propose an improved anchor shot detection (ASD) method to effectively retrieve anchor shots from news video. The proposed method uses the face location and the dissimilarity of the icon region to reduce false alarms. Experiments on several types of news video show that the proposed method achieves higher anchor detection performance than previous methods.

  • An Improved Speech / Nonspeech Classification Based on Feature Combination for Audio Indexing

    Ji-Soo KEUM  Hyon-Soo LEE  Masafumi HAGIWARA  

     
    LETTER-Speech and Hearing

      Vol:
    E93-A No:4
      Page(s):
    830-832

In this letter, we propose an improved speech/nonspeech classification method to effectively classify multimedia sources. To improve performance, we introduce a feature based on spectral duration analysis and combine it with recently proposed features such as the high zero crossing rate ratio (HZCRR), the low short time energy ratio (LSTER), and the pitch ratio (PR). Experiments on speech, music, and environmental sounds show that the proposed method achieves higher classification accuracy than conventional approaches.
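Two of the combined features, LSTER and HZCRR, can be sketched in NumPy. This is an illustrative implementation using one common pair of definitions; the frame sizes and the 0.5x-energy and 1.5x-ZCR thresholds are typical textbook choices, not values taken from the letter:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames (rows)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def lster(x, frame_len=256, hop=128):
    """Low Short-Time Energy Ratio: fraction of frames whose energy
    falls below half the mean frame energy."""
    energy = np.mean(frame_signal(x, frame_len, hop) ** 2, axis=1)
    return float(np.mean(energy < 0.5 * energy.mean()))

def hzcrr(x, frame_len=256, hop=128):
    """High Zero-Crossing Rate Ratio: fraction of frames whose
    zero-crossing rate exceeds 1.5 times the mean ZCR."""
    frames = frame_signal(x, frame_len, hop)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return float(np.mean(zcr > 1.5 * zcr.mean()))
```

Speech typically yields a higher LSTER than music because of the low-energy gaps between syllables; that contrast is what makes the feature combination discriminative.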

  • A Multi-Winner Associative Memory

    Jiongtao HUANG  Masafumi HAGIWARA  

     
    PAPER-Bio-Cybernetics and Neurocomputing

      Vol:
    E82-D No:7
      Page(s):
    1117-1125

In this paper, we propose a new associative memory named Multi-Winner Associative Memory (MWAM) and study its bidirectional association properties. The MWAM handles pattern pairs in two processes: a storage process and a recall process. In the storage process, the MWAM represents one half of a pattern pair in the distributed representation layer and stores the correspondence between the pattern and its representation in the upward weights; the correspondence between the distributed representation and the other half of the pair is stored in the downward weights. In the recall process, the MWAM recalls information bidirectionally: for any stored pattern pair, one half can be recalled by presenting the other half to the input-output layer.

  • Discriminative Convolutional Neural Network for Image Quality Assessment with Fixed Convolution Filters

    Motohiro TAKAGI  Akito SAKURAI  Masafumi HAGIWARA  

     
    LETTER-Image Recognition, Computer Vision

  Publicized:
    2019/08/09
      Vol:
    E102-D No:11
      Page(s):
    2265-2266

Current image quality assessment (IQA) methods require the original image for evaluation. Recently, however, IQA methods based on machine learning have been proposed; these methods automatically learn the relationship between a distorted image and its quality. In this paper, we propose a deep-learning-based IQA method that does not require a reference image. We show that a convolutional neural network with distortion prediction and fixed filters improves IQA accuracy.

  • Block Demodulation for Trellis Coded Modulation

    Yutaka MIYAKE  Masafumi HAGIWARA  Masao NAKAGAWA  

     
    PAPER-Modulation and Demodulation

      Vol:
    E73-E No:10
      Page(s):
    1674-1680

Trellis-Coded Modulation (TCM) schemes have become popular in digital transmission systems because they improve error performance. However, demodulation of a trellis-coded signal is rather difficult: TCM schemes need more signal points than uncoded schemes, which leads to a comparatively high probability of cycle slips. In addition, a large loop delay caused by decoding cannot be avoided when decision-feedback demodulators are used. This paper proposes a novel demodulation method for TCM signals using block demodulation. Block demodulation is a kind of off-line demodulation, which has many advantages. The Viterbi decoder in the proposed block demodulator is used not only for Viterbi decoding but also for carrier estimation, and this combined processing is performed twice for high performance. In addition, a block demodulation scheme is not affected by processing delay. Therefore, in the proposed system, carrier estimation becomes accurate and Viterbi decoding becomes correct. As a result, the system achieves coding gains that cannot be obtained by conventional block demodulation methods, and it can demodulate not only PSK signals but also QAM signals. The performance of the proposed demodulator is confirmed by computer simulation.
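The off-line nature of block carrier estimation can be illustrated with a stand-in: the classic M-th power estimator applied to a stored block of samples. This is not the paper's Viterbi-decoder-based joint estimation, only a minimal sketch of block (rather than tracking-loop) carrier recovery; all names and constants are illustrative:

```python
import numpy as np

def mth_power_phase_estimate(block, m=4):
    """Block (off-line) carrier-phase estimate for M-PSK: raising each
    stored sample to the M-th power strips the data modulation, leaving
    M times the carrier phase (up to an inherent 2*pi/M ambiguity)."""
    return np.angle(np.sum(block ** m)) / m

# Usage sketch: a QPSK block (symbols on the axes) with an unknown
# phase offset small enough to avoid the 2*pi/M ambiguity.
rng = np.random.default_rng(1)
symbols = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, 200))
offset = 0.2                      # radians, unknown to the receiver
est = mth_power_phase_estimate(symbols * np.exp(1j * offset), m=4)
```

Because the whole block is stored before estimation, the estimate uses every sample at once instead of converging gradually as a tracking loop would.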

  • Direct-Sequence Spread-Spectrum Demodulator Using Block Signal Processing

    Akihiro KAJIWARA  Masao NAKAGAWA  Masafumi HAGIWARA  

     
    PAPER

      Vol:
    E74-B No:5
      Page(s):
    1108-1114

This paper presents a Direct-Sequence Spread-Spectrum (SS-DS) demodulator using block signal processing. One of the difficulties in applying SS-DS techniques to packet radio networks is that each packet needs a long initial-acquisition time for despreading, and this acquisition time causes a large degradation of data transmission efficiency. Unlike conventional SS-DS demodulators that use real-time signal processing, the proposed demodulator uses block signal processing: the quasi-coherently demodulated received signal is first stored in memory, and after the matched-pulse timing is extracted and the carrier offset is estimated, the signal is demodulated. All incoming data are therefore demodulated without being lost during initial acquisition, and the proposed SS-DS demodulator provides higher data transmission efficiency.
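The store-then-despread idea can be sketched as follows. The 8-chip code, the BPSK data, and the function name are illustrative, not from the paper, and carrier-offset estimation is omitted for brevity:

```python
import numpy as np

def block_despread(rx, code):
    """Block despreading sketch: the whole received block is stored,
    chip timing is found by a matched filter over one code period,
    and only then is every stored symbol despread -- so no data is
    lost to initial acquisition."""
    n = len(code)
    corr = [abs(np.dot(rx[k:k + n], code)) for k in range(n)]
    k0 = int(np.argmax(corr))                  # chip-timing estimate
    nsym = (len(rx) - k0) // n
    chips = rx[k0:k0 + nsym * n].reshape(nsym, n)
    return np.sign(chips @ code)               # recovered BPSK symbols

code = np.array([1, -1, 1, 1, -1, 1, -1, -1], dtype=float)  # toy 8-chip code
data = np.array([1, -1, -1, 1, 1], dtype=float)             # BPSK symbols
rx = np.concatenate([np.zeros(3), np.kron(data, code)])     # unknown 3-chip delay
recovered = block_despread(rx, code)
```

Because timing is estimated from the stored block itself, even the symbols received before acquisition completes are recovered.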

  • Quick Learning for Bidirectional Associative Memory

    Motonobu HATTORI  Masafumi HAGIWARA  Masao NAKAGAWA  

     
    PAPER-Learning

      Vol:
    E77-D No:4
      Page(s):
    385-392

Recently, much research on associative memories has been conducted, and many neural network models have been proposed. Bidirectional Associative Memory (BAM) is one of them. The BAM uses Hebbian learning; however, unless the training vectors are orthogonal, Hebbian learning does not guarantee the recall of all training pairs, so a BAM trained by Hebbian learning suffers from low memory capacity. To improve the storage capacity of the BAM, the Pseudo-Relaxation Learning Algorithm for BAM (PRLAB) has been proposed, but PRLAB needs long learning epochs because of its random initial weights. In this paper, we propose Quick Learning for BAM, which greatly reduces learning epochs and guarantees the recall of all training pairs. In the proposed algorithm, the BAM is first trained by Hebbian learning and then trained by PRLAB. Owing to the Hebbian learning in the first stage, the weights start much closer to the solution space than randomly chosen initial weights, so the proposed algorithm reduces the learning epochs. The features of the proposed algorithm are: 1) it requires far fewer learning epochs; 2) it guarantees the recall of all training pairs; 3) it is robust to noisy inputs; 4) its memory capacity is much larger than that of the conventional BAM. In addition, we clarify several important characteristics of the conventional and proposed algorithms, such as noise reduction characteristics and storage capacity, and we present an index related to noise reduction.
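The two-stage idea can be sketched as follows. Stage 1 is genuine Hebbian (correlation) storage; stage 2 uses a simple perceptron-style margin correction as a stand-in for PRLAB, whose exact update rule is not reproduced here, and the training pairs are illustrative:

```python
import numpy as np

def quick_learning(X, Y, margin=1.0, rate=0.1, epochs=200):
    """Two-stage sketch: Hebbian storage first, then iterative margin
    corrections (a perceptron-style stand-in for PRLAB) until every
    forward recall satisfies the margin."""
    W = Y.T @ X                            # stage 1: Hebbian weights
    for _ in range(epochs):                # stage 2: relaxation-style fixes
        changed = False
        for x, y in zip(X, Y):
            bad = y * (W @ x) < margin     # margin-violating output units
            if bad.any():
                W = W + rate * np.outer(bad * y, x)
                changed = True
        if not changed:
            break
    return W

# Bipolar training pairs (one pair per row).
X = np.array([[1, -1, 1, -1], [1, 1, -1, -1], [-1, 1, 1, 1]], dtype=float)
Y = np.array([[1, 1, -1], [-1, 1, 1], [1, -1, 1]], dtype=float)
W = quick_learning(X, Y)
```

Starting the correction stage from the Hebbian weights rather than random ones is exactly what lets the second stage terminate quickly.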

  • Analysis of Momentum Term in Back-Propagation

    Masafumi HAGIWARA  Akira SATO  

     
    PAPER-Bio-Cybernetics and Neurocomputing

      Vol:
    E78-D No:8
      Page(s):
    1080-1086

The back-propagation algorithm has been applied to many fields and has demonstrated the large capability of neural networks. Many people use the back-propagation algorithm together with a momentum term to accelerate its convergence. However, despite its importance for theoretical studies, the theoretical background of the momentum term has been unknown so far. First, this paper clearly explains the theoretical origin of the momentum term in the back-propagation algorithm for both batch-mode learning and pattern-by-pattern learning. We prove that the back-propagation algorithm with a momentum term can be derived from the following two assumptions: 1) the cost function is E_n = Σ_µ α^(n-µ) E_µ, where E_µ is the summation of squared errors at the output layer at the µth learning step and α is the momentum coefficient; 2) the latest weights are assumed in calculating the cost function E_n. Next, we derive a simple relationship among momentum, learning rate, and learning speed, and further discussion is made with computer simulation.
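The update rule whose origin the paper derives is ordinary gradient descent plus a momentum term, Δw_n = -η ∇E(w_{n-1}) + α Δw_{n-1}. A minimal sketch on a toy quadratic cost (the function names and constants are illustrative, not from the paper):

```python
import numpy as np

def momentum_descent(grad, w0, eta=0.1, alpha=0.9, steps=200):
    """Gradient descent with a momentum term:
        dw_n = -eta * grad(w_{n-1}) + alpha * dw_{n-1}
    where eta is the learning rate and alpha the momentum coefficient."""
    w, dw = np.asarray(w0, dtype=float), 0.0
    for _ in range(steps):
        dw = -eta * grad(w) + alpha * dw
        w = w + dw
    return w

# Toy quadratic cost E(w) = 0.5 * ||w||^2, whose gradient is w itself;
# the momentum term smooths the trajectory toward the minimum at 0.
w_final = momentum_descent(lambda w: w, w0=[2.0, -1.0])
```

The past updates decay geometrically with factor α, which is the same geometric weighting α^(n-µ) that appears in the cost function above.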