
Keyword Search Result

[Keyword] perceptron (35 hits)

Showing hits 1-20 of 35

  • Multilayer Perceptron Training Accelerator Using Systolic Array

    Takeshi SENOO  Akira JINGUJI  Ryosuke KURAMOCHI  Hiroki NAKAHARA  

     
    PAPER

    Publicized: 2022/07/21
    Vol: E105-D No:12
    Page(s): 2048-2056

    Multilayer perceptron (MLP) is a basic neural network model used in practical industrial applications, such as network intrusion detection (NID) systems. It is also used as a building block in newer models, such as gMLP. Currently, there is a demand for fast training in NID and other areas. However, training with numerous GPUs raises the problems of power consumption and long training times. Many of the latest deep neural network (DNN) models and MLPs are trained with the backpropagation algorithm, which propagates the error gradient from the output layer back to the input layer; because this computation is sequential, the next input cannot be processed until the weights of all layers have been updated, starting from the last layer. This is known as backward locking. In this study, a weight-update mechanism with time delays is proposed that tolerates the weight-update delay and thereby allows forward and backward computation to proceed simultaneously. To this end, a one-dimensional systolic array structure was designed on a Xilinx Alveo U50 FPGA card in which each layer of the MLP is assigned to a processing element (PE). The time-delay backpropagation algorithm executes all layers in parallel and transfers data between layers in a pipeline. The proposed design is 3 times faster than an Intel Core i9 CPU and 2.5 times faster than an NVIDIA RTX 3090 GPU. Its processing speed per unit of power consumption is 11.5 times better than the CPU's and 21.4 times better than the GPU's. From these results, it is concluded that a training accelerator on an FPGA can achieve high speed and high energy efficiency.
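
    A minimal sketch of the delayed-update idea described in this abstract, assuming a toy two-layer MLP in NumPy: gradients computed at one step are applied only at the next step, so forward and backward passes could overlap in a pipeline. The sizes, data, and one-step delay are illustrative assumptions; this is not the authors' systolic-array or FPGA implementation.

      import numpy as np

      # Toy sketch of "delayed" weight updates: gradients computed at step t are
      # applied at step t+1, so forward and backward computation could overlap.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(256, 16))                          # toy inputs
      y = (X.sum(axis=1, keepdims=True) > 0).astype(float)    # toy labels

      W1 = rng.normal(scale=0.1, size=(16, 32))
      W2 = rng.normal(scale=0.1, size=(32, 1))
      pending = None                                          # gradients waiting to be applied
      lr = 0.1

      for step in range(200):
          idx = rng.integers(0, len(X), size=32)
          xb, yb = X[idx], y[idx]

          # forward pass with the current (possibly stale) weights
          h = np.tanh(xb @ W1)
          p = 1.0 / (1.0 + np.exp(-(h @ W2)))

          # backward pass: compute gradients now, but defer the update one step
          dZ2 = (p - yb) / len(xb)
          g2 = h.T @ dZ2
          dH = (dZ2 @ W2.T) * (1.0 - h ** 2)
          g1 = xb.T @ dH

          if pending is not None:                             # apply last step's gradients
              W1 -= lr * pending[0]
              W2 -= lr * pending[1]
          pending = (g1, g2)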

  • Neural Network Calculations at the Speed of Light Using Optical Vector-Matrix Multiplication and Optoelectronic Activation

    Naoki HATTORI  Jun SHIOMI  Yutaka MASUDA  Tohru ISHIHARA  Akihiko SHINYA  Masaya NOTOMI  

     
    PAPER

    Publicized: 2021/05/17
    Vol: E104-A No:11
    Page(s): 1477-1487

    With the rapid progress of integrated nanophotonics technology, optical neural network architectures have been widely investigated. Since an optical neural network can complete inference processing simply by propagating an optical signal through the network, it is expected to be more than one order of magnitude faster than electronics-only implementations of artificial neural networks (ANNs). In this paper, we first propose an optical vector-matrix multiplication (VMM) circuit using wavelength division multiplexing, which enables inference processing at the speed of light with an ultra-wide bandwidth. We next propose an optoelectronic circuit implementation of batch normalization and the activation function, which significantly improves the accuracy of the inference processing without sacrificing speed. Finally, using a virtual environment for machine learning and an optoelectronic circuit simulator, we demonstrate the ultra-fast and accurate operation of the optoelectronic ANN circuit.
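
    A purely numerical sketch of the split this abstract describes, assuming generic NumPy stand-ins: one ANN layer is decomposed into a vector-matrix multiplication stage (performed optically with WDM in the paper) and a batch-normalization-plus-activation stage (performed optoelectronically). No optics are modeled; the function names and sizes are hypothetical.

      import numpy as np

      def vmm_stage(x, W):
          # In the paper this product is computed by light propagation.
          return x @ W

      def activation_stage(z, gamma, beta, eps=1e-5):
          # Batch normalization followed by a ReLU-like nonlinearity.
          z_norm = (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)
          return np.maximum(gamma * z_norm + beta, 0.0)

      rng = np.random.default_rng(1)
      x = rng.normal(size=(8, 4))          # batch of 8 input vectors
      W = rng.normal(size=(4, 3))          # layer weights
      y = activation_stage(vmm_stage(x, W), gamma=1.0, beta=0.0)
      print(y.shape)                       # (8, 3)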

  • Classifying MathML Expressions by Multilayer Perceptron

    Yuma NAGAO  Nobutaka SUZUKI  

     
    LETTER-Data Engineering, Web Information Systems

    Publicized: 2018/04/04
    Vol: E101-D No:7
    Page(s): 1954-1958

    MathML is a standard markup language for describing math expressions. MathML consists of two sets of elements: Presentation Markup and Content Markup. The former is widely used to display math expressions in Web pages, while the latter is better suited to the calculation of math expressions. In this letter, we focus on the former and consider classifying Presentation MathML expressions. Identifying the classes of given Presentation MathML expressions is helpful for several applications, e.g., Presentation-to-Content MathML conversion and text-to-speech. We propose a method for classifying Presentation MathML expressions using a multilayer perceptron. Experimental results show that our method classifies MathML expressions with high accuracy.
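
    An illustrative pipeline, assuming made-up features and labels: a Presentation MathML string is turned into a bag-of-element-tags vector and classified with scikit-learn's MLPClassifier. The tag vocabulary and toy classes are assumptions for the example, not the letter's actual feature design.

      import xml.etree.ElementTree as ET
      from collections import Counter
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      TAGS = ["mi", "mn", "mo", "mfrac", "msup", "msub", "msqrt", "mrow"]

      def mathml_features(mathml_str):
          root = ET.fromstring(mathml_str)
          # strip namespaces, count occurrences of each element tag
          counts = Counter(el.tag.split("}")[-1] for el in root.iter())
          return np.array([counts.get(t, 0) for t in TAGS], dtype=float)

      samples = [
          ("<math><mfrac><mi>a</mi><mi>b</mi></mfrac></math>", "fraction"),
          ("<math><msup><mi>x</mi><mn>2</mn></msup></math>", "power"),
          ("<math><mrow><mi>a</mi><mo>+</mo><mi>b</mi></mrow></math>", "sum"),
          ("<math><msup><mi>y</mi><mn>3</mn></msup></math>", "power"),
      ]
      X = np.stack([mathml_features(s) for s, _ in samples])
      y = [label for _, label in samples]

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X, y)
      print(clf.predict(mathml_features(samples[0][0]).reshape(1, -1)))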

  • ECG-Based Heartbeat Classification Using Two-Level Convolutional Neural Network and RR Interval Difference

    Yande XIANG  Jiahui LUO  Taotao ZHU  Sheng WANG  Xiaoyan XIANG  Jianyi MENG  

     
    PAPER-Biological Engineering

    Publicized: 2018/01/12
    Vol: E101-D No:4
    Page(s): 1189-1198

    Arrhythmia classification based on the electrocardiogram (ECG) is crucial in automatic cardiovascular disease diagnosis. The classification methods used in current practice largely depend on hand-crafted features. However, extracting hand-crafted features may introduce significant computational complexity, especially in transform domains. In this study, an accurate method for patient-specific ECG beat classification is proposed that adopts morphological features and timing information. For the morphological features of a heartbeat, an attention-based two-level 1-D CNN is incorporated in the proposed method to extract features of different granularity automatically by focusing on various parts of the heartbeat. For the timing information, the difference between the preceding and following RR intervals is computed as a dynamic feature. Both the extracted morphological features and the interval difference are used by a multi-layer perceptron (MLP) to classify the ECG signals. In addition, to reduce the memory needed to store ECG data and to denoise the signals to some extent, an adaptive heartbeat normalization technique is adopted that includes amplitude unification, resolution modification, and signal difference. On the MIT-BIH arrhythmia database, the proposed classification method achieved a sensitivity Sen=93.4% and positive predictivity Ppr=94.9% in ventricular ectopic beat (VEB) detection, a sensitivity Sen=86.3% and positive predictivity Ppr=80.0% in supraventricular ectopic beat (SVEB) detection, and an overall accuracy OA=97.8% under 6-bit ECG signal resolution. These results show that the proposed method achieves heartbeat classification accuracy comparable to state-of-the-art automatic ECG classification methods even though the ECG signals are represented at a lower resolution.
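
    A small sketch of the timing feature mentioned here, under the assumption of a 360 Hz sampling rate and made-up R-peak positions (not MIT-BIH data): for each beat, the difference between the preceding and following RR intervals is computed as the dynamic feature that would be fed to the MLP alongside the CNN features.

      import numpy as np

      FS = 360.0                                              # sampling rate (Hz), assumed
      r_peaks = np.array([100, 460, 820, 1150, 1520, 1880])   # made-up R-peak sample indices

      rr = np.diff(r_peaks) / FS                              # RR intervals in seconds
      # For each interior beat, difference between the preceding and following RR intervals:
      rr_diff = rr[1:] - rr[:-1]
      print(np.round(rr, 3))       # e.g. [1.0, 1.0, 0.917, 1.028, 1.0]
      print(np.round(rr_diff, 3))  # dynamic feature fed to the MLP with the CNN features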

  • Multi-Layer Perceptron with Pulse Glial Chain

    Chihiro IKUTA  Yoko UWATE  Yoshifumi NISHIO  Guoan YANG  

     
    PAPER-Neural Networks and Bioengineering

    Vol: E99-A No:3
    Page(s): 742-755

    In addition to neurons, the brain contains several types of glial cells, such as astrocytes and oligodendrocytes. In particular, astrocytes are known to be important in higher brain function and are therefore sometimes simply called glial cells. An astrocyte can transmit signals to other astrocytes and to neurons through ion concentrations. Thus, we expect that the functions of an astrocyte can be applied to an artificial neural network. In this study, we propose a multi-layer perceptron (MLP) with a pulse glial chain. The proposed MLP contains glia (astrocytes) in its hidden layer. The glia are connected to the neurons and are excited by the outputs of those neurons. The excited glia generate pulses that affect the excitation thresholds of the neurons and of the neighboring glia. The glial network thus provides a kind of positional relationship between the neurons in the hidden layer, which can enhance the performance of MLP learning. We confirm through computer simulations that the proposed MLP has better learning performance than a conventional MLP.
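
    A toy simulation of the pulse-chain dynamics sketched in this abstract: each hidden neuron has one glial unit, a glial pulse decays over time and, after a fixed delay, excites the neighboring glia, and the current pulse value shifts the neuron's excitation threshold. The constants and the exact triggering rule are illustrative assumptions, not the paper's model.

      import numpy as np

      rng = np.random.default_rng(2)
      N = 8                                  # hidden neurons (one glial unit each)
      DECAY, DELAY, EXCITE = 0.8, 2, 0.7     # pulse decay, propagation delay, firing level

      pulse = np.zeros(N)                    # current glial pulse values
      timers = np.full(N, -1)                # countdown for neighbor-triggered pulses

      for step in range(10):
          neuron_out = rng.random(N)         # stand-in for the hidden-layer outputs
          fired = neuron_out > EXCITE
          pulse[fired] = 1.0                 # glia excited directly by their neurons
          for i in np.where(fired)[0]:       # schedule excitation of neighboring glia
              for j in (i - 1, i + 1):
                  if 0 <= j < N and timers[j] < 0:
                      timers[j] = DELAY
          pulse[timers == 0] = 1.0           # delayed neighbor pulses arrive
          timers[timers >= 0] -= 1
          thresholds = 0.5 + pulse           # pulses shift the neurons' thresholds
          pulse *= DECAY
          print(step, np.round(thresholds, 2))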

  • Sparsification and Stability of Simple Dynamic Binary Neural Networks

    Jungo MORIYASU  Toshimichi SAITO  

     
    LETTER-Nonlinear Problems

    Vol: E97-A No:4
    Page(s): 985-988

    This letter studies a simple dynamic binary neural network characterized by a signum activation function and ternary connection parameters. A simple evolutionary algorithm is presented to control the sparsity of the connections and the stability of the stored signal. As a basic example of a teacher signal, we consider a binary periodic orbit that corresponds to a control signal of AC-DC regulators. Numerical experiments show that the periodic orbit can be stored by applying correlation-based learning, and that sparsification can be effective in reinforcing the stability of the periodic orbit.
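
    A minimal sketch of the network dynamics this letter studies: a binary state vector is updated through a signum activation over ternary (-1/0/+1) connection weights. The weights here are random for illustration, whereas the letter learns them so that a target binary periodic orbit becomes a stable solution.

      import numpy as np

      rng = np.random.default_rng(3)
      N = 6
      W = rng.choice([-1, 0, 1], size=(N, N))       # ternary connections
      x = rng.choice([-1, 1], size=N)               # binary state

      def step(x, W):
          s = W @ x
          return np.where(s >= 0, 1, -1)            # signum activation (sign(0) taken as +1)

      for t in range(8):
          x = step(x, W)
          print(t, x)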

  • Dependency Chart Parsing Algorithm Based on Ternary-Span Combination

    Meixun JIN  Yong-Hun LEE  Jong-Hyeok LEE  

     
    PAPER-Natural Language Processing

    Vol: E96-D No:1
    Page(s): 93-101

    This paper presents a new span-based dependency chart parsing algorithm that models the relations between the left and right dependents of a head. Such relations cannot be modeled in existing span-based algorithms, despite being common in dependency corpora. We address this problem through ternary-span combination during subtree derivation. By modeling the relations between the left and right dependents of a head, the proposed algorithm disambiguates coordination better when the conjunction is annotated as the head of the left and right conjuncts. This leads to state-of-the-art dependency parsing performance on the Chinese data of the CoNLL shared task.

  • Statistical Mechanics of On-Line Learning Using Correlated Examples

    Kento NAKAO  Yuta NARUKAWA  Seiji MIYOSHI  

     
    LETTER

    Vol: E94-D No:10
    Page(s): 1941-1944

    We consider a model composed of nonlinear perceptrons and analytically investigate its generalization performance when using correlated examples, in the framework of on-line learning, by a statistical-mechanical method. In Hebbian and AdaTron learning, the larger the number of examples used in an update, the slower the learning. In contrast, Perceptron learning does not exhibit such behavior, and the learning actually becomes faster in some time regions.
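
    A toy simulation in the spirit of this analysis, with assumed correlation structure and sizes: a student perceptron learns a teacher perceptron on-line, and each update uses K examples correlated through a shared random component. The correlation model and parameters are illustrative, not the ones analyzed in the letter.

      import numpy as np

      rng = np.random.default_rng(4)
      N, K, RHO, STEPS = 200, 5, 0.5, 2000
      teacher = rng.normal(size=N)
      teacher /= np.linalg.norm(teacher)
      J = np.zeros(N)                                   # student weights

      for t in range(STEPS):
          common = rng.normal(size=N)                   # shared component makes examples correlated
          xs = np.sqrt(RHO) * common + np.sqrt(1 - RHO) * rng.normal(size=(K, N))
          for x in xs:
              y_teacher = np.sign(teacher @ x)
              if np.sign(J @ x) != y_teacher:           # Perceptron rule: update only on error
                  J += y_teacher * x / np.sqrt(N)

      overlap = J @ teacher / (np.linalg.norm(J) * np.linalg.norm(teacher) + 1e-12)
      print("student-teacher overlap:", round(float(overlap), 3))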

  • Multi-Layer Perceptron with Glial Network for Solving Two-Spiral Problem

    Chihiro IKUTA  Yoko UWATE  Yoshifumi NISHIO  

     
    LETTER-Nonlinear Problems

    Vol: E94-A No:9
    Page(s): 1864-1867

    In this study, we propose a multi-layer perceptron with a glial network, inspired by the features of glial cells in the brain. All glia in the proposed network generate independent oscillations, and the oscillations propagate through the glial network with attenuation. We apply the proposed network to the two-spiral problem. Computer simulations show that the proposed network achieves better performance than a conventional multi-layer perceptron.
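
    The two-spiral benchmark referred to above, with a conventional MLP as a baseline (the glial-network extension is not reproduced). The spiral parameters are a common choice assumed for illustration, not necessarily those used in the letter.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def two_spirals(n_per_class=200, noise=0.02, turns=2.0, seed=0):
          rng = np.random.default_rng(seed)
          t = np.linspace(0.1, 1.0, n_per_class) * turns * 2 * np.pi
          r = t / (turns * 2 * np.pi)
          x1 = np.c_[r * np.cos(t), r * np.sin(t)] + rng.normal(scale=noise, size=(n_per_class, 2))
          x2 = -x1 + rng.normal(scale=noise, size=(n_per_class, 2))   # spiral rotated by 180 degrees
          X = np.vstack([x1, x2])
          y = np.array([0] * n_per_class + [1] * n_per_class)
          return X, y

      X, y = two_spirals()
      clf = MLPClassifier(hidden_layer_sizes=(40, 40), max_iter=5000, random_state=0)
      clf.fit(X, y)
      print("training accuracy:", clf.score(X, y))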

  • A Theoretical Analysis of On-Line Learning Using Correlated Examples

    Chihiro SEKI  Shingo SAKURAI  Masafumi MATSUNO  Seiji MIYOSHI  

     
    PAPER-Neural Networks and Bioengineering

    Vol: E91-A No:9
    Page(s): 2663-2670

    In this paper, we analytically investigate the generalization performance of learning with correlated inputs in the framework of on-line learning, using a statistical-mechanical method. We consider a model composed of linear perceptrons with Gaussian noise. First, we analyze the case of the gradient method. We analytically clarify that the larger the correlation among the inputs, or the larger the number of inputs, the stricter the condition the learning rate must satisfy and the slower the learning. Second, we treat block orthogonal projection learning as an alternative learning rule and derive its theory. In the noiseless case, the learning speed does not depend on the correlation and is proportional to the number of inputs used in an update; it is identical to that of the gradient method with uncorrelated inputs. When there is noise, on the other hand, the larger the correlation among the inputs, the slower the learning and the larger the residual generalization error.

  • Tone Recognition of Continuous Mandarin Speech Based on Tone Nucleus Model and Neural Network

    Xiao-Dong WANG  Keikichi HIROSE  Jin-Song ZHANG  Nobuaki MINEMATSU  

     
    PAPER-Pattern Recognition

    Vol: E91-D No:6
    Page(s): 1748-1755

    A method was developed for the automatic recognition of syllable tone types in continuous Mandarin speech by integrating two techniques: tone nucleus modeling and a neural network classifier. Tone nucleus modeling regards a syllable F0 contour as consisting of three parts: an onset course, a tone nucleus, and an offset course. The two courses are transitions from and to the F0 contours of neighboring syllables, while the tone nucleus is the intrinsic part of the contour. By viewing only the tone nucleus, acoustic features that are less affected by neighboring syllables are obtained. When tone nucleus modeling is used, automatic detection of the tone nucleus becomes crucial, and an improvement was added to the original detection method. Distinctive acoustic features for tone types are not limited to F0 contours; other prosodic features, such as waveform power and syllable duration, are also useful for tone recognition. Such heterogeneous features are rather difficult to handle simultaneously in hidden Markov models (HMMs) but easy to handle in neural networks, and we adopted a multi-layer perceptron (MLP) as the neural network. Tone recognition experiments were conducted for the speaker-dependent and speaker-independent cases. To show the effect of the integration, experiments were also conducted for two baselines: an HMM classifier with tone nucleus modeling, and an MLP classifier viewing the entire syllable instead of the tone nucleus. The integrated method achieved a tone recognition rate of 87.1% in the speaker-dependent case and 80.9% in the speaker-independent case, a relative error reduction of about 10% compared to the baselines.

  • MLP/BP-Based Soft Decision Feedback Equalization with Bit-Interleaved TCM for Wireless Applications

    Terng-Ren HSU  Chien-Ching LIN  Terng-Yin HSU  Chen-Yi LEE  

     
    LETTER-Neural Networks and Bioengineering

    Vol: E90-A No:4
    Page(s): 879-884

    For more efficient data transmission, a new MLP/BP-based channel equalizer is proposed to compensate for multi-path fading in wireless applications. In this work, for better system performance, we apply a soft-output and soft-feedback structure as well as soft-decision channel decoding. Moreover, to improve the packet error rate (PER) and bit error rate (BER), we search for the optimal scaling factor of the transfer function in the output layer of the MLP/BP neural network and add small random disturbances to the training data. Compared with conventional MLP/BP-based DFEs and soft-output MLP/BP-based DFEs, the proposed MLP/BP-based soft DFEs under multi-path fading channels achieve gains of 0.6 to 3 dB at a PER of 10^-1 and 0.8 to 3.3 dB at a BER of 10^-3.

  • Geometric Properties of Quasi-Additive Learning Algorithms

    Kazushi IKEDA  

     
    PAPER-Control, Neural Networks and Learning

    Vol: E89-A No:10
    Page(s): 2812-2817

    The family of Quasi-Additive (QA) algorithms is a natural generalization of perceptron learning. It is a kind of on-line learning with two parameter vectors: one is an accumulation of input vectors, and the other is a weight vector for prediction, related to the former by a nonlinear function. We show that the vectors have a dually flat structure from the information-geometric point of view, and that this representation makes it easier to discuss the convergence properties.
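
    A sketch of the quasi-additive structure described here, with assumed toy data: an accumulation vector collects signed input vectors additively, and the prediction weights are a fixed nonlinear function of it. The identity link recovers the ordinary perceptron, while an exponential (sinh-like) link gives a Winnow-style multiplicative behavior; both the data and learning rate are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)
      N, STEPS, ETA = 50, 500, 0.1
      teacher = rng.normal(size=N)

      def run(link):
          a = np.zeros(N)                    # accumulation of input vectors
          errors = 0
          for _ in range(STEPS):
              x = rng.normal(size=N)
              y = np.sign(teacher @ x)
              w = link(a)                    # prediction weights = f(accumulation)
              if np.sign(w @ x) != y:
                  a += ETA * y * x           # additive update of the accumulation
                  errors += 1
          return errors

      print("identity link (perceptron):", run(lambda a: a))
      print("sinh-like exponential link (Winnow-style):", run(lambda a: np.exp(a) - np.exp(-a)))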

  • Single-Channel Multiple Regression for In-Car Speech Enhancement

    Weifeng LI  Katsunobu ITOU  Kazuya TAKEDA  Fumitada ITAKURA  

     
    PAPER-Speech Enhancement

    Vol: E89-D No:3
    Page(s): 1032-1039

    We address issues in improving hands-free speech enhancement and speech recognition performance in different car environments using a single distant microphone. This paper describes a new single-channel in-car speech enhancement method that estimates the log spectra of speech at a close-talking microphone by nonlinear regression on the log spectra of the noisy signal captured by a distant microphone and of the estimated noise. The proposed method provided significant overall quality improvements in our subjective evaluation of the regression-enhanced speech and performed best on most objective measures. In isolated word recognition experiments conducted in 15 real car environments, the proposed adaptive nonlinear regression approach achieves average relative word error rate (WER) reductions of 50.8% and 13.1% compared to the original noisy speech and the ETSI advanced front-end (ETSI ES 202 050), respectively.
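
    A schematic version of the regression idea, using synthetic placeholders instead of real log-magnitude STFT frames: a nonlinear regressor maps the noisy log spectrum and an estimated noise log spectrum to the clean (close-talking) log spectrum, frame by frame. scikit-learn's MLPRegressor stands in for the paper's regression model.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(6)
      BINS, FRAMES = 64, 500
      clean = rng.normal(size=(FRAMES, BINS))                 # stand-in clean log spectra
      noise = rng.normal(loc=-1.0, size=(FRAMES, BINS))       # stand-in noise log spectra
      noisy = np.logaddexp(clean, noise)                      # additive mixing in the log domain

      X = np.hstack([noisy, noise])                           # regressor input: noisy + noise estimate
      reg = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
      reg.fit(X, clean)
      clean_hat = reg.predict(X)
      print("MSE:", float(np.mean((clean_hat - clean) ** 2)))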

  • Alaryngeal Speech Enhancement Using Pattern Recognition Techniques

    Gualberto AGUILAR  Mariko NAKANO-MIYATAKE  Hector PEREZ-MEANA  

     
    LETTER-Biomedical Circuits and Systems

    Vol: E88-D No:7
    Page(s): 1618-1622

    An alaryngeal speech enhancement system is proposed to improve the intelligibility and quality of speech signals generated by an artificial larynx transducer (ALT). The proposed system identifies the voiced segments of the alaryngeal speech signal using pattern recognition methods and replaces them with the equivalent voiced segments of normal speech. Evaluation results show that the proposed system provides a fairly good improvement in the quality and intelligibility of ALT-generated speech.

  • Adaptive Nonlinear Regression Using Multiple Distributed Microphones for In-Car Speech Recognition

    Weifeng LI  Chiyomi MIYAJIMA  Takanori NISHINO  Katsunobu ITOU  Kazuya TAKEDA  Fumitada ITAKURA  

     
    PAPER-Speech Enhancement

    Vol: E88-A No:7
    Page(s): 1716-1723

    In this paper, we address issues in improving hands-free speech recognition performance in different car environments using multiple spatially distributed microphones. In previous work, we proposed multiple linear regression of the log spectra (MRLS) for estimating the log spectra of speech at a close-talking microphone. In this paper, the concept is extended to nonlinear regressions, and regressions in the cepstrum domain are also investigated. An effective algorithm is developed to adapt the regression weights automatically to different noise environments. For isolated word recognition in 15 real car environments, the proposed adaptive nonlinear regression approach achieves average relative word error rate (WER) reductions of 58.5% and 10.3% compared to the nearest distant microphone and an adaptive beamformer (generalized sidelobe canceller), respectively.
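
    A minimal version of the linear MRLS idea this abstract builds on, with synthetic numbers standing in for real log-magnitude spectra: for one frequency bin, ordinary least squares estimates the weights that map the log spectra observed at several distant microphones to the log spectrum at the close-talking microphone. The adaptive and nonlinear extensions are not shown.

      import numpy as np

      rng = np.random.default_rng(7)
      MICS, FRAMES = 4, 300
      true_w = rng.normal(size=MICS)

      mic_logspec = rng.normal(size=(FRAMES, MICS))           # one frequency bin, all distant mics
      close_talk = mic_logspec @ true_w + 0.1 * rng.normal(size=FRAMES)

      # least-squares regression weights (with a bias term)
      A = np.hstack([mic_logspec, np.ones((FRAMES, 1))])
      w_hat, *_ = np.linalg.lstsq(A, close_talk, rcond=None)
      estimate = A @ w_hat
      print("weights:", np.round(w_hat, 2))
      print("fit error:", float(np.mean((estimate - close_talk) ** 2)))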

  • A Self-Learning Analog Neural Processor

    Gian Marco BO  Daniele D. CAVIGLIA  Maurizio VALLE  

     
    PAPER-Neural Networks and Bioengineering

    Vol: E85-A No:9
    Page(s): 2149-2158

    In this paper, we present the analog architecture and implementation of a multilayer perceptron network with on-chip learning. The learning algorithm is based on backpropagation but exhibits increased capability thanks to local learning-rate management. A prototype chip (SLANP, Self-Learning Neural Processor) has been designed and fabricated in a CMOS 0.7 µm minimum-channel-length technology. We report experimental results that confirm the functionality of the chip and the soundness of the approach. The SLANP's performance compares favourably with that reported in the literature.

  • Fast and Optimal Synthesis of Binary Threshold Neural Networks

    Frank RHEE  

     
    LETTER-Fundamental Theories

    Vol: E85-B No:8
    Page(s): 1608-1613

    A new algorithm for synthesizing binary threshold neural networks (BTNNs) is proposed. A binary (Boolean) input-output mapping that can be represented by minimal sum-of-product (MSP) terms is initially obtained from training data. The BTNN is then synthesized based on an MSP term grouping method. As a result, a fast and optimal realization of a BTNN can be obtained. Examples of both feedforward and recurrent BTNN synthesis used in a parallel processing architecture are given and compared with other existing methods.
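
    A sketch of the synthesis idea, assuming a small hand-written (not algorithmically minimized) sum-of-products: each product term becomes a hidden binary threshold unit (an AND of its literals), and the output unit realizes the OR of the terms. The example function and thresholds are illustrative, not the letter's grouping method.

      import numpy as np

      # f(x1,x2,x3) = x1*x2 + (not x1)*x3, written as (literal weights, number of positive literals)
      terms = [([1, 1, 0], 2),      # x1 AND x2
               ([-1, 0, 1], 1)]     # (NOT x1) AND x3

      def threshold(z, theta):
          return (z >= theta).astype(int)

      def btnn(x):
          x = np.asarray(x)
          hidden = np.array([threshold(np.dot(w, x), k) for w, k in terms])  # one unit per product term
          return threshold(hidden.sum(), 1)                                  # OR of the product terms

      for x1 in (0, 1):
          for x2 in (0, 1):
              for x3 in (0, 1):
                  ref = (x1 and x2) or ((not x1) and x3)
                  assert btnn([x1, x2, x3]) == int(ref)
      print("network matches the Boolean function on all inputs")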

  • Associative Memories Using Interaction between Multilayer Perceptrons and Sparsely Interconnected Neural Networks

    Takeshi KAMIO  Hisato FUJISAKA  Mititada MORISUE  

     
    PAPER

    Vol: E85-A No:6
    Page(s): 1220-1228

    Associative memories composed of sparsely interconnected neural networks (SINNs) are suitable for analog hardware implementation. However, the sparsely interconnected structure also reduces the capability of SINNs as associative memories. Although this problem can be solved by increasing the number of interconnections, the hardware cost then rises rapidly. We therefore propose associative memories consisting of multilayer perceptrons (MLPs) with 3-valued weights and SINNs. Such MLPs are expected to be realizable at a lower cost than additional interconnections in SINNs and to give each neuron in the SINN global information about the input pattern, improving the storage capacity. Finally, simulations confirm that the proposed associative memories perform well.

  • A Hierarchical Classifier for Multispectral Satellite Imagery

    Abdesselam BOUZERDOUM  

     
    PAPER

    Vol: E84-C No:12
    Page(s): 1952-1958

    In this article, a hierarchical classifier is proposed for classifying the ground-cover types in a satellite image of Kangaroo Island, South Australia. The image contains seven ground-cover types, which are categorized into three groups using principal component analysis. The first group contains clouds only, the second consists of sea and cloud shadow over land, and the third contains land and three types of forest. The sea and shadow-over-land classes are classified with 99% accuracy using a network of threshold logic units. The land and forest classes are classified by multilayer perceptrons (MLPs) using texture features and intensity values; the average performance achieved by six trained MLPs is 91%. To improve the classification accuracy further, the outputs of the six MLPs were combined using several committee machines. All committee machines achieved a significant improvement in performance over the individual multilayer perceptron classifiers, with the best machine achieving over 92% correct classification.
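
    A simple committee machine in the sense used above, assuming toy data in place of the satellite imagery: several independently trained MLPs whose class-probability outputs are averaged. Averaging is only one of several possible combination rules and is not necessarily the one that performed best in the article.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.neural_network import MLPClassifier

      X, y = make_classification(n_samples=600, n_features=10, n_classes=3,
                                 n_informative=6, random_state=0)
      # six independently initialized committee members
      members = [MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=s).fit(X, y)
                 for s in range(6)]

      proba = np.mean([m.predict_proba(X) for m in members], axis=0)   # average the probabilities
      committee_pred = proba.argmax(axis=1)
      print("best single member:", max(m.score(X, y) for m in members))
      print("committee accuracy:", float((committee_pred == y).mean()))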
