
Keyword Search Result

[Keyword] boltzmann machine(12hit)

1-12hit
  • Learning Rule for a Quantum Neural Network Inspired by Hebbian Learning

    Yoshihiro OSAKABE  Shigeo SATO  Hisanao AKIMA  Mitsunaga KINJO  Masao SAKURABA  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2020/10/30
      Vol:
    E104-D No:2
      Page(s):
    237-245

    Utilizing the enormous potential of quantum computers requires new and practical quantum algorithms. Motivated by the success of machine learning, we investigate the fusion of neural and quantum computing, and propose a learning method for a quantum neural network inspired by the Hebb rule. Based on an analogy between neuron-neuron interactions and qubit-qubit interactions, the proposed quantum learning rule successfully changes the coupling strengths between qubits according to training data. To evaluate the effectiveness and practical use of the method, we apply it to the memorization process of a neuro-inspired quantum associative memory model. Our numerical simulation results indicate that the proposed quantum versions of the Hebb and anti-Hebb rules improve the learning performance. Furthermore, we confirm that the probability of retrieving a target pattern from multiple learned patterns is sufficiently high.
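
    As a rough, classical-side sketch of the analogy the paper builds on, the following Python fragment applies a Hebb-style outer-product update to a symmetric qubit-coupling matrix. The names (J, eta) and the plain outer-product form are illustrative assumptions, not the paper's exact quantum learning rule.

      import numpy as np

      def hebb_update(J, pattern, eta=0.1, anti=False):
          # One Hebb (or anti-Hebb) update of the qubit-qubit couplings.
          # J: (n, n) symmetric coupling matrix; pattern: entries in {-1, +1}.
          dJ = eta * np.outer(pattern, pattern)
          np.fill_diagonal(dJ, 0.0)          # no self-coupling
          return J - dJ if anti else J + dJ

      # Memorize two 4-bit patterns (toy example).
      J = np.zeros((4, 4))
      for p in ([1, -1, 1, -1], [1, 1, -1, -1]):
          J = hebb_update(J, np.array(p))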

  • Speech Chain VC: Linking Linguistic and Acoustic Levels via Latent Distinctive Features for RBM-Based Voice Conversion

    Takuya KISHIDA  Toru NAKASHIKA  

     
    PAPER-Speech and Hearing

      Publicized:
    2020/08/06
      Vol:
    E103-D No:11
      Page(s):
    2340-2350

    This paper proposes a voice conversion (VC) method based on a model that links linguistic and acoustic representations via latent phonological distinctive features. Our method, called speech chain VC, is inspired by the concept of the speech chain, where speech communication consists of a chain of events linking the speaker's brain with the listener's brain. We assume that speaker identity information, which appears at the acoustic level, is embedded in two steps: where phonological information is encoded into articulatory movements (linguistic to physiological) and where articulatory movements generate sound waves (physiological to acoustic). Speech chain VC represents these event links with an adaptive restricted Boltzmann machine (ARBM) that introduces phoneme labels and acoustic features as two classes of visible units, and latent phonological distinctive features associated with articulatory movements as hidden units. Subjective evaluation experiments showed that the intelligibility of the converted speech improved significantly compared with the conventional ARBM-based method. The speaker-identity conversion quality of the proposed method was comparable to that of a Gaussian mixture model (GMM)-based method. Analyses of the hidden-layer representations of the speech chain VC model support the view that some of the hidden units actually correspond to phonological distinctive features. The final part of this paper proposes approaches to one-shot VC using the speech chain VC model. Subjective evaluation experiments showed that when the target speaker is the same gender as the source speaker, the proposed methods can achieve one-shot VC from a single utterance of each of the source and target speakers.
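
    As a minimal sketch of the wiring described above, the fragment below computes the hidden-unit posterior of an RBM whose hidden layer couples to two classes of visible units, phoneme labels and acoustic features. Speaker-adaptive weights and training are omitted, and all shapes and names are illustrative assumptions.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def hidden_posterior(v_phn, v_ac, W_phn, W_ac, b_h):
          # P(h=1 | phoneme labels, acoustic features): both visible
          # classes feed the same hidden (distinctive-feature) units.
          return sigmoid(v_phn @ W_phn + v_ac @ W_ac + b_h)

      rng = np.random.default_rng(0)
      W_phn = rng.normal(0.0, 0.01, (40, 16))   # 40 phoneme-label units
      W_ac = rng.normal(0.0, 0.01, (24, 16))    # 24 acoustic units
      v_phn = np.eye(40)[3]                     # one-hot phoneme label
      v_ac = rng.normal(size=24)                # one acoustic frame
      h = hidden_posterior(v_phn, v_ac, W_phn, W_ac, np.zeros(16))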

  • Forecasting Service Performance on the Basis of Temporal Information by the Conditional Restricted Boltzmann Machine

    Jiali YOU  Hanxing XUE  Yu ZHUO  Xin ZHANG  Jinlin WANG  

     
    PAPER-Network

      Publicized:
    2017/11/10
      Vol:
    E101-B No:5
      Page(s):
    1210-1221

    Predicting the service performance of Internet applications is important in service selection, especially for video services. To design a predictor that forecasts video service performance in third-party applications, two major service providers in China, Iqiyi and Letv, are monitored and analyzed. The study highlights that the performance measured over the observation period is time-series data with strong autocorrelation, which means it is predictable. To exploit this temporal information and map the measured data to a proper feature space, the authors propose a predictor based on a Conditional Restricted Boltzmann Machine (CRBM), which can capture the latent temporal relationships in the historical information. In addition, measured data from different sources are combined to enhance the training process, which enlarges the training set and mitigates over-fitting. Experiments show that combining the measured results from different resolutions of a video raises prediction performance, and that the CRBM algorithm has better prediction ability and more stable performance than the baseline algorithms.
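
    The fragment below sketches the conditioning mechanism of a generic CRBM: past measurements shift the bias of the current hidden units through autoregressive weights, which is how such a model captures temporal relationships in historical data. The shapes and names are illustrative, not the paper's notation.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def crbm_hidden(v_t, history, W, A, b_h):
          # P(h=1 | v_t, history): the history vector (concatenated past
          # measurements) turns the static bias b_h into a dynamic bias.
          dynamic_bias = b_h + history @ A
          return sigmoid(v_t @ W + dynamic_bias)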

  • Deep Nonlinear Metric Learning for Speaker Verification in the I-Vector Space

    Yong FENG  Qingyu XIONG  Weiren SHI  

     
    LETTER-Speech and Hearing

      Publicized:
    2016/10/04
      Vol:
    E100-D No:1
      Page(s):
    215-219

    Speaker verification is the task of determining whether two utterances come from the same person. Once the utterances are represented in the i-vector space, the crucial problem reduces to computing the similarity of two i-vectors. Metric learning provides a viable solution to this problem, but the many metric learning algorithms proposed so far are usually limited to learning a linear transformation. In this paper, we propose a nonlinear metric learning method that learns an explicit mapping from the original space to an optimal subspace using a deep Restricted Boltzmann Machine network. The proposed method is evaluated on the NIST SRE 2008 dataset, and the results show that its deep architecture yields superior performance compared with several state-of-the-art methods.
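
    A minimal sketch of the verification pipeline implied above: two i-vectors are passed through a stack of pre-trained RBM layers and then scored in the learned subspace. The cosine score and threshold are illustrative assumptions; the paper's exact scoring rule may differ.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def deep_map(x, layers):
          # Feed an i-vector through stacked RBM layers [(W, b), ...].
          for W, b in layers:
              x = sigmoid(x @ W + b)
          return x

      def same_speaker(x1, x2, layers, threshold=0.5):
          z1, z2 = deep_map(x1, layers), deep_map(x2, layers)
          score = z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2))
          return score >= threshold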

  • Voice Conversion Based on Speaker-Dependent Restricted Boltzmann Machines

    Toru NAKASHIKA  Tetsuya TAKIGUCHI  Yasuo ARIKI  

     
    PAPER-Voice Conversion and Speech Enhancement

      Vol:
    E97-D No:6
      Page(s):
    1403-1410

    This paper presents a voice conversion technique that uses speaker-dependent Restricted Boltzmann Machines (RBMs) to build high-order eigenspaces of the source/target speakers, in which it is easier to convert the source speech to the target speech than in the traditional cepstrum space. We build a deep conversion architecture that concatenates the two speaker-dependent RBMs with neural networks, expecting them to automatically discover abstractions that express the original input features. Under this concept, if we train an RBM using only the speech of an individual speaker, covering various phonemes while the speaker individuality stays unchanged, the output features of the hidden layer can be considered to carry less phonemic information and relatively more speaker individuality than the original acoustic features. Training one RBM for the source speaker and one for the target speaker, we can then connect and convert the speaker-individuality abstractions using Neural Networks (NNs). The converted abstraction of the source speaker is then back-projected into the acoustic space (e.g., MFCC) using the RBM of the target speaker. We conducted speaker-voice conversion experiments and confirmed the efficacy of our method with respect to subjective and objective criteria, comparing it with the conventional Gaussian Mixture Model-based method and an ordinary NN.
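
    A per-frame sketch of the conversion path described above (source RBM encodes, a neural network maps the speaker abstractions, the target RBM projects back to the acoustic space), using deterministic mean-field passes. All names, and the use of a single sigmoid NN layer, are illustrative assumptions.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def convert_frame(x_src, rbm_src, nn_W, rbm_tgt):
          W_s, b_s = rbm_src                    # source-speaker RBM
          W_t, a_t = rbm_tgt                    # target-speaker RBM
          h_src = sigmoid(x_src @ W_s + b_s)    # speaker abstraction (encode)
          h_tgt = sigmoid(h_src @ nn_W)         # cross-speaker NN mapping
          return h_tgt @ W_t.T + a_t            # back to acoustic space (MFCC)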

  • Boltzmann Machines with Identified States

    Masaki KOBAYASHI  

     
    LETTER-Nonlinear Problems

      Vol:
    E91-A No:3
      Page(s):
    887-890

    Learning for Boltzmann machines deals with each state individually. If the given data are categorized, the probabilities nevertheless have to be distributed to each state, not to each category. We propose Boltzmann machines that identify the states belonging to the same category. Boltzmann machines with hidden units are a special case. Boltzmann learning and the EM algorithm are effective learning methods for Boltzmann machines. We derive Boltzmann learning and the EM algorithm for the proposed models.
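
    One way to read "identified states" is that the machine assigns probability to a category by summing the Boltzmann probabilities of all states identified with it. The exhaustive-enumeration sketch below makes that reading concrete for a small network; it is an interpretation for illustration, not the paper's algorithm.

      import numpy as np
      from itertools import product

      def category_probability(W, b, category, T=1.0):
          # P(category) = sum of Boltzmann probabilities of its member
          # states; category is a set of 0/1 state tuples. Exhaustive
          # enumeration, so only practical for small networks.
          states = list(product((0, 1), repeat=len(b)))
          E = [-0.5 * np.array(s) @ W @ np.array(s) - b @ np.array(s)
               for s in states]
          p = np.exp(-np.array(E) / T)
          p /= p.sum()
          return sum(pi for pi, s in zip(p, states) if s in category)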

  • A Boltzmann Machine with Non-rejective Move

    Hongbing ZHU  Ningping SUN  Mamoru SASAKI  Kei EGUCHI  Toru TABATA  Fuji REN  

     
    PAPER

      Vol:
    E85-A No:6
      Page(s):
    1229-1235

    Enhancing the processing speed of Boltzmann machines has long been an open and significant topic for real-time applications. One effective approach to this problem is to increase the probability of neuron state moves. In this paper, a novel method, called the rejectionless method, is proposed and introduced into Boltzmann machines for this purpose. A feature of this method is that it is independent of the ratio of neurons' state moves. Its efficiency as a speed-up technique is confirmed by experiments on the TSP and a graph problem.
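
    The following sketch shows a generic rejectionless (kinetic-Monte-Carlo-style) update for a stochastic binary network: rather than proposing a random flip that may be rejected, the neuron to flip is drawn with probability proportional to its flip probability, so every step moves the state. This illustrates the general idea only; it is not necessarily the paper's exact scheme.

      import numpy as np

      def rejectionless_step(s, W, b, T, rng):
          # Energy change of flipping each unit (0/1 states, zero diagonal).
          dE = (2 * s - 1) * (W @ s + b)
          flip_p = 1.0 / (1.0 + np.exp(dE / T))   # per-unit flip probability
          i = rng.choice(len(s), p=flip_p / flip_p.sum())
          s = s.copy()
          s[i] = 1 - s[i]                          # the move is never rejected
          return s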

  • A Pipeline Structure for the Sequential Boltzmann Machine

    Hongbing ZHU  Mamoru SASAKI  Takahiro INOUE  

     
    PAPER

      Vol:
    E82-A No:6
      Page(s):
    920-926

    In this paper, by making good use of the parallel-transit-evaluation algorithm and the sparsity of the connections between neurons, a pipeline structure is successfully introduced into the sequential Boltzmann machine processor. The novel structure is nine times faster than the previous one, with only a 12% increase in hardware resources for networks of up to 10,000 neurons. The performance is confirmed by designing the processor with 1.2 µm CMOS process standard cells and analyzing the probability of state change.

  • Eliciting the Potential Functions of Single-Electron Circuits

    Masamichi AKAZAWA  Yoshihito AMEMIYA  

     
    INVITED PAPER

      Vol:
    E80-C No:7
      Page(s):
    849-858

    This paper describes a guiding principle for designing functional single-electron tunneling (SET) circuits, that is, a way to elicit the potential functions of a given SET circuit by using its stability diagram as a guiding tool. A stability diagram is a map that depicts the stable regions of a SET circuit in the circuit's variable coordinates. By scrutinizing the diagram, we can infer all the potential functions that can be obtained from a circuit configuration. As an example, we take up a well-known SET-inverter circuit and uncover its latent functions by studying the circuit configuration on the basis of its stability diagram. We can produce various functions, e.g., step-inverter, Schmitt-trigger, memory-cell, literal, and stochastic-neuron functions. The last function makes good use of the inherent stochastic nature of single-electron tunneling and can be applied to Boltzmann-machine neural network systems.

  • Deterministic Boltzmann Machine Learning Improved for Analog LSI Implementation

    Takashi MORIE  Yoshihito AMEMIYA  

     
    PAPER-Neural Networks and Chips

      Vol:
    E76-C No:7
      Page(s):
    1167-1173

    This paper describes the learning performance of the deterministic Boltzmann machine (DBM), a promising neural network model suitable for analog LSI implementation. (i) A new learning procedure suitable for LSI implementation is proposed. It is a fully-on-line procedure in which different sample patterns are presented in consecutive clamped and free phases and the weights are modified in each phase. The procedure requires no extra memory for the learning operation and reduces the chip area and power consumption for learning by 50 percent. (ii) Learning in a layer-type DBM with one output unit has characteristic local minima that reduce the effective number of available hidden units. Effective methods for avoiding these local minima are proposed. (iii) Although DBM learning is not suitable for mapping problems with analog target values, it is useful for analog data discrimination problems.
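
    A minimal sketch of the fully-on-line procedure in (i), with a toy mean-field settling routine standing in for the DBM: a different pattern is presented in each phase, and the weights are modified immediately after each phase, so no memory is needed to hold correlations between phases. The unit indexing and the settling routine are illustrative assumptions.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def settle(W, idx, vals, n, steps=50):
          # Mean-field settling with the units in idx clamped to vals.
          s = np.full(n, 0.5)
          s[idx] = vals
          for _ in range(steps):
              s = sigmoid(W @ s)
              s[idx] = vals
          return s

      def online_step(W, x_c, t_c, x_f, in_idx, out_idx, eta=0.01):
          n = W.shape[0]
          # Clamped phase: inputs and targets clamped, Hebbian update.
          s = settle(W, np.r_[in_idx, out_idx], np.r_[x_c, t_c], n)
          W = W + eta * np.outer(s, s)
          # Free phase: a *different* sample, inputs only, anti-Hebbian.
          s = settle(W, in_idx, x_f, n)
          return W - eta * np.outer(s, s)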

  • Boltzmann Machine Processor Using Single-Bit Operation

    Mamoru SASAKI  Shuichi KANEDA  Fumio UENO  Takahiro INOUE  Yoshiki KITAMURA  

     
    PAPER-Nonlinear Circuits and Neural Nets

      Vol:
    E76-A No:6
      Page(s):
    878-885

    This paper describes a single-bit parallel processor specialized for the Boltzmann Machine. The processor has an SIMD (Single Instruction Multiple Data stream) parallel architecture, and every processing element (PE) has a single-bit ALU and a local memory storing the connection weights between neurons. Features of the processor are large-scale parallel processing by many simple single-bit PEs and effective expansion realized by connecting multiple chips with simple bus lines. Moreover, the processing speed can be made independent of the number of neurons. We designed the PE using 1.2 µm CMOS process standard cells and confirmed its high performance through CAD simulations.

  • Analog VLSI Implementation of Adaptive Algorithms by an Extended Hebbian Synapse Circuit

    Takashi MORIE  Osamu FUJITA  Yoshihito AMEMIYA  

     
    PAPER

      Vol:
    E75-C No:3
      Page(s):
    303-311

    First, a number of issues pertaining to analog VLSI implementation of the Backpropagation (BP) and Deterministic Boltzmann Machine (DBM) learning algorithms are clarified. According to results from software simulation, a mismatch between the activation function and its derivative, when the two are generated by independent circuits, degrades BP learning performance. The performance can be improved, however, by adjusting the gain of the activation function used to obtain the derivative, irrespective of the original activation function. Calculation errors embedded in the circuits also degrade learning performance: BP learning is sensitive to offset errors in the multiplications of the learning process, and DBM learning is sensitive to asymmetry between the weight-increment and weight-decrement processes. Next, an analog VLSI architecture that implements the algorithms using common building-block circuits is proposed. Evaluation results for test chips confirm that synaptic weights can be updated at up to 1 MHz and that a resolution exceeding 14 bits can be attained. The test chips successfully perform XOR learning with each algorithm.
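
    The sensitivity claims in the first part can be illustrated with a toy model of an analog multiplier: an input-independent offset added inside the multiply turns into a systematic drift of the BP weight update, while for DBM learning an asymmetric gain between the increment and decrement paths biases the two-phase update. The names and the error model are illustrative assumptions, not the chip's circuit behavior.

      def bp_update(w, x, delta, eta=0.05, offset=0.0):
          # Ideal BP when offset == 0; an analog multiplier offset adds a
          # constant eta*offset drift to every weight update.
          return w + eta * (delta * x + offset)

      def dbm_update(w, hebb, anti_hebb, eta=0.05, gain_dec=1.0):
          # gain_dec != 1.0 models asymmetry between the weight-increment
          # (clamped phase) and weight-decrement (free phase) circuit paths.
          return w + eta * (hebb - gain_dec * anti_hebb)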