
Keyword Search Results

[Keyword] deep neural networks (DNNs) (3 hits)

Results 1-3 of 3
  • A COM Based High Speed Serial Link Optimization Using Machine Learning Open Access

    Yan WANG  Qingsheng HU  

     
    PAPER

    Publicized: 2022/05/09
    Vol: E105-C No:11
    Page(s): 684-691

    This paper presents a channel operating margin (COM) based high-speed serial link optimization using machine learning (ML). COM, a figure of merit proposed for evaluating serial links, is calculated first, and during the calculation several important equalization parameters corresponding to the best configuration are extracted for use in ML modeling of the serial link. A deep neural network containing hidden layers is then investigated to model the whole serial-link equalization, including the transmitter feed-forward equalizer (FFE), the receiver continuous-time linear equalizer (CTLE), and the decision feedback equalizer (DFE). By training, validating, and testing a large number of samples that meet the COM specification of 400GAUI-8 C2C, an effective ML model is generated whose maximum relative error is only 0.1 compared with the computed results. Finally, three link configurations are discussed from the viewpoint of the tradeoff between link performance and cost, illustrating that the COM-based ML modeling method can be applied to advanced serial-link design for NRZ, PAM4, or even higher-level pulse-amplitude-modulation signaling.
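
    As a rough illustration of such a surrogate model, the PyTorch sketch below maps a vector of equalization parameters (FFE taps, CTLE gain, DFE taps, and the like) to a predicted COM value; the layer sizes, the 12-dimensional input, and the random placeholder data are assumptions for illustration, not the authors' actual architecture or dataset.

      # Sketch of the kind of DNN surrogate described above (PyTorch).
      import torch
      import torch.nn as nn

      class LinkEqualizationModel(nn.Module):
          """Maps equalization parameters (e.g., FFE taps, CTLE gain,
          DFE taps) to a predicted COM value; sizes are assumptions."""
          def __init__(self, n_features: int = 12, hidden: int = 64):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(n_features, hidden), nn.ReLU(),
                  nn.Linear(hidden, hidden), nn.ReLU(),
                  nn.Linear(hidden, 1),   # predicted COM (dB)
              )

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              return self.net(x)

      # Placeholder data; the paper trains on samples extracted during the
      # COM calculation for the 400GAUI-8 C2C specification.
      model = LinkEqualizationModel()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      x = torch.randn(1024, 12)    # equalization-parameter vectors
      y = torch.randn(1024, 1)     # corresponding computed COM values
      for epoch in range(100):
          opt.zero_grad()
          loss = nn.functional.mse_loss(model(x), y)
          loss.backward()
          opt.step()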

  • A Low-Cost Training Method of ReRAM Inference Accelerator Chips for Binarized Neural Networks to Recover Accuracy Degradation due to Statistical Variabilities

    Zian CHEN  Takashi OHSAWA  

     
    PAPER-Integrated Electronics

    Publicized: 2022/01/31
    Vol: E105-C No:8
    Page(s): 375-384

    A new software-based in-situ training (SBIST) method is proposed for achieving high accuracy in binarized neural network inference accelerator chips: offsets measured in the sense amplifiers (activation binarizers) are transformed into biases in the training software. To expedite this per-chip training, the initial weight values are taken from a common forming training process conducted in advance using the offset fluctuation distribution averaged over the fabrication line. SPICE-simulated inference results for the accelerator predict that the accuracy recovers to above 90% after only a few epochs of the individual training, even when the amplifier offset is as large as 40 mV.
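
    The offset-to-bias idea can be illustrated with a short PyTorch sketch in which each measured sense-amplifier offset shifts the binarization threshold of its neuron during software training; the straight-through gradient estimator, the tensor shapes, and the offset scaling below are illustrative assumptions, not the paper's exact formulation.

      # Sketch of folding measured binarizer offsets into training (PyTorch).
      import torch
      import torch.nn as nn

      class BinarizeWithOffset(torch.autograd.Function):
          """Sign activation whose threshold is shifted by a fixed offset."""
          @staticmethod
          def forward(ctx, x, offset):
              ctx.save_for_backward(x)
              # The chip's sense amplifier flips at `offset` rather than at 0,
              # so the software model binarizes at the same shifted threshold.
              return torch.where(x - offset >= 0,
                                 torch.ones_like(x), -torch.ones_like(x))

          @staticmethod
          def backward(ctx, grad_out):
              (x,) = ctx.saved_tensors
              # Straight-through estimator, clipped to |x| <= 1 (an assumption).
              return grad_out * (x.abs() <= 1).float(), None

      class BNNLayer(nn.Module):
          def __init__(self, n_in, n_out, measured_offsets):
              super().__init__()
              self.fc = nn.Linear(n_in, n_out, bias=False)
              # Per-chip measured offsets (e.g., up to ~40 mV, scaled to
              # model units) are held fixed; only the weights are trained.
              self.register_buffer("offset", measured_offsets)

          def forward(self, x):
              return BinarizeWithOffset.apply(self.fc(x), self.offset)

      layer = BNNLayer(256, 128, measured_offsets=0.04 * torch.randn(128))
      out = layer(torch.randn(32, 256))   # activations in {-1, +1}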

  • Supervised Denoising Pre-Training for Robust ASR with DNN-HMM

    Shin Jae KANG  Kang Hyun LEE  Nam Soo KIM  

     
    LETTER-Speech and Hearing

    Publicized: 2015/09/07
    Vol: E98-D No:12
    Page(s): 2345-2348

    In this letter, we propose a novel supervised pre-training technique for deep neural network (DNN)-hidden Markov model (HMM) systems to achieve robust speech recognition in adverse environments. The aim of the proposed approach is to initialize the DNN parameters such that they yield abstract features that are robust to acoustic environment variations. To achieve this, we first derive the abstract features from an early fine-tuned DNN model trained on a clean speech database. Using the derived abstract features as target values, the standard error back-propagation algorithm with stochastic gradient descent is performed to estimate the initial parameters of the DNN. The performance of the proposed algorithm was evaluated on the Aurora-4 database, and better results were observed compared with a number of conventional pre-training methods.
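
    A minimal PyTorch sketch of this pre-training step is given below: a frozen clean-trained DNN provides the abstract-feature targets, and a second DNN fed the corresponding noisy features is regressed onto them; the dimensions, sigmoid activations, and additive-noise placeholder are assumptions for illustration, not the authors' exact recipe.

      # Sketch of the supervised denoising pre-training idea (PyTorch).
      import torch
      import torch.nn as nn

      feat_dim, hid, n_layers = 40, 512, 4   # assumed dimensions

      def make_dnn():
          layers, d = [], feat_dim
          for _ in range(n_layers):
              layers += [nn.Linear(d, hid), nn.Sigmoid()]
              d = hid
          return nn.Sequential(*layers)

      clean_dnn = make_dnn()   # stands in for the early fine-tuned clean model
      for p in clean_dnn.parameters():
          p.requires_grad_(False)

      noisy_dnn = make_dnn()   # network being initialized by pre-training
      opt = torch.optim.SGD(noisy_dnn.parameters(), lr=0.1)

      clean_feats = torch.randn(256, feat_dim)   # placeholder parallel data
      noisy_feats = clean_feats + 0.3 * torch.randn_like(clean_feats)

      targets = clean_dnn(clean_feats)   # "abstract features" used as targets
      for step in range(100):
          pred = noisy_dnn(noisy_feats)
          loss = nn.functional.mse_loss(pred, targets)
          opt.zero_grad()
          loss.backward()
          opt.step()
      # noisy_dnn's weights now serve as the DNN-HMM initialization before
      # the usual supervised fine-tuning.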