
Keyword Search Result

[Keyword] multi-layer neural networks (3 hits)

1-3 of 3 hits
  • Empirical Evaluation and Optimization of Hardware-Trojan Classification for Gate-Level Netlists Based on Multi-Layer Neural Networks

    Kento HASEGAWA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    LETTER
    Vol: E101-A No:12, Page(s): 2320-2326

    Recently, it has been reported that malicious third-party IC vendors often insert hardware Trojans into their products. In the IC design step in particular, malicious third-party vendors can easily insert hardware Trojans into their products, and thus we have to detect the Trojans efficiently. In this paper, we propose a machine-learning-based hardware-Trojan detection method for gate-level netlists using multi-layer neural networks. First, we extract 11 Trojan-net feature values for each net in a netlist. We then classify the nets in an unknown netlist into a set of Trojan nets and a set of normal nets using multi-layer neural networks. By experimentally optimizing the structure of the multi-layer neural networks, we obtain an average true positive rate of 84.8% and an average true negative rate of 70.1%, with a 100% true positive rate on some of the benchmarks, which outperforms the existing methods in most cases.
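
    As a rough illustration of this classification setup (not the authors' implementation), the following sketch trains a small multi-layer neural network on 11 feature values per net and reports the true positive and true negative rates; the features, labels, and layer sizes below are synthetic stand-ins.

    ```python
    # Hedged sketch only: the paper's 11 Trojan-net feature definitions,
    # benchmark netlists, and optimized network structure are not reproduced
    # here; all data, labels, and layer sizes are synthetic stand-ins.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # One row of 11 feature values per net; label 1 = Trojan net, 0 = normal net.
    X = rng.random((1000, 11))
    y = (X[:, 0] + X[:, 3] > 1.2).astype(int)  # synthetic labels for the demo

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Multi-layer neural network classifier; the hidden-layer sizes are
    # arbitrary here, whereas the paper optimizes the structure experimentally.
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)

    pred = clf.predict(X_test)
    tpr = ((pred == 1) & (y_test == 1)).sum() / max((y_test == 1).sum(), 1)
    tnr = ((pred == 0) & (y_test == 0)).sum() / max((y_test == 0).sum(), 1)
    print(f"true positive rate: {tpr:.3f}, true negative rate: {tnr:.3f}")
    ```

    On real netlist data, feature scaling and the experimental tuning of the hidden-layer sizes that the paper describes would be the first knobs to turn.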

  • Introduction of Orthonormal Transform into Neural Filter for Accelerating Convergence Speed

    Isao NAKANISHI  Yoshio ITOH  Yutaka FUKUI  

     
    LETTER
    Vol: E83-A No:2, Page(s): 367-370

    As a nonlinear adaptive filter, the neural filter is used to process nonlinear signals and/or systems. However, the neural filter requires a large number of iterations to converge. This letter presents a new structure for the multi-layer neural filter in which an orthonormal transform is introduced into all inter-layers to accelerate convergence. The proposed structure is called the transform-domain neural filter (TDNF) for convenience. The weights are basically updated by the Back-Propagation (BP) algorithm, but the algorithm must be modified since the error back-propagates through the orthonormal transform. Moreover, a variable step size, normalized by the transformed signal power, is introduced into the BP algorithm to exploit the orthonormal transform. Computer simulations confirm that introducing the orthonormal transform effectively speeds up convergence of the neural filter.
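
    The sketch below illustrates the transform-domain idea under simplifying assumptions rather than reproducing the paper's TDNF: a DCT stands in for the orthonormal transform and is applied only to the input tap vector (the paper inserts the transform at every inter-layer), and the BP step size is normalized by a running per-bin estimate of the transformed signal power. All sizes and signals are hypothetical.

    ```python
    # Simplified sketch, not the paper's exact TDNF: the DCT is applied only
    # to the input taps, and the input-layer BP step is normalized by a
    # running per-bin power estimate of the transformed signal.
    import numpy as np
    from scipy.fft import dct

    N, H = 16, 8                              # tap length, hidden neurons (arbitrary)
    rng = np.random.default_rng(1)
    W1 = rng.normal(scale=0.1, size=(H, N))   # input -> hidden weights
    w2 = rng.normal(scale=0.1, size=H)        # hidden -> output weights
    power = np.ones(N)                        # per-bin transformed-power estimate
    mu, beta = 0.1, 0.9                       # base step size, power smoothing

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    x = rng.normal(size=4000)                                 # input signal
    d = np.tanh(np.convolve(x, np.ones(4) / 4, mode="same"))  # toy nonlinear target

    errs = []
    for n in range(N, len(x)):
        u = dct(x[n - N:n][::-1], norm="ortho")     # orthonormal transform of taps
        power = beta * power + (1 - beta) * u ** 2  # update per-bin power
        h = sigmoid(W1 @ u)
        e = d[n] - w2 @ h
        # BP updates; the input-layer step is normalized by the bin powers
        delta = e * w2 * h * (1 - h)
        w2 = w2 + mu * e * h
        W1 = W1 + mu * np.outer(delta, u / power)
        errs.append(e ** 2)
    print("mean squared error over last 500 samples:", np.mean(errs[-500:]))
    ```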

  • A Cascade Form Predictor of Neural and FIR Filters and Its Minimum Size Estimation Based on Nonlinearity Analysis of Time Series

    Ashraf A. M. KHALAF  Kenji NAKAYAMA  

     
    PAPER
    Vol: E81-A No:3, Page(s): 364-373

    Time series prediction is an important technology in a wide variety of fields. Actual time series contain both linear and nonlinear properties, and the amplitude of the time series to be predicted is usually continuous-valued. For these reasons, we combine nonlinear and linear predictors in a cascade form. The nonlinear prediction problem is reduced to pattern classification: a set of past samples x(n-1),...,x(n-N) is transformed into an output that serves as the prediction of the next sample x(n). We therefore employ a multi-layer neural network with a sigmoidal hidden layer and a single linear output neuron for the nonlinear prediction; it is called the Nonlinear Sub-Predictor (NSP). The NSP is trained by a supervised learning algorithm using the sample x(n) as the target. However, it is rather difficult for the NSP to generate continuous amplitudes and to predict the linear properties, so we place a linear predictor after the NSP. An FIR filter is used for this purpose and is called the Linear Sub-Predictor (LSP). The LSP is likewise trained by a supervised learning algorithm using x(n) as the target. In order to estimate the minimum size of the proposed predictor, we analyze the nonlinearity of the time series of interest. Prediction amounts to mapping a set of past samples onto the next sample, and the multi-layer neural network is well suited to this kind of pattern mapping. Still, difficult mappings may exist when several sets of very similar patterns must be mapped onto very different samples. The degree of difficulty of the mapping is closely related to the nonlinearity, which determines the number of past samples needed for prediction: a difficult mapping requires a large number of past samples. Computer simulations using sunspot data and artificially generated discrete-amplitude data demonstrate the efficiency of the proposed predictor and the nonlinearity analysis.
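
    A minimal sketch of the cascade idea, assuming a toy series and arbitrary sizes (this is not the authors' code): a multi-layer network with a sigmoidal hidden layer and a linear output neuron acts as the NSP, trained with x(n) as the target, and an FIR linear sub-predictor is then fit on the NSP output sequence, also targeting x(n).

    ```python
    # Illustrative sketch of the cascade (not the authors' code): the sizes
    # N and M, the hidden-layer width, and the toy series are all assumptions.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.linear_model import LinearRegression

    t = np.arange(2000)
    x = np.sin(0.07 * t) + 0.3 * np.sin(0.23 * t) ** 2   # toy time series

    N = 8   # past samples x(n-1),...,x(n-N) used to predict x(n)
    X = np.array([x[n - N:n] for n in range(N, len(x))])
    y = x[N:]

    # NSP: multi-layer network, sigmoidal hidden layer, linear output neuron,
    # trained with x(n) as the target.
    nsp = MLPRegressor(hidden_layer_sizes=(16,), activation="logistic",
                       max_iter=2000, random_state=0).fit(X, y)
    z = nsp.predict(X)   # NSP output sequence

    # LSP: FIR filter (fit here as a linear regression over the M most
    # recent NSP outputs), also trained with x(n) as the target.
    M = 8
    Z = np.array([z[n - M:n] for n in range(M, len(z))])
    lsp = LinearRegression().fit(Z, y[M:])

    print("cascade MSE:", np.mean((y[M:] - lsp.predict(Z)) ** 2))
    ```

    The ordering reflects the abstract's reasoning: the NSP handles the hard nonlinear pattern mapping, and the linear stage smooths its output into continuous amplitudes and captures the remaining linear structure.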