
Author Search Result

[Author] Benhui CHEN (4 hits)

  • Authors' Reply to the Comments by Kamata et al.

    Bo ZHOU  Benhui CHEN  Jinglu HU  

     
    WRITTEN DISCUSSION

    Publicized: 2023/05/08
    Vol: E106-A No:11
    Page(s): 1446-1449

    We thank Kamata et al. (2023) [1] for their interest in our work [2] and for providing an explanation of the quasi-linear kernel from the viewpoint of multiple kernel learning. In this letter, we first give a summary of the quasi-linear SVM, then discuss the novelty of quasi-linear kernels relative to multiple kernel learning, and finally explain the contributions of our work [2].

  • An Adaptive Niching EDA with Balance Searching Based on Clustering Analysis

    Benhui CHEN  Jinglu HU  

     
    PAPER-VLSI Design Technology and CAD

    Vol: E93-A No:10
    Page(s): 1792-1799

    For optimization problems with irregular and complex multimodal landscapes, Estimation of Distribution Algorithms (EDAs) suffer from premature convergence, as other evolutionary algorithms do. In this paper, we propose an adaptive niching EDA based on Affinity Propagation (AP) clustering analysis. AP clustering is used to adaptively partition the niches and to mine search information from the evolution process, and the obtained information is then used to improve EDA performance through a balanced niching search strategy. Two categories of optimization problems are used to evaluate the proposed adaptive niching EDA: three benchmark multimodal function optimization problems, solved by a continuous EDA based on a single Gaussian probabilistic model, and a complicated real-world discrete optimization problem, HP-model protein folding, solved by an EDA based on a k-order Markov probabilistic model. Simulation results show that the proposed adaptive niching EDA is an efficient method.
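    To make the approach concrete, the following is a minimal Python sketch of one generation of such a niching EDA, using scikit-learn's AffinityPropagation for the niche partitioning and a single Gaussian model per niche, as in the continuous case above. Truncation selection within each niche and a size-proportional sampling budget are our stand-ins: the abstract does not spell out the balance niching search strategy, and the function name niching_eda_generation is hypothetical.

    ```python
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    def niching_eda_generation(population, fitness, n_offspring, rng):
        """One generation of a niching EDA: AP clustering partitions the
        population into niches, a single Gaussian is estimated per niche,
        and offspring are sampled from each local model."""
        labels = AffinityPropagation(random_state=0).fit(population).labels_
        offspring = []
        for k in np.unique(labels):
            idx = np.where(labels == k)[0]
            if len(idx) < 2:
                continue  # too few points to estimate a covariance
            # Truncation selection inside the niche (assumed; not specified
            # in the abstract). Lower fitness = better, i.e. minimization.
            best = idx[np.argsort(fitness[idx])[: max(2, len(idx) // 2)]]
            niche = population[best]
            mu = niche.mean(axis=0)
            cov = np.cov(niche, rowvar=False) + 1e-6 * np.eye(population.shape[1])
            # Sampling budget proportional to niche size (our stand-in for
            # the paper's balance niching search strategy).
            m = max(2, n_offspring * len(idx) // len(population))
            offspring.append(rng.multivariate_normal(mu, cov, size=m))
        return np.vstack(offspring)

    # usage: one step on a toy 2-D problem
    rng = np.random.default_rng(0)
    pop = rng.uniform(-5, 5, size=(60, 2))
    fit = (pop**2).sum(axis=1)  # sphere function, to be minimized
    new_pop = niching_eda_generation(pop, fit, n_offspring=60, rng=rng)
    ```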

  • A Deep Neural Network Based Quasi-Linear Kernel for Support Vector Machines

    Weite LI  Bo ZHOU  Benhui CHEN  Jinglu HU  

     
    PAPER-Neural Networks and Bioengineering

    Vol: E99-A No:12
    Page(s): 2558-2565

    This paper proposes a deep quasi-linear kernel for support vector machines (SVMs). The deep quasi-linear kernel is constructed from a pre-trained deep neural network. To this end, a multilayer gated bilinear classifier is first designed to mimic the functionality of the pre-trained deep neural network, with its gate control signals generated by that network. A deep quasi-linear kernel is then derived by applying an SVM formulation to the multilayer gated bilinear classifier. In this way, the parameters of the multilayer gated bilinear classifier, a set of duplicated but independent parameters of the pre-trained deep neural network, can be further optimized implicitly through SVM optimization. Experimental results on different datasets show that SVMs with the proposed deep quasi-linear kernel can take advantage of pre-trained deep neural networks and outperform SVMs with RBF kernels.
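    As a rough illustration of the idea, the sketch below builds a kernel of the form K(x, z) = (g(x) . g(z)) (x . z + 1), where the gate signals g(x) are the ReLU activation patterns of one frozen, pre-trained hidden layer. This single-layer, binary-gate version is our simplified reading of the paper's multilayer gated bilinear construction, and the function names are hypothetical; random weights stand in for a real pre-trained network so the snippet runs on its own.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def gates(X, W, b):
        """Gate control signals from a pre-trained hidden layer: 1 where
        the ReLU unit is active, 0 elsewhere. W, b are the frozen weights
        of that layer (our simplified single-layer reading of the paper's
        multilayer construction)."""
        return (X @ W.T + b > 0).astype(float)

    def deep_quasi_linear_kernel(X1, X2, W, b):
        """K(x, z) = (g(x) . g(z)) * (x . z + 1): the product of two PSD
        kernels, hence itself PSD. Points activating the same units of
        the pre-trained network are compared by an affine linear kernel."""
        G1, G2 = gates(X1, W, b), gates(X2, W, b)
        return (G1 @ G2.T) * (X1 @ X2.T + 1.0)

    # usage: W, b would come from a pre-trained network's hidden layer;
    # random values stand in here so the sketch is self-contained
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = rng.integers(0, 2, size=200)
    W, b = rng.normal(size=(16, 5)), rng.normal(size=16)
    K = deep_quasi_linear_kernel(X, X, W, b)
    clf = SVC(kernel="precomputed").fit(K, y)
    ```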

  • Quasi-Linear Support Vector Machine for Nonlinear Classification

    Bo ZHOU  Benhui CHEN  Jinglu HU  

     
    PAPER-Neural Networks and Bioengineering

    Vol: E97-A No:7
    Page(s): 1587-1594

    This paper proposes a so-called quasi-linear support vector machine (SVM), an SVM with a composite quasi-linear kernel. In the quasi-linear SVM model, the nonlinear separating hyperplane is approximated by multiple local linear models with interpolation. Instead of building multiple local SVM models separately, the quasi-linear SVM realizes the multiple-local-linear-model approach at the kernel level: it is built in exactly the same way as a single SVM model, by composing a quasi-linear kernel. A guided partitioning method is proposed to obtain the local partitions used to compose the quasi-linear kernel function. Experimental results on artificial data and benchmark datasets show that the proposed method is effective and improves classification performance.
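    The following sketch illustrates one plausible composition of such a quasi-linear kernel, K(x, z) = sum_i R_i(x) R_i(z) (x . z + 1), with one affine linear kernel per local partition, blended by interpolation functions R_i. Gaussian memberships and k-means partitioning are stand-ins for the paper's guided partitioning method, and the function names are hypothetical.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def memberships(X, centers, width):
        """Normalized Gaussian memberships R_i(x) interpolating between
        the local linear models. The Gaussian form and k-means partitioning
        are stand-ins; the paper uses a guided partitioning method."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        R = np.exp(-d2 / (2.0 * width**2))
        return R / R.sum(axis=1, keepdims=True)

    def quasi_linear_kernel(X1, X2, centers, width):
        """Composite quasi-linear kernel
        K(x, z) = sum_i R_i(x) * R_i(z) * (x . z + 1):
        one affine linear kernel per local partition, blended by the
        interpolation functions R_i."""
        R1 = memberships(X1, centers, width)
        R2 = memberships(X2, centers, width)
        return (R1 @ R2.T) * (X1 @ X2.T + 1.0)

    # usage on toy data with a nonlinear class boundary
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (np.sin(2 * X[:, 0]) > X[:, 1]).astype(int)
    centers = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X).cluster_centers_
    K = quasi_linear_kernel(X, X, centers, width=1.0)
    clf = SVC(kernel="precomputed").fit(K, y)
    ```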