
Keyword Search Result

[Keyword] kernel method (14 hits)

1-14 of 14 hits
  • Kernel Weights for Equalizing Kernel-Wise Convergence Rates of Multikernel Adaptive Filtering

    Kwangjin JEONG  Masahiro YUKAWA  

     
    PAPER-Algorithms and Data Structures

    Publicized: 2020/12/11
    Vol: E104-A No:6
    Page(s): 927-939

    Multikernel adaptive filtering is an attractive nonlinear approach to online estimation/tracking tasks. Despite its potential advantages over its single-kernel counterpart, the use of inappropriately weighted kernels may result in a negligible performance gain. In this paper, we propose an efficient recursive kernel weighting technique for multikernel adaptive filtering that activates all the kernels. The proposed weights equalize the convergence rates of all the corresponding partial coefficient errors and are implemented via a certain metric design based on the weighting matrix. Numerical examples show, for synthetic and multiple real datasets, that the proposed technique exhibits better performance than manually tuned kernel weights and significantly outperforms the online multiple kernel regression algorithm.
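
    A minimal sketch of a generic weighted multikernel adaptive filter (an NLMS-type update over a fixed dictionary of Gaussian kernels) may help illustrate the setting; the fixed weights below stand in for the recursively tuned kernel weights of the paper, and all names and parameter choices are illustrative.

      import numpy as np

      def gauss(x, d, sigma):
          # Gaussian kernel between input x and dictionary atom d
          return np.exp(-np.sum((x - d) ** 2) / (2.0 * sigma ** 2))

      def weighted_multikernel_nlms(X, y, dictionary, sigmas, weights,
                                    mu=0.5, eps=1e-6):
          # one coefficient vector per kernel; `weights` scales each kernel's
          # contribution (the paper tunes these recursively, here they are fixed)
          Q, M = len(sigmas), len(dictionary)
          H = np.zeros((Q, M))
          y_hat = np.zeros(len(y))
          for n, (x, d_n) in enumerate(zip(X, y)):
              K = np.array([[w * gauss(x, d, s) for d in dictionary]
                            for s, w in zip(sigmas, weights)])
              y_hat[n] = np.sum(H * K)                    # filter output
              e = d_n - y_hat[n]                          # a priori error
              H += mu * e * K / (np.sum(K ** 2) + eps)    # normalized update
          return y_hat, H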

  • Accurate Scale Adaptive and Real-Time Visual Tracking with Correlation Filters

    Jiatian PI  Shaohua ZENG  Qing ZUO  Yan WEI  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2018/07/27
    Vol: E101-D No:11
    Page(s): 2855-2858

    Visual tracking has been studied for several decades but continues to draw significant attention because of its critical role in many applications. This letter addresses the problem of the fixed template size in the Kernelized Correlation Filter (KCF) tracker without a significant decrease in speed. Extensive experiments are performed on the new OTB dataset.

  • Optimum Nonlinear Discriminant Analysis and Discriminant Kernel Support Vector Machine

    Akinori HIDAKA  Takio KURITA  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2016/08/04
    Vol: E99-D No:11
    Page(s): 2734-2744

    Kernel discriminant analysis (KDA) is the mainstream approach to nonlinear discriminant analysis (NDA). Because it relies on the kernel trick, KDA does not represent its nonlinear discriminant mapping explicitly. In this paper, another NDA approach, in which the nonlinear discriminant mapping is given analytically, is developed. This study is based on the theory of optimal nonlinear discriminant analysis (ONDA), whose nonlinear mapping is exactly expressed in terms of the Bayesian posterior probability. The theory indicates that various NDA methods can be derived by estimating the Bayesian posterior probability in ONDA with various estimation methods. ONDA also yields insight into novel kernel functions, called discriminant kernels (DK), which are likewise defined using the posterior probabilities. In this paper, several NDA methods and DKs derived from ONDA with several posterior probability estimators are developed and evaluated. Given accurate estimators of the Bayesian posterior probability, they yield good discriminant spaces for visualization or classification.
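
    As a rough illustration of the posterior-based construction, the sketch below builds a discriminant kernel of the simple form k(x, x') = sum_c P(c|x) P(c|x') from an off-the-shelf posterior estimator; the exact weighting used in the paper may differ, and the data and estimator here are placeholders.

      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC

      # toy data; the point of the abstract is that the posterior estimator
      # is interchangeable
      X, y = make_classification(n_samples=300, n_features=10, n_classes=3,
                                 n_informative=5, random_state=0)
      est = LogisticRegression(max_iter=1000).fit(X, y)
      P = est.predict_proba(X)            # estimated Bayesian posteriors P(c|x)

      # discriminant kernel (one simple variant): k(x, x') = sum_c P(c|x) P(c|x')
      K = P @ P.T

      svm = SVC(kernel="precomputed").fit(K, y)   # classify in the induced space
      print("training accuracy:", svm.score(K, y))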

  • Robust Scale Adaptive and Real-Time Visual Tracking with Correlation Filters

    Jiatian PI  Keli HU  Yuzhang GU  Lei QU  Fengrong LI  Xiaolin ZHANG  Yunlong ZHAN  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2016/04/07
    Vol: E99-D No:7
    Page(s): 1895-1902

    Visual tracking has been studied for several decades but continues to draw significant attention because of its critical role in many applications. Recent years have seen greater interest in the use of correlation filters in visual tracking systems, owing to their extremely compelling results in different competitions and benchmarks. However, there is still a need to improve the overall tracking capability to counter various tracking issues, including large scale variation, occlusion, and deformation. This paper presents an appealing tracker with robust scale estimation, which handles the problem of the fixed template size in the Kernelized Correlation Filter (KCF) tracker with no significant decrease in speed. We apply a discriminative correlation filter for scale estimation as an independent step after finding the optimal translation with the KCF tracker. Compared to an exhaustive scale-space search scheme, our approach provides improved performance while being computationally efficient. To reveal the effectiveness of our approach, we use benchmark sequences annotated with 11 attributes to evaluate how well the tracker handles different attributes. Numerous experiments demonstrate that the proposed algorithm performs favorably against several state-of-the-art algorithms. Appealing results in both accuracy and robustness are also achieved on all 51 benchmark sequences, which demonstrates the efficiency of our tracker.
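
    The sketch below shows a single-channel correlation filter trained in the Fourier domain and an exhaustive scale search over its response peaks; the paper instead trains a separate discriminative 1-D filter over scales, so this is only a simplified stand-in, and patch_at_scale is a hypothetical helper that crops and resizes an image patch.

      import numpy as np

      def train_cf(x, y, lam=1e-2):
          # closed-form correlation filter (ridge regression in the Fourier domain)
          X, Y = np.fft.fft2(x), np.fft.fft2(y)
          return np.conj(X) * Y / (np.conj(X) * X + lam)

      def response(H, z):
          # correlation response of filter H on a new image patch z
          return np.real(np.fft.ifft2(H * np.fft.fft2(z)))

      def best_scale(H, patch_at_scale, scales):
          # evaluate the translation filter at several candidate scales and keep
          # the scale with the highest response peak (exhaustive search variant)
          peaks = [response(H, patch_at_scale(s)).max() for s in scales]
          return scales[int(np.argmax(peaks))]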

  • Learning a Similarity Constrained Discriminative Kernel Dictionary from Concatenated Low-Rank Features for Action Recognition

    Shijian HUANG  Junyong YE  Tongqing WANG  Li JIANG  Changyuan XING  Yang LI  

     
    LETTER-Pattern Recognition

    Publicized: 2015/11/16
    Vol: E99-D No:2
    Page(s): 541-544

    Traditional low-rank features lose the temporal information within an action sequence. To retain this temporal information, we split an action video into multiple subsequences and concatenate the low-rank features of the subsequences in time order. We then recognize actions by learning a novel dictionary model from the concatenated low-rank features. However, traditional dictionary learning models usually neglect the similarity among coding coefficients and perform poorly on data that are not linearly separable. To overcome these shortcomings, we present a novel similarity-constrained discriminative kernel dictionary learning method for action recognition. The effectiveness of the proposed method is verified on three benchmarks, and the experimental results demonstrate its promise for action recognition.
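
    A toy sketch of the feature construction step described above (splitting a sequence into temporal blocks and concatenating per-block low-rank features) is given below; truncated SVD is used here as a generic low-rank representation, which is not necessarily the exact feature of the paper.

      import numpy as np

      def lowrank_feature(frames, rank=5):
          # low-rank summary of a (n_frames, dim) block via truncated SVD;
          # assumes each block has at least `rank` frames
          U, s, Vt = np.linalg.svd(frames, full_matrices=False)
          return (np.diag(s[:rank]) @ Vt[:rank]).ravel()

      def concat_lowrank(video, n_sub=4, rank=5):
          # split the sequence into n_sub temporal blocks and concatenate their
          # low-rank features in time order to keep temporal information
          blocks = np.array_split(video, n_sub, axis=0)
          return np.concatenate([lowrank_feature(b, rank) for b in blocks])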

  • Mean Polynomial Kernel and Its Application to Vector Sequence Recognition

    Raissa RELATOR  Yoshihiro HIROHASHI  Eisuke ITO  Tsuyoshi KATO  

     
    PAPER-Pattern Recognition

    Vol: E97-D No:7
    Page(s): 1855-1863

    Classification tasks in computer vision and brain-computer interface research have found several applications, such as biometrics and cognitive training. However, as in any other discipline, determining a suitable representation of the data has been challenging, and recent approaches have deviated from the familiar form of one vector per data sample. This paper considers a kernel between vector sets, the mean polynomial kernel, motivated by recent studies in which data are approximated by linear subspaces, in particular methods formulated on Grassmann manifolds. This kernel takes a more general approach in that it also supports input data that can be modeled as a vector sequence, without requiring it to be a linear subspace. We discuss how the kernel can be related to the Projection kernel, a Grassmann kernel. Experimental results using face image sequences and physiological signal data show that the mean polynomial kernel surpasses existing subspace-based methods on Grassmann manifolds in terms of predictive performance and efficiency.
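
    Read literally as the mean of polynomial kernel values over all vector pairs of two sequences, the kernel can be sketched as follows; this is an assumed form, so consult the paper for the precise definition and its link to the Projection kernel.

      import numpy as np

      def mean_polynomial_kernel(X, Y, degree=2, c=0.0):
          # mean of polynomial kernel values over all vector pairs of two
          # sequences; X: (m, dim), Y: (n, dim)
          G = (X @ Y.T + c) ** degree
          return G.mean()

      # usage with two vector sequences of different lengths
      rng = np.random.default_rng(0)
      A, B = rng.normal(size=(12, 8)), rng.normal(size=(20, 8))
      print(mean_polynomial_kernel(A, B, degree=2))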

  • A Support Vector and K-Means Based Hybrid Intelligent Data Clustering Algorithm

    Liang SUN  Shinichi YOSHIDA  Yanchun LIANG  

     
    PAPER-Artificial Intelligence, Data Mining

    Vol: E94-D No:11
    Page(s): 2234-2243

    Support vector clustering (SVC), a recently developed unsupervised learning algorithm, has been successfully applied to many real-life data clustering problems. However, its effectiveness and advantages deteriorate when it is applied to complex real-world problems, e.g., those with a large proportion of noise data points or with clusters that connect to one another. This paper proposes a support vector and K-Means based hybrid algorithm to improve the performance of SVC. A new SVC training method is developed based on an analysis of a Gaussian kernel radius function. An empirical study is conducted to guide better selection of the standard deviation of the Gaussian kernel. In the proposed algorithm, the outliers that increase problem complexity are first identified and removed by training a global SVC. The refined data set is then clustered by a kernel-based K-Means algorithm. Finally, several local SVCs are trained for the clusters, and each removed data point is labeled according to its distance from the local SVCs. Since it exploits the advantages of both SVC and K-Means, the proposed algorithm is capable of clustering compact and arbitrarily organized data sets and is more robust to outliers and connected clusters. Experiments are conducted on 2-D data sets generated by mixture models and on benchmark data sets taken from the UCI machine learning repository. The cluster error rate is lower than 3.0% for all the selected data sets. The results demonstrate that the proposed algorithm compares favorably with existing SVC algorithms.
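
    Central to SVC is the feature-space radius of a point with respect to the learned sphere; a minimal NumPy version of this standard quantity for a Gaussian kernel is sketched below (it is not the paper's full hybrid pipeline), where sv and beta denote the support vectors and their coefficients.

      import numpy as np

      def rbf(X, Y, gamma):
          # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def feature_space_radius(x, sv, beta, gamma):
          # squared distance of phi(x) from the sphere centre a = sum_j beta_j phi(sv_j):
          #   R^2(x) = k(x,x) - 2 sum_j beta_j k(x, sv_j)
          #            + sum_{i,j} beta_i beta_j k(sv_i, sv_j)
          # for an RBF kernel, k(x, x) = 1
          x = np.atleast_2d(x)
          Kxs = rbf(x, sv, gamma)[0]
          Kss = rbf(sv, sv, gamma)
          return 1.0 - 2.0 * beta @ Kxs + beta @ Kss @ beta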

  • Kernel Methods for Chemical Compounds: From Classification to Design Open Access

    Tatsuya AKUTSU  Hiroshi NAGAMOCHI  

     
    INVITED PAPER

    Vol: E94-D No:10
    Page(s): 1846-1853

    In this paper, we briefly review kernel methods for the analysis of chemical compounds, focusing on the authors' own work. We begin with a brief review of existing kernel functions used for the classification of chemical compounds and the prediction of their activities. We then focus on the pre-image problem for chemical compounds, which is to infer a chemical structure that is mapped to a given feature vector and has a potential application to the design of novel chemical compounds. In particular, we consider the pre-image problem for feature vectors consisting of frequencies of labeled paths of length at most K. We present several time complexity results, including an NP-hardness result for the general case, a polynomial-time algorithm for tree-structured compounds with fixed K, and a polynomial-time algorithm for K=1 based on graph detachment. We then review practical algorithms for the pre-image problem, which are based on the enumeration of chemical structures satisfying given constraints. We also briefly review related results, including the efficient enumeration of stereoisomers of tree-like chemical compounds and the efficient enumeration of outerplanar graphs.
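
    The feature vectors discussed above count labeled paths of length at most K; one simple variant (counting directed simple paths, with illustrative atom/bond inputs) is sketched below.

      from collections import Counter

      def path_frequencies(atoms, bonds, K):
          # frequencies of labeled paths with at most K edges in a molecular graph;
          # atoms: list of atom labels, bonds: list of undirected edges (i, j)
          adj = {i: [] for i in range(len(atoms))}
          for i, j in bonds:
              adj[i].append(j)
              adj[j].append(i)

          freq = Counter()
          def walk(v, labels, visited):
              freq[tuple(labels)] += 1          # record this labeled path
              if len(labels) - 1 >= K:          # already K edges; stop extending
                  return
              for u in adj[v]:
                  if u not in visited:          # simple paths only
                      walk(u, labels + [atoms[u]], visited | {u})

          for v in range(len(atoms)):
              walk(v, [atoms[v]], {v})
          return freq

      # ethanol backbone: C-C-O
      print(path_frequencies(["C", "C", "O"], [(0, 1), (1, 2)], K=2))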

  • Cartesian Kernel: An Efficient Alternative to the Pairwise Kernel

    Hisashi KASHIMA  Satoshi OYAMA  Yoshihiro YAMANISHI  Koji TSUDA  

     
    PAPER

    Vol: E93-D No:10
    Page(s): 2672-2679

    Pairwise classification has many applications including network prediction, entity resolution, and collaborative filtering. The pairwise kernel has been proposed for those purposes by several research groups independently, and has been used successfully in several fields. In this paper, we propose an efficient alternative which we call a Cartesian kernel. While the existing pairwise kernel (which we refer to as the Kronecker kernel) can be interpreted as the weighted adjacency matrix of the Kronecker product graph of two graphs, the Cartesian kernel can be interpreted as that of the Cartesian graph, which is more sparse than the Kronecker product graph. We discuss the generalization bounds of the two pairwise kernels by using eigenvalue analysis of the kernel matrices. Also, we consider the N-wise extensions of the two pairwise kernels. Experimental results show the Cartesian kernel is much faster than the Kronecker kernel, and at the same time, competitive with the Kronecker kernel in predictive performance.
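
    Assuming the commonly presented (unsymmetrized) definitions, the two pairwise kernels can be compared as below: the Kronecker kernel multiplies the two element-wise kernels, while the Cartesian kernel only couples pairs that share an element, which is why its Gram matrix is much sparser. The exact definitions used in the paper may include symmetrization.

      import numpy as np

      def kronecker_pairwise(K1, K2, a, b):
          # k((a_m, b_m), (a_n, b_n)) = K1[a_m, a_n] * K2[b_m, b_n]
          return K1[np.ix_(a, a)] * K2[np.ix_(b, b)]

      def cartesian_pairwise(K1, K2, a, b):
          # k((a_m, b_m), (a_n, b_n)) = K1[a_m, a_n]*[b_m == b_n]
          #                           + [a_m == a_n]*K2[b_m, b_n]
          a, b = np.asarray(a), np.asarray(b)
          same_a = (a[:, None] == a[None, :]).astype(float)
          same_b = (b[:, None] == b[None, :]).astype(float)
          return K1[np.ix_(a, a)] * same_b + same_a * K2[np.ix_(b, b)]

      # a[m], b[m] index the two entities forming the m-th pair; K1 and K2 are
      # the Gram matrices over the first and second entity sets, respectively.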

  • A Model Optimization Approach to the Automatic Segmentation of Medical Images

    Ahmed AFIFI  Toshiya NAKAGUCHI  Norimichi TSUMURA  Yoichi MIYAKE  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E93-D No:4
    Page(s): 882-890

    The aim of this work is to develop an efficient medical image segmentation technique by fitting a nonlinear shape model to pre-segmented images. In this technique, kernel principal component analysis (KPCA) is used to capture the shape variations and to build the nonlinear shape model. The pre-segmentation is carried out by classifying the image pixels according to high-level texture features extracted using the over-complete wavelet packet decomposition. The model fitting is then completed using particle swarm optimization (PSO) to adapt the model parameters. The proposed technique is fully automated, is able to deal with complex shape variations, can efficiently optimize the model to fit new cases, and is robust to noise and occlusion. In this paper, we demonstrate the proposed technique by applying it to liver segmentation from computed tomography (CT) scans, and the obtained results are very promising.
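
    The nonlinear shape model part can be sketched with off-the-shelf kernel PCA; the random "shapes" below are placeholders for flattened landmark/contour vectors, and an optimizer such as PSO would search the low-dimensional code to fit a pre-segmented image.

      import numpy as np
      from sklearn.decomposition import KernelPCA

      # toy "shapes": each row stands in for a flattened landmark/contour vector
      rng = np.random.default_rng(0)
      shapes = rng.normal(size=(50, 40))

      # nonlinear shape model: KPCA with an RBF kernel keeps a few modes of variation
      kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1,
                       fit_inverse_transform=True)
      codes = kpca.fit_transform(shapes)

      # a candidate shape is generated from a low-dimensional code; an optimizer
      # such as PSO would search over this code to fit the pre-segmented image
      candidate_shape = kpca.inverse_transform(codes[:1])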

  • Recent Advances and Trends in Large-Scale Kernel Methods

    Hisashi KASHIMA  Tsuyoshi IDE  Tsuyoshi KATO  Masashi SUGIYAMA  

     
    INVITED PAPER

    Vol: E92-D No:7
    Page(s): 1338-1353

    Kernel methods such as the support vector machine are among the most successful algorithms in modern machine learning. Their advantage is that linear algorithms are extended to nonlinear scenarios in a straightforward way via the kernel trick. However, naive use of kernel methods is computationally expensive, since the computational complexity typically scales cubically with the number of training samples. In this article, we review recent advances in kernel methods, with emphasis on scalability for massive problems.
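
    One representative scalability technique in this line of work is a low-rank (Nystroem) approximation of the Gram matrix combined with a linear learner; the sketch below uses scikit-learn purely for illustration and is not tied to the specific methods surveyed in the paper.

      from sklearn.datasets import make_classification
      from sklearn.kernel_approximation import Nystroem
      from sklearn.linear_model import SGDClassifier
      from sklearn.pipeline import make_pipeline

      # replace the full n-by-n Gram matrix with a low-rank Nystroem feature map,
      # then train a linear SVM-like model in that approximate feature space
      X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
      model = make_pipeline(Nystroem(kernel="rbf", gamma=0.2, n_components=300,
                                     random_state=0),
                            SGDClassifier(loss="hinge", random_state=0))
      model.fit(X, y)
      print("training accuracy:", model.score(X, y))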

  • A Learning Algorithm of Boosting Kernel Discriminant Analysis for Pattern Recognition

    Shinji KITA  Seiichi OZAWA  Satoshi MAEKAWA  Shigeo ABE  

     
    PAPER-Biocybernetics, Neurocomputing

    Vol: E90-D No:11
    Page(s): 1853-1863

    In this paper, we present a new method to enhance the classification performance of a multiple classifier system by combining a boosting technique called AdaBoost.M2 with Kernel Discriminant Analysis (KDA). To reduce the dependency between classifier outputs and to speed up learning, each classifier is trained in a different feature space, which is obtained by applying KDA to a small set of hard-to-classify training samples. The training of the system is conducted based on AdaBoost.M2, and the classifiers are implemented by Radial Basis Function networks. To perform KDA at every boosting round on a realistic time scale, a new kernel selection method based on a class separability measure is proposed. Furthermore, a new criterion for training convergence is proposed to acquire good classification performance with fewer boosting rounds. To evaluate the proposed method, several experiments are carried out using standard evaluation datasets. The experimental results demonstrate that the proposed method can select an optimal kernel parameter more efficiently than the conventional cross-validation method, and that the training of the boosting classifiers terminates within a fairly small number of rounds while attaining good classification accuracy. For multi-class classification problems, the proposed method outperforms both Boosting Linear Discriminant Analysis (BLDA) and the Radial Basis Function Network (RBFN) in terms of classification accuracy. On the other hand, the performance evaluation on 2-class problems shows that the advantage of the proposed BKDA over BLDA and RBFN depends on the dataset.
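
    A simplified version of kernel selection by a class separability measure is sketched below: the RBF width maximizing a between- over within-class scatter ratio, computed from the rows of the kernel matrix, is chosen. The paper's exact measure and the boosting loop are omitted, so treat this as an illustrative stand-in.

      import numpy as np

      def separability(K, y):
          # between- over within-class scatter of the kernel-matrix rows
          y = np.asarray(y)
          mu = K.mean(axis=0)
          s_b = s_w = 0.0
          for c in np.unique(y):
              Kc = K[y == c]
              mu_c = Kc.mean(axis=0)
              s_b += len(Kc) * np.sum((mu_c - mu) ** 2)
              s_w += np.sum((Kc - mu_c) ** 2)
          return s_b / s_w

      def select_gamma(X, y, gammas):
          # pick the RBF width that maximizes the separability measure
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return max(gammas, key=lambda g: separability(np.exp(-g * d2), y))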

  • Constructing Kernel Functions for Binary Regression

    Masashi SUGIYAMA  Hidemitsu OGAWA  

     
    PAPER-Pattern Recognition

    Vol: E89-D No:7
    Page(s): 2243-2249

    Kernel-based learning algorithms have been successfully applied in various problem domains, given appropriate kernel functions. In this paper, we discuss the problem of designing kernel functions for binary regression and show that using a bell-shaped cosine function as a kernel function is optimal in some sense. The rationale of this result is based on the Karhunen-Loeve expansion, i.e., the optimal approximation to a set of functions is given by the principal component of the correlation operator of the functions.
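
    The stated rationale can be illustrated numerically: the principal component of the empirical correlation operator of a family of target functions is their best single-function approximation in this sense. The step functions below are an illustrative stand-in for binary regression targets, not the paper's exact setting.

      import numpy as np

      # a family of step-like binary targets sampled on a grid
      t = np.linspace(0.0, 1.0, 200)
      funcs = np.array([np.where(t > a, 1.0, -1.0)
                        for a in np.linspace(0.1, 0.9, 50)])

      R = funcs.T @ funcs / len(funcs)   # empirical correlation operator
      w, V = np.linalg.eigh(R)
      principal = V[:, -1]               # eigenvector of the largest eigenvalue
      # `principal` is the single basis function best approximating the family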

  • A New Approach to Fuzzy Modeling Using an Extended Kernel Method

    Jongcheol KIM  Taewon KIM  Yasuo SUGA  

     
    PAPER-Neuro, Fuzzy, GA

    Vol: E86-A No:9
    Page(s): 2262-2269

    This paper proposes a new approach to fuzzy inference systems for modeling nonlinear systems from measured input and output data. In the proposed fuzzy inference system, the number of fuzzy rules and the parameter values of the membership functions are decided automatically using the extended kernel method. The extended kernel method performs a linear transformation and a kernel mapping separately: the linear transformation projects the input space into a linearly transformed input space, and the kernel mapping projects the linearly transformed input space into a high-dimensional feature space. The linear transformation is needed, in particular, to ease the difficulty of determining the type of kernel function that represents the nonlinear mapping appropriate to the given nonlinear system. The structure of the proposed fuzzy inference system is equivalent to a Takagi-Sugeno fuzzy model whose input variables are weighted linear combinations of the original input variables. In addition, the number of fuzzy rules can be reduced, under the condition of optimizing a given criterion, by adjusting the linear transformation matrix and the parameter values of the kernel functions with the gradient descent method. Once a structure is selected, the coefficients of the consequent part are determined by the least-squares method. Simulation results of the proposed technique are illustrated by examples involving benchmark nonlinear systems.
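
    Two ingredients mentioned above can be sketched directly: a kernel evaluated on linearly transformed inputs, and the least-squares fit of Takagi-Sugeno consequent parameters given normalized rule firing strengths. Both are generic formulations, not the paper's exact algorithm, and all names are illustrative.

      import numpy as np

      def extended_rbf(X1, X2, A, gamma=1.0):
          # kernel on linearly transformed inputs: k(x, x') = exp(-gamma ||Ax - Ax'||^2)
          Z1, Z2 = X1 @ A.T, X2 @ A.T
          d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def ts_consequents(firing, X, y):
          # least-squares fit of Takagi-Sugeno consequent parameters, given the
          # normalized firing strength of each rule (one column per rule)
          n, r = firing.shape
          Phi = np.hstack([firing[:, [j]] * np.hstack([np.ones((n, 1)), X])
                           for j in range(r)])
          theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
          return theta.reshape(r, X.shape[1] + 1)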