Keyword Search Result

[Keyword] kernel (136 hits)

Showing results 61-80 of 136 hits

  • Kernel-Based On-Line Object Tracking Combining both Local Description and Global Representation

    Quan MIAO  Guijin WANG  Xinggang LIN  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E96-D No:1  Page(s): 159-162

    This paper proposes a novel method for object tracking that combines local feature-based and global template-based approaches. The proposed algorithm consists of two stages, from coarse to fine. The first stage applies on-line classifiers to match corresponding keypoints between the input frame and the reference frame; a rough motion parameter is then estimated using RANSAC. The second stage employs a kernel-based global representation in successive frames to refine the motion parameter. In addition, we use the kernel weight obtained during the second stage to guide the on-line learning of the keypoint descriptions. Experimental results demonstrate the effectiveness of the proposed technique.
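
    A minimal sketch of the kind of RANSAC step used in the first, coarse stage: given matched keypoints, a robust 2-D translation (a deliberately simplified motion model, not the paper's full parameterization) is estimated in pure NumPy. All names are illustrative.

      import numpy as np

      def ransac_translation(src, dst, n_iter=500, thresh=2.0, seed=None):
          """Robustly estimate a 2-D translation between matched keypoints.

          src, dst: (N, 2) arrays of corresponding keypoint coordinates.
          Returns the translation vector and a boolean inlier mask.
          """
          rng = np.random.default_rng(seed)
          best_t, best_inliers = None, np.zeros(len(src), dtype=bool)
          for _ in range(n_iter):
              i = rng.integers(len(src))           # minimal sample: one correspondence
              t = dst[i] - src[i]                  # candidate translation
              residuals = np.linalg.norm(src + t - dst, axis=1)
              inliers = residuals < thresh
              if inliers.sum() > best_inliers.sum():
                  best_t, best_inliers = t, inliers
          # refine the estimate on all inliers of the best hypothesis
          best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
          return best_t, best_inliers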

  • On Kernel Parameter Selection in Hilbert-Schmidt Independence Criterion

    Masashi SUGIYAMA  Makoto YAMADA  

     
    LETTER-Artificial Intelligence, Data Mining

    Vol: E95-D No:10  Page(s): 2564-2567

    The Hilbert-Schmidt independence criterion (HSIC) is a kernel-based statistical independence measure that can be computed very efficiently. However, it requires us to determine the kernel parameters heuristically because no objective model selection method is available. Least-squares mutual information (LSMI) is another statistical independence measure that is based on direct density-ratio estimation. Although LSMI is computationally more expensive than HSIC, LSMI is equipped with cross-validation, and thus the kernel parameter can be determined objectively. In this paper, we show that HSIC can actually be regarded as an approximation to LSMI, which allows us to utilize cross-validation of LSMI for determining kernel parameters in HSIC. Consequently, both computational efficiency and cross-validation can be achieved.
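
    For concreteness, a small sketch of the empirical HSIC statistic with Gaussian kernels, the quantity whose kernel widths the paper proposes to tune via LSMI cross-validation. The widths sigma_x and sigma_y are the heuristic parameters in question; the LSMI-based selection itself is not reproduced here.

      import numpy as np

      def _rbf(X, sigma):
          """Gaussian kernel matrix of a 2-D sample array X."""
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * sigma ** 2))

      def hsic(X, Y, sigma_x=1.0, sigma_y=1.0):
          """Biased empirical HSIC estimate, trace(K H L H) / (n - 1)^2."""
          n = X.shape[0]
          K, L = _rbf(X, sigma_x), _rbf(Y, sigma_y)
          H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
          return np.trace(K @ H @ L @ H) / (n - 1) ** 2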

  • Nonlinear Least-Squares Time-Difference Estimation from Sub-Nyquist-Rate Samples

    Koji HARADA  Hideaki SAKAI  

     
    PAPER-Digital Signal Processing

    Vol: E95-A No:7  Page(s): 1117-1124

    In this paper, time-difference estimation of filtered random signals passed through multipath channels is discussed. First, we reformulate the approach based on innovation-rate sampling (IRS) to fit our random-signal model, and then use the IRS results to drive a nonlinear least-squares (NLS) minimization algorithm. This hybrid approach (referred to as the IRS-NLS method) provides consistent estimates even with sub-Nyquist sampling, assuming the use of compactly supported sampling kernels that satisfy the recently developed non-aliasing condition in the frequency domain. Numerical simulations show that the proposed IRS-NLS method improves performance over the straightforward IRS method and provides approximately the same performance as the NLS method at a reduced sampling rate, even for closely spaced time delays. Given a fixed observation time, this enables a significant reduction in the required number of samples while maintaining the same level of estimation performance.
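
    A toy sketch of the NLS stage alone: the delay of one sampled signal relative to another is found by minimizing a squared-error cost over the delay, using linear interpolation. The IRS initialization and the multipath/random-signal model of the paper are not reproduced; the names and the toy signals are illustrative.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def nls_delay(x, y, t, d_max):
          """Return the delay d minimizing sum |y(t) - x(t - d)|^2."""
          def cost(d):
              x_shift = np.interp(t - d, t, x, left=0.0, right=0.0)
              return np.sum((y - x_shift) ** 2)
          return minimize_scalar(cost, bounds=(0.0, d_max), method="bounded").x

      # toy usage: a pulse and a copy delayed by 0.12
      t = np.linspace(0.0, 1.0, 200)
      x = np.exp(-((t - 0.30) ** 2) / 0.002)
      y = np.exp(-((t - 0.42) ** 2) / 0.002)
      print(nls_delay(x, y, t, d_max=0.5))   # approximately 0.12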

  • Asymmetric Learning Based on Kernel Partial Least Squares for Software Defect Prediction

    Guangchun LUO  Ying MA  Ke QIN  

     
    LETTER-Software Engineering

    Vol: E95-D No:7  Page(s): 2006-2008

    An asymmetric classifier based on kernel partial least squares is proposed for software defect prediction. This method improves the prediction performance on imbalanced data sets. The experimental results validate its effectiveness.

  • Hand-Shape Recognition Using the Distributions of Multi-Viewpoint Image Sets

    Yasuhiro OHKAWA  Kazuhiro FUKUI  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E95-D No:6  Page(s): 1619-1627

    This paper proposes a method for recognizing hand shapes from multi-viewpoint image sets. Hand-shape recognition is a difficult problem, as the appearance of the hand changes greatly with viewpoint, illumination conditions, and individual characteristics. To overcome this problem, we apply the Kernel Orthogonal Mutual Subspace Method (KOMSM) to shift-invariant features obtained from multi-viewpoint images of a hand. When KOMSM is applied to hand recognition with many learning images per class, its computational cost becomes heavy because of the kernel trick. We therefore propose a new method that drastically reduces the computational cost of KOMSM by replacing the learning images with centroids obtained by k-means clustering, weighted by the number of images belonging to each centroid. The validity of the proposed method is demonstrated through evaluation experiments using multi-viewpoint image sets of 30 hand-shape classes.
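
    A short sketch of the data-reduction idea under stated assumptions: each class's image set is replaced by k-means centroids together with the number of images assigned to each, so that later kernel computations scale with the number of centroids. The KOMSM training itself is omitted; scikit-learn's KMeans and the parameter values are assumptions of this sketch.

      import numpy as np
      from sklearn.cluster import KMeans

      def reduce_with_centroids(X, n_centroids=50, seed=0):
          """Replace a per-class feature matrix X (n_samples, n_features) by
          k-means centroids plus the count of images assigned to each one."""
          km = KMeans(n_clusters=n_centroids, n_init=10, random_state=seed).fit(X)
          counts = np.bincount(km.labels_, minlength=n_centroids)
          return km.cluster_centers_, counts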

  • Iris Image Blur Detection with Multiple Kernel Learning

    Lili PAN  Mei XIE  Ling MAO  

     
    LETTER-Pattern Recognition

    Vol: E95-D No:6  Page(s): 1698-1701

    In this letter, we analyze the influence of motion and out-of-focus blur on both the frequency spectrum and the cepstrum of an iris image. Based on their characteristics, we define two new discriminative blur features, the Energy Spectral Density Distribution (ESDD) and the Singular Cepstrum Histogram (SCH). To merge the two features for blur detection, a merging kernel, formed as a linear combination of two kernels, is used with a Support Vector Machine. Extensive experiments demonstrate the validity of our method by showing improved blur-detection performance on both synthetic and real datasets.
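
    A hedged sketch of the merging-kernel idea: two RBF kernels, one per feature type, are combined linearly and passed to an SVM as a precomputed Gram matrix. The feature extraction (ESDD, SCH) and the choice of the mixing weight beta are not reproduced; all names are illustrative.

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.svm import SVC

      def merged_kernel(FA, FB, GA, GB, beta=0.5, gamma_f=1.0, gamma_g=1.0):
          """Linear combination of two RBF kernels on two feature sets:
          FA/FB are the first-feature matrices of sample sets A and B,
          GA/GB the second-feature matrices."""
          return beta * rbf_kernel(FA, FB, gamma=gamma_f) + \
                 (1.0 - beta) * rbf_kernel(GA, GB, gamma=gamma_g)

      # usage sketch (F_*, G_*, y_train are assumed feature matrices and labels):
      # clf = SVC(kernel="precomputed")
      # clf.fit(merged_kernel(F_tr, F_tr, G_tr, G_tr), y_train)
      # clf.predict(merged_kernel(F_te, F_tr, G_te, G_tr))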

  • DOA Estimation of Multiple Speech Sources from a Stereophonic Mixture in Underdetermined Case

    Ning DING  Nozomu HAMADA  

     
    PAPER-Engineering Acoustics

    Vol: E95-A No:4  Page(s): 735-744

    This paper proposes a direction-of-arrival (DOA) estimation method for multiple speech sources from a stereophonic mixture in the underdetermined case, where the number of sources exceeds the number of sensors. The method relies on the sparseness of speech signals in the time-frequency (T-F) domain, which means that multiple independent speakers overlap only slightly. First, T-F cells bearing reliable spatial information are selected using a reliability index defined from the estimated interaural phase difference at each T-F cell. Then, a statistical model of the error propagation from the phase difference at a T-F cell to the resulting DOA is introduced. Using this model and the T-F sparseness, the DOA estimation problem is recast as finding the local peaks of the probability density function of the DOA, which is obtained by a kernel density estimator based on the proposed statistical model. The performance of the proposed method is assessed by experiments. Our method outperforms others both in accuracy on real observed data and in robustness in simulations with additional diffuse noise.
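
    A simplified sketch of the final step: per-cell DOA values (optionally weighted by a reliability score) are pooled with a kernel density estimator, and sources are read off as local peaks of the density. The statistical error-propagation model of the paper is not reproduced; the grid and prominence threshold are illustrative choices.

      import numpy as np
      from scipy.stats import gaussian_kde
      from scipy.signal import find_peaks

      def doas_from_cells(cell_doas, weights=None):
          """Return local peaks (degrees) of a KDE over per-T-F-cell DOA values."""
          grid = np.linspace(-90.0, 90.0, 721)
          density = gaussian_kde(cell_doas, weights=weights)(grid)
          peaks, _ = find_peaks(density, prominence=0.01 * density.max())
          return grid[peaks]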

  • Improvement of SVM-Based Speech/Music Classification Using Adaptive Kernel Technique

    Chungsoo LIM  Joon-Hyuk CHANG  

     
    LETTER-Speech and Hearing

    Vol: E95-D No:3  Page(s): 888-891

    In this paper, we propose a way to improve the classification performance of support vector machines (SVMs), especially for speech and music frames within a selectable mode vocoder (SMV) framework. A myriad of techniques have been proposed for SVMs, and most of them are employed during the training phase of SVMs. Instead, the proposed algorithm is applied during the test phase and works with existing schemes. The proposed algorithm modifies a kernel parameter in the decision function of SVMs to alter SVM decisions for better classification accuracy based on the previous outputs of SVMs. Since speech and music frames exhibit strong inter-frame correlation, the outputs of SVMs can guide the kernel parameter modification. Our experimental results show that the proposed algorithm has the potential for adaptively tuning classifications of support vector machines for better performance.

  • Kernel Based Asymmetric Learning for Software Defect Prediction

    Ying MA  Guangchun LUO  Hao CHEN  

     
    LETTER-Software Engineering

    Vol: E95-D No:1  Page(s): 267-270

    A kernel based asymmetric learning method is developed for software defect prediction. This method improves the performance of the predictor on class imbalanced data, since it is based on kernel principal component analysis. An experiment validates its effectiveness.
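
    A minimal stand-in pipeline under stated assumptions: kernel PCA features followed by a classifier that compensates for class imbalance through class weights. This is not the authors' asymmetric formulation, only an illustration of combining kernel PCA with imbalance-aware learning; the scikit-learn components and parameter values are assumptions.

      from sklearn.decomposition import KernelPCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      def build_defect_predictor(n_components=10, gamma=0.1):
          """Kernel PCA features + class-weighted classifier for imbalanced data."""
          return make_pipeline(
              KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma),
              LogisticRegression(class_weight="balanced", max_iter=1000),
          )

      # clf = build_defect_predictor().fit(X_train, y_train)   # y = 1 for defective modules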

  • A Novel Bayes' Theorem-Based Saliency Detection Model

    Xin HE  Huiyun JING  Qi HAN  Xiamu NIU  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E94-D No:12  Page(s): 2545-2548

    We propose a novel saliency detection model based on Bayes' theorem. The model integrates the two parts of Bayes' equation to measure saliency, each of which was considered separately in previous models. The proposed model measures saliency by computing a local kernel density estimate of features in the center-surround region and a global kernel density estimate of features at each pixel across the whole image. Under this model, a saliency detection method is presented that extracts the DCT (Discrete Cosine Transform) magnitude of the local region around each pixel as the feature. Experiments show that the proposed model not only performs competitively on psychological patterns and better than the current state-of-the-art models on human visual fixation data, but is also robust against signal uncertainty.

  • A Support Vector and K-Means Based Hybrid Intelligent Data Clustering Algorithm

    Liang SUN  Shinichi YOSHIDA  Yanchun LIANG  

     
    PAPER-Artificial Intelligence, Data Mining

    Vol: E94-D No:11  Page(s): 2234-2243

    Support vector clustering (SVC), a recently developed unsupervised learning algorithm, has been successfully applied to many real-life data clustering problems. However, its effectiveness and advantages deteriorate when it is applied to complex real-world problems, e.g., those with a large proportion of noisy data points or with connected clusters. This paper proposes a support vector and K-Means based hybrid algorithm to improve the performance of SVC. A new SVC training method is developed based on an analysis of a Gaussian kernel radius function, and an empirical study is conducted to guide better selection of the standard deviation of the Gaussian kernel. In the proposed algorithm, the outliers that increase problem complexity are first identified and removed by training a global SVC. The refined data set is then clustered by a kernel-based K-Means algorithm. Finally, several local SVCs are trained for the clusters, and each removed data point is labeled according to its distance from the local SVCs. Since it exploits the advantages of both SVC and K-Means, the proposed algorithm can cluster compact and arbitrarily organized data sets and is more robust to outliers and connected clusters. Experiments are conducted on 2-D data sets generated by mixture models and on benchmark data sets taken from the UCI machine learning repository. The cluster error rate is lower than 3.0% for all the selected data sets, and the results demonstrate that the proposed algorithm compares favorably with existing SVC algorithms.
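
    A compact sketch of the kernel-based K-Means stage on its own: points are assigned to the cluster whose feature-space mean is nearest, using only Gram-matrix entries. The global and local SVC stages and the kernel-radius analysis are omitted; the RBF kernel and parameter values are assumptions.

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel

      def kernel_kmeans(X, k, gamma=1.0, n_iter=50, seed=0):
          """Kernel K-Means: ||phi(x_i) - mu_c||^2 is expanded with kernel values."""
          rng = np.random.default_rng(seed)
          K = rbf_kernel(X, gamma=gamma)
          labels = rng.integers(k, size=len(X))
          for _ in range(n_iter):
              dist = np.full((len(X), k), np.inf)
              for c in range(k):
                  mask = labels == c
                  if not mask.any():
                      continue
                  Kc = K[:, mask]
                  dist[:, c] = (np.diag(K) - 2.0 * Kc.mean(axis=1)
                                + K[np.ix_(mask, mask)].mean())
              new_labels = dist.argmin(axis=1)
              if np.array_equal(new_labels, labels):
                  break
              labels = new_labels
          return labels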

  • Kernel Optimization Based Semi-Supervised KBDA Scheme for Image Retrieval

    Xu YANG  Huilin XIONG  Xin YANG  

     
    PAPER

    Vol: E94-D No:10  Page(s): 1901-1908

    Kernel biased discriminant analysis (KBDA), as a subspace learning algorithm, has been an attractive approach for relevance feedback in content-based image retrieval. Its performance, however, still suffers from the “small sample learning” problem and the “kernel learning” problem. Aiming to solve these problems, in this paper we present a new semi-supervised scheme of KBDA (S-KBDA), in which the projection learning and the kernel learning are interwoven into a constrained optimization framework. Specifically, S-KBDA learns a subspace that preserves both the biased discriminant structure among the labeled samples and the geometric structure among all training samples. In kernel optimization, we directly optimize the kernel matrix rather than a kernel function, which makes the kernel learning more flexible and better suited to the retrieval task. To solve the constrained optimization problem, a fast algorithm based on gradient ascent is developed. Image retrieval experiments show the effectiveness of the S-KBDA scheme in comparison with the original KBDA and two other state-of-the-art algorithms.

  • Kernel Methods for Chemical Compounds: From Classification to Design Open Access

    Tatsuya AKUTSU  Hiroshi NAGAMOCHI  

     
    INVITED PAPER

    Vol: E94-D No:10  Page(s): 1846-1853

    In this paper, we briefly review kernel methods for the analysis of chemical compounds, focusing on the authors' work. We begin with a brief review of existing kernel functions that are used for classifying chemical compounds and predicting their activities. We then focus on the pre-image problem for chemical compounds, which is to infer a chemical structure that is mapped to a given feature vector; this problem has potential applications to the design of novel chemical compounds. In particular, we consider the pre-image problem for feature vectors consisting of the frequencies of labeled paths of length at most K. We present several time-complexity results, including an NP-hardness result for the general case, a polynomial-time algorithm for tree-structured compounds with fixed K, and a polynomial-time algorithm for K=1 based on graph detachment. We then review practical algorithms for the pre-image problem, which are based on enumerating chemical structures that satisfy given constraints. We also briefly review related results, including the efficient enumeration of stereoisomers of tree-like chemical compounds and the efficient enumeration of outerplanar graphs.
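
    A small illustrative sketch of the feature map in question: counting labeled simple paths of length at most K in a graph given as adjacency and label dictionaries. Whether paths are counted once per direction, and whether non-simple paths are allowed, depends on the exact definition in the paper; this sketch counts each direction of a simple path.

      from collections import Counter

      def path_features(adj, labels, K=2):
          """Frequencies of labeled simple paths of length at most K.
          adj: node -> list of neighbours; labels: node -> atom label."""
          feats = Counter()
          def extend(path):
              if len(path) - 1 == K:
                  return
              for nxt in adj[path[-1]]:
                  if nxt not in path:
                      new = path + [nxt]
                      feats[tuple(labels[v] for v in new)] += 1
                      extend(new)
          for v in adj:
              feats[(labels[v],)] += 1          # length-0 paths (single atoms)
              extend([v])
          return feats

      # toy chain C-C-O
      print(path_features({0: [1], 1: [0, 2], 2: [1]}, {0: 'C', 1: 'C', 2: 'O'}, K=2))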

  • Sub-Category Optimization through Cluster Performance Analysis for Multi-View Multi-Pose Object Detection

    Dipankar DAS  Yoshinori KOBAYASHI  Yoshinori KUNO  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E94-D No:7  Page(s): 1467-1478

    The detection of object categories with large variations in appearance is a fundamental problem in computer vision. The appearance of an object category can change due to intra-class variation, background clutter, and changes in viewpoint and illumination. For object categories with large appearance changes, some form of sub-categorization is necessary. This paper proposes a sub-category optimization approach that automatically divides an object category into an appropriate number of sub-categories based on appearance variations. Instead of using a predefined intra-category sub-categorization based on domain knowledge or validation datasets, we divide the sample space by unsupervised clustering using discriminative image features. We then use a cluster performance analysis (CPA) algorithm to verify the performance of the unsupervised approach; the CPA algorithm uses two performance metrics to determine the optimal number of sub-categories per object category. Furthermore, using the optimal sub-category representation as the basis, a supervised multi-category detection system with a χ2 merging kernel function efficiently detects and localizes object categories within an image. Extensive experimental results are shown using standard databases and the authors' own database. The comparison results reveal that our approach outperforms state-of-the-art methods.
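
    As a concrete illustration of the detection back-end, a sketch that plugs an exponential chi-square kernel into a multi-class SVM via a precomputed Gram matrix. The sub-category optimization, the merging with other kernels, and the toy data below are not from the paper.

      import numpy as np
      from sklearn.metrics.pairwise import chi2_kernel
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X_train = rng.random((40, 16))            # toy non-negative histogram features
      y_train = rng.integers(3, size=40)        # toy sub-category labels
      X_test = rng.random((10, 16))

      K_train = chi2_kernel(X_train, X_train, gamma=0.5)
      K_test = chi2_kernel(X_test, X_train, gamma=0.5)
      clf = SVC(kernel="precomputed").fit(K_train, y_train)
      pred = clf.predict(K_test)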

  • Improving the Accuracy of Least-Squares Probabilistic Classifiers

    Makoto YAMADA  Masashi SUGIYAMA  Gordon WICHERN  Jaak SIMM  

     
    LETTER-Pattern Recognition

    Vol: E94-D No:6  Page(s): 1337-1340

    The least-squares probabilistic classifier (LSPC) is a computationally efficient alternative to kernel logistic regression. However, to ensure that its learned probabilities are non-negative, LSPC involves a post-processing step that rounds negative parameters up to zero, which can unexpectedly affect classification performance. To mitigate this problem, we propose a simple alternative scheme that directly rounds up the classifier's negative outputs rather than its negative parameters. Through extensive experiments, including real-world image classification and audio tagging tasks, we demonstrate that the proposed modification significantly improves classification accuracy while leaving the computational advantage of the original LSPC unchanged.
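
    The proposed post-processing is simple enough to state in a few lines; a sketch, assuming the classifier returns a matrix of possibly negative class-posterior outputs.

      import numpy as np

      def round_up_outputs(raw):
          """Clip negative class-posterior outputs to zero and renormalize rows.
          raw: (n_samples, n_classes) array of classifier outputs."""
          p = np.maximum(raw, 0.0)
          s = p.sum(axis=1, keepdims=True)
          safe = np.where(s > 0.0, s, 1.0)
          return np.where(s > 0.0, p / safe, 1.0 / p.shape[1])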

  • A Framework of Real Time Hand Gesture Vision Based Human-Computer Interaction

    Liang SHA  Guijin WANG  Xinggang LIN  Kongqiao WANG  

     
    PAPER-Vision

    Vol: E94-A No:3  Page(s): 979-989

    This paper presents a robust framework for vision-based hand-gesture human-computer interaction in realistic and challenging scenarios. To this end, several novel components are proposed. First, a hybrid approach automatically infers the starting position of hand gestures of interest by jointly optimizing the regions given by an offline skin model trained with Gaussian mixture models and a hand-gesture classifier trained with the AdaBoost technique. To track the hand consistently with kernel-based tracking, a semi-supervised feature selection strategy is then presented to choose feature subspaces that appropriately represent the properties of offline hand-skin cues and online foreground-background classification cues. Finally, taking the histogram of oriented gradients as the descriptor for hand gestures, a soft-decision approach is proposed for recognizing static hand gestures at locations where severe ambiguity occurs, and dynamic gestures based on hidden Markov models are employed for interaction. Experiments on various real video sequences show the superior performance of the proposed components. In addition, the whole framework is applicable to real-time applications on general computing platforms.

  • Laplacian Support Vector Machines with Multi-Kernel Learning

    Lihua GUO  Lianwen JIN  

     
    LETTER-Pattern Recognition

    Vol: E94-D No:2  Page(s): 379-383

    The Laplacian support vector machine (LSVM) is a semi-supervised framework that uses manifold regularization to learn from labeled and unlabeled data. However, the optimal kernel parameters of the LSVM are difficult to obtain. In this paper, we propose a multi-kernel LSVM (MK-LSVM) method that combines multi-kernel learning formulations with the LSVM. Our learning formulations assume that a set of base kernels is grouped, and they employ l2-norm regularization to automatically seek the optimal linear combination of base kernels. Experiments reveal that our method achieves better performance than the LSVM alone on synthetic data, the UCI Machine Learning Repository, and the Caltech Generic Object Classification database.

  • Optimal Gaussian Kernel Parameter Selection for SVM Classifier

    Xu YANG  HuiLin XIONG  Xin YANG  

     
    PAPER-Pattern Recognition

    Vol: E93-D No:12  Page(s): 3352-3358

    The performance of kernel-based learning algorithms, such as the SVM, depends heavily on the proper choice of the kernel parameter. It is therefore desirable for kernel machines to operate with an optimal kernel parameter that adapts well to the input data and the learning task. In this paper, we present a novel method for selecting the Gaussian kernel parameter by maximizing a class separability criterion, which measures the data distribution in the kernel-induced feature space and is invariant under any non-singular linear transformation. The experimental results show that both the class separability of the data in the kernel-induced feature space and the classification performance of the SVM classifier are improved by using the optimal kernel parameter.
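
    A simplified sketch of the selection loop under stated assumptions: a trace-ratio class separability (between-class over within-class scatter in the kernel-induced feature space) is computed directly from the Gram matrix, and the RBF width is chosen by grid search. The paper's exact, transformation-invariant criterion is not reproduced.

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel

      def class_separability(K, y):
          """Trace-ratio separability computed from the Gram matrix K and labels y."""
          y = np.asarray(y)
          n = len(y)
          tr_total = np.trace(K) - K.sum() / n           # trace of total scatter
          tr_within = 0.0
          for c in np.unique(y):
              Kc = K[np.ix_(y == c, y == c)]
              tr_within += np.trace(Kc) - Kc.sum() / Kc.shape[0]
          return (tr_total - tr_within) / tr_within

      def select_gamma(X, y, gammas):
          """Pick the RBF gamma maximizing the separability criterion."""
          return max(gammas, key=lambda g: class_separability(rbf_kernel(X, gamma=g), y))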

  • Gaussian Process Regression with Measurement Error

    Yukito IBA  Shotaro AKAHO  

     
    PAPER

    Vol: E93-D No:10  Page(s): 2680-2689

    Regression analysis that incorporates measurement error in the input variables is important in various applications. In this study, we consider this problem within the framework of Gaussian process regression; the proposed method can also be regarded as a generalization of kernel regression that includes errors in the regressors. A Markov chain Monte Carlo method is introduced, in which the infinite dimensionality of the Gaussian process is handled by a trick that exchanges the order of sampling the latent variables and the function. The proposed method is tested with artificial data.
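
    For reference, a sketch of the noise-free-input baseline that the paper generalizes: the standard Gaussian process regression posterior with an RBF kernel. The MCMC treatment of measurement error in the inputs is not reproduced; the hyperparameter values are illustrative.

      import numpy as np

      def gp_predict(X, y, X_star, ell=1.0, sf=1.0, noise=0.1):
          """Posterior mean and variance of standard GP regression (RBF kernel)."""
          def k(A, B):
              d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
              return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)
          Kxx = k(X, X) + noise ** 2 * np.eye(len(X))
          Kxs = k(X, X_star)
          mean = Kxs.T @ np.linalg.solve(Kxx, y)
          var = np.diag(k(X_star, X_star) - Kxs.T @ np.linalg.solve(Kxx, Kxs))
          return mean, var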

  • Superfast-Trainable Multi-Class Probabilistic Classifier by Least-Squares Posterior Fitting

    Masashi SUGIYAMA  

     
    PAPER

    Vol: E93-D No:10  Page(s): 2690-2701

    Kernel logistic regression (KLR) is a powerful and flexible classification algorithm, which possesses an ability to provide the confidence of class prediction. However, its training--typically carried out by (quasi-)Newton methods--is rather time-consuming. In this paper, we propose an alternative probabilistic classification algorithm called Least-Squares Probabilistic Classifier (LSPC). KLR models the class-posterior probability by the log-linear combination of kernel functions and its parameters are learned by (regularized) maximum likelihood. In contrast, LSPC employs the linear combination of kernel functions and its parameters are learned by regularized least-squares fitting of the true class-posterior probability. Thanks to this linear regularized least-squares formulation, the solution of LSPC can be computed analytically just by solving a regularized system of linear equations in a class-wise manner. Thus LSPC is computationally very efficient and numerically stable. Through experiments, we show that the computation time of LSPC is faster than that of KLR by two orders of magnitude, with comparable classification accuracy.
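
    A minimal sketch of the least-squares fitting idea, assuming an RBF kernel basis centered on the training points and a single shared regularizer: each class's coefficients come from one analytic linear solve, and negative outputs are rounded up at prediction time. This is a simplified illustration, not the reference implementation.

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel

      class LSPCSketch:
          def __init__(self, gamma=1.0, lam=0.1):
              self.gamma, self.lam = gamma, lam

          def fit(self, X, y):
              self.X_, self.classes_ = X, np.unique(y)
              K = rbf_kernel(X, gamma=self.gamma)
              A = K.T @ K + self.lam * np.eye(len(X))
              T = (y[:, None] == self.classes_[None, :]).astype(float)  # class indicators
              self.alpha_ = np.linalg.solve(A, K.T @ T)   # one regularized solve, all classes
              return self

          def predict_proba(self, X):
              p = rbf_kernel(X, self.X_, gamma=self.gamma) @ self.alpha_
              p = np.maximum(p, 1e-12)                    # round negatives up
              return p / p.sum(axis=1, keepdims=True)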
