
Author Search Result

[Author] Lihua GUO (5 hits)

  • Locality-Constrained Multi-Task Joint Sparse Representation for Image Classification

    Lihua GUO  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E96-D No:9, Page(s): 2177-2181

    In image classification applications, a test sample described by multiple hand-crafted features can be sparsely represented by a few training samples. Motivated by the success of multi-task joint sparse representation (MTJSR), this paper considers that the different feature modalities are subject not only to a joint-sparsity constraint across tasks but also to a local manifold-structure constraint across features. We introduce the local manifold-structure constraint into the MTJSR framework and propose the locality-constrained multi-task joint sparse representation method (LC-MTJSR). When optimizing the formulated objective, stochastic gradient descent is used to guarantee a fast convergence rate, which is essential for large-scale image categorization. Experiments on several challenging object classification datasets show that the proposed algorithm outperforms MTJSR and is competitive with state-of-the-art multiple kernel learning methods.
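
    As a rough illustration only, here is a minimal NumPy sketch of one plausible form of such an objective, solved by plain gradient descent: a per-modality reconstruction loss, a smoothed l2,1 term for joint sparsity across tasks, and a locality penalty that discourages coefficients on far-away training atoms. The function name, the exact locality term, and all hyperparameters are assumptions, not the authors' formulation.

      import numpy as np

      def lc_mtjsr(X_tasks, y_tasks, lam=0.1, gamma=0.1, lr=1e-3, n_iter=500, eps=1e-8):
          # X_tasks[k]: (d_k, n) dictionary of n training atoms for modality k
          # y_tasks[k]: (d_k,)  test feature for modality k
          K = len(X_tasks)
          n = X_tasks[0].shape[1]
          W = np.zeros((n, K))  # one coefficient column per feature modality
          # locality adaptors: distance from the test feature to each training atom
          D = np.stack([np.linalg.norm(X - y[:, None], axis=0)
                        for X, y in zip(X_tasks, y_tasks)], axis=1)
          for _ in range(n_iter):
              row = np.sqrt((W ** 2).sum(axis=1) + eps)  # smoothed l2,1 row norms
              grad = lam * W / row[:, None] + 2.0 * gamma * (D ** 2) * W
              for k, (X, y) in enumerate(zip(X_tasks, y_tasks)):
                  grad[:, k] += 2.0 * X.T @ (X @ W[:, k] - y)  # reconstruction term
              W -= lr * grad
          return W

    For classification, one would then compare per-class reconstruction residuals computed from each class's rows of W and assign the label with the smallest residual, as in standard sparse-representation classifiers.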

  • From Easy to Difficult: A Self-Paced Multi-Task Joint Sparse Representation Method

    Lihua GUO  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2018/05/16, Vol: E101-D No:8, Page(s): 2115-2122

    Multi-task joint sparse representation (MTJSR) is an efficient multi-task learning (MTL) method that solves different problems together through a shared sparse representation. Motivated by the human learning mechanism of self-paced learning, in which tasks are gradually trained from easy to difficult, I apply this mechanism to MTJSR and propose a multi-task joint sparse representation with self-paced learning (MTJSR-SP) algorithm. In MTJSR-SP, the self-paced learning mechanism is treated as a regularizer of the objective function, and an iterative optimization is applied to solve it. Compared with traditional MTL methods, MTJSR-SP is more robust to noise and outliers. Experimental results on several datasets, i.e., two synthesized datasets, four datasets from the UCI machine learning repository, the Oxford flower dataset, and the Caltech-256 image categorization dataset, validate the effectiveness of MTJSR-SP.
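
    The abstract does not spell out the solver, but the hard self-paced weighting scheme it builds on (admit sample i only if its current loss falls below a threshold that grows each round) can be sketched with an ordinary ridge-regression learner standing in for the MTJSR model. Everything below, including the schedule parameters, is an illustrative assumption rather than the paper's MTJSR-SP algorithm.

      import numpy as np

      def spl_ridge(X, y, thresh0=0.5, growth=1.3, n_rounds=5, ridge=1e-3):
          n, d = X.shape
          v = np.ones(n)  # start with every sample admitted
          thresh = thresh0
          for _ in range(n_rounds):
              # (1) weighted ridge fit on the currently selected "easy" samples
              Xv = X * v[:, None]
              w = np.linalg.solve(Xv.T @ X + ridge * np.eye(d), Xv.T @ y)
              # (2) hard self-paced reselection: keep samples with small loss
              loss = (X @ w - y) ** 2
              v = (loss < thresh).astype(float)
              thresh *= growth  # raise the bar so harder samples enter next round
          return w, v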

  • Self-Paced Learning with Statistics Uncertainty Prior

    Lihua GUO  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2017/12/13, Vol: E101-D No:3, Page(s): 812-816

    Self-paced learning (SPL) gradually trains on data from easy to hard, including more data in the training process in a self-paced manner. The advantage of SPL is its ability to avoid bad local minima, which improves the system's generalization performance. However, an SPL system needs an expert to judge the complexity of the data at the start of training. In general, no such expert exists at the beginning; it is instead learned by gradually training on the samples. Based on this consideration, we add uncertainty to the complexity judgment in the SPL system and propose self-paced learning with an uncertainty prior (SPUP). To solve the system's optimization function efficiently, an iterative optimization and a statistical simulated annealing method are introduced. The experimental results indicate that SPUP is more robust to outliers and achieves higher accuracy and lower error than SPL.
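
    One plausible reading of the uncertainty prior is to make the easy/hard judgment stochastic and anneal the randomness away, which also matches the mention of simulated annealing. The sketch below replaces the hard SPL rule with a Bernoulli draw whose temperature would be decreased across training rounds; it is an assumption for illustration, not the published SPUP procedure.

      import numpy as np

      def spup_select(losses, thresh, temperature, rng=None):
          # Noisy version of the hard SPL rule v_i = 1[loss_i < thresh]:
          # admit sample i with probability sigmoid((thresh - loss_i) / T).
          # Large T means high uncertainty about the easy/hard judgment;
          # annealing T toward 0 recovers the deterministic rule.
          if rng is None:
              rng = np.random.default_rng(0)
          p = 1.0 / (1.0 + np.exp(-(thresh - losses) / temperature))
          return (rng.random(losses.shape) < p).astype(float)

    Plugged into a loop like the self-paced ridge example above, one would call spup_select in place of the hard thresholding step and shrink the temperature by a decay factor each round.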

  • Laplacian Support Vector Machines with Multi-Kernel Learning

    Lihua GUO  Lianwen JIN  

     
    LETTER-Pattern Recognition

    Vol: E94-D No:2, Page(s): 379-383

    The Laplacian support vector machine (LSVM) is a semi-supervised framework that uses manifold regularization to learn from labeled and unlabeled data. However, the optimal kernel parameters of the LSVM are difficult to obtain. In this paper, we propose a multi-kernel LSVM (MK-LSVM) method that combines multi-kernel learning formulations with the LSVM. Our learning formulations assume that a set of base kernels is grouped, and they employ l2-norm regularization to automatically seek the optimal linear combination of the base kernels. Experiments on synthetic data, the UCI Machine Learning Repository, and the Caltech database of generic object classification reveal that our method achieves better performance than the LSVM alone.
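
    As a toy stand-in for the alternating optimization (kernel ridge regression replaces the Laplacian SVM, and the RBF bandwidth grid is assumed), the sketch below alternates between solving for dual coefficients under the combined kernel and rescaling the kernel weights onto the unit l2 sphere according to each kernel's contribution.

      import numpy as np

      def rbf_kernel(X, gamma):
          sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
          return np.exp(-gamma * sq)

      def l2_mkl_krr(X, y, gammas=(0.1, 1.0, 10.0), ridge=1e-2, n_rounds=10):
          Ks = [rbf_kernel(X, g) for g in gammas]     # base kernels (assumed RBF grid)
          beta = np.ones(len(Ks)) / np.sqrt(len(Ks))  # uniform start, ||beta||_2 = 1
          n = X.shape[0]
          for _ in range(n_rounds):
              # (1) dual fit under the current combined kernel
              K = sum(b * Km for b, Km in zip(beta, Ks))
              alpha = np.linalg.solve(K + ridge * np.eye(n), y)
              # (2) reweight each kernel by its contribution, renormalize to unit l2 norm
              contrib = np.array([max(float(alpha @ Km @ alpha), 0.0) for Km in Ks])
              beta = contrib / (np.linalg.norm(contrib) + 1e-12)
          return alpha, beta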

  • Manifold Kernel Metric Learning for Larger-Scale Image Annotation

    Lihua GUO  

     
    LETTER-Pattern Recognition

    Publicized: 2015/04/03, Vol: E98-D No:7, Page(s): 1396-1400

    An appropriate similarity measure between images is one of the key techniques in search-based image annotation models. To capture the nonlinear relationships between visual features and image semantics, many kernel distance metric learning (KML) algorithms have been developed. However, when challenged with large-scale image annotation, their metrics cannot explicitly represent the similarity between image semantics, and the algorithms suffer from high computational cost, so they lose their efficiency. In this paper, we propose a manifold kernel metric learning (M_KML) algorithm that simultaneously learns the manifold structure and the image annotation metrics. The main merit of M_KML is that its distance metrics are built on the interior manifold structure of the image features, and the dimensionality reduction on the manifold structure handles the high-dimensionality challenge faced by KML. Experiments comparing M_KML with state-of-the-art image annotation approaches verify its efficiency and effectiveness.
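
    As a cheap proxy for the manifold side of the method (illustration only, not M_KML itself), the following sketch computes a Laplacian-eigenmaps embedding: build a symmetric kNN graph with heat-kernel weights, form the normalized graph Laplacian, and take its bottom nontrivial eigenvectors as low-dimensional coordinates in which Euclidean distances respect the data manifold.

      import numpy as np

      def laplacian_eigenmap(X, n_neighbors=10, n_dims=2, sigma=1.0):
          n = X.shape[0]
          sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
          # symmetric kNN graph with heat-kernel edge weights
          W = np.zeros((n, n))
          idx = np.argsort(sq, axis=1)[:, 1:n_neighbors + 1]  # column 0 is the point itself
          for i in range(n):
              W[i, idx[i]] = np.exp(-sq[i, idx[i]] / (2.0 * sigma ** 2))
          W = np.maximum(W, W.T)
          deg = W.sum(axis=1)
          d_inv_sqrt = np.diag(1.0 / np.sqrt(deg + 1e-12))
          L_sym = np.eye(n) - d_inv_sqrt @ W @ d_inv_sqrt  # normalized graph Laplacian
          vals, vecs = np.linalg.eigh(L_sym)  # eigenvalues in ascending order
          # drop the trivial constant eigenvector, keep the next n_dims coordinates
          return d_inv_sqrt @ vecs[:, 1:n_dims + 1]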