Keyword Search Result

[Keyword] appearance model (6 hits)

Results 1-6 of 6
  • Robust and Adaptive Object Tracking via Correspondence Clustering

    Bo WU  Yurui XIE  Wang LUO  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2016/06/23
    Vol: E99-D No:10
    Page(s): 2664-2667

    We propose a new visual tracking method in which the target appearance is represented by a combination of color distribution and keypoints. First, the object is localized via a keypoint-based tracking and matching strategy, with a new clustering method to remove outlier correspondences. Second, the tracking confidence is evaluated against the color template; according to this confidence, local and global keypoint matching are performed adaptively. Finally, we propose a target appearance update method in which newly observed appearances are learned and added to the target model. The proposed tracker is compared with five state-of-the-art tracking methods on a recent benchmark dataset; both qualitative and quantitative evaluations show that our method performs favorably.
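
    To illustrate the keypoint matching with outlier removal sketched in this abstract, the snippet below matches ORB keypoints between consecutive frames and keeps only the dominant motion cluster. The median-displacement filter merely stands in for the paper's correspondence-clustering method, which the abstract does not detail; the library choice, names, and thresholds are assumptions.

    ```python
    # Hypothetical sketch: ORB keypoint matching between frames, followed by a
    # crude displacement-based filter standing in for correspondence clustering.
    import cv2
    import numpy as np

    def match_and_filter(prev_gray, curr_gray):
        """Match keypoints between frames and keep the dominant motion cluster."""
        orb = cv2.ORB_create(500)
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        if des1 is None or des2 is None:
            return []
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        if not matches:
            return []
        # Displacement vector of each correspondence.
        disp = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                         for m in matches])
        # Keep matches whose displacement lies near the median displacement;
        # the paper's actual clustering method is richer than this.
        dist = np.linalg.norm(disp - np.median(disp, axis=0), axis=1)
        keep = dist < 2.0 * (np.median(dist) + 1e-6)
        return [m for m, k in zip(matches, keep) if k]
    ```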

  • Robust Superpixel Tracking with Weighted Multiple-Instance Learning

    Xu CHENG  Nijun LI  Tongchi ZHOU  Lin ZHOU  Zhenyang WU  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2015/01/15
    Vol: E98-D No:4
    Page(s): 980-984

    This paper proposes a robust superpixel-based tracker via multiple-instance learning (MIL), which exploits the importance of instances and the mid-level features captured by superpixels for object tracking. We first present a superpixel-based appearance model that computes confidences for the object and the background. Most importantly, we introduce sample importance into the MIL procedure to improve tracking performance: the importance of each instance in the positive bag is defined by accumulating the confidences of all pixels within that instance. Furthermore, the superpixel-based appearance model helps our tracker recover the object when drift occurs, and we retain information from the first (k-1) frames during the updating process to further alleviate drift. To evaluate the effectiveness of the proposed tracker, six video sequences covering different challenging situations are tested. The comparison results demonstrate that the proposed tracker is more robust and accurate than six state-of-the-art trackers.
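
    As a minimal sketch of the instance-importance idea, assuming a per-pixel confidence map and a superpixel segmentation are already available (the names are illustrative, not the authors' code):

    ```python
    # Hypothetical sketch: weight each positive-bag instance (superpixel) by the
    # accumulated per-pixel object confidence inside it, as the abstract describes.
    import numpy as np

    def instance_importance(confidence_map, superpixel_labels, bag_instances):
        """Return normalized importance weights for a non-empty positive bag.

        confidence_map   : HxW array of per-pixel object confidences.
        superpixel_labels: HxW array assigning each pixel a superpixel id.
        bag_instances    : list of superpixel ids forming the positive bag.
        """
        weights = np.array([confidence_map[superpixel_labels == sp].sum()
                            for sp in bag_instances], dtype=float)
        total = weights.sum()
        if total <= 0:                      # degenerate bag: fall back to uniform
            return np.ones(len(weights)) / len(weights)
        return weights / total
    ```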

  • Person Re-Identification as Image Retrieval Using Bag of Ensemble Colors

    Lu TIAN  Shengjin WANG  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E98-D No:1
    Page(s): 180-188

    Person re-identification is the challenging problem of matching observations of individuals across non-overlapping camera views. When pedestrians walk across disjoint camera views, continuous motion information is lost, so re-identification relies mainly on appearance matching. Person re-identification is in fact a special case of near-duplicate search in image retrieval: given a probe, the task is to find the gallery images containing the same person. Many state-of-the-art methods in image retrieval are based on the Bag-of-Words (BOW) model. By adapting the BOW model to this task, we propose Bag-of-Ensemble-Colors (BOEC) to tackle person re-identification. We combine low-level color histograms and semantic color names to represent human appearance, and we employ mature, efficient image-retrieval techniques in the model, including soft quantization, a burstiness-punishing strategy, and negative evidence. Taking a priori knowledge of human body structure into consideration, efficient spatial constraints are proposed to weaken the influence of the background. Extensive experiments on the VIPeR and ETHZ databases test the effectiveness of our approach, and promising results are obtained. Compared with other unsupervised methods, we achieve state-of-the-art performance, with recognition rates of 32.23% on VIPeR, 87% on ETHZ SEQ.#1, 83% on ETHZ SEQ.#2, and 91% on ETHZ SEQ.#3.
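
    The soft-quantization step at the core of this BOW-style color representation can be sketched as follows; this simplified illustration assumes an RGB (or Lab) color codebook and omits the color-name ensemble, burstiness punishment, negative evidence, and spatial constraints of the full BOEC model.

    ```python
    # Hypothetical sketch: soft-assign each pixel color to its k nearest codewords
    # with Gaussian weights, producing an L1-normalized color histogram.
    import numpy as np

    def soft_color_histogram(pixels, codebook, sigma=20.0, k=3):
        """pixels: Nx3 color array; codebook: Mx3 array of color codewords."""
        hist = np.zeros(len(codebook))
        # Pairwise distances between every pixel color and every codeword.
        d = np.linalg.norm(pixels[:, None, :] - codebook[None, :, :], axis=2)
        nearest = np.argsort(d, axis=1)[:, :k]         # k nearest codewords
        for i in range(len(pixels)):
            w = np.exp(-d[i, nearest[i]] ** 2 / (2.0 * sigma ** 2))
            hist[nearest[i]] += w / (w.sum() + 1e-12)  # soft votes sum to 1
        return hist / (hist.sum() + 1e-12)
    ```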

  • A Robust Visual Tracker with a Coupled-Classifier Based on Multiple Representative Appearance Models

    Deqian FU  Seong Tae JHANG  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E96-D No:8
    Page(s): 1826-1835

    To alleviate drift, the visual tracking problem that degrades almost all online visual trackers, we propose a robust visual tracker (the CCMM tracker) with a coupled classifier built on multiple representative appearance models. The coupled classifier consists of root and head classifiers based on local sparse representation. The two classifiers collaborate to fulfil the tracking task within a Bayesian tracking framework and to update their templates with a novel mechanism that aims to keep each update along the “right” orientation, which makes the tracker more robust against interference. Meanwhile, the multiple representative appearance models maintain features of the different submanifolds of target appearance that the target exhibited previously. These models cooperatively support the coupled classifier in recognizing the target in challenging cases (such as persistent disturbance, large appearance change, and recovery from occlusion) with an effective strategy. Extensive experiments demonstrate that the proposed tracker reduces drift and handles frequent, drastic appearance variation of the target against cluttered backgrounds.
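
    As a rough illustration of the kind of sparse-representation score such root and head classifiers build on, the sketch below reconstructs a candidate feature from target templates via l1-regularized regression and maps the reconstruction error to a confidence. The coupled root/head structure and the template update mechanism are not reproduced here; the names and parameters are assumptions.

    ```python
    # Hypothetical sketch: score a candidate by how well a dictionary of target
    # templates reconstructs it under a sparsity (l1) penalty.
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_confidence(candidate, templates, alpha=0.01):
        """candidate: d-dim feature vector; templates: d x n matrix of templates."""
        solver = Lasso(alpha=alpha, positive=True, max_iter=2000)
        solver.fit(templates, candidate)          # sparse coefficients over templates
        recon = templates @ solver.coef_          # reconstruction from the dictionary
        err = np.linalg.norm(candidate - recon) ** 2
        return np.exp(-err)                       # low error -> high confidence
    ```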

  • An Accurate User Position Estimation Method Using a Single Camera for 3D Display without Glasses

    Byeoung-su KIM  Cho-il LEE  Seong-hwan JU  Whoi-Yul KIM  

     
    PAPER-Pattern Recognition

    Vol: E96-D No:6
    Page(s): 1344-1350

    3D display systems without glasses are preferred because of the inconvenience of wearing special glasses while viewing 3D content. In general, glasses-free 3D displays work by sending the left and right views of the content to the corresponding eyes, depending on the user's position with respect to the display. Accurate user position estimation has therefore become a very important task for glasses-free 3D displays, yet most such systems require additional hardware or suffer from low accuracy. In this paper, an accurate user position estimation method using a single camera is proposed. Because the inter-pupillary distance is used for the estimation, the face is first detected and then tracked using an Active Appearance Model, and the face pose is estimated to compensate for pose variations. To estimate the user position, a simple perspective mapping function is applied using the average inter-pupillary distance; for higher accuracy, a personal inter-pupillary distance can also be used. Experimental results show that the proposed method successfully estimates the user position from a single camera, with an average position error small enough for viewing 3D content.
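
    Under a pinhole-camera assumption, the perspective mapping reduces to similar triangles: the viewing distance is the focal length (in pixels) times the real inter-pupillary distance divided by its apparent size in pixels. A minimal sketch, with an assumed 63 mm average inter-pupillary distance and illustrative parameter names:

    ```python
    # Hypothetical sketch: back-project the midpoint between the pupils to a 3D
    # user position using the pinhole model and a known inter-pupillary distance.
    import numpy as np

    AVG_IPD_MM = 63.0  # assumed average adult inter-pupillary distance

    def user_position(left_eye, right_eye, focal_px, cx, cy):
        """left_eye/right_eye: (u, v) pupil pixels; (cx, cy): principal point."""
        ipd_px = np.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])
        z = focal_px * AVG_IPD_MM / ipd_px        # depth from similar triangles
        mid_u = (left_eye[0] + right_eye[0]) / 2.0
        mid_v = (left_eye[1] + right_eye[1]) / 2.0
        x = (mid_u - cx) * z / focal_px           # back-project the midpoint
        y = (mid_v - cy) * z / focal_px
        return x, y, z                            # user position in millimetres
    ```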

  • A Model of Luminance-Adaptation for Quantifying Brightness in Mixed Visual Adapting Conditions

    Sung-Hak LEE  Kyu-Ik SOHNG  

     
    BRIEF PAPER

    Vol: E94-C No:11
    Page(s): 1768-1772

    A color appearance model provides brightness information and optimized display conditions for various viewing surroundings. However, under low-level illumination or low background reflectivity, its brightness estimation performs relatively poorly. Through psychophysical experiments, we therefore investigated the state of visual luminance adaptation, comparing single and mixed adaptation under a complex viewing field, and determined background adaptation degrees and exponential nonlinearity factors for a mixed-adaptation model. The resulting model yields more accurate brightness predictions according to the adapting luminance, which is determined from the object and background luminances.
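
    To make the mixed-adaptation idea concrete, the toy sketch below blends object and background luminances into one adapting luminance and applies a compressive nonlinearity. The functional form, the mixing degree p, and the exponent n are placeholders, not the paper's fitted model.

    ```python
    # Hypothetical sketch: brightness as a compressive response to luminance
    # relative to an adapting luminance mixed from background and object terms.
    def predicted_brightness(L, L_background, L_object, p=0.5, n=0.4):
        """L, L_background, L_object in cd/m^2; p = degree of background adaptation."""
        L_adapt = p * L_background + (1.0 - p) * L_object   # mixed adaptation
        return (L / (L + L_adapt)) ** n                     # compressive nonlinearity
    ```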