
Keyword Search Result

[Keyword] features (84 hits)

Results 21-40 of 84

  • Content-Based Superpixel Segmentation and Matching Using Its Region Feature Descriptors

    Jianmei ZHANG  Pengyu WANG  Feiyang GONG  Hongqing ZHU  Ning CHEN  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2020/04/27
    Vol: E103-D No:8
    Page(s): 1888-1900

    Finding the correspondence between two images of the same object or scene is an active research field in computer vision. This paper develops a rapid and effective Content-based Superpixel Image matching and Stitching (CSIS) scheme, which exploits the content of superpixels through a multi-feature fusion technique. Unlike popular keypoint-based matching methods, our approach implements image matching with features internal to superpixels. First, we generate superpixels with a novel content-based segmentation algorithm, named Content-based Superpixel Segmentation (CSS), in which superpixels are produced under a new distance metric that combines color, spatial, and gradient feature information and is designed to balance the compactness and boundary adherence of the resulting superpixels. We then calculate the entropy of each superpixel to single out superpixels with significant characteristics. Next, for each selected superpixel, a multi-feature descriptor is generated by extracting and fusing the local features of the superpixel itself. Finally, we compare the matching features of candidate superpixels and their neighborhoods to estimate the correspondence between the two images. We evaluated superpixel matching and image stitching on complex and deformable surfaces using our superpixel region descriptors, and the results show that the new method is effective in both matching accuracy and execution speed.
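
    The CSS distance metric is not given in full in this abstract; below is a minimal sketch, assuming a SLIC-style weighted combination of color, spatial, and gradient distances, with the weights w_c, w_s, and w_g as illustrative parameters rather than the paper's values.

    ```python
    import numpy as np

    def css_distance(color_a, color_b, pos_a, pos_b, grad_a, grad_b,
                     w_c=1.0, w_s=0.5, w_g=0.25):
        """Hypothetical CSS-style distance: a weighted sum of color, spatial,
        and gradient terms (weights are assumptions, not the paper's)."""
        d_color = np.linalg.norm(color_a - color_b)  # e.g. CIELAB difference
        d_space = np.linalg.norm(pos_a - pos_b)      # pixel-coordinate distance
        d_grad = np.linalg.norm(grad_a - grad_b)     # gradient-feature difference
        return w_c * d_color + w_s * d_space + w_g * d_grad
    ```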

  • Real-Time Generic Object Tracking via Recurrent Regression Network

    Rui CHEN  Ying TONG  Ruiyu LIANG  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2019/12/20
    Vol: E103-D No:3
    Page(s): 602-611

    Deep neural networks have achieved great success in visual tracking by learning generic representations and leveraging large amounts of training data to improve performance. However, most generic object trackers are trained from scratch online and do not benefit from the large number of videos available for offline training. We present a real-time generic object tracker that incorporates temporal information into its model, learns from many examples offline, and updates quickly online. During training, the pre-trained convolution-layer weights are updated with a lag, and the input video sequence length is gradually increased for fast convergence. Furthermore, only the hidden states of the recurrent network are updated online, which guarantees real-time tracking speed. Experimental results show that the proposed method tracks objects at 150 fps with a higher predicted overlap rate, and is more robust on multiple benchmarks than state-of-the-art trackers.
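
    As a generic illustration of updating only the hidden states at tracking time, here is a vanilla RNN cell standing in for the authors' recurrent regression network (the architecture and variable names are assumptions):

    ```python
    import numpy as np

    class FrozenRNNTracker:
        """Toy recurrent regressor: the weights are fixed after offline
        training; online tracking only propagates the hidden state h."""
        def __init__(self, W_x, W_h, W_o, b_h, b_o):
            self.W_x, self.W_h, self.W_o = W_x, W_h, W_o
            self.b_h, self.b_o = b_h, b_o
            self.h = np.zeros(W_h.shape[0])

        def step(self, features):
            # No gradient updates here -- only self.h changes per frame.
            self.h = np.tanh(self.W_x @ features + self.W_h @ self.h + self.b_h)
            return self.W_o @ self.h + self.b_o  # e.g. bounding-box regression
    ```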

  • Phase-Based Periocular Recognition with Texture Enhancement (Open Access)

    Luis Rafael MARVAL-PÉREZ  Koichi ITO  Takafumi AOKI  

     
    PAPER-Image

    Vol: E102-A No:10
    Page(s): 1351-1363

    Access control and surveillance applications such as walk-through security gates and immigration control points have a great demand for convenient and accurate biometric recognition in unconstrained scenarios with low user cooperation. The periocular region, a relatively new biometric trait, has been attracting much attention for recognizing individuals in such scenarios. This paper proposes a periocular recognition method that combines Phase-Based Correspondence Matching (PB-CM) with a texture enhancement technique. PB-CM has demonstrated high recognition performance for other biometric traits, e.g., face, palmprint and finger-knuckle-print. A major limitation for the periocular region, however, is that the performance of PB-CM degrades when the periocular skin has poor texture. We address this problem by applying texture enhancement, and found that variance normalization of texture significantly improves the performance of periocular recognition using PB-CM. Experimental evaluation using three public databases demonstrates the advantage of the proposed method over conventional methods.
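
    The abstract names variance normalization as the decisive enhancement step; a minimal sketch of local variance normalization (window size and epsilon are assumed parameters) could look like:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def variance_normalize(img, win=15, eps=1e-6):
        """Subtract the local mean and divide by the local standard deviation,
        flattening texture contrast across the periocular region."""
        img = img.astype(np.float64)
        mean = uniform_filter(img, size=win)
        sq_mean = uniform_filter(img ** 2, size=win)
        std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
        return (img - mean) / (std + eps)
    ```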

  • MF-CNN: Traffic Flow Prediction Using Convolutional Neural Network and Multi-Features Fusion

    Di YANG  Songjiang LI  Zhou PENG  Peng WANG  Junhui WANG  Huamin YANG  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2019/05/20
    Vol: E102-D No:8
    Page(s): 1526-1536

    Accurate traffic flow prediction is a precondition for many applications in Intelligent Transportation Systems, such as traffic control and route guidance. Traditional data-driven traffic flow prediction models tend to ignore traffic self-features (e.g., periodicities) and commonly suffer from shifts brought about by various complex factors (e.g., weather and holidays), which reduces the precision and robustness of the predictions. To tackle this problem, we propose a CNN-based multi-feature predictive model (MF-CNN) that collectively predicts network-scale traffic flow from multiple spatiotemporal features and external factors (weather and holidays). Specifically, we classify traffic self-features into temporal continuity as a short-term feature and daily and weekly periodicity as long-term features, then map them to three two-dimensional matrices, each composed of a time dimension and a space dimension. The high-level spatiotemporal features learned by CNNs from the matrices with different time lags are further fused with the external factors by a logistic regression layer to derive the final prediction. Experimental results indicate that the MF-CNN model improves predictive performance over five baseline models and achieves a good trade-off between accuracy and efficiency.
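
    A hedged sketch of assembling the three time-space input matrices from a flow history array; the lag count, 5-minute sampling interval, and flow[time, location] layout are assumptions for illustration, not the paper's exact configuration:

    ```python
    import numpy as np

    def build_inputs(flow, t, n_lags=6, steps_per_day=288):
        """flow: array of shape (time_steps, n_locations).
        Returns three (n_lags, n_locations) matrices: recent flow for
        temporal continuity, plus daily- and weekly-periodic slices."""
        day, week = steps_per_day, steps_per_day * 7
        recent = flow[t - n_lags:t]
        daily = np.stack([flow[t - k * day] for k in range(1, n_lags + 1)])
        weekly = np.stack([flow[t - k * week] for k in range(1, n_lags + 1)])
        return recent, daily, weekly
    ```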

  • Combining 3D Convolutional Neural Networks with Transfer Learning by Supervised Pre-Training for Facial Micro-Expression Recognition

    Ruicong ZHI  Hairui XU  Ming WAN  Tingting LI  

     
    PAPER-Pattern Recognition

    Publicized: 2019/01/29
    Vol: E102-D No:5
    Page(s): 1054-1064

    Facial micro-expressions are momentary and subtle facial reactions, and automatically recognizing them with high accuracy remains challenging in practical applications. Extracting spatiotemporal features from facial image sequences is essential for facial micro-expression recognition. In this paper, we employ 3D Convolutional Neural Networks (3D-CNNs) for self-learned feature extraction, since 3D-CNNs can extract spatiotemporal features from facial image sequences and thus represent facial micro-expressions effectively. Moreover, transfer learning is utilized to deal with the insufficient samples in facial micro-expression databases: we first pre-train the 3D-CNNs on the normal facial expression database Oulu-CASIA by supervised learning, then transfer the pre-trained model to the target domain, the facial micro-expression recognition task. The proposed method was evaluated on two available facial micro-expression datasets, CASME II and SMIC-HS. We obtained overall accuracies of 97.6% on CASME II and 97.4% on SMIC, which are 3.4% and 1.6% higher, respectively, than the 3D-CNNs model without transfer learning, and the experimental results demonstrate that our method achieves performance superior to state-of-the-art methods.
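
    A minimal PyTorch-style sketch of the pre-train-then-transfer idea; the tiny architecture, class counts, and the choice to replace only the classifier head are assumptions, not the authors' network:

    ```python
    import torch.nn as nn

    class Tiny3DCNN(nn.Module):
        """Toy 3D-CNN over (batch, channels, frames, height, width) clips."""
        def __init__(self, n_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1))
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = Tiny3DCNN(n_classes=6)       # supervised pre-training (assumed 6 classes)
    # ... train on Oulu-CASIA normal facial expressions ...
    model.classifier = nn.Linear(32, 5)  # new head for micro-expression classes
    # ... fine-tune on CASME II / SMIC-HS ...
    ```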

  • Twofold Correlation Filtering for Tracking Integration

    Wei WANG  Weiguang LI  Zhaoming CHEN  Mingquan SHI  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2018/07/10
    Vol: E101-D No:10
    Page(s): 2547-2550

    In general, effectively integrating the advantages of different trackers can yield a unified performance improvement. In this work, we study the integration of multiple correlation filter (CF) trackers and propose a novel but simple integration method that combines different trackers at the filter level. Because the trackers differ in their correlation filters and features, their tracking results are not directly comparable for integration. To tackle this, we propose a twofold CF that unifies the various response maps so that the results of different tracking algorithms can be compared, boosting tracking performance in the manner of ensemble learning. Experiments integrating two CF methods on the OTB datasets demonstrate that the proposed method is effective and promising.
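
    The fusion rule is not spelled out in the abstract; one hypothetical way to make different CF response maps comparable is to normalize each map before combining, as sketched below (min-max normalization and equal weights are assumptions):

    ```python
    import numpy as np

    def fuse_response_maps(maps, weights=None):
        """Rescale each CF response map to [0, 1], average them, and
        return the peak location as the integrated tracking result."""
        weights = weights or [1.0 / len(maps)] * len(maps)
        fused = np.zeros_like(maps[0], dtype=np.float64)
        for m, w in zip(maps, weights):
            m = (m - m.min()) / (m.max() - m.min() + 1e-12)
            fused += w * m
        return np.unravel_index(np.argmax(fused), fused.shape)
    ```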

  • Extraction and Recognition of Shoe Logos with a Wide Variety of Appearance Using Two-Stage Classifiers

    Kazunori AOKI  Wataru OHYAMA  Tetsushi WAKABAYASHI  

     
    PAPER-Machine Vision and its Applications

    Publicized: 2018/02/16
    Vol: E101-D No:5
    Page(s): 1325-1332

    A logo is a symbolic presentation that is designed not only to identify a product manufacturer but also to attract the attention of shoppers. Shoe logos are a challenging subject for automatic extraction and recognition using image analysis techniques because they have characteristics that distinguish them from those of other products; that is, there is much within-class variation in the appearance of shoe logos. In this paper, we propose an automatic extraction and recognition method for shoe logos with a wide variety of appearance using a limited number of training samples. The proposed method employs maximally stable extremal regions for the initial region extraction, an iterative algorithm for region grouping, and gradient features and a support vector machine for logo recognition. The results of performance evaluation experiments using a logo dataset that consists of a wide variety of appearances show that the proposed method achieves promising performance for both logo extraction and recognition.

  • Deformable Part Model Based Arrhythmia Detection Using Time Domain Features

    Yuuka HIRAO  Yoshinori TAKEUCHI  Masaharu IMAI  Jaehoon YU  

     
    PAPER-Digital Signal Processing

    Vol: E100-A No:11
    Page(s): 2221-2229

    Heart disease is one of the major causes of death in many advanced countries. Preventing or treating heart disease requires an early diagnosis based on long-term electrocardiogram (ECG) examinations, yet analyzing this large amount of data can be a heavy burden on medical experts. To reduce that burden and support the analysis, this paper proposes an arrhythmia detection method based on a deformable part model, which absorbs individual variation in ECG waveforms and enables the detection of various arrhythmias. Moreover, to detect arrhythmias with low processing delay, the proposed method utilizes only time domain features. In experiments, the proposed method achieved a 0.91 F-measure for arrhythmia detection.
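
    As background for what time domain ECG features can look like, here is a hypothetical example computing RR-interval statistics from detected R-peaks; the sampling rate and the specific statistics are assumptions, not the paper's feature set:

    ```python
    import numpy as np

    def rr_features(r_peak_samples, fs=360.0):
        """RR intervals (seconds) and simple statistics from R-peak indices."""
        rr = np.diff(np.asarray(r_peak_samples)) / fs
        return {"mean_rr": rr.mean(),
                "std_rr": rr.std(),
                "rmssd": np.sqrt(np.mean(np.diff(rr) ** 2))}
    ```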

  • Modeling Content Structures of Domain-Specific Texts with RUP-HDP-HSMM and Its Applications

    Youwei LU  Shogo OKADA  Katsumi NITTA  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2017/06/09
    Vol: E100-D No:9
    Page(s): 2126-2137

    We propose a novel method, built upon the hierarchical Dirichlet process hidden semi-Markov model (HDP-HSMM), to reveal the content structures of unstructured domain-specific texts. The content structures of texts, consisting of sequential local contexts, are useful for tasks such as text retrieval, classification, and text mining. The prominent feature of our model is its use of recursive uniform partitioning, a stochastic process that takes a view different from existing HSMMs in modeling state duration. We show that recursive uniform partitioning plays an important role in avoiding rapid switching between hidden states. Remarkably, our method greatly outperforms others in ranking performance in our text retrieval experiments, and provides more accurate features for SVMs, yielding higher F1 scores in our text classification experiments. These results suggest that our method yields improved representations of domain-specific texts. Furthermore, we present a method for automatically discovering the local contexts that account for why a text is classified as a positive instance in supervised learning settings.

  • LTDE: A Layout Tree Based Approach for Deep Page Data Extraction

    Jun ZENG  Feng LI  Brendan FLANAGAN  Sachio HIROKAWA  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2017/02/21
    Vol: E100-D No:5
    Page(s): 1067-1078

    Content extraction from deep Web pages has received great attention in recent years. However, the increasingly complicated HTML structure of Web documents makes it difficult to recognize data records by analyzing the HTML source code alone. In this paper, we propose a method named LTDE to extract data records from deep Web pages. Instead of analyzing the HTML source code, LTDE utilizes the visual features of data records: a Web page is considered a finite set of visual blocks, and the data records are the visual blocks that share a similar layout. We also propose a pattern recognition method named the layout tree to cluster visual blocks with similar layouts. The weight of each cluster is calculated, and the visual blocks in the cluster with the highest weight are chosen as the data records to be extracted. Experimental results show that LTDE achieves higher effectiveness and better robustness for Web data extraction than previous works.

  • Recognition of Online Handwritten Math Symbols Using Deep Neural Networks

    Hai DAI NGUYEN  Anh DUC LE  Masaki NAKAGAWA  

     
    PAPER-Pattern Recognition

    Publicized: 2016/08/30
    Vol: E99-D No:12
    Page(s): 3110-3118

    This paper presents deep learning methods for recognizing online handwritten mathematical symbols. Recently, deep learning architectures such as convolutional neural networks (CNNs), deep neural networks (DNNs), recurrent neural networks (RNNs), and Long Short-Term Memory (LSTM) RNNs have been applied to computer vision, speech recognition, and natural language processing, where they have shown performance superior to state-of-the-art methods on various tasks. In this paper, maxout-based CNNs are applied to image patterns created from online patterns, Bidirectional LSTM (BLSTM) networks are applied to the original online patterns, and the two are then combined. They are compared with traditional recognition methods, namely MRFs and MQDFs, through recognition experiments on the CROHME database, along with analysis and explanation.

  • Object Detection Based on Image Blur Evaluated by Discrete Fourier Transform and Haar-Like Features

    Ryusuke MIYAMOTO  Shingo KOBAYASHI  

     
    PAPER-Image

    Vol: E99-A No:11
    Page(s): 1990-1999

    In general, in-focus images are used in visual object detection because image blur is considered a factor that reduces detection accuracy. However, in-focus images make it difficult to separate target objects from the background, which makes visual object detection a hard task. Background subtraction and inter-frame differencing are well-known schemes for separating target objects from the background, but they have a critical disadvantage: they cannot be used when the illumination changes or the viewpoint moves. Considering these problems, the authors aim to improve detection accuracy by using images with out-of-focus blur obtained from a camera with a shallow depth of field, in which target objects are expected to be in focus while other regions are blurred. To enable visual object detection based on such image blur, this paper proposes a novel scheme using DFT-based feature extraction. In experiments using synthetic images with circle, star, and square targets, a classifier constructed by the proposed scheme showed a 2.40% miss rate at 0.1 FPPI, and perfect detection was achieved for star and square objects. In addition, the proposed scheme achieved perfect detection of humans in natural images when the upper half of the human body was trained. This accuracy is better than that of Filtered Channel Features, one of the state-of-the-art schemes for visual object detection. These results indicate that the proposed scheme is highly feasible for visual object detection based on image blur.
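
    A hedged sketch of one way to quantify blur with the DFT, comparing high-frequency to total spectral energy in a patch; the radial cutoff is an assumed parameter, not the paper's exact feature:

    ```python
    import numpy as np

    def blur_feature(patch, cutoff=0.25):
        """Ratio of high-frequency to total spectral energy of a grayscale
        patch; in-focus regions retain more high-frequency energy."""
        spec = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
        h, w = spec.shape
        yy, xx = np.ogrid[:h, :w]
        r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
        return spec[r > cutoff].sum() / (spec.sum() + 1e-12)
    ```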

  • Spectral Features Based on Local Normalized Center Moments for Speech Emotion Recognition

    Huawei TAO  Ruiyu LIANG  Xinran ZHANG  Li ZHAO  

     
    LETTER-Speech and Hearing

    Vol: E99-A No:10
    Page(s): 1863-1866

    To examine whether rotational invariance plays the main role in spectrogram features, new spectral features based on local normalized central moments, denoted LNCMSF, are proposed. LNCMSF first adopts 2nd-order normalized central moments to describe the local energy distribution of the logarithmic energy spectrum, yielding the normalized central moment spectrograms NC1 and NC2. Secondly, the discrete cosine transform (DCT) is used to eliminate the correlation within NC1 and NC2, producing the high-order cepstral coefficients TNC1 and TNC2. Finally, LNCMSF is generated by combining NC1, NC2, TNC1 and TNC2. A rotational invariance test shows that rotational invariance is not a necessary property of partial spectrogram features. Recognition experiments show that the maximum UA (unweighted average of class-wise recall rates) of LNCMSF improves on that of MFCC (Mel Frequency Cepstral Coefficients) and HuWSF (weighted spectral features based on local Hu moments) by at least 10.7% and 1.2%, respectively.
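
    A sketch of 2nd-order normalized central moments over a local block of a log-spectrogram, treating block values as a mass distribution; the paper's exact blocking and normalization may differ:

    ```python
    import numpy as np

    def normalized_central_moments(block):
        """Return (eta_20, eta_02) for a non-negative 2-D block, using
        eta_pq = mu_pq / m00 ** ((p + q) / 2 + 1)."""
        h, w = block.shape
        yy, xx = np.mgrid[:h, :w]
        m00 = block.sum() + 1e-12
        yc, xc = (yy * block).sum() / m00, (xx * block).sum() / m00
        mu20 = ((yy - yc) ** 2 * block).sum()
        mu02 = ((xx - xc) ** 2 * block).sum()
        return mu20 / m00 ** 2, mu02 / m00 ** 2
    ```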

  • Spectral Features Based on Local Hu Moments of Gabor Spectrograms for Speech Emotion Recognition

    Huawei TAO  Ruiyu LIANG  Cheng ZHA  Xinran ZHANG  Li ZHAO  

     
    LETTER-Pattern Recognition

    Publicized: 2016/05/06
    Vol: E99-D No:8
    Page(s): 2186-2189

    To improve the recognition rate of speech emotion, new spectral features based on local Hu moments of Gabor spectrograms, denoted GSLHu-PCA, are proposed. Firstly, the logarithmic energy spectrum of the emotional speech is computed. Secondly, Gabor spectrograms are obtained by convolving the logarithmic energy spectrum with Gabor wavelets. Thirdly, Gabor local Hu moment (GLHu) spectrograms are obtained through a block Hu strategy, and the discrete cosine transform (DCT) is then used to eliminate correlation among the components of the GLHu spectrograms. Fourthly, statistical features are extracted from the cepstral coefficients of the GLHu spectrograms and assembled into a feature vector. Finally, principal component analysis (PCA) is used to reduce feature redundancy. Experimental results on the EmoDB and ABC databases validate the effectiveness of GSLHu-PCA.

  • Real-Time Hardware Implementation of a Sound Recognition System with In-Field Learning

    Mauricio KUGLER  Teemu TOSSAVAINEN  Miku NAKATSU  Susumu KUROYANAGI  Akira IWATA  

     
    PAPER-Speech and Hearing

    Publicized: 2016/03/30
    Vol: E99-D No:7
    Page(s): 1885-1894

    The development of assistive devices for automated sound recognition is an important field of research and has been receiving increased attention. However, there are still very few methods specifically developed for identifying environmental sounds; the majority of existing approaches try to adapt speech recognition techniques to the task, usually incurring high computational complexity. This paper proposes a sound recognition method dedicated to environmental sounds, designed with its main focus on embedded applications. The pre-processing stage is loosely based on the human hearing system, while a robust set of binary features permits a simple k-NN classifier to be used. This gives the system the capability of in-field learning, by which new sounds can simply be added to the reference set in real time, greatly improving its usability. The system was implemented on an FPGA-based platform developed in-house specifically for this application. The design of the proposed method took into consideration several restrictions imposed by the hardware, such as limited computing power and memory, and supports up to 12 reference sounds of around 5.3 s each. Experiments were performed on a database of 29 sounds, with sensitivity and specificity evaluated over several random subsets of these signals. The obtained sensitivity and specificity without additional noise were 0.957 and 0.918, respectively; with +6 dB of pink noise added, they were 0.822 and 0.942. The in-field learning strategy showed no significant change in sensitivity and a total decrease of 5.4% in specificity when the number of reference sounds was progressively increased from 1 to 9 under noisy conditions. The minimal signal-to-noise ratio required by the prototype to correctly recognize sounds was between -8 dB and 3 dB. These results show that the proposed method and implementation have great potential for several real-life applications.
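
    A minimal sketch of why k-NN enables in-field learning: adding a new reference sound is just appending binary feature vectors, with no retraining. Hamming distance and k=3 are assumptions consistent with, but not taken from, the paper:

    ```python
    import numpy as np

    class InFieldKNN:
        """k-NN over binary feature vectors; new sounds are learned in
        real time by appending references."""
        def __init__(self, k=3):
            self.k, self.refs, self.labels = k, [], []

        def add_reference(self, bits, label):  # in-field learning step
            self.refs.append(np.asarray(bits, dtype=np.uint8))
            self.labels.append(label)

        def classify(self, bits):
            bits = np.asarray(bits, dtype=np.uint8)
            dists = [int(np.count_nonzero(bits ^ r)) for r in self.refs]
            nearest = np.argsort(dists)[:self.k]
            votes = [self.labels[i] for i in nearest]
            return max(set(votes), key=votes.count)  # majority vote
    ```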

  • Hybrid Retinal Image Registration Using Mutual Information and Salient Features

    Jaeyong JU  Murray LOEW  Bonhwa KU  Hanseok KO  

     
    LETTER-Biological Engineering

    Publicized: 2016/03/01
    Vol: E99-D No:6
    Page(s): 1729-1732

    This paper presents a method for registering retinal images. Retinal image registration is crucial for the diagnosis and treatment of various eye conditions and diseases such as myopia and diabetic retinopathy, but it is challenging because the images have non-uniform contrasts and intensity distributions as well as large homogeneous non-vascular regions. This paper provides a new retinal image registration method that effectively combines expectation-maximization principal component analysis based mutual information (EMPCA-MI) with salient features. Experimental results show that our method is more efficient and robust than the conventional EMPCA-MI method.
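
    The EMPCA-MI construction is beyond this abstract; for background, a plain histogram-based mutual information between two images (the bin count is an assumed parameter) can be computed as:

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Histogram-based MI between two equally sized grayscale images."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float(np.sum(pxy[nz] *
                            np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
    ```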

  • Efficient Two-Step Middle-Level Part Feature Extraction for Fine-Grained Visual Categorization

    Hideki NAKAYAMA  Tomoya TSUDA  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2016/02/23
    Vol: E99-D No:6
    Page(s): 1626-1634

    Fine-grained visual categorization (FGVC) has drawn increasing attention as an emerging research field in recent years. In contrast to generic-domain visual recognition, FGVC is characterized by high intra-class and subtle inter-class variations. To distinguish conceptually and visually similar categories, highly discriminative visual features must be extracted. Moreover, FGVC is highly specialized and task-specific in nature, so it is not always easy to obtain a sufficiently large-scale training dataset. Therefore, the key to success in practical FGVC systems is to efficiently exploit discriminative features from a limited number of training examples. In this paper, we propose an efficient two-step dimensionality compression method for deriving compact middle-level part-based features, comparing space-first and feature-first convolution schemes and investigating their effectiveness. Our approach is based on simple linear algebra and analytic solutions, and is highly scalable compared with the current one-vs-one or one-vs-all approaches, making it possible to quickly train middle-level features from a number of pairwise part regions. We experimentally show the effectiveness of our method on the standard Caltech-Birds and Stanford-Cars datasets.
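
    As a generic stand-in for the closed-form linear compression the abstract describes (not the authors' exact two-step scheme), a PCA projection computed analytically from the covariance eigendecomposition:

    ```python
    import numpy as np

    def pca_compress(X, n_components):
        """Project rows of X onto the top covariance eigenvectors; the
        solution is analytic, with no iterative training."""
        Xc = X - X.mean(axis=0)
        cov = Xc.T @ Xc / (len(X) - 1)
        _, vecs = np.linalg.eigh(cov)          # eigenvalues ascending
        W = vecs[:, ::-1][:, :n_components]    # top components first
        return Xc @ W, W
    ```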

  • Speaker-Independent Speech Emotion Recognition Based Multiple Kernel Learning of Collaborative Representation

    Cheng ZHA  Xinrang ZHANG  Li ZHAO  Ruiyu LIANG  

     
    LETTER-Engineering Acoustics

    Vol: E99-A No:3
    Page(s): 756-759

    We propose a novel multiple kernel learning (MKL) method with a collaborative representation constraint, called CR-MKL, for fusing emotion information from multi-level features. To this end, the similarity and distinctiveness of multi-level features are learned in the kernel-induced space using a weighted distance measure. By exploiting both voiced-level and unvoiced-level features, our method achieves better performance than existing methods.

  • VisualTextualRank: An Extension of VisualRank to Large-Scale Video Shot Extraction Exploiting Tag Co-occurrence

    Nga H. DO  Keiji YANAI  

     
    PAPER-Image Processing and Video Processing

    Vol: E98-D No:1
    Page(s): 166-172

    In this paper, we propose a novel ranking method called VisualTextualRank, which ranks media data according to its relevance to specified keywords. We apply the method to a video shot ranking system that automatically obtains video shots corresponding to given action keywords from Web videos. The keywords can be any type of action, such as “surfing wave” (a sport action) or “brushing teeth” (a daily activity), and top-ranked video shots are expected to be relevant to the keywords. While our baseline exploits only the visual features of the data, the proposed method employs both textual information (tags) and visual features. Our method is based on random walks over a bipartite graph that effectively integrates the visual information of video shots and the tag information of Web videos. Note that instead of treating the textual information as an additional feature for shot ranking, we explore the mutual reinforcement between shots and the textual information of their corresponding videos to improve shot ranking. We validated our framework on the database used by the baseline, and experiments showed that VisualTextualRank significantly improved the performance of the video shot extraction system over the baseline.
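
    A hedged sketch of ranking shots by a random walk over a shot-tag bipartite graph via power iteration; the damping factor and uniform restart are generic PageRank-style assumptions, not VisualTextualRank's exact formulation:

    ```python
    import numpy as np

    def bipartite_rank(A, damping=0.85, iters=50):
        """A[i, j] > 0 links shot i to tag j. Alternate shot->tag->shot
        steps, mixed with a uniform restart, and rank shots by score."""
        n_shots = A.shape[0]
        P = A / (A.sum(axis=1, keepdims=True) + 1e-12)      # shot -> tag
        Q = (A / (A.sum(axis=0, keepdims=True) + 1e-12)).T  # tag -> shot
        r = np.full(n_shots, 1.0 / n_shots)
        for _ in range(iters):
            r = damping * (r @ P @ Q) + (1 - damping) / n_shots
        return np.argsort(-r)  # shot indices, most relevant first
    ```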

  • Multiple Face Recognition Using Local Features and Swarm Intelligence

    Chidambaram CHIDAMBARAM  Hugo VIEIRA NETO  Leyza Elmeri Baldo DORINI  Heitor Silvério LOPES  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E97-D No:6
    Page(s): 1614-1623

    Face recognition plays an important role in security applications, but in real-world conditions face images are typically subject to issues that compromise recognition performance, such as geometric transformations, occlusions and changes in illumination. Most face detection and recognition work to date deals with single face images using global features and supervised learning. In contrast, we propose a multiple face recognition approach based on local features that does not rely on supervised learning. To deal with multiple face images under varying conditions, invariant and discriminative local features are extracted using the SURF (Speeded-Up Robust Features) approach, and the search for regions from which optimal features can be extracted is performed by an improved ABC (Artificial Bee Colony) algorithm. Thresholds and parameters for the SURF and improved ABC algorithms are determined experimentally. The approach was extensively assessed on 99 different still images; more than 400 trials were conducted using 20 target face images and still images under different acquisition conditions. The results show that our approach is promising for real-world face recognition applications involving different acquisition conditions and transformations.

Results 21-40 of 84