
Keyword Search Result

[Keyword] CRI (505 hits)

Results 141-160 of 505

  • A Method to Mitigate the Impact of Primary User Traffic on an Energy Detector in Spectrum Sensing

    Truc Thanh TRAN  Alagan S. ANPALAGAN  Hyung Yun KONG  

     
    PAPER-Wireless Communication Technologies
    Vol: E96-B No:6  Page(s): 1522-1530

    In this article, we propose a method to reduce the impact of primary traffic on spectrum sensing performance. In practice, sensing performance is degraded by noise-only samples within the sensing time. We therefore employ a detector for the time of arrival (ToA) of the primary user (PU) signal in order to remove the noise-only portion, and then apply equal-weight-based energy detection (EWED) to make the detection decision. The analysis and simulation results show that there exists an optimal early-ToA false-alarm rate which provides better performance than the use of a single EWED scheme alone.
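
    A minimal sketch of conventional equal-weight energy detection, with the paper's ToA pre-detector represented only by a hypothetical toa_index marking where the PU signal is assumed to start (the threshold uses the usual Gaussian approximation; this is not the authors' exact scheme):

```python
import numpy as np
from scipy.stats import norm

def ewed_decision(samples, noise_power, pfa=0.05):
    """Equal-weight energy detection: compare the average sample energy
    with a threshold chosen for a target false-alarm probability pfa."""
    n = len(samples)
    energy = np.mean(np.abs(samples) ** 2)
    # Gaussian approximation of the test statistic under H0 (noise only)
    threshold = noise_power * (1.0 + norm.ppf(1.0 - pfa) / np.sqrt(n))
    return energy > threshold

def toa_assisted_ewed(samples, noise_power, toa_index):
    """Illustration of the paper's idea: discard the noise-only prefix
    before the (estimated) PU arrival index, then run plain EWED."""
    return ewed_decision(samples[toa_index:], noise_power)

# Toy usage: complex noise, with a weak PU signal arriving halfway through
rng = np.random.default_rng(0)
samples = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
samples[500:] += 0.5
print(toa_assisted_ewed(samples, noise_power=1.0, toa_index=500))
```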

  • Evolutionarily and Neutrally Stable Strategies in Multicriteria Games

    Tomohiro KAWAMURA  Takafumi KANAZAWA  Toshimitsu USHIO  

     
    PAPER-Concurrent Systems
    Vol: E96-A No:4  Page(s): 814-820

    Evolutionary stability has been discussed as a fundamental issue in single-criterion games. We extend evolutionarily and neutrally stable strategies to multicriteria games. Since a payoff in a multicriteria game is given by a vector, we provide several concepts, based on partial orders of payoff vectors, that coincide in single-criterion games. We also investigate the hierarchical structure of the proposed evolutionarily and neutrally stable strategies. Shapley introduced concepts such as strong and weak equilibria; we discuss the relationship between these equilibria and our proposed evolutionary stability.
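
    For reference, the classical single-criterion definitions being generalized can be written as follows (a sketch: the paper replaces these scalar comparisons with partial orders on payoff vectors):

```latex
% u(x, y): payoff of strategy x against strategy y.
% x is an evolutionarily stable strategy (ESS) if, for every y \neq x,
u(x,x) > u(y,x)
\quad\text{or}\quad
\bigl[\, u(x,x) = u(y,x) \ \text{and} \ u(x,y) > u(y,y) \,\bigr].
% x is a neutrally stable strategy (NSS) if the last strict inequality
% is weakened to u(x,y) \ge u(y,y).
```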

  • A Low-Complexity Stopping Criterion for Turbo Decoding Using Forward State Metrics at a Single Time Instant

    Sun-Ting LIN  Shou-Sheu LIN  Je-An LAI  

     
    PAPER-Fundamental Theories for Communications
    Vol: E96-B No:3  Page(s): 722-729

    A stopping criterion is an indispensable function for reducing unnecessary power consumption and decoding delay in turbo decoding. Until now, a common design philosophy in previous works has been to use the entire block of information from the MAP decoder, together with its input/output information, to calculate the stopping index. This is intuitive but suffers from heavy memory requirements and high computational complexity. In this paper, a low-complexity stopping criterion is proposed that avoids these disadvantages. A general abstraction model is used to analyze the design bottleneck of stopping criteria. Instead of an entire block of information, a compact representation derived from the internal information of the MAP decoder at a single time instant is used as a low-complexity stopping index. A theoretical explanation is provided to justify the feasibility of the proposed criterion. Simulation results show that the proposed criterion dramatically reduces the complexity of the stopping criterion while achieving the same level of performance as previous works.

  • L1-Norm Based Linear Discriminant Analysis: An Application to Face Recognition

    Wei ZHOU  Sei-ichiro KAMATA  

     
    PAPER-Face Perception and Recognition
    Vol: E96-D No:3  Page(s): 550-558

    Linear Discriminant Analysis (LDA) is a well-known feature extraction method for supervised subspace learning in statistical pattern recognition. In this paper, a novel LDA method based on a new L1-norm optimization technique, together with its variants, is proposed. The conventional LDA, which is based on the L2-norm, is sensitive to the presence of outliers, since it uses the L2-norm to measure the between-class and within-class distances. In addition, the conventional LDA often suffers from the so-called small sample size (3S) problem, since in many applications, such as face recognition, the number of samples is smaller than the dimension of the feature space. The proposed L1-norm-based methods have several advantages: first, they are robust to outliers because the L1-norm is less sensitive to them; second, they have no 3S problem; third, they are invariant to rotations. The proposed methods substantially reduce the influence of outliers, resulting in robust classification. Performance assessment on face recognition shows that the proposed approaches address the outlier issue more effectively than traditional ones.
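
    For context, the contrast between the conventional L2-norm criterion and an L1-norm counterpart can be sketched as follows (a generic form; the paper's exact objective and optimization procedure may differ):

```latex
% Conventional LDA for one projection vector w (class means m_i, overall mean m,
% class sizes n_i, classes C_i):
J_{L2}(w) \;=\; \frac{w^\top S_b\, w}{w^\top S_w\, w}
         \;=\; \frac{\sum_{i=1}^{c} n_i \bigl(w^\top (m_i - m)\bigr)^2}
                    {\sum_{i=1}^{c} \sum_{x \in C_i} \bigl(w^\top (x - m_i)\bigr)^2}

% L1-norm counterpart: squared terms are replaced by absolute values,
% which reduces the influence of outliers on both scatters:
J_{L1}(w) \;=\; \frac{\sum_{i=1}^{c} n_i \bigl|\, w^\top (m_i - m) \,\bigr|}
                    {\sum_{i=1}^{c} \sum_{x \in C_i} \bigl|\, w^\top (x - m_i) \,\bigr|}
```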

  • Semi-Supervised Nonparametric Discriminant Analysis

    Xianglei XING  Sidan DU  Hua JIANG  

     
    LETTER-Pattern Recognition
    Vol: E96-D No:2  Page(s): 375-378

    We extend the Nonparametric Discriminant Analysis (NDA) algorithm to a semi-supervised dimensionality reduction technique, called Semi-supervised Nonparametric Discriminant Analysis (SNDA). SNDA preserves the inherent advantage of NDA, namely relaxing the Gaussian assumption required by traditional LDA-based methods. SNDA takes advantage of both the discriminating power provided by the NDA method and the locality-preserving power provided by manifold learning. Specifically, the labeled data points are used to maximize the separability between different classes, while both the labeled and unlabeled data points are used to build a graph incorporating neighborhood information of the data set. Experiments on synthetic as well as real datasets demonstrate the effectiveness of the proposed approach.
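
    A common way to combine these two ingredients is to regularize the discriminant criterion with a graph Laplacian built from all samples; the sketch below shows this general semi-supervised form (not necessarily the paper's exact formulation):

```latex
% S_b, S_w: (nonparametric) between- and within-class scatter from labeled data;
% X: all samples (labeled and unlabeled); L = D - W: Laplacian of a k-NN
% affinity graph over X; \beta: trade-off parameter.
w^\ast \;=\; \arg\max_{w}\;
  \frac{w^\top S_b\, w}
       {\,w^\top S_w\, w \;+\; \beta\, w^\top X L X^\top w\,}
```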

  • CBRISK: Colored Binary Robust Invariant Scalable Keypoints

    Huiyun JING  Xin HE  Qi HAN  Xiamu NIU  

     
    LETTER-Image Recognition, Computer Vision
    Vol: E96-D No:2  Page(s): 392-395

    BRISK (Binary Robust Invariant Scalable Keypoints) works dramatically faster than well-established algorithms (SIFT and SURF) while maintaining matching performance. However, BRISK relies only on intensity; color information in the image is ignored. In view of the importance of color information in vision applications, we propose CBRISK, a novel method that takes color information into account during keypoint detection and description. Instead of using the grayscale intensity image, the proposed approach detects keypoints in a photometric-invariant color space. On the basis of the binary intensity BRISK (original BRISK) descriptor, the proposed approach embeds a binary invariant color representation in the CBRISK descriptor. Experimental results show that CBRISK is more discriminative and robust than BRISK with respect to photometric variation.
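
    A minimal sketch of the general idea using OpenCV's BRISK implementation; the invariant space chosen here (normalized rgb chromaticity) and the way a color code is appended to the descriptor are illustrative assumptions, not the authors' exact construction:

```python
import cv2
import numpy as np

def cbrisk_like(bgr_image):
    """Detect BRISK keypoints on a photometric-invariant channel and append
    a crude binary color signature to each descriptor (illustrative only)."""
    img = bgr_image.astype(np.float32) + 1e-6
    chroma = img / img.sum(axis=2, keepdims=True)          # normalized rgb
    invariant = (chroma[:, :, 2] * 255).astype(np.uint8)   # r-chromaticity plane
    h, w = invariant.shape

    brisk = cv2.BRISK_create()
    keypoints, descriptors = brisk.detectAndCompute(invariant, None)

    # Append one byte per keypoint encoding the dominant chromaticity channel
    color_bits = []
    for kp in keypoints:
        x = min(int(round(kp.pt[0])), w - 1)
        y = min(int(round(kp.pt[1])), h - 1)
        color_bits.append(np.argmax(chroma[y, x]))
    color_bits = np.asarray(color_bits, dtype=np.uint8).reshape(-1, 1)
    return keypoints, np.hstack([descriptors, color_bits])
```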

  • Facial Micro-Expression Detection in Hi-Speed Video Based on Facial Action Coding System (FACS)

    Senya POLIKOVSKY  Yoshinari KAMEDA  Yuichi OHTA  

     
    PAPER-Pattern Recognition
    Vol: E96-D No:1  Page(s): 81-92

    Facial micro-expressions are fast and subtle facial motions that are considered one of the most useful external signs for detecting hidden emotional changes in a person. However, they are not easy to detect and measure, as they appear only for a short time and involve small muscle contractions in facial areas where salient features are not available. We propose a new computer vision method for detecting and measuring the timing characteristics of facial micro-expressions. The core of this method is a descriptor that combines pre-processing masks, histograms, and concatenation of spatio-temporal gradient vectors. The presented 3D gradient histogram descriptor is able to detect and measure the timing characteristics of fast and subtle changes of the facial skin surface. The method is specifically designed for the analysis of videos recorded with a high-speed 200 fps camera. Final classification of micro-expressions is done using a k-means classifier and a voting procedure. The Facial Action Coding System was used to annotate the appearance and dynamics of the expressions in our new high-speed micro-expression video database. The efficiency of the proposed approach was validated using this database.
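
    A toy sketch of the descriptor-plus-clustering pipeline: concatenate spatio-temporal gradient orientation histograms over a facial-region block and cluster the resulting vectors with k-means (the paper's masks, bin settings, and voting procedure are not reproduced; names and parameters here are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def st_gradient_histogram(cube, bins=8):
    """cube: (T, H, W) grayscale video block of one facial region.
    Returns concatenated histograms of spatial and temporal gradient
    orientations -- a crude stand-in for the 3D gradient descriptor."""
    gt, gy, gx = np.gradient(cube.astype(np.float32))
    spatial_angle = np.arctan2(gy, gx)                          # in-plane
    temporal_angle = np.arctan2(gt, np.hypot(gx, gy) + 1e-9)    # out-of-plane
    h1, _ = np.histogram(spatial_angle, bins=bins, range=(-np.pi, np.pi), density=True)
    h2, _ = np.histogram(temporal_angle, bins=bins, range=(-np.pi / 2, np.pi / 2), density=True)
    return np.concatenate([h1, h2])

# Toy usage: cluster descriptors of random blocks (placeholders for real video)
rng = np.random.default_rng(0)
blocks = [rng.random((16, 32, 32)) for _ in range(100)]
features = np.stack([st_gradient_histogram(b) for b in blocks])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(labels[:10])
```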

  • On d-Asymptotics for High-Dimensional Discriminant Analysis with Different Variance-Covariance Matrices

    Takanori AYANO  Joe SUZUKI  

     
    LETTER-Artificial Intelligence, Data Mining
    Vol: E95-D No:12  Page(s): 3106-3108

    In this paper we consider the two-class classification problem with high-dimensional data. It is important to identify a class of distributions for which good classification performance cannot be expected from any classifier. When the two population variance-covariance matrices are different, we give a reasonable sufficient condition on the distributions under which the misclassification rate converges to the worst value as the dimension of the data tends to infinity for any classifier. Our results can give guidelines for deciding whether or not an experiment is worth performing in many fields such as bioinformatics.

  • Software FMEA for Safety-Critical System Based on Co-analysis of System Model and Software Model

    Guoqi LI  

     
    LETTER-Dependable Computing
    Vol: E95-D No:12  Page(s): 3101-3105

    Software FMEA is valuable and practically used for the embedded software of safety-critical systems. In this paper, a novel method for software FMEA is presented based on co-analysis of a system model and a software model. The method is expected to detect the quantitative and dynamic effects of a targeted software failure. A typical application of the method is provided to illustrate the procedure and the applicable scenarios. In addition, a pattern is refined from the application for further reuse.

  • Classification of Prostate Histopathology Images Based on Multifractal Analysis

    Chamidu ATUPELAGE  Hiroshi NAGAHASHI  Masahiro YAMAGUCHI  Tokiya ABE  Akinori HASHIGUCHI  Michiie SAKAMOTO  

     
    PAPER-Pattern Recognition
    Vol: E95-D No:12  Page(s): 3037-3045

    Histopathology is the microscopic anatomical study of body tissues and is widely used as a cancer diagnosis method. Generally, pathologists examine the structural deviation of cellular and sub-cellular components to diagnose the malignancy of body tissues. These judgments are often subjective, depending on pathologists' skills and personal experience; computational diagnosis tools may circumvent these limitations and improve the reliability of diagnostic decisions. This paper proposes a prostate image classification method that extracts textural behavior using multifractal analysis. Fractal geometry describes the complexity of self-similar structures as a non-integer exponent called the fractal dimension. Natural complex structures (or images) are not self-similar, so a single exponent (the fractal dimension) may not be adequate to describe their complexity; multifractal analysis was introduced to describe the complexity as a spectrum of fractal dimensions. Based on a multifractal computation for digital images, we obtain two textural feature descriptors: i) local irregularity, α, and ii) global regularity, f(α). We exploit these multifractal feature descriptors with a texton-dictionary-based classification model to discriminate cancer/non-cancer tissues in histopathology images of H&E-stained prostate biopsy specimens. Moreover, we examine three other feature descriptors, the Gabor filter bank, the LM filter bank, and Haralick features, to benchmark the performance of the proposed method. Experimental results indicate that the proposed multifractal feature descriptor outperforms the other feature descriptors, achieving over 94% correct classification accuracy.
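
    For context, the two descriptors named above come from the standard multifractal formalism (a sketch of the usual definitions, not the paper's exact computation on digital images):

```latex
% Local irregularity: the Hoelder (singularity) exponent of a measure \mu
% at location x, estimated from boxes B(x, r) of shrinking radius r:
\alpha(x) \;=\; \lim_{r \to 0} \frac{\log \mu\bigl(B(x,r)\bigr)}{\log r}

% Global regularity: the multifractal spectrum, i.e. the dimension of the
% set of points sharing a given exponent:
f(\alpha) \;=\; \dim_H \{\, x : \alpha(x) = \alpha \,\}
```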

  • Risk-Based Semi-Supervised Discriminative Language Modeling for Broadcast Transcription

    Akio KOBAYASHI  Takahiro OKU  Toru IMAI  Seiichi NAKAGAWA  

     
    PAPER-Speech and Hearing
    Vol: E95-D No:11  Page(s): 2674-2681

    This paper describes a new method for semi-supervised discriminative language modeling, which is designed to improve the robustness of a discriminative language model (LM) obtained from manually transcribed (labeled) data. The discriminative LM is implemented as a log-linear model, which employs a set of linguistic features derived from word or phoneme sequences. The proposed semi-supervised discriminative modeling is formulated as a multi-objective optimization programming problem (MOP), which consists of two objective functions defined on labeled lattices and on automatic speech recognition (ASR) lattices of unlabeled data. The objectives are coherently designed based on expected risks that reflect information about word errors in the training data. The model is trained in a discriminative manner and obtained as a solution to the MOP. In transcribing Japanese broadcast programs, the proposed method reduced the word error rate by a relative 6.3% compared with that achieved by a conventional trigram LM.
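
    The multi-objective formulation described above can be sketched in the following general form (a hedged paraphrase; the paper's exact risk definitions and solution method are not reproduced):

```latex
% \Lambda: parameters of the log-linear LM.
% R_L(\Lambda): expected word-error-based risk over labeled lattices.
% R_U(\Lambda): expected risk over ASR lattices of unlabeled data.
\min_{\Lambda}\; \bigl( R_L(\Lambda),\; R_U(\Lambda) \bigr),
\qquad\text{e.g. via the scalarization}\quad
\min_{\Lambda}\; \lambda\, R_L(\Lambda) + (1 - \lambda)\, R_U(\Lambda),
\;\; 0 \le \lambda \le 1 .
```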

  • Dimensionality Reduction by Locally Linear Discriminant Analysis for Handwritten Chinese Character Recognition

    Xue GAO  Jinzhi GUO  Lianwen JIN  

     
    PAPER-Image Recognition, Computer Vision
    Vol: E95-D No:10  Page(s): 2533-2543

    Linear Discriminant Analysis (LDA) is one of the most popular dimensionality reduction techniques in existing handwritten Chinese character (HCC) recognition systems. However, when used for unconstrained handwritten Chinese character recognition, the traditional LDA algorithm is prone to two problems, namely the class separation problem and multimodal sample distributions. To deal with these problems, we propose a new locally linear discriminant analysis (LLDA) method for handwritten Chinese character recognition. Our algorithm operates as follows. (1) Using a clustering algorithm, find clusters for the samples of each class. (2) Find the nearest neighboring clusters from the remaining classes for each cluster of one class, and use the corresponding cluster means to compute the between-class scatter matrix in LDA while keeping the within-class scatter matrix unchanged. (3) Finally, apply feature vector normalization to further alleviate the class separation problem. A series of experiments on both the HCL2000 and CASIA Chinese character handwriting databases shows that our method effectively improves recognition performance, with a reduction in error rate of 28.7% (HCL2000) and 16.7% (CASIA) compared with the traditional LDA method. Our algorithm also outperforms DLA (Discriminative Locality Alignment, one of the representative manifold-learning-based dimensionality reduction algorithms proposed recently). Large-set handwritten Chinese character recognition experiments also verified the effectiveness of the proposed approach.
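
    A compact sketch of steps (1)-(3) above, assuming k-means clustering, Euclidean nearest-cluster matching, and a generalized eigenvalue solver; the clustering algorithm, neighbor definition, and normalization used in the paper may differ:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def llda(X, y, clusters_per_class=2, dim=2):
    """Locally linear discriminant analysis sketch. X: (n, d) samples, y: labels."""
    classes = np.unique(y)
    d = X.shape[1]
    # (1) cluster the samples of each class
    means = {c: KMeans(n_clusters=clusters_per_class, n_init=10,
                       random_state=0).fit(X[y == c]).cluster_centers_
             for c in classes}
    # (2) between-class scatter from each cluster mean and its nearest
    #     cluster mean among the other classes; within-class scatter as in LDA
    Sb = np.zeros((d, d))
    for c in classes:
        others = np.vstack([means[o] for o in classes if o != c])
        for m in means[c]:
            nearest = others[np.argmin(np.linalg.norm(others - m, axis=1))]
            diff = (m - nearest)[:, None]
            Sb += diff @ diff.T
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    # solve Sb v = lambda Sw v and keep the leading directions
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    W = vecs[:, np.argsort(vals)[::-1][:dim]]
    return X @ W   # (3) feature vector normalization would follow in the paper

# Toy usage
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(i, 1.0, size=(50, 5)) for i in range(3)])
y = np.repeat(np.arange(3), 50)
print(llda(X, y).shape)
```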

  • On Kernel Parameter Selection in Hilbert-Schmidt Independence Criterion

    Masashi SUGIYAMA  Makoto YAMADA  

     
    LETTER-Artificial Intelligence, Data Mining
    Vol: E95-D No:10  Page(s): 2564-2567

    The Hilbert-Schmidt independence criterion (HSIC) is a kernel-based statistical independence measure that can be computed very efficiently. However, it requires us to determine the kernel parameters heuristically because no objective model selection method is available. Least-squares mutual information (LSMI) is another statistical independence measure that is based on direct density-ratio estimation. Although LSMI is computationally more expensive than HSIC, LSMI is equipped with cross-validation, and thus the kernel parameter can be determined objectively. In this paper, we show that HSIC can actually be regarded as an approximation to LSMI, which allows us to utilize cross-validation of LSMI for determining kernel parameters in HSIC. Consequently, both computational efficiency and cross-validation can be achieved.
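
    A minimal sketch of the empirical HSIC estimator with Gaussian kernels; the LSMI-based cross-validation for choosing the kernel widths, which is the point of the letter, is not shown:

```python
import numpy as np

def gaussian_kernel(x, sigma):
    """x: (n, d) data. Returns the n x n Gaussian kernel matrix."""
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def hsic(x, y, sigma_x=1.0, sigma_y=1.0):
    """Biased empirical HSIC: tr(K H L H) / (n - 1)^2 with centering H."""
    n = x.shape[0]
    K = gaussian_kernel(x, sigma_x)
    L = gaussian_kernel(y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy usage: a dependent pair versus an independent pair
rng = np.random.default_rng(0)
x = rng.standard_normal((200, 1))
print(hsic(x, x ** 2), hsic(x, rng.standard_normal((200, 1))))
```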

  • OpenGL SC Implementation on the OpenGL Hardware

    Nakhoon BAEK  Hwanyong LEE  

     
    LETTER-Computer Graphics
    Vol: E95-D No:10  Page(s): 2589-2592

    The need for the OpenGL family of 3D rendering APIs is increasing rapidly, especially for graphical human-machine interfaces on various systems. In the safety-critical market of avionics, military, medical, and automotive applications, OpenGL SC, the safety-critical profile of the OpenGL standard, plays the major role for graphical interfaces. In this paper, we present an efficient way of implementing the OpenGL SC 3D graphics API for environments with hardware-supported OpenGL 1.1 and its multi-texture extension, which are widely available on recent embedded systems. Our approach achieves the OpenGL SC features at low development cost on embedded systems and also on general personal computers. Our final result complies with the OpenGL SC standard specification. From the efficiency point of view, we measured execution times for various application programs and observed a remarkable speed-up.

  • Model-Based Mutation Testing Using Pushdown Automata

    Fevzi BELL  Mutlu BEYAZIT  Tomohiko TAKAGI  Zengo FURUKAWA  

     
    PAPER
    Vol: E95-D No:9  Page(s): 2211-2218

    A model-based mutation testing (MBMT) approach enables negative testing, where test cases are generated from mutant models containing intentional faults. This paper introduces an alternative MBMT framework using pushdown automata (PDA), which relate to context-free (type-2) languages. There are two key ideas in this study. One is to gain stronger representational power to capture features whose behavior depends on previous states of the software under test (SUT). The other is to make use of a relatively small test set and concentrate on suspicious parts of the SUT by using the MBMT approach. Thus, the proposed framework includes (1) a novel usage of PDA for modeling the SUT, (2) novel mutation operators for generating PDA mutants, and (3) a novel coverage criterion and an algorithm to generate negative test cases from mutant PDA. A case study validates the approach and discusses its characteristics and limitations.
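
    An illustrative sketch of the modeling idea only (the data structure, the mutation operator, and all names below are assumptions, not the paper's operators or coverage criterion):

```python
import copy

# A pushdown automaton as a transition table:
# (state, input_symbol, stack_top) -> (next_state, symbols_to_push)
pda = {
    "start": "q0",
    "accept": {"q1"},
    "transitions": {
        ("q0", "a", "Z"): ("q0", ["A", "Z"]),  # push an A for each 'a'
        ("q0", "a", "A"): ("q0", ["A", "A"]),
        ("q0", "b", "A"): ("q1", []),          # start popping on 'b'
        ("q1", "b", "A"): ("q1", []),
    },
}

def mutate_target_state(model, key, new_state):
    """Example mutation operator: redirect one transition to a different
    state, producing a mutant model that contains an intentional fault."""
    mutant = copy.deepcopy(model)
    _, push = mutant["transitions"][key]
    mutant["transitions"][key] = (new_state, push)
    return mutant

mutant = mutate_target_state(pda, ("q0", "b", "A"), "q0")
print(mutant["transitions"][("q0", "b", "A")])
```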

  • Novel Watermarked MDC System Based on SFQ Algorithm

    Lin-Lin TANG  Jeng-Shyang PAN  Hao LUO  Junbao LI  

     
    LETTER-Fundamental Theories for Communications
    Vol: E95-B No:9  Page(s): 2922-2925

    A novel watermarked MDC system based on the SFQ algorithm and a sub-sampling method is proposed in this paper. The sub-sampling algorithm is applied to the transformed image to introduce redundancy between the different channels, and secret information is embedded into the pre-processed sub-images. Experimental results show that the new system performs well in defending against noise and compression attacks.
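
    A minimal sketch of the polyphase sub-sampling step that creates the redundant channels; the SFQ coding and the watermark embedding themselves are not reproduced:

```python
import numpy as np

def polyphase_split(image):
    """Split an image into four sub-images by 2x2 polyphase sub-sampling.
    Each sub-image can be coded and sent as a separate description; any
    received subset still allows a coarse reconstruction of the original."""
    return [image[0::2, 0::2], image[0::2, 1::2],
            image[1::2, 0::2], image[1::2, 1::2]]

def merge(subs, shape):
    """Reassemble the four descriptions (lossless if all four arrive)."""
    out = np.zeros(shape, dtype=subs[0].dtype)
    out[0::2, 0::2], out[0::2, 1::2] = subs[0], subs[1]
    out[1::2, 0::2], out[1::2, 1::2] = subs[2], subs[3]
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
assert np.array_equal(merge(polyphase_split(img), img.shape), img)
```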

  • Polyphonic Music Transcription by Nonnegative Matrix Factorization with Harmonicity and Temporality Criteria

    Sang Ha PARK  Seokjin LEE  Koeng-Mo SUNG  

     
    LETTER-Engineering Acoustics
    Vol: E95-A No:9  Page(s): 1610-1614

    Non-negative matrix factorization (NMF) is widely used for music transcription because of its efficiency. However, conventional NMF-based music transcription algorithms often cause harmonic-confusion errors or time split-up errors, because NMF decomposes the time-frequency data only according to which frequencies are active at each time. To solve these problems, we propose an NMF with temporal continuity and harmonicity constraints. The temporal continuity constraint prevents the splitting of continuous time components, and the harmonicity constraint helps to bind the fundamental with its harmonic frequencies, reducing additional octave errors. The transcription performance of the proposed algorithm was compared with that of conventional algorithms, showing that the proposed method reduces additional false errors and increases overall transcription performance.
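
    A sketch of plain Euclidean NMF with Lee-Seung multiplicative updates, the baseline that the proposed constraints modify; the temporal-continuity and harmonicity penalty terms themselves are not reproduced here:

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9, seed=0):
    """Basic NMF (V ~= W H) for a nonnegative (freq x time) spectrogram V.
    The paper adds temporal-continuity and harmonicity penalties to this
    cost, which changes the update rules below."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps   # spectral bases (note templates)
    H = rng.random((rank, T)) + eps   # time-varying activations
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage on a random nonnegative "spectrogram"
V = np.abs(np.random.default_rng(1).standard_normal((128, 60)))
W, H = nmf(V, rank=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```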

  • Reduced-Reference Objective Quality Assessment Model of Coded Video Sequences Based on the MPEG-7 Descriptor

    Masaharu SATO  Yuukou HORITA  

     
    LETTER-Quality Metrics
    Vol: E95-A No:8  Page(s): 1259-1263

    Our research focuses on a video quality assessment model based on the MPEG-7 descriptor. Video quality is estimated using several features based on the predicted frame quality, such as the average, worst, and best values and the standard deviation, together with the predicted frame rate obtained from the descriptor information. As a result, video quality can be assessed with high prediction accuracy: correlation coefficient = 0.94, standard deviation of error = 0.24, maximum error = 0.68, and outlier ratio = 0.23.
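
    A hedged sketch of the model structure: aggregate per-frame quality predictions into the statistics listed above and regress them onto subjective scores (the feature set follows the abstract, but the regressor and all data here are illustrative placeholders):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def sequence_features(frame_quality, frame_rate):
    """Summarize predicted per-frame quality into average, worst, best and
    standard deviation, plus the predicted frame rate."""
    q = np.asarray(frame_quality, dtype=float)
    return np.array([q.mean(), q.min(), q.max(), q.std(), frame_rate])

# Toy training data: random per-frame quality traces and synthetic MOS values
rng = np.random.default_rng(0)
fps_values = rng.choice([15, 30, 60], size=40)
X = np.stack([sequence_features(rng.uniform(2, 5, size=300), fps)
              for fps in fps_values])
mos = X @ np.array([0.6, 0.2, 0.1, -0.3, 0.01]) + rng.normal(0, 0.1, size=40)
model = LinearRegression().fit(X, mos)
print(model.score(X, mos))
```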

  • Design of High-Performance Asynchronous Pipeline Using Synchronizing Logic Gates

    Zhengfan XIA  Shota ISHIHARA  Masanori HARIYAMA  Michitaka KAMEYAMA  

     
    PAPER-Integrated Electronics
    Vol: E95-C No:8  Page(s): 1434-1443

    This paper introduces a novel design method for an asynchronous pipeline based on dual-rail dynamic logic. The overhead of the handshake control logic is greatly reduced by constructing a reliable critical datapath, which gives the pipeline high throughput as well as low power consumption. Synchronizing Logic Gates (SLGs), which have no data-dependency problem, are used to construct the reliable critical datapath. The design targets a latch-free and extremely fine-grain (gate-level) pipeline, where the depth of every pipeline stage is only one dual-rail dynamic logic gate. HSPICE simulation results in a 65 nm design technology indicate that, for 4-bit-wide FIFOs, the proposed design increases throughput by 120% and decreases power consumption by 54% compared with PS0, a classic dual-rail asynchronous pipeline implementation style. Moreover, this method is applied to the design of an array-style multiplier, where the proposed design reduces power by 37.9% compared to a classic synchronous design at a 55% workload. A chip has been fabricated with a 4×4 multiplier function, which works well at 2.16 G data-sets/s (post-layout simulation).

  • An Efficient Translation Method from Timed Petri Nets to Timed Automata

    Shota NAKANO  Shingo YAMAGUCHI  

     
    PAPER-Concurrent Systems
    Vol: E95-A No:8  Page(s): 1402-1411

    Various existing methods translate timed Petri nets into timed automata. However, there is a trade-off between the amount of description and the size of the state space, both of which affect the feasibility of modeling and of analyses such as model checking. In this paper, we propose a new translation method from timed Petri nets to timed automata. Our method translates a timed Petri net into an automaton with the following features: (i) the number of locations is 1; (ii) each edge represents the firing of a transition; (iii) each state, implemented as clocks and variables, corresponds one-to-one to a state of the timed Petri net. With these features, the amount of description is of linear order and the size of the state space is of the same order as that of the Petri net. We applied our method to three Petri net models of signaling pathways and compared it with existing methods in terms of the amount of description and the size of the state space. The comparison results show that our method keeps a good balance between the two, and that it is effective for checking properties of timed Petri nets.
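
    An illustrative sketch of the single-location encoding; the data structures, the per-transition clocks, and the simplified clock-reset rule below are assumptions, not the paper's exact construction:

```python
# Timed Petri net sketch: each transition has input/output places and a
# static firing interval [earliest, latest] measured from its enabling time.
tpn = {
    "places": {"p1": 1, "p2": 0},   # initial marking
    "transitions": {
        "t1": {"in": ["p1"], "out": ["p2"], "interval": (2, 5)},
        "t2": {"in": ["p2"], "out": ["p1"], "interval": (1, 3)},
    },
}

def to_single_location_ta(net):
    """Emit a timed-automaton sketch with one location: one integer variable
    per place, one clock per transition, and one self-loop edge per transition
    whose guard checks both the marking and the firing interval."""
    ta = {"locations": ["L"],
          "clocks": [f"c_{t}" for t in net["transitions"]],
          "variables": dict(net["places"]),
          "edges": []}
    for name, t in net["transitions"].items():
        lo, hi = t["interval"]
        guard = [f"{p} >= 1" for p in t["in"]] + [f"c_{name} >= {lo}", f"c_{name} <= {hi}"]
        update = ([f"{p} -= 1" for p in t["in"]] + [f"{p} += 1" for p in t["out"]]
                  + [f"c_{x} := 0" for x in net["transitions"]])  # simplified reset rule
        ta["edges"].append({"from": "L", "to": "L", "guard": guard, "update": update})
    return ta

for edge in to_single_location_ta(tpn)["edges"]:
    print(edge["guard"], "->", edge["update"])
```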
