
Keyword Search Result

[Keyword] MPO (945 hits)

Showing results 201-220 of 945

  • Enhancing Event-Related Potentials Based on Maximum a Posteriori Estimation with a Spatial Correlation Prior

    Hayato MAKI  Tomoki TODA  Sakriani SAKTI  Graham NEUBIG  Satoshi NAKAMURA  

     
    PAPER
    Publicized: 2016/04/01 | Vol: E99-D No:6 | Page(s): 1437-1446

    In this paper, a new method is presented for removing noise from single-trial event-related potentials recorded with a multi-channel electroencephalogram. An observed signal is separated into multiple signals by a multi-channel Wiener filter whose coefficients are estimated via parameter estimation of a probabilistic generative model that locally models the amplitude of each separated signal in the time-frequency domain. An experiment with a simulated event-related potential data set demonstrates the effectiveness of using prior information about covariance matrices, as well as frequency-dependent covariance matrices, for estimating the model parameters.
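
    As a rough, hedged illustration of the filtering step described above, the Python sketch below applies a per-frequency multi-channel Wiener gain to a multi-channel time-frequency representation. It assumes the signal and noise spatial covariances R_s and R_n are already available; the paper's MAP estimation with a spatial correlation prior is not reproduced here.

      # Hedged sketch: per-frequency multi-channel Wiener filtering of EEG spectra.
      # R_s[f], R_n[f] are (channels x channels) covariances estimated elsewhere; the
      # paper's MAP estimation with a spatial correlation prior is NOT implemented.
      import numpy as np

      def multichannel_wiener(X, R_s, R_n):
          """X: (F, T, C) complex STFT of C channels; R_s, R_n: (F, C, C) covariances."""
          Y = np.empty_like(X)
          for f in range(X.shape[0]):
              W = R_s[f] @ np.linalg.inv(R_s[f] + R_n[f])   # C x C Wiener gain
              Y[f] = X[f] @ W.T                             # filter every frame at this bin
          return Y

      # toy usage with random (illustrative) covariances
      rng = np.random.default_rng(0)
      F, T, C = 8, 16, 4
      X = rng.standard_normal((F, T, C)) + 1j * rng.standard_normal((F, T, C))
      A = rng.standard_normal((F, C, C))
      R_s = A @ A.transpose(0, 2, 1) + np.eye(C)            # SPD signal covariances
      R_n = np.tile(np.eye(C), (F, 1, 1))                   # white-noise covariances
      Y = multichannel_wiener(X, R_s, R_n)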

  • Estimating Head Orientation Using a Combination of Multiple Cues

    Bima Sena Bayu DEWANTARA  Jun MIURA  

     
    PAPER-Human-computer Interaction
    Publicized: 2016/03/03 | Vol: E99-D No:6 | Page(s): 1603-1614

    This paper proposes a novel appearance-based descriptor for estimating head orientation. Our descriptor is inspired by the Weber-based feature, which has been successfully applied to robust texture analysis, and by the gradient, which performs well for shape analysis. To further enhance orientation differences, we combine them with an analysis of the intensity deviation. The position of a pixel and its intrinsic intensity are also considered. All features are then composed into a feature vector for each pixel. The information carried by the pixels is combined using a covariance matrix to alleviate the influence of rotation and illumination changes. As a result, our descriptor is compact and works at high speed. We also apply a weighting scheme, called Block Importance Feature using Genetic Algorithm (BIF-GA), to improve the performance of our descriptor by selecting and accentuating the important blocks. Experiments on three head pose databases demonstrate that the proposed method outperforms current state-of-the-art methods. The proposed method can also be combined with a head detection and tracking system to estimate human head orientation in real applications.
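
    As an aside, the hedged sketch below shows the generic region-covariance mechanism the descriptor builds on: per-pixel features (here only position, intensity, and gradients) are stacked and summarized by their covariance matrix. The paper's Weber-based and intensity-deviation features are not reproduced.

      # Hedged sketch of a region covariance descriptor; only the covariance pooling
      # of simple per-pixel features is shown.
      import numpy as np

      def region_covariance(patch):
          """patch: 2-D grayscale array; returns a d x d covariance descriptor."""
          h, w = patch.shape
          ys, xs = np.mgrid[0:h, 0:w]
          gy, gx = np.gradient(patch.astype(float))
          feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel().astype(float),
                            gx.ravel(), gy.ravel()], axis=0)   # d x N feature matrix
          return np.cov(feats)                                  # d x d descriptor

      patch = (np.random.default_rng(1).random((32, 32)) * 255).astype(np.uint8)
      print(region_covariance(patch).shape)                     # (5, 5)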

  • A Study of the Characteristics of MEMD for Fractional Gaussian Noise

    Huan HAO  Huali WANG  Naveed UR REHMAN  Hui TIAN  

     
    LETTER-Digital Signal Processing
    Vol: E99-A No:6 | Page(s): 1228-1232

    The dyadic filter bank property of multivariate empirical mode decomposition (MEMD) for white Gaussian noise (WGN) is well established. To investigate how MEMD behaves in the presence of fractional Gaussian noise (fGn), we conduct thorough numerical experiments with fGn inputs. It turns out that, similar to the WGN case, MEMD exhibits a dyadic filter bank structure for fGn inputs that is more stable than that of empirical mode decomposition (EMD), regardless of the Hurst exponent. Moreover, MEMD-based estimation of the Hurst exponent of fGn contaminated with different kinds of signals is also presented in this work.
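
    For illustration only, the hedged sketch below estimates the Hurst exponent from the per-IMF log-variance slope implied by a dyadic filter bank, i.e. log2(Var[IMF_k]) ≈ const + (2H-2)k. A hypothetical memd() routine (e.g., from a MEMD toolbox) returning an array of IMFs is assumed; this is not the authors' estimator.

      # Hedged sketch: Hurst estimation from IMF variances under the dyadic
      # filter-bank assumption. `memd` is a hypothetical external routine.
      import numpy as np

      def estimate_hurst_from_imfs(imfs, first=2, last=None):
          """imfs: (n_imfs, n_channels, n_samples) array returned by a MEMD routine."""
          last = imfs.shape[0] - 1 if last is None else last
          k = np.arange(first, last + 1)
          logvar = [np.log2(np.var(imfs[i])) for i in k]   # variance pooled over channels
          slope = np.polyfit(k, logvar, 1)[0]              # fit log2-variance vs. IMF index
          return 1.0 + slope / 2.0                         # invert log2 Var ~ c + (2H-2)k

      # usage (assuming some MEMD implementation is available):
      #   imfs = memd(multichannel_fgn)                    # hypothetical call
      #   H_hat = estimate_hurst_from_imfs(imfs)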

  • Nonnegative Component Representation with Hierarchical Dictionary Learning Strategy for Action Recognition

    Jianhong WANG  Pinzheng ZHANG  Linmin LUO  

     
    LETTER-Pattern Recognition
    Publicized: 2016/01/13 | Vol: E99-D No:4 | Page(s): 1259-1263

    Nonnegative component representation (NCR) is a mid-level representation based on nonnegative matrix factorization (NMF). Recently, it has attracted much attention and achieved encouraging results in action recognition. In this paper, we propose a novel hierarchical dictionary learning strategy (HDLS) for NMF to improve the performance of NCR. Considering the variability of action classes, HDLS clusters similar classes into groups and forms a two-layer hierarchical class model. The groups in the first layer are disjoint, while in the second layer the classes within each group are correlated. HDLS takes account of the differences between the two layers and uses a different dictionary learning method for each: discriminant class-specific NMF for the first layer and discriminant joint-dictionary NMF for the second layer. The proposed approach is extensively tested on three public datasets, and the experimental results demonstrate the effectiveness and superiority of NCR with HDLS for large-scale action recognition.
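
    The hedged sketch below illustrates only the base nonnegative component representation: a flat NMF dictionary is learned from nonnegative feature histograms and new videos are encoded against it. The hierarchical, class-specific and joint discriminant dictionaries proposed in the paper are not reproduced.

      # Hedged sketch: plain NMF mid-level encoding of nonnegative feature histograms.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      X_train = rng.random((200, 500))        # e.g., bag-of-words histograms per video
      X_test = rng.random((20, 500))

      nmf = NMF(n_components=64, init="nndsvda", max_iter=400, random_state=0)
      H_train = nmf.fit_transform(X_train)    # nonnegative component representation
      H_test = nmf.transform(X_test)          # encode new videos with the learned dictionary
      # H_train / H_test can then be fed to any classifier (e.g., a linear SVM).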

  • FAQS: Fast Web Service Composition Algorithm Based on QoS-Aware Sampling

    Wei LU  Weidong WANG  Ergude BAO  Liqiang WANG  Weiwei XING  Yue CHEN  

     
    PAPER-Mathematical Systems Science
    Vol: E99-A No:4 | Page(s): 826-834

    Web Service Composition (WSC) is well recognized as a convenient and flexible way of sharing and integrating services in service-oriented application fields. WSC aims at selecting and composing a set of initial services with respect to the Quality of Service (QoS) values of their attributes (e.g., price) in order to complete a complex task and meet user requirements. A major research challenge of the QoS-aware WSC problem is to select a proper set of services that maximizes the QoS of the composite service while meeting several QoS constraints on various attributes, e.g., total price or runtime. In this article, a fast algorithm based on QoS-aware sampling (FAQS) is proposed, which can efficiently find a near-optimal composition from sampled services. FAQS consists of the following five steps. 1) QoS normalization is performed to unify the different metrics of the QoS attributes. 2) The normalized services are sampled and categorized so that each class contains a similar number of services. 3) The frequencies of the sampled services are calculated so that the composed services are the most frequent ones; this ensures that the sampled services cover as many initial services as possible. 4) The sampled services are composed by solving a linear programming problem. 5) The initial composition results are further optimized by solving a modified multi-choice multi-dimensional knapsack problem (MMKP). Experimental results indicate that FAQS is much faster than existing algorithms and obtains stable, near-optimal results.
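
    As a small, hedged illustration of step 1 only (the sampling, linear-programming, and MMKP steps are not shown), the sketch below maps every QoS attribute to [0, 1] so that larger is always better; attribute directions ("benefit" vs. "cost") are assumed to be known.

      # Hedged sketch of QoS normalization: unify benefit and cost attributes on [0, 1].
      import numpy as np

      def normalize_qos(Q, is_benefit):
          """Q: (n_services, n_attrs) raw QoS matrix; is_benefit: bool per attribute."""
          Q = Q.astype(float)
          lo, hi = Q.min(axis=0), Q.max(axis=0)
          span = np.where(hi > lo, hi - lo, 1.0)          # avoid division by zero
          N = (Q - lo) / span
          cost = ~np.asarray(is_benefit)
          N[:, cost] = 1.0 - N[:, cost]                   # flip cost attributes
          return N

      Q = np.array([[0.9, 120.0],                         # availability (benefit), price (cost)
                    [0.7,  80.0],
                    [0.8, 100.0]])
      print(normalize_qos(Q, is_benefit=[True, False]))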

  • A Novel RZF Precoding Method Based on Matrix Decomposition: Reducing Complexity in Massive MIMO Systems

    Qian DENG  Li GUO  Jiaru LIN  Zhihui LIU  

     
    PAPER-Antennas and Propagation
    Vol: E99-B No:2 | Page(s): 439-446

    In this paper, we propose an efficient regularized zero-forcing (RZF) precoding method that has lower hardware resource requirements and a shorter delay to the first transmitted symbol than truncated polynomial expansion (TPE) based on the Neumann series in massive multiple-input multiple-output (MIMO) systems. The proposed precoding scheme, named matrix decomposition-polynomial expansion (MDPE), applies a matrix decomposition algorithm based on polynomial expansion to significantly reduce the computational complexity of full matrix multiplication. Accordingly, it is suitable for real-time hardware implementation and high-mobility scenarios. Furthermore, the proposed method provides a simple expression linking the optimization coefficients to the ratio of BS to UT antennas (β). This approach can speed up the convergence to the matrix inverse with a matrix polynomial of few terms and further reduce computation costs. Simulation results show that the MDPE scheme rapidly approaches the performance of the full-precision RZF and the optimal TPE algorithm, while adaptively selecting the matrix polynomial terms according to the β and SNR conditions. It thereby obtains a high average achievable rate for the UTs under power allocation.
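
    To make the polynomial-expansion idea concrete, the hedged sketch below approximates the RZF precoder W = H^H (H H^H + αI)^{-1} with a truncated Neumann series instead of an exact inverse. This illustrates the general principle only; the paper's MDPE decomposition and coefficient optimization are not reproduced.

      # Hedged sketch: truncated Neumann-series approximation of the RZF precoder.
      import numpy as np

      def rzf_poly(H, alpha, terms=4):
          """H: (K_users, M_antennas) channel matrix; returns an (M, K) precoder."""
          K = H.shape[0]
          A = H @ H.conj().T + alpha * np.eye(K)          # K x K Hermitian positive definite
          c = np.linalg.norm(A, 2)                        # scaling so that ||I - A/c|| < 1
          R = np.eye(K) - A / c
          Ainv = np.eye(K, dtype=complex) / c
          P = np.eye(K, dtype=complex)
          for _ in range(terms - 1):                      # A^{-1} ~ (1/c) * sum_k R^k
              P = P @ R
              Ainv += P / c
          return H.conj().T @ Ainv

      rng = np.random.default_rng(0)
      H = (rng.standard_normal((8, 64)) + 1j * rng.standard_normal((8, 64))) / np.sqrt(2)
      W_approx = rzf_poly(H, alpha=0.1, terms=4)
      W_exact = H.conj().T @ np.linalg.inv(H @ H.conj().T + 0.1 * np.eye(8))
      print(np.linalg.norm(W_approx - W_exact) / np.linalg.norm(W_exact))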

  • MEMD-Based Filtering Using Interval Thresholding and Similarity Measure between Pdf of IMFs

    Huan HAO  Huali WANG  Weijun ZENG  Hui TIAN  

     
    LETTER-Digital Signal Processing
    Vol: E99-A No:2 | Page(s): 643-646

    This paper presents a novel MEMD interval-thresholding denoising method in which the relevant modes are selected by a similarity measure between the probability density function of the input and that of each mode. Simulation results and measured EEG data processing show that the proposed scheme achieves better performance than traditional denoising methods.

  • Monitoring Temporal Properties Using Interval Analysis

    Daisuke ISHII  Naoki YONEZAKI  Alexandre GOLDSZTEJN  

     
    INVITED PAPER
    Vol: E99-A No:2 | Page(s): 442-453

    Verification of temporal logic properties plays a crucial role in proving the desired behaviors of continuous systems. In this paper, we propose an interval method that verifies the properties described by a bounded signal temporal logic. We relax the problem so that if the verification process cannot succeed at the prescribed precision, it outputs an inconclusive result. The problem is solved by an efficient and rigorous monitoring algorithm. This algorithm performs a forward simulation of a continuous-time dynamical system, detects a set of time intervals in which the atomic propositions hold, and validates the property by propagating the time intervals. In each step, the continuous state at a certain time is enclosed by an interval vector that is proven to contain a unique solution. We experimentally demonstrate the utility of the proposed method in formal analysis of nonlinear and complex continuous systems.
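
    The hedged sketch below illustrates a single propagation step of the interval-based monitoring idea: if an atomic proposition p holds on a set of time intervals, then "eventually within [a, b]" (F_[a,b] p) holds at time t whenever [t+a, t+b] meets one of those intervals. The paper's validated numerical integration and inconclusive verdicts are not modeled.

      # Hedged sketch: propagate the time intervals on which p holds to those on
      # which F_[a,b] p holds, then merge overlaps.
      def eventually(ivals, a, b):
          """ivals: sorted list of (start, end) where p holds; returns intervals for F_[a,b] p."""
          shifted = [(max(0.0, s - b), e - a) for s, e in ivals if e - a >= 0.0]
          merged = []
          for s, e in sorted(shifted):
              if merged and s <= merged[-1][1]:
                  merged[-1] = (merged[-1][0], max(merged[-1][1], e))
              else:
                  merged.append((s, e))
          return merged

      # p holds on [2, 3] and [7, 9]; where does F_[0,1] p hold?
      print(eventually([(2.0, 3.0), (7.0, 9.0)], 0.0, 1.0))   # [(1.0, 3.0), (6.0, 9.0)]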

  • A Refined Estimator of Multicomponent Third-Order Polynomial Phase Signals

    GuoJian OU  ShiZhong YANG  JianXun DENG  QingPing JIANG  TianQi ZHANG  

     
    PAPER-Fundamental Theories for Communications
    Vol: E99-B No:1 | Page(s): 143-151

    This paper describes a fast and effective algorithm for refining the parameter estimates of multicomponent third-order polynomial phase signals (PPSs). The efficiency of the proposed algorithm comes with a lower signal-to-noise ratio (SNR) threshold and lower computational complexity. A two-step procedure is used to estimate the parameters of multicomponent third-order PPSs. In the first step, initial estimates of the phase parameters are obtained using the fast Fourier transform (FFT), the k-means algorithm, and three time positions. In the second step, these initial estimates are refined with a simple moving-average filter and singular value decomposition (SVD). The SNR threshold of the proposed algorithm is lower than those of the non-linear least squares (NLS) method and the estimation refinement method, even though it uses only a simple moving-average filter. In addition, the proposed method has significantly lower complexity than computationally intensive NLS methods. Simulations confirm the effectiveness of the proposed method.
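
    As a hedged, mono-component illustration only (not the paper's FFT + k-means + SVD estimator), the sketch below generates a noisy third-order PPS and recovers its phase coefficients by moving-average smoothing of the unwrapped phase followed by a cubic fit.

      # Hedged sketch: moving-average smoothing plus a cubic fit on a single noisy
      # third-order polynomial phase signal.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 1024
      t = np.arange(N)
      a = np.array([0.3, 0.05, 1e-4, 5e-8])               # true phase coefficients a0..a3
      s = np.exp(1j * (a[0] + a[1] * t + a[2] * t**2 + a[3] * t**3))
      x = s + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

      phase = np.unwrap(np.angle(x))                       # noisy instantaneous phase
      L = 15                                               # moving-average window length
      smooth = np.convolve(phase, np.ones(L) / L, mode="same")
      a_hat = np.polyfit(t, smooth, 3)[::-1]               # cubic fit -> [a0, a1, a2, a3]
      print(np.round(a_hat, 6))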

  • Impossible Differential Attack against 14-Round Piccolo-80 without Relying on Full Code Book

    Yosuke TODO  

     
    LETTER
    Vol: E99-A No:1 | Page(s): 154-157

    Piccolo is a lightweight block cipher proposed by Sony Corporation in 2011. The designers provided two key modes, Piccolo-80 and Piccolo-128, which use an 80-bit and a 128-bit secret key, respectively. Isobe and Shibutani evaluated the security of Piccolo-80 and showed that 14-round (reduced) Piccolo-80 without whitening keys is vulnerable to a meet-in-the-middle attack. The time complexity of their attack is about 2^73, but unfortunately it requires 2^64 texts, namely, the full code book. In this paper, we propose a new impossible differential attack against 14-round Piccolo-80 without whitening keys that recovers the secret key without relying on the full code book. The time complexity is 2^68, and the attack uses 2^62.2 distinct known plaintexts.

  • Quantitative Assessment of Facial Paralysis Based on Spatiotemporal Features

    Truc Hung NGO  Yen-Wei CHEN  Naoki MATSUSHIRO  Masataka SEO  

     
    PAPER-Pattern Recognition
    Publicized: 2015/10/01 | Vol: E99-D No:1 | Page(s): 187-196

    Facial paralysis is a common clinical condition, occurring in 30 to 40 patients per 100,000 people per year. A quantitative tool to support medical diagnosis is therefore necessary. This paper proposes a simple, visual and robust method that objectively measures the degree of facial paralysis using spatiotemporal features. The main contribution is an effective spatiotemporal feature extraction method based on landmark tracking. Our method overcomes drawbacks of other techniques such as the influence of irrelevant regions, noise, and illumination changes, as well as time-consuming processing. In addition, the method is simple and visual: its simplicity reduces processing time, and the visualized landmark movements, which relate to muscle movement ability, help reveal regions of serious facial paralysis. In terms of recognition rate, experimental results on a dynamic facial expression image database show that the proposed method outperforms the other techniques tested.

  • Ontology Based Framework for Interactive Self-Assessment of e-Health Applications Open Access

    Wasin PASSORNPAKORN  Sinchai KAMOLPHIWONG  

     
    INVITED PAPER
    Publicized: 2015/10/21 | Vol: E99-D No:1 | Page(s): 2-9

    Personal e-healthcare services are growing significantly, and a large number of personal e-health measuring and monitoring devices are now on the market. However, to achieve better health outcomes, various devices and services need to work together, and this coordination remains a challenge due to their variation and complexity. To address this issue, we propose an ontology-based framework for interactive self-assessment of RESTful e-health services. Unlike existing e-health service frameworks, in which services are tightly coupled and data schemas are difficult to change and extend, our work achieves loose coupling among services and flexibility of each service through a design and implementation based on the HYDRA vocabulary and REST principles. Clinical knowledge is implemented through a combination of OWL-DL and SPARQL rules. All of these services evolve independently; their interfaces follow REST principles, especially the HATEOAS constraint. We demonstrate how to apply our framework to interactive self-assessment in e-health applications and show that it allows medical knowledge to drive the system workflow according to event-driven principles. New data schemas can be maintained at run-time, which is essential for supporting the arrival of IoT (Internet of Things) based medical devices that have their own data schemas and evolve over time.

  • Spatio-Temporal Prediction Based Algorithm for Parallel Improvement of HEVC

    Xiantao JIANG  Tian SONG  Takashi SHIMAMOTO  Wen SHI  Lisheng WANG  

     
    PAPER
    Vol: E98-A No:11 | Page(s): 2229-2237

    The next-generation high efficiency video coding (HEVC) standard achieves high performance by extending the encoding block to 64×64, and it provides several parallel tools to improve encoder and decoder efficiency. However, owing to the dependence of the current prediction block on its surrounding blocks, parallel processing at the CU and sub-CU levels is hard to achieve. In this paper, focusing on spatial motion vector prediction (SMVP) and temporal motion vector prediction (TMVP), parallel improvements to the spatio-temporal prediction algorithms are presented, which remove the dependency between prediction coding units and neighboring coding units. With this proposal, motion estimation can conveniently be processed in parallel, which suits different parallel platforms such as multi-core processors and the compute unified device architecture (CUDA). Simulation results on the HM12.0 test model for different test sequences demonstrate that the proposed algorithm improves advanced motion vector prediction with only a 0.01% BD-rate increase, which is better than previous work, while the BD-PSNR is almost the same as that of the HEVC reference software.

  • A New Connected-Component Labeling Algorithm

    Xiao ZHAO  Lifeng HE  Bin YAO  Yuyan CHAO  

     
    LETTER-Pattern Recognition
    Publicized: 2015/08/05 | Vol: E98-D No:11 | Page(s): 2013-2016

    This paper presents a new connected-component labeling algorithm. The proposed algorithm scans the image every three lines and processes pixels three by three. When processing the current three pixels, it also utilizes information obtained earlier to reduce the repeated work of checking pixels in the mask. Experimental results demonstrate that our method is more efficient than the fastest conventional labeling algorithm.
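
    For context, the hedged sketch below shows a classic two-scan connected-component labeling baseline with union-find (4-connectivity). The paper's variant, which scans every three lines and processes pixels three by three while reusing previously gathered mask information, is not reproduced.

      # Hedged sketch: classic two-scan connected-component labeling (4-connectivity).
      import numpy as np

      def label(binary):
          parent = [0]                                     # union-find over provisional labels
          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]
                  x = parent[x]
              return x
          labels = np.zeros(binary.shape, dtype=int)
          for y in range(binary.shape[0]):
              for x in range(binary.shape[1]):
                  if not binary[y, x]:
                      continue
                  up = labels[y - 1, x] if y > 0 else 0
                  left = labels[y, x - 1] if x > 0 else 0
                  if up == 0 and left == 0:
                      parent.append(len(parent))           # new provisional label
                      labels[y, x] = len(parent) - 1
                  else:
                      labels[y, x] = max(up, left) if min(up, left) == 0 else min(up, left)
                      if up and left:
                          parent[find(max(up, left))] = find(min(up, left))   # record equivalence
          flat = np.array([find(i) for i in range(len(parent))])
          return flat[labels]                              # second scan: resolve equivalences

      img = np.array([[1, 1, 0, 1],
                      [0, 1, 0, 1],
                      [0, 0, 0, 1]])
      print(label(img))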

  • Active Noise Canceling for Headphones Using a Hybrid Structure with Wind Detection and Flexible Independent Component Analysis

    Dong-Hyun LIM  Minook KIM  Hyung-Min PARK  

     
    LETTER-Music Information Processing
    Publicized: 2015/07/31 | Vol: E98-D No:11 | Page(s): 2043-2046

    This letter presents a method for active noise cancelation (ANC) in headphone applications. The method improves ANC performance by deriving a flexible independent component analysis (ICA) algorithm in a hybrid structure that combines feedforward and feedback configurations with correlation-based wind detection. The effectiveness of the method is demonstrated through simulation.

  • Efficient Anchor Graph Hashing with Data-Dependent Anchor Selection

    Hiroaki TAKEBE  Yusuke UEHARA  Seiichi UCHIDA  

     
    LETTER-Image Recognition, Computer Vision
    Publicized: 2015/08/17 | Vol: E98-D No:11 | Page(s): 2030-2033

    Anchor graph hashing (AGH) is a promising hashing method for nearest neighbor (NN) search. AGH realizes efficient search by generating and utilizing a small number of points that are called anchors. In this paper, we propose a method for improving AGH, which considers data distribution in a similarity space and selects suitable anchors by performing principal component analysis (PCA) in the similarity space.
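
    The hedged sketch below shows the baseline anchor-graph ingredients that AGH relies on: anchors chosen by k-means and a sparse, row-normalized similarity matrix Z between data points and their s nearest anchors. The paper's data-dependent anchor selection via PCA in the similarity space is not reproduced.

      # Hedged sketch: baseline anchor-graph construction (k-means anchors + sparse Z).
      import numpy as np
      from sklearn.cluster import KMeans

      def anchor_graph(X, n_anchors=32, s=3, sigma=1.0):
          anchors = KMeans(n_clusters=n_anchors, n_init=10, random_state=0).fit(X).cluster_centers_
          d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)   # squared distances
          nearest = np.argsort(d2, axis=1)[:, :s]                     # s closest anchors per point
          rows = np.arange(X.shape[0])[:, None]
          Z = np.zeros_like(d2)
          Z[rows, nearest] = np.exp(-d2[rows, nearest] / (2 * sigma**2))
          return Z / Z.sum(axis=1, keepdims=True), anchors

      X = np.random.default_rng(0).standard_normal((500, 16))
      Z, anchors = anchor_graph(X)
      print(Z.shape)                                                  # (500, 32)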

  • An Efficient and Universal Conical Hypervolume Evolutionary Algorithm in Three or Higher Dimensional Objective Space

    Weiqin YING  Yuehong XIE  Xing XU  Yu WU  An XU  Zhenyu WANG  

     
    LETTER-Numerical Analysis and Optimization
    Vol: E98-A No:11 | Page(s): 2330-2335

    The conical area evolutionary algorithm (CAEA) has very high run-time efficiency for bi-objective optimization, but it cannot tackle problems with more than two objectives. In this letter, a conical hypervolume evolutionary algorithm (CHEA) is proposed to extend the CAEA to higher-dimensional objective spaces. CHEA partitions the objective space into a series of conical subregions and retains only one elitist individual per subregion in a compact elitist archive. Additionally, each offspring needs to be compared only with the elitist individual in the same subregion, in terms of the local hypervolume scalar indicator. Experimental results on 5-objective test problems reveal that CHEA obtains satisfactory overall performance in both run-time efficiency and solution quality.
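
    The hedged sketch below is only an illustrative reading of the partition-and-select idea (not the paper's exact conical partition): normalized objective vectors are assigned to subregions by their nearest unit direction, and within each subregion the solution with the larger single-point hypervolume (the product of gaps to a reference point, for minimization) is retained.

      # Hedged sketch: assign solutions to cone-like subregions and keep one elitist each.
      import numpy as np

      def assign_and_select(F, directions, ref):
          """F: (n, m) objective vectors (minimization); directions: (k, m) unit vectors."""
          D = F / np.linalg.norm(F, axis=1, keepdims=True)   # direction of each solution
          region = np.argmax(D @ directions.T, axis=1)       # nearest cone by cosine similarity
          hv = np.prod(np.maximum(ref - F, 0.0), axis=1)     # single-point hypervolume
          elite = {}
          for i, r in enumerate(region):
              if r not in elite or hv[i] > hv[elite[r]]:
                  elite[r] = i
          return elite                                       # one elitist index per subregion

      rng = np.random.default_rng(0)
      F = rng.random((50, 3))
      dirs = rng.random((10, 3))
      dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
      print(assign_and_select(F, dirs, ref=np.full(3, 1.1)))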

  • Collective Activity Recognition by Attribute-Based Spatio-Temporal Descriptor

    Changhong CHEN  Hehe DOU  Zongliang GAN  

     
    LETTER-Pattern Recognition
    Publicized: 2015/07/22 | Vol: E98-D No:10 | Page(s): 1875-1878

    Collective activity recognition plays an important role in high-level video analysis. Most current feature representations rely on contextual information extracted from the behaviour of nearby people: every person must be detected and his or her pose estimated. After feature extraction, hierarchical graphical models are typically employed to model the spatio-temporal patterns of individuals and their interactions, which cannot avoid complex preprocessing and inference operations. To overcome these drawbacks, we present a new feature representation method called the attribute-based spatio-temporal (AST) descriptor. First, two types of information are exploited: spatio-temporal (ST) features and attribute features. The attribute features are manually specified, and an attribute classifier is trained to model the relationship between the ST features and the attribute features, according to which the attribute features are updated. Then, the ST features, the attribute features, and the relationships between the attributes are combined to form the AST descriptor. An objective classifier can be trained on the AST descriptor, and the weight parameters of the classifier are used for recognition. Experiments on standard collective activity benchmark sets show the effectiveness of the proposed descriptor.

  • Matrix Approach for the Seasonal Infectious Disease Spread Prediction

    Hideo HIROSE  Masakazu TOKUNAGA  Takenori SAKUMURA  Junaida SULAIMAN  Herdianti DARWIS  

     
    PAPER
    Vol: E98-A No:10 | Page(s): 2010-2017

    Prediction of seasonal infectious disease spread is traditionally treated as a function of time, with typical methods being time series analyses such as ARIMA (autoregressive integrated moving average) or ANN (artificial neural networks). However, if we regard the time series data as a matrix, e.g., with yearly magnitude in rows and the weekly trend in columns, we may expect a different method (a matrix approach) to predict the disease spread when seasonality is dominant. The MD (matrix decomposition) method is one such approach, used in recommendation systems; the other is IRT (item response theory), used in ability evaluation systems. In this paper, we apply these two methods to predict the spread of infectious gastroenteritis caused by norovirus in Japan, and compare the results with those of two conventional forecasting methods, ARIMA and ANN. We find that the matrix approach is simple and useful for predicting seasonal infectious disease spread.
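
    The hedged sketch below is a generic low-rank illustration of the matrix view (not the paper's exact MD or IRT formulation): past seasons form a (years x weeks) matrix, a truncated SVD gives a weekly basis, and a partially observed current season is projected onto that basis to forecast its remaining weeks.

      # Hedged sketch: low-rank seasonal forecasting from a (years x weeks) count matrix.
      import numpy as np

      def forecast_season(past, observed, rank=3):
          """past: (n_years, n_weeks); observed: first w weeks of the current season."""
          mean = past.mean(axis=0)
          _, _, Vt = np.linalg.svd(past - mean, full_matrices=False)
          B = Vt[:rank].T                                  # (n_weeks, rank) weekly basis
          w = len(observed)
          coef, *_ = np.linalg.lstsq(B[:w], observed - mean[:w], rcond=None)
          return mean + B @ coef                           # full-season prediction

      rng = np.random.default_rng(0)
      weeks = np.arange(52)
      shape = 60 * np.exp(-(weeks - 5.0) ** 2 / 40.0)      # synthetic winter-peaking season
      past = np.stack([shape + rng.normal(0, 2, 52) for _ in range(8)])
      current = shape + rng.normal(0, 2, 52)
      pred = forecast_season(past, current[:20], rank=3)
      print(np.round(pred[20:25], 1))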

  • MIMO Radar Receiver Design Based on Doppler Compensation for Range and Doppler Sidelobe Suppression

    Jinli CHEN  Jiaqiang LI  Lingsheng YANG  Peng LI  

     
    BRIEF PAPER-Electromagnetic Theory
    Vol: E98-C No:10 | Page(s): 977-980

    Instrumental variable (IV) filters designed for range sidelobe suppression in multiple-input multiple-output (MIMO) radar suffer from Doppler mismatch. This mismatch causes losses in peak response and increases sidelobe levels, which degrades MIMO radar performance. In this paper, a novel method that applies component-code processing prior to the IV filter design for MIMO radar is proposed. It not only compensates for Doppler effects in the IV filter design, but also offers more virtual sensors, resulting in narrower beams with lower sidelobes. Simulation results are presented to verify the effectiveness of the method.
