
Keyword Search Result

[Keyword] MPO(945hit)

281-300hit(945hit)

  • An Iterative Technique for Optimally Designing Extrapolated Impulse Response Filter in the Mini-Max Sense

    Hao WANG  Li ZHAO  Wenjiang PEI  Jiakuo ZUO  Qingyun WANG  Minghai XIN  

     
    LETTER-Systems and Control

    Vol: E96-A No:10, Page(s): 2029-2033

    The optimal design of an extrapolated impulse response (EIR) filter (in the mini-max sense) is a non-linear programming problem. In this paper, the optimal design of the EIR filter via semi-infinite programming (SIP) is investigated, and an iterative technique for optimally designing the EIR filter is proposed. Simulation experiments validate the effectiveness of the SIP technique and the proposed iterative technique in the optimal design of the EIR filter.
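
    As a loose illustration of the general idea only, the sketch below solves a discretized minimax (Chebyshev) FIR design problem as a linear program; discretizing the frequency axis is the usual way a semi-infinite program is made finite. The Type-I linear-phase structure, filter order, and lowpass target are assumptions for illustration, and the EIR filter structure and the paper's iterative technique are not reproduced.

```python
# Hedged sketch: minimax FIR design via a discretized semi-infinite program
# solved as a linear program. The filter structure and target response are
# illustrative assumptions, not the paper's EIR design.
import numpy as np
from scipy.optimize import linprog

M = 10                                          # half-order of a Type-I linear-phase filter (assumption)
omega = np.linspace(0, np.pi, 200)              # discretized frequency grid (SIP -> finitely many constraints)
desired = (omega <= 0.4 * np.pi).astype(float)  # ideal lowpass response (assumption)

# Amplitude response A(w) = sum_n a_n cos(n w); unknowns x = [a_0 .. a_M, delta]
C = np.cos(np.outer(omega, np.arange(M + 1)))
A_ub = np.block([[ C, -np.ones((len(omega), 1))],
                 [-C, -np.ones((len(omega), 1))]])
b_ub = np.concatenate([desired, -desired])
c = np.r_[np.zeros(M + 1), 1.0]                 # minimize the peak error delta

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (M + 2))
a, delta = res.x[:-1], res.x[-1]
print("peak approximation error:", delta)
```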

  • Hand Gesture Recognition Based on Perceptual Shape Decomposition with a Kinect Camera

    Chun WANG  Zhongyuan LAI  Hongyuan WANG  

     
    LETTER-Pattern Recognition

    Vol: E96-D No:9, Page(s): 2147-2151

    In this paper, we propose Perceptual Shape Decomposition (PSD) to detect fingers for a Kinect-based hand gesture recognition system. The PSD is formulated as a discrete optimization problem that removes all negative minima at minimum cost. Experiments show that our PSD is perceptually relevant and robust against distortion and hand variations, and thus improves the recognition system performance.

  • In-Service Video Quality Verifying Using DCT Basis for DTV Broadcasting

    Byeong-No KIM  Chan-Ho HAN  Kyu-Ik SOHNG  

     
    BRIEF PAPER-Electronic Instrumentation and Control

    Vol: E96-C No:7, Page(s): 1028-1031

    We propose a composite DCT basis line test signal to evaluate the video quality of a DTV encoder. The proposed composite test signal contains a frame index, a calibration square wave, and 7-field basis signals. The results show that the proposed method may be useful for an in-service video quality verifier, using an ordinary oscilloscope instead of special equipment.

  • A Small-Space Algorithm for Removing Small Connected Components from a Binary Image

    Tetsuo ASANO  Revant KUMAR  

     
    PAPER

    Vol: E96-A No:6, Page(s): 1044-1050

    Given a binary image I and a threshold t, the size-thresholded binary image I(t) defined by I and t is the binary image obtained after removing all connected components consisting of at most t pixels. This paper presents space-efficient algorithms for computing a size-thresholded binary image for a binary image of n pixels, assuming that the image is stored in a read-only array with random access. There are two cases depending on how large the threshold t is: a relatively large threshold, where t = Ω(), and a relatively small threshold, where t = O(). In this paper, a new algorithmic framework for the problem is presented. From an algorithmic point of view, the problem can be solved in O() time and O() work space. We propose new algorithms for both of the above cases which compute the size-thresholded binary image for any binary image of n pixels in O(n log n) time using only O() work space.
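
    For readers unfamiliar with the notion, the sketch below computes a size-thresholded binary image in the straightforward way, using a full labeling pass with ordinary work space; it only illustrates the definition of I(t) and does not reproduce the paper's small-work-space algorithm for read-only input arrays.

```python
# Hedged sketch of the size-thresholded binary image I(t): remove every
# connected component with at most t pixels. This is the straightforward
# labeling-based approach, not the paper's space-efficient algorithm.
import numpy as np
from scipy import ndimage

def size_threshold(image: np.ndarray, t: int) -> np.ndarray:
    """Return a binary image with all components of size <= t removed."""
    labels, num = ndimage.label(image)                # 4-connectivity by default
    sizes = ndimage.sum(image, labels, index=np.arange(1, num + 1))
    keep = np.zeros(num + 1, dtype=bool)
    keep[1:] = sizes > t                              # keep only components larger than t pixels
    return keep[labels]

I = (np.random.rand(64, 64) > 0.7)                    # toy binary image
print(size_threshold(I, t=5).sum(), "pixels remain")
```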

  • Selection of Component Carriers Using Centralized Baseband Pooling for LTE-Advanced Heterogeneous Networks

    Hiroyuki SEKI  Takaharu KOBAYASHI  Dai KIMURA  

     
    PAPER

    Vol: E96-B No:6, Page(s): 1288-1296

    Bandwidth expansion in Long Term Evolution (LTE)-Advanced is supported via carrier aggregation (CA), which aggregates multiple component carriers (CCs) to accomplish very high data rate communications. Heterogeneous networks (HetNets), which deploy pico base stations within macrocells, are also a key feature of LTE-Advanced to achieve substantial gains in coverage and capacity compared to macro-only cells. When CA is applied in HetNets, transmission on all CCs may not always be the best solution due to the extremely high levels of inter-cell interference experienced by HetNets. Activated CCs that are used for transmission should be selected depending on inter-cell interference conditions and the traffic offered in the cells. This paper presents a scheme to select CCs through centralized control assuming a centralized baseband unit (C-BBU) configuration. A C-BBU involves pooling tens or hundreds of baseband resources, where one baseband resource can be connected to any CC installed in remote radio heads (RRHs) via optical fibers. Fewer baseband resources can be prepared in a C-BBU than the number of CCs in RRHs to reduce the cost of equipment. Our proposed scheme selects the activated CCs by considering the user equipment (UE) assigned to CCs under the criterion of maximizing the proportional fairness (PF) utility function. Convex optimization using the Karush-Kuhn-Tucker (KKT) conditions is applied to solve for the resource allocation ratio, which enables user throughput to be estimated. We present results from system-level simulations of the downlink to demonstrate that the proposed algorithm to select CCs can outperform the conventional one that selects activated CCs based on the received signal strength. We also demonstrate that our proposed algorithm to select CCs can provide a good balance in traffic load between CCs and achieve better user throughput with fewer baseband resources.
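
    The sketch below illustrates only the selection criterion named in the abstract: choosing which CCs to activate so as to maximize a proportional-fairness (sum-log-throughput) utility. The brute-force search, the equal time-sharing assumption, and the toy rate matrix are simplifications of my own; the paper's KKT-based computation of allocation ratios and the C-BBU constraints are not reproduced.

```python
# Hedged sketch: pick a set of activated CCs that maximizes a proportional-
# fairness utility (sum of log user throughputs), assuming each UE is served
# on its best activated CC and UEs on the same CC share it equally.
import itertools
import numpy as np

def pf_utility(rates, active):
    """rates[u, c]: achievable rate of UE u on CC c; active: tuple of CC indices."""
    best_cc = [max(active, key=lambda c: rates[u, c]) for u in range(rates.shape[0])]
    load = {c: best_cc.count(c) for c in active}          # number of UEs sharing each CC
    thr = np.array([rates[u, best_cc[u]] / load[best_cc[u]] for u in range(rates.shape[0])])
    return np.sum(np.log(thr + 1e-12))

def select_ccs(rates, num_ccs, budget):
    """Brute-force search over CC subsets of size <= budget (available baseband resources)."""
    best = None
    for k in range(1, budget + 1):
        for subset in itertools.combinations(range(num_ccs), k):
            u = pf_utility(rates, subset)
            if best is None or u > best[0]:
                best = (u, subset)
    return best

rates = np.random.rand(8, 4) * 10    # toy example: 8 UEs, 4 CCs
print(select_ccs(rates, num_ccs=4, budget=2))
```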

  • Speaker Adaptation in Sparse Subspace of Acoustic Models

    Yongwon JEONG  

     
    LETTER-Speech and Hearing

    Vol: E96-D No:6, Page(s): 1402-1405

    I propose an acoustic model adaptation method using bases constructed through the sparse principal component analysis (SPCA) of acoustic models trained in a clean environment. I perform experiments on adaptation to a new speaker and noise. The SPCA-based method outperforms the PCA-based method in the presence of babble noise.
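
    A minimal sketch of the general recipe the letter describes, assuming the acoustic models are flattened into supervectors: build bases with sparse PCA over clean-trained models, then represent a new condition as a linear combination of those bases. The supervector construction, the least-squares weight estimation, and the toy data are assumptions for illustration.

```python
# Hedged sketch: sparse-PCA bases from clean-trained acoustic model
# "supervectors", adaptation as a least-squares combination of the bases.
# Data and weight criterion are illustrative assumptions.
import numpy as np
from sklearn.decomposition import SparsePCA

train = np.random.randn(100, 2000)        # 100 clean-trained speaker models, flattened (toy data)
mean = train.mean(axis=0)

spca = SparsePCA(n_components=20, alpha=1.0, random_state=0)
spca.fit(train - mean)
bases = spca.components_                  # (20, 2000) sparse basis vectors

target = np.random.randn(2000)            # statistics from a new speaker/noise condition (toy data)
w, *_ = np.linalg.lstsq(bases.T, target - mean, rcond=None)
adapted = mean + bases.T @ w              # adapted acoustic model parameters
print(adapted.shape)
```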

  • Pedestrian Detection by Using a Spatio-Temporal Histogram of Oriented Gradients

    Chunsheng HUA  Yasushi MAKIHARA  Yasushi YAGI  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E96-D No:6, Page(s): 1376-1386

    In this paper, we propose a pedestrian detection algorithm based on both appearance and motion features to achieve high detection accuracy when applied to complex scenes. Here, a pedestrian's appearance is described by a histogram of oriented spatial gradients, and his/her motion is represented by another histogram of temporal gradients computed from successive frames. Since pedestrians typically exhibit not only their human shapes but also unique human movements generated by their arms and legs, the proposed algorithm is particularly powerful in discriminating a pedestrian from a cluttered scene, where some background regions may appear to have human shapes but their motion differs from human movement. Unlike algorithms based on a co-occurrence feature descriptor, where significant generalization errors may arise owing to the lack of extensive training samples to cover feature variations, the proposed algorithm describes the shape and motion as unique features. These features enable us to train a pedestrian detector in the form of a spatio-temporal histogram of oriented gradients using the AdaBoost algorithm with a relatively small training dataset, while still achieving excellent detection performance. We have confirmed the effectiveness of the proposed algorithm through experiments on several public datasets.
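
    A minimal sketch of a spatio-temporal gradient descriptor for a single detection window, assuming spatial gradients from one frame and temporal gradients from the frame difference. The cell/block layout, bin counts, and AdaBoost training used in the paper are omitted.

```python
# Hedged sketch: one spatio-temporal gradient histogram per detection window.
# Spatial gradients come from one frame, temporal gradients from the
# difference of consecutive frames.
import numpy as np

def spatio_temporal_hog(frame_t, frame_t1, bins=9):
    gy, gx = np.gradient(frame_t.astype(float))           # spatial gradients
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)               # unsigned orientation in [0, pi)
    spatial_hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)

    dt = frame_t1.astype(float) - frame_t.astype(float)   # temporal gradient
    temporal_hist, _ = np.histogram(np.abs(dt), bins=bins, range=(0, np.abs(dt).max() + 1e-6))

    feat = np.concatenate([spatial_hist, temporal_hist]).astype(float)
    return feat / (np.linalg.norm(feat) + 1e-9)           # L2-normalized descriptor

f0 = np.random.rand(128, 64)
f1 = np.random.rand(128, 64)
print(spatio_temporal_hog(f0, f1).shape)
```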

  • LDR Image to HDR Image Mapping with Overexposure Preprocessing

    Yongqing HUO  Fan YANG  Vincent BROST  Bo GU  

     
    PAPER

    Vol: E96-A No:6, Page(s): 1185-1194

    Due to the growing popularity of High Dynamic Range (HDR) images and HDR displays, a large number of existing Low Dynamic Range (LDR) images need to be converted to HDR format to benefit from HDR advantages, which has given rise to a number of LDR-to-HDR algorithms. Most of these algorithms apply special treatment to overexposed areas during expansion, which has the potential to make the image quality worse than before processing and to introduce artifacts. To address these problems, we present a new LDR-to-HDR approach; unlike existing techniques, it avoids sophisticated treatment of overexposed areas in the dynamic range expansion step. Based on a separation principle, firstly, according to the common types of overexposure, the overexposed areas are classified into two categories, which are removed and corrected respectively by two kinds of techniques. Secondly, to maintain color consistency, color recovery is applied to the preprocessed images. Finally, the LDR image is expanded to HDR. Experiments show that the proposed approach performs well and the produced images are more favorable and suitable for applications. The image quality metric also shows that we can reveal more details without introducing the artifacts caused by other algorithms.

  • Super Resolution TOA Estimation Algorithm with Maximum Likelihood ICA Based Pre-Processing

    Tetsuhiro OKANO  Shouhei KIDERA  Tetsuo KIRIMOTO  

     
    PAPER-Sensing

    Vol: E96-B No:5, Page(s): 1194-1201

    High-resolution time of arrival (TOA) estimation techniques hold great promise for the high range resolution required in recently developed radar systems. A widely known super-resolution TOA estimation algorithm for such applications is frequency-domain multiple-signal classification (MUSIC), which exploits an orthogonal relationship between the signal and noise eigenvectors obtained from the correlation matrix of the observed transfer function. However, this method suffers severe resolution degradation when a number of highly correlated interference signals are mixed in the same range gate. As a solution to this problem, this paper proposes a novel TOA estimation algorithm that introduces a maximum likelihood independent component analysis (MLICA) approach, in which multiple complex sinusoidal signals are efficiently separated using likelihood criteria determined by the probability density function (PDF) of a complex sinusoid. This MLICA scheme can decompose highly correlated interference signals, and the proposed method incorporates the MLICA into the MUSIC method to enhance range resolution in heavily interfered situations. The results from numerical simulations and experimental investigation demonstrate that our proposed pre-processing method can enhance TOA estimation resolution compared with the original MUSIC, particularly at lower signal-to-noise ratios.
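
    The sketch below shows only the frequency-domain MUSIC step: forming a correlation matrix of the observed transfer function with sub-band smoothing, taking the noise subspace, and scanning delay candidates. The smoothing length, band, and toy two-path channel are assumptions, and the MLICA pre-processing that constitutes the paper's contribution is not reproduced.

```python
# Hedged sketch of frequency-domain MUSIC for TOA estimation; the MLICA
# pre-processing proposed in the paper is not included here.
import numpy as np

def music_toa(H, freqs, delays, num_paths, sub_len):
    """H: transfer function samples at 'freqs' (Hz); returns the MUSIC pseudo-spectrum over 'delays'."""
    df = freqs[1] - freqs[0]
    # Sub-band (smoothing) snapshots to decorrelate coherent paths
    snaps = np.array([H[i:i + sub_len] for i in range(len(H) - sub_len + 1)])
    R = snaps.conj().T @ snaps / snaps.shape[0]
    eigval, eigvec = np.linalg.eigh(R)
    En = eigvec[:, :sub_len - num_paths]                 # noise subspace (smallest eigenvalues)
    k = np.arange(sub_len)
    spectrum = []
    for tau in delays:
        a = np.exp(-2j * np.pi * k * df * tau)           # steering vector over sub-band frequencies
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# Toy example: two paths at 10 ns and 13 ns observed over a 1 GHz band
freqs = np.linspace(3e9, 4e9, 128)
H = sum(np.exp(-2j * np.pi * freqs * tau) for tau in (10e-9, 13e-9))
H += 0.05 * (np.random.randn(128) + 1j * np.random.randn(128))
delays = np.linspace(0, 30e-9, 600)
ps = music_toa(H, freqs, delays, num_paths=2, sub_len=64)
print("estimated TOA [ns]:", delays[np.argmax(ps)] * 1e9)
```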

  • Partitioned-Tree Nested Loop Join: An Efficient Join for Spatio-Temporal Interval Join

    Jinsoo LEE  Wook-Shin HAN  Jaewha KIM  Jeong-Hoon LEE  

     
    LETTER-Data Engineering, Web Information Systems

    Vol: E96-D No:5, Page(s): 1206-1210

    A predictive spatio-temporal interval join finds all pairs of moving objects satisfying a join condition on future time interval and space. In this paper, we propose a method called PTJoin. PTJoin partitions the inner index into small sub-trees and performs the join process for each sub-tree to reduce the number of disk page accesses for each window search. Furthermore, to reduce the number of pages accessed by consecutive window searches, we partition the index so that overlapping index pages do not belong to the same partition. Our experiments show that PTJoin reduces the number of page accesses by up to an order of magnitude compared to Interval_STJoin [9], which is the state-of-the-art solution, when the buffer size is small.

  • AspectQuery: A Method for Identification of Crosscutting Concerns in the Requirement Phase

    Chengwan HE  Chengmao TU  

     
    PAPER-Software Engineering

    Vol: E96-D No:4, Page(s): 897-905

    Identification of early aspects is a critical problem in aspect-oriented requirement engineering, but crosscutting concerns can be represented in various ways, which makes their identification difficult. To address this problem, this paper proposes the AspectQuery method based on a goal model. We analyze four kinds of goal decomposition models, summarize the main factors in identifying crosscutting concerns, and derive identification rules based on the goal model. A goal is a crosscutting concern when it satisfies one of the following conditions: i) the goal contributes to the realization of a soft-goal; ii) a parent goal of the goal is a candidate crosscutting concern; iii) the goal has at least two parent goals. AspectQuery includes four steps: building the goal model, transforming the goal model, identifying the crosscutting concerns using the identification rules, and composing the crosscutting concerns with the goals affected by them. We illustrate the AspectQuery method through a case study (a ticket booking management system). The results show the effectiveness of AspectQuery in identifying crosscutting concerns in the requirement phase.
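
    A toy sketch of the three identification rules stated above, applied to a hypothetical goal graph; the dictionary-based goal-model representation and the example goal names are assumptions for illustration only.

```python
# Hedged sketch: apply the three crosscutting-concern identification rules
# from the abstract to a toy goal model.
parents = {                      # goal -> list of parent goals (decomposition edges)
    "EncryptData": ["BookTicket", "CancelTicket"],
    "LogAccess": ["BookTicket"],
    "BookTicket": [],
    "CancelTicket": [],
}
contributes_to_softgoal = {"LogAccess": ["Security"], "EncryptData": ["Security"]}

def is_crosscutting(goal, seen=None):
    seen = seen or set()
    if goal in seen:                         # guard against cycles
        return False
    seen = seen | {goal}
    if contributes_to_softgoal.get(goal):    # rule i: contributes to a soft-goal
        return True
    if len(parents.get(goal, [])) >= 2:      # rule iii: at least two parent goals
        return True
    # rule ii: a parent goal is itself a candidate crosscutting concern
    return any(is_crosscutting(p, seen) for p in parents.get(goal, []))

print([g for g in parents if is_crosscutting(g)])
```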

  • A Proposal of Spatio-Temporal Reconstruction Method Based on a Fast Block-Iterative Algorithm Open Access

    Tatsuya KON  Takashi OBI  Hideaki TASHIMA  Nagaaki OHYAMA  

     
    PAPER-Medical Image Processing

    Vol: E96-D No:4, Page(s): 819-825

    Parametric images can help investigate disease mechanisms and vital functions. To estimate parametric images, it is necessary to obtain the tissue time activity curves (tTACs), which express temporal changes of tracer activity in human tissue. In general, the tTACs are calculated from each voxel's value of the time sequential PET images estimated from dynamic PET data. Recently, spatio-temporal PET reconstruction methods have been proposed in order to take into account the temporal correlation within each tTAC. Such spatio-temporal algorithms are generally quite computationally intensive. On the other hand, typical algorithms such as the preconditioned conjugate gradient (PCG) method still do not provide good accuracy in estimation. To overcome these problems, we propose a new spatio-temporal reconstruction method based on the dynamic row-action maximum-likelihood algorithm (DRAMA). As the original algorithm does, the proposed method takes into account the noise propagation, but it achieves much faster convergence. Performance of the method is evaluated with digital phantom simulations, and it is shown that the proposed method requires only a few reconstruction processes, thereby remarkably reducing the computational cost required to estimate the tTACs. The results also show that the tTACs and parametric images from the proposed method have better accuracy.

  • The Effect of Distinctiveness in Recognizing Average Face: Human Recognition and Eigenface Based Machine Recognition

    Naiwala P. CHANDRASIRI  Ryuta SUZUKI  Nobuyuki WATANABE  Hiroshi YAMADA  

     
    PAPER-Face Perception and Recognition

    Vol: E96-D No:3, Page(s): 514-522

    Face perception and recognition have attracted more attention recently in multidisciplinary fields such as engineering, psychology and neuroscience, with the advances in physical/physiological measurement and data analysis technologies. In this paper, our main interest is building computational models of human face recognition based on psychological experiments. We especially focus on modeling human face recognition characteristics of the average face in the dimension of distinctiveness. Psychological experiments were carried out to measure the distinctiveness of face images, and their results are explained by computer analysis of the images. Two psychological experiments were performed: 1) a classical experiment of distinctiveness rating and 2) a novel experiment on recognition of an average face. In the latter experiment, we examined how the average face of two face images was recognized by a human in a similarity test with respect to the original images which were used to calculate the average face. To explain the results of the psychological experiments, eigenface spaces were constructed based on Principal Component Analysis (PCA). Significant correlation was found between human and PCA-based computer recognition results. Emulation of human recognition of faces is one of the expected applications of this research.
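
    A minimal sketch of the machine-recognition side described above: construct an eigenface space with PCA, project the average of two faces, and compare its distances to the two originals. The random face vectors and the plain pixel average are placeholders for illustration.

```python
# Hedged sketch: eigenface space via PCA, then distances from the average of
# two faces to the two originals in that space. The data is random and
# purely illustrative of the procedure.
import numpy as np
from sklearn.decomposition import PCA

faces = np.random.rand(50, 64 * 64)          # 50 vectorized face images (toy data)
pca = PCA(n_components=20).fit(faces)        # eigenface space

a, b = faces[0], faces[1]
avg = 0.5 * (a + b)                          # simple pixel average stands in for the average face
pa, pb, pavg = pca.transform(np.vstack([a, b, avg]))

print("d(avg, A):", np.linalg.norm(pavg - pa))
print("d(avg, B):", np.linalg.norm(pavg - pb))
```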

  • An Improved Traffic Matrix Decomposition Method with Frequency-Domain Regularization

    Zhe WANG  Kai HU  Baolin YIN  

     
    LETTER-Information Network

    Vol: E96-D No:3, Page(s): 731-734

    We propose a novel network traffic matrix decomposition method named Stable Principal Component Pursuit with Frequency-Domain Regularization (SPCP-FDR), which improves the Stable Principal Component Pursuit (SPCP) method by using a frequency-domain noise regularization function. An experiment demonstrates the feasibility of this new decomposition method.
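
    For context, the sketch below solves the baseline SPCP decomposition (a low-rank part plus sparse anomalies plus bounded noise) with a generic convex solver; the λ and δ values and the toy traffic matrix are assumptions, and the frequency-domain regularization that defines SPCP-FDR is not reproduced.

```python
# Hedged sketch of baseline Stable Principal Component Pursuit (SPCP) on a
# traffic matrix: M is split into a low-rank part L, a sparse anomaly part S,
# and bounded noise. SPCP-FDR's frequency-domain term is not included.
import numpy as np
import cvxpy as cp

M = np.random.rand(48, 20)                   # toy traffic matrix: 48 time bins x 20 OD flows
lam = 1.0 / np.sqrt(max(M.shape))            # common RPCA-style weight (assumption)
delta = 0.1                                  # noise budget (assumption)

L = cp.Variable(M.shape)
S = cp.Variable(M.shape)
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                  [cp.norm(M - L - S, "fro") <= delta])
prob.solve()
print("rank of low-rank part:", np.linalg.matrix_rank(L.value, tol=1e-3))
```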

  • ATTI: Workload-Aware Query Adaptive OcTree Based Trajectory Index

    Xiangxu MENG  Xiaodong WANG  Xinye LIN  

     
    PAPER-Data Engineering, Web Information Systems

    Vol: E96-D No:3, Page(s): 643-654

    GPS trajectory databases serve as the basis for many intelligent applications that need to extract trajectories for further processing or mining. For such tasks, methods based on spatio-temporal range queries, which find all sub-trajectories within a given spatial extent and time interval, are commonly used. However, the history trajectory indexes of such methods suffer from two problems. First, temporal and spatial factors are not considered simultaneously, resulting in low performance when processing spatio-temporal queries. Second, the efficiency of the indexes is sensitive to query size: query performance changes dramatically as the query size changes. This paper proposes the workload-aware Adaptive OcTree based Trajectory clustering Index (ATTI), aiming at optimizing trajectory storage and index performance. The contributions are threefold. First, the distribution and time delay of the trajectory storage are introduced into the cost model of the spatio-temporal range query. Second, the distribution of spatial division is dynamically adjusted based on the GPS update workload. Third, a query workload adaptive mechanism is proposed based on a virtual OcTree forest. A wide range of experiments are carried out on the Microsoft GeoLife project dataset, and the results show that the query delay of ATTI can be about 50% shorter than that of the nested index.

  • Model Checking an OSEK/VDX-Based Operating System for Automobile Safety Analysis

    Yunja CHOI  

     
    LETTER-Dependable Computing

    Vol: E96-D No:3, Page(s): 735-738

    An automotive operating system is typical safety-critical software and therefore requires extensive analysis with respect to its effect on system safety. Our earlier work [1] reported a systematic model checking approach for checking the safety properties of the OSEK/VDX-based operating system Trampoline. This article reports further performance improvement using embedded C constructs for efficient verification of the Trampoline model developed in the earlier work. Experiments show that the use of embedded C constructs greatly reduces verification costs.

  • A Fast Implementation of PCA-L1 Using Gram-Schmidt Orthogonalization

    Mariko HIROKAWA  Yoshimitsu KUROKI  

     
    LETTER-Face Perception and Recognition

    Vol: E96-D No:3, Page(s): 559-561

    PCA-L1 (principal component analysis based on L1-norm maximization) is an approximate solution of L1-PCA (PCA based on the L1-norm) and is more robust against outliers than traditional PCA. However, the more dimensions the feature space has, the more calculation time PCA-L1 consumes. This paper focuses on the initialization procedure of the PCA-L1 algorithm and proposes a fast PCA-L1 method using Gram-Schmidt orthogonalization. Experimental results on face recognition show that the proposed method works faster than conventional PCA-L1 without a decrease in recognition accuracy.
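
    A minimal sketch of the ingredients named above: the standard PCA-L1 fixed-point iteration with Gram-Schmidt orthogonalization applied to successive components. The simple largest-norm initialization shown here is an assumption; the paper's specific fast initialization procedure is not reproduced.

```python
# Hedged sketch: standard PCA-L1 fixed-point iteration, with Gram-Schmidt
# used to keep successive components orthogonal.
import numpy as np

def pca_l1(X, n_components, n_iter=100):
    """X: (n_samples, n_features), assumed centered. Returns (n_components, n_features)."""
    W = []
    R = X.copy()
    for _ in range(n_components):
        w = R[np.argmax(np.linalg.norm(R, axis=1))].copy()   # simple initialization (assumption)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            s = np.sign(R @ w)
            s[s == 0] = 1.0
            w_new = s @ R                                    # sum_i sign(w^T x_i) x_i
            w_new /= np.linalg.norm(w_new)
            if np.allclose(w_new, w):
                break
            w = w_new
        for v in W:                                          # Gram-Schmidt against found components
            w -= (w @ v) * v
        w /= np.linalg.norm(w)
        W.append(w)
        R = R - np.outer(R @ w, w)                           # deflate the data
    return np.array(W)

X = np.random.randn(200, 10)
X -= X.mean(axis=0)
print(pca_l1(X, n_components=3).shape)
```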

  • PCA-Based Retinal Vessel Tortuosity Quantification

    Rashmi TURIOR  Danu ONKAEW  Bunyarit UYYANONVARA  

     
    PAPER-Pattern Recognition

    Vol: E96-D No:2, Page(s): 329-339

    Automatic vessel tortuosity measures are crucial for many applications related to retinal diseases such as those due to retinopathy of prematurity (ROP), hypertension, stroke, diabetes and cardiovascular diseases. An automatic evaluation and quantification of retinal vascular tortuosity would help in the early detection of such retinopathies and other systemic diseases. In this paper, we propose a novel tortuosity index based on principal component analysis. The index is compared with three existing indices using simulated curves and real retinal images to demonstrate that it is a valid indicator of tortuosity. The proposed index satisfies all the tortuosity properties, such as invariance to translation, rotation and scaling, as well as the modulation properties. It is capable of differentiating the tortuosity of structures that visually appear to differ in tortuosity and shape. The proposed index can automatically classify an image as tortuous or non-tortuous. For an optimal set of training parameters, the prediction accuracy is as high as 82.94% and 86.6% on 45 retinal images at the segment level and image level, respectively. The test results are verified against the judgment of two expert ophthalmologists. The proposed index is marked by its inherent simplicity and computational attractiveness, and produces the expected estimate irrespective of the segmentation approach. Examples and experimental results demonstrate the fitness and effectiveness of the proposed technique for both simulated curves and retinal images.
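
    Since the abstract does not give the formula, the sketch below uses a common PCA-on-curve-points idea, the ratio of the minor to the major eigenvalue of the segment's coordinate covariance, purely as an assumed stand-in for a PCA-based tortuosity score; it is not the paper's index.

```python
# Hedged sketch of a PCA-style tortuosity score for one vessel segment.
# The eigenvalue-ratio formula is an assumption for illustration only.
import numpy as np

def pca_tortuosity(points):
    """points: (N, 2) array of centerline coordinates of one vessel segment."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]     # major, minor variance
    return eigvals[1] / (eigvals[0] + 1e-12)             # ~0 for straight, larger for tortuous

t = np.linspace(0, 4 * np.pi, 200)
straight = np.c_[t, 0.01 * t]
wavy = np.c_[t, np.sin(t)]
print("straight:", pca_tortuosity(straight), "wavy:", pca_tortuosity(wavy))
```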

  • Statistical Approaches to Excitation Modeling in HMM-Based Speech Synthesis

    June Sig SUNG  Doo Hwa HONG  Hyun Woo KOO  Nam Soo KIM  

     
    LETTER-Speech and Hearing

    Vol: E96-D No:2, Page(s): 379-382

    In our previous study, we proposed the waveform interpolation (WI) approach to model the excitation signals for hidden Markov model (HMM)-based speech synthesis. This letter presents several techniques to improve excitation modeling within the WI framework. We propose both the time domain and frequency domain zero padding techniques to reduce the spectral distortion inherent in the synthesized excitation signal. Furthermore, we apply non-negative matrix factorization (NMF) to obtain a low-dimensional representation of the excitation signals. From a number of experiments, including a subjective listening test, the proposed method has been found to enhance the performance of the conventional excitation modeling techniques.
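
    A minimal sketch of the NMF step only, assuming it is applied to non-negative magnitude spectra of excitation frames; the toy data, frame length, and component count are placeholders, and the WI framework and the zero-padding techniques are not reproduced.

```python
# Hedged sketch: low-dimensional representation of excitation frames via NMF.
# NMF needs non-negative data, so it is applied here to magnitude spectra;
# that choice and the toy data are assumptions.
import numpy as np
from sklearn.decomposition import NMF

frames = np.random.randn(500, 80)                     # 500 excitation frames (toy data)
V = np.abs(np.fft.rfft(frames, axis=1))               # non-negative magnitude spectra

nmf = NMF(n_components=10, init="nndsvd", max_iter=500, random_state=0)
H = nmf.fit_transform(V)                              # low-dimensional activations (500 x 10)
W = nmf.components_                                   # spectral basis vectors (10 x 41)
print("relative reconstruction error:", np.linalg.norm(V - H @ W) / np.linalg.norm(V))
```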

  • Provable Security against Cryptanalysis with Impossible Differentials

    Kazumaro AOKI  

     
    LETTER

    Vol: E96-A No:1, Page(s): 233-236

    This letter discusses cryptanalysis with impossible differentials. After Biham et al. presented an attack on Skipjack, the technique was applied to many ciphers, and we consider it one of the most effective tools for cryptanalyzing a block cipher. Unfortunately, however, there is no construction method that provably resists the attack. This letter first introduces a measure that can evaluate the resistance against cryptanalysis with impossible differentials. Then, we propose a construction that resists cryptanalysis with impossible differentials. Moreover, a cipher based on the construction also provably resists differential cryptanalysis and linear cryptanalysis.
