
Keyword Search Result

[Keyword] PAR(2741hit)

601-620hit(2741hit)

  • Software Reliability Assessment via Non-Parametric Maximum Likelihood Estimation

    Yasuhiro SAITO  Tadashi DOHI  

     
    PAPER

      Vol:
    E98-A No:10
      Page(s):
    2042-2050

In this paper, we consider two non-parametric estimation methods for software reliability assessment that do not require specifying the fault-detection time distribution, where the underlying stochastic process describing software fault counts during system testing is a non-homogeneous Poisson process. The resulting data-driven methodologies provide useful probabilistic information for software reliability assessment under incomplete knowledge of the fault-detection time distribution. Through examples with real software fault data, we show that the proposed methods provide more accurate estimation results than the common parametric approach.
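The non-parametric idea can be illustrated with a minimal sketch (this is an illustration, not the authors' estimators): the empirical mean value function of an NHPP is simply the cumulative count of faults detected up to time t, computed here from hypothetical fault-detection times.

```python
import numpy as np

def empirical_mean_value(fault_times, t):
    """Non-parametric estimate of the NHPP mean value function:
    the number of faults detected up to time t (a step function)."""
    fault_times = np.sort(np.asarray(fault_times))
    return int(np.searchsorted(fault_times, t, side="right"))

# Hypothetical fault-detection times (days into system testing).
times = [2.1, 3.5, 3.9, 7.0, 12.4, 20.8]
print(empirical_mean_value(times, 10.0))  # faults observed by day 10 -> 4
```

No distributional form is assumed; the estimate is driven entirely by the observed data, which is the spirit of the data-driven methodologies described above.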

  • Consistent Sparse Representation for Abnormal Event Detection

    Zhong ZHANG  Shuang LIU  Zhiwei ZHANG  

     
    LETTER-Pattern Recognition

      Publicized:
    2015/07/17
      Vol:
    E98-D No:10
      Page(s):
    1866-1870

Sparsity-based methods have recently been applied to abnormal event detection and have achieved impressive results. However, most such methods suffer from the curse of dimensionality; furthermore, they take no account of the relationship among coefficient vectors. In this paper, we propose a novel method called consistent sparse representation (CSR) to overcome these drawbacks. We first reconstruct each feature in the space spanned by the clustering centers of the training features, so as to reduce the dimensionality of the features and preserve the neighboring structure. Then, a consistency regularization is added to the sparse representation model, which explicitly considers the relationship among coefficient vectors. Our method is verified on two challenging databases (the UCSD Ped1 database and the Subway database), and the experimental results demonstrate that it obtains better results than previous methods in abnormal event detection.
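The dimension-reduction step can be sketched roughly as follows (a hedged illustration only: CSR additionally imposes sparsity and the consistency regularization, neither of which is shown here, and the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((100, 64))   # hypothetical 64-D training features
k = 8
# Stand-in for k-means cluster centers (the paper clusters training features).
centers = features[rng.choice(100, size=k, replace=False)]

def reencode(x, centers):
    """Least-squares coefficients c with centers.T @ c ~ x, so each
    64-D feature is re-expressed as a k-D vector (dimension reduction)."""
    c, *_ = np.linalg.lstsq(centers.T, x, rcond=None)
    return c

c = reencode(features[0], centers)
print(c.shape)  # (8,)
```

Re-expressing features over shared cluster centers is what lets nearby features receive nearby codes, i.e., it preserves the neighboring structure mentioned above.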

  • Improvement of Reliability Evaluation for 2-Unit Parallel System with Cascading Failures by Using Maximal Copula

    Shuhei OTA  Takao KAGEYAMA  Mitsuhiro KIMURA  

     
    LETTER

      Vol:
    E98-A No:10
      Page(s):
    2096-2100

In this study, we investigate whether copula modeling contributes to improving reliability evaluation in a cascading failure-occurrence environment. In particular, as a basic problem, we focus on a 2-unit parallel system whose units may fail dependently on each other. As a result, reliability assessment of the system using the maximal copula provides a more accurate evaluation than traditional Weibull analysis if the degree of dependency between the two units is high. We show this result through several simulation studies.
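The role of the copula can be made concrete with a small sketch (unit parameters are hypothetical, and this is not the paper's estimation procedure): a 2-unit parallel system fails only when both units have failed, so its reliability is R(t) = 1 - C(F1(t), F2(t)), where C couples the marginal failure probabilities. The maximal (comonotone) copula min(u, v) models fully dependent failures and gives a lower reliability than the independence copula uv, which is why ignoring dependency overestimates redundancy.

```python
import math

def weibull_cdf(t, shape, scale):
    """Weibull failure-time CDF F(t) = 1 - exp(-(t/scale)^shape)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def parallel_reliability(t, cdf1, cdf2, copula):
    # A 2-unit parallel system fails only when both units have failed.
    return 1.0 - copula(cdf1(t), cdf2(t))

independence = lambda u, v: u * v     # independent unit failures
maximal = lambda u, v: min(u, v)      # maximal (comonotone) copula

F1 = lambda t: weibull_cdf(t, 1.5, 100.0)  # hypothetical unit 1
F2 = lambda t: weibull_cdf(t, 2.0, 120.0)  # hypothetical unit 2

print(parallel_reliability(80.0, F1, F2, independence))
print(parallel_reliability(80.0, F1, F2, maximal))
```

Since min(u, v) >= uv on [0, 1]^2, the maximal-copula reliability is always the lower of the two, matching the intuition that strong dependency erodes the benefit of parallel redundancy.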

  • A Novel Iterative Speaker Model Alignment Method from Non-Parallel Speech for Voice Conversion

    Peng SONG  Wenming ZHENG  Xinran ZHANG  Yun JIN  Cheng ZHA  Minghai XIN  

     
    LETTER-Speech and Hearing

      Vol:
    E98-A No:10
      Page(s):
    2178-2181

Most current voice conversion methods are based on parallel speech, which is not easily obtained in practice. In this letter, a novel iterative speaker model alignment (ISMA) method is proposed to address this problem. First, the source and target speaker models are each trained from the background model using the maximum a posteriori (MAP) algorithm. Then, a novel ISMA method is presented for the alignment and transformation of spectral features. Finally, the ISMA approach is combined with a Gaussian mixture model (GMM) to further improve conversion performance. A series of objective and subjective experiments carried out on the CMU ARCTIC dataset demonstrate that the proposed method significantly outperforms the state-of-the-art approach.

  • Direction-of-Arrival Estimation Using an Array Covariance Vector and a Reweighted l1 Norm

    Xiao Yu LUO  Xiao chao FEI  Lu GAN  Ping WEI  Hong Shu LIAO  

     
    LETTER-Digital Signal Processing

      Vol:
    E98-A No:9
      Page(s):
    1964-1967

We propose a novel sparse representation-based direction-of-arrival (DOA) estimation method. In contrast to methods that approximate l0-norm minimization by l1-norm minimization, our method designs a reweighted l1 norm to substitute for the l0 norm. We then justify the capability of the reweighted l1 norm to bridge the gap between l0- and l1-norm minimization. In addition, an array covariance vector without redundancy is utilized to extend the aperture, and we prove that the degrees of freedom are thereby increased. Simulation results show that the proposed method performs much better than l1-type methods when the signal-to-noise ratio (SNR) is low and the number of snapshots is small.

  • High-Quality Recovery of Non-Sparse Signals from Compressed Sensing — Beyond l1 Norm Minimization —

    Akira HIRABAYASHI  Norihito INAMURO  Aiko NISHIYAMA  Kazushi MIMURA  

     
    PAPER

      Vol:
    E98-A No:9
      Page(s):
    1880-1887

We propose a novel algorithm for the recovery of non-sparse, but compressible, signals from linear undersampled measurements. The proposed algorithm consists of two steps. The first step recovers the signal by l1-norm minimization. The second step then decomposes the l1 reconstruction into major and minor components. Using the major components, measurements for the minor components of the target signal are estimated. The minor components are further estimated from the estimated measurements by maximum a posteriori (MAP) estimation, which leads to a ridge regression with the regularization parameter determined by the error bound for the estimated measurements. After a slight modification of the major components, the final estimate is obtained by combining the two estimates. The computational cost of the proposed algorithm is almost the same as that of l1-norm minimization. Simulation results for one-dimensional computer-generated signals show that the proposed algorithm gives 11.8% better results on average than l1-norm minimization and the lasso estimator. Simulations using standard images also show that the proposed algorithm outperforms these conventional methods.
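The MAP step can be sketched generically (this is the standard Gaussian-prior reduction to ridge regression, not the paper's full pipeline; the matrix and regularization value are hypothetical): under a Gaussian prior, the MAP estimate of x from y = Ax + noise is the closed-form ridge solution.

```python
import numpy as np

def ridge(A, y, lam):
    """MAP estimate under a Gaussian prior reduces to ridge regression:
    argmin_x ||A x - y||^2 + lam ||x||^2, solved in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
y = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = ridge(A, y, lam=0.1)
print(np.linalg.norm(x_hat - x_true))
```

In the paper, lam is not hand-tuned but determined from the error bound for the estimated measurements, which is what makes the second step automatic.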

  • A Combinatorial Aliasing-Based Sparse Fourier Transform

    Pengcheng QIU  Feng YU  

     
    LETTER-Digital Signal Processing

      Vol:
    E98-A No:9
      Page(s):
    1968-1972

The sparse Fourier transform (SFT) seeks to recover the k non-negligible Fourier coefficients of a k-sparse signal of length N (k«N). A single-frequency signal can be recovered via the Chinese remainder theorem (CRT) from sub-sampled discrete Fourier transforms (DFTs). However, when there are multiple non-negligible coefficients, some of them may collide, and multiple stages of sub-sampled DFTs are needed to deal with such collisions. In this paper, we propose a combinatorial aliasing-based SFT (CASFT) algorithm that is robust to noise and greatly reduces the number of stages by iteratively recovering coefficients. First, CASFT detects collisions and recovers coefficients via the CRT in a single stage. These coefficients are then subtracted from each stage, and the process iterates through the other stages. With a computational complexity of O(k log k log₂N) and a sample complexity of O(k log₂N), CASFT is a novel and efficient SFT algorithm.
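The CRT step can be illustrated in isolation (a minimal sketch with hypothetical sizes, not the full CASFT algorithm): a tone at frequency f in a length-N signal aliases to bin f mod p in a length-p sub-sampled DFT, so two coprime sub-sampling lengths p and q with pq >= N pin down f exactly.

```python
from math import gcd

def crt_pair(r1, m1, r2, m2):
    """Combine f = r1 (mod m1) and f = r2 (mod m2) for coprime m1, m2."""
    assert gcd(m1, m2) == 1
    inv = pow(m1, -1, m2)          # modular inverse of m1 modulo m2
    return (r1 + m1 * ((r2 - r1) * inv % m2)) % (m1 * m2)

# Hypothetical sizes: N = 240 with coprime sub-DFT lengths 15 and 16.
N, p, q = 240, 15, 16
f = 173                            # true frequency of a single tone
print(crt_pair(f % p, p, f % q, q))  # recovers 173
```

A collision occurs when two tones share a residue in some stage; CASFT's contribution is detecting such collisions and peeling off already-recovered coefficients so the remaining stages resolve them.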

  • Statistics on Temporal Changes of Sparse Coding Coefficients in Spatial Pyramids for Human Action Recognition

    Yang LI  Junyong YE  Tongqing WANG  Shijian HUANG  

     
    LETTER-Pattern Recognition

      Publicized:
    2015/06/01
      Vol:
    E98-D No:9
      Page(s):
    1711-1714

Traditional sparse representation-based methods for human action recognition usually pool over the entire video to form the final feature representation, neglecting the spatio-temporal information of the features. To exploit this information, we present a novel histogram representation obtained from statistics on the temporal changes of sparse coding coefficients, computed frame by frame in spatial pyramids constructed from the videos. The histograms are then fed into a support vector machine with a spatial pyramid matching kernel for final action classification. We validate our method on two benchmarks, KTH and UCF Sports, and the experimental results show its effectiveness in human action recognition.

  • Motion of Break Arcs Occurring between Silver Electrical Contacts with Copper Arc Runners

    Haruki MIYAGAWA  Junya SEKIKAWA  

     
    BRIEF PAPER

      Vol:
    E98-C No:9
      Page(s):
    919-922

Copper arc runners are fixed on silver electrical contacts, and break arcs are generated between the contacts in a DC resistive circuit. The circuit current when the contacts are closed is 10 A, and the supply voltage is varied from 200 V to 450 V. The following results are shown. Cathode spots stay on the cathode surface, but anode spots run on the runner when the supply voltage is 250 V or higher. When the supply voltage is greater than 250 V, the break arcs run on the runner when the arcs are successfully extinguished, and stay on the runner when arc extinction fails. The arc lengths just before arc extinction with and without the runners are also investigated; the arc lengths are the same with or without the runners for each supply voltage.

  • Separation of Mass Spectra Based on Probabilistic Latent Component Analysis for Explosives Detection

    Yohei KAWAGUCHI  Masahito TOGAMI  Hisashi NAGANO  Yuichiro HASHIMOTO  Masuyuki SUGIYAMA  Yasuaki TAKADA  

     
    PAPER

      Vol:
    E98-A No:9
      Page(s):
    1888-1897

A new algorithm for separating mass spectra into individual substances for explosives detection is proposed. In the field of mass spectrometry, separation methods such as principal-component analysis (PCA) and independent-component analysis (ICA) are widely used. Mass spectra, however, have no negative values, and the orthogonality condition imposed on components does not necessarily hold for them. Because these methods allow negative values, and PCA additionally imposes an orthogonality condition, they are not suitable for separating mass spectra. The proposed algorithm is based on probabilistic latent-component analysis (PLCA), a statistical formulation of non-negative matrix factorization (NMF) using the KL divergence. Because PLCA imposes a non-negativity constraint but not orthogonality, the algorithm is effective for separating the components of mass spectra. In addition, to estimate the components more accurately, a sparsity constraint is applied to PLCA for explosives detection. The main contribution is the industrial application of the algorithm in an explosives-detection system. Results of an experimental evaluation with data obtained in a real railway station demonstrate that the proposed algorithm outperforms PCA and ICA, and calculation-time results demonstrate that the algorithm can work in real time.
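The NMF view of PLCA can be sketched with the classic Lee-Seung multiplicative updates for the KL objective (a generic illustration on synthetic non-negative data, without the paper's sparsity constraint or its probabilistic formulation):

```python
import numpy as np

def nmf_kl(V, r, iters=200, seed=0):
    """NMF minimizing the KL divergence via Lee-Seung multiplicative
    updates; factors stay non-negative by construction."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        WH = W @ H + 1e-12
        W *= ((V / WH) @ H.T) / (H.sum(axis=1) + 1e-12)
        WH = W @ H + 1e-12
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + 1e-12)
    return W, H

# Synthetic stand-in for mass spectra: mixtures of two non-negative parts.
rng = np.random.default_rng(1)
W0, H0 = rng.random((30, 2)), rng.random((2, 10))
V = W0 @ H0
W, H = nmf_kl(V, 2)
print(np.abs(V - W @ H).max())
```

Because the updates are purely multiplicative, no entry can turn negative, which is exactly the property that makes this family of methods suitable for mass spectra where PCA and ICA are not.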

  • Graph Isomorphism Completeness for Trapezoid Graphs

    Asahi TAKAOKA  

     
    LETTER-Graphs and Networks

      Vol:
    E98-A No:8
      Page(s):
    1838-1840

The complexity of the graph isomorphism problem for trapezoid graphs has been open for over a decade. This paper shows that the problem is GI-complete. More precisely, we show that the graph isomorphism problem is GI-complete for comparability graphs of partially ordered sets with interval dimension 2 and height 3. In contrast, the problem is known to be solvable in polynomial time for comparability graphs of partially ordered sets with interval dimension at most 2 and height at most 2.

  • Partial Encryption Method That Enhances MP3 Security

    Twe Ta OO  Takao ONOYE  Kilho SHIN  

     
    PAPER-Digital Signal Processing

      Vol:
    E98-A No:8
      Page(s):
    1760-1768

The MPEG-1 layer-III compressed audio format, widely known as MP3, is the most popular format for audio distribution. However, it is not equipped with security features to protect content from unauthorized access. Although encryption ensures content security, naively encrypting the entire MP3 file would destroy compliance with the MPEG standard. In this paper, we propose a low-complexity partial encryption method that is embedded in the MP3 encoding process. Our method reduces time consumption by encrypting only the perceptually important parts of an MP3 file rather than the whole file, and the resulting encrypted file remains compatible with the MPEG standard, so it can be rendered by any existing MP3 player. For full-quality rendering, decryption with the appropriate cryptographic key is necessary. Moreover, the effect of encryption on audio quality can be flexibly controlled by adjusting the percentage of encryption. On the basis of this feature, we can realize the try-before-purchase model, one of the important business models of Digital Rights Management (DRM): users can render encrypted MP3 files for trial and enjoy the content in original quality by purchasing decryption keys. Our experiments show that encrypting 2-10% of the MP3 data suffices to generate trial music, and that the increase in file size after encryption is negligible.
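The percentage-controlled idea can be sketched abstractly (a toy illustration only: the paper selects perceptually important parts during MP3 encoding, whereas this sketch just XOR-encrypts a fixed fraction of a byte payload with a hash-based keystream, which is not a production cipher):

```python
import hashlib

def keystream(key, nbytes):
    # Simple SHA-256 counter-mode keystream (illustration only).
    out, counter = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

def partial_xor(data, key, percent):
    """Encrypt only the first `percent` of the payload, leaving the
    rest untouched; XOR makes the operation its own inverse."""
    n = int(len(data) * percent / 100)
    ks = keystream(key, n)
    head = bytes(a ^ b for a, b in zip(data[:n], ks))
    return head + data[n:]

frame = bytes(range(200))              # stand-in for an MP3 frame payload
enc = partial_xor(frame, b"key", 10)   # 10% encrypted
dec = partial_xor(enc, b"key", 10)     # same call decrypts
print(dec == frame, enc != frame)
```

Because most of the payload is untouched, a standard decoder can still parse the file (the degraded "trial" rendering), while the key restores the original bytes exactly.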

  • Compressive Channel Estimation Using Distribution Agnostic Bayesian Method

    Yi LIU  Wenbo MEI  Huiqian DU  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E98-B No:8
      Page(s):
    1672-1679

Compressive sensing (CS)-based channel estimation considerably reduces pilot symbol usage by exploiting the sparsity of the propagation channel in the delay-Doppler domain. In this paper, we consider the application of Bayesian approaches to sparse channel estimation in orthogonal frequency division multiplexing (OFDM) systems. Taking advantage of the block-sparse structure and statistical properties of time-frequency selective channels, the proposed Bayesian method provides a more efficient and accurate estimate of the channel state information (CSI) than conventional CS-based methods do. Moreover, our estimation scheme is not limited to the Gaussian scenario: it is also applicable to channels with non-Gaussian priors or unknown probability density functions, which is particularly useful when the prior statistics of the channel coefficients cannot be precisely estimated. We also design a combo pilot pattern to improve the performance of the proposed scheme. Simulation results demonstrate that our method performs well at high Doppler frequencies.

  • Improvement of the Solving Performance by the Networking of Particle Swarm Optimization

    Tomoyuki SASAKI  Hidehiro NAKANO  Arata MIYAUCHI  Akira TAGUCHI  

     
    PAPER-Nonlinear Problems

      Vol:
    E98-A No:8
      Page(s):
    1777-1786

This paper presents a particle swarm optimization network (PSON) to improve the search capability of PSO. In PSON, multiple PSOs are connected for the purpose of communication. A variety of network topologies can be realized by varying the number of PSOs connected to each PSO, and the solving performance and convergence speed can be controlled by changing the network topology. Furthermore, high parallelism can be realized by assigning each PSO to a single processor. A stability condition analysis and the performance of PSON are shown.

  • A Note on Irreversible 2-Conversion Sets in Subcubic Graphs

    Asahi TAKAOKA  Shuichi UENO  

     
    LETTER-Fundamentals of Information Systems

      Publicized:
    2015/05/14
      Vol:
    E98-D No:8
      Page(s):
    1589-1591

The irreversible k-conversion set is introduced in connection with the mathematical modeling of the spread of diseases or opinions. We show that the problem of finding a minimum irreversible 2-conversion set can be solved in O(n² log⁶ n) time for graphs with maximum degree at most 3 (subcubic graphs) by reducing it to the graphic matroid parity problem, where n is the number of vertices in the graph. This affirmatively settles an open question posed by Kyncl et al. (2014).
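The underlying process is easy to state and simulate (a minimal sketch of the conversion dynamics on a toy graph, not the paper's matroid-parity algorithm): a vertex converts irreversibly once at least 2 of its neighbors are converted, and a 2-conversion set is a seed set that eventually converts every vertex.

```python
def converts(adj, seed):
    """Simulate irreversible 2-conversion: an unconverted vertex converts
    permanently once at least 2 of its neighbors are converted.
    Returns True if the seed set eventually converts every vertex."""
    converted = set(seed)
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if v not in converted and sum(n in converted for n in nbrs) >= 2:
                converted.add(v)
                changed = True
    return len(converted) == len(adj)

# A 4-cycle (subcubic): two opposite vertices convert the whole cycle.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(converts(cycle, {0, 2}))  # True
print(converts(cycle, {0}))     # False
```

Checking a given seed set is easy, as above; the hard part solved in the paper is finding a minimum such set, which for subcubic graphs reduces to graphic matroid parity.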

  • White Balancing by Using Multiple Images via Intrinsic Image Decomposition

    Ryo MATSUOKA  Tatsuya BABA  Mia RIZKINIA  Masahiro OKUDA  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2015/05/14
      Vol:
    E98-D No:8
      Page(s):
    1562-1570

Using a flash/no-flash image pair, we propose a novel white-balancing technique that can effectively correct the color balance of a complex scene under multiple light sources. In the proposed method, using multiple images of the same scene taken under different lighting conditions, we estimate the reflectance component of the scene and the shading components of each image. The reflectance component is the specific object color, which does not depend on the scene illumination, and the shading component is the shading effect caused by the illuminating lights. We then achieve white balancing by appropriately correcting the estimated shading components. The proposed method achieves better performance than conventional methods, especially under colored illumination and mixed lighting conditions.

  • Speech Emotion Recognition Based on Sparse Transfer Learning Method

    Peng SONG  Wenming ZHENG  Ruiyu LIANG  

     
    LETTER-Speech and Hearing

      Publicized:
    2015/04/10
      Vol:
    E98-D No:7
      Page(s):
    1409-1412

In traditional speech emotion recognition systems, when the training and testing utterances come from different corpora, the recognition rates decrease dramatically. To tackle this problem, in this letter, inspired by recent developments in sparse coding and transfer learning, a novel sparse transfer learning method is presented for speech emotion recognition. First, a sparse coding algorithm is employed to learn a robust sparse representation of emotional features. Then, a novel sparse transfer learning approach is presented, in which the distance between the feature distributions of the source and target datasets is used to regularize the objective function of sparse coding. The experimental results demonstrate that, compared with the automatic recognition approach, the proposed method achieves promising improvements in recognition rates and significantly outperforms the classic dimension-reduction-based transfer learning approach.

  • Dosimetry and Verification for 6-GHz Whole-Body Non-Constraint Exposure of Rats Using Reverberation Chamber

    Jingjing SHI  Jerdvisanop CHAKAROTHAI  Jianqing WANG  Kanako WAKE  Soichi WATANABE  Osamu FUJIWARA  

     
    PAPER

      Vol:
    E98-B No:7
      Page(s):
    1164-1172

With the rapid increase in the uses of wireless communications in modern life, the high microwave and millimeter-wave frequency bands are attracting much attention. However, the existing databases on radio-frequency (RF) electromagnetic (EM) field exposure of biological bodies above 6 GHz are clearly insufficient. An in-vivo research project on local and whole-body exposure of rats to RF-EM fields above 6 GHz was started in Japan in 2013. This study aims to perform a dosimetric design for the whole-body-average specific absorption rates (WBA-SARs) of unconstrained rats exposed to 6 GHz RF-EM fields in a reverberation chamber (RC). The input power into the RC required to achieve a target exposure level in the rats is clarified using a two-step evaluation method. The two-step method, which combines finite-difference time-domain (FDTD) numerical solutions with electric field measurements in the RC exposure system, is used to determine the whole-body exposure level in the rats. To verify the validity of the two-step method, we use S-parameter measurements inside the RC to experimentally derive the WBA-SARs with rat-equivalent phantoms and compare them with the FDTD-calculated values. The difference between the two-step method and the S-parameter measurements is within 1.63 dB, which demonstrates the validity and usefulness of the two-step technique.

  • Modeling of Bulk Current Injection Setup for Automotive Immunity Test Using Electromagnetic Analysis

    Yosuke KONDO  Masato IZUMICHI  Kei SHIMAKURA  Osami WADA  

     
    PAPER

      Vol:
    E98-B No:7
      Page(s):
    1212-1219

This paper provides a method based on electromagnetic (EM) analysis to predict the conducted currents in a bulk current injection (BCI) test system for automotive components. The BCI test system comprises an injection probe, equipment under test (EUT), line impedance stabilization networks (LISNs), wires, and an electric load. All components are modeled in full-wave EM analysis, and the EM model of the injection probe enables us to handle multiple wires. Using transmission line theory, the BCI setup model is divided into several parts in order to reduce the calculation time. The proposed method is applied to an actual BCI setup of an automotive component; the simulated common-mode currents at the input terminals of the EUT show good accuracy over the frequency range of 1-400 MHz, and the model separation reduces the calculation time to only several hours.

  • Design of q-Parallel LFSR-Based Syndrome Generator

    Seung-Youl KIM  Kyoung-Rok CHO  Je-Hoon LEE  

     
    BRIEF PAPER

      Vol:
    E98-C No:7
      Page(s):
    594-596

This paper presents a new parallel architecture of a syndrome generator for a high-speed BCH (Bose-Chaudhuri-Hocquenghem) decoder. In particular, the proposed parallel syndrome generators are based on an LFSR (linear feedback shift register) architecture to achieve high throughput without significant area overhead. From the experimental results, the proposed approach achieves 4.60 Gbps using 0.25-µm standard CMOS technology, which is much faster than the conventional byte-wise GFM-based counterpart. The high throughput is due to the well-tuned hardware implementation using an unfolding transformation.
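The unfolding idea can be sketched in software (a behavioral illustration with a small hypothetical polynomial, not the paper's circuit): a syndrome is the remainder of the received polynomial modulo a minimal polynomial, computed bit-serially by an LFSR; a q-parallel version consumes q bits per clock by fusing q serial iterations into one combinational step.

```python
def lfsr_serial(bits, g=0b10011, deg=4):
    """Bit-serial polynomial division over GF(2): after feeding the
    message MSB-first, the register holds r(x) mod g(x) (the syndrome).
    g = 0b10011 encodes the hypothetical example x^4 + x + 1."""
    s = 0
    for b in bits:
        s = (s << 1) | b
        if (s >> deg) & 1:
            s ^= g
    return s

def lfsr_2parallel(bits, g=0b10011, deg=4):
    """Unfolded version consuming 2 bits per step (q-parallel, q = 2):
    the two fused serial iterations below become one combinational
    block in hardware, doubling throughput at the same clock rate."""
    assert len(bits) % 2 == 0
    s = 0
    for i in range(0, len(bits), 2):
        for b in bits[i:i + 2]:    # two serial steps fused per clock
            s = (s << 1) | b
            if (s >> deg) & 1:
                s ^= g
    return s

msg = [1, 0, 1, 1, 0, 0, 1, 0]
print(lfsr_serial(msg) == lfsr_2parallel(msg))  # True
```

The two functions are state-for-state equivalent; the hardware gain comes from flattening the fused iterations into combinational logic so the register clocks half as often for the same data rate.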
