
Keyword Search Results

[Keyword] SPAR (322 hits)

Results 181-200 of 322

  • An Iterative Reweighted Least Squares Algorithm with Finite Series Approximation for a Sparse Signal Recovery

    Kazunori URUMA  Katsumi KONISHI  Tomohiro TAKAHASHI  Toshihiro FURUKAWA  

     
    LETTER-Fundamentals of Information Systems

      Vol:
    E97-D No:2
      Page(s):
    319-322

    This letter deals with a sparse signal recovery problem and proposes a new algorithm based on the iterative reweighted least squares (IRLS) algorithm. We assume that the non-zero values of a sparse signal are always greater than a given constant and modify the IRLS algorithm to satisfy this assumption. Numerical results show that the proposed algorithm recovers a sparse vector efficiently.
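The letter's modified IRLS itself is not reproduced here; as a rough, hedged sketch of the underlying iteration it builds on, the standard IRLS loop for sparse recovery looks like the following (function name, sizes, and constants are illustrative assumptions):

```python
import numpy as np

def irls_sparse_recovery(A, y, n_iter=50, eps=1e-6):
    # Minimal IRLS sketch for sparse recovery from y = A @ x:
    # iterate the weighted least-squares solution
    #   x = W A^T (A W A^T)^{-1} y,  with W = diag(|x| + eps),
    # which progressively concentrates energy on a few coordinates.
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # least-squares initializer
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, y)
    return x

# Toy example: recover a 2-sparse vector from 10 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 20))
x_true = np.zeros(20)
x_true[3], x_true[11] = 1.5, -2.0
x_hat = irls_sparse_recovery(A, A @ x_true)
```

The letter's modification (enforcing a lower bound on the non-zero magnitudes) would alter the reweighting step, which is not shown here.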

  • Unified Coprocessor Architecture for Secure Key Storage and Challenge-Response Authentication

    Koichi SHIMIZU  Daisuke SUZUKI  Toyohiro TSURUMARU  Takeshi SUGAWARA  Mitsuru SHIOZAKI  Takeshi FUJINO  

     
    PAPER-Hardware Based Security

      Vol:
    E97-A No:1
      Page(s):
    264-274

    In this paper we propose a unified coprocessor architecture that, by using a Glitch PUF and a block cipher, efficiently unifies the functions necessary for secure key storage and challenge-response authentication. Based on the fact that a Glitch PUF uses random logic to generate glitches, the proposed architecture is designed around a block cipher circuit such that its round functions can be shared with a Glitch PUF as its random logic. As a concrete example, a circuit structure using a Glitch PUF and an AES circuit is presented, and evaluation results for its implementation on an FPGA are provided. In addition, a physical random number generator using the same circuit is proposed. Evaluation results from the two major test suites for randomness, NIST SP 800-22 and Diehard, are provided, showing that the physical random number generator passes both.

  • A Sparse Modeling Method Based on Reduction of Cost Function in Regularized Forward Selection

    Katsuyuki HAGIWARA  

     
    PAPER-Artificial Intelligence, Data Mining

      Vol:
    E97-D No:1
      Page(s):
    98-106

    Regularized forward selection is viewed as a method for obtaining a sparse representation in a nonparametric regression problem. In regularized forward selection, the regression output is represented by a weighted sum of several significant basis functions that are selected from among a large number of candidates by a greedy training procedure in terms of a regularized cost function, together with an appropriate model selection method. In this paper, we propose a model selection method for regularized forward selection. For this purpose, we focus on the reduction of the cost function achieved by appending a new basis function in the greedy training procedure. We first clarify a bias and variance decomposition of the cost reduction and then derive a probabilistic upper bound for the variance of the cost reduction under some conditions. The derived upper bound reflects an essential feature of the greedy training procedure; i.e., it selects the basis function that maximally reduces the cost function. We then propose a thresholding method for determining significant basis functions by applying the derived upper bound as a threshold level and effectively combining it with the leave-one-out cross-validation method. Several numerical experiments show that the generalization performance of the proposed method is comparable to that of the other methods, while the number of basis functions it selects is much smaller. We can therefore say that the proposed method yields a sparse representation while maintaining relatively good generalization performance. Moreover, our method has the advantage of being free from the selection of a regularization parameter.
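The greedy selection rule can be sketched as follows; the ridge-style penalty `lam` and the stopping rule here are illustrative stand-ins, not the paper's regularized cost or its upper-bound threshold. For a candidate column `phi` and current residual `r`, the optimal weight is `phi.r / (phi.phi + lam)` and the cost reduction is proportional to `(phi.r)^2 / (phi.phi + lam)`:

```python
import numpy as np

def regularized_forward_selection(Phi, y, lam=0.1, n_select=2):
    # Greedy sketch: at each step, append the candidate basis column that
    # most reduces the regularized cost 0.5*||r - w*phi||^2 + 0.5*lam*w^2.
    r = y.astype(float).copy()
    selected, weights = [], []
    for _ in range(n_select):
        # Cost reduction per column: 0.5 * (phi.r)^2 / (phi.phi + lam)
        scores = (Phi.T @ r) ** 2 / (np.sum(Phi ** 2, axis=0) + lam)
        j = int(np.argmax(scores))
        w = (Phi[:, j] @ r) / (Phi[:, j] @ Phi[:, j] + lam)
        r = r - w * Phi[:, j]
        selected.append(j)
        weights.append(w)
    return selected, weights, r

# With an identity dictionary the method picks the coordinates of y
# in order of decreasing magnitude.
Phi = np.eye(4)
y = np.array([3.0, 0.0, 1.0, 0.0])
selected, weights, r = regularized_forward_selection(Phi, y)
```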

  • Sequential Loss Tomography Using Compressed Sensing

    Kazushi TAKEMOTO  Takahiro MATSUDA  Tetsuya TAKINE  

     
    PAPER

      Vol:
    E96-B No:11
      Page(s):
    2756-2765

    Network tomography is a technique for estimating internal network characteristics from end-to-end measurements. In this paper, we focus on loss tomography, which is a network tomography problem for estimating link loss rates. We study the problem of detecting links with high loss rates in network environments where link loss rates change dynamically, and propose a window-based sequential loss tomography scheme. The loss tomography problem is formulated as an underdetermined linear inverse problem, which has infinitely many candidate solutions. In the proposed scheme, we use compressed sensing, which can solve such a problem with the prior information that the solution is a sparse vector. Measurement nodes transmit probe packets on measurement paths established between them, and calculate packet loss rates of the measurement paths (path loss rates) from probe packets received within a window. Measurement paths are classified into normal-quality and low-quality states according to their path loss rates. When a measurement node finds measurement paths in the low-quality state, link loss rates are estimated by compressed sensing. Using simulation scenarios in which a few link states change dynamically from low to high loss rates, we evaluate the performance of the proposed scheme.
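The compressed-sensing step can be sketched on a toy topology (the routing matrix, loss rates, and solver below are illustrative assumptions, not the paper's setup): with a 0/1 routing matrix `R`, path loss rates satisfy `y ≈ R @ x` for a sparse link-loss vector `x`, which an l1-regularized least-squares solver can recover. Here iterative soft-thresholding (ISTA) stands in for whatever recovery algorithm the paper uses:

```python
import numpy as np

def ista_lasso(R, y, lam=1e-4, n_iter=2000):
    # Estimate sparse link loss rates x from path loss rates y ≈ R @ x
    # by minimizing 0.5*||R x - y||^2 + lam*||x||_1 with ISTA.
    step = 1.0 / np.linalg.norm(R, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(R.shape[1])
    for _ in range(n_iter):
        x = x - step * (R.T @ (R @ x - y))        # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # shrink
    return x

# 5 measurement paths over 8 links; only link 4 is lossy.
R = np.array([[1, 1, 1, 0, 0, 0, 0, 0],
              [0, 0, 1, 1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 0],
              [1, 0, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 0, 1]], dtype=float)
x_true = np.zeros(8)
x_true[4] = 0.05
x_hat = ista_lasso(R, R @ x_true)
```

Although there are only 5 path measurements for 8 unknown links, the sparsity prior singles out the one lossy link.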

  • An Inter-Prediction Method Using Sparse Representation for High Efficiency Video Coding

    Koji INOUE  Kohei ISECHI  Hironobu SAITO  Yoshimitsu KUROKI  

     
    LETTER-Image Processing

      Vol:
    E96-A No:11
      Page(s):
    2191-2193

    This paper proposes an inter-prediction method for the upcoming video coding standard HEVC (High Efficiency Video Coding). HEVC offers an inter-prediction framework called local intensity compensation, which represents a current block by a linear combination of some reference blocks. The proposed method calculates the weight coefficients of the linear combination by using sparse representation. Experimental results show that the proposed method increases prediction accuracy in comparison with other methods.

  • Speaker Recognition Using Sparse Probabilistic Linear Discriminant Analysis

    Hai YANG  Yunfei XU  Qinwei ZHAO  Ruohua ZHOU  Yonghong YAN  

     
    PAPER

      Vol:
    E96-A No:10
      Page(s):
    1938-1945

    Sparse representation has been studied within the field of signal processing as a means of providing a compact form of signal representation. This paper introduces a sparse representation based framework named Sparse Probabilistic Linear Discriminant Analysis for speaker recognition. In this latent variable model, probabilistic linear discriminant analysis is modified to obtain an algorithm for learning overcomplete sparse representations by replacing the Gaussian prior on the factors with a Laplace prior that encourages sparseness. For a given speaker signal, the dictionary obtained from this model has good representational power while supporting optimal discrimination of the classes. An expectation-maximization algorithm is derived to train the model with a variational approximation to a range of heavy-tailed distributions whose limit is the Laplace. The variational approximation is also used to compute the likelihood ratio score of all speaker trials. This approach performed well on the core-extended conditions of the NIST 2010 Speaker Recognition Evaluation, and is competitive with Gaussian Probabilistic Linear Discriminant Analysis in terms of normalized Decision Cost Function and Equal Error Rate.

  • Bayesian Nonparametric Approach to Blind Separation of Infinitely Many Sparse Sources

    Hirokazu KAMEOKA  Misa SATO  Takuma ONO  Nobutaka ONO  Shigeki SAGAYAMA  

     
    PAPER

      Vol:
    E96-A No:10
      Page(s):
    1928-1937

    This paper deals with the problem of underdetermined blind source separation (BSS) where the number of sources is unknown. We propose a BSS approach that simultaneously estimates the number of sources, separates the sources based on the sparseness of speech, estimates the direction of arrival of each source, and performs permutation alignment. We confirmed experimentally that reasonably good separation was obtained with the present method without specifying the number of sources.

  • Online Sparse Volterra System Identification Using Projections onto Weighted l1 Balls

    Tae-Ho JUNG  Jung-Hee KIM  Joon-Hyuk CHANG  Sang Won NAM  

     
    PAPER

      Vol:
    E96-A No:10
      Page(s):
    1980-1983

    In this paper, online sparse Volterra system identification is proposed. For that purpose, the conventional adaptive projection-based algorithm with weighted l1 balls (APWL1) is revisited for nonlinear system identification, whereby the linear-in-parameters nature of Volterra systems is utilized. Compared with sparsity-aware recursive least squares (RLS) based algorithms, which require higher computational complexity but show faster convergence and lower steady-state error in time-invariant cases due to their long memory, the proposed approach yields better tracking capability in time-varying cases due to its short-term data dependence in updating the weights. Also, when N is the number of sparse Volterra kernels and q is the number of input vectors involved in updating the weights, the proposed algorithm requires O(qN) multiplication complexity and O(N log2 N) sorting-operation complexity. Furthermore, sparsity-aware least mean-squares and affine projection based algorithms are also tested.
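The core building block of such APWL1-style methods is the Euclidean projection onto a weighted l1 ball, which has a closed form up to a scalar threshold. A minimal sketch (bisection on the threshold is one simple choice; faster sorting-based schemes give the O(N log2 N) complexity quoted above):

```python
import numpy as np

def project_weighted_l1_ball(v, w, radius):
    # Euclidean projection onto {x : sum_i w_i |x_i| <= radius}, w_i > 0.
    # The projection is a weighted soft-thresholding
    #   x_i = sign(v_i) * max(|v_i| - t * w_i, 0)
    # whose threshold t >= 0 is located here by bisection.
    if np.sum(w * np.abs(v)) <= radius:
        return v.copy()
    lo, hi = 0.0, float(np.max(np.abs(v) / w))
    for _ in range(100):
        t = 0.5 * (lo + hi)
        if np.sum(w * np.maximum(np.abs(v) - t * w, 0.0)) > radius:
            lo = t
        else:
            hi = t
    t = 0.5 * (lo + hi)
    return np.sign(v) * np.maximum(np.abs(v) - t * w, 0.0)

# With unit weights this reduces to the ordinary l1-ball projection.
x = project_weighted_l1_ball(np.array([3.0, -2.0, 0.5]), np.ones(3), 1.0)
```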

  • Application of Optimized Sparse Antenna Array in Near Range 3D Microwave Imaging

    Yaolong QI  Weixian TAN  Xueming PENG  Yanping WANG  Wen HONG  

     
    PAPER-Sensing

      Vol:
    E96-B No:10
      Page(s):
    2542-2552

    Near range microwave imaging systems have broad application prospects in fields such as concealed weapon detection, biomedical imaging, and nondestructive testing. In this paper, the technique of optimized sparse antenna arrays is applied to near range microwave imaging, which can greatly reduce the complexity of imaging systems. Specifically, the paper establishes a three-dimensional sparse array imaging geometry and a corresponding echo model, where the imaging geometry is formed by arranging an optimized sparse antenna array in elevation, scanning in azimuth, and transmitting broadband signals in the range direction. By analyzing a characteristic of near range imaging, namely that the maximum interval of transmitting and receiving elements is limited by the range from the imaging system to the targets, we propose the idea of a piecewise sparse line array. Then, by analyzing the convolution principle, we develop a method of arranging the piecewise sparse array that generates the same distribution of equivalent phase centers as a filled antenna array, and we derive a corresponding imaging algorithm. Finally, the proposed imaging geometry and algorithm are investigated and verified via numerical simulations and near range imaging experiments.

  • Exemplar-Based Voice Conversion Using Sparse Representation in Noisy Environments

    Ryoichi TAKASHIMA  Tetsuya TAKIGUCHI  Yasuo ARIKI  

     
    PAPER

      Vol:
    E96-A No:10
      Page(s):
    1946-1953

    This paper presents a voice conversion (VC) technique for noisy environments, where parallel exemplars are introduced to encode the source speech signal and synthesize the target speech signal. The parallel exemplars (dictionary) consist of source exemplars and target exemplars, having the same texts uttered by the source and target speakers. The input source signal is decomposed into the source exemplars, noise exemplars, and their weights (activities). Then, using the weights of the source exemplars, the converted signal is constructed from the target exemplars. We carried out speaker conversion tasks using clean speech data and noise-added speech data. The effectiveness of this method was confirmed by comparison with a conventional Gaussian Mixture Model (GMM)-based method.

  • Extended CRC: Face Recognition with a Single Training Image per Person via Intraclass Variant Dictionary

    Guojun LIN  Mei XIE  Ling MAO  

     
    LETTER-Image Recognition, Computer Vision

      Vol:
    E96-D No:10
      Page(s):
    2290-2293

    For face recognition with a single training image per person, Collaborative Representation based Classification (CRC) has significantly lower complexity than Extended Sparse Representation based Classification (ESRC). However, CRC achieves lower recognition rates than ESRC. In order to combine the advantages of CRC and ESRC, we propose Extended Collaborative Representation based Classification (ECRC) for face recognition with a single training image per person. ECRC constructs an auxiliary intraclass variant dictionary to represent the possible variation between the testing and training images. Experimental results show that ECRC outperforms the compared methods in terms of both recognition rate and computational complexity.

  • Exploiting Group Sparsity in Nonlinear Acoustic Echo Cancellation by Adaptive Proximal Forward-Backward Splitting

    Hiroki KURODA  Shunsuke ONO  Masao YAMAGISHI  Isao YAMADA  

     
    PAPER

      Vol:
    E96-A No:10
      Page(s):
    1918-1927

    In this paper, we propose the use of group sparsity in adaptive learning of second-order Volterra filters for the nonlinear acoustic echo cancellation problem. Group sparsity means sparsity across groups, i.e., a vector is partitioned into groups and most groups contain only approximately zero-valued entries. First, we provide theoretical evidence that second-order Volterra systems tend to exhibit group sparsity under natural assumptions. Next, we propose an algorithm by applying the adaptive proximal forward-backward splitting method to a carefully designed cost function that exploits the group sparsity effectively. The designed cost function is the sum of a weighted group l1 norm, which promotes group sparsity, and a weighted sum of squared distances to data-fidelity sets used in adaptive filtering algorithms. Finally, numerical examples show that the proposed method outperforms a sparsity-aware algorithm in terms of both system mismatch and echo return loss enhancement.
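The group-shrinkage operation at the heart of such proximal methods can be sketched as follows; the grouping and the single weight `lam` are illustrative (the paper embeds a weighted version of this prox inside adaptive forward-backward iterations):

```python
import numpy as np

def prox_group_l1(v, groups, lam):
    # Proximity operator of the group l1 norm lam * sum_g ||v_g||_2:
    # each group block is shrunk toward zero as a whole, so weak groups
    # vanish entirely while strong groups are mildly attenuated.
    x = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            x[g] = (1.0 - lam / norm) * v[g]
    return x

# Two groups of two Volterra-like coefficients; the weak group is zeroed.
v = np.array([3.0, 4.0, 0.1, 0.2])
x = prox_group_l1(v, groups=[[0, 1], [2, 3]], lam=1.0)
```

Note that the whole second group is set exactly to zero, which is the behavior that plain (entrywise) l1 shrinkage cannot guarantee.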

  • Locality-Constrained Multi-Task Joint Sparse Representation for Image Classification

    Lihua GUO  

     
    LETTER-Image Recognition, Computer Vision

      Vol:
    E96-D No:9
      Page(s):
    2177-2181

    In image classification applications, a test sample with multiple handcrafted descriptors can be sparsely represented by a few training subjects. Our paper is motivated by the success of multi-task joint sparse representation (MTJSR), and considers that the different feature modalities not only obey the constraint of joint sparsity across different tasks, but also the constraint of local manifold structure across different features. We introduce the constraint of local manifold structure into the MTJSR framework, and propose the locality-constrained multi-task joint sparse representation method (LC-MTJSR). During the optimization of the formulated objective, the stochastic gradient descent method is used to guarantee a fast convergence rate, which is essential for large-scale image categorization. Experiments on several challenging object classification datasets show that our proposed algorithm outperforms MTJSR and is competitive with state-of-the-art multiple kernel learning methods.

  • Fast Iterative Mining Using Sparsity-Inducing Loss Functions

    Hiroto SAIGO  Hisashi KASHIMA  Koji TSUDA  

     
    PAPER-Pattern Recognition

      Vol:
    E96-D No:8
      Page(s):
    1766-1773

    Apriori-based mining algorithms enumerate frequent patterns efficiently, but the resulting large number of patterns makes it difficult to directly apply subsequent learning tasks. Recently, efficient iterative methods have been proposed for mining discriminative patterns for classification and regression. These methods iteratively execute a discriminative pattern mining algorithm and update example weights to emphasize examples that received large errors in the previous iteration. In this paper, we study a family of loss functions that induces sparsity on the example weights. Most of the resulting example weights become zero, so we can eliminate those examples from discriminative pattern mining, leading to a significant decrease in search space and time. In computational experiments, we compare and evaluate various loss functions in terms of the amount of sparsity induced and the resulting speed-up.

  • Partial-Update Normalized Sign LMS Algorithm Employing Sparse Updates

    Seong-Eun KIM  Young-Seok CHOI  Jae-Woo LEE  Woo-Jin SONG  

     
    LETTER-Digital Signal Processing

      Vol:
    E96-A No:6
      Page(s):
    1482-1487

    This paper provides a novel normalized sign least-mean square (NSLMS) algorithm which updates only a part of the filter coefficients and simultaneously performs sparse updates with the goal of reducing computational complexity. A combination of the partial-update scheme and the set-membership framework is incorporated into the context of L∞-norm adaptive filtering, yielding computational efficiency. For stabilized convergence, we formulate a robust update recursion by imposing an upper bound on the step size. Furthermore, we analyze the mean-square stability of the proposed algorithm for white input signals. Experimental results show that the proposed low-complexity NSLMS algorithm has convergence performance similar to the partial-update NSLMS with greatly reduced computational complexity, and is comparable to the set-membership partial-update NLMS.
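A single partial-update sign-LMS step can be sketched as below. This is a generic illustration, not the paper's exact recursion: the normalization chosen here (by the input energy) and the selection rule (largest-magnitude inputs) are assumptions, and the set-membership test that makes updates sparse in time is omitted:

```python
import numpy as np

def pu_nslms_step(w, x, d, mu=0.5, m_update=1):
    # One illustrative partial-update normalized sign-LMS step:
    # only the m_update coefficients with the largest |x_i| are updated,
    # and sign(e) replaces the error itself (sign-algorithm flavor).
    e = d - w @ x                              # a priori error
    idx = np.argsort(np.abs(x))[-m_update:]    # coefficients to update
    w_new = w.copy()
    w_new[idx] += mu * np.sign(e) * x[idx] / (x @ x + 1e-12)
    return w_new, e

# Tiny worked step: only the second tap (larger |x|) is updated.
w, e = pu_nslms_step(np.zeros(2), np.array([1.0, 2.0]), d=1.0)
```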

  • Speaker Adaptation in Sparse Subspace of Acoustic Models

    Yongwon JEONG  

     
    LETTER-Speech and Hearing

      Vol:
    E96-D No:6
      Page(s):
    1402-1405

    I propose an acoustic model adaptation method using bases constructed through the sparse principal component analysis (SPCA) of acoustic models trained in a clean environment. I perform experiments on adaptation to a new speaker and noise. The SPCA-based method outperforms the PCA-based method in the presence of babble noise.

  • Facial Image Super-Resolution Reconstruction Based on Separated Frequency Components

    Hyunduk KIM  Sang-Heon LEE  Myoung-Kyu SOHN  Dong-Ju KIM  Byungmin KIM  

     
    PAPER

      Vol:
    E96-A No:6
      Page(s):
    1315-1322

    Super-resolution (SR) reconstruction is the process of fusing a sequence of low-resolution images into one high-resolution image. Many researchers have introduced various SR reconstruction methods. However, these traditional methods are limited in the extent to which they allow recovery of high-frequency information. Moreover, due to the self-similarity of face images, most facial SR algorithms are machine-learning based. In this paper, we introduce a facial SR algorithm that combines learning-based and regularized SR image reconstruction algorithms. Our approach involves two main ideas. First, we employ separated frequency components to reconstruct high-resolution images. In addition, we separate the regions of the training face images. These approaches help to recover high-frequency information. In our experiments, we demonstrate the effectiveness of these ideas.

  • RLS-Based On-Line Sparse Nonnegative Matrix Factorization Method for Acoustic Signal Processing Systems

    Seokjin LEE  

     
    LETTER-Engineering Acoustics

      Vol:
    E96-A No:5
      Page(s):
    980-985

    Recursive least squares-based online nonnegative matrix factorization (RLS-ONMF), an NMF algorithm based on the RLS method, was developed to solve the NMF problem online. However, this method suffers from a partial-data problem. In this study, the partial-data problem is resolved by developing an improved online NMF algorithm using RLS and a sparsity constraint. The proposed method, RLS-based online sparse NMF (RLS-OSNMF), consists of two steps: an estimation step that optimizes the Euclidean NMF cost function, and a shaping step that satisfies the sparsity constraint. The proposed algorithm was evaluated with recorded speech and music data and with the RWC music database. The results show that the proposed algorithm performs better than conventional RLS-ONMF, especially during the adaptation process.

  • Dynamic Fault Tree Analysis Using Bayesian Networks and Sequence Probabilities

    Tetsushi YUGE  Shigeru YANAGI  

     
    PAPER-Reliability, Maintainability and Safety Analysis

      Vol:
    E96-A No:5
      Page(s):
    953-962

    A method of calculating the exact top event probability of a fault tree with dynamic gates and repeated basic events is proposed. The top event probability of such a dynamic fault tree is obtained by converting the tree into an equivalent Markov model. However, the Markov-based method is not practical for a complex system model because the number of states that must be considered in the Markov analysis increases explosively with the number of basic events in the model. To overcome this shortcoming, we propose an alternative method in this paper. It is a hybrid of a Bayesian network (BN) and an algebraic technique. First, modularization is applied to the dynamic fault tree. The detected modules are classified into two types: those that satisfy the parental Markov condition and those that do not. Each module without the parental Markov condition is replaced with an equivalent single event, whose occurrence probability is obtained as the sum of disjoint sequence probabilities. After the contraction of modules without the parental Markov condition, the BN algorithm is applied to the dynamic fault tree. The conditional probability tables for dynamic gates are presented. The BN is a standard one and has hierarchical and modular features. A numerical example shows that our method works well for complex systems.

  • Dictionary Learning with Incoherence and Sparsity Constraints for Sparse Representation of Nonnegative Signals

    Zunyi TANG  Shuxue DING  

     
    PAPER-Biocybernetics, Neurocomputing

      Vol:
    E96-D No:5
      Page(s):
    1192-1203

    This paper presents a method for learning an overcomplete, nonnegative dictionary and for obtaining the corresponding coefficients so that a group of nonnegative signals can be sparsely represented by them. This is accomplished by posing the learning as a problem of nonnegative matrix factorization (NMF) with maximization of the incoherence of the dictionary and of the sparsity of the coefficients. By incorporating a dictionary-incoherence penalty and a sparsity penalty into the NMF formulation and then adopting a hierarchically alternating optimization strategy, we show that the problem can be cast as two sequential quadratic optimization problems, each of which can be solved explicitly, so that the whole problem can be solved efficiently. This leads to the proposed algorithm, sparse hierarchical alternating least squares (SHALS). The SHALS algorithm iteratively solves the two optimization problems, corresponding to the learning process of the dictionary and the estimation process of the coefficients for reconstructing the signals. Numerical experiments demonstrate that the new algorithm performs better than the nonnegative K-SVD (NN-KSVD) algorithm and several other well-known algorithms, and that its computational cost is remarkably lower than that of the compared algorithms.
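The hierarchically alternating updates can be sketched in a plain HALS-style loop; this is a rough illustration under assumptions (only a soft-threshold `sp` for coefficient sparsity is shown, and the paper's incoherence penalty and exact update rules are omitted):

```python
import numpy as np

def shals_like_nmf(V, rank, n_iter=500, sp=0.0, seed=0):
    # HALS-style NMF sketch: update each dictionary atom W[:, k] and
    # coefficient row H[k] in closed form against the residual that
    # excludes atom k; sp soft-thresholds H to promote sparse coefficients.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        for k in range(rank):
            R = V - W @ H + np.outer(W[:, k], H[k])   # residual without atom k
            H[k] = np.maximum((W[:, k] @ R - sp) / (W[:, k] @ W[:, k] + 1e-12), 0.0)
            wk = np.maximum(R @ H[k], 1e-12)
            W[:, k] = wk / (np.linalg.norm(wk) + 1e-12)  # keep atoms unit-norm
    return W, H

# Factorize an exactly rank-2 nonnegative matrix.
rng = np.random.default_rng(1)
V = rng.random((6, 2)) @ rng.random((2, 8))
W, H = shals_like_nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Each inner update is a one-dimensional nonnegative least-squares problem, which is why every step has a closed form.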
