
Keyword Search Results

[Keyword] density (274 hits)

121-140 hits (of 274)

  • Least-Squares Independence Test

    Masashi SUGIYAMA  Taiji SUZUKI

    LETTER-Artificial Intelligence, Data Mining
    Vol: E94-D No:6  Page(s): 1333-1336

    Identifying the statistical independence of random variables is an important task in statistical data analysis. In this paper, we propose a novel non-parametric independence test based on a least-squares density ratio estimator. Our method, called the least-squares independence test (LSIT), is distribution-free and thus more flexible than parametric approaches. Furthermore, it is equipped with a model selection procedure based on cross-validation, a significant advantage over existing non-parametric approaches, which often require manual parameter tuning. The usefulness of the proposed method is shown through numerical experiments.
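
    A minimal sketch of the least-squares density-ratio idea behind LSIT: fit r(x, y) = p(x, y) / (p(x)p(y)) on Gaussian product bases and report a squared-loss independence score. The kernel width, regularization constant, and number of basis centers below are illustrative placeholders (the paper selects such quantities by cross-validation), and the returned statistic is one common squared-loss form, not necessarily the authors' exact estimator.

    ```python
    import numpy as np

    def gauss_kernel(a, b, sigma):
        # Pairwise Gaussian kernel between the rows of a and b.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def lsit_statistic(x, y, sigma=1.0, lam=1e-3, n_centers=50, seed=0):
        """Least-squares fit of r(x,y) = p(x,y)/(p(x)p(y)); x, y are (n, d) arrays."""
        rng = np.random.default_rng(seed)
        n = len(x)
        idx = rng.choice(n, size=min(n_centers, n), replace=False)
        Kx = gauss_kernel(x, x[idx], sigma)      # (n, b)
        Ky = gauss_kernel(y, y[idx], sigma)      # (n, b)
        # h: basis means over joint samples; H: means over the product of marginals.
        h = (Kx * Ky).mean(axis=0)
        H = (Kx.T @ Kx) * (Ky.T @ Ky) / n ** 2
        alpha = np.linalg.solve(H + lam * np.eye(len(idx)), h)
        # Squared-loss mutual-information score; larger values suggest dependence.
        return 0.5 * (h @ alpha) - 0.5
    ```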

  • Improving the Accuracy of Least-Squares Probabilistic Classifiers

    Makoto YAMADA  Masashi SUGIYAMA  Gordon WICHERN  Jaak SIMM

    LETTER-Pattern Recognition
    Vol: E94-D No:6  Page(s): 1337-1340

    The least-squares probabilistic classifier (LSPC) is a computationally efficient alternative to kernel logistic regression. However, to ensure that its learned probabilities are non-negative, LSPC involves a post-processing step that rounds negative parameters up to zero, which can unexpectedly degrade classification performance. To mitigate this problem, we propose a simple alternative scheme that directly rounds up the classifier's negative outputs rather than its negative parameters. Through extensive experiments, including real-world image classification and audio tagging tasks, we demonstrate that the proposed modification significantly improves classification accuracy while preserving the computational advantage of the original LSPC.
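
    The two rounding schemes are easy to contrast in code. Below is a toy sketch assuming a learned basis matrix Phi (n samples x b bases) and a parameter matrix Alpha (b x c classes); both names are hypothetical placeholders, not the actual LSPC implementation.

    ```python
    import numpy as np

    def posteriors_round_parameters(Phi, Alpha, eps=1e-12):
        """Original scheme: round negative *parameters* up to zero, then normalize."""
        q = Phi @ np.maximum(Alpha, 0.0)
        return q / (q.sum(axis=1, keepdims=True) + eps)

    def posteriors_round_outputs(Phi, Alpha, eps=1e-12):
        """Proposed scheme: keep parameters, round negative *outputs* up to zero."""
        q = np.maximum(Phi @ Alpha, 0.0)
        return q / (q.sum(axis=1, keepdims=True) + eps)
    ```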

  • A Variable Output Voltage Switched-Capacitor DC-DC Converter with Pulse Density and Width Modulation (PDWM) for 57% Ripple Reduction at Low Output Voltage

    Xin ZHANG  Yu PU  Koichi ISHIDA  Yoshikatsu RYU  Yasuyuki OKUMA  Po-Hung CHEN  Takayasu SAKURAI  Makoto TAKAMIYA

    PAPER
    Vol: E94-C No:6  Page(s): 953-959

    In this paper, a novel switched-capacitor DC-DC converter with pulse density and width modulation (PDWM) is proposed to reduce output ripple at variable output voltages. While performing pulse density modulation (PDM), the proposed PDWM simultaneously modulates the pulse width to reduce the output ripple while maintaining high power efficiency. The prototype chip was implemented in a 65 nm CMOS process. The switched-capacitor DC-DC converter provides a 0.2-V to 0.47-V output voltage and delivers 0.25-mA to 10-mA output current from a 1-V input supply with a peak efficiency of 87%. Compared with the conventional PDM scheme, the proposed converter with PDWM reduces the output ripple by 57% in the low output voltage region with an efficiency penalty of only 2%.

  • Interaction of Bis-diethylaminosilane with a Hydroxylized Si (001) Surface for SiO2 Thin-Film Growth Using Density Functional Theory

    Seung-Bin BAEK  Dae-Hee KIM  Yeong-Cheol KIM

    PAPER
    Vol: E94-C No:5  Page(s): 771-774

    We studied the interaction of bis-diethylaminosilane (SiH2[N(C2H5)2]2, BDEAS) with a hydroxylized Si (001) surface for SiO2 thin-film growth using density functional theory (DFT). BDEAS was adsorbed on the Si surface and reacted with the H atom of a hydroxyl group (-OH) to produce the di-ethylaminosilane (-SiH2[N(C2H5)2], DEAS) group and di-ethylamine (NH(C2H5)2, DEA). DEAS was then able to react with another H atom of -OH to produce DEA and form the O-(SiH2)-O bond at the inter-dimer, inter-row, or intra-dimer site. Among these three sites, the intra-dimer site was the most favorable for forming the O-(SiH2)-O bond.

  • Weight Distributions of Non-binary LDPC Codes

    Kenta KASAI  Charly POULLIAT  David DECLERCQ  Kohichi SAKANIWA

    PAPER-Coding Theory
    Vol: E94-A No:4  Page(s): 1106-1115

    In this paper, we study the average symbol and bit-weight distributions for ensembles of non-binary low-density parity-check codes defined over GF(2^p). Moreover, we derive the asymptotic exponential growth rate of the weight distributions in the limit of large codelength. Interestingly, we show that the normalized typical minimum distance does not monotonically increase with the size of the field.
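
    For reference, the asymptotic quantity studied here is the standard exponential growth rate used in the LDPC literature (with A_w denoting the ensemble-average number of codewords of weight w); the paper's exact normalization may differ:

    ```latex
    % Growth rate of the average weight distribution, codeword weight w = omega * n.
    \[
      r(\omega) \;=\; \lim_{n\to\infty} \frac{1}{n}
      \log \mathbb{E}\!\left[ A_{\lfloor \omega n \rfloor} \right],
      \qquad \omega \in [0, 1].
    \]
    ```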

  • Fast Performance Evaluation Method of LDPC Codes

    Takakazu SAKAI  Koji SHIBATA

    PAPER-Coding Theory
    Vol: E94-A No:4  Page(s): 1116-1123

    This paper presents a fast method for estimating the very low error rates of low-density parity-check (LDPC) codes. No analytical tool is available to evaluate the performance of LDPC codes, and traditional Monte Carlo simulation cannot estimate their low error rates within a practical amount of time. To overcome this problem, we propose a simulation method based on an optimal simulation probability density function (PDF). The proposed simulation PDF also avoids the dependency of the simulation time on the number of dominant trapping sets, which is a drawback of some fast simulation methods based on error event simulation. Additionally, we present numerical examples to demonstrate the effectiveness of the proposed method. Under the condition of equal estimator accuracy, the simulation time of the proposed method is reduced to less than about 1/10 of that of Cole et al.'s method.
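
    The core of such fast methods is importance sampling: draw noise from a biased density q that provokes errors, and reweight each error event by the likelihood ratio p/q. A generic sketch for a BPSK/AWGN toy setup follows; the mean shift, the decoder callback, and frame-level (rather than bit-level) counting are simplifying assumptions, not the paper's optimized simulation PDF.

    ```python
    import numpy as np

    def is_frame_error_rate(n_trials, sigma, shift, decoder, codeword, seed=0):
        """Importance-sampling FER estimate; noise ~ N(shift, sigma^2) instead of N(0, sigma^2)."""
        rng = np.random.default_rng(seed)
        tx = 1.0 - 2.0 * codeword                  # BPSK mapping: 0 -> +1, 1 -> -1
        total = 0.0
        for _ in range(n_trials):
            z = rng.normal(shift, sigma, size=len(codeword))   # biased noise draw
            # Per-sample likelihood ratio p(z)/q(z) for the mean-shifted Gaussian.
            w = np.prod(np.exp((-z**2 + (z - shift)**2) / (2 * sigma**2)))
            if np.any(decoder(tx + z) != codeword):
                total += w                          # weighted error count
        return total / n_trials
    ```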

  • Fast Simulation Method for Turbo Codes over Additive White Class A Noise Channel

    Takakazu SAKAI  Koji SHIBATA

    LETTER-Coding Theory
    Vol: E94-A No:3  Page(s): 1034-1037

    This study presents a fast simulation method for turbo codes over an additive white class A noise (AWAN) channel. The reduction in estimation time is achieved by applying importance sampling (IS), a variance reduction simulation technique. To adapt IS to the AWAN channel, we propose a design method for the simulation probability density function (PDF) used in IS. The proposed simulation PDF is related to the Bhattacharyya bound and evaluates a wider region of the signal space than the conventional method. Since the mean translation method, a conventional design method for the simulation PDF used in IS, is optimized for an additive white Gaussian noise channel, it cannot reduce the time needed to evaluate the error performance of turbo codes over the AWAN channel. To evaluate a BER of 10^-8, the simulation time of the proposed method is reduced to 1/10^4 that of the traditional Monte Carlo simulation method at the same accuracy.
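
    The AWAN channel is usually described by Middleton's Class A model, whose density is a Poisson-weighted mixture of zero-mean Gaussians. A sketch that truncates the infinite mixture (the truncation depth and default parameters are illustrative):

    ```python
    import numpy as np
    from scipy.stats import norm, poisson

    def class_a_pdf(x, A=0.1, Gamma=0.01, sigma2=1.0, terms=30):
        """Truncated Middleton Class A density; A: impulsive index,
        Gamma: Gaussian-to-impulsive power ratio, sigma2: total noise power."""
        p = np.zeros_like(np.asarray(x, dtype=float))
        for m in range(terms):
            var_m = sigma2 * (m / A + Gamma) / (1.0 + Gamma)
            p += poisson.pmf(m, A) * norm.pdf(x, scale=np.sqrt(var_m))
        return p
    ```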

  • Power Analysis against a DPA-Resistant S-Box Implementation Based on the Fourier Transform

    Yang LI  Kazuo SAKIYAMA  Shinichi KAWAMURA  Kazuo OHTA

    PAPER-Implementation
    Vol: E94-A No:1  Page(s): 191-199

    This paper presents two power analysis attacks against a software implementation of a first-order DPA-resistant S-box algorithm based on the discrete Fourier transform (DFT). The DPA-resistant S-box algorithm based on the DFT was proposed by Prouff et al. in 2006 and improved by Coron et al. in 2008. In our attacks against the improved version, we pre-process the power traces by separating them into two subgroups, each of which has a biased mask. For the separated power traces, we propose two post-analysis methods to identify the key. One is based on a DPA attack against one subgroup; the other utilizes the difference of means between the two subgroups together with pattern matching. Finally, we compare the two attack methods and propose an algorithm-level countermeasure to enhance the security of the DFT-based S-box calculation.
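
    The second post-analysis step reduces to a difference-of-means test once the traces are separated. A schematic sketch, in which the boolean grouping vector stands in for the paper's mask-bias separation:

    ```python
    import numpy as np

    def difference_of_means(traces, groups):
        """traces: (n_traces, n_samples) power measurements;
        groups: boolean vector from the mask-bias pre-processing step.
        Peaks in the returned curve indicate key-dependent leakage points."""
        return traces[groups].mean(axis=0) - traces[~groups].mean(axis=0)
    ```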

  • Redundant via Insertion: Removing Design Rule Conflicts and Balancing via Density

    Song CHEN  Jianwei SHEN  Wei GUO  Mei-Fang CHIANG  Takeshi YOSHIMURA

    PAPER-Physical Level Design
    Vol: E93-A No:12  Page(s): 2372-2379

    The occurrence of via defects increases as feature sizes shrink in integrated circuit manufacturing. Redundant via insertion is an effective and recommended method to reduce the yield loss caused by via failures. In this paper, we introduce the redundant via allocation problem for layer partition-based redundant via insertion methods [1] and solve it using a genetic algorithm. At the same time, we use a convex-cost flow model to balance the via density, which helps satisfy via density rules. The results of the layer partition-based model depend on the partitioning and processing order of the metal layers; moreover, even if we try all partitions and processing orders, we might still miss the optimal solutions. By introducing the redundant via allocation problem on partition boundaries, we avoid the sub-optimality of the original layer partition-based method. The experimental results show that the proposed method inserts 12 more redundant vias on average and greatly improves the via density balance.

  • Universal Slepian-Wolf Source Codes Using Low-Density Parity-Check Matrices

    Tetsunao MATSUTA  Tomohiko UYEMATSU  Ryutaroh MATSUMOTO

    PAPER-Source Coding
    Vol: E93-A No:11  Page(s): 1878-1888

    Low-density parity-check (LDPC) codes have become very popular in channel coding, since they can achieve performance close to that of maximum-likelihood (ML) decoding with complexity linear in the block length. Recently, Muramatsu et al. proposed a code using LDPC matrices for Slepian-Wolf source coding and showed that it can achieve any point in the achievable rate region of Slepian-Wolf source coding. However, since they employed ML decoding, their decoder needs to know the probability distribution of the source. Hence, it has been an open problem whether there exists a universal code using LDPC matrices, where a universal code is one whose error probability vanishes as the block length tends to infinity for all sources whose achievable rate region contains the rate pair of the encoders. In this paper, we show the existence of a universal Slepian-Wolf source code using LDPC matrices for stationary memoryless sources.
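
    In fixed-rate Slepian-Wolf coding with an LDPC matrix, the encoder simply sends the syndrome of the source sequence. A minimal sketch over GF(2), with a random sparse matrix as a stand-in for a properly designed LDPC matrix:

    ```python
    import numpy as np

    def sw_encode(H, x):
        """Compress x (length n) to its syndrome s = Hx over GF(2); rate = m/n.
        The decoder recovers x from s plus the correlated side information."""
        return (H @ x) % 2

    rng = np.random.default_rng(0)
    n, m = 16, 8
    H = (rng.random((m, n)) < 0.25).astype(int)   # toy sparse binary matrix
    x = rng.integers(0, 2, size=n)
    s = sw_encode(H, x)                            # 8 syndrome bits for 16 source bits
    ```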

  • Weight Distributions of Multi-Edge Type LDPC Codes

    Kenta KASAI  Tomoharu AWANO  David DECLERCQ  Charly POULLIAT  Kohichi SAKANIWA

    PAPER-Coding Theory
    Vol: E93-A No:11  Page(s): 1942-1948

    Multi-edge type LDPC codes, introduced by Richardson and Urbanke, form a general class of structured LDPC codes. In this paper, we derive the average weight distributions of multi-edge type LDPC code ensembles. Furthermore, we investigate the asymptotic exponential growth rate of the average weight distributions and examine its connection to the stability condition of density evolution.

  • A Note on the Branch-and-Cut Approach to Decoding Linear Block Codes

    Shunsuke HORII  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA

    PAPER-Coding Theory
    Vol: E93-A No:11  Page(s): 1912-1917

    Maximum likelihood (ML) decoding of linear block codes can be formulated as an integer linear program (ILP). Since this is an NP-hard problem in general, much research has addressed algorithms that solve it approximately. One of the most popular is linear programming (LP) decoding, proposed by Feldman et al., which is based on the LP relaxation of the ILP corresponding to the ML decoding problem. Advanced algorithms for solving ILPs (approximately or exactly) include the cutting-plane and branch-and-bound methods; as applications of these, adaptive LP decoding and branch-and-bound decoding have been proposed by Taghavi et al. and Yang et al., respectively. Another method for solving ILPs is the branch-and-cut method, a hybrid of the cutting-plane and branch-and-bound methods. Although the branch-and-cut method is widely used to solve ILPs, it is not obvious that it works well for the ML decoding problem. In this paper, we show that the branch-and-cut method is indeed effective for the ML decoding problem. Furthermore, the branch-and-cut method consists of several technical components, and the performance of the algorithm depends on their selection. We examine the differences caused by the selection of these components and, through numerical simulations, consider which scheme is most effective for the ML decoding problem.
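
    At the root of all these methods is Feldman's LP relaxation: minimize the correlation with the channel LLRs subject to the parity-polytope inequalities. The sketch below enumerates all odd-subset cuts up front, which is exponential in the check degree and hence only viable for small codes; adaptive LP decoding and branch-and-cut instead add such cuts on demand.

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.optimize import linprog

    def lp_decode(H, llr):
        """LP relaxation of ML decoding for parity-check matrix H and LLR vector llr."""
        m, n = H.shape
        A_ub, b_ub = [], []
        for j in range(m):
            nbrs = np.flatnonzero(H[j])
            for k in range(1, len(nbrs) + 1, 2):          # odd-sized subsets S
                for S in combinations(nbrs, k):
                    # sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1
                    row = np.zeros(n)
                    row[nbrs] = -1.0
                    row[list(S)] = 1.0
                    A_ub.append(row)
                    b_ub.append(len(S) - 1.0)
        res = linprog(llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0.0, 1.0)] * n, method="highs")
        return res.x    # fractional entries signal pseudocodewords
    ```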

  • Optimal Algorithms for Finding Density-Constrained Longest and Heaviest Paths in a Tree

    Sung Kwon KIM

    PAPER-Fundamentals of Information Systems
    Vol: E93-D No:11  Page(s): 2989-2994

    Let T be a tree with n nodes, in which each edge is associated with a length and a weight. The density-constrained longest (heaviest) path problem is to find a path of T with maximum path length (weight) whose path density lies between given lower and upper bounds, where the path density is the path weight divided by the path length. We show that both problems can be solved in optimal O(n log n) time.
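
    For small trees, a brute-force reference implementation makes the problem statement concrete: enumerate all node pairs, take the unique tree path between them, and keep the longest path whose density lies within the bounds. This is far from the paper's O(n log n) algorithm and serves only as a checkable baseline.

    ```python
    import networkx as nx

    def dc_longest_path(T, lo, hi):
        """Density-constrained longest path by exhaustive enumeration.
        Edges of T carry 'length' and 'weight' attributes."""
        best = None
        nodes = list(T.nodes)
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                path = nx.shortest_path(T, u, v)          # unique path in a tree
                edges = list(zip(path, path[1:]))
                length = sum(T[a][b]["length"] for a, b in edges)
                weight = sum(T[a][b]["weight"] for a, b in edges)
                if length > 0 and lo <= weight / length <= hi:
                    if best is None or length > best[0]:
                        best = (length, path)
        return best
    ```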

  • Effective Management of Secondary User's Density in Cognitive Radio Networks

    Chunxiao JIANG  Shuai FAN  Canfeng CHEN  Jian MA  Yong REN

    LETTER-Network Management/Operation
    Vol: E93-B No:9  Page(s): 2443-2447

    Cognitive radio has emerged as an efficient approach to reusing licensed spectrum. How to appropriately set the parameters of secondary users (SUs) plays an important role in constructing cognitive radio networks. In this letter, we analyze the theoretical bound on SU density, which provides a standard for controlling the number of SUs around a primary receiver so that primary communication links do not experience excessive interference. The simulated secondary density matches well with the theoretical result derived from our analysis. Additionally, the achievable rate of a secondary user under density control is analyzed and simulated.

  • Capacity Performance Analysis for Decode-and-Forward OFDM Dual-Hop System

    Ha-Nguyen VU  Le Thanh TAN  Hyung Yun KONG

    LETTER-Wireless Communication Technologies
    Vol: E93-B No:9  Page(s): 2477-2480

    In this paper, we propose an exact analytical technique to evaluate the average capacity of a dual-hop OFDM relay system with the decode-and-forward protocol over independent and identically distributed (i.i.d.) Rayleigh fading channels. Four schemes, combining matching (or no matching) with power allocation (or no power allocation), are considered. First, the probability density function (pdf) of the end-to-end channel power gain is derived for each scheme. Then, based on these pdfs, we give expressions for the average capacity. Monte Carlo simulation results confirm the analytical results for both the pdfs and the average capacities.
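
    As a sanity check on the simplest of the four schemes (no matching, no power allocation): with i.i.d. Rayleigh hops, each subcarrier's end-to-end power gain under decode-and-forward is the minimum of two exponential gains, and the half factor accounts for the two-phase relaying. A Monte Carlo sketch under these assumptions, not the paper's exact system model:

    ```python
    import numpy as np

    def df_avg_capacity(snr, n_sub=64, n_trials=10_000, seed=0):
        """Average per-subcarrier capacity of a two-hop DF OFDM link (baseline scheme)."""
        rng = np.random.default_rng(seed)
        g1 = rng.exponential(size=(n_trials, n_sub))   # hop-1 power gains
        g2 = rng.exponential(size=(n_trials, n_sub))   # hop-2 power gains
        g = np.minimum(g1, g2)                         # end-to-end DF gain per subcarrier
        return 0.5 * np.log2(1.0 + snr * g).mean()     # 1/2 for the two time slots
    ```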

  • Speech Emotion Recognition Based on Parametric Filter and Fractal Dimension

    Xia MAO  Lijiang CHEN

    LETTER-Speech and Hearing
    Vol: E93-D No:8  Page(s): 2324-2326

    In this paper, we propose a new method that employs two novel features, correlation density (Cd) and fractal dimension (Fd), to recognize emotional states contained in speech. The former feature, obtained by a bank of parametric filters, reflects the broad frequency components and the fine structure of the lower frequency components, contributed by unvoiced and voiced phones, respectively; the latter feature indicates the non-linearity and self-similarity of a speech signal. Comparative experiments based on the Hidden Markov Model and K-Nearest-Neighbor methods were carried out. The results show that Cd and Fd are much more closely related to emotional expression than the commonly used features.
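
    Fractal dimension of a signal frame is commonly computed with Higuchi's method; the sketch below follows that standard formulation, though the paper's exact Fd definition may differ.

    ```python
    import numpy as np

    def higuchi_fd(x, k_max=8):
        """Higuchi fractal dimension of a 1-D signal."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        logs = []
        for k in range(1, k_max + 1):
            Lk = []
            for m in range(k):
                idx = np.arange(m, N, k)               # subsampled series
                if len(idx) < 2:
                    continue
                diff = np.abs(np.diff(x[idx])).sum()   # curve length at scale k
                Lk.append(diff * (N - 1) / ((len(idx) - 1) * k * k))
            logs.append((np.log(1.0 / k), np.log(np.mean(Lk))))
        xs, ys = zip(*logs)
        return np.polyfit(xs, ys, 1)[0]                # slope of log L(k) vs log(1/k)
    ```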

  • New Conditions for Secure Knapsack Schemes against Lattice Attack

    Noboru KUNIHIRO

    PAPER-Cryptography and Information Security
    Vol: E93-A No:6  Page(s): 1058-1065

    Many knapsack cryptosystems have been proposed, but almost all of them are vulnerable to lattice attack because of their low density. To prevent the lattice attack, Chor and Rivest proposed a low-weight knapsack scheme whose density exceeds the critical density. At Asiacrypt 2005, Nguyen and Stern introduced the pseudo-density and proved that if the pseudo-density is low enough (even if the usual density is not), the knapsack scheme can be broken by a single call to an SVP/CVP oracle. However, neither the usual density nor the pseudo-density alone suffices to measure resistance to the lattice attack. In this paper, we first introduce a new notion of density, D, which naturally unifies the two previous densities. Next, we derive conditions on our density under which a knapsack scheme is secure against lattice attack, obtaining a critical bound on the density that depends only on the ratio of the message length to its Hamming weight. Furthermore, we show that if D < 0.8677, the knapsack scheme can be solved by lattice attack. Finally, we show that the critical bound approaches 1 as the Hamming weight decreases, which means that it is (almost) impossible to construct a low-weight knapsack scheme whose security is supported by a density argument.
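
    For context, the classical (Lagarias-Odlyzko) density that these results refine is d = n / log2(max_i a_i); the unified density D is defined in the paper itself and is not reproduced here. A small helper for the classical quantity:

    ```python
    import math

    def knapsack_density(weights):
        """Classical density d = n / log2(max a_i) of a knapsack {a_1, ..., a_n}.
        Low-density random instances (roughly d < 0.9408) are the ones broken
        by the classical lattice attack."""
        return len(weights) / math.log2(max(weights))
    ```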

  • Continuous BP Decoding Algorithm for a Low-Density Parity-Check Coded Hybrid ARQ System

    Sangjoon PARK  Sooyong CHOI  Seung-Hoon HWANG

    LETTER-Fundamental Theories for Communications
    Vol: E93-B No:4  Page(s): 993-996

    A continuous belief propagation (BP) decoding algorithm for a hybrid automatic repeat request (ARQ) system is proposed in this paper. The proposed algorithm utilizes the extrinsic information generated in the last iteration of the previous transmission so that decoding progresses continuously across retransmissions. This allows the continuous BP decoding algorithm to accelerate the decoding convergence for codeword determination, especially when the number of retransmissions is large or the currently combined packet has punctured nodes. Simulation results verify the effectiveness of the proposed continuous BP decoding algorithm.

  • Theoretical Analysis of Density Ratio Estimation

    Takafumi KANAMORI  Taiji SUZUKI  Masashi SUGIYAMA

    PAPER-Algorithms and Data Structures
    Vol: E93-A No:4  Page(s): 787-798

    Density ratio estimation has gathered a great deal of attention recently since it can be used for various data processing tasks. In this paper, we consider three methods of density ratio estimation: (A) the numerator and denominator densities are separately estimated and then the ratio of the estimated densities is computed, (B) a logistic regression classifier discriminating denominator samples from numerator samples is learned and then the ratio of the posterior probabilities is computed, and (C) the density ratio function is directly modeled and learned by minimizing the empirical Kullback-Leibler divergence. We first prove that when the numerator and denominator densities are known to be members of the exponential family, (A) is better than (B) and (B) is better than (C). Then we show that once the model assumption is violated, (C) is better than (A) and (B). Thus in practical situations where no exact model is available, (C) would be the most promising approach to density ratio estimation.
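
    Method (B) is the easiest to make concrete: train a probabilistic classifier to separate numerator from denominator samples, then convert posteriors to a ratio via Bayes' rule. A sketch using scikit-learn's logistic regression:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ratio_by_logistic_regression(x_nu, x_de):
        """Density-ratio estimator (B): p_nu(x)/p_de(x) =
        (n_de/n_nu) * P(y=1|x) / P(y=0|x), with y=1 for numerator samples."""
        X = np.vstack([x_nu, x_de])
        y = np.concatenate([np.ones(len(x_nu)), np.zeros(len(x_de))])
        clf = LogisticRegression(max_iter=1000).fit(X, y)

        def r(x):
            p = clf.predict_proba(x)       # columns ordered as [P(y=0|x), P(y=1|x)]
            return (len(x_de) / len(x_nu)) * p[:, 1] / p[:, 0]

        return r
    ```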

  • Least-Squares Conditional Density Estimation

    Masashi SUGIYAMA  Ichiro TAKEUCHI  Taiji SUZUKI  Takafumi KANAMORI  Hirotaka HACHIYA  Daisuke OKANOHARA

    PAPER-Pattern Recognition
    Vol: E93-D No:3  Page(s): 583-594

    Estimating the conditional mean of an input-output relation is the goal of regression. However, regression analysis is not sufficiently informative if the conditional distribution is multi-modal, highly asymmetric, or contaminated by heteroscedastic noise. In such scenarios, estimating the conditional distribution itself would be more useful. In this paper, we propose a novel method of conditional density estimation that is suitable for multi-dimensional continuous variables. The basic idea of the proposed method is to express the conditional density in terms of a density ratio, which is estimated directly without going through density estimation. Experiments using benchmark and robot transition datasets illustrate the usefulness of the proposed approach.

121-140 hits (of 274)