
Keyword Search Result

Keyword: density (274 hits)

Showing results 61-80 of 274

  • Singular-Spectrum Analysis for Digital Audio Watermarking with Automatic Parameterization and Parameter Estimation Open Access

    Jessada KARNJANA  Masashi UNOKI  Pakinee AIMMANEE  Chai WUTIWIWATCHAI  

    PAPER-Information Network

    Publicized: 2016/05/16
    Vol: E99-D No:8
    Page(s): 2109-2120

    This paper proposes a blind, inaudible, robust digital-audio watermarking scheme based on singular-spectrum analysis, which is related to watermarking techniques based on singular value decomposition. We decompose a host signal into its oscillatory components and modify the amplitudes of some of those components according to a watermark bit and an embedding rule. To improve the sound quality of the watermarked signal while maintaining robustness, differential evolution is introduced to find optimal parameters of the proposed scheme. Test results show that, although a trade-off between inaudibility and robustness still persists, the difference in sound quality between the original signal and the watermarked one is considerably smaller. The improved scheme is robust against many attacks, such as MP3 and MP4 compression and band-pass filtering. However, it has a drawback: some music-dependent parameters need to be shared between the embedding and extraction processes. To overcome this drawback, we propose a method for automatic parameter estimation. By incorporating the estimation method into the framework, those parameters need not be shared, and test results show that the scheme can blindly decode watermark bits with an accuracy of 99.99%. This paper not only proposes a new technique and scheme but also discusses the singular value and its physical interpretation.
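
    For intuition, the decomposition step can be sketched in a few lines of NumPy. This is generic singular-spectrum analysis with an illustrative window length, not the paper's tuned parameters; the scheme would then modify the amplitudes of selected components before summing them back.

    ```python
    # Minimal SSA sketch: decompose a signal into oscillatory components.
    # Window length and grouping are illustrative assumptions.
    import numpy as np

    def ssa_decompose(x, window):
        n = len(x)
        k = n - window + 1
        # Trajectory (Hankel) matrix: lagged copies of the signal as columns.
        traj = np.column_stack([x[i:i + window] for i in range(k)])
        u, s, vt = np.linalg.svd(traj, full_matrices=False)
        comps = []
        for r in range(len(s)):
            elem = s[r] * np.outer(u[:, r], vt[r])  # rank-1 elementary matrix
            # Diagonal averaging (Hankelization) back to a 1-D component.
            comps.append(np.array([elem[::-1].diagonal(d).mean()
                                   for d in range(-window + 1, k)]))
        return np.array(comps)  # the components sum back to x exactly

    x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.randn(256)
    comps = ssa_decompose(x, window=32)
    print(np.allclose(comps.sum(axis=0), x))  # True
    ```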

  • Self-Adaptive Scaled Min-Sum Algorithm for LDPC Decoders Based on Delta-Min

    Keol CHO  Ki-Seok CHUNG  

    LETTER-Coding Theory

    Vol: E99-A No:8
    Page(s): 1632-1634

    A self-adaptive scaled min-sum algorithm for LDPC decoding based on the difference between the first two minima of the check node messages (Δmin) is proposed. Δmin is utilized to adjust the scaling factor of the check node messages, and simulation results show that the proposed algorithm improves the error-correcting performance compared to existing algorithms.
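
    The check-node update the letter modifies can be sketched as follows; the concrete mapping from Δmin to the scaling factor is a placeholder assumption, not the letter's rule.

    ```python
    # Scaled min-sum check-node update with a per-node adaptive factor.
    import numpy as np

    def check_node_update(msgs):
        """msgs: incoming variable-to-check LLRs at one check node."""
        signs = np.sign(msgs)
        mags = np.abs(msgs)
        order = np.argsort(mags)
        min1, min2 = mags[order[0]], mags[order[1]]
        delta_min = min2 - min1          # separation of the two minima
        # Illustrative adaptation: trust magnitudes more when the two
        # minima are well separated.
        alpha = 0.75 if delta_min < 1.0 else 0.875
        total_sign = np.prod(signs)
        out = np.empty_like(msgs)
        for i in range(len(msgs)):
            m = min2 if i == order[0] else min1   # exclude edge i itself
            out[i] = alpha * total_sign * signs[i] * m
        return out

    print(check_node_update(np.array([1.2, -0.4, 2.5, 0.9])))
    ```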

  • Analysis of Density-Adaptive Spectrum Access for Cognitive Radio Sensor Networks

    Lei ZHANG  Tiecheng SONG  Jing HU  Xu BAO  

    PAPER-Network

    Vol: E99-B No:5
    Page(s): 1101-1109

    Cognitive radio sensor networks (CRSNs), with their dynamic spectrum access capability, appear to be a promising solution to the increasing spectrum crowding faced by traditional WSNs. In this paper, by maximizing the utility index of the CRSN, we propose a node-density-adaptive spectrum access strategy for sensor nodes that takes into account the node density in a given event-driven region. For this purpose, considering bursty real-time data traffic, we analyze the energy efficiency (EE) and the packet failure rate (PFR), combining the network disconnected rate (NDR) and packet loss rate (PLR), during the channel switching interval (CSI) for both the underlay and interweave spectrum access schemes. Numerical results confirm the validity of our theoretical analyses and indicate that an adaptive node density threshold (ANDT) exists for switching between the underlay and interweave spectrum access schemes.
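
    The decision the strategy boils down to can be phrased as a one-line rule; the threshold value and which scheme each side of it favors are assumptions here, standing in for the paper's utility-maximizing analysis.

    ```python
    # Placeholder density-adaptive scheme selection around an assumed ANDT.
    def select_access_scheme(node_density, andt):
        """Switch spectrum access scheme at the adaptive node density
        threshold (which side favors which scheme is illustrative)."""
        return "underlay" if node_density >= andt else "interweave"

    print(select_access_scheme(node_density=42.0, andt=30.0))
    ```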

  • Closed-Form Approximations for Gaussian Sum Smoother with Nonlinear Model

    Haiming DU  Jinfeng CHEN  Huadong WANG  

    PAPER-Digital Signal Processing

    Vol: E99-A No:3
    Page(s): 691-701

    Research into closed-form Gaussian sum smoothers has provided an attractive approach for tracking in clutter, joint detection and tracking (in clutter), and multiple target tracking (in clutter) via the probability hypothesis density (PHD). However, a Gaussian sum smoother with a nonlinear target model involves particular nonlinear expressions in the backward smoothed density that differ from those of other filters and smoothers. In order to extend the closed-form solution of the linear Gaussian sum smoother to nonlinear models, two closed-form approximations for the nonlinear Gaussian sum smoother are proposed, which use Gaussian particle approximation and unscented transformation approximation, respectively. Since the estimated target number of the PHD smoother is not stable, a heuristic approximation method is added. Finally, the Bernoulli smoother and PHD smoother are simulated using both approximations, and simulation results show that the two proposed algorithms can obtain smoothed tracks with nonlinear models and perform better than the corresponding filters.
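
    The second approximation rests on the unscented transformation, which propagates a small set of sigma points through the nonlinear model; a minimal version is sketched below (standard formulation, not the paper's smoother itself).

    ```python
    # Unscented transformation: recover the mean and covariance of f(X)
    # from 2n+1 sigma points of X ~ N(mean, cov).
    import numpy as np

    def unscented_transform(mean, cov, f, kappa=1.0):
        n = len(mean)
        root = np.linalg.cholesky((n + kappa) * cov)
        sigma = np.vstack([mean, mean + root.T, mean - root.T])
        w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
        w[0] = kappa / (n + kappa)
        y = np.array([f(s) for s in sigma])
        y_mean = w @ y
        y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
        return y_mean, y_cov

    m, C = np.array([1.0, 0.5]), np.diag([0.1, 0.2])
    print(unscented_transform(m, C, lambda s: np.array([np.sin(s[0]), s[0] * s[1]])))
    ```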

  • MEMD-Based Filtering Using Interval Thresholding and Similarity Measure between Pdf of IMFs

    Huan HAO  Huali WANG  Weijun ZENG  Hui TIAN  

    LETTER-Digital Signal Processing

    Vol: E99-A No:2
    Page(s): 643-646

    This paper presents a novel MEMD interval-thresholding denoising scheme, in which relevant modes are selected by a similarity measure between the probability density function of the input and that of each mode. Simulation and measured EEG data processing results show that the proposed scheme achieves better performance than traditional denoising methods.
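
    The selection step can be sketched directly from the description: estimate a pdf for the input and for each IMF and keep the closest modes. The histogram estimator, distance, and number of kept modes are assumptions, not the letter's exact choices.

    ```python
    # Select the IMFs whose estimated pdf is most similar to the input's.
    import numpy as np

    def select_relevant_imfs(x, imfs, bins=64, keep=3):
        lo, hi = x.min(), x.max()
        p, _ = np.histogram(x, bins=bins, range=(lo, hi), density=True)
        dists = []
        for imf in imfs:
            q, _ = np.histogram(imf, bins=bins, range=(lo, hi), density=True)
            dists.append(np.sqrt(np.sum((p - q) ** 2)))  # L2 pdf distance
        return np.argsort(dists)[:keep]  # indices of the most input-like modes

    imfs = [np.random.randn(1024) * s for s in (1.0, 0.5, 0.1)]
    x = sum(imfs)
    print(select_relevant_imfs(x, imfs, keep=2))
    ```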

  • 25-Gbps/ch Error-Free Operation over 300-m MMF of Low-Power-Consumption Silicon-Photonics-Based Chip-Scale Optical I/O Cores Open Access

    Kenichiro YASHIKI  Toshinori UEMURA  Mitsuru KURIHARA  Yasuyuki SUZUKI  Masatoshi TOKUSHIMA  Yasuhiko HAGIHARA  Kazuhiko KURATA  

    INVITED PAPER

    Vol: E99-C No:2
    Page(s): 148-156

    Aiming to solve the input/output (I/O) bottleneck in next-generation interconnections, 5×5-mm silicon-photonics-based chip-scale optical transmitters/receivers (TXs/RXs) — called “optical I/O cores” — were developed. In addition to having a compact footprint, by employing low-power-consumption integrated circuits (ICs), providing multimode-fiber (MMF) transmission in the O band, and offering a user-friendly interface, the developed optical I/O cores are easy to use in applications such as multi-chip modules (MCMs) and active optical cables (AOCs). The power consumption of their hybrid-integrated ICs is 5mW/Gbps. Their high-density, user-friendly optical interface has a spot-size-converter (SSC) function and permits physical contact with the outer waveguides. As a result, they provide a misalignment tolerance large enough to allow passive alignment and visual alignment. In a performance test, they demonstrated 25-Gbps/ch error-free operation over 300-m MMF.

  • Greedy Approach Based Heuristics for Partitioning Sparse Matrices

    Jiasen HUANG  Junyan REN  Wei LI  

    LETTER-Fundamentals of Information Systems

    Publicized: 2015/07/02
    Vol: E98-D No:10
    Page(s): 1847-1851

    Sparse Matrix-Vector Multiplication (SpMxV) is widely used in many high-performance computing applications, including information retrieval, medical imaging, and economic modeling. To eliminate the overhead of zero padding in SpMxV, prior works have focused on partitioning a sparse matrix into row vector sets (RVS's) or sub-matrices. However, performance was still degraded by the sparsity pattern of the matrix. In this letter, we propose a heuristic, called recursive merging, which uses a greedy approach to recursively merge the nonzero row vectors of a matrix into RVS's, such that each resulting set is a locally optimal solution. For ten uneven benchmark matrices from the University of Florida Sparse Matrix Collection, our partitioning algorithm is always the method with the highest mean density (over 96%) and the lowest average relative difference in computing power (below 0.07%).
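
    The greedy idea can be sketched as below; the density measure and the local-optimality test are assumptions standing in for the letter's exact criterion.

    ```python
    # Greedily merge nonzero rows into row vector sets (RVS's), keeping
    # each set's density (nonzeros / padded size) as high as possible.
    import numpy as np

    def recursive_merge(rows, max_width):
        """rows: one array of nonzero column indices per matrix row."""
        sets = []  # each RVS: [list of row ids, union of their columns]
        for rid, cols in sorted(enumerate(rows), key=lambda t: -len(t[1])):
            best, best_density, best_union = None, 0.0, None
            for s in sets:
                union = np.union1d(s[1], cols)
                if len(union) > max_width:
                    continue  # merged set would need too much padding
                nnz = sum(len(rows[r]) for r in s[0]) + len(cols)
                density = nnz / ((len(s[0]) + 1) * len(union))
                if density > best_density:
                    best, best_density, best_union = s, density, union
            if best is None:
                sets.append([[rid], np.asarray(cols)])
            else:
                best[0].append(rid)
                best[1] = best_union
        return sets

    rows = [np.array([0, 1, 2]), np.array([1, 2]), np.array([5]), np.array([0, 2, 3])]
    print(recursive_merge(rows, max_width=4))
    ```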

  • Software Abnormal Behavior Detection Based on Function Semantic Tree

    Yingxu LAI  Wenwen ZHANG  Zhen YANG  

    PAPER-Software System

    Publicized: 2015/07/03
    Vol: E98-D No:10
    Page(s): 1777-1787

    Current software behavior models lack the ability to conduct semantic analysis. We propose a new model that detects abnormal behaviors based on a function semantic tree. First, a software behavior model in terms of a state graph and software functions is developed. Next, anomaly detection based on the model is conducted in two main steps: calculating the deviation density of suspicious behaviors by comparison with the state graph, and checking function sequences against function semantic rules. The deviation density can reliably detect control-flow attacks via a deviation factor and period division. In addition, with the help of semantic analysis, the function semantic rules can accurately detect application-layer attacks that defeat traditional approaches. Finally, a case study of RSS software illustrates how our approach works. The case study and a comparison experiment show that our model has strong expressivity and detection ability, outperforming traditional behavior models.

  • Evaluation of Accuracy of Charge Pumping Current in Time Domain

    Tokinobu WATANABE  Masahiro HORI  Taiki SARUWATARI  Toshiaki TSUCHIYA  Yukinori ONO  

    PAPER

    Vol: E98-C No:5
    Page(s): 390-394

    The accuracy of a method for analyzing interface defect properties, the time-domain charge pumping (CP) method, is evaluated. The method monitors the CP current in the time domain, and is thus expected to offer a novel way to investigate interface state properties. In this study, to evaluate the accuracy of the method, the interface state density extracted from the time-domain data is compared with that measured using the conventional CP method. The results show that the two agree for all measured devices across various defect densities, demonstrating that the time-domain CP method is sufficiently accurate for defect density evaluation.
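
    For context, the conventional CP measurement referred to here extracts the mean interface state density from the standard textbook relation I_cp = q · f · A_G · D̄_it · ΔE, where q is the elementary charge, f the gate-pulse frequency, A_G the gate area, and ΔE the scanned energy range; the time-domain method instead monitors this current as it evolves within the pulse cycle. (The relation is quoted from standard CP theory, not from the paper.)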

  • Direct Density Ratio Estimation with Convolutional Neural Networks with Application in Outlier Detection

    Hyunha NAM  Masashi SUGIYAMA  

    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2015/01/28
    Vol: E98-D No:5
    Page(s): 1073-1079

    Recently, the ratio of probability density functions was demonstrated to be useful in solving various machine learning tasks such as outlier detection, non-stationarity adaptation, feature selection, and clustering. The key idea of this density ratio approach is that the ratio is estimated directly, so that difficult density estimation is avoided. So far, parametric and non-parametric direct density ratio estimators with various loss functions have been developed, and the kernel least-squares method was demonstrated to be highly useful in terms of both accuracy and computational efficiency. On the other hand, recent studies in pattern recognition have shown that deep architectures such as convolutional neural networks can significantly outperform kernel methods. In this paper, we propose to use a convolutional neural network for density ratio estimation, and experimentally show that the proposed method tends to outperform the kernel-based method in outlying image detection.
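
    A sketch of the idea in PyTorch follows: a small CNN outputs a nonnegative ratio r(x) and is trained with the least-squares objective J = E_q[r²]/2 − E_p[r], whose minimizer is r(x) = p(x)/q(x). The architecture and data are placeholders, not the paper's setup.

    ```python
    # Direct density-ratio estimation with a CNN and a least-squares loss.
    import torch
    import torch.nn as nn

    class RatioCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(), nn.Linear(16 * 7 * 7, 1), nn.Softplus())

        def forward(self, x):
            return self.net(x).squeeze(-1)  # r(x) >= 0

    model = RatioCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_num = torch.randn(64, 1, 28, 28)  # samples from the numerator p
    x_den = torch.randn(64, 1, 28, 28)  # samples from the denominator q
    for _ in range(10):
        loss = 0.5 * model(x_den).pow(2).mean() - model(x_num).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # For outlier detection, test images with small r(x) are flagged.
    ```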

  • Impact of Cell Distance and Well-contact Density on Neutron-induced Multiple Cell Upsets

    Jun FURUTA  Kazutoshi KOBAYASHI  Hidetoshi ONODERA  

    PAPER

    Vol: E98-C No:4
    Page(s): 298-303

    We measure neutron-induced Single Event Upsets (SEUs) and Multiple Cell Upsets (MCUs) on Flip-Flops (FFs) in a 65-nm bulk CMOS process in order to evaluate the dependence of MCUs on cell distance and well-contact density, using four different shift registers. Measurement results from accelerated tests show that the MCU/SEU ratio is up to 23.4% and that it decreases exponentially with the distance between the latches of the FFs. MCU rates can be drastically reduced by inserting well-contact arrays between FFs: the number of MCUs is reduced from 110 to 1 by inserting well-contact arrays under the power and ground rails.

  • Iterative Channel Estimation and Decoding via Spatial Coupling

    Shuhei HORIO  Keigo TAKEUCHI  Tsutomu KAWABATA  

    PAPER

    Vol: E98-A No:2
    Page(s): 549-557

    For low-density parity-check codes, spatial coupling was proved to boost the performance of iterative decoding up to that of optimal decoding. As an application of spatial coupling, in this paper, bit-interleaved coded modulation (BICM) with spatially coupled (SC) interleaving — called SC-BICM — is considered to improve the performance of iterative channel estimation and decoding for block-fading channels. In the iterative receiver, feedback from the soft-in soft-out decoder is utilized to refine the initial channel estimates in linear minimum mean-squared error (LMMSE) channel estimation. Density evolution in the infinite-code-length limit implies that SC-BICM allows the receiver to attain accurate channel estimates even when the pilot overhead for training is negligibly small. Furthermore, numerical simulations show that SC-BICM can provide a steeper reduction in bit error rate than conventional BICM, as well as a significant improvement in the so-called waterfall performance for high-rate systems.
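
    The LMMSE step has a standard closed form; a minimal sketch is below, with a plain pilot matrix standing in for the soft-feedback-refined observations used in the iterative receiver (a simplifying assumption).

    ```python
    # LMMSE channel estimation for y = X h + n with prior covariance C_h.
    import numpy as np

    def lmmse_estimate(y, X, C_h, s2):
        # h_hat = C_h X^H (X C_h X^H + s2 I)^{-1} y
        G = X @ C_h @ X.conj().T + s2 * np.eye(len(y))
        return C_h @ X.conj().T @ np.linalg.solve(G, y)

    rng = np.random.default_rng(0)
    h = rng.normal(size=2) + 1j * rng.normal(size=2)            # 2 channel taps
    X = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))  # 4 pilot rows
    y = X @ h + 0.1 * (rng.normal(size=4) + 1j * rng.normal(size=4))
    print(lmmse_estimate(y, X, 2 * np.eye(2), s2=0.02))
    ```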

  • A Design Strategy of Error-Prediction Low-Density Parity-Check (EP-LDPC) Error-Correcting Code (ECC) and Error-Recovery Schemes for Scaled NAND Flash Memories

    Shuhei TANAKAMARU  Masafumi DOI  Ken TAKEUCHI  

    PAPER-Integrated Electronics

    Vol: E98-C No:1
    Page(s): 53-61

    A design strategy (the required ECC strength and a method for judging the dominant error mode) for error-prediction low-density parity-check (EP-LDPC) error-correcting code (ECC) and error-recovery schemes for scaled NAND flash memories is discussed in this paper. The reliability characteristics of NAND flash memories are investigated with 1X, 2X and 3Xnm NAND flash memories. Moreover, the system-level reliability of SSDs is analyzed from the acceptable data-retention time of the SSD. The reliability of NAND flash memory continuously degrades as the design rule shrinks, due to various problems. As a result, future SSDs will not be able to maintain system-level reliability unless advanced ECCs with signal processing are adopted. Therefore, EP-LDPC and error-recovery (ER) schemes have previously been proposed to improve reliability. Reliability characteristics such as the bit-error rate (BER) versus the data-retention time and the effect of cell-to-cell interference on the BER are measured. The reliability characteristics obtained in this paper are stored in an SSD as a reliability table, which plays a principal role in the EP-LDPC scheme. The effectiveness of the EP-LDPC scheme as the NAND flash memory scales is also discussed by analyzing the cell-to-cell interference. An interference factor α is proposed to discuss the impact of the cell-to-cell coupling. As a result, the EP-LDPC scheme is expected to remain effective down to 1Xnm NAND flash memory. On the other hand, the ER scheme applies different voltage pulses to memory cells according to the dominant error mode: program-disturb-dominant or data-retention-error-dominant. This paper examines when the error mode changes, which determines the pulse that should be applied. Additionally, methods for estimating the dominant error mode in the ER scheme are discussed. Finally, as a result of the system-level reliability analysis, it is concluded that the EP-LDPC scheme can maintain the reliability of NAND flash memory at the 1Xnm technology node.

  • Algorithm for the Length-Constrained Maximum-Density Path Problem in a Tree with Uniform Edge Lengths

    Sung Kwon KIM  

    PAPER-Fundamentals of Information Systems

    Vol: E98-D No:1
    Page(s): 103-107

    Given an edge-weighted tree with n vertices and a positive integer L, the length-constrained maximum-density path problem is to find a path of length at least L with maximum density in the tree. The density of a path is the sum of the weights of the edges in the path divided by the number of edges in the path. We present an O(n) time algorithm for the problem. The previously known algorithms run in O(nL) or O(n log n) time.
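
    As a reference point for the problem statement (not the paper's O(n) algorithm), a brute-force solver is easy to write: in a tree the path between any two vertices is unique, so enumerating all vertex pairs suffices.

    ```python
    # O(n^2) brute force for the length-constrained maximum-density path.
    from collections import deque

    def max_density_path(adj, weight, L):
        """adj: adjacency lists; weight[(u, v)] = weight[(v, u)]: edge weight."""
        n = len(adj)
        best_density, best_pair = float("-inf"), None
        for s in range(n):
            # BFS from s; in a tree this fixes the unique path to each vertex.
            dist_w, dist_e = {s: 0.0}, {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist_e:
                        dist_w[v] = dist_w[u] + weight[(u, v)]
                        dist_e[v] = dist_e[u] + 1
                        q.append(v)
            for t in range(s + 1, n):
                if dist_e[t] >= L and dist_w[t] / dist_e[t] > best_density:
                    best_density, best_pair = dist_w[t] / dist_e[t], (s, t)
        return best_density, best_pair

    adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
    w = {(0, 1): 3.0, (1, 0): 3.0, (1, 2): 1.0, (2, 1): 1.0,
         (1, 3): 5.0, (3, 1): 5.0}
    print(max_density_path(adj, w, L=2))  # (4.0, (0, 3)): path 0-1-3
    ```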

  • Location-Aware Store-Carry-Forward Routing Based on Node Density Estimation

    Tomotaka KIMURA  Takahiro MATSUDA  Tetsuya TAKINE  

    PAPER

    Vol: E98-B No:1
    Page(s): 99-106

    We consider a location-aware store-carry-forward routing scheme based on node density estimation (LA Routing in short), which adopts different message forwarding strategies depending on the node density at contact locations where two nodes encounter each other. To do so, each node estimates a node density distribution based on information about contact locations. In this paper, we clarify how the estimation accuracy affects the performance of LA Routing. We also examine its performance when applied to networks with homogeneous node density. Through simulation experiments, we show that LA Routing is fairly robust to the accuracy of node density estimation and that its performance is comparable to that of Probabilistic Routing even when node density is homogeneous.
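
    The estimation step can be pictured as a simple spatial histogram; the grid resolution and absence of smoothing are assumptions, and the forwarding policy then switches on the estimated density at the current contact location.

    ```python
    # Toy node-density estimate from recorded contact locations.
    import numpy as np

    def estimate_density_map(contact_xy, area=1000.0, cells=10):
        H, _, _ = np.histogram2d(contact_xy[:, 0], contact_xy[:, 1],
                                 bins=cells, range=[[0, area], [0, area]])
        return H / H.sum()  # empirical contact-location distribution

    contacts = np.random.uniform(0, 1000.0, size=(500, 2))
    print(estimate_density_map(contacts).round(3))
    ```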

  • On the Minimum-Weight Codewords of Array LDPC Codes with Column Weight 4

    Haiyang LIU  Gang DENG  Jie CHEN  

    PAPER-Coding Theory

    Vol: E97-A No:11
    Page(s): 2236-2246

    In this paper, we investigate the minimum-weight codewords of array LDPC codes C(m,q), where q is an odd prime and m ≤ q. Using some analytical approaches, the lower bound on the number of minimum-weight codewords of C(m,q) given by Kaji (IEEE Int. Symp. Inf. Theory, June/July 2009) is proven to be tight for m = 4 and q > 19. In other words, C(4,q) has 4q2(q-1) minimum-weight codewords for all q > 19. In addition, we show some interesting universal properties of the supports of generators of minimum-weight codewords of the code C(4,q)(q > 19).
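
    For readers unfamiliar with the family, the parity-check matrix of the array code C(m,q) is the m×q grid of q×q circulant permutation matrices P^{ij}; a toy construction is shown below (standard array-code definition, with small sizes for illustration).

    ```python
    # Build the array LDPC parity-check matrix H(m, q), q an odd prime.
    import numpy as np

    def array_ldpc_H(m, q):
        P = np.roll(np.eye(q, dtype=int), 1, axis=1)  # cyclic-shift matrix
        return np.block([[np.linalg.matrix_power(P, (i * j) % q)
                          for j in range(q)] for i in range(m)])

    H = array_ldpc_H(m=4, q=5)
    print(H.shape)                                 # (20, 25): mq x q^2
    print(set(H.sum(axis=0)), set(H.sum(axis=1)))  # column weight m, row weight q
    ```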

  • Unsupervised Dimension Reduction via Least-Squares Quadratic Mutual Information

    Janya SAINUI  Masashi SUGIYAMA  

    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2014/07/22
    Vol: E97-D No:10
    Page(s): 2806-2809

    The goal of dimension reduction is to represent high-dimensional data in a lower-dimensional subspace, while intrinsic properties of the original data are kept as much as possible. An important challenge in unsupervised dimension reduction is the choice of tuning parameters, because no supervised information is available and thus parameter selection tends to be subjective and heuristic. In this paper, we propose an information-theoretic approach to unsupervised dimension reduction that allows objective tuning parameter selection. We employ quadratic mutual information (QMI) as our information measure, which is known to be less sensitive to outliers than ordinary mutual information, and QMI is estimated analytically by a least-squares method in a computationally efficient way. Then, we provide an eigenvector-based efficient implementation for performing unsupervised dimension reduction based on the QMI estimator. The usefulness of the proposed method is demonstrated through experiments.
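
    For context, the quadratic mutual information used here is the squared L2 distance between the joint density and the product of the marginals, QMI(X,Y) = ∬ (p(x,y) − p(x)p(y))² dx dy; since no logarithm is involved, a few outlying samples perturb it less than ordinary mutual information. (Standard definition, quoted for reference.)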

  • Constrained Least-Squares Density-Difference Estimation

    Tuan Duong NGUYEN  Marthinus Christoffel DU PLESSIS  Takafumi KANAMORI  Masashi SUGIYAMA  

    PAPER-Artificial Intelligence, Data Mining

    Vol: E97-D No:7
    Page(s): 1822-1829

    We address the problem of estimating the difference between two probability densities. A naive approach is a two-step procedure that first estimates two densities separately and then computes their difference. However, such a two-step procedure does not necessarily work well because the first step is performed without regard to the second step and thus a small error in the first stage can cause a big error in the second stage. Recently, a single-shot method called the least-squares density-difference (LSDD) estimator has been proposed. LSDD directly estimates the density difference without separately estimating two densities, and it was demonstrated to outperform the two-step approach. In this paper, we propose a variation of LSDD called the constrained least-squares density-difference (CLSDD) estimator, and theoretically prove that CLSDD improves the accuracy of density difference estimation for correctly specified parametric models. The usefulness of the proposed method is also demonstrated experimentally.
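
    For contrast with the proposed CLSDD, the plain LSDD estimator admits a short implementation: with Gaussian basis functions the squared-integral term has a closed form, leaving a regularized linear system. This sketch implements plain LSDD only, under assumed kernel width and regularization; the paper's constrained variant adds a constraint on top of it.

    ```python
    # Plain least-squares density-difference (LSDD) estimation.
    import numpy as np

    def lsdd(x1, x2, centers, sigma, lam=1e-3):
        d = centers.shape[1]
        # H[l, l'] = integral of the product of two Gaussian basis functions:
        # (pi sigma^2)^(d/2) * exp(-||c_l - c_l'||^2 / (4 sigma^2))
        cd = ((centers[:, None] - centers[None]) ** 2).sum(-1)
        H = (np.pi * sigma ** 2) ** (d / 2) * np.exp(-cd / (4 * sigma ** 2))

        def phi(x):  # basis evaluations, shape (len(x), len(centers))
            sq = ((x[:, None] - centers[None]) ** 2).sum(-1)
            return np.exp(-sq / (2 * sigma ** 2))

        h = phi(x1).mean(0) - phi(x2).mean(0)
        theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
        return lambda x: phi(x) @ theta  # estimate of p1(x) - p2(x)

    rng = np.random.default_rng(1)
    x1 = rng.normal(0.0, 1.0, (200, 1))
    x2 = rng.normal(0.5, 1.0, (200, 1))
    g = lsdd(x1, x2, centers=x1[:20], sigma=0.5)
    print(g(np.array([[0.0], [0.5]])))
    ```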

  • Fast Density-Based Clustering Using Graphics Processing Units

    Woong-Kee LOH  Yang-Sae MOON  Young-Ho PARK  

    LETTER-Artificial Intelligence, Data Mining

    Vol: E97-D No:5
    Page(s): 1349-1352

    Due to recent technical advances, GPUs are used for general-purpose applications as well as screen display. Many research results have shown that the performance of previous CPU-based algorithms can be improved by a few hundred times using GPUs. In this paper, we propose a density-based clustering algorithm called GSCAN, which reduces the number of unnecessary distance computations using a grid structure. In our experiments, GSCAN outperformed CUDA-DClust [2] and DBSCAN [3] by up to 13.9 and 32.6 times, respectively.
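
    The grid trick is easy to sketch on the CPU (the paper's contribution is the GPU version): with cell side eps, all eps-neighbors of a point lie in its own or the eight adjacent cells, so distance computations to all other cells are skipped.

    ```python
    # Grid-pruned eps-neighborhood queries for density-based clustering.
    import numpy as np
    from collections import defaultdict

    def grid_neighbors(points, eps):
        grid = defaultdict(list)
        for i, p in enumerate(points):
            grid[tuple((p // eps).astype(int))].append(i)
        neighbors = {}
        for i, p in enumerate(points):
            cx, cy = (p // eps).astype(int)
            cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    for j in grid[(cx + dx, cy + dy)]]
            dist = np.linalg.norm(points[cand] - p, axis=1)
            neighbors[i] = [c for c, d in zip(cand, dist) if d <= eps and c != i]
        return neighbors  # feed into the usual DBSCAN core-point expansion

    pts = np.random.rand(1000, 2)
    nbrs = grid_neighbors(pts, eps=0.05)
    print(sum(len(v) for v in nbrs.values()))
    ```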

  • Computationally Efficient Estimation of Squared-Loss Mutual Information with Multiplicative Kernel Models

    Tomoya SAKAI  Masashi SUGIYAMA  

    LETTER-Fundamentals of Information Systems

    Vol: E97-D No:4
    Page(s): 968-971

    Squared-loss mutual information (SMI) is a robust measure of the statistical dependence between random variables. The sample-based SMI approximator called least-squares mutual information (LSMI) was demonstrated to be useful in performing various machine learning tasks such as dimension reduction, clustering, and causal inference. The original LSMI approximates the pointwise mutual information by using the kernel model, which is a linear combination of kernel basis functions located on paired data samples. Although LSMI was proved to achieve the optimal approximation accuracy asymptotically, its approximation capability is limited when the sample size is small due to an insufficient number of kernel basis functions. Increasing the number of kernel basis functions can mitigate this weakness, but a naive implementation of this idea significantly increases the computation cost. In this article, we show that the computational complexity of LSMI with the multiplicative kernel model, which locates kernel basis functions on unpaired data samples so that the number of kernel basis functions is the sample size squared, is the same as that for the plain kernel model. We experimentally demonstrate that LSMI with the multiplicative kernel model is more accurate than that with plain kernel models in small-sample cases, with only a mild increase in computation time.
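
    The structural point can be made concrete with two toy Gram matrices: for paired samples the plain model's design matrix is their elementwise product (n basis functions), while the multiplicative model's is their row-wise Kronecker (Khatri-Rao) product (n² basis functions), built from exactly the same two n×n matrices. This is a schematic of the claim, not the letter's full LSMI solver.

    ```python
    # Plain vs. multiplicative kernel design matrices for LSMI.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    x, y = rng.normal(size=(n, 1)), rng.normal(size=(n, 1))
    gauss = lambda a, b: np.exp(-(a - b.T) ** 2)  # toy Gaussian kernels
    K, L = gauss(x, x), gauss(y, y)  # K[m, i] = k(x_m, x_i), L[m, j] = l(y_m, y_j)

    A_plain = K * L                                           # (n, n)
    A_mult = np.einsum('mi,mj->mij', K, L).reshape(n, n * n)  # (n, n^2)
    print(A_plain.shape, A_mult.shape)
    ```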
