
Keyword Search Results

[Keyword] density (274 hits)

Showing results 41-60 of 274

  • Doppler Spread Estimation for an OFDM System with a Rayleigh Fading Channel

    Eunchul YOON  Janghyun KIM  Unil YUN  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

      Publicized:
    2017/11/13
      Vol:
    E101-B No:5
      Page(s):
    1328-1335

    A novel Doppler spread estimation scheme is proposed for an orthogonal frequency division multiplexing (OFDM) system with a Rayleigh fading channel. The proposed scheme forms a composite power spectral density (PSD) function by averaging multiple PSD functions computed from multiple sets of channel frequency response (CFR) coefficients. The Doppler spread is then estimated as the maximum frequency location among the composite PSD values that exceed a threshold set to a fixed fraction of the composite PSD peak. Simulations show that the proposed scheme outperforms three conventional Doppler spread estimation schemes not only in isotropic scattering environments but also in nonisotropic scattering environments. Moreover, the proposed scheme performs well in some Rician channel environments when the Rician K-factor is small.
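
    The thresholded-peak step can be illustrated with a minimal NumPy sketch (not code from the paper): per-subcarrier PSDs of the CFR are averaged into a composite PSD, and the largest Doppler frequency whose composite PSD exceeds a fixed fraction of the peak is returned. The function name, the periodogram-style PSD, and the threshold fraction are assumptions.

        # Illustrative sketch (not from the paper): estimate Doppler spread by
        # averaging per-subcarrier PSDs of CFR samples over time and
        # thresholding the composite PSD.
        import numpy as np

        def estimate_doppler_spread(cfr, symbol_rate, threshold_frac=0.1):
            """cfr: complex array, shape (num_symbols, num_subcarriers);
            symbol_rate: OFDM symbols per second; threshold_frac is an assumed
            fraction of the composite-PSD peak."""
            num_symbols = cfr.shape[0]
            # Per-subcarrier periodogram along the time axis.
            spectra = np.abs(np.fft.fft(cfr, axis=0)) ** 2 / num_symbols
            composite_psd = spectra.mean(axis=1)      # average over subcarrier sets
            freqs = np.fft.fftfreq(num_symbols, d=1.0 / symbol_rate)
            threshold = threshold_frac * composite_psd.max()
            above = np.abs(freqs)[composite_psd > threshold]
            return above.max() if above.size else 0.0  # largest frequency above threshold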

  • On the Second Separating Redundancy of LDPC Codes from Finite Planes

    Haiyang LIU  Yan LI  Lianrong MA  

     
    LETTER-Coding Theory

      Vol:
    E101-A No:3
      Page(s):
    617-622

    The separating redundancy is an important concept in the analysis of the error-and-erasure decoding of a linear block code using a parity-check matrix of the code. In this letter, we derive new constructive upper bounds on the second separating redundancies of low-density parity-check (LDPC) codes constructed from projective and Euclidean planes over the field Fq with q even.

  • Efficient Parallel Join Processing Exploiting SIMD in Multi-Thread Environments

    Gilseok HONG  Seonghyeon KANG  Chang soo KIM  Jun-Ki MIN  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2017/12/14
      Vol:
    E101-D No:3
      Page(s):
    659-667

    In this paper, we study parallel join processing to improve the performance of the merge phase of sort-merge join by exploiting all the parallelism provided by mainstream CPUs. Modern CPUs support SIMD instruction sets with wide SIMD registers, which allow multiple data items to be processed by a single instruction. We therefore devise an efficient parallel join algorithm, called Parallel Merge Join with SIMD instructions (PMJS). The proposed algorithm exploits data parallelism through SIMD instructions and further accelerates processing by avoiding conditional branch instructions. To take advantage of multiple cores, the algorithm is also multi-threaded. To distribute the workload evenly across threads, we devise an efficient workload balancing scheme based on a kernel density estimator, which accurately estimates the workload of each thread.
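
    The kernel-density-based balancing idea can be sketched as follows (a minimal illustration under assumed interfaces, not the paper's PMJS code): split points are chosen so that each thread receives a key range carrying roughly the same estimated workload. The use of SciPy's gaussian_kde and the grid size are assumptions.

        # Illustrative sketch: estimate the join-key density and place the
        # partition boundaries where the cumulative estimated workload crosses
        # equal fractions, so every thread gets a comparable share.
        import numpy as np
        from scipy.stats import gaussian_kde

        def balanced_partitions(sample_keys, num_threads):
            """sample_keys: 1-D sample of join keys; returns num_threads+1 split points."""
            sample_keys = np.asarray(sample_keys, dtype=float)
            kde = gaussian_kde(sample_keys)
            grid = np.linspace(sample_keys.min(), sample_keys.max(), 1024)
            cdf = np.cumsum(kde(grid))
            cdf /= cdf[-1]
            targets = np.arange(1, num_threads) / num_threads
            cut_idx = np.searchsorted(cdf, targets)
            return np.concatenate(([sample_keys.min()], grid[cut_idx], [sample_keys.max()]))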

  • Efficient Early Termination Criterion for ADMM Penalized LDPC Decoder

    Biao WANG  Xiaopeng JIAO  Jianjun MU  Zhongfei WANG  

     
    LETTER-Coding Theory

      Vol:
    E101-A No:3
      Page(s):
    623-626

    By tracking the changing rate of hard decisions over every two consecutive iterations of alternating direction method of multipliers (ADMM) penalized decoding, an efficient early termination (ET) criterion is proposed to improve the convergence rate of the ADMM penalized decoder for low-density parity-check (LDPC) codes. Compared with the existing ET criterion for ADMM penalized decoding, the proposed method significantly reduces the average number of iterations at low signal-to-noise ratios with negligible performance degradation.
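
    A minimal sketch of the hard-decision change-rate test (not the letter's exact criterion; the tolerance value is a hypothetical parameter):

        # Illustrative sketch: stop ADMM penalized decoding early when the
        # fraction of hard decisions that change between two consecutive
        # iterations falls below an assumed tolerance.
        import numpy as np

        def should_terminate(prev_bits, curr_bits, tol=1e-3):
            """prev_bits, curr_bits: 0/1 hard-decision arrays from two
            consecutive ADMM iterations; tol is a hypothetical threshold."""
            changing_rate = np.mean(prev_bits != curr_bits)  # fraction of flipped decisions
            return changing_rate < tol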

  • Optimal Design Method of Sub-Ranging ADC Based on Stochastic Comparator

    Md. Maruf HOSSAIN  Tetsuya IIZUKA  Toru NAKURA  Kunihiro ASADA  

     
    PAPER

      Vol:
    E101-A No:2
      Page(s):
    410-424

    An optimal design method for a sub-ranging Analog-to-Digital Converter (ADC) based on a stochastic comparator is demonstrated through theoretical analysis of random comparator offset voltages. If the Cumulative Distribution Function (CDF) of the comparator offset is defined appropriately, the probability density functions (PDFs) of the output code and the effective resolution of a stochastic comparator can be calculated. The analog-to-digital conversion accuracy (defined as yield) of a stochastic comparator can be modeled by assuming that the correlations among the numbers of comparator offsets within different analog steps corresponding to the Least Significant Bit (LSB) of the output transfer function are negligible. Comparison with Monte Carlo simulation verifies that the proposed model precisely estimates the yield of the ADC when it is designed for a reasonable target yield of >0.8. By applying this model to a stochastic comparator, we reveal that an additional calibration significantly enhances the resolution, i.e., it increases the Number of Bits (NOB) by ∼2 bits for the same target yield. Extending the model to a stochastic-comparator-based sub-ranging ADC indicates that the ADC design parameters can be tuned to find the optimal resource distribution between the deterministic coarse stage and the stochastic fine stage.

  • Statistical Property Guided Feature Extraction for Volume Data

    Li WANG  Xiaoan TANG  Junda ZHANG  Dongdong GUAN  

     
    LETTER-Pattern Recognition

      Publicized:
    2017/10/13
      Vol:
    E101-D No:1
      Page(s):
    261-264

    Feature visualization is of great significance in volume visualization, and feature extraction has become extremely popular for it. However, a precise definition of the features of interest is usually absent, which makes extraction difficult. This paper employs the probability density function (PDF) as a statistical property and proposes a statistical-property-guided approach to extracting features from volume data. Based on feature matching, it combines simple linear iterative clustering (SLIC) with a Gaussian mixture model (GMM) and can perform extraction without an accurate feature definition. Furthermore, the GMM is paired with a normality test to reduce time cost and storage requirements. We demonstrate the applicability and superiority of the approach by successfully applying it to homogeneous and non-homogeneous features.

  • Low Cost Wearable Sensor for Human Emotion Recognition Using Skin Conductance Response

    Khairun Nisa' MINHAD  Jonathan Shi Khai OOI  Sawal Hamid MD ALI  Mamun IBNE REAZ  Siti Anom AHMAD  

     
    PAPER-Biological Engineering

      Publicized:
    2017/08/23
      Vol:
    E100-D No:12
      Page(s):
    3010-3017

    Malaysia is one of the countries with the highest car crash fatality rates in Asia. The high implementation cost of in-vehicle driver behavior warning systems and autonomous driving remains a significant challenge. Motivated by the large number of simple yet effective inventions that have benefited many developing countries, this study presents findings on emotion recognition based on skin conductance response using a low-cost wearable sensor. Emotions were evoked by presenting the proposed display stimulus and a driving simulator. Meaningful power spectral density features were extracted from the filtered signal. Experimental protocols and frameworks were established to reduce the complexity of the emotion elicitation process. The proof of concept in this work demonstrated high accuracy in two-class and multiclass emotion classification. Significant differences between features were identified using statistical analysis. The protocol and framework are easy to use, yet have high potential to serve as a biomarker in intelligent automobiles, helping to prevent accidents and save lives through their simplicity.

  • An Efficient Weighted Bit-Flipping Algorithm for Decoding LDPC Codes Based on Log-Likelihood Ratio of Bit Error Probability

    Tso-Cho CHEN  Erl-Huei LU  Chia-Jung LI  Kuo-Tsang HUANG  

     
    PAPER-Fundamental Theories for Communications

      Publicized:
    2017/05/29
      Vol:
    E100-B No:12
      Page(s):
    2095-2103

    In this paper, a weighted multiple bit flipping (WMBF) algorithm for decoding low-density parity-check (LDPC) codes is first proposed. An improved WMBF algorithm, which we call the efficient weighted bit-flipping (EWBF) algorithm, is then developed. The EWBF algorithm can dynamically choose either multiple bit-flipping or single bit-flipping in each iteration according to the log-likelihood ratio of the error probability of the received bits. Thus, it can efficiently increase the convergence speed of decoding and prevent the decoding process from falling into loop traps. Compared with the parallel weighted bit-flipping (PWBF) algorithm, the EWBF algorithm achieves significantly lower computational complexity without performance degradation when Euclidean geometry (EG)-LDPC codes are decoded. Furthermore, the flipping criterion does not require any parameter adjustment.
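
    A simplified sketch of the switching idea follows (not the authors' exact rule): depending on how many received bits look unreliable according to their log-likelihood ratios, either one bit or several bits are flipped in an iteration. The reliability test and both thresholds are hypothetical.

        # Illustrative sketch: choose between single and multiple bit-flipping
        # per iteration from the apparent reliability of the received word.
        import numpy as np

        def flip_positions(metric, llr, metric_threshold=0.0,
                           unreliable_frac=0.1, llr_reliability=1.0):
            """metric: per-bit weighted flipping function (larger = more suspicious);
            llr: per-bit log-likelihood ratios of bit error probability."""
            frac_unreliable = np.mean(np.abs(llr) < llr_reliability)
            if frac_unreliable > unreliable_frac:          # many doubtful bits: multi-flip
                return np.flatnonzero(metric > metric_threshold)
            return np.array([np.argmax(metric)])           # otherwise flip the worst bit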

  • Spatially “Mt. Fuji” Coupled LDPC Codes

    Yuta NAKAHARA  Shota SAITO  Toshiyasu MATSUSHIMA  

     
    PAPER-Coding Theory and Techniques

      Vol:
    E100-A No:12
      Page(s):
    2594-2606

    A new type of spatially coupled low-density parity-check (SCLDPC) code is proposed. This code has two benefits: (1) it requires fewer iterations than the usual SCLDPC code to correct the erasures occurring over the binary erasure channel in the waterfall region, and (2) it has a lower error floor than the usual SCLDPC code. The proposed code is constructed as a coupled chain of underlying LDPC codes whose code lengths increase exponentially as their position approaches the middle of the chain. We call our code the spatially “Mt. Fuji” coupled LDPC (SFCLDPC) code because the graph of the underlying code lengths against position looks like Mt. Fuji. With this structure, when the proposed SFCLDPC code and the original SCLDPC code are constructed with the same code rate and the same code length, L (the number of underlying LDPC codes) of the proposed SFCLDPC code becomes smaller and M (the code lengths of the underlying LDPC codes) becomes larger than those of the SCLDPC code. These properties of L and M enable the above reductions in the number of iterations and in the bit error rate in the error floor region, which are confirmed by density evolution and computer simulations.
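
    A toy sketch of the “Mt. Fuji” length profile (the growth factor and the end length are assumptions, not the paper's parameters):

        # Illustrative sketch: per-position code lengths of a chain that grows
        # exponentially toward the middle position.
        def fuji_code_lengths(L, base_length, growth=2.0):
            """L: number of underlying codes; base_length: length at the chain ends;
            growth: assumed exponential factor per step toward the middle."""
            mid = (L - 1) / 2.0
            return [int(base_length * growth ** (mid - abs(i - mid))) for i in range(L)]

        # Example: fuji_code_lengths(5, 100) -> [100, 200, 400, 200, 100]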

  • 5G Distributed Massive MIMO with Ultra-High Density Antenna Deployment in Low SHF Bands

    Tatsuki OKUYAMA  Satoshi SUYAMA  Jun MASHINO  Yukihiko OKUMURA  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2017/03/10
      Vol:
    E100-B No:10
      Page(s):
    1921-1927

    In order to tackle rapidly increasing traffic, dramatic performance enhancements in radio access technologies (RATs) are required for the fifth-generation (5G) mobile communication system. In 5G, small/semi-macro cells using Massive MIMO (M-MIMO) with much wider bandwidth in higher frequency bands are overlaid on macro cells using the existing frequency bands. Moreover, high-density deployment of small/semi-macro cells is expected to improve areal capacity. However, in the low SHF band (below 6GHz), the antenna array of M-MIMO is so large that it cannot be installed in some environments. Therefore, to improve system throughput in various 5G use cases, we have proposed distributed Massive MIMO (DM-MIMO), which coordinates a large number of distributed transmission points (TPs) deployed at ultra-high density (UHD) and allows the number of antenna elements to differ from TP to TP. In addition, DM-MIMO with UHD-TPs can create user-centric virtual cells that follow user mobility, and its flexible antenna deployment is applicable to various use cases. Key parameters, such as the number of distributed TPs, the number of antenna elements per TP, and the proper distance between TPs, must then be determined. This paper presents such parameters for 5G DM-MIMO with flexible antenna deployment under fixed total transmission power and a constant total number of antenna elements. Computer simulations show that DM-MIMO can achieve more than 1.9 times higher system throughput than an M-MIMO system using 128 antenna elements.

  • Estimation of Dense Displacement by Scale Invariant Polynomial Expansion of Heterogeneous Multi-View Images

    Kazuki SHIBATA  Mehrdad PANAHPOUR TEHERANI  Keita TAKAHASHI  Toshiaki FUJII  

     
    LETTER

      Publicized:
    2017/06/14
      Vol:
    E100-D No:9
      Page(s):
    2048-2051

    Several applications in 3-D visualization require dense correspondence detection for displacement estimation among heterogeneous multi-view images. Because the images differ in resolution (sampling density) and field of view, estimating dense displacement is not straightforward. We therefore propose a scale-invariant polynomial expansion method that can estimate dense displacement between two heterogeneous views. Evaluation on heterogeneous images verifies the accuracy of our approach.

  • Zigzag Decodable Fountain Codes

    Takayuki NOZAKI  

     
    PAPER-Coding Theory

      Vol:
    E100-A No:8
      Page(s):
    1693-1704

    This paper proposes a fountain coding system that has a lower decoding erasure rate and lower space complexity of the decoding algorithm than Raptor coding systems. The main idea of the proposed fountain code is to employ shift and exclusive OR operations to generate the output packets. This technique is known as the zigzag decodable code, which is efficiently decoded by the zigzag decoder; in other words, we propose a fountain code based on the zigzag decodable code. Moreover, we analyze the overhead, decoding erasure rate, decoding complexity, and asymptotic overhead of the proposed fountain code. As a result, we show that the proposed fountain code outperforms Raptor codes in terms of overhead and decoding erasure rate. Simulation results also show that the proposed fountain coding system outperforms the Raptor coding system in terms of overhead and the space complexity of decoding.
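
    The shift-and-XOR operation can be illustrated with a small sketch (not the paper's encoder; the selection of source packets by a degree distribution is omitted, and the bit-list representation is an assumption):

        # Illustrative sketch: form an output packet by XOR-ing bit-shifted
        # copies of the chosen source packets.
        def zigzag_output_packet(source_packets, shifts):
            """source_packets: list of equal-length bit lists; shifts: per-packet
            non-negative bit shifts (assumed small)."""
            length = len(source_packets[0]) + max(shifts)
            out = [0] * length
            for packet, shift in zip(source_packets, shifts):
                for i, bit in enumerate(packet):
                    out[i + shift] ^= bit        # shifted exclusive OR
            return out

        # Example: zigzag_output_packet([[1, 0, 1], [1, 1, 0]], [0, 1]) -> [1, 1, 0, 0]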

  • Change-Prone Java Method Prediction by Focusing on Individual Differences in Comment Density

    Aji ERY BURHANDENNY  Hirohisa AMAN  Minoru KAWAHARA  

     
    LETTER-Software Engineering

      Publicized:
    2017/02/15
      Vol:
    E100-D No:5
      Page(s):
    1128-1131

    This paper focuses on differences in comment density among individual programmers and proposes to adjust the conventional code complexity metric (cyclomatic complexity) using the abnormality of the comment density. An empirical study of nine popular open source Java products (comprising 103,246 methods) shows that the proposed metric outperforms the conventional one in predicting change-prone methods, improving the area under the ROC curve (AUC) by about 3.4% on average.
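
    One way to picture the adjustment (a hypothetical variant, not necessarily the paper's exact weighting) is to scale a method's cyclomatic complexity by how far its comment density deviates from the same author's usual density:

        # Illustrative sketch: z-score-style abnormality of a method's comment
        # density relative to the author's own history, used to scale complexity.
        import statistics

        def adjusted_complexity(cyclomatic, comment_density, author_densities):
            """author_densities: comment densities of the author's other methods."""
            mu = statistics.mean(author_densities)
            sigma = statistics.pstdev(author_densities) or 1.0
            abnormality = abs(comment_density - mu) / sigma
            return cyclomatic * (1.0 + abnormality)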

  • Integration of Spatial Cue-Based Noise Reduction and Speech Model-Based Source Restoration for Real Time Speech Enhancement

    Tomoko KAWASE  Kenta NIWA  Masakiyo FUJIMOTO  Kazunori KOBAYASHI  Shoko ARAKI  Tomohiro NAKATANI  

     
    PAPER-Digital Signal Processing

      Vol:
    E100-A No:5
      Page(s):
    1127-1136

    We propose a microphone array speech enhancement method that integrates spatial-cue-based source power spectral density (PSD) estimation and statistical speech model-based PSD estimation. The goal of this research is to clearly pick up target speech even in noisy environments such as crowded places, factories, and cars running at high speed. Beamforming with post-Wiener filtering is commonly used in conventional studies on microphone-array noise reduction. Calculating a Wiener filter requires speech and noise PSDs, which are estimated using spatial cues obtained from the microphone observations. Assuming that the sound sources are sparse in the temporal-spatial domain, the speech and noise PSDs may be estimated accurately; however, PSD estimation errors increase when this assumption does not hold. In this study, we integrate speech models with the PSD-estimation-in-beamspace method to correct speech and noise PSD estimation errors. A rough noise PSD estimate is obtained frame by frame by analyzing spatial cues from the array observations. By combining the noise PSD with a statistical model of clean speech, the relationship between the PSD of the observed signal and that of the target speech, hereafter called the observation model, can be described without pre-training. By exploiting Bayes' theorem, a Wiener filter is statistically generated from the observation models. Experiments conducted to evaluate the proposed method show that the signal-to-noise ratio and naturalness of the output speech signal are significantly better than those of conventional methods.
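
    The post-Wiener filtering step mentioned above can be sketched as follows (an illustrative fragment, not the authors' integrated estimator; the spectral floor and function names are assumptions):

        # Illustrative sketch: per-frequency Wiener gain built from estimated
        # speech and noise PSDs, applied to one beamformer output frame.
        import numpy as np

        def wiener_gain(speech_psd, noise_psd, floor=1e-3):
            """speech_psd, noise_psd: non-negative arrays per frequency bin."""
            gain = speech_psd / (speech_psd + noise_psd + 1e-12)
            return np.maximum(gain, floor)   # floor limits musical-noise artifacts

        def enhance_frame(beamformer_spectrum, speech_psd, noise_psd):
            return wiener_gain(speech_psd, noise_psd) * beamformer_spectrum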

  • Further Results on the Minimum and Stopping Distances of Full-Length RS-LDPC Codes

    Haiyang LIU  Hao ZHANG  Lianrong MA  

     
    LETTER-Coding Theory

      Vol:
    E100-A No:2
      Page(s):
    738-742

    Based on the codewords of the [q,2,q-1] extended Reed-Solomon (RS) code over the finite field Fq, we can construct a regular binary γq×q² matrix H(γ,q), where q is a power of 2 and γ≤q. The matrix H(γ,q) defines a regular low-density parity-check (LDPC) code C(γ,q), called a full-length RS-LDPC code. Using some analytical methods, we completely determine the values of s(H(4,q)), s(H(5,q)), and d(C(5,q)) in this letter, where s(H(γ,q)) and d(C(γ,q)) are the stopping distance of H(γ,q) and the minimum distance of C(γ,q), respectively.

  • Up-Stream Dispatching of Power by Density of Power Packet

    Shinya NAWATA  Ryo TAKAHASHI  Takashi HIKIHARA  

     
    LETTER-Systems and Control

      Vol:
    E99-A No:12
      Page(s):
    2581-2584

    A power packet is a unit of electric power transferred as a pulse with an information tag. This letter discusses up-stream dispatching of the power required at loads to sources through density modulation of power packets. Power is adjusted at a proposed router, which dispatches power packets according to their tags. The scheme is analyzed by the averaging method and verified numerically.

  • A Mobile Agent Based Distributed Variational Bayesian Algorithm for Flow and Speed Estimation in a Traffic System

    Mohiyeddin MOZAFFARI  Behrouz SAFARINEJADIAN  

     
    PAPER-Sensor network

      Publicized:
    2016/08/24
      Vol:
    E99-D No:12
      Page(s):
    2934-2942

    This paper provides a mobile agent based distributed variational Bayesian (MABDVB) algorithm for density estimation in sensor networks. Sensor measurements are assumed to be statistically modeled by a common Gaussian mixture model. In the proposed algorithm, mobile agents move along the routes of the network and compute local sufficient statistics using local measurements. The global sufficient statistics are then updated using these local sufficient statistics, and this procedure is repeated until convergence is reached. The parameters of the density function are finally approximated from the global sufficient statistics. Convergence of the proposed method is also studied analytically, and it is shown that the estimated parameters eventually converge to their true values. Finally, the proposed algorithm is applied to one-dimensional and two-dimensional data sets to show its promising performance.
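
    As an illustration (not the MABDVB algorithm itself), the local sufficient statistics an agent might accumulate at one node for a Gaussian mixture, given current responsibilities, could look like the following; the exact statistics exchanged in the paper are not reproduced here.

        # Illustrative sketch: per-component counts, weighted sums, and weighted
        # outer products computed from one node's local measurements.
        import numpy as np

        def local_sufficient_stats(x, resp):
            """x: (n, d) local measurements; resp: (n, K) responsibilities."""
            Nk = resp.sum(axis=0)                        # (K,) effective counts
            Sk = resp.T @ x                              # (K, d) weighted sums
            Qk = np.einsum('nk,nd,ne->kde', resp, x, x)  # (K, d, d) weighted outer products
            return Nk, Sk, Qk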

  • Performance Analysis Based on Density Evolution on Fault Erasure Belief Propagation Decoder

    Hiroki MORI  Tadashi WADAYAMA  

     
    PAPER-Coding Theory and Techniques

      Vol:
    E99-A No:12
      Page(s):
    2155-2161

    In this paper, we present an analysis of fault erasure BP decoders based on density evolution. In a fault BP decoder, the messages exchanged in the BP process are stochastically corrupted due to unreliable logic gates and flip-flops; i.e., we assume circuit components with transient faults. We derive a set of density evolution equations for the fault erasure BP processes. Our density evolution analysis reveals the asymptotic behavior of the estimation error probability of fault erasure BP decoders. In contrast to the fault-free case, the error probabilities of the fault erasure BP decoder converge to positive values, and there exists a discontinuity in the error curve corresponding to the fault BP threshold. It is also shown that a message encoding technique provides higher fault BP thresholds than those of the original decoders, at the cost of increased circuit size.
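
    A hypothetical density-evolution recursion with transient message faults is sketched below for a regular (dv, dc) ensemble on the binary erasure channel; the fault model (each non-erased message is erased with probability f) and the parametrisation are assumptions, not the paper's equations. With f > 0 the residual erasure probability stays bounded away from zero, mirroring the behavior described above.

        # Illustrative sketch: erasure-probability density evolution with an
        # assumed per-message fault probability f.
        def fault_density_evolution(eps, dv, dc, f, iters=200):
            """eps: channel erasure probability; dv, dc: variable/check degrees."""
            x = eps
            for _ in range(iters):
                y = 1.0 - (1.0 - x) ** (dc - 1)   # check-to-variable erasure probability
                y = y + (1.0 - y) * f             # fault on the check-to-variable message
                x = eps * y ** (dv - 1)           # variable-to-check erasure probability
                x = x + (1.0 - x) * f             # fault on the variable-to-check message
            return x                              # residual erasure probability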

  • Linear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channel

    Shunsuke HORII  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA  

     
    PAPER-Coding Theory and Techniques

      Vol:
    E99-A No:12
      Page(s):
    2170-2178

    In this study, we develop a new algorithm for decoding binary linear codes for symbol-pair read channels. The symbol-pair read channel was recently introduced by Cassuto and Blaum to model channels with higher write resolutions than read resolutions. The proposed decoding algorithm is based on linear programming (LP). For LDPC codes, the proposed algorithm runs in time polynomial in the codeword length. It is proved that the proposed LP decoder has the maximum-likelihood (ML) certificate property, i.e., the output of the decoder is guaranteed to be the ML codeword when it is integral. We also introduce the fractional pair distance dfp of the code, which is a lower bound on the minimum pair distance. It is proved that the proposed LP decoder corrects up to ⌈dfp/2⌉-1 errors.

  • Automatic Retrieval of Action Video Shots from the Web Using Density-Based Cluster Analysis and Outlier Detection

    Nga Hang DO  Keiji YANAI  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2016/07/21
      Vol:
    E99-D No:11
      Page(s):
    2788-2795

    In this paper, we introduce a fully automatic approach to constructing action datasets from noisy Web video search results. The idea is based on combining cluster structure analysis with density-based outlier detection. For a specific action concept, we first download its top Web search videos and segment them into video shots. We then organize these shots into subsets using density-based hierarchical clustering. For each subset, we rank its shots by their outlier degree, which is determined as their isolatedness with respect to their surroundings. Finally, we collect highly ranked shots as training data for the action concept. We demonstrate that with action models trained on our data, we obtain promising precision rates in the task of action classification while offering the advantage of fully automatic, scalable learning. Experimental results on UCF11, a challenging action dataset, show the effectiveness of our method.
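
    One simple way to score isolatedness (an illustrative sketch, not the paper's exact outlier-degree measure) is the average distance from each shot descriptor to its k nearest neighbours:

        # Illustrative sketch: k-nearest-neighbour isolatedness scores used to
        # rank shots within a cluster (larger = more isolated).
        import numpy as np

        def outlier_degrees(features, k=5):
            """features: (n, d) array of shot descriptors."""
            dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
            np.fill_diagonal(dists, np.inf)
            nearest = np.sort(dists, axis=1)[:, :k]   # k nearest neighbour distances
            return nearest.mean(axis=1)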
