Daisuke FUNAHASHI Takahiro ITO Akimasa HIRATA Takahiro IYAMA Teruo ONISHI
This study discusses the area-averaged incident power density used to estimate surface temperature elevation from patch antenna arrays with 4 and 9 elements at frequencies above 10 GHz. We computationally demonstrate that a smaller averaging area (1 cm²) of the power density should be used at frequencies of 30 GHz and higher, compared with the 4 cm² area appropriate at lower frequencies.
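A minimal Python sketch of computing a peak spatially averaged power density over a square averaging window (the window shape, grid sampling, and function names are illustrative assumptions, not the paper's procedure):

```python
import numpy as np
from scipy.signal import convolve2d

def peak_area_averaged_pd(pd_map, dx, area_cm2):
    """Sketch: average an incident power density map pd_map (W/m^2, sampled
    on a grid with spacing dx metres) over a square window of area_cm2 cm^2
    and return the peak averaged value (illustrative assumption)."""
    side = max(1, int(round(np.sqrt(area_cm2 * 1e-4) / dx)))
    kernel = np.ones((side, side)) / side ** 2
    return convolve2d(pd_map, kernel, mode='valid').max()
```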
Eunchul YOON Janghyun KIM Unil YUN
A novel Doppler spread estimation scheme is proposed for an orthogonal frequency division multiplexing (OFDM) system with a Rayleigh fading channel. The proposed scheme forms a composite power spectral density (PSD) function by averaging multiple PSD functions computed from multiple sets of channel frequency response (CFR) coefficients. The Doppler spread is estimated by locating the largest frequency at which the composite PSD exceeds a threshold given by a fixed fraction of its maximum value. It is shown by simulation that the proposed scheme performs better than three conventional Doppler spread estimation schemes, not only in isotropic scattering environments but also in nonisotropic scattering environments. Moreover, the proposed scheme is shown to perform well in some Rician channel environments if the Rician K-factor is small.
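A minimal Python sketch of such a thresholded composite-PSD search (the periodogram-style PSD, threshold ratio, and variable names are illustrative assumptions):

```python
import numpy as np

def estimate_doppler_spread(cfr, symbol_period, threshold_ratio=0.1):
    """Sketch: cfr has shape (num_subcarriers, num_symbols) and holds the
    CFR coefficients tracked over OFDM symbols; the threshold ratio is an
    illustrative value."""
    num_symbols = cfr.shape[1]
    # Per-subcarrier PSD (periodogram) of the CFR time variation
    psds = np.abs(np.fft.fft(cfr, axis=1)) ** 2 / num_symbols
    composite_psd = psds.mean(axis=0)          # average into a composite PSD
    freqs = np.fft.fftfreq(num_symbols, d=symbol_period)
    mask = composite_psd >= threshold_ratio * composite_psd.max()
    # Largest |frequency| whose composite PSD exceeds the threshold
    return np.abs(freqs[mask]).max()
```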
Gilseok HONG Seonghyeon KANG Chang soo KIM Jun-Ki MIN
In this paper, we study parallel join processing to improve the performance of the merge phase of sort-merge join by exploiting all of the parallelism provided by mainstream CPUs. Modern CPUs support SIMD instruction sets with wide SIMD registers, which allow multiple data items to be processed by a single instruction. We therefore devise an efficient parallel join algorithm, called Parallel Merge Join with SIMD instructions (PMJS). In the proposed algorithm, we exploit data parallelism through SIMD instructions and further accelerate performance by avoiding conditional branch instructions. To take advantage of multiple cores, the algorithm is also multi-threaded. To distribute the workload evenly across threads, we devise an efficient workload balancing algorithm based on a kernel density estimator, which accurately estimates the workload of each thread.
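A minimal Python sketch of a kernel-density-based workload split (the grid size, Gaussian kernel, and function names are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_partition_bounds(keys, num_threads, grid_size=1024):
    """Sketch: choose key-range split points so that each thread receives a
    roughly equal share of the estimated join workload."""
    kde = gaussian_kde(keys)
    grid = np.linspace(keys.min(), keys.max(), grid_size)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]                               # normalized workload CDF
    targets = np.arange(1, num_threads) / num_threads
    return grid[np.searchsorted(cdf, targets)]   # split points between threads
```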
Haiyang LIU Yan LI Lianrong MA
The separating redundancy is an important concept in the analysis of the error-and-erasure decoding of a linear block code using a parity-check matrix of the code. In this letter, we derive new constructive upper bounds on the second separating redundancies of low-density parity-check (LDPC) codes constructed from projective and Euclidean planes over the field Fq with q even.
Biao WANG Xiaopeng JIAO Jianjun MU Zhongfei WANG
By tracking the rate at which hard decisions change between every two consecutive iterations of alternating direction method of multipliers (ADMM) penalized decoding, an efficient early termination (ET) criterion is proposed to improve the convergence rate of the ADMM penalized decoder for low-density parity-check (LDPC) codes. Compared to the existing ET criterion for ADMM penalized decoding, the proposed method can reduce the average number of iterations significantly at low signal-to-noise ratios with negligible performance degradation.
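A minimal Python sketch of such an early-termination test (the threshold value and names are illustrative assumptions):

```python
import numpy as np

def should_terminate(hard_prev, hard_curr, rate_threshold=0.0):
    """Sketch: stop iterating once the fraction of hard decisions that
    change between two consecutive ADMM iterations falls to the threshold."""
    change_rate = np.mean(hard_prev != hard_curr)
    return change_rate <= rate_threshold
```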
Md. Maruf HOSSAIN Tetsuya IIZUKA Toru NAKURA Kunihiro ASADA
An optimal design method for a sub-ranging Analog-to-Digital Converter (ADC) based on a stochastic comparator is demonstrated by theoretical analysis of random comparator offset voltages. If the Cumulative Distribution Function (CDF) of the comparator offset is defined appropriately, we can calculate the probability density functions (PDFs) of the output code and the effective resolution of a stochastic comparator. It is possible to model the analog-to-digital conversion accuracy (defined as yield) of a stochastic comparator by assuming that the correlations among the numbers of comparator offsets within different analog steps corresponding to the Least Significant Bit (LSB) of the output transfer function are negligible. Comparison with Monte Carlo simulation verifies that the proposed model precisely estimates the yield of the ADC when it is designed for a reasonable target yield of >0.8. By applying this model to a stochastic comparator, we reveal that an additional calibration significantly enhances the resolution, i.e., it increases the Number of Bits (NOB) by ∼2 bits for the same target yield. Extending the model to a stochastic-comparator-based sub-ranging ADC indicates that the ADC design parameters can be tuned to find the optimal resource distribution between the deterministic coarse stage and the stochastic fine stage.
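A minimal Python sketch of Monte Carlo estimation of the output-code PDF of a stochastic comparator bank (the Gaussian offset model and names are illustrative assumptions):

```python
import numpy as np

def output_code_pdf(vin, num_comparators, sigma_offset, trials=10000, seed=0):
    """Sketch: the output code is the number of comparators whose random
    offset lies below the input voltage; repeated trials give an empirical
    PDF of the code."""
    rng = np.random.default_rng(seed)
    offsets = rng.normal(0.0, sigma_offset, size=(trials, num_comparators))
    codes = (offsets < vin).sum(axis=1)
    return np.bincount(codes, minlength=num_comparators + 1) / trials
```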
Li WANG Xiaoan TANG Junda ZHANG Dongdong GUAN
Feature visualization is of great significance in volume visualization, and feature extraction has become central to it. However, a precise definition of features is usually absent, which makes extraction difficult. This paper employs the probability density function (PDF) as a statistical property and proposes a statistical-property-guided approach to extract features from volume data. Based on feature matching, it combines simple linear iterative clustering (SLIC) with a Gaussian mixture model (GMM) and can perform extraction without an accurate feature definition. Further, the GMM is paired with a normality test to reduce the time cost and storage requirement. We demonstrate the applicability and superiority of the approach by successfully applying it to homogeneous and non-homogeneous features.
Tso-Cho CHEN Erl-Huei LU Chia-Jung LI Kuo-Tsang HUANG
In this paper, a weighted multiple bit flipping (WMBF) algorithm for decoding low-density parity-check (LDPC) codes is first proposed. An improved WMBF algorithm, which we call the efficient weighted bit-flipping (EWBF) algorithm, is then developed. The EWBF algorithm can dynamically choose either multiple bit-flipping or single bit-flipping in each iteration according to the log-likelihood ratio of the error probability of the received bits. Thus, it can efficiently increase the convergence speed of decoding and prevent the decoding process from falling into loop traps. Compared with the parallel weighted bit-flipping (PWBF) algorithm, the EWBF algorithm achieves significantly lower computational complexity without performance degradation when Euclidean geometry (EG)-LDPC codes are decoded. Furthermore, the flipping criterion does not require any parameter adjustment.
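A minimal Python sketch of one generic weighted bit-flipping iteration (this is the standard WBF update, not the paper's exact EWBF rule; names are illustrative):

```python
import numpy as np

def wbf_iteration(H, hard, y_abs, flip_multiple=False):
    """Sketch: H is a binary parity-check matrix, hard the current hard
    decisions, y_abs the channel reliabilities; unsatisfied checks vote
    through their least reliable bit and the worst bit(s) are flipped."""
    syndrome = H.dot(hard) % 2                       # 1 marks unsatisfied checks
    w = np.array([y_abs[H[m] == 1].min() for m in range(H.shape[0])])
    metric = ((2 * syndrome - 1) * w) @ H            # per-bit flipping metric
    flips = (np.flatnonzero(metric == metric.max())
             if flip_multiple else [int(np.argmax(metric))])
    hard = hard.copy()
    hard[flips] ^= 1
    return hard, bool(syndrome.any())                # True if checks were unsatisfied
```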
Khairun Nisa' MINHAD Jonathan Shi Khai OOI Sawal Hamid MD ALI Mamun IBNE REAZ Siti Anom AHMAD
Malaysia is one of the countries with the highest car crash fatality rates in Asia. The high implementation cost of in-vehicle driver behavior warning systems and autonomous driving remains a significant challenge. Motivated by the large number of simple yet effective inventions that have benefited many developing countries, this study presents findings on emotion recognition based on the skin conductance response using a low-cost wearable sensor. Emotions were evoked by presenting the proposed display stimulus and a driving simulator. Meaningful power spectral density features were extracted from the filtered signal. Experimental protocols and frameworks were established to reduce the complexity of the emotion elicitation process. The proof of concept in this work demonstrated high accuracy in two-class and multiclass emotion classification. Significant differences among features were identified using statistical analysis. The proposed protocol and framework are easy to use yet have high potential to serve as a biomarker in intelligent automobiles, helping to prevent accidents and save lives through their simplicity.
Yuta NAKAHARA Shota SAITO Toshiyasu MATSUSHIMA
A new type of spatially coupled low-density parity-check (SCLDPC) code is proposed. This code has two benefits: (1) it requires fewer iterations than the usual SCLDPC code to correct erasures occurring over the binary erasure channel in the waterfall region, and (2) it has a lower error floor than the usual SCLDPC code. The proposed code is constructed as a coupled chain of underlying LDPC codes whose code lengths increase exponentially as their position approaches the middle of the chain. We call our code the spatially “Mt. Fuji” coupled LDPC (SFCLDPC) code because the graph of the code lengths of the underlying LDPC codes at each position looks like Mt. Fuji. By this structure, when the proposed SFCLDPC code and the original SCLDPC code are constructed with the same code rate and the same code length, L (the number of underlying LDPC codes) of the proposed SFCLDPC code becomes smaller and M (the code lengths of the underlying LDPC codes) becomes larger than those of the SCLDPC code. These properties of L and M enable the above reduction in the number of iterations and in the bit error rate in the error floor region, which is confirmed by density evolution and computer simulations.
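A minimal Python sketch of the kind of code-length profile described above, in which lengths grow exponentially toward the middle of the chain (the base length and growth factor are illustrative assumptions, not the paper's construction):

```python
def fuji_code_lengths(L, base_length, growth=2):
    """Sketch: symmetric 'Mt. Fuji' profile of underlying code lengths
    over L positions."""
    half = [base_length * growth ** i for i in range((L + 1) // 2)]
    return half + half[::-1][L % 2:]

# Example: 5 positions with base length 100 -> [100, 200, 400, 200, 100]
print(fuji_code_lengths(5, 100))
```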
Tatsuki OKUYAMA Satoshi SUYAMA Jun MASHINO Yukihiko OKUMURA
In order to tackle rapidly increasing traffic, dramatic performance enhancements in radio access technologies (RATs) are required for the fifth-generation (5G) mobile communication system. In 5G, small/semi-macro cells using Massive MIMO (M-MIMO) with much wider bandwidth in higher frequency bands are overlaid on macro cells using existing frequency bands. Moreover, high-density deployment of small/semi-macro cells is expected to improve areal capacity. However, in the low SHF band (below 6 GHz), the antenna array of M-MIMO is so large that it cannot be installed in some environments. Therefore, to improve system throughput in various 5G use cases, we have proposed distributed Massive MIMO (DM-MIMO). DM-MIMO coordinates a large number of distributed transmission points (TPs) deployed at ultra-high density (UHD) and uses various numbers of antenna elements at each TP. In addition, DM-MIMO with UHD-TPs can create user-centric virtual cells that follow user mobility, and its flexible antenna deployment is applicable to various use cases. Key parameters, such as the number of distributed TPs, the number of antenna elements per TP, and the proper distance between TPs, must therefore be determined. This paper presents such parameters for 5G DM-MIMO with flexible antenna deployment under fixed total transmission power and a constant total number of antenna elements. Computer simulations show that DM-MIMO can achieve more than 1.9 times higher system throughput than an M-MIMO system using 128 antenna elements.
Kazuki SHIBATA Mehrdad PANAHPOUR TEHERANI Keita TAKAHASHI Toshiaki FUJII
Several applications in 3-D visualization require dense correspondence detection for displacement estimation among heterogeneous multi-view images. Owing to differences in resolution or sampling density and in field of view among the images, dense displacement estimation is not straightforward. We therefore propose a scale-invariant polynomial expansion method that can estimate dense displacement between two heterogeneous views. Evaluation on heterogeneous images verifies the accuracy of our approach.
This paper proposes a fountain coding system that has a lower decoding erasure rate and lower space complexity of the decoding algorithm than Raptor coding systems. The main idea of the proposed fountain code is to employ shift and exclusive-OR operations to generate the output packets. This technique is known as the zigzag decodable code, which is efficiently decoded by the zigzag decoder. In other words, we propose a fountain code based on the zigzag decodable code. Moreover, we analyze the overhead, decoding erasure rate, decoding complexity, and asymptotic overhead of the proposed fountain code. As a result, we show that the proposed fountain code outperforms the Raptor codes in terms of overhead and decoding erasure rate. Simulation results show that the proposed fountain coding system outperforms the Raptor coding system in terms of overhead and the space complexity of decoding.
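A minimal Python sketch of forming one output packet by shifting and XOR-ing selected source packets, in the spirit of zigzag decodable codes (the bit-level representation and shift amounts are illustrative assumptions):

```python
import numpy as np

def zigzag_encode(packets, shifts):
    """Sketch: shift each selected source packet (a 0/1 array) and XOR the
    results into one output packet."""
    length = max(len(p) + s for p, s in zip(packets, shifts))
    out = np.zeros(length, dtype=np.uint8)
    for p, s in zip(packets, shifts):
        out[s:s + len(p)] ^= p
    return out

# Example: XOR of two source packets, the second shifted by one position
p1 = np.array([1, 0, 1, 1], dtype=np.uint8)
p2 = np.array([0, 1, 1, 0], dtype=np.uint8)
print(zigzag_encode([p1, p2], shifts=[0, 1]))
```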
Tomoko KAWASE Kenta NIWA Masakiyo FUJIMOTO Kazunori KOBAYASHI Shoko ARAKI Tomohiro NAKATANI
We propose a microphone array speech enhancement method that integrates spatial-cue-based source power spectral density (PSD) estimation and statistical speech-model-based PSD estimation. The goal of this research is to clearly pick up target speech even in noisy environments such as crowded places, factories, and cars running at high speed. Beamforming with post-Wiener filtering is commonly used in conventional studies on microphone-array noise reduction. To calculate a Wiener filter, speech and noise PSDs are essential, and they are estimated using spatial cues obtained from microphone observations. Assuming that the sound sources are sparse in the temporal-spatial domain, the speech and noise PSDs may be estimated accurately; however, PSD estimation errors increase under circumstances beyond this assumption. In this study, we integrate statistical speech models with a PSD-estimation-in-beamspace method to correct speech/noise PSD estimation errors. A rough noise PSD estimate is obtained frame by frame by analyzing spatial cues from the array observations. By combining this noise PSD with a statistical model of clean speech, the relationship between the PSD of the observed signal and that of the target speech, hereafter called the observation model, can be described without pre-training. By exploiting Bayes' theorem, a Wiener filter is statistically generated from the observation models. Experiments conducted to evaluate the proposed method showed that the signal-to-noise ratio and naturalness of the output speech signal were significantly better than those obtained with conventional methods.
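A minimal Python sketch of a per-frequency Wiener post-filter built from estimated speech and noise PSDs (the spectral floor and names are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

def wiener_postfilter(beamformer_spec, speech_psd, noise_psd, floor=1e-10):
    """Sketch: apply the Wiener gain speech_psd / (speech_psd + noise_psd)
    to the beamformer output spectrum, frequency bin by frequency bin."""
    gain = speech_psd / np.maximum(speech_psd + noise_psd, floor)
    return gain * beamformer_spec
```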
Aji ERY BURHANDENNY Hirohisa AMAN Minoru KAWAHARA
This paper focuses on differences in comment densities among individual programmers, and proposes to adjust the conventional code complexity metric (the cyclomatic complexity) by using the abnormality of the comment density. An empirical study with nine popular open source Java products (including 103,246 methods) shows that the proposed metric performs better than the conventional one in predicting change-prone methods; the proposed metric improves the area under the ROC curve (AUC) by about 3.4% on average.
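A hypothetical Python illustration of the idea of inflating the cyclomatic complexity when a method's comment density is abnormally low for its author (this formula is an illustrative assumption, not the adjustment defined in the paper):

```python
import statistics

def adjusted_complexity(cyclomatic, comment_density, author_densities):
    """Hypothetical sketch: penalize methods whose comment density is
    unusually low compared with the same author's typical density."""
    mu = statistics.mean(author_densities)
    sigma = statistics.pstdev(author_densities) or 1.0
    abnormality = (mu - comment_density) / sigma   # positive when unusually sparse
    return cyclomatic * (1.0 + max(abnormality, 0.0))
```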
Haiyang LIU Hao ZHANG Lianrong MA
Based on the codewords of the [q,2,q-1] extended Reed-Solomon (RS) code over the finite field Fq, we can construct a regular binary γq × q² matrix H(γ,q), where q is a power of 2 and γ ≤ q. The matrix H(γ,q) defines a regular low-density parity-check (LDPC) code C(γ,q), called a full-length RS-LDPC code. Using analytical methods, we completely determine the values of s(H(4,q)), s(H(5,q)), and d(C(5,q)) in this letter, where s(H(γ,q)) and d(C(γ,q)) denote the stopping distance of H(γ,q) and the minimum distance of C(γ,q), respectively.
Shinya NAWATA Ryo TAKAHASHI Takashi HIKIHARA
A power packet is a unit of electric power transferred by a pulse with an information tag. This letter discusses upstream dispatching of the power required at loads to the sources through density modulation of power packets. Power is adjusted at a proposed router, which dispatches power packets according to their tags. The router is analyzed by an averaging method and verified numerically.
Mohiyeddin MOZAFFARI Behrouz SAFARINEJADIAN
This paper presents a mobile-agent-based distributed variational Bayesian (MABDVB) algorithm for density estimation in sensor networks. It is assumed that the sensor measurements can be statistically modeled by a common Gaussian mixture model. In the proposed algorithm, mobile agents move through the routes of the network and compute local sufficient statistics from local measurements. The global sufficient statistics are then updated using these local sufficient statistics, and this procedure is repeated until convergence is reached. Using the global sufficient statistics, the parameters of the density function are finally approximated. Convergence of the proposed method is also analytically studied, and it is shown that the estimated parameters eventually converge to their true values. Finally, the proposed algorithm is applied to one-dimensional and two-dimensional data sets to show its promising performance.
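A minimal Python sketch of the local sufficient statistics an agent could compute for a Gaussian mixture model at one node (the responsibility-weighted form and names are illustrative assumptions, not the paper's exact update):

```python
import numpy as np

def local_sufficient_statistics(x, resp):
    """Sketch: x is an (n, d) array of local measurements and resp an (n, K)
    array of current component responsibilities; returns the effective
    counts, weighted sums, and weighted outer products per component."""
    Nk = resp.sum(axis=0)                            # (K,)
    Sx = resp.T @ x                                  # (K, d)
    Sxx = np.einsum('nk,nd,ne->kde', resp, x, x)     # (K, d, d)
    return Nk, Sx, Sxx
```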
In this paper, we present an analysis of fault erasure belief propagation (BP) decoders based on density evolution. In a fault BP decoder, the messages exchanged in the BP process are stochastically corrupted due to unreliable logic gates and flip-flops; i.e., we assume circuit components with transient faults. We derive a set of density evolution equations for the fault erasure BP processes. Our density evolution analysis reveals the asymptotic behavior of the estimation error probability of the fault erasure BP decoders. In contrast to the fault-free case, it is observed that the error probabilities of the fault erasure BP decoder converge to positive values and that there exists a discontinuity in the error curve, corresponding to the fault BP threshold. It is also shown that a message encoding technique provides higher fault BP thresholds than those of the original decoders, at the cost of increased circuit size.
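A toy Python illustration of a density evolution recursion in which every exchanged message is additionally erased with a small probability to mimic transient faults (this fault model and the regular (dv, dc) ensemble are illustrative assumptions, not the paper's equations):

```python
def faulty_bec_density_evolution(eps, dv, dc, delta, iters=200):
    """Toy sketch: (dv, dc)-regular LDPC over the BEC with channel erasure
    probability eps; each message is additionally erased with probability
    delta, so the erasure probability converges to a positive value."""
    x = eps
    for _ in range(iters):
        y = delta + (1 - delta) * (1 - (1 - x) ** (dc - 1))   # check-to-variable
        x = delta + (1 - delta) * eps * y ** (dv - 1)         # variable-to-check
    return x

# Fault-free (delta=0) vs. faulty (delta=0.01) residual erasure probability
print(faulty_bec_density_evolution(0.3, 3, 6, 0.0),
      faulty_bec_density_evolution(0.3, 3, 6, 0.01))
```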
Shunsuke HORII Toshiyasu MATSUSHIMA Shigeichi HIRASAWA
In this study, we develop a new algorithm for decoding binary linear codes for symbol-pair read channels. The symbol-pair read channel was recently introduced by Cassuto and Blaum to model channels with higher write resolutions than read resolutions. The proposed decoding algorithm is based on linear programming (LP). For LDPC codes, the proposed algorithm runs in time polynomial in the codeword length. It is proved that the proposed LP decoder has the maximum-likelihood (ML) certificate property, i.e., the output of the decoder is guaranteed to be the ML codeword when it is integral. We also introduce the fractional pair distance dfp of the code, which is a lower bound on the minimum pair distance. It is proved that the proposed LP decoder corrects up to ⌈dfp/2⌉-1 errors.