
Author Search Result

[Author] Bin HU (11 hits)

1-11 of 11 hits
  • A Flexible Architecture for TURBO and LDPC Codes

    Yun CHEN  Yuebin HUANG  Chen CHEN  Changsheng ZHOU  Xiaoyang ZENG  

     
    LETTER-High-Level Synthesis and System-Level Design

      Vol:
    E95-A No:12
      Page(s):
    2392-2395

    Turbo codes and LDPC (Low-Density Parity-Check) codes are two of the most powerful error-correction codes and can approach the Shannon limit in many communication systems, but few architectures have been presented that support both LDPC and Turbo codes, especially as ASICs. This paper implements a common architecture that can decode both LDPC and Turbo codes and supports the WiMAX, WiFi, and 3GPP-LTE standards on the same hardware. We describe in detail how memory and logic devices are shared among the different operating modes. The chip is designed in a 130 nm CMOS technology, and the maximum clock frequency reaches 160 MHz. The maximum throughput is about 104 Mbps at 5.5 iterations for Turbo codes and 136 Mbps at 10 iterations for LDPC codes. Compared with other existing structures, the design has significant advantages in speed and area.

  • Comparative Study of Head-Disk Spacing Measurement Techniques between Optical Method and Various In-Situ Methods

    Sheng-Bin HU  Zhi-Min YUAN  Wei ZHANG  Bo LIU  Lei WAN  Rui XIAN  

     
    PAPER

      Vol:
    E85-C No:10
      Page(s):
    1784-1788

    The interaction between the slider, the lubricant and the disk surface is becoming the most crucial robustness concern of advanced data storage systems. This paper reports comparative studies among various techniques for the measurement of head-disk spacing. It is noticed that, compared with the PW50 method, the triple harmonic method gives a reading much closer to the head-disk spacing obtained optically in the on-track center case. Specially prepared disks with different carbon overcoat thicknesses (6.5 nm, 11 nm, 16 nm and 22 nm) were also used to study the reliability and repeatability of the triple harmonic method.

  • A New Method for Solving the Permutation Problem of Frequency-Domain Blind Source Separation

    Xuebin HU  Hidefumi KOBATAKE  

     
    PAPER-Engineering Acoustics

      Vol:
    E88-A No:6
      Page(s):
    1543-1548

    Frequency-domain blind source separation has the great advantage that the complicated convolution in the time domain becomes a set of efficient multiplications in the frequency domain. However, the inherent permutation ambiguity of ICA becomes an important problem: the separated signals at different frequencies may be permuted in order, and mapping the separated signal at each frequency to a target source remains difficult. In this paper, we first discuss the method based on inter-frequency correlation, and then propose a new method that uses the continuity in power between adjacent frequency components of the same source. The proposed method also implicitly exploits inter-frequency correlation and therefore performs better than the previous method.
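    As an illustration of the alignment idea described in this abstract (a minimal sketch, not the authors' code; the proposed method uses power continuity between adjacent frequencies, and here a correlation of adjacent-bin power envelopes serves as a stand-in), a permutation-alignment step might look like the following:

    import itertools
    import numpy as np

    def align_permutations(Y):
        """Y: array of shape (n_freq, n_src, n_frames) holding the separated
        STFT-domain signals. Reorders the source index in each frequency bin
        so that its power envelopes best match those of the previous bin."""
        n_freq, n_src, _ = Y.shape
        Y = Y.copy()
        prev_env = np.abs(Y[0]) ** 2                      # reference envelopes
        for f in range(1, n_freq):
            env = np.abs(Y[f]) ** 2
            best_perm, best_score = None, -np.inf
            for perm in itertools.permutations(range(n_src)):
                # sum of correlation coefficients between matched power envelopes
                score = sum(np.corrcoef(prev_env[i], env[p])[0, 1]
                            for i, p in enumerate(perm))
                if score > best_score:
                    best_score, best_perm = score, perm
            Y[f] = Y[f][list(best_perm)]                  # apply the best permutation
            prev_env = np.abs(Y[f]) ** 2
        return Y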

  • Robust Adaptive Beamforming Based on the Effective Steering Vector Estimation and Covariance Matrix Reconstruction against Sensor Gain-Phase Errors

    Di YAO  Xin ZHANG  Bin HU  Xiaochuan WU  

     
    LETTER-Digital Signal Processing

      Publicized:
    2020/06/04
      Vol:
    E103-A No:12
      Page(s):
    1655-1658

    A robust adaptive beamforming algorithm is proposed based on precise interference-plus-noise covariance matrix reconstruction and steering vector estimation of the desired signal, even in the presence of large gain-phase errors. Firstly, the model of the array mismatches is formulated with a first-order Taylor series expansion. Then, an iterative method is designed to jointly estimate the calibration coefficients and the steering vectors of the desired signal and the interferences. Next, the powers of the interferences and the noise are estimated by solving a quadratic optimization problem with a derived closed-form solution. Finally, the actual interference-plus-noise covariance matrix is reconstructed as a weighted sum of the steering vectors and the corresponding powers. Simulation results demonstrate the effectiveness and superiority of the proposed method.
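    For a concrete picture of the final reconstruction step, the following is a minimal sketch (assumed names, not the paper's algorithm) of rebuilding the interference-plus-noise covariance matrix as a weighted sum of steering vectors and forming a standard MVDR weight from it, assuming the steering vectors and powers have already been estimated as described above:

    import numpy as np

    def reconstruct_inc_matrix(steering_vecs, powers, noise_power, n_sensors):
        """steering_vecs: list of (n_sensors,) interference steering vectors;
        powers: their estimated powers; noise_power: estimated noise power."""
        R = noise_power * np.eye(n_sensors, dtype=complex)
        for a, p in zip(steering_vecs, powers):
            a = a.reshape(-1, 1)
            R += p * (a @ a.conj().T)          # one rank-one term per interference
        return R

    def mvdr_weights(R_in, a_des):
        """Standard MVDR/Capon weight using the reconstructed covariance."""
        a_des = a_des.reshape(-1, 1)
        Ri_a = np.linalg.solve(R_in, a_des)
        return (Ri_a / (a_des.conj().T @ Ri_a)).ravel()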

  • A More Accurate Analysis of Interference for Rake Combining on DS-CDMA Forward Link in Mobile Radio

    Kaibin HUANG  Fumiyuki ADACHI  Yong Huat CHEW  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E88-B No:2
      Page(s):
    654-663

    In this paper, we improve the performance analysis of the Rake receiver for the DS-CDMA forward link using long random spreading sequences (RSS's) by more accurately evaluating the correlation between the various interference terms. We also extend the analysis to the case of short (periodic) RSS. The accuracy of the expressions obtained in our analysis is verified by computer simulation. We show that for a given normalized spreading factor, the bit error rate (BER) performance of the Rake receiver is the same for BPSK and QPSK data modulation. We also show that when the channel delay spread is smaller than a data symbol duration, the CDMA receiver has similar BER performance for long and short RSS's. However, for large delay spread, the employment of short RSS's may result in severe performance degradation.
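    As background for the receiver being analyzed (an illustrative sketch only, not part of the paper's analysis), maximal-ratio Rake combining of the despread finger outputs can be written as:

    import numpy as np

    def rake_mrc(finger_outputs, channel_gains):
        """finger_outputs: (n_fingers, n_symbols) despread outputs, one row per
        resolvable path; channel_gains: (n_fingers,) complex path gains,
        assumed known. Returns the MRC-combined decision variable per symbol."""
        weights = channel_gains.conj()           # maximal-ratio combining weights
        return weights @ finger_outputs          # coherent combining across fingers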

  • Micro Recording Performance Fluctuation and Magnetic Roughness Analysis: Methodology and Application

    Bo LIU  Wei ZHANG  Sheng-Bin HU  

     
    PAPER

      Vol:
    E83-C No:9
      Page(s):
    1530-1538

    As technology moves at an annual areal density growth rate of 80-120% and channel density moves beyond 3, the micro-fluctuation of media recording performance and the homogeneity of the media's recording capability become serious reliability concerns for future high-density magnetic recording systems. Two concepts are proposed in this work for the characterization of micro recording performance fluctuation at high bit and channel densities: recording performance roughness analysis and dynamic magnetic roughness analysis. The recording performance roughness analysis is based on an in-situ measurement technique for the non-linear transition shift (NLTS). The relationship between the performance roughness and the roughness of the dynamic magnetic parameters is studied. Experimental results indicate that the NLTS-based performance roughness analysis can reveal more details of the media's recording capability and its fluctuation, that is, the macro and micro fluctuation of recording performance. The dynamic magnetic roughness analysis is based on read/write operations and can be used to characterize the macro and micro fluctuation of the media's dynamic magnetic properties. The parameters used for the analysis include the media's dynamic coercivity and dynamic coercive squareness; here, "dynamic" refers to performance measured at MHz frequencies. The authors also noticed in their technology development process that further methodology development and confirmation are necessary for the analysis of media's dynamic performance. Therefore, the work also extends to the accuracy analysis of playback-amplitude-based methods for the analysis of the dynamic coercive squareness and the dynamic hysteresis loop. A method with a smaller testing error is identified and reported in this work.

  • ICA Mixture Analysis of Four-Phase Abdominal CT Images

    Xuebin HU  Akinobu SHIMIZU  Hidefumi KOBATAKE  Shigeru NAWANO  

     
    LETTER-Biological Engineering

      Vol:
    E87-D No:11
      Page(s):
    2521-2525

    This paper presents a new analysis of two-dimensional four-phase abdominal CT images using a variational Bayesian mixture of ICA. The four-phase CT images are assumed to be composed of several mutually exclusive areas, each generated by a set of corresponding independent components. The ICA mixture analysis shows that the CT images can be divided into a set of clinically and anatomically meaningful components. Initial analysis of the independent components shows promising prospects for medical image processing and computer-aided diagnosis.

  • A “Group Marching Cube” (GMC) Algorithm for Speeding up the Marching Cube Algorithm

    Lih-Shyang CHEN  Young-Jinn LAY  Je-Bin HUANG  Yan-De CHEN  Ku-Yaw CHANG  Shao-Jer CHEN  

     
    PAPER-Computer Graphics

      Vol:
    E94-D No:6
      Page(s):
    1289-1298

    Although the Marching Cube (MC) algorithm is very popular for displaying images of voxel-based objects, its slow surface extraction process is usually considered one of its major disadvantages. It has been pointed out that, for the original MC algorithm, vertex calculations can be limited to once per vertex to speed up the surface extraction process; however, how this can be done efficiently was not described, and the reuse of these MC vertices has not been looked into seriously in the literature. In this paper, we propose a “Group Marching Cube” (GMC) algorithm to reduce the time needed for the vertex identification process, which is part of the surface extraction process. Since most triangle vertices of an iso-surface are shared by many MC triangles, the vertex identification process can avoid duplicating vertices in the vertex array of the resulting triangle data. This identification is usually done through a hash table mechanism proposed in the literature and used by many software systems. Our proposed GMC algorithm instead considers a group of voxels simultaneously when applying the MC algorithm, exploring features of the original MC algorithm that have not been discussed in the literature. Based on our experiments, for an object with more than 1 million vertices, the GMC algorithm is 3 to more than 10 times faster than the algorithm using a hash table. Another significant advantage of GMC is its compatibility with other algorithms that accelerate the MC algorithm, so that the overall performance of the original MC algorithm can be improved even further.
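    For context, the hash-table style of vertex identification that GMC is compared against can be sketched as follows (a simplified baseline sketch with assumed names, not the GMC algorithm itself, which processes a group of voxels at a time instead of performing per-edge hash lookups). Each iso-surface vertex lies on a voxel edge, so the edge's two grid points serve as the key and the interpolated vertex is computed only once:

    import numpy as np

    def get_vertex_index(edge, values, iso, vertex_table, vertices):
        """edge: pair of integer grid points ((x0,y0,z0), (x1,y1,z1));
        values: the scalar field values at those two points; iso: iso-value;
        vertex_table: dict mapping edge keys to vertex indices; vertices: list."""
        key = tuple(sorted(edge))                 # same key for either edge orientation
        if key in vertex_table:                   # vertex already created: reuse it
            return vertex_table[key]
        p0, p1 = np.asarray(edge[0], float), np.asarray(edge[1], float)
        v0, v1 = values
        t = (iso - v0) / (v1 - v0)                # linear interpolation along the edge
        vertices.append(p0 + t * (p1 - p0))
        vertex_table[key] = len(vertices) - 1
        return vertex_table[key]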

  • An Area-Efficient Reconfigurable LDPC Decoder with Conflict Resolution

    Changsheng ZHOU  Yuebin HUANG  Shuangqu HUANG  Yun CHEN  Xiaoyang ZENG  

     
    PAPER

      Vol:
    E95-C No:4
      Page(s):
    478-486

    Based on the Turbo-Decoding Message-Passing (TDMP) and Normalized Min-Sum (NMS) algorithms, an area-efficient LDPC decoder that supports both structured and unstructured LDPC codes is proposed in this paper. We introduce a solution to the memory access conflict problem caused by the TDMP algorithm, and carefully arrange the main timing schedule to handle the operations of this solution while avoiding much additional hardware. To reduce the number of memory bits needed, the extrinsic-message storage strategy is also optimized, and the extrinsic-message recovery and accumulation operations are merged. To verify our architecture, an LDPC decoder that supports both the China Multimedia Mobile Broadcasting (CMMB) and Digital Terrestrial/Television Multimedia Broadcasting (DTMB) standards was developed in a SMIC 0.13 µm standard CMOS process. The core area is 4.75 mm2 and the maximum operating clock frequency is 200 MHz. The estimated power consumption is 48.4 mW at 25 MHz for CMMB and 130.9 mW at 50 MHz for DTMB, with 5 iterations and a 1.2 V supply.
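    As a reminder of the underlying algorithm (an algorithmic sketch only; it reflects neither the TDMP scheduling nor the hardware architecture of this paper), the Normalized Min-Sum check-node update can be written as:

    import numpy as np

    def nms_check_node(llr_in, alpha=0.75):
        """llr_in: variable-to-check LLRs entering one check node (1-D array).
        Returns the check-to-variable messages, scaled by the normalization
        factor alpha."""
        llr_in = np.asarray(llr_in, dtype=float)
        signs = np.sign(llr_in)
        signs[signs == 0] = 1.0
        mags = np.abs(llr_in)
        order = np.argsort(mags)
        min1, min2 = mags[order[0]], mags[order[1]]    # two smallest magnitudes
        total_sign = np.prod(signs)
        out = np.empty_like(llr_in)
        for i in range(len(llr_in)):
            # exclude the target edge: use the second minimum where the first lives
            mag = min2 if i == order[0] else min1
            out[i] = alpha * total_sign * signs[i] * mag
        return out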

  • VLSI Implementation of a Modified Efficient SPIHT Encoder

    Win-Bin HUANG  Alvin W. Y. SU  Yau-Hwang KUO  

     
    PAPER-VLSI Architecture

      Vol:
    E89-A No:12
      Page(s):
    3613-3622

    Set Partitioning in Hierarchical Trees (SPIHT) is a highly efficient technique for compressing Discrete Wavelet Transform (DWT) decomposed images. Though its compression efficiency is slightly lower than that of Embedded Block Coding with Optimized Truncation (EBCOT), adopted by JPEG2000, SPIHT has a straightforward coding procedure and requires no tables, which makes it a more appropriate algorithm for lower-cost hardware implementation. In this paper, a modified SPIHT algorithm is presented. The modifications include a simplified coefficient scanning process, a 1-D addressing method instead of the original 2-D arrangement of wavelet coefficients, and a fixed memory allocation for the data lists instead of the dynamic allocation required in the original SPIHT. Although the distortion is slightly increased, these changes enable extremely fast throughput and easier hardware implementation. The VLSI implementation demonstrates that the proposed design can encode a CIF (352×288) 4:2:0 image sequence at no less than 30 frames per second at a 100 MHz working frequency.
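    The abstract does not specify the exact 1-D addressing scheme; as one plausible illustration (an assumption, not the paper's method), Morton/Z-order indexing turns SPIHT's 2-D parent-child relations into simple arithmetic on a single linear index:

    def morton_index(row, col):
        """Interleave the bits of (row, col) into a single Z-order index.
        Hypothetical mapping used only for illustration."""
        idx, bit = 0, 0
        while row or col:
            idx |= (col & 1) << (2 * bit)        # even bit positions: column bits
            idx |= (row & 1) << (2 * bit + 1)    # odd bit positions: row bits
            row >>= 1
            col >>= 1
            bit += 1
        return idx

    def children(idx):
        """The four SPIHT children (2r,2c)..(2r+1,2c+1) become 4*idx .. 4*idx+3."""
        return [4 * idx + k for k in range(4)]

    def parent(idx):
        """The SPIHT parent (r//2, c//2) is simply idx // 4."""
        return idx >> 2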

  • Adaptive Beamforming Based on Compressed Sensing with Gain/Phase Uncertainties

    Bin HU  Xiaochuan WU  Xin ZHANG  Qiang YANG  Di YAO  Weibo DENG  

     
    LETTER-Digital Signal Processing

      Vol:
    E101-A No:8
      Page(s):
    1257-1262

    A new adaptive digital beamforming method based on compressed sensing (CS) is presented for sparse receiving arrays with gain/phase uncertainties. Because the arriving signals are sparse, CS theory can be adopted to sample and recover the received signals from less data. However, due to the gain/phase uncertainties, the sparse representation of the signal is not optimal. To eliminate the influence of the gain/phase uncertainties on the sparse representation, most existing studies focus on calibrating the gain/phase uncertainties first. In this paper, to overcome the effect of the gain/phase uncertainties, a new dictionary optimization method based on the total least squares (TLS) algorithm is proposed. We transform the array signal receiving model with gain/phase uncertainties into an errors-in-variables (EIV) model, treating the effect of the gain/phase uncertainties as an additive error matrix. The proposed method reconstructs the data by estimating the sparse coefficients with a CS signal reconstruction algorithm and using the TLS method to update the error matrix containing the gain/phase uncertainties. Simulation results show that the sparse regularized total least squares algorithm recovers the received signals better under gain/phase uncertainties. Adaptive digital beamforming algorithms are then applied to the recovered data to form the antenna beam.
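    To make the alternating idea concrete (a loose illustrative sketch with assumed names and a basic OMP recovery step, not the paper's sparse regularized TLS algorithm), one could alternate between sparse recovery over the nominal dictionary plus a current error-matrix estimate and a least-squares refit of that additive error matrix:

    import numpy as np

    def omp(A, y, k):
        """Basic orthogonal matching pursuit: recover a k-sparse x with y ≈ A x."""
        residual, support = y.copy(), []
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(k):
            j = int(np.argmax(np.abs(A.conj().T @ residual)))
            if j not in support:
                support.append(j)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x[support] = x_s
        return x

    def recover_with_dictionary_error(A0, y, k, n_iter=5):
        """A0: nominal steering dictionary; y: received snapshot; k: assumed sparsity."""
        A0 = np.asarray(A0, dtype=complex)
        E = np.zeros_like(A0)                        # additive error-matrix estimate
        for _ in range(n_iter):
            x = omp(A0 + E, y, k)                    # sparse coefficients on current dictionary
            s = np.flatnonzero(x)                    # current support
            # refit the error on the active columns in a least-squares sense:
            # (A0[:,s] + E[:,s]) x[s] ≈ y  =>  E[:,s] x[s] ≈ y - A0[:,s] x[s]
            r = y - A0[:, s] @ x[s]
            denom = np.vdot(x[s], x[s]).real
            if denom > 0:
                E[:, s] = np.outer(r, x[s].conj()) / denom
        return x, E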