IEICE TRANSACTIONS on Fundamentals

  • Impact Factor

    0.48

  • Eigenfactor

    0.003

  • Article Influence

    0.1

  • CiteScore

    1.1

Volume E94-A No.2 (Publication Date: 2011/02/01)

  • Special Section on Image Media Quality
  • FOREWORD

    Mitsuho YAMADA  

     
    FOREWORD

      Page(s):
    471-472
  • Decoding Color Responses in Human Visual Cortex

    Ichiro KURIKI  Shingo NAKAMURA  Pei SUN  Kenichi UENO  Kazumichi MATSUMIYA  Keiji TANAKA  Satoshi SHIOIRI  Kang CHENG  

     
    INVITED PAPER

      Page(s):
    473-479

    Color perception is a subjective experience and, in general, it is impossible for other people to know which color someone is perceiving. The present study demonstrates that a simple image-classification analysis of brain activity obtained with functional magnetic resonance imaging (fMRI) makes it possible to tell which of four colors a subject is looking at. Our results also imply that color information is coded by the responses of hue-selective neurons in the human brain, rather than by combinations of red-green and blue-yellow hue components.

  • Color Shrinkage for Color-Image Sparse Coding and Its Applications

    Takahiro SAITO  Yasutaka UEDA  Takashi KOMATSU  

     
    INVITED PAPER

      Page(s):
    480-492

    As a basic tool for deriving sparse representation of a color image from its atomic-decomposition with a redundant dictionary, the authors have recently proposed a new kind of shrinkage technique, viz. color shrinkage, which utilizes inter-channel color dependence directly in the three primary color space. Among various schemes of color shrinkage, this paper particularly presents the soft color-shrinkage and the hard color-shrinkage, natural extensions of the classic soft-shrinkage and the classic hard-shrinkage respectively, and shows their advantages over the existing shrinkage approaches where the classic shrinkage techniques are applied after a color transformation such as the opponent color transformation. Moreover, this paper presents the applications of our color-shrinkage schemes to color-image processing in the redundant tight-frame transform domain, and shows their superiority over the existing shrinkage approaches.
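
    For reference, the classic per-channel soft and hard shrinkage operators that the color-shrinkage schemes extend can be written very compactly. The sketch below (Python/NumPy) is a generic illustration of those baselines, not the authors' color-shrinkage operators; the threshold and the toy coefficient array are arbitrary.

      import numpy as np

      def soft_shrink(x, t):
          # Classic soft shrinkage: move magnitudes toward zero by threshold t.
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def hard_shrink(x, t):
          # Classic hard shrinkage: zero out coefficients with magnitude below t.
          return np.where(np.abs(x) > t, x, 0.0)

      # Example: shrink redundant-transform coefficients channel by channel.
      coeffs = np.random.randn(4, 4, 3)          # toy R, G, B coefficient planes
      denoised_soft = soft_shrink(coeffs, 0.5)   # threshold 0.5 is arbitrary
      denoised_hard = hard_shrink(coeffs, 0.5)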

  • Psychological Effects of Ambient Illumination Control and Illumination Layout While Viewing Various Video Images

    Takuya IWANAMI  Ayano KIKUCHI  Keita HIRAI  Toshiya NAKAGUCHI  Norimichi TSUMURA  Yoichi MIYAKE  

     
    PAPER-Vision

      Page(s):
    493-499

    Recently, enhancing the visual experience of the user has become a new trend for TV displays. This trend stems from the fact that changes in ambient illumination while viewing a liquid crystal display (LCD) significantly affect human impressions. However, the psychological effects caused by the combination of the displayed video image and the ambient illumination have not been investigated. In the present research, we clarify the relationship between ambient illumination and psychological effects while viewing video images displayed on an LCD by using a questionnaire-based semantic differential (SD) method and factor analysis. Six kinds of video images were displayed under illumination conditions with different colors and layouts and were rated by 15 observers. The analysis shows that, under illumination control around the LCD linked to the displayed video image, the feelings of 'activity' and 'evaluating' were rated higher than under a fluorescent ceiling light. In particular, simultaneous illumination control around the display and at the ceiling enhanced the feelings of 'activity' and 'evaluating' while maintaining 'comfort.' Moreover, the feeling of 'activity' under illumination control around the LCD and at the ceiling while viewing a music video was rated clearly higher than that with a natural-scene video.

  • Interactive Support System for Image Quality Enhancement Focused on Lightness, Color and Sharpness

    Kazune AOIKE  Gosuke OHASHI  Yuichiro TOKUDA  Yoshifumi SHIMODAIRA  

     
    PAPER-Evaluation

      Page(s):
    500-508

    An interactive support system for image quality enhancement is developed that adjusts display equipment according to the user's own subjective preference. The system derives parameters reflecting the user's preference simply from the user's selection of preferred images, without requiring direct adjustment of image-quality parameters. In this kind of interactive support system, the more parameters there are, the more effective the system becomes. In this paper, lightness, color, and sharpness are used as the image-quality parameters, and the images are enhanced by increasing the number of parameters. The shape of the tone curve is controlled by two adjustment parameters for lightness enhancement. Images are enhanced using two adjustment parameters for color enhancement, which are controlled in the L*a*b* color space. The degree and coarseness of sharpness enhancement are adjusted by controlling the radius of the smoothing-filter mask and the weight of the addition. To confirm the effectiveness of the proposed method, its image quality and derivation time are compared with those of a manual adjustment method.
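
    As a rough illustration of a tone curve governed by two adjustment parameters, the sketch below uses a hypothetical gamma-plus-contrast curve, not the authors' actual curve; the parameter names and default values are assumptions.

      import numpy as np

      def tone_curve(x, gamma=0.8, contrast=1.2):
          # Hypothetical two-parameter tone curve: gamma lifts or darkens the
          # overall lightness, contrast scales the slope around mid-gray.
          y = x ** gamma
          return np.clip(0.5 + contrast * (y - 0.5), 0.0, 1.0)

      pixels = np.linspace(0.0, 1.0, 5)   # normalized input intensities
      print(tone_curve(pixels))           # adjusted output intensities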

  • An Image Quality Assessment Model Based on the MPEG-7 Descriptor

    Masaharu SATO  Yuukou HORITA  

     
    PAPER-Evaluation

      Page(s):
    509-518

    Our research focuses on an image quality assessment model based on the MPEG-7 descriptor and the no-reference model. The model retrieves a reference image using image search and evaluates its subjective score as a pseudo reduced-reference model. The MPEG-7 descriptor was originally intended for content retrieval, but we found that it can also be used for image quality assessment. We examined the performance of the proposed model, and the results reveal that this method achieves a higher performance rating than SSIM.

  • Non-reference Quality Estimation for Temporal Degradation of Coded Picture

    Kenji SUGIYAMA  Naoya SAGARA  Ryo OKAWA  

     
    PAPER-Evaluation

      Page(s):
    519-524

    The non-reference method is widely useful for picture quality estimation on the decoder side. In other work, we discussed pure non-reference estimation using only the decoded picture and proposed quantitative estimation methods for mosquito noise and block artifacts. In this paper, we discuss an estimation method for degradation in the temporal domain. In the proposed method, motion-compensated inter-picture differences and motion vector activity are the basic parameters of temporal degradation. To obtain these parameters, accurate but unstable motion estimation is used with a 1/16 reduction of processing power. In a stable original picture, the parameter values are similar across pictures, but temporal degradation caused by coding increases them. For intra-coded pictures, the values increase significantly, whereas for inter-coded pictures they stay the same or decrease. Therefore, by taking the ratio between the peak frame and the other frames, the absolute value of the temporal degradation can be estimated; in this case, the peak frame may be intra-coded. Finally, we evaluate the proposed method using pictures coded with different quantization.

  • High Contrast HDR Video Tone Mapping Based on Gamma Curves

    Takao JINNO  Kazuya MOURI  Masahiro OKUDA  

     
    PAPER-Processing

      Page(s):
    525-532

    In this paper we propose a new tone mapping method for HDR video. Two types of gamma tone mapping are blended to preserve local contrast in the entire range of luminance. Our method achieves high quality tone mapping especially for the HDR video that has a nonlinear response to scene radiance. Additionally, we apply it to an object-aware tone mapping method for camera surveillance. This method achieves high visibility of target objects in the tone mapped HDR video. We examine the validity of our methods through simulation and comparison with conventional work.
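
    The core idea of blending two gamma mappings can be illustrated as follows. This is a generic sketch, not the authors' formulation; the gamma exponents and the luminance-based blending weight are assumptions.

      import numpy as np

      def gamma_blend_tonemap(hdr_lum, g_low=0.3, g_high=0.7):
          # Blend two gamma-mapped versions of an HDR luminance map: g_low
          # brightens shadows, g_high preserves highlights. The blending weight
          # here is simply the normalized input luminance (an assumption).
          x = hdr_lum / hdr_lum.max()
          dark = x ** g_low
          bright = x ** g_high
          w = x
          return (1.0 - w) * dark + w * bright   # LDR output in [0, 1]

      # Example with a synthetic HDR luminance frame.
      frame = np.random.rand(480, 640) * 1e4 + 1.0
      ldr = gamma_blend_tonemap(frame)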

  • Block-Based Bag of Words for Robust Face Recognition under Variant Conditions of Facial Expression, Illumination, and Partial Occlusion

    Zisheng LI  Jun-ichi IMAI  Masahide KANEKO  

     
    PAPER-Processing

      Page(s):
    533-541

    In many real-world face recognition applications, only one training image per person may be available. Moreover, the test images may vary in facial expression and illumination, or may be partially occluded. However, most classical face recognition techniques assume that multiple images per person are available for training, and they have difficulty dealing with extreme expressions, illuminations, and occlusions. This paper proposes a novel block-based bag of words (BBoW) method to solve those problems. In our approach, a face image is partitioned into multiple blocks; dense SIFT features are then calculated and vector-quantized into visual words on each block. Finally, the histograms of codeword distribution on the local blocks are concatenated to represent the face image. Our method captures local features on each block while maintaining the holistic spatial information of the different facial components. Without any illumination compensation or image alignment, the proposed method achieves excellent face recognition results on the AR and XM2VTS databases. Experimental results show that, using only one neutral-expression frame per person for training, our method obtains the best performance reported so far on AR face images with extreme expressions, varying illuminations, and partial occlusions. We also test our method on the standard and darkened sets of the XM2VTS database and achieve average rates of 100% and 96.10%, respectively.
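
    The block-based bag-of-words pipeline summarized above (partition into blocks, dense SIFT per block, vector quantization into visual words, concatenated per-block histograms) can be sketched generically as follows. This is an illustration, not the authors' implementation; the grid step, block grid, codebook size, and image file names are placeholders.

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      def dense_sift(gray, step=8, size=8):
          # Compute SIFT descriptors on a regular grid (dense SIFT).
          sift = cv2.SIFT_create()
          kps = [cv2.KeyPoint(float(x), float(y), float(size))
                 for y in range(step // 2, gray.shape[0], step)
                 for x in range(step // 2, gray.shape[1], step)]
          _, desc = sift.compute(gray, kps)
          return desc                      # shape: (num_keypoints, 128)

      def bbow_descriptor(gray, codebook, blocks=(4, 4)):
          # Concatenate visual-word histograms computed on each local block.
          h, w = gray.shape
          bh, bw = h // blocks[0], w // blocks[1]
          hists = []
          for i in range(blocks[0]):
              for j in range(blocks[1]):
                  patch = gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                  words = codebook.predict(dense_sift(patch).astype(np.float64))
                  hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
                  hists.append(hist / max(hist.sum(), 1))   # normalized block histogram
          return np.concatenate(hists)

      # Codebook learned from training descriptors (64 visual words assumed).
      train = cv2.imread("train_face.png", cv2.IMREAD_GRAYSCALE)
      codebook = KMeans(n_clusters=64, n_init=10).fit(dense_sift(train).astype(np.float64))
      test = cv2.imread("test_face.png", cv2.IMREAD_GRAYSCALE)
      feature = bbow_descriptor(test, codebook)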

  • Accuracy of Smooth Pursuit Eye Movement and Perception Rate of a False Contour While Pursuing a Rapidly Moving Image

    Yusuke HORIE  Yuta KAWAMURA  Akiyuki SEITA  Mitsuho YAMADA  

     
    LETTER-Vision

      Page(s):
    542-547

    The purpose of this study was to clarify whether viewers can perceive digital deterioration while pursuing a rapidly moving, digitally compressed image. Among the various types of digital deterioration, we studied the perception characteristics of false contours on four types of displays, i.e., CRT, PDP, EL, and LCD, using the gradation level and the speed of the moving image as parameters. It is known that 8 bits is not a high enough gradation resolution for still images, and it can be assumed that 8 bits is also insufficient for an image moving at less than 5 deg/sec, since the tracking accuracy of smooth pursuit eye movement (SPEM) is very high for a target moving at less than 5 deg/sec. Given these facts, we focused on images moving at more than 5 deg/sec. In our results, images deteriorated by a false contour at gradation levels below 32 were perceived by every subject at almost all velocities, from 5 deg/sec to 30 deg/sec, for all four types of displays. However, the perception rate decreased drastically when the gradation level reached 64, with almost no subjects detecting deterioration at gradation levels above 64 at any velocity. Compared with the other displays, LCDs yielded relatively high recognition rates at a gradation level of 64, especially at lower velocities.

  • Linearity Improvement of Mosquito Noise Level Estimation from Decoded Picture

    Naoya SAGARA  Yousuke KASHIMURA  Kenji SUGIYAMA  

     
    LETTER-Evaluation

      Page(s):
    548-551

    DCT encoding of images leads to block artifacts and mosquito noise in the decoded pictures. We have proposed an estimation method to determine the mosquito-noise blocks and their level; however, this technique lacks sufficient linearity. To improve its performance, we use sub-divided blocks to suppress edge effects. The resulting estimates are largely linear with respect to the quantization.

  • Image Quality Enhancement for Single-Image Super Resolution Based on Local Similarities and Support Vector Regression

    Atsushi YAGUCHI  Tadaaki HOSAKA  Takayuki HAMAMOTO  

     
    LETTER-Processing

      Page(s):
    552-554

    In reconstruction-based super resolution, a high-resolution image is estimated using multiple low-resolution images with sub-pixel misalignments. Therefore, when only one low-resolution image is available, it is generally difficult to obtain a favorable image. This letter proposes a method for overcoming this difficulty in single-image super resolution. In our method, pixel values at sub-pixel locations are first interpolated on a patch-by-patch basis by support vector regression, with learning samples collected within the given image based on local similarities; we then solve the regularized reconstruction problem with a sufficient number of constraints. Evaluation experiments were performed on artificial and natural images, and the obtained high-resolution images restore high-frequency components favorably along with improved PSNRs.

  • Special Section on Analog Circuit Techniques and Related Topics
  • FOREWORD

    Yasuyuki MATSUYA  

     
    FOREWORD

      Page(s):
    555-555
  • RF CMOS Integrated Circuit: History, Current Status and Future Prospects

    Noboru ISHIHARA  Shuhei AMAKAWA  Kazuya MASU  

     
    INVITED PAPER

      Page(s):
    556-567

    As great advances have been made in CMOS process technology over the past 20 years, RF CMOS circuits operating in the microwave band have rapidly developed from the level of component circuits to that of multiband/multimode transceivers. In the next ten years, it is highly likely that the following will be realized: (i) versatile transceivers such as those used in software-defined radios (SDR), cognitive radios (CR), and reconfigurable radios (RR); (ii) systems that operate in the millimeter-wave or terahertz-wave region and achieve high-speed, large-capacity data transmission; and (iii) microminiaturized low-power RF communication systems that will be used extensively in everyday life. However, classical analog RF circuit design techniques cannot be used to design circuits for these devices, since they apply only to continuous-voltage, continuous-time signals; it is therefore necessary to integrate high-speed digital circuit design, which is based on discrete voltages and discrete time, with analog design in order to achieve wideband operation and to compensate for signal distortions as well as variations in process, supply voltage, and temperature. Moreover, since compact integration of the antenna and the interface circuit is considered indispensable for miniaturized RF communication systems, the construction of an integrated design environment that also covers heterogeneous devices such as micro-electro-mechanical systems (MEMS) becomes increasingly important. In this paper, the history and current status of RF CMOS circuit development are reviewed, and the future status of RF CMOS circuits is predicted.

  • A 1.2-3.2 GHz CMOS VCO IC Utilizing Transformer-Based Variable Inductors and AMOS Varactors

    Qing LIU  Yusuke TAKIGAWA  Satoshi KURACHI  Nobuyuki ITOH  Toshihiko YOSHIMASU  

     
    PAPER

      Page(s):
    568-573

    A novel resonant circuit consisting of transformer-based switched variable inductors and switched accumulation-MOS (AMOS) varactors is proposed to realize an ultra-wide tuning range voltage-controlled oscillator (VCO). The VCO IC is designed and fabricated in 0.11 µm CMOS technology and fully evaluated on-wafer. The VCO exhibits a frequency tuning range as wide as 92.6%, spanning from 1.20 GHz to 3.27 GHz at an operating voltage of 1.5 V. A measured phase noise of -120 dBc/Hz at a 1 MHz offset from the 3.1 GHz carrier is obtained.
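
    For reference, the quoted 92.6% figure is consistent with the usual definition of fractional tuning range relative to the center frequency: 2(f_max - f_min)/(f_max + f_min) = 2(3.27 - 1.20)/(3.27 + 1.20) ≈ 0.926, i.e., about 92.6% (assuming that definition is the one used).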

  • A Time-Variant Analysis of Phase Noise in Series Quadrature Oscillators

    Jinhua LIU  Guican CHEN  Hong ZHANG  

     
    PAPER

      Page(s):
    574-582

    This paper presents a systematic analysis of the phase noise performance of the series quadrature oscillator (QOSC) using the time-variant impulse sensitivity function (ISF) model. The effective ISF for each noise source in the oscillator is derived mathematically. From these effective ISFs, an explicit closed-form expression for the phase noise due to the total thermal noise in the series QOSC is derived, and the phase noise contribution from the flicker noise in the regenerative and coupling transistors is also obtained. The phase noise contributions from the thermal noise and the flicker noise are verified by SpectreRF simulations.

  • Active Q Factor Analysis for Non-uniform Microstrip Stub Colpitts FET Oscillators

    Tuya WUREN  Takashi OHIRA  

     
    PAPER

      Page(s):
    583-591

    This paper presents a Q-factor analysis for FET oscillators employing distributed-constant elements. We replace the inductor of a lumped-constant Colpitts circuit with a shorted microstrip transmission line for high-frequency applications. Taking into account the FET's transconductance and the transmission-line loss due to both the conducting metal and the dielectric substrate, we derive the Q-factor formula for the entire circuit in the steady oscillation state. We compare the computed results for an oscillator employing a uniform shorted microstrip line with those of the original LC oscillator. To obtain an even higher Q factor, we modify the transmission line into non-uniform shapes, i.e., step, tapered, and partially tapered stubs. The non-uniformity introduces some complexity into the impedance analysis. We exploit a piecewise-uniform approximation for the tapered part of the microstrip stub, and then incorporate the asymptotic expressions for the stub's impedance and its frequency derivative into the active Q-factor formula. Applying these formulations, we calculate the capacitance value for tuning, the required FET transconductance, and the achievable active Q factor, and finally explore the oscillator performance with microstrip stubs of different shapes and sizes.

  • A Low-Noise and Highly-Linear Transmitter with Envelope Injection Pre-Power Amplifier for Multi-Mode Radio

    Shouhei KOUSAI  Daisuke MIYASHITA  Junji WADATSUMI  Rui ITO  Takahiro SEKIGUCHI  Mototsugu HAMADA  Kenichi OKADA  

     
    PAPER

      Page(s):
    592-602

    A wideband, low-noise, and highly linear transmitter for multi-mode radio is presented. An envelope injection scheme with a CMOS amplifier is developed to obtain sufficient linearity for complex modulation schemes such as OFDM, and to achieve low noise for concurrent operation of more than one standard. An active matching technique with a doubly terminated LPF topology is also presented to realize wide bandwidth and low power consumption, and to eliminate off-chip components without increasing die area. A multi-mode transmitter is implemented in a 0.13 µm CMOS technology with an active area of 1.13 mm². The third-order intermodulation product is improved by 17 dB at -3 dBm output by the envelope injection scheme. The transmitter achieves an EVM of less than -29.5 dB at -3 dBm output from 0.2 to 7.2 GHz while consuming only 69 mW. The transmitter is also tested with multiple standards, namely UMTS, 802.11b, WiMAX, 802.11a, and 802.11n, and satisfies the EVM, ACLR, and spectrum specifications.

  • A Broadband High Suppression Frequency Doubler IC for Sub-Millimeter-Wave UWB Applications

    Jiangtao SUN  Qing LIU  Yong-Ju SUH  Takayuki SHIBATA  Toshihiko YOSHIMASU  

     
    PAPER

      Page(s):
    603-610

    A broadband balanced frequency doubler operating from 22 GHz to 30 GHz has been demonstrated in a 0.25-µm SOI SiGe BiCMOS technology. A measured fundamental-frequency suppression of greater than 30 dBc is achieved by an internal low-pass LC filter. In addition, a pair of matching circuits in parallel with the LO inputs results in high suppression with low input drive power. A maximum measured conversion gain of -6 dB is obtained at an input drive power as low as -1 dBm. The results indicate that the proposed frequency doubler can operate over a broad band and achieve high fundamental-frequency suppression with low input drive power.

  • A 2-GHz Gain Equalizer for Analog Signal Transmission Using Feedforward Compensation by a Low-Pass Filter

    Masayoshi TAKAHASHI  Keiichi YAMAMOTO  Norio CHUJO  Ritsurou ORIHASHI  

     
    PAPER

      Page(s):
    611-616

    A 2 GHz gain equalizer for analog signal transmission using a novel gain compensation method is described in this paper. The method is based on feedforward compensation by a low-pass filter, which improves the gain-equalizing performance by subtracting the low-pass-filtered signal from the directly passed signal at the end of a transmission line. The advantage of the proposed method over the conventional one is that the gain is equalized with a smaller THD at higher frequencies by using a low-pass instead of a high-pass filter. In this circuit, the peak gain is adjustable from 0 to 2.4 dB and the frequency of the peak gain can be controlled up to 2 GHz by varying the value of an external capacitor. The circuit also achieves a THD 5 dB better than that of conventional circuits.

  • Automated Microwave Filter Tuning Based on Successive Optimization of Phase and Amplitude Characteristics

    Yosuke TAKEUCHI  Koichi ICHIGE  Koichi MIYAMOTO  Yoshio EBINE  

     
    PAPER

      Page(s):
    617-624

    This paper presents a novel automated microwave filter tuning method based on successive optimization of phase and amplitude characteristics. We develop an optimization procedure that determines how much the adjusting screws of a filter should be rotated. The proposed method consists of two stages: coarse tuning and fine tuning. In the first stage, coarse tuning, the phase response error of the target filter is minimized so that the filter roughly approximates the ideal bandpass characteristics. Then, in the second stage, fine tuning, two different amplitude response errors are minimized in turn so that the resulting filter closely approximates the ideal characteristics. The performance of the proposed tuning procedure is evaluated through experiments on actual filter tuning.

  • Capacitance Reduction Technique for Switched-Capacitor Circuits Based on Charge Distribution and Partial Charge Transfer

    Retdian NICODIMUS  Shigetaka TAKAGI  

     
    PAPER

      Page(s):
    625-632

    This paper proposes a technique to reduce the capacitance spread in switched-capacitor (SC) filters. The proposed technique is based on simple charge distribution and partial charge transfer, and is applicable to various integrator topologies. An implementation example on an existing integrator topology and a design example of a 2nd-order SC low-pass filter are given to demonstrate the performance of the proposed technique. The design example shows that the filter designed using the proposed technique has approximately 23% less total capacitance than an SC low-pass filter designed with a conventional capacitance-spread reduction technique.

  • A Self-Calibrating Per-Pin Phase Adjuster for Source Synchronous Double Data Rate Signaling in Parallel Interface

    Young-Chan JANG  

     
    PAPER

      Page(s):
    633-638

    A self-calibrating per-pin phase adjuster, which requires neither feedback from the slave chip nor a multi-phase clock in the master and slave chips, is proposed for a high-speed parallel chip-to-chip interface with source-synchronous double data rate (DDR) signaling. It achieves not only per-pin phase adjustment but also a 90° phase shift of the strobe signal for source-synchronous DDR signaling. For this self-calibration, the phase adjuster measures and compensates for only the relative delay mismatch among channels by utilizing on-chip time-domain reflectometry (TDR). To this end, variable delay lines, finite state machines, and a test signal generator are additionally required for the proposed phase adjuster. In addition, a power-gating receiver is used to reduce the discontinuity effect of the channel, including the parasitic components of the chip package. To verify the proposed self-calibrating per-pin phase adjuster, transceivers with 16 data, strobe, and clock signals for a source-synchronous DDR interface were implemented using a 60 nm 1-poly 3-metal CMOS DRAM process with a 1.5 V supply. Each phase skew between the strobe and the 16 data signals was corrected to within 0.028 UI at a 1.6-Gb/s data rate in a point-to-point channel.

  • A Design Procedure for CMOS Three-Stage NMC Amplifiers

    Mohammad YAVARI  

     
    PAPER

      Page(s):
    639-645

    This paper presents a novel time-domain design procedure for fast-settling three-stage nested-Miller-compensated (NMC) amplifiers. In the proposed design methodology, the amplifier is designed to settle within a definite time period with a given settling accuracy while optimizing both the power consumption and the silicon die area. Detailed design equations are presented, and circuit-level simulation results are provided to verify the usefulness of the proposed design procedure in comparison with previously reported design schemes.

  • Design and Modeling of a High Efficiency Step-Up/Step-Down DC-DC Converter with Smooth Transition

    Yanzhao MA  Hongyi WANG  Guican CHEN  

     
    PAPER

      Page(s):
    646-652

    This paper presents a step-up/step-down DC-DC converter with three operation modes to achieve high efficiency and a small output ripple voltage. A constant-time buck-boost mode, inserted between the buck mode and the boost mode, is proposed to achieve a smooth transition. With the proposed mode, the output ripple voltage is significantly reduced when the input voltage is approximately equal to the output voltage. In addition, the novel control scheme minimizes the conduction loss by reducing the average inductor current, and the switching loss by making the converter operate like a buck or boost converter. A small-signal model of the step-up/step-down DC-DC converter is also derived to guide the design of the compensation network. The converter is designed in a 0.5 µm CMOS n-well process and can regulate the output voltage over an input voltage range from 2.5 V to 5.5 V with a maximum power efficiency of 96%. Simulation results show that the proposed converter exhibits an output ripple voltage of 28 mV in the transition mode.

  • Regular Section
  • Synthesis of 2-Channel IIR Paraunitary Filter Banks by Successive Extraction of 2-Port Lattice Sections

    Nagato UEDA  Eiji WATANABE  Akinori NISHIHARA  

     
    PAPER-Digital Signal Processing

      Page(s):
    653-660

    This paper proposes a synthesis method for 2-channel IIR paraunitary filter banks by successive extraction of 2-port lattice sections. When a power-symmetric transfer function is given, a filter bank is realized as a cascade of paraunitary 2-port lattice sections. The method can synthesize both odd- and even-order filters with Butterworth or elliptic characteristics. The number of multiplications per second can also be reduced.

  • Improvement of Detection Performance in DWT-Based Image Watermarking under Specified False Positive Probability

    Masayoshi NAKAMOTO  Kohei SAYAMA  Mitsuji MUNEYASU  Tomotaka HARANO  Shuichi OHNO  

     
    PAPER-Digital Signal Processing

      Page(s):
    661-670

    For copyright protection, a watermark signal is embedded into host images with a secret key, and a correlation is applied to judge the presence of the watermark signal during detection. This paper treats a discrete wavelet transform (DWT)-based image watermarking method under a specified false positive probability. We propose a new watermarking method that improves the detection performance by using not only positive correlation but also negative correlation. We also present a statistical analysis of the detection performance that takes the false positive probability into account, and prove the effectiveness of the proposed method. Experimental results verify the statistical analysis and show that this method improves robustness against several attacks.
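
    A generic illustration of correlation-based detection in the DWT domain under a specified false positive probability is sketched below (Python with PyWavelets and SciPy). It is not the authors' method, which additionally exploits negative correlation; the wavelet, subband, embedding strength, false-positive target, and synthetic host image are assumptions.

      import numpy as np
      import pywt
      from scipy.stats import norm

      def embed(image, key, strength=5.0):
          # Embed a pseudo-random +/-1 watermark into the HL subband of a 1-level DWT.
          cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
          rng = np.random.default_rng(key)
          w = rng.choice([-1.0, 1.0], size=cH.shape)
          return pywt.idwt2((cA, (cH + strength * w, cV, cD)), "haar"), w

      def detect(image, w, pfa=1e-6):
          # Decide watermark presence by correlating the HL subband with w;
          # under H0 the correlation is approximately zero-mean Gaussian, so the
          # threshold is set so that P(corr > T | no watermark) = pfa.
          _, (cH, _, _) = pywt.dwt2(image.astype(float), "haar")
          corr = np.mean(cH * w)
          sigma0 = np.std(cH) / np.sqrt(w.size)
          threshold = norm.isf(pfa) * sigma0
          return corr > threshold, corr, threshold

      host = np.random.rand(256, 256) * 255.0     # stand-in for a host image
      marked, w = embed(host, key=1234)
      present, corr, thr = detect(marked, w)      # present is True here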

  • A Simplified Lattice Structure of Two Dimensional Generalized Lapped Orthogonal Transform

    Taichi YOSHIDA  Seisuke KYOCHI  Masaaki IKEHARA  

     
    PAPER-Digital Signal Processing

      Page(s):
    671-679

    In this paper, we propose a novel lattice structure for two-dimensional (2D) nonseparable linear-phase paraunitary filter banks (LPPUFBs), called 2D GenLOT. Muramatsu et al. previously proposed a lattice structure for 2D nonseparable LPPUFBs with efficient frequency responses. The proposed structure, however, requires fewer design parameters and lower computational cost than the conventional one. Through design examples and simulation results, we show that both filter banks have comparable frequency responses and coding gains.

  • Min-Max Model Predictive Controller for Trajectory Tracking of a Wheeled Mobile Robot with Slipping Effects

    Yu GAO  Kil To CHONG  

     
    PAPER-Systems and Control

      Page(s):
    680-687

    A min-max model predictive controller is developed in this paper for the tracking control of wheeled mobile robots (WMRs) subject to violation of the nonholonomic constraints in an environment without obstacles. The problem is simplified by neglecting the vehicle dynamics and considering only the steering system. The linearized tracking-error kinematic model, with uncertain disturbances present, is formulated in the robot frame. The control policy is then derived from a worst-case optimization of a quadratic cost function, which penalizes the tracking error and the control variables at each sampling time over a finite horizon. As a result, the input sequence must be feasible for all possible disturbance realizations. The performance of the control algorithm is verified via computer simulations with a predefined trajectory and compared with a common discrete-time sliding mode control law. The results show that the proposed method achieves better tracking performance and convergence.

  • Substrate Pick-Up Impacting on ESD Performances of Cascode NMOS Transistors

    Shao-Chang HUANG  Ke-Horng CHEN  

     
    PAPER-VLSI Design Technology and CAD

      Page(s):
    688-695

    The cascode NMOS architecture is tested in this paper with the Human Body Model (HBM), the Machine Model (MM), and a Transmission Line Pulse (TLP) generator. For the TLP, detailed silicon data are analyzed with respect to many parameters, such as the first triggering voltage (Vt1), the first triggering current (It1), the holding voltage (Vh), and the TLP I-V curve. Besides these three kinds of Electrostatic Discharge (ESD) events, the gate-oxide breakdown voltage of the device is also taken into consideration, and the correlations between HBM, MM, and TLP are observed. To explain the turn-on mechanisms of the parasitic bipolar transistor, two models are proposed in this paper. Typically, the substrate resistance decreases as technology advances. For processes older than 0.35 µm, such as 0.5 µm and 1 µm, ESD designers can use pick-up insertions to trigger integrated circuits (ICs) to turn on uniformly; the NPN Side Model dominates ESD performance in such old processes. In 0.18 µm and newer processes, such as 0.15 µm, 0.13 µm, and 90 nm, ESD designers should instead use non-pick-up insertion structures; the NPN Central Model dominates ESD performance in these processes. Combining both models, the bipolar turn-on mechanism can be summarized as "ESD currents flow from side regions to central regions." Besides the turn-on behavior of the parasitic bipolar transistor, another reason that ESD designers should use non-pick-up insertions in deep sub-micron processes is the decreasing gate-oxide breakdown voltage. As IC dimensions scale down, the gate oxide becomes thinner, and a thinner gate oxide has a lower breakdown voltage. To avoid gate-oxide damage under ESD stress, ESD designers should strive to decrease the turn-on resistance of ESD devices; ESD protection devices with low turn-on resistance can endure larger currents for the same TLP voltage. Silicon data in this paper show that the turn-on resistance of the non-pick-up insertion cascode NMOS transistor is smaller than that of the pick-up insertion cascode NMOS transistor. Although this paper analyzes the NPN turn-on mechanisms based on the cascode NMOS structure, the same theories can be adopted for other kinds of ESD protection structures, such as a single-poly gate-grounded NMOS transistor (GGNMOST): pick-up insertion architectures for NMOS transistors in low-end processes and non-pick-up insertion architectures for GGNMOSTs in high-end processes, in order to obtain optimized ESD performance.

  • Multi-Level Bounded Model Checking with Symbolic Counterexamples

    Tasuku NISHIHARA  Takeshi MATSUMOTO  Masahiro FUJITA  

     
    PAPER-VLSI Design Technology and CAD

      Page(s):
    696-705

    Bounded model checking is a widely used formal technique in both hardware and software verification. However, it cannot be applied when the bound (the number of time frames to be analyzed) becomes large, so deep bugs that are observed only through very long counterexamples cannot be detected. This paper presents a method for efficiently concatenating multiple bounded model checking runs using symbolic simulation. A bounded model checking problem with a large bound is recursively decomposed into multiple problems with smaller bounds, and symbolic simulation on each counterexample supports smooth connections between them. A strong heuristic for the proposed method that targets deep bugs is also presented; it can be applied together with other efficient bounded model checking methods since it does not modify the basic bounded model checking algorithm.

  • Post-Routing Double-Via Insertion for X-Architecture Clock Tree Yield Improvement

    Chia-Chun TSAI  Chung-Chieh KUO  Trong-Yen LEE  

     
    PAPER-VLSI Design Technology and CAD

      Page(s):
    706-716

    As VLSI manufacturing technology shrinks to 65 nm and below, reducing the yield loss induced by via failures is a critical issue in design for manufacturability (DFM). Semiconductor foundries strongly recommend the double-via insertion (DVI) method to improve the yield and reliability of designs. This work applies the DVI method at the post-routing stage of X-architecture clock routing to improve the double-via insertion rate. The proposed DVI-X algorithm constructs bipartite graphs of the partitioned clock routing layout with single vias and redundant-via candidates (RVCs). DVI-X then applies the augmenting-path approach, together with the construction of maximal cliques, to obtain the matching solution from the bipartite graphs. Experimental results on benchmarks show that DVI-X achieves a 3% higher double-via insertion rate and 68% less running time than existing works. Moreover, a skew-tuning technique is applied afterward to achieve zero skew, because the inserted double vias affect the clock skew.
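
    The matching step at the heart of double-via insertion reduces to maximum matching on a bipartite graph of single vias and redundant-via candidates. The sketch below shows a generic maximum matching with networkx, not the DVI-X clique-based procedure; the via and candidate names and edges are made up.

      import networkx as nx

      # Bipartite graph: single vias on one side, redundant-via candidates (RVCs)
      # on the other; an edge means the RVC can legally pair with that via.
      G = nx.Graph()
      vias = ["v1", "v2", "v3"]
      rvcs = ["r1", "r2", "r3", "r4"]
      G.add_nodes_from(vias, bipartite=0)
      G.add_nodes_from(rvcs, bipartite=1)
      G.add_edges_from([("v1", "r1"), ("v1", "r2"), ("v2", "r2"),
                        ("v3", "r3"), ("v3", "r4")])

      # Maximum matching via augmenting paths (Hopcroft-Karp).
      matching = nx.algorithms.bipartite.maximum_matching(G, top_nodes=vias)
      inserted = {v: r for v, r in matching.items() if v in vias}
      rate = len(inserted) / len(vias)    # double-via insertion rate of this toy case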

  • Sanitizable Signatures Reconsidered

    Dae Hyun YUM  Pil Joong LEE  

     
    PAPER-Cryptography and Information Security

      Page(s):
    717-724

    A sanitizable signature scheme allows a semi-trusted party, designated by a signer, to modify pre-determined parts of a signed message without interacting with the original signer. To date, many sanitizable signature schemes have been proposed based on various cryptographic techniques. However, previous works are usually built upon the paradigm of dividing a message into submessages and applying a cryptographic primitive to each submessage. This methodology entails computation time (and often signature length) linear in the number of sanitizable submessages. We present a new approach to constructing sanitizable signatures with constant overhead for signing and verification, irrespective of the number of submessages, in both computational cost and signature size.

  • Universally Composable and Statistically Secure Verifiable Secret Sharing Scheme Based on Pre-Distributed Data

    Rafael DOWSLEY  Jorn MULLER-QUADE  Akira OTSUKA  Goichiro HANAOKA  Hideki IMAI  Anderson C.A. NASCIMENTO  

     
    PAPER-Cryptography and Information Security

      Page(s):
    725-734

    This paper presents a non-interactive verifiable secret sharing scheme (VSS) tolerating a dishonest majority based on data pre-distributed by a trusted authority. As an application of this VSS scheme we present very efficient unconditionally secure protocols for performing multiplication of shares based on pre-distributed data which generalize two-party computations based on linear pre-distributed bit commitments. The main results of this paper are a non-interactive VSS, a simplified multiplication protocol for shared values based on pre-distributed random products, and non-interactive zero knowledge proofs for arbitrary polynomial relations. The security of the schemes is proved using the UC framework.

  • Public-Key Encryptions with Invariant Security Reductions in the Multi-User Setting

    Mototsugu NISHIOKA  Naohisa KOMATSU  

     
    PAPER-Cryptography and Information Security

      Page(s):
    735-760

    In [1], Bellare, Boldyreva, and Micali addressed the security of public-key encryptions (PKEs) in a multi-user setting (called the BBM model in this paper). They showed that although indistinguishability in the BBM model is induced from that in the conventional model, the reduction is far from tight in general, and this leads to a serious key length problem. In this paper, we discuss PKE schemes in which the IND-CCA security in the BBM model can be obtained tightly from the IND-CCA security. We call such PKE schemes IND-CCA secure in the BBM model with invariant security reductions (briefly, SR-invariant IND-CCA_BBM secure). These schemes never suffer from the underlying key length problem in the BBM model. We present three instances of an SR-invariant IND-CCA_BBM secure PKE scheme: the first is based on the Fujisaki-Okamoto PKE scheme [7], the second on the Bellare-Rogaway PKE scheme [3], and the last on the Cramer-Shoup PKE scheme [5].

  • Trace Representation of Binary Generalized Cyclotomic Sequences with Length pm

    Xiaoni DU  Zhixiong CHEN  

     
    PAPER-Information Theory

      Page(s):
    761-765

    Some new generalized cyclotomic sequences defined by C. Ding and T. Helleseth have been proven to exhibit a number of good randomness properties. In this paper, we determine the defining pairs of these sequences of length p^m (p prime, m ≥ 2) with order two, from which we obtain their trace representation. Their linear complexity can then be derived using Key's method.

  • Construction of Binary Array Set with Zero Correlation Zone Based on Interleaving Technique

    Yifeng TU  Pingzhi FAN  Li HAO  Xiyang LI  

     
    PAPER-Information Theory

      Page(s):
    766-772

    Sequences with good correlation properties are of substantial interest in many applications. By interleaving a perfect array with shift sequences, a new method of constructing a binary array set with a zero correlation zone (ZCZ) is presented. The interleaving operation can be performed not only row-by-row but also column-by-column on the perfect array. The resultant ZCZ binary array set is optimal or almost optimal with respect to the theoretical bound. The new method provides a flexible choice for the rectangular ZCZ and the set size.

  • Two-Way Parity Bit Correction Encoding Algorithm for Dual-Diagonal LDPC Codes

    Chia-Yu LIN  Chih-Chun WEI  Mong-Kai KU  

     
    PAPER-Coding Theory

      Page(s):
    773-780

    In this paper, an efficient encoding scheme for dual-diagonal LDPC codes is proposed. Our two-way parity bit correction algorithm breaks up the data dependency within the encoding process to achieve higher throughput, lower latency, and better hardware utilization. The proposed scheme can be applied directly to dual-diagonal codes without matrix modifications. FPGA encoder prototypes are implemented for IEEE 802.11n and 802.16e codes. Results show that the proposed architecture outperforms existing designs in terms of throughput and throughput/area ratio.

  • Unique Fingerprint-Image-Generation Algorithm for Line Sensors

    Hao NI  Dongju LI  Tsuyoshi ISSHIKI  Hiroaki KUNIEDA  

     
    PAPER-Image

      Page(s):
    781-788

    It is theoretically impossible to restore the original fingerprint image from a sequence of line images captured by a line sensor. However, in this paper we propose a unique fingerprint-image-generation algorithm, which derives fingerprint images from sequences of line images captured at different swipe speeds by the line sensor. A continuous image representation, called trajectory, is used in modeling distortion of raw fingerprint images. Sequences of line images captured from the same finger are considered as sequences of points, which are sampled on the same trajectory in N-dimensional vector space. The key point here is not to reconstruct the original image, but to generate identical images from the trajectory, which are independent of the swipe speed of the finger. The method for applying the algorithm in a practical application is also presented. Experimental results on a raw fingerprint image database from a line sensor show that the generated fingerprint images are independent of swipe speed, and can achieve remarkable matching performance with a conventional minutiae matcher.

  • Quantitative Analysis on Usability of Button-Input Interfaces

    Yoshinobu MAEDA  Kentaro TANI  Nao ITO  Michio MIYAKAWA  

     
    PAPER-Human Communications

      Page(s):
    789-794

    In this paper we show that the performance workload of button-input interfaces does not monotonically increase with the number of buttons; rather, there is an optimal number of buttons at which the performance workload is minimized. As the number of buttons increases, it becomes more difficult to search for the target button, so the user's cognitive workload increases. As the number of buttons decreases, the user's cognitive workload decreases but the operational workload increases, i.e., more operations are needed because one button has to serve several functions. The optimal number of buttons emerges from the combination of the cognitive and operational workloads. The experiments used to measure performance allowed us to describe a multiple regression equation using two observable variables related to the cognitive and operational workloads. As a result, our equation explained the data well, and the optimal number of buttons was found to be about 8, similar to the number adopted by commercial cell phone manufacturers. It was also clarified that an interface with a number of buttons close to the number of letters in the alphabet is not necessarily easy to use.

  • Local Search with Probabilistic Modeling for Learning Multiple-Valued Logic Networks

    Shangce GAO  Qiping CAO  Masahiro ISHII  Zheng TANG  

     
    PAPER-Neural Networks and Bioengineering

      Page(s):
    795-805

    This paper proposes a probabilistic modeling learning algorithm for the local search approach to Multiple-Valued Logic (MVL) networks. The learning model (PMLS) has two phases: a local search (LS) phase and a probabilistic modeling (PM) phase. The LS searches by updating the parameters of the MVL network; it is equivalent to gradient descent on the error measure and leads to a local minimum of the error that represents a good solution to the problem. Once the LS is trapped in a local minimum, the PM phase attempts to generate a new starting point for further search by the LS. The further search is expected to be guided to a promising area by the probability model. Thus, the proposed algorithm can escape from local minima and search further for better results. We test the algorithm on many randomly generated MVL networks. Simulation results show that the proposed algorithm outperforms other improved local search learning methods, such as stochastic dynamic local search (SDLS) and chaotic dynamic local search (CDLS).

  • Estimation of Blood Pressure Measurements for Hypertension Diagnosis Using Oscillometric Method

    Youngsuk SHIN  

     
    PAPER-Neural Networks and Bioengineering

      Page(s):
    806-812

    Blood pressure is the measurement of the force exerted by blood against the walls of the arteries. Hypertension is a major risk factor for cardiovascular diseases. The systolic and diastolic blood pressures obtained by the oscillometric method can carry clues about hypertension. However, blood pressure is influenced by individual traits such as physiology, the geometry of the heart, body figure, gender, and age. Therefore, consideration of individual traits is a requisite for reliable hypertension monitoring. The oscillation waveforms extracted from the cuff pressure reflect individual traits in terms of oscillation patterns that vary in size and amplitude over time. Thus, uniform features for individual traits were extracted from the oscillation patterns and applied to evaluate systolic and diastolic blood pressures using two feedforward neural networks. The systolic and diastolic blood pressure measurements from the two neural networks were compared with the average values obtained by two nurses using the auscultatory method. The recognition performance was based on the difference between the blood pressures measured by the auscultatory method and by the proposed method with the two neural networks. The recognition performance for systolic blood pressure was found to be 98.2% for a difference of 20 mmHg, 93.5% for 15 mmHg, and 82.3% for 10 mmHg, based on the maximum negative amplitude. The recognition performance for diastolic blood pressure was found to be 100% for 20 mmHg, 98.8% for 15 mmHg, and 88.2% for 10 mmHg, based on the maximum positive amplitude. In our results, systolic blood pressure showed more fluctuation than diastolic blood pressure in terms of individual traits, and subjects with prehypertension or hypertension (in systolic blood pressure) showed a steeper-slope pattern in the 1/3 section of the feature windows than normal subjects. On the other hand, subjects with prehypertension or hypertension (in diastolic blood pressure) showed a steeper-slope pattern in the front (2/3) section of the feature windows than normal subjects. This paper presents a novel blood pressure measurement system that can monitor hypertension using personalized traits. Our study can serve as a foundation for reliable hypertension diagnosis and management based on consideration of individual traits.

  • Linearization Ability Evaluation for Loudspeaker Systems Using Dynamic Distortion Measurement

    Shoichi KITAGAWA  Yoshinobu KAJIKAWA  

     
    LETTER-Engineering Acoustics

      Page(s):
    813-816

    In this letter, the ability to compensate for the nonlinear distortions of loudspeaker systems is demonstrated using dynamic distortion measurement. Two linearization methods, one using a Volterra filter and one using a Mirror filter, are compared. The conventional evaluation utilizes swept multi-sinusoidal waves; however, it is unsatisfactory because wideband signals such as music and voice are usually applied to loudspeaker systems. Hence, the authors use dynamic distortion measurement employing white noise. Experimental results show that the two linearization methods can effectively reduce nonlinear distortions for wideband signals.

  • A Robust Detection in the Presence of Clutter and Jammer Signals with Unknown Powers

    Victor GOLIKOV  Olga LEBEDEVA  

     
    LETTER-Digital Signal Processing

      Page(s):
    817-822

    This work extends the constant false alarm rate (CFAR) detection methodology to detection in the presence of two independent interference sources with unknown powers. The proposed detector is analyzed under the assumption that the clutter and jammer covariance structures are known and have relatively low rank. The limited-dimensional subspace-based approach leads to a robust false alarm rate (RFAR) detector. The RFAR detection algorithm is developed by adapting and extending Hotelling's principal-component method. The detector's performance loss and false-alarm stability loss due to unknown clutter and jammer powers have been evaluated for an example scenario.

  • On Optimum Single-Tone Frequency Estimation Using Non-uniform Samples

    Hing Cheung SO  Kenneth Wing Kin LUI  

     
    LETTER-Digital Signal Processing

      Page(s):
    823-825

    Frequency estimation of a complex single-tone in additive white Gaussian noise from irregularly-spaced samples is addressed. In this Letter, we study the periodogram and weighted phase averager, which are standard solutions in the uniform sampling scenarios, for tackling the problem. It is shown that the estimation performance of both approaches can attain the optimum benchmark of the Cramér-Rao lower bound, although the former technique has a smaller threshold signal-to-noise ratio.
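
    A minimal sketch of periodogram-based frequency estimation from irregularly spaced samples of a complex single-tone is shown below (generic illustration; the frequency, sample count, SNR, and search-grid density are arbitrary).

      import numpy as np

      rng = np.random.default_rng(0)
      f_true, N, snr_db = 0.31, 64, 20               # tone frequency, samples, SNR
      t = np.sort(rng.uniform(0.0, N, N))             # irregular sampling instants
      noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
      x = np.exp(2j * np.pi * f_true * t) + 10 ** (-snr_db / 20) * noise

      # Periodogram: pick the frequency maximizing |sum_n x[n] e^{-j 2 pi f t[n]}|^2.
      f_grid = np.linspace(0.0, 0.5, 4096)
      P = np.abs(np.exp(-2j * np.pi * np.outer(f_grid, t)) @ x) ** 2
      f_hat = f_grid[np.argmax(P)]                    # close to f_true above the threshold SNR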

  • Performance of the Matched Subspace Detector in the Case of Subpixel Targets

    Victor GOLIKOV  Olga LEBEDEVA  Andres CASTILLEJOS-MORENO  Volodymyr PONOMARYOV  

     
    LETTER-Digital Signal Processing

      Page(s):
    826-828

    This Letter presents matched subspace detection in the presence of a Gaussian background with a known covariance structure but different variances under hypotheses H0 and H1. The performance degradation is evaluated when there are mismatches between the actual and designed parameters, namely the background variance under hypothesis H1 and the one-lag correlation coefficient of the background. It is shown that, for a prescribed false alarm probability and a given signal-to-background ratio, the detectability depends strongly on the fill factor of the targets when the mode signal matrix has high rank. These results are also justified via Monte Carlo simulations for an example scenario.

  • The Precoder Design for Intrablock MMSE Equalization and Block Delay Detection with a Modified Oblique Projection Framework

    Chun-Hsien WU  

     
    LETTER-Digital Signal Processing

      Page(s):
    829-832

    This letter presents a method that enables precoder design for intrablock MMSE equalization within a previously proposed oblique projection framework. A joint design of the linear transceiver with optimum block delay detection is developed. Simulation results validate the proposed approach and show the superior BER performance of the optimized transceiver.

  • Semi-Blind Channel Covariance Estimation for MIMO Precoding

    Mohd Hairi HALMI  Mohd Yusoff ALIAS  Teong Chee CHUAH  

     
    LETTER-Digital Signal Processing

      Page(s):
    833-837

    A method is proposed for estimating the channel covariance from the uplink received signal power for downlink transmit precoding in multiple-input multiple-output (MIMO) frequency division duplex (FDD) wireless systems. Unlike other MIMO precoding schemes, the proposed scheme does not require a feedback channel or pilot symbols; that is, knowledge of the channel covariance is made available at the downlink transmitter through direct estimation from the uplink received signal power. This leads to low complexity and improved system efficiency. It is shown that the proposed scheme performs better than or on par with other practical schemes and suffers only a slight performance degradation compared with systems having perfect knowledge of the channel covariance.

  • Low-Complexity Coarse Frequency Offset Estimation for OFDM Systems with Non-uniform Phased Pilots

    Eu-Suk SHIM  Young-Hwan YOU  

     
    LETTER-Digital Signal Processing

      Page(s):
    838-841

    In this letter, we propose a low-complexity coarse frequency offset estimation scheme for orthogonal frequency division multiplexing (OFDM) systems using non-uniformly phased pilot symbols. In our approach, the pilot symbols used for frequency estimation are grouped into a number of subsets so that the phase of the pilots in each subset is unique. We show via simulations that this design achieves not only a low computational load but also performance comparable to that of the conventional estimator.

  • Energy-Saving Stochastic Scheduling of a Real-Time Parallel Task with Varying Computation Amount on Multi-Core Processors

    Wan Yeon LEE  Kyong Hoon KIM  

     
    LETTER-Systems and Control

      Page(s):
    842-845

    The proposed scheduling scheme minimizes the mean energy consumption of a real-time parallel task whose computation amount is probabilistic and which can be executed concurrently on multiple cores. The scheme determines an appropriate number of cores to allocate to the task and the instantaneous frequency supplied to the allocated cores. Evaluation shows that the scheme saves a significant amount of energy compared with a previous method that minimizes the mean energy consumption on a single core.

  • An Approximative Calculation of the Fractal Structure in Self-Similar Tilings

    Yukio HAYASHI  

     
    LETTER-Nonlinear Problems

      Page(s):
    846-849

    Fractal structures emerge from statistical and hierarchical processes in urban development or network evolution. In a class of efficient and robust geographical networks, we derive the size distribution of layered areas, and estimate the fractal dimension by using the distribution without huge computations. This method can be applied to self-similar tilings based on a stochastic process.
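
    A standard way to turn a size distribution into a fractal-dimension estimate is a log-log power-law fit; the sketch below illustrates that generic step with synthetic data (it is not the authors' derivation, and the exponent is arbitrary).

      import numpy as np

      def fractal_dimension_from_sizes(sizes, counts):
          # Fit counts ~ sizes**(-D) on a log-log scale; D is minus the slope.
          slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
          return -slope

      sizes = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # layered-area sizes (toy)
      counts = 1000.0 * sizes ** -1.6                       # assumed power law
      print(fractal_dimension_from_sizes(sizes, counts))    # ~1.6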

  • Proportional Quasi-Fairness of End-to-End Rates in Network Utility Maximization

    Dang-Quang BUI  Rentsen ENKHBAT  Won-Joo HWANG  

     
    LETTER-Graphs and Networks

      Page(s):
    850-852

    This letter introduces a new fairness concept, namely proportional quasi-fairness and proves that the optimal end-to-end rate of a network utility maximization can be proportionally quasi-fair with a properly chosen network utility function for an arbitrary compact feasible set.

  • Cryptanalysis of a Public Key Encryption Scheme Using Ergodic Matrices

    Mohamed RASSLAN  Amr YOUSSEF  

     
    LETTER-Cryptography and Information Security

      Page(s):
    853-854

    Shi-Hui et al. proposed a new public key cryptosystem using ergodic binary matrices. The security of the system is derived from an assumed hard problem based on ergodic matrices over GF(2). In this note, we show that breaking this system with security parameter n (a public key of length 4n^2 bits, a secret key of length 2n bits, and a block length of n^2 bits) is equivalent to solving a set of n^4 linear equations over GF(2), which renders the system insecure for practical choices of n.
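
    The point of the attack is that the problem collapses to linear algebra over GF(2), which is solvable in polynomial time. The sketch below is generic Gaussian elimination over GF(2), not the specific equation system of the cryptanalysis; the toy matrix and right-hand side are arbitrary.

      import numpy as np

      def solve_gf2(A, b):
          # Solve A x = b over GF(2) by Gaussian elimination (free variables set to 0).
          A, b = A.copy() % 2, b.copy() % 2
          n_rows, n_cols = A.shape
          pivots, row = [], 0
          for col in range(n_cols):
              pivot = next((r for r in range(row, n_rows) if A[r, col]), None)
              if pivot is None:
                  continue
              if pivot != row:
                  A[[row, pivot]] = A[[pivot, row]]
                  b[[row, pivot]] = b[[pivot, row]]
              for r in range(n_rows):
                  if r != row and A[r, col]:
                      A[r] ^= A[row]          # eliminate this column elsewhere
                      b[r] ^= b[row]
              pivots.append(col)
              row += 1
          if any(b[row:]):
              raise ValueError("inconsistent system")
          x = np.zeros(n_cols, dtype=np.uint8)
          x[pivots] = b[:len(pivots)]
          return x

      A = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]], dtype=np.uint8)
      b = np.array([1, 0, 1], dtype=np.uint8)
      print(solve_gf2(A, b))                  # a solution of A x = b over GF(2)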

  • A Fault Analytic Method against HB+

    José CARRIJO  Rafael TONICELLI  Anderson C.A. NASCIMENTO  

     
    LETTER-Cryptography and Information Security

      Page(s):
    855-859

    The search for lightweight authentication protocols suitable for low-cost RFID tags constitutes an active and challenging research area. In this context, a family of protocols based on the LPN problem has been proposed: the so-called HB family. Despite the rich literature on the cryptanalysis of these protocols, there are no published results on the impact of fault analysis on them. The purpose of this paper is to fill this gap by presenting fault-analytic methods against a prominent member of the HB family, the HB+ protocol. We demonstrate that the fault analysis model can lead to a flexible and effective attack against HB-like protocols, posing a serious threat to them.

  • Improved User Authentication Scheme with User Anonymity for Wireless Communications

    Miyoung KANG  Hyun Sook RHEE  Jin-Young CHOI  

     
    LETTER-Cryptography and Information Security

      Page(s):
    860-864

    We propose a user authentication scheme with user anonymity for wireless communications. Previous works have weaknesses such as the following: (1) the user's identity can be revealed from the login message, and (2) after a smart card expires or becomes invalid, users holding the expired smart card can still generate valid login messages under the assumption that the server does not maintain user information. In this letter, we propose a new user authentication scheme that provides user anonymity. In the proposed scheme, the server is capable of detecting forged login messages generated by users who hold only an expired smart card and its password, without storing user information on the server.

  • Security Improvement on Wu and Zhu's Protocol for Password-Authenticated Group Key Exchange

    Junghyun NAM  Juryon PAIK  Dongho WON  

     
    LETTER-Cryptography and Information Security

      Page(s):
    865-868

    A group key exchange (GKE) protocol allows a group of parties communicating over a public network to establish a common secret key. As group-oriented applications gain popularity over the Internet, a number of GKE protocols have been suggested to provide those applications with a secure multicast channel. In this work, we investigate the security of Wu and Zhu's password-authenticated GKE protocol presented recently in FC'08. Wu and Zhu's protocol is efficient, supports dynamic groups, and can be constructed generically from any password-authenticated 2-party key exchange protocol. However, despite its attractive features, the Wu-Zhu protocol should not be adopted in its present form. Due to a flaw in its design, the Wu-Zhu protocol fails to achieve authenticated key exchange. We here report this security problem with the Wu-Zhu protocol and show how to solve it.

  • M-Ary Soft Information Relaying of Distributed Turbo Codes

    Sung Kwon HONG  Jong-Moon CHUNG  Daehwan KIM  

     
    LETTER-Coding Theory

      Page(s):
    869-871

    In this letter, an M-ary extension of the soft information relaying (SIR) scheme is derived for distributed turbo codes (DTCs) to enable higher-data-rate wireless communications with extended range. The M-ary SIR design for DTCs is based on constructing a revised mapping constellation of the signals for calculating metrics from the soft mapping symbols. Numerical results show that DTCs using the proposed M-ary SIR with Gray-mapped quadrature phase shift keying (QPSK) provide a significant 5 dB performance gain over hard information relaying (HIR) DTCs at the 10^-3 bit error rate (BER) level.

  • Circulation Technique of Distributed Space Time Trellis Codes with AF Relaying

    Sung Kwon HONG  Jong-Moon CHUNG  

     
    LETTER-Communication Theory and Signals

      Page(s):
    872-874

    In this letter, a circulation-based distributed space time trellis code (DSTTC) technique for amplify-and-forward (AF) relaying is proposed. The proposed circulation technique configures new protocols from existing protocols whose performance depends on specific source-to-relay links. The simulation results show that the newly developed protocol is less sensitive to weak conditions on specific links, and a performance gain in frame error rate (FER) is obtained over the original protocol.

  • Diversity Precoding for UWB MISO Systems in IEEE Channel Models

    Jinyoung AN  Sangchoon KIM  

     
    LETTER-Spread Spectrum Technologies and Applications

      Page(s):
    875-878

    In this letter, we consider a diversity precoding scheme for signal detection in ultra-wideband (UWB) multiple-input single-output (MISO) systems, which consists of linear diversity prefilters in the transmitter. For a UWB MISO system, the BER performance of linear transmit diversity precoding with imperfect channel estimation is presented for IEEE 802.15.3a UWB multipath channels and compared with that of a linear receive diversity postcoding approach. It is shown that the diversity precoding UWB MISO system offers performance equivalent to that of the diversity postcoding scheme for single-input multiple-output (SIMO) systems while keeping the mobile terminals low-cost and low-power.

  • Ordinal Optimization Approach for Throughput Maximization Problems in MOFDM Uplink System

    Jung-Shou HUANG  Shieh-Shing LIN  Shih-Cheng HORNG  

     
    LETTER-Mobile Information Network and Personal Communications

      Page(s):
    879-883

    This work presents a two-stage ordinal-optimization-based approach for solving the power-constrained throughput maximization problem of sub-carrier assignment and power allocation in multi-user orthogonal frequency division multiplexing uplink systems. In the first stage, a crude but efficient model is employed to evaluate the performance of a sub-carrier assignment pattern, and a genetic algorithm is used to search the huge solution space. In the second stage, an exact model is employed to evaluate the s best sub-carrier assignment patterns obtained in stage 1, forming the selected subset. Finally, the best member of the selected subset is the good-enough solution that we seek. Through numerous tests, this work demonstrates the efficiency of the proposed algorithm and compares it with other heuristic methods.

  • Yellow-Blue Component Modification of Color Image for Protanopia or Deuteranopia

    Go TANAKA  Noriaki SUETAKE  Eiji UCHINO  

     
    LETTER-Image

      Page(s):
    884-888

    A new recoloring method to improve the visibility of colors that are indiscriminable for protanopes or deuteranopes is proposed. In the proposed method, the yellow-blue components of a color image as perceived by protanopes/deuteranopes are adequately modified. Moreover, gamut mapping is considered in the method to obtain proper output color values.

  • High Capacity Watermark Embedding Based on Invariant Regions of Visual Saliency

    Leida LI  Jeng-Shyang PAN  Xiaoping YUAN  

     
    LETTER-Image

      Page(s):
    889-893

    A new image watermarking scheme is presented to achieve high capacity information hiding and geometric invariance simultaneously. Visually salient region is introduced into watermark synchronization. The saliency value of a region is used as the quantitative measure of robustness, based on which the idea of locally most salient region (LMSR) is proposed to generate the disjoint invariant regions. A meaningful binary watermark is then encoded using Chinese Remainder Theorem (CRT) in transform domain. Simulation results and comparisons demonstrate the effectiveness of the proposed scheme.