
IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E93-D No.1  (Publication Date:2010/01/01)

    Special Section on Test, Diagnosis and Verification of SOCs
  • FOREWORD Open Access

    Kazumi HATAYAMA  Tsuyoshi SHINOGI  

     
    FOREWORD

      Page(s):
    1-1
  • High Launch Switching Activity Reduction in At-Speed Scan Testing Using CTX: A Clock-Gating-Based Test Relaxation and X-Filling Scheme

    Kohei MIYASE  Xiaoqing WEN  Hiroshi FURUKAWA  Yuta YAMATO  Seiji KAJIHARA  Patrick GIRARD  Laung-Terng WANG  Mohammad TEHRANIPOOR  

     
    PAPER

      Page(s):
    2-9

    At-speed scan testing is susceptible to yield loss risk due to power supply noise caused by excessive launch switching activity. This paper proposes a novel two-stage scheme, namely CTX (Clock-Gating-Based Test Relaxation and X-Filling), for reducing switching activity when a test stimulus is launched. Test relaxation and X-filling are conducted (1) in Stage-1 (Clock-Disabling) to make as many FFs as possible inactive by disabling the corresponding clock control signals of the clock-gating circuitry, and (2) in Stage-2 (FF-Silencing) to equalize the input and output values of as many remaining active FFs as possible. CTX effectively reduces launch switching activity, and thus yield loss risk, even when only a small number of don't-care (X) bits are present (as in test compression), without any impact on test data volume, fault coverage, performance, or circuit design.
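
The Stage-2 FF-silencing idea can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name and the bit-string encoding (with 'X' marking don't-care launch inputs) are assumptions.

```python
def ff_silence_x_fill(outputs, inputs):
    """Fill each don't-care ('X') launch-input bit with the FF's current output
    value, so that filled FF sees identical input and output and does not
    toggle at launch. `outputs` holds current FF values; `inputs` the launch
    values, possibly containing 'X' don't-cares."""
    return ''.join(out if inp == 'X' else inp
                   for out, inp in zip(outputs, inputs))

# FFs whose launch inputs were X now simply hold their current values:
filled = ff_silence_x_fill(outputs="1010", inputs="1XX0")
```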

  • Scan Chain Ordering to Reduce Test Data for BIST-Aided Scan Test Using Compatible Scan Flip-Flops

    Hiroyuki YOTSUYANAGI  Masayuki YAMAMOTO  Masaki HASHIZUME  

     
    PAPER

      Page(s):
    10-16

    In this paper, a scan chain ordering method that reduces test data and test application time for BIST-aided scan testing is proposed. We utilize a simple LFSR without a phase shifter as the PRPG and configure scan chains from compatible sets of flip-flops, taking into account the correlations among flip-flops in the LFSR. The method reduces the number of inverter codes required for inverting the bits in PRPG patterns that conflict with ATPG patterns. Experimental results for several benchmark circuits demonstrate the feasibility of the proposed test method.
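
The setting can be sketched roughly as below: a Fibonacci LFSR serves as the PRPG, and an inverter code is needed for each PRPG bit that conflicts with a care bit of an ATPG cube. The tap positions and encodings here are hypothetical, not taken from the paper.

```python
def lfsr_step(state, taps):
    """One cycle of a Fibonacci LFSR: XOR the tapped bits into the feedback
    bit and shift it in at the front."""
    fb = 0
    for t in taps:
        fb ^= state[t]
    return [fb] + state[:-1]

def inverter_bits(prpg_pattern, atpg_cube):
    """Count PRPG bits that conflict with the care bits of an ATPG cube
    ('X' = don't-care); each conflict costs one inverter code in a
    BIST-aided scan test."""
    return sum(1 for p, a in zip(prpg_pattern, atpg_cube)
               if a != 'X' and str(p) != a)
```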

  • Reduction of Test Data Volume and Improvement of Diagnosability Using Hybrid Compression

    Anis UZZAMAN  Brion KELLER  Brian FOUTZ  Sandeep BHATIA  Thomas BARTENSTEIN  Masayuki ARAI  Kazuhiko IWASAKI  

     
    PAPER

      Page(s):
    17-23

    This paper describes a simple means to enable direct diagnosis by bypassing MISRs on a small set of tests (MISR-bypass test mode) while achieving maximal output compression using MISRs for the majority of tests (MISR-enabled test mode). By combining two compression schemes, XOR and MISRs, in the same device, it becomes possible to obtain high compression and still support compression-mode volume diagnostics. In our experiment, the MISR-bypass test was executed first, and after 10% of the total test set the MISR-enabled test was performed. The results show that, compared with MISR+XOR-based compression, the proposed technique provides better volume diagnosis with a slightly smaller (0.71× to 0.97×) compaction ratio. The scan cycles are about the same as in the MISR-enabled mode. A possible application to partial-good chips is also shown.
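
A MISR compacts parallel scan-out responses into a signature by XORing each response vector into an LFSR as it shifts. A minimal sketch of one common (Galois-style) convention follows; the register width and tap placement are illustrative assumptions, not the paper's design.

```python
def misr_update(sig, response, taps):
    """One MISR cycle: shift the signature, feed the shifted-out bit back
    into position 0 and the tap positions, then XOR in the parallel
    response bits captured from the scan chains."""
    fb = sig[-1]
    shifted = [fb] + sig[:-1]
    for t in taps:
        shifted[t] ^= fb
    return [s ^ r for s, r in zip(shifted, response)]

def misr_signature(responses, width=4, taps=(1,)):
    """Compact a sequence of per-cycle response vectors into one signature."""
    sig = [0] * width
    for r in responses:
        sig = misr_update(sig, r, taps)
    return sig
```

In the MISR-bypass mode described above, the raw responses would be observed directly instead of being folded into `sig`, which is what restores direct diagnosability.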

  • A Fault Dependent Test Generation Method for State-Observable FSMs to Increase Defect Coverage under the Test Length Constraint

    Ryoichi INOUE  Toshinori HOSOKAWA  Hideo FUJIWARA  

     
    PAPER

      Page(s):
    24-32

    Since scan testing is based not on the function of the circuit but on its structure, it is considered to be a form of both over-testing and under-testing. Moreover, it is important to test VLSIs using their given functions. Since functional specifications are described explicitly in FSMs, high test quality can be expected by performing logical fault testing and timing fault testing. This paper proposes a fault-dependent test generation method that detects the specified fault models completely and increases defect coverage as much as possible under a test length constraint. We present experimental results for MCNC'91 benchmark circuits to evaluate bridging fault coverage, transition fault coverage, and statistical delay quality level, and to show the effectiveness of the proposed method compared with a stuck-at fault-dependent test generation method.

  • A Fault Signature Characterization Based Analog Circuit Testing Scheme and the Extension of IEEE 1149.4 Standard

    Wimol SAN-UM  Masayoshi TACHIBANA  

     
    PAPER

      Page(s):
    33-42

    An analog circuit testing scheme is presented. The testing technique is a sinusoidal fault signature characterization, involving the measurement of DC offset, amplitude, frequency, and phase shift, and the realization of two crossing level voltages. The testing system is an extension of the IEEE 1149.4 standard through the modification of an analog boundary module, affording both on-chip testing capability and accessibility to internal components for off-chip testing. A demonstration circuit-under-test, a 4th-order Gm-C low-pass filter, and the proposed analog testing scheme are implemented at the physical level using 0.18-µm CMOS technology and simulated using Hspice. Both catastrophic and parametric faults are potentially detectable at a minimum parameter variation of 0.5%. The fault coverage associated with CMOS transconductance operational amplifiers and capacitors is 94.16% and 100%, respectively. This work enhances the standardized test approach, reducing the complexity of the testing circuit and providing non-intrusive analog circuit testing.

  • Regular Section
  • A Model to Explain Quality Improvement Effect of Peer Reviews

    Mutsumi KOMURO  Norihisa KOMODA  

     
    PAPER-Software Engineering

      Page(s):
    43-49

    Through analysis of the Rayleigh model, an explanatory model for the quality effect of peer reviews is constructed. Review activities are evaluated by the defect removal rate at each phase. We made hypotheses on how these measurements relate to product quality, verified them through regression analysis of actual project data, and obtained concrete calculation formulae as a model. Using the mechanism behind this model, we can develop a method for making a concrete review plan and setting objective values to manage ongoing review activities.

  • Voice Communications over 802.11 Ad Hoc Networks: Modeling, Optimization and Call Admission Control

    Changchun XU  Yanyi XU  Gan LIU  Kezhong LIU  

     
    PAPER-Networks

      Page(s):
    50-58

    Supporting the quality-of-service (QoS) of multimedia communications over IEEE 802.11-based ad hoc networks is a challenging task. This paper develops a simple 3-D Markov chain model for queuing analysis of the IEEE 802.11 MAC layer. The model is applied to the performance analysis of voice communications over IEEE 802.11 single-hop ad hoc networks. Using the model, we optimize the performance of the IEEE 802.11 MAC layer and obtain the maximum number of voice calls in IEEE 802.11 ad hoc networks as well as statistical performance bounds. Furthermore, we design a fully distributed call admission control (CAC) algorithm which provides strict statistical QoS guarantees for voice communications over IEEE 802.11 ad hoc networks. Extensive simulations confirm the accuracy of the analytical model and the CAC scheme.

  • Testable Critical Path Selection Considering Process Variation

    Xiang FU  Huawei LI  Xiaowei LI  

     
    PAPER-Dependable Computing

      Page(s):
    59-67

    Critical path selection is very important in delay testing. Critical paths found by conventional static timing analysis (STA) tools are inadequate to represent the real timing of the circuit, since neither the testability of paths nor the statistical variation of cell delays caused by process variation is considered. This paper proposes a novel path selection method that considers process variation. The circuit is first simplified by eliminating non-critical edges under a statistical timing model, and then divided into sub-circuits, where each sub-circuit has only one prime input (PI) and one prime output (PO). Critical paths are selected only in critical sub-circuits. The concepts of partially critical edges (PCEs) and completely critical edges (CCEs) are introduced to speed up the path selection procedure. Two path selection strategies are also presented to search for a testable critical path set that covers all the critical edges. The experimental results show that the proposed circuit division approach is efficient in reducing the number of paths, and that PCEs and CCEs serve as an important guideline during path selection.

  • Reliability Analysis and Modeling of ZigBee Networks

    Cheng-Min LIN  

     
    PAPER-Dependable Computing

      Page(s):
    68-78

    The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Numerous wireless sensor networks have adopted the ZigBee open standard to develop services that promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important, because these services stop if the system and network reliability is unstable. The ZigBee standard defines three kinds of networks: star, tree, and mesh. This paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology, and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve the reliability problem. For mesh networks, whose complexity is higher, a division technique is applied: a mesh network is classified into several non-reducible series systems and edge-parallel systems, so its reliability is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that reliability increases for mesh networks as the number of edges in parallel systems increases, while reliability drops quickly as the numbers of edges and nodes increase for all three networks. Greater resource usage is another factor that decreases reliability. In short, lower network reliability results from network complexity, greater resource usage, and complex object relationships.
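
The series-parallel decomposition used for the star, tree, and mesh analyses reduces to two textbook reliability formulas, sketched below; the block reliabilities in the example are illustrative values only.

```python
from functools import reduce

def series(rels):
    """A series system works only if every block works: R = prod(R_i)."""
    return reduce(lambda acc, r: acc * r, rels, 1.0)

def parallel(rels):
    """A parallel system works if any block works: R = 1 - prod(1 - R_i)."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), rels, 1.0)

# A mesh fragment decomposed into a parallel edge pair in series with one edge:
mesh_r = series([parallel([0.9, 0.9]), 0.95])
```

Adding more parallel edges raises `parallel(...)` toward 1, which matches the observation above that mesh reliability increases with the number of edges in parallel systems.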

  • Secure Bit-Plane Based Steganography for Secret Communication

    Cong-Nguyen BUI  Hae-Yeoun LEE  Jeong-Chun JOO  Heung-Kyu LEE  

     
    PAPER-Application Information Security

      Page(s):
    79-86

    A secure method for steganography is proposed. Pixel-value differencing (PVD) steganography and bit-plane complexity segmentation (BPCS) steganography have the weakness of generating blocky effects and noise in smooth areas and being detectable with steganalysis. To overcome these weaknesses, a secure bit-plane based steganography method on the spatial domain is presented, which uses a robust measure to select noisy blocks for embedding messages. A matrix embedding technique is also applied to reduce the change of cover images. Given that the statistical property of cover images is well preserved in stego-images, the proposed method is undetectable by steganalysis that uses RS analysis or histogram-based analysis. The proposed method is compared with the PVD and BPCS steganography methods. Experimental results confirm that the proposed method is secure against potential attacks.
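
Matrix embedding reduces the number of cover bits that must change per message bit. A standard sketch using the (7,4) Hamming parity-check matrix is shown below: 3 message bits are embedded in 7 cover bits by flipping at most one bit. This is the generic technique, not necessarily the exact code the authors apply.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j encodes j+1 in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def embed(cover, msg):
    """Flip at most one of 7 cover bits so the syndrome H.x equals the 3-bit message."""
    x = cover.copy()
    diff = ((H @ x) % 2) ^ msg
    if diff.any():
        pos = diff[0] * 4 + diff[1] * 2 + diff[2]   # 1-based column matching `diff`
        x[pos - 1] ^= 1
    return x

def extract(stego):
    """The receiver recovers the message as the syndrome of the stego bits."""
    return (H @ stego) % 2
```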

  • Robust High-Capacity Audio Watermarking Based on FFT Amplitude Modification

    Mehdi FALLAHPOUR  David MEGIAS  

     
    PAPER-Application Information Security

      Page(s):
    87-93

    This paper proposes a novel robust audio watermarking algorithm that embeds data and extracts it in a bit-exact manner by changing the magnitudes of the FFT spectrum. The key point is selecting the frequency band for embedding based on a comparison between the original and the MP3 compressed/decompressed signal, and on a suitable scaling factor. The experimental results show that the method has a very high capacity (about 5 kbps) without significant perceptual distortion (ODG about -0.25), and provides robustness against common audio signal processing such as added noise, filtering, and MPEG compression (MP3). Furthermore, the proposed method has a larger capacity (ratio of embedded bits to host bits) than recent image data hiding methods.
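
The general idea of hiding bits in spectral magnitudes can be illustrated with quantization-index modulation (QIM) on a single FFT magnitude. Note this is a generic stand-in for magnitude modification, not the paper's band-selection/scaling-factor method, and the `step` parameter is an illustrative assumption.

```python
def embed_bit(mag, bit, step=1.0):
    """Quantize a spectral magnitude to an even (bit 0) or odd (bit 1)
    multiple of `step`; moderate distortions keep the parity recoverable."""
    q = round(mag / step)
    if q % 2 != bit:
        q += 1
    return q * step

def extract_bit(mag, step=1.0):
    """Recover the bit as the parity of the nearest quantization level."""
    return round(mag / step) % 2
```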

  • An Ego-Motion Detection System Employing Directional-Edge-Based Motion Field Representations

    Jia HAO  Tadashi SHIBATA  

     
    PAPER-Pattern Recognition

      Page(s):
    94-106

    In this paper, a motion field representation algorithm based on directional edge information is developed. This work aims at building an ego-motion detection system using dedicated VLSI chips developed for real-time, low-power motion field generation. Directional edge maps are utilized instead of original gray-scale images to represent the local features of an image and to detect the local motion components in a moving image sequence. Motion detection by edge histogram matching drastically reduces the computational cost of block matching while achieving robust performance of the ego-motion detection system under dynamic illumination variation. Two kinds of feature vectors, the global motion vector and the component distribution vectors, are generated from a motion field at two different scales and perspectives. They are jointly utilized in a hierarchical classification scheme employing multiple-clue matching. As a result, the problems of motion ambiguity, as well as motion field distortion caused by camera shake during video capture, are resolved. The performance of the ego-motion detection system was evaluated under various circumstances, and the effectiveness of this work was verified.

  • A Rapid Model Adaptation Technique for Emotional Speech Recognition with Style Estimation Based on Multiple-Regression HMM

    Yusuke IJIMA  Takashi NOSE  Makoto TACHIBANA  Takao KOBAYASHI  

     
    PAPER-Speech and Hearing

      Page(s):
    107-115

    In this paper, we propose a rapid model adaptation technique for emotional speech recognition which enables us to extract paralinguistic information as well as linguistic information contained in speech signals. This technique is based on style estimation and style adaptation using a multiple-regression HMM (MRHMM). In the MRHMM, the mean parameters of the output probability density function are controlled by a low-dimensional parameter vector, called a style vector, which corresponds to a set of the explanatory variables of the multiple regression. The recognition process consists of two stages. In the first stage, the style vector that represents the emotional expression category and the intensity of its expressiveness for the input speech is estimated on a sentence-by-sentence basis. Next, the acoustic models are adapted using the estimated style vector, and then standard HMM-based speech recognition is performed in the second stage. We assess the performance of the proposed technique in the recognition of simulated emotional speech uttered by both professional narrators and non-professional speakers.

  • A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    Takashi NOSE  Takao KOBAYASHI  

     
    PAPER-Speech and Hearing

      Page(s):
    116-124

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of the style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm for estimating explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in acoustic features of speech, based on a maximum likelihood criterion. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values correlate with human perception.

  • Robust Character Recognition Using Adaptive Feature Extraction Method

    Minoru MORI  Minako SAWAKI  Junji YAMATO  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    125-133

    This paper describes an adaptive feature extraction method that exploits category-specific information to overcome both image degradation and deformation in character recognition. When recognizing multiple fonts, geometric features such as the directional information of strokes are often used, but they are weak against the deformation and degradation that appear in videos and natural scenes. To tackle these problems, the proposed method estimates the degree of deformation and degradation of an input pattern by comparing the input pattern with the template of each category as category-specific information. This estimation enables us to compensate for the aspect ratio associated with shape and for the degradation in feature values, and so obtain higher recognition accuracy. Recognition experiments using characters extracted from videos show that the proposed method is superior to conventional alternatives in resisting deformation and degradation.

  • Eyegaze Detection from Monocular Camera Image for Eyegaze Communication System

    Ryo OHTERA  Takahiko HORIUCHI  Hiroaki KOTERA  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    134-143

    An eyegaze interface is one of the key input-device technologies for the ubiquitous-computing society. In particular, an eyegaze communication system is very important and useful for severely handicapped users such as quadriplegic patients. Most conventional eyegaze tracking algorithms require specific light sources, equipment, and devices. In this study, a simple eyegaze detection algorithm is proposed that uses a single monocular video camera. The proposed algorithm works under the condition of a fixed head pose, but slight movement of the face is accepted. In our system, we assume that all users have the same eyeball size, based on physiological eyeball models. However, we succeed in calibrating the physiological movement of the eyeball center depending on the gazing direction by approximating it as a change in the eyeball radius. In the gaze detection stage, the iris is extracted from a captured face frame using the Hough transform. Then, the eyegaze angle is derived by calculating the Euclidean distance of the iris centers between the extracted frame and a reference frame captured in the calibration process. We apply our system to an eyegaze communication interface and verify its performance through key typing experiments with a visual keyboard on a display.

  • Geometric BIC

    Kenichi KANATANI  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    144-151

    The "geometric AIC" and the "geometric MDL" have been proposed as model selection criteria for geometric fitting problems. These correspond to Akaike's "AIC" and Rissanen's "BIC" well known in the statistical estimation framework. Another well known criterion is Schwarz' "BIC", but its counterpart for geometric fitting has not been known. This paper introduces the corresponding criterion, which we call the "geometric BIC", and shows that it is of the same form as the geometric MDL. Our result gives a justification to the geometric MDL from the Bayesian principle.

  • Real-Time Estimation of Fast Egomotion with Feature Classification Using Compound Omnidirectional Vision Sensor

    Trung Thanh NGO  Yuichiro KOJIMA  Hajime NAGAHARA  Ryusuke SAGAWA  Yasuhiro MUKAIGAWA  Masahiko YACHIDA  Yasushi YAGI  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    152-166

    For fast egomotion of a camera, computing feature correspondences and motion parameters by global search becomes highly time-consuming, so the complexity of the estimation needs to be reduced for real-time applications. In this paper, we propose a compound omnidirectional vision sensor and an algorithm for estimating its fast egomotion. The proposed sensor has both multiple baselines and a large field of view (FOV). Our method uses the multi-baseline stereo vision capability to classify feature points as near or far features. After the classification, we can estimate the camera rotation and translation separately, using random sample consensus (RANSAC), to reduce the computational complexity. The large FOV also improves robustness, since translation and rotation are clearly distinguished. To date, there has been no work on combining multi-baseline stereo with large-FOV characteristics for estimation, even though these characteristics are individually important in improving egomotion estimation. Experiments show that the proposed method is robust and produces reasonable accuracy in real time for fast motion of the sensor.
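
The RANSAC step follows the usual hypothesize-and-verify loop. A minimal sketch for a 2-D translation model is given below; the motion model, threshold, and data are illustrative and far simpler than the paper's rotation/translation estimation.

```python
import random

def ransac_translation(src, dst, iters=100, thresh=0.5, seed=0):
    """Estimate a 2-D translation from point correspondences, tolerating
    outliers. For a pure translation, the minimal sample is a single
    correspondence; the hypothesis with the most inliers wins."""
    rng = random.Random(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.randrange(len(src))
        tx, ty = dst[i][0] - src[i][0], dst[i][1] - src[i][1]
        inliers = sum(1 for (sx, sy), (dx, dy) in zip(src, dst)
                      if abs(sx + tx - dx) + abs(sy + ty - dy) < thresh)
        if inliers > best_inliers:
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers
```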

  • Deforming NURBS Surfaces to Target Curves for Immersive VR Sketching

    Junghoon KWON  Jeongin LEE  Harksu KIM  Gilsoo JANG  Youngho CHAI  

     
    PAPER-Computer Graphics

      Page(s):
    167-175

    Designing NURBS surfaces by manipulating control points directly requires too much trial and error for immersive VR applications. A more natural interface is provided by deforming a NURBS surface so that it passes through a given target point; by repeating such deformations, we can make the surface follow one or more target curves. These deformations can be achieved by modifying the pseudo-inverse matrix of the basis functions, but this matrix is often ill-conditioned. However, the application of a modified FE approach to the weights and control points provides controllable deformations, which are demonstrated across a range of example shapes.
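
The point-constraint deformation via the pseudo-inverse of the basis functions can be sketched as follows, without the paper's FE-style conditioning fix; the function name and dimensions are illustrative assumptions.

```python
import numpy as np

def deform_to_target(basis, ctrl, target):
    """Minimum-norm update of control points so that the surface point
    basis @ ctrl lands on `target`.

    basis:  (k,) basis-function values at the constrained parameter
    ctrl:   (k, d) control points
    target: (d,) desired surface position
    """
    B = basis.reshape(1, -1)
    residual = target.reshape(1, -1) - B @ ctrl
    delta = np.linalg.pinv(B) @ residual   # least-squares, minimum-norm change
    return ctrl + delta
```

Because `pinv` yields the minimum-norm solution, the control net moves as little as possible while still interpolating the target, which is the behavior one wants when repeating the deformation along a target curve.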

  • The Influence of a Low-Level Color or Figure Adaptation on a High-Level Face Perception

    Miao SONG  Keizo SHINOMORI  Shiyong ZHANG  

     
    PAPER-Biocybernetics, Neurocomputing

      Page(s):
    176-184

    Visual adaptation is a universal phenomenon of the human visual system. It affects not only the perception of low-level visual systems processing color, motion, and orientation, but also the perception of high-level visual systems processing complex visual patterns such as facial identity and expression. Although the mechanism of mutual interaction between systems at different levels remains unclear, this issue is key to understanding the hierarchical neural coding and computation mechanism. We therefore examined whether low-level adaptation influences high-level aftereffects by means of a cross-level adaptation paradigm (i.e., color and figure adaptation versus facial identity adaptation). We measured identity aftereffects on real-face test images under real-face, color-chip, and figure adapting conditions, and evaluated the cross-level mutual influence by the aftereffect size among the different adapting conditions. The results suggest that adaptation to color and figure contributes to the high-level facial identity aftereffect. Moreover, real-face adaptation produced a significantly stronger aftereffect than color-chip or figure adaptation. Our results reveal the possibility of cross-level adaptation propagation and implicitly indicate a high-level holistic facial neural representation. Based on these results, we discuss the theoretical implications of cross-level adaptation propagation for understanding hierarchical sensory neural systems.

  • Software Reliability Modeling Considering Fault Correction Process

    Lixin JIA  Bo YANG  Suchang GUO  Dong Ho PARK  

     
    LETTER-Software Engineering

      Page(s):
    185-188

    Many existing software reliability models (SRMs) are based on the assumption that fault correction activities take a negligible amount of time and resources, which is often invalid in real-life situations. Consequently, the estimated and predicted software reliability tends to be over-optimistic, which could in turn mislead management in related decision-making. In this paper, we first make an in-depth analysis of the real-life software testing process; then a Markovian SRM considering the fault correction process is proposed, and a parameter estimation method and a software reliability prediction method are established. A numerical example shows that the results obtained using the proposed model and methods tend to be more appropriate and realistic.

  • Classifying Categorical Data Based on Adoptive Hamming Distance

    Jae-Sung LEE  Dae-Won KIM  

     
    LETTER-Data Mining

      Page(s):
    189-192

    In this paper, we improve the classification performance for categorical data using an Adoptive Hamming Distance. We define equivalent categorical values and show how those values are searched to adopt the distance. The effectiveness of the proposed method is demonstrated using various classification examples.
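
The underlying distance can be sketched as below; the `equivalent` pair set stands in for the letter's searched equivalent categorical values, and all names and data are illustrative assumptions.

```python
def hamming(a, b, equivalent=frozenset()):
    """Hamming distance over categorical tuples, treating the listed value
    pairs as identical (a stand-in for searched equivalent values)."""
    return sum(1 for x, y in zip(a, b)
               if x != y and (x, y) not in equivalent and (y, x) not in equivalent)

def classify(sample, labeled):
    """1-nearest-neighbour label under the distance above."""
    return min(labeled, key=lambda item: hamming(sample, item[0]))[1]
```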

  • A Selective Scan Chain Activation Technique for Minimizing Average and Peak Power Consumption

    Yongjoon KIM  Jaeseok PARK  Sungho KANG  

     
    LETTER-Dependable Computing

      Page(s):
    193-196

    In this paper, we present an efficient low-power scan test technique which simultaneously reduces both average and peak power consumption. The selective scan chain activation scheme removes unnecessary scan chain utilization during the scan shift and capture operations, and statistical scan cell reordering enables efficient scan chain removal. The experimental results demonstrate that the proposed method consistently reduces average and peak power consumption during scan testing.
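
Shift power is commonly approximated by counting adjacent-bit transitions in the scan vectors; holding a deactivated chain contributes no transitions. The proxy below is a generic sketch, not the authors' selection algorithm.

```python
def shift_transitions(vector):
    """Number of adjacent-bit transitions in a scan vector; fewer transitions
    means fewer flip-flop toggles as the vector shifts through the chain."""
    return sum(1 for a, b in zip(vector, vector[1:]) if a != b)

# Two active chains; a third, deactivated chain would add zero transitions.
total = shift_transitions("00110") + shift_transitions("0101")
```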

  • On the Importance of Transition Regions for Automatic Speaker Recognition

    Bong-Jin LEE  Chi-Sang JUNG  Jeung-Yoon CHOI  Hong-Goo KANG  

     
    LETTER-Speech and Hearing

      Page(s):
    197-200

    This letter describes the importance of transition regions, e.g., at phoneme boundaries, for automatic speaker recognition, compared with using steady-state regions. Experimental results on automatic speaker identification tasks confirm that transition regions contain the most speaker-distinctive features. A possible reason for these results is described from the viewpoint of articulation, in particular the degrees of freedom of the articulators. These results are expected to provide useful information for designing an efficient automatic speaker recognition system.

  • Incorporating Frame Information to Semantic Role Labeling

    Joo-Young LEE  Young-In SONG  Hae-Chang RIM  Kyoung-Soo HAN  

     
    LETTER-Natural Language Processing

      Page(s):
    201-204

    In this paper, we suggest a new probabilistic model for semantic role labeling, which uses the frameset of the predicate as explicit linguistic knowledge to provide global information on the predicate-argument structure that a local classifier is unable to capture. The proposed model consists of three sub-models: a role sequence generation model, a frameset generation model, and a matching model. The role sequence generation model generates the semantic role sequence candidates of a given predicate using the local classification approach widely used in previous research. The frameset generation model estimates the probability of each frameset that the predicate can take. The matching model measures the degree of matching between a generated role sequence and a frameset using several features, which are developed to represent the predicate-argument structure information described in the frameset. The experiments show that using knowledge about the predicate-argument structure is effective for selecting a more appropriate semantic role sequence.