IEICE TRANSACTIONS on Information

  • Impact Factor: 0.72
  • Eigenfactor: 0.002
  • Article Influence: 0.1
  • CiteScore: 1.4

Volume E86-D No.12 (Publication Date: 2003/12/01)

    Special Issue on Dependable Computing
  • FOREWORD
    Haruo YOKOTA
    Page(s): 2501-2502

  • Consideration of Fault Tolerance in Autonomic Computing Environment
    Yoshihiro TOHMA
    INVITED PAPER
    Page(s): 2503-2507

    Since the defining characteristic of current information systems is the dynamic and concurrent change of their configurations and scales, combined with non-stop provision of their services, system management must inevitably rely on autonomic computing. Since fault tolerance is one of the important system management issues, it should also be incorporated into the autonomic computing environment. This paper argues what should be taken into consideration and what approaches are available to realize fault tolerance in such environments.

  • A Transparent Transient Faults Tolerance Mechanism for Superscalar Processors
    Toshinori SATO
    PAPER-Dependable Systems
    Page(s): 2508-2516

    In this paper, we propose a fault-tolerance mechanism for microprocessors, which detects transient faults and recovers from them. The investigation of fault-tolerance techniques for microprocessors is driven by two issues: One regards deep submicron fabrication technologies. Future semiconductor technologies could become more susceptible to alpha particles and other cosmic radiation. The other is the increasing popularity of mobile platforms. Cellular telephones are currently used for applications which are critical to our financial security, such as mobile banking, mobile trading, and making airline ticket reservations. Such applications demand that computer systems work correctly. In light of this, we propose a mechanism which is based on an instruction reissue technique for incorrect data speculation recovery and utilizes time redundancy, and evaluate our proposal using a timing simulator.
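
    The proposed mechanism is microarchitectural, but the time-redundancy idea it builds on (execute an operation twice and reissue on mismatch) can be pictured in a few lines. Below is a minimal Python sketch of that idea only, not of the paper's pipeline; the fault model and all names are illustrative assumptions.

      import random

      def execute(op, fault_rate=0.01):
          # Run an operation; a transient fault may flip a result bit.
          result = op()
          if random.random() < fault_rate:
              result ^= 1  # simulated single-event upset
          return result

      def time_redundant(op, max_reissues=10):
          # Execute twice; on mismatch, reissue until two runs agree.
          for _ in range(max_reissues):
              a, b = execute(op), execute(op)
              if a == b:
                  return a  # agreement: accept the result
          raise RuntimeError("persistent mismatch: fault may not be transient")

      print(time_redundant(lambda: 2 + 3))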

  • PREGMA: A New Fault Tolerant Cluster Using COTS Components for Internet Services
    Takeshi MISHIMA, Takeshi AKAIKE
    PAPER-Dependable Systems
    Page(s): 2517-2526

    We propose a new dependable system called PREGMA (Platform for Reliable Environment based on a General-purpose Machine Architecture). PREGMA aims to meet two requirements for Internet services: fault tolerance and low cost. It provides fault tolerance, avoiding system failure and preventing data corruption even when faults occur: it masks faults by running multiple replicated servers, each possessing its own data, in a loosely synchronized manner and delivering the majority vote as output to clients. Moreover, PREGMA is composed of unmodified COTS (Commercial Off-The-Shelf) components, which makes it possible to offer the services at low cost. We investigated two approaches to achieving redundancy of the Coordinator, the core of PREGMA: the primary backup method and the active replication method. We evaluated the effectiveness of PREGMA in terms of throughput overhead, data integrity and recovery time. The results for a prototype show that PREGMA using the Coordinator with the primary backup method outperforms that with the active replication method and has throughput only 3% lower than a non-redundant system. The results also show that, in the event of failure, the recovery time is less than one second and no data corruption occurs.
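
    PREGMA's fault masking reduces to majority voting over the outputs of loosely synchronized replicas. As a toy illustration of that single idea (not of the Coordinator protocol itself), assuming replica responses are directly comparable values:

      from collections import Counter

      def majority_vote(responses):
          # With 2f+1 replicas, up to f corrupted or divergent
          # responses are masked by the majority answer.
          value, count = Counter(responses).most_common(1)[0]
          return value if count > len(responses) // 2 else None

      # Three replicas, one faulty: the vote masks the fault.
      print(majority_vote(["balance=100", "balance=100", "balance=999"]))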

  • A Self-Adjusting Destage Algorithm with High-Low Water Mark in Cached RAID5
    Young Jin NAM, Chanik PARK
    PAPER-Dependable Systems
    Page(s): 2527-2535

    The High-Low Water Mark destage (HLWM) algorithm is widely used, owing to the simplicity of its operations, to let cached RAID5 flush dirty data from its write cache to disks. It starts and stops a destaging process based on two thresholds that are configured at initialization time with the best available knowledge of the underlying storage performance capability and the workload pattern, including traffic intensity, access patterns, etc. However, each time the current workload deviates from the original, the thresholds need to be re-configured for the changed workload. This paper proposes an efficient destage algorithm, called adaptive thresholding, which automatically re-configures its initial thresholds according to the changed traffic intensity and access patterns. The core of adaptive thresholding is to define the two thresholds as the product of the observed increasing and decreasing rates of the write cache occupancy level and the times required to fill and empty the write cache. We implement the proposed algorithm on an actual RAID system and verify its auto-reconfiguration ability with synthetic workloads having different levels of traffic intensity and access patterns. Performance evaluations under well-known traced workloads reveal that the proposed algorithm reduces disk IO traffic by about 12% with a 6% increase in the overwrite ratio compared with the HLWM algorithm.
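
    One possible reading of the threshold definition above, sketched in Python; the parameter names and the clamping are our assumptions, not the paper's code:

      def adaptive_thresholds(inc_rate, dec_rate, t_fill, t_empty, cache_size):
          # High/low water marks as rate x time: inc_rate and dec_rate are
          # the observed rates of write-cache occupancy change (blocks/s),
          # t_fill and t_empty the times (s) to fill and empty the cache.
          high = min(inc_rate * t_fill, cache_size)  # start destaging here
          low = min(dec_rate * t_empty, high)        # stop destaging here
          return high, low

      # Occupancy rises at 50 blk/s and drains at 80 blk/s.
      print(adaptive_thresholds(50, 80, 4.0, 2.0, cache_size=400))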

  • A Novel Learning Algorithm Which Makes Multilayer Neural Networks Multiple-Weight-Fault Tolerant
    Itsuo TAKANAMI, Yasuhiro OYAMA
    PAPER-Dependable Systems
    Page(s): 2536-2543

    We propose an efficient algorithm for making multi-layered neural networks (MLNs) fault-tolerant to all multiple weight faults in a multi-dimensional interval, by intentionally injecting the two extreme multi-dimensional values of the interval into the weights of selected multiple links during the learning phase. The degree of fault tolerance to a multiple weight fault is measured by the number of essential multiple links. First, we analytically discuss how to choose the links to be injected effectively, and present a learning algorithm for making MLNs fault tolerant to all multiple (i.e., simultaneous) faults in the interval defined by the two multi-dimensional extreme points. It is then proved that after the learning algorithm finishes successfully, MLNs are fault tolerant to all multiple faults in the interval. It is also shown that the time for a weight modification cycle depends little on the fault multiplicity k when k is small. These results are confirmed by simulation.
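
    The injection idea, evaluating the network with selected weights pinned to the two extremes of the fault interval, can be sketched as follows. This is a schematic only; the tiny two-layer network, the loss, and all names are placeholder assumptions, not the paper's algorithm.

      import numpy as np

      def loss_under_faults(W1, W2, X, T, links, w_min, w_max):
          # Worst-case mean squared error over the fault-free network
          # and every variant with one selected link pinned to an extreme.
          def mse(W1v):
              Y = np.tanh(np.tanh(X @ W1v) @ W2)
              return float(((Y - T) ** 2).mean())
          worst = mse(W1)
          for (i, j) in links:
              for v in (w_min, w_max):
                  Wf = W1.copy()
                  Wf[i, j] = v  # injected extreme weight value
                  worst = max(worst, mse(Wf))
          return worst  # a fault-tolerant learning phase would minimize this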

  • A Three-tier Active Replication Protocol for Large Scale Distributed Systems
    Carlo MARCHETTI, Sara Tucci PIERGIOVANNI, Roberto BALDONI
    PAPER-Dependable Software
    Page(s): 2544-2552

    Deploying server replicas of a service across an asynchronous distributed system (e.g., the Internet) is a real practical challenge. Indeed, this target cannot be achieved by classical software replication techniques (e.g., passive and active replication), as these techniques usually rely on group communication toolkits that require server replicas to run over a partially synchronous distributed system in order to solve the underlying agreement problem. This paper proposes a three-tier architecture for software replication that encapsulates the need for partial synchrony in a specific software component of a mid-tier, freeing replicas and clients from any underlying partial synchrony assumption. We then show how to specialize the mid-tier to manage active replication of server replicas.

  • Evaluation of Checkpointing Mechanism on SCore Cluster System
    Masaaki KONDO, Takuro HAYASHIDA, Masashi IMAI, Hiroshi NAKAMURA, Takashi NANYA, Atsushi HORI
    PAPER-Dependable Software
    Page(s): 2553-2562

    Cluster systems are widely used because of their good performance/cost ratio. However, their reliability in practical environments has not been well studied so far. As the number of commodity components in a cluster system increases, it becomes indispensable for system software to support reliability. The SCore cluster system software is a parallel programming environment for High Performance Computing (HPC). SCore provides a checkpointing and rollback-recovery mechanism for high availability. In this paper, we analyze and evaluate the checkpointing and rollback-recovery mechanisms of SCore quantitatively. The experimental results reveal that the time required for checkpointing scales very well with respect to the number of computing nodes. However, this time is quite long due to the low effective network bandwidth. Based on the results, we modify SCore and successfully make checkpointing and recovery 1.8-2.8 times and 3.7-5.0 times faster, respectively. This is very helpful for cluster systems in achieving high performance and high availability.

  • Multidimensional Characterization of the Impact of Faulty Drivers on the Operating Systems Behavior
    João DURÃES, Henrique MADEIRA
    PAPER-Dependable Software
    Page(s): 2563-2570

    This paper presents the results of continuing research on the practical characterization of operating system (OS) behavior in the presence of software faults in OS components, such as faulty device drivers. The methodology used is based on the emulation of software faults in device drivers and observation of the behavior of the overall system with regard to a comprehensive set of failure modes, analyzed along different dimensions related to multiple user perspectives. The software faults are emulated by injecting, at machine-code level, specific mutations that reproduce the code generated by compilers when typical programming errors occur in high-level language code. Two important aspects of this methodology are its independence from source code availability and its use of simple and established practices to evaluate operating system failure modes, thus allowing its use as a dependability benchmarking technique. The generalization of the methodology to any software system built of discrete and identifiable components is also discussed.

  • Impact of Internal and External Software Faults on the Linux Kernel
    Tahar JARBOUI, Jean ARLAT, Yves CROUZET, Karama KANOUN, Thomas MARTEAU
    PAPER-Dependable Software
    Page(s): 2571-2578

    The application of fault injection in the context of dependability benchmarking is far from being straightforward. One decisive issue to be addressed is to what extent injected faults are representative of the considered faults. This paper proposes an approach to analyze the effects of real and injected faults.

  • Feature Interaction Detection by Bounded Model Checking
    Tomoyuki YOKOGAWA, Tatsuhiro TSUCHIYA, Masahide NAKAMURA, Tohru KIKUNO
    PAPER-Dependable Communication
    Page(s): 2579-2587

    Feature interaction is the term used in telephony systems to refer to inconsistent conflicts between multiple communication services. Feature interaction is considered a major obstacle to developing reliable telephony systems, and many approaches have been explored to resolve it. In this paper we present an automatic method for detecting latent feature interactions in service specifications. This method uses bounded model checking as its basis. The basic idea behind bounded model checking is to reduce the detection problem to the propositional satisfiability (SAT) decision problem. For asynchronous systems like telecommunication systems, however, traditional bounded model checking does not work well because the resulting propositional formulas tend to become very large. We propose a new encoding scheme to overcome this problem and show its effectiveness through comparative experiments with traditional bounded model checking and other model checking methods.

  • An Algorithm for Node-to-Set Disjoint Paths Problem in Burnt Pancake Graphs
    Keiichi KANEKO
    PAPER-Dependable Communication
    Page(s): 2588-2594

    A burnt pancake graph is a variant of the Cayley graph, and its topology is suitable for massively parallel systems. For burnt pancake graphs, however, there is still much room for further research. Hence, in this study, we focus on n-burnt pancake graphs and propose an algorithm to obtain n disjoint paths from a source node to n destination nodes in time polynomial in n, where n is the degree of the graph. In addition, we estimate the time complexity of the algorithm and the sum of the path lengths, and give a proof of correctness of the algorithm. Moreover, we report computer simulation results evaluating the average performance of the algorithm.
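
    For orientation, a node of the n-burnt pancake graph is a signed permutation, and each of its n neighbors is reached by reversing a prefix and flipping the signs inside it. A minimal sketch of the topology (not of the paper's routing algorithm):

      def neighbors(node):
          # node is a tuple of signed integers, e.g. (1, -3, 2).
          # Edge k reverses the first k elements and negates their signs.
          n = len(node)
          return [tuple(-x for x in reversed(node[:k])) + node[k:]
                  for k in range(1, n + 1)]

      print(neighbors((1, 2, 3)))
      # [(-1, 2, 3), (-2, -1, 3), (-3, -2, -1)]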

  • Unequal Error Protection in Ziv-Lempel Coding
    Eiji FUJIWARA, Masato KITAKAMI
    PAPER-Dependable Communication
    Page(s): 2595-2600

    Data compression is widely applied in computer and communication systems; in particular, lossless compression is applied to text. Since compressed data are very sensitive to errors, several error control methods for data compression using probability models, such as arithmetic coding, have been proposed. This paper proposes applying an unequal error protection (UEP) scheme to LZ77 coding and LZW coding. We investigate the structure of the compressed data and, through theoretical analysis and computer simulation, identify the part that is more sensitive to errors than the rest. The UEP scheme protects this error-sensitive part more strongly than the others. Computer simulation shows that the proposed scheme recovers from errors in the compressed data more effectively than conventional methods.

  • Partial Order Reduction for Timed Circuit Verification Based on Level Oriented Model
    Tomoya KITAI, Yusuke OGURO, Tomohiro YONEDA, Eric MERCER, Chris MYERS
    PAPER-Verification and Dependability Analysis
    Page(s): 2601-2611

    Using a level-oriented model for the verification of asynchronous circuits helps users easily construct formal models with high readability, and to model data-path circuits naturally. On the other hand, using such a model for larger circuits requires some technique to avoid the state explosion problem. This paper first defines a level-oriented formal model based on time Petri nets, and then proposes a partial order reduction algorithm that prunes unnecessary state generation while guaranteeing the correctness of the verification.

  • Formal Verification of an Intrusion-Tolerant Group Membership Protocol
    HariGovind V. RAMASAMY, Michel CUKIER, William H. SANDERS
    PAPER-Verification and Dependability Analysis
    Page(s): 2612-2622

    The traditional approach for establishing the correctness of group communication protocols is through rigorous arguments. While this is a valid approach, the likelihood of subtle errors in the design and implementation of such complex distributed protocols is not negligible. The use of formal verification methods has been widely advocated to instill confidence in the correctness of protocols. In this paper, we describe how we used the SPIN model checker to formally verify a group membership protocol that is part of an intrusion-tolerant group communication system. We describe how we successfully tackled the state-space explosion problem by determining the right abstraction level for formally specifying the protocol. The verification exercise not only formally showed that the protocol satisfies its correctness claims, but also provided information that will help us make the protocol more efficient without violating correctness.

  • Analyzing the Impact of Data Errors in Safety-Critical Control Systems
    Orjan ASKERDAL, Magnus GAFVERT, Martin HILLER, Neeraj SURI
    PAPER-Verification and Dependability Analysis
    Page(s): 2623-2633

    Computers are increasingly used for implementing control algorithms in safety-critical embedded applications, such as engine control, braking control and flight surface control. Consequently, computer errors can have severe impact on the safety of such systems. Addressing the coupling of control performance with computer related errors, this paper develops a methodology for analyzing the impacts data errors have on control system dependability. The impact of a data error is measured as the resulting control error. We use maximum bounds on this measure as the criterion for control system failure (i.e., if the control error exceeds a certain threshold, the system has failed). In this paper we a) develop suitable models of computer faults for analysis of control level effects and related analysis methods, and b) apply traditional control theory analysis methods for understanding the impacts of data errors on system dependability. An automobile slip-control brake-system is used as an example showing the viability of our approach.
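
    In symbols, the failure criterion described here is simply a bound on the control error (the notation is ours, not the paper's): with reference input r(t), controlled output y(t), and tolerance \epsilon_{\max}, the system has failed iff

      \exists t : \; |r(t) - y(t)| > \epsilon_{\max}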

  • Using VHDL-Based Fault Injection for the Early Diagnosis of a TTP/C Controller
    Joaquín GRACIA, Juan C. BARAZA, Daniel GIL, Pedro J. GIL
    PAPER-Verification and Dependability Analysis
    Page(s): 2634-2641

    Nowadays, the use of dependable systems is becoming widespread, and diagnosis is an important step in their design. Diagnosis in the early phases of the design cycle saves time and money. Fault injection can be used during the design process, and with Hardware Description Languages, particularly VHDL, it is possible to accomplish this early diagnosis. In recent years, the Time-Triggered Architecture (TTA) has emerged as a hard real-time fault-tolerant architecture for embedded systems. This novel architecture is gaining adherents mainly in the avionics and automotive industries (x-by-wire). The TTA implements a synchronous protocol with static scheduling that has been specifically targeted at hard real-time fault-tolerant distributed systems. In this work, we present a study of the VHDL model of a communication controller based on the TTA, on which a number of fault injection campaigns have been carried out. We discuss the results produced and suggest solutions to the problems detected.

  • Dependability Evaluation with Fault Injection Experiments
    Piotr GAWKOWSKI, Janusz SOSNOWSKI
    PAPER-Verification and Dependability Analysis
    Page(s): 2642-2649

    In this paper we evaluate program susceptibility to hardware faults using a fault injector. The experiments cover many applications with different features. The effectiveness of software techniques for improving system dependability is analyzed, and practical aspects of embedding these techniques in real programs are discussed, as they have a significant impact on the final fault robustness.

  • An Alternative Test Generation for Path Delay Faults by Using Ni-Detection Test Sets
    Hiroshi TAKAHASHI, Kewal K. SALUJA, Yuzo TAKAMATSU
    PAPER-Test
    Page(s): 2650-2658

    In this paper, we propose an alternative method of generating tests for path delay faults that does not generate a test for each path delay fault directly. The proposed method generates an N-propagation test-pair set by using an Ni-detection test set for single stuck-at faults. The N-propagation test-pair set is a set of vector pairs which contains N distinct vector pairs for every transition fault at a check point, where the check points are the primary inputs and fanout branches of a circuit. We do not target the path delay faults during test generation; instead, the N-propagation test-pair set is generated for the transition (both rising and falling) faults at the check points of the circuit. The generated tests are then simulated to determine their effectiveness for singly testable path delay faults and robust path delay faults. Results of experiments on the ISCAS'85 benchmark circuits show that the N-propagation test-pair sets obtained by our method are effective in testing path delay faults.

  • Delay Fault Testing for CMOS Iterative Logic Arrays with a Constant Number of Patterns
    Shyue-Kung LU
    PAPER-Test
    Page(s): 2659-2665

    Iterative Logic Arrays (ILAs) are widely used in many applications, e.g., general-purpose processors, digital signal processors, and embedded processors. Owing to advanced VLSI technology, new defect mechanisms exist in fabricated circuits, and the traditional single-cell fault model is not sufficient to ascertain the quality of manufactured products. Therefore, more realistic fault models such as sequential fault models and delay fault models should also be adopted. A cell delay fault occurs if and only if an input transition cannot be propagated to the cell's output through a path in the cell within a specified clock period. It has been shown that all SIC (single input change) pairs of a circuit are sufficient to detect all robustly detectable path delay faults within the circuit. We extend the concept of SIC pairs to iterative logic arrays. We say that an ILA is C-testable for cell delay faults if it is possible to apply all SIC pairs of a cell to each cell of the array in such a way that the number of test pairs for the array is a constant. This is based on a novel fault model called the Realistic Sequential Cell Fault Model (RS-CFM). Necessary conditions for applying this test set to each cell in the array and propagating faulty effects to the primary outputs are derived. An efficient algorithm is also presented to obtain such a test sequence. We use the pipelined array multiplier as an example to illustrate our approach. The number of test pairs for completely testing the array is only 84, and the hardware overhead to make it delay fault testable is about 5.66%.

  • Test Pattern Generation for CMOS Open Defect Detection by Supply Current Testing under AC Electric Field
    Hiroyuki YOTSUYANAGI, Taisuke IWAKIRI, Masaki HASHIZUME, Takeomi TAMESADA
    PAPER-Test
    Page(s): 2666-2673

    In this paper, supply current testing for detecting open defects in CMOS circuits is discussed. Open defects are known to cause unpredictable faulty effects and to be difficult to detect. In our test method, an AC electric field is applied during testing. The voltage at a floating node caused by an open defect is varied by the applied electric field, allowing the defect to be detected. A test pattern generation procedure for open defects is proposed and applied to benchmark circuits. The experimental results show that the number of test vectors for opens is much smaller than that for stuck-at faults. An experimental evaluation on an LSI chip is also presented to demonstrate the feasibility of our test method.

  • A Test Plan Grouping Method to Shorten Test Length for RTL Data Paths under a Test Controller Area Constraint
    Toshinori HOSOKAWA, Hiroshi DATE, Masahide MIYAZAKI, Michiaki MURAOKA, Hideo FUJIWARA
    PAPER-Test
    Page(s): 2674-2683

    This paper proposes a test generation method using several partly compacted test plan tables for RTL data paths. Combinational modules in data paths are tested using several partly compacted test plan tables. Each partly compacted test plan table is generated from each grouped test plan set and is used to test combinational modules corresponding to the grouped test plans. The values of control signals in a partly compacted test plan table are supplied by a test controller. This paper also proposes the architecture of a test controller which can be synthesized in a reasonable amount of time, and proposes a test plan grouping method to shorten test length for data paths under a test controller area constraint. Experimental results for benchmarks show that the test lengths are shortened by 4 to 36% with -9 to 8% additional test controller area compared with the test generation method using test plans.

  • Regular Section
  • Low Complexity Multiplexer-Based Parallel Multiplier of GF(2^m)
    Gi-Young BYUN, Heung-Soo KIM
    PAPER-Computer System Element
    Page(s): 2684-2690

    Two operations, polynomial multiplication and modular reduction, are newly derived from the properties of the modified Booth's algorithm and irreducible all-one polynomials, respectively. A new and effective methodology is proposed for computing multiplication over a class of fields GF(2^m) using these two operations, and a low-complexity multiplexer-based multiplier is presented based on it. Our multiplier consists of m 2-input AND gates, (m^2 + 3m - 4)/2 2-input XOR gates, and m(m - 1)/2 4-to-1 multiplexers. For a detailed estimation of the complexity of our multiplier, we extend this analysis to the transistor count, using a standard CMOS VLSI realization. The comparison shows that our work is advantageous in terms of circuit complexity and requires less delay time than previously reported multipliers. Moreover, our architecture is very regular and modular, and is therefore well-suited for VLSI implementation.
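
    The abstract's gate counts are explicit, so the circuit complexity for a given field size follows directly. A small sketch evaluating them (the dictionary keys are our labels):

      def multiplier_complexity(m):
          # Gate counts for the proposed GF(2^m) multiplier,
          # as stated in the abstract.
          return {
              "2-input AND": m,
              "2-input XOR": (m * m + 3 * m - 4) // 2,
              "4-to-1 MUX": m * (m - 1) // 2,
          }

      print(multiplier_complexity(8))
      # {'2-input AND': 8, '2-input XOR': 42, '4-to-1 MUX': 28}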

  • Detection of Autosymmetry in Logic Functions Using Spectrum Technique
    Ryoji ISHIKAWA, Goro KODA, Kensuke SHIMIZU
    PAPER-Computer System Element
    Page(s): 2691-2697

    The discrete nature of data in a functional domain can generally be replaced by the global nature of data in the spectrum domain. In this paper we propose a fast procedure to detect autosymmetric functions as an application of the spectrum technique. Autosymmetric functions differ from the usual symmetric functions and are strongly related to EXOR-based representations. It is known that many practical logic networks are autosymmetric, and this property makes autosymmetric functions a useful class for realizing compact networks with EXOR gates. Our procedure detects autosymmetric functions quickly by using spectral coefficients. In experiments, our technique detected the autosymmetry of most networks with a small number of checks of the spectrum.

  • Comparative Performance Analysis of Ordering Strategies in Atomic Broadcast Algorithms
    Xavier DEFAGO, Andre SCHIPER, Peter URBAN
    PAPER-Computer Systems
    Page(s): 2698-2709

    In this paper, we present the results of a comparative analysis of Atomic Broadcast algorithms. The analysis uses an analytical method to compare the performance of five different classes of Atomic Broadcast algorithms, where the classes are determined by the mechanisms the algorithms use to define the delivery order. To evaluate performance, the analysis relies on contention-aware metrics that measure both latency and throughput. The results yield interesting insight into the performance tradeoffs of different Atomic Broadcast algorithms, providing helpful information to algorithm and system designers.

  • Differential Evaluation of Fixpoints of Non-distributive Functions
    Joonseon AHN
    PAPER-Theory and Models of Software
    Page(s): 2710-2721

    We present a differential fixpoint computation method for program analyses based on abstract interpretation. An analysis of a program based on abstract interpretation can be expressed using a monotonically increasing function, and a fixpoint of the function is the analysis result. To compute a fixpoint, the function is applied repeatedly until the results become stable. This entails redundant computation, because each new result always includes the former result. Differential methods avoid such redundancy by computing only the increment of each function application. Compared with other differential fixpoint evaluation methods, our method can deal with non-distributive functions, which often occur in practical program analyses. To compute increments for non-distributive functions, we adopt an indirect approach, using a differential evaluation rule for the expressions that form function bodies. We have designed a differential worklist algorithm and applied it to implement an alias and constant propagation analysis. Experiments show that our method avoids much redundant computation.
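
    For orientation, the baseline that differential evaluation improves on is the classic worklist fixpoint below, which re-applies the full transfer function of each changed node; the paper's method instead propagates only the increment, and handles non-distributive transfer functions. A minimal sketch under those caveats (all names are ours):

      def fixpoint(nodes, dependents, transfer, bottom):
          # transfer(n, env) must be monotonic; env maps node -> value.
          env = {n: bottom for n in nodes}
          work = list(nodes)
          while work:
              n = work.pop()
              new = transfer(n, env)
              if new != env[n]:  # value grew: revisit dependents
                  env[n] = new
                  work.extend(d for d in dependents.get(n, ()) if d not in work)
          return env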

  • Mining Traversal Patterns on the Internet
    Tzung-Shi CHEN
    PAPER-Databases
    Page(s): 2722-2730

    Mining traversal patterns on the Internet is one of the critical issues in exploring user access behaviors. In this paper, we propose a new data mining scheme for mining frequent trip traversal patterns on the Internet. First, we define a trip traversal as a historical contiguous sequence of web sites or web pages surfed or visited by one user on an information-providing system. Next, we derive all of the maximal trip traversals by analyzing and filtering the collected trip traversals. For mining the large trip traversals from the maximal trip traversals, we present a data mining scheme integrated with previously proposed schemes. The extracted large trip traversals can be regarded as the realistic frequent browsing behaviors of most users, either on a web site or on an information-providing system such as a proxy server. Finally, we design and implement a data mining system that explores large trip traversal patterns in order to capture user access patterns on a proxy server.
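
    A toy version of the counting step: enumerate contiguous sub-traversals of each session and keep the frequent ones. The real scheme works on maximal trip traversals; min_support is our placeholder parameter:

      from collections import Counter

      def frequent_traversals(sessions, min_support):
          counts = Counter()
          for s in sessions:  # each session: a contiguous page sequence
              for i in range(len(s)):
                  for j in range(i + 1, len(s) + 1):
                      counts[tuple(s[i:j])] += 1
          return {t: c for t, c in counts.items() if c >= min_support}

      logs = [["a", "b", "c"], ["a", "b", "d"], ["b", "c"]]
      print(frequent_traversals(logs, 2))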

  • Translation for Constraint Descriptions into a Colored Petri Net to Analyze Object Migration Behavior
    Hideki SATO
    PAPER-Databases
    Page(s): 2731-2742

    In databases based on a multi-aspect object data model, which enables multiple aspects of a real-world entity to be represented and to be acquired or lost dynamically, Object Migration (OM), which updates the membership relationships between an object and classes, occurs as the properties of the object evolve over its lifetime. We have proposed an OM behavior modeling framework using Colored Petri Nets (CPN) to analyze OM behavior. Based on the proposed framework, this paper presents a technique for constructing OM behavior models that takes OM constraint descriptions and class schemas as its input. The presented technique makes it easy to construct consistent and complete OM behavior models, since OM constraints are described in a simple, modular, and declarative form.

  • An Efficiently Self-Reconstructing Array System Using E-1-Track Switches
    Tadayoshi HORITA, Itsuo TAKANAMI
    PAPER-Fault Tolerance
    Page(s): 2743-2752

    The E-1-track switch torus array model and the "EAR" reconfiguration method are proposed for the fault tolerance of mesh- or torus-connected processor arrays; the original idea of EAR comes from EAM. These and other methods are compared in terms of (run-time) array reliability, hardware overhead, and reconfiguration time. A designer choosing among fault-tolerant methods should weigh their features comprehensively, case by case, and we consider the results given in this paper useful for that choice.

  • Active Learning with Model Selection -- Simultaneous Optimization of Sample Points and Models for Trigonometric Polynomial Models
    Masashi SUGIYAMA, Hidemitsu OGAWA
    PAPER-Pattern Recognition
    Page(s): 2753-2763

    In supervised learning, the selection of sample points and models is crucial for acquiring high generalization capability. So far, the problems of active learning and model selection have been studied independently. If sample points and models are optimized simultaneously, a higher level of generalization capability can be expected. We call this problem active learning with model selection. However, it cannot generally be solved by simply combining existing active learning and model selection techniques, because of the active learning/model selection dilemma: the model should be fixed for selecting sample points, and conversely the sample points should be fixed for selecting models. In this paper, we show that the dilemma can be resolved if there is a set of sample points that is optimal for all models under consideration. Based on this idea, we give a practical procedure for active learning with model selection in trigonometric polynomial models. The effectiveness of the proposed procedure is demonstrated through computer simulations.
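
    For reference, an order-n trigonometric polynomial model is the standard family (the paper's exact parameterization may differ):

      f(x) = a_0 + \sum_{k=1}^{n} \left( a_k \cos kx + b_k \sin kx \right)

    Model selection then corresponds to choosing the order n, while active learning chooses the sample points x_i.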

  • Moving Target Detection and Tracking Using Edge Features Detection and Matching
    Alireza BEHRAD, Seyed AHMAD MOTAMEDI
    PAPER-Pattern Recognition
    Page(s): 2764-2774

    A new algorithm for the fast detection and tracking of moving targets using a mobile video camera is presented. Our algorithm is based on image feature detection and matching; to detect features, we use edge points and their accumulated curvature. Detected features are matched with their corresponding points using a new method called fuzzy-edge based feature matching. The proposed algorithm has two modes: detection and tracking. In the detection mode, background motion is estimated and compensated using an affine transformation, and the resulting motion-rectified image is used to detect the target location with a split-and-merge algorithm. Other features are also checked for precise detection of the target. When the target is identified, the algorithm switches to the tracking mode, which has two phases. In the first phase, the algorithm tracks the target to recover the target bounding-box more precisely; once the bounding-box is determined, the second phase starts to track the specified target more accurately. The algorithm performs well in environments with noise and illumination change.

  • A Packet Loss Recovery Method Using Packets Arrived behind the Playout Time for CELP Decoding
    Masahiro SERIZAWA, Hironori ITO
    PAPER-Speech and Hearing
    Page(s): 2775-2779

    This paper proposes a packet loss recovery method for CELP (Code Excited Linear Prediction) decoding that uses packets arriving after their playout time. The proposed method recovers the synchronization of the filter states between encoding and decoding in the period following a packet loss. The recovery is performed by replacing the degraded filter states with ones calculated from the late-arriving packet during decoding. When the proposed method is applied to the AMR (Adaptive Multi-Rate) speech decoder, it improves the segmental SNR (Signal-to-Noise Ratio) by 0.2 to 1.8 dB at packet loss rates of 2 to 10% in the case where all packet losses are due to late arrival. PESQ (Perceptual Evaluation of Speech Quality) results also show that the proposed method slightly improves speech quality. Subjective test results show that five-grade mean opinion scores are improved by 0.35 and 0.28 at a packet loss rate of 5% for speech coding bitrates of 7.95 and 12.2 kbit/s, respectively.
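
    The state-repair idea can be pictured with a toy one-pole synthesis filter (this illustrates filter-state resynchronization in general, not the AMR decoder):

      def step(state, excitation, a=0.9):
          # y[n] = a * y[n-1] + x[n]
          return a * state + excitation

      state = step(0.0, 1.0)         # packet 1 decoded
      concealed = step(state, 0.0)   # packet 2 lost at playout: conceal
      resynced = step(state, 0.5)    # packet 2 arrives late: rebuild the
                                     # state it would have produced
      # Packet 3 is decoded from `resynced` instead of `concealed`,
      # restoring encoder/decoder filter-state synchronization.
      print(concealed, resynced)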

  • Color Transfer between Images Based on Basic Color Category
    Youngha CHANG, Suguru SAITO, Masayuki NAKAJIMA
    PAPER-Image Processing, Image Pattern Recognition
    Page(s): 2780-2785

    Paintings are usually more appealing than photographic images, because paintings can incorporate styles based on the artist's subjective view of the motif. This style can be distinguished by looking at elements such as motif, color, shape deformation and brush texture. In our work, we focus on the "color" element and devise a method for transforming the colors of an input photograph according to a reference painting. To do this, we bring the concept of basic color categories into the color transformation process. We assume that color transformations from one basic color category to another may cause a peculiar impression, and therefore restrict each color transformation to stay within its basic color category. Our algorithm first categorizes each pixel color of the photograph into one of eleven basic color categories. Next, for every pixel color of the photograph, the algorithm finds its corresponding color in the same category of the reference painting. Finally, the algorithm substitutes the pixel color with its corresponding color. In this way, we achieve large but natural color transformations of an image.
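
    A compact sketch of the per-pixel substitution, with a deliberately crude three-category stand-in for the eleven basic color categories (a real implementation would use a perceptual categorization):

      def categorize(rgb):
          # Toy categorizer: the dominant channel names the category.
          r, g, b = rgb
          return max((("red", r), ("green", g), ("blue", b)),
                     key=lambda t: t[1])[0]

      def nearest(c, candidates):
          return min(candidates,
                     key=lambda p: sum((a - b) ** 2 for a, b in zip(c, p)))

      def transfer(photo, painting):
          # Substitute each photo color with its nearest painting color
          # from the same category, so no color jumps categories.
          palette = {}
          for p in painting:
              palette.setdefault(categorize(p), []).append(p)
          return [nearest(c, palette.get(categorize(c), [c])) for c in photo]

      print(transfer([(200, 30, 40)], [(180, 20, 20), (20, 200, 30)]))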

  • Digital Image Watermarking Method Based on Vector Quantization with Labeled Codewords
    Zhe-Ming LU, Wen XING, Dian-Guo XU, Sheng-He SUN
    LETTER-Applications of Information Security Techniques
    Page(s): 2786-2789

    This Letter presents a novel VQ-based digital image watermarking method. By modifying the conventional GLA algorithm, a codeword-labeled codebook is first generated. Each input image block is then reconstructed by the nearest codeword whose label is equal to the watermark bit. The watermark extraction can be performed blindly. Simulation results show that the proposed method is robust to JPEG compression, vector quantization (VQ) compression and some spatial-domain processing operations.
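
    The embedding rule described above is easy to state in code. A minimal sketch with a labeled codebook (the Euclidean distance and all names are our assumptions):

      def embed(blocks, codebook, labels, bits):
          # Replace each block by the nearest codeword whose label
          # equals the watermark bit for that block.
          dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
          return [min((cw for cw, lb in zip(codebook, labels) if lb == bit),
                      key=lambda cw: dist(block, cw))
                  for block, bit in zip(blocks, bits)]

      def extract(blocks, codebook, labels):
          # Blind extraction: the label of the nearest codeword is the bit.
          dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
          return [labels[min(range(len(codebook)),
                             key=lambda i: dist(b, codebook[i]))]
                  for b in blocks]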

  • A Fuzzy Differential Diagnosis of Headache Applying Linear Regression Method and Fuzzy Classification
    Jeong-Yong AHN, Young-Hyun KIM, Soon-Ki KIM
    LETTER-Medical Engineering
    Page(s): 2790-2793

    The fuzzy set framework can be utilized in several different approaches to modeling the diagnostic process. In this paper, we introduce two main relations between symptoms and diseases, where the relations are described by intuitionistic fuzzy set data, and we suggest four measures for medical diagnosis. We deal with preliminary diagnosis based on the information in an interview chart, quantifying the chart's qualitative information by dual scaling. Prototypes of fuzzy diagnostic sets and linear regression methods are established with these quantified data. These methods can be used to classify a new patient's diseases with certain degrees of belief, together with the concerned symptoms.