
IEICE Transactions on Information and Systems

  • Impact Factor: 0.72
  • Eigenfactor: 0.002
  • Article Influence: 0.1
  • CiteScore: 1.4


Volume E87-D No.4 (Publication Date: 2004/04/01)

    Special Section on Knowledge-Based Software Engineering
  • FOREWORD
    Kenji KAIJIRI
    FOREWORD
    Page(s): 799-800
  • Transformation between Scenarios from Different Viewpoints
    HongHui ZHANG  Atsushi OHNISHI
    PAPER-Requirement Engineering
    Page(s): 801-810

    Scenarios that describe concrete situations of software operation play an important role in software development, especially in requirements engineering. Scenario details should vary in content when described from different viewpoints, but this presents a difficulty, because an informal scenario written from one viewpoint cannot easily be transformed into a scenario from another viewpoint with consistency and assurance. This paper describes (1) a language for describing scenarios in which simple action traces are embellished to include typed frames based on a simple case grammar of actions, and (2) a method for transforming scenarios between different viewpoints based on this scenario description language.

  • A Method to Develop Feasible Requirements for Java Mobile Code Application
    Haruhiko KAIYA  Kouta SASAKI  Kenji KAIJIRI
    PAPER-Requirement Engineering
    Page(s): 811-821

    We propose a method for analyzing the trade-off between the environment in which a Java mobile code application runs and the requirements for the application. In particular, we focus on security-related problems that originate in the low-level security policy of the code-centric access control in the Java runtime. The method yields feasible requirements with respect to the security issues of mobile code, and it helps requirements analysts reconcile the differences between customers' goals and realizable solutions. Customers can agree to the results of the analysis because they can clearly trace the reasons why some goals are achieved and others are not. The method systematically clarifies which functions can be performed under a given environment, and which functions in the mobile code are needed to meet the users' goals, by goal-oriented requirements analysis (GORA). By comparing the functions derived from the environment with those derived from the goals, we can find conflicts between the environment and the goals, as well as vagueness in the requirements. By resolving the conflicts and clarifying the vagueness, we can develop a basis for the requirements specification.
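
The comparison step lends itself to a set-based reading. Below is a minimal, hypothetical sketch in Python (all function names are invented; the paper's analysis works on goals and security policies, not literal string sets):

```python
# Hypothetical function sets: what the runtime environment permits
# (derived from the security policy) versus what the users' goals
# require (derived from GORA). Names are illustrative only.
env_permitted = {"read_local_file", "connect_origin_host", "render_ui"}
goal_required = {"connect_origin_host", "render_ui", "write_local_file"}

conflicts = goal_required - env_permitted   # goals the environment blocks
feasible = goal_required & env_permitted    # requirements that survive
unused = env_permitted - goal_required      # permissions no goal needs

print("conflicts:", conflicts)              # {'write_local_file'}
print("feasible:", feasible)
```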

  • Combining Goal-Oriented Analysis and Use Case Analysis
    Kenji WATAHIKI  Motoshi SAEKI
    PAPER-Requirement Engineering
    Page(s): 822-830

    Goal-oriented analysis and use case analysis are well-known requirements analysis methods that are being put into practice. Roughly speaking, goal-oriented methods are suitable for eliciting constraints on a system, while use case analysis methods elicit concrete system behavior. The two approaches are thus complementary, and integrating them yields a more powerful requirements elicitation method. This paper proposes a new method in which both are amalgamated: constraints on the system are refined in goal-oriented style, while system behavior is described with hierarchical use cases. Since use cases are related to goals during the elicitation process, the decompositions of goals and of use cases support each other. Furthermore, we applied our method to a couple of development projects and assessed its effectiveness.

  • Deriving Tool Specifications from User Actions
    Christopher J. HOGGER  Frank R. KRIWACZEK
    PAPER-Requirement Engineering
    Page(s): 831-837

    We describe a framework for deriving specifications of wizard-like tools by detecting coherent patterns of behaviour among user actions observed in a portal environment. Implementation in the portal of tools compliant with these specifications can then provide useful support for the kind of work patterns observed. The derivation process employs a customizable knowledge base which defines coherent patterns and seeks concrete instances of them among series of actions that occur with sufficient frequency among those observed.
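
As a toy illustration of detecting sufficiently frequent action patterns, the sketch below counts recurring sub-sequences in an action log (action names, the window length, and the frequency threshold are invented; the paper's knowledge base defines coherent patterns far more richly):

```python
# Count recurring length-n action sub-sequences in an observed log and
# keep those above a frequency threshold.
from collections import Counter

log = ["open", "edit", "save", "open", "edit", "save", "open", "quit"]
n, threshold = 3, 2

patterns = Counter(tuple(log[i:i + n]) for i in range(len(log) - n + 1))
frequent = [p for p, c in patterns.items() if c >= threshold]
print(frequent)   # [('open', 'edit', 'save'), ('edit', 'save', 'open')]
```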

  • A Class Cohesion Metric Focusing on Cohesive-Part Size
    Hirohisa AMAN  Kenji YAMASAKI  Hiroyuki YAMADA  Matu-Tarow NODA
    PAPER-Metrics, Test, and Maintenance
    Page(s): 838-848

    Cohesion is an important software attribute and one of the significant criteria for assessing object-oriented software quality. Although several metrics for measuring cohesion have been proposed, one aspect has not been supported by the existing metrics: "cohesive-part size." This paper proposes a new metric focusing on cohesive-part size and evaluates it both qualitatively and quantitatively, with a mathematical framework and an experiment measuring some Java classes, respectively. Through these evaluations, the proposed metric is shown to be reasonable and not redundant. It can collaborate with other existing metrics in measuring class cohesion and will contribute to more accurate measurement.
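
To make the notion of a "cohesive part" concrete, here is an assumed, simplified model in Python: methods that share instance attributes form a cohesive part, and the part sizes are folded into one score. This is only an illustration, not the metric defined in the paper:

```python
# Toy model of a class: each method is mapped to the instance attributes
# it uses. Methods sharing an attribute fall into the same cohesive part.
from itertools import combinations
import networkx as nx

uses = {
    "deposit": {"balance"},
    "withdraw": {"balance"},
    "log": {"history"},
}

g = nx.Graph()
g.add_nodes_from(uses)
for m1, m2 in combinations(uses, 2):
    if uses[m1] & uses[m2]:            # shared attribute: same cohesive part
        g.add_edge(m1, m2)

parts = [len(c) for c in nx.connected_components(g)]
score = sum(s * s for s in parts) / len(uses) ** 2   # larger parts weigh more
print(sorted(parts), round(score, 3))                # [1, 2] 0.556
```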

  • Intelligent versus Random Software Testing
    Juichi TAKAHASHI
    PAPER-Metrics, Test, and Maintenance
    Page(s): 849-854

    The comparison of intelligent and random testing of data input is still under discussion, and little is known about testing whole software systems or about empirical testing methodology when random testing is used. This study investigates not only data-input testing but also the operation of software (transitions) in order to test whole GUI applications by intelligent and random testing. Our approach is to compare the efficiency of random and intelligent testing using the Chinese postman problem: in general, random testing is straightforward but not efficient, whereas Chinese-postman testing is complicated but efficient. The comparison provides further recommendations for software testing methodology.
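
The contrast between the two strategies can be sketched on a toy GUI transition graph. Assuming networkx, a Chinese-postman style tour covers every transition with minimal repetition, while a random walk may revisit transitions many times before covering them all (the screens and transitions are invented):

```python
# Toy GUI transition graph: nodes are screens, edges are transitions.
import random
import networkx as nx

g = nx.Graph()
g.add_edges_from([("menu", "edit"), ("menu", "view"),
                  ("edit", "view"), ("view", "help")])

# "Intelligent": eulerize (duplicate a few edges so every node has even
# degree), then walk an Eulerian circuit covering each transition once.
tour = list(nx.eulerian_circuit(nx.eulerize(g), source="menu"))
print("postman tour length:", len(tour))

# Random: keep firing arbitrary transitions until all edges are covered.
seen, node, steps = set(), "menu", 0
while len(seen) < g.number_of_edges():
    nxt = random.choice(list(g.neighbors(node)))
    seen.add(frozenset((node, nxt)))
    node, steps = nxt, steps + 1
print("random walk needed", steps, "transitions")
```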

  • Formalizing Refactoring by Using Graph Transformation
    Hiroshi KAZATO  Minoru TAKAISHI  Takashi KOBAYASHI  Motoshi SAEKI
    PAPER-Metrics, Test, and Maintenance
    Page(s): 855-867

    Refactoring is one of the promising techniques for improving software design by means of behavior-preserving structural transformation, and it is widely practiced. In particular, it is frequently applied to design models represented in UML, such as class diagrams. However, since a UML design model includes multiple diagrams that are closely related from various views, preserving behavior requires other kinds of design information and requires propagating a change in one diagram to the other diagrams. For example, to refactor a class diagram, we need behavioral information about the methods of the class and should also refactor the diagrams that represent this behavior, such as state diagrams and activity diagrams. In this paper, we formalize refactoring of design models as transformations of a graph that represents a UML class diagram together with action semantics. First, we define basic transformations of design models that preserve the behavior of the designed software, and we compose them into refactoring operations. We use the Object Constraint Language (OCL) to specify when a refactoring operation can be applied. Furthermore, we implement our technique on the graph transformation system AGG to support the automation of refactoring, together with an evaluation mechanism for OCL expressions. Illustrations are presented to show its effectiveness. This work is a first step toward handling refactoring on UML design models in an integrated way.
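
The flavor of a precondition-guarded, behavior-preserving transformation can be suggested with a deliberately tiny sketch (plain Python over a dictionary model; the paper itself works on graphs of UML class diagrams with action semantics in AGG, with applicability conditions written in OCL):

```python
# A dictionary stands in for the design graph; the assert plays the role
# of an OCL-style precondition guarding the transformation.
classes = {"Account": {"methods": {"open", "close"}},
           "Loan": {"methods": {"open"}}}

def rename_method(cls, old, new):
    methods = classes[cls]["methods"]
    # Precondition: the old name exists and the new name is unused.
    assert old in methods and new not in methods, "precondition violated"
    methods.remove(old)
    methods.add(new)            # callers would be rewritten in the same step

rename_method("Loan", "open", "openLoan")
print(classes["Loan"])          # {'methods': {'openLoan'}}
```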

  • A Support Method for Widget Replacement to Realize High Usability and Its Evaluation
    Junko SHIROGANE  Hajime IWATA  Kouji WATANABE  Yoshiaki FUKAZAWA
    PAPER-Metrics, Test, and Maintenance
    Page(s): 868-876

    In recent years, not only functionality but also usability has come to be required of software. To develop a highly usable GUI (Graphical User Interface) application, it is effective for end users to evaluate the GUI and for the results of the evaluation to be reflected in the original GUI. In such cases, it is necessary to replace a widget with another widget and to reconnect the new GUI part with the original body part. When widgets are replaced, the operations usually change, but the roles of the GUI seldom do. In this research, we propose a development method for GUI applications with easy operations, together with a method for automatically reconnecting GUI parts with new body parts. This reconnection is realized by classifying widgets according to their roles and by replacing the methods of widgets with abstract methods categorized by common roles.

  • Integrated Development Environment for Knowledge-Based Systems and Its Practical Application
    Keiichi KATAMINE  Masanobu UMEDA  Isao NAGASAWA  Masaaki HASHIMOTO
    PAPER-Knowledge Engineering and Robotics
    Page(s): 877-885

    The modeling of an application domain and a knowledge description language specific to it are important for developing knowledge-based systems. A rapid-prototyping approach is suitable for such development, since the modeling and the language development proceed simultaneously. However, the programming languages and supporting environments usually used for prototyping are not necessarily adequate for developing practical applications. We have been developing an integrated development environment for knowledge-based systems that supports all development phases, from the early prototyping phase to the final commercial development phase. The environment, called INSIDE, is based on a Prolog abstract machine and provides all the functions required for developing practical applications in addition to the standard Prolog features. This enables both prototypes and practical applications to be developed in the same environment, making their development and maintenance efficient. Finally, the effectiveness of INSIDE is demonstrated by examples of its practical application.

  • SPAK: Software Platform for Agents and Knowledge Systems in Symbiotic Robots
    Vuthichai AMPORNARAMVETH  Pattara KIATISEVI  Haruki UENO
    PAPER-Knowledge Engineering and Robotics
    Page(s): 886-895

    This paper describes the design concept and implementation of a software platform for realizing symbiotic robots that interact intelligently with humans in a symbiotic manner. Such robots require a proper combination of various technologies on a common platform that allows them to work cooperatively. "SPAK" has been developed to serve this purpose. It is a Java-based software platform that supports knowledge processing and the coordination of tasks among software modules and agents representing the robotic hardware connected over a network. SPAK features a frame-based knowledge system, a GUI knowledge-building tool, forward- and backward-chaining engines, networking support, and class libraries for building software agent components. Besides robotic applications, SPAK can also be used as a general-purpose frame system. An experimental application of SPAK to human-robot interaction is also given.

  • A Unified View of Software Agents Interactions
    Behrouz Homayoun FAR  Wei WU  Mohsen AFSHARCHI
    PAPER-Knowledge Engineering and Robotics
    Page(s): 896-907

    Software agents are knowledgeable, autonomous, situated, and interactive software entities. Agents' interactions are of special importance when a group of agents interacts to solve a problem that is beyond the capability and knowledge of any individual. The efficiency, performance, and overall quality of multi-agent applications depend mainly on how effectively the agents interact with each other. In this paper, we propose an agent model by which we can clearly distinguish different agent interaction scenarios. The model has five attributes: goal, control, interface, identity, and knowledge base. Using the model, we analyze and describe possible scenarios, devise appropriate reasoning and decision-making techniques for each scenario, and build a library of reasoning and decision-making modules that can be used readily in the design and implementation of multi-agent systems.

  • DODDLE II: A Domain Ontology Development Environment Using a MRD and Text Corpus
    Masaki KUREMATSU  Takamasa IWADE  Naomi NAKAYA  Takahira YAMAGUCHI
    PAPER-Knowledge Engineering and Robotics
    Page(s): 908-916

    In this paper, we describe how to exploit a machine-readable dictionary (MRD) and a domain-specific text corpus to support the construction of domain ontologies that specify taxonomic and non-taxonomic relationships among given domain concepts. In building taxonomic relationships (hierarchical structure) among domain concepts, a hierarchical structure is extracted from the MRD with marked subtrees that may be modified by a domain expert, using matching result analysis and trimmed result analysis. In building non-taxonomic relationships (specification templates) among domain concepts, we construct concept specification templates from pairs of concepts extracted from the text corpus, using WordSpace and an association rule algorithm. A domain expert modifies the taxonomic and non-taxonomic relationships afterwards. Through case studies with the "Contracts for the International Sale of Goods (CISG)" and the "XML Common Business Library (xCBL)", we confirm that our system can support the process of constructing domain ontologies with an MRD and a text corpus.

  • Smart Card Information Sharing Platform towards Global Nomadic World
    Eikazu NIWANO  Junko HASHIMOTO  Shoichi SENDA  Shuichiro YAMAMOTO  Masayuki HATANAKA
    PAPER-System
    Page(s): 917-927

    The demand for a multi-application smart card platform has been increasing in various business sectors. When it comes to actual implementation, however, network-based dynamic downloading in an environment where Card Issuers (CIs) and Service Providers (SPs) are separate has not made much progress. This paper introduces a smart card information sharing platform that uses licensing/policy/profile management and PKI-based technologies to enable multiple CIs and multiple SPs to apply their own business policies flexibly via the network, shifting the paradigm from a card-oriented scheme to a service-oriented scheme. Through the world's first implementation of the scheme and several experiments including deployment, we confirmed that this technology is well accepted, applicable to various business sectors, and of practical use.

  • The Multipurpose Methods for Efficient Searching at Online Shopping
    Tomomi SANJO  Morio NAGATA
    PAPER-System
    Page(s): 928-936

    Online shopping has become more and more popular in recent years, yet users are still unable to find what they want on the online market very efficiently. In a previous paper we proposed a system that helps find short-sleeved T-shirts for young women on the online market, and verification experiments showed that the system was effective. In this paper, we make the system more versatile by adopting the following schemes: first, all information is presented in a unified format; second, users are provided with multiple-choice keywords; third, users' search results are used to select information that is truly useful for the user. Finally, we conducted several verification experiments and showed that these schemes are effective.

  • ACIS: A Large-Scale Autonomous Decentralized Community Communication Infrastructure
    Khaled RAGAB  Naohiro KAJI  Kinji MORI
    PAPER-System
    Page(s): 937-946

    This paper presents ACIS, an Autonomous Community Information System. ACIS is proposed to meet rapidly changing user requirements and to cope with the extreme dynamism of current information services. It is a decentralized, bilateral-hierarchy architecture formed by a community of individual end-users (community members) having the same interests and demands at a specified time and location, and it allows those members to cooperate and share information without loading any single node excessively. In this paper, autonomous decentralized community construction and communication technologies are proposed to assure productive cooperation and flexible, timely communication among a large number of community members. The main ideas behind the proposed communication technology are content-code (service-based) communication for flexibility and multilateral-benefit communication for timely and productive cooperation among members, so that all members communicate productively to the satisfaction of the whole community. Simulation shows that the system's response time remains scalable regardless of the number of community members; the proposed technology thus remains effective even when community membership increases dramatically.

  • A Framework for Network Fault Management Using Software Agents
    Edidiong Uyai EKAETTE  Behrouz Homayoun FAR
    PAPER-System
    Page(s): 947-958

    This paper proposes a framework for distributed network management by incorporating fault and performance management metrics in a hierarchical decision making model. The goal of this research is to automate the fault management process. The fault management system is organized as a three level information processing model. Correlation results from each level are provided as evidence to the next level. Causal and temporal relationships between monitored variables are captured using Dynamic Bayesian Networks. As evidence is gathered, the probability of the presence of a fault is either strengthened or weakened. The proposed model is used for proactive fault detection as well as fault isolation purposes. A prototype implementing the ideas is presented.
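
The evidence-accumulation idea can be reduced to a one-variable sketch: each correlation result arrives as a pair of likelihoods and strengthens or weakens the fault hypothesis via Bayes' rule. The numbers below are invented, and the paper's model is a Dynamic Bayesian Network over many monitored variables, not this scalar update:

```python
# Scalar Bayes update: prior fault probability plus the likelihood of an
# observation under "fault" and under "no fault".
def update(prior, p_obs_given_fault, p_obs_given_ok):
    joint_fault = prior * p_obs_given_fault
    joint_ok = (1 - prior) * p_obs_given_ok
    return joint_fault / (joint_fault + joint_ok)

p = 0.01                    # prior probability of a link fault
p = update(p, 0.9, 0.2)     # evidence: high packet loss
p = update(p, 0.8, 0.1)     # evidence: latency spike
print(round(p, 3))          # each observation strengthens the hypothesis
```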

  • Regular Section
  • Shrinking Alternating Two-Pushdown Automata
    Friedrich OTTO  Etsuro MORIYA
    PAPER-Automata and Formal Language Theory
    Page(s): 959-966

    The alternating variant of the shrinking two-pushdown automaton of Buntrock and Otto (1998) is introduced. It is shown that the class of languages accepted by these automata is contained in the class of deterministic context-sensitive languages, and that it contains a PSPACE-complete language. Hence, the closure of this class of languages under log-space reductions coincides with the complexity class PSPACE.

  • Evaluation of Performance Prediction Method for Master/Slave Parallel Programs
    Yasuharu MIZUTANI  Fumihiko INO  Kenichi HAGIHARA
    PAPER-Computer Systems
    Page(s): 967-975

    This paper describes the design and implementation of a testbed for predicting the performance of master/slave (M/S) programs written using the Message Passing Interface (MPI). The testbed, named M/S Emulator (MSE), aims at assisting developers in evaluating the performance of M/S programs and dynamic load-balancing strategies on clusters of PCs. To this end, MSE predicts the communication time by using a realistic parallel computational model, an extension of the LogGPS model. The extended model improves prediction accuracy on a large number of processors because it captures the master's bottleneck: the overhead required for retrieving arriving messages from the slaves. The current MSE also employs a best-effort emulation method for predicting the calculation time. In our experiments, MSE demonstrated accurate prediction on clusters, especially on larger numbers of nodes. We therefore believe that our extended model enables the scalability of M/S program performance to be analyzed.
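
Why the master becomes the bottleneck can be seen with back-of-the-envelope arithmetic in the LogGP spirit. The sketch below is not MSE's extended LogGPS model; the parameter names and values are invented for illustration:

```python
# Invented LogGP-style parameters: wire latency and per-message receive
# overhead at the master, plus a per-message retrieval cost standing in
# for the time to pull an arriving message out of the queue.
L, o = 5e-6, 2e-6

def master_round_time(slaves, retrieval=3e-6):
    # One reply per slave; the master pays (o + retrieval) for each,
    # so the round time grows linearly with the number of slaves.
    return L + slaves * (o + retrieval)

for n in (8, 64, 512):
    print(n, "slaves:", round(master_round_time(n) * 1e6, 1), "microseconds")
```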

  • Comparing Reading Techniques for Object-Oriented Design Inspection
    Giedre SABALIAUSKAITE  Shinji KUSUMOTO  Katsuro INOUE
    PAPER-Software Engineering
    Page(s): 976-984

    For more than twenty-five years, software inspections have been considered an effective method for defect detection. Inspections have been investigated through controlled experiments in university environments and through industry case studies. In most cases, however, software inspections have been used for defect detection in the documents of a conventional structured development process. There is therefore a significant lack of information about how inspections should be applied to Object-Oriented artifacts, such as Object-Oriented code and design diagrams. In addition, extensive work is needed to determine whether some inspection techniques are more beneficial than others. Most inspection experiments include inspection meetings after individual inspection is completed, yet several researchers have suggested that inspection meetings may not be necessary, since an insignificant number of new defects are found as a result of the meeting; moreover, inspection meetings have been found to suffer from process loss. This paper presents the findings of a controlled experiment that investigated the performance of individual inspectors as well as 3-person teams in Object-Oriented design document inspection. The documents were written in the notation of the Unified Modelling Language. Two reading techniques, Checklist-Based Reading (CBR) and Perspective-Based Reading (PBR), were used during the experiment. We found that the two techniques are similar with respect to defect detection effectiveness during individual inspection as well as during inspection meetings. Investigating the usefulness of inspection meetings, we found that teams using CBR exhibited significantly smaller meeting gains (the number of new defects first found during the team meeting) than meeting losses (the number of defects first identified by an individual but never included in the team's defect list), whereas for teams using PBR the meeting gains were similar to the meeting losses. Consequently, CBR 3-person team meetings turned out to be less beneficial than PBR 3-person team meetings.
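
The gain/loss bookkeeping is simple set arithmetic over defect lists, as the following sketch shows (defect IDs invented):

```python
# "individual" is the union of the three inspectors' defect lists;
# "team" is the list agreed at the inspection meeting.
individual = {"D1", "D2", "D3", "D4"}
team = {"D1", "D2", "D5"}

gains = team - individual     # first found during the meeting: {'D5'}
losses = individual - team    # found alone, dropped by the team: {'D3', 'D4'}
print("meeting gains:", gains, "meeting losses:", losses)
```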

  • Comparison of Efficiency in Key Entry among Young, Middle-Aged and Elderly Groups: Effects of Aging and Size of Keyboard Letters on Work Efficiency
    Atsuo MURATA  Yoshitomo OKADA
    PAPER-Human-computer Interaction
    Page(s): 985-991

    Making information technology (IT) more accessible to elderly users is an important objective, particularly with respect to input devices. In this study, we investigated how aging and the letter (character) size of a keyboard affect the efficiency of data entry, and we also examined the effect of the computer experience of elderly users. The performance measures (entry speed and the number of correct entries per minute) were twice as good in the young group of computer users as in the middle-aged and elderly groups. An effect of keyboard letter size on performance was observed for the middle-aged and elderly participants who had no computer experience, whereas the young, middle-aged, and elderly participants with computer experience were not affected by letter size.

  • Evaluation of Cognitive Function Using Event-Related Potential (P300 and CNV): Comparison among Young, Middle-Aged, and Elderly People
    Atsuo MURATA  Takashi SORA
    PAPER-Rehabilitation Engineering and Assistive Technology
    Page(s): 992-996

    Using event-related potentials (P300 and CNV), the cognitive function of elderly subjects was compared with that of young subjects. The prolonged cognitive information processing induced by aging was reflected in the P300 and N400 latencies, whereas no effects of aging were observed in the P300 amplitude. The CNV measurements, within the range of this study, did not reflect the effects of aging; this might be because the CNV reflects a higher cognitive function than P300 and the effects of aging do not appear in such a function. The data also suggest that cognitive style must be taken into account when evaluating the deterioration of cognitive functions with aging.

  • Independent Component Analysis for Color Indexing
    Xiang-Yan ZENG  Yen-Wei CHEN  Zensho NAKAO  Jian CHENG  Hanqing LU
    PAPER-Pattern Recognition
    Page(s): 997-1003

    Color histograms are effective for representing color visual features, but the high dimensionality of the feature vectors results in high computational cost. Several transformations, including singular value decomposition (SVD) and principal component analysis (PCA), have been proposed to reduce the dimensionality. In PCA, the dimensionality reduction is achieved by projecting the data onto a subspace that contains most of the variance. As a common observation, the PCA basis function with the lowest frequency accounts for the highest variance, so the PCA subspace may not be the optimal one for representing the intrinsic features of the data. In this paper, we apply independent component analysis (ICA) to extract features from color histograms: PCA is applied to reduce the dimensionality, and ICA is then performed on the low-dimensional PCA subspace. Experimental results show that the proposed method (1) significantly reduces the feature dimensions compared with the original color histograms and (2) outperforms other dimension-reduction techniques, namely the method based on SVD of a quadratic matrix and PCA, in terms of retrieval accuracy.
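
A minimal sketch of the PCA-then-ICA pipeline, assuming scikit-learn and random stand-in data (real input would be one color histogram per image; the dimensions here are invented):

```python
# Stand-in data: 200 images, one 512-bin color histogram each.
import numpy as np
from sklearn.decomposition import PCA, FastICA

histograms = np.random.rand(200, 512)

pca = PCA(n_components=16)                      # reduce dimensionality first
low_dim = pca.fit_transform(histograms)

ica = FastICA(n_components=16, random_state=0)  # then extract independent features
features = ica.fit_transform(low_dim)
print(features.shape)                           # (200, 16)
```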

  • Normalization of Time-Derivative Parameters for Robust Speech Recognition in Small Devices
    Yasunari OBUCHI  Nobuo HATAOKA  Richard M. STERN
    PAPER-Speech and Hearing
    Page(s): 1004-1011

    In this paper we describe a new framework of feature compensation for robust speech recognition that is especially suitable for small devices. We introduce Delta-cepstrum Normalization (DCN), which normalizes not only the cepstral coefficients but also their time derivatives. Cepstral Mean Normalization (CMN) and Mean and Variance Normalization (MVN) are fast, efficient, and widely used algorithms for environmental adaptation. In those algorithms, normalization is applied to the cepstral coefficients to remove irrelevant information from them, but no such normalization is applied to the time-derivative parameters, because it would not remove enough of the irrelevant information. Histogram Equalization (HEQ), however, provides better compensation and can be applied even to the delta and delta-delta cepstra. We investigate various implementations of DCN and show that the best performance is achieved when the normalizations of the cepstra and the delta cepstra are mutually interdependent. We evaluate the performance of DCN using speech data recorded with a PDA, and DCN gives a 15% relative word-error-rate reduction over HEQ. We also examine the possibility of combining Vector Taylor Series (VTS) compensation with DCN: although some combinations do not improve on VTS, the best combination performs better than VTS alone. Finally, the advantage of DCN in terms of computation speed is discussed.
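
A toy version of normalizing both feature streams is shown below: compute regression-based deltas, then apply mean-and-variance normalization to the static and delta cepstra per utterance. Note that DCN proper couples the two normalizations, which this numpy sketch does not:

```python
import numpy as np

def deltas(c, w=2):
    # Standard regression-based time derivative over a +-w frame window.
    pad = np.pad(c, ((w, w), (0, 0)), mode="edge")
    num = sum(t * (pad[w + t:len(c) + w + t] - pad[w - t:len(c) + w - t])
              for t in range(1, w + 1))
    return num / (2 * sum(t * t for t in range(1, w + 1)))

cep = np.random.randn(100, 13)        # stand-in: 100 frames, 13 coefficients
norm = lambda x: (x - x.mean(0)) / (x.std(0) + 1e-8)
features = np.hstack([norm(cep), norm(deltas(cep))])
print(features.shape)                 # (100, 26)
```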

  • A New Method for Degraded Color Image Binarization Based on Adaptive Lighting on Grayscale Versions
    Shigueo NOMURA  Keiji YAMANAKA  Osamu KATAI  Hiroshi KAWAKAMI
    PAPER-Image Processing and Video Processing
    Page(s): 1012-1020

    We present a novel adaptive method for improving the binarization quality of degraded word color images. The objective of this work is to solve a nonlinear problem concerning binarization quality, namely to achieve edge enhancement and noise reduction in the images. The digitized data used in this work were extracted automatically from real-world photos; the motion of objects relative to a static camera and bad environmental conditions caused serious quality problems in those images. Conventional methods, such as the nonlinear adaptive filter method proposed by Mo or Otsu's method, cannot produce satisfactory binarization results for such degraded images. Among other problems, the contrast between shapes and backgrounds varies greatly within every degraded image due to non-uniform illumination. The proposed method is based on the automatic extraction of background information, such as the luminance distribution, to adaptively control the intensity levels without any manual fine-tuning of parameters; consequently, it avoids noise and inappropriate shapes in the output binary images. Otsu's method is then applied to automatically select the threshold that classifies pixels into background and shape pixels. To demonstrate the efficiency and feasibility of the new adaptive method, we present results obtained by the binarization system. The results were as satisfactory as we expected, and we conclude that they can be used successfully as data in further processing such as segmentation or extraction of characters. Furthermore, the method helps to increase the eventual efficiency of a recognition system for poor-quality word images, such as number-plate photos with non-uniform illumination and low contrast.
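
The two ingredients, background (luminance) estimation and Otsu thresholding, can be sketched on a synthetic image, assuming scipy and scikit-image (the paper's adaptive control of intensity levels is more elaborate than this simple subtraction):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu

img = np.random.rand(64, 64) * 0.2 + 0.6       # bright, noisy field
img[20:40, 10:50] -= 0.4                       # dark "shape"
img += np.linspace(0.0, 0.3, 64)               # non-uniform illumination

background = gaussian_filter(img, sigma=15)    # estimated luminance distribution
flat = img - background                        # lighting-corrected version
binary = flat < threshold_otsu(flat)           # dark shapes become foreground
print(round(binary.mean(), 3))                 # fraction of shape pixels
```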

  • Novel Stroke Decomposition for Noisy and Degraded Chinese Characters Using SOGD Filters
    Yih-Ming SU  Jhing-Fa WANG
    PAPER-Image Recognition, Computer Vision
    Page(s): 1021-1030

    The paper presents a novel stroke decomposition approach based on a directional filtering technique for recognizing Chinese characters. The proposed technique uses a set of second-order Gaussian derivative (SOGD) filters to decompose a character into a number of stroke segments. Moreover, a new Gaussian function is proposed to overcome the general limitation of extracting stroke segments only along fixed, given orientations: the function models the relationship between the orientation of a stroke segment and its power response in the filter output, so the optimal orientation of a stroke segment can be estimated by finding its maximal power response. Finally, the effects of the decomposition process are analyzed using simple structural and statistical features extracted from the stroke segments. Experimental results indicate that the proposed SOGD filtering-based approach is very effective in decomposing noisy and degraded character images into stroke segments along arbitrary orientations. Furthermore, applying the decomposition process improves recognition performance by about 17.31% on the test character set.
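
An oriented SOGD kernel itself is easy to construct: a second derivative of a Gaussian across the stroke direction combined with a plain Gaussian along it. The sketch below builds such a kernel and applies it to a synthetic vertical stroke (parameters invented; the paper's contribution, the orientation-response model, is not reproduced here):

```python
import numpy as np
from scipy.ndimage import convolve

def sogd_kernel(sigma=2.0, theta=0.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)    # across-stroke axis
    v = -x * np.sin(theta) + y * np.cos(theta)   # along-stroke axis
    g = np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))
    return (u ** 2 / sigma ** 4 - 1 / sigma ** 2) * g   # d2/du2 of the Gaussian

image = np.zeros((32, 32))
image[:, 15] = 1.0                               # synthetic vertical stroke
response = convolve(image, sogd_kernel(theta=0.0))
print(round(float(np.abs(response).max()), 3))   # strong response across the stroke
```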

  • Extending Interrupted Feature Point Tracking for 3-D Affine Reconstruction
    Yasuyuki SUGAYA  Kenichi KANATANI
    PAPER-Image Recognition, Computer Vision
    Page(s): 1031-1038

    Feature point tracking over a video sequence fails when the points go out of the field of view or behind other objects. In this paper, we extend such interrupted tracking by imposing the constraint that under the affine camera model all feature trajectories should be in an affine space. Our method consists of iterations for optimally extending the trajectories and for optimally estimating the affine space, coupled with an outlier removal process. Using real video images, we demonstrate that our method can restore a sufficient number of trajectories for detailed 3-D reconstruction.
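
The affine-space constraint can be sketched with plain linear algebra: fit a 3-D affine subspace to the trajectory matrix by SVD, then complete a partially observed trajectory by least squares in that subspace (noise-free toy data; the paper's method iterates this optimally and couples it with outlier removal):

```python
import numpy as np

# Rank-3 trajectory matrix plus a common translation: 30 frames (x and y
# coordinates stacked, 60 rows), 50 feature points, noise-free.
W = np.random.randn(60, 3) @ np.random.randn(3, 50)
W += np.random.randn(60, 1)

mean = W.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(W - mean, full_matrices=False)
basis = U[:, :3]                                 # basis of the affine space

track = W[:, 0].copy()                           # one trajectory
observed = np.ones(60, bool)
observed[-10:] = False                           # last 10 rows unobserved
coef, *_ = np.linalg.lstsq(basis[observed],
                           (track - mean[:, 0])[observed], rcond=None)
restored = mean[:, 0] + basis @ coef             # project into the affine space
print(np.allclose(restored[~observed], W[-10:, 0]))   # True on noise-free data
```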

  • An Efficient Centralized Algorithm Ensuring Consistent Recovery in Causal Message Logging with Independent Checkpointing
    JinHo AHN  SungGi MIN
    LETTER-Dependable Computing
    Page(s): 1039-1043

    Because it has desirable features such as no cascading rollback, fast output commit and asynchronous logging, causal message logging needs a consistent recovery algorithm to tolerate concurrent failures. For this purpose, Elnozahy proposed a centralized recovery algorithm to have two practical benefits, i.e. reducing the number of stable storage accesses and imposing no restriction on the execution of live processes during recovery. However, the algorithm with independent checkpointing may force the system to be in an inconsistent state when processes fail concurrently. In this paper, we identify these inconsistent cases and then present a recovery algorithm to have the two benefits and ensure the system consistency when integrated with any kind of checkpointing protocol. Also, our algorithm requires no additional message compared with Elnozahy's algorithm.