
IEICE TRANSACTIONS on Information

Volume E74-D No.7 (Publication Date: 1991/07/25)

    Regular Section
  • Complexity of the Optimum Join Order Problem in Relational Databases

    Yushi UNO  Toshihide IBARAKI  

     
    PAPER-Algorithm and Computational Complexity

      Page(s):
    2067-2075

    Optimizing the computing process of relational databases is important in order to increase their applicability. The process consists of operations involving many relational tables. Among the basic operations, joins are the most important because they require most of the computational time. In this paper, we consider executing such joins on many relational tables by the merge-scan method, and try to find the optimum join order that minimizes the total size of the intermediate tables (including the final answer table). This cost is important in its own right, as it represents the memory space requirement of the entire computation, and it can also be viewed as an approximate measure of computational time. However, it turns out that the problem is solvable in polynomial time only for very restricted special cases, and is NP-hard in general.
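
    As a rough illustration of the objective described above (a sketch under assumed toy sizes, not code or data from the paper), the following Python snippet brute-forces every join order of a handful of tables and reports the order with the smallest total intermediate size; the base-table sizes and the fixed selectivity are hypothetical.

    from itertools import permutations

    # Hypothetical base-table sizes and a single assumed join selectivity.
    table_sizes = {"R": 1000, "S": 500, "T": 200, "U": 50}
    selectivity = 0.01

    def total_intermediate_size(order):
        """Total size of all intermediate tables (including the final answer)
        when tables are joined left to right in the given order."""
        current = table_sizes[order[0]]
        total = 0
        for name in order[1:]:
            current = current * table_sizes[name] * selectivity
            total += current
        return total

    # Exhaustive search over join orders: feasible only for a handful of
    # tables, in line with the NP-hardness of the general problem.
    best = min(permutations(table_sizes), key=total_intermediate_size)
    print(best, total_intermediate_size(best))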

  • Evaluation for a Database Recovery Action with Periodical Checkpoint Generations

    Satoshi FUKUMOTO  Naoto KAIO  Shunji OSAKI  

     
    PAPER-Fault Tolerant Computing

      Page(s):
    2076-2082

    It is of great importance to perform a recovery action that reconstructs the logical consistency of the database after a system failure. Such a recovery action consists of two operations. One is the UNDO operation, which rolls back the effects of all incomplete transactions from the database, and the other is the REDO operation, which reflects the results of all completed transactions in the database. In general, we limit the amount of REDO work by generating checkpoints, at which the results of the completed transaction(s) are collected in a safe place. In this paper, we discuss the evaluation of a database recovery action with periodical checkpoint generations. A new model is proposed to evaluate the recovery action in the case where the failure rate of the system changes with time. The expected recovery time and the availability for one cycle are derived under the assumption of an arbitrary failure-time distribution. In particular, we analytically obtain the optimum checkpoint interval that maximizes availability in the case of an exponential distribution. We numerically calculate the above results by assuming Weibull distributions. We further discuss the numerical results while varying the parameters defined in our model, and show the impact of these parameters on the recovery action.
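
    The following Python sketch illustrates the general idea of choosing a checkpoint interval that maximizes availability; the simple cost model (fixed checkpoint cost, fixed restart time, exponential failures, rework of roughly half an interval per failure) and all parameter values are assumptions for illustration and do not reproduce the paper's model.

    import numpy as np

    # Assumed, illustrative parameters (not from the paper).
    C = 0.5      # time to write one checkpoint
    R = 2.0      # fixed restart time after a failure
    lam = 0.01   # rate of an exponential failure-time distribution

    def availability(T):
        """Rough long-run availability for checkpoint interval T: useful work
        divided by useful work plus checkpoint overhead plus expected rework
        (each failure redoes, on average, about half an interval)."""
        expected_failures = lam * (T + C)
        return T / (T + C + expected_failures * (R + T / 2.0))

    # Numerical search for the interval with maximum availability.
    grid = np.linspace(0.1, 100.0, 10000)
    best_T = grid[np.argmax(availability(grid))]
    print(f"optimum interval ~ {best_T:.2f}, availability ~ {availability(best_T):.4f}")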

  • On-line Signature Verification Incorporating the Direction of Pen Movement

    Mitsu YOSHIMURA  Yutaka KATO  Shin-ichi MATSUDA  Isao YOSHIMURA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    2083-2092

    This paper deals with an on-line signature verification system. It is assumed that the system requires a person approaching it to declare his name and to write his signature. The system compares the written signature with reference signatures registered in advance and admits access if the dissimilarity is below a threshold. The new ideas in this paper on the design of such a system are to construct an effective dissimilarity measure that includes the direction of pen movement, to select a few representative signatures from the reference set by using a clustering procedure, and to set the threshold to the maximum of the dissimilarity measures among the reference signatures multiplied by a constant. The effectiveness of the designed system is examined experimentally using a database provided by CADIX Co., Ltd. It consists of 2203 writings for 28 signatures, of which 10 are in Latin letters (the English alphabet) by foreigners, 14 in Japanese letters by Japanese, and 4 in Latin letters by Japanese, and it comprises both genuine signatures and forgeries. As a result of the experiment, the system turned out to work highly satisfactorily: it achieved an average correct verification rate of about 99%. Including the direction of pen movement in the dissimilarity measure leads to an average increase of about 3% in the correct verification rate.
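
    The thresholding idea (the maximum dissimilarity among reference signatures multiplied by a constant) can be sketched in Python as below; the direction-based dissimilarity, the resampling, and the constant c are hypothetical stand-ins, not the measure or clustering procedure used in the paper.

    import numpy as np
    from itertools import combinations

    def direction_angles(traj):
        """Angles of the pen-movement vectors between consecutive (x, y) samples."""
        d = np.diff(np.asarray(traj, dtype=float), axis=0)
        return np.arctan2(d[:, 1], d[:, 0])

    def dissimilarity(a, b):
        """Mean absolute difference of movement directions, after resampling
        both angle sequences to a common length (wrap-around-aware)."""
        ta, tb = direction_angles(a), direction_angles(b)
        n = max(len(ta), len(tb))
        ta = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(ta)), ta)
        tb = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(tb)), tb)
        diff = np.abs(ta - tb)
        return float(np.mean(np.minimum(diff, 2 * np.pi - diff)))

    def threshold(refs, c=1.5):
        """Maximum pairwise dissimilarity among the references times a constant."""
        return c * max(dissimilarity(a, b) for a, b in combinations(refs, 2))

    def verify(signature, refs, c=1.5):
        """Accept the writing if its best match against the references is
        below the reference-derived threshold."""
        return min(dissimilarity(signature, r) for r in refs) <= threshold(refs, c)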

  • Rule Generation and Selection with a Parallel Generalisation Architecture

    Ross Peter CLEMENT  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Page(s):
    2093-2099

    The BAMBOO algorithm is an expert system rule generating algorithm developed from the well-known C4 decision tree algorithm. Because BAMBOO's search is less restricted than C4's, it usually finds simpler rules than C4. Both algorithms have problems with incomplete search and brittleness. These problems can be avoided by layering both algorithms together with other algorithms, generating independent rule sets and selecting a subset of rules to use in the final expert system. This learning strategy is referred to as parallel generalisation. The problems of incomplete search and brittleness arise because each algorithm has a single fixed bias. By layering several algorithms together, the effect is that of a single algorithm selectively applying many heuristics. Because selecting rules is much easier than generating rules, the selection procedure has its own parameterised bias. The layered algorithm is much more flexible than the single algorithms, in addition to generating more accurate and concise rule sets. Brittleness is avoided because, when one algorithm produces a worst-case rule set, the other algorithms generate better rules. Parallel generalisation can be improved by altering the algorithms to cooperate more.
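
    A minimal Python sketch of the generate-independently-then-select idea behind parallel generalisation is given below; the rule representation, the two toy generators, and the greedy accuracy/coverage selection are made-up placeholders rather than BAMBOO, C4, or the paper's selection procedure.

    def generator_a(examples):
        """Toy generator: one single-condition rule per attribute value of a positive example."""
        rules = []
        for x, label in examples:
            if label:
                for attr, value in x.items():
                    rules.append(({attr: value}, True))
        return rules

    def generator_b(examples):
        """Toy generator: one highly specific rule per whole positive example."""
        return [(dict(x), True) for x, label in examples if label]

    def matches(rule, x):
        conditions, _ = rule
        return all(x.get(a) == v for a, v in conditions.items())

    def accuracy(rule, examples):
        covered = [label for x, label in examples if matches(rule, x)]
        return sum(covered) / len(covered) if covered else 0.0

    def parallel_generalisation(examples, generators, min_accuracy=0.8):
        """Pool the rule sets from several generators, then greedily select rules
        that are accurate enough and cover at least one not-yet-covered positive."""
        pool = [r for g in generators for r in g(examples)]
        pool.sort(key=lambda r: accuracy(r, examples), reverse=True)
        selected, covered = [], set()
        for rule in pool:
            newly = {i for i, (x, label) in enumerate(examples)
                     if label and matches(rule, x) and i not in covered}
            if newly and accuracy(rule, examples) >= min_accuracy:
                selected.append(rule)
                covered |= newly
        return selected

    examples = [({"colour": "red", "shape": "round"}, True),
                ({"colour": "red", "shape": "square"}, False),
                ({"colour": "green", "shape": "round"}, True)]
    print(parallel_generalisation(examples, [generator_a, generator_b]))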