IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • Cite Score

    1.4


Volume E77-D No.8 (Publication Date: 1994/08/25)

    Regular Section
  • A Note on Inadequacy of the Model for Learning from Queries

    Ryuichi NAKANISHI  Hiroyuki SEKI  Tadao KASAMI  

     
    PAPER-Automata, Languages and Theory of Computing

      Page(s):
    861-868

    "Learning correctly from queries" is a formal learning model proposed by Angluin. In this model, for a class Γ of language representations, a learner asks queries to a teacher of an unknown language Lq which can be represented by some Gq ∈ Γ, and eventually outputs a language representation G ∈ Γ which represents Lq and halts. An algorithm (learner) A is said to learn a class of languages represented by Γ in the weak definition if the time complexity of A is some polynomial of n and m, where n is the minimum size of the language representations in Γ which represent Lq, and m is the maximum length of the counterexamples returned in an execution. On the other hand, A is said to learn the class represented by Γ in the strong definition if at any point τ of the execution, the time consumed up to τ is some polynomial of n and m, where n is the same as above, and m is the maximum length of the counterexamples returned up to τ. In this paper, the adequacy of the model is examined, and it is shown that both in the weak and strong definitions, there exist learners which extract a long counterexample and identify Lq by using equivalence queries exhaustively. For example, there exists a learner which learns the class CFL of context-free languages represented by the class CFG of context-free grammars in the weak definition using only equivalence queries. Next, two restrictions concerning learnability criteria are introduced. The proper termination condition is that when a teacher replies with "yes" to an equivalence query, the learner must halt immediately. The other condition, called the LBC-condition, is that in the weak/strong definition, the time complexity must be some polynomial of n and log m. It is shown that under these conditions, there still exist learners which execute exhaustive search.
    For instance, there exists a learner which learns CFL represented by CFG in the weak definition using membership queries and equivalence queries under the proper termination condition, and there also exists a learner that learns CFL represented by CFG in the strong definition using subset queries and superset queries under the LBC-condition. These results suggest that the weak definition is not an adequate learning model even if the proper termination condition is assumed. Also, the model becomes inadequate in the strong definition if some combination of queries, such as subset queries and superset queries, is used instead of equivalence queries. Many classes of languages become learnable by our "extracting a long counterexample" technique. However, it is still open whether or not CFL represented by CFG is learnable in the strong definition from membership queries and equivalence queries, although the answer is known to be negative if at least one of (1) deciding quadratic residues modulo a composite, (2) inverting RSA encryption, or (3) factoring Blum integers is intractable.
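The equivalence-query loop behind these constructions can be illustrated with a toy, self-contained sketch (not from the paper; the hypothesis class, teacher, and all names here are hypothetical): a learner that identifies an unknown finite language by exhaustively enumerating hypotheses, halting the moment the teacher answers "yes" as the proper termination condition requires.

```python
from itertools import combinations

# Toy illustration (not the paper's construction): an exhaustive learner
# that uses only equivalence queries over a tiny finite hypothesis class.

UNIVERSE = ["", "a", "b", "aa", "ab", "ba", "bb"]  # all strings of length <= 2

def make_teacher(target):
    """Equivalence-query teacher: returns None on 'yes', else a counterexample."""
    target = frozenset(target)
    def equivalence_query(hypothesis):
        diff = target.symmetric_difference(hypothesis)
        return None if not diff else min(diff)  # any element of the difference
    return equivalence_query

def exhaustive_learner(equivalence_query):
    """Enumerate every subset of UNIVERSE and halt immediately on 'yes'
    (the proper termination condition)."""
    for r in range(len(UNIVERSE) + 1):
        for hyp in combinations(UNIVERSE, r):
            if equivalence_query(frozenset(hyp)) is None:
                return frozenset(hyp)
    raise RuntimeError("target not in the hypothesis class")

teacher = make_teacher({"a", "ab"})
print(sorted(exhaustive_learner(teacher)))  # ['a', 'ab']
```

The paper's point is precisely that such exhaustive search can be smuggled into polynomial bounds when the learner itself inflates the counterexample length m.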

  • Piecewise Parametric Cubic Interpolation

    Caiming ZHANG  Takeshi AGUI  Hiroshi NAGAHASHI  

     
    PAPER-Algorithm and Computational Complexity

      Page(s):
    869-876

    A method is described for constructing an interpolant to a set of arbitrary data points (x_i, y_i), i = 1, 2, ..., n. The constructed interpolant is a piecewise parametric cubic polynomial that satisfies C1 continuity and reproduces all parametric polynomials of degree two or less exactly. Experiments comparing the new method with the Bessel method and the spline method are also shown.
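For intuition, the flavor of such a scheme can be sketched with an ordinary (non-parametric) C1 piecewise cubic Hermite interpolant using Bessel-style tangents, i.e., the derivative of the parabola through each point and its neighbours. This is a sketch of the comparison "Bessel method", not the paper's parametric construction, and the function names are mine. Such tangents reproduce polynomials of degree two or less exactly:

```python
import numpy as np

def bessel_tangents(x, y):
    """Tangent estimates from the parabola through each point and its
    two neighbours (one-sided parabolas at the endpoints)."""
    n = len(x)
    d = np.zeros(n)
    for i in range(n):
        j = min(max(i, 1), n - 2)          # clamp to an interior triple
        c = np.polyfit(x[j-1:j+2], y[j-1:j+2], 2)
        d[i] = 2 * c[0] * x[i] + c[1]      # derivative of that quadratic at x[i]
    return d

def hermite_eval(x, y, d, t):
    """Evaluate the C1 piecewise cubic Hermite interpolant at t."""
    i = int(np.clip(np.searchsorted(x, t) - 1, 0, len(x) - 2))
    h = x[i+1] - x[i]
    s = (t - x[i]) / h
    h00 = 2*s**3 - 3*s**2 + 1
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*y[i] + h10*h*d[i] + h01*y[i+1] + h11*h*d[i+1]

x = np.array([0.0, 1.0, 2.0, 3.0])
y = x**2                      # a quadratic: Bessel tangents reproduce it exactly
d = bessel_tangents(x, y)
print(round(float(hermite_eval(x, y, d, 1.5)), 6))  # 2.25, i.e. 1.5**2
```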

  • Design of Repairable Cellular Arrays on Multiple-Valued Logic

    Naotake KAMIURA  Yutaka HATA  Kazuharu YAMATO  

     
    PAPER-Fault Tolerant Computing

      Page(s):
    877-884

    This paper proposes a repairable and diagnosable k-valued cellular array. We assume a single fault, i.e., either a stuck-at-0 fault or a stuck-at-(k-1) fault of switches, occurs in the array. By building in a duplicate column iteratively, a stuck-at-(k-1) fault occurring in the array never influences the output of the array; that is, we can construct a fault-tolerant array for the stuck-at-(k-1) fault. For the stuck-at-0 fault, the diagnosing method is simple and easy because we do not have to diagnose the stuck-at-(k-1) fault. Moreover, our array can be repaired easily for the fault. A comparison with other rectangular arrays shows that our array has advantages in the number of cells and the cost of fault diagnosis.

  • Automatic Seal Imprint Verification System with Imprint Quality Assessment Function and Its Performance Evaluation

    Katsuhiko UEDA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    885-894

    An annoying problem in automatic seal imprint verification is that seal imprints may have many variations, even if they are all produced from a single seal. This paper proposes a new automatic seal imprint verification system which adds an imprint quality assessment function to our previous system in order to solve this problem, and also examines the verification performance of this system experimentally. The system consists of an imprint quality assessment process and a verification process. In the quality assessment process, an examined imprint is first divided into partial regions, and each partial region is classified into one of three quality classes (good quality region, poor quality region, and background) on the basis of the characteristics of its gray-level histogram. In the verification process, only the good quality partial regions of an examined imprint are verified against the registered imprint. Finally, the examined imprint is classified as one of two types: a genuine or a forgery. However, if the quality assessment classifies too many partial regions as poor quality, the examined imprint is classified as "ambiguous" without verification processing. A major advantage of this verification system is that it can verify seal imprints of various qualities efficiently and accurately. Computer experiments with real seal imprints were performed using this system, our previous system (without the image quality assessment function), and document examiners of a bank. The results show that this system is superior in verification performance to our previous system and achieves a verification performance similar to that of the document examiners (i.e., the experimental results show the effectiveness of adding the image quality assessment function to a seal imprint verification system).
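The three-way region classification can be pictured with a minimal sketch of histogram-based labeling. This is only in the spirit of the abstract: the thresholds, feature choices, and names below are all hypothetical, not the paper's actual criteria.

```python
import numpy as np

def classify_region(region, ink_level=80, bg_level=200, ink_frac=0.25):
    """Label an image region 'good', 'poor', or 'background' from simple
    gray-level-histogram statistics (illustrative thresholds only)."""
    pixels = np.asarray(region, dtype=float).ravel()
    dark = np.mean(pixels < ink_level)    # fraction of ink-like pixels
    light = np.mean(pixels > bg_level)    # fraction of paper-like pixels
    if dark < 0.02:
        return "background"               # almost no ink at all
    if dark >= ink_frac and dark + light > 0.9:
        return "good"                     # crisp, bimodal histogram
    return "poor"                         # faint or smeared imprint

crisp = np.full((8, 8), 250.0)
crisp[:4] = 30.0                          # half ink, half clean paper
print(classify_region(crisp))             # good
print(classify_region(np.full((8, 8), 250.0)))  # background
```

An imprint whose regions are mostly "poor" would then be rejected as "ambiguous" before any matching is attempted.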

  • The Scheduling of the Parameters in Hopfield Neural Networks with Fuzzy Control

    Tomoyuki UEDA  Kiyoshi TAKAHASHI  Chun-Ying HO  Shinsaku MORI  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Page(s):
    895-903

    In this paper, we propose a novel fuzzy control for parameter scheduling of the Hopfield neural network. When a combinatorial optimization problem, such as the traveling salesman problem, is solved by a Hopfield neural network, it is efficient to adaptively change the parameters of the energy function and the sigmoid function. By changing the parameters deliberately, the network can avoid being trapped at a local minimum. Since complex relations exist among these parameters, it is difficult to determine the ideal scheduling analytically. First, we investigate bad parameter scheduling through simple experiments and find several rules that may lead to good scheduling. The rules extracted from the experimental results are then realized by fuzzy control. Using fuzzy control, we can detect bad scheduling from vague network states and then correct the relations among the parameters. Computer simulation results for the Traveling Salesman Problem (TSP) are presented as an example to demonstrate the validity of the method.
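The parameters being scheduled are the penalty weights and sigmoid gain of the classical Hopfield-Tank TSP formulation. A minimal sketch of that energy function (the weights A and B below are arbitrary illustration values, not the paper's schedule) shows why their balance matters: the penalty terms vanish only on a legal tour.

```python
import numpy as np

# Hopfield-Tank style TSP energy; A, B are the kind of parameters a
# scheduler (fuzzy or otherwise) would adapt during the descent.

def tsp_energy(V, D, A=1.0, B=1.0):
    """V[i, t]: activation of 'city i visited at step t'; D: distance matrix."""
    n = V.shape[0]
    row = np.sum((V.sum(axis=1) - 1.0) ** 2)     # each city visited once
    col = np.sum((V.sum(axis=0) - 1.0) ** 2)     # one city per step
    tour = 0.0
    for t in range(n):
        tour += V[:, t] @ D @ V[:, (t + 1) % n]  # length of the soft tour
    return A * (row + col) + B * tour

D = np.array([[0., 1., 2.],
              [1., 0., 1.],
              [2., 1., 0.]])
valid = np.eye(3)              # a permutation matrix, i.e. a legal tour
print(tsp_energy(valid, D))    # 4.0: penalties vanish, only tour length 1+1+2
print(tsp_energy(np.ones((3, 3)), D) > tsp_energy(valid, D))  # True
```

Raising A early enforces a valid permutation; the gain of the sigmoid then controls how sharply activations commit, which is exactly the trade-off the fuzzy rules adjust.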

  • 3-D Object Recognition Using Hopfield-Style Neural Networks

    Tsuyoshi KAWAGUCHI  Tatsuya SETOGUCHI  

     
    PAPER-Bio-Cybernetics and Neurocomputing

      Page(s):
    904-917

    In this paper we propose a new algorithm for recognizing 3-D objects from 2-D images. The algorithm takes the multiple view approach in which each 3-D object is modeled by a collection of 2-D projections from various viewing angles, where each 2-D projection is called an object model. To select the candidates for the object model that has the best match with the input image, the proposed algorithm computes the surface matching score between the input image and each object model by using Hopfield nets. In addition, the algorithm gives the final matching error between the input image and each candidate model by the error of the pose-transform matrix proposed by Hong et al. and selects the object model with the smallest matching error as the best matched model. The proposed algorithm can be viewed as a combination of the algorithm of Lin et al. and the algorithm of Hong et al. However, it is not a simple combination of these algorithms. While the algorithm of Lin et al. computes both the surface matching score and the vertex matching score between the input image and each object model to select the candidates for the best matched model, the proposed algorithm computes only the surface matching score. In addition, to enhance the accuracy of the surface matching score, the proposed algorithm uses two Hopfield nets. The first Hopfield net, which is the same as that used in the algorithm of Lin et al., performs a coarse matching between surfaces of an input image and surfaces of an object model. The second Hopfield net, which is newly proposed in this paper, establishes the surface correspondences using compatibility measures between adjacent surface-pairs of the input image and the object model.
    The results of the experiments showed that the surface matching score obtained by the Hopfield net proposed in this paper is much more useful for the selection of candidates for the best matched model than both the surface matching score obtained by the first Hopfield net of Lin et al. and the vertex matching score obtained by the second Hopfield net of Lin et al. As a result, the object recognition algorithm of this paper performs much more reliable object recognition than that obtained by simply combining the algorithm of Lin et al. and the algorithm of Hong et al.

  • Ultrafast Single-Shot Water and Fat Separated Imaging with Magnetic Field Inhomogeneities

    Shoichi KANAYAMA  Shigehide KUHARA  Kozo SATOH  

     
    PAPER-Medical Electronics and Medical Information

      Page(s):
    918-924

    Ultrafast MR imaging (e.g., echo-planar imaging) acquires all the data within only several tens of milliseconds. This method, however, is affected by static magnetic field inhomogeneities and chemical shift; therefore, a high degree of field homogeneity and water and fat signal separation are required. However, it is practically impossible to obtain a homogeneous field within a subject even if in vivo shimming has been performed. In this paper, we describe a new ultrafast MR imaging method called Ultrafast Single-shot water and fat Separated Imaging (USSI) and a correction method for field inhomogeneities and chemical shift. The magnetic field distribution within the subject is measured before the scan and used to obtain images without field inhomogeneity distortions. Computer simulation results have shown that USSI and the correction method can obtain water and fat separated images as real and imaginary parts, respectively, of a complex Fourier transform with a single-shot scan. Image quality is maintained in the presence of field inhomogeneities of several ppm similar to those occurring under practical imaging conditions. Limitations of the correction method are also discussed.
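The underlying idea of chemical-shift separation can be illustrated with the much simpler classical two-point Dixon scheme, a multi-shot relative of the single-shot method above (this sketch is for intuition only and is not the USSI algorithm): with an in-phase image S0 = W + F and an opposed-phase image S1 = W - F, water and fat follow by sum and difference.

```python
import numpy as np

# Two-point Dixon water/fat separation on synthetic data.
W = np.array([[1.0, 0.8], [0.6, 0.0]])   # synthetic water image
F = np.array([[0.0, 0.2], [0.3, 0.5]])   # synthetic fat image
S0, S1 = W + F, W - F                    # the two acquired images

W_hat = (S0 + S1) / 2                    # recovered water
F_hat = (S0 - S1) / 2                    # recovered fat
print(np.allclose(W_hat, W) and np.allclose(F_hat, F))  # True
```

The paper's contribution is obtaining an equivalent separation from a single shot, with the measured field map correcting the inhomogeneity-induced phase errors that would otherwise mix the two channels.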

  • Moving Point Light Source Photometric Stereo

    Yuji IWAHORI  Robert J. WOODHAM  Hidekazu TANAKA  Naohiro ISHII  

     
    LETTER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    925-929

    This paper describes a new method to determine the 3-D position coordinates of a Lambertian surface from four shaded images acquired with an actively controlled, nearby moving point light source. The method treats both the case when the initial position of the light source is known and the case when it is unknown.
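For context, classical photometric stereo with distant, known light sources (a simplification of the nearby moving-source setting treated in the paper) reduces to a linear least-squares problem per pixel: stacking the Lambertian equations I_k = L_k . (rho n) gives rho n directly. The directions and values below are arbitrary test data.

```python
import numpy as np

# Classical distant-light Lambertian photometric stereo from four images.
L = np.array([[ 0.0,  0.0, 1.0],
              [ 0.7,  0.0, 0.714],
              [ 0.0,  0.7, 0.714],
              [-0.5, -0.5, 0.707]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)   # unit light directions

n_true = np.array([0.2, -0.1, 0.97])
n_true = n_true / np.linalg.norm(n_true)           # ground-truth unit normal
I = L @ n_true                                     # four shaded intensities (albedo = 1)

g, *_ = np.linalg.lstsq(L, I, rcond=None)          # g = albedo * normal
albedo = np.linalg.norm(g)
normal = g / albedo
print(np.allclose(normal, n_true))                 # True
```

With a nearby point source, as in the paper, the light direction and fall-off vary with the unknown surface point itself, which is why the method must recover 3-D position as well as orientation.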