
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor: 0.59
  • Eigenfactor: 0.002
  • Article Influence: 0.1
  • CiteScore: 1.4


Volume E82-D No.2 (Publication Date: 1999/02/25)

    Regular Section
  • Unreachability Proofs for β Rewriting Systems by Homomorphisms

    Kiyoshi AKAMA  Yoshinori SHIGETA  Eiichi MIYAMOTO

    PAPER-Automata, Languages and Theory of Computing
    Page(s): 339-347

    Given two terms and their rewriting rules, an unreachability problem asks for a proof of the non-existence of a reduction sequence from one term to the other. This paper formalizes a method for solving unreachability problems by abstraction; i.e., an original concrete unreachability problem is reduced to a simpler abstract unreachability problem, and unreachability of the original concrete problem follows once the abstract unreachability is proved. The class of rewriting systems discussed in this paper is called β rewriting systems. The class of β rewriting systems includes very important systems such as semi-Thue systems and Petri Nets. Abstract rewriting systems are also a subclass of β rewriting systems. A β rewriting system is defined on axiomatically formulated base structures, called β structures, which are used to formalize the concepts of "contexts" and "replacement" common to many rewritten objects. Each domain underlying semi-Thue systems, Petri Nets, and other rewriting systems is formalized by a β structure. A concept of homomorphisms from a β structure (a concrete domain) to a β structure (an abstract domain) is introduced. A homomorphism theorem (Theorem 1) is established for β rewriting systems, which states that concrete reachability implies abstract reachability. An unreachability theorem (Corollary 1) is also proved for β rewriting systems. It is the contraposition of the homomorphism theorem, i.e., it says that abstract unreachability implies concrete unreachability. The unreachability theorem is used to solve two unreachability problems: a coffee bean puzzle and a checker board puzzle.
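
    The logical core of the method can be restated in two lines; the notation below paraphrases the abstract, with h the homomorphism from the concrete β structure to the abstract one and →* denoting zero or more rewrite steps.

```latex
% Homomorphism theorem (Theorem 1): concrete reachability implies abstract reachability.
x \to_{\mathrm{concrete}}^{*} y \;\Longrightarrow\; h(x) \to_{\mathrm{abstract}}^{*} h(y)

% Unreachability theorem (Corollary 1), the contrapositive used to solve the puzzles:
h(x) \not\to_{\mathrm{abstract}}^{*} h(y) \;\Longrightarrow\; x \not\to_{\mathrm{concrete}}^{*} y
```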

  • An On-Line Prediction Algorithm Combining Several Prediction Strategies in the Shared Bet Model

    Ichiro TAJIKA  Eiji TAKIMOTO  Akira MARUOKA

    PAPER-Algorithm and Computational Complexity
    Page(s): 348-355

    One of the most important problems in machine learning is to predict a binary value by observing a sequence of outcomes, up to the present time step, generated by some unknown source. Vovk and Cesa-Bianchi et al. independently proposed an on-line prediction model in which prediction algorithms are assumed to be given a collection of prediction strategies called experts and are hence allowed to use the predictions they make. In this model, no assumption is made about the way the sequence of bits to be predicted is generated, and the performance of the algorithm is measured by the difference between the number of mistakes it makes on the bit sequence and the number of mistakes made by the best expert on the same sequence. In this paper we extend the model by introducing a notion of investment. That is, both the prediction algorithm and the experts are required to make bets on their predictions at each time step, and the performance of the algorithm is now measured with respect to the total money lost, rather than the number of mistakes. We analyze our algorithms in the particular situation where all the experts share the same amount of bets at each time step. In this shared bet model, we give a prediction algorithm that is in some sense optimal but impractical, and we also give an efficient prediction algorithm that turns out to be nearly optimal.
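
    The paper's optimal and near-optimal algorithms are not reproduced in the abstract; the sketch below only illustrates the experts framework it builds on, with an exponential-weighting rule used purely as a stand-in aggregation strategy. All function and variable names are illustrative.

```python
import math

def shared_bet_prediction(expert_preds, outcomes, bets, eta=0.5):
    """Illustrative sketch (not the paper's algorithm): combine binary expert
    predictions by a weighted vote when every expert stakes the same bet at
    each step, and measure performance by total money lost."""
    weights = [1.0] * len(expert_preds[0])
    total_loss = 0.0
    for preds, y, bet in zip(expert_preds, outcomes, bets):
        # Weighted vote over the experts' binary predictions.
        vote_for_one = sum(w for w, p in zip(weights, preds) if p == 1)
        guess = 1 if vote_for_one >= 0.5 * sum(weights) else 0
        if guess != y:
            total_loss += bet                 # money lost, not a mistake count
        # Downweight experts that were wrong, scaled by the shared bet.
        weights = [w * math.exp(-eta * bet) if p != y else w
                   for w, p in zip(weights, preds)]
    return total_loss

# Example: three experts, four rounds, a shared bet of 1.0 per round.
preds = [(1, 0, 1), (0, 0, 1), (1, 1, 1), (0, 1, 0)]
outcomes = [1, 0, 1, 1]
print(shared_bet_prediction(preds, outcomes, bets=[1.0] * 4))
```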

  • Learning Bayesian Belief Networks Based on the MDL Principle: An Efficient Algorithm Using the Branch and Bound Technique

    Joe SUZUKI

    PAPER-Algorithm and Computational Complexity
    Page(s): 356-367

    In this paper, the computational issue in the problem of learning Bayesian belief networks (BBNs) based on the minimum description length (MDL) principle is addressed. Based on an asymptotic formula of the description length, we apply the branch and bound technique to finding true network structures. The resulting algorithm considerably reduces the computation yet successfully finds the network structure with the minimum value of the formula. Thus far, there has been no search algorithm that finds the optimal solution, in the sense of the maximum posterior probability, over a set of network structures for examples of practical size, and heuristic searches such as K2 and K3 become trapped in local optima due to their greedy nature even when the sample size is large. The proposed algorithm, since it minimizes the description length, eventually selects the true network structure as the sample size goes to infinity.
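
    As a rough illustration of the kind of score being minimized (not the paper's exact formula or its branch-and-bound bounds), the sketch below computes an asymptotic MDL description length for one variable and a candidate parent set: empirical log-loss plus (k/2) log N for k free parameters. A branch-and-bound search can prune a candidate as soon as a lower bound on this quantity exceeds the best value found so far. All names are illustrative.

```python
import math
from collections import Counter

def mdl_length(data, child, parents, arity):
    """Asymptotic MDL description length of `child` given the candidate
    parent set `parents`: empirical log-loss plus (k/2) * log N, where k is
    the number of free parameters of the conditional probability table.
    `data` is a list of dict-like rows; `arity[v]` is the number of values of v."""
    n = len(data)
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    marginal = Counter(tuple(row[p] for p in parents) for row in data)
    log_loss = -sum(c * math.log(c / marginal[cfg]) for (cfg, _), c in joint.items())
    k = (arity[child] - 1) * math.prod(arity[p] for p in parents)
    return log_loss + 0.5 * k * math.log(n)
```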

  • Some Modifications of the Tournament Algorithm for the Mutual Exclusion Problem

    Yoshihide IGARASHI  Hironobu KURUMAZAKI  Yasuaki NISHITANI

    PAPER-Algorithm and Computational Complexity
    Page(s): 368-375

    We propose two lockout-free (starvation-free) mutual exclusion algorithms for the asynchronous multi-writer/reader shared memory model. The first algorithm is a modification of the well-known tournament algorithm for the mutual exclusion problem; the modification speeds up the original algorithm. The running time of the modified algorithm from the entrance of the trying region to the entrance of the critical region is at most (n-1)c+O(nl), where n is the number of processes, l is an upper bound on the time between two successive steps of each process, and c is an upper bound on the time that any user spends in the critical region. The second algorithm is a further modification of the first algorithm. It is designed so that some processes have an advantage of access to the resource over other processes.
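
    For readers unfamiliar with the tournament structure the paper modifies: processes are leaves of a binary tree of two-process mutual exclusion components and climb toward the root. The sketch below is illustrative only (it is not the modified algorithm); it shows which internal node a process competes at on each level and which of the two roles it plays there.

```python
def tournament_path(i, levels):
    """Return, for process i, the (node, role) pair at each level of a binary
    tournament tree of two-process mutual exclusion components."""
    return [(i >> (k + 1), (i >> k) & 1) for k in range(levels)]

# With 8 processes (3 levels): processes 4 and 5 first meet at node 2 of
# level 0, the winner then meets the winner of {6, 7}, and so on to the root.
print(tournament_path(5, 3))   # [(2, 1), (1, 0), (0, 1)]
```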

  • Invariant-Free Formal Verification of Pipelined and Superscalar Controls by Behavior-Covering and Partial Unfolding

    Toru SHONAI  Tsuguo SHIMIZU

    PAPER-Computer Hardware and Design
    Page(s): 376-388

    This paper describes an algorithm and its prototype system, VeriProc/1.1, which can prove the correctness of pipelined and superscalar processor controls automatically without a pipeline invariant, human interaction, or additional information. The algorithm is based on behavior-covering and partial unfolding. No timing relations such as an abstract function or β-relation are required. The only information required is the location of the selectors in the design. Partial unfolding makes it possible to derive superscalar specifications from conventional specifications. A correctness proof of the partial unfolding is given. The prototype system can verify various superscalar control designs of simple processors.

  • REMARC: Reconfigurable Multimedia Array Coprocessor

    Takashi MIYAMORI  Kunle OLUKOTUN

    PAPER-Computer Hardware and Design
    Page(s): 389-397

    This paper describes a new reconfigurable processor architecture called REMARC (Reconfigurable Multimedia Array Coprocessor). REMARC is a small array processor that is tightly coupled to a main RISC processor. It consists of a global control unit and 64 16-bit processors called nano processors. REMARC is designed to accelerate multimedia applications, such as video compression, decompression, and image processing. These applications typically use 8-bit or 16-bit data; therefore, each nano processor has a 16-bit datapath that is much wider than those of other reconfigurable coprocessors. We have developed a programming environment for REMARC and several realistic application programs: DES encryption, MPEG-2 decoding, and MPEG-2 encoding. REMARC can implement the various parallel algorithms that appear in these multimedia applications. For instance, REMARC can implement SIMD-type instructions, similar to multimedia instruction extensions, for motion compensation in MPEG-2 decoding. Furthermore, highly pipelined algorithms, such as the systolic algorithms that appear in motion estimation for MPEG-2 encoding, can also be implemented efficiently. REMARC achieves speedups ranging from a factor of 2.3 to 21.2 over the base processor, which is a single-issue processor or a 2-issue superscalar processor. We also compare its performance with multimedia instruction extensions. Using more processing resources, REMARC can achieve higher performance than multimedia instruction extensions.

  • ASADAL/PROVER: A Toolset for Verifying Temporal Properties of Real-Time System Specifications in Statechart

    Kwang-Il KO  Kyo C. KANG

    PAPER-Software System
    Page(s): 398-411

    Critical properties of real-time embedded systems must be verified before these systems are deployed, as failing to meet these properties may cause considerable property damage and/or human casualties. Although Statechart is one of the most popular languages for modeling the behavior of real-time systems, proof systems and analysis tools for Statechart are still at the research stage and either do not fully support the semantics of the original Statechart or have limited capabilities for proving real-time properties. This paper introduces ASADAL/PROVER, a toolset for verifying temporal properties of Statechart extended with justice and compassion properties. ASADAL/PROVER is composed of two subsystems, RTTL-Prover and Model-Checker. The RTTL-Prover converts Statechart specifications into real-time temporal logic (RTTL) formulas of Ostroff and then checks whether the formulas satisfy a temporal property (also in RTTL) using theorem proving techniques. The Model-Checker supports verification of a predefined set of real-time properties using a model checking technique. The RTTL-Prover can support verification of any real-time property as long as it can be specified in RTTL; therefore, the messages generated by the tool are general and may not be of much help in debugging Statechart specifications. The Model-Checker, however, can provide detailed information for debugging. ASADAL/PROVER has been applied successfully to some experimental systems. One ongoing research effort in this project is to apply the symbolic model-checking technique of [3] to support Statecharts with a much larger global-state space. We are also extending the types of temporal properties supported by the Model-Checker.

  • On Relocation Decision Policies of Mobile Databases

    BUDIARTO  Kaname HARUMOTO  Masahiko TSUKAMOTO  Shojiro NISHIO

    PAPER-Databases
    Page(s): 412-421

    Recently, mobile computing has received much attention from the database community. Sharing information among mobile users is one of the most challenging issues in mobile computing due to user mobility. Replication is a promising technique for this issue. However, adopting replication in mobile computing is a non-trivial task, since we still face other problems such as the limited disk capacity and wireless network bandwidth available to mobile users. We have proposed a dynamic replica allocation strategy called User Majority Replica Allocation (UMRA) that is well suited to the modern architecture of mobile computing environments while avoiding the problems mentioned above. In this paper, we propose two relocation decision policies for UMRA and provide a cost analysis for them. We also provide a cost analysis for another replica allocation strategy, called Static Replica Allocation (SRA), for comparison purposes.

  • A Routing Algorithm for Multihop WDM Ring

    Xiaoshe DONG  Tomohiro KUDOH  Hideharu AMANO

    PAPER-Computer Networks
    Page(s): 422-430

    The Divisor-Skip Wavelength Division Multiplexing (DS-WDM) ring is an optical interconnection network for workstation clusters or parallel machines which can easily connect various numbers of nodes using wavelength division multiplexing techniques. However, the wavelength-ordered routing algorithm proposed for the DS-WDM ring requires complicated processing in each router. Here, a new routing algorithm for the DS-WDM ring, called the comparing dimensional number routing algorithm, is proposed and evaluated. Although its diameter and average distance are almost the same as those of traditional wavelength-ordered routing, its cost and latency are much reduced.

  • A Frame-Dependent Fuzzy Compensation Method for Speech Recognition over Time-Varying Telephone Channels

    Wei-Wen HUNG  Hsiao-Chuan WANG

    PAPER-Speech Processing and Acoustics
    Page(s): 431-438

    Speech signals transmitted over a telephone network often suffer from interference due to ambient noise and channel distortion. In this paper, a novel frame-dependent fuzzy channel compensation (FD-FCC) method employing two-stage bias subtraction is proposed to minimize the channel effect. First, through maximum likelihood (ML) estimation over the set of all word models, we choose the word model that best matches the input utterance. Then, based upon this word model, a set of mixture biases can be derived by averaging the cepstral differences between the input utterance and the chosen model. In the second stage, instead of using a single bias, a frame-dependent bias is calculated for each input frame to equalize the channel variations in the input utterance. This frame-dependent bias is obtained as a convex combination of the mixture biases, weighted by a fuzzy membership function. Experimental results show that the channel effect can be effectively canceled even when additive background noise is involved in a telephone speech recognition system.
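
    A minimal sketch of the second stage, assuming the first (ML) stage has already selected the word model: per-mixture biases are formed from cepstral differences, and each frame is compensated by a convex combination of those biases. The fuzzy membership below is a generic fuzzy-c-means style weight used purely as a stand-in; the paper's exact membership function and bias estimation may differ.

```python
import numpy as np

def frame_dependent_compensation(frames, mixture_means, fuzziness=2.0):
    """frames: (T, D) cepstral vectors of the utterance;
    mixture_means: (M, D) means of the word model chosen by ML estimation."""
    # Per-mixture bias: average cepstral difference to each mixture mean.
    biases = frames.mean(axis=0) - mixture_means                 # (M, D)
    compensated = []
    for x in frames:
        dist = np.linalg.norm(x - mixture_means, axis=1) + 1e-8  # (M,)
        w = dist ** (-2.0 / (fuzziness - 1.0))                   # fuzzy membership
        w /= w.sum()                                             # convex weights
        compensated.append(x - w @ biases)                       # frame-dependent bias
    return np.array(compensated)
```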

  • Feature-Specification Algorithm Based on Snake Model for Facial Image Morphing

    Aboul-Ella HASSANIEN  Masayuki NAKAJIMA

    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Page(s): 439-446

    In this paper, a new snake model for image morphing with semiautomated delineation, based on Hermite interpolation theory, is presented. The snake model is used to specify the correspondence between features in two given images. It allows a user to extract a contour that defines a facial feature, such as the lips, mouth, or profile, by specifying only the endpoints of the contour around the feature to be defined. We assume that the user can specify the endpoints of a curve around the feature, which serve as the extremities of a contour. The proposed method automatically computes the image information around these endpoints, which provides the boundary conditions. Then the contour is optimized by taking this information into account near its extremities. During the iterative optimization process, the image forces are turned on progressively from the contour extremities toward the center to define the exact position of the feature. The proposed algorithm helps the user easily define the exact position of a feature. It may also reduce the time required to establish the features of an image.

  • Optimal Robot Self-Localization and Accuracy Bounds

    Kenichi KANATANI  Naoya OHTA

    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Page(s): 447-452

    We discuss optimal estimation of the current location of a mobile robot by matching an image of the scene taken by the robot with a model of the environment. We first present a theoretical accuracy bound and then give a method that attains that bound, which can be viewed as describing the probability distribution of the current location. Using real images, we demonstrate that our method is superior to the naive least-squares method. We also confirm the predictions of our theory by applying the bootstrap procedure.

  • Automated Detection and Removal of Clouds and Their Shadows from Landsat TM Images

    Bin WANG  Atsuo ONO  Kanako MURAMATSU  Noboru FUJIWARA

    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Page(s): 453-460

    In this paper, a scheme to remove clouds and their shadows from remotely sensed Landsat TM images over land is proposed. The scheme uses an image fusion technique to automatically recognize and remove contamination by clouds and their shadows and to integrate complementary information from multitemporal images into the composite image. Cloud regions can be detected on the basis of their reflectance differences from other regions. Based on the fact that shadows smooth the brightness changes of the ground, shadow regions can be detected successfully by means of the wavelet transform. Further, an area-based detection rule is developed in this paper, and the multispectral characteristics of Landsat TM images are used to alleviate the computational load. Because the wavelet transform is adopted for the image fusion, artifacts are invisible in the fused images. Finally, the performance of the proposed scheme is demonstrated experimentally.

  • Region-Based Prediction Coding for Compression of Noisy Synthetic Images

    Yu LIU  Masayuki NAKAJIMA

    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Page(s): 461-467

    Noise greatly degrades the image quality and the performance of image compression algorithms. This paper presents an approach for the representation and compression of noisy synthetic images. A new concept, the region-based prediction (RBP) model, is first introduced, and then the RBP model is applied to noisy images. In conventional predictive coding techniques, the context for prediction is always composed of individual pixels surrounding the pixel to be processed. The RBP model uses regions instead of individual pixels as the context for prediction. An algorithm for the implementation of RBP is proposed and applied to noisy synthetic images in our experiments. Using RBP to find the residual data and encoding them, we achieve a bit rate of 1.10 bits/pixel for the noisy synthetic image. The decompressed image achieves a peak SNR of 42.59 dB. Compared with a peak SNR of 41.01 dB for the noisy synthetic image, the quality of the decompressed synthetic image is improved by 1.58 dB in the MSE sense. In contrast to our proposed compression algorithm with its improvement in image quality, conventional coding methods can compress the image data only at the expense of lower image quality. At the same bit rate, the image compression standard JPEG provides a peak SNR of 33.17 dB for the noisy synthetic image, and a conventional median filter with a 3 × 3 window provides a peak SNR of 25.89 dB.
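
    The central idea of RBP, regions rather than neighboring pixels as the prediction context, can be illustrated in a few lines; this is a simplified sketch, not the paper's algorithm, and it assumes a region segmentation `labels` is already available.

```python
import numpy as np

def region_based_residual(image, labels):
    """Predict every pixel from the mean of the region it belongs to and
    return the prediction and the residual to be entropy-coded."""
    prediction = np.zeros_like(image, dtype=float)
    for region in np.unique(labels):
        mask = labels == region
        prediction[mask] = image[mask].mean()     # the region is the context
    residual = image.astype(float) - prediction   # small for synthetic regions
    return prediction, residual
```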

  • A Character-Based Postprocessing System for Handwritten Japanese Address Recognition

    Keiji YAMANAKA  Susumu KUROYANAGI  Akira IWATA

    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Page(s): 468-474

    Based on previous work on handwritten Japanese kanji character recognition, a postprocessing system for handwritten Japanese address recognition is proposed. The recognition system is composed of CombNET-II, a general-purpose large-scale character recognizer, and MMVA, a modified majority voting system. Beginning with a lexicon and a set of character candidates produced by the character recognizer for each character composing the input word, an interpretation of the input word is generated. MMVA is used in the postprocessing stage to select the interpretation that accumulates the highest score. In the case of more than one possible interpretation, the Conflict Analyzing System calls the character recognizer again to generate scores for each character composing each interpretation and determine the final output word. The proposed word recognition system was tested with two sets of handwritten Japanese city names, and recognition rates higher than 99% were achieved, demonstrating the effectiveness of the method.
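
    The voting step can be pictured as follows; this is an illustrative sketch of score accumulation over a lexicon, not the actual MMVA or Conflict Analyzing System, and the example characters are made up.

```python
def score_interpretations(candidates, lexicon):
    """`candidates` holds, for each character position, a dict mapping candidate
    characters to recognizer scores; every lexicon word of the right length
    accumulates the scores of its characters, and the highest-scoring word wins
    (ties would be passed on to a conflict-analyzing stage)."""
    best_word, best_score = None, float("-inf")
    for word in lexicon:
        if len(word) != len(candidates):
            continue
        score = sum(pos.get(ch, 0.0) for ch, pos in zip(word, candidates))
        if score > best_score:
            best_word, best_score = word, score
    return best_word, best_score

# Toy example with two-character "words".
candidates = [{"名": 0.9, "古": 0.2}, {"屋": 0.8, "尾": 0.5}]
print(score_interpretations(candidates, ["名屋", "古尾"]))   # ('名屋', 1.7)
```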

  • Finite Approximations of Predicate Circumscription

    Kazuhiko OOTA  Koji IWANUMA

    PAPER-Artificial Intelligence and Cognitive Science
    Page(s): 475-479

    Predicate circumscription is a fundamental formalization of common-sense reasoning. In this paper, we study a new approximation formula for it. In our previous works, we investigated Lifschitz's pointwise circumscription and its generalization, which functions as a finite approximation to predicate circumscription in the first-order framework. In this paper, we first study the capabilities of generalized pointwise circumscription more closely and give a simple example showing that it cannot be complete even when a minimized predicate has only a finite extension in the minimal models. Next, we introduce a new approximation formula, called finite constructive circumscription, in order to overcome that limitation. Finally, we compare the expressive power of the two approximation methods with that of the predicate circumscription schema, and pose an open problem that should be solved to clarify the completeness of the predicate circumscription schema with respect to minimal model semantics.

  • Improving Generalization Ability through Active Learning

    Sethu VIJAYAKUMAR  Hidemitsu OGAWA

    PAPER-Bio-Cybernetics and Neurocomputing
    Page(s): 480-487

    In this paper, we discuss the problem of active training data selection for improving the generalization capability of a neural network. We look at the learning problem from a function approximation perspective and formalize it as an inverse problem. Based on this framework, we analytically derive a method of choosing a training data set optimized with respect to the Wiener optimization criterion. The final result uses the a priori correlation information on the original function ensemble to devise an efficient sampling scheme which, when used in conjunction with the learning scheme described here, is shown to result in optimal generalization. This result is substantiated through a simulated example and a learning problem in a high-dimensional function space.

  • Admissibility of Memorization Learning with Respect to Projection Learning in the Presence of Noise

    Akira HIRABAYASHI  Hidemitsu OGAWA  Yukihiko YAMASHITA

    PAPER-Bio-Cybernetics and Neurocomputing
    Page(s): 488-496

    In the learning of feed-forward neural networks, the so-called 'training error' is often minimized. This error, however, is not directly related to the generalization capability, which is one of the major goals of learning; minimizing it can only be interpreted as a substitute for another form of learning that does consider the generalization capability. Admissibility is a concept for discussing whether one form of learning can serve as a substitute for another. In this paper, we discuss the case where learning that minimizes a training error is used as a substitute for projection learning, which considers the generalization capability, in the presence of noise. Moreover, we give a method for choosing a training set that satisfies the admissibility condition.

  • A Fast and Stable Method for Detecting and Tracking Medical Organs in MRI Sequences

    Dong Joong KANG  Chang Yong KIM  Yang Seok SEO  In So KWEON

    LETTER-Image Processing, Computer Graphics and Pattern Recognition
    Page(s): 497-499

    A discrete dynamic model for defining contours in 2-D medical images is presented. The active contour is optimized by a dynamic programming algorithm, for which a new constraint with fast and stable properties is introduced. The internal energy of the model depends on the local behavior of the contour, while the external energy is derived from image features. The algorithm is able to rapidly detect convex and concave objects even when the image quality is poor.

  • Viewpoint-Based Similarity Discernment on SNAP

    Takashi YUKAWA  Sanda M. HARABAGIU  Dan I. MOLDOVAN

    LETTER-Artificial Intelligence and Cognitive Science
    Page(s): 500-502

    This paper presents an algorithm for viewpoint-based similarity discernment of linguistic concepts on Semantic Network Array Processor (SNAP). The viewpoint-based similarity discernment plays a key role in retrieving similar propositions. This is useful for advanced knowledge processing areas such as analogical reasoning and case-based reasoning. The algorithm assumes that a knowledge base is constructed for SNAP, based on information acquired from the WordNet linguistic database. The algorithm identifies paths on the knowledge base between each given concept and a given viewpoint concept, then computes a similarity degree between the two concepts based on the number of nodes shared by the paths. A small scale knowledge base was constructed and an experiment was conducted on a SNAP simulator that demonstrated the feasibility of this algorithm. Because of SNAP's scalability, the algorithm is expected to work similarly on a large scale knowledge base.
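
    On a conventional machine the discernment idea reduces to a small graph computation; SNAP itself performs it with parallel marker propagation. The sketch below is an illustrative serial version, assuming an adjacency-dict knowledge base; it rates similarity by the number of nodes shared by the two concept-to-viewpoint paths, as described in the abstract.

```python
from collections import deque

def path_to(graph, start, goal):
    """Breadth-first path from `start` to `goal` in an undirected adjacency-dict
    graph; returns the list of nodes on one shortest path, or [] if none."""
    prev, queue = {start: None}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph.get(node, ()):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return []

def viewpoint_similarity(graph, concept_a, concept_b, viewpoint):
    """Similarity of two concepts with respect to a viewpoint concept:
    the number of nodes shared by their paths to the viewpoint."""
    path_a = set(path_to(graph, concept_a, viewpoint))
    path_b = set(path_to(graph, concept_b, viewpoint))
    return len(path_a & path_b)
```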

  • Derivation of the Iteration Algorithm for the Modified Pseudo-Inverse Model for Associative Memory from the Consideration of the Energy Function

    Yoshifumi OGAWA  Iku NEMOTO

    LETTER-Artificial Intelligence and Cognitive Science
    Page(s): 503-507

    The pseudo-inverse model for associative memory has an iterative algorithm converging to its weight matrix. The present letter shows that the same algorithm, except for the absence of self-couplings, can be derived from a simple consideration of the energy of the network state.

  • Acceleration Techniques for the Network Inversion Algorithm

    Hiroyuki TAKIZAWA  Taira NAKAJIMA  Masaaki NISHI  Hiroaki KOBAYASHI  Tadao NAKAMURA

    LETTER-Bio-Cybernetics and Neurocomputing
    Page(s): 508-511

    We apply two acceleration techniques for the backpropagation algorithm to an iterative gradient descent algorithm called the network inversion algorithm. Experimental results show that these techniques are also quite effective in decreasing the number of iterations required to detect input vectors on the classification boundary of a multilayer perceptron.
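
    As an illustration of what is being accelerated (the letter's two specific techniques are not named in the abstract), the sketch below performs network inversion, i.e. gradient descent on the input of a small two-layer sigmoid network so that its output approaches a target, with a momentum term used as one classic backpropagation speed-up. All names are illustrative.

```python
import numpy as np

def invert_network(W1, b1, W2, b2, target, steps=500, lr=0.5, momentum=0.9):
    """Gradient descent on the input x of a fixed two-layer sigmoid network so
    that its output approaches `target`, with a momentum term for acceleration."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    x = np.zeros(W1.shape[1])
    v = np.zeros_like(x)
    for _ in range(steps):
        h = sig(W1 @ x + b1)                  # hidden layer
        y = sig(W2 @ h + b2)                  # network output
        dy = (y - target) * y * (1 - y)       # MSE loss gradient at the output
        dh = (W2.T @ dy) * h * (1 - h)
        dx = W1.T @ dh                        # gradient with respect to the input
        v = momentum * v - lr * dx            # momentum update
        x = x + v
    return x
```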