
IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • Cite Score

    1.4


Volume E82-D No.12  (Publication Date:1999/12/25)

    Regular Section
  • Design of Optimal Array Processors for Two-Step Division-Free Gaussian Elimination

    Shietung PENG  Stanislav G. SEDUKHIN  

     
    PAPER-Algorithm and Computational Complexity

      Page(s):
    1503-1511

    The design of array processors for solving linear systems using the two-step division-free Gaussian elimination method is considered. The two-step method can be used to improve systems based on the one-step method in terms of numerical stability as well as precision requirements. Despite the rather complicated computations needed at each iteration of the two-step method, we develop an innovative parallel algorithm whose data dependency graph meets the requirements of regularity and locality. We then derive two-dimensional array processors by adopting a systematic approach that investigates the set of all admissible solutions, and obtain the optimal array processors under linear time-space scheduling. The array processors are optimal in terms of the number of processing elements used.
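As a point of reference for the two-step method, the classical one-step division-free elimination replaces each row update with a cross-multiplication, so the triangularization involves no divisions at all; the paper's two-step variant merges two such steps to control entry growth. A minimal sketch of the one-step recurrence (for context only, not the authors' array-processor design):

```python
def division_free_elimination(a):
    """One-step division-free Gaussian elimination (sketch).

    Each update is the cross-multiplication a[k][k]*a[i][j] - a[i][k]*a[k][j],
    so no divisions occur; the price is entry growth, which two-step
    variants are designed to mitigate.
    """
    a = [row[:] for row in a]  # work on a copy
    n = len(a)
    for k in range(n - 1):
        for i in range(k + 1, n):
            aik = a[i][k]  # save before it is overwritten
            for j in range(k, n):
                a[i][j] = a[k][k] * a[i][j] - aik * a[k][j]
    return a
```

For integer input the result stays integral; the trailing diagonal entry equals det(A) scaled by the earlier pivots.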

  • Multi-Threaded Design for a Software Distributed Shared Memory System

    Jyh-Chang UENG  Ce-Kuen SHIEH  Su-Cheong MAC  An-Chow LAI  Tyng-Yue LIANG  

     
    PAPER-Software System

      Page(s):
    1512-1523

    This paper describes the design and implementation of a multi-threaded Distributed Shared Memory (DSM) system, called Cohesion, which provides high programming flexibility, masks latency, and supports load balancing. Cohesion offers a parallel programming environment very similar to that of a multiprocessor system. Threads can be created recursively in this environment, and users are not required to manage the locations of the threads. Instead of supporting a shared-variable model, Cohesion provides a global shared address space among all nodes in the system. The space is further divided into three regions, i.e., release, conventional, and object-based memory, each governed by a different consistency protocol. In this paper, the design issues of an ordinary thread system, such as thread management, load balancing, and synchronization, are reconsidered in light of the memory management provided by the DSM system. Several real applications have been used to evaluate the performance of the system. The results show that multi-threading usually performs better than single-threading because network latency can be masked by overlapping communication and computation. However, the gain depends on program behavior and the number of threads executed on each node in the system.

  • Evaluating Adaptability of Software Systems Based on Algebraic Equivalency

    Yoshiyuki SHINKAWA  Masao J. MATSUMOTO  

     
    PAPER-Software System

      Page(s):
    1524-1534

    Adaptability evaluation of software systems is one of the key concerns in both software engineering and requirements engineering. In this paper, we present a formal and systematic approach to evaluating the adaptability of software systems to requirements in enterprise business applications. Our approach consists of three major parts: a common modeling method for both business realms and software realms, functional adaptability evaluation between the models using Σ algebra, and behavioral adaptability evaluation using process algebra. With our approach, one can rigorously and uniquely determine whether a software system is adaptable to the requirements, either totally or partially. A sample application from order processing illustrates how effective this approach is in solving the adaptability evaluation problem.

  • Evaluation of Two Load-Balancing Primary-Backup Process Allocation Schemes

    Heejo LEE  Jong KIM  Sung Je HONG  

     
    PAPER-Fault Tolerant Computing

      Page(s):
    1535-1544

    In this paper, we present two process allocation schemes that tolerate multiple faults when the primary-backup replication method is used. The first scheme, called the multiple backup scheme, runs multiple backup processes for each process to tolerate multiple faults. The second scheme, called the regenerative backup scheme, runs only one backup process for each process but regenerates backup processes for processes left without one after a fault occurs, so as to keep each primary-backup process pair available. For both schemes, we propose heuristic process allocation methods that balance loads even when faults occur. We then evaluate and compare the performance of the proposed heuristic process allocation methods using simulation. Next, we analyze the reliability of the two schemes based on their fault-tolerance capability. For this analysis, we find the degree of fault tolerance of each scheme and then derive its reliability using Markov chains. The comparison results indicate that the regenerative single-backup process allocation scheme is more suitable than the multiple backup allocation scheme.
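The load-balancing idea behind such allocation heuristics can be illustrated with a greedy placement: each primary goes to the currently least-loaded node and its backup to the least-loaded remaining node. This is a hypothetical sketch, not the authors' exact heuristics; it also assumes a backup contributes the same load as a primary, which the paper does not state.

```python
def allocate(num_procs, num_nodes):
    """Greedy primary-backup placement (hypothetical sketch).

    Primary: least-loaded node.  Backup: least-loaded node other than
    the primary's, so a single node failure never loses both copies.
    """
    loads = [0] * num_nodes
    placement = []
    for _ in range(num_procs):
        primary = min(range(num_nodes), key=lambda n: loads[n])
        loads[primary] += 1
        backup = min((n for n in range(num_nodes) if n != primary),
                     key=lambda n: loads[n])
        loads[backup] += 1
        placement.append((primary, backup))
    return placement, loads
```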

  • An Efficient Method for Reconfiguring the 1 1/2 Track-Switch Mesh Array

    Tadayoshi HORITA  Itsuo TAKANAMI  

     
    PAPER-Fault Tolerant Computing

      Page(s):
    1545-1553

    As VLSI technology has developed, interest in implementing an entire parallel computer system, or a significant part of one, using wafer scale integration has grown. The major problem in this case is the possibility of drastically low yield and/or reliability of the system if there is no strategy for coping with faults. Various strategies to restructure a faulty physical system into a fault-free target logical system are described in the literature [1]-[5]. In this paper, we propose an efficient approximate method which can reconstruct 1 1/2 track-switch mesh arrays with faulty PEs in hardware as well as in software. A logical circuit added to each PE, and a network connecting these circuits, are used to decide which spare PEs compensate for faulty PEs. The hardware complexity of each circuit is much less than that of a PE, and the size of each additional circuit is constant, independent of the array size. Using this dedicated hardware scheme, a built-in self-reconfigurable system that needs no host computer is realizable, and the time for reconfiguring arrays becomes very short. Simulation of the method's performance shows that the reconstruction efficiency of our algorithm is slightly less than those of the exhaustive and Shigei's methods [6],[7], but much better than that of the neural one [3]. We also compare the time complexities of reconstruction in hardware and in software, and the hardware complexity, in terms of the number of gates in the logical circuit added to each PE, with those of the other methods.

  • A Built-in Self-Reconfigurable Scheme for 3D Mesh Arrays

    Itsuo TAKANAMI  Tadayoshi HORITA  

     
    PAPER-Fault Tolerant Computing

      Page(s):
    1554-1562

    We propose a model for fault-tolerant 3D processor arrays using one-and-a-half track switches. Spare processors are laid on two opposite surfaces of the 3D array. The fault compensation process is performed by shifting processors along a continuous straight line (called a compensation path) from a faulty processor to a spare on one of these surfaces. Compensation paths are not allowed to be in the near-miss relation with each other. Then, switches with only four states are needed to preserve the 3D mesh topology after compensating for faults. We give an algorithm, in a form convenient for hardware implementation, for reconfiguring 3D mesh arrays with faults. The algorithm can reconfigure the arrays in polynomial time. By computer simulation, we show the survival rates and reliabilities of arrays, which express the efficiency of reconfiguration under the algorithm. The reliabilities are compared with those of a model using double tracks, for which the near-miss relation among compensation paths is allowed but whose hardware overhead is almost double that of the proposed one-and-a-half-track model. Finally, we design a logical circuit for hardware realization of the algorithm. Using this circuit, we can construct a built-in self-reconfigurable 3D mesh array in which reconfiguration is done very quickly without the aid of a host computer.

  • Diagnosing Delay Faults in Combinational Circuits Under the Ambiguous Delay Model

    Kwame Osei BOATENG  Hiroshi TAKAHASHI  Yuzo TAKAMATSU  

     
    PAPER-Fault Tolerant Computing

      Page(s):
    1563-1571

    In a previous paper, we presented a path-tracing method for multiple gate delay fault diagnosis in combinational circuits. In this paper, we propose an improved method that uses the ambiguous delay model. This delay model makes provision for parameter variations in the IC manufacturing process. To make the current method effective, we propose a timed 8-valued simulation and some new diagnostic rules. Furthermore, we introduce a preparatory process that speeds up diagnosis. In addition, at the end of diagnosis, information from the results of the preparatory process makes it possible to distinguish between non-existent faults and undiagnosed faults.

  • An Edge-Preserving Image Coding System with Vector Quantization

    Chou-Chen WANG  Chin-Hsing CHEN  Chaur-Heh HSIEH  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    1572-1581

    Image coding with vector quantization (VQ) suffers from several defects, including edge degradation and high encoding complexity. This paper presents an edge-preserving coding system based on VQ to overcome these defects. A signal processing unit first classifies image blocks into a low-activity or high-activity class. A high-activity block is then decomposed into a smoothing factor, a bit-plane, and a smoother (lower-variance) block. These outputs can be encoded by VQ more efficiently and with lower distortion. A set of visual patterns is used to encode the bit-planes by binary vector quantization. We also develop a modified search-order coding to further reduce the redundancy of the quantization indexes. Simulation results show that the proposed algorithm achieves much better perceptual quality with a higher compression ratio and significantly lower computational complexity than direct VQ.
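The decomposition step can be illustrated on a 1-D block: split the pixels about the block mean into a bit-plane, take the gap between the two group means as the smoothing factor, and subtract it from the high pixels to obtain a lower-variance block. The paper's exact decomposition may differ; this is a hedged sketch of the idea, in which reconstruction is exact:

```python
from statistics import pvariance

def decompose(block):
    # bit-plane: 1 where the pixel is at or above the block mean
    m = sum(block) / len(block)
    bits = [1 if x >= m else 0 for x in block]
    hi = [x for x, b in zip(block, bits) if b]
    lo = [x for x, b in zip(block, bits) if not b]
    # smoothing factor: gap between the two group means (an assumption,
    # chosen so that subtracting it flattens the block)
    s = sum(hi) / len(hi) - sum(lo) / len(lo) if hi and lo else 0.0
    smooth = [x - s * b for x, b in zip(block, bits)]  # lower-variance block
    return s, bits, smooth

def reconstruct(s, bits, smooth):
    return [r + s * b for r, b in zip(smooth, bits)]
```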

  • Semi-Automatic Tool for Aligning a Parameterized CAD Model to Stereo Image Pairs

    Chu-Song CHEN  Kuan-Chung HUNG  Yi-Ping HUNG  Lin-Lin CHEN  Chiou-Shann FUH  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    1582-1588

    Fully automatic reconstruction of 3D models from images is well known to be a difficult problem. For many applications, a limited amount of human assistance is allowed and can greatly reduce the complexity of the 3D reconstruction problem. In this paper, we present an easy-to-use method for aligning a parameterized 3D CAD model to images taken from different views, from which the shape parameters of the 3D CAD model can be recovered accurately. Our work is composed of two parts. In the first part, we developed an interactive tool which allows the user to associate features in the CAD model with features in the 2D images. This interactive tool is designed for efficiency and accuracy. In the second part, 3D information extracted from different stereo views is integrated using an optimization technique to obtain accurate shape parameters. Experimental results demonstrate the accuracy and usefulness of the recovered CAD model.

  • A New Class of Multichannel Image Processing Filters: Vector Median-Rational Hybrid Filters

    Lazhar KHRIJI  Moncef GABBOUJ  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    1589-1596

    A new class of nonlinear filters called Vector Median-Rational Hybrid Filters (VMRHF) for multispectral image processing is introduced and applied to color image filtering problems. These filters are based on Rational Functions (RF). The VMRHF is a two-stage filter which exploits the features of the vector median filter and those of the rational operator. The filter output is the result of a vector rational function operating on the outputs of three sub-filters: two vector median (VMF) sub-filters and one center weighted vector median filter (CWVMF), chosen for their desirable properties, such as edge and detail preservation and accurate chromaticity estimation. Experimental results show that the new VMRHF outperforms a number of widely known nonlinear filters for multispectral image processing, such as the Vector Median Filter (VMF) and Distance Directional Filters (DDF), with respect to all criteria used.
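The first-stage building block, the vector median, picks the window sample whose summed distance to all other samples is smallest; because the output is always one of the input vectors, edges and chromaticity are preserved. A minimal sketch using the L1 (city-block) norm; the paper's filters may use a different norm, add center weighting, and apply the rational second stage on top:

```python
def vector_median(window):
    """Return the vector in `window` minimizing the sum of L1 distances
    to every other vector in the window (the vector median)."""
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(window, key=lambda v: sum(l1(v, u) for u in window))
```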

  • Indexing Method for Three-Dimensional Position Estimation

    Iris FERMIN  Sudhanshu SEMWAL  Jun OHYA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    1597-1604

    Indexing techniques are usually used in model-based object recognition and ray tracing algorithms. In this paper, we present a new method, based on indexing, for estimating the three-dimensional position of a subject (or object) in a circumscribed space. We construct two- and three-dimensional indices of the space, which are used to estimate the three-dimensional position by an interpolation technique. Estimating the three-dimensional position of a subject (or object) involves two processes: preprocessing and three-dimensional position estimation. We have implemented this idea using a stereo camera and tested it using grid patterns of two different sizes. Promising results for preprocessing and 3D position estimation are presented. Moreover, we show that this approach can also be extended to multiple cameras.

  • Mean Field Decomposition of a Posteriori Probability for MRF-Based Image Segmentation: Unsupervised Multispectral Textured Image Segmentation

    Hideki NODA  Mehdi N. SHIRAZI  Bing ZHANG  Nobuteru TAKAO  Eiji KAWAGUCHI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    1605-1611

    This paper proposes a Markov random field (MRF) model-based method for unsupervised segmentation of multispectral images consisting of multiple textures. To model such textured images, a hierarchical MRF with two layers is used, the first layer representing an unobservable region image and the second layer representing the multiple textures which cover each region. The method uses the Expectation-Maximization (EM) algorithm for model parameter estimation, where, to overcome the well-known computational problem in the expectation step, we approximate the Baum function using a mean-field-based decomposition of the a posteriori probability. Given the provisionally estimated parameters at each iteration of the EM algorithm, a provisional segmentation is carried out using the local a posteriori probability (LAP) of each pixel's region label, which is derived by mean-field-based decomposition of the a posteriori probability of the whole region image. Experiments show that the use of LAPs is essential to good image segmentation.

  • Utilizing Repair Cases of Home Electrical Appliances

    Satoshi HORI  Hiromitsu SUGIMATSU  Soshi FURUKAWA  Hirokazu TAKI  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Page(s):
    1612-1617

    We have developed a diagnostic Case-Based Reasoning (CBR) system, Doctor, which infers possible defects in a home electrical appliance and lists the necessary service parts. CBR is well suited to building a diagnostic system for field service because it imitates how experienced service technicians reason and is able to learn defect trends and novel repair cases from a service report database. To apply a CBR system to this real-world problem, our system has the following new features: (1) its CBR mechanism utilizes not only repair cases but also diagnostic rules elicited from human experts, so that accurate diagnosis can be achieved; (2) its casebase maintenance mechanism updates the casebase and adapts it to the changing real world.
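The retrieval step in such a diagnostic CBR loop can be sketched as a nearest-case lookup over reported symptoms. The case data, symptom names, and overlap-count similarity below are purely hypothetical illustrations, not Doctor's actual casebase or metric:

```python
def retrieve(casebase, symptoms):
    # pick the stored case sharing the most symptoms with the new report
    # (overlap count is a simple stand-in similarity metric)
    return max(casebase, key=lambda case: len(case["symptoms"] & symptoms))

# hypothetical repair cases: symptoms seen, defect found, parts replaced
casebase = [
    {"symptoms": {"no_power", "fuse_blown"},
     "defect": "power supply", "parts": ["fuse", "regulator"]},
    {"symptoms": {"no_picture", "sound_ok"},
     "defect": "video board", "parts": ["video board"]},
    {"symptoms": {"intermittent_sound"},
     "defect": "loose connector", "parts": ["speaker cable"]},
]
```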

  • Strategy Acquisition for the Game "Othello" Based on Reinforcement Learning

    Taku YOSHIOKA  Shin ISHII  Minoru ITO  

     
    PAPER-Bio-Cybernetics and Neurocomputing

      Page(s):
    1618-1626

    This article discusses automatic strategy acquisition for the game "Othello" based on a reinforcement learning scheme. In our approach, a computer player which initially knows only the game rules becomes stronger after playing several thousand games against another player. In each game, the computer player refines its evaluation function for the game state; this is achieved by min-max reinforcement learning (MMRL), a simple learning scheme that uses the min-max strategy. Since the state space of Othello is huge, we employ a normalized Gaussian network (NGnet) to represent the evaluation function. As a result, the computer player becomes strong enough to beat a player employing a heuristic strategy. This article experimentally shows that MMRL is better than TD(0), and that the NGnet is better than a multi-layered perceptron, in our Othello task.
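The core of a min-max backup update can be sketched in tabular form: the learning target for a state's value is the best successor value for the player to move, and the estimate takes a step toward it. This is a hypothetical tabular sketch; the paper trains an NGnet function approximator rather than a table:

```python
def mmrl_update(V, state, successor_values, alpha, maximizing):
    """Move V[state] a step of size alpha toward the min-max
    backed-up target (max for one player, min for the other)."""
    target = (max if maximizing else min)(successor_values)
    V[state] = V.get(state, 0.0) + alpha * (target - V.get(state, 0.0))
    return V[state]
```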

  • An Analog Neural Network System with Learning Capability Using Simultaneous Perturbation

    Yutaka MAEDA  Toshiyuki KUSUHASHI  

     
    PAPER-Bio-Cybernetics and Neurocomputing

      Page(s):
    1627-1633

    In this paper, we describe an implementation of an analog neural network system with on-line learning capability. A learning rule using simultaneous perturbation is adopted; compared with the ordinary back-propagation method, the simultaneous perturbation learning rule is much easier to implement. The neural system can monitor the weight values and the error value. Results for the exclusive-OR problem and a simple function problem are shown.
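The appeal for analog hardware is that a simultaneous-perturbation gradient estimate needs only loss evaluations, not back-propagated derivatives: perturb all weights at once by a random ±c sign vector and scale the resulting loss difference. Below is a two-sided (SPSA-style) sketch on a toy quadratic; Maeda's hardware rule is a closely related form, and the step sizes and toy loss here are arbitrary choices for illustration:

```python
import random

def sp_gradient(loss, w, c, delta):
    # delta is a vector of +/-1 signs; a single simultaneous perturbation
    # of all weights yields an estimate of every gradient component
    wp = [wi + c * d for wi, d in zip(w, delta)]
    wm = [wi - c * d for wi, d in zip(w, delta)]
    diff = (loss(wp) - loss(wm)) / (2 * c)
    return [diff * d for d in delta]  # d in {+1,-1}: dividing == multiplying

def train(loss, w, steps=200, a=0.1, c=0.01, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        delta = [rng.choice((-1.0, 1.0)) for _ in w]
        g = sp_gradient(loss, w, c, delta)
        w = [wi - a * gi for wi, gi in zip(w, g)]
    return w
```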

  • A Practical Method for Constructing a Semi-Optimal Coterie

    Takashi HARADA  Masafumi YAMASHITA  

     
    LETTER-Algorithm and Computational Complexity

      Page(s):
    1634-1638

    A coterie is a set of quorums such that any two quorums intersect, and it is used in quorum-based algorithms for solving the mutual exclusion problem. The availability of a coterie is the probability that the algorithm (adopting the coterie) tolerates process and/or link failures. Constructing an optimal coterie in terms of availability is therefore important from the viewpoint of fault tolerance, but unfortunately, even calculating the availability is known to be #P-hard. Recently, Harada and Yamashita proposed several heuristic methods for improving the availability of a coterie. This letter first evaluates their performance and then proposes a practical method for constructing a semi-optimal coterie using one of the heuristic methods as its main component.
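The objects involved can be made concrete with the classical majority coterie: the quorums are all minimal majorities, any two of which intersect (the safety condition for mutual exclusion), and for this particular coterie the availability has a closed form, namely the probability that a majority of nodes is up when each node is up independently with probability p. A small sketch of these definitions (not the Harada-Yamashita heuristics themselves):

```python
from itertools import combinations
from math import comb

def majority_coterie(n):
    # all subsets of floor(n/2)+1 nodes: any two majorities must intersect
    q = n // 2 + 1
    return [set(c) for c in combinations(range(n), q)]

def is_coterie(quorums):
    # safety condition for quorum-based mutual exclusion:
    # every pair of quorums shares at least one node
    return all(a & b for a in quorums for b in quorums)

def majority_availability(n, p):
    # a live quorum exists iff at least a majority of nodes is up
    q = n // 2 + 1
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(q, n + 1))
```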