
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E81-D No.6 (Publication Date: 1998/06/25)

  • The Degrees of Immune and Bi-Immune Sets

    John GESKE  

     
    PAPER-Automata, Languages and Theory of Computing

      Page(s):
    491-495

    We study the pm-degrees and pT-degrees of immune and bi-immune sets. We demonstrate the existence of incomparable pT-immune degrees in deterministic time classes.

  • Characterization of Monotonic Multiple-Valued Functions and Their Logic Expressions

    Kyoichi NAKASHIMA  Yutaka NAKAMURA  Noboru TAKAGI  

     
    PAPER-Computer Hardware and Design

      Page(s):
    496-503

    This paper presents some fundamental properties of multiple-valued logic functions that are monotonic with respect to a partial-ordering relation introduced on the set of truth values, a relation that does not necessarily have a greatest or least element. Two kinds of necessary and sufficient conditions for monotonic p-valued functions are given with proofs. Logic formulas for these functions using unary operators defined on the partial-ordering relation, and a simplification method for those formulas, are also given. These results include as special cases our earlier results for p-valued functions monotonic in the ambiguity relation, which is a partial-ordering relation with a greatest element.
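
    As a minimal illustration of the monotonicity notion used here (the partial order and the example functions below are hypothetical, not taken from the paper), the following sketch checks whether a two-variable 3-valued function is monotonic with respect to a partial order on the truth values that has no greatest element.

      from itertools import product

      # Hypothetical partial order on truth values {0, 1, 2}: 0 is below both 1
      # and 2, while 1 and 2 are incomparable (so there is no greatest element).
      VALUES = (0, 1, 2)
      LEQ = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}

      def leq_vec(a, b):
          # Componentwise comparison of two argument tuples.
          return all((x, y) in LEQ for x, y in zip(a, b))

      def is_monotonic(f, arity):
          # f is monotonic iff a <= b componentwise implies f(a) <= f(b).
          points = list(product(VALUES, repeat=arity))
          return all((f(*a), f(*b)) in LEQ
                     for a in points for b in points if leq_vec(a, b))

      meet = lambda x, y: x if x == y else 0      # greatest lower bound in this order
      flip = lambda x, y: 1 if x == 0 else 0      # raises its output as the input drops
      print(is_monotonic(meet, 2), is_monotonic(flip, 2))   # True False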

  • Analytic Modeling of Updating Based Cache Coherent Parallel Computers

    Kazuki JOE  Akira FUKUDA  

     
    PAPER-Computer Systems

      Page(s):
    504-512

    In this paper, we apply the Semi-Markov Memory and Cache coherence Interference (SMCI) model, which we previously proposed for invalidation-based cache-coherent parallel computers, to an updating-based protocol. The model proposed here, the SMCI/Dragon model, can predict the performance of cache-coherent parallel computers with the Dragon protocol, just as the original SMCI model does for the Synapse protocol. Conventional analytic models that describe parallel computers by stochastic processes suffer from an explosion in the number of states required as the system size increases. We have already shown that the SMCI model describes parallel computers with the Synapse protocol using a small number of states and predicts their performance at low computation cost. In this paper, we demonstrate the generality of the SMCI model by applying it to another cache coherence protocol, Dragon, whose characteristics are opposite to those of Synapse. We show that the number of states required to construct the SMCI/Dragon model is only 21, as small as for SMCI/Synapse, and that the computation cost is likewise on the order of microseconds. Using the SMCI/Dragon model, we performed several comparative experiments against widely known simulation results and found only a 5.4% difference between the simulation and the SMCI/Dragon model.
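
    The SMCI/Dragon states and parameters are not given in this abstract; as a generic illustration of the kind of computation such analytic models rest on, the sketch below computes the long-run state occupancy of a small semi-Markov process from its embedded transition matrix and mean sojourn times (the numbers are hypothetical, not the model's).

      import numpy as np

      def semi_markov_occupancy(P, mean_sojourn):
          # Long-run fraction of time in each state: pi_i * tau_i / sum_j pi_j * tau_j,
          # where pi is the stationary distribution of the embedded chain P.
          n = P.shape[0]
          A = np.vstack([P.T - np.eye(n), np.ones(n)])   # pi P = pi together with sum(pi) = 1
          b = np.concatenate([np.zeros(n), [1.0]])
          pi, *_ = np.linalg.lstsq(A, b, rcond=None)
          w = pi * mean_sojourn
          return w / w.sum()

      # Toy 3-state example (hypothetical transition probabilities and sojourn times).
      P = np.array([[0.0, 0.7, 0.3],
                    [0.5, 0.0, 0.5],
                    [0.6, 0.4, 0.0]])
      tau = np.array([2.0, 1.0, 4.0])
      print(semi_markov_occupancy(P, tau))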

  • Distributed Concurrency Control with Local Wait-Depth Control Policy

    Jiahong WANG  Jie LI  Hisao KAMEDA  

     
    PAPER-Databases

      Page(s):
    513-520

    Parallel transaction processing (TP) systems have great potential to serve the ever-increasing demand for high transaction processing rates. This potential, however, may not be reached due to data contention and the widely used two-phase locking (2PL) concurrency control (CC) method. In this paper, a distributed locking-based CC policy called LWDC (Local Wait-Depth Control) is proposed to deal with this problem in shared-nothing parallel TP systems. On the basis of the LWDC policy, an algorithm called LWDCk is designed. Using simulation, LWDCk is compared with 2PL and with the baseline Distributed Wait-Depth Limited (DWDL) CC method. The simulation studies show that the new algorithm offers better system performance than the methods it is compared with.
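
    The LWDCk rules themselves are not spelled out in the abstract; the sketch below only illustrates the generic wait-depth-limited idea the policy builds on, with hypothetical names and data structures: a lock request may wait only if the local waiting chain stays within a depth limit k, otherwise a restart is forced.

      def wait_depth(txn, waits_for):
          # Length of the local waiting chain txn -> waits_for[txn] -> ...
          depth = 0
          while txn in waits_for:
              txn = waits_for[txn]
              depth += 1
          return depth

      def request_lock(requester, holder, waits_for, k):
          # Allow the wait only if the chain (including the new edge) stays within depth k.
          if wait_depth(holder, waits_for) + 1 <= k:
              waits_for[requester] = holder
              return "wait"
          return "restart"

      # Example: T3 already waits for T2; with k = 1, T1 may not also queue behind T3.
      waits_for = {"T3": "T2"}
      print(request_lock("T1", "T3", waits_for, k=1))   # restart
      print(request_lock("T1", "T2", waits_for, k=1))   # wait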

  • An Authorization Model for Object-Oriented Databases and Its Efficient Access Control

    Toshiyuki MORITA  Yasunori ISHIHARA  Hiroyuki SEKI  Minoru ITO  

     
    PAPER-Databases

      Page(s):
    521-531

    Access control is a key technology for providing data security in database management systems (DBMSs). Recently, various authorization models for object-oriented databases (OODBs) have been proposed since authorization models for relational databases are insufficient for OODBs because of the characteristics of OODBs, such as class hierarchies, inheritance, and encapsulation. Generally, an authorization is modeled as a set of rights, where a right consists of at least three components s, o, t and means that subject s is authorized to perform operation t on object o. In specifying authorizations implicitly, inference rules are useful for deriving rights along the class hierarchies on subjects, objects, and operations. An access request req=(s,o,t) is permitted if a right corresponding to req is given explicitly or implicitly. In this paper, we define an authorization model independent of any specific database schemas and authorization policies, and also define an authorization specification language which is powerful enough to specify authorization policies proposed in the literature. Furthermore, we propose an efficient access control method for an authorization specified by the proposed language, and evaluate the proposed method by simulation.
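
    As a minimal sketch of the access-control decision described here (the rights, hierarchies, and helper names below are hypothetical and do not reproduce the paper's specification language), a request (s, o, t) is permitted if a matching right is given explicitly or can be derived along the subject and object hierarchies.

      # Hypothetical explicit rights and hierarchies.
      EXPLICIT_RIGHTS = {("staff", "Document", "read"), ("admin", "Document", "write")}
      SUBJECT_PARENT = {"alice": "staff", "bob": "admin", "staff": None, "admin": None}
      OBJECT_CLASS = {"report.txt": "Document", "Document": None}

      def ancestors(x, parent):
          # x itself followed by everything above it in the hierarchy.
          while x is not None:
              yield x
              x = parent.get(x)

      def is_permitted(s, o, t):
          # Check every (subject ancestor, object class) pair against the rights.
          return any((su, ob, t) in EXPLICIT_RIGHTS
                     for su in ancestors(s, SUBJECT_PARENT)
                     for ob in ancestors(o, OBJECT_CLASS))

      print(is_permitted("alice", "report.txt", "read"))    # True (derived via staff / Document)
      print(is_permitted("alice", "report.txt", "write"))   # False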

  • A Fault-Tolerant Wormhole Routing Algorithm in Two Dimensional Mesh Networks

    Jinsoo KIM  Ji-Yun KIM  Hyunsoo YOON  Seung Ryoul MAENG  Jung Wan CHO  

     
    PAPER-Fault Tolerant Computing

      Page(s):
    532-544

    We propose a fault-tolerant routing algorithm for 2D meshes. Our routing algorithm can tolerate any number of concave fault regions. It is based on xy-routing and uses the concept of the fault ring/chain composed of fault-free elements surrounding faults. Three virtual channels per physical link are used for deadlock-free routing on a fault ring; four virtual channels are needed for a fault chain. In previous algorithms, fault-free nodes in the concave region of a concave fault ring were deactivated to avoid deadlock, which results in an excessive loss of computational power. Our algorithm ensures deadlock freedom by restricting virtual channel usage in the concave region, and it minimizes the loss of computational power. We also extend the proposed routing scheme to adaptive fault-tolerant routing. The adaptive version requires the same number of virtual channels as the deterministic one.
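
    For reference, the sketch below shows only the base xy-routing that the algorithm builds on; the fault-ring construction and virtual-channel selection from the paper are not reproduced here.

      def xy_next_hop(current, dest):
          # Dimension-order routing in a 2D mesh: correct the X offset first, then Y.
          (cx, cy), (dx, dy) = current, dest
          if cx != dx:
              return (cx + (1 if dx > cx else -1), cy)
          if cy != dy:
              return (cx, cy + (1 if dy > cy else -1))
          return current    # already at the destination

      # Example: hop-by-hop route from (0, 0) to (2, 1).
      node = (0, 0)
      while node != (2, 1):
          node = xy_next_hop(node, (2, 1))
          print(node)       # (1, 0), (2, 0), (2, 1)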

  • An Efficient Mandarin Text-to-Speech System on Time Domain

    Yih-Jeng LIN  Ming-Shing YU  

     
    PAPER-Speech Processing and Acoustics

      Page(s):
    545-555

    This paper describes a complete Mandarin text-to-speech system operating in the time domain. We take advantage of advances in memory technology, which offers ever-increasing capacity at ever-lower prices, and collect as many synthesis units as possible. With this approach, we developed simpler speech processing techniques and achieved faster processing using only an ordinary personal computer. We also developed careful methods to measure the intelligibility, comprehensibility, and naturalness of a Mandarin text-to-speech system; our system performs very well compared with existing systems. We first develop a set of algorithms and methods to handle features of the syllables, such as duration, amplitude, fundamental frequency, and pause. Based on these algorithms and methods, we then build a Mandarin text-to-speech system. Given any Chinese text in some computerized form, e.g., in BIG-5 code representation, our system can pronounce the text in real time. Our text-to-speech system runs on an IBM 80486-compatible PC with no special hardware for signal processing. The evaluation of the system is based on a proposed subjective evaluation method, carried out by 51 undergraduate students. The intelligibility of our text-to-speech system is 99.5%, the comprehensibility is 92.6%, and the naturalness is 81.512 points in a percentile grading system (the highest score is 100 points and the lowest is 0 points). Another 40 Ph.D. students also evaluated the naturalness; their result gives our text-to-speech system 82.8 points in the same grading system.
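
    A minimal sketch of time-domain concatenative synthesis of the kind described (the syllable inventory, prosody values, and sampling rate below are hypothetical placeholders, not the paper's data): look up a stored waveform for each syllable, scale its amplitude, and insert short pauses.

      import numpy as np

      RATE = 16000
      rng = np.random.default_rng(0)
      # Placeholder waveforms standing in for recorded syllable units.
      UNITS = {"ni3": rng.standard_normal(4000), "hao3": rng.standard_normal(4500)}

      def synthesize(syllables, amplitudes, pause_ms=50):
          pause = np.zeros(int(RATE * pause_ms / 1000))
          pieces = []
          for syl, amp in zip(syllables, amplitudes):
              pieces.append(amp * UNITS[syl])   # per-syllable amplitude adjustment
              pieces.append(pause)              # short pause after each unit
          return np.concatenate(pieces)

      speech = synthesize(["ni3", "hao3"], amplitudes=[1.0, 0.8])
      print(len(speech) / RATE, "seconds")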

  • A New Feature Selection Method to Extract Functional Structures from Multidimensional Symbolic Data

    Yujiro ONO  Manabu ICHINO  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    556-564

    In this paper, we propose a feature selection method to extract functional structures embedded in multidimensional data. In our approach, we do not approximate functional structures directly. Instead, we focus on the seemingly trivial property that functional structures are geometrically thin in an informative subspace. Using this property, we can exclude features that are irrelevant to describing the functional structures. As a result, conventional identification methods, using only the informative features, can accurately identify functional structures. In this paper, we define Geometrical Thickness (GT) in the Cartesian System Model (CSM), a mathematical model that can manipulate symbolic data. Additionally, we define Total Geometrical Thickness (TGT), which expresses geometrical structures in data. Using TGT, we investigate a new feature selection method and show its capabilities by applying it to two sets of artificial data and one set of real data.

  • Trade-Off between Requirement of Learning and Computational Cost

    Tzung-Pei HONG  Ching-Hung WANG  Shian-Shyong TSENG  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Page(s):
    565-571

    Machine learning in real-world situations sometimes starts from an initial collection of training instances, with learning then proceeding off and on as new training instances arrive intermittently. The idea of two-phase learning is proposed here to effectively solve learning problems in which training instances come in this two-stage way. Four two-phase learning algorithms based on the learning method PRISM are also proposed for inducing rules from training instances. These alternatives form a spectrum, showing that how well the requirement of PRISM (keeping down the number of irrelevant attributes) is achieved depends heavily on the computational cost spent. A suitable alternative, as a trade-off between computational cost and achievement of the requirement, can then be chosen according to the demands of the application domain.
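
    For context, the sketch below shows the classic single-phase PRISM induction step that the four two-phase variants build on; the variants themselves are not specified in the abstract, and the toy data are hypothetical.

      def precision(data, target, attr, val):
          # Accuracy of the test attr=val on `data`, with coverage as a tie-breaker.
          sel = [x for x in data if x[attr] == val]
          pos = sum(1 for x in sel if x["class"] == target)
          return (pos / len(sel), pos)

      def prism_rules(instances, attributes, target):
          # Induce rules (attribute -> value dicts) covering class `target`.
          rules, E = [], list(instances)
          while any(x["class"] == target for x in E):
              covered, conds, free = E, {}, set(attributes)
              # Grow one rule: repeatedly add the most accurate attribute-value test.
              while free and any(x["class"] != target for x in covered):
                  best = max(((a, v) for a in free for v in {x[a] for x in covered}),
                             key=lambda av: precision(covered, target, *av))
                  conds[best[0]] = best[1]
                  free.discard(best[0])
                  covered = [x for x in covered if x[best[0]] == best[1]]
              rules.append(conds)
              E = [x for x in E if not all(x[a] == v for a, v in conds.items())]
          return rules

      data = [{"outlook": "sunny", "windy": "no",  "class": "play"},
              {"outlook": "sunny", "windy": "yes", "class": "stay"},
              {"outlook": "rainy", "windy": "no",  "class": "stay"}]
      print(prism_rules(data, ["outlook", "windy"], "play"))   # one rule: outlook=sunny and windy=no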

  • Associative Semantic Memory Capable of Fast Inference on Conceptual Hierarchies

    Qing MA  Hitoshi ISAHARA  

     
    PAPER-Bio-Cybernetics and Neurocomputing

      Page(s):
    572-583

    The adaptive associative memory proposed by Ma is used to construct a new model of semantic network, referred to as associative semantic memory (ASM). The main novelty is its computational effectiveness, an important issue in knowledge representation: the ASM can perform inference over large conceptual hierarchies extremely fast, in time that does not increase with the size of the hierarchies. This performance cannot be realized by any existing system. In addition, the ASM has a simple and easily understandable architecture and is flexible in the sense that knowledge can easily be modified using one-shot relearning and generalization of knowledge is a basic system property. Theoretical analyses are given for the general case to guarantee that the ASM can flawlessly infer via pattern segmentation and recovery, the two basic functions of the adaptive associative memory.

  • Kohonen Learning with a Mechanism, the Law of the Jungle, Capable of Dealing with Nonstationary Probability Distribution Functions

    Taira NAKAJIMA  Hiroyuki TAKIZAWA  Hiroaki KOBAYASHI  Tadao NAKAMURA  

     
    PAPER-Bio-Cybernetics and Neurocomputing

      Page(s):
    584-591

    We present a mechanism, named the law of the jungle (LOJ), to improve Kohonen learning. The LOJ is used as an adaptive vector quantizer for approximating nonstationary probability distribution functions. In the LOJ mechanism, the probability that each node wins a competition is estimated dynamically during learning. Based on the estimated win probabilities, "strong" nodes are multiplied by creating new nodes near them, and "weak" nodes are removed by deleting themselves. A pair of creation and deletion is treated as an atomic operation. Therefore, nodes that cannot win the competition are transferred directly from regions where inputs almost never occur to regions where inputs often occur. This direct "jump" of weak nodes provides rapid convergence. Moreover, the LOJ requires neither time-decaying parameters nor a special periodic adaptation. For these reasons, the LOJ is suitable for quick approximation of nonstationary probability distribution functions. In experimental comparisons with several other Kohonen learning networks, only the LOJ can follow nonstationary probability distributions, except under high-noise environments.
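
    A minimal sketch of the LOJ idea as described (the constants, jump schedule, and input distribution below are hypothetical, not the paper's settings): win probabilities are estimated online, and the weakest node is moved next to the strongest one as an atomic create-and-delete.

      import numpy as np

      rng = np.random.default_rng(0)
      nodes = rng.uniform(-1, 1, size=(8, 2))     # codebook vectors
      win_prob = np.full(8, 1 / 8)                # running estimate of win probabilities
      ETA, BETA, JUMP_EVERY = 0.05, 0.01, 200     # learning rate, estimate rate, jump period

      for t in range(5000):
          # Nonstationary input: the distribution shifts from (0, 0) to (2, 2) halfway through.
          x = rng.normal(loc=(2.0, 2.0) if t > 2500 else (0.0, 0.0), scale=0.3, size=2)
          winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
          nodes[winner] += ETA * (x - nodes[winner])      # Kohonen-style winner update
          win_prob *= (1 - BETA)
          win_prob[winner] += BETA                        # update the win-probability estimate
          if (t + 1) % JUMP_EVERY == 0:                   # atomic create + delete = "jump"
              weak, strong = np.argmin(win_prob), np.argmax(win_prob)
              nodes[weak] = nodes[strong] + rng.normal(scale=0.01, size=2)
              win_prob[weak] = win_prob[strong] = win_prob[strong] / 2

      print(np.round(nodes, 2))   # most nodes end up near (2, 2) after the distribution shifts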

  • A New Method of Estimating Coronary Artery Diameter Using Direction Codes in Angiographic Images

    ChunKee JEON  KwangNham KANG  TaeWon RHEE  

     
    PAPER-Medical Electronics and Medical Information

      Page(s):
    592-601

    The conventional method requires the centerline of a vessel to estimate the vessel diameter. Two methods of estimating the centerline of vessels have been reported. One is to define the centerline manually, which potentially contributes to inter- and intra-observer variability; the orientation of the centerline also affects the diameter function, since diameters are computed perpendicular to the centerline. The other is to detect the centerline automatically, but this is a very complicated method. In this paper, we propose a new method of estimating vessel diameter using direction codes, without detecting the centerline. Since this method detects the vessel boundary and the direction code at the same time, it simplifies the procedure and reduces execution time in estimating the vessel diameter. Compared to a method that automatically estimates the vessel diameter using the centerline, the proposed method provides improved accuracy in images with poor contrast, branching, or obstructed vessels. It also provides good compression of the boundary description. Our experiments demonstrate the usefulness of the direction-code technique for quantitative angiography, and the experimental results justify the validity of the proposed method.
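
    For reference, the sketch below shows plain 8-direction (Freeman) chain coding of a traced boundary, the kind of direction code the method rests on; the simultaneous boundary detection and diameter estimation of the paper are not reproduced, and the example boundary is hypothetical.

      # Direction codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE (image y-axis points down).
      STEP_TO_CODE = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
                      (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

      def chain_code(boundary):
          # Convert a sequence of 8-connected boundary pixels (x, y) into direction codes.
          return [STEP_TO_CODE[(x2 - x1, y2 - y1)]
                  for (x1, y1), (x2, y2) in zip(boundary, boundary[1:])]

      # Example: a small L-shaped boundary fragment.
      print(chain_code([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]))   # [0, 0, 6, 6]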

  • A Correlation-Based Motion Correction Method for Functional MRI

    Arturo CALDERON  Shoichi KANAYAMA  Shigehide KUHARA  

     
    PAPER-Medical Electronics and Medical Information

      Page(s):
    602-608

    One serious problem affecting the rest- and active-state images obtained during a functional MRI (fMRI) study is involuntary subject movement inside the magnet while the imaging protocol is being carried out. The small signal intensity rise and the small activation areas observed in the fMRI results, such as the statistical maps indicating the significance of the observed signal intensity difference between the rest and active states for each pixel, are greatly affected even by head displacements of less than one pixel. Near-perfect alignment of each image with respect to a reference, at the subpixel level, is therefore necessary if the results are to be considered meaningful, especially in a clinical setting. In this paper we report the brain displacements that take place during an fMRI study, measured with an image alignment method based on a refined cross-correlation function that obtains fast (non-iterative) and precise values for the in-plane rotation and the X and Y translation correction factors. The performance of the method was tested with phantom experiments and with fMRI studies of normal subjects executing a finger-tapping motor task. In all cases, subpixel translations and rotations were detected. The rest and active phases of the time-course plots obtained from pixels in the primary motor area were well differentiated after only one pass of the motion correction program, giving enhanced activation zones. Other related areas, such as the supplementary motor area, became visible only after correction, and the number of pixels showing false activation was reduced.
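
    A minimal sketch of translation estimation by cross-correlation (the paper's refined method also recovers in-plane rotation and works at subpixel precision, neither of which is shown here; the test image is synthetic): the peak of the circular cross-correlation gives the integer X and Y shift.

      import numpy as np

      def estimate_translation(reference, image):
          # Peak of the circular cross-correlation gives the (row, col) shift
          # that maps the reference onto the image.
          xcorr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(reference))).real
          dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
          # Map shifts beyond half the image size to negative displacements.
          if dy > reference.shape[0] // 2:
              dy -= reference.shape[0]
          if dx > reference.shape[1] // 2:
              dx -= reference.shape[1]
          return int(dy), int(dx)

      rng = np.random.default_rng(1)
      ref = rng.random((64, 64))
      moved = np.roll(ref, shift=(3, -5), axis=(0, 1))     # synthetic shifted image
      print(estimate_translation(ref, moved))              # (3, -5)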

  • Composition of Strongly Infix Codes

    Tetsuo MORIYA  

     
    LETTER-Automata, Languages and Theory of Computing

      Page(s):
    609-611

    We introduce strongly infix codes. A code X is a strongly infix code if X is an infix code and any catenation of two words in X has no proper factor in X that is neither a left factor nor a right factor. We show that the class of strongly infix codes is closed under composition and, as the dual result, that the property of being strongly infix is inherited by a component of a decomposition.
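
    A small checker that transcribes the stated condition directly (this follows one reading of "proper factor which is neither a left factor nor a right factor"; the example sets are hypothetical):

      def is_infix_code(X):
          # No word of X is a proper factor of another word of X.
          return not any(u != w and u in w for u in X for w in X)

      def is_strongly_infix(X):
          if not is_infix_code(X):
              return False
          for x in X:
              for y in X:
                  w = x + y
                  for z in X:
                      # A proper factor of w that is neither a left nor a right factor.
                      if z != w and z in w and not w.startswith(z) and not w.endswith(z):
                          return False
          return True

      print(is_strongly_infix({"ab", "ba"}))      # False ("ba" sits strictly inside "abab")
      print(is_strongly_infix({"aab", "abb"}))    # True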

  • Efficient Encoding of Excitation Codes Using Trained Partial Algebraic Codebook

    Yun Keun LEE  Hwang Soo LEE  Robert M. GRAY  

     
    LETTER-Speech Processing and Acoustics

      Page(s):
    612-615

    An efficient method of encoding excitation codes using a partial algebraic codebook (PAC) is proposed. Since conventional algebraic code excited linear prediction (ACELP) encodes the positions and signs of all excitation pulses separately, the bits required for encoding the excitation codes take up a large portion of the total bit rate. Vector quantization (VQ) of the positions and signs of the excitation pulses yields a PAC. By using the PAC instead of the full set of algebraic codes, we can reduce the bits required to encode the excitation codes while maintaining the output speech quality. An iterative training algorithm, obtained by modifying the Lloyd algorithm, is proposed to find a suboptimal PAC. Simulation results show that considerable bit savings can be obtained with only a small degradation in the segmental signal-to-noise ratio (SEGSNR).
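
    A minimal sketch of generalized-Lloyd codebook training of the kind referred to here (the paper's modification of the Lloyd algorithm and the actual ACELP pulse structure are not reproduced; the training vectors are hypothetical stand-ins for pulse position/sign vectors):

      import numpy as np

      def train_codebook(training_vectors, codebook_size, iterations=20, seed=0):
          rng = np.random.default_rng(seed)
          codebook = training_vectors[rng.choice(len(training_vectors), codebook_size, replace=False)]
          for _ in range(iterations):
              # Nearest-codeword assignment (minimum squared error).
              d = ((training_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
              labels = d.argmin(axis=1)
              # Centroid update; keep the old codeword if its cell is empty.
              for j in range(codebook_size):
                  cell = training_vectors[labels == j]
                  if len(cell):
                      codebook[j] = cell.mean(axis=0)
          return codebook

      # Toy training set standing in for pulse position/sign vectors.
      data = np.random.default_rng(1).integers(0, 40, size=(500, 4)).astype(float)
      pac = train_codebook(data, codebook_size=16)
      print(pac.shape)   # (16, 4)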

  • Variable-Rate Vector Quantizer Design Using Genetic Algorithm

    Wen-Jyi HWANG  Sheng-Lin HONG  

     
    LETTER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    616-620

    This letter presents a novel variable-rate vector quantizer (VQ) design algorithm, a hybrid approach combining a genetic algorithm with the entropy-constrained VQ (ECVQ) algorithm. The proposed technique outperforms the ECVQ algorithm in the sense that it reaches a near-global optimum rather than a local one. Simulation results show that, when applied to image coding, the technique achieves higher PSNR and better image quality than the ECVQ algorithm.
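
    A minimal sketch of the entropy-constrained assignment rule at the core of ECVQ, which the hybrid also relies on (the genetic-algorithm component is not shown; the codebook and probabilities below are hypothetical): each input is mapped to the codeword minimizing distortion plus lambda times its code length.

      import numpy as np

      def ecvq_assign(x, codebook, probs, lam):
          # Index of the codeword minimizing squared error + lam * ideal code length.
          distortion = ((codebook - x) ** 2).sum(axis=1)
          rate = -np.log2(probs)               # ideal code length of each codeword
          return int(np.argmin(distortion + lam * rate))

      # Toy example: a rarely used codeword is avoided unless it is much closer.
      codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
      probs = np.array([0.95, 0.05])
      print(ecvq_assign(np.array([0.6, 0.6]), codebook, probs, lam=0.2))   # 0 (cheap codeword wins)
      print(ecvq_assign(np.array([0.9, 0.9]), codebook, probs, lam=0.2))   # 1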