
Keyword Search Result

[Keyword] OMP (3945 hits)

3861-3880 hits (3945 hits)

  • Graph Rewriting Systems and Their Application to Network Reliability Analysis

    Yasuyoshi OKADA  Masahiro HAYASHI  

     
    PAPER-Automaton, Language and Theory of Computing

      Vol:
    E76-D No:2
      Page(s):
    154-162

    We propose a new type of Graph Rewriting System (GRS) that provides a theoretical foundation for the reduction method, which plays an important role in network reliability analysis. By introducing this GRS we obtain the following results: (1) the reduction methods of network reliability analysis are clarified within the theoretical framework of GRS; (2) the significance of completeness of the reduction methods is clarified in that framework; and (3) a procedure is given for recognizing complete systems from the rewriting rules alone. In particular, procedure (3) is obtained by introducing a boundary graph (B-Graph). Finally, an application of GRS to network reliability analysis is shown.
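
    The abstract above treats reductions abstractly as graph rewriting rules. As a rough illustration only (not the paper's GRS formalism), the following sketch applies the classical series and parallel reliability reductions to a small two-terminal network with independent edge failures; the network and probabilities are invented for the example.

    ```python
    # Illustrative sketch (not the paper's GRS formalism): classical series/parallel
    # reductions for two-terminal network reliability with independent edge failures.
    from itertools import combinations

    def reduce_network(edges, terminals):
        """edges: list of (u, v, p) with p = probability that the edge works.
        Repeatedly applies parallel and series reductions until none applies."""
        edges = list(edges)
        changed = True
        while changed:
            changed = False
            # Parallel reduction: two edges joining the same pair of nodes.
            for i, j in combinations(range(len(edges)), 2):
                (u1, v1, p1), (u2, v2, p2) = edges[i], edges[j]
                if {u1, v1} == {u2, v2}:
                    merged = (u1, v1, 1 - (1 - p1) * (1 - p2))
                    edges = [e for k, e in enumerate(edges) if k not in (i, j)] + [merged]
                    changed = True
                    break
            if changed:
                continue
            # Series reduction: remove a non-terminal node of degree 2.
            for node in {n for u, v, _ in edges for n in (u, v)} - set(terminals):
                incident = [e for e in edges if node in (e[0], e[1])]
                if len(incident) == 2:
                    (u1, v1, p1), (u2, v2, p2) = incident
                    a = u1 if v1 == node else v1
                    b = u2 if v2 == node else v2
                    edges = [e for e in edges if e not in incident] + [(a, b, p1 * p2)]
                    changed = True
                    break
        return edges

    # Example: s-a-t path in series, in parallel with a direct s-t edge.
    print(reduce_network([("s", "a", 0.9), ("a", "t", 0.9), ("s", "t", 0.8)], ("s", "t")))
    # -> a single (s, t, p) edge whose p (0.962) is the two-terminal reliability.
    ```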

  • Orientable Closed Surface Construction from Volume Data

    Takanori NAGAE  Takeshi AGUI  Hiroshi NAGAHASHI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E76-D No:2
      Page(s):
    269-273

    Surface construction is a well-known way to visualize volume data. Although currently used algorithms such as marching cubes produce quality sufficient for volume visualization, they do not guarantee adequate surface topology. These algorithms work well when the surface is rather simple; when it is complicated, however, the surface may fail to separate the internal and external spaces, that is, there may be holes in the surface, redundant overlaps, or self-intersections. Adequate surface topology is important not only for visualization but also for laser stereolithography, which creates real 3D plastic objects. In the present paper, we propose a new method that produces a set of triangular patches from given volume data, and show that the resulting set of patches has no holes, no redundancy, no self-intersections, and forms an orientable closed surface.

  • Extended Key Management System Using Complementary Exponential Calculation

    Naoya TORII  Takayuki HASEBE  Ryota AKIYAMA  

     
    PAPER

      Vol:
    E76-A No:1
      Page(s):
    78-87

    We propose two types of key management systems that use complementary exponential calculation, in which the users of the system are divided into groups and different modulus numbers are assigned to each group and to the edges between groups. Key generation information over these modulus numbers is issued to each user by a trusted center. A user who receives this information can generate encryption keys shared with other users in the system without running a key exchange protocol. In our proposed system, the number of primes is one of the parameters for generating the key generation information, and it decreases in inverse proportion to the square of the number of groups compared with the original method. This enables the number of users in the system to be extended to more than one million, which is not possible with the original method.
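
    The abstract does not spell out the construction, so the sketch below does not reproduce it; it merely illustrates, with Blom-style key predistribution (a different, well-known scheme), how a trusted center can issue per-user key generation information from which any two users derive the same pairwise key without a key exchange protocol. All names and parameters are invented for the example.

    ```python
    # Illustrative only: Blom-style key predistribution, NOT the paper's
    # "complementary exponential calculation" construction.
    import random

    PRIME = 2**61 - 1   # field modulus for the toy example

    def setup(user_ids, rng=random.Random(0)):
        """Trusted center: picks a secret symmetric matrix M and hands every
        user i the row M @ v_i, where v_i = (1, id, id^2, ...) mod PRIME."""
        k = len(user_ids)
        M = [[0] * k for _ in range(k)]
        for i in range(k):
            for j in range(i, k):
                M[i][j] = M[j][i] = rng.randrange(PRIME)
        def vec(uid):
            return [pow(uid, e, PRIME) for e in range(k)]
        shares = {}
        for uid in user_ids:
            v = vec(uid)
            shares[uid] = [sum(M[r][c] * v[c] for c in range(k)) % PRIME
                           for r in range(k)]
        return shares

    def pairwise_key(my_share, peer_id):
        """Combine own secret share with the peer's public id; both sides
        obtain v_i^T M v_j = v_j^T M v_i, i.e. the same key, with no exchange."""
        k = len(my_share)
        peer_vec = [pow(peer_id, e, PRIME) for e in range(k)]
        return sum(s * p for s, p in zip(my_share, peer_vec)) % PRIME

    shares = setup([11, 22, 33])
    assert pairwise_key(shares[11], 22) == pairwise_key(shares[22], 11)
    ```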

  • Communication Complexity of Perfect ZKIP for a Promise Problem

    Kaoru KUROSAWA  Takashi SATOH  

     
    PAPER

      Vol:
    E76-A No:1
      Page(s):
    46-49

    We define the communication complexity of a perfect zero-knowledge interactive proof (ZKIP) as the expected number of bits communicated to achieve the given error probabilities (of both completeness and soundness). While the round complexity of ZKIPs has been studied extensively, no progress has been made on their communication complexity. This paper shows a perfect ZKIP whose communication complexity is 11/12 of that of the standard perfect ZKIP for a specific class of Quadratic Residuosity.
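
    For orientation, the "standard perfect ZKIP" for quadratic residuosity is commonly taken to be the classical one-bit-challenge protocol; a toy round of that classical protocol (not the reduced-communication protocol of this paper) can be sketched as follows, with tiny illustrative parameters.

    ```python
    # Toy single round of the classical perfect ZKIP for quadratic residuosity:
    # the prover shows that x is a square mod n without revealing a root w.
    # (Not the paper's protocol; parameters are tiny and purely illustrative.)
    import random, math

    def zk_round(n, x, w, rng=random.Random(1)):
        assert (w * w) % n == x % n            # prover's secret witness
        # Prover: commit to a fresh random square y = r^2 mod n.
        r = rng.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = rng.randrange(1, n)
        y = (r * r) % n
        # Verifier: random one-bit challenge.
        b = rng.randrange(2)
        # Prover: reveal r (b = 0) or r*w (b = 1).
        z = (r * pow(w, b, n)) % n
        # Verifier: accept iff z^2 = y * x^b (mod n).
        return (z * z) % n == (y * pow(x, b, n)) % n

    n = 7 * 11          # toy modulus (in real use the factorization is unknown)
    w = 9
    x = (w * w) % n     # a quadratic residue mod n
    assert all(zk_round(n, x, w) for _ in range(20))
    ```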

  • A Complementary Optical Interconnection for Inter-Chip Networks

    Hideto FURUYAMA  Masaru NAKAMURA  

     
    PAPER-Integration of Opto-Electronics and LSI Technologies

      Vol:
    E76-C No:1
      Page(s):
    112-117

    A new optical interconnection system suitable for high-speed ICs using a novel complementary optical interconnection technique has been developed. This system uses paired light sources and photodetectors for optical complementary operation, and greatly lowers the power consumption compared with conventional systems. Analyses and experimental results indicate that this system can operate in the gigabit range, and reduces power consumption to less than 20% of that in conventional systems at 1 Gb/s.

  • Photonic LSI--Merging the Optical Technology into LSI--

    Yoshihiko MIZUSHIMA  

     
    INVITED PAPER-Key Paper

      Vol:
    E76-C No:1
      Page(s):
    4-12

    The future trends of optical technologies combined with LSI are reviewed. Present problems of LSI, and possible solutions to these problems through merging optical technology into LSI, are discussed. One of the present trends in interconnection between LSI components is the time-serial approach, originally developed for optical communication; this method is capable of high-speed data transfer. The other is a space-parallel approach, arising from the two-dimensional nature of light propagation, which has the capability of performing parallel processing. A hybrid OEIC, possibly on GaAs, is discussed as an example of future photonic LSI. The lack of key devices is a fundamental barrier to the future improvement of photonic LSI, and direct interaction between photons and electrons is a promising approach. Some of the author's ideas to promote the merger of photonics and LSI are proposed.

  • A System for Deciding the Security of Cryptographic Protocols

    Hajime WATANABE  Toru FUJIWARA  Tadao KASAMI  

     
    PAPER

      Vol:
    E76-A No:1
      Page(s):
    96-103

    It is difficult to decide whether or not a given cryptographic protocol is secure, even when the cryptographic algorithm used in the protocol is assumed to be secure. We have proposed an algorithm to decide the security of cryptographic protocols under several conditions. In this paper, we review our algorithm and report a system that verifies security based on it. The system has been implemented on a computer, and we have used it to verify the security of several protocols efficiently.

  • A Real-Time Speech Dialogue System Using Spontaneous Speech Understanding

    Yoichi TAKEBAYASHI  Hiroyuki TSUBOI  Hiroshi KANAZAWA  Yoichi SADAMOTO  Hideki HASHIMOTO  Hideaki SHINCHI  

     
    PAPER

      Vol:
    E76-D No:1
      Page(s):
    112-120

    This paper describes a task-oriented speech dialogue system based on spontaneous speech understanding and response generation (TOSBURG). The system has been developed for a fast-food ordering task using speaker-independent, keyword-based spontaneous speech understanding. To understand the user's intention from spontaneous speech, the system consists of a noise-robust keyword-spotter, a semantic keyword lattice parser, a user-initiated dialogue manager and a multimodal response generator. After noise-immune keyword-spotting is performed, the spotted keyword candidates are analyzed by the keyword lattice parser to extract the semantic content of the input speech. Then, referring to the dialogue history and context, the dialogue manager interprets the semantic content of the input speech. In cases where the interpretation is ambiguous or uncertain, the dialogue manager invites the user to confirm verbally the system's understanding of the speech input. The system's response to the user throughout the dialogue is multimodal; that is, several modes of communication (synthesized speech, text, animated facial expressions and ordered food items) are used to convey the system's state to the user. The aim is to emulate the multimodal interaction that occurs between humans, and so achieve more natural and efficient human-computer interaction. The real-time dialogue system has been constructed using two general-purpose workstations and four DSP accelerators (520 MFLOPS). Experimental results have shown the effectiveness of the newly developed speech dialogue system.

  • On the Complexity of Composite Numbers

    Toshiya ITOH  Kenji HORIKAWA  

     
    PAPER

      Vol:
    E76-A No:1
      Page(s):
    23-30

    Given an integer N, it is easy to determine whether or not N is prime, because the set of primes is in ZPP. Then, given a composite number N, is it easy to determine whether or not N is of a specified form? In this paper, we consider subsets of the odd composite numbers, +1MOD4 (resp. +3MOD4), consisting of odd composite numbers all of whose prime factors are congruent to 1 (resp. 3) modulo 4, and show that (1) there exists a four-move (blackbox simulation) perfect ZKIP for the complement of +1MOD4 without any unproven assumption; (2) there exists a five-move (blackbox simulation) perfect ZKIP for +1MOD4 without any unproven assumption; (3) there exists a four-move (blackbox simulation) perfect ZKIP for +3MOD4 without any unproven assumption; and (4) there exists a five-move (blackbox simulation) statistical ZKIP for the complement of +3MOD4 without any unproven assumption. To the best of our knowledge, these are the first results for languages L that do not seem to be random self-reducible but nevertheless have constant-move (blackbox simulation) perfect or statistical ZKIPs without any unproven assumption.

  • An Access Control Mechanism for Object-Oriented Database Systems

    Tadashi ARAKI  Tetsuya CHIKARAISHI  Thomas HARDJONO  Tadashi OHTA  Nobuyoshi TERASHIMA  

     
    PAPER

      Vol:
    E76-A No:1
      Page(s):
    112-121

    The security problems of object-oriented database systems are investigated, and security-level assignment constraints and an access control mechanism based on the multilevel access control security policy are proposed. The proposed mechanism uses a Trusted Computing Base. A unique feature of the mechanism is that security levels are assigned not only to data items (objects) but also to methods, and methods are not shown to users whose security level is lower than that of the methods. We also distinguish between the security level of a variable in a class and that in an instance, and between the level of an object taken by itself and its level when taken as a variable or an element of another complex object. All of this realizes the policy of multilevel access control.
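
    As a rough sketch of the kind of check described above (security levels on both objects and methods, with methods hidden from lower-cleared users), one might write something like the following; the class names, level lattice and rules are assumptions for illustration, not the paper's actual mechanism or TCB design.

    ```python
    # Minimal sketch of multilevel access control with levels on both objects
    # and methods (hypothetical names; not the paper's mechanism).
    from dataclasses import dataclass, field

    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

    @dataclass
    class Method:
        name: str
        level: str          # security level of the method itself

    @dataclass
    class SecureObject:
        name: str
        level: str
        methods: dict = field(default_factory=dict)   # name -> Method

    def visible_methods(obj, user_level):
        """Methods above the user's clearance are not even listed to the user."""
        return [m.name for m in obj.methods.values()
                if LEVELS[user_level] >= LEVELS[m.level]]

    def can_read(obj, user_level):
        """Simple read check: the user's clearance must dominate the object level."""
        return LEVELS[user_level] >= LEVELS[obj.level]

    acct = SecureObject("account", "confidential",
                        {"balance": Method("balance", "confidential"),
                         "audit_trail": Method("audit_trail", "secret")})
    print(visible_methods(acct, "confidential"))   # ['balance'] -- audit_trail hidden
    print(can_read(acct, "unclassified"))          # False
    ```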

  • Scattering from Conductor or Complementary Aperture Array on a Semi-infinite Substrate

    Hideaki WAKABAYASHI  Masanobu KOMINAMI  Shinnosuke SAWA  Hiroshi NAKASHIMA  

     
    LETTER

      Vol:
    E75-A No:12
      Page(s):
    1762-1764

    Frequency selective screens (FSS) consisting of a conductor array or the complementary aperture array are investigated. The electric current distribution on the conductors, or the magnetic current distribution on the apertures, is determined by the moment method in the spectral domain. In addition, the power reflection coefficients are calculated and the scattering properties are discussed.

  • Discussion on a Method to Generalize the Computerized Test Based on the Analysis of Learners' Image Structure to Computer System

    Takako AKAKURA  Keizo NAGAOKA  

     
    LETTER

      Vol:
    E75-A No:12
      Page(s):
    1751-1754

    In this letter, the authors discuss a strategy for applying computerized tests to learners who have a negative attitude toward them. First, the learners' image of computer systems was measured by the semantic differential (SD) method. It was revealed that the image of computer systems is made up of four factors: subjective evaluation (Es), objective evaluation (Eo), potency (P) and activity (A). Learners with a negative attitude toward computerized tests were found to have a negative image on the (Es) and (A) factors, while having a rather positive image on the (Eo) and (P) factors. The authors then developed feedback record charts laying stress on the (Eo) and (P) factors. This feedback chart was effective in improving learners' acceptance of computerized tests.

  • Detecting Separability of Nonlinear Mappings Using Computational Graphs

    Kiyotaka YAMAMURA  Masahiro KIYOI  

     
    LETTER-Analog Circuits and Signal Processing

      Vol:
    E75-A No:12
      Page(s):
    1820-1825

    Separability is a valuable property of nonlinear mappings; by exploiting it, the computational complexity of many numerical algorithms can be substantially reduced. In this letter, a new algorithm is presented that detects the separability of nonlinear mappings using the concept of a "computational graph". A hybrid algorithm using both top-down and bottom-up search is proposed, and it is shown to be advantageous in detecting the separability of simultaneous nonlinear functions.
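
    The letter's algorithm is not reproduced here; as a much simplified illustration of the underlying idea (assumed, not taken from the letter), one can walk an expression's computational graph and test whether it splits at the top level into subexpressions over disjoint variable sets.

    ```python
    # Tiny illustration (assumed, not the letter's algorithm): a mapping given as
    # an expression tree is separable at the top level if it is a sum whose terms
    # involve disjoint sets of variables.

    def variables(node):
        """Collect the variables in an expression tree.
        Nodes are ('var', name), ('const', value), or (op, child, child)."""
        kind = node[0]
        if kind == "var":
            return {node[1]}
        if kind == "const":
            return set()
        return variables(node[1]) | variables(node[2])

    def top_level_separable(node):
        """True if node = f(x_A) + g(x_B) with A and B disjoint."""
        if node[0] != "+":
            return False
        return variables(node[1]).isdisjoint(variables(node[2]))

    # f(x, y) = x*x + 3*y is separable; g(x, y) = x*y is not.
    f = ("+", ("*", ("var", "x"), ("var", "x")), ("*", ("const", 3.0), ("var", "y")))
    g = ("*", ("var", "x"), ("var", "y"))
    print(top_level_separable(f), top_level_separable(g))   # True False
    ```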

  • Context-Free Grammars with Memory

    Etsuro MORIYA  

     
    PAPER-Automaton, Language and Theory of Computing

      Vol:
    E75-D No:6
      Page(s):
    847-851

    CFGs (context-free grammars) with various types of memory are introduced and their generative capacities are investigated. For an automata-theoretic characterization, a new type of automaton called partitioning automaton is introduced and it is shown that the class of languages generated by CFGs with memory type X is equal to the class of languages accepted by partitioning automata of type X.

  • Modeling and Simulation of the Sliding Window Algorithm for Fault-Tolerant Clock Synchronization

    Manfred J. PFLUEGL  Douglas M. BLOUGH  

     
    PAPER

      Vol:
    E75-D No:6
      Page(s):
    792-796

    Synchronized clocks are an essential requirement for a variety of distributed system applications. Many of these applications are safety-critical and require fault tolerance. In this paper, a general probabilistic clock synchronization model is presented. This model is uniformly probabilistic, incorporating random message delays, random clock drifts, and random fault occurrences, and it allows faults of any type in any system component. Also, a new Sliding Window Clock Synchronization Algorithm (SWA) providing increased fault tolerance is proposed. The probabilistic model is used for an evaluation of SWA, which shows that SWA is capable of tolerating significantly more faults than other algorithms and that its synchronization tightness is as good as or better than that of other algorithms.
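
    As a hedged illustration of the sliding-window idea only (the details below are assumptions, not the paper's exact SWA), a node can collect remote clock readings, slide a fixed-width window over them, and adopt the midpoint of the window that captures the most readings, so that a minority of faulty readings falls outside the chosen window.

    ```python
    # Hedged sketch of a sliding-window clock estimate (not the exact SWA of the
    # paper): pick the window of width W covering the most readings and return
    # its midpoint, so that outlier readings from faulty clocks are ignored.

    def sliding_window_estimate(readings, width):
        readings = sorted(readings)
        best_count, best_mid = -1, None
        for i, lo in enumerate(readings):
            inside = [r for r in readings[i:] if r <= lo + width]   # window [lo, lo+width]
            if len(inside) > best_count:
                best_count = len(inside)
                best_mid = (inside[0] + inside[-1]) / 2.0
        return best_mid

    # Five clocks: four agree to within 2 ms, one is off by ~100 ms.
    readings_ms = [100.4, 101.1, 99.8, 100.9, 201.5]
    print(sliding_window_estimate(readings_ms, width=2.0))   # ~100.45, outlier ignored
    ```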

  • A New Indexing Technique for Nested Queries on Composite Objects

    Yong-Moo KWON  Yong-Jin PARK  

     
    PAPER-Databases

      Vol:
    E75-D No:6
      Page(s):
    861-872

    A new indexing technique for rapid evaluation of nested queries on composite objects is proposed, reducing the overall cost of retrieval and update. An extended B+ tree is introduced in which the object identifiers (OIDs) to be searched and the path information used for updating index records are stored in leaf nodes and subleaf nodes, respectively. In this method, the retrieval operation is applied only to the OIDs in the leaf node. The index records of both leaf and subleaf nodes are updated in such a way that the path information in the subleaf node and the OIDs in the leaf node are reorganized by deleting and inserting OIDs. The technique presented offers advantages over current related indexing techniques in data reorganization and index allocation. In the proposed index record, the OIDs to be reorganized are always provided consecutively, so only the record directory is updated when an entire page should be removed. In addition, the proposed index can be allocated to a path of length greater than 3 without splitting the path. Comparisons under a variety of conditions are given with current indexing techniques, showing improved performance in cost, i.e., the total number of pages accessed for retrieval and update.
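
    The following sketch is only a simplified picture of the division of labor described above (OIDs for retrieval kept separately from path information used for update); the data structure shown is an ordinary dictionary, not the proposed extended B+ tree, and all identifiers are invented.

    ```python
    # Simplified picture (assumed structure, not the proposed extended B+ tree):
    # a path index keeps, per key value, the root OIDs returned by nested queries
    # ("leaf") and the instantiating OID paths used for updates ("subleaf").
    from collections import defaultdict

    class PathIndex:
        def __init__(self):
            self.leaf = defaultdict(set)      # key value -> root OIDs (retrieval)
            self.subleaf = defaultdict(set)   # key value -> full OID paths (update)

        def insert(self, key, path):
            self.leaf[key].add(path[0])
            self.subleaf[key].add(path)

        def lookup(self, key):                # query evaluation touches leaf only
            return self.leaf[key]

        def delete_path(self, key, path):     # update reorganizes via path info
            self.subleaf[key].discard(path)
            if not any(p[0] == path[0] for p in self.subleaf[key]):
                self.leaf[key].discard(path[0])

    idx = PathIndex()
    idx.insert("Toyota", ("company#1", "division#3", "vehicle#7"))
    idx.insert("Toyota", ("company#2", "division#5", "vehicle#9"))
    print(idx.lookup("Toyota"))               # both root OIDs are returned
    idx.delete_path("Toyota", ("company#1", "division#3", "vehicle#7"))
    print(idx.lookup("Toyota"))               # only company#2 remains
    ```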

  • Fault Tolerance Assurance Methodology of the SXO Operating System for Continuous Operation

    Hiroshi YOSHIDA  Hiroyuki SUZUKI  Kotaro OKAZAKI  

     
    PAPER

      Vol:
    E75-D No:6
      Page(s):
    797-803

    In developing the SXO operating system for the SURE SYSTEM 2000 continuous operation system, we aimed to achieve an unprecedentedly high level of software and hardware fault tolerance. We devised a fault-tolerant architecture and various methodologies to ensure fault tolerance, and implemented these techniques systematically throughout operating system development. In the design stage, we developed a design methodology called the recovery process chart to verify that recovery mechanisms were complete. In the manufacturing stage, we applied the concept of critical routes to recovery and other processes essential to high dependability, and developed a method of finding critical routes in a recovery process chart. In the test stage, we added an artificial software fault injection mechanism to the operating system. It generates various reproducible errors at appropriate times and reduces the number of personnel needed for testing, making system reliability evaluation easy.
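
    As a small generic illustration of reproducible, seeded software fault injection for exercising recovery paths (the API, names and probabilities below are invented for the example and are not the SXO mechanism):

    ```python
    # Generic illustration of seeded, reproducible software fault injection
    # (invented names and rates; this is not the SXO mechanism).
    import random

    class FaultInjector:
        def __init__(self, seed, rate):
            self.rng = random.Random(seed)   # fixed seed -> reproducible error runs
            self.rate = rate

        def maybe_fail(self, site):
            """Call at chosen injection points; raises an artificial error at the
            configured rate so recovery paths can be exercised in tests."""
            if self.rng.random() < self.rate:
                raise RuntimeError(f"injected fault at {site}")

    injector = FaultInjector(seed=42, rate=0.2)
    survived = failed = 0
    for i in range(100):
        try:
            injector.maybe_fail(site=f"disk_write_{i}")
            survived += 1
        except RuntimeError:
            failed += 1        # a real test would check the recovery path here
    print(survived, failed)    # identical counts on every run, thanks to the seed
    ```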

  • Stabilization of Voltage Limiter Circuit for High-Density DRAM's Using Pole-Zero Compensation

    Hitoshi TANAKA  Masakazu AOKI  Jun ETOH  Masashi HORIGUCHI  Kiyoo ITOH  Kazuhiko KAJIGAYA  Tetsurou MATSUMOTO  

     
    PAPER

      Vol:
    E75-C No:11
      Page(s):
    1333-1343

    To improve the stability and the power supply rejection ratio (PSRR) of the voltage limiter circuit used in high-density DRAM's, we present a voltage limiter circuit with pole-zero compensation. Analytical expressions that describe the stability of the circuit are provided for comprehensive consideration of circuit design. Voltage limiters with pole-zero compensation are shown to have excellent performance with respect to stability, PSRR, and circuit area. The parasitic resistances in internal voltage supply lines, signal transmission lines, and transistors are important parameters determining the stability of pole-zero compensation. Evaluation of a 16-Mbit test device revealed internal voltage fluctuations of 6% during operation of a chip-internal circuit, a phase margin of 53°, and a PSRR of 30 dB.
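
    As a generic numerical illustration of why a pole-zero pair helps (the numbers below are invented and unrelated to the limiter in the paper), one can compare the phase margin of a two-pole loop gain before and after adding a low-frequency pole together with a zero placed on the original dominant pole.

    ```python
    # Generic numerical illustration (invented values, not the limiter in the
    # paper): phase margin of a two-pole loop gain before/after pole-zero
    # compensation, where the added zero cancels the old dominant pole.
    import numpy as np

    def phase_margin_deg(loop_gain, f_hz):
        g = loop_gain(2j * np.pi * f_hz)
        i = np.argmin(np.abs(np.abs(g) - 1.0))      # unity-gain crossover point
        return 180.0 + np.degrees(np.angle(g[i]))

    f = np.logspace(0, 9, 200_000)                  # 1 Hz .. 1 GHz
    A0 = 1e4                                        # DC loop gain
    p1, p2 = 2 * np.pi * 1e3, 2 * np.pi * 1e6       # original poles (rad/s)
    pc, z = 2 * np.pi * 10, 2 * np.pi * 1e3         # added pole and cancelling zero

    uncompensated = lambda s: A0 / ((1 + s / p1) * (1 + s / p2))
    compensated = lambda s: A0 * (1 + s / z) / ((1 + s / pc) * (1 + s / p1) * (1 + s / p2))

    print(round(phase_margin_deg(uncompensated, f)),
          round(phase_margin_deg(compensated, f)))
    # roughly 18 deg before vs 84 deg after compensation, for these numbers
    ```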

  • Simplification to Enhance Comprehensibility of Communications Software Descriptions Written in a Procedural Language

    Yasushi WAKAHARA  Atsushi ITO  Eiji UTSUNOMIYA  Fumio NITTA  

     
    INVITED PAPER

      Vol:
    E75-B No:10
      Page(s):
    942-948

    The purpose of this paper is to propose a technique for simplifying communications software descriptions written in a procedural language in order to enhance their comprehensibility. Although such techniques have not been studied and discussed much in the past, they are important for achieving high productivity and high quality in communications software by reducing the complexity of the software description. This paper first presents, systematically, various simplification methods and their principles for descriptions of communications software, from the viewpoints of layout, syntactic structure, and so on. It then describes a simplification support system based on these principles for software specifications written in SDL. Lastly, the paper demonstrates the usefulness and effectiveness of the proposed simplification technique by analyzing the evaluation results of the simplification system.

  • Algorithms for Multiplexers Assignment after Scheduling and Allocation Steps

    Hiroshi SEKIGAWA  Kiyoshi OGURI  Ryo NOMURA  Yukihiro NAKAMURA  

     
    PAPER

      Vol:
    E75-A No:10
      Page(s):
    1202-1211

    In recent VLSI design of digital data paths, significantly more area is occupied by interconnect elements than by functional units and registers. Nevertheless, until recently most work in data path synthesis concentrated on trying to reduce the area of functional units and registers, without paying much attention to the interconnect area. Lately, research that addresses reducing the area of interconnection as well as of functional units and registers has been increasing, but in most of it the algorithms for assigning interconnect elements are not efficient enough to optimize the interconnect area. In most current research, algorithms for interconnect element assignment are used to calculate cost functions during the scheduling and/or allocation steps, which makes it impossible to use efficient optimization algorithms that may consume a long time. This paper presents new algorithms for assigning interconnect elements in data paths. The algorithms minimize the number of multiplexer inputs after the scheduling and operator/register allocations have been made, and they have two characteristics. First, we use a branch and bound method for small problems; we confirmed that exact solutions can be obtained in practical time with this method even for rather large problems, when the solutions are restricted to a one-level multiplexer model. Second, we use a heuristic method for larger problems. The algorithms have been implemented in C on an Apollo Domain Series 10000.
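
    As a hedged sketch of the branch-and-bound flavor described above (the cost model and example are assumptions, not the paper's exact formulation), the fragment below binds scheduled operations to functional units while pruning partial bindings whose multiplexer-input count already reaches the best complete binding found.

    ```python
    # Hedged sketch: branch-and-bound binding of scheduled operations to
    # functional units to minimize multiplexer inputs under a one-level mux
    # model.  The cost model is simplified (it counts distinct sources per unit
    # input port, even where a single source would need no mux at all).

    def mux_inputs(binding, ops):
        """Cost = number of distinct sources feeding each unit input port."""
        port_sources = {}
        for op, unit in binding.items():
            for port, src in enumerate(ops[op]["srcs"]):
                port_sources.setdefault((unit, port), set()).add(src)
        return sum(len(s) for s in port_sources.values())

    def branch_and_bound(ops, units):
        order = sorted(ops)                       # fix an assignment order
        best = {"cost": float("inf"), "binding": None}

        def extend(i, binding):
            if mux_inputs(binding, ops) >= best["cost"]:
                return                            # bound: cost never decreases
            if i == len(order):
                best["cost"], best["binding"] = mux_inputs(binding, ops), dict(binding)
                return
            op = order[i]
            for unit in units:
                clash = any(binding.get(o) == unit and ops[o]["step"] == ops[op]["step"]
                            for o in binding)     # same control step -> distinct units
                if not clash:
                    binding[op] = unit
                    extend(i + 1, binding)
                    del binding[op]

        extend(0, {})
        return best

    # Four additions over two control steps, two ALUs, register sources r1..r4.
    ops = {"a1": {"step": 0, "srcs": ("r1", "r2")},
           "a2": {"step": 0, "srcs": ("r3", "r4")},
           "a3": {"step": 1, "srcs": ("r1", "r2")},
           "a4": {"step": 1, "srcs": ("r3", "r4")}}
    print(branch_and_bound(ops, units=["ALU0", "ALU1"]))
    # -> cost 4: operations with identical sources share a unit (8 inputs if not)
    ```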

3861-3880 hits (3945 hits)