
Keyword Search Result

[Keyword] TE(21534hit)

20741-20760hit(21534hit)

  • A System for the Synthesis of High-Quality Speech from Texts on General Weather Conditions

    Keikichi HIROSE  Hiroya FUJISAKI  

     
    PAPER

      Vol:
    E76-A No:11
      Page(s):
    1971-1980

    A text-to-speech conversion system for Japanese has been developed for the purpose of producing high-quality speech output. This system consists of four processing stages: 1) linguistic processing, 2) phonological processing, 3) control parameter generation, and 4) speech waveform generation. Although the processing at the first stage is restricted to texts on general weather conditions, the other three stages can also cope with news texts and narrations on other topics. Since the prosodic features of speech are closely related to linguistic information such as word accent, syntactic structure and discourse structure, linguistic processing over a wider range than before, at least a full sentence, is indispensable for obtaining good prosodic quality. From this point of view, the input text was restricted to weather forecast sentences, and a method for linguistic processing was developed that conducts morphological, syntactic and semantic analyses simultaneously. A quantitative model for generating fundamental frequency contours was adopted so that the linguistic information is properly reflected in the prosody of the synthetic speech. A set of prosodic rules was constructed to generate prosodic symbols representing the prosodic structure of the text from the linguistic information obtained at the first stage. A new speech synthesizer based on the terminal analog method was also developed to improve the segmental quality of the synthetic speech. It consists of four paths of cascaded pole/zero filters and three waveform generators. The four paths are used for the synthesis of vowels and vowel-like sounds, nasal murmur and buzz bar, frication, and plosion, respectively, while the three generators produce a voicing source waveform approximated by polynomials, a white Gaussian noise source for fricatives, and an impulse source for plosives. The validity of this approach has been confirmed by listening tests using speech synthesized by the developed system. Improvements were realized both in the prosodic quality and in the segmental quality of the synthetic speech.
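
    The "quantitative model for generating fundamental frequency contours" mentioned in this abstract is, given the authorship, presumably the Fujisaki model of superposed phrase and accent commands. The following is a minimal sketch of how such an F0 contour can be computed; the command timings, amplitudes and constants are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Minimal sketch of a Fujisaki-style F0 contour generator (assumed model, not
    # the paper's implementation). ln F0(t) is a baseline plus the responses of
    # phrase commands (impulses) and accent commands (steps).

    def phrase_component(t, alpha=3.0):
        """Impulse response G_p(t) = alpha^2 * t * exp(-alpha t), for t >= 0."""
        return np.where(t >= 0, alpha**2 * t * np.exp(-alpha * t), 0.0)

    def accent_component(t, beta=20.0, gamma=0.9):
        """Step response G_a(t) = min(1 - (1 + beta t) exp(-beta t), gamma), t >= 0."""
        g = 1.0 - (1.0 + beta * t) * np.exp(-beta * t)
        return np.where(t >= 0, np.minimum(g, gamma), 0.0)

    def f0_contour(t, fb, phrase_cmds, accent_cmds):
        """phrase_cmds: list of (onset T0, magnitude Ap); accent_cmds: (T1, T2, Aa)."""
        ln_f0 = np.full_like(t, np.log(fb))
        for t0, ap in phrase_cmds:
            ln_f0 += ap * phrase_component(t - t0)
        for t1, t2, aa in accent_cmds:
            ln_f0 += aa * (accent_component(t - t1) - accent_component(t - t2))
        return np.exp(ln_f0)

    # Illustration only: one phrase command and two accent commands.
    t = np.linspace(0.0, 2.0, 400)
    f0 = f0_contour(t, fb=110.0,
                    phrase_cmds=[(0.0, 0.5)],
                    accent_cmds=[(0.2, 0.6, 0.4), (0.9, 1.4, 0.3)])
    print(f0.min(), f0.max())
    ```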

  • A Verification Method via Invariant for Communication Protocols Modeled as Extended Communicating Finite-State Machines

    Masahiro HIGUCHI  Osamu SHIRAKAWA  Hiroyuki SEKI  Mamoru FUJII  Tadao KASAMI  

     
    PAPER-Signaling System and Communication Protocol

      Vol:
    E76-B No:11
      Page(s):
    1363-1372

    This paper presents a method for verifying the safety property of a communication protocol modeled as two extended communicating finite-state machines connected by two unbounded FIFO channels. In this method, four types of atomic formulae specifying conditions on a machine and on the sequence of messages in a channel are introduced. A human verifier describes a logical formula expressing conditions expected to be satisfied by all reachable global states, and a verification system proves by induction that the formula is indeed satisfied by such states (i.e., the formula is an invariant). If the invariant is never satisfied in any unsafe state, it can be concluded that the protocol is safe. To show the effectiveness of this method, a sample protocol extracted from the data transfer phase of the OSI session protocol was verified using the verification system.
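
    As a rough illustration of the proof obligations behind such an invariant-based argument (a toy finite-state sketch with hypothetical predicates, not the paper's verification system, which handles unbounded FIFO channels symbolically): the candidate invariant must hold initially, be preserved by every transition, and exclude every unsafe state.

    ```python
    # Toy sketch of invariant-based safety verification (illustration only; the
    # state space and predicates below are hypothetical and finite).
    from itertools import product

    CHANNELS = [(), ("data",), ("data", "data")]
    STATES = list(product(range(2), range(2), CHANNELS))   # (sender, receiver, channel)

    def step(state):
        """Hypothetical one-step successors: sender emits once, receiver consumes."""
        s, r, ch = state
        succ = []
        if s == 0:
            succ.append((1, r, ch + ("data",)))
        if ch and ch[0] == "data":
            succ.append((s, 1, ch[1:]))
        return succ

    def invariant(state):
        """Candidate invariant supplied by the human verifier."""
        s, r, ch = state
        if (s, r) == (0, 1):
            return False
        return ch == (("data",) if (s, r) == (1, 0) else ())

    def unsafe(state):
        return state[2].count("data") > 1      # duplicated data would be unsafe

    initial = (0, 0, ())
    holds_initially = invariant(initial)
    preserved = all(invariant(t) for st in STATES if invariant(st) for t in step(st))
    excludes_unsafe = all(not unsafe(st) for st in STATES if invariant(st))
    print("invariant proves safety:", holds_initially and preserved and excludes_unsafe)
    ```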

  • An Effective Defect-Repair Scheme for a High Speed SRAM

    Sadayuki OOKUMA  Katsuyuki SATO  Akira IDE  Hideyuki AOKI  Takashi AKIOKA  Hideaki UCHIDA  

     
    PAPER-SRAM

      Vol:
    E76-C No:11
      Page(s):
    1620-1625

    To achieve a high yield for a fast Bi-CMOS SRAM without speed degradation, three defect-repair methods, the address comparison method, the fuse decoder method and the distributed fuse method, were examined in detail and their advantages and disadvantages were clarified. The distributed fuse method is shown to be further improved by a built-in fuse word driver, a built-in fuse column selector, and fuse analog switches. This enhanced distributed fuse scheme was examined in a fast Bi-CMOS SRAM. A maximum access time of 14 ns and a chip size of 8.8 mm × 17.4 mm are expected for a 4 Mb Bi-CMOS SRAM in the future.

  • Circuit and Functional Design Technologies for 2 Mb VRAM

    Katsuyuki SATO  Masahiro OGATA  Miki MATSUMOTO  Ryouta HAMAMOTO  Kiichi MANITA  Terutaka OKADA  Yuji SAKAI  Kanji OISHI  Masahiro YAMAMURA  

     
    PAPER-Application Specific Memory

      Vol:
    E76-C No:11
      Page(s):
    1632-1640

    Four circuit techniques and a layout design scheme are proposed to realize a 2 Mb VRAM using 0.8 µm technology: enhanced circuit technologies for high-speed operation, functional circuit design and effective repair schemes for a VRAM, low-power-consumption techniques for the active and standby modes, and a careful layout design scheme realizing high noise immunity. Using these design techniques, a 2 Mb VRAM suitable for graphics applications with a 512 × 512 × 8 pixel screen was realized, with a clear mode of 4.6 GByte/sec and a 4-multi-column write mode of 400 MByte/sec, even using the same 0.8 µm technology as the previous 1 Mb VRAM.

  • Analysis of Electromagnetic Wave Scattering by a Cavity Model with Lossy Inner Walls

    Noh-Hoon MYUNG  Young-Seek SUN  

     
    PAPER-Antennas and Propagation

      Vol:
    E76-B No:11
      Page(s):
    1445-1449

    An approximate but sufficiently accurate high frequency solution is developed in this paper for analyzing the problem of electromagnetic plane wave scattering by an open-ended, perfectly-conducting, semi-infinite parallel-plate waveguide with a thin layer of lossy or absorbing material on its inner wall, and with a planar termination inside. The high frequency solution combines uniform geometrical theory of diffraction (UTD) and aperture integration (AI) methods. The present method has several advantages in comparison with other methods.

  • Power Control of a Terminal Analog Synthesizer Using a Glottal Model

    Mikio YAMAGUCHI  

     
    PAPER

      Vol:
    E76-A No:11
      Page(s):
    1957-1963

    A terminal-analog synthesizer which uses a glottal model has already been proposed for rule-based speech synthesis, but the control strategy for glottal source intensity levels has not yet been defined. On the other hand, power-control rules which determine the target segmental power of synthetic speech have been proposed, based on statistical analysis of the power in natural speech. It is pointed out that there is a close correlation between observed fundamental frequency and power levels in natural speech; however, the theoretical reasons for this correlation have not been explained. This paper shows the relationship between fundamental frequency and resultant power in a terminal-analog synthesizer which uses a glottal model. From the equations it can be deduced that the tendency in natural speech for power to increase with fundamental frequency can be closely simulated by the sum of the effect of the radiation characteristic and the effect of the synthesis system's vocal tract transfer function. In addition, this paper proposes a method for adjusting the power of synthetic speech to any desired value. This control method can be executed in real-time.

  • Small-Amplitude Bus Drive and Signal Transmission Technology for High-Speed Memory-CPU Bus Systems

    Tatsuo KOIZUMI  Seiichi SAITO  

     
    INVITED PAPER

      Vol:
    E76-C No:11
      Page(s):
    1582-1588

    Computing devices have reached data frequencies of 100 MHz, creating a need for small-amplitude, impedance-matched buses. We simulated the signal transmission characteristics of two basic driver circuits, push-pull and open-drain, for a synchronous DRAM I/O bus. The push-pull driver causes less signal distortion due to the parasitic inductance and capacitance of the packages, and thus has a higher frequency limit than the open-drain GTL type. We describe a bus system using push-pull drivers which operates at over 125 MHz. The bus line is 70 cm long with 8 I/O loads distributed along the line, each having 25 nH of parasitic inductance and 7 pF of parasitic capacitance.

  • On the Surface-Patch and Wire-Grid Modeling for Planar Antenna Mounted on Metal Housing

    Morteza ANALOUI  Yukio KAGAWA  

     
    PAPER-Antennas and Propagation

      Vol:
    E76-B No:11
      Page(s):
    1450-1455

    This paper is concerned with the numerical analysis of electromagnetic radiation from conducting surface structures. The method of moments is discussed with surface-patch modeling, in which the surface quantities, i.e., the current, charge and impedance, are introduced directly, and with wire-grid modeling, in which the surface quantities are approximated by filamentary traces. The numerical advantage of wire-grid modeling lies in the simplicity of its mathematics, which must be traded against uncertainties in the construction of the model. Surface-patch techniques are generally not only clear and straightforward but also more reliable than wire-grid modeling for the computation of the surface quantities. In this work, we present a comparative discussion of the two approaches through the analysis of a built-in planar antenna. For the purpose of the comparison, the same electric field integral equation and the same Galerkin procedure with linear expansion/testing functions are used for both the wire-grid and surface-patch models.

  • An Investigation on Space-Time Tradeoff of Routing Schemes in Large Computer Networks

    Kenji ISHIDA  

     
    PAPER

      Vol:
    E76-D No:11
      Page(s):
    1341-1347

    The space-time tradeoff is a fundamental issue in designing a fault-tolerant real-time (responsive) system. Routing a message in a large computer network is efficient when each node knows the full topology of the whole network; in hierarchical routing schemes, however, no node knows the full topology. In this paper, the tradeoff between optimality of path length (message delay: time) and the amount of topology information held in each node (routing table size: space) is examined. The schemes analyzed are the K-scheme (by Kamoun and Kleinrock), the G-scheme (by Garcia and Shacham), and the I-scheme (by the authors). The analysis is performed by simulation experiments. The results show that, with respect to average path length, the I-scheme is superior to both the K-scheme and the G-scheme, and that the K-scheme is better than the G-scheme. The average path length in the I-scheme is about 20% longer than the optimal path length. For routing table size, the three schemes are ranked in the reverse order. With respect to the order of the routing table size, however, the schemes have the same complexity, O(log n), where n is the number of nodes in the network.
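
    As a back-of-the-envelope illustration of the space side of this tradeoff (an idealized model, not the simulation experiments reported in the paper), the sketch below compares the routing table size of flat routing, one entry per destination, with an m-level Kleinrock-Kamoun-style hierarchy, where a node keeps entries only for clusters at each level and the table size is minimized near m ≈ ln n.

    ```python
    import math

    # Back-of-the-envelope comparison of routing table sizes (idealized model).

    def flat_table_size(n):
        """Flat routing: one entry per destination node."""
        return n - 1

    def hierarchical_table_size(n, levels):
        """Idealized m-level hierarchy with about n**(1/m) clusters per level:
        a node keeps roughly m * n**(1/m) entries (Kleinrock-Kamoun style)."""
        return levels * math.ceil(n ** (1.0 / levels))

    for n in (100, 1000, 10000):
        best_m = max(2, round(math.log(n)))      # table size is minimized near m ~ ln n
        print(f"n={n:6d}  flat={flat_table_size(n):6d}  "
              f"hier(m={best_m})={hierarchical_table_size(n, best_m):4d}")
    ```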

  • A Consensus-Based Model for Responsive Computing

    Miroslaw MALEK  

     
    INVITED PAPER

      Vol:
    E76-D No:11
      Page(s):
    1319-1324

    The emerging discipline of responsive systems demands fault-tolerant and real-time performance in uniprocessor, parallel, and distributed computing environments. A new proposal for a responsiveness measure is presented, followed by an introduction of a model for responsive computing. The model, called CONCORDS (CONsensus/COmputation for Responsive Distributed Systems), is based on the integration of various forms of consensus and computation (progress or recovery). The consensus tasks include clock synchronization, diagnosis, checkpointing, scheduling and resource allocation.

  • Should Responsive Systems be Event-Triggered or Time-Triggered?

    Hermann KOPETZ  

     
    INVITED PAPER

      Vol:
    E76-D No:11
      Page(s):
    1325-1332

    In this paper the two different paradigms for the design of responsive, i.e., distributed fault-tolerant real-time systems, the event-triggered (ET) approach and the time-triggered (TT) approach, are analyzed and compared. The comparison focuses on the temporal properties and considers the issues of predictability, testability, resource utilization, extensibility, and assumption coverage.

  • A Portable Text-to-Speech System Using a Pocket-Sized Formant Speech Synthesizer

    Norio HIGUCHI  Tohru SHIMIZU  Hisashi KAWAI  Seiichi YAMAMOTO  

     
    PAPER

      Vol:
    E76-A No:11
      Page(s):
    1981-1989

    The authors developed a portable Japanese text-to-speech system using a pocket-sized formant speech synthesizer. It consists of a linguistic processor and an acoustic processor. The linguistic processor runs on an MS-DOS personal computer and determines readings and prosodic information for input sentences written in kana-kanji-mixed style. New techniques, such as minimization of a cost function for phrases, a rare-compound flag, semantic information, reading-selection information, and restrictions imposed by associated particles, are used to increase the accuracy of readings and accent positions. The accuracy of determining readings and accent positions is 98.6% for sentences in newspaper articles. The linguistic processor can be used through an interface library that has also been developed by the authors. Consequently, it is possible not only to convert whole texts stored in text files but also to convert parts of sentences sent sequentially through the interface library, with the readings and prosodic information optimized over the whole sentence at one time. The acoustic processor is custom-made hardware that adopts new techniques for improving the rules for vowel devoicing, the control of phoneme durations, the control of the phrase components of the voice fundamental frequency, and the construction of the acoustic parameter database. Owing to these modifications, the naturalness of synthetic speech generated by a Klatt-type formant speech synthesizer was improved: on a naturalness test it was rated 3.61 on a 5-point scale from 0 to 4.

  • Exploiting Parallelism in Neural Networks on a Dynamic Data-Driven System

    Ali M. ALHAJ  Hiroaki TERADA  

     
    PAPER-Neural Networks

      Vol:
    E76-A No:10
      Page(s):
    1804-1811

    High speed simulation of neural networks can be achieved through parallel implementations capable of exploiting their massive inherent parallelism. In this paper, we show how this inherent parallelism can be effectively exploited on parallel data-driven systems. By using these systems, the asynchronous parallelism of neural networks can be naturally specified by the functional data-driven programs, and maximally exploited by pipelined and scalable data-driven processors. We shall demonstrate the suitability of data-driven systems for the parallel simulation of neural networks through a parallel implementation of the widely used back propagation networks. The implementation is based on the exploitation of the network and training set parallelisms inherent in these networks, and is evaluated using an image data compression network.
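
    Training-set parallelism rests on the fact that gradients computed on disjoint subsets of the training patterns are independent and simply sum; the sketch below demonstrates this for a tiny single-layer network (a generic illustration with made-up data, not the authors' data-driven implementation).

    ```python
    import numpy as np

    # Generic illustration of training-set parallelism: per-example gradients are
    # independent, so they can be computed on separate processors (here: separate
    # chunks of the data) and summed afterwards.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 8))          # 64 training patterns, 8 inputs
    T = rng.normal(size=(64, 1))          # targets
    W = rng.normal(size=(8, 1))           # single-layer weights (for brevity)

    def grad(Xc, Tc, W):
        """Gradient of 0.5*||Xc W - Tc||^2 over one chunk of the training set."""
        err = Xc @ W - Tc
        return Xc.T @ err

    full = grad(X, T, W)
    chunked = sum(grad(Xc, Tc, W) for Xc, Tc in zip(np.split(X, 4), np.split(T, 4)))
    print("max difference between serial and chunked gradients:",
          np.max(np.abs(full - chunked)))
    ```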

  • Preciseness of Discrete Time Verification

    Shinji KIMURA  Shunsuke TSUBOTA  Hiromasa HANEDA  

     
    PAPER

      Vol:
    E76-A No:10
      Page(s):
    1755-1759

    Discrete time analysis of logic circuits is usually more efficient than continuous time analysis, but the preciseness of the discrete time analysis is not guaranteed. This paper gives a method for deciding a unit time for a logic circuit under which the analysis result is the same as the result based on continuous time. The delay time of an element is specified as an interval between its minimum and maximum delay times, and we assume an analysis method which enumerates all possible delay cases under the discrete time. Our main theorem is as follows: refine the unit time by a factor of 1/2; if the analysis result with a unit time u and that with a unit time u/2 are the same, then u is the expected unit time.
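
    A hedged sketch of the halving test implied by the theorem (a toy two-path race example with hypothetical delay intervals, not the paper's general analysis): enumerate the delay values representable on a grid of size u within each element's interval, derive the qualitative result, repeat with u/2, and accept u once the two analyses agree.

    ```python
    import math

    # Toy illustration of the halving test (two competing paths, each with delay
    # interval [1.0, 2.0]; not the paper's general analysis procedure).

    def grid_delays(lo, hi, u):
        """Delay values representable on a grid of unit u within [lo, hi]."""
        k = math.ceil(lo / u)
        times = []
        while k * u <= hi + 1e-9:
            times.append(k * u)
            k += 1
        return times

    def analyze(u):
        """Qualitative result: which orderings of the two path delays are possible."""
        d1 = grid_delays(1.0, 2.0, u)
        d2 = grid_delays(1.0, 2.0, u)
        return frozenset("<" if a < b else "=" if a == b else ">" for a in d1 for b in d2)

    u = 4.0
    while analyze(u) != analyze(u / 2):   # refine until u and u/2 give the same result
        u /= 2
    print("sufficient unit time:", u, "possible orderings:", sorted(analyze(u)))
    ```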

  • A Composite Signal Detection Scheme in Additive and Signal-Dependent Noise

    Sangyoub KIM  Iickho SONG  Sun Yong KIM  

     
    PAPER-Information Theory and Coding Theory

      Vol:
    E76-A No:10
      Page(s):
    1790-1803

    When original signals are contaminated by both additive and signal-dependent noise components, the test statistics of the locally optimum detector are obtained for the detection of weak composite signals based on the generalized Neyman-Pearson lemma. In order to treat non-additive noise as well as purely additive noise, a generalized observation model is used in this paper. The locally optimum detector test statistics are derived for all the cases arising from the relative strengths of the known signal, random signal, and signal-dependent noise components. Schematic diagrams of the structures of the locally optimum detector are also included. The finite-sample-size performance characteristics of the locally optimum detector are compared with those of other common detectors.
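
    For the purely additive, known-signal special case, the locally optimum test statistic takes the standard textbook form of a correlator preceded by the nonlinearity g(x) = -f'(x)/f(x) determined by the noise density f; the sketch below evaluates this for a hypothetical Laplacian density (it is not the paper's generalized signal-dependent-noise derivation).

    ```python
    import numpy as np

    # Standard locally optimum (LO) detector for a weak known signal in purely
    # additive i.i.d. noise (textbook special case):
    #   T_LO(x) = sum_i s_i * g(x_i),  with  g(x) = -f'(x) / f(x).

    def g_lo(x, f, df):
        return -df(x) / f(x)

    # Hypothetical Laplacian noise density f(x) = 0.5*exp(-|x|):
    f  = lambda x: 0.5 * np.exp(-np.abs(x))
    df = lambda x: -np.sign(x) * 0.5 * np.exp(-np.abs(x))

    rng = np.random.default_rng(1)
    s = np.sin(2 * np.pi * np.arange(32) / 8)          # known signal shape
    x = 0.1 * s + rng.laplace(size=32)                 # weak signal plus noise
    T = np.sum(s * g_lo(x, f, df))                     # compare T against a threshold
    print("LO statistic:", T, "(g reduces to sign(x) for Laplacian noise)")
    ```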

  • Compact Test Sequences for Scan-Based Sequential Circuits

    Hiroyuki HIGUCHI  Kiyoharu HAMAGUCHI  Shuzo YAJIMA  

     
    PAPER

      Vol:
    E76-A No:10
      Page(s):
    1676-1683

    Full scan design of sequential circuits greatly reduces the cost of test generation. However, it introduces the extra expense of many test clocks to control and observe the values of flip-flops, because of the need to shift values for the flip-flops into the scan path. In this paper we propose a new method of generating compact test sequences for scan-based sequential circuits on the assumption that the number of shift clocks is allowed to vary for each test vector. The method is based on Boolean function manipulation using a shared binary decision diagram (SBDD). Although the test generation algorithm is basically for general sequential circuits, the computational cost is much lower for scan-based sequential circuits than for non-scan-based sequential circuits because the length of a test sequence for each fault is limited. Experimental results show that, for all the tested circuits, the test sequences generated by the method require a much smaller number of test clocks than compact or minimum test sets for the combinational logic part of scan-based sequential circuits. The reduction rate was 48% on average in the experiments.

  • An ASIP Instruction Set Optimization Algorithm with Functional Module Sharing Constraint

    Alauddin Y. ALOMARY  Masaharu IMAI  Nobuyuki HIKICHI  

     
    PAPER

      Vol:
    E76-A No:10
      Page(s):
    1713-1720

    One of the most interesting and most analyzed aspects of CPU design is the instruction set design. How many and which operations are to be provided by hardware is one of the most fundamental issues relating to the instruction set design. This paper describes a novel method that formulates the instruction set design of an ASIP (Application Specific Integrated Processor) as a combinatorial problem. Starting with the whole set of all possible candidate instructions that represent a given application domain, this approach selects a subset that maximizes the performance under the constraints of chip area, power consumption, and the functional-module-sharing relation among operations. This leads to an efficient implementation of the selected instructions. A branch-and-bound algorithm is used to solve this combinatorial optimization problem. This approach selects the most important instructions for a given application as well as optimizing the hardware resources that implement the selected instructions. It also enables designers to predict the performance of their design before implementing it, which is an important feature for producing a quality design in a reasonable time.
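
    As a rough illustration of the combinatorial formulation (made-up instruction data, a simple area budget, and no modeling of the paper's functional-module-sharing constraint), the sketch below uses a small branch-and-bound search to select the subset of candidate instructions that maximizes an estimated performance gain.

    ```python
    # Toy branch-and-bound for instruction-subset selection (illustration only:
    # hypothetical gain/area numbers, no module-sharing constraint modeled).

    # (name, performance gain, area cost) -- made-up candidate instructions
    CANDIDATES = [("mac", 9.0, 5.0), ("shift_add", 6.0, 3.0),
                  ("abs", 3.0, 2.0), ("min_max", 4.0, 2.5), ("clz", 2.0, 1.5)]
    AREA_BUDGET = 7.0

    best = {"gain": 0.0, "subset": ()}

    def bound(i, gain):
        """Optimistic bound: add all remaining gains, ignoring their area."""
        return gain + sum(g for _, g, _ in CANDIDATES[i:])

    def search(i=0, gain=0.0, area=0.0, chosen=()):
        if area > AREA_BUDGET or bound(i, gain) <= best["gain"]:
            return                                  # prune infeasible or dominated branches
        if gain > best["gain"]:
            best["gain"], best["subset"] = gain, chosen
        if i == len(CANDIDATES):
            return
        name, g, a = CANDIDATES[i]
        search(i + 1, gain + g, area + a, chosen + (name,))   # include instruction i
        search(i + 1, gain, area, chosen)                     # exclude instruction i

    search()
    print("selected instructions:", best["subset"], "estimated gain:", best["gain"])
    ```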

  • A Note on Leaf Reduction Theorem for Reversal- and Leaf-Bounded Alternating Turing Machines

    Hiroaki YAMAMOTO  Takashi MIYAZAKI  

     
    LETTER-Automaton, Language and Theory of Computing

      Vol:
    E76-D No:10
      Page(s):
    1298-1301

    There have been several studies on reducing the amount of computational resources used by Turing machines. As consequences, the "linear speed-up theorem", the "tape compression theorem", and the "reversal reduction theorem" have been obtained. In this paper, we consider reversal- and leaf-bounded alternating Turing machines, and then show that the number of leaves can be reduced by a constant factor without increasing the number of reversals. Thus our results say that a constant factor on the leaf complexity does not affect the power of reversal- and leaf-bounded alternating Turing machines.

  • Interval Properties of Lattice Allpass Filters with Applications

    Saed SAMADI  Akinori NISHIHARA  Nobuo FUJII  

     
    PAPER-Digital Signal Processing

      Vol:
    E76-A No:10
      Page(s):
    1775-1780

    In practical applications of digital filters it is more realistic to treat multiplier coefficients as finite intervals than to restrict them to infinite or very long word-length representations. However, this cannot be done if the frequency response under the interval assumption is difficult to analyze. In this paper, it is proved that stable lattice allpass filters possess a bounded continuous phase response when the lattice parameters vary in bounded intervals. It is shown that sharp bounds on the interval phase response can be computed easily at an arbitrary frequency using a simple recursive procedure. Application of this property to the problem of finite word-length lattice allpass filter design is also discussed. By formulating this problem as an interval design, it can be solved efficiently, independently of the number system used to represent the multiplier coefficients.
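
    A naive way to probe the interval behavior described here is to evaluate the allpass phase at the corner combinations of the coefficient intervals; the sketch below does exactly that (hypothetical intervals, and a corner check rather than the sharp recursive bound derived in the paper), converting lattice coefficients to a direct-form denominator with the standard step-up recursion.

    ```python
    import numpy as np
    from itertools import product

    # Naive corner check of the phase response of a 2nd-order lattice allpass
    # filter whose lattice coefficients lie in (hypothetical) intervals.

    def step_up(ks):
        """Convert lattice (reflection) coefficients to a direct-form denominator."""
        a = np.array([1.0])
        for k in ks:
            a_ext = np.concatenate([a, [0.0]])
            a = a_ext + k * a_ext[::-1]
        return a

    def allpass_phase(ks, w):
        """Phase of H(z) = z^{-N} A(1/z) / A(z) on the frequency grid w."""
        a = step_up(ks)
        n = np.arange(len(a))
        E = np.exp(-1j * np.outer(w, n))
        A = E @ a
        B = E @ a[::-1]            # numerator: reversed denominator coefficients
        return np.unwrap(np.angle(B / A))

    intervals = [(0.30, 0.34), (-0.52, -0.48)]     # hypothetical coefficient intervals
    w = np.linspace(0.0, np.pi, 257)
    phases = np.array([allpass_phase(ks, w) for ks in product(*intervals)])
    spread = phases.max(axis=0) - phases.min(axis=0)
    print("largest phase spread over the corner filters: %.4f rad" % spread.max())
    ```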

  • Theory and Techniques for Testing Check Bits of RAMs with On-Chip ECC

    Manoj FRANKLIN  Kewal K. SALUJA  

     
    PAPER-Fault Tolerant Computing

      Vol:
    E76-D No:10
      Page(s):
    1243-1252

    As RAMs become denser, their reliability decreases because of complex interactions between memory cells and soft errors due to alpha-particle radiation. To address this problem, RAM manufacturers have started incorporating on-chip (built-in) ECC. In order to minimize the area overhead of on-chip ECC, the same technology is used for implementing the check bits and the information bits. Thus the check bits are exposed to the same failure modes as the information bits, and faults in the check bits will manifest as uncorrectable multiple errors when a soft error occurs. It is therefore important to test the check bits for all failure modes expected of the other cells. In this paper, we formulate the problem of testing RAMs with on-chip ECC capability. We then derive necessary and sufficient conditions for testing the check bits for arbitrary and adjacent neighborhood pattern sensitive faults. We also provide an efficient solution to test a memory array of N bits (including check bits) for 5-cell neighborhood pattern sensitive faults in O(N) reads and writes, with the check bits also tested for the same fault classes as the information bits.
