
Keyword Search Result

[Keyword] form(3161hit)

3041-3060hit(3161hit)

  • Optical Parallel Interconnection Based on Group Multiplexing and Coding Technique

    Tetsuo HORIMATSU  Nobuhiro FUJIMOTO  Kiyohide WAKAO  Mitsuhiro YANO  

     
    PAPER

      Vol:
    E77-C No:1
      Page(s):
    35-41

A transmission data format for high-speed optical parallel interconnections is proposed, and a 4-channel transmitter and receiver link module operating at up to 1.2 Gb/s per channel is demonstrated. The data format features "Group Multiplexing and Coding." In this scheme, several tens of input channels are multiplexed and coded in groups into a reduced number of channels, resulting in burst-mode-compatible, skew-free transmission and low power consumption of the link module. Experiments with fabricated modules confirm that our data coding in multichannel optical transmission is promising for high-speed interconnections in information and switching systems.

  • On a High-Ranking Node of B-ISDN

    Chung-Ju CHANG  Po-Chou LIN  Jia-Ming CHEN  

     
    PAPER-Communication Theory

      Vol:
    E77-B No:1
      Page(s):
    43-50

The paper studies a high-ranking node in a broadband integrated services digital network (B-ISDN). The input traffic is classified into two types: real-time and non-real-time. For each type of input traffic, we assume that the message arrival process is a batch Poisson process and that the message size is arbitrarily distributed, so as to describe services from narrowband to wideband. We model the high-ranking node by a queueing system with multiple synchronous servers and two separate finite buffers, one for each type of traffic. We derive performance measures exactly by using a two-dimensional imbedded discrete-time Markov chain analysis, within which the transition probabilities are obtained via an application of the residue theorem in complex variables. The performance measures include the blocking probability, delay, and throughput.

  • A Factored Reliability Formula for Directed Source-to-All-Terminal Networks

    Yoichi HIGASHIYAMA  Hiromu ARIYOSHI  Isao SHIRAKAWA  Shogo OHBA  

     
    PAPER-System Reliability

      Vol:
    E77-A No:1
      Page(s):
    134-143

In a probabilistic graph (network), source-to-all-terminal (SAT) reliability may be defined as the probability that there exists at least one path consisting only of successful arcs from the source vertex s to every other vertex. In this paper, we define an optimal SAT reliability formula as one with a minimal number of literals or operators. First, this paper describes an arc-reduction (open- or short-circuiting) method for obtaining a factored formula of a directed graph. Next, we discuss a simple strategy for obtaining an optimal formula as a product of the reliability formulas of vertex-section graphs, each of which contains a distinct strongly connected component of the given graph. This method reduces the computing cost and data-processing effort required to generate the optimal factored formula, which contains no identical product terms.
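The arc-reduction idea can be illustrated with a short sketch: pivot on one arc, assume it works (short-circuit) with probability p or fails (open-circuit) with probability 1-p, and recurse. The function below is a naive illustration of this factoring principle only, not the paper's optimized formula-generation method; the function name and the small-graph examples are our own.

```python
def sat_reliability(n, arcs, s=0):
    """Source-to-all-terminal reliability of a directed graph by arc
    factoring (open-/short-circuiting).  Exponential in the number of
    arcs; a sketch of the principle, not an optimized implementation.

    n    : number of vertices, labeled 0..n-1
    arcs : list of (u, v, p) -- arc u->v works with probability p
    """
    def reachable(sure):
        # vertices reachable from s using only arcs assumed working
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for (a, b) in sure:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

    def solve(arcs, sure):
        if len(reachable(sure)) == n:   # every vertex already reached
            return 1.0
        if not arcs:                    # no undecided arcs left
            return 0.0
        (u, v, p), rest = arcs[0], arcs[1:]
        # factor on the first undecided arc:
        #   works (prob p)   -> short-circuit: treat the arc as perfect
        #   fails (prob 1-p) -> open-circuit : drop the arc
        return p * solve(rest, sure | {(u, v)}) + (1 - p) * solve(rest, sure)

    return solve(list(arcs), frozenset())
```

For example, two parallel arcs 0->1 of reliability 0.5 give SAT reliability 1 - 0.5^2 = 0.75, and a two-arc chain 0->1->2 of per-arc reliability 0.9 gives 0.81.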

  • Preventive Replacement Policies and Their Application to Weibull Distribution

    Michio HORIGOME  Yoshito KAWASAKI  Qin Qin CHEN  

     
    LETTER-Maintainability

      Vol:
    E77-A No:1
      Page(s):
    240-243

This letter deals with the reliability function under periodic preventive replacement of items intended to increase the MTBF, considering two replacement policies: strictly periodic replacement (SPR) and randomly periodic replacement (RPR). We emphasize a simple introduction to reliability theory under preventive replacement policies using the Laplace transform, and obtain theoretical results for SPR and RPR. These results are then applied to the Weibull distribution and, finally, to provide useful information on preventive replacement, numerical results for SPR are presented.
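As an illustration of applying such results to the Weibull distribution, the sketch below computes the mean time between failures under strictly periodic replacement using the standard age-replacement expression MTBF = (integral of R(t) from 0 to T) / (1 - R(T)). This is a generic textbook formula, not necessarily the letter's exact derivation; the parameter names are the usual Weibull shape (beta) and scale (eta).

```python
import math

def weibull_R(t, beta, eta):
    """Weibull reliability (survivor) function R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-(t / eta) ** beta)

def mtbf_spr(T, beta, eta, steps=100000):
    """MTBF under strictly periodic replacement every T hours, via the
    standard age-replacement result MTBF = int_0^T R(t) dt / (1 - R(T)).
    The integral is evaluated numerically with the trapezoidal rule."""
    h = T / steps
    integral = sum(weibull_R(i * h, beta, eta) for i in range(1, steps)) * h
    integral += (weibull_R(0, beta, eta) + weibull_R(T, beta, eta)) * h / 2
    return integral / (1 - weibull_R(T, beta, eta))
```

For beta = 1 (exponential failures, no wear-out), replacement does not change the MTBF, whereas for beta > 1 more frequent replacement raises it, which is the qualitative message of such numerical results.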

  • Identity-Based Non-interactive Key Sharing

    Hatsukazu TANAKA  

     
    PAPER

      Vol:
    E77-A No:1
      Page(s):
    20-23

In this paper an identity-based non-interactive key sharing scheme (IDNIKS) is proposed in order to realize the original concept of the identity-based cryptosystem, for which no secure realization scheme has previously been proposed. First, the necessary conditions for a secure realization of IDNIKS are considered from two different points of view: (i) the possibility of sharing a common key non-interactively and (ii) the security against entities' conspiracy. Then a new non-interactive key sharing scheme is proposed, whose security depends on the difficulty of factoring. The most important contribution is to have succeeded in obtaining any entity's secret information as an exponent of the obtainer's identity information. The security of IDNIKS against entities' conspiracy is also considered in detail.

  • Reforming the National Research Institutions in Japan

    Nobuyoshi FUGONO  

     
    INVITED PAPER

      Vol:
    E77-B No:1
      Page(s):
    1-4

It is recognized in Japan that reformation of the national research institutions is urgently necessary. The present situation and its constraints are described, and action items are discussed.

  • A Superresolution Technique for Antenna Pattern Measurements

    Yasutaka OGAWA  Teruaki NAKAJIMA  Hiroyoshi YAMADA  Kiyohiko ITOH  

     
    PAPER

      Vol:
    E76-B No:12
      Page(s):
    1532-1537

A new superresolution technique is proposed for antenna pattern measurements. Unwanted reflected signals often impinge on the antenna when it is measured outdoors. A time-domain superresolution technique (the MUSIC algorithm) has been proposed to eliminate the unwanted signals for a narrow-passband antenna. The MUSIC algorithm needs many snapshots to obtain a correlation matrix, which is undesirable for antenna pattern measurements because acquiring the data takes a long time. In this paper, we propose reducing the noise component (a stochastic quantity) using FFT and gating techniques before applying MUSIC. The new technique needs only a few snapshots and thus shortens the measurement time.
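The gating idea can be sketched as follows: transform the measured frequency response into a (pseudo) time domain, zero everything outside a gate around the direct path, and transform back. This toy version (naive DFT, hand-picked gate indices) only illustrates the preprocessing principle, not the paper's full procedure.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) DFT -- fine for the small record lengths sketched here."""
    N = len(x)
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

def time_gate(freq_response, gate):
    """Suppress late-arriving reflections: transform the measured frequency
    response to the time domain, keep only the samples whose index is in
    `gate`, and transform back."""
    h = dft(freq_response, inverse=True)           # impulse-response estimate
    h = [v if i in gate else 0.0 for i, v in enumerate(h)]
    return dft(h)                                   # gated frequency response
```

For instance, a response consisting of a direct path plus an echo at delay 5 samples, gated to the first few time bins, returns the flat direct-path response.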

  • A Two-Cascaded Filtering Method for the Enhancement of X-Ray CT Image

    Shanjun ZHANG  Toshio KAWASIMA  Yoshinao AOKI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E76-D No:12
      Page(s):
    1500-1509

A two-cascaded image processing approach to enhance subtle differences in X-ray CT images is proposed. In the method, an asymmetrical non-linear subfilter is introduced to reduce the noise inherent in the image while preserving local edges and directional structural information. Then, a second subfilter is used to compress the global dynamic range of the image and emphasize details in the homogeneous regions by performing a modular transformation on local image densities. The modular transformation is based on a dynamically defined contrast factor and the histogram distributions of the image. The local contrast factor is described in accordance with Weber's fraction by a two-layer neighborhood system in which the relative variances of the medians for eight directions are computed. This method is suitable for low-contrast images with wide dynamic ranges. Experiments on X-ray CT images of the head show the validity of the method.

  • Generating a Binary Markov Chain by a Discrete-Valued Auto-Regressive Equation

    Junichi NAKAYAMA  Hiroya MOTOYAMA  

     
    LETTER-Digital Signal Processing

      Vol:
    E76-A No:12
      Page(s):
    2114-2118

This paper gives a systematic approach to generating a Markov chain by a discrete-valued auto-regressive equation, that is, a nonlinear auto-regressive equation having a discrete-valued solution. The power spectrum, the correlation function and the transition probability are obtained explicitly in terms of the discrete-valued auto-regressive equation. Some computer results are illustrated in figures.
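One simple discrete-valued recursion of this kind (our own illustrative choice, not necessarily the paper's equation) is the multiplicative form X[k+1] = W[k] * X[k] on the alphabet {-1, +1}, which yields a binary Markov chain whose correlation function is (2q-1)^m:

```python
import random

def binary_markov_chain(n, q, seed=0):
    """Generate a {-1,+1}-valued Markov chain from the discrete-valued
    recursion  X[k+1] = W[k] * X[k],  where W[k] = +1 with probability q
    and -1 otherwise.  The chain stays in its current state with
    probability q, and E[X[k] X[k+m]] = (2q - 1)^m."""
    rng = random.Random(seed)
    x, chain = 1, []
    for _ in range(n):
        chain.append(x)
        w = 1 if rng.random() < q else -1
        x = w * x
    return chain
```

Simulating a long chain and measuring the empirical stay probability confirms the transition structure.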

  • In-Vehicle Information Systems and Semiconductor Devices They Employ

    Takeshi INOUE  Kikuo MURAMATSU  

     
    INVITED PAPER

      Vol:
    E76-C No:12
      Page(s):
    1744-1755

It was more than 10 years ago that the first map navigation system, an early example of an in-vehicle information system, appeared on the market in Japan. Today's navigation systems have improved to the point that the latest system has 10 micro-processors, 7 MBytes of memory, and 4 GBytes of external data storage for the map database. From the viewpoint of the automobile driver, there are still some problems with such systems. The major problems are the lack of traffic information, the need for a better human interface, and the need for cost reduction. The introduction of application-specific ICs (ASICs) is expected to make systems smaller, less expensive, and faster in response. Today's in-vehicle information systems are reviewed function by function to discover what functions need to be implemented in ASICs for future systems, what ASICs will be required, and what technology has to be developed. It is concluded that further integration technology is needed, including high-performance CPUs, large-capacity memories, interface circuits, and some analog circuits such as D/A converters. To develop this technology, major problems such as power consumption and the number of input/output signals, as well as design aids and process technology, are pointed out.

  • ECG Data Compression by Using Wavelet Transform

    Jie CHEN  Shuichi ITOH  Takeshi HASHIMOTO  

     
    PAPER

      Vol:
    E76-D No:12
      Page(s):
    1454-1461

A new method for the compression of electrocardiographic (ECG) data is presented. The method is based on the orthonormal wavelet analysis recently developed in applied mathematics. Using the wavelet transform, the original signal is decomposed into a set of sub-signals with different frequency channels corresponding to different physical features of the signal. Using an optimum bit allocation scheme, each decomposed sub-signal is treated according to its contribution to the total reconstruction distortion and to the bit rate. In our experiments, compression ratios (CR) from 13.5:1 to 22.9:1, with corresponding percent rms differences (PRD) between 5.5% and 13.3%, have been obtained at a clinically acceptable signal quality. Experimental results show that the proposed method is well suited to ECG data compression in terms of high compression ratio and high speed.
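The decomposition step can be sketched with the simplest orthonormal wavelet, the Haar wavelet; zeroing small detail coefficients below is only a crude stand-in for the paper's optimum bit allocation scheme.

```python
def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform.
    x must have even length; returns (approximation, detail)."""
    s = 2 ** -0.5
    approx = [(a + b) * s for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) * s, (a - d) * s]
    return x

def compress(x, thresh):
    """Crude compression sketch: drop detail coefficients below thresh."""
    a, d = haar_forward(x)
    return a, [v if abs(v) >= thresh else 0.0 for v in d]
```

Without thresholding the transform is perfectly invertible, which is what makes the orthonormal decomposition a safe basis for bit allocation.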

  • Efficient Application of Coding Technique for Data Compression of ECG

    Susumu TSUDA  Koichi SHIMIZU  Goro MATSUMOTO  

     
    PAPER

      Vol:
    E76-D No:12
      Page(s):
    1425-1433

A technique was developed to compress ECG data efficiently within a controlled accuracy. The sampled and digitized data of the original ECG waveform are transformed in three major processes: the calculation of a beat-to-beat variation, a polygonal approximation, and the calculation of the difference between consecutive node points. Then, an adaptive coding technique is applied to minimize redundancies in the data. It was demonstrated that an ECG waveform sampled at 200 Hz, 10 bit/sample, 5 µV/digit could be reduced with a bit reduction ratio of about 10% and a reconstruction error of about 2.5%. A polygonal approximation method, called MSAPA, was newly developed as a modification of the well-known method SAPA. It was shown that MSAPA gives better reduction efficiency and smaller reconstruction error than SAPA when applied to the beat-to-beat variation waveform. The importance of low-pass filtering as preprocessing for the polygonal approximation was confirmed with concrete examples. The efficiency of the proposed technique was compared with the case in which the polygonal approximation was not used. Through these analyses, it was found that the redundancy elimination of the coding technique works effectively in the proposed technique.
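A generic error-bounded polygonal approximation in the spirit of SAPA can be sketched as follows; it is deliberately naive (O(n^2) worst case) and is not the paper's MSAPA, which differs in detail.

```python
def polygonal_approx(samples, eps):
    """Greedy error-bounded polygonal approximation: extend each segment
    as far as the straight line (chord) from the last node stays within
    eps of every intermediate sample.  Returns indices of the retained
    node points."""
    n = len(samples)
    nodes, i0, i = [0], 0, 1
    while i < n - 1:
        j = i + 1
        # does the chord i0 -> j stay within eps of samples i0..j ?
        slope = (samples[j] - samples[i0]) / (j - i0)
        ok = all(abs(samples[i0] + slope * (k - i0) - samples[k]) <= eps
                 for k in range(i0 + 1, j))
        if ok:
            i = j                # extend the current segment
        else:
            nodes.append(i)      # close the segment at the last valid point
            i0, i = i, i + 1
    nodes.append(n - 1)
    return nodes
```

A straight ramp collapses to its two endpoints, while a triangle wave keeps its peak as a node, which is exactly the behavior the difference-coding stage then exploits.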

  • Data Compression of a Gaussian Signal by TP Algorithm and Its Application to the ECG

    Kosuke KATO  Shunsuke SATO  

     
    PAPER

      Vol:
    E76-D No:12
      Page(s):
    1470-1478

In the present paper, we focus on the turning point (TP) algorithm proposed by Mueller and evaluate its performance when applied to a Gaussian signal with a definite covariance function. The ECG wave is then modeled by Gaussian signals: the ECG is divided into two segments, the baseline segment and the QRS segment. The baseline segment is modeled by a Gaussian signal with a Butterworth spectrum and the QRS segment by a narrow-band Gaussian signal. The performance of the TP algorithm is evaluated and compared when it is applied to a real ECG signal and to its Gaussian model. The compression rate (CR) and the normalized mean square error (NMSE) are used as measures of performance. These measures show good coincidence with each other when applied to Gaussian signals with the mentioned spectra. Our results suggest that performance evaluation of compression algorithms based on a stochastic-process model of ECG waves can be effective.
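The TP algorithm itself is compact enough to sketch: from the last saved sample and each incoming pair, save the first sample of the pair if it is a turning point (local extremum), otherwise save the second, giving a fixed 2:1 compression. This is our reading of the standard algorithm, stated here as an illustration.

```python
def turning_point(samples):
    """Mueller's turning-point (TP) algorithm: fixed 2:1 compression
    that preserves local peaks and valleys.  From the last saved sample
    x0 and the incoming pair (x1, x2), save x1 if the slope changes
    sign at x1, otherwise save x2."""
    def sign(v):
        return (v > 0) - (v < 0)
    out = [samples[0]]
    i = 1
    while i + 1 < len(samples):
        x0, x1, x2 = out[-1], samples[i], samples[i + 1]
        if sign(x1 - x0) * sign(x2 - x1) < 0:   # x1 is a peak or valley
            out.append(x1)
        else:
            out.append(x2)
        i += 2
    return out
```

Note how an isolated spike such as the QRS peak survives compression even though every second sample is dropped.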

  • Technological Trends and Key Technologies in Intelligent Vehicles

    Takao SASAYAMA  

     
    INVITED PAPER

      Vol:
    E76-C No:12
      Page(s):
    1717-1726

The technical trends of intelligent vehicles are discussed based on the progress of microelectronics, sensing, and information processing technology. The concept of intelligent vehicles emerged when the installation of computers in vehicles became possible in the 1970s. The functions of computerized cars increased gradually with this technological progress, responding to the demands of society. The first issues to be addressed with the capability of electronic systems were environmental and energy-resource problems. The R&D work toward these goals created many sophisticated computer control systems. Moreover, this work established the basis of intelligent vehicles, which contain various functions for drivability, safety, and information communications. On the other hand, many kinds of information and communication technology became useful for solving automobile-related issues through infrastructure systems. The United States, Europe, and Japan have each started their own projects to realize such hierarchical management systems for traffic and vehicles. From the viewpoint of the vehicle itself, implementing computer and telecommunication functions in vehicles is an indispensable condition and direction for establishing clean, comfortable, convenient, efficient, and safe automobiles toward the next century.

  • A Model for Explaining a Phenomenon in Creative Concept Formation

    Koichi HORI  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E76-D No:12
      Page(s):
    1521-1527

This paper gives a model to explain one phenomenon found in the process of creative concept formation: that people often get trapped in a state where the mental world remains nebulous and then sometimes suddenly make a jump to a new concept. This phenomenon has been explained qualitatively, mainly by philosophers, but there have been no models that explain it quantitatively. Such a model is necessary in a new research field studying systems for aiding human creative activities. So far, work on creation aid has lacked a theoretical background, and such systems have been built based only on trial and error. The model given in this paper explains some aspects of the phenomena found in creative activities and gives some suggestions for future systems for aiding creative concept formation.

  • A Hybrid-ARQ Protocol with Adaptive Rate Error Control

    Hui ZHAO  Toru SATO  Iwane KIMURA  

     
    PAPER-Information Theory and Coding Theory

      Vol:
    E76-A No:12
      Page(s):
    2095-2101

This paper presents an adaptive-rate error control scheme for digital communication over time-varying channels. A cyclic code with majority-logic decoding is used in a cascaded way as an inner code to create a simple and powerful hybrid-ARQ error control scheme. The inner code is used only for error correction, and the outer code is used for both error correction and error detection. When an error is detected, retransmission is requested. Unsuccessful packets are not discarded as in conventional schemes, but are combined with their retransmitted copies. Approximations for the throughput efficiency and the undetectable error probability are given. High reliability coupled with a simple high-speed implementation makes the scheme suitable for high-data-rate error control over both stationary and nonstationary channels. Adaptive error control becomes the best solution for time-varying channels when the optimum code is selected according to the actual channel conditions to enhance system performance. The main feature of this system is that the basic structure of the encoder and decoder need not be modified while the error-correction capability of the code increases. Results of a comparative analysis show that the proposed scheme outperforms other similar ARQ protocols.
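The benefit of keeping unsuccessful packets can be conveyed with a per-bit majority vote across received copies. The paper combines copies within a majority-logic-decodable cyclic code, so this bitwise sketch only illustrates the intuition that combining noisy retransmissions beats decoding each copy alone.

```python
def majority_combine(copies):
    """Combine received copies of the same packet by per-bit majority
    vote.  copies: list of equal-length bit lists (0/1).  A bit is
    decided 1 when more than half of the copies carry a 1."""
    n = len(copies)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*copies)]
```

Three noisy receptions of the packet 1011, each with one bit error in a different position, still combine to the correct packet.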

  • High Quality Speech Synthesis System Based on Waveform Concatenation of Phoneme Segment

    Tomohisa HIROKAWA  Kenzo ITOH  Hirokazu SATO  

     
    PAPER

      Vol:
    E76-A No:11
      Page(s):
    1964-1970

A new system for speech synthesis by concatenating waveforms selected from a dictionary is described. The dictionary is constructed from two hours of speech, including isolated words and sentences uttered by one male speaker, and contains over 45,000 entries which are indexed by their average pitch, a dynamic pitch parameter representing the micro pitch structure within a segment, duration, and average amplitude. Phoneme duration is set according to the phoneme environment, and phoneme power is controlled by both pitch frequency and phoneme environment. Tests show the average errors in vowel duration and consonant duration are 28.8 ms and 16.8 ms respectively, and the average vowel power error is 2.9 dB. The pitch frequency patterns are calculated according to a conventional model in which the accent component is added to a gross phrase component. Given a phoneme string and prosody information, the optimum waveforms are selected from the dictionary by matching their attributes with the given phonetic and prosodic information. A waveform selection function is proposed, with two terms corresponding to prosodic and phonological coincidence between rule-set values and waveform values from the dictionary. The weight coefficients used in the selection function are determined through subjective listening tests. The selected waveform segments are then modified in the waveform domain to further adjust them to the desired prosody. A pitch frequency modification method based on the pitch-synchronous overlap-add technique is introduced into the system. Lastly, the waveforms are interpolated between voiced waveforms to avoid abrupt changes in voice spectrum and waveform shape. A five-grade absolute evaluation test of the synthesized voice yielded a mean score of 3.1, which is above "good," while the original speaker's voice quality is retained.

  • Manifestation of Linguistic Information in the Voice Fundamental Frequency Contours of Spoken Japanese

    Hiroya FUJISAKI  Keikichi HIROSE  Noboru TAKAHASHI  

     
    PAPER

      Vol:
    E76-A No:11
      Page(s):
    1919-1926

Prosodic features of spoken Japanese play an important role in the transmission of linguistic information concerning lexical word accent, sentence structure and discourse structure. In order to construct prosodic rules for synthesizing high-quality speech, prosodic features of speech should therefore be quantitatively analyzed with respect to the linguistic information. With a special focus on the fundamental frequency contour, we first define four prosodic units for spoken Japanese, viz., the prosodic word, prosodic phrase, prosodic clause and prosodic sentence, based on a decomposition of the fundamental frequency contour using a functional model for the generation process. Syntactic units are also introduced which correspond roughly to these prosodic units. The relationships between the linguistic information and the characteristics of the components of the fundamental frequency contour are then described on the basis of results obtained by analyzing two sets of speech material. Analysis of weathercast and newscast sentences showed that the prosodic boundaries given by the manner of continuation/termination of phrase components fall into three categories and are primarily related to syntactic boundaries. On the other hand, analysis of noun phrases with various combinations of word accent types, syntactic structures, and focal conditions indicated that the magnitude and shape of the accent components, which of course reflect the lexical accent types of the constituent words, are largely influenced by the focal structure. The results also indicated that there are cases where prosody fails to meet all the requirements presented by word accent, syntax and discourse.

  • A Portable Text-to-Speech System Using a Pocket-Sized Formant Speech Synthesizer

    Norio HIGUCHI  Tohru SHIMIZU  Hisashi KAWAI  Seiichi YAMAMOTO  

     
    PAPER

      Vol:
    E76-A No:11
      Page(s):
    1981-1989

The authors developed a portable Japanese text-to-speech system using a pocket-sized formant speech synthesizer. It consists of a linguistic processor and an acoustic processor. The linguistic processor runs on an MS-DOS personal computer and determines readings and prosodic information for input sentences written in kana-kanji-mixed style. New techniques, such as minimization of a cost function for phrases, a rare-compound flag, semantic information, reading-selection information and restrictions by associated particles, are used to increase the accuracy of readings and accent positions. The accuracy of determining readings and accent positions is 98.6% for sentences in newspaper articles. The linguistic processor can be used through an interface library, also developed by the authors. Consequently, it is possible not only to convert whole texts stored in text files but also to convert parts of sentences sent through the interface library sequentially, with the readings and prosodic information optimized over the whole sentence at one time. The acoustic processor is custom-made hardware that adopts new techniques for improved vowel-devoicing rules, control of phoneme durations, control of the phrase components of the voice fundamental frequency, and the construction of the acoustic parameter database. Owing to these modifications, the naturalness of synthetic speech generated by a Klatt-type formant speech synthesizer was improved: on a naturalness test it was rated 3.61 on a five-point scale from 0 to 4.

  • PDM: Petri Net Based Development Methodology for Distributed Systems

    Mikio AOYAMA  

     
    INVITED PAPER

      Vol:
    E76-A No:10
      Page(s):
    1567-1579

This article discusses PDM (Petri net based Development Methodology), which integrates approaches, modeling methods, design methods and analysis methods in a coherent manner. Although various development techniques based on Petri nets have demonstrated advantages over conventional techniques, those techniques are rather ad hoc and lack an overall picture of the entire development process. PDM is intended to provide a reference process model for developing distributed systems with various Petri net based development methods. The behavioral properties of distributed systems are an appropriate application domain for PDM.
