
Keyword Search Result

[Keyword] FA (3430 hits)

3281-3300 of 3430 hits

  • Radio Holographic Metrology with Best-Fit Panel Model of the Nobeyama 45-m Telescope

    Hiroyuki DEGUCHI  Masanori MASUDA  Takashi EBISUI  Yutaka SHIMAWAKI  Nobuharu UKITA  Katsunori M. SHIBATA  Masato ISHIGURO  

     
    PAPER

      Vol:
    E76-B No:12
      Page(s):
    1492-1499

    A best-fit panel model for radio holographic metrology that takes into account the locations and sizes of the actual surface panels in a large reflector antenna is presented. The displacement and tilt of each panel can be estimated by introducing the best-fit panel model. It was confirmed by simulations that a distinction can be drawn between a continuous surface error and a discontinuous one. Errors due to truncation of the radiation pattern were also calculated by simulations, and it was found that a measurement of a 128×128 map is optimum for the 45-m telescope. The reliability of measurements using this model was examined by experiments with panel displacements. Panel adjustments using the best-fit panel model successfully improved the surface accuracy of the antenna from 138 µm rms to 84 µm rms (rms/D = 2×10⁻⁶).
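    The per-panel estimation described above can be sketched as a least-squares plane fit over each panel's samples of the holographically measured surface-error map: the constant term is the panel's displacement (piston) and the linear terms are its tilts. The following is a minimal illustration only; the panel geometry, sample counts, and noise level are hypothetical, not taken from the paper.

```python
import numpy as np

def fit_panel(x, y, dz):
    """Least-squares fit of piston (displacement) and tilts to the
    surface-error samples dz measured at points (x, y) on one panel.
    Model: dz = piston + tilt_x * x + tilt_y * y."""
    A = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(A, dz, rcond=None)
    return coeffs  # [piston, tilt_x, tilt_y]

# Hypothetical panel: error-map samples with a known 100 um piston,
# a small x-tilt, and 5 um rms measurement noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)   # metres across the panel
y = rng.uniform(-1.0, 1.0, 200)
dz = 100e-6 + 20e-6 * x + rng.normal(0.0, 5e-6, 200)
piston, tx, ty = fit_panel(x, y, dz)
```

    Subtracting the fitted piston and tilt per panel is what separates a discontinuous (panel-setting) error from a continuous surface error; note that the quoted 84 µm rms over the 45-m aperture is indeed about 2×10⁻⁶ of the diameter.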

  • A Superresolution Technique for Antenna Pattern Measurements

    Yasutaka OGAWA  Teruaki NAKAJIMA  Hiroyoshi YAMADA  Kiyohiko ITOH  

     
    PAPER

      Vol:
    E76-B No:12
      Page(s):
    1532-1537

    A new superresolution technique is proposed for antenna pattern measurements. Unwanted reflected signals often impinge on the antenna when its pattern is measured outdoors. A time-domain superresolution technique (a MUSIC algorithm) has been proposed to eliminate the unwanted signals for a narrow pass-band antenna. The MUSIC algorithm needs many snapshots to obtain a correlation matrix, which is undesirable for antenna pattern measurements because acquiring the data takes a long time. In this paper, we propose to reduce the noise component (a stochastic quantity) using FFT and gating techniques before applying MUSIC. The new technique needs only a few snapshots and thus saves measurement time.
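    The FFT-and-gating step described above can be sketched as follows: transform the measured frequency response to the time domain, keep only a gate around the direct-path arrival (suppressing noise and late reflections), and transform back before any further processing such as MUSIC. This is a minimal illustration with simulated data; the measurement setup and gate placement are hypothetical.

```python
import numpy as np

def fft_gate(freq_response, gate_lo, gate_hi):
    """Suppress out-of-gate components of a measured frequency
    response by time-domain gating."""
    h = np.fft.ifft(freq_response)   # impulse-response estimate
    gate = np.zeros_like(h)
    gate[gate_lo:gate_hi] = 1.0      # keep only the gated interval
    return np.fft.fft(h * gate)

# Hypothetical data: a direct path at delay bin 5, broadband noise.
n = 256
true_ir = np.zeros(n, dtype=complex)
true_ir[5] = 1.0
rng = np.random.default_rng(1)
measured = np.fft.fft(true_ir) + 0.05 * rng.standard_normal(n)
gated = fft_gate(measured, 0, 16)    # gate around the direct path
```

    After gating, the noise energy outside the gate is removed entirely, so far fewer snapshots are needed to form a usable correlation matrix.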

  • Scene Interpretation with Default Parameter Models and Qualitative Constraints

    Michael HILD  Yoshiaki SHIRAI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E76-D No:12
      Page(s):
    1510-1520

    High variability of object features and poor class separation of objects are the main causes of the difficulties encountered in the interpretation of ground-level natural scenes. To cope with these two problems we propose a method which, in a first stage, extracts those regions that can be segmented and immediately recognized with sufficient reliability (core regions), and then tries to extend these core regions up to their real object boundaries. The extraction of reliable core regions is generally difficult to achieve. Instead of using fixed sets of features and fixed parameter settings, our method employs multiple local features (including textural features) and multiple parameter settings. Not all available features may yield useful core regions, but those core regions that are extracted from these multiple features contribute to the reliability of the objects they represent. Because it is not possible to extract such regions uniquely, the extraction mechanism computes multiple segmentations of the same object from these multiple features and parameter settings. Then those regions are extracted which satisfy the constraints given by knowledge about the objects (shape, location, orientation, spatial relationships). Several spatially overlapping regions are combined, and the combined regions obtained for several features are integrated to form core regions for the given object class.

  • A Reconfigurable Parallel Processor Based on a TDLCA Model

    Masahiro TSUNOYAMA  Masataka KAWANAKA  Sachio NAITO  

     
    PAPER

      Vol:
    E76-D No:11
      Page(s):
    1358-1364

    This paper proposes a reconfigurable parallel processor based on a two-dimensional linear cellular automaton (TDLCA) model. A processor based on this model can be reconfigured quickly by exploiting the characteristics of the automaton used as its model. Moreover, the processor has a shorter data path between processing elements than a processor based on the one-dimensional linear cellular automaton model discussed previously. The processing elements of the processor are regarded as cells, and the operational states of the processor are treated as states of the automaton. When faults are detected, the processor can be reconfigured by changing its state under the state transition function determined by the weighting function of the automaton model. The processor can be reconfigured within the single clock period required to make a state transition. This processor is therefore highly effective for real-time data processing systems requiring high reliability.

  • Development of TTS Card for PCs and TTS Software for WSs

    Yoshiyuki HARA  Tsuneo NITTA  Hiroyoshi SAITO  Ken'ichiro KOBAYASHI  

     
    PAPER

      Vol:
    E76-A No:11
      Page(s):
    1999-2007

    Text-to-speech synthesis (TTS) is currently one of the most important media conversion techniques. In this paper, we describe a Japanese TTS card developed for constructing a personal-computer-based multimedia platform, and a TTS software package developed for a workstation-based multimedia platform. Some applications of this hardware and software are also discussed. The TTS consists of a linguistic processing stage for converting text into phonetic and prosodic information, and a speech processing stage for producing speech from the phonetic and prosodic symbols. The linguistic processing stage uses morphological analysis, rewriting rules for accent movement and pause insertion, and other techniques to impart correct accentuation and a natural-sounding intonation to the synthesized speech. The speech processing stage employs the cepstrum method with consonant-vowel (CV) syllables as the synthesis unit to achieve clear and smooth synthesized speech. All of the processing for converting Japanese text (consisting of mixed Japanese Kanji and Kana characters) to synthesized speech is done internally on the TTS card. This allows the card to be used widely in various applications, including electronic mail and telephone service systems without placing any processing burden on the personal computer. The TTS software was used for an E-mail reading tool on a workstation.

  • On the Surface-Patch and Wire-Grid Modeling for Planar Antenna Mounted on Metal Housing

    Morteza ANALOUI  Yukio KAGAWA  

     
    PAPER-Antennas and Propagation

      Vol:
    E76-B No:11
      Page(s):
    1450-1455

    This paper concerns the numerical analysis of electromagnetic radiation from conducting surface structures. The method of moments is discussed with surface-patch modeling, in which the surface quantities, i.e., the current, charge and impedance, are introduced directly, and with wire-grid modeling, in which the surface quantities are approximated by filamentary traces. The numerical advantage of wire-grid modeling lies in the simplicity of its mathematics, which must be traded against uncertainties in the construction of the model. Surface-patch techniques are generally not only clearer and more straightforward but also more reliable than wire-grid modeling for the computation of surface quantities. In this work, we present a comparative discussion of the two approaches through the analysis of a built-in planar antenna. For the purpose of the comparison, the same electric field integral equation and the same Galerkin procedure with linear expansion/testing functions are used for both the wire-grid and surface-patch models.

  • Small-Amplitude Bus Drive and Signal Transmission Technology for High-Speed Memory-CPU Bus Systems

    Tatsuo KOIZUMI  Seiichi SAITO  

     
    INVITED PAPER

      Vol:
    E76-C No:11
      Page(s):
    1582-1588

    Computing devices have reached data frequencies of 100 MHz, creating a need for small-amplitude impedance-matched buses. We simulated the signal transmission characteristics of two basic driver circuits, push-pull and open-drain, for a synchronous DRAM I/O bus. The push-pull driver caused less signal distortion from the parasitic inductance and capacitance of packages, and thus has higher frequency limits than the open-drain GTL type. We describe a bus system using push-pull drivers which operates at over 125 MHz. The bus line is 70 cm long with 8 I/O loads distributed along the line, each having 25 nH of parasitic inductance and 7 pF of parasitic capacitance.

  • A Consensus-Based Model for Responsive Computing

    Miroslaw MALEK  

     
    INVITED PAPER

      Vol:
    E76-D No:11
      Page(s):
    1319-1324

    The emerging discipline of responsive systems demands fault-tolerant and real-time performance in uniprocessor, parallel, and distributed computing environments. A new proposal for a responsiveness measure is presented, followed by an introduction of a model for responsive computing. The model, called CONCORDS (CONsensus/COmputation for Responsive Distributed Systems), is based on the integration of various forms of consensus and computation (progress or recovery). The consensus tasks include clock synchronization, diagnosis, checkpointing, scheduling, and resource allocation.

  • Group-to-Group Communications for Fault-Tolerance in Distributed Systems

    Hiroaki HIGAKI  Terunao SONEOKA  

     
    PAPER

      Vol:
    E76-D No:11
      Page(s):
    1348-1357

    This paper proposes a group-to-group communications algorithm that extends the range of distributed systems in which active-replication fault-tolerance can be achieved to partner-model distributed systems, in which all processes communicate with each other on an equal footing. The active replication approach, in which all replicated processes are active, can achieve fault-tolerance with low overhead because checkpoint setting and rollback are not required for recovery from process failure. This algorithm guarantees that each replicated process in a process group has the same execution history and that communication between process groups remains consistent even in the presence of process failure and message loss. The number of control messages that must be transmitted between processes for a communication between process groups is only linear in the number of replicated processes in each process group. Furthermore, this algorithm reduces the overhead of reconfiguring a process group by keeping process failure and recovery information local to each process group.

  • Significance of Suitability Assessment in Speech Synthesis Applications

    Hideki KASUYA  

     
    INVITED PAPER

      Vol:
    E76-A No:11
      Page(s):
    1893-1897

    The paper indicates the importance of suitability assessment in speech synthesis applications. Human factors involved in the use of synthetic speech are first discussed on the basis of an example from a newspaper company where synthetic speech is extensively used as an aid for proofreading manuscripts. Some findings obtained from perceptual experiments on subjects' preferences for paralinguistic properties of synthetic speech are then described, focusing primarily on the suitability of pitch characteristics, speaker's gender, and speaking rates in a task where subjects are asked to proofread a printed text while listening to the speech. The paper finally argues the need for a flexible speech synthesis system which helps users create their own synthetic speech.

  • Should Responsive Systems be Event-Triggered or Time-Triggered?

    Hermann KOPETZ  

     
    INVITED PAPER

      Vol:
    E76-D No:11
      Page(s):
    1325-1332

    In this paper, the two different paradigms for the design of responsive (i.e., distributed fault-tolerant real-time) systems, the event-triggered (ET) approach and the time-triggered (TT) approach, are analyzed and compared. The comparison focuses on the temporal properties and considers the issues of predictability, testability, resource utilization, extensibility, and assumption coverage.

  • Test Sequence Generation for Sequential Circuits with Distinguishing Sequences

    Yoshinobu HIGAMI  Seiji KAJIHARA  Kozo KINOSHITA  

     
    PAPER

      Vol:
    E76-A No:10
      Page(s):
    1730-1737

    In this paper we present a method to generate test sequences for stuck-at faults in sequential circuits which have distinguishing sequences. Since a circuit may have no distinguishing sequence, we use two design techniques to ensure that circuits have one: one at the state-transition level and the other at the gate level. With the proposed method a complete test sequence can be generated. The sequence consists of test vectors for the combinational part of the circuit, distinguishing sequences, and transition sequences. The test vectors, which are generated by a combinational test generator, cause faulty states or faulty output responses for a fault, and distinguishing sequences identify the differences between faulty states and fault-free states. Transition sequences are necessary to bring the circuit to the states required by the combinational test vectors, and the distinguishing sequence and transition sequence are also used in the initializing sequence. Some techniques for shortening the test sequence are also proposed. The basic ideas are to use a short initializing sequence and to find a good order in which to concatenate sequences; fault simulation is conducted so as not to miss any faults. The initializing sequence is obtained by using a distinguishing sequence. The efficiency of our method is shown in experimental results for benchmark circuits.

  • A Knowledge-Based Database Assistant with a Menu-Based Natural Language User-Interface

    Xu WU  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E76-D No:10
      Page(s):
    1276-1287

    A knowledge-based database assistant is an expert system designed to help novice users formulate correct and complete database queries. This paper describes a knowledge-based database assistant with advanced facilities such as (1) menu-based query-making guidance, (2) a menu-based natural-language user-interface, and (3) a database-command generator which formulates formal database queries in the SQL language. The system works as an intelligent front-end to an SQL database system or as a computer-aided SQL tutorial system. In this paper, we also discuss a semantic-network model, named S-Net, which is used to represent the knowledge for formal database-query formulating processes. The menu-based English user-interface allows end-users to make a query by filling a certain query pattern with appropriate words. The query-pattern filling process is guided by pop-up menus provided by the system. The query-pattern instances thus obtained are then translated into formal database queries. The translation is carried out by evaluating operations on the S-Net knowledge base, which conveys knowledge about the application domain and the underlying database schema.

  • Resonance Absorptions in a Metal Grating with a Dielectric Overcoating

    Toyonori MATSUDA  Yoichi OKUNO  

     
    LETTER-Scattering and Diffraction

      Vol:
    E76-C No:10
      Page(s):
    1505-1509

    Field distributions and energy flows of the surface waves excited in single-layer-overcoated gratings are evaluated in order to investigate the behavior of the resonance absorption in the grating.

  • Recent Progress in Borehole Radars and Ground Penetrating Radars in Japan

    Motoyuki SATO  Tsutomu SUZUKI  

     
    INVITED PAPER

      Vol:
    E76-B No:10
      Page(s):
    1236-1242

    This paper describes the fundamental systems of borehole radars and their recent progress in Japan. Early development of borehole radars was carried out for the detection of cracks in crystalline rock; however, the fields of application are expanding to various other targets such as soil and sedimentary rocks. Conventionally developed radar systems are not necessarily suitable for these applications and must be modified. New technologies such as radar polarimetry and radar tomography have also been introduced.

  • Suppression of Weibull Radar Clutter

    David FERNANDES  Matsuo SEKINE  

     
    INVITED PAPER

      Vol:
    E76-B No:10
      Page(s):
    1231-1235

    Weibull-distributed clutter is reviewed. Most of the clutter received by L-, S-, C-, X- and Ku-band radars obeys the Weibull distribution. Clutter suppression techniques for Weibull clutter are also reviewed; in particular, the generalized Weibull CFAR detector is emphasized. The approach is to estimate the shape and scale parameters of the Weibull clutter using order statistics and then use them in the detector. The generalized CFAR detector transforms the Weibull clutter distribution into a normalized exponential distribution. When a target is present, the transformation produces a large error that can be used to detect the target. Actual data taken by a Ku-band radar are used to compare the proposed method with another method of estimating the Weibull parameters and with the Weibull CFAR detector. Order-statistics estimation requires a small number of samples and can be used to find the local values of the Weibull clutter parameters; thus, the proposed method requires less computational time to find the Weibull parameters.
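    The transformation described above can be sketched as follows: estimate the Weibull shape c and scale b from order statistics of the clutter samples, then map each sample x to (x/b)^c, which is exponentially distributed with unit mean under the clutter-only hypothesis, so a fixed threshold gives constant false-alarm rate. This is a minimal illustration with simulated clutter; a simple two-quantile order-statistics estimator is used here, and the paper's exact estimator may differ.

```python
import numpy as np

def weibull_params_from_quantiles(samples, p1=0.25, p2=0.75):
    """Estimate Weibull shape c and scale b from two sample quantiles.
    Uses ln(-ln(1-p)) = c*(ln x_p - ln b), an order-statistics fit."""
    x1, x2 = np.quantile(samples, [p1, p2])
    g1 = np.log(-np.log(1.0 - p1))
    g2 = np.log(-np.log(1.0 - p2))
    c = (g2 - g1) / (np.log(x2) - np.log(x1))
    b = x1 / (-np.log(1.0 - p1)) ** (1.0 / c)
    return c, b

def cfar_transform(x, c, b):
    """Map Weibull clutter to (approximately) unit-mean exponential;
    unusually large values after the transform indicate a target."""
    return (x / b) ** c

# Simulated Weibull clutter with shape 1.5 and scale 2.0.
rng = np.random.default_rng(2)
clutter = 2.0 * rng.weibull(1.5, 10000)
c_hat, b_hat = weibull_params_from_quantiles(clutter)
z = cfar_transform(clutter, c_hat, b_hat)
```

    Because only a few order statistics are needed, the parameters can be tracked locally across range cells, which is the source of the computational saving noted in the abstract.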

  • Statistical Property and Signal Processing of Received Wave of Subsurface Radar

    Kihachiro TAKETOMI  Yasumitsu MIYAZAKI  

     
    PAPER-Subsurface Radar

      Vol:
    E76-B No:10
      Page(s):
    1285-1289

    This paper proposes that the statistical properties of the waveforms obtained by a pulse-type subsurface radar follow the Weibull probability density distribution. The shape parameter of this distribution is related to the underground condition. Using the shape parameter, we calculated the statistical variance. The ratio of the variance in the target area to that in a non-target area of the invisible medium is evaluated as a measure of the effect of the radar signal processing. An improvement of over 20 dB, for example, can be obtained by means of Log/CFAR processing. It was made clear that the processing cell size should be set to the length corresponding to the self-correlation.

  • Theory and Techniques for Testing Check Bits of RAMs with On-Chip ECC

    Manoj FRANKLIN  Kewal K. SALUJA  

     
    PAPER-Fault Tolerant Computing

      Vol:
    E76-D No:10
      Page(s):
    1243-1252

    As RAMs become denser, their reliability decreases because of complex interactions between memory cells and soft errors due to alpha-particle radiation. To rectify this problem, RAM manufacturers have started incorporating on-chip (built-in) ECC. In order to minimize the area overhead of on-chip ECC, the same technology is used for implementing the check bits and the information bits. Thus the check bits are exposed to the same failure modes as the information bits. Furthermore, faults in the check bits will manifest as uncorrectable multiple errors when a soft error occurs. Therefore it is important to test the check bits for all failure modes expected of other cells. In this paper, we formulate the problem of testing RAMs with on-chip ECC capability. We then derive necessary and sufficient conditions for testing the check bits for arbitrary and adjacent neighborhood pattern sensitive faults. We also provide an efficient solution to test a memory array of N bits (including check bits) for 5-cell neighborhood pattern sensitive faults in O(N) reads and writes, with the check bits also tested for the same fault classes as the information bits.

  • Automatic Extraction of Target Images for Face Identification Using the Sub-Space Classification Method

    Shigeru AKAMATSU  Tsutomu SASAKI  Hideo FUKAMACHI  Yasuhito SUENAGA  

     
    PAPER

      Vol:
    E76-D No:10
      Page(s):
    1190-1198

    This paper proposes a scheme that offers robust extraction of target images in standard view from input facial images, in order to realize accurate and automatic identification of human faces. A standard view for target images is defined using internal facial features, i.e., the two eyes and the mouth, as steady reference points of the human face. Because reliable detection of such facial features is not an easy task in practice, the proposed scheme is characterized by a combination of two steps: first, all possible regions of facial features are extracted using a color image segmentation algorithm, then the target image is selected from among the candidates defined by tentative combination of the three reference points, through applying the classification framework using the sub-space method. Preliminary experiments on the scheme's flexibility based on subjective assessment indicate a stability of nearly 100% in consistent extraction of target images in the standard view, not only for familiar faces but also for unfamiliar faces, when the input face image roughly matches the front view. By combining this scheme for normalizing images into the standard view with an image matching technique for identification, an experimental system for identifying faces among a limited number of subjects was implemented on a commercial engineering workstation. High success rates achieved in the identification of front view face images obtained under uncontrolled conditions have objectively confirmed the potential of the scheme for accurate extraction of target images.

  • Algebraic Approaches for Nets Using Formulas to Describe Practical Software Systems

    Kazuhito OHMAKI  Yutaka SATO  Ichiro OGATA  Kokichi FUTATSUGI  

     
    PAPER

      Vol:
    E76-A No:10
      Page(s):
    1580-1590

    We often use data flow diagrams or state transition diagrams to design software systems with concurrency. We call these diagrams nets in this paper. The semantics of any method used to describe such software systems should be defined in some formal way, and there is no doubt that nets should be supported by appropriate theoretical frameworks. In this paper, we use CCS as a typical algebraic approach of using formulas to express concurrent behaviors, and point out the features that distinguish CCS from Petri nets. Any approach should be not only theoretically beautiful but also practically useful. We use the specification language LOTOS as an example: it combines two formalisms, CCS and ADT, and is designed for specifying practical communication protocols. Algebraic approaches using formulas, like LOTOS, can be considered a compact way to express concurrent behaviors. We then extend our discussion of net-oriented approaches into the UIMS research field. After mentioning state transition models of UIMS, we describe a practically used example, VIA-UIMS, which was developed by one of the authors. VIA-UIMS employs a net-oriented architecture and was designed to reconstruct tools that are already widely used at many sites.
