The search functionality is under construction.

Keyword Search Result

[Keyword] REM(1013hit)

841-860hit(1013hit)

  • Unreachability Proofs for β Rewriting Systems by Homomorphisms

    Kiyoshi AKAMA  Yoshinori SHIGETA  Eiichi MIYAMOTO  

     
    PAPER-Automata, Languages and Theory of Computing

      Vol:
    E82-D No:2
      Page(s):
    339-347

    Given two terms and their rewriting rules, an unreachability problem asks for a proof of the non-existence of a reduction sequence from one term to the other. This paper formalizes a method for solving unreachability problems by abstraction; i.e., reducing an original concrete unreachability problem to a simpler abstract unreachability problem, so that proving the abstract unreachability proves the unreachability of the original concrete problem. The class of rewriting systems discussed in this paper is called β rewriting systems. The class of β rewriting systems includes very important systems such as semi-Thue systems and Petri Nets; abstract rewriting systems are also a subclass of β rewriting systems. A β rewriting system is defined on axiomatically formulated base structures, called β structures, which are used to formalize the concepts of "contexts" and "replacement" common to many rewritten objects. The domain underlying each of semi-Thue systems, Petri Nets, and other rewriting systems is formalized by a β structure. A concept of homomorphisms from a β structure (a concrete domain) to a β structure (an abstract domain) is introduced. A homomorphism theorem (Theorem 1) is established for β rewriting systems, which states that concrete reachability implies abstract reachability. An unreachability theorem (Corollary 1) is also proved for β rewriting systems. It is the contraposition of the homomorphism theorem, i.e., it says that abstract unreachability implies concrete unreachability. The unreachability theorem is used to solve two unreachability problems: a coffee bean puzzle and a checker board puzzle.
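    A minimal sketch of the abstraction idea, using the coffee bean puzzle mentioned in the abstract. The encoding and rules below are a common textbook version of the puzzle, not the paper's β-structure formalism: the homomorphism maps a concrete state to the parity of its white beans, which every rule preserves, so differing parities prove unreachability.

    ```python
    # Coffee-bean puzzle: two beans are removed and one is put back:
    #   W,W -> B    B,B -> B    B,W -> W
    # Concrete state: (blacks, whites). Abstract state: whites mod 2 --
    # a homomorphic image in which every rule preserves the parity of whites.

    def step(state):
        b, w = state
        nexts = set()
        if w >= 2:
            nexts.add((b + 1, w - 2))   # W,W -> B
        if b >= 2:
            nexts.add((b - 1, w))       # B,B -> B
        if b >= 1 and w >= 1:
            nexts.add((b - 1, w))       # B,W -> W (black removed, white kept)
        return nexts

    def abstract(state):
        return state[1] % 2             # the homomorphism: parity of whites

    def unreachable_by_abstraction(src, dst):
        # Contrapositive of "concrete reachability implies abstract
        # reachability": differing abstract images prove unreachability.
        return abstract(src) != abstract(dst)

    print(unreachable_by_abstraction((3, 5), (0, 2)))  # True: parity 1 vs 0
    ```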

  • Transfer Function Matrix Measurement of AWG Multi/Demulti-Plexers

    Kazunari HARADA  Kenji SHIMIZU  Nobuhiro SUGANO  Teruhiko KUDOU  Takeshi OZEKI  

     
    PAPER-Photonic WDM Devices

      Vol:
    E82-C No:2
      Page(s):
    349-353

    Wavelength Division Multiplex (WDM) photonic networks are expected to be a key component of the global communication infrastructure. Accurate measurement methods for AWG-MUX/DMUX devices are desirable for WDM network design. We measured the transfer function matrix of an AWG-MUX and found that polarization mode dispersion (PMD) and polarization dependent loss (PDL) show bandpass characteristics, which may limit the maximum size and bit rate of the system. These bandpass characteristics of PMD and PDL are reproduced by a simple AWG-MUX model: a phase constant difference of 0.5% between orthogonal modes in the arrayed waveguides is sufficient to reproduce the measured passband characteristics of PMD and PDL. We find that the phase distribution difference between the two orthogonal modes in the arrayed waveguide grating gives rise to complex PMD.

  • Optimal Problem for Contrast Enhancement in Polarimetric Radar Remote Sensing

    Jian YANG  Yoshio YAMAGUCHI  Hiroyoshi YAMADA  Masakazu SENGOKU  Shi-Ming LIN  

     
    PAPER-Electronic and Radio Applications

      Vol:
    E82-B No:1
      Page(s):
    174-183

    This paper proposes two numerical methods for solving the optimal contrast-enhancement problem in the cross-pol and co-pol channels. For the cross-pol channel, the contrast (power ratio) is expressed in a homogeneous form, which reduces the polarimetric contrast optimization to a distinctive eigenvalue problem. For the co-pol channel, the paper proposes a cross-iterative optimization method based on the formula used in the matched-pol channel. Both numerical methods are proved to be convergent algorithms, and they are effective for obtaining the optimum polarization state. In addition, one of the proposed methods is applied to the optimal contrast-enhancement problem for the time-independent target case. To verify the proposed methods, the paper provides two numerical examples. The calculated results are identical to those of other authors, showing the validity of the proposed methods.
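    As an illustrative sketch of the eigenvalue formulation: maximizing a power ratio hᴴAh / hᴴBh over polarization states h is a generalized eigenvalue problem, solved by the top eigenvector of B⁻¹A. The 2×2 Hermitian matrices below are hypothetical power matrices, not data from the paper.

    ```python
    import numpy as np

    # Hypothetical Hermitian positive-definite "power" matrices
    A = np.array([[3.0, 1.0 + 0.5j], [1.0 - 0.5j, 2.0]])   # target channel
    B = np.array([[2.0, -0.3j], [0.3j, 1.0]])              # clutter channel

    def contrast(h):
        """Power ratio (contrast) for polarization state h."""
        return (h.conj() @ A @ h).real / (h.conj() @ B @ h).real

    # Generalized eigenproblem: B^{-1} A h = lambda h; the eigenvector of the
    # largest eigenvalue attains the maximum contrast (equal to that eigenvalue).
    vals, vecs = np.linalg.eig(np.linalg.inv(B) @ A)
    h_opt = vecs[:, np.argmax(vals.real)]
    ```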

  • A Simple Algorithm for Adaptive Allpass-FIR Digital Filter Using Lattice Allpass Filter with Minimum Multipliers

    James OKELLO  Yoshio ITOH  Yutaka FUKUI  Masaki KOBAYASHI  

     
    PAPER-Digital Signal Processing

      Vol:
    E82-A No:1
      Page(s):
    138-144

    An adaptive infinite impulse response (IIR) digital filter implemented as a cascade of second-order direct-form allpass filters and a finite impulse response (FIR) filter has the property that its poles converge to those of the unknown system. In this paper we implement the adaptive allpass-FIR digital filter using a lattice allpass filter with a minimum number of multipliers. We then derive a simple adaptive algorithm that does not increase the overall number of multipliers of the proposed adaptive digital filter (ADF) compared with an ADF that uses the direct-form allpass filter. The proposed structure and algorithm exhibit a kind of orthogonality that ensures convergence of the poles of the ADF to those of the unknown system. Simulation results confirm this convergence.
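    A quick sketch of the allpass property these structures rely on: in a second-order direct-form allpass section the numerator is the reversed denominator, so the magnitude response is exactly 1 at every frequency, whatever the (stable) coefficients. The coefficient values here are arbitrary examples, not values from the paper.

    ```python
    import numpy as np

    a1, a2 = 0.5, 0.25                   # example coefficients, poles inside unit circle
    w = np.linspace(0.0, np.pi, 64)      # frequency grid
    zinv = np.exp(-1j * w)               # z^{-1} evaluated on the unit circle

    # H(z) = (a2 + a1 z^{-1} + z^{-2}) / (1 + a1 z^{-1} + a2 z^{-2}):
    # mirror-image numerator/denominator -> unit magnitude (allpass)
    H = (a2 + a1 * zinv + zinv**2) / (1.0 + a1 * zinv + a2 * zinv**2)
    mag = np.abs(H)                      # equals 1 at every frequency
    ```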

  • An Upper Bound on Bandwidth Requirement and Its Applications to Traffic Control in ATM Networks

    Piya TANTHAWICHIAN  Akihiro FUJII  Yoshiaki NEMOTO  

     
    PAPER-QoS Control and Traffic Control

      Vol:
    E81-B No:12
      Page(s):
    2371-2379

    Major problems of traffic control in ATM networks include deciding in real time whether the network accepts a new call, and selecting the best set of Dual Leaky Bucket (DLB) parameter values. To solve these problems, it is necessary to determine the amount of network bandwidth required by the call. In this paper, we present an analysis based on a bounding technique to derive an upper bound on the bandwidth requirement when the call is characterized by a set of DLB parameters. As a result, a new definition of the upper bound on the bandwidth requirement and simple formulae for computing it are obtained. To clarify the advantages of the derived upper bound, we demonstrate two of its applications: selecting, from candidates, the set of DLB parameter values that minimizes the bandwidth allocated to the call, and establishing a Connection Admission Control (CAC) scheme. The upper-bound-based CAC scheme is simple enough to run in real time and provides a significant improvement in network utilization over the peak-rate-based CAC scheme.
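    To make the flavor of such a bound concrete, here is the standard network-calculus bound for a DLB-constrained source served at a constant rate; it is shown purely as an illustration and is not necessarily the formula derived in the paper.

    ```python
    def min_service_rate(p, r, b, buf):
        """Smallest constant service rate c (r <= c <= p) that keeps the
        backlog of a dual-leaky-bucket source below `buf`.

        The source obeys the DLB arrival curve A(t) = min(p*t, b + r*t)
        (p: peak rate, r: sustainable rate, b: burst size). Against rate c,
        the worst-case backlog is b*(p - c)/(p - r), reached at t = b/(p - r).
        Standard network-calculus bound, not the paper's own formula."""
        if buf >= b:
            return float(r)              # the buffer absorbs the whole burst
        return p - buf * (p - r) / b     # solve b*(p - c)/(p - r) = buf for c

    # e.g. a source with 10 Mb/s peak, 2 Mb/s sustainable rate and an 8 Mb
    # burst needs 6 Mb/s of bandwidth when 4 Mb of buffering is available.
    print(min_service_rate(10, 2, 8, 4))  # 6.0
    ```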

  • A Non-Reflection-Influence Method for On-Line Measurement of Permittivity Using Microwave Free-Space Technique

    Zhihong MA  Seichi OKAMURA  

     
    PAPER-Microwave and Millimeter Wave Technology

      Vol:
    E81-C No:12
      Page(s):
    1936-1941

    This paper describes a new method for permittivity measurement using a microwave free-space technique. The general idea is to measure the amplitudes of the transmission and reflection coefficients and to calculate the permittivity from the measured values. Theoretical analysis shows that the permittivity of the sample can be calculated solely from the measured amplitudes of the transmission and reflection coefficients when the sample is prepared with attenuation large enough that the multiple reflections between its two surfaces can be neglected. Using this method, the permittivity measurement can be performed without reflection influence, and on-line measurement of the permittivity becomes possible because the permittivity can be measured instantaneously and without contact with the material.

  • Efficient Evaluation of Aperture Field Integration Method for Polyhedron Surfaces and Equivalence to Physical Optics

    Suomin CUI  Makoto ANDO  

     
    PAPER-Electromagnetic Theory

      Vol:
    E81-C No:12
      Page(s):
    1948-1955

    The equivalence between the Aperture Field Integration Method (AFIM) and Physical Optics (PO) is discussed for polyhedron surfaces in this paper. The necessary conditions for the equivalence are summarized: they demand complete equivalent surface currents and complete apertures. The importance of exact expressions for both the incident and reflected fields in constructing the equivalent surface currents is emphasized and demonstrated numerically. The fields from reflected components on the additional surface lying on the Geometrical Optics (GO) reflection boundary are evaluated asymptotically. The resulting analytical expression enhances the computational efficiency of the complete AFIM. The equivalent edge currents (EECs) for AFIM (AFIMEECs) are used to extract the mechanism of this equivalence between AFIM and PO.

  • A Support Tool for Specifying Requirements Using Structures of Documents

    Tomofumi UETAKE  Morio NAGATA  

     
    PAPER-Application

      Vol:
    E81-D No:12
      Page(s):
    1429-1438

    The software requirements specification process consists of three steps: requirements capture and analysis; requirements definition and specification; and requirements validation. At the beginning of the second step, on which this paper focuses, there are several types of massive documents generated in the first step. Since the developers and the clients/users of the new software system may not share knowledge of the field the system deals with, it is difficult for the developers to produce a correct requirements specification from these documents, and there has been little research on solving this problem. The authors have developed a support tool that helps produce a correct requirements specification by arranging and restructuring those documents into clearly understandable forms. In the second step, the developers must specify the functions of the new system and their constraints from those documents. Analyzing the developers' real activities in order to design the support tool, the authors model this step as the following four activities. To specify the functions of the new system, the developers must collect the sentences, scattered throughout the documents, that may suggest functions. To define the details of each function, the developers must gather the paragraphs describing that function. To verify the correctness of each function, the developers must survey all related documents. To perform the above activities successfully, the developers must correctly manage the various versions of those documents. Following these four types of activities, the authors propose effective ways to support the developers by arranging those documents. This paper presents algorithms based on this model that use the structures of the documents and keywords that may suggest functions or constraints. To examine the feasibility of the proposal, the authors implemented a prototype tool. The tool extracts the relevant information scattered throughout the documents. The effectiveness of the proposal is demonstrated by experiments.

  • Patterned Versus Conventional Object-Oriented Analysis Methods: A Group Project Experiment

    Shuichiro YAMAMOTO  Hiroaki KUROKI  

     
    PAPER-Experiment

      Vol:
    E81-D No:12
      Page(s):
    1458-1465

    Object-oriented analysis methods can be grouped into data-driven and behavior-driven approaches. With data-driven approaches, object models are developed from a list of objects and their inter-relationships, which describe a static view of the real world. With behavior-driven approaches, a system usage scenario is analyzed before the object models are developed. Although qualitative comparisons of these two types of methods have been made, no statistical study has evaluated them in controlled experiments. This paper proposes the patterned object-oriented method, POOM, which is a behavior-driven approach, and compares it to OMT, a data-driven approach, in small-team experiments. The effectiveness of POOM is shown in terms of productivity and homogeneity.

  • Proposal for Incremental Formal Verification

    Toru SHONAI  Kazuhiko MATSUMOTO  

     
    PAPER-Computer Hardware and Design

      Vol:
    E81-D No:11
      Page(s):
    1172-1185

    A formal verification approach that combines verification based on binary decision diagrams (BDDs) and theorem-prover-based verification has been developed. This approach is called the incremental formal verification approach. It uses an incremental verifier based on BDDs and a conventional theorem-prover-based verifier. Inputs to the incremental verifier are specifications in higher-level descriptions given in terms of arithmetic expressions, lower-level design descriptions given in terms of Boolean expressions, and constraints. The incremental verifier limits the behavior of the design by using the constraints, and compares the partial behavior limited by the constraints with the specifications by using BDD-based Boolean matching. It also replaces the matched part of the lower design description with equivalent constructs in the higher descriptions. Successive uses of the incremental verifier with different constraints can produce higher design descriptions from the lower design descriptions in a step-by-step manner. These higher descriptions are then input to the theorem-prover-based verification which enables faster treatment of larger circuits. Preliminary experimental results show that the incremental verifier can successfully check the partial equivalence and replace the matched parts by higher constructs.
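    As a toy stand-in for the BDD-based Boolean matching described above, the check below compares a gate-level (Boolean) design against an arithmetic specification by exhausting all input assignments; BDDs make the same comparison feasible for large circuits. The one-bit full-adder example is hypothetical, not a circuit from the paper.

    ```python
    from itertools import product

    def equivalent(f, g, nvars):
        """Brute-force Boolean equivalence check over all 2^nvars inputs."""
        return all(f(*v) == g(*v) for v in product((0, 1), repeat=nvars))

    # Higher-level (arithmetic) specification of a 1-bit full adder ...
    spec_sum   = lambda a, b, cin: (a + b + cin) % 2
    spec_carry = lambda a, b, cin: (a + b + cin) // 2

    # ... and a lower-level (Boolean) design description to match against it.
    impl_sum   = lambda a, b, cin: a ^ b ^ cin
    impl_carry = lambda a, b, cin: (a & b) | (cin & (a ^ b))

    print(equivalent(spec_sum, impl_sum, 3))      # True
    print(equivalent(spec_carry, impl_carry, 3))  # True
    ```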

  • Propagation Mechanisms of UHF Radiowave Propagation into Multistory Buildings for Microcellular Environment

    Jenn-Hwan TARNG  Yung-Chao CHANG  Chih-Ming CHEN  

     
    PAPER-Antennas and Propagation

      Vol:
    E81-B No:10
      Page(s):
    1920-1926

    The mechanisms of UHF radiowave propagation into multistory office buildings are explored using ray-tracing-based models: a three-dimensional (3-D) ray-tracing model and a direct-transmitted-ray (DTR) model. The prediction accuracy of the models is ascertained against a large set of measured data, with measurements carried out at specific sites with different propagation scenarios. The measured results also demonstrate some important propagation phenomena. It is found that (1) the direct transmitted wave may be the dominant mode; (2) the path loss neither increases nor decreases monotonically as a function of floor level; and (3) there is little difference in the average path loss among receiving positions in the same room.

  • Synthesis of Low Peak-to-Peak Waveforms with Flat Spectra

    Takafumi HAYASHI  

     
    PAPER-Circuit Theory

      Vol:
    E81-A No:9
      Page(s):
    1902-1908

    This paper presents both new analytical and new numerical solutions to the problem of generating waveforms with a low peak-to-peak factor. One important application of these results is the generation of pseudo-white-noise signals, which are commonly used in multi-frequency measurements. These measurements often require maximum signal-to-noise ratio while maintaining the lowest peak-to-peak excursion. The new synthesis scheme introduced in this paper uses the Discrete Fourier Transform (DFT) to generate a pseudo-white-noise sequence that theoretically has a minimized peak-to-peak factor, Fp-p. Unlike theoretical works in the literature, the method presented here is based on purely discrete mathematics and hence is directly applicable to the digital synthesis of signals. With this method the shape of the signal can be controlled with about N parameters given N harmonic components. A different permutation of the same set of offset phases of the "source harmonics" creates an entirely different sequence.
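    A small sketch of DFT-based synthesis of a flat-spectrum, low-crest signal. Newman's classical phase schedule φ_k = -π(k-1)²/N is used here as a well-known low-crest choice; the paper's own minimization scheme may differ.

    ```python
    import numpy as np

    N = 32                                        # number of unit-magnitude harmonics
    k = np.arange(1, N + 1)
    spectrum = np.zeros(16 * N, dtype=complex)    # oversampled to resolve the peak
    spectrum[k] = np.exp(-1j * np.pi * (k - 1) ** 2 / N)   # Newman phases
    x = np.fft.ifft(spectrum).real                # inverse DFT -> time-domain waveform
    crest = np.abs(x).max() / np.sqrt(np.mean(x**2))       # peak-to-RMS factor

    # For comparison: all-zero phases (cosines adding coherently at t = 0)
    # give a far higher crest factor for the same flat spectrum.
    spectrum0 = np.zeros_like(spectrum)
    spectrum0[k] = 1.0
    x0 = np.fft.ifft(spectrum0).real
    crest0 = np.abs(x0).max() / np.sqrt(np.mean(x0**2))
    ```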

  • A Proposal of a Method of Total Quality Evaluation in Remote Conference Systems Based on ATM Networks

    Nobuhiro KATAOKA  Hisao KOIZUMI  Hideru DOI  Kenichi KITAGAWA  Norio SIRATORI  

     
    PAPER-Communication Networks and Services

      Vol:
    E81-B No:9
      Page(s):
    1709-1717

    In this paper we propose a total quality evaluation method for an ATM network-based remote conference system, and describe the results of evaluating a proving system. The quality of a remote conference system depends on various elements such as video images, voice signals, and cost, but a total quality index may be regarded as the cost of a remote conference system compared with that of a conventional face-to-face conference. Here, however, the decline in communication quality arising from the remote locations of participants must be included in the evaluation. Moreover, the relative weightings of voice signals, video images of participants, and shared data vary depending on the type of conference, and these factors must also be taken into account. An actual conference system was constructed for evaluation, and based on a MOS (Mean Opinion Score) of the quality elements, the total system quality was evaluated with reference to the proposed concepts. These results are also described in this paper.

  • Millimeter-Wave Multipath Propagation Delay Characteristics in the Outdoor Mobile Radio Environments

    Kazunori KIMURA  Jun HORIKOSHI  

     
    LETTER-Antennas and Propagation

      Vol:
    E81-B No:8
      Page(s):
    1696-1699

    Millimeter-wave propagation characteristics are measured in outdoor environments. In particular, the specific features of an urban area and an open meadowland are compared.

  • Spatial Resolution Improvement of a Low Spatial Resolution Thermal Infrared Image by Backpropagated Neural Networks

    Maria del Carmen VALDES  Minoru INAMURA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E81-D No:8
      Page(s):
    872-880

    Recent progress in neural network research has demonstrated the usefulness of neural networks in a variety of areas. In this work, their application to improving the spatial resolution of a remotely sensed low-resolution thermal infrared image, using the high-spatial-resolution visible and near-infrared images from the Landsat TM sensor, is described. The same task is also performed by an algebraic method. The tests developed are explained, and examples of the results obtained in each test are shown and compared with each other. An error analysis is also carried out, and future improvements of these methods are evaluated.

  • Synchronous RAID5 with Region-Based Layout and Buffer Analysis in Video Storage Servers

    Chan-Ik PARK  Deukyoon KANG  

     
    PAPER-Computer Systems

      Vol:
    E81-D No:8
      Page(s):
    813-821

    Disk arrays are widely accepted as a disk subsystem for video servers due to their high throughput and high concurrency. RAID-like disk arrays used in video servers are usually managed in either RAID level 3 (a request is handled by all the disks in the system) or RAID level 5 (a request is handled by a number of disks determined by the request size); i.e., either only one video stream is handled at a time in RAID level 3, or a certain number of video streams are handled independently at the same time in RAID level 5. Note that RAID level 3 is inappropriate for handling a large number of video streams, and RAID level 5 is inefficient for handling multiple video streams, since serving continuous video streams is an inherently synchronous operation. In this paper, we propose a new video data layout scheme called region-based layout and a synchronous management of RAID5 called synchronous RAID5 for disk arrays used in video servers. We show that the amount of buffering required to support a given number of video requests can be reduced by integrating the region-based layout with the synchronous RAID5 scheme. Group Sweeping Scheduling (GSS) is used as the basic disk scheduling. Analysis shows that the proposed scheme is superior to existing schemes with respect to buffer requirements.
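    For contrast with the paper's region-based layout (whose details are not reproduced here), the sketch below shows the conventional left-symmetric rotating-parity placement that standard RAID5 uses: parity rotates one disk per stripe and the data units follow it cyclically.

    ```python
    def raid5_layout(stripe, ndisks):
        """Left-symmetric RAID5 placement for one stripe: returns
        (parity_disk, list of data disks in unit order).
        Conventional RAID5 mapping, not the paper's region-based scheme."""
        parity = (ndisks - 1 - stripe) % ndisks          # parity rotates left
        data = [(parity + 1 + i) % ndisks for i in range(ndisks - 1)]
        return parity, data

    # With 5 disks, stripe 0 puts parity on disk 4 and data on disks 0..3;
    # stripe 1 rotates parity to disk 3.
    print(raid5_layout(0, 5))  # (4, [0, 1, 2, 3])
    print(raid5_layout(1, 5))  # (3, [4, 0, 1, 2])
    ```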

  • Media Synchronization in Heterogeneous Networks: Stored Media Case

    Shuji TASAKA  Yutaka ISHIBASHI  

     
    PAPER-Media Synchronization and Video Coding

      Vol:
    E81-B No:8
      Page(s):
    1624-1636

    This paper studies a set of lip synchronization mechanisms for heterogeneous network environments. The set consists of four schemes, types 0 through 3, which are classified into the single-stream approach and the multi-stream approach. Types 0 and 1 belong to the single-stream approach, which interleaves voice and video to form a single transport stream for transmission. On the other hand, types 2 and 3, both of which are the multi-stream approach, set up separate transport streams for the individual media streams. Types 0 and 2 do not exert synchronization control at the destination, while types 1 and 3 do. We first discuss the features of each type in terms of networks intended for use, required synchronization quality of each medium, physical locations of media sources and implementation complexity. Then, a synchronization algorithm, which is referred to as the virtual-time rendering (VTR) algorithm, is specified for stored media; MPEG video and voice are considered in this paper. We implemented the four types on an ATM LAN and on an interconnected ATM-wireless LAN under the TCP protocol. The mean square error of synchronization, total pause time, throughput and total output time were measured in each of the two networks. We compare the measured performance among the four types to find out which one is the most suitable for a given condition of the underlying communication network and traffic.

  • Evaluation of Electrical Properties of Merocyanine Films Doped with Acceptor or Donor Molecules by Field Effect Measurements

    Koji HIRAGA  Masaaki IIZUKA  Shigekazu KUNIYOSHI  Kazuhiro KUDO  Kuniaki TANAKA  

     
    PAPER

      Vol:
    E81-C No:7
      Page(s):
    1077-1082

    The doping effects of the acceptor molecule tetracyanoquinodimethane (TCNQ) and the donor molecule tetramethyltetraselenafulvalene (TMTSF) in an organic semiconductor were investigated by field effect measurements on merocyanine (MC) films. The electrical conductivity and carrier concentration of TCNQ-doped MC films were increased compared with those of the undoped MC film. An efficient doping effect was observed at a doping concentration of approximately 9%. The electrical conductivity, on the other hand, was decreased by doping the donor molecule TMTSF into the MC film; however, no inversion of the conduction type was obtained. Furthermore, the transport mechanism of the TCNQ-doped and undoped MC films was elucidated from the temperature dependence of the electrical parameters. These results demonstrate that TCNQ and TMTSF molecules act as acceptor and donor impurities in MC films, respectively, and that doping with these molecules is effective in controlling the electrical properties of organic semiconductors.

  • Trade-Off between Requirement of Learning and Computational Cost

    Tzung-Pei HONG  Ching-Hung WANG  Shian-Shyong TSENG  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E81-D No:6
      Page(s):
    565-571

    Machine learning in real-world situations sometimes starts from an initial collection of training instances, with learning then proceeding off and on as new training instances arrive intermittently. The idea of two-phase learning is proposed here for effectively solving learning problems in which training instances arrive in this two-stage way. Four two-phase learning algorithms based on the learning method PRISM are also proposed for inducing rules from training instances. These alternatives form a spectrum, showing that how well the requirement of PRISM (keeping down the number of irrelevant attributes) is met depends heavily on the computational cost spent. A suitable alternative, as a trade-off between computational cost and achievement of the requirement, can then be chosen according to the demands of the application domain.

841-860hit(1013hit)