The search functionality is under construction.

Keyword Search Result

[Keyword] FA(3430hit)

2661-2680hit(3430hit)

  • Derivation of the Hard Deadlines of Multi-Sampled Tasks

    Chunhee WOO  Daehyun LEE  Hagbae KIM  

     
    PAPER-Systems and Control

      Vol:
    E83-A No:6
      Page(s):
    1199-1202

    When a failure or upset in a controller computer induces a task failure that persists for a substantial period, the system dynamics deviate from their desirable sample paths and, in the extreme case where the period exceeds the hard deadline of the real-time control system, lose stability. In this paper, we propose an algorithm that combines the deadlines of all elementary tasks (derived in our earlier work) executed in several operation modes with multiple sampling periods. The hard deadline of the entire system is then computed by modifying the task-state equations to capture the effects of task failures and the inter-correlations among tasks.

  • Adaptive Multiple-Symbol Differential Detection of MAPSK over Frequency Selective Fading Channels

    Mingya LIU  Shiro HANDA  Masanobu MACHIDA  Shinjiro OSHITA  

     
    PAPER

      Vol:
    E83-A No:6
      Page(s):
    1175-1183

    We propose a novel adaptive multiple-symbol differential detection (MSDD) scheme that performs excellently over frequency selective fading (FSF) channels. The adaptive MSDD scheme consists of an adaptive noncoherent least-mean-square channel estimator, which accomplishes channel estimation without any decision delay, and the MSDD itself. The M-algorithm is introduced into the detection scheme to reduce the computational complexity that grows with the observed sequence length in the MSDD. Owing to the adaptive channel estimator and the M-algorithm, the adaptive MSDD accomplishes channel estimation for every symbol along the M surviving paths without any decision delay. Moreover, because the MSDD is combined with the noncoherent channel estimator, the SER performance of the adaptive MSDD is unaffected by channel-induced phase fluctuation. The scheme is applied to two typical 16APSK constellations, (4,12)QAM and star QAM. Computer simulations confirm the excellent tracking performance of the adaptive MSDD scheme over FSF channels.
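
    The M-algorithm the abstract relies on is a generic breadth-limited tree search: at each symbol depth, only the M best partial sequences survive. Below is a minimal sketch with a placeholder `metric` standing in for the paper's MSDD path metric; the function names and the metric are illustrative, not taken from the paper.

```python
def m_algorithm(symbols, metric, length, M):
    # Breadth-limited tree search: expand every surviving path by each
    # candidate symbol, then keep only the M lowest-metric paths per depth.
    paths = [((), 0.0)]
    for _ in range(length):
        expanded = [(p + (s,), metric(p + (s,)))
                    for p, _ in paths for s in symbols]
        expanded.sort(key=lambda item: item[1])
        paths = expanded[:M]
    return paths[0][0]  # best full-length sequence found
```

    With M equal to the full alphabet size at every depth this degenerates to exhaustive search; the savings come from keeping M small relative to the number of candidate sequences.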

  • A Reconfiguration Algorithm for Memory Arrays Containing Faulty Spares

    Keiichi HANDA  Kazuhito HARUKI  

     
    PAPER

      Vol:
    E83-A No:6
      Page(s):
    1123-1130

    Reconfiguration of memory arrays using spare lines is known to be an NP-complete problem. In this paper, we present an algorithm that reconfigures a memory array into a fault-free one by using spare lines effectively even when the spares themselves contain faulty elements. First, the reconfiguration problem is transformed into an equivalent covering problem in which faulty elements are covered by imaginary fault-free spare lines. Next, the covering problem is solved heuristically by using the Dulmage-Mendelsohn decomposition. Experiments on recently designed memory arrays show that the proposed algorithm is fast and practical.
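
    The covering formulation can be made concrete with a toy sketch: each faulty cell must be covered by a spare row or column line. The paper solves this via the Dulmage-Mendelsohn decomposition; the plain greedy heuristic below is only a stand-in to illustrate the problem, not the authors' algorithm.

```python
from collections import Counter

def greedy_repair(faults, spare_rows, spare_cols):
    # Cover every faulty cell (row, col) with a spare row or column line.
    # Greedy: repeatedly spend a spare on the line covering the most
    # still-uncovered faults; returns the lines used, or None on failure.
    faults = set(faults)
    used = []
    while faults:
        row_counts = Counter(r for r, _ in faults)
        col_counts = Counter(c for _, c in faults)
        best_row = row_counts.most_common(1)[0] if spare_rows > 0 else None
        best_col = col_counts.most_common(1)[0] if spare_cols > 0 else None
        if best_row is None and best_col is None:
            return None  # out of spares with faults left uncovered
        if best_col is None or (best_row is not None
                                and best_row[1] >= best_col[1]):
            r = best_row[0]
            spare_rows -= 1
            used.append(("row", r))
            faults = {(a, b) for a, b in faults if a != r}
        else:
            c = best_col[0]
            spare_cols -= 1
            used.append(("col", c))
            faults = {(a, b) for a, b in faults if b != c}
    return used
```

    Greedy can fail on instances an exact method would solve, which is precisely why the NP-completeness of the exact problem motivates the paper's decomposition-based heuristic.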

  • Data-Driven Implementation of Highly Efficient TCP/IP Handler to Access the TINA Network

    Hiroshi ISHII  Hiroaki NISHIKAWA  Yuji INOUE  

     
    PAPER-Software Platform

      Vol:
    E83-B No:6
      Page(s):
    1355-1362

    This paper discusses and clarifies the effectiveness of a data-driven implementation of a protocol-handling system for accessing the TINA (Telecommunications Information Networking Architecture) network and the Internet. TINA is a networking architecture that provides networking services and management ubiquitously to users and networks. Many TINA-related ACTS (Advanced Communication Technologies and Services) projects have been organized in Europe. In Japan, the TINA Trial (TTT), which applied TINA architectures to ATM network management and services, was conducted by NTT and several manufacturers from April 1997 to April 1999. In these studies and trials, much effort was devoted to developing software based on the service and network architectures being standardized in TINA-C (the TINA Consortium). To achieve the TINA environment universally on both the customer and network sides, we must consider how to deploy it on the user side and how to use the access transmission capacity as efficiently as possible. Recent technology makes it easy to download applications and environments from the network side to the user side by using, for example, Java. In accessing the network, several possible bottlenecks exist on the customer side, such as PC processing capability, access-protocol handling capability, and intra-house wiring bandwidth. The authors, in parallel with the TINA software architecture study, have been studying versatile requirements for the hardware platform of the TINA network. Those studies clarified that the stream-oriented data-driven processors the authors have been studying and developing offer high reliability and high capability for multiprocessing and multimedia information processing. Based on these studies, this paper first shows, through mathematical and emulation studies, that a von Neumann-based protocol handler is ineffective for multiprocessing. We then show, by an emulation study, that our data-driven protocol handling can effectively realize access-protocol handling, and we describe the results of the first step in implementing data-driven TCP/IP protocol handling. These results prove that our TCP/IP hub based on a data-driven processor is applicable not only to the TINA/CORBA network but also to normal Internet access. Finally, we show a possible customer-premises network configuration that resolves the bottlenecks in accessing the TINA network through ATM access.

  • Ultrafast Hybrid-Integrated Symmetric Mach-Zehnder All-Optical Switch and Its 168 Gbps Error-Free Demultiplexing Operation

    Kazuhito TAJIMA  Shigeru NAKAMURA  Yoshiyasu UENO  Jun'ichi SASAKI  Takara SUGIMOTO  Tomoaki KATO  Tsuyoshi SHIMODA  Hiroshi HATAKEYAMA  Takemasa TAMANUKI  Tatsuya SASAKI  

     
    PAPER-High-Speed Optical Devices

      Vol:
    E83-C No:6
      Page(s):
    959-965

    A newly developed hybrid-integrated Symmetric Mach-Zehnder (HI-SMZ) all-optical switch is reported. For integration, we chose the Symmetric Mach-Zehnder (SMZ) structure rather than the Polarization-Discriminating Symmetric Mach-Zehnder (PD-SMZ) structure, which is similar to the SMZ but more often used in experiments with discrete optical components. We discuss the advantages and disadvantages of the SMZ and PD-SMZ to show that the SMZ is more suitable for integration. We also discuss the use of SOAs as nonlinear elements for all-optical switches and conclude that, although the ultrafast switching capability of the SMZ is limited by the gain compression of the SOAs, the very low switching energy is more important for practical devices. We then describe the HI-SMZ all-optical switch. This integration scheme offers low-loss, low-dispersion silica waveguides for high-speed operation and eases the large-scale integration of many SMZs with other optical, electrical, and opto-electrical devices. We show that a very high dynamic extinction ratio is possible with the HI-SMZ, and we examine it with 1-ps pulses to demonstrate its ultrafast capability. Finally, we describe a 168 to 10.5 Gbps error-free demultiplexing experiment, which is, to the best of our knowledge, the fastest demultiplexing demonstrated with an integrated device.

  • On Reconfiguration Latency in Fault-Tolerant Systems

    Hagbae KIM  Sangmoon LEE  Taewha HONG  

     
    LETTER-Fault Tolerance

      Vol:
    E83-D No:5
      Page(s):
    1181-1182

    The reconfiguration latency is defined as the time taken to reconfigure a system upon failure detection or mode change. We evaluate it quantitatively for backup sparing, one of the most popular reconfiguration methods, by investigating the effects of key parameters.

  • Monte Carlo Simulation for Analysis of Sequential Failure Logic

    Wei LONG  Yoshinobu SATO  Hua ZHANG  

     
    PAPER

      Vol:
    E83-A No:5
      Page(s):
    812-817

    The Monte Carlo simulation is applied to fault-tree analyses of sequential failure logic. To clarify the validity of the technique, case studies estimating the statistically expected numbers of system failures during (0, t] are conducted for two types of systems, using both the multiple-integration method and the Monte Carlo simulation. The results of the two methods are compared; the comparison validates the Monte Carlo simulation for solving sequential failure logic, with acceptably small deviation rates for the cases studied.
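
    As a rough illustration of the technique (not the paper's case studies), the sketch below gives a Monte Carlo estimate of the probability that a two-component sequential failure, A before B, occurs within (0, t]. Exponential lifetimes and the function name are assumptions made for the example only.

```python
import random

def sequential_failure_prob(rate_a, rate_b, t, trials=20_000, seed=1):
    # Probability that component A fails and then B fails, both within
    # (0, t]; exponential lifetimes are an illustrative assumption.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ta = rng.expovariate(rate_a)
        tb = rng.expovariate(rate_b)
        if ta < tb <= t:  # the A-before-B sequence inside the mission time
            hits += 1
    return hits / trials
```

    Comparing such an estimate against a multiple-integration result over the same model, as the paper does, is the standard way to bound the simulation's deviation rate.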

  • Defect and Fault Tolerance SRAM-Based FPGAs by Shifting the Configuration Data

    Abderrahim DOUMAR  Hideo ITO  

     
    PAPER-Fault Tolerance

      Vol:
    E83-D No:5
      Page(s):
    1104-1115

    The homogeneous structure of field programmable gate arrays (FPGAs) suggests that defect tolerance can be achieved by shifting the configuration data inside the FPGA. This paper proposes a new approach to tolerating defects in an FPGA's configurable logic blocks (CLBs); defects affecting the FPGA's interconnection resources can also be tolerated with high probability. The method suits manufacturers, since chip yield is considerably improved, especially for large array sizes. Defect-free chips, on the other hand, can be used either as maximum-size ordinary array chips or as fault-tolerant chips. With the fault-tolerant chips, users can achieve fault tolerance directly by simply shifting the design data automatically, without changing the physical design of the running application, without loading other configuration data from off-chip, and without the manufacturer's intervention. Tolerating defective resources requires spare CLBs; two possibilities for distributing the spare resources (King-shifting and Horse-allocation) are introduced and compared.
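
    The shifting idea can be pictured with a toy model: a logical design one column narrower than the physical array is placed so that the faulty column is skipped and the design slides onto the spare. Real FPGAs would also need the interconnect remapped; everything below, including the function name, is illustrative rather than the paper's scheme.

```python
def place_with_faulty_column(design, faulty_col):
    # Map a logical design (rows x w) onto a physical array of width
    # w + 1 containing one faulty column: logical columns at or beyond
    # the faulty physical column shift one CLB to the right.
    width = len(design[0]) + 1
    placed = []
    for row in design:
        phys = [None] * width  # None marks an unused (or faulty) CLB
        for j, cell in enumerate(row):
            phys[j if j < faulty_col else j + 1] = cell
        placed.append(phys)
    return placed
```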

  • A Two-Phased Weighted Fair Queueing Scheme for Improving CDV and CLP in ATM Networks

    Jaesun CHA  Changhwan OH  Kiseon KIM  

     
    LETTER-Fundamental Theories

      Vol:
    E83-B No:5
      Page(s):
    1136-1139

    This paper proposes a new scheduling algorithm named TWFQ (Two-phased Weighted Fair Queueing), designed not only to maintain fair utilization of the available bandwidth but also to improve CDV and cell loss probability. The TWFQ algorithm uses the cell inter-arrival time of each connection to determine the cell service order among connections, which helps keep CDV small. To achieve a low cell loss probability, TWFQ uses two scheduling phases to give more sending opportunities to connections that suffer from burstier input traffic. Through simulations, we show that the proposed algorithm achieves good performance in terms of CDV and cell loss probability, while the other performance criteria are kept at acceptable levels.
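
    For background, here is a single-phase weighted fair queueing sketch based on virtual finish times; TWFQ's second, inter-arrival-time-based phase is not reproduced. All cells are assumed unit-length and backlogged at time 0, and the names are illustrative.

```python
import heapq

def wfq_schedule(queues, weights):
    # queues:  {conn: number of queued cells}, all unit length,
    #          all backlogged at time 0.
    # weights: {conn: service weight}.
    # Serve in order of virtual finish time F = F_prev + 1 / weight.
    heap = []
    remaining = dict(queues)
    for conn in queues:
        if remaining[conn] > 0:
            heapq.heappush(heap, (1.0 / weights[conn], conn))
    order = []
    while heap:
        finish, conn = heapq.heappop(heap)
        order.append(conn)
        remaining[conn] -= 1
        if remaining[conn] > 0:
            heapq.heappush(heap, (finish + 1.0 / weights[conn], conn))
    return order
```

    A connection with twice the weight accumulates finish times half as fast, so it receives twice the service over any backlogged interval, which is the fairness property TWFQ builds on.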

  • The Influence of Stud Bumping above the MOSFETs on Device Reliability

    Nobuhiro SHIMOYAMA  Katsuyuki MACHIDA  Masakazu SHIMAYA  Hideo AKIYA  Hakaru KYURAGI  

     
    PAPER

      Vol:
    E83-A No:5
      Page(s):
    851-856

    This paper presents the effect of stress on device degradation in metal-oxide-semiconductor field-effect transistors (MOSFETs) due to stud bumping. Stud bumping above the MOSFET region generates interface traps at the Si/SiO2 interface and results in the degradation of transconductance in N-channel MOSFETs. The interface traps are apparently eliminated by both nitrogen and hydrogen annealing. However, the hot-carrier immunity after hydrogen annealing is one order of magnitude stronger than that after nitrogen annealing. This effect is explained by the termination of dangling bonds with hydrogen atoms.

  • Fault Diagnosis Technique for Yield Enhancement of Logic LSI Using IDDQ

    Masaru SANADA  Hiromu FUJIOKA  

     
    PAPER

      Vol:
    E83-A No:5
      Page(s):
    842-850

    Abnormal IDDQ (quiescent VDD supply current) indicates the existence of physical damage in a circuit. Using this phenomenon, a CAD-based fault-diagnosis technology has been developed for analyzing the manufacturing yield of logic LSIs. The method, which detects the fatal defect fragments among the abnormalities identified by wafer-inspection apparatus, includes a way to separate various leakage faults and to define a diagnosis area encircling the abnormal portions. The proposed technique progressively narrows the faulty area by using logic simulation to extract the logic states of the diagnosis area and by locating the test vectors related to abnormal IDDQ. The fundamental diagnosis procedure determines, for each circuit element, whether the logic state associated with abnormal IDDQ also appears among the normal logic states.

  • Guided Waves on 2D Periodic Structures and Their Relation to Planar Photonic Band Gap Structures

    Ruey Bing HWANG  Song Tsuen PENG  

     
    INVITED PAPER

      Vol:
    E83-C No:5
      Page(s):
    705-712

    We present a study of the propagation characteristics of two-dimensional periodic structures. The method of mode matching is employed to formulate the boundary-value problem in an exact fashion, and a perturbation analysis is carried out to explain the wave phenomena associated with photonic band gap structures. The dispersion curves of a 2D periodic medium and a 2D periodic impedance surface are investigated in detail.

  • On the Concept of "Stability" in Asynchronous Distributed Decision-Making Systems

    Tony S. LEE  Sumit GHOSH  

     
    PAPER-Real Time Control

      Vol:
    E83-B No:5
      Page(s):
    1023-1038

    Asynchronous, distributed, decision-making (ADDM) systems constitute a special class of distributed problems and are characterized as large, complex systems wherein the principal elements are geographically dispersed entities that communicate among themselves, asynchronously, through message passing and are permitted autonomy in local decision-making. A fundamental property of ADDM systems is stability, which refers to their behavior under representative perturbations to their operating environments, given that such systems are intended to be real, complex, and to some extent mission-critical systems and are subject to unexpected changes in their operating conditions. ADDM systems are closely related to autonomous decentralized systems (ADS) in their principal elements, the difference being that the characteristics and boundaries of ADDM systems are defined rigorously. This paper introduces the concept of stability in ADDM systems and proposes an intuitive yet practical and usable definition inspired by those used in control systems and physics. A comprehensive stability analysis on an accurate simulation model will provide the necessary assurance, with a high level of confidence, that the system will perform adequately. An ADDM system is defined as stable if, initiated in a steady-state, it returns to a steady-state in finite time following a perturbation. Equilibrium, or steady-state, is defined by placing bounds on the measured error in the system. When the final steady-state is equivalent to the initial one, a system is referred to as strongly stable. If the final steady-state is potentially worse than the initial one, a system is deemed marginally stable. When a system fails to return to steady-state following the perturbation, it is unstable. The perturbations are classified as either changes in the input pattern or changes in one or more environmental characteristics of the system, such as hardware failures. Thus, the key elements in the study of stability are steady-state, perturbations, and the stability criterion itself. Since the development of rigorous analytical models for most ADDM systems is difficult, if not impossible, the definitions of the key elements proposed in this paper constitute a general framework for investigating stability. For a given ADDM system, the definitions are based on performance indices that must be judiciously identified by the system architect and are likely to be unique. While a comprehensive study of all possible perturbations is too complex and time-consuming, this paper focuses on a key subset of perturbations that are important and likely to occur with greater frequency. To facilitate the understanding of stability in representative real-world systems, this paper reports the analysis of two basic manifestations of ADDM systems that have been reported in the literature: (i) a decentralized military command and control problem, MFAD, and (ii) a novel distributed algorithm with soft reservation for efficient scheduling and congestion mitigation in railway networks, RYNSORD. Stability analysis of MFAD and RYNSORD yields key stable and unstable conditions.

  • Duplicated Hash Routing: A Robust Algorithm for a Distributed WWW Cache System

    Eiji KAWAI  Kadohito OSUGA  Ken-ichi CHINEN  Suguru YAMAGUCHI  

     
    PAPER

      Vol:
    E83-D No:5
      Page(s):
    1039-1047

    Hash routing is an algorithm for a distributed WWW caching system that achieves a high hit rate by preventing overlaps of objects between caches. However, one drawback of hash routing is its lack of robustness against failures. Because the WWW has become a vital service on the Internet, the fault-tolerance capabilities of the systems that provide it have become important. In this paper, we propose a duplicated hash routing algorithm, an extension of hash routing. Our algorithm introduces minimal redundancy to maintain system performance when some caching nodes crash. In addition, we optionally allow each node to cache objects requested by its local clients (local caching); this may waste cache capacity but cuts down the network traffic between caching nodes. We evaluate various aspects of system performance, such as hit rates, error rates, and network traffic, by simulation and compare them with those of other algorithms. The results show that our algorithm achieves both high fault tolerance and high performance with low system overhead.
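
    The duplication idea can be sketched with rendezvous (highest-random-weight) hashing used as a stand-in for the paper's hash function: each URL ranks the nodes, the top two live nodes hold the object, and a crash of the primary leaves the backup reachable. Names and the choice of hash are assumptions of this sketch.

```python
import hashlib

def node_ranking(url, nodes):
    # Rank nodes for a URL by a per-(node, url) hash score
    # (rendezvous hashing, standing in for the paper's hash function).
    def score(node):
        return hashlib.md5(f"{node}:{url}".encode()).hexdigest()
    return sorted(nodes, key=score, reverse=True)

def pick_nodes(url, nodes, alive, replicas=2):
    # Duplicated routing: the object lives on the top `replicas`
    # live nodes in the ranking, so one crash leaves a copy reachable.
    ranked = [n for n in node_ranking(url, nodes) if n in alive]
    return ranked[:replicas]
```

    Because the ranking is a pure function of (node, url), every cache computes the same primary and backup without coordination, which is what keeps the redundancy overhead minimal.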

  • Types and Basic Properties of Leaky Modes in Microwave and Millimeter-Wave Integrated Circuits

    Arthur A. OLINER  

     
    INVITED PAPER

      Vol:
    E83-C No:5
      Page(s):
    675-686

    Leaky waves have been known for many years in the context of leaky-wave antennas, but it is only within the past dozen years or so that it was realized that the dominant mode on printed-circuit transmission lines used in microwave and millimeter-wave integrated circuits can also leak. Such leakage is extremely important because it may cause power loss, cross talk between neighboring parts of the circuit, and various undesired package effects. These effects can ruin the performance of the circuit, so we must know when leakage can occur and how to avoid it. In most cases, these transmission lines leak only at high frequencies, but some lines leak at all frequencies. However, those lines can be modified to avoid the leakage. This paper explains why and when leakage occurs, and shows how the dominant mode behaves on different lines. The paper also examines certain less well known but important features involving unexpected new physical effects. These include an additional dominant mode on microstrip line that is leaky at higher frequencies, and a simultaneous propagation effect, which is rather general and which occurs when the line's relative cross-sectional dimensions are changed. The final section of the paper is concerned with three important recent developments: (a) the new effects that arise when the frequency is raised still higher and leakage occurs into an additional surface wave, (b) a basic and unexpected discovery relating to improper real modes, which are nonphysical but which can strongly influence the total physical field under the right circumstances, and (c) the important practical issue of how leakage behavior is modified when the circuit is placed into a package.

  • Mobile Agent-Based Transactions in Open Environments

    Flavio Morais de ASSIS SILVA  Radu POPESCU-ZELETIN  

     
    PAPER-Mobile Agents

      Vol:
    E83-B No:5
      Page(s):
    973-987

    This paper describes a transaction model for open environments based on mobile agents. Mobile agent-based transactions combine mobility and the execution of control flows with transactional semantics. The model represents an approach to providing reliability and correctness for the execution of distributed activities, fulfilling important requirements of applications in open environments. The transaction model is based on a protocol, outlined in this paper, that provides fault tolerance when executing mobile agent-based activities. With this protocol, if an agent executing an activity at an agency (a logical "place" in a distributed agent environment) becomes unreachable for a long time, the execution of the activity can be recovered and continued at another agency. The fault-tolerance approach supports "multi-agent activities," i.e., activities in which some parts are spawned to execute and migrate asynchronously in relation to other parts. The described transaction model, called the basic (agent-based) transaction model, is an open nested transaction model. By building on the presented fault-tolerance mechanism, subtransactions can execute asynchronously in relation to their parent transactions, and agent-based transactions can explore alternatives in the event of agent unavailability. The model fulfills the requirements for supporting the autonomy of organizations in a distributed agent environment.

  • OTA-C Based BIST Structure for Analog Circuits

    Cheng-Chung HSU  Wu-Shiung FENG  

     
    LETTER-VLSI Design Technology and CAD

      Vol:
    E83-A No:4
      Page(s):
    771-773

    In this letter, a novel built-in self-test (BIST) structure based on operational transconductance amplifiers and grounded capacitors (OTA-Cs) is proposed for the fault diagnosis of analog circuits. The proposed analog BIST structure, named ABIST, increases the number of test points, samples and controls all test points with voltage data, and shortens the time needed to make test signals observable. Experimental measurements verify that the proposed ABIST structure is effective.

  • High Alumina Co-Doped Silica EDFA and Its Gain-Equalization in Long-Haul WDM Transmission System

    Takao NAITO  Naomasa SHIMOJOH  Takafumi TERAHARA  Toshiki TANAKA  Terumi CHIKAMA  Masuo SUYAMA  

     
    PAPER-Fiber-Optic Transmission

      Vol:
    E83-B No:4
      Page(s):
    775-781

    In an optical submarine cable transmission system, small size, low power consumption, and high reliability are required of inline repeaters, whose structure should be a simple single stage. The design of the erbium-doped fiber (EDF) itself is very important for the inline repeater to achieve broad bandwidth, high output power, and a low noise figure. We designed and developed high-alumina co-doped erbium-doped fiber amplifiers (EDFAs) for long-haul, high-capacity WDM transmission systems and investigated the trade-off between gain flatness and output power to optimize the EDF length. We obtained high performance, including a slightly sloped gain flatness of +0.04 dB/nm at 1550 nm, a superior noise figure of 4.7 dB, and a relatively large output power of +11.5 dBm for an EDF length of 5 m using a 1480-nm pumping laser diode. We applied gain equalizers (GEQs) using Mach-Zehnder-type filters with different free-spectral-ranges (FSRs) to accurately compensate for the EDFAs' gain-wavelength characteristics. The main GEQs have FSRs of 48 nm, about twice the wavelength difference between the 1558-nm EDFA gain peak and the 1536-nm EDFA gain valley. Using a circulating loop with the above EDFAs and GEQs, we achieved a broad signal wavelength bandwidth of 20 nm after 5,958-km transmission and successfully transmitted 700-Gbit/s (66 × 10.66-Gbit/s) WDM signals over 2,212 km. The combination of high-alumina co-doped silica EDFA and large-FSR GEQ is attractive for long-haul, high-capacity WDM transmission systems.

  • Distributed Software Agents for Network Fault Management

    Hassan HAJJI  Behrouz Homayoun FAR  

     
    PAPER-Application

      Vol:
    E83-D No:4
      Page(s):
    735-746

    This paper discusses a framework for automating fault management using distributed software agents. The management function is distributed among multiple agents that carry out advanced reasoning activities over the network domain. Network-domain modeling using Bayesian networks is introduced. Each agent detects and correlates the alarms generated in its domain and selectively seeks to derive a clear explanation of them. Depending on the network's degree of automation, the agent can even carry out local recovery actions. The ideas of the paper are implemented in software for inference in Bayesian networks. We identify the potential for learning in the agent model and present the class of problems to be addressed.
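
    The smallest instance of the alarm reasoning such agents perform is Bayes' rule over a single fault node and a single alarm node. The function and all probabilities below are illustrative, not taken from the paper.

```python
def posterior_fault_given_alarm(p_fault, p_alarm_given_fault,
                                p_alarm_given_ok):
    # Bayes' rule: P(fault | alarm) for one fault node and one alarm
    # node, the two-node special case of Bayesian-network inference.
    numerator = p_alarm_given_fault * p_fault
    evidence = numerator + p_alarm_given_ok * (1.0 - p_fault)
    return numerator / evidence
```

    A full Bayesian network generalizes this to many correlated alarms and fault hypotheses, which is what lets an agent prefer one clear explanation over a flood of raw alarms.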

  • A False-Sharing Free Distributed Shared Memory Management Scheme

    Alexander I-Chi LAI  Chin-Laung LEI  Hann-Huei CHIOU  

     
    PAPER-Computer Systems

      Vol:
    E83-D No:4
      Page(s):
    777-788

    Distributed shared memory (DSM) systems built on networks of workstations are especially vulnerable to the impact of false sharing because of their higher memory-transaction overheads and thus higher false-sharing penalties. In this paper we develop a dynamic-granularity shared-memory management scheme that eliminates false sharing without sacrificing transparency to conventional shared-memory applications. Our approach uses a special threaded splay tree (TST) for shared-memory information management and a dynamic token-based path-compression synchronization algorithm for data transfer. The combination of the TST and path compression is quite efficient: asymptotically, in an n-processor system with m shared memory segments, synchronizing at most s segments takes O(s log m log n) amortized computation steps and generates O(s log n) communication messages. Based on the proposed scheme, we constructed an experimental DSM prototype consisting of several Ethernet-connected Pentium-based computers running Linux. Preliminary benchmark results on our prototype indicate that our scheme is quite efficient, significantly outperforming traditional schemes and scaling up well.
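
    Token-based path compression, as commonly realized in DSM ownership protocols through probable-owner chains, can be sketched for a single segment as follows; the class and method names are illustrative, not taken from the paper.

```python
class ProbableOwner:
    # Each processor records a guess at which processor holds the
    # segment's token; lookups follow the chain and then point every
    # visited processor directly at the true owner (path compression).
    def __init__(self, n, owner=0):
        self.probable = [owner] * n
        self.probable[owner] = owner

    def find_owner(self, i):
        path = []
        while self.probable[i] != i:
            path.append(i)
            i = self.probable[i]
        for j in path:          # compress: point the whole path at owner
            self.probable[j] = i
        return i

    def migrate(self, new_owner):
        # Move the token (segment ownership) to new_owner.
        old = self.find_owner(new_owner)
        self.probable[old] = new_owner
        self.probable[new_owner] = new_owner
```

    Compression keeps the chains short, which is what yields the amortized logarithmic message and step bounds the abstract quotes for the combined TST-plus-path-compression scheme.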
