
Keyword Search Result

[Keyword] SI: 16314 hits

Showing results 12841-12860 of 16314

  • Experimental Characterization and Modeling of Transmission Line Effects for High-Speed VLSI Circuit Interconnects

    Woojin JIN  Seongtae YOON  Yungseon EO  Jungsun KIM  

     
    PAPER
    Vol: E83-C No:5  Page(s): 728-735

    IC interconnect transmission-line effects arising from the characteristics of the silicon substrate and the current-return-path impedances are physically investigated and experimentally characterized. Based on this investigation, a novel transmission-line model that takes these effects into account is developed, and the signal delay on IC interconnect lines is then analyzed accurately with this model. The transmission-line effects of the metal-insulator-semiconductor interconnect structure are experimentally verified through S-parameter-based, wafer-level signal-transient characterization of various test patterns designed and fabricated in a 0.35 µm CMOS process. This work demonstrates that conventional ideal RC or RLC interconnect models, which neglect these detailed physical phenomena, are not accurate enough to verify the picosecond-level timing of high-performance VLSI circuits.
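    As a rough illustration of the paper's point, the sketch below compares a distributed-RC delay estimate with the LC time-of-flight that an RC-only model ignores. The per-unit-length line constants are assumptions for illustration only, not the paper's measured values.

        # Rough delay comparison for an on-chip interconnect. All line
        # constants below are assumptions for illustration only.
        import math

        r = 25e3       # resistance per meter (ohm/m), assumed
        c = 200e-12    # capacitance per meter (F/m), assumed
        l = 250e-9     # inductance per meter (H/m), assumed
        length = 2e-3  # a 2 mm line

        rc_delay = 0.38 * (r * length) * (c * length)  # distributed-RC (Elmore-style) estimate
        tof = length * math.sqrt(l * c)                # LC time-of-flight, absent from RC models

        print(f"distributed-RC estimate: {rc_delay * 1e12:.1f} ps")
        print(f"LC time-of-flight:       {tof * 1e12:.1f} ps")

    With these assumed values the time-of-flight alone exceeds the RC estimate, the kind of picosecond-scale discrepancy the paper's measurements quantify.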

  • Simultaneous-Propagation Effect in Conductor-Backed Coplanar Strips and Its Experimental Verification

    Mikio TSUJI  Hiroshi SHIGESAWA  

     
    PAPER
    Vol: E83-C No:5  Page(s): 742-749

    We previously reported the simultaneous-propagation effect, in which the leaky dominant mode can be present on conductor-backed coplanar strips at the same time as the conventional bound dominant mode. Here we investigate this effect in detail, both numerically and experimentally. We find that it occurs under certain conditions on the structural parameters, and we verify that it significantly affects circuit performance.

  • Parallelizing SDP (Sum of Disjoint Products) Algorithms for Fast Reliability Analysis

    Tatsuhiro TSUCHIYA  Tomoya KAJIKAWA  Tohru KIKUNO  

     
    LETTER-Fault Tolerance
    Vol: E83-D No:5  Page(s): 1183-1186

    The SDP (Sum of Disjoint Products) approach is a well-known technique for computing network reliability measures, and several algorithms based on it have been developed. In this letter, we present a general framework for parallelizing these SDP algorithms. Based on the framework, we implemented a parallel version of an SDP algorithm called CAREL on a network of workstations. Experimental results show that it performs well, achieving almost linear speedup.
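    Because the products are mutually disjoint, the system reliability is the plain sum of the per-term probabilities, so the terms can be evaluated independently, which is what makes the approach natural to parallelize. A minimal sketch of that idea for a toy system that works when components 1 and 2, or 3 and 4, all work (the component reliabilities and term list are assumptions; the letter parallelizes the specific CAREL algorithm):

        # Reliability as a sum of disjoint products, with the terms
        # evaluated in parallel. Data are illustrative assumptions.
        from multiprocessing import Pool

        P = {1: 0.9, 2: 0.9, 3: 0.8, 4: 0.95}  # assumed component reliabilities

        # Each term: (components that must work, components that must fail).
        # SDP form of (x1 x2) OR (x3 x4):
        terms = [
            ((1, 2), ()),       # x1 x2
            ((3, 4), (1,)),     # ~x1 . x3 x4
            ((1, 3, 4), (2,)),  # x1 ~x2 . x3 x4
        ]

        def term_prob(term):
            up, down = term
            p = 1.0
            for i in up:
                p *= P[i]
            for j in down:
                p *= 1.0 - P[j]
            return p

        if __name__ == "__main__":
            with Pool() as pool:
                reliability = sum(pool.map(term_prob, terms))
            print(f"estimated reliability: {reliability:.4f}")  # 0.9544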

  • On the Concept of "Stability" in Asynchronous Distributed Decision-Making Systems

    Tony S. LEE  Sumit GHOSH  

     
    PAPER-Real Time Control
    Vol: E83-B No:5  Page(s): 1023-1038

    Asynchronous, distributed, decision-making (ADDM) systems constitute a special class of distributed problems. They are large, complex systems whose principal elements are geographically dispersed entities that communicate among themselves asynchronously, through message passing, and are permitted autonomy in local decision-making. A fundamental property of ADDM systems is stability, which refers to their behavior under representative perturbations to their operating environments; such systems are intended to be real, complex, and to some extent mission-critical, and they are subject to unexpected changes in their operating conditions. ADDM systems are closely related to autonomous decentralized systems (ADS) in their principal elements, the difference being that the characteristics and boundaries of ADDM systems are defined rigorously. This paper introduces the concept of stability in ADDM systems and proposes an intuitive yet practical and usable definition inspired by those used in control systems and physics. A comprehensive stability analysis on an accurate simulation model will provide the necessary assurance, with a high level of confidence, that the system will perform adequately. An ADDM system is defined as stable if, when initiated in a steady-state, it returns to a steady-state in finite time following a perturbation. Equilibrium, or steady-state, is defined by placing bounds on the measured error in the system. Where the final steady-state is equivalent to the initial one, the system is called strongly stable. If the final steady-state is potentially worse than the initial one, the system is deemed marginally stable. A system that fails to return to steady-state following the perturbation is unstable. Perturbations are classified as either changes in the input pattern or changes in one or more environmental characteristics of the system, such as hardware failures. Thus, the key elements in the study of stability are steady-state, perturbations, and stability itself. Since the development of rigorous analytical models for most ADDM systems is difficult, if not impossible, the definitions of these key elements proposed in this paper constitute a general framework for investigating stability. For a given ADDM system, the definitions are based on performance indices that must be judiciously identified by the system architect and are likely to be unique to that system. While a comprehensive study of all possible perturbations is too complex and time-consuming, this paper focuses on a key subset of perturbations that are important and likely to occur with greater frequency. To facilitate the understanding of stability in representative real-world systems, this paper reports the analysis of two basic manifestations of ADDM systems reported in the literature: (i) a decentralized military command and control problem, MFAD, and (ii) RYNSORD, a novel distributed algorithm with soft reservation for efficient scheduling and congestion mitigation in railway networks. Stability analysis of MFAD and RYNSORD yields key stable and unstable conditions.
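    The stability definitions reduce to a simple decision rule over a measured error series. A minimal sketch under assumed data (the error bound, the series, and the single scalar error measure are illustrative; a real ADDM system would use the architect-chosen performance indices):

        # Classify a run by where the measured error settles after a
        # perturbation; all values are illustrative assumptions.
        def classify_stability(error, bound, initial_level):
            final = error[-1]
            if final > bound:
                return "unstable"          # does not return within the bound
            if final <= initial_level:
                return "strongly stable"   # final steady-state as good as initial
            return "marginally stable"     # returns, but to a worse steady-state

        run = [0.10, 0.10, 0.90, 0.70, 0.40, 0.20, 0.15]  # perturbation at step 2
        print(classify_stability(run, bound=0.25, initial_level=0.10))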

  • Applying the Adaptive Agent Oriented Software Architecture to the Parsing of Context Sensitive Grammars

    Babak HODJAT  Makoto AMAMIYA  

     
    PAPER-Cooperation in Distributed Systems and Agents
    Vol: E83-D No:5  Page(s): 1142-1152

    Adaptive Agent Oriented Software Architecture (AAOSA) is a new approach to software design based on an agent-oriented architecture. In this approach, agents are regarded as adaptively communicating concurrent modules, each divided into a "white box" responsible for communication and learning and a "black box" responsible for its independent, specialized processing. An AAOSA system effectively parses its input during the interpretation phase. We show that AAOSA can be applied to the parallel, distributed parsing of context-sensitive languages.
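    A minimal sketch of the white-box/black-box split described above (the class names and interface are assumptions, not AAOSA's actual API):

        # Each agent pairs a communicating/learning "white box" with a
        # specialized "black box"; names and protocol are assumed.
        class BlackBox:
            def process(self, fragment):
                return fragment.upper()   # stand-in for specialized processing

        class WhiteBox:
            def __init__(self, blackbox):
                self.blackbox = blackbox
                self.history = []         # material for learning

            def receive(self, fragment):
                result = self.blackbox.process(fragment)
                self.history.append((fragment, result))
                return result

        agent = WhiteBox(BlackBox())
        print(agent.receive("parse me"))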

  • Verification of a Microcomputer Program Specification Embedded in a Reactive System

    Yasunori ISHIHARA  Kiichiro NINOMIYA  Hiroyuki SEKI  Daisuke TAKAHARA  Yutaka YAMADA  Shigesada OMOTO  

     
    PAPER-Software Engineering
    Vol: E83-D No:5  Page(s): 1082-1091

    This paper proposes a model checking method for microcomputer programs. To deal with the state-explosion problem, we adopt a compositional verification approach. Using the proposed method, a microcomputer program for a real-life air conditioner is verified. The program is large enough to cause state explosion. Of fourteen typical properties of the program, five are successfully verified by our method.

  • Dynamically Variable Line-Size Cache Architecture for Merged DRAM/Logic LSIs

    Koji INOUE  Koji KAI  Kazuaki MURAKAMI  

     
    PAPER-Computer System Element
    Vol: E83-D No:5  Page(s): 1048-1057

    This paper proposes a novel cache architecture suitable for merged DRAM/logic LSIs, called the "dynamically variable line-size cache" (D-VLS cache). The D-VLS cache optimizes its line size according to the characteristics of the running program, improving performance by appropriately exploiting the high on-chip memory bandwidth of merged DRAM/logic LSIs. In our evaluation, a direct-mapped D-VLS cache improves average memory-access time by about 20% compared to a conventional direct-mapped cache with fixed 32-byte lines. This improvement exceeds that of a conventional direct-mapped cache of double the size.
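    Why the best line size depends on the program can be seen with a toy direct-mapped cache simulated at two fixed line sizes (the traces and cache geometry are assumptions; the D-VLS cache switches its line size dynamically rather than picking one statically):

        # Miss rates of a direct-mapped cache at two line sizes, on two
        # assumed address traces.
        def miss_rate(trace, cache_bytes=1024, line_bytes=32):
            n_sets = cache_bytes // line_bytes
            tags = [None] * n_sets
            misses = 0
            for addr in trace:
                block = addr // line_bytes
                idx, tag = block % n_sets, block // n_sets
                if tags[idx] != tag:
                    tags[idx] = tag
                    misses += 1
            return misses / len(trace)

        streaming = list(range(0, 4096, 4))  # dense sequential scan
        pingpong = [0, 1056] * 512           # addresses that collide only with long lines

        for name, trace in (("streaming", streaming), ("ping-pong", pingpong)):
            print(name, {lb: round(miss_rate(trace, line_bytes=lb), 3) for lb in (32, 128)})

    The streaming trace favors 128-byte lines, while the ping-pong trace thrashes with them and almost always hits with 32-byte lines; a run-time variable line size exploits exactly this difference.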

  • Fast Testable Design for SRAM-Based FPGAs

    Abderrahim DOUMAR  Toshiaki OHMAMEUDA  Hideo ITO  

     
    PAPER-Fault Tolerance
    Vol: E83-D No:5  Page(s): 1116-1127

    This paper presents a new design for testing SRAM-based field-programmable gate arrays (FPGAs). The FPGA's original SRAM configuration memory is modified so that the FPGA can circulate test configuration data inside the chip. The full test of the FPGA is achieved by loading, typically, only one carefully chosen test configuration instead of the whole set of configurations; the other required configurations are obtained by shifting the first one inside the chip. As a result, the test becomes faster, and no large off-chip memory is needed for it. Evaluation results show that this method is very effective as the complexity of the configurable logic blocks (CLBs) or the chip size increases.
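    A minimal sketch of deriving further test configurations by shifting one loaded configuration (the block count, bit patterns, and one-block shift granularity are assumptions about the mechanism, not the paper's actual SRAM design):

        # Rotate one loaded configuration so that each block eventually
        # sees every test pattern; sizes and patterns are assumed.
        blocks = 8
        config = [f"{i:04b}" for i in range(blocks)]  # one carefully chosen configuration

        for step in range(blocks):
            print(step, config)
            config = config[-1:] + config[:-1]        # shift inside the chip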

  • Evaluation of Mental Workload by Variability of Pupil Area

    Atsuo MURATA  Hirokazu IWASE  

     
    LETTER-Medical Engineering
    Vol: E83-D No:5  Page(s): 1187-1190

    It is well known that the autonomic nervous system regulates the pupil. In this study, we attempted to assess mental workload on the basis of the fluctuation rhythm of the pupil area. With the respiration interval controlled, we measured the pupil area during one-minute mental tasks, simultaneously recording the respiration curve to monitor the respiration interval. Subjects performed two mental tasks. One was a mental division task whose difficulty was set at two, three, four, or five dividends. The other was a Sternberg memory-search task with four work levels defined by the size of the memory set, which ranged from five to eight items. In this way, we varied the mental workload imposed on the subject. Computing an autoregressive (AR) power spectrum, we observed, under low workload, two peaks corresponding to blood-pressure variation and respiratory sinus arrhythmia. As the workload increased, the spectral peak related to respiratory sinus arrhythmia disappeared. The ratio of the power in the low-frequency band (0.05-0.15 Hz) to the power in the respiration band (0.35-0.4 Hz) increased with the work level. In conclusion, the fluctuation of the pupil area is a promising means of evaluating mental workload and autonomic nervous function.
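    A minimal sketch of the band-power ratio used as the workload index, computed here with a plain FFT periodogram on synthetic data (the paper computes an AR spectrum of the measured pupil-area signal; the sampling rate and signal composition below are assumptions):

        import numpy as np

        fs = 4.0                      # assumed sampling rate (Hz)
        t = np.arange(0, 60, 1 / fs)  # one-minute record
        signal = (np.sin(2 * np.pi * 0.10 * t)           # blood-pressure-related wave
                  + 0.5 * np.sin(2 * np.pi * 0.375 * t)  # respiratory component
                  + 0.2 * np.random.randn(t.size))       # noise

        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        power = np.abs(np.fft.rfft(signal)) ** 2

        def band_power(lo, hi):
            return power[(freqs >= lo) & (freqs <= hi)].sum()

        ratio = band_power(0.05, 0.15) / band_power(0.35, 0.40)
        print(f"LF / respiration power ratio: {ratio:.2f}")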

  • Monte Carlo Simulation for Analysis of Sequential Failure Logic

    Wei LONG  Yoshinobu SATO  Hua ZHANG  

     
    PAPER
    Vol: E83-A No:5  Page(s): 812-817

    Monte Carlo simulation is applied to fault-tree analyses of sequential failure logic. To establish the validity of the technique, case studies estimating the statistically expected number of system failures during (0, t] are conducted for two types of systems, using both the multiple-integration method and Monte Carlo simulation, and the results of the two methods are compared. The comparison validates the Monte Carlo simulation for solving sequential failure logic, with acceptably small deviation rates in these cases.
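    A minimal sketch of such a comparison for one simple sequential-failure case, where the system fails only if component A fails before component B and both fail within (0, t]. The exponential lifetimes and rates are assumptions; the paper treats more general fault trees:

        import math
        import random

        def simulate(la, lb, t, trials=200_000):
            hits = 0
            for _ in range(trials):
                ta = random.expovariate(la)  # lifetime of A
                tb = random.expovariate(lb)  # lifetime of B
                if ta < tb <= t:             # A first, then B, both within (0, t]
                    hits += 1
            return hits / trials

        def integrate(la, lb, t):
            # closed form of the double integral for this simple case
            return (1 - math.exp(-lb * t)) - lb / (la + lb) * (1 - math.exp(-(la + lb) * t))

        la, lb, t = 1e-3, 2e-3, 1000.0
        print(f"Monte Carlo:  {simulate(la, lb, t):.4f}")
        print(f"integration:  {integrate(la, lb, t):.4f}")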

  • Design and Evaluation of Computer Telephony Services in a Distributed Processing Environment

    Shinji MOTEGI  Masaru ENOMOTO  Eiji UTSUNOMIYA  Hiroki HORIUCHI  Toshikane ODA  

     
    PAPER-Novel Applications
    Vol: E83-B No:5  Page(s): 1075-1084

    TINA (Telecommunications Information Networking Architecture) has been developed to support the efficient operation of a wide range of complex services. TINA is effective in building advanced multimedia services and provides solutions for complex service control and management along with a high level of quality of service. However, the benefits and effectiveness of TINA for other types of services, such as ordinary telephone services and facsimile messaging services, are less clear. This paper clarifies how to apply TINA to the control and management of computer telephony (CT) services and ordinary telephony services. We designed and implemented CT services, in particular a click-to-dial service, in a distributed processing environment (DPE) as the target of our study, and we demonstrate the effectiveness of the design through qualitative and quantitative evaluation. The results show that the distributed processing technique, based on component concepts, makes it easy to build and extend CT services, and that the TINA service architecture is applicable to ordinary telephony and advanced CT services.

  • A K-Band MMIC Frequency Doubler Using Resistive Series Feedback Circuit

    Yasushi SHIZUKI  Yumi FUCHIDA  Fumio SASAKI  Kazuhiro ARAI  Shigeru WATANABE  

     
    PAPER-Microwaves, Millimeter-Waves
    Vol: E83-C No:5  Page(s): 759-766

    A novel K-band MMIC frequency doubler using a resistive series-feedback circuit has been developed. The doubler exhibits a much better D/U ratio, smaller output-power variation over ambient temperature, and lower power consumption than a conventional single-ended doubler. This paper presents harmonic-balance simulation results on the effect of the resistive series feedback. To obtain practical and accurate simulation results, a newly developed gate-charge model for Cgs and Cgd is introduced. Measured results of the fabricated MMIC are also presented.

  • Distance-Based Test Feature Classifiers and Its Applications

    Vakhtang LASHKIA  Shun'ichi KANEKO  Stanislav ALESHIN  

     
    PAPER-Image Processing, Image Pattern Recognition
    Vol: E83-D No:4  Page(s): 904-913

    In this paper, we present a class of combinatorial-logical classifiers called test feature classifiers. These are polynomial functions that can be used as pattern classifiers of binary-valued feature vectors. The method is based on so-called tests: sets of features that are sufficient to distinguish patterns belonging to different classes of the training samples. Based on the concept of the test, we propose new distance-based test feature classifiers. To evaluate their performance, we apply them to a well-known phoneme database and to a textual-region location problem, for which we propose a new, effective textual-region searching system that can locate textual regions against a complex background. Experimental results show that the proposed classifiers yield a higher recognition rate than conventional ones and have a high ability to generalize, suggesting that they can be used in a variety of pattern-recognition applications.
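    A minimal sketch of the "test" idea on toy data: a test is a set of feature positions on which no two training samples from different classes agree, and classification here takes the smallest distance to a class accumulated over all tests (the four training vectors, the brute-force search, and the distance rule are illustrative simplifications of the paper's classifiers):

        from itertools import combinations

        train = [((0, 1, 1, 0), "A"), ((1, 1, 0, 0), "A"),
                 ((0, 0, 1, 1), "B"), ((1, 0, 0, 1), "B")]

        def is_test(idx):
            # no cross-class pair of samples may agree on every feature in idx
            return not any(cx != cy and all(x[i] == y[i] for i in idx)
                           for x, cx in train for y, cy in train)

        tests = [c for r in range(1, 5)
                 for c in combinations(range(4), r) if is_test(c)]

        def classify(u):
            best = {}
            for x, cx in train:
                d = sum(sum(u[i] != x[i] for i in t) for t in tests)
                if d < best.get(cx, float("inf")):
                    best[cx] = d
            return min(best, key=best.get)

        print(len(tests), "tests found")
        print(classify((1, 1, 1, 0)))  # closest, over all tests, to class "A"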

  • A Business Flow Diagram for Acquiring Users' Requirements of Object Oriented Software

    Mikito KUROKI  Morio NAGATA  

     
    PAPER-Theory and Methodology
    Vol: E83-D No:4  Page(s): 608-615

    To bridge the wide gap between end users and requirements engineers, we propose a business flow diagram for acquiring users' requirements in object-oriented software development for the business application domain. Each field of this diagram shows either a role or a responsibility of a particular person or organization. This paper proposes a development method in which the engineers acquire requirements by using our diagrams, and we have implemented a supporting tool, based on this study, that helps requirements engineers collaborate with users. First, the end users of the information system to be developed draw diagrams representing the flows of information and physical objects in their work from their own points of view; sometimes the engineers write them together with the users. Once all users have submitted their diagrams, our tool collects them and constructs a total diagram, which the requirements engineers analyze to improve the business flow. After the engineers complete this diagram, our tool can automatically transform it into an initial version of the class diagram. We show the effectiveness of our approach with some experiments and, comparing it with related work, discuss some practical issues of this proposal.
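    A minimal sketch of the final transformation step, mapping flow items to candidate classes and methods (the flow triples and the mapping rule are assumptions; the paper's tool operates on its richer diagram notation):

        # Each role becomes a candidate class; each flow item becomes a
        # method on the sender. Data are an assumed example.
        flows = [("Customer", "order form", "Sales"),
                 ("Sales", "invoice", "Accounting"),
                 ("Accounting", "receipt", "Customer")]

        classes = {}
        for sender, item, receiver in flows:
            method = f"send_{item.replace(' ', '_')}(to: {receiver})"
            classes.setdefault(sender, set()).add(method)
            classes.setdefault(receiver, set())

        for name in sorted(classes):
            print(f"class {name}:")
            for m in sorted(classes[name]):
                print(f"    {m}")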

  • Creating Virtual Environment Based on Video Data with Forward Motion

    Xiaohua ZHANG  Hiroki TAKAHASHI  Masayuki NAKAJIMA  

     
    PAPER-Multimedia Pattern Processing
    Vol: E83-D No:4  Page(s): 931-936

    The construction of photo-realistic 3D scenes from video data is an active and competitive area of research in the fields of computer vision, image processing, and computer graphics. In this paper, we address our recent work in this area. Unlike most methods of 3D scene construction, we consider the generation of virtual environments from a video sequence captured with a forward-moving camera. Each frame is decomposed into sub-images, which are registered with their counterparts using the Levenberg-Marquardt iterative algorithm to estimate motion parameters. The registered sub-images are then pasted together to form a pseudo-3D space. By controlling its position and direction, a virtual camera can walk through this virtual space, generating novel 2D views that give an immersive impression. Even when the virtual camera moves deep into the virtual environment, it still obtains novel views of relatively high resolution.
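    A minimal sketch of the registration step: estimating the eight perspective parameters from point correspondences with Levenberg-Marquardt, here via SciPy on synthetic correspondences (the parameter values and points are assumptions):

        import numpy as np
        from scipy.optimize import least_squares

        def warp(p, xy):
            a, b, c, d, e, f, g, h = p
            x, y = xy
            w = g * x + h * y + 1.0
            return np.array([(a * x + b * y + c) / w, (d * x + e * y + f) / w])

        true_p = np.array([1.02, 0.01, 3.0, -0.02, 0.98, -1.5, 1e-4, -5e-5])
        src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 30], [20, 80]], float)
        dst = np.array([warp(true_p, s) for s in src])

        def residuals(p):
            return np.concatenate([warp(p, s) - d for s, d in zip(src, dst)])

        p0 = np.array([1, 0, 0, 0, 1, 0, 0, 0], float)   # identity as initial guess
        fit = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt
        print(np.round(fit.x, 5))                        # recovers true_p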

  • A False-Sharing Free Distributed Shared Memory Management Scheme

    Alexander I-Chi LAI  Chin-Laung LEI  Hann-Huei CHIOU  

     
    PAPER-Computer Systems
    Vol: E83-D No:4  Page(s): 777-788

    Distributed shared memory (DSM) systems built on networks of workstations are especially vulnerable to the impact of false sharing because of their higher memory-transaction overheads and, thus, higher false-sharing penalties. In this paper, we develop a dynamic-granularity shared-memory management scheme that eliminates false sharing without sacrificing transparency to conventional shared-memory applications. Our approach uses a special threaded splay tree (TST) to manage shared-memory information, and a dynamic token-based path-compression synchronization algorithm to transfer data. The combination of the TST and path compression is quite efficient: asymptotically, in an n-processor system with m shared memory segments, synchronizing at most s segments takes O(s log m log n) amortized computation steps and generates O(s log n) communication messages. Based on the proposed scheme, we constructed an experimental DSM prototype consisting of several Ethernet-connected Pentium-based computers running Linux. Preliminary benchmark results on our prototype indicate that our scheme is quite efficient, significantly outperforming traditional schemes and scaling up well.
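    A minimal sketch of the path-compression half of the synchronization algorithm: a lookup follows "probable owner" pointers to the node currently holding a segment's token, then repoints every visited node at the owner so chains stay short (the owner table is assumed; the paper combines this with the threaded splay tree):

        probable_owner = {0: 1, 1: 3, 2: 1, 3: 3, 4: 0}  # node 3 holds the token

        def find_owner(node):
            path = []
            while probable_owner[node] != node:
                path.append(node)
                node = probable_owner[node]
            for n in path:                 # path compression
                probable_owner[n] = node
            return node

        print(find_owner(4))     # follows 4 -> 0 -> 1 -> 3
        print(probable_owner)    # 4, 0, and 1 now point straight at 3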

  • A Naming, Storage, and Retrieval Model for Software Assets

    Yuen-Chang SUN  Chin-Laung LEI  

     
    PAPER-Software Systems
    Vol: E83-D No:4  Page(s): 789-796

    This paper presents a model for the naming, packing, version control, storage, and retrieval of software assets. The naming and version-control schemes are based on content-derived coding of the asset body, and the name-to-asset mapping can be fixed or can vary with time. The packing scheme makes the asset's integrity-verification and authentication information an intrinsic part of the packed asset. A comparison with an existing naming and version-control scheme is given. The storage and retrieval scheme unifies the four most commonly used methods: the enumerative, keyword-based, faceted, and text-based methods. This unification makes it possible for users to access heterogeneous repositories (with different storage and retrieval methods) simultaneously, using the same tool set and query syntax. A degradation-functions technique is used to broaden queries automatically. A demonstrative retrieval tool shows how the unification of methods is realized at the user-interface level.
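    A minimal sketch of content-derived naming (SHA-256 and the 16-character truncation are our assumptions; the paper defines its own coding scheme):

        import hashlib

        def asset_name(body: bytes) -> str:
            # equal bodies always map to equal names, across repositories
            return hashlib.sha256(body).hexdigest()[:16]

        v1 = b"int add(int a, int b) { return a + b; }"
        v2 = b"int add(int a, int b) { return b + a; }"
        print(asset_name(v1))
        print(asset_name(v1) == asset_name(v2))  # False: changed body, new name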

  • Capturing Wide-View Images with Uncalibrated Cameras

    Vincent van de LAAR  Kiyoharu AIZAWA  

     
    PAPER-Image Processing, Image Pattern Recognition
    Vol: E83-D No:4  Page(s): 895-903

    This paper describes a scheme for capturing a wide-view image using a setup of uncalibrated cameras whose optical axes point in divergent directions. The viewing direction of the resulting image can be chosen freely anywhere between the two optical axes. The scheme uses eight-parameter perspective transformations to warp the images, the parameters of which are obtained with a relative-orientation algorithm. The focal lengths and scale factors of the two images are estimated using Powell's multi-dimensional optimization technique. Experiments on real images show the accuracy of the scheme.
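    A minimal sketch of the warping step: applying an eight-parameter perspective transform by inverse mapping with nearest-neighbour sampling (the homography values and the stand-in image are assumptions; the paper estimates the parameters from the images themselves):

        import numpy as np

        H = np.array([[1.0, 0.05, 10.0],
                      [0.02, 1.0, -5.0],
                      [1e-4, 0.0, 1.0]])  # 8 free parameters, H[2,2] fixed to 1

        src = np.arange(100 * 100, dtype=np.float32).reshape(100, 100)  # stand-in image
        out = np.zeros_like(src)
        Hinv = np.linalg.inv(H)

        ys, xs = np.mgrid[0:100, 0:100]
        pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
        sx, sy, w = Hinv @ pts                 # back-project output pixels
        sx = np.rint(sx / w).astype(int)
        sy = np.rint(sy / w).astype(int)
        ok = (sx >= 0) & (sx < 100) & (sy >= 0) & (sy < 100)
        out.ravel()[ok] = src[sy[ok], sx[ok]]  # nearest-neighbour sampling
        print(out[50, 50])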

  • Link Capacity and Signal Power According to Allocations of Spreading Codes and Bandwidth in CDMA Systems

    Chang Soon KANG  Sung Moon SHIN  Dan Keun SUNG  

     
    LETTER-Wireless Communication Technology
    Vol: E83-B No:4  Page(s): 858-860

    We present a reverse-link performance analysis of single-code and multi-code CDMA systems. The results show that the single-code system outperforms the multi-code system in terms of link capacity and signal power, and that this advantage grows as the spreading bandwidth is reduced and as the number of spreading codes assigned to a user increases.

  • Software Creation: A Study on the Inside of Human Design Knowledge

    Hassan ABOLHASSANI  Hui CHEN  Behrouz Homayoun FAR  Zenya KOONO  

     
    PAPER-Theory and Methodology
    Vol: E83-D No:4  Page(s): 648-658

    This paper discusses the characteristics of human design knowledge. By studying a number of actual designs produced by excellent designers, we identified the most frequent basic mental operations of a typical human designer. They are: a design rule for hierarchical detailing, reported previously; a micro design rule for generating a hierarchical expansion; and dictionary operations that build micro design rules and dictionaries. This study assumes a multiplicity of knowledge based on Zipf's "principle of least effort." Zipf's principle may thus be corroborated, making it possible to understand the fundamental nature of human design.
