
Keyword Search Result

[Keyword] Ti(30728hit)

21001-21020hit(30728hit)

  • Automated Edge Detection by a Fuzzy Morphological Gradient

    Sathit INTAJAG  Kitti PAITHOONWATANAKIJ  

     
    PAPER-Image
    Vol: E86-A No:10  Page(s): 2678-2689

    Edge detection is an essential step in image processing, and much work has been undertaken to date. This paper investigates fuzzy mathematical morphology in order to reach a higher level of edge-image processing. The proposed scheme uses a fuzzy morphological gradient to detect object boundaries, where the boundaries are roughly defined as a curve or a surface separating homogeneous regions. The automatic edge detection algorithm consists of two major steps. First, a new version of anisotropic diffusion is proposed for edge detection and image restoration. All improvements of the new version use fuzzy mathematical morphology to preserve edge accuracy and to restore the images to homogeneity. Second, the fuzzy morphological gradient operation detects the step edges between the homogeneous regions as object boundaries. This operation uses the geometrical characteristics contained in the structuring element to extract the edge features into the set of edgeness, a set consisting of the quality values of the edge pixels. This set is processed with fuzzy logic for the decision and selection of authentic edge pixels. In experiments, the proposed method was tested successfully on both synthetic and real pictures.
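    The gradient step can be sketched minimally as follows (a hypothetical illustration, not the authors' code; the flat structuring element and square window are assumptions): with a flat element, the morphological gradient reduces to a local maximum (dilation) minus a local minimum (erosion) over the element.

```python
def fuzzy_morphological_gradient(image, radius=1):
    """Per-pixel gradient: local max minus local min over a square
    structuring element of the given radius (flat element assumed)."""
    h, w = len(image), len(image[0])
    grad = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [image[j][i]
                      for j in range(max(0, y - radius), min(h, y + radius + 1))
                      for i in range(max(0, x - radius), min(w, x + radius + 1))]
            grad[y][x] = max(window) - min(window)  # dilation minus erosion
    return grad
```

    On a step edge the gradient peaks at the boundary and vanishes inside homogeneous regions, which is the behavior an edgeness set would be built from.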

  • Assuring Interoperability of Heterogeneous Multi-Agent Systems

    Hiroki SUGURI  Eiichiro KODAMA  Masatoshi MIYAZAKI  

     
    PAPER-Agent-Based Systems
    Vol: E86-D No:10  Page(s): 2095-2103

    In order for agent-based applications to be truly autonomous and decentralized, heterogeneous multi-agent systems themselves must communicate and interoperate with each other. To solve this problem, we take a two-step approach. First, message-level interoperability is realized by a gateway agent that interconnects heterogeneous multi-agent systems. Second, higher-level interoperation of conversations, which consist of bi-directional streams of messages, is achieved by dynamically negotiating the interaction protocols. We demonstrate the concept, technique, and implementation of integrating multi-agent systems, and show how the method improves the assurance of real-world applications in autonomous decentralized systems.

  • Field Configurable Self-Assembly: A New Heterogeneous Integration Technology

    Alan O'RIORDAN  Gareth REDMOND  Thierry DEAN  Mathias PEZ  

     
    INVITED PAPER
    Vol: E86-C No:10  Page(s): 1977-1984

    Field Configurable Self-Assembly is a novel heterogeneous integration technology based on programmable force fields. Herein, we demonstrate application of the method to rapid, parallel assembly of similar and dissimilar sub-200 µm GaAs-based light emitting diodes onto silicon chip substrates. We also show that the method is compatible with post-process collective wiring techniques for fully planar hybrid integration of active devices.

  • First Microwave Characteristics of InGaAlAs/GaAsSb/InP Double HBTs

    Xin ZHU  Dimitris PAVLIDIS  Guangyuan ZHAO  Philippe BOVE  Hacene LAHRECHE  Robert LANGER  

     
    PAPER
    Vol: E86-C No:10  Page(s): 2010-2014

    We report for the first time the design, process, and characterization of InP-based micrometer-emitter InGaAlAs/GaAsSb/InP double HBTs (DHBTs) and their microwave performance. The layer structure not only allows the implementation of an InP collector free of current blocking, but also enables a small turn-on voltage and ballistic launching of electrons due to the positive conduction-band discontinuity from emitter to base. The DHBT structure was grown on nominal (001) InP substrates using MBE. Solid Si and CBr4 gas were used for n-type and p-type doping, respectively. Fabricated large DHBTs showed high DC gain (> 80), a small turn-on voltage of 0.62 V, almost zero offset voltage, and nearly ideal base and collector current characteristics (ideality factors 1.0 for both B-E and B-C junctions). Small DHBTs demonstrated VCEO > 8 V and stable operation at high current density exceeding 100 kA/cm2. A maximum fT of 57 GHz and a maximum fmax of 66 GHz were achieved from 1 × 20 µm2 devices at a similar bias condition: JC = 8.0 × 104 A/cm2 and VCE = 3.5 V. The InGaAlAs/GaAsSb/InP DHBTs appear to be a very promising HBT solution with simultaneously excellent RF and DC performance.

  • Reduced Complexity Iterative Decoding Using a Sub-Optimum Minimum Distance Search

    Jun ASATANI  Takuya KOUMOTO  Kenichi TOMITA  Tadao KASAMI  

     
    LETTER-Coding Theory
    Vol: E86-A No:10  Page(s): 2596-2600

    In this letter, we propose (1) a new sub-optimum minimum distance search (sub-MDS), whose search complexity is considerably reduced compared with optimum MDSs, and (2) a termination criterion, called the near optimality condition, which reduces the average number of decoding iterations with little degradation of error performance when the proposed decoding uses sub-MDS iteratively. Consequently, the decoding algorithm can be applied to longer codes with feasible complexity. Simulation results for several Reed-Muller (RM) codes of lengths 256 and 512 are given.

  • An FTP Proxy System to Assure Providing the Latest Version of Replicated Files

    Junichi FUNASAKA  Masato BITO  Kenji ISHIDA  Kitsutaro AMANO  

     
    PAPER-Network Systems and Applications
    Vol: E86-B No:10  Page(s): 2948-2956

    As so many software titles are now distributed via the Internet, the number of accesses to file servers, such as FTP servers, is rapidly increasing. To prevent the concentration of accesses on the original file server, mirror servers are introduced that contain the same directories and files as the original server. However, inconsistency among the mirror servers and the original server is often observed because of delivery latency, traffic congestion on the network, and the management policies of the mirror servers. This inconsistency degrades the value of the mirror servers. Accordingly, we have developed an intermediate FTP proxy server system that guarantees the freshness of the files as well as preventing access concentration on the original FTP server. The system adopts per-file selection of the replicated files, whereas most existing methods are based on per-host or per-directory selection. It can therefore assure users of a quick, stable, and up-to-date FTP mirroring service even in the face of frequent content updates, which tend to degrade the homogeneity of services. Moreover, it can forward the retrieved files with little overhead. Tests confirmed that our system is comparable to existing systems in terms of actual retrieval time, required traffic, and load endurance. This technology assures clients that they will receive the latest version of the file(s) desired, and it supports heterogeneous network environments such as the Internet well.
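    The per-file selection idea can be sketched as follows (server names and timestamps are hypothetical; this is an illustration, not the implemented system): for each file, the proxy chooses the replica with the newest modification time, so a partially stale mirror can still serve the files it does hold up to date.

```python
def pick_freshest(replicas):
    """replicas: {server: {filename: mtime}}.
    Returns, per file, the server holding the newest copy."""
    best = {}
    for server, files in replicas.items():
        for name, mtime in files.items():
            if name not in best or mtime > best[name][1]:
                best[name] = (server, mtime)
    return {name: server for name, (server, _) in best.items()}
```

    Per-host or per-directory schemes would have to discard such a mirror entirely as soon as any one file on it fell behind.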

  • Method to Generate Images for a Motion-Base in an Immersive Display Environment

    Toshio MORIYA  Haruo TAKEDA  

     
    PAPER-Image Processing, Image Pattern Recognition
    Vol: E86-D No:10  Page(s): 2231-2239

    We propose an image generation method for an immersive multi-screen environment that contains a motion ride. To allow a player to look around freely in a virtual world, a method to generate images for an arbitrary direction is required, and this technology has already been established. In our environment, displayed images must also be updated according to the movement of the motion ride in order to keep consistency between the player's viewpoint and the virtual world. In this paper, we show that this updating process can be performed by a method similar to the one used to generate looking-around images, and that the same data format is applicable. We then discuss the format in terms of data size and the amount of calculation needed, considering the performance requirements of our display environment, and we propose new image formats that improve on widely used formats such as the perspective and fish-eye formats.

  • A Zone-Based Data Placement and Retrieval Scheme for Video-on-Demand Applications Regardless of Video Popularity

    Ming-Jen CHEN  Chwan-Chia WU  

     
    PAPER-Multimedia Systems
    Vol: E86-B No:10  Page(s): 3094-3102

    This paper presents a novel data placement and retrieval scheme for video-on-demand systems. In particular, a zone-based data placement scheme is employed to reduce the average seek time of the disk-array storage system and thus increase the disk access bandwidth, allowing the video server to serve more video programs concurrently. Furthermore, due to the inherent nature of video access, a popular program requires more accesses and therefore occupies more disk I/O bandwidth as requests for it increase. A new retrieval strategy is proposed to maintain a single copy of each video program regardless of its popularity, while achieving the maximum I/O throughput of the video server.
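    A toy illustration of zone-based striping (an assumption for illustration, not necessarily the paper's exact layout): blocks of a video are spread round-robin across disk zones, so successive retrievals alternate between zones and the average seek distance stays bounded.

```python
def place_blocks(num_blocks, zones):
    """Assign block i of a video to zone i mod len(zones) (round-robin)."""
    return [zones[i % len(zones)] for i in range(num_blocks)]
```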

  • Calculation of Sommerfeld Integrals for Modeling Vertical Dipole Array Antenna for Borehole Radar

    Satoshi EBIHARA  Weng Cho CHEW  

     
    PAPER-Electromagnetic Theory
    Vol: E86-C No:10  Page(s): 2085-2096

    This paper describes a method for the fast evaluation of the Sommerfeld integrals for modeling a vertical dipole antenna array in a borehole. When we analyze the antenna inside a medium modeled by multiple cylindrical layers with the Method of Moments (MoM), we need a Green's function including the scattered field from the cylindrical boundaries. We focus on the calculation of Green's functions under the condition that both the detector and the source are situated in the innermost layer, since the Green's functions are used to form the impedance matrix of the antenna. Considering bounds on the location of singularities in the complex wave-number plane, a fast convergent integration path where pole tracking is unnecessary is adopted for numerical integration. Furthermore, as an approximation of the Sommerfeld integral, we describe an asymptotic expansion of the integrals along the branch cuts. The pole contributions of the TM01 and HE11 modes are considered in the asymptotic expansion. To obtain numerical results, we use the fast convergent integration path, which always proves to be accurate and efficient. The asymptotic expansion works well under specific conditions. The Sommerfeld integral values calculated with the fast evaluation method are used to model the array antenna in a borehole with the MoM. We compare the MoM data with experimental data and show the validity of the fast evaluation method.

  • The Theory of Software Reliability Corroboration

    Bojan CUKIC  Erdogan GUNEL  Harshinder SINGH  Lan GUO  

     
    PAPER-Testing
    Vol: E86-D No:10  Page(s): 2121-2129

    Software certification is a notoriously difficult problem. From a software reliability engineering perspective, the certification process must provide evidence that the program meets or exceeds the required level of reliability. When certifying the reliability of a high assurance system, very few failures, if any, are observed by testing. In statistical estimation theory, the probability of an event is estimated by determining the proportion of the times it occurs in a fixed number of trials. In the absence of failures, the number of required certification tests becomes impractically large. We suggest that subjective reliability estimation from the development lifecycle, based on observed behavior or the reflection of one's belief in the system quality, be included in certification. In statistical terms, we hypothesize that a system failure occurs with the hypothesized probability. The presumed reliability needs to be corroborated by statistical testing during the reliability certification phase. As evidence relevant to the hypothesis increases, we change the degree of belief in the hypothesis. Depending on the corroborating evidence, the system is either certified or rejected. The advantage of the proposed theory is an economically acceptable number of required system certification tests, even for high assurance systems so far considered impossible to certify.
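    The impracticality argument can be made concrete with standard statistical-testing arithmetic (this is the classical bound, not the paper's corroboration scheme): certifying a failure probability of at most p with confidence C requires n failure-free tests such that (1 - p)^n <= 1 - C.

```python
import math

def failure_free_tests(p, confidence=0.99):
    """Smallest n of failure-free tests with (1 - p)**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))
```

    Even a modest target of p = 10^-3 at 99% confidence needs thousands of failure-free tests, and the count grows roughly as 1/p, which is the gap the proposed corroboration theory aims to close.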

  • Optimization of Handover Plans to Minimize the Degradation of Communications QoS for LEO Satellite Systems

    Yasushi WAKAHARA  Kazumasa SATO  Jun-ichi MIZUSAWA  

     
    PAPER-Network Control and Management
    Vol: E86-B No:10  Page(s): 2891-2901

    Handovers are occasionally required in LEO (Low Earth Orbit) satellite systems, since the satellites are always moving. Handovers, however, generally cause some degradation of communication quality. Some LEO systems feature two types of satellites, distinguished by the direction of their orbits, and a handover between satellites of different types may lead to a large change in the path between the satellites serving the communication. Thus, the degradation can be especially large for handovers between satellites of different types. The best handover plan should therefore be derived, making the number of handovers as small as possible, to minimize the degradation of communication quality. In this paper, such optimization problems related to handovers are formulated, and solutions are proposed that realize optimal handover plans for LEO satellite systems with two types of satellites.

  • Defense Against Man-in-the-Middle Attack in Client-Server Systems with Secure Servers

    Dimitrios N. SERPANOS  Richard J. LIPTON  

     
    LETTER
    Vol: E86-B No:10  Page(s): 2966-2970

    Digital rights management in client-server environments requires the establishment of client integrity, in order to protect sensitive (secret) information from loss or misuse. Clients are vulnerable to powerful man-in-the-middle attacks mounted through malicious software (viruses, etc.), which is undetectable by conventional anti-virus technology. We present such powerful viruses and demonstrate their ability to compromise clients. Furthermore, we introduce a defense against such viruses, based on simple hardware devices that execute specialized protocols to establish client integrity and protect against the loss of sensitive data.

  • Randomized Caches for Power-Efficiency

    Hans VANDIERENDONCK  Koen De BOSSCHERE  

     
    PAPER-Integrated Electronics
    Vol: E86-C No:10  Page(s): 2137-2144

    Embedded processors are used in numerous devices executing dedicated applications. This setting makes it worthwhile to optimize the processor for the application it executes, in order to increase its power-efficiency. This paper proposes to enhance direct-mapped data caches with automatically tuned randomized set-index functions to achieve that goal. We show how randomization functions can be generated automatically and compare them to traditional set-associative caches in terms of performance and energy consumption. A 16 kB randomized direct-mapped cache consumes 22% less energy than a 2-way set-associative cache, while being less than 3% slower. When the randomization function is made configurable (i.e., it can be adapted to the program), the additional reduction of conflicts outweighs the added complexity of the hardware, provided there is a sufficient number of conflict misses.
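    One common family of randomized set-index functions can illustrate the idea (a hypothetical example; the paper's functions are tuned automatically per application): XOR-folding two bit-fields of the block address separates addresses that would collide under plain bit selection.

```python
def bit_select_index(addr, set_bits, offset_bits=5):
    """Conventional direct-mapped index: one bit-field of the block address."""
    return (addr >> offset_bits) & ((1 << set_bits) - 1)

def xor_fold_index(addr, set_bits, offset_bits=5):
    """Randomized index: XOR of two adjacent bit-fields of the block address."""
    block = addr >> offset_bits
    lo = block & ((1 << set_bits) - 1)
    hi = (block >> set_bits) & ((1 << set_bits) - 1)
    return lo ^ hi
```

    Two addresses sharing the same low index bits but differing in the next field map to the same set under bit selection yet to different sets under XOR folding, which is how conflict misses are reduced without associativity.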

  • Autonomous Step-by-Step System Construction Technique Based on Assurance Evaluation

    Kazuo KERA  Keisuke BEKKI  Kinji MORI  

     
    PAPER-Reliability and Availability
    Vol: E86-D No:10  Page(s): 2145-2153

    Recent real-time systems need to be expandable with heterogeneous functions and operations. High assurance is very important for such systems. In order to realize high-assurance systems, we study an autonomous step-by-step construction technique based on assurance evaluation. In this paper, we propose the average functional reliability as the best index of assurance performance for system construction. We also propose an autonomous step-by-step construction technique that decides the construction sequence so as to maximize assurance performance.

  • Autonomous Integration and Optimal Allocation of Heterogeneous Information Services for High-Assurance in Distributed Information Service System

    Xiaodong LU  Kinji MORI  

     
    PAPER-Agent-Based Systems
    Vol: E86-D No:10  Page(s): 2087-2094

    Information service provision and utilization are an important infrastructure in a high-assurance distributed information service system. In order to cope with the rapidly evolving situations of providers' and users' heterogeneous requirements, an autonomous information service system called the Faded Information Field (FIF) has been proposed. FIF is a distributed information service system architecture, sustained by push/pull mobile agents, that recursively provides the most popular information closer to the users on demand, trading off the costs of service allocation and access. In this system, users' requests are served by pull mobile agents in charge of finding the relevant service. In the case of a mono-service request, the system is designed to reduce the time needed for users to access the information and to preserve the consistency of the replicas. However, when a user requests joint selection of multiple services, synchronization of atomic actions and timeliness have to be assured by the system. In this paper, the relationship among the contents, properties, and access ratios of information services is clarified. Based on these factors, the ratio of correlation and the degree of satisfaction are defined, and the autonomous integration and optimal allocation of information services for heterogeneous FIFs, providing one-stop service for users' multi-service requirements, are proposed. The effectiveness of the proposed technology is shown through evaluation; the results show that the integrated services can reduce total user access time and increase service consumption compared with separate systems.

  • High-Assurance Video Conference System over the Internet

    Masayuki ARAI  Hitoshi KUROSU  Mamoru OHARA  Ryo SUZUKI  Satoshi FUKUMOTO  Kazuhiko IWASAKI  

     
    PAPER-Network Systems and Applications
    Vol: E86-B No:10  Page(s): 2940-2947

    In video conference systems over the Internet, audio and video data are often lost due to UDP packet losses, resulting in degraded assurance. In this paper, we describe a high-assurance video conference system that applies two techniques: (1) packet loss recovery using convolutional codes, which improves the assurance of communication; and (2) Xcast, a multicast scheme designed for relatively small groups, which reduces the bandwidth required for a multi-point conference. We added these functions to a GateKeeper (GK), a device used in conventional conference systems. Encoding/decoding and Xcast routing were then implemented as a layer above UDP. We examined the functions of the system over the Internet in a multi-point conference between three sites around Tokyo, as well as in a conference between Tokyo and Korea. We also investigated the effectiveness of the proposed system in experiments using an Internet simulator. Experimental results showed that the quality of the received picture was improved compared with the case where no encoding scheme was applied.
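    The paper uses convolutional codes; as a simpler stand-in for the same recovery idea (a hypothetical sketch, not the implemented scheme), a single XOR parity packet per group already lets a receiver rebuild one lost UDP packet without retransmission.

```python
def make_parity(packets):
    """Byte-wise XOR over equal-length packets in one group."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """received: the group with exactly one lost packet replaced by None."""
    lost = bytearray(parity)
    for pkt in received:
        if pkt is not None:
            for i, byte in enumerate(pkt):
                lost[i] ^= byte
    return bytes(lost)
```

    Convolutional codes generalize this by spreading redundancy over a sliding window, tolerating burst losses that a single parity packet cannot.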

  • Assuring Communications by Balancing Cell Load in Cellular Network

    Xiaoxin WU  Biswanath MUKHERJEE  S.-H. Gary CHAN  Bharat BHARGAVA  

     
    PAPER-Mobile Ad Hoc Networks
    Vol: E86-B No:10  Page(s): 2912-2921

    In a fixed-channel-allocation (FCA) cellular network, a fixed number of channels is assigned to each cell. However, under this scheme, channel usage may not be efficient because of the variability of the offered traffic. Different approaches, such as channel borrowing (CB) and dynamic channel allocation (DCA), have been proposed to accommodate variable traffic. Our work expands on the CB scheme and proposes a new channel-allocation scheme, called the mobile-assisted connection-admission (MACA) algorithm, to achieve load balancing in a cellular network and thus assure network communication. In this scheme, some special channels are used to directly connect mobile units in different cells; thus, a mobile unit that is unable to connect to its own base station because it is in a heavily-loaded "hot" cell may be able to connect to a neighboring lightly-loaded "cold" cell's base station through a two-hop link. Research results show that MACA can greatly improve the performance of a cellular network by reducing blocking probabilities.
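    The admission logic can be sketched as follows (data structures are hypothetical; the real algorithm would also account for relay-channel availability per link and signal quality): try the home cell first, and fall back to a two-hop link into a lightly loaded neighbor.

```python
def admit(cell, load, capacity, neighbors, relay_channel_free):
    """Admit via the home cell if it has capacity; otherwise try a two-hop
    link into a neighboring 'cold' cell; otherwise block the call."""
    if load[cell] < capacity[cell]:
        return ("direct", cell)
    for nb in neighbors[cell]:
        if relay_channel_free and load[nb] < capacity[nb]:
            return ("two-hop", nb)
    return ("blocked", None)
```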

  • An Integrated Approach for Implementing Imprecise Computations

    Hidenori KOBAYASHI  Nobuyuki YAMASAKI  

     
    PAPER
    Vol: E86-D No:10  Page(s): 2040-2048

    The imprecise computation model is one of the flexible computation models used to construct real-time systems. It is especially useful when the worst-case execution times are difficult to estimate or the execution times vary widely. Although there are several ways to implement this model, they have not attracted much attention from real-world application programmers to date, due to their unrealistic assumptions and high dependency on the execution environment. In this paper, we present an integrated approach for implementing the imprecise computation model. In particular, our research covers three aspects. First, we present a new imprecise computation model which consists of a mandatory part, an optional part, and another mandatory part called the wind-up part. The wind-up part allows application programmers to explicitly incorporate into their programs the exact operations needed for safe degradation of performance when there is a shortage of resources. Second, we describe a scheduling algorithm called Mandatory-First with Wind-up Part (M-FWP), which is based on the Earliest Deadline First strategy. This algorithm, unlike scheduling algorithms developed for the classical imprecise computation model, is capable of scheduling a mandatory portion after an optional portion. Third, we present a dynamic priority server method for an efficient implementation of the M-FWP algorithm. We also show that at most one such server is needed per node. In order to estimate the performance of the proposed approach, we have implemented a real-time operating system called RT-Frontier. The experimental analyses have proven its ability to implement tasks based on the imprecise computation model without requiring any knowledge of the execution time of the optional part. Moreover, it showed a performance gain over the traditional checkpointing technique.
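    The role of the wind-up part can be illustrated with a small budget calculation (a hypothetical sketch, not the M-FWP algorithm itself): the scheduler may run optional chunks only while enough time remains before the deadline to still complete the wind-up part.

```python
def optional_budget(now, deadline, windup_cost, chunk_cost):
    """Number of optional chunks that fit before the wind-up part must start,
    so that the wind-up always completes by the deadline."""
    slack = deadline - now - windup_cost
    return max(0, slack // chunk_cost)
```

    When the budget reaches zero, the task abandons further optional work and commits to the wind-up, which is exactly the safe-degradation path the model gives programmers.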

  • REX: A Reconfigurable Experimental System for Evaluating Parallel Computer Systems

    Yuetsu KODAMA  Toshihiro KATASHITA  Kenji SAYANO  

     
    PAPER
    Vol: E86-D No:10  Page(s): 2016-2024

    REX is a reconfigurable experimental system for evaluating and developing parallel computer systems. It consists of large-scale FPGAs, and enables systems to be reconfigured, from their processors to the network topology, in order to support their evaluation and development. We evaluated REX using several implementations of parallel computer systems, and showed that it has sufficient scalability in gates, memory throughput, and network throughput. We also showed that REX is an effective development tool because of its emulation speed and reconfigurability.

  • Performance Evaluation of Wireless Multihop Ad Hoc Networks Using IEEE 802.11 DCF Protocol

    Ting-Chao HOU  Chien-Min WU  Ming-Chieh CHAN  

     
    PAPER-Wireless Communication Technology
    Vol: E86-B No:10  Page(s): 3004-3012

    There has been growing interest in multihop wireless ad hoc networks in recent years. Previous studies have shown that, in a wireless multihop network using slotted ALOHA as the medium access control mechanism, the maximum throughput is achieved when the number of neighbors is six to eight. We show that, when using the IEEE 802.11 DCF protocol in a wireless ad hoc network, the maximum end-to-end goodput is achieved when all nodes are within transmission range of each other. The main reason is that the channel spatial reuse gained from the multihop network does not compensate for the additional transmission hops that a packet needs to travel. For a multihop network, the MAC frame delivery capacity is approximately fixed at a value dependent on its spatial reuse factor. If the offered load increases, less capacity will be spent on delivering packets that eventually reach their destinations, resulting in lower end-to-end goodput.
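    The trade-off admits a back-of-envelope form (illustrative numbers, not measurements from the paper): end-to-end goodput scales with the spatial reuse factor but is divided by the hop count, so multihop only wins when reuse grows faster than path length.

```python
def end_to_end_goodput(mac_capacity, spatial_reuse, hops):
    """Approximate goodput: concurrent transmissions gained by spatial
    reuse, divided by the hops each packet must be forwarded over."""
    return mac_capacity * spatial_reuse / hops
```

    With everyone in range (reuse 1, one hop) the ratio is 1; a 4-hop path with a reuse factor of only 3 yields 0.75 of that, matching the observation that the single-cell case maximizes goodput.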
