
Keyword Search Result

[Keyword] tin (3578 hits)

Showing 2441-2460 of 3578 hits

  • Raman Gain Distribution Measurement Employing Reference Optical Fiber

    Kunihiro TOGE, Kazuo HOGARI, Tsuneo HORIGUCHI

    LETTER-Optical Fiber
    Vol. E86-B, No. 11, pp. 3293-3295

    This letter proposes a novel technique for evaluating the longitudinal distribution of the Raman gain characteristics in optical fibers connected to a reference optical fiber with a known Raman gain efficiency. This technique can evaluate the Raman gain efficiency in test fibers using a simplified experimental setup. We performed experiments on various test fibers and confirmed that their Raman gain efficiency can be obtained easily and accurately by employing a reference fiber.

  • A Standard Measure of Mobility for Evaluating Mobile Ad Hoc Network Performance

    Byung-Jae KWAK, Nah-Oak SONG, Leonard E. MILLER

    PAPER-Network
    Vol. E86-B, No. 11, pp. 3236-3243

    The performance of a mobile ad hoc network (MANET) is related to the efficiency of the routing protocol in adapting to changes in the network topology and the link status. However, the use of many different mobility models without a unified quantitative "measure" of mobility has made it very difficult to compare the results of independent performance studies of routing protocols. In this paper, a mobility measure for MANETs is proposed that is both flexible and consistent. It is flexible because one can customize the definition of mobility using a remoteness function. It is consistent because it has a linear relationship with the rate at which links are established or broken over a wide range of network scenarios. This consistency is the strength of the proposed measure: it reliably represents the link change rate regardless of the network scenario.
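
The link change rate against which the measure is calibrated can be sketched as follows (a minimal illustration with assumed definitions, not the paper's exact formulation): count the links established or broken between two snapshots of node positions under a fixed radio range.

```python
# Illustrative sketch: link change rate between two position snapshots.
# The radio-range threshold and example coordinates are assumptions.
from math import dist

def links(positions, radio_range):
    """Set of node pairs currently within radio range of each other."""
    ids = sorted(positions)
    return {(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if dist(positions[a], positions[b]) <= radio_range}

def link_changes(before, after, radio_range):
    """Number of links established or broken between two snapshots."""
    l0, l1 = links(before, radio_range), links(after, radio_range)
    return len(l0 ^ l1)   # symmetric difference: broken + established

t0 = {1: (0, 0), 2: (5, 0), 3: (20, 0)}
t1 = {1: (0, 0), 2: (15, 0), 3: (20, 0)}   # node 2 moved toward node 3
assert link_changes(t0, t1, radio_range=10) == 2   # link 1-2 broke, 2-3 formed
```

A consistent mobility measure, in the paper's sense, would track this quantity linearly across scenarios.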

  • Software TLB Management for Embedded Systems

    Yukikazu NAKAMOTO

    LETTER
    Vol. E86-D, No. 10, pp. 2034-2039

    Virtual memory functions in real-time operating systems are now used in embedded systems. Recent RISC processors support virtual memory through a software-managed Translation Lookaside Buffer (TLB). From a real-time perspective, managing TLB entries is critical because the overhead incurred on a TLB miss strongly affects overall system performance. In this paper, we propose several TLB management algorithms for MIPS processors, in which the TLB entry to be replaced is chosen either randomly or in a managed fashion. We analyze the algorithms by comparing their overheads at task-switch time and at TLB-miss time.
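
The random-replacement variant can be sketched as follows (a simplified illustration; the letter targets MIPS software-managed TLBs, and the sizes and addresses below are assumptions): on a miss, the software handler installs the mapping into a randomly chosen entry, so no per-access bookkeeping is needed on the hit path.

```python
# Illustrative sketch of a software-managed TLB with random replacement.
import random

class SoftwareTLB:
    def __init__(self, entries: int, seed: int = 0):
        self.slots = [None] * entries      # each slot holds a (vpn, pfn) pair
        self.rng = random.Random(seed)

    def lookup(self, vpn: int):
        for slot in self.slots:
            if slot is not None and slot[0] == vpn:
                return slot[1]             # TLB hit
        return None                        # TLB miss -> trap to refill handler

    def refill(self, vpn: int, pfn: int):
        """Miss handler: install the mapping in a randomly chosen entry."""
        victim = self.rng.randrange(len(self.slots))
        self.slots[victim] = (vpn, pfn)

tlb = SoftwareTLB(entries=4)
assert tlb.lookup(0x10) is None    # compulsory miss
tlb.refill(0x10, 0x8A)
assert tlb.lookup(0x10) == 0x8A    # subsequent access hits
```

The trade-off the letter analyzes is exactly this: random choice keeps the miss handler short, at the risk of evicting a hot entry.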

  • HEMT: Looking Back at Its Successful Commercialization

    Takashi MIMURA

    INVITED PAPER
    Vol. E86-C, No. 10, pp. 1908-1910

    The history of the development of the High Electron Mobility Transistor (HEMT) is an outstanding illustration of how a new device can be successfully brought to market. In this paper we discuss a key to the successful commercialization of new devices.

  • Concurrency Control for Read-Only Client Transactions in Broadcast Disks

    Haengrae CHO

    PAPER-Broadcast Systems
    Vol. E86-B, No. 10, pp. 3114-3122

    Broadcast disks are suited for disseminating information to a large number of clients in mobile computing environments. In broadcast disks, the server continuously and repeatedly broadcasts all data items in the database to clients without specific requests. The clients monitor the broadcast channel and read data items as they arrive on the broadcast channel. The broadcast channel then becomes a disk from which clients can read data items. In this paper, we propose a cache conscious concurrency control (C4) algorithm to preserve the consistency of read-only client transactions, when the values of broadcast data items are updated at the server. The C4 algorithm is novel in the sense that it can reduce the response time of client transactions with minimal control information to be broadcast from the server. This is achieved by the judicious caching strategy of the client and by adjusting the currency of data items read by the client.

  • Pulse: A Class of Super-Worms against Network Infrastructure

    Artemios G. VOYIATZIS, Dimitrios N. SERPANOS

    LETTER
    Vol. E86-B, No. 10, pp. 2971-2974

    Conventional worms and super-worms focus on the infection of end-systems: they exploit network vulnerabilities and use network resources only to route themselves appropriately. We describe a new class of super-worms that instead target network resources, using routing information to effectively partition the address space of the Internet.

  • A Random-Error-Resilient Collusion-Secure Fingerprinting Code, Randomized c-Secure CRT Code

    Hajime WATANABE, Takashi KITAGAWA

    PAPER-Information Security
    Vol. E86-A, No. 10, pp. 2589-2595

    In digital content distribution systems, digital watermarking (fingerprinting) provides a good solution for deterring illegal copying and has been studied very actively. The c-Secure CRT Code is one of the most practical ID coding schemes for such fingerprinting, since it is secure against collusion attacks and remains secure even when random errors are added. Its usefulness decreases when random errors are added, however, because the code length must grow. In this paper, we first introduce a new collusion attack that adds random errors and show that the c-Secure CRT Code is not sufficiently secure against it. We then analyze the problem and propose a new ID coding scheme, the Randomized c-Secure CRT Code, which overcomes it. The new scheme drastically improves the error tracing probabilities against the proposed attack while keeping the same code length, making it one of the most dependable fingerprinting codes for content distribution systems.

  • Testing for High Assurance System by FSM

    Juichi TAKAHASHI, Yoshiaki KAKUDA

    PAPER-Testing
    Vol. E86-D, No. 10, pp. 2114-2120

    Software and its systems are more complicated than a decade ago, and such systems are used for mission-critical business, flight control, and other applications that often require high assurance. In these circumstances black-box testing is often used, but it yields only empirical, not numerical, measures of the testing result. In this research, we therefore develop an enhanced FSM (Finite State Machine) testing method that produces the code coverage rate as a numerical value. Our FSM testing by code coverage focuses not only on software system behavior but also on data. We found that this method achieves a higher code coverage rate, which indicates system quality, than existing black-box testing methods.
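
The idea of deriving a numerical coverage figure from an FSM can be sketched as follows (the state machine below is a hypothetical example, not from the paper): execute an input sequence against the FSM, record which transitions it exercised, and report the ratio as the coverage rate.

```python
# Illustrative sketch: transition coverage of a hypothetical FSM.
fsm = {  # state -> {input: next_state}
    "idle":    {"start": "running"},
    "running": {"pause": "paused", "stop": "idle"},
    "paused":  {"start": "running", "stop": "idle"},
}

def run_test(seq, start="idle"):
    """Execute an input sequence; return the set of transitions covered."""
    covered, state = set(), start
    for inp in seq:
        nxt = fsm[state].get(inp)
        if nxt is None:        # input invalid in this state: stop the run
            break
        covered.add((state, inp, nxt))
        state = nxt
    return covered

total = {(s, i, t) for s, trans in fsm.items() for i, t in trans.items()}
covered = run_test(["start", "pause", "start", "stop"])
coverage = len(covered) / len(total)   # 4 of 5 transitions exercised -> 0.8
```

Unlike a pass/fail verdict, the `coverage` value is a numerical result that can be compared across test suites.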

  • Architecture and Evaluation of a Third-Generation RHiNET Switch for High-Performance Parallel Computing

    Hiroaki NISHI, Shinji NISHIMURA, Katsuyoshi HARASAWA, Tomohiro KUDOH, Hideharu AMANO

    PAPER
    Vol. E86-D, No. 10, pp. 1987-1995

    RHiNET-3/SW is the third-generation switch used in the RHiNET-3 system. It provides both low-latency processing and flexible connection due to its use of a credit-based flow-control mechanism, topology-free routing, and deadlock-free routing. The aggregate throughput of RHiNET-3/SW is 80 Gbps, and the latency is 140 ns. RHiNET-3/SW also provides a hop-by-hop retransmission mechanism. Simulation demonstrated that the effective throughput at a node in a 64-node torus RHiNET-3 system is equivalent to the effective throughput of a 64-bit 33-MHz PCI bus and that the performance of RHiNET-3/SW almost equals or exceeds the best performance of RHiNET-2/SW, the second-generation switch. Although credit-based flow control requires 26% more gates than rate-based flow control to manage the virtual channels (VCs), it requires less VC memory than rate-based flow control. Moreover, its use in a network system reduces latency and increases the maximum throughput compared to rate-based flow control.

  • An Integrated Approach for Implementing Imprecise Computations

    Hidenori KOBAYASHI, Nobuyuki YAMASAKI

    PAPER
    Vol. E86-D, No. 10, pp. 2040-2048

    The imprecise computation model is one of the flexible computation models used to construct real-time systems. It is especially useful when the worst-case execution times are difficult to estimate or the execution times vary widely. Although there are several ways to implement this model, they have attracted little attention from real-world application programmers to date due to their unrealistic assumptions and high dependency on the execution environment. In this paper, we present an integrated approach for implementing the imprecise computation model. In particular, our research covers three aspects. First, we present a new imprecise computation model which consists of a mandatory part, an optional part, and another mandatory part called the wind-up part. The wind-up part allows application programmers to explicitly incorporate into their programs the exact operations needed for safe degradation of performance when there is a shortage of resources. Second, we describe a scheduling algorithm called Mandatory-First with Wind-up Part (M-FWP), which is based on the Earliest Deadline First strategy. This algorithm, unlike scheduling algorithms developed for the classical imprecise computation model, is capable of scheduling a mandatory portion after an optional portion. Third, we present a dynamic priority server method for an efficient implementation of the M-FWP algorithm, and show that at most one instance of the proposed server is needed per node. In order to evaluate the performance of the proposed approach, we have implemented a real-time operating system called RT-Frontier. Experimental analyses have demonstrated its ability to implement tasks based on the imprecise computation model without requiring any knowledge of the execution time of the optional part, and have also shown a performance gain over the traditional checkpointing technique.

  • On Stability of Singular Systems with Saturating Actuators

    Jia-Rong LIANG, Ho-Lim CHOI, Jong-Tae LIM

    LETTER-Systems and Control
    Vol. E86-A, No. 10, pp. 2700-2703

    This paper investigates the stability problem of singular systems with saturating actuators. A Lyapunov method is employed to give sufficient conditions for the stability of the closed-loop system under actuator saturation. The controller is designed to satisfy the stability requirement under the nonlinear saturation. In addition, a method is presented for estimating the domain of attraction of the origin.

  • The Development of the Earth Simulator

    Shinichi HABATA, Mitsuo YOKOKAWA, Shigemune KITAWAKI

    INVITED PAPER
    Vol. E86-D, No. 10, pp. 1947-1954

    The Earth Simulator (ES), developed under the Japanese government initiative "Earth Simulator project," is a highly parallel vector supercomputer system. In May 2002, the ES was proven to be the most powerful computer in the world by achieving 35.86 teraflops on the LINPACK benchmark and 26.58 teraflops on a global atmospheric circulation model with the spectral method. Three architectural features enabled these achievements: vector processors, shared memory, and a high-bandwidth non-blocking crossbar interconnection network. In this paper, we give an overview of the ES, describe the three architectural features, and present performance evaluation results, focusing particularly on the hardware realization of the interconnection among 640 processor nodes.

  • The Theory of Software Reliability Corroboration

    Bojan CUKIC, Erdogan GUNEL, Harshinder SINGH, Lan GUO

    PAPER-Testing
    Vol. E86-D, No. 10, pp. 2121-2129

    Software certification is a notoriously difficult problem. From a software reliability engineering perspective, the certification process must provide evidence that the program meets or exceeds the required level of reliability. When certifying the reliability of a high assurance system, very few failures, if any, are observed by testing. In statistical estimation theory the probability of an event is estimated by determining the proportion of times it occurs in a fixed number of trials. In the absence of failures, the number of required certification tests becomes impractically large. We suggest that subjective reliability estimation from the development lifecycle, based on observed behavior or the reflection of one's belief in the system quality, be included in certification. In statistical terms, we hypothesize that a system failure occurs with some presumed probability, and this presumed reliability is then corroborated by statistical testing during the reliability certification phase. As evidence relevant to the hypothesis accumulates, we change the degree of belief in the hypothesis; depending on the corroborating evidence, the system is either certified or rejected. The advantage of the proposed theory is an economically acceptable number of required certification tests, even for high assurance systems so far considered impossible to certify.
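
One common way to formalize "changing the degree of belief as evidence accumulates" is Bayesian beta-binomial updating; the sketch below uses that as an assumed stand-in (the paper's exact corroboration statistic may differ). A prior belief about the failure probability, elicited from the development lifecycle, is sharpened by failure-free certification tests.

```python
# Illustrative sketch: beta-binomial updating of a failure-probability belief.
# The Beta(1, 999) prior (prior mean 1e-3) and the test counts are assumptions.
def beta_posterior_mean(a: float, b: float, tests: int, failures: int) -> float:
    """Posterior mean failure probability under a Beta(a, b) prior
    after observing `failures` failures in `tests` certification tests."""
    return (a + failures) / (a + b + tests)

prior_mean = beta_posterior_mean(1.0, 999.0, 0, 0)        # 1/1000
post_mean = beta_posterior_mean(1.0, 999.0, 9000, 0)      # 1/10000
assert post_mean < prior_mean   # failure-free tests corroborate the hypothesis
```

The point of starting from a subjective prior is visible here: far fewer failure-free tests are needed to reach a given posterior than if the estimate had to come from testing alone.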

  • A Hierarchical Routing Protocol Based on Autonomous Clustering in Ad Hoc Networks

    Tomoyuki OHTA, Munehiko FUJIMOTO, Shinji INOUE, Yoshiaki KAKUDA

    PAPER-Mobile Ad Hoc Networks
    Vol. E86-B, No. 10, pp. 2902-2911

    Recently, a hierarchical structure has been introduced in wired networks to improve management and routing; we introduce a hierarchical structure in ad hoc networks to achieve the same goal. However, it is difficult to maintain such a structure because all mobile hosts are constantly moving around the network. We previously proposed a clustering scheme for constructing the hierarchical structure. In this paper, we propose a new hierarchical routing protocol, called Hi-TORA, based on that clustering scheme, and we experimentally evaluate Hi-TORA with respect to the number of control packets, accuracy of packet delivery, and hop counts in comparison with TORA.

  • Performance Comparison of Multipath Routing Algorithms for TCP Traffic

    Guangyi LIU, Shouyi YIN, Xiaokang LIN

    LETTER-Network
    Vol. E86-B, No. 10, pp. 3144-3146

    Multipath routing poses a problem for TCP traffic in traffic engineering. To address it, hash functions such as CRC-16 are usually applied to the source and destination address fields of packet headers. Through simulations and performance comparison of several multipath algorithms, we find that the high network utilization achieved with hash functions comes at the expense of low fairness among coexisting TCP flows. We also show that packet size has a significant influence on performance.
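
The hash-based path selection the letter describes can be sketched as follows (a minimal illustration; CRC-32 from the standard library stands in for the CRC-16 mentioned in the letter, and the addresses are assumptions). Hashing the (source, destination) pair keeps all packets of a TCP flow on one path, avoiding reordering, at the cost of possibly uneven load across paths.

```python
# Illustrative sketch: deterministic flow-to-path assignment by address hash.
import zlib

def select_path(src: str, dst: str, n_paths: int) -> int:
    """Pick a path index from a CRC over the source/destination pair."""
    key = f"{src}->{dst}".encode()
    return zlib.crc32(key) % n_paths

# Every packet of the same flow maps to the same path:
p1 = select_path("10.0.0.1", "10.0.1.9", 4)
p2 = select_path("10.0.0.1", "10.0.1.9", 4)
assert p1 == p2
```

The fairness issue arises because flows are assigned to paths regardless of their rates, so a few heavy flows can land on the same path.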

  • Novel Built-In Current Sensor for On-Line Current Testing

    Chul Ho KWAK, Jeong Beom KIM

    LETTER-Integrated Electronics
    Vol. E86-C, No. 9, pp. 1898-1902

    This paper proposes a novel CMOS built-in current sensor (BICS) for on-line current testing. The proposed BICS detects abnormal current in the circuit under test (CUT) and generates a Pass/Fail signal by comparing the CUT current with that of a duplicated inverter. The circuit consists of two current-to-voltage conversion transistors, a full-swing generator, a voltage comparator, and an inverter block, and requires 16 transistors. Since the BICS does not require an extra clock, the only added pin is a single output pin. Furthermore, the BICS does not require test mode selection and can therefore be applied to on-line current testing. Its validity and effectiveness are verified through HSPICE simulation of circuits with defects. When the CUT is an 8 × 8 parallel multiplier, the area overhead of the BICS is about 4.34%.

  • On a Probabilistic Scheme for Encryption Using Nonlinear Codes Mapped from Z4-Linear Codes

    Chunming RONG

    LETTER
    Vol. E86-A, No. 9, pp. 2248-2250

    Probabilistic encryption is becoming more and more important because of its ability to resist chosen-ciphertext attacks. Applications such as online voting schemes and one-show credentials are based on probabilistic encryption. Research on good probabilistic encryption schemes is ongoing, while many good deterministic encryption schemes are already well implemented and available in many systems. To convert a deterministic encryption scheme into a probabilistic one, a randomizing medium is needed to apply to the message and carry it over as a randomized input. In this paper, nonlinear codes obtained by a certain mapping from linear error-correcting codes are considered to serve as such a carrying medium. Binary nonlinear codes obtained by the Gray mapping from Z4-linear codes are discussed as an example of such a scheme.
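
The Gray mapping from Z4 underlying this construction can be sketched concretely (the codeword below is a hypothetical example, not from the letter): each Z4 symbol maps to a binary pair, turning a Z4-linear code into a generally nonlinear binary code while preserving distance, since Lee weight over Z4 equals Hamming weight of the binary image.

```python
# Illustrative sketch: the standard Gray map from Z4 to binary pairs.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_map(word):
    """Map a Z4 codeword to its binary image under the Gray map."""
    return tuple(bit for symbol in word for bit in GRAY[symbol])

def lee_weight(word):
    """Lee weight over Z4: wt(0)=0, wt(1)=wt(3)=1, wt(2)=2."""
    return sum(min(s, 4 - s) for s in word)

w = (1, 2, 3, 0)                 # hypothetical Z4 word
image = gray_map(w)              # -> (0, 1, 1, 1, 1, 0, 0, 0)
# The Gray map is an isometry: Lee weight equals binary Hamming weight.
assert lee_weight(w) == sum(image)
```

It is this distance-preserving but non-linear image that makes the codes useful as a randomizing carrier in the proposed scheme.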

  • Adaptive Backtracking Handover Scheme Using a Best-Fit COS Search Method for Improving Handover Efficiency in Wireless ATM Networks

    Hosang YUN, Kwangwook SHIN, Hyunsoo YOON

    PAPER-Networking and Architectures
    Vol. E86-D, No. 9, pp. 1495-1503

    The crucial handover elements in wireless ATM networks are handover delay and handover efficiency. Since research on handover in wireless ATM has until now focused mainly on minimizing handover delay, the resulting schemes use network resources inefficiently. In broadband wireless ATM networks, handover efficiency is critical to network capacity. In this paper, we propose a new handover scheme based on partial path rerouting, called the delay-limited best-fit backtracking scheme, which searches for a crossover switch that limits handover delay while maximizing handover efficiency. It uses a new crossover switch searching method, an adaptive backtracking search operating in a best-fit manner, to find the optimal crossover switch satisfying the given crossover switch condition. We evaluated the performance of the proposed handover scheme and show that it improves handover efficiency more than other handover schemes.

  • An Electronic Voting Protocol Preserving Voter's Privacy

    Hiroshi YAMAGUCHI, Atsushi KITAZAWA, Hiroshi DOI, Kaoru KUROSAWA, Shigeo TSUJII

    PAPER-Applications of Information Security Techniques
    Vol. E86-D, No. 9, pp. 1868-1878

    In this paper we present a new, two-centered electronic voting scheme that preserves privacy, universal verifiability, and robustness. An interesting property of our scheme is the use of double encryption with additively homomorphic encryption functions. In the two-centered scheme, the first center decrypts the ballots, checks the eligibility of the voters, and multiplies together the eligible votes, which are still encrypted in the cryptosystem of the second center. After the deadline is reached, the second center obtains the final tally by decrypting the accumulated votes. As such, neither center can learn the content of any individual vote, since each vote is hidden in the accumulated result; the privacy of the voters is therefore preserved. Our protocols, together with some existing protocols, allow everyone to verify that all valid votes are correctly counted. We apply the r-th residue cryptosystem as the homomorphic encryption function. Although decryption in the r-th residue cryptosystem requires an exhaustive search over all possible values, our experiments show that desirable performance can be achieved for large-scale elections.
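
The additive-homomorphic tallying idea can be sketched with a toy example. The paper uses the r-th residue cryptosystem; the Paillier cryptosystem below is purely a stand-in with the same additive property (and insecurely small primes, for illustration only): multiplying ciphertexts adds the underlying votes, so the tallier decrypts only the accumulated total, never an individual ballot.

```python
# Illustrative sketch: homomorphic vote accumulation with toy Paillier keys.
import math
import random

p, q = 499, 547                     # toy primes; real keys are far larger
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return (L(pow(c, lam, n2)) * mu) % n

votes = [1, 0, 1, 1, 0]             # 1 = yes, 0 = no
acc = 1
for b in (encrypt(v) for v in votes):
    acc = (acc * b) % n2            # ciphertext product = sum of plaintexts
assert decrypt(acc) == sum(votes)   # only the tally is ever decrypted
```

The scheme in the paper layers two such encryptions so that even the center doing eligibility checks never sees a plaintext vote.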

  • Organizing the LDD in Mobile Environments

    JeHyok RYU, MoonBae SONG, Chong-Sun HWANG

    PAPER-Networking and Architectures
    Vol. E86-D, No. 9, pp. 1504-1512

    In wireless mobile environments, data requests based on the locations of mobile clients (MCs) have increased. The requested data, called location-dependent data (LDD), may be uncertain if MCs use general distance terms like "near." Fuzzy theory allows us to represent such uncertainty without sharp boundaries. In this paper we quantify the fuzziness and propose a method for constructing LDD data regions from the degree of distance between LDD and MC locations. In simulation studies, we evaluate the LDD regions (LDRs) in various situations: MCs' extensive and intensive query patterns in data regions under two senses of "near," and civilized regions with regional features. Our performance evaluation shows that the number of database accesses with the proposed LDRs can be reduced in each case.
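
The fuzzy treatment of "near" can be sketched as follows (the membership function shape and distance thresholds are assumptions, not the paper's): instead of a sharp boundary, a location belongs to "near" to a degree in [0, 1] that falls off with distance.

```python
# Illustrative sketch: a fuzzy membership degree for the term "near".
def near_degree(distance_km: float, full: float = 1.0, zero: float = 5.0) -> float:
    """Degree in [0, 1] to which a location counts as 'near'."""
    if distance_km <= full:
        return 1.0
    if distance_km >= zero:
        return 0.0
    return (zero - distance_km) / (zero - full)   # linear fall-off in between

assert near_degree(0.5) == 1.0    # unambiguously near
assert near_degree(3.0) == 0.5    # partially near
assert near_degree(7.0) == 0.0    # not near
```

A data region for an LDD item can then be built from all locations whose membership degree exceeds a chosen threshold.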
