
Keyword Search Results

[Keyword] server (152 hits)

121-140 hits (of 152)

  • State Observers for Moore Machines and Generalized Adaptive Homing Sequences

    Koji WATANABE  Takeo IKAI  Kunio FUKUNAGA
    LETTER-Theory of Automata, Formal Language Theory
    Vol: E84-D No:4  Page(s): 530-533

    Off-line state identification methods for a sequential machine using a homing sequence or an adaptive homing sequence (AHS) are well known in automata theory. So far, however, there have been few studies of on-line state estimators such as the state observer (SO) used in linear system theory. In this paper, we construct such an SO for a Moore machine based on the state identification process by means of AHSs, and discuss the convergence property of the SO.
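
    A minimal sketch (in Python, not the authors' construction) of the on-line state-estimation idea for a Moore machine: the observer keeps the set of states consistent with the input/output history observed so far and shrinks it after every step. The example machine, its transition table, and the observed sequence are hypothetical.

    ```python
    # On-line state estimator for a Moore machine (illustrative only).
    # The machine is given by next_state[s][inp] and output[s]; the observer
    # tracks the set of states consistent with every (input, output) pair seen.

    def make_observer(next_state, output, states):
        candidates = set(states)            # initially, any state is possible
        def step(inp, observed_out):
            nonlocal candidates
            # advance every candidate, keep those whose output matches the observation
            candidates = {next_state[s][inp] for s in candidates
                          if output[next_state[s][inp]] == observed_out}
            return candidates               # the estimate has converged when one state remains
        return step

    # Hypothetical 3-state Moore machine over the input alphabet {0, 1}.
    next_state = {"A": {0: "B", 1: "C"}, "B": {0: "C", 1: "A"}, "C": {0: "A", 1: "B"}}
    output     = {"A": 0, "B": 1, "C": 1}

    observe = make_observer(next_state, output, next_state.keys())
    for inp, out in [(0, 1), (1, 1), (0, 1)]:   # observed input/output sequence
        print(observe(inp, out))                # {'B', 'C'} -> {'B'} -> {'C'}
    ```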

  • A Technique for On-Line Data Migration

    Jiahong WANG  Masatoshi MIYAZAKI  Jie LI
    PAPER-Databases
    Vol: E84-D No:1  Page(s): 113-120

    In recent years, more emphasis has been placed on the performance of massive databases. Database systems are often required not only to provide high throughput with rapid response times, but also to be fully available 24 hours a day, 7 days a week. Requirements for throughput and response time can be satisfied by upgrading the hardware. As a result, databases in the old hardware environment have to be moved to the new one. Moving a database, however, generally requires taking it off line for a long time, which is unacceptable for many applications. In this paper, a very practical and important subject is addressed: how to upgrade the hardware on line, i.e., how to move a database from an old hardware environment to a new one concurrently with users' reading and writing of the database. A technique for this purpose is proposed, and we have implemented a prototype based on it. Our experiments with the prototype show that, compared with the conventional off-line approach, the proposed technique gives a performance improvement of more than 85% in the query-bound environment and 40% in the update-bound environment.
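
    A minimal sketch of one common on-line migration pattern under the constraints described above: bulk-copy pages while the source stays writable, record pages dirtied during the copy, and re-copy them in catch-up rounds until the residue is small enough to finish under a brief lock. This is a generic illustration, not the paper's technique; the Database class and threshold are hypothetical.

    ```python
    # Generic on-line migration sketch (not the paper's technique).

    class Database:
        def __init__(self, pages):
            self.pages = dict(pages)
            self.dirty = set()                 # page-ids written since the last drain
        def write(self, pid, data):
            self.pages[pid] = data
            self.dirty.add(pid)
        def drain_dirty(self):
            d, self.dirty = self.dirty, set()
            return d

    def migrate_online(source, target, threshold=2):
        # initial bulk copy; users keep reading and writing `source`
        for pid, data in list(source.pages.items()):
            target.pages[pid] = data
        # catch-up rounds: re-copy whatever was modified during the previous pass
        while True:
            dirty = source.drain_dirty()
            if len(dirty) <= threshold:
                return dirty                   # copy this small residue under a short lock
            for pid in dirty:
                target.pages[pid] = source.pages[pid]

    src = Database({1: "a", 2: "b", 3: "c"})
    dst = Database({})
    src.write(2, "b'")                         # a concurrent update during migration
    print(migrate_online(src, dst), dst.pages)
    ```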

  • Evaluation of a Process Scheduling Policy for a WWW Server Based on Its Contents

    Sukanya SURANAUWARAT  Hideo TANIGUCHI
    PAPER-Software Systems
    Vol: E83-D No:9  Page(s): 1752-1761

    Traditional process schedulers in operating systems control the sharing of processor resources among processes using a fixed scheduling policy determined by the intended use of the computer system, such as real-time or time-sharing. Since processor allocation is based on a fixed policy rather than on the contents or behavior of processes, it can hinder effective use of the processor or unnecessarily extend the processing time of a process in some cases. We have already proposed a process scheduling policy that responds to the behavior of the multiple processes of a WWW server, in order to improve the server's response time. This policy gives priority over all other processes to any WWW server process that is predicted to be handling a text data request from a browser, by moving it to the head of the ready queue, where processes waiting for the processor are placed. In this paper, we present an experimental evaluation of the proposed scheduling policy with regard to the number of simultaneous accesses from browsers and the processor load of the server machine, and explain the results we obtained.
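
    A minimal sketch of the ready-queue manipulation described above: a process predicted to be a WWW server process handling a text request is enqueued at the head of the ready queue instead of the tail. The predictor and the process records are hypothetical stand-ins, not the authors' kernel implementation.

    ```python
    from collections import deque

    def is_text_request_handler(proc):
        # hypothetical predictor; the paper infers this from process behavior
        return proc.get("server") == "www" and proc.get("predicted_request") == "text"

    def enqueue(ready_queue, proc):
        if is_text_request_handler(proc):
            ready_queue.appendleft(proc)    # prioritized: dispatched next
        else:
            ready_queue.append(proc)        # normal FIFO order

    ready_queue = deque()
    enqueue(ready_queue, {"pid": 10, "server": None})
    enqueue(ready_queue, {"pid": 11, "server": "www", "predicted_request": "image"})
    enqueue(ready_queue, {"pid": 12, "server": "www", "predicted_request": "text"})
    print([p["pid"] for p in ready_queue])  # -> [12, 10, 11]
    ```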

  • An Efficient Buffer Management Scheme for Multimedia File System

    Jongho NANG  Sungkwan HEO
    PAPER-Software Systems
    Vol: E83-D No:6  Page(s): 1225-1236

    File system buffers provide memory space for data being transferred to and from disk and act as caches for recently used blocks, and the buffer manager usually reads data blocks ahead to minimize the number of disk accesses. However, if several multimedia files with different consumption rates are accessed simultaneously from a file system that uses an LRU buffer replacement strategy, the read-ahead blocks of a low-rate file are evicted from memory to make room for the data blocks of a high-rate file, and therefore must be reloaded from disk when they are actually referenced. This paper proposes and implements a new buffer cache management scheme for a multimedia file system and analyzes its performance by modifying the file system kernel of FreeBSD. In the proposed scheme, some buffers are initially allocated privately to each opened multimedia file, and these buffers are then reused for other data blocks of that file as they are loaded from disk. Moreover, the number of private buffers allocated to a file is dynamically adjusted according to its data rate. An admission control scheme is also proposed to prevent the opening of a new file that may overload the file system. Experimental results comparing the proposed scheme with the original FreeBSD and a simple CTL-based model show that the proposed buffer management scheme can support the real-time playback of several multimedia files with various data rates concurrently, without the help of real-time CPU and disk scheduling.
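
    A minimal sketch of the private, rate-proportional buffer pools and the admission check described above. The sizing rule (one period of read-ahead per file) and all constants are assumptions for illustration, not the FreeBSD kernel implementation.

    ```python
    # Each opened multimedia file gets a private pool of buffers proportional to
    # its data rate; a new open is admitted only if enough free buffers remain.

    TOTAL_BUFFERS = 256
    BUFFER_SIZE   = 64 * 1024        # bytes per buffer (hypothetical)
    PERIOD        = 1.0              # seconds of read-ahead to keep per file (hypothetical)

    class BufferManager:
        def __init__(self):
            self.free = TOTAL_BUFFERS
            self.pools = {}          # file name -> number of private buffers

        def open_file(self, name, data_rate):
            need = max(1, int(data_rate * PERIOD / BUFFER_SIZE))
            if need > self.free:
                raise RuntimeError("admission denied: file system would be overloaded")
            self.free -= need
            self.pools[name] = need          # these buffers are recycled for this file only
            return need

        def close_file(self, name):
            self.free += self.pools.pop(name)

    mgr = BufferManager()
    print(mgr.open_file("movie.mpg", data_rate=6_000_000 / 8))   # ~750 KB/s stream
    print(mgr.open_file("clip.mpg",  data_rate=1_500_000 / 8))
    print(mgr.free)
    ```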

  • VLRU: Buffer Management in Client-Server Systems

    Sung-Jin LEE  Chin-Wan CHUNG
    PAPER-Databases
    Vol: E83-D No:6  Page(s): 1245-1254

    In a client-server system, when LRU or one of its variants is used as the buffer replacement strategy on both the client and the server, the cache performance on the server side is very poor, mainly because of pages duplicated in both systems. This paper introduces a server buffer replacement strategy that uses the replaced page-id, rather than the requested page-id, as the primary information for its operations. The importance of the corresponding pages in the server cache is decided according to the replaced page-ids delivered from clients to the server, so that the locations of the pages are altered. Consequently, if a client uses LRU as its buffer replacement strategy, the server cache is seen by the client as a long virtual client LRU cache extended to the server. Since the replaced page-id is sent to the server only by piggybacking whenever a new page fetch request is sent, delivering the replaced page-id is simple and induces minimal overhead. We show that the proposed strategy exhibits good performance characteristics in diverse situations, such as single and multiple clients, as well as with various access patterns.
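
    A minimal sketch of the piggybacking idea: each fetch request carries the id of the page the client has just replaced, and the server promotes that page in its own cache so the two caches behave like one long LRU chain. The data structures and the read_from_disk stand-in are hypothetical, not the paper's VLRU implementation.

    ```python
    from collections import OrderedDict

    def read_from_disk(page_id):                        # stand-in for real I/O
        return f"data-{page_id}"

    class ServerCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.pages = OrderedDict()                  # page_id -> data, in LRU order

        def fetch(self, page_id, replaced_page_id=None):
            if replaced_page_id in self.pages:
                self.pages.move_to_end(replaced_page_id)  # promote the client's victim
            data = self.pages.pop(page_id, None)
            if data is None:
                data = read_from_disk(page_id)
            self.pages[page_id] = data
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)          # evict the least recently used page
            return data

    cache = ServerCache(capacity=3)
    cache.fetch(1); cache.fetch(2); cache.fetch(3)
    cache.fetch(4, replaced_page_id=1)                  # client just evicted page 1
    print(list(cache.pages))                            # page 1 kept hot, page 2 evicted
    ```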

  • An Analysis of WWW Server Status by Packet Monitoring

    Yutaka NAKAMURA  Ken-ichi CHINEN  Suguru YAMAGUCHI  Hideki SUNAHARA
    PAPER
    Vol: E83-D No:5  Page(s): 1012-1019

    The management of WWW servers still relies on the expertise and heuristics of administrators, because a comprehensive understanding of server behavior is lacking. To keep a WWW server in good condition, administrators need to investigate its state in real time. It is therefore desirable to provide a measurement application that enables WWW server administrators to monitor their servers in the actual operational environment. We developed a measurement application called ENMA (Enhanced Network Measurement Agent), which is specially designed for WWW server state analysis. Furthermore, we applied this application to a large-scale WWW server operation to show its implementation and advantages. In this paper, we analyze WWW server states based on precise monitoring of the performance indices of the WWW system to help with server management.

  • A New Efficient Server-Aided RSA Secret Computation Protocol against Active Attacks

    Shin-Jia HWANG  Chin-Chen CHANG
    LETTER-Information Security
    Vol: E83-A No:3  Page(s): 567-570

    In this paper, we propose a new secure server-aided RSA secret computation protocol which guards against not only the attacks in [1],[2],[15],[18] but also the new powerful active attacks in [3],[4]. The new protocol is also efficient enough to support a high security level.
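
    A minimal sketch of the general server-aided RSA idea only: the client splits its secret exponent and lets an untrusted, more powerful server perform most of the modular exponentiation. The naive additive split below is purely illustrative; it is not the protocol proposed in the paper and is not secure against the attacks discussed there.

    ```python
    import random

    # Toy RSA key (insecure sizes, for illustration only).
    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17
    d = pow(e, -1, phi)                            # secret exponent (Python 3.8+)

    def server_exponentiate(m, exponent, n):       # the untrusted, powerful helper
        return pow(m, exponent, n)

    def client_sign(message, d, n):
        d_client = random.randrange(1, 2**16)      # small share kept by the client
        d_server = (d - d_client) % phi            # share handed to the server
        partial = server_exponentiate(message, d_server, n)
        return (partial * pow(message, d_client, n)) % n

    m = 65
    signature = client_sign(m, d, n)
    print(signature == pow(m, d, n), pow(signature, e, n) == m)   # True True
    ```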

  • Trunk Reservation Effects on Multi-Server System with Batch Arrivals of Loss and Delay Customers

    Ken'ichi KAWANISHI  Yoshitaka TAKAHASHI  Toyofumi TAKENAKA
    PAPER-Signaling System and Communication Protocol
    Vol: E83-B No:1  Page(s): 20-29

    A multi-server system with trunk reservation is studied. The system is offered two types of customers (class-1 and class-2). They arrive in independent batch Poisson streams and have exponentially distributed service times. Class-1 customers are lost or rejected if they find all S servers busy on arrival. Class-2 customers use at most S'=S-R servers and enter a queue of capacity N if they find the number of idle servers less than or equal to R on arrival. Here, R is the number of servers reserved for class-1 customers. An example of such a system is realized in NTT's facsimile communications network, F-NET.
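
    A minimal sketch of the admission rule stated above (not the paper's analysis): class-1 customers are blocked only when all S servers are busy, while class-2 customers may not start service when R or fewer servers are idle and instead wait in a queue of capacity N. Parameter values are hypothetical.

    ```python
    S, R, N = 10, 2, 5                 # servers, reserved servers, queue capacity

    class TrunkReservationSystem:
        def __init__(self):
            self.busy = 0              # number of busy servers
            self.queue = 0             # number of waiting class-2 customers

        def arrive(self, klass):
            idle = S - self.busy
            if klass == 1:                         # class-1: pure loss over all S servers
                if idle == 0:
                    return "lost"
                self.busy += 1
                return "served"
            if idle > R:                           # class-2: uses at most S' = S - R servers
                self.busy += 1
                return "served"
            if self.queue < N:                     # R or fewer idle: wait if there is room
                self.queue += 1
                return "queued"
            return "lost"

    system = TrunkReservationSystem()
    system.busy = 8                                # leaves exactly R = 2 idle servers
    print(system.arrive(2), system.arrive(1), system.arrive(1), system.arrive(1))
    # -> queued served served lost
    ```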

  • Dead-Beat Chaos Synchronization and Its Applications to Image Communications

    Teh-Lu LIAO  Nan-Sheng HUANG
    LETTER-Communication Theory and Signals
    Vol: E82-A No:8  Page(s): 1669-1673

    This paper presents a novel dead-beat synchronization scheme and applies it to communications in discrete-time chaotic systems. The well-known Henon system is considered as an illustrative example. In addition, a Henon-based image processing application demonstrates the effectiveness of the proposed scheme.
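
    A minimal numerical illustration (not the paper's scheme) of why exact, finite-step ("dead-beat") synchronization is possible for the Henon map: if the drive transmits x_n, the receiver recovers the hidden state exactly from two consecutive samples, since y_n = x_{n+1} - 1 + a*x_n^2. The initial conditions and sample count are arbitrary.

    ```python
    # Dead-beat state reconstruction for the Henon map:
    #   x_{n+1} = 1 - a*x_n**2 + y_n,   y_{n+1} = b*x_n
    a, b = 1.4, 0.3                            # classical Henon parameters

    def henon_step(x, y):
        return 1.0 - a * x * x + y, b * x

    # drive: generate the transmitted x-samples (keep the true states for comparison)
    x, y = 0.1, 0.2
    transmitted, states = [], []
    for _ in range(6):
        transmitted.append(x)
        states.append((x, y))
        x, y = henon_step(x, y)

    # receiver: recover y_n exactly from two consecutive transmitted samples
    for n in range(len(transmitted) - 1):
        x_n, x_next = transmitted[n], transmitted[n + 1]
        y_rec = x_next - 1.0 + a * x_n * x_n
        print(abs(y_rec - states[n][1]))       # 0.0 (up to floating-point rounding)
    ```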

  • A Connectionless Server Using AAL5 in Public ATM Networks

    Woojin SEOK  Okhwan BYEON  Changhwan OH  Kiseon KIM
    PAPER
    Vol: E82-A No:6  Page(s): 994-1001

    Since an ATM network is connection-oriented, additional operations are required to provide connectionless data service over it. There are many ways to support connectionless service in an ATM network: ATM LAN Emulation, Classical IP and ARP over ATM, the indirect approach, the direct approach, and IP switching. It is known that the direct approach is suited to public networks; in this approach, a connectionless server provides the connectionless service. Two methods have been presented for forwarding frames in the connectionless server: the streaming forwarding method and the reassembly forwarding method. The reassembly forwarding method works well with AAL5, which is more efficient than AAL3/4 in terms of ease of use and lower overhead. This paper proposes an algorithm that decreases frame loss through a buffer management scheme working with AAL5. This paper also investigates the structure of the proposed connectionless server and compares its performance with that of a conventional connectionless server through simulations. The proposed connectionless server shows lower frame loss and transfer delay than the conventional connectionless server.
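
    A minimal sketch of reassembly forwarding: cells of an AAL5 frame are buffered per virtual connection until the last cell of the frame arrives (AAL5 marks it in the PTI field), and an incomplete frame is discarded when buffer space runs out. The cell representation and the drop policy are simplified stand-ins, not the buffer management algorithm proposed in the paper.

    ```python
    BUFFER_CAPACITY = 8                     # total cells the server can hold (hypothetical)

    class ReassemblyForwarder:
        def __init__(self):
            self.partial = {}               # vc -> cells buffered for the current frame
            self.forwarded = []
            self.dropped_frames = 0

        def buffered_cells(self):
            return sum(len(cells) for cells in self.partial.values())

        def receive(self, vc, payload, last_cell):
            if self.buffered_cells() >= BUFFER_CAPACITY:
                self.dropped_frames += 1            # discard the incomplete frame on this VC
                self.partial.pop(vc, None)
                return
            self.partial.setdefault(vc, []).append(payload)
            if last_cell:                           # frame complete: forward it, free the buffer
                self.forwarded.append(b"".join(self.partial.pop(vc)))

    srv = ReassemblyForwarder()
    for i in range(3):
        srv.receive(vc=1, payload=b"A", last_cell=(i == 2))
    print(srv.forwarded, srv.dropped_frames)        # [b'AAA'] 0
    ```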

  • Estimation of Network Characteristics and Its Use in Improving Performance of Network Applications

    Ahmed ASHIR  Glenn MANSFIELD  Norio SHIRATORI
    PAPER
    Vol: E82-D No:4  Page(s): 747-755

    Network applications such as FTP, WWW, and mirroring are presently operated with little or no knowledge about the characteristics of the underlying network. These applications could operate more efficiently if the characteristics of the network were known and/or made available to the application concerned. But network characteristics are hard to come by. The IP Performance Metrics working group (IETF-IPPM-WG) is developing a set of metrics that will characterize Internet data delivery services (networks). Some tools are being developed for measuring these metrics. These generally involve active measurements or require modifications in applications. Both techniques have their drawbacks. In this work, we show a new and more practical approach to estimating network characteristics. It involves gathering and analyzing the network's experience. This experience takes the form of traffic statistics, information distilled from management-related activities, and ubiquitously available logs (squid access logs, mail logs, ftp logs, etc.) of network applications. An analysis of this experience provides an estimate of the characteristics of the underlying network. To evaluate the concept we have developed and experimented with a system in which the network characteristics are generated by analyzing the logs and traffic statistics. The network characteristics are made available to network clients and administrators by Network Performance Metric (NPM) servers, which are accessed using standard network management protocols. Results of the evaluation are presented, and a framework for efficient operation of network applications using the network characteristics is outlined.
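
    A minimal sketch of the passive, log-based idea: per-server transfer characteristics are estimated from records that ordinary access logs already contain (elapsed time and bytes per request) rather than from active probing. The record tuples and server names below are hypothetical simplifications, not the paper's NPM system.

    ```python
    from collections import defaultdict

    # (server, elapsed_ms, bytes) tuples, as a cache/proxy access log might provide them
    records = [
        ("ftp.example.org", 820, 1_200_000),
        ("ftp.example.org", 640,   900_000),
        ("www.example.net", 150,    40_000),
        ("www.example.net", 180,    55_000),
    ]

    totals = defaultdict(lambda: [0, 0])            # server -> [total_ms, total_bytes]
    for server, ms, nbytes in records:
        totals[server][0] += ms
        totals[server][1] += nbytes

    for server, (ms, nbytes) in totals.items():
        kbytes_per_s = nbytes / 1024 / (ms / 1000)
        print(f"{server}: ~{kbytes_per_s:.0f} KB/s observed")
    ```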

  • Group Two-Phase Locking: A Scalable Data Sharing Protocol

    Sujata BANERJEE  Panos K. CHRYSANTHIS
    PAPER-Concurrency Control
    Vol: E82-D No:1  Page(s): 236-245

    The advent of high-speed networks with quality-of-service guarantees will enable the deployment of data-server distributed systems over wide-area networks. Most implementations of data-server systems have been over local-area networks. It is therefore important, in this context, to study the performance of existing distributed data management protocols in the new networking environment, identify the performance bottlenecks, and develop protocols that are capable of taking advantage of high-speed networking technology. In this paper, we examine and compare the scalability of the server-based two-phase locking protocol (s-2PL) and the group two-phase locking protocol (g-2PL). The s-2PL protocol is the most widely used concurrency control protocol, while the g-2PL protocol is an optimized version of s-2PL tailored for high-speed wide-area network environments. The g-2PL protocol reduces the effect of network latency by message grouping, client-end caching, and data migration. Detailed simulation results indicate that g-2PL indeed scales better than s-2PL; for example, up to 28% improvement in response time is reported.
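
    A minimal sketch of the message-grouping ingredient attributed to g-2PL above: lock requests bound for the remote lock server are accumulated and shipped as one batch, so a single wide-area round-trip covers many requests. This illustrates grouping only, not the full g-2PL protocol; the class and group size are hypothetical.

    ```python
    class GroupingLockClient:
        def __init__(self, send_batch, group_size=4):
            self.send_batch = send_batch    # performs one round-trip to the lock server
            self.group_size = group_size
            self.pending = []

        def request_lock(self, txn_id, item, mode):
            self.pending.append((txn_id, item, mode))
            if len(self.pending) >= self.group_size:
                self.flush()

        def flush(self):                    # also called when a grouping timer expires
            if self.pending:
                self.send_batch(self.pending)
                self.pending = []

    sent = []
    client = GroupingLockClient(send_batch=sent.append, group_size=3)
    for item in ["x", "y", "z", "w"]:
        client.request_lock(txn_id=1, item=item, mode="S")
    client.flush()
    print(len(sent), sent)                  # 2 messages instead of 4 round-trips
    ```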

  • An Ultra High-Speed File Server with 105 Mbytes/s Read Performance Based on a Personal Computer

    Tetsuo TSUJIOKA  Tetsuya ONODA
    PAPER-Network Design, Operation, and Management
    Vol: E81-B No:12  Page(s): 2503-2508

    This paper proposes a novel ultra high-speed file server based on a personal computer (PC) to provide the instantaneous delivery of huge files, such as movie files, graphic images, and computer programs, over high-speed networks. In order to improve the sustained sequential read speed from arrays of hard drives to host memory in the server, two key techniques are proposed: "multi-stage striping (MSS)" and the "sequential file system (SFS)." An experimental file server based on a general-purpose PC is constructed and its performance is measured. The results show that the server offers ultra-high read speeds, up to 105 Mbytes/s, with just 8 hard drives.
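
    A minimal sketch of plain round-robin striping, the basic mechanism that striped read-out from a drive array builds on; the paper's multi-stage striping and sequential file system are more elaborate, and the stripe-unit size here is a hypothetical value.

    ```python
    STRIPE_UNIT = 64 * 1024                         # bytes per stripe unit (hypothetical)

    def stripe_layout(file_size, num_drives):
        """Map each stripe unit of a file to (drive, offset_on_drive), round-robin."""
        layout = []
        num_units = (file_size + STRIPE_UNIT - 1) // STRIPE_UNIT
        for unit in range(num_units):
            drive = unit % num_drives               # consecutive units go to consecutive drives
            offset = (unit // num_drives) * STRIPE_UNIT
            layout.append((unit, drive, offset))
        return layout

    for unit, drive, offset in stripe_layout(file_size=400 * 1024, num_drives=8):
        print(f"unit {unit} -> drive {drive}, offset {offset}")
    ```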

  • Waiting-Time Distribution for a Finite-Capacity Single-Server Queue with Constant Service and Vacation Times

    Yoshiaki SHIKATA  Yoshitaka TAKAHASHI
    PAPER-Communication Theory
    Vol: E81-B No:11  Page(s): 2141-2146

    We consider a finite-capacity single-server queue with constant service and vacation times, as arises in the time-division multiple access (TDMA) scheme. First we derive the probability that j customers remain in the queue when a test customer arrives. Using this probability we then evaluate the probability that a test customer who arrives during a vacation or service time has to wait in the queue for longer than a given time. From these results, we obtain the waiting-time distribution for a customer arriving at an arbitrary time. We also show a practical application to wireless TDMA communications systems.
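
    The paper's results are analytical; the sketch below is only a small Monte Carlo check of one concrete reading of the model (constant service time D, constant vacation time V taken whenever the queue is empty, waiting room of size K, and Poisson arrivals, which are an added assumption). All parameter values are hypothetical.

    ```python
    import random

    random.seed(1)
    LAM, D, V, K, N = 0.6, 1.0, 1.0, 8, 100_000   # arrival rate, service, vacation, waiting room, arrivals

    def simulate():
        arrivals, t = [], 0.0
        for _ in range(N):
            t += random.expovariate(LAM)
            arrivals.append(t)

        waits, lost, queue, i, epoch = [], 0, [], 0, 0.0
        while i < len(arrivals) or queue:
            while i < len(arrivals) and arrivals[i] <= epoch:
                if len(queue) < K:
                    queue.append(arrivals[i])      # join the waiting room
                else:
                    lost += 1                      # finite capacity: customer lost
                i += 1
            if queue:
                waits.append(epoch - queue.pop(0)) # service begins at this epoch
                epoch += D
            elif i < len(arrivals):
                epoch += V                         # queue empty: take another vacation
            else:
                break
        return waits, lost

    waits, lost = simulate()
    waits.sort()
    for q in (0.5, 0.9, 0.99):
        print(f"{q:.0%} of customers wait <= {waits[int(q * len(waits)) - 1]:.2f}")
    print("loss ratio ~", lost / N)
    ```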

  • Realizing the Vision of Multiwavelength Optical Networking

    Richard E. WAGNER
    INVITED PAPER-Photonic Networking
    Vol: E81-C No:8  Page(s): 1159-1166

    The Multiwavelength Optical Networking (MONET) program is a consortium of industrial partners working together with the intent to demonstrate the key capabilities needed for configurable WDM networks. This involves integrating WDM technologies with optical switching technologies to provide a managed, high-capacity, national-scale WDM server layer that transports optical signals transparently across multiple interworking subnetworks.

  • Synchronous RAID5 with Region-Based Layout and Buffer Analysis in Video Storage Servers

    Chan-Ik PARK  Deukyoon KANG
    PAPER-Computer Systems
    Vol: E81-D No:8  Page(s): 813-821

    Disk arrays are widely accepted as a disk subsystem for video servers due to their high throughput and high concurrency. When used in video servers, RAID-like disk arrays are usually managed in either RAID level 3 (a request is handled by all the disks in the system) or RAID level 5 (a request is handled by some number of disks, depending on the request size): either only one video stream is handled at a time in RAID level 3, or a number of video streams are handled independently at the same time in RAID level 5. Note that RAID level 3 is inappropriate for handling a large number of video streams, and RAID level 5 is inefficient for handling multiple video streams, since handling continuous video streams is an inherently synchronous operation. In this paper, we propose a new video data layout scheme called region-based layout and a synchronous management of RAID5 called synchronous RAID5 for disk arrays used in video servers. We show that the amount of buffering required to support a given number of video requests can be reduced by integrating the region-based layout with the synchronous RAID5 scheme. Group Sweeping Scheduling (GSS) is used as the basic disk scheduling. We have shown through analysis that the proposed scheme is superior to existing schemes with respect to buffer requirements.
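
    The paper's region-based layout and synchronous RAID5 management are specific designs; the sketch below only illustrates the generic building block of placing consecutive blocks of a video stream region by region across the array, so that a service round touches a predictable group of disks. The region size and mapping rule are assumptions for illustration.

    ```python
    def region_layout(num_blocks, num_disks, disks_per_region):
        """Map each block of a stream to (region, disk): blocks fill one region, then the next."""
        num_regions = num_disks // disks_per_region
        placement = []
        for b in range(num_blocks):
            region = (b // disks_per_region) % num_regions
            disk = region * disks_per_region + (b % disks_per_region)
            placement.append((b, region, disk))
        return placement

    for block, region, disk in region_layout(num_blocks=12, num_disks=8, disks_per_region=4):
        print(f"block {block}: region {region}, disk {disk}")
    ```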

  • VOD Data Storage in Multimedia Environments

    Jihad BOULOS  Kinji ONO
    PAPER-Heterogeneous Multimedia Servers
    Vol: E81-B No:8  Page(s): 1656-1665

    Video-on-Demand (VOD) servers are becoming feasible. These servers are a building component in a heterogeneous multimedia environment but have voluminous data to store and manage. If only disk-based secondary storage systems were used to store and manage this huge amount of data, the system cost would be prohibitively high. A tape-based tertiary storage system seems to be a reasonable solution for lowering the cost of storing and managing this continuous data. However, using a tertiary storage system to store large continuous data introduces several issues: mainly the replacement policy on the disks, the decomposition and placement of continuous data chunks on tapes, and the scheduling of multiple requests for materializing objects from tapes to disks. In this paper we address these issues and propose solutions based on heuristics that we experimented with in a simulator.

  • A Simple Relation between Loss Performance and Buffer Contents in a Statistical Multiplexer with Periodic Vacations

    Koohong KANG  Bart STEYAERT  Cheeha KIM
    LETTER-Communication Networks and Services
    Vol: E80-B No:11  Page(s): 1749-1752

    In this Letter, we investigate the loss performance of a discrete-time single-server queueing system with periodic vacations, with which we are often confronted in traffic control, such as cell scheduling or priority control schemes, at ATM nodes. Explicit expressions are derived for the cell loss ratio in terms of the distribution of the buffer contents in an infinite capacity queue.
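
    The Letter treats this system analytically; the sketch below is only a small discrete-time simulation of one reading of "periodic vacations" (the server is unavailable for a fixed number of slots in every period) that can be used to check such expressions numerically. The Bernoulli arrival process and all parameter values are assumptions.

    ```python
    import random

    random.seed(7)
    P, V, B, ARRIVAL_P, SLOTS = 8, 3, 20, 0.55, 1_000_000
    # period length, vacation slots per period, buffer size, arrival probability, slots simulated

    buffer_len, arrived, lost = 0, 0, 0
    for slot in range(SLOTS):
        if random.random() < ARRIVAL_P:          # at most one cell arrives per slot
            arrived += 1
            if buffer_len < B:
                buffer_len += 1
            else:
                lost += 1                        # buffer full: the cell is lost
        if slot % P >= V and buffer_len > 0:     # serving part of the period: one cell per slot
            buffer_len -= 1

    print("cell loss ratio ~", lost / arrived)
    ```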

  • Performance Comparisons of Approaches for Providing Connectionless Service over ATM Networks

    Doo Seop EOM  Masayuki MURATA  Hideo MIYAHARA
    PAPER-Communication protocol
    Vol: E80-B No:10  Page(s): 1454-1465

    Connectionless data from existing network applications composes a large portion of the workload during early ATM deployment, and is likely to make up an important portion of ATM's workload even in the long term. For providing connectionless service over an ATM network, we compare two approaches adopted by the International Telecommunication Union-Telecommunication (ITU-T) as generic approaches: the indirect approach and the direct approach. The main subject of this paper is to compare the network costs of the two approaches by taking into account several cost factors such as transmission links, buffers, and, in the case of the direct approach, connectionless servers. Since the cost of the direct approach depends heavily on the configuration of the virtual connectionless overlay network, we propose a new heuristic algorithm to construct an effective connectionless overlay network topology. The proposed algorithm determines an optimal number of connectionless servers and their locations so as to minimize the network cost while satisfying QoS requirements such as maximum delay time and packet loss probability. Through numerical examples, we compare the indirect and direct approaches, the latter constructed by means of our proposed algorithm.
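
    A generic greedy placement sketch, not the paper's heuristic algorithm: given inter-node delays, repeatedly place a connectionless server at the node that brings the largest number of still-uncovered nodes within the delay bound, until every node is covered. The topology, delay matrix, and bound are hypothetical.

    ```python
    MAX_DELAY = 10                       # QoS delay bound (hypothetical units)

    # symmetric delay matrix for a small hypothetical network
    delay = {
        "A": {"A": 0,  "B": 4,  "C": 9, "D": 15, "E": 12},
        "B": {"A": 4,  "B": 0,  "C": 6, "D": 11, "E": 14},
        "C": {"A": 9,  "B": 6,  "C": 0, "D": 5,  "E": 8},
        "D": {"A": 15, "B": 11, "C": 5, "D": 0,  "E": 7},
        "E": {"A": 12, "B": 14, "C": 8, "D": 7,  "E": 0},
    }

    def place_servers(delay, bound):
        nodes = set(delay)
        uncovered, servers = set(nodes), []
        while uncovered:
            # pick the location that covers the most still-uncovered nodes within the bound
            best = max(nodes, key=lambda s: len({n for n in uncovered if delay[s][n] <= bound}))
            covered = {n for n in uncovered if delay[best][n] <= bound}
            if not covered:              # the bound cannot be met for the remaining nodes
                break
            servers.append(best)
            uncovered -= covered
        return servers, uncovered

    print(place_servers(delay, MAX_DELAY))   # (['C'], set())
    ```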

  • Active Attacks on Two Efficient Server-Aided RSA Secret Computation Protocols

    Gwoboa HORNG
    LETTER-Information Security
    Vol: E80-A No:10  Page(s): 2038-2039

    Recently, two new efficient server-aided RSA secret computation protocols were proposed. They are efficient and can guard against some active attacks. In this letter, we propose two multi-round active attacks which can effectively reduce their security level or even break them.
