
Keyword Search Result

[Keyword] data caching (2 hits)
  • Probabilistic Broadcast-Based Cache Invalidation Scheme for Location Dependent Data in Mobile Environments

    Shigeaki TAGASHIRA  Yutaka KAMINISHI  Yutaka ARAKAWA  Teruaki KITASUKA  Akira FUKUDA  

     
    PAPER-Data Engineering, Web Information Systems

    Vol: E94-D No:8  Page(s): 1590-1601

    Data caching is widely known as an effective power-saving technique in which mobile devices use local caches instead of the original data placed on a server, in order to reduce the power consumption required for network accesses. In such data caching, a cache invalidation mechanism is important for preventing these devices from unintentionally accessing invalid data. In this paper, we propose a broadcast-based protocol for cache invalidation in a location-aware system. The proposed protocol is designed to reduce the access time required to obtain the necessary invalidation reports through broadcast media and to avoid client-side sleep fragmentation while retrieving the reports. In the proposed protocol, a Bloom filter is used as the data structure of an invalidation report, in order to probabilistically check whether cached items have been invalidated. Furthermore, we propose three broadcast scheduling methods intended to achieve flexible broadcasting structured by the Bloom filter: the fragmentation avoidance scheduling method (FASM), the metrics balancing scheduling method (MBSM), and the minimizing access time scheduling method (MASM). The broadcast schedule is arranged for consecutive access to geographically neighboring invalidation reports. The effectiveness of the proposed methods is evaluated by simulation. The results indicate that the MBSM and MASM achieve high scheduling performance: compared to the FASM, the MBSM reduces the access time by 34% while increasing the number of fragmentations in the resulting schedule by 40%, and the MASM reduces the access time by 40% along with an 85% increase in the number of fragmentations.
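
    As a rough illustration of the Bloom-filter idea behind such invalidation reports (a minimal sketch, not the paper's actual report format or protocol), the following Python snippet builds a Bloom filter over the IDs of invalidated items and lets a client check, with possible false positives but no false negatives, whether a cached item may have been invalidated. The item identifiers and filter parameters are hypothetical.

    ```python
    # Minimal Bloom-filter sketch of a probabilistic cache-invalidation report.
    # Illustrative only; item IDs and parameters below are hypothetical.
    import hashlib

    class InvalidationReport:
        """Bloom filter over the IDs of invalidated data items."""

        def __init__(self, num_bits: int = 1024, num_hashes: int = 4):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits)  # all zeros initially

        def _positions(self, item_id: str):
            # Derive k bit positions from salted hashes of the item ID.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item_id}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.num_bits

        def add(self, item_id: str) -> None:
            """Record an invalidated item in the report."""
            for pos in self._positions(item_id):
                self.bits[pos] = 1

        def may_be_invalid(self, item_id: str) -> bool:
            """True if the cached item *may* have been invalidated
            (false positives possible, false negatives impossible)."""
            return all(self.bits[pos] for pos in self._positions(item_id))

    # Client-side check: re-fetch a cached item only if the report flags it.
    report = InvalidationReport()
    report.add("region-12/poi-34")
    if report.may_be_invalid("region-12/poi-34"):
        pass  # treat the cached copy as stale and re-fetch from the server
    ```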

  • Group Two-Phase Locking: A Scalable Data Sharing Protocol

    Sujata BANERJEE  Panos K. CHRYSANTHIS  

     
    PAPER-Concurrency Control

    Vol: E82-D No:1  Page(s): 236-245

    The advent of high-speed networks with quality-of-service guarantees will enable the deployment of data-server distributed systems over wide-area networks. Most implementations of data-server systems have been over local-area networks. It is therefore important, in this context, to study the performance of existing distributed data management protocols in the new networking environment, identify the performance bottlenecks, and develop protocols that can take advantage of high-speed networking technology. In this paper, we examine and compare the scalability of the server-based two-phase locking protocol (s-2PL) and the group two-phase locking protocol (g-2PL). The s-2PL protocol is the most widely used concurrency control protocol, while the g-2PL protocol is an optimized version of s-2PL tailored for high-speed wide-area network environments. The g-2PL protocol reduces the effect of network latency through message grouping, client-end caching, and data migration. Detailed simulation results indicate that g-2PL indeed scales better than s-2PL; for example, up to a 28% improvement in response time is reported.
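
    As a rough illustration of the message-grouping idea (a minimal sketch under assumed latencies, not the paper's protocol), the following Python snippet contrasts the lock-acquisition delay of requesting locks one round trip at a time, as in a server-based 2PL style, with sending a transaction's lock requests as a single grouped message. The LockServer class and the latency constant are hypothetical.

    ```python
    # Sketch contrasting per-lock round trips with grouped lock requests.
    # LockServer and WAN_LATENCY_MS are hypothetical illustrations only.

    WAN_LATENCY_MS = 50  # assumed one-way wide-area latency

    class LockServer:
        def __init__(self):
            self.locks = {}  # data item -> holding transaction id

        def acquire(self, txn_id, items):
            """Grant all requested locks at once, or none (no blocking/queueing)."""
            if any(item in self.locks for item in items):
                return False
            for item in items:
                self.locks[item] = txn_id
            return True

    def per_lock_delay(items):
        # s-2PL style: one request/reply round trip per lock.
        return 2 * WAN_LATENCY_MS * len(items)

    def grouped_delay(items):
        # g-2PL style: the transaction's lock requests travel in one message.
        return 2 * WAN_LATENCY_MS

    server = LockServer()
    items = ["x", "y", "z"]
    assert server.acquire("T1", items)
    print(per_lock_delay(items), grouped_delay(items))  # 300 vs 100 ms
    ```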