
Keyword Search Result

[Keyword] cache consistency (5 hits)

1-5 of 5 hits
  • Exploiting Versions for Transactional Cache Consistency

    Heum-Geun KANG

    PAPER-Database

    Vol: E88-D No:6  Page(s): 1191-1198

    The efficiency of algorithms managing data caches has a major impact on the performance of systems that utilize client-side data caching. In these systems, two versions of data can be maintained without additional overhead by exploiting the replication of data in the server's buffer and clients' caches. In this paper, we present a new cache consistency algorithm employing versions: Two Versions-Callback Locking (2V-CBL). Our experimental results indicate that 2V-CBL provides good performance, and in particular outperforms a leading cache consistency algorithm, Asynchronous Avoidance-based Cache Consistency, when some clients run only read-only transactions.
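    A minimal sketch in Python of the two-version idea the abstract describes. The class and message names are hypothetical and the 2V-CBL message flow is not reproduced; the sketch only illustrates how the server buffer can hold the newest committed version while clients keep serving read-only transactions from the older version already sitting in their caches.

    class Server:
        def __init__(self):
            self.buffer = {}     # item -> latest committed value (the "new" version)
            self.sharers = {}    # item -> set of client ids caching the item

        def commit_update(self, writer_id, item, value):
            # Install the new version in the server buffer; other clients still
            # hold the previous version, so two versions coexist at no extra cost.
            self.buffer[item] = value
            for cid in self.sharers.get(item, set()) - {writer_id}:
                print(f"callback -> client {cid}: newer version of {item} exists")

    class Client:
        def __init__(self, cid, server):
            self.cid, self.server, self.cache = cid, server, {}

        def read(self, item, read_only=True):
            if read_only and item in self.cache:
                return self.cache[item]              # old version, no blocking
            value = self.server.buffer.get(item)     # fetch the current version
            self.cache[item] = value
            self.server.sharers.setdefault(item, set()).add(self.cid)
            return value

    server = Server()
    server.buffer["x"] = 0
    c1, c2 = Client(1, server), Client(2, server)
    print(c1.read("x"), c2.read("x"))      # both cache the current version (0)
    server.commit_update(1, "x", 1)        # c2's cached copy becomes the "old" version
    print(c2.read("x"))                    # read-only access still served from cache: 0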

  • Caching and Concurrency Control in a Wireless Mobile Computing Environment

    SangKeun LEE

    PAPER-Databases

    Vol: E85-D No:8  Page(s): 1284-1296

    Caching of frequently accessed data has been shown to be a useful technique for reducing congestion on the narrow bandwidth of wireless channels. However, traditional client/server strategies for supporting transactional cache consistency, which require extensive communication between a client and a server, are not appropriate in a wireless mobile database. This paper proposes two simple but effective transactional cache consistency protocols for mobile read-only transactions, built on broadcast-based solutions to the cache invalidation problem. The novelty of our approach is that the consistency check on accessed data and the commitment protocol are implemented in a truly distributed fashion as an integral part of the cache invalidation process. The applicability of the proposed techniques is also examined through an analytical study.
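    A minimal sketch in Python of the broadcast-based idea (the report format, period handling, and function name are assumptions, not the paper's protocols): the client folds the consistency check for its read-only transaction into ordinary invalidation-report processing, so the commit decision needs no uplink message to the server.

    def process_invalidation_report(read_set, cache, report):
        """report: ids of items the server updated since the last broadcast."""
        for item in report:                 # drop invalidated copies from the cache
            cache.pop(item, None)
        # The same pass decides commitment: if nothing the read-only transaction
        # has read was invalidated, it commits locally.
        return read_set.isdisjoint(report)

    cache, read_set = {"x": 1, "y": 2}, {"x"}
    print(process_invalidation_report(read_set, cache, report={"y"}))  # True: commit
    print(process_invalidation_report(read_set, cache, report={"x"}))  # False: restart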

  • An Optimistic Cache Consistency Protocol Using Preemptive Approach

    SungHo CHO  Jeong-Hyon HWANG  Kyoung Yul BAE  Chong-Sun HWANG

    PAPER-Databases

    Vol: E83-D No:9  Page(s): 1772-1780

    In Optimistic Two-Phase Locking (O2PL), when a transaction requests a commit, it cannot be committed until all requested locks have been obtained. For this reason, O2PL leads to unnecessary waits and operations even though it adopts an optimistic approach. This paper proposes an efficient optimistic cache consistency protocol that guarantees the serializability of committed transactions. Our cache consistency scheme, called PCP (Preemptive Cache Protocol), decides whether to commit or abort without waiting when a transaction requests a commit. In PCP, some transactions that have read stale data items need not be aborted, because the protocol adopts a re-ordering scheme to enhance performance. In addition, PCP stores only one version of each data item for re-ordering. This paper presents a simulation-based analysis comparing the performance of PCP with that of other protocols such as O2PL, Optimistic Concurrency Control, and Caching Two-Phase Locking. The simulation experiments show that PCP performs as well as or better than the other schemes, with low overhead.
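    A toy illustration in Python of the "decide at commit request, without waiting" behavior described above, including a re-ordering case. The validation rule shown is an assumption chosen for illustration, not PCP's actual algorithm.

    def validate_at_commit(txn_reads, txn_writes, later_commits):
        """later_commits: write sets of transactions that committed after this one began."""
        stale = [w for w in later_commits if w & txn_reads]
        if not stale:
            return "commit"                        # serialize after all of them
        if not txn_writes:
            return "commit, re-ordered earlier"    # pure reader slots in before the updates
        if any(w & txn_writes for w in stale):
            return "abort"                         # overlapping writes: cannot re-order
        return "commit, re-ordered earlier"

    print(validate_at_commit({"x"}, set(), [{"x"}]))      # re-ordered instead of aborted
    print(validate_at_commit({"x"}, {"x"}, [{"x"}]))      # abort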

  • Deferred Locking with Buffer Validation on Demand for Client-Server Database Consistency: DL

    Hyeokmin KWON  Songchun MOON

    PAPER-Databases

    Vol: E80-D No:7  Page(s): 705-716

    In client-server database management systems (DBMSs), inter-transaction caching is an effective technique for improving performance. However, inter-transaction caching requires a cache consistency maintenance (CCM) protocol to ensure that cached copies at clients are kept mutually consistent. Such a protocol can be complex to implement and expensive to run, since several rounds of message exchange may be required. In this paper, we propose a new CCM scheme based on the primary-copy locking algorithm. In the proposed scheme, a number of lock requests and a data-shipping request are combined into a single message packet to reduce client-server interactions, which are known to be critical to the performance of client-server DBMSs. We examine its performance tradeoffs on the basis of a simulation model under a wide range of workloads. The performance results indicate that the proposed scheme improves overall system throughput significantly over the caching two-phase locking and optimistic two-phase locking schemes. Its higher performance mainly results from its lower communication overhead and lower transaction blocking ratio.
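    A minimal sketch in Python of the message-combining idea: the client batches its deferred lock requests together with a cache-validation and data-shipping request into one packet, so a transaction needs far fewer client-server round trips. The field names and version-number scheme are assumptions.

    def build_batched_request(deferred_locks, cached_versions, needed_items):
        """Combine lock, validation, and data-shipping requests into one message."""
        return {
            "lock_requests": sorted(deferred_locks),                   # all locks at once
            "validate": dict(cached_versions),                         # item -> cached version
            "ship_data": sorted(needed_items - set(cached_versions)),  # copies not cached
        }

    msg = build_batched_request(
        deferred_locks={"x", "y", "z"},
        cached_versions={"x": 7, "y": 3},
        needed_items={"x", "y", "z"},
    )
    print(msg)   # one packet: lock x, y, z; validate x, y; ship z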

  • A Cache-Coherent, Distributed Memory Multiprocessor System and Its Performance Analysis

    Douglas E. MARQUARDT  Hasan S. ALKHATIB

    PAPER-Computer Systems

    Vol: E75-D No:3  Page(s): 274-290

    The problems of cache coherency in multiprocessor systems are directly related to their architectural structures. Small-scale multiprocessor systems have focused on bus-based memory interconnection networks with centrally shared memory and a sequential consistency model for coherency. This has limited scalability to only a few tens of processors, because the limited bus bandwidth is used for both coherency updates and memory traffic. Recently, large-scale multiprocessor systems have been proposed that use general interconnection networks and distributed shared memory. These architectures use weak consistency models and various directory-map schemes to hide the overhead of coherency maintenance within the memory hierarchy, the interconnection network, or process context-switch latencies. Coherency and memory traffic, however, are still carried over the same interconnection network. In this paper, we present the architecture of a new general-purpose, medium-scale multiprocessor system. This Cache-Coherent Multiprocessor System (C2MP) supports distributed shared memory using a general memory interconnection network for memory traffic and a separate bus-based coherency interconnection network for coherency maintenance. Through a special directory-based coherency protocol and cache-oriented distributed coherency controllers, coherency is maintained directly cache-to-cache over the dedicated coherency bus. This limits coherency updates to only those processor nodes that need them. An aggressive sequential coherency model is used, which reduces the hardware penalty of supporting an ideal sequential-consistency programmer's model. The system can scale up to 256-512 processors, depending on the degree of data sharing, and is expected to achieve higher per-processor utilization in this range than currently proposed medium- and large-scale multiprocessor systems. The C2MP system is analyzed using a Generalized Timed Petri-Net model of a processor node, together with a stochastic model of inter-node interactions over the general memory interconnection network and the coherency bus. The model of the proposed architecture is analyzed under steady-state conditions for varying system workload parameters.
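    A small illustrative sketch in Python (not the C2MP protocol itself; all names are assumptions): a directory entry tracks which processor nodes cache a block, so a write triggers coherency messages only to those sharers, and those messages travel over a channel separate from the one carrying memory traffic.

    class DirectoryEntry:
        def __init__(self):
            self.sharers = set()          # nodes holding a cached copy of the block

        def record_read(self, node):
            self.sharers.add(node)

        def handle_write(self, writer, coherency_bus_send):
            # Invalidate only caches that actually hold the block; memory traffic
            # stays on the separate memory interconnection network.
            for node in self.sharers - {writer}:
                coherency_bus_send(node, "invalidate")
            self.sharers = {writer}

    entry = DirectoryEntry()
    entry.record_read(3); entry.record_read(7)
    entry.handle_write(3, lambda n, msg: print(f"coherency bus -> node {n}: {msg}"))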