Keyword Search Result

[Keyword] storage (137 hits)

41-60 hits (of 137)

  • Well-Balanced Successive Simple-9 for Inverted Lists Compression

    Kun JIANG  Yuexiang YANG  Qinghua ZHENG  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2017/04/17
      Vol:
    E100-D No:7
      Page(s):
    1416-1424

    The growth in the amount of information available on the Internet and the thousands of user queries processed per second pose huge challenges to the index updating and query processing of search engines. Index compression is partially responsible for the current performance achievements of existing search engines. The selection of an index compression algorithm must weigh three factors, i.e., compression ratio, compression speed and decompression speed. In this paper, we study the well-known Simple-9 compression, which involves many branch, table lookup and data transfer operations when processing each 32-bit machine word. To enhance the compression and decompression performance of the Simple-9 algorithm, we propose a successive storage structure and processing method that compresses two successive Simple-9-encoded sequences of integers in a single data processing procedure, hence the name Successive Simple-9 (SSimple-9). In essence, the algorithm reduces the branch, table lookup and data transfer operations performed when compressing the integer sequence. More precisely, we first present the data storage format and mask table of the SSimple-9 algorithm. Then, for each mode in the mask table, we design and hard-code the main steps of the compression and decompression processes. Finally, analysis and comparison of the experimental results on simulated and TREC datasets show the compression and decompression speedups achieved by the proposed SSimple-9 algorithm.
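
    For background, the following is a minimal sketch of the baseline Simple-9 word packing that SSimple-9 builds on: each 32-bit word carries a 4-bit selector and a 28-bit payload, and the selector picks one of nine (count, bits-per-integer) modes. The function and variable names and the greedy packing loop are illustrative assumptions; the paper's SSimple-9 encoder, mask table and hard-coded decode paths are not reproduced here.

      # Illustrative Simple-9 packing sketch (not the paper's implementation).
      SIMPLE9_MODES = [  # (integers per word, bits per integer)
          (28, 1), (14, 2), (9, 3), (7, 4), (5, 5), (4, 7), (3, 9), (2, 14), (1, 28)
      ]

      def simple9_encode(values):
          """Greedily pack non-negative integers (each < 2**28) into 32-bit words."""
          words, i = [], 0
          while i < len(values):
              for selector, (count, bits) in enumerate(SIMPLE9_MODES):
                  chunk = values[i:i + count]
                  if len(chunk) == count and all(v < (1 << bits) for v in chunk):
                      word = selector << 28
                      for pos, v in enumerate(chunk):
                          word |= v << (pos * bits)
                      words.append(word)
                      i += count
                      break
              else:
                  raise ValueError("value does not fit in 28 bits")
          return words

      def simple9_decode(words):
          out = []
          for word in words:
              count, bits = SIMPLE9_MODES[word >> 28]
              mask = (1 << bits) - 1
              out.extend((word >> (pos * bits)) & mask for pos in range(count))
          return out

      assert simple9_decode(simple9_encode([1, 3, 7, 200, 5])) == [1, 3, 7, 200, 5]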

  • Workload-Based Co-Design of Non-Volatile Cache Algorithm and Storage Class Memory Specifications for Storage Class Memory/NAND Flash Hybrid SSDs

    Tomoaki YAMADA  Chihiro MATSUI  Ken TAKEUCHI  

     
    PAPER

      Vol:
    E100-C No:4
      Page(s):
    373-381

    In order to realize solid-state drives (SSDs) with high performance, low energy consumption and high reliability, a storage class memory (SCM)/multi-level cell (MLC) NAND flash hybrid SSD has been proposed. The algorithm of the hybrid SSD should be designed according to the SCM specifications and workload characteristics. In this paper, SCMs are used as a non-volatile cache. Cache operation guidelines and optimal SCM specifications for the hybrid SSD are provided for various workload characteristics. Three kinds of non-volatile cache operation for the hybrid SSD are discussed: i) write cache, ii) read-write cache without space control (RW cache) and iii) read-write cache with space control (RW cache w/ SC). SSD workloads are categorized into eight classes according to read/write ratio, access frequency and access data size. From the evaluation results, the write cache algorithm is suitable for write-intensive workloads and read-cold-sequential workloads, while the RW cache algorithm is suitable for read-cold-random workloads, to achieve the highest performance of the hybrid SSD. In contrast, for read-hot-random workloads, the write cache is appropriate when the SCM capacity is less than 3% of the NAND flash capacity, whereas the RW cache should be used when the SCM capacity is more than 5% of the NAND flash capacity. The effect of Memory-type SCM (M-SCM) and Storage-type SCM (S-SCM) on the hybrid SSD performance is also analyzed. The M-SCM latency is below 1 µs (high speed) but its capacity is only 2% of the NAND flash capacity (small capacity). On the other hand, the S-SCM capacity is assumed to be 5% of the NAND flash capacity (large capacity) but its latency is above 1 µs (low speed). If the additional SCM cost is limited to 20% of the MLC NAND flash cost, up to 7-times and 8-times performance improvements are achieved in write-hot-random and read-hot-random workloads, respectively. Moreover, if the additional SCM cost is the same as the MLC NAND flash cost, the M-SCM/MLC NAND flash hybrid SSD achieves a 24-times performance improvement.
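
    As a compact restatement of these guidelines, the sketch below maps a workload class and an SCM-to-NAND capacity ratio to a cache operation mode. The workload labels, the function name and the handling of ratios between the 3% and 5% thresholds are illustrative assumptions; the paper derives the guidelines from simulation rather than exposing such a rule.

      # Illustrative restatement of the cache-operation guidelines (assumed helper).
      def choose_cache_operation(workload, scm_ratio):
          """workload: e.g. 'write-hot-random', 'read-cold-sequential', 'read-hot-random'.
          scm_ratio: SCM capacity as a fraction of the NAND flash capacity."""
          if workload.startswith("write") or workload == "read-cold-sequential":
              return "write cache"
          if workload == "read-cold-random":
              return "RW cache"
          if workload == "read-hot-random":
              if scm_ratio < 0.03:
                  return "write cache"
              if scm_ratio > 0.05:
                  return "RW cache"
              return "either"  # between thresholds; not resolved by the guideline
          return "write cache"  # assumed default for classes not named in the abstract

      assert choose_cache_operation("read-hot-random", 0.02) == "write cache"
      assert choose_cache_operation("read-hot-random", 0.08) == "RW cache"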

  • Survey of Cloud-Based Content Sharing Research: Taxonomy of System Models and Case Examples Open Access

    Shinji SUGAWARA  

     
    INVITED SURVEY PAPER-Network System

      Publicized:
    2016/10/21
      Vol:
    E100-B No:4
      Page(s):
    484-499

    This paper illustrates various content sharing systems that take advantage of clouds' storage and computational resources, as well as the conventional technologies that support them. First, the basic technology concepts supporting cloud-based systems, from client-server to cloud computing, together with their relationships and functional linkages, are shown. Second, a taxonomy of cloud-based system models from the aspect of multiple clouds' interoperability is explained. Interoperability can be categorized into provider-centric and client-centric scenarios, and each can be further divided into federated clouds, hybrid clouds, multi-clouds and aggregated service by broker. Third, practical cloud-based systems related to content sharing are reported and their characteristics are discussed. Finally, future directions of cloud-based content sharing are suggested.

  • Fast Reconstruction for Degraded Reads and Recovery Process in Primary Array Storage Systems

    Baegjae SUNG  Chanik PARK  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2016/11/11
      Vol:
    E100-D No:2
      Page(s):
    294-303

    RAID has been widely deployed in disk array storage systems to manage both performance and reliability simultaneously. During disk failures, RAID performs two performance-critical operations: degraded reads/writes and the recovery process. Before the recovery process is complete, reads and writes are degraded because data must be reconstructed from redundancy. The performance of degraded reads/writes is critical to meeting customer service level agreements (SLAs), and the recovery process considerably affects the reliability of a storage system. Both operations require fast data reconstruction. Among erasure codes for fast reconstruction, Local Reconstruction Codes (LRC) are known to offer the best (or optimal) trade-off between storage overhead, fault tolerance, and the number of disks involved in reconstruction. LRC was originally designed for fast reconstruction in distributed cloud storage systems, in which network traffic is a major bottleneck during reconstruction; thus, LRC focuses on reducing the number of disks involved in data reconstruction, which reduces network traffic. However, we observe that when LRC is applied to primary array storage systems, a major bottleneck in reconstruction results from uneven disk utilization. In other words, underutilized disks can no longer receive I/O requests because of the bottleneck at overloaded disks. The uneven disk utilization in LRC is due to its dedicated group partitioning policy, adopted to achieve the Maximally Recoverable property. In this paper, we present Distributed Reconstruction Codes (DRC), which support fast reconstruction in primary array storage systems. DRC is designed with a group shuffling policy to solve the problem of uneven disk utilization. Experiments on real-world workloads show that DRC using global parity rotation (DRC-G) improves degraded performance by as much as 72% compared to RAID-6 and by as much as 35% compared to LRC under the same reliability. In addition, our study shows that DRC-G reduces the recovery process completion time by as much as 52% compared to LRC.
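
    To make the reconstruction cost concrete, the sketch below shows the degraded-read path that local reconstruction codes enable: a single failed data block is rebuilt from its own local group plus that group's parity, so only a few disks are read instead of the whole stripe. The XOR local parity and the helper names are illustrative assumptions; the paper's DRC group-shuffling layout is not reproduced here.

      # Illustrative degraded read within one LRC local group (assumed XOR parity).
      from functools import reduce

      def xor_blocks(blocks):
          return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

      def degraded_read(group_blocks, local_parity, failed_index):
          """Rebuild group_blocks[failed_index] from the surviving group members
          and the local parity, without touching any other group."""
          survivors = [b for i, b in enumerate(group_blocks) if i != failed_index]
          return xor_blocks(survivors + [local_parity])

      group = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
      parity = xor_blocks(group)                      # local parity of the group
      assert degraded_read(group, parity, failed_index=1) == group[1]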

  • Applying Write-Once Memory Codes to Binary Symmetric Asymmetric Multiple Access Channels

    Ryota SEKIYA  Brian M. KURKOSKI  

     
    PAPER-Communication Theory and Systems

      Vol:
    E99-A No:12
      Page(s):
    2202-2210

    Write-once memory (WOM) codes allow reuse of a write-once medium. This paper focuses on applying WOM codes to the binary symmetric asymmetric multiple access channel (BS-AMAC). At one specific rate pair, WOM codes can achieve the BS-AMAC maximum sum-rate. Further, any achievable rate pair for a two-write WOM code is also an achievable rate pair for the BS-AMAC. Compared to the uniform input distribution of linear codes, the non-uniform WOM input distribution is helpful for a BS-AMAC. In addition, WOM codes enable “symbol-wise estimation”, resulting in a decomposition into two distinct channels. This scheme does not achieve the BS-AMAC maximum sum-rate if the channel has errors, but it leads to reduced-complexity decoding by enabling independent decoding of the two codewords. Achievable rates for this decomposed system are also given. The AMAC has practical applications to the relay channel, and we briefly discuss the relay channel with block Markov encoding using WOM codes. This scheme may be effective for cooperative wireless communications despite the fact that WOM codes are designed for data storage.
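
    As background on two-write WOM codes, the sketch below illustrates a classic Rivest-Shamir style construction that stores 2 bits twice in 3 write-once cells, where cells may only change from 0 to 1. The tables and helper names are illustrative; this is not the specific code or channel coding scheme evaluated in the paper.

      # Illustrative two-write WOM code: 2 data bits, written twice, into 3 cells.
      FIRST = {0b00: 0b000, 0b01: 0b001, 0b10: 0b010, 0b11: 0b100}
      SECOND = {d: 0b111 ^ c for d, c in FIRST.items()}   # complements of first-write words

      def decode(state):
          table = FIRST if bin(state).count("1") <= 1 else SECOND
          return next(d for d, c in table.items() if c == state)

      def write(state, data):
          """Return the new cell state; cells can only be set, never cleared."""
          if state == 0b000:
              return FIRST[data]              # first-generation write on fresh cells
          if decode(state) == data:
              return state                    # same value: leave the cells untouched
          new_state = SECOND[data]
          assert state & ~new_state == 0, "would require clearing a write-once cell"
          return new_state

      s = write(0b000, 0b10)                  # first write stores 10
      s = write(s, 0b11)                      # second write overwrites with 11
      assert decode(s) == 0b11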

  • A Secure Light-Weight Public Auditing Scheme in Cloud Computing with Potentially Malicious Third Party Auditor

    Yilun WU  Xinye LIN  Xicheng LU  Jinshu SU  Peixin CHEN  

     
    LETTER-Information Network

      Publicized:
    2016/06/23
      Vol:
    E99-D No:10
      Page(s):
    2638-2642

    Public auditing is a new technique to protect the integrity of outsourced data in the remote cloud. Users delegate the ability to audit to a third party auditor (TPA) and assume that each result from the TPA is correct. However, the TPA is not always trustworthy in reality. In this paper, we consider a scenario in which the TPA may lower the reputation of the cloud server by cheating users, and we propose a novel public auditing scheme to address this security issue. The analysis and evaluation show that our scheme is both secure and efficient.

  • Coordinated Ramp Metering for Minimum Waiting Time and Limited Ramp Storage

    Soobin JEON  Inbum JUNG  

     
    PAPER-Intelligent Transport System

      Vol:
    E99-A No:10
      Page(s):
    1843-1855

    Ramp metering is the most effective and direct method of controlling vehicles entering a freeway. This study proposes a novel density-based ramp metering method. Existing methods typically use flow data, which has low reliability, and they suffer from various problems. Furthermore, when ramp metering is performed based on freeway congestion, additional congestion and over-capacity can occur on the ramp. To solve these problems of existing methods, the proposed method uses the density and acceleration data of vehicles on the freeway and considers the ramp status. The experimental environment was simulated using PTV Corporation's VISSIM simulator, and the Traffic Information and Condition Analysis System was developed to control it. The experiment was conducted for the period between 2:00 PM and 7:00 PM on October 5, 2014, during severe traffic congestion. The simulation results showed that total travel time was reduced by 10% compared to the existing metering system during the peak period, solving the problems of ramp congestion and over-capacity.
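
    The sketch below is a generic, hedged illustration of the trade-off described here: the metering rate tightens as mainline density approaches a critical value, but vehicles are released when the ramp queue nears its storage limit to avoid spillback. The control law, constants and names are assumptions for illustration only, not the controller proposed in the paper.

      # Generic density-based metering with a ramp-storage override (illustrative).
      def metering_rate(density, critical_density, ramp_queue, ramp_capacity,
                        min_rate=240, max_rate=1800):
          """Return an allowed entrance flow in vehicles per hour."""
          if ramp_queue > 0.9 * ramp_capacity:      # ramp is nearly full:
              return max_rate                       # release vehicles to avoid spillback
          utilization = min(density / critical_density, 1.0)
          rate = max_rate - (max_rate - min_rate) * utilization
          return max(min_rate, min(max_rate, rate))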

  • LAB-LRU: A Life-Aware Buffer Management Algorithm for NAND Flash Memory

    Liyu WANG  Lan CHEN  Xiaoran HAO  

     
    LETTER-Computer System

      Publicized:
    2016/06/21
      Vol:
    E99-D No:10
      Page(s):
    2633-2637

    NAND flash memory has been widely used in storage systems. Aiming at an efficient buffer policy for NAND flash memory, a life-aware buffer management algorithm named LAB-LRU is proposed, which manages the buffer with three LRU lists. A life value is defined for every page, and active pages with higher life values can stay longer in the buffer. The life value reflects access frequency, recency and the cost of flash read and write operations. A series of trace-driven simulations is carried out, and the experimental results show that the proposed LAB-LRU algorithm significantly outperforms the previous best-known algorithms in terms of buffer hit ratio, the number of flash write and read operations, and overall runtime.
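
    The sketch below illustrates the life-value idea in this abstract: a page's value grows with access frequency and recency and with the flash cost of evicting it, and the page with the lowest value is evicted. The weighting formula, the single-dictionary structure (the paper uses three LRU lists) and all names are illustrative assumptions.

      # Illustrative life-value eviction (assumed weighting, not the paper's formula).
      import time

      class LifeAwareBuffer:
          def __init__(self, capacity, write_cost=2.0, read_cost=1.0):
              self.capacity = capacity
              self.write_cost, self.read_cost = write_cost, read_cost
              self.pages = {}   # page_id -> {"freq": int, "last": float, "dirty": bool}

          def _life(self, p):
              recency = 1.0 / (time.time() - p["last"] + 1e-6)
              cost = self.write_cost if p["dirty"] else self.read_cost
              return p["freq"] * recency * cost

          def access(self, page_id, is_write=False):
              p = self.pages.setdefault(page_id, {"freq": 0, "last": 0.0, "dirty": False})
              p["freq"] += 1
              p["last"] = time.time()
              p["dirty"] = p["dirty"] or is_write
              if len(self.pages) > self.capacity:
                  victim = min(self.pages, key=lambda pid: self._life(self.pages[pid]))
                  self.pages.pop(victim)   # a real buffer would flush a dirty victim first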

  • Reliability and Failure Impact Analysis of Distributed Storage Systems with Dynamic Refuging

    Hiroaki AKUTSU  Kazunori UEDA  Takeru CHIBA  Tomohiro KAWAGUCHI  Norio SHIMOZONO  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2016/06/17
      Vol:
    E99-D No:9
      Page(s):
    2259-2268

    In recent data centers, large-scale storage systems storing big data comprise thousands of large-capacity drives. Our goal is to establish a method for building highly reliable storage systems using more than a thousand low-cost large-capacity drives. Some large-scale storage systems protect data by erasure coding to prevent data loss. As the redundancy level of erasure coding increases, the probability of data loss decreases, but extra writes during normal operation and additional storage for coding are incurred. We therefore need to achieve high reliability at the lowest possible redundancy level. There are two concerns regarding reliability in large-scale storage systems: (i) as the number of drives increases, systems become more subject to multiple drive failures, and (ii) distributing stripes among many drives can speed up the rebuild time but increases the risk of data loss due to multiple drive failures. If data loss occurs due to multiple drive failures, it affects many users of the storage system. These concerns were not addressed in prior quantitative reliability studies based on realistic settings. In this work, we analyze the reliability of large-scale storage systems with distributed stripes, focusing on an effective rebuild method which we call Dynamic Refuging. Dynamic Refuging rebuilds failed blocks starting from those with the lowest redundancy and strategically selects the blocks to read for repairing lost data. We modeled the dynamic change in the amount of storage at each redundancy level caused by multiple drive failures, and performed reliability analysis with Monte Carlo simulation using realistic drive failure characteristics. We also present a failure impact model and a method for localizing failures. When stripes with redundancy level 3 were sufficiently distributed and rebuilt by Dynamic Refuging, the proposed technique turned out to scale well, and the probability of data loss decreased by two orders of magnitude for systems with a thousand drives compared to normal RAID. An appropriate setting of the stripe distribution level could localize the failure.
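
    A minimal sketch of the rebuild-ordering idea described here follows: stripes are repaired in order of lowest remaining redundancy, so the blocks closest to data loss are rebuilt first. The data structures and names are illustrative; the block-selection strategy and the Monte Carlo reliability model from the paper are not reproduced.

      # Illustrative lowest-redundancy-first rebuild ordering.
      import heapq

      def rebuild_order(stripes):
          """stripes: iterable of (stripe_id, remaining_redundancy) pairs.
          Yields stripe ids, most endangered first."""
          heap = [(redundancy, stripe_id) for stripe_id, redundancy in stripes]
          heapq.heapify(heap)
          while heap:
              _, stripe_id = heapq.heappop(heap)
              yield stripe_id

      # Stripe B has lost two of its three redundant blocks, so it is rebuilt first.
      assert list(rebuild_order([("A", 2), ("B", 1), ("C", 3)])) == ["B", "A", "C"]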

  • A Virtualization-Based Hybrid Storage System for a Map-Reduce Framework

    Aseffa DEREJE TEKILU  Chin-Hsien WU  

     
    PAPER-Software System

      Publicized:
    2016/05/25
      Vol:
    E99-D No:9
      Page(s):
    2248-2258

    A map-reduce framework is popular for big data analysis. In a typical map-reduce framework, both the master node and the worker nodes use hard-disk drives (HDDs) as local disks for the map-reduce computation. However, because of the inherent mechanical limitations of HDDs, I/O performance is a bottleneck for the map-reduce framework when I/O-intensive applications (e.g., sorting) are run. Replacing HDDs with solid-state drives (SSDs) is not economical, although SSDs have better performance than HDDs. In this paper, we propose a virtualization-based hybrid storage system for the map-reduce framework. The objective of the paper is to combine the fast access property of SSDs with the low cost of HDDs, realizing an economical design and improving the I/O performance of a map-reduce framework in a virtualization environment. We propose three storage combinations: SSD-based, HDD-based, and a hybrid of SSD-based and HDD-based storage that balances speed, capacity, and lifetime. According to the experiments, the hybrid of SSD-based and HDD-based storage offers superior performance and economy.

  • Parity Data De-Duplication in All Flash Array-Based OpenStack Cloud Block Storage

    Huiseong HEO  Cheongjin AHN  Deok-Hwan KIM  

     
    LETTER-Data Engineering, Web Information Systems

      Publicized:
    2016/02/02
      Vol:
    E99-D No:5
      Page(s):
    1384-1387

    In recent years, the need to build solid state drive (SSD)-based cloud storage systems has been increasing in order to process the big data generated by large numbers of Internet of Things devices and Internet users. Because these kinds of cloud systems require high-performance and reliable storage, the use of flash-based Redundant Array of Independent Disks (RAID) will increase. However, in flash-based RAID storage, parity data must be updated with every data write operation, which can exhaust the SSDs' lifespan more quickly. To solve this problem, this letter proposes parity data deduplication for OpenStack cloud storage systems using an all-flash array. Unlike the traditional data deduplication method, it removes only duplicate parity data, which would otherwise be stored in the parity disks of the all-flash array. Experiments show that the proposed parity data deduplication method efficiently reduces the number of parity data write operations compared to the traditional data deduplication method.
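
    A minimal sketch of the parity-only deduplication idea follows: each parity block destined for the parity disks is fingerprinted, and a duplicate parity block is replaced by a reference instead of a new flash write. The fingerprinting choice, class layout and counters are illustrative assumptions, not the letter's OpenStack implementation.

      # Illustrative parity-block deduplication store.
      import hashlib

      class ParityDedupStore:
          def __init__(self):
              self.blocks = {}       # fingerprint -> parity block kept on flash
              self.refcount = {}     # fingerprint -> number of stripes referencing it
              self.flash_writes = 0

          def put_parity(self, parity_block):
              fp = hashlib.sha256(parity_block).hexdigest()
              if fp not in self.blocks:
                  self.blocks[fp] = parity_block
                  self.flash_writes += 1          # only unique parity reaches the SSDs
              self.refcount[fp] = self.refcount.get(fp, 0) + 1
              return fp                           # stripe metadata stores the fingerprint

      store = ParityDedupStore()
      store.put_parity(b"\x0f" * 4096)
      store.put_parity(b"\x0f" * 4096)            # duplicate parity: no extra flash write
      assert store.flash_writes == 1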

  • Placement of Virtual Storages for Distributed Robust Cloud Storage

    Yuya TARUTANI  Yuichi OHSITA  Masayuki MURATA  

     
    PAPER-Network Management/Operation

      Vol:
    E99-B No:4
      Page(s):
    885-893

    Cloud storage has become popular and is being used to hold important data. As a result, availability has become important: cloud storage providers should allow users to upload or download data even if some part of the system has failed. In this paper, we discuss distributed cloud storage that is robust against failures. In distributed cloud storage, multiple replicas of each data chunk are stored in virtual storages at geographically different locations. Thus, even if one of the virtual storages becomes unavailable, users can access the data chunk from another virtual storage. In distributed cloud storage, the placement of the virtual storages is important; if the placement is such that a large number of virtual storages could become unavailable due to a single failure, a large number of replicas of each data chunk must be prepared to maintain availability. In this paper, we propose a virtual storage placement method that assures availability with a small number of replicas. We evaluated our method by comparing it with three other methods. The evaluation shows that our method can maintain availability while requiring only 60% of the network cost of the compared methods.
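
    The following sketch makes the stated relationship concrete: the more virtual storages a single failure can take down at once, the more replicas of each chunk are needed for one to survive. The failure model (each scenario removes a set of physical locations) and the function name are illustrative assumptions; the paper's placement algorithm itself is not reproduced.

      # Illustrative link between placement quality and required replica count.
      def replicas_needed(placement, failure_scenarios):
          """placement: virtual storage id -> physical location.
          failure_scenarios: iterable of sets of physical locations lost together.
          Returns how many replicas are needed so at least one survives any scenario,
          assuming replicas may land on any distinct virtual storages."""
          worst = 0
          for lost in failure_scenarios:
              down = sum(1 for loc in placement.values() if loc in lost)
              worst = max(worst, down)
          return worst + 1

      placement = {"vs1": "siteA", "vs2": "siteA", "vs3": "siteB", "vs4": "siteC"}
      scenarios = [{"siteA"}, {"siteB"}, {"siteC"}]
      assert replicas_needed(placement, scenarios) == 3   # siteA failure downs vs1 and vs2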

  • Variation of SCM/NAND Flash Hybrid SSD Performance, Reliability and Cost by Using Different SSD Configurations and Error Correction Strengths

    Hirofumi TAKISHITA  Shuhei TANAKAMARU  Sheyang NING  Ken TAKEUCHI  

     
    PAPER

      Vol:
    E99-C No:4
      Page(s):
    444-451

    A storage class memory (SCM)/NAND flash hybrid solid-state drive (SSD) has the advantages of high performance and low power consumption compared with a NAND-flash-only SSD. In this paper, first, three SSD configurations are investigated: three different SCMs with 0.1 µs, 1 µs and 10 µs read/write latencies are used, and the SCM/NAND flash capacity ratios required to maintain the same SSD performance are analyzed. Next, using the three SSD configurations, the variation of SSD reliability, performance and cost is analyzed by changing the error correction strength. The acceptable SCM and NAND flash bit error rates (BERs) are limited by the requirement to achieve the specified SSD performance with error correction, and/or by the SCM and NAND flash parity size and the SSD cost. Lastly, the SSD replacement cost is analyzed by considering the limit on NAND flash write/erase cycles. The purpose of this paper is to provide a design guideline for obtaining a high-performance, highly reliable and cost-effective SCM/NAND flash hybrid SSD with ECC.

  • Cooperative Local Repair with Multiple Erasure Tolerance

    Jiyong LU  Xuan GUANG  Linzhi SHEN  Fang-Wei FU  

     
    LETTER-Coding Theory

      Vol:
    E99-A No:3
      Page(s):
    765-769

    In distributed storage systems, codes with lower repair locality are much more desirable because of their superiority in reducing the disk I/O complexity of each repair process. Motivated partially by both codes with information (r, δ) locality and codes with cooperative (r, l) locality, we propose the concept of codes with information (r, l, δ) locality in this paper. For a linear code C with information (r, l, δ) locality, the values at any l information coordinates of an information set I can be recovered by connecting to any one of δ pairwise disjoint local repair sets, each of size no more than r, where a local repair set for these l coordinates is a set of other coordinates from which their values can be recovered. We derive a lower bound on the codeword length n of [n, k, d] linear codes with information (r, l, δ) locality, and we show its tightness for some special cases. In particular, some existing results can be deduced from our bound by restricting the parameters.

  • Reusing the Results of Queries in MapReduce Systems by Adopting Shared Storage

    Zhanye WANG  Chuanyi LIU  Dongsheng WANG  

     
    PAPER

      Vol:
    E99-B No:2
      Page(s):
    315-325

    Over the last few years, Apache MapReduce has become the prevailing framework for large-scale data processing. Instead of writing MapReduce programs directly, which can be cumbersome, many developers adopt high-level query languages, such as Hive or Pig Latin, to express their complex queries. These languages automatically compile each query into a workflow of MapReduce jobs, so they greatly facilitate the querying and management of large datasets. One option for speeding up the execution of workflows is to save previously produced results and reuse them in the future when needed. In this paper we present SuperRack, which uses shared storage devices to store the results of each workflow and allows a new query to reuse these results in order to avoid redundant computation and speed up execution. We propose several novel techniques to improve the access and storage efficiency of the previous results, and we evaluate SuperRack to verify its feasibility and effectiveness. Experiments show that our solution significantly outperforms Hive under the TPC-H benchmark and real-life workloads.
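
    The sketch below illustrates the reuse mechanism at its simplest: the output of each compiled MapReduce job is materialized on shared storage under a signature of the job, and a later workflow whose job has the same signature reads the stored result instead of recomputing it. The signature scheme, the /shared/results path and the run_or_reuse helper are illustrative assumptions, not SuperRack's actual interfaces.

      # Illustrative result reuse keyed by a job signature on shared storage.
      import hashlib, json, os, shutil

      RESULT_ROOT = "/shared/results"          # assumed shared-storage mount point

      def job_signature(query_fragment, input_datasets):
          payload = json.dumps({"q": query_fragment, "in": sorted(input_datasets)},
                               sort_keys=True)
          return hashlib.sha256(payload.encode()).hexdigest()

      def run_or_reuse(query_fragment, input_datasets, run_job):
          """run_job() executes the MapReduce job and returns its output directory."""
          cached = os.path.join(RESULT_ROOT, job_signature(query_fragment, input_datasets))
          if os.path.exists(cached):
              return cached                    # reuse the materialized result
          output_dir = run_job()
          shutil.move(output_dir, cached)      # publish the result for later queries
          return cached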

  • A Flexible Direct Attached Storage for a Data Intensive Application

    Takatsugu ONO  Yotaro KONISHI  Teruo TANIMOTO  Noboru IWAMATSU  Takashi MIYOSHI  Jun TANAKA  

     
    PAPER-Storage System

      Publicized:
    2015/09/15
      Vol:
    E98-D No:12
      Page(s):
    2168-2177

    Big data analysis and data storing applications require a huge volume of storage and high I/O performance. Applications can achieve a high level of performance and cost efficiency by exploiting the high I/O performance of direct attached storage (DAS) such as internal HDDs. With the size of stored data ever increasing, however, it becomes difficult to replace servers, because the internal HDDs contain huge amounts of data. Generally, the data is copied via Ethernet when transferring it from the internal HDDs to the new server, but as the amount of data continues to increase rapidly, such transfers take too long. A storage area network such as iSCSI can be used to avoid this problem because the data can be shared among servers; however, this decreases performance and increases cost. The DAS architecture therefore needs improved flexibility without incurring I/O performance degradation. In response to this issue, we propose FlexDAS, which improves the flexibility of direct attached storage by using a disk area network (DAN) without degrading I/O performance. A resource manager connects or disconnects computation nodes to the HDDs via the FlexDAS switch, which supports the SAS and SATA protocols, enabling servers to be replaced in a short period of time. We developed a prototype FlexDAS switch and quantitatively evaluated the architecture. Results show that the FlexDAS switch can disconnect and connect an HDD to a server in just 1.16 seconds. We also confirmed that FlexDAS improves the performance of data intensive applications by up to 2.84 times compared with iSCSI.

  • Power-Saving in Storage Systems for Cloud Data Sharing Services with Data Access Prediction

    Koji HASEBE  Jumpei OKOSHI  Kazuhiko KATO  

     
    PAPER-Software System

      Publicized:
    2015/06/30
      Vol:
    E98-D No:10
      Page(s):
    1744-1754

    We present a power-saving method for large-scale storage systems of cloud data sharing services, particularly those providing media (video and photograph) sharing. The idea behind our method is to periodically rearrange the stored data in a disk array so that the workload is skewed toward a small subset of disks, while the other disks can be sent to standby mode. This idea is borrowed from the Popular Data Concentration (PDC) technique, but to avoid an increase in response time caused by accesses to disks in standby mode, we introduce a function that predicts the future access frequencies of uploaded files. This function exploits the correlation of potential future accesses with the combination of the elapsed time after upload and the total number of past accesses. We obtained this function by statistical analysis of the real access patterns of 50,000 randomly selected publicly available photographs on Flickr over 7,000 hours (around 10 months). Moreover, to adapt to a constant massive influx of data, we propose a mechanism that effectively packs the continuously uploaded data into the disk array of a PDC-based storage system. To evaluate the effectiveness of our method, we measured its performance in simulations and a prototype implementation. We observed that our method consumed 12.2% less energy than the static configuration (in which all disks are in active mode), while maintaining a preferred response time, with only 0.23% of the total accesses involving disks in standby mode.
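
    The sketch below illustrates the combination described here: a predicted access rate computed from a file's age and its past access count decides which files are concentrated on the active disks, while the rest go to disks that can be spun down. The prediction formula is a placeholder and every name is an illustrative assumption; the fitted model obtained from the Flickr analysis is not reproduced.

      # Illustrative access prediction and hot/standby assignment (placeholder model).
      def predicted_access_rate(hours_since_upload, accesses_so_far):
          # Placeholder: popularity grows with observed accesses and decays with age.
          return accesses_so_far / (hours_since_upload + 1.0)

      def assign_to_disks(files, n_active_disks, n_total_disks):
          """files: list of (file_id, hours_since_upload, accesses_so_far).
          The hottest files (by predicted rate) go to the active disks."""
          ranked = sorted(files, key=lambda f: predicted_access_rate(f[1], f[2]),
                          reverse=True)
          cutoff = int(len(ranked) * n_active_disks / n_total_disks)
          hot = {f[0] for f in ranked[:cutoff]}
          return {fid: ("active" if fid in hot else "standby") for fid, _, _ in files}

      layout = assign_to_disks([("a", 10, 50), ("b", 500, 5), ("c", 2000, 1)],
                               n_active_disks=1, n_total_disks=3)
      assert layout["a"] == "active" and layout["c"] == "standby"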

  • Cryptanalysis and Improvement of an Encoding Method for Private-Key Hidden Vector Encryptions

    Fu-Kuo TSENG  Rong-Jaye CHEN  

     
    LETTER-Cryptography and Information Security

      Vol:
    E98-A No:9
      Page(s):
    1982-1984

    A predicate encryption scheme enables the owner of the master key to enforce fine-grained access control on encrypted cloud data through the delegation of predicate tokens to cloud storages. In particular, Blundo et al. proposed a construction in which a predicate token reveals partial information about the involved keywords in order to enable efficient operations on encrypted keywords. However, we found that a predicate token reveals more information than claimed because of the encoding scheme. In this letter, we not only analyze this extra information leakage but also present an improved encoding scheme for Blundo et al.'s scheme and other similar schemes that preserves predicate privacy.

  • Isolated VM Storage on Clouds

    Jinho SEOL  Seongwook JIN  Seungryoul MAENG  

     
    LETTER-Dependable Computing

      Publicized:
    2015/06/08
      Vol:
    E98-D No:9
      Page(s):
    1706-1710

    Even though cloud users want to keep their data on clouds secure, it is not easy to protect the data, because cloud administrators could be malicious and the hypervisor could be compromised. To solve this problem, hardware-based memory isolation schemes have been proposed. However, the data in virtual storage are not protected by these memory isolation schemes, and thus a guest OS must encrypt the data. In this paper, we address the problems of the previous schemes and propose a hardware-based storage isolation scheme. The proposed scheme protects user data securely and achieves a performance improvement.

  • Discreet Method to Match Safe Site-Pairs in Short Computation Time for Risk-Aware Data Replication

    Takaki NAKAMURA  Shinya MATSUMOTO  Hiroaki MURAOKA  

     
    PAPER-Dependable Computing

      Publicized:
    2015/04/28
      Vol:
    E98-D No:8
      Page(s):
    1493-1502

    Risk-aware Data Replication (RDR), which replicates data at primary sites to nearby safe backup sites, has been proposed to mitigate service disruption in a disaster area even after a widespread disaster that damages the network and a primary site. RDR assigns a safe backup site to each primary site while considering the damage risk of both the primary site and the backup candidate site. To minimize the damage risk over all site-pairs, an Integer Programming Problem (IPP), a mathematical optimization problem, is solved. A challenge for RDR is to choose safe backup sites within a short computation time even for a huge number of sites. In this paper, we propose a Discreet method for RDR to surmount this hurdle. The Discreet method first decides the backup sites of potentially unsafe primary sites and avoids pairing a very safe primary site with a very safe backup site. We evaluated the computation time for site-pairing and the data availability for earthquake and tsunami cases using basic disaster simulations. We confirmed that the proposed method computes site pairings more than 1000 times faster than the existing method when the number of sites is greater than 1000. We also confirmed the data availability of the proposed method: it provides almost the same availability as the existing strict optimization method. These results show that the proposed method makes RDR more practical for a massive number of sites.
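
    The sketch below is a hedged illustration of the pairing intuition described here: backups are decided for the riskier primary sites first, so the safest backup sites are not spent on the safest primaries. Distance constraints, the actual risk model and the names used are all simplifications and assumptions; the paper formulates the problem as an integer program and solves it far more carefully.

      # Illustrative greedy pairing: riskiest primaries claim the safest backups first.
      def pair_sites(primary_risk, backup_risk):
          """primary_risk, backup_risk: dicts of site id -> damage risk (higher = less safe).
          Returns a primary -> backup assignment."""
          assignment = {}
          free_backups = sorted(backup_risk, key=backup_risk.get)            # safest first
          for p in sorted(primary_risk, key=primary_risk.get, reverse=True): # riskiest first
              if not free_backups:
                  break
              assignment[p] = free_backups.pop(0)
          return assignment

      pairs = pair_sites({"P1": 0.9, "P2": 0.1}, {"B1": 0.05, "B2": 0.6})
      assert pairs == {"P1": "B1", "P2": "B2"}   # the risky primary gets the safest backup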
