Hiroshi YUASA Hiroshi TSUTSUI Hiroyuki OCHI Takashi SATO
We propose a novel acceleration scheme for Monte Carlo based statistical static timing analysis (MC-SSTA). MC-SSTA, which repeatedly executes ordinary STA using a set of randomly generated gate delay samples, is widely accepted as an accuracy reference. However, a large number of random samples must be processed to obtain accurate delay distributions, so software implementations of MC-SSTA take an impractically long processing time. In our approach, a generalized hardware module, the STA processing element (STA-PE), is used for the delay evaluation of a logic gate, and netlist-specific information is delivered in the form of instructions from an SRAM. Multiple STA-PEs can be implemented for parallel processing, while larger netlists can be handled simply by providing a larger SRAM. The proposed scheme is successfully implemented on Altera's Arria II GX EP2AGX125EF35C4 device, in which 26 STA-PEs and a 624-port Mersenne Twister-based random number generator run in parallel at a 116 MHz clock rate. A speedup of well over 10× is achieved compared with conventional methods, including a GPU implementation.
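The Monte Carlo loop that the abstract describes can be sketched in a few lines of Python; the netlist format, gate names, and delay parameters below are hypothetical, and gate delays are assumed to be independent Gaussians for simplicity.

```python
import random

def mc_ssta(netlist, num_samples=10000, seed=0):
    """Monte Carlo SSTA sketch: sample gate delays, run ordinary STA each time.

    `netlist` maps each gate to (mean_delay, sigma, fanin_gate_list) and is
    assumed to be given in topological order. Returns the list of sampled
    circuit (max path) delays, i.e. the empirical delay distribution.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(num_samples):
        arrival = {}
        for gate, (mu, sigma, fanins) in netlist.items():
            d = rng.gauss(mu, sigma)  # one random delay sample for this gate
            arrival[gate] = d + max((arrival[f] for f in fanins), default=0.0)
        samples.append(max(arrival.values()))  # circuit delay for this sample
    return samples

# Hypothetical 3-gate netlist with a reconvergent fanin
netlist = {
    "g1": (1.0, 0.10, []),
    "g2": (2.0, 0.20, ["g1"]),
    "g3": (1.5, 0.15, ["g1", "g2"]),
}
delays = mc_ssta(netlist)
mean = sum(delays) / len(delays)
```

The hardware scheme in the paper essentially replaces the inner gate-evaluation loop with parallel STA-PEs fed by SRAM-resident instructions.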
Zhiyong ZHANG Gaolei FEI Shenli PAN Fucai YU Guangmin HU
Network tomography is an appealing technology for inferring link delay distributions since it relies only on end-to-end measurements. However, most existing approaches to network delay tomography are computationally intractable. In this letter, we propose a Fast link Delay distribution Inference algorithm (FDI). It estimates the node cumulative delay distributions by explicit computations based on a subtree-partitioning technique, and then derives the individual link delay distributions from the estimated cumulative delay distributions. Furthermore, a novel discrete delay model in which each link has a different bin size is proposed to efficiently capture the essential characteristics of the link delay. Combined with the variable-bin-size model, FDI can identify the characteristics of network-internal link delays quickly and accurately. Simulation results validate the effectiveness of our method.
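The step of deriving an individual link's delay distribution from cumulative delay distributions can be illustrated with discrete pmfs: if a child node's cumulative delay is the parent's cumulative delay plus an independent link delay, the child pmf is the convolution of the two, and the link pmf can be peeled off bin by bin. This is a minimal sketch of that deconvolution, not the paper's FDI algorithm itself (which also uses subtree partitioning and variable bin sizes).

```python
def deconvolve(child_pmf, parent_pmf):
    """Recover a link delay pmf L from child = parent (*) L.

    Assumes independent delays, uniform bins, and parent_pmf[0] > 0 so the
    recursion link[k] = (child[k] - sum_j parent[j]*link[k-j]) / parent[0]
    is well defined.
    """
    n = len(child_pmf) - len(parent_pmf) + 1
    link = []
    for k in range(n):
        acc = child_pmf[k]
        for j in range(1, min(k, len(parent_pmf) - 1) + 1):
            acc -= parent_pmf[j] * link[k - j]
        link.append(acc / parent_pmf[0])
    return link

# Hypothetical example: parent cumulative pmf and the resulting child pmf
link = deconvolve([0.1, 0.5, 0.4], [0.5, 0.5])  # recovers the link pmf
```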
Jeonggyu KIM Jongmin SHIN Dongmin YANG Cheeha KIM
We propose a novel epidemic routing policy, named energy optimal epidemic routing, for delay tolerant networks (DTNs). By investigating the tradeoff between delay and energy, we find the optimal transmission range as well as the optimal number of infected nodes for minimal energy consumption, given a delivery requirement, namely a delay bound and a delivery probability to the destination. To find these optimal values, we derive an analytic model of Binary Spraying routing that describes the delay distributions with respect to the number of infected nodes.
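The delay distribution underlying such models can be estimated by simulation. The sketch below simulates pure (unlimited-copy) epidemic forwarding under exponentially distributed pairwise inter-meeting times; it is a simplified stand-in for the paper's analytic Binary Spraying model, and all parameters are illustrative.

```python
import random

def epidemic_delay_samples(num_nodes, beta, num_runs=200, seed=1):
    """Monte Carlo sketch of delivery delay under pure epidemic routing.

    `num_nodes` counts source, relays, and destination; any pair of nodes
    meets at exponential rate `beta`, and every meeting with a node lacking
    the message infects it. Delivery occurs when an infected node meets
    the destination. Returns one delay sample per run.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(num_runs):
        t, infected = 0.0, 1  # elapsed time, nodes carrying the message
        while True:
            susceptible = num_nodes - 1 - infected  # relays without the message
            rate = beta * infected * (susceptible + 1)  # +1 for the destination
            t += rng.expovariate(rate)
            if rng.random() < 1.0 / (susceptible + 1):
                break  # the meeting was with the destination: delivered
            infected += 1  # otherwise a new relay is infected
        samples.append(t)
    return samples
```

Sweeping `infected` caps or transmission range (which changes `beta`) against the resulting delay samples reproduces the kind of delay/energy tradeoff the abstract studies.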
Shingo TAKAHASHI Shuji TSUKIYAMA
In order to improve the performance of existing statistical timing analysis, slew distributions must be taken into account, and a mechanism to propagate them together with delay distributions along signal paths is necessary. This paper introduces Gaussian mixture models to represent the slew and delay distributions, and proposes a novel algorithm for statistical timing analysis. The algorithm propagates a pair of delay and slew through a given circuit graph, and dynamically changes the delay distributions of circuit elements according to the propagated slews. The proposed model and algorithm are evaluated by comparison with Monte Carlo simulation. The experimental results show that the accuracy of the µ+3σ value of the maximum delay improves by up to 4.5 points over conventional statistical timing analysis using Gaussian distributions.
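A Gaussian mixture model represents a delay distribution as a weighted sum of Gaussians, from which summary metrics such as the µ+3σ value can be computed by moment matching. This is a minimal illustration with hypothetical mixture parameters, not the paper's propagation algorithm.

```python
import math

def gmm_mean_var(weights, means, variances):
    """Mean and variance of a Gaussian mixture via moment matching."""
    mu = sum(w * m for w, m in zip(weights, means))
    # E[X^2] = sum_i w_i * (var_i + mean_i^2), then Var = E[X^2] - mu^2
    var = sum(w * (v + m * m) for w, m, v in zip(weights, means, variances)) - mu * mu
    return mu, var

# Hypothetical two-component mixture for a path delay (in ns)
w = [0.7, 0.3]
m = [5.0, 6.0]
v = [0.04, 0.09]
mu, var = gmm_mean_var(w, m, v)
mu_3sigma = mu + 3 * math.sqrt(var)  # the metric compared in the abstract
```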
Insoo KOO Jeongrok YANG Kiseon KIM
In this letter, we present a procedure to analyze the delay distribution of data traffic in CDMA systems supporting voice and delay-tolerant data services with a finite buffer. Queueing delay-tolerant traffic in a buffer can be used to improve system utilization and the availability of system resources. Under the first-come, first-served (FCFS) service discipline, we present a numerical procedure for deriving the delay distribution, defined as the probability that a new data call gets service within the maximum tolerable delay requirement, based on a two-dimensional Markov model.
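For intuition about a delay distribution defined this way, a classical closed form exists for the simpler M/M/c FCFS queue: the Erlang-C waiting probability combined with an exponential tail gives P(W ≤ t). This sketch uses that textbook model, not the paper's two-dimensional Markov model with a finite buffer; all rates are illustrative.

```python
import math

def erlang_c(servers, offered_load):
    """Erlang-C probability that an arriving call must wait (M/M/c queue)."""
    a = offered_load
    series = sum(a**k / math.factorial(k) for k in range(servers))
    top = (a**servers / math.factorial(servers)) * servers / (servers - a)
    return top / (series + top)

def prob_served_within(t, servers, arrival_rate, service_rate):
    """P(waiting time <= t) under FCFS for an M/M/c queue (requires stability)."""
    a = arrival_rate / service_rate
    p_wait = erlang_c(servers, a)
    return 1.0 - p_wait * math.exp(-(servers * service_rate - arrival_rate) * t)
```

Evaluating `prob_served_within` at the maximum tolerable delay yields the acceptance probability analogous to the one the abstract defines.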
We propose a method of controlling the view divergence of data freshness when copies of sites in a replicated database are updated asynchronously. The view divergence of replicated data freshness is the difference in the recentness of the updates reflected in the data acquired by clients. Our method accesses multiple sites and provides a client with data that reflects all the updates received by those sites. First, we define the probabilistic recentness of updates reflected in acquired data as read data freshness (RDF). The degree of RDF of data acquired by clients is the range of view divergence. Second, we propose a way to select sites in a replicated database, using the probability distribution of update delays, so that the data acquired by a client satisfies its required RDF. This approach computes the minimum number of sites to access, reducing the overhead of read transactions. Our method continues to adaptively and reliably provide data that meets the client's requirements in an environment where the delay of update propagation varies and applications' requirements change with the situation. Finally, we evaluate by simulation the view divergence our method can control. The simulation showed that our method can reduce the view divergence to about 1/4 that of a normal read transaction for 100 replicas. In addition, the read-transaction overhead imposed by our method grows more slowly than the total number of replicas.
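The site-selection step can be illustrated with a simple independence assumption: if the update-delay distribution gives a per-site probability `p_fresh` that a given update has arrived by read time, reading k independent sites reflects it with probability 1-(1-p_fresh)^k, and the minimum k meeting the required RDF follows directly. This is a hedged sketch of that calculation, not the paper's full procedure.

```python
def min_sites(p_fresh, required_rdf):
    """Smallest number of replicas to read so that at least one of them
    reflects a given update with probability >= required_rdf.

    Assumes update-propagation delays are independent across sites and
    0 < p_fresh <= 1; p_fresh comes from the update-delay distribution.
    """
    k = 1
    prob = p_fresh
    miss = 1.0 - p_fresh
    while prob < required_rdf:
        k += 1
        prob = 1.0 - miss ** k  # P(at least one of k sites is fresh)
    return k
```

For example, with a 50% per-site freshness probability, seven sites suffice for a 99% RDF requirement, while a 90% per-site probability needs only two.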
Wei-Yeh CHEN Jean-Lien C. WU Hung-Huan LIU
In this paper, we analyze the performance of dynamic resource allocation with channel de-allocation and buffering in cellular networks. Buffers are applied to data traffic to reduce the packet loss probability, while channel de-allocation is exploited to reduce the voice blocking probability. The results show that although buffering data traffic can reduce the packet loss probability, it has a negative impact on voice performance even when channel de-allocation is exploited. Although the voice blocking probability can be reduced with a large slot capacity, the improvement diminishes as the slot capacity increases. In contrast, the packet loss probability increases with the slot capacity. In addition to the mean value analysis, the delay distribution and the 95th-percentile delay of data packets are provided.