Norio MATSUFURU Kouji NISHIMURA Reiji AIBARA
We study buffer access policies that provide different loss priorities to two types of services, real-time and non-real-time, in ATM networks. Real-time services, such as video and voice, require cell transmission with bounded delay, so the buffer space available to them is limited by the delay bound. We compare the performance of several buffering policies under the bounded-delay constraints of real-time services. Numerical results indicate that a simple buffering policy proposed in this paper, called limited partial buffer sharing (LPBS), performs well and allows efficient use of ATM networks.
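The abstract does not spell out the LPBS rule itself, so the following is only a minimal Python sketch of a partial-buffer-sharing style admission policy under assumed parameters: non-real-time cells are accepted only while the shared occupancy is below a sharing threshold, and the number of queued real-time cells is capped at a limit derived from their delay bound. All names (SharedBuffer, rt_limit, threshold) are illustrative, not taken from the paper.

# Hypothetical sketch of a partial-buffer-sharing style admission rule.
# The exact LPBS definition is not given in the abstract; the specific
# rule and parameter names below are illustrative assumptions only.

class SharedBuffer:
    def __init__(self, total_size, rt_limit, threshold):
        # total_size : total number of cells the shared buffer can hold
        # rt_limit   : cap on queued real-time cells, assumed to be derived
        #              from the delay bound (delay_bound * service_rate)
        # threshold  : occupancy above which non-real-time cells are dropped
        self.total_size = total_size
        self.rt_limit = rt_limit
        self.threshold = threshold
        self.rt_count = 0
        self.nrt_count = 0

    def occupancy(self):
        return self.rt_count + self.nrt_count

    def admit(self, cell_is_realtime):
        """Return True if the arriving cell is accepted, False if dropped."""
        if cell_is_realtime:
            # Real-time cells may use the whole buffer, but no more than
            # rt_limit of them may wait, or their delay bound is violated.
            if self.occupancy() < self.total_size and self.rt_count < self.rt_limit:
                self.rt_count += 1
                return True
            return False
        # Non-real-time cells are admitted only while occupancy is below
        # the sharing threshold (the partial-buffer-sharing rule).
        if self.occupancy() < self.threshold:
            self.nrt_count += 1
            return True
        return False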
To provide QoS guarantees for each connection, efficient scheduling algorithms such as WFQ have been proposed. These algorithms assume that a certain amount of buffer space is allocated to each connection to provide loss-free transmission of packets. This buffer allocation policy, however, requires a large amount of buffer space, especially when many connections share a link. In this paper we propose the use of the partial buffer sharing (PBS) policy combined with usage parameter control (UPC) for efficient buffer management and flexible QoS control in ATM switches. We evaluate the feasibility of the proposed method by solving a Markov model and show that, with the proposed method, the cell loss ratio (CLR) can be controlled independently of the delay. Numerical evaluations are presented which indicate that PBS combined with UPC significantly reduces the buffer size required to satisfy a given cell loss ratio.
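As an illustration of how a threshold-based buffer policy can be evaluated with a Markov model, the sketch below solves a simple continuous-time birth-death chain for a single PBS queue with Poisson arrivals of two priorities. The paper's own model is a more detailed discrete-time ATM model, so this is only an assumed simplification; the function name and parameters are hypothetical.

# Minimal Markov-chain sketch of partial buffer sharing (PBS).
# This M/M/1/K-style chain with a threshold is an illustrative
# simplification, not the analysis used in the paper.
import numpy as np

def pbs_cell_loss(lam_hi, lam_lo, mu, K, T):
    """Steady-state loss probabilities for a single-server queue of size K
    where low-priority arrivals are rejected once the occupancy reaches T."""
    # Arrival (birth) rate depends on whether low-priority cells are admitted.
    birth = np.array([lam_hi + lam_lo if n < T else lam_hi for n in range(K)])
    # Unnormalized steady-state probabilities of the birth-death chain:
    # p[n] = p[0] * prod(birth[0..n-1]) / mu**n
    p = np.ones(K + 1)
    for n in range(1, K + 1):
        p[n] = p[n - 1] * birth[n - 1] / mu
    p /= p.sum()
    clr_hi = p[K]            # high-priority cells are lost only when the buffer is full
    clr_lo = p[T:].sum()     # low-priority cells are lost at or above the threshold
    return clr_hi, clr_lo

# Example: total load 0.8, buffer of 40 cells, low-priority threshold at 30.
print(pbs_cell_loss(lam_hi=0.4, lam_lo=0.4, mu=1.0, K=40, T=30))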
Norio MATSUFURU Kouji NISHIMURA Reiji AIBARA
In this paper, we study efficient scheduling algorithms suitable for ATM networks. In ATM networks, all packets (cells) have a fixed small length of 53 bytes and are transmitted at very high rates, so the per-packet time complexity of a scheduling algorithm is quite important. Most scheduling algorithms proposed so far have a complexity of O(log N) per packet, where N denotes the number of connections sharing the link. In contrast, weighted round robin (WRR) has the advantage of O(1) complexity; however, its delay property is known to degrade as N increases. To solve this problem, we propose two new variants of WRR, uniform round robin (URR) and idling uniform round robin (I-URR). Both disciplines provide end-to-end delay and fairness bounds that are independent of N. The complexity of URR, however, increases slightly with N, while I-URR retains O(1) complexity per packet. I-URR also works as a traffic shaper, so it can significantly alleviate congestion in the network. We also introduce a hierarchical WRR discipline (H-WRR), which consists of multiple WRR servers with I-URR as the root server. H-WRR efficiently accommodates both guaranteed and best-effort connections while maintaining O(1) complexity per packet. If several connections reserve the same bandwidth, H-WRR provides them with delay bounds close to those of weighted fair queueing.
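To illustrate why spreading a connection's slots over the frame improves the delay behaviour of round-robin scheduling, the sketch below builds one service frame in two ways: plain WRR, which serves each connection's slots back to back, and a uniform interleaving of the same slots. The actual URR and I-URR rules are defined in the paper; the even-spacing rule used here is only an assumed example, and the function names are illustrative.

# Plain WRR versus a uniformly interleaved frame (illustrative sketch only).

def wrr_frame(weights):
    """Plain WRR: serve each connection's slots back to back within a frame."""
    frame = []
    for conn, w in weights.items():
        frame.extend([conn] * w)
    return frame

def uniform_frame(weights):
    """Spread each connection's slots evenly over the frame, so the gap between
    two consecutive slots of a connection is roughly frame_len / weight."""
    frame_len = sum(weights.values())
    slots = [None] * frame_len
    # Place heavier connections first so their spacing is preserved.
    for conn, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        step = frame_len / w
        for k in range(w):
            pos = int(k * step)
            while slots[pos] is not None:      # linear probe to the next free slot
                pos = (pos + 1) % frame_len
            slots[pos] = conn
    return slots

weights = {"A": 4, "B": 2, "C": 1, "D": 1}
print(wrr_frame(weights))      # A A A A B B C D  (each connection served in a burst)
print(uniform_frame(weights))  # A B A C A B A D  (slots spread across the frame)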
Norio MATSUFURU Kouji NISHIMURA Reiji AIBARA
We study resource allocation strategies for ATM switches that provide quality of service (QoS) guarantees to individual connections. To minimize the cell loss rate over a wide range of traffic characteristics, an efficient allocation strategy is necessary. In this paper we introduce a resource allocation strategy named TP+WRR (Threshold Pushout + Weighted Round Robin), which can fully utilize both the buffer space and the bandwidth. We compare the performance of TP+WRR with two typical resource allocation strategies. An exact queueing analysis based on a Markov model is carried out with bursty traffic sources to evaluate their performance. Our results reveal that TP+WRR considerably improves the cell loss probability over the other strategies considered in this paper, especially when many connections share a link.
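The exact TP+WRR rules are given in the paper; the following is only a rough sketch of a threshold push-out style shared buffer in which, when the memory is full, an arriving cell of a connection still below its threshold pushes out a cell of the connection that most exceeds its threshold. The victim-selection rule, class name, and parameters are assumptions; the WRR side (bandwidth sharing) is not repeated here.

# Hedged sketch of a threshold push-out (TP) style shared buffer.
# The push-out rule below is an assumed illustration, not the paper's definition.
from collections import deque

class TPBuffer:
    def __init__(self, total_size, thresholds):
        self.total_size = total_size            # cells the shared memory can hold
        self.thresholds = thresholds            # per-connection nominal allocation
        self.queues = {c: deque() for c in thresholds}

    def occupancy(self):
        return sum(len(q) for q in self.queues.values())

    def enqueue(self, conn, cell):
        """Accept the arriving cell, possibly pushing out a cell of an
        over-threshold connection when the buffer is full."""
        if self.occupancy() < self.total_size:
            self.queues[conn].append(cell)
            return True
        # Buffer full: find the connection exceeding its threshold the most.
        excess = {c: len(q) - self.thresholds[c] for c, q in self.queues.items()}
        victim = max(excess, key=excess.get)
        # Assumed rule: push out only for an arrival whose own queue is still
        # below its threshold, and only from a queue above its threshold.
        if excess[victim] > 0 and len(self.queues[conn]) < self.thresholds[conn]:
            self.queues[victim].pop()           # discard the victim's newest cell
            self.queues[conn].append(cell)
            return True
        return False                            # no eligible victim: drop the arrival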