Wei-Tsong LEE Kuo-Chi CHU Kun-Chen CHUNG Jen-Yi PAN Pau-Choo CHUNG
The Hybrid Fiber Coaxial (HFC) network is essentially a shared medium comprising multiple channels, and its operation requires a scheduling algorithm to manage the data transmission within each channel. The Data-Over-Cable Service Interface Specification (DOCSIS) protocol is an important standard for HFC networks. Since this protocol does not explicitly specify the scheduling algorithm to be used, many alternative algorithms have been proposed. However, none of these algorithms is applicable to the scheduling of non-Unsolicited Grant Service (UGS) data in multi-channel HFC networks. Accordingly, the present study develops a multi-channel scheduling algorithm that optimizes the scheduling delay of each transmitted non-UGS request. This algorithm manages the amount of data transmitted on each upstream channel according to the overall network load and the bandwidth available in each channel. The study constructs a mathematical model of the algorithm and then uses this model as the basis for a series of simulations in which the performance of the scheduling algorithm is evaluated.
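The kind of load- and bandwidth-aware assignment described in this abstract can be illustrated with a minimal sketch. The greedy rule below (send each non-UGS request to the upstream channel with the earliest estimated finish time) is an illustrative assumption, not the algorithm proposed in the paper; the channel bandwidths and request sizes are likewise made up for the example.

```python
# Hypothetical sketch: greedy assignment of non-UGS requests to the
# upstream channel with the least estimated backlog delay. This is an
# illustration of load/bandwidth-aware multi-channel scheduling, not
# the paper's actual algorithm.

def assign_requests(requests, channels):
    """Assign each request (size in bits) to one of the channels
    (available bandwidth in bit/s), minimizing its finish time."""
    backlog = [0.0] * len(channels)  # seconds until each channel is free
    assignment = []
    for size in requests:
        # Estimated finish time if the request went on channel i.
        finish = [backlog[i] + size / channels[i] for i in range(len(channels))]
        best = min(range(len(channels)), key=lambda i: finish[i])
        backlog[best] = finish[best]
        assignment.append(best)
    return assignment, backlog

# Example: three upstream channels with different available bandwidths.
channels = [5e6, 10e6, 2.5e6]          # bit/s (illustrative values)
requests = [1e6, 1e6, 2e6, 0.5e6]      # request sizes in bits
assignment, backlog = assign_requests(requests, channels)
```

Because each request is placed where it finishes earliest given the current backlog, a faster channel absorbs proportionally more traffic, which mirrors the abstract's idea of balancing transmission across channels by load and available bandwidth.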
Tomoya SAITO Kyoko KATO Hiroshi INAI
As an access network to the Internet, the CATV/HFC network has become widespread in recent years. Such a network employs a reservation access method in which reservation and data-transmission periods alternate. Before transmitting data, a station must send a request in a random-access manner during the reservation period, called a request cluster. If the cluster size is large, the probability of request collisions becomes small; however, a large cluster size increases the packet transmission delay, and the throughput decreases because the vacant portion of the reservation period grows. DOCSIS, a de facto standard for these networks, employs the binary back-off method for request cluster allocation. Since that method normally allocates an unnecessarily large request cluster, the transmission delay increases under heavy-load conditions. In this paper, we propose a request cluster allocation method that dynamically changes the cluster size according to the load conditions. To evaluate the performance of the proposed method, we build a queuing model and carry out computer simulations. The simulation results show that the proposed method provides smaller delay than the binary back-off method.
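The trade-off this abstract describes (a large cluster reduces collisions but wastes reservation bandwidth) suggests sizing the next cluster from the observed contention in the previous one. The sizing rule below is a minimal sketch under assumptions of my own (each collided minislot hides at least two retrying stations; throughput peaks when minislots roughly match contenders); it is not the method proposed in the paper.

```python
# Hypothetical sketch of load-adaptive request-cluster sizing.
# The backlog estimator and the sizing rule are illustrative
# assumptions, not the paper's proposed method.

def next_cluster_size(collisions, idle_slots, prev_size,
                      min_size=2, max_size=64):
    """Pick the next cluster size (in minislots) from the outcome
    of the previous contention cluster."""
    if collisions == 0 and idle_slots > 0:
        # Light load: shrink the cluster so less of the reservation
        # period sits vacant.
        return max(min_size, prev_size // 2)
    # A collided minislot hides at least two requests, each assumed
    # to retry in the next cluster; contention throughput peaks when
    # the number of minislots roughly equals the number of contenders.
    backlog_estimate = 2 * collisions
    return max(min_size, min(max_size, backlog_estimate))
```

Under light load the cluster shrinks toward the minimum, keeping the reservation period short, while under heavy load it grows with the collision count until it hits the cap, which is the qualitative behavior the proposed dynamic allocation aims for in contrast to binary back-off.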