
Author Search Result

[Author] Ryoichi KAWAHARA (31 hits)

Showing results 1-20 of 31

  • A Method of Bandwidth Dimensioning and Management Using Flow Statistics

    Ryoichi KAWAHARA  Keisuke ISHIBASHI  Takuya ASAKA  Shuichi SUMITA  Takeo ABE  

     
    PAPER-Network Management/Operation

    Vol: E88-B No:2, Page(s): 643-653

    We develop a method of dimensioning and managing the bandwidth of a link on which TCP flows from access links are aggregated. To do this, we extend the application of the processor-sharing queue model to TCP performance evaluation by using flow statistics. To handle various factors that affect actual TCP behavior, such as round-trip time, window size, and restrictions other than access-link bandwidth, we extend the model by replacing the access-link bandwidth with the actual file-transfer speed of a flow when the aggregation link is not congested. We use only the number of active flows and the link utilization to estimate the file-transfer speed. Unlike previous studies, the extended model based on the actual transfer speed does not require any assumptions about, or predetermined values for, file size, packet size, round-trip time, and so on. Using the extended model, we predict the TCP performance when the link utilization increases. We also present a method of dimensioning the bandwidth needed to maintain TCP performance and show the effectiveness of our approach through simulation analysis.
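
    To make the flow of this estimate concrete, the sketch below computes a per-flow transfer speed from measured link utilization and the active-flow count, then applies an M/G/R processor-sharing style delay factor to predict throughput at higher loads and to size the link; the delay-factor formula, helper names, and all numbers are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch: per-flow TCP throughput and link dimensioning from easily
# measured quantities (utilization and active-flow count). Illustrative only.
import math

def erlang_c(servers: int, offered_load: float) -> float:
    """Erlang-C waiting probability for an M/M/R queue."""
    s = sum(offered_load**k / math.factorial(k) for k in range(servers))
    top = offered_load**servers / math.factorial(servers) * servers / (servers - offered_load)
    return top / (s + top)

def delay_factor(link_capacity, flow_speed, utilization):
    """M/G/R-PS style delay factor f_R = 1 + E2(R, A) / (R * (1 - rho))."""
    servers = max(1, int(link_capacity / flow_speed))     # R
    offered = utilization * servers                        # A = rho * R
    if offered >= servers:
        return float("inf")
    return 1.0 + erlang_c(servers, offered) / (servers * (1.0 - utilization))

# Measured inputs (assumed numbers): 100 Mb/s link, 40% utilization,
# 20 active flows on average while the link is still uncongested.
C, rho, n_active = 100.0, 0.40, 20
flow_speed = C * rho / n_active          # actual per-flow transfer speed (Mb/s)

# Predict per-flow throughput if utilization grows to 80%.
print("throughput @ rho=0.8:", flow_speed / delay_factor(C, flow_speed, 0.8))

# Dimensioning: smallest capacity keeping per-flow throughput above 1.5 Mb/s
# when the same traffic volume (C * 0.8) is carried.
target, volume = 1.5, C * 0.8
cap = volume
while flow_speed / delay_factor(cap, flow_speed, volume / cap) < target:
    cap += 1.0
print("required capacity (Mb/s):", cap)
```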

  • Detection of TCP Performance Degradation Using Link Utilization Statistics

    Keisuke ISHIBASHI  Ryoichi KAWAHARA  Takuya ASAKA  Masaki AIDA  Satoshi ONO  Shoichiro ASANO  

     
    PAPER-Network

    Vol: E89-B No:1, Page(s): 47-56

    In this paper, we propose a method of detecting TCP performance degradation using only bottleneck-link utilization statistics: the mean and variance. The variance of link utilization normally increases as the mean link utilization increases. However, because link utilization has a maximum of 100%, as the mean approaches 100%, the possible range of fluctuation becomes narrow and the variance decreases toward zero. In this paper, using the M/G/R processor-sharing model, we relate this phenomenon to the behavior of flows. We also show that, by using this relationship, we can detect TCP performance degradation from the mean and variance of link utilization. In particular, this method enables a network operator to determine whether or not the degradation originates from congestion in the operator's own network. Because our method requires measuring only link utilization, the cost of performance management can be greatly decreased compared with the conventional approach, which requires dedicated functions for directly measuring TCP performance.
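
    The following sketch illustrates the detection pattern only: it fits a simple variance-versus-mean relation from low-load windows and flags a window whose utilization is high while its variance has collapsed; the linear variance model and the thresholds are assumptions standing in for the paper's M/G/R-PS-based relationship.

```python
# Hedged sketch: flag possible TCP performance degradation from the mean and
# variance of bottleneck-link utilization samples alone. Illustrative model.
from statistics import mean, pvariance

def fit_uncongested_model(history):
    """history: list of (mean_util, var_util) pairs taken at low loads.
    Returns slope k of the assumed relation var ~= k * mean."""
    return sum(v for _, v in history) / sum(m for m, _ in history)

def degraded(samples, k, util_threshold=0.8, var_ratio=0.5):
    """samples: per-interval utilization values (0..1) for one window."""
    m, v = mean(samples), pvariance(samples)
    # Congestion signature: high mean utilization AND variance collapsing
    # well below what the uncongested relation predicts.
    return m > util_threshold and v < var_ratio * k * m

# Toy usage with assumed numbers.
low_load_history = [(0.2, 0.004), (0.4, 0.009), (0.6, 0.013)]
k = fit_uncongested_model(low_load_history)
window = [0.97, 0.98, 0.99, 0.98, 0.97, 0.99]   # nearly saturated, tiny variance
print("degradation suspected:", degraded(window, k))
```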

  • A Scalable IP Traffic Control Method for Weighted Bandwidth Allocation per Flow

    Ryoichi KAWAHARA  Naohisa KOMATSU  

     
    PAPER-Internet

    Vol: E84-B No:10, Page(s): 2815-2829

    A method is described that can allocate bandwidth to each user flow fairly in a scalable network architecture such as the differentiated-services architecture. Class-based packet scheduling and selective packet discarding have been attracting attention as promising queueing techniques for providing differentiated services. However, if bandwidth is to be allocated to each flow in a weighted manner, the parameters used in these methods, such as the weight assigned to each class queue, must be predetermined appropriately based on an assumption about the number of flows in each class. Thus, when the actual traffic pattern differs from the assumed one, they may not work well. Instead of assuming the traffic conditions, our method estimates the number of active flows in each class by simple traffic measurement and dynamically changes the weight assigned to each class queue based on the estimated number. Our method does not need to maintain per-flow state, which gives it scalability. Simulation showed that this method is effective under various patterns of active-flow counts.
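
    As a rough illustration of the weight-update idea (not the paper's estimator), the sketch below counts distinct flow IDs per class over a measurement interval and rescales each class-queue weight by that count; the class names and per-flow weights are hypothetical.

```python
# Hedged sketch: recompute class-queue weights so that each *flow* gets its
# intended share, by scaling a class's configured per-flow weight by the
# number of active flows observed in that class during the last interval.
from collections import defaultdict

PER_FLOW_WEIGHT = {"gold": 4, "silver": 2, "best_effort": 1}   # assumed policy

def count_active_flows(packets_in_interval):
    """packets_in_interval: iterable of (class_name, flow_id)."""
    seen = defaultdict(set)
    for cls, fid in packets_in_interval:
        seen[cls].add(fid)
    return {cls: len(fids) for cls, fids in seen.items()}

def class_weights(active_flows):
    raw = {cls: PER_FLOW_WEIGHT[cls] * active_flows.get(cls, 0)
           for cls in PER_FLOW_WEIGHT}
    total = sum(raw.values()) or 1
    return {cls: w / total for cls, w in raw.items()}   # normalized WRR weights

# Toy interval: 3 gold flows, 10 best-effort flows, 1 silver flow.
pkts = [("gold", f"g{i}") for i in range(3)] * 5 \
     + [("best_effort", f"b{i}") for i in range(10)] \
     + [("silver", "s0")]
print(class_weights(count_active_flows(pkts)))
```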

  • Optimizing Edge-Cloud Cooperation for Machine Learning Accuracy Considering Transmission Latency and Bandwidth Congestion Open Access

    Kengo TAJIRI  Ryoichi KAWAHARA  Yoichi MATSUO  

     
    PAPER-Network Management/Operation

    Publicized: 2023/03/24, Vol: E106-B No:9, Page(s): 827-836

    Machine learning (ML) has been used for various tasks in network operations in recent years. However, as the scale of networks has grown and the amount of data generated has increased, it has become increasingly difficult for network operators to conduct these tasks with a single server using ML. Thus, ML with edge-cloud cooperation has been attracting attention for efficiently processing and analyzing a large amount of data. In the edge-cloud cooperation setting, transmission latency, bandwidth congestion, and the accuracy of tasks using ML all depend on how the data-processing load is balanced between edge servers and a cloud server, but this relationship is too complex to estimate directly. In this paper, we focus on monitoring anomalous traffic as an example of an ML task for network operations and formulate transmission latency, bandwidth congestion, and the accuracy of the task with edge-cloud cooperation, considering the ratio of the amount of data preprocessed in edge servers to that in a cloud server. Moreover, we formulate an optimization problem under constraints on transmission latency and bandwidth congestion to select the proper ratio by using our formulation. By solving this optimization problem, the optimal load balance between edge servers and a cloud server can be selected, and the accuracy of anomalous traffic monitoring can be estimated. Our formulation and optimization framework can be used for other ML tasks by considering the generating distribution of data and the type of ML model. In accordance with our formulation, we simulated the optimal load balance of edge-cloud cooperation in a topology that mimics a Japanese network and conducted an anomalous-traffic detection experiment using real traffic data to compare the accuracy estimated by our formulation with the actual accuracy obtained in the experiment.
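
    A minimal sketch of the selection step follows: it grid-searches the edge/cloud preprocessing ratio under latency and bandwidth constraints and keeps the feasible ratio with the highest estimated accuracy; the latency, congestion, and accuracy functions are placeholder shapes, not the paper's formulation.

```python
# Hedged sketch: pick the edge/cloud preprocessing ratio by grid search under
# latency and bandwidth constraints. Only the optimization pattern is shown.

def latency(r, data_mb, edge_ms_per_mb=2.0, cloud_ms_per_mb=0.5, wan_ms_per_mb=8.0):
    # r = fraction of data preprocessed at the edge; the rest crosses the WAN raw.
    return r * data_mb * edge_ms_per_mb + (1 - r) * data_mb * (wan_ms_per_mb + cloud_ms_per_mb)

def wan_load(r, data_mb, edge_reduction=0.1):
    # Edge preprocessing is assumed to shrink its share of the data to 10%.
    return (r * edge_reduction + (1 - r)) * data_mb

def accuracy(r):
    # Assumed: accuracy degrades as more data is reduced at the edge.
    return 0.95 - 0.10 * r

def best_ratio(data_mb, latency_budget_ms, wan_capacity_mb, steps=101):
    feasible = []
    for i in range(steps):
        r = i / (steps - 1)
        if latency(r, data_mb) <= latency_budget_ms and wan_load(r, data_mb) <= wan_capacity_mb:
            feasible.append((accuracy(r), r))
    return max(feasible) if feasible else None

print(best_ratio(data_mb=100, latency_budget_ms=700, wan_capacity_mb=60))
```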

  • An Online Framework for Flow Round Trip Time Measurement

    Xinjie GUAN  Xili WAN  Ryoichi KAWAHARA  Hiroshi SAITO  

     
    PAPER-Network

    Vol: E97-B No:10, Page(s): 2145-2156

    With the advent of high-speed links, online flow measurement of, e.g., flow round-trip time (RTT) has become difficult due to the enormous demands placed on computational resources. Most existing measurement methods are designed to count the number of flows or the sizes of flows, but we address flow RTT measurement, which is an important QoS metric for network management and cannot be measured with these existing methods. We first adapt a standard Bloom filter (BF) for estimating the flow RTT distribution. However, in the presence of multipath routing and SYN flooding attacks, the standard BF does not perform well. We therefore design the double-deletion Bloom filter (DDBF) scheme, which alleviates potential hash collisions of the standard BF by explicitly deleting used records and implicitly deleting out-of-date records. Because of these double deletion operations, the DDBF accurately estimates the RTT distribution of TCP flows with limited memory space, even in the presence of multipath routing and SYN flooding attacks. Theoretical analysis indicates that the DDBF scheme achieves higher accuracy with a constant and smaller amount of memory than the standard BF. In addition, we validate our scheme using real traces and demonstrate significant memory savings without degrading accuracy.
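
    The sketch below conveys only the double-deletion idea in simplified form: it matches data segments to ACKs through a hashed table of timestamps, deletes a record as soon as it is used, and purges stale records on a timeout. It uses a plain dictionary rather than the DDBF itself, and all packet fields and timings are made up.

```python
# Hedged sketch: passive per-flow RTT sampling with "double deletion" of
# records (delete once used, purge when stale). Simplified stand-in, not DDBF.
import hashlib, time

class RttSampler:
    def __init__(self, timeout=3.0):
        self.table = {}          # hash -> send timestamp
        self.timeout = timeout

    @staticmethod
    def _key(src, dst, sport, dport, ack_expected):
        h = hashlib.blake2b(f"{src}:{sport}>{dst}:{dport}:{ack_expected}".encode(),
                            digest_size=8)
        return h.hexdigest()

    def on_data(self, src, dst, sport, dport, seq, payload_len, now=None):
        self.table[self._key(src, dst, sport, dport, seq + payload_len)] = now or time.time()

    def on_ack(self, src, dst, sport, dport, ack, now=None):
        # The ACK travels in the reverse direction, so swap endpoints for the key.
        k = self._key(dst, src, dport, sport, ack)
        sent = self.table.pop(k, None)          # first deletion: used record
        return None if sent is None else (now or time.time()) - sent

    def purge(self, now=None):
        now = now or time.time()
        stale = [k for k, t in self.table.items() if now - t > self.timeout]
        for k in stale:                          # second deletion: out-of-date records
            del self.table[k]

# Toy usage with synthetic timestamps.
s = RttSampler()
s.on_data("10.0.0.1", "10.0.0.2", 1234, 80, seq=1000, payload_len=500, now=0.000)
print("rtt:", s.on_ack("10.0.0.2", "10.0.0.1", 80, 1234, ack=1500, now=0.042))
```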

  • Finding Cardinality Heavy-Hitters in Massive Traffic Data and Its Application to Anomaly Detection

    Keisuke ISHIBASHI  Tatsuya MORI  Ryoichi KAWAHARA  Yutaka HIROKAWA  Atsushi KOBAYASHI  Kimihiro YAMAMOTO  Hitoaki SAKAMOTO  Shoichiro ASANO  

     
    PAPER-Measurement Methodology for Network Quality Such as IP, TCP and Routing

    Vol: E91-B No:5, Page(s): 1331-1339

    We propose an algorithm for finding heavy hitters in terms of cardinality (the number of distinct items in a set) in massive traffic data using a small amount of memory. Examples of such cardinality heavy hitters are hosts that send large numbers of flows or hosts that communicate with large numbers of other hosts. Finding these hosts is crucial to the provision of good communication quality because they significantly affect the communications of other hosts, whether through malicious activities such as worm scans, spam distribution, or botnet control, or through normal activities such as taking part in a flash crowd or performing peer-to-peer (P2P) communication. To determine the cardinality of a host precisely, we would need tables of previously seen items for each host (e.g., flow tables for every host), which may be infeasible in a high-speed environment with a massive amount of traffic. In this paper, we use a cardinality estimation algorithm that does not require these tables but needs only a small amount of information, called the cardinality summary. This is made possible by relaxing the goal from exact counting to estimation of cardinality. In addition, we propose an algorithm that does not need to maintain the cardinality summary for each host, but only for partitioned addresses of a host. As a result, the required number of tables can be significantly decreased. We evaluated our algorithm using actual backbone traffic data to find the heavy hitters in the number of flows and estimate the number of these flows. We found that, although accuracy degraded for hosts with few flows, the algorithm could accurately find the top 100 hosts in terms of the number of flows using a limited amount of memory. In addition, we found that the number of tables required to achieve a predefined accuracy increases logarithmically with the total number of hosts, which indicates that our method is applicable to large traffic data covering a very large number of hosts. We also introduce an application of our algorithm to anomaly detection. With actual traffic data, our method successfully detected a sudden network scan.
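
    To illustrate the estimation-plus-top-k pattern (without the paper's address-partitioning trick), the sketch below keeps one small linear-counting bitmap per host as its cardinality summary and reports the hosts with the largest estimated fan-out; the bitmap size and toy traffic are arbitrary choices.

```python
# Hedged sketch: find hosts talking to many distinct peers using a tiny
# per-host "cardinality summary" (a linear-counting bitmap) plus a top-k pass.
import hashlib, heapq, math

class LinearCounter:
    """Estimates the number of distinct items with an m-bit bitmap."""
    def __init__(self, m=256):
        self.m, self.bits = m, bytearray(m)

    def add(self, item: str):
        i = int(hashlib.sha1(item.encode()).hexdigest(), 16) % self.m
        self.bits[i] = 1

    def estimate(self) -> float:
        zeros = self.m - sum(self.bits)
        if zeros == 0:
            return float(self.m)               # saturated; lower bound
        return self.m * math.log(self.m / zeros)

def top_k_fanout(flows, k=3):
    """flows: iterable of (src_host, dst_host). Returns top-k by distinct peers."""
    summaries = {}
    for src, dst in flows:
        summaries.setdefault(src, LinearCounter()).add(dst)
    return heapq.nlargest(k, ((c.estimate(), h) for h, c in summaries.items()))

# Toy data: one scanner touching 200 peers, normal hosts touching a few.
flows = [("scanner", f"10.0.{i//256}.{i%256}") for i in range(200)]
flows += [("normal1", "10.0.0.1"), ("normal1", "10.0.0.2"), ("normal2", "10.0.0.1")]
print(top_k_fanout(flows))
```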

  • A Method of IP Traffic Management Using the Relationship between TCP Flow Behavior and Link Utilization

    Ryoichi KAWAHARA  Keisuke ISHIBASHI  Takuya ASAKA  Katsunori ORI  

     
    PAPER-Network Management/Operation

    Vol: E86-B No:11, Page(s): 3244-3256

    We propose a method of IP traffic management in which the TCP performance at a bottleneck link is estimated from monitored data on the number of active flows versus link utilization, both of which are easy to measure. This method is based on our findings that (i) TCP performance remains constant as long as the link utilization is below some threshold value but becomes degraded when it exceeds this value, and (ii) the number of active flows increases linearly with link utilization up to the same value, and the increase becomes nonlinear above it. Though this threshold may vary depending on traffic and network conditions, our method requires neither predetermining a threshold on the basis of assumed traffic conditions nor directly measuring TCP performance.
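
    A toy version of the idea is sketched below: fit the linear relation between utilization and active-flow count from low-load intervals, then flag intervals whose flow count sits far above that line; the least-squares fit and the deviation tolerance are illustrative, not the paper's procedure.

```python
# Hedged sketch: learn the low-load linear relation between link utilization
# and the number of active flows, then flag intervals whose flow count sits
# well above the line (the knee where TCP performance starts to degrade).
def fit_line(points):
    """Least-squares fit n = a * util + b from low-load (util, flows) pairs."""
    n = len(points)
    sx = sum(u for u, _ in points); sy = sum(f for _, f in points)
    sxx = sum(u * u for u, _ in points); sxy = sum(u * f for u, f in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def congested(util, flows, line, tolerance=1.5):
    a, b = line
    return flows > tolerance * (a * util + b)

low_load = [(0.1, 12), (0.2, 23), (0.3, 33), (0.4, 45), (0.5, 56)]
line = fit_line(low_load)
print(congested(0.92, 310, line))   # flow count far above the linear trend -> True
print(congested(0.60, 70, line))    # still on the line -> False
```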

  • Effect of Limiting Pre-Distribution and Clustering Users on Multicast Pre-Distribution VoD

    Noriaki KAMIYAMA  Ryoichi KAWAHARA  Tatsuya MORI  Haruhisa HASEGAWA  

     
    PAPER-Network

    Vol: E96-B No:1, Page(s): 143-154

    In Video on Demand (VoD) services, the demand for content items changes greatly over the course of the day. Because service providers are required to maintain a stable service during peak hours, they need to dimension system resources for the peak demand, so reducing the server load at peak times is important. To reduce the peak load of a content server, we propose multicasting popular content items to all users independently of actual requests, in addition to providing on-demand unicast delivery. With this solution, however, the hit ratio of pre-distributed content items is small, and large-capacity storage is required at each set-top box (STB). This problem can be mitigated by limiting the number of pre-distributed content items or by clustering users based on their viewing histories. We evaluated the effect of these techniques using actual VoD access-log data. We also evaluated the total cost of the multicast pre-distribution VoD system with the two proposed techniques.
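
    The sketch below only illustrates how the hit ratio of pre-distributing the top-K items can be estimated under an assumed Zipf-like popularity; the paper instead uses real VoD access logs and additionally clusters users, which is not reproduced here.

```python
# Hedged sketch: fraction of requests served locally from the STB when only
# the top-K most popular items are pushed, under an assumed popularity model.
def zipf_popularity(n_items, s=0.8):
    w = [1.0 / (i + 1) ** s for i in range(n_items)]
    total = sum(w)
    return [x / total for x in w]

def predistribution_hit_ratio(popularity, k):
    """Fraction of requests served from the STB when the top-k items are pushed."""
    return sum(sorted(popularity, reverse=True)[:k])

pop = zipf_popularity(10_000)
for k in (100, 500, 2000):
    print(f"top-{k}: {predistribution_hit_ratio(pop, k):.2%} of requests served locally")
```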

  • Analyzing Characteristics of TCP Quality Metrics with Respect to Type of Connection through Measured Traffic Data

    Yasuhiro IKEDA  Ryoichi KAWAHARA  Noriaki KAMIYAMA  Tatsuaki KIMURA  Tatsuya MORI  

     
    PAPER-Internet

    Vol: E96-B No:2, Page(s): 533-542

    We analyze measured traffic data to investigate the characteristics of TCP quality metrics such as packet retransmission rate, round-trip time (RTT), and throughput of connections classified by their type (client-server (C/S) or peer-to-peer (P2P)) or by the location of the connection host (domestic or overseas). Our findings are as follows. (i) The TCP quality metrics of the measured traffic data are not necessarily consistent with a theoretical formula proposed in a previous study. However, the average RTT and retransmission rate are negatively correlated with the throughput, which is in line with this formula. Furthermore, the maximum idle time, defined as the maximum length of the packet interarrival times, is negatively correlated with throughput. (ii) Each TCP quality metric of C/S connections is better than that of P2P connections. Here "better" means that either the throughput is higher or the other TCP quality metrics lead to higher throughput; for example, the average RTT or the retransmission rate is lower. Specifically, the median throughput of C/S connections is 2.5 times higher than that of P2P connections in the incoming direction of domestic traffic. (iii) The characteristics of TCP quality metrics depend on the location of the host of the TCP connection. There are cases in which overseas servers might use a different TCP congestion control scheme. Even if we eliminate these servers, there is still a difference between domestic and overseas traffic in the degree of impact the average RTT has on throughput. One reason for this is thought to be the difference in the maximum idle time; another is that the congestion levels of these types of traffic differ even when their average RTTs are the same.

  • Optimally Designing ISP-Operated CDN

    Noriaki KAMIYAMA  Tatsuya MORI  Ryoichi KAWAHARA  Haruhisa HASEGAWA  

     
    PAPER-Network

    Vol: E96-B No:3, Page(s): 790-801

    Recently, the number of users downloading video content on the Internet has dramatically increased, and it is highly anticipated that downloading huge, rich content such as movie files will become a popular use of the Internet in the near future. The transmission bandwidth consumed by delivering rich content is enormous, so it is urgent for ISPs to design an efficient delivery system that minimizes the amount of network resources consumed. To deliver web content efficiently, a content delivery network (CDN) is often used. CDN providers collocate a huge number of servers within multiple ISPs without being given detailed network information, i.e., network topologies, by the ISPs. Minimizing the amount of network resources consumed is difficult because a CDN provider selects a server for each request based only on rough estimates of response time. Therefore, an ordinary CDN is not suited for delivering rich content. P2P-based delivery systems are becoming popular as scalable delivery systems. However, with a P2P-based system we still cannot obtain the delivery pattern that is optimal for ISPs because the server locations depend on users behaving selfishly. To provide rich content to users economically and efficiently, an ISP itself should optimally provide servers with huge storage capacities at a limited number of locations within its network. In this paper, we investigate the content deployment method, the content delivery process, and the server allocation method that are desirable for this ISP-operated CDN. Moreover, we evaluate the effectiveness of the ISP-operated CDN using the actual network topologies of commercial ISPs.

  • Traffic Engineering of Peer-Assisted Content Delivery Network with Content-Oriented Incentive Mechanism

    Naoya MAKI  Takayuki NISHIO  Ryoichi SHINKUMA  Tatsuya MORI  Noriaki KAMIYAMA  Ryoichi KAWAHARA  Tatsuro TAKAHASHI  

     
    PAPER-Network and Communication

    Vol: E95-D No:12, Page(s): 2860-2869

    In content services where people purchase and download large-volume content, minimizing network traffic is crucial for the service provider and the network operator, since they want to lower the cost charged for bandwidth and the cost of network infrastructure, respectively. Traffic localization is an effective way of reducing network traffic. Network traffic is localized when a client can obtain the requested content files from a nearby altruistic client instead of from the source servers. The concept of the peer-assisted content distribution network (CDN) can reduce the overall traffic with this mechanism and enables service providers to minimize traffic without deploying or borrowing distributed storage. To localize traffic effectively, content files that are likely to be requested by many clients should be cached locally. This paper presents a novel traffic engineering scheme for peer-assisted CDN models. Its key idea is to control the behavior of clients by using a content-oriented incentive mechanism. This approach enables us to optimize traffic flows by letting altruistic clients download the content files that are most likely to contribute to localizing traffic among clients. In order to let altruistic clients request the desired files, we combine content files while keeping the price equal to that of a single content item. This paper presents a solution for optimizing the selection of content files to be combined so that cross traffic in the network is minimized. We also give a model for analyzing the upper-bound performance and present numerical results.

  • An Adaptive Load Balancing Method for Multiple Paths Using Flow Statistics and Its Performance Analysis

    Ryoichi KAWAHARA  

     
    PAPER-Network

    Vol: E87-B No:7, Page(s): 1993-2003

    We propose an adaptive load balancing method for multiple paths that makes it possible to achieve high TCP performance on each path. In conventional load balancing methods, link utilization is the main parameter to be balanced among the multiple paths established between an ingress and egress node pair. However, when we take TCP-level performance into account, balancing the traffic in terms of link utilization alone may not always result in balanced TCP performance on each path. Our method utilizes flow statistics, such as the number of active flows on each path, which are easy to measure, and can thus account for TCP performance. By adaptively equalizing the average bandwidth used per active flow on each path, calculated by dividing the input rate to the path by the mean number of active flows, our method achieves fair and high TCP performance on each path. Unlike other methods, intermediate nodes between an ingress-egress pair are not required to perform any traffic control or measurement beyond normal packet forwarding. We describe a load balancing method for adaptively equalizing the average bandwidth used per active flow on each path and show its effectiveness under heterogeneous conditions through simulation analysis.
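
    As an illustration of the balancing target (not the paper's controller), the sketch below computes the average bandwidth per active flow on each path and nudges the traffic split toward paths whose flows currently enjoy above-average bandwidth; the update gain and traffic figures are assumptions.

```python
# Hedged sketch: adapt multipath split ratios so the average bandwidth per
# active flow (input rate / mean active flows) is equalized across paths.
def per_flow_bandwidth(paths):
    """paths: {name: {"rate": Mb/s into the path, "flows": mean active flows}}."""
    return {p: v["rate"] / max(v["flows"], 1e-9) for p, v in paths.items()}

def update_split(split, paths, gain=0.3):
    bw = per_flow_bandwidth(paths)
    avg = sum(bw.values()) / len(bw)
    # Shift share toward paths whose flows currently enjoy above-average bandwidth.
    raw = {p: max(split[p] * (1 + gain * (bw[p] - avg) / avg), 1e-6) for p in split}
    total = sum(raw.values())
    return {p: r / total for p, r in raw.items()}

split = {"path_a": 0.5, "path_b": 0.5}
paths = {"path_a": {"rate": 40.0, "flows": 80},    # 0.5 Mb/s per flow
         "path_b": {"rate": 40.0, "flows": 20}}    # 2.0 Mb/s per flow
print(update_split(split, paths))                   # more share moves to path_b
```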

  • Overload Control for the Intelligent Network and Its Analysis by Simulation

    Ryoichi KAWAHARA  Takuya ASAKA  Shuichi SUMITA  

     
    PAPER

    Vol: E78-B No:4, Page(s): 494-503

    This paper reports an overload control method for the Intelligent Network (IN). The IN, which is being investigated as a future communication network, facilitates both rapid introduction of new services and easy modification of existing services. In the IN, the call processing functions and data needed to provide IN services are distributed over several nodes. Therefore, traffic demand for the various services may cause varying patterns of node overload. It is thus important to develop effective overload control methods and to evaluate their characteristics. We propose an overload control method and evaluate its characteristics in comparison with other methods under various overload traffic patterns, using a network simulator that models all nodes and their relationships in the IN. In particular, we focus on three aspects of overload control: how to maintain high throughput, how to stabilize an overloaded node, and how to guarantee fair access.

  • Network Tomography for Information-Centric Networking

    Ryoichi KAWAHARA  Takuya YANO  Rie TAGYO  Daisuke IKEGAMI  

     
    PAPER-Network

    Publicized: 2021/09/24, Vol: E105-B No:3, Page(s): 259-269

    This paper proposes a network tomography scheme for information-centric networking (ICN), which we call ICN tomography. When content is received over a conventional IP network, communication occurs after the content name is converted into an IP address, i.e., a locator that identifies a position in the network. By contrast, in ICN, communication is achieved by directly specifying the content name or content ID. The content is sent to the requesting user by a nearby node holding the content or a cached copy, which makes it difficult to apply to ICN conventional network tomography that takes as input end-to-end quality of service (QoS) measurements and routing information between source and destination node pairs. This is because, in ICN, the end-to-end flow for an end host receiving some content can take various routes; therefore, the intermediate and source nodes can vary. In this paper, we first describe the technical challenges of applying network tomography to ICN. We then propose ICN tomography, in which we use the content name as an endpoint to define an end-to-end QoS measurement and a routing matrix. In defining the routing matrix, we assume that the end-to-end flow follows probabilistic routing. Finally, the effectiveness of the proposed method is evaluated through numerical analysis and simulation.
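
    The sketch below shows the estimation step in miniature: rows of the routing matrix hold the probability that a request for a given content name traverses each link, and per-link delays are recovered from mean end-to-end delays by least squares; the topology, probabilities, and delays are toy values.

```python
# Hedged sketch: tomography with content names as endpoints and a
# probabilistic (rather than 0/1) routing matrix. Toy data throughout.
import numpy as np

# 3 links, 4 content names. A[c, l] = P(request for content c crosses link l).
A = np.array([[1.0, 0.7, 0.0],
              [1.0, 0.0, 0.9],
              [0.0, 1.0, 0.4],
              [0.5, 0.5, 0.5]])

true_link_delay = np.array([5.0, 12.0, 3.0])        # ms, unknown in practice
measured = A @ true_link_delay + np.random.normal(0, 0.2, size=4)  # noisy E2E means

est, *_ = np.linalg.lstsq(A, measured, rcond=None)
print("estimated per-link delays (ms):", np.round(est, 1))
```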

  • Identifying Heavy-Hitter Flows from Sampled Flow Statistics Open Access

    Tatsuya MORI  Tetsuya TAKINE  Jianping PAN  Ryoichi KAWAHARA  Masato UCHIDA  Shigeki GOTO  

     
    PAPER

    Vol: E90-B No:11, Page(s): 3061-3072

    With the rapid increase of link speeds in recent years, packet sampling has become a very attractive and scalable means of collecting flow statistics; however, it also makes inferring the original flow characteristics much more difficult. In this paper, we develop techniques and schemes to identify flows with a very large number of packets (also known as heavy-hitter flows) from sampled flow statistics. Our approach follows a two-stage strategy: we first parametrically estimate the original flow-length distribution from sampled flows; we then identify heavy-hitter flows with Bayes' theorem, using the flow-length distribution estimated in the first stage as the a priori distribution. Our approach is validated and evaluated with publicly available packet traces. We show that our approach provides a very flexible framework for striking an appropriate balance between false positives and false negatives for a given sampling frequency.
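
    The second stage can be illustrated as follows: given a previously estimated flow-length distribution as the prior and Bernoulli packet sampling, Bayes' theorem yields the probability that a flow with k sampled packets is genuinely heavy; the discrete prior and the sampling rate below are made-up numbers.

```python
# Hedged sketch of the Bayesian identification stage: P(length >= H | k sampled),
# with a given prior over flow lengths and Bernoulli sampling at rate f.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

def posterior_heavy(k, prior, f, heavy_len):
    """prior: {flow_length: probability}. Returns P(length >= heavy_len | k sampled)."""
    joint = {n: prior[n] * binom_pmf(k, n, f) for n in prior}
    evidence = sum(joint.values())
    if evidence == 0:
        return 0.0
    return sum(p for n, p in joint.items() if n >= heavy_len) / evidence

# Assumed prior: heavy tail over a few representative flow lengths.
prior = {1: 0.55, 5: 0.25, 20: 0.12, 100: 0.06, 1000: 0.02}
f = 0.01                                    # 1-in-100 packet sampling
for k in range(0, 4):
    print(k, "sampled packets ->", round(posterior_heavy(k, prior, f, heavy_len=100), 3))
```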

  • Packet Sampling TCP Flow Rate Estimation and Performance Degradation Detection Method

    Ryoichi KAWAHARA  Tatsuya MORI  Keisuke ISHIBASHI  Noriaki KAMIYAMA  Hideaki YOSHINO  

     
    PAPER-Measurement Methodology for Network Quality Such as IP, TCP and Routing

    Vol: E91-B No:5, Page(s): 1309-1319

    Managing performance at the flow level through traffic measurement is crucial for effective network management. With the rapid rise in link speeds, collecting all packets has become difficult, so packet sampling has been attracting attention as a scalable means of measuring flow statistics. In this paper, we first propose a method of estimating the TCP flow rates of sampled flows through packet sampling, and then develop a method of detecting performance degradation at the TCP flow level from the estimated flow rates. In the rate-estimation method, we use the sequence numbers of sampled packets, which makes it possible to markedly improve the accuracy of estimating the flow rates of sampled flows. Using both an analytical model and measurement data, we show that this method gives accurate estimates. We also show that, by observing the estimated rates of sampled flows, we can detect TCP performance degradation. The detection method is based on the following two findings: (i) sampled flows tend to have high flow rates, and (ii) when a link becomes congested, the performance of high-rate flows degrades first. These characteristics indicate that sampled flows are sensitive to congestion, so we can detect performance degradation of congestion-sensitive flows by observing the rates of sampled flows. We also show the effectiveness of our method using measurement data.
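
    A minimal sketch of the rate estimate follows: because TCP sequence numbers count the bytes sent between two sampled packets of a flow, the sequence-number span divided by the time span recovers the flow rate far better than counting sampled bytes alone; the wrap-around handling and the simple degradation test are simplifications.

```python
# Hedged sketch: estimate a sampled TCP flow's rate from the sequence numbers
# of its sampled packets, then flag congestion when most sampled flows are slow.
def flow_rate_bps(samples):
    """samples: list of (timestamp_s, tcp_seq) for one flow, time-ordered."""
    if len(samples) < 2:
        return None
    (t0, s0), (t1, s1) = samples[0], samples[-1]
    if t1 == t0:
        return None
    span = (s1 - s0) % 2**32            # crude handling of 32-bit seq wrap
    return 8.0 * span / (t1 - t0)       # bits per second

def degradation_suspected(rates, floor_bps=1e6, fraction=0.5):
    """Flag when most sampled (rate-biased, hence congestion-sensitive) flows
    fall below a rate floor."""
    rates = [r for r in rates if r is not None]
    return bool(rates) and sum(r < floor_bps for r in rates) / len(rates) > fraction

# Toy flow: two sampled packets 0.5 s apart, 600 kB of sequence space between.
print(flow_rate_bps([(10.0, 1_000), (10.5, 601_000)]))        # ~9.6 Mb/s
print(degradation_suspected([9.6e6, 0.4e6, 0.2e6], floor_bps=1e6))
```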

  • Workflow Extraction for Service Operation Using Multiple Unstructured Trouble Tickets

    Akio WATANABE  Keisuke ISHIBASHI  Tsuyoshi TOYONO  Keishiro WATANABE  Tatsuaki KIMURA  Yoichi MATSUO  Kohei SHIOMOTO  Ryoichi KAWAHARA  

     
    PAPER

    Publicized: 2018/01/18, Vol: E101-D No:4, Page(s): 1030-1041

    In current large-scale IT systems, troubleshooting has become more complicated due to the diversification of failure causes, which has increased operational costs. Clarifying the troubleshooting process has therefore become important, though doing so is time-consuming. We propose a method of automatically extracting a workflow, a graph representing a troubleshooting process, from multiple trouble tickets. Our method extracts an operator's actions from free-format text and aligns related sentences across multiple trouble tickets. It uses a stochastic model to detect a resolution, a frequent action pattern that helps us understand how to solve a problem. We validated our method using trouble-ticket data captured from a real network operation and showed that it can extract a workflow that helps identify the cause of a failure.

  • Method of Implementing GFR Service in Large-Scale Networks Using ABR Control Mechanism and Its Performance Analysis

    Ryoichi KAWAHARA  Yuki KAMADO  Masaaki OMOTANI  Shunsaku NAGATA  

     
    PAPER-Communication Networks and Services

    Vol: E82-B No:12, Page(s): 2081-2094

    This paper proposes implementing guaranteed frame rate (GFR) service using the available bit rate (ABR) control mechanism in large-scale networks. GFR is being standardized as a new ATM service category to provide a minimum cell rate (MCR) guarantee to each virtual channel (VC) at the frame level. Although ABR also can support MCR, a source must adjust its cell emission rate according to the network congestion indication. In contrast, GFR service is intended for users who are not equipped to comply with the source behavior rules required by ABR. It is expected that many existing users will fall into this category. As one implementation of GFR, weighted round robin (WRR) with per-VC queueing at each switch is well known. However, WRR is hard to implement in a switch supporting a large number of VCs because it needs to determine in one cell time which VC queue should be served. In addition, it may result in ineffective bandwidth utilization at the network level because its control mechanism is closed at the node level. On the other hand, progress in ABR service standardization has led to the development of some ABR control algorithms that can handle a large number of connections. Thus, we propose implementing GFR using an already developed ABR control mechanism that can cope with many connections. It consists of an explicit rate (ER) control mechanism and a virtual source/virtual destination (VS/VD) mechanism. Allocating VSs/VDs to edge switches and ER control to backbone switches enables us to apply ABR control up to the entrance of a network, which results in effective bandwidth utilization at the network level. Our method also makes it possible to share resources between GFR and ABR connections, which decreases the link cost. Through simulation analysis, we show that our method can work better than WRR under various traffic conditions.

  • Name Resolution Based on Set of Attribute-Value Pairs of Real-World Information

    Ryoichi KAWAHARA  Hiroshi SAITO  

     
    PAPER-Network

    Publicized: 2016/08/04, Vol: E100-B No:1, Page(s): 110-121

    It is expected that a large number of different objects, such as sensor devices and consumer electronics, will be connected to future networks. For such networks, we propose a name resolution method for directly specifying a condition on a set of attribute-value pairs of real-world information, without needing prior knowledge of the uniquely assigned name of a target object, e.g., a URL. For name resolution, we need an algorithm that finds the target object(s) satisfying a query condition on multiple attributes. To address the problem that multi-attribute searching algorithms may not work well as the number of attributes (i.e., dimensions) d increases, which is related to the curse of dimensionality, we also propose a probabilistic searching algorithm that reduces searching time at the expense of a small probability of false positives. With this algorithm, we choose permutation pattern(s) of the d attributes and use only the first K (K « d) attributes to search for objects, so that these first K attributes contain the relevant ones with high probability. We argue that our algorithm can identify the target objects with a false positive rate of less than 10^-6 and at a few percent of the tree-searching cost of a naive d-dimensional search, under certain conditions.
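
    The sketch below captures only the flavor of the probabilistic search: a few indexes are built on the first K attributes of random permutations, a query is answered from the index that best covers its attributes, and the remaining conditions are verified on the shortlist; the data, K, and index-selection rule are illustrative, and the paper's false-positive and cost analysis is not reproduced.

```python
# Hedged sketch: multi-attribute lookup via indexes keyed on the first K
# attributes of random permutations, with verification on the shortlist.
import random

def build_indexes(objects, attributes, k=2, n_perm=3, seed=0):
    rng = random.Random(seed)
    indexes = []
    for _ in range(n_perm):
        key_attrs = tuple(rng.sample(attributes, len(attributes))[:k])
        table = {}
        for oid, attrs in objects.items():
            table.setdefault(tuple(attrs.get(a) for a in key_attrs), []).append(oid)
        indexes.append((key_attrs, table))
    return indexes

def search(indexes, objects, query):
    # Pick the index whose key attributes are best covered by the query.
    key_attrs, table = max(indexes, key=lambda ix: sum(a in query for a in ix[0]))
    candidates = []
    for key, oids in table.items():
        if all(a not in query or v == query[a] for a, v in zip(key_attrs, key)):
            candidates.extend(oids)
    # Verify the remaining attribute-value pairs on the shortlist.
    return [o for o in candidates
            if all(objects[o].get(a) == v for a, v in query.items())]

objects = {
    "sensor1": {"type": "temp", "room": "101", "floor": "1", "owner": "labA"},
    "sensor2": {"type": "temp", "room": "202", "floor": "2", "owner": "labB"},
    "cam1":    {"type": "camera", "room": "101", "floor": "1", "owner": "labA"},
}
idx = build_indexes(objects, ["type", "room", "floor", "owner"])
print(search(idx, objects, {"type": "temp", "floor": "1"}))     # -> ['sensor1']
```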

  • Performance of TCP/IP over ATM over an ADSL

    Ryoichi KAWAHARA  Hiroshi SAITO  

     
    PAPER-IP/ATM

    Vol: E83-B No:2, Page(s): 140-154

    The performance of TCP/IP over ATM over an asymmetric digital subscriber line (ADSL) was investigated. Because the bandwidth of an ADSL link can vary over time due to changes in the link's physical conditions, which degrades TCP performance, we performed simulations for various ATM traffic controls, including available bit rate (ABR) and generic flow control, used to handle variations in the ADSL bandwidth. This analysis showed that using an ABR control is effective under various traffic conditions. An ABR switch algorithm that can achieve good performance under any condition was investigated.
