
Author Search Result

[Author] Sumit GHOSH (5 hits)

  • An Adaptive Dynamic Buffer Management (ADBM) Approach for Input Buffers in ATM Networks

    Ricardo CITRO  Tony S. LEE  Seong-Soon JOO  Sumit GHOSH  

     
    PAPER-Switching for Communications

    Vol: E88-B No:3  Page(s): 1084-1096

    Current literature on input buffer management reveals that, in representative ATM networks under highly bursty traffic conditions, the fuzzy thresholding approach yields a lower cell loss rate at the cost of lower throughput, while under less bursty traffic the traditional fixed thresholding approach achieves higher throughput at the expense of a higher cell loss rate. The integration of these two properties into practice is termed the adaptive dynamic buffer management (ADBM) approach for input buffers, and its assessment is the objective of this paper. The argument is that, since traffic conditions change constantly, the network control must, to achieve efficiency during actual operation, dynamically switch at every ATM switch, under the call processor's control, between the two input buffer management techniques, as dictated by the nature of the traffic at the inputs of the corresponding switch. The need to involve the call processor marks the first effort in the literature to dynamically configure input buffer management architectures at the switch fabric level under higher-level call processor control. It stems from the fact that the switch fabric operates very fast and cannot engage in complex decision making without incurring a stiff penalty. To achieve this goal, the network control needs knowledge of the burstiness of the traffic at the inputs of every ATM switch. The difficulties with this need are two-fold. First, it is not always easy to obtain the traffic model and model parameters for a specific user's call. Second, even where the traffic model and the model parameters are known for a specific user's call, this knowledge is valid only at the source switch where the user interfaces with the network. At all other switches in the network, the cells of the traffic in question interact asynchronously with the cells from other traffic sources and are subject to statistical multiplexing. Thus, obtaining the exact nature of the composite traffic at the inputs of any ATM switch is a challenge. Conceivably, one may determine the burstiness by counting the number of cells arriving at the inputs of an ATM switch over a defined time interval. The challenge posed by this proposition lies in the very definition of burstiness, in that the time interval must approach, in the limit, zero or the resolution of time in the network. To address this challenge, first, a 15-node representative ATM network is modeled in an asynchronous, distributed simulator; second, it is simulated on a network of workstations under realistic traffic stimuli; third, burstiness indices are measured for the synthetic, stochastic traffic at the inputs of every ATM switch as a function of the progress of the simulation, for different choices of time interval values ranging from 20,000 timesteps down to 1,000 timesteps, where a timestep equals 2.73 µs. Results reveal that consistent burstiness indices are obtained for interval choices between 1,000 and 5,000 timesteps, and that a burstiness index of 25, measured at a 3,000-timestep interval, constitutes a reasonable and practical threshold value that distinguishes highly bursty traffic, which warrants the use of the fuzzy thresholding approach, from less bursty traffic, which can benefit from the fixed thresholding scheme. A comparative performance analysis of ADBM yields the following. For the pure fixed and pure fuzzy thresholding schemes, the throughputs are 73.88% and 71.53%, while the cell drop rates are 4.31% and 2.44%, respectively. For the ADBM approach, where the input buffer management alternates at each individual ATM switch between the fixed and fuzzy schemes, governed by the measured burstiness index threshold of 25 for a 3,000-timestep interval, the throughput is 74.77%, higher than even the pure fixed scheme, while the cell drop rate is 2.21%, lower than that of the pure fuzzy scheme. In essence, ADBM successfully integrates the best characteristics of the fuzzy and fixed thresholding schemes.
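
    The ADBM selection rule lends itself to a compact illustration. The following Python sketch is not the authors' code; only the interval length, the timestep duration, and the threshold value come from the abstract, and all class and function names are hypothetical. Since the paper's exact burstiness index computation is not given here, a plain per-interval cell count stands in for it.

    TIMESTEP_US = 2.73           # one simulation timestep, in microseconds (from the paper)
    INTERVAL = 3000              # measurement interval, in timesteps (from the paper)
    BURSTINESS_THRESHOLD = 25    # switch-over point reported by the paper

    class AdbmController:
        """Hypothetical call-processor-level selector for one ATM switch."""

        def __init__(self):
            self.cell_count = 0
            self.scheme = "fixed"    # start with the fixed thresholding scheme

        def on_cell_arrival(self):
            # Count cells arriving at the switch inputs within the interval.
            self.cell_count += 1

        def on_interval_end(self):
            # Stand-in burstiness index: the raw cell count per interval.
            burstiness_index = self.cell_count
            self.scheme = ("fuzzy" if burstiness_index > BURSTINESS_THRESHOLD
                           else "fixed")
            self.cell_count = 0
            return self.scheme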

  • The Influence of Buffer Management on End-to-End Cell Delay in a Cell Switching Network

    Qutaiba RAZOUQI  Sumit GHOSH  

     
    PAPER-Switching

    Vol: E86-B No:3  Page(s): 1073-1081

    This is the first paper to report the influence of a fuzzy-thresholding-based buffer management scheme on the end-to-end delay performance of cell switching networks, including asynchronous transfer mode (ATM) networks. In this approach, the selectively blocked cells, i.e., those accounting for the difference in cell loss due to buffer overflow between the traditional fixed and fuzzy schemes, are re-routed to their final destinations. A 50-switch, representative, cell-switching network under fuzzy thresholding is first modeled; second, it is simulated on a testbed consisting of a network of 25+ Pentium workstations under Linux, configured as a loosely-coupled parallel processor; and third, its performance is studied under realistic input traffic conditions. A total of 10,000 user calls, generating between 1.0 and 1.5 million ATM cells, is stochastically distributed among the nodes. Performance analysis reveals that, for different input traffic distributions ranging from light to moderate to heavy, the re-routing approach successfully routes these blocked cells, although it causes the average end-to-end cell delay in the network to increase, compared to the fixed scheme, by a factor ranging from 1.65 for relatively light traffic to 6.7 for heavy traffic.
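
    The re-routing rule admits a short illustration. The Python sketch below uses hypothetical names throughout and assumes, for illustration only, that the fuzzy scheme's instantaneous threshold sits below the fixed threshold at the moment a cell arrives: cells falling between the two thresholds, which the fuzzy scheme selectively blocks but the fixed scheme would have admitted, are re-routed rather than dropped.

    def handle_arriving_cell(cell, buffer, fuzzy_threshold, fixed_threshold,
                             reroute):
        occupancy = len(buffer)
        if occupancy < fuzzy_threshold:
            buffer.append(cell)    # admitted by both schemes
        elif occupancy < fixed_threshold:
            reroute(cell)          # selectively blocked by the fuzzy scheme:
                                   # re-routed to its destination, not dropped
        else:
            pass                   # overflow under both schemes: cell is lost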

  • A Mathematical Framework for Asynchronous, Distributed, Decision-Making Systems with Semi-Autonomous Entities: Algorithm Synthesis, Simulation, and Evaluation

    Tony S. LEE  Sumit GHOSH  Jin LIU  Xiaolin GE  Anil NERODE  Wolf KOHN  

     
    PAPER-Systems and Control

    Vol: E83-A No:7  Page(s): 1381-1395

    For many military and civilian large-scale, real-world systems of interest, data are first acquired asynchronously, i.e., at irregular intervals of time, at geographically-dispersed sites, processed utilizing decision-making algorithms, and the processed data are then disseminated to other appropriate sites. The term real-world refers to systems under computer control that relate to everyday life and benefit society at large. The traditional approach to such problems consists of designing a central entity that collects all data, executes a decision-making algorithm sequentially to yield the decisions, and propagates the decisions to the respective sites. Centralized decision-making algorithms are slow and highly vulnerable to natural and artificial catastrophes. Recent literature includes successful asynchronous, distributed, decision-making algorithm designs wherein local decision-making at every site replaces the centralized decision-making to achieve faster response, higher reliability, and greater accuracy of the decisions. Two key open issues are the lack of an approach to synthesize asynchronous, distributed, decision-making algorithms for any given problem, and the absence of a comparative analysis of the quality of their decisions. This paper proposes MFAD, a Mathematical Framework for Asynchronous, Distributed Systems, that permits the description of centralized decision-making algorithms and facilitates the synthesis of distributed decision-making algorithms. MFAD is based on the Kohn-Nerode distributed hybrid control paradigm. It has long been believed that, since centralized control gathers all necessary data from every entity in the system and utilizes them to compute the decisions, the decisions may be "globally" optimal. In truth, however, as the frequency of the sensor data increases and the environment grows larger, more dynamic, and more complex, the decisions are called into question. In the distributed decision-making system, the centralized decision-making is replaced by that of the constituent entities, each of which aims at minimizing a Lagrangian, i.e., a local, non-negative cost criterion, subject to the constraints imposed by the global goal. Thus, computations are carried out locally, utilizing locally obtained data and appropriate information propagated from other sites. It is hypothesized that, with each entity engaged in optimizing its individual behavior asynchronously, concurrently, and independently of other entities, the distributed system will approach "globally" optimal behavior. While this paper does not claim that such algorithms may be synthesized for all centralized real-world systems, it implements both the centralized and distributed paradigms for a representative military battlefield command, control, and communication (C3) problem. It also simulates them on a testbed consisting of a network of workstations for a comparative performance evaluation of the centralized and decentralized paradigms within the MFAD framework. While the performance results indicate that the decentralized approach consistently outperforms the centralized scheme, this paper aims at developing a quantitative evaluation of the quality of decisions under the decentralized paradigm. To achieve this goal, it introduces a fundamental concept, embodied through a hypothetical entity termed the "Perfect Global Optimization Device" (PGOD), that generates perfect or ideal decisions. PGOD possesses perfect knowledge, i.e., the exact state information of every entity of the entire system, at all times, unaffected by delay. PGOD utilizes the same decision-making algorithm as the centralized paradigm and generates perfect, globally-optimal decisions which, though unattainable, provide a fundamental and absolute basis for comparing the quality of decisions. Simulation results reveal that the quality of decisions under the decentralized paradigm is superior to that of the centralized approach and approaches PGOD's decisions.
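
    The local optimization step at each entity can be sketched compactly. The Python fragment below is a minimal illustration, not the MFAD implementation: the quadratic cost and the neighbor-averaging constraint term are invented stand-ins, since the abstract does not specify them. Each entity asynchronously picks, from its candidate decisions, the one minimizing a local, non-negative Lagrangian that couples its own preference to information propagated from other sites.

    def local_cost(decision, preferred):
        # Stand-in local cost (non-negative by construction): squared
        # distance of the decision from a locally preferred value.
        return (decision - preferred) ** 2

    def constraint_violation(decision, neighbor_decisions):
        # Stand-in coupling term for the global-goal constraints: deviation
        # from the decisions most recently reported by other entities.
        if not neighbor_decisions:
            return 0.0
        return abs(decision - sum(neighbor_decisions) / len(neighbor_decisions))

    def entity_step(candidates, preferred, neighbor_decisions, weight=1.0):
        # Each entity acts asynchronously and independently: it minimizes its
        # local Lagrangian given whatever remote information it currently has.
        return min(candidates,
                   key=lambda d: local_cost(d, preferred)
                   + weight * constraint_violation(d, neighbor_decisions))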

  • The Impact of Source Traffic Distribution on Quality of Service (QoS) in ATM Networks

    Seshasayi PILLALAMARRI  Sumit GHOSH  

     
    PAPER-Network

    Vol: E87-B No:8  Page(s): 2290-2307

    A principal attraction of ATM networks, in both wired and wireless realizations, is that the key quality of service (QoS) parameters of every call, including end-to-end delay, jitter, and loss, are guaranteed by the network when appropriate cell-level traffic controls are imposed at the user network interface (UNI) on a per-call basis, utilizing the peak cell rate (PCR) and the sustainable cell rate (SCR) values for the multimedia (voice, video, and data) traffic sources. There are three practical difficulties with these guarantees. First, PCR and SCR values are, in general, difficult to obtain for traffic sources; the typical user-provided parameter is a combination of the PCR, the SCR, and the maximum burstiness over the entire duration of the traffic. Second, the difficulty in accurately defining the PCR arises from the requirement to specify the smallest time interval over which the PCR is computed, which, in the limit, approaches zero or the network's resolution of time. Third, the literature does not contain any reference to a scientific principle underlying these guarantees. Under these circumstances, the issue of providing QoS guarantees in the real world, through traffic controls applied on a per-call basis, is rendered uncertain. This paper adopts a radically different, high-level approach to the issue of QoS guarantees. It aims at uncovering, through systematic experimentation, a relationship, if any exists, between the key high-level user traffic characteristics and the resulting QoS measures in a realistic operational environment. It may be observed that, while each user is solely interested in the QoS of his/her own traffic, the network provider cares about two factors: (1) maximizing the link utilization in the network, since links constitute a significant investment, and (2) ensuring the QoS guarantees for every user traffic, thereby maintaining customer satisfaction. Based on these observations, this paper proposes a two-phase strategy. Under the first phase, the average "link utilization," computed over all the links in a network, is maintained within a range specified by the underlying network provider, through high-level call admission control, i.e., by limiting the volume of the traffic incident on the network at any time. The second phase is based on the hypothesis that the number of traffic sources, their nature (audio, video, or data), and the bandwidth distribution of the source traffic, admitted subject to a specific chosen value of "link utilization" in the network, will exert a unique influence on the cumulative delay distribution at the buffers of the representative nodes and, hence, on the QoS guarantees of each call. The underlying reasoning is as follows. The cumulative buffer delay distribution, at any given node and at any time instant, will clearly reflect the cumulative effect of the traffic distributions of the multiple connections currently active on the input links. Any bounds imposed on the cumulative buffer delay distribution at the nodes of the network will also dominate the QoS bounds of each of the constituent user traffic flows. Thus, for each individual traffic source, the buffer delay distributions at the nodes of the network, obtained for different traffic distributions, may serve as its QoS measure. If the hypothesis is proven true, in essence, the number of traffic sources and their bandwidth distribution will serve as a practically realizable high-level traffic control in providing realistic QoS guarantees for every call. To verify the correctness of the hypothesis, an experiment is designed that consists of a representative ATM network, traffic sources characterized through representative and realistic user-provided parameters, and a given set of input traffic volumes appropriate for a provider-approved link utilization measure. The key source traffic parameters include the number of sources incident on the network and the constituent links at any given time, the bandwidth requirement of the sources, and their nature. For each call, the constituent cells are generated stochastically, utilizing the typical user-provided parameter as an estimate of the bandwidth requirement. Extensive simulations reveal that, for a given link utilization level held uniform throughout the network, the QoS metrics (end-to-end cell delay, jitter, and loss) are superior in the presence of many calls, each with a low bandwidth requirement, and significantly worse when the network carries fewer calls of very high bandwidths. The findings demonstrate the feasibility of guaranteeing QoS for each and every call through high-level traffic controls. As for practicality, call durations are relatively long, ranging from milliseconds to even minutes, thereby enabling network management to exercise realistic controls over them, even in a geographically widely dispersed ATM network. In contrast, current traffic controls that act on ATM cells at the UNI face a formidable challenge from high-bandwidth traffic, where cell lifetimes may be extremely short, in the range of microseconds. The findings also underscore two additional important contributions of this paper. First, the network provider may collect data on the high-level user traffic characteristics, compute the corresponding average link utilization in the network, and measure the cumulative buffer delay distributions at the nodes in an operational network. The provider may then determine, based on all relevant criteria, a range of input and system parameters over which the network may be permitted to operate, the intersection of which may yield a realistic network operating point (NOP). During subsequent operation of the network, the provider may guide and maintain the network at a desired NOP by exercising control over the input and system parameters, including link utilization, call admittance based on the requested bandwidth, etc. Second, the finding constitutes a vulnerability of ATM networks that a perpetrator may exploit to launch a performance attack.
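
    The first phase, high-level call admission control against a provider-specified utilization range, can be sketched briefly. The Python fragment below is a minimal illustration under assumptions of its own: the data layout, the rollback scheme, and the 0.8 cap are all invented, since the abstract specifies only that average link utilization is held within a provider-chosen range. A call is admitted only if tentatively adding its estimated bandwidth along its route keeps the network-wide average link utilization within the cap.

    def average_link_utilization(links):
        # links: list of {"carried": ..., "capacity": ...} dicts, one per link.
        return sum(l["carried"] / l["capacity"] for l in links) / len(links)

    def admit_call(requested_bw, route_links, all_links, max_avg_util=0.8):
        # Tentatively place the call's estimated bandwidth on every link of
        # its route, then test the network-wide average utilization cap.
        for link in route_links:
            link["carried"] += requested_bw
        if average_link_utilization(all_links) <= max_avg_util:
            return True                      # admit: the network stays in range
        for link in route_links:
            link["carried"] -= requested_bw  # reject: roll back the tentative add
        return False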

  • On the Concept of "Stability" in Asynchronous Distributed Decision-Making Systems

    Tony S. LEE  Sumit GHOSH  

     
    PAPER-Real Time Control

    Vol: E83-B No:5  Page(s): 1023-1038

    Asynchronous, distributed, decision-making (ADDM) systems constitute a special class of distributed problems and are characterized as large, complex systems wherein the principal elements are geographically-dispersed entities that communicate among themselves, asynchronously, through message passing and are permitted autonomy in local decision-making. A fundamental property of ADDM systems is stability, which refers to their behavior under representative perturbations to their operating environments, given that such systems are intended to be real, complex, and, to some extent, mission-critical systems, and are subject to unexpected changes in their operating conditions. ADDM systems are closely related to autonomous decentralized systems (ADS) in their principal elements, the difference being that the characteristics and boundaries of ADDM systems are defined rigorously. This paper introduces the concept of stability in ADDM systems and proposes an intuitive yet practical and usable definition, inspired by those used in control systems and physics. A comprehensive stability analysis on an accurate simulation model will provide the necessary assurance, with a high level of confidence, that the system will perform adequately. An ADDM system is defined as stable if, having been initiated in a steady-state, it returns to a steady-state in finite time following a perturbation. Equilibrium, or steady-state, is defined by placing bounds on the measured error in the system. Where the final steady-state is equivalent to the initial one, the system is referred to as strongly stable. If the final steady-state is potentially worse than the initial one, the system is deemed marginally stable. When a system fails to return to a steady-state following the perturbation, it is unstable. The perturbations are classified as either changes in the input pattern or changes in one or more environmental characteristics of the system, such as hardware failures. Thus, the key elements in the study of stability are the steady-state, the perturbations, and the stability criterion. Since the development of rigorous analytical models for most ADDM systems is difficult, if not impossible, the definitions of these key elements, proposed in this paper, constitute a general framework for investigating stability. For a given ADDM system, the definitions are based on performance indices that must be judiciously identified by the system architect and are likely to be unique. While a comprehensive study of all possible perturbations is too complex and time-consuming, this paper focuses on a key subset of perturbations that are important and likely to occur with greater frequency. To facilitate the understanding of stability in representative real-world systems, this paper reports the analysis of two basic manifestations of ADDM systems that have been reported in the literature: (i) a decentralized military command and control problem, MFAD, and (ii) a novel distributed algorithm with soft reservation for efficient scheduling and congestion mitigation in railway networks, RYNSORD. Stability analysis of MFAD and RYNSORD yields key stable and unstable conditions.
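
    The three-way classification above maps directly onto a simple check over a post-perturbation error trace. The Python sketch below is a hypothetical rendering: the window length and the factor by which the "worse" steady-state bound exceeds the original are invented, whereas the paper ties both to system-specific performance indices. A system is strongly stable if its error re-enters the original bounds over some window, marginally stable if it settles only within looser bounds, and unstable if it never re-settles.

    def is_steady(errors, bound):
        # Steady-state is defined by bounding the measured error.
        return all(abs(e) <= bound for e in errors)

    def classify_stability(error_trace, bound, window=50, slack=2.0):
        # error_trace: per-timestep error measurements after the perturbation.
        windows = [error_trace[i:i + window]
                   for i in range(len(error_trace) - window + 1)]
        if any(is_steady(w, bound) for w in windows):
            return "strongly stable"     # returned to the original bounds
        if any(is_steady(w, slack * bound) for w in windows):
            return "marginally stable"   # settled, but at a worse steady-state
        return "unstable"                # never re-entered a steady-state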