Network Bibliography Search Results: infocom and 1998

Jonathan Rosenberg and Henning Schulzrinne, "Timer Reconsideration for Enhanced RTP Scalability," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), March/April 1998.

Abstract: RTP, the Real Time Transport Protocol, has gained widespread acceptance as the transport protocol for voice and video on the Internet. Its companion control protocol, the Real Time Control Protocol (RTCP), is used for loose session control, QoS reporting, and media synchronization, among other functions. The RTP specification describes an algorithm for determining the RTCP packet transmission rate at a host participating in a multicast RTP session. This algorithm was designed to allow RTP to be used in sessions with anywhere from one to a million members. However, we have discovered several problems with this algorithm when used with very large groups with rapidly changing group membership. One problem is the flood of RTCP packets which occurs when many users join a multicast RTP session at nearly the same time. To solve this problem, we present a novel adaptive timer algorithm called reconsideration. We present a mathematical analysis of this algorithm, and demonstrate that it performs extremely well, reducing the congestion problem by several orders of magnitude. We also back up these results with simulation.
Keywords: RTP; scaling; timers; reconsideration
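The timer reconsideration mechanism this abstract summarizes was later standardized for RTCP; a minimal sketch of the idea follows. The bandwidth share, average packet size, and minimum interval constants are illustrative assumptions, not values taken from the paper.

```python
import random

RTCP_BANDWIDTH = 0.05 * 256_000  # assumed: RTCP gets 5% of a 256 kb/s session (bits/s)
AVG_RTCP_SIZE = 800              # assumed average RTCP packet size, in bits
MIN_INTERVAL = 5.0               # assumed minimum RTCP report interval, in seconds

def rtcp_interval(members: int) -> float:
    """Deterministic interval that grows with group size, then randomized."""
    t = max(MIN_INTERVAL, members * AVG_RTCP_SIZE / RTCP_BANDWIDTH)
    return t * (random.random() + 0.5)   # jitter over [0.5, 1.5) x

def on_timer_expiry(tc: float, tp: float, members: int):
    """Reconsideration: when the timer fires at time tc, recompute the
    interval with the *current* membership estimate. If the previous
    transmission time tp plus the fresh interval is still in the past,
    send; otherwise the group has grown, so reschedule without sending.
    Returns (send, next_deadline)."""
    t = rtcp_interval(members)
    if tp + t <= tc:
        return True, tc + rtcp_interval(members)
    return False, tp + t
```

When many members join at once, the recomputed interval grows with the membership estimate, so most hosts reschedule rather than send — which is how the flood described in the abstract is suppressed.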

Jonathan Rosenberg and Henning Schulzrinne, "Internet Telephony Gateway Location," online in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), March/April 1998.

Abstract: Although the Internet was designed to handle non-real time data traffic, it is being used increasingly to carry voice and video. One important class of contributors to this growth is Internet telephones. Critical to more widespread use of Internet telephony is smooth interoperability with the existing telephone network. This interoperability comes through the use of Internet Telephony Gateways (ITGs), which perform protocol translation between an IP network and the Public Switched Telephone Network (PSTN). In order for an IP host to call a user on the PSTN, the IP host must know the IP address of an appropriate gateway. We consider here the problem of finding these gateways. An analysis of a number of protocol architectures is presented, including hierarchical databases, multicast advertisement, routing protocols, and centralized databases. We propose a new protocol architecture, called Brokered Multicast Advertisements (BMA), which serves as a lightweight, scalable mechanism for locating ITGs. The BMA architecture is general, and can be applied to location of any service across a wide area network.

Keywords: service location; Internet telephony; gateway; multicast

Nen-Fu Huang, Chi-An Su and Han-Chieh Chao, "Architectures and handoff schemes for CATV-based personal communications networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), March/April 1998.

Abstract: The initial cost to provide personal communications services (PCS) based on conventional networks is relatively high. As radio cells move toward smaller sizes, the traditional procedures for call setup and control are not well suited due to the high handoff frequency. The cable TV (CATV) network is one of the most attractive backbones for PCS due to its prevalence and broadcast nature, which significantly reduce the implementation costs and the handoff overheads. This paper proposes two architectures for a CATV-based PCS system. In the first architecture, each base station is equipped with multiple fixed receivers to provide fast and seamless handoffs for mobile terminals; however, it suffers from expensive hardware cost. In the second architecture, each base station is equipped with only a tunable receiver. This simple and economic architecture suffers from the possibility of offset conflict when mobile terminals hand off between cells. Three channel allocation algorithms are proposed to resolve the offset conflict problem. Simulation results indicate that the algorithm based on clustering performs much better than the other two schemes in terms of offset conflict probability.
Keywords: CATV; mobility; PCS; cable television

Lampros Kalampoukas, Anujan Varma and K. K. Ramakrishnan, "Explicit Window Adaptation: A Method to Enhance TCP Performance," online in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), March/April 1998.

Abstract: We study the performance of TCP in an internetwork consisting of both rate-controlled and non-rate-controlled segments. A common example of such an environment occurs when the end systems are part of IP datagram networks interconnected by a rate-controlled segment, such as an ATM network using the ABR service. In the absence of congestive losses in either segment, TCP keeps increasing its window to its maximum size. Mismatch between the TCP window and the bandwidth-delay product of the network will result in accumulation of large queues and possibly buffer overflows in the devices at the edges of the rate-controlled segment, causing degraded throughput and unfairness. We develop an explicit feedback scheme, called Explicit Window Adaptation, based on modifying the receiver's advertised window in TCP acknowledgments returning to the source. The window size indicated to TCP is a function of the free buffer in the edge device. Results from simulations with a wide range of traffic scenarios show that this explicit window adaptation scheme can control the buffer occupancy efficiently at the edge device, and results in significant improvements in packet loss rate, fairness, and throughput over a packet discard policy such as Drop-from-Front or Random Early Detection.
Keywords: TCP; congestion control; RED; fairness; transport protocols; ABR
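The abstract says only that the window indicated to TCP is "a function of the free buffer in the edge device". A toy sketch of that feedback path, with a hypothetical linear scaling `alpha` (not the function from the paper), might look like:

```python
def ewa_window(free_buffer: int, advertised: int, alpha: float = 0.5) -> int:
    """Clamp the receiver-advertised window to a fraction of the edge
    device's free buffer. The linear form alpha * free_buffer is an
    illustrative choice, not the paper's actual function."""
    return min(advertised, int(alpha * free_buffer))

def rewrite_ack(ack: dict, free_buffer: int) -> dict:
    """Rewrite the window field of a TCP ACK passing back through the
    edge device toward the source (the ACK is modeled as a plain dict)."""
    patched = dict(ack)
    patched["window"] = ewa_window(free_buffer, ack["window"])
    return patched
```

Because the source never opens its window beyond the rewritten value, queue buildup at the edge device stays bounded without any packet drops.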

Dolors Sala, John O. Limb and Sunil U. Khaunte, "Adaptive Control Mechanism for a Cable Modem MAC Protocol," online in Proceedings of the Conference on Computer Communications (IEEE Infocom), Roch Guerin, ed., (San Francisco, California), pp. 8, Mar. 1998.

Abstract: Cable plants were initially designed for one-way broadcast of analog television signals (from the head-end to the neighborhood). They are now being upgraded to provide an upstream path (from the home to the head-end). New challenges arise in using the upstream channel since the available bandwidth is low and the noise levels are high. Different variations of the reservation Aloha protocol have been proposed as the MAC protocol to efficiently share the scarce upstream capacity. The efficiency of the reservation protocol highly depends on the capacity assigned to the reservation channel. In this paper we present a control mechanism that dynamically adjusts the operating parameters to the current load on the system. The control mechanism does not require any framing structure and is built around a "sea of mini-slots". The performance under both static and highly dynamic loads is close to optimum. The station implementation remains particularly simple and the downstream control structure is also simple. Results are given here for fixed length data units (ATM cells) but the algorithm extends very simply to variable length MAC frames.
Keywords: MAC protocols; HFC; multiple access; residential broadband access; adaptive control; long propagation delay

Dean H. Lorenz and Ariel Orda, "QoS Routing in Networks with Uncertain Parameters," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 3, March/April 1998.

Abstract: This work considers the problem of routing connections with QoS requirements across networks when the information available for making routing decisions is inaccurate. This uncertainty about the actual state of a network component arises naturally in a number of different environments, which are reviewed in the paper. The goal of the route selection process is then to identify a path that is most likely to satisfy the QoS requirements. For end-to-end delay guarantees, this problem is intractable. However, we show that by decomposing the end-to-end constraint into local delay constraints, efficient and tractable solutions can be established. We first consider the simpler problem of decomposing the end-to-end constraint into local constraints for a given path. We show that, for general distributions, this problem is also intractable. Nonetheless, by defining a certain class of probability distributions which possess a certain convexity property, and restricting ourselves to that class, we are able to establish efficient and exact solutions. Moreover, we show that typical distributions would belong to that class. We then consider the general problem of combined path optimization and delay decomposition. We present an efficient solution scheme for the above class of probability distributions. Our solution is similar to that of the restricted shortest-path problem, which renders itself to near-optimal approximations of polynomial complexity. We also show that yet simpler solutions exist in the special case of uniform distributions.
Keywords: Routing; networks; QoS; delay; metric inaccuracy; topology aggregation

N. S. V. Rao and S. G. Batsell, "QoS Routing Via Multiple Paths Using Bandwidth Reservation," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 11, March/April 1998.

Abstract: We consider two generic routing problems via multiple paths in a computer network wherein bandwidth can be reserved, and guaranteed once reserved, on the links. The first problem requires that a message of finite length be transmitted from s to d within T units of time. The second problem requires that a sequential message of r units be transmitted at a rate of q such that the maximum time difference between two units received out of order is no more than q. We propose a polynomial-time algorithm for the first problem, and present simulation results to illustrate its applicability. We show the second problem to be NP-complete, and propose a polynomial-time approximate solution.
Keywords: Routing; networks; QoS

S. Mithal, "Bounds on End-to-End Performance via Greedy, Multi-Path Routing in Integrated Services Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 19, March/April 1998.

Abstract: In this paper we focus on the design and performance of resource management controls, especially state-dependent routing, with which to compute bounds on end-to-end performance in integrated services networks. The routing control problem at each hop is transformed into an optimization problem, the solution of which suggests a path selection algorithm that relies on the available capacity process at adjacent nodes. In the paper we focus on computing a bound on average end-to-end packet loss, although our results can easily be adapted to minimize mean transfer delay. The routing scheme based on the solution to the optimization problem is as follows: each node computes multiple alternate paths to a given destination. The available capacity process, e.g., available bandwidth, available buffer, etc., is derived from the link state updates received at each node, and the forwarding rule is based on some function of the maximum available capacity process applied to the set of possible nodes corresponding to the next hop on an alternate path for a particular source-destination pair. A consequence of this path selection rule is the pooling of network resources when load is increased. This allows us to prove the bound (at each hop) on packet loss in terms of network parameters. We show how the node reduction technique (and its associated bound) can be used as a building block to compute end-to-end performance bounds in a network setting.
Keywords: Quality of service; QoS routing; Brownian networks; finite buffers

A. Orda, "Routing with End to End QoS Guarantees in Broadband Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 27, March/April 1998.

Abstract: We consider routing schemes for connections with end-to-end delay requirements, and investigate several fundamental problems. First, we focus on networks which employ rate-based schedulers and hence map delay guarantees into nodal rate guarantees, as done with the Guaranteed Service class proposed for the Internet. We consider first the basic problem of identifying a feasible route for the connection, for which a straightforward, yet computationally costly solution exists. Accordingly, we establish several ε-optimal solutions that offer substantially lower computational complexity. We then consider the more general problem of optimizing the route choice in terms of balancing loads and accommodating multiple connections, for which we formulate and validate several optimal algorithms. We discuss the implementation of such schemes in the context of link-state and distance-vector protocols. Next, we consider the fundamental problem of constrained path optimization. This problem, typical of QoS routing, is NP-hard. While standard approximation methods exist, their complexity may often be prohibitive in terms of scalability. Such approximations do not make use of the particular properties of large-scale networks, such as the fact that the path selection process is typically presented with a hierarchical, aggregated topology. By exploiting the structure of such topologies, we obtain an ε-optimal algorithm for the constrained shortest path problem, which offers a substantial improvement in terms of scalability.
Keywords: QoS routing; rate-based schedulers; constrained path optimization; topology aggregation; hierarchical networks
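The constrained shortest-path problem this abstract refers to is NP-hard in general but admits a standard pseudo-polynomial dynamic program when delays are small integers. The sketch below is that textbook DP (not the paper's hierarchical ε-optimal algorithm) and assumes strictly positive integer edge delays.

```python
def constrained_shortest_path(edges, n, src, dst, max_delay):
    """best[v][d] = least cost to reach node v with total delay exactly d.
    edges: list of (u, v, cost, delay) tuples with integer delay >= 1."""
    adj = [[] for _ in range(n)]
    for u, v, cost, delay in edges:
        adj[u].append((v, cost, delay))
    INF = float("inf")
    best = [[INF] * (max_delay + 1) for _ in range(n)]
    best[src][0] = 0
    # Because every edge delay is >= 1, relaxing delay budgets in
    # increasing order processes each state after everything that reaches it.
    for d in range(max_delay + 1):
        for u in range(n):
            if best[u][d] == INF:
                continue
            for v, c, dl in adj[u]:
                nd = d + dl
                if nd <= max_delay and best[u][d] + c < best[v][nd]:
                    best[v][nd] = best[u][d] + c
    return min(best[dst])  # INF if no path meets the delay bound
```

Rounding delays onto a coarser grid turns this DP into the polynomial ε-approximation family for the restricted shortest-path problem that abstracts in this listing mention.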

D. Tse and S. Hanly, "Effective Bandwidths in Wireless Networks with Multiuser Receivers," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 35, March/April 1998.

Abstract: To meet the increasing capacity demand on wireless networks, there have been intense efforts in the past decade on developing multi-user receiver structures which mitigate the interference between users in spread-spectrum and antenna array systems. While much of the research is performed at the physical layer, the capacity of networks with multi-user receivers and the associated resource allocation problems are less well understood. In this paper, we show that under some conditions, the capacity of a single cell for several important receivers can be very simply characterized via a notion of effective bandwidth: the QoS requirements of all the users can be met if and only if the sum of the effective bandwidths of the users is less than the total number of degrees of freedom in the system. The number of degrees of freedom is the processing gain in a spread-spectrum system and the number of antenna elements in an antenna array. The effective bandwidth of a user depends only on its own QoS requirement, expressed in terms of the desired signal-to-interference ratio. It is hoped that such an abstraction of resource requirement will help in bridging the resource allocation problems at the networking layer and multi-user techniques at the physical layer.
Keywords: QoS routing; effective bandwidths; wireless networks
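The admission condition stated in the abstract — sum of effective bandwidths less than the degrees of freedom — is a one-line check once a receiver's effective-bandwidth formula is fixed. The form β/(1+β) for an MMSE receiver with target SIR β is the commonly cited one and is used here as an assumption:

```python
def effective_bandwidth_mmse(beta: float) -> float:
    """Effective bandwidth of a user with target SIR beta under an MMSE
    multiuser receiver (assumed form: beta / (1 + beta))."""
    return beta / (1.0 + beta)

def admissible(sir_targets, degrees_of_freedom: float) -> bool:
    """QoS targets are feasible iff the users' effective bandwidths sum to
    less than the degrees of freedom (processing gain or antenna count)."""
    return sum(effective_bandwidth_mmse(b) for b in sir_targets) < degrees_of_freedom
```

Each effective bandwidth here is below one, reflecting that an MMSE receiver lets a user consume less than a full degree of freedom for any finite SIR target.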

P. L. Hiew and M. Zukerman, "Teletraffic Issues Related to Channel Allocation in Digital Mobile Cellular Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 43, March/April 1998.

Abstract: In this paper, the performance of channel allocation schemes for TDMA-type Digital Mobile Cellular systems is studied. We study by simulation the effect of traffic loading and characteristics, including new call arrivals and inter-cell handovers, on blocking and drop-out probabilities. The channel allocation schemes under examination include: (1) Fixed Channel Allocation (FCA), where the number of frequency carriers in each cell is fixed; (2) Dynamic Channel Allocation, where the number of frequency carriers in each cell is adaptive and dependent on the load; and (3) Dynamic Frequency/Time Channel Allocation, where the number of channels is adaptive (based on the load), allowing two different time division channels of the same frequency carrier to be used in two neighboring cells. We also study the possible benefit of a simple channel reservation scheme on these systems. We demonstrate that the effects of arrival rate on blocking and drop-out probabilities are as expected: as the arrival rate increases, the blocking and drop-out probabilities increase. We demonstrate that channel reservation is suitable for FCA but not for schemes capable of dynamic channel allocation, as it may actually worsen drop-out rates in such schemes. We also demonstrate that under overload situations there is no significant benefit in the dynamic schemes, and in certain situations they may even perform worse than FCA.
Keywords: Channel allocation; mobile cellular; wireless networks

P. Whiting and S. Borst, "Performance Bounds for Dynamic Channel Assignment Schemes Operating under Varying Re-Use Constraints," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 51, March/April 1998.

Abstract: We derive bounds for the performance of dynamic channel assignment (DCA) schemes which strengthen the existing Erlang bound. The construction of the bounds is based on a reward paradigm as an intuitively appealing way of characterizing the achievable carried traffic region. In one-dimensional networks, our bounds closely approach the performance of Maximum Packing (MP), which is an idealized DCA scheme. This suggests not only that the bounds are extremely tight, but also that no DCA scheme, however sophisticated, can be expected to outperform MP in any significant manner, if at all. Our bounds extend to scenarios with varying re-use which may arise in the case of dynamic re-use partitioning or measurement-based DCA schemes. In these cases, the bounds slightly diverge from the performance of MP, which inflicts higher blocking on outer calls than inner calls, but not to the extent required to maximize carried traffic. This reflects the trade-off that arises in the case of varying reuse between efficiency and fairness. Asymptotic analysis confirms that schemes which minimize blocking intrinsically favor inner calls over outer calls, whereas schemes which do not discriminate among calls inevitably produce higher network-average blocking.
Keywords: performance bounds; blocking; networks

S. Sarkar and K. N. Sivarajan, "Channel Assignment Algorithms Satisfying Co-Channel and Adjacent Channel Reuse Constraints in Cellular Mobile Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 59, March/April 1998.

Abstract: Recently, improved channel assignment algorithms for cellular networks were designed by modeling the interference constraints in terms of a hypergraph [1]. However, these algorithms only considered cochannel reuse constraints. Receiver filter responses impose restrictions on simultaneous adjacent channel usage in the same cell or in neighboring cells. An asymptotically tight upper bound for the traffic carried by the system in the presence of arbitrary cochannel and adjacent channel reuse constraints was developed in [2]. However, the bound is computationally intractable even for small systems like a regular hexagonal cellular system of 19 cells. We have obtained approximations to the bound using the optimal solutions for cochannel reuse constraints only, together with a further graph-theoretic approach. Our approximations are computationally much more efficient and have turned out to track the exact performance bounds very closely in most cases of interest. We also present some heuristics for designing fixed channel assignment algorithms with a minimum number of channels satisfying both cochannel and adjacent channel reuse constraints.
Keywords: co-channel; adjacent channel; cellular; networks

O. Gerstel, R. Ramaswami and G. Sasaki, "Cost Effective Traffic Grooming in WDM Rings," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 69, March/April 1998.

Abstract: We provide network designs for optical wavelength division multiplexed (OWDM) rings that minimize overall network cost, rather than just the number of wavelengths needed. The network cost includes the cost of the transceivers required at the nodes as well as the number of wavelengths. The transceiver cost includes the cost of terminating equipment as well as higher-layer electronic processing equipment, and in practice can dominate the cost of the number of wavelengths in the network. The networks support dynamic (time-varying) traffic streams that are at lower rates (e.g., OC-3, 155 Mb/s) than the lightpath capacities (e.g., OC-48, 2.5 Gb/s). A simple OWDM ring is the point-to-point ring, where traffic is transported on WDM links optically but switched through nodes electronically. Although this network is efficient in using link bandwidth, it has high electronic and opto-electronic processing costs. Two OWDM ring networks are given that have similar performance but are less expensive. Two other OWDM ring networks are considered that are nonblocking, where one has a wide-sense nonblocking property and the other a rearrangeably nonblocking property. All the networks are compared using the cost criteria of number of wavelengths and number of transceivers.
Keywords: WDM; OWDM; ring networks

I. Baldine and G. N. Rouskas, "Dynamic Load Balancing in Broadcast WDM Networks with Tuning Latencies," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 78, March/April 1998.

Abstract: In this paper we study the problem of dynamic load balancing in broadcast WDM networks by retuning a subset of transceivers in response to changes in the overall traffic pattern. Assuming an existing wavelength assignment and some information regarding the new traffic demands, we present two approaches to obtaining a new wavelength assignment such that (a) the new traffic load is balanced across the channels, and (b) the number of transceivers that need to be retuned is minimized. The latter objective is motivated by the fact that tunable transceivers take a non-negligible amount of time to switch between wavelengths during which parts of the network are unavailable for normal operation. Our main contribution is a new approximation algorithm for the load balancing problem that provides for tradeoff selection, using a single parameter, between the two conflicting goals. This algorithm leads to a scalable approach to reconfiguring the network since, in addition to providing guarantees in terms of load balancing, the expected number of retunings scales with the number of channels, not the number of nodes in the network.
Keywords: WDM networks; transceiver; traffic

E. Modiano, "Unscheduled Multicasts in WDM Broadcast-and-Select Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 86, March/April 1998.

Abstract: This paper considers the problem of scheduling multicast transmissions in a WDM broadcast-based local area network. In very high speed networks optimal scheduling may be too time consuming and complex to be executed in real time; thus we are led to consider unscheduled (random) multicast transmissions. We consider two random scheduling schemes: in the first, a message is continuously retransmitted until it is received by all of its intended recipients; in the second, a random delay is introduced between retransmissions of the same message. We develop an exact throughput analysis for both schemes using methods from discrete-time queueing systems and show that the scheme with random delays between retransmissions results in higher throughput. Finally, we consider a number of receiver algorithms for selecting among multiple simultaneous transmissions and show that an algorithm where the receiver selects the message with the least number of intended recipients performs better than a random selection algorithm.
Keywords: multicast; WDM; throughput; random selection

O. Gerstel, P. Lin and G. Sasaki, "Wavelength Assignment in a WDM Ring: To Minimize Cost of Embedded SONET Rings," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 94, March/April 1998.

Abstract: This work deals with wavelength assignment for lightpaths. We restrict our attention to the near term, in which WDM networks will be in the form of rings and higher-level networks will be SONET/SDH self-healing rings. This view changes the goal of wavelength assignment (WLA) relative to previous work on the subject in a number of aspects. First, a pair of SONET add/drop multiplexers (ADMs) terminates each lightpath. These ADMs also terminate adjacent lightpaths to form rings, implying that the WLA has to support this type of sharing. Second, following [GRS98], we argue that the first-order optimization goal should be to minimize the overall network cost, which is dominated by the number of required ADMs and not the number of wavelengths. We show that these two minimization problems are intrinsically different, and there exist cases where the two minima cannot be simultaneously achieved. We derive a simple lower bound on the number of ADMs. Depending on the given lightpaths, we show that this lower bound is not always achievable. We also show that adding wavelength converters to the system does not improve the cost, but that splitting a lightpath and handling each part separately may reduce the total number of ADMs. We develop two heuristics to minimize the number of ADMs: Cut-First and Assign-First. Both heuristics, using different approaches, attempt to use the smallest number of ADMs possible. We also show that Cut-First always uses the minimum number of wavelengths but, because of the cut, may use more ADMs than necessary. However, the number of extra ADMs is proven to be bounded by the number of supported wavelengths and is typically much less. We show instances where Cut-First performs better than Assign-First and vice versa. Finally, we present a set of transformations that take any WLA and improve its cost.
Keywords: Wavelength assignment; wavelength division multiplexing (WDM); optical networks; SONET/SDH; self-healing rings

M. Gerla, P. Palnati, E. Leonardi and F. Neri, "Minimum Distance Routing in the Bidirectional Shufflenet," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 102, March/April 1998.

Abstract: In this paper we study the bidirectional shufflenet topology, which is obtained from the well-known (unidirectional) shufflenet by considering bidirectional links. More specifically, we define a shortest-path routing algorithm, and derive the diameter and the average distance of the topology. The bidirectional shufflenet is then compared, in terms of average distance, with other variations of the perfect shuffle.
Keywords: bidirectional shufflenet; topology

P. P. To and T. T. Lee, "The Multi-Dimensional Shuffle-Exchange Network: A Novel Topology for Regular Network Architectures," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 110, March/April 1998.

Abstract: In this paper a novel class of network topologies known as the Multi-dimensional Shuffle-exchange Network (MDSXN) is proposed. We show that the well-known de Bruijn graph and hypercube in fact both belong to the same class of graphs represented by the MDSXN. The MDSXN is therefore the unification and generalization of the de Bruijn graph and hypercube. We show that members of the MDSXN class inherit the topological properties of both the de Bruijn graph and hypercube to a varying degree. This allows us to trade off cost and performance effectively and construct networks which are most suitable for a particular purpose. Examples of applications of the MDSXN include structures for switching and multicasting networks, optical network topology, and virtual topology for local and metropolitan area networks.
Keywords: multi-dimensional; hyper-cube; shuffle; topology

M. Vaez and C.-T. Lea, "Strictly and Wide-Sense Nonblocking Photonic Switching Under Crosstalk Constraint," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 118, March/April 1998.

Abstract: Photonic switching systems based on directional coupler switching elements can switch multiple-wavelength signals at speeds of terabits per second. Crosstalk is an intrinsic shortcoming of directional couplers which must be overcome in the construction of an efficient switching system. It therefore adds a new dimension to the design of networks based on directional couplers. In this paper, we explore the principles of constructing strictly nonblocking directional coupler-based photonic switching systems under various crosstalk constraints. We show that a lower level of crosstalk can be achieved at the cost of additional hardware. We also introduce a rule that limits the routing of new connections based on the state of the switching network. Such a rule can significantly reduce the undesired crosstalk and, at the same time, the hardware complexity of the strictly nonblocking network. The resulting network is therefore categorized as a wide-sense nonblocking network.
Keywords: Banyan Network; photonic switching; crosstalk; strictly nonblocking; wide-sense nonblocking

I. Busi and A. Pattavina, "Strict-Sense Non-Blocking Conditions for Shuffle/Exchange Networks with Vertical Replication," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 126, March/April 1998.

Abstract: The strict-sense non-blocking conditions of an interconnection network built by vertical replication of a banyan network with additional stages are known only if the expanded banyan network has a recursive construction. Here we consider a Shuffle/Exchange network as the basic building block to be replicated vertically. Since the network does not have a recursive construction, the previous analysis cannot be applied. By relying on the topological properties of the channel graph in a Shuffle/Exchange network, we find that the non-blocking conditions for this network are numerically the same as those needed for a recursive banyan network.
Keywords: interconnection networks; non-blocking networks; shuffle/exchange networks; channel graph

G. Anastasi, L. Lenzini and E. Mingozzi, "Stability and Performance Analysis of HIPERLAN," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 134, March/April 1998.

Abstract: This paper thoroughly analyses the HIPERLAN MAC protocol, which is a standard for Wireless Local Area Networks (WLANs) defined by the European Telecommunications Standards Institute (ETSI). Since the HIPERLAN MAC protocol belongs to the class of Carrier Sensing Multiple Access with Collision Avoidance (CSMA/CA) protocols, in the first part of the paper we analyze the HIPERLAN stability problem. In the second part we present an in-depth performance analysis, by simulation, of the HIPERLAN MAC protocol. The analysis is performed by considering data traffic patterns (hereafter advanced data traffic) which have a very similar shape to traffic generated by WWW applications. Furthermore, in the analysis we also consider Poissonian data traffic. Our conclusion is that the HIPERLAN MAC protocol performs satisfactorily, although performance measures with advanced traffic are worse than the corresponding performance measures with Poissonian traffic. Finally, we broaden our analysis to include the influence of the packet lifetime and the channel access priority mechanism on the performance measures provided by HIPERLAN.
Keywords: performance; WLAN; ETSI; CSMA; HIPERLAN

F. Cali, M. Conti and E. Gregori, "IEEE 802.11 Wireless LAN: Capacity Analysis and Protocol Enhancement," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 142, March/April 1998.

Abstract: In WLANs the medium access control (MAC) protocol is the main element determining the efficiency of sharing the limited communication bandwidth of the wireless channel. This paper focuses on the efficiency of the IEEE 802.11 standard for wireless LANs. Specifically, we derive an analytical formula for the protocol capacity [Kur 84]. From this analysis we obtain: i) the theoretical upper bound of the IEEE 802.11 protocol capacity; ii) that the standard can operate very far from the theoretical limits, depending on the network configuration; iii) that an appropriate tuning of the backoff algorithm can drive the IEEE 802.11 protocol close to its theoretical limits. Hence we propose a distributed algorithm which enables each station to tune its backoff algorithm at run-time. The performance of the IEEE 802.11 protocol, enhanced with our algorithm, is investigated via simulation. The results indicate that the enhanced protocol is very close to the maximum theoretical efficiency.
Keywords: performance; WLAN; IEEE 802.11; protocol; capacity
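The capacity bound discussed in the abstract above turns on the per-slot success probability of a contention scheme in which each station transmits with some probability p. A minimal numerical sketch (an illustration of the general idea, not the paper's actual analysis; the function names are invented) shows why tuning the backoff toward p = 1/n matters:

```python
import math

def slot_success_prob(p, n):
    """Probability that exactly one of n stations transmits in a slot,
    given each transmits independently with probability p."""
    return n * p * (1.0 - p) ** (n - 1)

def best_p(n, grid=10000):
    """Numerically search for the per-station transmission probability
    that maximizes the per-slot success probability."""
    candidates = [(slot_success_prob(i / grid, n), i / grid)
                  for i in range(1, grid)]
    return max(candidates)[1]
```

For n stations the numerical optimum lands at p = 1/n, and the corresponding success probability tends to 1/e as n grows; deriving the exact 802.11 capacity limit from this kind of model, and tuning the backoff toward it at run-time, is what the paper makes precise.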

J.-C. Chen, K. M. Sivalingam, P. Agrawal and S. Kishore, "A Comparison of MAC Protocols for Wireless Local Networks Based on Battery Power Consumption," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 150, March/April 1998.

Abstract: Energy efficiency is an important issue in mobile wireless networks since the battery life of mobile terminals is limited. Conservation of battery power has been addressed using many techniques. This paper addresses energy efficiency in medium access control (MAC) protocols for wireless networks. The paper develops a framework to study the energy consumption of a MAC protocol from the transceiver usage perspective. This framework is then applied to compare the performance of a set of protocols that includes IEEE 802.11, EC-MAC, PRMA, MDR-TDMA, and DQRUMA. The performance metrics considered are transmitter and receiver usage times for packet transmission and reception. The analysis here shows that protocols that aim to reduce the number of contentions perform better from an energy consumption perspective. The receiver usage time, however, tends to be higher for protocols that require the mobile to sense the medium before attempting transmission.
Keywords: performance; WLAN; IEEE 802.11; protocol

R. Garcés and J. J. Garcia-Luna-Aceves, "A Near-Optimum Channel Access Protocol Based on Incremental Collision Resolution and Distributed Transmission Queues," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 158, March/April 1998.

Abstract: We introduce a new stable multiple access protocol for broadcast channels shared by multiple stations, which we call the incremental collision resolution multiple access (ICRMA) protocol. ICRMA dynamically divides the channel into cycles of variable length; each cycle consists of a contention period and a queue-transmission period. The queue-transmission period is a variable-length train of packets, which are transmitted by stations that have been added to the distributed transmission queue by successfully completing a collision-resolution round in a previous contention period. During the contention period, stations with one or more packets to send compete for the right to be added to the data-transmission queue using a deterministic tree-splitting algorithm. A single round of collision resolution is allowed in each contention period. Analytical results show that collision resolution in ICRMA is much more efficient than DQRAP's. Simulation and analytical results show that ICRMA's throughput is within 5% of the throughput achieved by the ideal channel access protocol based on a distributed transmission queue and incremental collision resolution.
Keywords: collision resolution; broadcast; channels; ICRMA; protocol
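Tree splitting of the kind ICRMA builds on can be illustrated with the classic randomized variant: colliding stations repeatedly flip coins to partition themselves until each subgroup holds at most one station. The sketch below is illustrative only (the paper's algorithm is deterministic and allows a single resolution round per contention period); it counts the contention slots consumed in resolving one collision:

```python
import random

def resolve(group, rng):
    """Resolve one collision among the stations in `group` by binary
    tree splitting; returns the number of contention slots consumed.
    Station identifiers are assumed distinct."""
    if len(group) <= 1:
        return 1  # an idle slot or a successful transmission
    # Collision: each station flips a fair coin and joins the left or
    # right subgroup; the subgroups are then resolved one after another.
    flips = [rng.random() < 0.5 for _ in group]
    left = [s for s, f in zip(group, flips) if f]
    right = [s for s, f in zip(group, flips) if not f]
    return 1 + resolve(left, rng) + resolve(right, rng)
```

With k colliding stations the count is at least 2k - 1 slots (k successes plus the collision slots of the splitting tree), which is why spreading resolution incrementally across cycles, as ICRMA does, pays off.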

E. Altman, T. Basar and R. Srikant, "Robust Rate Control for ABR Sources," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 166, March/April 1998.

Abstract: The paper considers the design of explicit rate-based flow control for ABR sources in an ATM network. The goal is to share the available capacity "fairly" among many sources while maintaining the queue length at a bottleneck node at a desired level. This problem is formulated as a stochastic control problem, and in this framework rate-control mechanisms are developed which stabilize the queue length even though different sources may have different round-trip delays to the bottleneck node. Various robustness properties of the solution are illustrated through simulation experiments.
Keywords: ATM; ABR; rate control; team theory; stochastic control

S. Prasad, K. Kiasaleh and P. Balsara, "LAPLUS: An Efficient, Effective and Stable Switch Algorithm for Flow Control of the Available Bit Rate ATM Service," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 174, March/April 1998.

Abstract: LAPLUS is a novel switch algorithm for flow control of the Available Bit Rate (ABR) Asynchronous Transfer Mode (ATM) service. It ensures a steady-state rate allocation satisfying the MCR-plus-equal-share criterion. It only requires constant-time processing and two tags to be stored per flow. It is naturally able to take the Peak Cell Rates of flows into account. LAPLUS solves in a novel way the problem of selecting a measurement interval. The solution allows it to contain queue growth and keep utilization high on one hand, and to control low-speed flows and operate with stability on the other. We describe the results of a simulation study of LAPLUS. The results show it to be fair, responsive and stable.
Keywords: switch algorithm; ABR; ATM; LAPLUS

L. Benmohamed and Y. T. Wang, "A Control-Theoretic ABR Explicit Rate Algorithm for ATM Switches with Per-VC Queueing," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 183, March/April 1998.

Abstract: There have been numerous studies on congestion control for the ABR service in ATM networks. These studies typically focus on the performance and fairness of the algorithms and make simplistic assumptions regarding the switch architecture and the link scheduling. One central issue of these studies has been the computation of the fair share of the link bandwidth. On the other hand, the newer generation of ATM chipsets and switches now implements per-VC queueing and scheduling that is capable of providing flow isolation as well as fair sharing of the link bandwidth among contending connections. As a result, ABR congestion control algorithms can now focus on solving the congestion control problem without unnecessarily being burdened by fairness considerations. In this paper, we take advantage of the per-VC queueing/scheduling capability of the new generation of ATM switches and develop an ABR rate-based congestion control algorithm. In contrast to most algorithms that have appeared in the literature, which are heuristics-based, this algorithm extends the work of [4] using a control-theoretic approach and takes advantage of per-VC queue length information to achieve simple-to-implement and yet complete control of the stability, rate of convergence, and performance of the ABR service. Simulation results confirm the excellent performance and fairness characteristics achieved by the algorithm.
Keywords: switch algorithm; ABR; ATM; queueing

H. T. Kung and S. Y. Wang, "Zero Queueing Flow Control and Applications," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 192, March/April 1998.

Abstract: Zero Queueing Flow Control (ZQFC) is a new credit-based flow control method for ATM networks. The receiving node of such a flow-controlled link will have zero queue occupancy in the steady state. ZQFC uses both link and VC flow control simultaneously over the link to implement the rate adaptation required for achieving zero queueing. Because of its zero queueing property, ZQFC is able to solve a head-of-the-line blocking problem that may arise when multiple VCs share the same receiver buffer in implementing their flow control. ZQFC works well with TCP traffic. When there is any queue buildup associated with a TCP connection in a shared buffer, ZQFC will identify the connection and take steps to reduce its arrival rate at the buffer. This allows many TCP connections to share the buffer efficiently and fairly. It makes sense to deploy ZQFC for just a single link where performance improvement is critical. Simulations using real-life TCP code have demonstrated these advantages of ZQFC.
Keywords: flow control; ATM; queueing; ZQFC; TCP traffic

S. Ma and C. Ji, "Modeling Video Traffic in the Wavelet Domain," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 201, March/April 1998.

Abstract: A significant discovery from this work is that although video traffic has complicated short- and long-range dependence in the time domain, the corresponding wavelet coefficients are no longer long-range dependent in the wavelet domain. Therefore, a "short-range" dependent process can be used to model video traffic in the wavelet domain. In this work, we develop such wavelet models for VBR video traffic. The strengths of the developed wavelet models include: (1) they provide a unified approach to modeling both long-range and short-range dependence in video traffic simultaneously; (2) they reduce the temporal dependence so significantly that the wavelet coefficients can be modeled by either independent or Markov models; and (3) they result in a computationally efficient method for generating high-quality video traffic.
Keywords: wavelet; long-range dependence; short-range dependence; traffic modeling; VBR video traffic; video networking; B-ISDN and ATM; admission control
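The modeling idea above, namely that independent coefficients in the wavelet domain can produce long-range-dependent-looking behavior in the time domain, can be sketched with a Haar synthesis whose per-level coefficient variances follow a 2**(j(2H-1)) scaling. This is a generic illustration of wavelet-domain synthesis, not the authors' fitted model:

```python
import math
import random

def haar_synthesize(levels, hurst=0.8, rng=None):
    """Draw independent Haar wavelet coefficients whose standard
    deviation grows geometrically with scale (std ~ 2**(j*(2H-1)/2)
    at scale j), then invert the transform to obtain 2**levels
    time-domain samples."""
    if rng is None:
        rng = random.Random(0)
    # Coarsest scale: a single scaling coefficient.
    signal = [rng.gauss(0.0, 2.0 ** (levels * (2 * hurst - 1) / 2))]
    for j in range(levels):
        # Finer levels get smaller-variance detail coefficients.
        sigma = 2.0 ** ((levels - 1 - j) * (2 * hurst - 1) / 2)
        details = [rng.gauss(0.0, sigma) for _ in signal]
        # One inverse Haar step: each (approximation, detail) pair
        # expands into two samples at the next finer scale.
        signal = [v for a, d in zip(signal, details)
                  for v in ((a + d) / math.sqrt(2), (a - d) / math.sqrt(2))]
    return signal
```

Raising `hurst` toward 1 widens the variance gap between coarse and fine scales and yields visibly more persistent, burstier traces, even though every coefficient was drawn independently.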

B. S. Jang and C. Thomson, "Threshold Autoregressive Models for VBR MPEG Video Traces," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 209, March/April 1998.

Abstract: In this paper, variable bit rate (VBR) Moving Picture Experts Group (MPEG)-coded full-motion video traffic is modeled by a nonlinear time-series process. The threshold autoregressive (TAR) process is of particular interest. The TAR model is comprised of a set of autoregressive (AR) processes that are switched between amplitude sub-regions. To model the dynamics of the switching between the sub-regions, a selection of amplitude-dependent thresholds and a delay value is required. To this end, an efficient and accurate TAR model construction algorithm is developed to model VBR MPEG-coded video traffic. The TAR model is shown to accurately represent statistical characteristics of the actual full-motion video trace. Furthermore, in simulations of the bit-loss rate, the actual and TAR traces show good agreement.
Keywords: video modeling; MPEG; TAR; VBR; nonlinear model
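A two-regime TAR process of the general kind described above can be sketched in a few lines; the regime parameters, threshold, and delay here are invented for illustration and are not taken from the paper's fitted models:

```python
import random

def tar_trace(n, threshold=1.0, delay=1,
              low=(0.5, 0.1, 0.8), high=(-0.3, 0.2, 1.2), rng=None):
    """Generate n samples of a two-regime threshold autoregressive
    (TAR) process: the active AR(1) regime is chosen by comparing the
    sample `delay` steps back against `threshold`. Each regime is a
    tuple (ar_coefficient, noise_std, mean)."""
    if rng is None:
        rng = random.Random(0)
    x = [0.0] * delay  # initial history
    for _ in range(n):
        phi, sigma, mu = low if x[-delay] <= threshold else high
        x.append(mu + phi * (x[-1] - mu) + rng.gauss(0.0, sigma))
    return x[delay:]
```

The switching between amplitude sub-regions is what lets a TAR model reproduce regime changes in VBR video bit rates that a single linear AR process would smooth over.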

A. Lombardo, G. Morabito and G. Schembra, "An Accurate and Treatable Markov Model of MPEG-Video Traffic," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 217, March/April 1998.

Abstract: In this paper an accurate and treatable Markov-based model for MPEG video traffic is proposed. Before reaching a definition of the model, a large number of MPEG video sequences are analyzed and their statistical characteristics are highlighted. Besides the well-known autocorrelation function, testifying to the pseudo-periodicity of the sequence in the short term, and the Gamma-shaped probability density function, the correlation between different frames belonging to the same Group of Pictures (GOP) is also accurately studied. Then a structured model is proposed. Thanks to its two-level structure, it is able to capture both the inter-GOP and the intra-GOP correlation. In order to obtain the first level of the model, the well-known inverse eigenvalue problem is solved in the discrete-time domain. Finally, the model is used to evaluate the loss probability in an ATM multiplexer loaded by an MPEG video source and an aggregate of external traffic. The accuracy of the model is evaluated by comparing the analytically obtained loss probability with the loss probability calculated by simulating the system using real traffic sequences as driving traces.
Keywords: video modeling; MPEG; Markov model; GOP; traffic

M. H. Willebeek-LeMair, Z. Y. Shae and Y. C. Chang, "Robust H.263 Video Coding for Transmission over the Internet," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 225, March/April 1998.

Abstract: The widely popular World Wide Web along with advances in desktop computers has brought the world into a new age of computing and communications. Low bit-rate video applications across the Internet are quickly emerging. The ITU-T H.263 standard was designed for low bit-rate video conferencing across phone lines and is an ideal candidate to be extended for Internet video applications. This paper focuses on the error robustness of compressed H.263 video streams when transmitted over the Internet. The traditional approach of inserting intra-coded frames increases the error resilience at the expense of bursty output traffic, lower picture quality, and uneven frame dropping. By extending the macroblock force update feature of the H.263 standard, we developed a scheme that complies with the standard and increases the robustness of the video stream. This macroblock updating scheme analyzes the temporal dependencies of macroblocks in successive frames and selectively updates the macroblocks which have the most impact on later frames. The performance evaluation of the proposed technique demonstrates that it achieves a good balance between error recovery speed and video quality.

H. Balakrishnan, V. N. Padmanabhan, S. Seshan, M. Stemm and R. H. Katz, "TCP Behavior of a Busy Internet Server: Analysis and Improvements," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 252, March/April 1998.

Abstract: The rapid growth of the World Wide Web in recent years has caused a significant shift in the composition of Internet traffic. Although past work has studied the behavior of TCP dynamics in the context of bulk-transfer applications and some studies have begun to investigate the interactions of TCP and HTTP, few have used extensive real-world traffic traces to examine the problem. This interaction is interesting because of the way in which current Web browsers use TCP connections: multiple concurrent short connections from a single host. In this paper, we analyze the way in which Web browsers use TCP connections based on extensive traffic traces obtained from a busy Web server (the official Web server of the 1996 Atlanta Olympic games). At the time of operation, this Web server was one of the busiest on the Internet, handling tens of millions of requests per day from several hundred thousand clients. We first describe the techniques used to gather these traces and reconstruct the behavior of TCP on the server. We then present a detailed analysis of TCP's loss recovery and congestion control behavior from the recorded transfers. Our two most important results are: (1) short Web transfers lead to poor loss recovery performance for TCP, and (2) concurrent connections are overly aggressive users of the network. We then discuss techniques designed to solve these problems. To improve the data-driven loss recovery performance of short transfers, we present a new enhancement to TCP's loss recovery. To improve the congestion control and loss recovery performance of parallel TCP connections, we present a new integrated approach to congestion control and loss recovery that works across the set of concurrent connections. Simulations and trace analysis show that our enhanced loss recovery scheme could have eliminated 25% of all timeout events, and that our integrated approach provides greater fairness and improved startup performance for concurrent connections. 
Our solutions are more general than application-specific enhancements such as the use of persistent connections in P-HTTP [13] and HTTP/1.1 [7], and address issues, such as improved TCP loss recovery, that they do not consider.
Keywords: Web; WWW; TCP; HTTP; P-HTTP; traffic; loss recovery

D. Lin and H. T. Kung, "TCP Fast Recovery Strategies: Analysis and Improvements," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 263, March/April 1998.

Abstract: This paper suggests that, to match an ideal Internet gateway which rigorously enforces fair sharing among competing TCP connections, an ideal TCP sender should possess two properties while obeying congestion avoidance and control principles. First, the TCP sender which under-uses network resources should avoid retransmission timeouts. When experiencing network congestion, a TCP connection should not time out unless it has already reduced its congestion window to one packet but still cannot survive. Second, the TCP sender which over-uses network resources should lower its bandwidth. The congestion window for a connection should decrease each time a lost packet is detected, because an ideal gateway will drop packets, during congestion, with a probability proportional to the bandwidth of the connection. Following these guidelines, we propose Network-sensitive Reno (Net Reno), a set of optimizations that can be added to a traditional Reno TCP sender. Using TCP's self-clocking property and the packet conservation rule, Net Reno improves Reno and its variants (New-Reno and SACK), in reducing TCP retransmission timeouts (RTOs) and in being conservative in network usage during the fast recovery phase. Through a trace analysis, we have shown that over 85% of RTOs are due to small congestion windows that prevent fast retransmission and recovery algorithms from being effective. This implies that sophisticated recovery schemes such as SACK will have limited benefits for these loads. Net Reno overcomes this problem with a small window optimization. While being less aggressive than previous approaches, Net Reno can recover any number of packet losses without timeouts as long as the network keeps at least one packet alive for the connection. This scheme thus brings TCP one step further toward the ideal model. Net Reno requires no modifications to the TCP receiver. 
Simulations and laboratory experiments have shown that these optimizations significantly reduce RTOs and improve TCP's goodput.
Keywords: TCP; recovery; RTO; Reno; SACK; packet; self-clocking

Y. Zheng and H. Imai, "Compact and Unforgeable Key Establishment over an ATM Network," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 411, March/April 1998.

Abstract: Authenticated session key establishment is a central issue in network security. This paper addresses the question of whether we can design a compact, efficient and authenticated key establishment protocol that has the following two properties: (1) each message exchanged between two participants can be transferred in a short packet such as an ATM cell, whose payload has only 384 bits, and (2) messages that carry key materials are unforgeable and non-repudiable without the involvement of a trusted key distribution center. We discuss why the answer to this question is negative if one follows the currently standard approach to key establishment, namely employing secret/public key encryption and, possibly, digital signature. We then present a number of protocols that represent a positive answer to the question. Our protocols are all based on a recently introduced cryptographic primitive called "signcryption" that fulfills both the functions of digital signature and public key encryption at a cost far smaller than that required by "digital signature followed by encryption".
Keywords: ATM Networks; cryptography; key establishment; multicast; network security; signcryption

T. Y. C. Woo and S. S. Lam, "Designing a Distributed Authorization Service," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 419, March/April 1998.

Abstract: We present the design of a distributed authorization service which parallels existing authentication services for distributed systems. Such a service would operate on top of an authentication substrate. There are two distinct ideas underlying our design: (1) The use of a language, called generalized access control list (GACL), as a common representation of authorization requirements. (2) The use of authenticated delegation to effect authorization offloading from an end server to an authorization server. We present the syntax and semantics of GACL, and illustrate how it can be used to specify authorization requirements that cannot be easily specified by ordinary ACL. We also describe the protocols in our design.

G. Pavlou, A. Liotta, P. Abbi and S. Ceri, "CMIS/P++: Extensions to CMIS/P for Increased Expressiveness and Efficiency in the Manipulation of Management Information," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 430, March/April 1998.

Abstract: CMIS/P is the OSI System Management service and protocol, used as the base technology for the Telecommunication Management Network. It is a generic object-oriented protocol that provides multiple object access capabilities to managed object clusters administered by agent applications. Its navigation and object selection capabilities rely on traversing containment relationships. This is restrictive, as information models for emerging broadband technologies (SDH/SONET, ATM) exhibit various other relationships. In this paper we present extensions to the CMIS service that provide a richer access language and show how these extensions can be supported by corresponding extensions to the CMIP protocol. These extensions allow any object relationship to be traversed and objects to be filtered out at any stage of the selection process. CMIS++ provides much greater expressive power than CMIS, while CMIP++ supports the remote evaluation of the corresponding expressions, minimizing the management traffic required for complex management information retrieval. These extensions follow an incremental approach, starting from a version compatible with the current standard and gradually adding more sophisticated features. The applicability and importance of the proposed concepts are demonstrated through an example from SDH management, and we also discuss implementation considerations.
Keywords: Management; Information Retrieval; CMIS/P; OSI-SM; TMN; SDH

Y. A. Korilis, T. A. Varvarigou and S. R. Ahuja, "Incentive-Compatible Pricing Strategies in Noncooperative Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 439, March/April 1998.

Abstract: The complexity of modern networks calls for decentralized control schemes where each user makes its control decisions independently, based on some individual performance objectives. The operating points of such noncooperative networks are the Nash equilibria of the underlying control game. Nash equilibria are generically inefficient and lead to suboptimal network performance. Using routing as a control paradigm, a methodology is devised for overcoming this inefficiency based on pricing mechanisms. Assuming that the price for usage of each link is proportional to the congestion level at the link, it is shown that the provider can enforce any operating point it deems efficient by offering the capacity of the various links at discount prices. The incentive-compatible discount vector is shown to be unique up to a multiplicative constant, and its structure is specified explicitly. An adaptive algorithm for distributed computation of the incentive-compatible discount vector is introduced. The applicability of the results in various practical scenarios is investigated by means of a prototype that implements the routing game as a Web-based game.

M. Xu and J. Wu, "An Extension to Concurrent TTCN," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 447, March/April 1998.

Abstract: This paper proposes an extension to concurrent TTCN to meet the needs of protocol performance testing. The operational semantics of the extended concurrent TTCN are defined in terms of an input-output labeled transition system. Based on the extended concurrent TTCN, the architecture of a protocol performance test system is designed, and an example of test cases for throughput is given.
Keywords: Protocol performance testing; extended concurrent TTCN; FDT; IOLTS

A. Basu, G. Morrisett and T. von Eicken, "Promela++: A Language for Constructing Correct and Efficient Protocols," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 455, March/April 1998.

Keywords: high level protocol; specification language

J. C. Brustoloni and P. Steenkiste, "User-Level Protocol Servers with Kernel-Level Performance," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 463, March/April 1998.

Abstract: Compared to kernel-level servers, user-level ones can be debugged and maintained more easily and safely, but traditionally have had much worse performance. We describe a novel I/O-oriented IPC facility that combines the emulated copy data passing scheme of monolithic systems with new copy avoidance techniques for microkernel systems. Unlike previous optimizations, I/O-oriented IPC does not require changes in existing user applications or complex restructuring of servers; it offers an API with copy semantics and allows the same servers to be installed at kernel or user level. In end-to-end experiments on an ATM network at 512 Mbps, I/O-oriented IPC gave user-level protocol servers performance approaching that of kernel-level ones. Performance differences scaled roughly inversely to the processor's SPECint95 rating, projecting rapid further improvement.

J. P. Richter and H. de Meer, "Towards Formal Semantics for QoS Support," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 472, March/April 1998.

Abstract: The introduction of the concept of QoS has led to an extension of the traditional concepts of service and service specification. However, the design of QoS support is usually done without a systematic approach, leading to concepts of QoS support ranging from basic QoS monitoring capabilities to hard real-time guarantees. In more advanced QoS support, intermediate layers should be designed in a way that enables the masking or controlled handling of sporadic QoS violations. To implement this degradation path support across multiple layers, a negotiation of preferred and supportable failure semantics is a requirement. To realize these advanced QoS support features, not only do new QoS control mechanisms within the layers have to be developed, but the semantics of QoS negotiation protocols between layers must be better understood and subsequently extended. A framework formally based on set theory and relations is presented that allows the specification of QoS hierarchies, including a well-defined failure type model. The framework supports the development of QoS negotiation protocols and can be used as a formal base for a structured system analysis.

J. Bolot and S. Fosse-Parisis, "Adding Voice to a Distributed Game on the Internet," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 480, March/April 1998.

Abstract: Much of the recent work on distributed virtual environments (DVEs) has focused on efficient schemes and algorithms for the description and visual rendering of such environments. However, there has been comparatively little effort on senses other than sight, and in particular on live voice. This is surprising because adding voice to DVEs enhances the sense of immersion in the environment, and can in fact completely change the way entities interact. For example, adding voice in a distributed game impacts tactics, team building, cheating, etc. In this paper, we examine issues related to adding voice between participants in DVEs. We consider in particular a special kind of DVE, namely distributed games over the Internet. We consider all stages of voice manipulation, including voice generation (with emphasis on echo cancellation), voice transmission (with emphasis on RTP and packetization), and voice restitution (with emphasis on spatial rendition and on synchronization between voice and visual cues). We also consider implementation issues, and illustrate these with the MiMaze game and the FreePhone audio tool, both developed at INRIA.

Michael S. Borella and Gregory B. Brewster, "Measurement and Analysis of Long-Range Dependent Behavior of Internet Packet Delay," online in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 497, March/April 1998.

Abstract: We analyze 12 traces of round-trip Internet packet delay. We find that these traces, when viewed as time series data, often exhibit Hurst parameter (H) estimates greater than 0.5, indicating long-range dependence. Several traces, however, are not well-modeled with a constant H. We discuss in detail our analytical methods and the robustness of empirical estimators of H under conditions of non-negligible packet loss. Our results indicate that Internet delay is bursty across multiple time scales, which implies that end-user quality of service in the Internet is likely to be impacted by long periods of very large and/or highly variable delays.
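Hurst-parameter estimates of the kind reported above are commonly obtained with the aggregated-variance method; the sketch below shows one such standard estimator (not necessarily the exact estimators used in the paper):

```python
import math
import random

def hurst_aggvar(series, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst parameter H: average
    the series over non-overlapping blocks of size m and compute the
    sample variance of the block means; for a long-range dependent
    process this variance decays like m**(2H - 2), so H is read off
    the slope of a log-log least-squares fit."""
    xs, ys = [], []
    for m in block_sizes:
        blocks = [sum(series[i:i + m]) / m
                  for i in range(0, len(series) - m + 1, m)]
        mean = sum(blocks) / len(blocks)
        var = sum((b - mean) ** 2 for b in blocks) / (len(blocks) - 1)
        xs.append(math.log(m))
        ys.append(math.log(var))
    n = len(xs)
    slope = ((n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys))
             / (n * sum(x * x for x in xs) - sum(xs) ** 2))
    return 1 + slope / 2

# Sanity check: white noise has no long-range dependence, so the
# estimate should land near H = 0.5.
rng = random.Random(42)
noise = [rng.gauss(0.0, 1.0) for _ in range(20000)]
```

Estimates persistently above 0.5 across traces are the signature of long-range dependence; the paper's caution applies here too, since packet loss and non-stationarity can bias such estimators.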

M. Podolsky, C. Romer and S. McCanne, "Simulation of FEC-Based Error Control for Packet Audio on the Internet," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 505, March/April 1998.

Abstract: Real-time audio over a best-effort network, such as the Internet, frequently suffers from packet loss. To mitigate the impact of such packet loss, several research efforts and implementation studies advocate the use of forward error correction (FEC) coding. Although these prior works have pioneered promising and novel applications of FEC to Internet audio, they do not definitively demonstrate the advantages of FEC because they do not evaluate the aggregate performance that results from multiplexing many like flows. For example, previous work assesses the efficacy of FEC by generalizing the performance of a single audio connection; other work restricts its design space to one particular coding scheme and does not evaluate the potentially negative impact of increased network load that results from FEC; and other work limits its scope to an all-audio source network employing a simplistic FEC encoding. In this paper, we build on these landmark works with a systematic study of FEC for packet audio that characterizes the aggregate performance across all audio sources in the network. We refine the novel but ad hoc coding techniques proposed elsewhere into a formal framework that we call "signal processing-based FEC" (SFEC) and use our framework to more rigorously evaluate the relative merits of this approach. Through extensive simulation, we evaluate the "scalability" of SFEC for packet audio, i.e., the ability of a coding algorithm to improve aggregate performance when used by all sources in the network, and find that optimal signal quality is achieved when sources react to network congestion not by blindly adding FEC, but rather by adding FEC in a controlled fashion that simultaneously constrains the source-coding rate. As a result, packet loss is mitigated without introducing more congestion, thus admitting a more scalable and effective approach than successively adding redundancy to a constant bit-rate source. 
While this result may seem intuitive, it has not been previously suggested in the context of Internet audio, and until now, has not been systematically studied.
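The media-aware piggyback schemes that SFEC formalizes attach progressively coarser copies of earlier frames to later packets, so a frame is lost outright only when several consecutive packets are dropped. A minimal sketch of the residual loss rate, assuming independent Bernoulli packet loss (a simplification the paper's aggregate simulations go beyond):

```python
def residual_frame_loss(p, k):
    """Residual frame-loss rate under piggyback FEC, where each packet also
    carries coarser copies of the k preceding frames: a frame is unrecoverable
    only if all k+1 packets carrying some version of it are lost.
    Assumes independent packet losses with probability p (our simplification)."""
    return p ** (k + 1)
```

With 10% loss and two redundant copies the residual frame loss drops to 0.1%, but the redundancy itself raises the offered load; controlling that feedback is precisely the point of the paper.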

Young Yong Kim and San-qi Li, "Performance Analysis of Feedback Controlled Data Packet Transmission over High-Speed Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 516, March/April 1998.

Abstract: The research on guaranteeing Quality of Service has to a large extent focused on satisfying cell-level performance measures such as cell loss rate, cell delay variation, etc. In the authors' previous work, we used stochastic modeling techniques to evaluate the packet-level performance of two existing packet discarding schemes, packet tail discarding and early packet discarding, based on three packet-level performance measures: packet success probability, goodput, and badput. That work focused on the performance evaluation of data packet transmission over the ATM UBR service. With the introduction of the ABR service, source regulation based on closed-loop control becomes increasingly important. In this work we analyze the performance of early packet discarding when the source transmission rates are governed by feedback information from a congested ATM bottleneck node. Numerical study shows that the steady-state performance is largely affected by the slow time scales of the feedback system. Nonetheless, the design of the control threshold in early packet discarding is not significantly sensitive to the system time scales.

N. G. Duffield and S. H. Low, "The Cost of Quality in Networks of Aggregate Traffic," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 525, March/April 1998.

Abstract: We relate burstiness (or indifference) curves for stochastic traffic flows to the quality they experience under the allocation of the resources of bandwidth and buffer space. We use the relation to explore the quality experienced by merged flows under various rules for allocating resources to them, including one motivated by the Controlled Load service specification. We show how cost structures on the resources can be used to encourage optimal use of shared resources amongst heterogeneous flows.
Keywords: Pricing; economies of scale; indifference curve; admission control; performance analysis; large deviations; stochastic networks

L. Tassiulas, "Linear Complexity Algorithms for Maximum Throughput in Radio Networks and Input Queued Switches," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 533, March/April 1998.

Abstract: A resource allocation model that has within its scope a number of computer and communication network architectures was introduced earlier, and scheduling methods that achieve maximum throughput were proposed. Those methods require the solution of a complex optimization problem at each packet transmission time and as a result they are not amenable to direct implementation. In this paper we propose a class of maximum throughput scheduling policies for the model we introduced earlier that have linear complexity and can lead to practical implementations. They rely on a randomized, iterative algorithm for the solution of the optimization problem arising in the scheduling, in combination with an incremental updating rule. The proposed policy achieves maximum throughput under some fairly general conditions on the randomized algorithm.

R. Y. C. Brockett and I. Rubin, "Transport Layer Adaptable Rate Control (TARC): Analysis of a Transport Layer Flow Control Mechanism for High Speed Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 540, March/April 1998.

Abstract: The rapid growth of today's emerging high-speed networks has made currently implemented transport layer flow control techniques inadequate for the performance levels required now and in the future. Throughput at the upper layers is throttled due to architectural designs tailored to ``last-generation'' technologies. We define and analyze a transport layer flow control algorithm featuring hybrid open-loop/feedback-based rate control with burst detection for a connection-oriented transport layer protocol. TARC can be used alone or in conjunction with a window flow control mechanism. It can provide support for QoS guarantees, and can take advantage of lower layer congestion control or bandwidth underutilization information. We analyze our model under multiple transport connections with bursty traffic and present an iterative method for computing system state distributions and performance measures. Using this iterative analysis, we explore the behavior of TARC and compare its performance to other transport layer flow control algorithms.

F. M. Chiussi and A. Francini, "Implementing Fair Queueing in ATM Switches: The Discrete-Rate Approach," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 272, March/April 1998.

Abstract: The total implementation cost of schedulers which approximate the Generalized Processor Sharing (GPS) policy is dominated by the complexity of maintaining and sorting the time stamps for all connections. Several approaches have been proposed which reduce the cost of the sorting operation and only introduce a small degradation in the delay bounds of the schedule. They include logarithmic calendar queues, which reduce complexity by increasing the granularity of the sorting in an optimal way, and schedulers supporting a discrete set of guaranteed rates, which use a FIFO queue per rate and only require sorting a single time stamp per rate. All these techniques still require computing and storing one time stamp per connection, thus maintaining the cost of GPS-related algorithms clearly higher than that of less sophisticated schedulers. Furthermore, in the case of the discrete-rate approach, the complexity increases linearly with the number of supported rates, thus making it attractive only for relatively small numbers of rates. In this paper, we introduce a discrete-rate GPS-related scheduler which does not require the computation and storage of one time stamp per connection, and only maintains a single time stamp per rate. The elimination of the per-connection timestamps has no negative effect on the delay bounds. Then, we present a generalized discrete-rate approach, which uses a given number of FIFO queues to support a larger number of guaranteed rates, and only introduces a modest degradation in delay bounds for certain rates. The technique can be applied to our no-per-connection-time stamp scheduler, as well as to any discrete-rate scheduler.
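The discrete-rate idea (one FIFO per supported rate, a single finishing time stamp per rate rather than per connection) can be sketched as follows; the class name, interface, and tie-breaking rule are our assumptions, not the paper's design:

```python
from collections import deque

class DiscreteRateScheduler:
    """Sketch of a discrete-rate fair-queueing scheduler: one FIFO per
    supported guaranteed rate, and a single finishing time stamp per rate
    instead of one per connection. Illustrative, not the paper's algorithm."""

    def __init__(self, rates):
        self.queues = {r: deque() for r in rates}   # per-rate FIFO of (conn_id, length)
        self.finish = {r: 0.0 for r in rates}       # one time stamp per rate
        self.vtime = 0.0                            # system virtual time

    def enqueue(self, rate, conn_id, length):
        q = self.queues[rate]
        if not q:   # rate becomes backlogged: restart its stamp at virtual time
            self.finish[rate] = max(self.finish[rate], self.vtime) + length / rate
        q.append((conn_id, length))

    def dequeue(self):
        """Serve the head of the backlogged rate queue with the smallest stamp."""
        backlogged = [r for r, q in self.queues.items() if q]
        if not backlogged:
            return None
        r = min(backlogged, key=lambda rate: self.finish[rate])
        conn_id, length = self.queues[r].popleft()
        self.vtime = self.finish[r]
        if self.queues[r]:   # next packet of this rate inherits the rate's stamp
            self.finish[r] += self.queues[r][0][1] / r
        return conn_id, length
```

With two backlogged rates of 1 and 2 units and equal-length packets, the faster rate is served roughly twice as often, as GPS would dictate.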

D. C. Stephens and H. Zhang, "Implementing Distributed Packet Fair Queueing in a Scalable Switch Architecture," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 282, March/April 1998.

Abstract: To support the Internet's explosive growth and expansion into a true integrated services network, there is a need for cost-effective switching technologies that can simultaneously provide high capacity switching and advanced QoS. Unfortunately, these two goals are largely believed to be contradictory in nature. To support QoS, sophisticated packet scheduling algorithms, such as Fair Queueing, are needed to manage queueing points. However, the bulk of current research in packet scheduling algorithms assumes an output buffered switch architecture, whereas most high performance switches (both commercial and research) are input buffered. While output buffered systems may have the desired quality of service, they lack the necessary scalability. Input buffered systems, while scalable, lack the necessary quality of service features. In this paper, we propose the construction of switching systems that are both input and output buffered, with the scalability of input buffered switches and the robust quality of service of output buffered switches. We call the resulting architecture Distributed Packet Fair Queueing (D-PFQ) as it enables physically dispersed line cards to provide service that closely approximates an output-buffered switch with Fair Queueing. By equalizing the growth of the virtual time functions across the switch system, most of the PFQ algorithms in the literature can be properly defined for distributed operation. We present our system using a cross-bar for the switch core, as cross-bars are widely used in commercial products and enable the clearest presentation of our architecture. Buffering techniques are used to enhance the system's latency tolerance, which enables the use of pipelining and variable packet sizes internally. Our system is truly distributed in that there is neither a central arbiter nor any global synchronization. Simulation results are presented to evaluate the delay and bandwidth sharing properties of the proposed D-PFQ system.

F. Toutain, "Decoupled Generalized Processor Sharing: A Fair Queueing Principle for Adaptive Multimedia Applications," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 291, March/April 1998.

Abstract: Traditionally targeted to best-effort packet-switching networks, adaptive applications implement a variety of mechanisms to make use of a variable-quality bearer service. In the framework of integrated services networks, relaxing quality of service (QoS) requirements through the use of such applications allows greater operational flexibility and increases statistical multiplexing gains. However, these applications must be given some QoS guarantees, relative to minimum service, bounded transmission delay, and fair sharing of the available bandwidth. This paper focuses on a fair queueing, fluid-flow model to be embedded in the network switching nodes. This model is based on the GPS paradigm, but avoids its inherent limitations. It is approximated by means of a dynamic priority algorithm. It is shown that an implementation having reduced complexity can be achieved and exhibits good service characteristics, relative to packet service delay and conformance to the fluid-flow model fairness. Simulation results give evidence that the resulting algorithm accurately meets the needs of adaptive applications.

B. Suter, T. V. Lakshman, D. Stiliadis and A. Choudhury, "Design Considerations for Supporting TCP with Per-Flow Queueing," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 299, March/April 1998.

Abstract: In this paper we investigate the extent to which fair queueing (and its variants), in conjunction with appropriately tailored buffer management schemes, can be used to achieve the following goals for TCP traffic: 1) alleviate the inherent unfairness of TCP towards connections with long round-trip times, 2) provide isolation when connections using different TCP versions share a bottleneck link, 3) provide protection from TCP-unfriendly traffic sources (which might include TCP ACKs since they are not loss-responsive) and misbehaving users, 4) alleviate the effects of ACK compression in the presence of two-way traffic, 5) prevent users experiencing ACK loss (which causes their traffic to be bursty) from significantly affecting other connections, 6) provide low latency to interactive connections which share a bottleneck with ''greedy'' connections without reducing overall link utilization. The paper proposes new buffer management schemes to be used in conjunction with fair queueing, so as to achieve the above goals for TCP, and compares the performance of the proposed schemes to the performance obtained using RED for packet dropping.

N. R. Figueira and J. Pasquale, "Remote-Queueing Multiple Access (RQMA): Providing Quality of Service for Wireless Communication," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 307, March/April 1998.

Abstract: Providing quality-of-service (QoS) guarantees for wireless communications requires special consideration for the relatively large and time-varying bit error rates of wireless links, due to impairments that are difficult to predict. We describe a new wireless link access scheme called Remote-Queueing Multiple Access (RQMA), which allows seamless integration of scheduling disciplines proposed for wired networks that achieve end-to-end QoS guarantees. RQMA supports the flexible and efficient allocation of bandwidth for real-time sessions. Mobile stations send information about their backlogged packets to a base station which executes a scheduling discipline that provides delay bounds. Some minimum portion of the link bandwidth is reserved for the retransmission of packets that are not correctly received but whose deadlines are still not violated. For a fast fading channel with an average error rate of 10^-2, RQMA is capable of delivering real-time traffic with a small average packet loss rate on the order of 10^-5. Similar performance can be achieved for a slow fading channel if the average error rate is no more than 10^-3.

M. Listanti, F. Mascitelli and A. Mobilia, "D2MA: A Distributed Access Protocol for Wireless ATM Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 315, March/April 1998.

Abstract: The main purpose of this paper is to assess the feasibility of the distributed control approach in the definition of a wireless ATM MAC protocol. A novel protocol, named D2MA, is presented. D2MA is based on a double distribution of the access procedure and queueing capabilities. All the reservation and scheduling functions are performed by the mobile stations independently, whereas only passive operations are required of the base station. An accurate analytical model of D2MA has been developed. This model examines a single radio cell with a population of m mobile stations acting as traffic sources. Each station is equipped with a finite buffer of B cells. A comparison with simulation results shows the accuracy of the analytical approach.

C. Zhu and M. S. Corson, "A Five-Phase Reservation Protocol (FPRP) for Mobile Ad Hoc Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 322, March/April 1998.

Abstract: A new single channel, time division multiple access (TDMA)-based broadcast scheduling protocol, termed the Five-Phase Reservation Protocol (FPRP), is presented for mobile ad hoc networks. The protocol jointly and simultaneously performs the tasks of channel access and node broadcast scheduling. The protocol allows nodes to make reservations within TDMA broadcast schedules. It employs a contention-based mechanism with which nodes compete with each other to acquire TDMA slots. The FPRP is free of the ''hidden terminal'' problem, and is designed such that reservations can be made quickly and efficiently with negligible probability of conflict. It is fully-distributed and parallel (a reservation is made through a localized conversation between nodes in a 2-hop neighborhood), and is thus arbitrarily scalable. A ''multihop ALOHA'' policy is developed to support the FPRP. This policy uses a multihop, pseudo-Bayesian algorithm to calculate contention probabilities and enable faster convergence of the reservation procedure. The performance of the protocol is studied via simulation, and the node coloring process is seen to be as effective as an existing centralized approach. Some future work and applications are also discussed.

Y. Birk and Y. Keren, "Judicious Use of Redundant Transmissions in Multi-Channel ALOHA Networks with Deadlines," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 332, March/April 1998.

Abstract: This paper shows how to improve the classic multi-channel slotted ALOHA protocols by judiciously using redundant transmissions. The focus is on user-oriented requirements: deadlines and a permissible probability of failing to meet them. Subject to those, maximization of throughput is the optimization goal. When there is no success/failure feedback prior to the deadline, the use of information dispersal with some redundancy provided by error-correcting codes for the data, in conjunction with a replicated, separately-transmitted synchronization preamble, sharply reduces the overhead resulting from the use of shorter packets and significantly increases capacity. When the permissible delay is several times greater than the roundtrip propagation delay, we propose a novel retransmission policy: all attempts except the final one entail the transmission of a single or very few copies, and the remaining copies are transmitted in the final attempt. This sharply increases channel capacity.
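The redundancy calculus behind both mechanisms can be sketched with two small helpers; the function names and the assumed independence of copy and fragment outcomes are ours, not the paper's model:

```python
import math

def copies_needed(p_success, max_fail_prob):
    """Smallest number c of redundant copies such that the probability all c
    copies fail, (1 - p)^c, stays within the permissible deadline-failure
    probability. Assumes independent copy outcomes (our simplification)."""
    assert 0 < p_success < 1 and 0 < max_fail_prob < 1
    return math.ceil(math.log(max_fail_prob) / math.log(1.0 - p_success))

def ida_success(n, k, p):
    """Decoding probability for an (n, k) information-dispersal code with
    error-correcting redundancy: at least k of the n smaller fragments must
    arrive, each independently succeeding with probability p."""
    return sum(math.comb(n, i) * p ** i * (1.0 - p) ** (n - i)
               for i in range(k, n + 1))
```

For example, with a 50% per-copy success probability, seven copies suffice for a 1% deadline-failure budget, while a (3, 1) dispersal code already succeeds 87.5% of the time.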
Keywords: ALOHA, multi-channel; redundancy; deadline scheduling; information dispersity; dispersity routing

N. Likhanov and R. R. Mazumdar, "Cell Loss Asymptotics in Buffers Fed with a Large Number of Independent Stationary Sources," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 339, March/April 1998.

Abstract: In this paper we derive asymptotically exact expressions for buffer overflow probabilities and cell loss probabilities for a finite buffer which is fed by a large number of independent and stationary sources. The technique is based on scaling, measure change and local limit theorems and extends the recent results of Courcoubetis and Weber on buffer overflow asymptotics. We discuss the case when the buffers are of the same order as the transmission bandwidth as well as the case of a bufferless multiplexer. Moreover, we show that the results hold for a wide variety of traffic sources, including ON/OFF sources with heavy-tailed ON periods which are typical candidates for so-called ''self-similar'' inputs, showing that the asymptotic cell loss probability behaves in much the same manner for such sources as for Markovian-type sources, which has important implications for statistical multiplexing. The paper concludes with a comparison of the theoretical results with simulations.
Keywords: ATM; Cell loss; Bahadur-Rao theorem; finite buffers; heavy-tailed on-off sources

S. Rajagopal, M. Reisslein and K. W. Ross, "Packet Multiplexers with Adversarial Regulated Traffic," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 347, March/April 1998.

Abstract: We consider a finite-buffer packet multiplexer to which traffic arrives from several independent sources. The traffic from each of the sources is regulated, i.e., the amount of traffic that can enter the multiplexer is constrained by known regulator constraints. The regulator constraints depend on the source and are more general than those resulting from cascaded leaky buckets. We assume that the traffic is adversarial to the extent permitted by the regulators. For lossless multiplexing, we show that if the original multiplexer is lossless it is possible to allocate bandwidth and buffer to the sources so that the resulting segregated systems are lossless. For lossy multiplexing, we use our results for lossless multiplexing to estimate the loss probability of the multiplexer. Our estimate involves transforming the original system into two independent resource systems, and using adversarial sources for the two independent resources to obtain a bound on the loss probabilities for the transformed system. We show that the adversarial sources are not extremal on-off sources, even when the regulator consists of a peak rate controller in series with a leaky bucket. We explicitly characterize the form of the adversarial source for the transformed problem. We also provide numerical results for the case of the simple regulator.
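The simple regulator mentioned above, a peak-rate controller in series with a leaky bucket, constrains cumulative arrivals to min(Pt, sigma + rho*t). A sketch of that envelope and of a "greedy" source that saturates it (the discretization and names are ours; the paper's point is precisely that truly adversarial sources are subtler than such greedy or on-off patterns):

```python
def envelope(t, peak, sigma, rho):
    """Maximum traffic a source shaped by a peak-rate controller (rate `peak`)
    in series with a (sigma, rho) leaky bucket may emit in [0, t]."""
    return min(peak * t, sigma + rho * t)

def greedy_arrivals(horizon, dt, peak, sigma, rho):
    """Greedy arrival pattern: in each step of length dt, emit as much traffic
    as the regulator permits. A natural first guess at adversarial behavior."""
    sent, out, t = 0.0, [], 0.0
    while t < horizon:
        t += dt
        allowed = envelope(t, peak, sigma, rho) - sent
        emit = max(0.0, min(allowed, peak * dt))   # also respect the peak rate
        sent += emit
        out.append(emit)
    return out
```

With peak 10, sigma 4 and rho 1, the greedy source bursts 5 units in the first step and then settles to the sustained rate of 1, exhausting the envelope exactly.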
Keywords: Call admission control; leaky bucket; quality-of-service guarantees; resource allocation; statistical multiplexing; worst-case sources

M. Roughan, D. Veitch and M. Rumsewicz, "Computing Queue-Length Distributions for Power-Law Queues," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 356, March/April 1998.

Abstract: The interest sparked by observations of long-range dependent traffic in real networks has led to a revival of interest in non-standard queueing systems. One such queueing system is the M/G/1 queue where the service-time distribution has infinite variance. The known results for such systems are asymptotic in nature, typically providing the asymptotic form for the tail of the workload distribution, simulation being required to learn about the rest of the distribution. Simulation however performs very poorly for such systems due to the large impact of rare events. In this paper we provide a method for numerically evaluating the entire distribution of the number of customers in the M/G/1 queue with a power-law tail service-time distribution. The method is computationally efficient and shown to be accurate through careful simulations. It can be directly extended to other queueing systems and more generally to many problems where the inversion of probability generating functions complicated by power laws is at issue. Through the use of examples we study the limitations of simulation and show that information on the tail of the queue-length distribution is not always sufficient to answer significant performance questions. We also derive the asymptotic form of the number of customers in the system in the case of a service-time distribution with a regularly varying tail (e.g., infinite variance) and thus illustrate the techniques required to apply the method in other contexts.
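A toy stand-in for the problem the paper tackles: with a (truncated) power-law service time, the entire queue-length distribution of an M/G/1 queue can be generated by the standard embedded-Markov-chain recursion at departure epochs. The truncation keeps all moments finite, unlike the true infinite-variance case the paper handles, and the parameters below are illustrative:

```python
import math

def mg1_queue_dist(lam=0.2, alpha=1.5, smax=40, nmax=30):
    """Queue-length probabilities pi[0..nmax] at departure epochs for an M/G/1
    queue whose service time S has a truncated power law P(S = j) ~ j^-(alpha+1),
    j = 1..smax. Toy sketch of the embedded-chain recursion, not the paper's
    transform-inversion method."""
    w = [j ** -(alpha + 1.0) for j in range(1, smax + 1)]
    z = sum(w)
    ps = [wi / z for wi in w]                          # P(S = j), j = 1..smax
    mean_s = sum((j + 1) * p for j, p in enumerate(ps))
    rho = lam * mean_s
    assert rho < 1.0, "queue must be stable"
    def pois(k, mu):                                   # Poisson pmf, log-space
        return math.exp(k * math.log(mu) - math.lgamma(k + 1) - mu)
    # a[k] = P(k Poisson(lam) arrivals during one service time)
    a = [sum(p * pois(k, lam * (j + 1)) for j, p in enumerate(ps))
         for k in range(nmax + 2)]
    # balance equations: pi[j] = pi[0]*a[j] + sum_{k=1}^{j+1} pi[k]*a[j-k+1]
    pi = [1.0 - rho]                                   # P(system empty) = 1 - rho
    for j in range(nmax):
        s = pi[0] * a[j] + sum(pi[k] * a[j - k + 1] for k in range(1, j + 1))
        pi.append((pi[j] - s) / a[0])
    return pi, rho
```

Even this crude version shows the paper's motivation: the probabilities decay far more slowly than in a Markovian queue, so simulation would need enormous run lengths to see the tail.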

J. Choe and N. B. Shroff, "New Bounds and Approximations Using Extreme Value Theory for the Queue Length Distribution in High-Speed Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 364, March/April 1998.

Abstract: In this paper we study P({Q > x}), the tail of the steady state queue length distribution at a high-speed multiplexer. The tail probability distribution P({Q > x}) is a fundamental measure of network congestion and thus important for the efficient design and control of networks. In particular, we focus on the case when the aggregate traffic to the multiplexer can be characterized by a stationary Gaussian process. In our approach, a multiplexer is modeled by a fluid queue serving a large number of input processes. We propose two asymptotic upper bounds for P({Q > x}), and provide several numerical examples to illustrate the tightness of these bounds. We also use these bounds to study important properties of the tail probability. Further, we apply these bounds for a large number of non-Gaussian input sources, and validate their performance via simulations. We have conducted our simulation study using Importance Sampling in order to improve its reliability and to effectively capture rare events. Our analytical study is based on Extreme Value Theory, and therefore different from the approaches using traditional Markovian and Large Deviations techniques.
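The flavor of such Gaussian tail estimates can be shown with the classic maximum-variance-style approximation, which the paper's Extreme Value Theory bounds refine; the variance function and the search grid below are illustrative assumptions:

```python
import math

def gaussian_queue_tail(x, c, m, var, t_grid):
    """Maximum-variance-style estimate of P(Q > x) for a fluid queue drained at
    rate c and fed by stationary Gaussian input with mean rate m (< c) and
    variance function var(t): exp(-min_t (x + (c - m)*t)^2 / (2*var(t))).
    Illustrative only; the paper derives sharper upper bounds."""
    expo = min((x + (c - m) * t) ** 2 / (2.0 * var(t)) for t in t_grid)
    return math.exp(-expo)
```

With a fractional-Brownian-like variance var(t) = t^(2H), the estimate correctly decays in the buffer level x, but only sub-exponentially when H > 1/2.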

Indra Widjaja and Anwar I. Elwalid, "Performance Issues in VC-Merge Capable Switches for IP over ATM Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 372, March/April 1998.

Abstract: VC merging allows many routes to be mapped to the same VC label, providing a scalable mapping method that can support tens of thousands of edge routers. VC merging requires reassembly buffers so that cells belonging to different packets intended for the same destination do not interleave with each other. The impact of VC merging on the additional buffer required for the reassembly buffers and other buffers due to the perturbation in the traffic process is investigated in this paper. We propose a realistic output-buffered ATM switch architecture that supports VC merging capability. We analyze the performance of the switch using a decomposition approach, and verify the results using simulation. We investigate the impact of VC merging on loss and delay performance for realistic traffic scenarios. The main result indicates that VC merging incurs a minimal overhead compared to non-VC merging in terms of additional buffering. Moreover, the overhead decreases as utilization increases, or as the traffic becomes more bursty. This finding has important implications, since practical ATM switches are dimensioned for high utilization and stressful traffic conditions. We also study the delay performance and find that the additional delay due to VC merging is insignificant for most applications.

Hao Che, San-qi Li and Arthur Lin, "Adaptive Resource Management for Flow-Based IP/ATM Hybrid Switching Systems," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 381, March/April 1998.

Abstract: This paper proposes a novel approach for adaptive flow classification for flow-based hybrid switching systems to match the time varying traffic/resource characteristics. It minimizes the maximum of the system resource utilizations associated with packet processing power, signaling capacity and routing table size. After formulating the proposed flow adaptation as a min-max stochastic control problem, a heuristic algorithm is developed. The simulation study based on real traces shows the viability of the proposed flow adaptation for dynamic resource management in hybrid switching system design. The algorithm is simple to implement and only requires the adaptation of two global variables at time intervals of every few seconds based on the present usage of resources.

K. Lee and A. Fisher, "Flow-Aware Gateway Support for IP-over-ATM," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 390, March/April 1998.

Abstract: Multi-service ATM networks will be deployed in internetwork environments to relay IP traffic, including elastic packet flows generated by adaptive network applications. In light of this advance, we devise a flow-aware traffic management framework for the better-than-best-effort support of long-duration adaptive IP flows over ATM. It requires the applications and the ATM gateways to periodically exchange hints indicating up-to-date flow information. This two-way explicit indication mechanism facilitates improved interactions between applications and the network, and increased coordination between end-to-end and link-level traffic control. Network hints help an application to adapt and avoid congestion, while application hints enable an ATM gateway to selectively aggregate and map packet flows into ATM VCs, to regulate CBR bandwidth, and to interoperate with ABR rate-based flow control. Via simulation and experimentation, we demonstrate the feasibility and effectiveness of this framework in balancing application performance and bandwidth efficiency in the use of CBR VCs, and in improving ATM link utilization and avoiding gateway congestion in the use of ABR VCs.

F. Hoymany and D. Mosse, "A Simulation Study of Packet Forwarding Methods over ATM: SBR Evaluation," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 401, March/April 1998.

Abstract: The desire to switch ATM cells at high speed and forward data packets in a connectionless (CL) manner poses a challenging architectural difficulty that has not yet been satisfactorily resolved. This difficulty is mainly due to the lack of a packet concept in ATM switches: a packet is a level-3 abstraction, totally hidden from the switch. The Switch-Borne Router (SBR) is a proposed switch/router architecture that makes it possible to switch CL packets at very high speed using ATM technology. This paper introduces SBR and compares its performance with other forwarding methods using a simulator. Compared to other methods, SBR allows for a significantly smaller number of open/close VC operations per second, has lower buffering requirements, and achieves higher throughput. The same results hold using a real-life Internet packet trace as well as a workload generator.

M. Elaoud and P. Ramanathan, "Adaptive Use of Error-Correcting Codes for Real-Time Communication in Wireless Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 548, March/April 1998.

Abstract: The growing demand for mobility has increased the need to develop more efficient, reliable and cost-effective services for data transmission over the air interface. The air interface, as compared to the wired domain, is characterized by a high bit-error rate, limited bandwidth, and intermittent connectivity. In addition, power in hand-held devices and laptops is limited by battery technology. In this paper, we present AFEC, an Adaptive Forward Error-Correction scheme, which makes effective use of the air interface by minimizing the number of data bits transmitted to convey message packets in a real-time stream over an air interface. Through simulations, we show that AFEC outperforms the traditional single forward-error-correction scheme.

S. J. Oh and K. M. Wasserman, "Adaptive Resource Management for DS-CDMA Networks Subject to Energy Constraints," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 556, March/April 1998.

Abstract: In this paper, we consider a time-slotted DS-CDMA network consisting of a single radio access point and a finite number of mobile wireless terminals. The terminals generate non-real-time packet data and transmit, subject to constraints on the average energy consumed per packet transmission, in a random access fashion over a common broadband channel to the access point using different spreading codes. As in narrowband ALOHA systems, the performance of random access spread spectrum networks can be severely hampered due to saturation effects caused by inherent bistable behavior. We study the effect of dynamic spreading gain control on the stability and throughput properties of the network. We first impose no energy constraints and provide an optimal (throughput maximizing) algorithm under which (i) the asymptotically optimal retransmission probability is equal to one, and (ii) the optimal spreading gain increases linearly as the MAI level increases, or equivalently, the transmission rate decreases inverse-linearly as the MAI level increases. We also obtain a simple closed-form expression for the asymptotic throughput as the number of backlogged terminals becomes large. We then impose energy constraints and modify the optimal algorithm by controlling both the spreading gain and retransmission probability of the terminals. The resulting algorithm may or may not eliminate bistability; however, in either case, it achieves high throughput.

P. Lettieri and M. B. Srivastava, "Adaptive Frame Length Control for Improving Wireless Link Throughput, Range and Energy Efficiency," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 564, March/April 1998.

Abstract: Wireless network links are characterized by rapidly time varying channel conditions and battery energy limitations at the wireless mobile user nodes. Therefore static link control techniques that make sense in comparatively well behaved wired links do not necessarily apply to wireless links. New adaptive link layer control techniques are needed to provide robust and energy efficient operation even in the presence of orders of magnitude variations in bit error rates and other radio channel conditions. For example, recent research has advocated adaptive link layer techniques such as adaptive error control [Lettieri97], channel state dependent protocols [Bhagwat96, Fragouli97], and variable spreading gain [Chien97]. In this paper we explore one such adaptive technique: dynamic sizing of the MAC layer frame, the atomic unit that is sent through the radio channel. A trade-off exists between the desire to reduce header and physical layer overhead by making frames large, and the need to reduce frame error rates in the noisy channel by using small frame lengths. Clearly the optimum depends on the channel conditions. Through analysis supported by physical measurements with Lucent's WaveLAN radio we show that adaptive sizing of the MAC layer frame in the presence of varying channel noise indeed has a large impact on the user-seen throughput (goodput). In addition, we show how adaptive frame length control can be exploited to improve the energy efficiency for a desired level of goodput, and to extend the usable radio range with graceful throughput degradation. We describe the implementation of the adaptive MAC frame length control mechanism in combination with adaptive hybrid FEC/ARQ error control [Lettieri97] in a reconfigurable wireless link layer packet processing architecture for a low power adaptive wireless multimedia node.
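The overhead-versus-frame-error trade-off behind adaptive sizing can be sketched directly; the header size, candidate payload grid, and the no-FEC "any bit error kills the frame" model are our assumptions, not the paper's measured WaveLAN model:

```python
def goodput_efficiency(payload, header, ber):
    """Fraction of raw channel capacity delivered as user data when frames of
    `payload` + `header` bytes cross a channel with bit error rate `ber` and
    any bit error destroys the frame (no FEC)."""
    bits = 8 * (payload + header)
    return (payload / (payload + header)) * (1.0 - ber) ** bits

def best_payload(header, ber, sizes=range(16, 2048, 16)):
    """Pick the payload size maximizing goodput for the current channel BER:
    the core decision of an adaptive frame-length policy."""
    return max(sizes, key=lambda length: goodput_efficiency(length, header, ber))
```

As expected, the optimal frame shrinks as the channel degrades, which is exactly why a static frame size cannot serve both clean and noisy channel states well.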

C. Fragouli, V. Sivaraman and M. Srivastava, "Controlled Multimedia Wireless Link Sharing via Enhanced Class-Based Queueing with Channel-State-Dependent Packet Scheduling," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 572, March/April 1998.

Abstract: A key problem in transporting multimedia traffic across wireless networks is a controlled sharing of the wireless link by different packet streams. So far this problem has been treated as that of providing support for quality of service in time division multiplexing based medium access control (MAC) protocols. Adopting a different perspective to the problem, this paper describes an approach based on extending the Class-Based Queuing (CBQ) based controlled hierarchical link sharing model proposed for the Internet [Floyd95]. Our scheme enhances CBQ, which works well in wired links such as point-to-point wires of fixed bandwidth, to also work well with wireless links based on radio channels that are (i) inherently shared on-demand among multiple radios, and (ii) subject to highly dynamic bandwidth variations due to spatially and temporally varying fading with accompanying burst errors. The proposed scheme is based on combining a modified version of CBQ with Channel-State Dependent Packet Scheduling [Bhagwat96].

L. W. Lehman, S. Garland and D. L. Tennenhouse, "Active Reliable Multicast," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 581, March/April 1998.

Abstract: This paper presents a novel loss recovery scheme, Active Reliable Multicast (ARM), for large-scale reliable multicast. ARM is active in that routers in the multicast tree play an active role in loss recovery. Additionally, ARM utilizes soft-state storage within the network to improve performance and scalability. In the upstream direction, routers suppress duplicate NACKs from multiple receivers to control the implosion problem. By suppressing duplicate NACKs, ARM also lessens the traffic that propagates back through the network. In the downstream direction, routers limit the delivery of repair packets to receivers experiencing loss, thereby reducing network bandwidth consumption. Finally, to reduce wide-area recovery latency and to distribute the retransmission load, routers cache multicast data on a best-effort basis. ARM is flexible and robust in that it does not require all nodes to be active, nor does it require any specific router or receiver to perform loss recovery. Analysis and simulation results show that ARM yields significant benefits even when less than half the routers within the multicast tree can perform ARM processing.
Keywords: Reliable multicast; active networks; soft state; distributed network algorithms; protocol design and analysis; NACK implosion
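
The upstream NACK suppression ARM performs can be sketched as per-router soft state: the first NACK for a lost packet is forwarded and remembered for a short interval, and later duplicates are absorbed until the state expires. The data structure and timeout below are illustrative, not ARM's actual mechanism:

```python
import time

class NackSuppressor:
    """Per-router soft state: forward the first NACK for each lost
    packet, absorb duplicates until the entry times out."""
    def __init__(self, ttl=0.5):
        self.ttl = ttl            # soft-state lifetime in seconds (assumed)
        self.seen = {}            # (group, seq) -> expiry time

    def should_forward(self, group, seq, now=None):
        now = time.monotonic() if now is None else now
        key = (group, seq)
        expiry = self.seen.get(key)
        if expiry is not None and now < expiry:
            return False          # duplicate within the window: suppress
        self.seen[key] = now + self.ttl
        return True               # first NACK (or state expired): forward

r = NackSuppressor(ttl=0.5)
assert r.should_forward("g1", 42, now=0.0)      # first NACK goes upstream
assert not r.should_forward("g1", 42, now=0.1)  # duplicate suppressed
assert r.should_forward("g1", 42, now=1.0)      # state expired: forward again
```

Because the state is soft, a router that loses it merely forwards a few extra NACKs; correctness never depends on any particular router holding the entry.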

U. Legedza, D. Wetherall and J. Guttag, "Improving the Performance of Distributed Applications Using Active Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 590, March/April 1998.

Abstract: An active network allows applications to inject customized programs into network nodes. This enables faster protocol innovation by making it easier to deploy new network protocols, even over the wide area. In this paper, we argue that the ability to introduce active protocols offers important opportunities for end-to-end performance improvements of distributed applications. We begin by describing several active protocols that provide novel network services and discussing the potential impact of these kinds of services on end-to-end application performance. We then present and analyze the performance of an active networking protocol that uses caching within the network backbone to reduce load on both servers and backbone routers.
Keywords: active networks; caching; distributed applications; networking protocols; performance

S. Bhattacharjee, K. Calvert and E. W. Zegura, "Self-Organizing Wide-Area Network Caches," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 600, March/April 1998.

Abstract: A substantial fraction of all network traffic today comes from applications in which clients retrieve objects from servers. The caching of objects close to clients is an important technique for reducing both network traffic and response time for such applications. In this paper we consider the benefits of associating caches with switching nodes throughout the network, rather than in a few locations. We also consider the use of various self-organizing or active cache management strategies for organizing cache content. We evaluate caching techniques using both simulation and a general analytic model for network caching. Our results indicate that in-network caching can make effective use of cache space, and in many cases self-organizing caching schemes yield better average round-trip latencies than traditional approaches, using much smaller per-node caches.

D. Decasper and B. Plattner, "DAN: Distributed Code Caching for Active Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 609, March/April 1998.

Abstract: Active networking allows the network infrastructure to be programmable. Recent research focuses on two commonly separated approaches: capsules and programmable switches. Capsules are typically small programs in packets which flow through the network and are executed in-band on nodes receiving them. Programmable switches are network devices which offer a back door to inject code by a network administrator out-of-band in order to enhance the device's capabilities. By combining these two approaches, this paper proposes a novel system architecture which allows both application-specific data processing in network nodes and rapid deployment of new network protocol implementations. Instead of carrying code, data packets carry pointers to digitally signed active modules initially loaded on-the-fly, in-band, from trusted code servers on the network. Packet processing runs at high speed, may access and modify the whole network subsystem, and no potentially slow virtual machines are needed.

E. Altman, A. Orda and N. Shimkin, "Bandwidth Allocation for Guaranteed versus Best Effort Service Categories," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 617, March/April 1998.

Abstract: Modern communication networks evolve towards integration of guaranteed-performance and best-effort service types. The co-existence of these two service types offers substantial benefits, such as resource sharing between service classes, and the ability of the user to select an appropriate service class according to its individual requirements and preferences. Notwithstanding, such interaction potentially complicates the system behavior, and gives rise to subtle optimization questions, which need to be explored and understood in order to allow efficient network operation. In this paper we address some essential performance and flow control issues associated with such service interactions. We propose a fluid model for session flow, which captures the two interaction mechanisms of resource sharing. In particular, our model incorporates the possibility of session migration, where sessions may shift from best effort to guaranteed performance service due to congestion experienced in the former. Within this model, we analyze the system performance and characterize its steady state behavior. We further show that under certain conditions the system exhibits bistable behavior, where some transient congestion may stir the system from a stable and efficient operating point to an inefficient and congested one, which might persist indefinitely. For the latter case, we propose a call admission control scheme which prevents the system from getting trapped in a congested-type equilibrium, while not interfering with normal system operation.
Keywords: Integrated services; Broadband Networks; Best-effort service; Guaranteed performance service; Resource Reservation; Session Migration; Dynamic Resource Allocation; Call Admission Control

R. L. Cruz, "SCED+: Efficient Management of Quality of Service Guarantees," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 625, March/April 1998.

Abstract: Current proposals for the provision of deterministic quality of service guarantees in integrated services networks require per-session management of traffic flowing in network switches, raising scalability questions for practical implementation of high speed packet switching in large scale networks. At the same time, the end-to-end delay bounds associated with current proposals can be overly conservative, limiting the utility of the bounds to guide efficient resource allocation. In this paper, we introduce SCED+, a network scheduling algorithm that yields scalable provision of tight deterministic end-to-end delay bounds. These features are achieved through the use of aggregation and efficient statistical multiplexing between best-effort and guaranteed traffic. The SCED+ algorithm also supports statistical multiplexing between guaranteed traffic streams, providing tight end-to-end delay bounds for traffic streams which can tolerate non-zero packet loss rates. In order to facilitate the analysis of SCED+, we refine the so-called network calculus to address delay jitter, and introduce a simple and unified approach to the analysis of output burstiness of traffic departing from a general network element.
Keywords: Scheduling; delay jitter; network calculus; quality of service guarantees; statistical multiplexing; large scale networks; high speed packet switching

E. W. Knightly, "Enforceable Quality of Service Guarantees for Bursty Traffic Streams," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 635, March/April 1998.

Abstract: Providing statistical quality-of-service guarantees introduces the conflicting requirements for both deterministic traffic models to isolate and police users and statistical multiplexing to efficiently utilize and share network resources. We address this issue by introducing two schemes for providing statistical services to deterministically policed sources: (1) adversarial mode resource allocation in which we bound the stochastic envelopes of policed streams and provide a statistical service for adversarial or worst case sources and (2) non-adversarial mode allocation in which we approximate the stochastic envelopes of policed, but non-worst-case streams in order to exploit a statistical multiplexing gain in the typical case. Our key technique is to study the problem within the domain of deterministic and stochastic envelopes, which allows us to explicitly consider sources with rate variations over multiple time scales, obtain results for any deterministic traffic model, and apply accurate admission control tests for buffered priority schedulers. We evaluate the scheme's performance with experiments using traces of compressed video and show that substantial statistical multiplexing gains are achieved.

C. F. Su and G. de Veciana, "On Statistical Multiplexing, Traffic Mixes, and VP Management," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 643, March/April 1998.

Abstract: ATM-based integrated services networks are likely to rely on the Virtual Path (VP) concept as an intermediate resource management layer wherein key decisions concerning resource allocation, sharing, and flow aggregation are made. In this paper we consider the impact that statistically multiplexing heterogeneous services on VP connections will have on network design and management. Based on simple models we consider several questions with perhaps surprising answers including the following. Given two traffic types with different quality of service requirements, should one segregate such flows on their own VPs, or is it to the network's advantage to multiplex the flows on a single VP guaranteeing the most stringent QoS requirement? Assuming two VPs have been set up between a given origin-destination pair and heterogeneous flows are to be carried, how should one route the connections to achieve good performance?

D. Aksoy and M. Franklin, "Scheduling for Large-Scale On-Demand Data Broadcasting," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 651, March/April 1998.

Abstract: Recent advances in telecommunications have enabled the deployment of broadcast-based wide-area information services that provide on-demand data access to very large client populations. In order to effectively utilize a broadcast medium for such a service, it is necessary to have efficient, on-line scheduling algorithms that can balance individual and overall performance, and can scale in terms of data set sizes, client populations, and broadcast bandwidth. In this study we introduce a parameterized algorithm that provides good performance across all of these criteria and can be tuned to emphasize either average or worst case waiting time. Unlike previous work on low overhead scheduling, the algorithm is not based on estimates of the access probabilities of items, but rather, it makes scheduling decisions based on the current queue state, allowing it to easily adapt to changes in the intensity and distribution of the workload. We examine the performance of the algorithm using a simulation model.
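
A simple instance of "scheduling from the current queue state" is to score each requested item by its number of outstanding requests times the age of its oldest request, an R×W-style rule. The paper's algorithm is a tunable generalization that can emphasize average or worst-case waiting time, so the rule below is only an illustrative sketch:

```python
def pick_next(queues, now):
    """Choose the item to broadcast next.
    queues maps item -> list of request arrival times; the score is
    (outstanding requests) x (age of the oldest request), so popular
    items and long-starved items can both win."""
    def score(item):
        arrivals = queues[item]
        return len(arrivals) * (now - min(arrivals))
    return max(queues, key=score)

# Item 'b' has a single, much older request, so it beats popular 'a'.
queues = {"a": [9.0, 9.5, 9.8], "b": [1.0]}
assert pick_next(queues, now=10.0) == "b"   # 1 * 9.0 > 3 * 0.2-1.0 waits
```

Because the decision uses only the current queue contents, no access-probability estimates are needed and the schedule adapts immediately when the workload shifts, which is the property the abstract emphasizes.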

Y. Wang, Z. L. Zhang, D. H. C. Du and D. Su, "A Network-Conscious Approach to End-to-End Video Delivery over Wide Area Networks Using Proxy Servers," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 660, March/April 1998.

Abstract: In this paper we present a novel network-conscious approach to the problem of end-to-end video delivery over wide-area networks using proxy servers situated between local-area networks (LANs) and a backbone wide-area network (WAN). We develop a novel and effective video delivery technique called video staging via intelligent utilization of the disk bandwidth and storage space available at proxy servers. We also design several video staging methods and evaluate their effectiveness in reducing the backbone WAN bandwidth requirement. Our results demonstrate that the proposed proxy-server-based, network-conscious approach provides an effective and scalable solution to the problem of end-to-end video delivery over wide-area networks.

P. Cuenca, A. Garrido, F. Quiles and L. Orozco-Barbosa, "Some Proposals to Improve Error Resilience in the MPEG-2 Video Transmission over ATM Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 668, March/April 1998.

Abstract: MPEG-2 video communications over ATM networks is one of the most active research areas in the field of computer communications. In this paper, we introduce a set of control mechanisms at different levels of the protocol architecture to be used in MPEG-2-based video communications systems using ATM networks as their underlying transmission mechanism. We argue that in order to be able to create video systems able to cope with some of the errors encountered in computer communications systems, a structured set of error-resilient protocol mechanisms is needed. Our results show the effectiveness of using a structured set of control mechanisms to overcome for the loss of cells carrying VBR MPEG-2 video streams.

Z. Jiang and L. Kleinrock, "A General Optimal Video Smoothing Algorithm," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 676, March/April 1998.

Abstract: Video smoothing is a promising technique for reducing the bandwidth variability of video in order to improve network efficiency. This paper presents a general optimal video smoothing algorithm based on the concept of dynamic programming. The algorithm generates the optimum transmission schedule for different requirements by setting the constraints and the cost function accordingly. It can be used to study the smoothing of both stored video and real time video. In particular, for stored video, we show how the number of rate changes in the smoothed video is affected by the renegotiation cost and buffer size, assuming that the transmission rate is allowed to be lower than the reserved rate. For the real time system, we study the impact of various system parameters, including playout delay, client buffer size, and server buffer size, on the performance of video smoothing.

M. Montgomery and G. de Veciana, "Hierarchical Source Routing through Clouds," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 685, March/April 1998.

Abstract: Based on a loss network model, we present an adaptive source routing scheme for a large, hierarchically-organized network. To represent the available capacity of a cloud (subnetwork), we compute the average implied cost to go through or into the cloud. Such implied costs reflect the congestion in the cloud as well as the interdependencies among traffic streams in the network. We prove that both a synchronous and asynchronous distributed computation of the implied costs will converge to a unique solution under a light load condition. To assess accuracy, we derive a bound on the difference between our implied costs and those calculated for a flat network. In addition, we show how on-line measurements can be incorporated into the routing algorithm, and we present some representative computational results which demonstrate the ability of our scheme to appropriately route high level flows while significantly reducing complexity.

D. G. Thaler and C. V. Ravishankar, "Distributed Top-Down Hierarchy Construction," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 693, March/April 1998.

Abstract: Hierarchies provide scalability in large networks and are integral to many widely-used protocols and applications. Previous approaches to constructing hierarchies have typically either assumed static hierarchy configuration, or have used bottom-up construction methods. We describe how to construct hierarchies in a top-down fashion, and show that our method is much more efficient than bottom-up methods. We also show that top-down hierarchy construction is a better choice when administrative policy constraints are imposed on hierarchy formation.

J. Behrens and J. J. Garcia-Luna-Aceves, "Hierarchical Routing Using Link Vectors," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 702, March/April 1998.

Abstract: An area-based link-vector algorithm (ALVA) is introduced for the distributed maintenance of routing information in very large internetworks. According to ALVA, destinations in an internetwork are aggregated in areas in multiple levels of hierarchy. Routers maintain a database that contains a subset of the topology at each level of the hierarchy. This subset corresponds to those links used in preferred paths to reach destinations (nodes inside the same immediate area or remote areas). ALVA is the first hierarchical routing algorithm based on link-state information that does not require complete topology information at each level in the hierarchy. The correctness of ALVA is verified. Simulation results are presented showing that ALVA outperformed OSPF in terms of communication and storage overhead.

J. Tian and G. Neufeld, "Forwarding State Reduction for Sparse Mode Multicast Communication," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 711, March/April 1998.

Abstract: Reducing forwarding state overhead of multicast routing protocols is an important issue towards a scalable global multicast solution. In this paper, we propose a new approach, Dynamic Tunnel Multicast, which utilizes dynamically established tunnels on unbranched links of a multicast distribution tree to eliminate unnecessary multicast forwarding states. Analysis and simulation results show promising reduction in the state overhead of sparse mode multicast routing protocols.
Keywords: Multicast; Routing; State reduction; Dynamic tunnel

Z. Naor and H. Levy, "Minimizing the Wireless Cost of Tracking Mobile Users: An Adaptive Threshold Scheme," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 720, March/April 1998.

Abstract: Mobile user tracking is a major issue in wireless networks. Previous studies and traditional approaches dealt only with tracking algorithms which adapt themselves to the user activity. In this work we propose a novel approach for user tracking, in which the tracking activity is adapted to both user and system activity. The basic idea is to make the user location update rate dependent not only on the user activity (such as the call profile and mobility pattern); rather, it is made dependent also on the signaling load, which reflects the actual cost of the update operation. Thus, in low signaling load locations, the users are to transmit location update messages more frequently. To carry out this approach we propose an Adaptive Threshold Scheme (ATS): The network determines for each cell a registration threshold level (which depends on the cell load) and announces it, as a broadcast message, to the users. A user computes its own registration priority and then transmits a registration message only if its priority exceeds the announced threshold level. Thus, whenever the local load on the cell is low, the users are requested to update their locations more often, while in loaded cells the registration activity is minimized. Our analysis shows that the adaptive threshold scheme reduces the paging cost, in comparison with other dynamic methods, without increasing the wireless cost of registration.
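
The registration decision in the Adaptive Threshold Scheme reduces to a comparison between a locally computed priority and a broadcast, load-dependent threshold. The linear load-to-threshold mapping below is a hypothetical illustration (the paper derives the threshold from its own cost model), but the decision structure follows the abstract:

```python
def should_register(user_priority, threshold):
    """A user transmits a location update only if its locally computed
    registration priority exceeds the threshold its cell broadcast."""
    return user_priority > threshold

def cell_threshold(signaling_load, low=0.2, high=0.9):
    """Illustrative mapping (not from the paper): a lightly loaded cell
    announces a low threshold, inviting more frequent updates, while a
    loaded cell raises it to damp registration traffic."""
    return low + (high - low) * min(signaling_load, 1.0)

# The same user registers in an idle cell but defers in a busy one.
assert should_register(0.5, cell_threshold(0.0))
assert not should_register(0.5, cell_threshold(1.0))
```

The effect is the one claimed in the abstract: registration traffic concentrates where signaling is cheap, so location information stays fresher on average without raising the wireless registration cost.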

Y. Cui, D. Lam, J. Widom and D. C. Cox, "Efficient PCS Call Setup Protocols," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 728, March/April 1998.

Abstract: Increasing demand for wireless mobile communications, coupled with limited network resources, has motivated investigation into alternative efficient mobility management solutions. In this paper, we consider wireless PCS call setup protocols. We propose a Lightweight Location Lookup Protocol, which significantly reduces the network signaling and setup delay during call setup. As an alternative, we consider a Reverse Connection Setup protocol that also performs efficient call setup, while minimizing the cost of failed call attempts. The two protocols have different advantages, in particular when combined with location management techniques including HLR/VLR, HLR/VLR with partial replication, and HLR/VLR with caching. In this paper, we study the combinations of call setup protocols with location management techniques and compare their performance to find the most efficient call setup schemes. Results are presented.

T. Woo, T. F. La Porta, J. Golestani and N. Agarwal, "Update and Search Algorithms for Wireless Two-Way Messaging: Design and Performance," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 737, March/April 1998.

Abstract: Wireless two-way messaging is a new wireless data service that is rapidly gaining popularity. The basic service it provides is acknowledged exchange of short messages among subscribers or network-based servers. Like cellular/PCS systems, wireless two-way messaging systems are cellular in structure, and thus share the location management problem. In this paper, we study the problem of location management for wireless two-way messaging. We first highlight the unique concerns of location management for wireless two-way messaging, and lay out its differences from cellular/PCS telephony. We then provide a new cost formulation for its study. Based on this formulation, we revisit existing schemes that have been proposed for cellular/PCS telephony to evaluate how they perform under wireless two-way messaging. We then introduce new classes of algorithms, called Pending Replies and Deferred Delivery, whose designs take advantage of the unique characteristics of wireless two-way messaging, and show through simulation, that they provide improved performance.

A. Indiresan, A. Mehra and K. G. Shin, "The END: A Network Adapter Design Tool," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 756, March/April 1998.

Abstract: We present the design and implementation of Emulated Network Device (END), a network adapter design tool that facilitates accurate evaluation of alternative adapter designs. Using device emulation, END permits designers to couple a representative model of an adapter with a real host and its communication software. Different adapter designs can be evaluated and compared accurately in a realistic setting, i.e., while capturing host-adapter concurrency and interaction overheads, before building a prototype. We present the architectural framework adopted by END and demonstrate its feasibility via a case study of a commercial network adapter. Several design improvements to alleviate performance bottlenecks are realized and evaluated using END, highlighting its utility as a network adapter design tool.

B. Krupczak, M. Ammar and K. Calvert, "Implementing Protocols in Java: The Price of Portability," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 765, March/April 1998.

Abstract: As the number and variety of Web- and network-based applications continues to increase, so does the need for flexible communication protocols and services to support them. Traditionally, a major impediment to deployment of new protocols is the need to upgrade millions of end-systems with compatible implementations. At the same time, Java, a language explicitly designed to support development and distribution of new applications via the Web, is emerging as a (potentially) ubiquitous system platform. It is therefore natural to consider whether Java might speed the introduction of protocols to better support new applications. In this paper, we investigate the tradeoffs involved in using Java for protocol implementation and deployment. Using insights from a Java-based protocol suite and supporting subsystem we have implemented, we describe the benefits of using the Java language and quantify the performance cost of implementing a protocol in Java for various combinations of interpretation and compilation. We find that the present performance cost of using Java-based protocols is roughly equivalent to four years of hardware performance gains, i.e., interpreted, Java-based protocol performance on current hardware is roughly equivalent to the performance of compiled C code on four-year-old hardware.

J. Hall, R. Sabatino, S. Crosby, I. Leslie and R. Black, "A Comparative Study of High Speed Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 774, March/April 1998.

Abstract: In the management of a modern LAN or campus network, two issues are of key importance, namely network performance and capacity planning. In this paper we report on results from an experimental programme which aims to quantify the performance that can be achieved with a real distributed application running over a range of different network technologies, including Ethernet, ATM, FDDI, and a 100 Mb/s packet-switched LAN. In particular we analyse the contribution of each of the various application and network components to the overall performance experienced by applications.

A. Mekkittikul and N. McKeown, "A Practical Scheduling Algorithm to Achieve 100% Throughput in Input-Queued Switches," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 792, March/April 1998.

Abstract: Input queueing is becoming increasingly used for high-bandwidth switches and routers. In previous work, it was proved that it is possible to achieve 100% throughput for input-queued switches using a combination of virtual output queueing and a scheduling algorithm called LQF. However, this is only a theoretical result: LQF is too complex to implement in hardware. In this paper we introduce a new algorithm called Longest Port First (LPF), which is designed to overcome the complexity problems of LQF, and can be implemented in hardware at high speed. By giving preferential service based on queue lengths, we prove that LPF can achieve 100% throughput.
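
LPF's "preferential service based on queue lengths" weighs each candidate (input, output) pair by the occupancy of its ports and then selects a matching. The greedy approximation below conveys the flavor of port-weighted matching; it is not the paper's hardware algorithm, which computes a maximum-weight match:

```python
def greedy_port_match(queues):
    """queues[i][j] = cells queued at input i for output j (a VOQ table).
    Each edge (i, j) is weighted by the total occupancy of its input
    and output ports; edges are taken greedily in decreasing weight,
    skipping ports already matched this slot."""
    n = len(queues)
    in_w = [sum(row) for row in queues]
    out_w = [sum(queues[i][j] for i in range(n)) for j in range(n)]
    edges = [(in_w[i] + out_w[j], i, j)
             for i in range(n) for j in range(n) if queues[i][j] > 0]
    match, used_in, used_out = [], set(), set()
    for _, i, j in sorted(edges, reverse=True):
        if i not in used_in and j not in used_out:
            match.append((i, j))
            used_in.add(i)
            used_out.add(j)
    return match

# The heavily loaded pair (1, 1) is served, and port conflicts force
# the remaining traffic onto the only compatible edge, (0, 0).
m = greedy_port_match([[3, 1], [2, 4]])
assert len(m) == 2 and (1, 1) in m
```

Weighting ports rather than individual queues is what makes the selection cheap enough for hardware, which is the distinction the abstract draws against LQF.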

M. Hashemi and A. Leon-Garcia, "A Multicast Single-Queue Switch with a Novel Copy Mechanism," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 800, March/April 1998.

Abstract: A new multicasting mechanism for RAM-based shared-buffer ATM switches is introduced. Multiple logical output queues, including a new queue for multicast and broadcast cells, are all interleaved into a single physical buffer as in the single-queue switch architecture [Hashemi and Garcia 97]. Queues are hardware-independent and full buffer-sharing is achieved. Cells are scheduled in the multicast queue based on their priority level and service type as in the unicast queues. A single copy of a multicast cell is kept in the queue. The cell is sent to all of its destinations upon its service time. A unicast cell in a given output queue can still be sent if it has higher priority than a multicast cell. In this case a unicast copy of the multicast cell for that output port is placed in the output queue. This copying scheme requires neither extra hardware nor extra memory space for duplicated cells. A new grouping algorithm is presented which supports the incorporation of the multicast queue and the unicast queues into a single overall queue.
Keywords: ATM Switch Architecture; Multicasting; Scheduling; RAM-Based Shared-Buffer Switch; Quality of Service; Single-Queue Switch

Y. Joo and N. McKeown, "Doubling Memory Bandwidth for Network Buffers," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 808, March/April 1998.

Abstract: Memory bandwidth is frequently a limiting factor in the design of high-speed switches and routers. In this paper, we introduce a buffering scheme called ping-pong buffering that increases memory bandwidth by a factor of two. Ping-pong buffering halves the number of memory operations per unit time, allowing faster buffers to be built from a given type of memory. Alternatively, for a buffer of given bandwidth, ping-pong buffering allows the use of slower, lower-cost memory devices. But ping-pong buffers have an inherent penalty: they waste a fraction of the memory. Unless additional memory is used, the overflow rate is increased; in the worst case, half of the memory is wasted. Although this can be compensated for by doubling the size of the memory, this is undesirable in practice. Using simulations, we argue that the problem is eliminated by the addition of just 5% more memory. We show that this result holds over a wide range of traffic and switch types, for low or high offered load, and continues to hold when the buffer size is increased.
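
The core idea can be modeled in a few lines: packets alternate between two memory banks so that each bank performs at most one operation per cycle, and a write can be dropped when its bank is full even though the other bank has room, which is the wasted-memory penalty the abstract describes. The model below (bank sizing, drop rule) is an illustrative simplification, not the paper's design:

```python
class PingPongBuffer:
    """Two banks; each cycle writes go to one bank and reads come from
    the other, so each bank sees one memory operation per cycle."""
    def __init__(self, bank_size):
        self.banks = [[], []]
        self.bank_size = bank_size
        self.write_bank = 0
        self.dropped = 0

    def cycle(self, arriving=None):
        """One memory cycle: write the arriving packet (if any) to the
        current write bank, read one packet from the other bank, then
        swap the banks' roles (the 'ping-pong')."""
        w, r = self.write_bank, 1 - self.write_bank
        if arriving is not None:
            if len(self.banks[w]) < self.bank_size:
                self.banks[w].append(arriving)
            else:
                self.dropped += 1   # overflow even if the other bank has room
        departing = self.banks[r].pop(0) if self.banks[r] else None
        self.write_bank = r
        return departing

buf = PingPongBuffer(bank_size=2)
assert buf.cycle("p1") is None      # nothing to read on the first cycle
assert buf.cycle("p2") == "p1"      # read comes from the other bank
assert buf.cycle() == "p2"
```

The `dropped` counter captures why extra memory is needed: the two banks cannot pool their free space, and the paper's simulation result is that about 5% of additional memory absorbs this penalty.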

J. P. Jue and B. Mukherjee, "Multiconfiguration Multihop Protocols (MMPs): A New Class of Protocols for Packet-Switched WDM Optical Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 816, March/April 1998.

Abstract: Wavelength-division multiplexing (WDM) local-area networks based on the optical passive-star coupler have traditionally been classified as being either single-hop or multihop. A single-hop network provides a direct connection between the source and the destination of a packet during the packet transfer duration, but may require some amount of coordination between the nodes which may involve tuning of the transmitters or receivers at each node. Since the time required to tune a tunable optical transmitter or receiver may be high, a single-hop network may incur significant overhead. On the other hand, a typical multihop network requires little or no tuning, but a packet may traverse a number of intermediate nodes between the source and destination nodes. Each hop incurs additional queuing delay at the intermediate nodes and also increases the overall load on each link and on the network. In this paper, we propose a new class of multiconfiguration multihop protocols (MMPs) which use tunable transmitters and receivers to cycle through a number of configurations which together make up a multihop logical topology. This class of protocols offers a trade-off between the tuning required in a single-hop network and the number of hops required in a multihop network. We present a generalized framework for comparing the proposed protocols with existing single-hop and multihop protocols, and we show that these protocols may offer significant performance gains for systems with high tuning delays and a limited number of transmitters and receivers at each node.
Keywords: Multiconfiguration; Single-Hop; Multihop; Packet-Switching; Passive-Star Coupler; Wavelength-Division Multiplexing (WDM); Optical Network

I. Cidon, T. Hsiao, A. Khamisy, A. Parekh, R. Rom and M. Sidi, "OPENET: An Open and Efficient Control Platform for ATM Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 824, March/April 1998.

Abstract: ATM networks are moving to a state where large production networks are deployed and require a universal, open and efficient ATM network control platform (NCP). The emerging PNNI (Private Network to Network Interface) standard introduces an internetworking architecture which can also be used as an intranetwork interface. However, PNNI falls short in the latter role due to performance limitations, limited functionality and the lack of open interfaces for functional extensions. OPENET is an open high-performance NCP based on performance and functional enhancements to PNNI. It addresses the issues of scalability, high performance and functionality. OPENET focuses on intranetworking and is fully compatible with PNNI in the internetwork environment. The major novelty of the OPENET architecture compared to PNNI is its focus on network control performance. Particular emphasis is given to increasing the overall rate of connection handling, reducing call establishment latency and efficiently utilizing network resources. These performance enhancements are achieved by the use of a native ATM distribution tree for utilization updates, light-weight signalling and extensive use of caching and pre-calculation of routes. OPENET also extends PNNI functionality. It utilizes a new signalling paradigm that better supports fast reservation and multicast services, and a control communication infrastructure which enables the development of augmented services such as directory, hand-off, billing and security. OPENET was implemented by the High-Speed Networking group at Sun Labs and is under operational tests.

G. Hjalmtysson and K. K. Ramakrishnan, "UNITE - An Architecture for Lightweight Signalling in ATM Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 832, March/April 1998.

Abstract: Networks support a wide range of applications with diverse service quality requirements. This demands distinct identification of flows according to their requirements, and the ability to service them differently. Different networking technologies have been evolving towards this common goal. ATM is cost-efficient by exploiting switching. But signaling is slow, making ATM unattractive for bursty data traffic. One difficulty with ATM signaling is the inextricable connection between basic connectivity and QoS management. Consequently, every flow, even a best-effort flow, suffers the overhead of end-to-end connection establishment. We propose a new lightweight architecture for ATM signaling called UNITE. The fundamental philosophy of UNITE is the separation of connectivity from QoS control. This eliminates the round-trip connection setup delay before initiating data transmission. Using a single cell with proper encoding, we avoid the overhead of reassembly and segmentation on the signaling channel, and enable hardware implementation. Performing QoS negotiation in-band allows the switches in the path to process QoS requests in parallel, facilitates connection-specific control policies, supports both sender- and receiver-initiated QoS, and allows for uniform treatment of unicast and multicast connections. UNITE supports variegated multicast trees.

P. L. Tien and M. C. Yuang, "Intelligent Voice Smoother for VBR Voice over ATM Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 841, March/April 1998.

Abstract: When transporting voice data with silence suppression over Asynchronous Transfer Mode (ATM) networks via the Variable Bit Rate (VBR) service, the jitter introduced by the network often renders the speech unintelligible. It is thus indispensable to offer intramedia synchronization to remove jitter while retaining minimal playout delay. In this paper, we propose a neural-network-based intra-voice synchronization mechanism, called the Intelligent Voice Smoother (IVOS). IVOS is composed of three components: Smoother Buffer, Neural Network (NN) Traffic Predictor, and Constant Bit Rate (CBR) Enforcer. Newly arriving frames, assumed to follow a generic Markov Modulated Bernoulli Process (MMBP), are queued in the Smoother Buffer. The NN Traffic Predictor employs an on-line trained Back Propagation Neural Network (BPNN) to predict three traffic characteristics of every newly encountered talkspurt period. Based on the predicted characteristics, the CBR Enforcer derives an adaptive buffering delay by means of a near-optimal, simple, closed-form formula. It then imposes this delay on the playout of the first frame in the talkspurt period. The CBR Enforcer in turn regulates CBR-based departures for the remaining frames of the talkspurt, aimed at assuring minimal mean and variance of Distortion of Talkspurts (DOT) and mean Playout Delay (PD). Simulation results reveal that, compared to three other playout approaches, IVOS achieves superior playout, yielding negligible DOT and PD irrespective of traffic variation.

S. Mukherjee, D. Reininger and B. Sengupta, "An Adaptive Connection Admission Control Policy for VBR+ Service Class," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 849, March/April 1998.

Abstract: A new service class, called VBR+ [Reininger, et al, 94], has been proposed for multimedia applications on ATM networks. VBR+ extends the functionality of the traditional VBR service with the added capability of dynamic resource renegotiation. It makes the specification and modification of the usage parameter controls flexible, and potentially can increase the network resource utilization through statistical multiplexing. These advantages come at the expense of a more complex connection admission controller which should be designed to handle bandwidth renegotiation efficiently. In this paper a connection admission controller for the VBR+ service class is proposed. We identify the desirable features of a VBR+ connection admission controller, and present a novel one that can achieve them through (1) dynamic resource partitioning, and (2) dynamic resource redistribution among active connections. Simulation results showing the performance of the controller using a number of actual video traces are presented. The results show that the controller is robust in admitting a variety of video sources with widely different traffic burstiness. By partitioning the resource pool dynamically, and distributing resources among contending connections fairly, it can maintain very good quality, and can achieve high utilization. Comparison with traditional VBR service reveals that the CAC is able to provide comparable quality with higher network utilization.
Keywords: ATM Networks; VBR and VBR+ Services; Bandwidth Renegotiation; Connection Admission Control; Quality of Service

S. H. Low, "Equilibrium Allocation of Variable Resources for Elastic Traffics," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 858, March/April 1998.

Abstract: Consider a set of connections sharing a network node under an allocation scheme that provides each connection with a fixed minimum and a random extra amount of bandwidth and buffer. Allocations and prices are adjusted to adapt to resource availability and user demands. We consider two scenarios of user behavior. In the first scenario a connection purchases an allocation to maximize its expected utility in such a way that the resource cost of the new allocation, and hence its connection charge, remains the same as that for the old allocation. In the second scenario this budget constraint is relaxed and a connection tries to maximize its benefit, expected utility minus the resource cost. Equilibrium is achieved when all connections achieve their optimality and demand equals supply for non-free resources. We show that at equilibrium the expected return on purchasing variable resources can be higher than that on fixed resources. Thus connections must balance the marginal increase in utility due to higher return on variable resources and the marginal decrease in utility due to their variability. We further show that in equilibrium where such tradeoff is optimized all connections hold strictly positive amounts of variable bandwidth and buffer.

E. Bortnikov and R. Cohen, "Schemes for Scheduling of Control Messages by Hierarchical Protocols," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 865, March/April 1998.

Abstract: The paper addresses the problem of designing efficient scheduling policies for the transmission of control messages by hierarchical network protocols. Such protocols encounter a tradeoff between the desire to forward a control message across the tree as soon as it is received, and the desire to reduce control traffic. Scheduling problems that arise in this context are defined and discussed. The paper mainly concentrates on minimizing the average extra delay encountered by the control messages under an upper bound on the number of outgoing messages a node can send during a fixed period of time. A polynomial-time algorithm is presented for the off-line version of the problem, and then several efficient on-line heuristics are presented and compared.
Keywords: hierarchical protocols; scheduling algorithms; dynamic programming; competitive analysis

A. Greenberg and D. Wischik, "Admission Control for Booking Ahead Shared Resources," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 873, March/April 1998.

Abstract: Calls that make large, persistent demands for network resources will be denied consistent service, unless the network employs adequate control mechanisms. Calls of this type include video conferences. Although overprovisioning network capacity would increase the likelihood of accepting these calls, it is a very expensive option to apply uniformly in a large network, especially as the calls require high bandwidth and low blocking probabilities. Such large calls typically require coordination of geographically distributed facilities and people at the end systems. So it is natural to book the network requirements ahead of their actual use. In this paper, we present a new, effective admission control algorithm for booking ahead network services. The admission control is based on a novel application of effective bandwidth theory to the time domain. Systematic and comprehensive simulation experiments provide understanding of how booking ahead affects call blocking and network utilization, considering call duration, number of links, bandwidth, routing, and the mix of bookahead versus immediate arrival traffic. Allowing some calls to book ahead radically reduces their chance of service denial, while allowing flexible and efficient sharing of network resources with normal calls that do not book ahead.
Keywords: Booking Ahead; Advance Reservation; Admission Control; Integrated Services Networks; Video Conferencing; Effective Bandwidths

R. Engel, D. Kandlur, A. Mehra and D. Saha, "Exploring the Performance Impact of QoS Support in TCP/IP Protocol Stacks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 883, March/April 1998.

Abstract: This paper explores the performance impact of supporting QoS guarantees on communication in TCP/IP protocol stacks at Unix-like end hosts. We first demonstrate the efficacy of our RSVP-based QoS architecture in providing the desired QoS to individual connections via application-level experiments using UDP sessions and TCP connections on an ATM network. We then identify and measure, via detailed profiling, the overheads imposed by the individual components of the QoS architecture, such as traffic policing, traffic shaping, and buffer management. Our measurements reveal that traffic policing overheads are largely offset by savings due to per-session buffer pre-allocation, and, for ATM networks, a faster path through the network interface layer. In the latter case the data path latency for compliant packets can even be slightly smaller than the default best-effort data path latency. Traffic shaping presents more challenges, primarily because of interactions with the operating system CPU scheduler. We discuss the performance implications of traffic shaping and suggest techniques to reduce or mask some of the associated overheads.

V. Sharma and E. A. Varvarigos, "Limited Wavelength Translation in All-Optical WDM Mesh Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 893, March/April 1998.

Abstract: We analyze limited wavelength translation in all-optical, wavelength division multiplexed (WDM) wrap-around mesh networks, where up to W wavelengths, each of which can carry one circuit, are multiplexed onto a network link. All-optical wavelength translators with a limited translation range permit an incoming wavelength to be switched only to a small subset of the outgoing wavelengths. Although more restrictive than full wavelength translation (which permits an incoming wavelength to be switched to any outgoing wavelength), limited wavelength translation is a topic of recent study, since current practical wavelength translators are capable only of limited translation. We consider the case where an incoming wavelength can be switched to one of k (k = 2,3) outgoing wavelengths (called the feasible wavelength set), and we obtain the probability that a session arriving at a node at a random time successfully establishes a connection from its source node to its destination node. Our analysis captures the state of a feasible wavelength set at a network node, which allows us to obtain the probability of successfully establishing the circuit. Based on this probability, we quantify the benefits of limited wavelength translation by demonstrating that in mesh networks, it can obtain most of the performance advantages of full translation at a fraction of the cost. Our work is the first to analyze limited wavelength translation for mesh networks under a probabilistic model, and accurately predicts the network performance over a wider range of network loads than previous works.

S. Subramaniam, M. Azizoglu and A. K. Somani, "On the Optimal Placement of Wavelength Converters in Wavelength-Routed Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 902, March/April 1998.

Abstract: In this paper, we consider the problem of optimally placing a given number of wavelength converters on a path to minimize the call blocking probability. Using a simple performance model, we first prove that uniform spacing of converters is optimal for the end-to-end performance when link loads are uniform and statistically independent. We then show that significant gains are achievable with optimal placement compared to random placement. For non-uniform link loads, we provide a dynamic programming algorithm for the optimal placement and compare the performance with random and uniform placement. Optimal solutions for bus and ring topologies are also presented.
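Under an independent-link model like the one the abstract assumes, the effect of converter placement is easy to compute: converters cut the path into segments, and each segment must find one wavelength free on all of its links. The sketch below (our own illustration; the function names and the brute-force search are assumptions, not the paper's dynamic-programming algorithm) recovers the uniform-spacing result for small paths:

```python
from itertools import combinations

def success_prob(n_links, converter_positions, p_busy, W):
    """P(call accepted) on a path of n_links links, with W wavelengths per
    link and each wavelength busy independently with probability p_busy.
    Converters sit after the links listed in converter_positions, splitting
    the path into segments that each need one continuous free wavelength."""
    bounds = [0] + sorted(converter_positions) + [n_links]
    prob = 1.0
    for a, b in zip(bounds, bounds[1:]):
        seg_len = b - a
        if seg_len == 0:
            continue
        free_one = (1 - p_busy) ** seg_len  # one wavelength free on all links
        prob *= 1 - (1 - free_one) ** W     # at least one of W wavelengths free
    return prob

def best_placement(n_links, n_conv, p_busy, W):
    """Exhaustive search over converter positions (feasible for short paths)."""
    sites = range(1, n_links)
    return max(combinations(sites, n_conv),
               key=lambda pos: success_prob(n_links, pos, p_busy, W))
```

For example, with 8 links, one converter, per-wavelength load 0.5 and W = 4, the search places the converter at the midpoint, consistent with the paper's result that uniform spacing is optimal under uniform, independent link loads.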

M. Alanyali and E. Ayanoglu, "Provisioning Algorithms for WDM Optical Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 910, March/April 1998.

Abstract: This paper concerns connection provisioning for optical networks employing wavelength division multiplexing. A heuristic algorithm is developed and numerically studied for routing and wavelength assignment of a set of static connection requests. The algorithm runs much faster than the optimum solution of this problem. An adaptation of the algorithm is proposed to design restorable networks which can handle a specified set of failures. The proposed algorithm is based on taking all failures into consideration simultaneously, and performs better than developing independent designs for each failure.

R. M. Krishnaswamy and K. N. Sivarajan, "Design of Logical Topologies: A Linear Formulation for Wavelength Routed Optical Networks with No Wavelength Changers," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 919, March/April 1998.

Abstract: We consider the problem of constructing logical topologies over a wavelength routed optical network with no wavelength changers. In [Mukherjee, et al, 96] the wavelength continuity constraints were formulated as a set of non-linear constraints and the objective considered was either delay minimization or minimizing the maximum offered load. Simulated annealing has been used to solve the non-linear optimization problem of [Mukherjee, et al, 96]. In [Banerjee and Mukherjee 97] the wavelength continuity constraint was relaxed, the nodes were equipped with wavelength changers, and the objective was to minimize the average hop length. This resulted in the hop lengths being small and hence reduced the number of wavelength changers being used, but the resulting logical topology did not reflect the traffic intensities between the nodes. In this paper we present an exact linear formulation (mixed integer linear programming) for designing a logical topology with no wavelength changers. Since the objective of our linear formulation is minimizing the maximum offered load on any link, which is called congestion, the resulting logical topology reflects the traffic intensities between the nodes. Our linear formulation yields a logical topology and a routing and wavelength assignment for the logical links. In [Ramaswami and Sivarajan 96] the problem of logical topology design is considered but the number of wavelengths the fiber can support is not a constraint. In this paper we take into consideration the number of wavelengths the fiber supports and the hop length of a logical link, and show the trade-offs involved in minimizing congestion. Since the whole problem is linearizable, the solution obtained by relaxation of the integer constraints yields a lower bound on the congestion. This can be used to compare the efficiency of heuristic algorithms.
We solve the problem exactly for a small network, and for large networks we develop heuristic algorithms to obtain a feasible solution from the solution obtained by relaxing the integer constraints. Following [Bienstock and Günlük 95] we introduce a cutting plane which enables us to reduce the previously obtained upper bounds on congestion. Numerical results for a six node wide area network and the National Science Foundation Network (NSFNET) are presented for various cases.

X. Li, S. Paul and M. Ammar, "Layered Video Multicast with Retransmissions (LVMR): Evaluation of Hierarchical Rate Control," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1062, March/April 1998.

Abstract: Layered Video Multicast with Retransmissions (LVMR) is a system for distributing video using layered coding over the Internet. The two key contributions of the system are: (1) improving the quality of reception within each layer by retransmitting lost packets given an upper bound on recovery time and applying an adaptive playback point scheme to help achieve more successful retransmission, and (2) adapting to network congestion and heterogeneity using a hierarchical rate control mechanism. This paper concentrates on the rate control aspects of LVMR. In contrast to the existing sender-based and receiver-based rate control in which the entire information about network congestion is either available at the sender (in the sender-based approach) or replicated at the receivers (in the receiver-based approach), the hierarchical rate control mechanism distributes the information between the sender, receivers, and some agents in the network in such a way that each entity maintains only the information relevant to itself. In addition, the hierarchical approach enables intelligent decisions to be made in terms of conducting concurrent experiments and choosing one of several possible experiments at any instant of time based on minimal state information at the agents in the network. Protocol details are presented in the paper together with experimental and simulation results to back up our claims.

B. Vickers, C. Albuquerque and T. Suda, "Adaptive Multicast of Multi-Layered Video: Rate-Based and Credit-Based Approaches," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1073, March/April 1998.

Abstract: Network architectures that can efficiently transport high quality, multicast video are rapidly becoming a basic requirement of emerging multimedia applications. The main problem complicating multicast video transport is variation in network bandwidth constraints. An attractive solution to this problem is to use an adaptive, multi-layered video encoding mechanism. In this paper, we consider two such mechanisms for the support of video multicast; one is a rate-based mechanism that relies on explicit rate congestion feedback from the network, and the other is a credit-based mechanism that relies on hop-by-hop congestion feedback. The responsiveness, bandwidth utilization, scalability and fairness of the two mechanisms are evaluated through simulations. Results suggest that while the two mechanisms exhibit performance trade-offs, both are capable of providing a high quality video service in the presence of varying bandwidth constraints.

M. Baldi and Y. Ofek, "End-to-End Delay of Videoconferencing over Packet Switched Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1084, March/April 1998.

Abstract: Videoconferencing is an important global application - it enables people around the globe to interact when they are far from one another. In order for the participants in a videoconference call to interact naturally, the end-to-end delay should be below human perception - about 100 ms. Since the global propagation delay can be about 100 ms, the actual end-to-end delay budget available to the system designer (excluding propagation delay) can be no more than 10 ms. We identify the components of the end-to-end delay in various configurations with the objective of understanding how it can be kept below the desired 10 ms bound. We analyze these components going step-by-step through six system configurations obtained by combining three generic network architectures with two video encoding schemes. We study the transmission of raw video and variable bit rate (VBR) MPEG video encoding over (i) circuit switching, (ii) synchronous packet switching, and (iii) asynchronous packet switching. In addition, we show that constant bit rate (CBR) MPEG encoding delivers unacceptable delay, which is on the order of the group of pictures (GOP) time interval. This study shows that having a global common time reference together with time-driven priority (TDP) and VBR MPEG video encoding provides adequate end-to-end delay, which is (i) below 10 ms, (ii) independent of the instantaneous network load, and (iii) independent of the connection rate. The resulting end-to-end delay (excluding propagation delay) can be smaller than the video frame period, which is better than what can be obtained with circuit switching.
Keywords: Time-driven Priority; MPEG; Videoconference; Quality of Service; End-to-end Delay; Performance Guarantees

N. G. Duffield, K. K. Ramakrishnan and A. R. Reibman, "SAVE: An Algorithm for Smoothed Adaptive Video over Explicit Rate Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1093, March/April 1998.

Abstract: Supporting compressed video efficiently on networks is a challenge because of its burstiness. Although a large number of applications using compressed video are rate adaptive, it is also important to preserve quality as much as possible. We propose a smoothing and rate adaptation algorithm, called SAVE, that the compressed video source uses in conjunction with explicit rate based control in the network. SAVE smooths the demand from the source to the network, thus helping achieve good multiplexing gains. SAVE maintains the quality of the video and ensures that the delay at the source buffer does not exceed a bound. We examine the effectiveness of SAVE across 28 different traces (entertainment and teleconferencing videos) using different compression algorithms.
Keywords: Compressed Video; Rate Control; Smoothing; Multiplexing

T. S. E. Ng, I. Stoica and H. Zhang, "Packet Fair Queueing Algorithms for Wireless Networks with Location-Dependent Errors," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1103, March/April 1998.

Abstract: While Packet Fair Queuing (PFQ) algorithms provide both bounded delay and fairness in wired networks, they cannot be applied directly to wireless networks. The key difficulty is that in wireless networks sessions can experience location-dependent channel errors. This may lead to situations in which a session receives significantly less service than it is supposed to, while another receives more. This results in large discrepancies between the sessions' virtual times, making it difficult to provide both delay guarantees and fairness simultaneously. Our contribution is twofold. First, we identify a set of properties, called Channel-condition Independent Fair (CIF), that a Packet Fair Queuing algorithm should have in a wireless environment: (1) delay and throughput guarantees for error-free sessions, (2) long-term fairness for error sessions, (3) short-term fairness for error-free sessions, and (4) graceful degradation for sessions that have received excess service. Second, we present a methodology for adapting PFQ algorithms for wireless networks, and we apply this methodology to derive a novel algorithm based on Start-time Fair Queuing, called Channel-condition Independent packet Fair Queuing (CIF-Q), that achieves all the above properties. To evaluate the algorithm we provide both theoretical analysis and simulation results.

F. Chiussi and A. Francini, "Minimum Delay Self-Clocked Fair Queuing for Packet-Switched Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1112, March/April 1998.

Abstract: Packet-fair-queuing scheduling algorithms, which aim at approximating the Generalized Processor Sharing (GPS) discipline, use a system-potential function to compute a timestamp for each packet, and serve packets in increasing order of their timestamps; each algorithm is characterized by the specific system-potential function used, which defines its delay and fairness properties. The total complexity of such GPS-related schedulers is the combination of two terms: (i) the complexity involved in sorting the timestamps, which is common to all algorithms in the class, and (ii) the complexity required to compute the system-potential function, which is specific to each algorithm. So far, the quest for a scheduling algorithm that achieves optimal delay and fairness properties and uses a system-potential function of minimum complexity has been only partially successful. The packet-by-packet version of GPS (P-GPS) provides minimum delay bounds and good fairness properties, but is not practical since its system-potential function has worst-case complexity of O(V), where V is the number of connections; Self-Clocked Fair Queuing (SCFQ) uses a system-potential function of O(1) complexity and has good fairness properties, but its delay bounds are well above those of P-GPS; Virtual Clock also uses a system-potential function of minimal complexity, matches the delay bounds of P-GPS, but its unfairness is unbounded; Starting-Potential Fair Queuing (SPFQ) achieves optimal delay bounds and has fairness properties similar to P-GPS, but its system-potential function has O(logV) complexity; the recent Leap Forward Virtual Clock (LFVC) achieves excellent delay and fairness properties using a simple system-potential function inspired by Virtual Clock, but requires an additional quarantine mechanism of O(VloglogV) worst-case complexity.
The Minimum-Delay Self-Clocked Fair Queuing (MD-SCFQ) algorithm presented in this paper is the first scheme which achieves the same delay bounds as P-GPS, has fairness properties similar to P-GPS, and uses a system-potential function of O(1) complexity. We show that the fluid version of MD-SCFQ belongs to the class of Rate-Proportional Servers (RPS); thus, its packet-by-packet version inherits all the well-known single-node and multiple-node delay properties of packet-by-packet RPS schedulers.
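For context, the SCFQ rule that MD-SCFQ builds on is simple to state: an arriving packet of length L on a flow with allocated rate r receives timestamp max(v, F_prev) + L/r, where v is the timestamp of the packet currently in service (the O(1) system potential) and F_prev is the flow's previous timestamp; packets are served in increasing timestamp order. A minimal sketch (our own variable names, not from the paper):

```python
import heapq

class SCFQ:
    """Self-Clocked Fair Queuing: the system potential is simply the
    timestamp of the packet in service, so computing it costs O(1);
    packets are served in increasing timestamp order."""

    def __init__(self):
        self.v = 0.0            # system potential
        self.last_finish = {}   # per-flow timestamp of last queued packet
        self.queue = []         # heap of (timestamp, seq, flow, length)
        self.seq = 0            # tie-breaker for equal timestamps

    def enqueue(self, flow, length, rate):
        start = max(self.v, self.last_finish.get(flow, 0.0))
        finish = start + length / rate  # timestamp = virtual finish time
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        if not self.queue:
            return None
        finish, _, flow, _ = heapq.heappop(self.queue)
        self.v = finish  # potential jumps to the served packet's timestamp
        return flow
```

Enqueuing three equal-length packets each for a rate-2 flow 'a' and a rate-1 flow 'b' yields the service order a, a, b, a, b, b, i.e. bandwidth shared 2:1 as the weights dictate.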

N. G. Duffield, T. V. Lakshman and D. Stiliadis, "On Adaptive Bandwidth Sharing with Rate Guarantees," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1122, March/April 1998.

Abstract: The objective of recent research in fair queuing schemes has been to efficiently emulate a fluid-flow generalized (weighted) processor sharing (GPS) system, as closely as possible. A primary motivation for the use of fair queuing has been its use as a means of providing bandwidth guarantees and, as a consequence, end-to-end delay bounds for traffic with bounded burstiness. The rate guarantees translate to scheduling weights which are set when admission control is done. A consequence of fair queuing systems closely emulating GPS is that when one or more connections are not backlogged, any excess bandwidth is distributed to backlogged connections in proportion to their weights. However, weights are set based on the long-term requirements of traffic flows and not in any state-dependent manner that reflects instantaneous needs. In this paper we question the notion that queuing systems should closely emulate a GPS system. Instead of emulating GPS, we propose three modified scheduling schemes which preserve the rate guarantees of fair queuing (and hence preserve deterministic delay bounds) but adaptively redistribute the excess bandwidth such that either losses are reduced or delays equalized. We compare the performance of the proposed schemes to that of fair queuing using different traffic sources such as voice and video, as well as sources which have aggregate long-range dependent behavior. We find that the proposed schemes, in comparison to packet GPS (PGPS), reduce packet losses and curtail the tails of delay distributions for real-time traffic, and hence permit the use of significantly smaller playout buffers for the same network load.
Keywords: fair queuing; scheduling; jitter; delay bounds; real-time traffic; traffic management

W. Zhao, T. Seth, M. Kim and M. Willebeek-LeMair, "Optimal Bandwidth/Delay Tradeoff for Feasible-Region-Based Scalable Multimedia Scheduling," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1131, March/April 1998.

Abstract: Feasible region is a simple and optimal framework for scheduling the transmission of data with deadlines. In this paper, we establish the fundamental relationship between the bandwidth requirement and the initial delay in feasible-region-based transmission scheduling. The relationship represents the optimal bandwidth/delay tradeoff. In the process, we identify the essential bandwidth, the exact bandwidth lower bound regardless of initial delay. Efficient algorithms are given to calculate the essential bandwidth as well as the optimal bandwidth/delay tradeoff. The results extend previous results on feasible-region-based scheduling, most of which were derived in the context of video traffic smoothing. When applied to the problem of scheduling scalable multimedia, we show that the feasible-region framework enables the integrated scheduling of presentation and transmission and is a new and promising approach to dealing with scalable multimedia. We establish the presentation feasibility condition and present some initial results on scheduling spatial scalability.

B. A. J. Banh, G. J. Anido and E. Dutkiewicz, "Handover Re-routing Schemes for Connection Oriented Services in Mobile ATM Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1139, March/April 1998.

Abstract: Wireless ATM (WATM) will be used to support broadband applications for future generation mobile services. To achieve this, the existing ATM protocol must be augmented with mobility management capabilities. One of the important components of mobility management is the ability of the network to seamlessly re-route existing connections to different parts of the network. To date, a number of schemes have been proposed for broadband wireless networks. These schemes can potentially be used in WATM networks. In this paper, we compare the performance of these schemes in terms of their complexity, handover latency, communication disruption period and buffer and bandwidth requirements.

R. Ramjee, T. La Porta, J. Kurose and D. Towsley, "User Agent Migration Policies in Multimedia Wireless Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1147, March/April 1998.

Abstract: Multimedia wireless networks often employ network based user agents as proxies for mobile users. In this paper, we consider a fundamental question in the design of these networks: should the user agents migrate and if so, what are good user agent migration policies? We first introduce a general framework for user agent migration policies. We then highlight, through analysis and simulation, the numerous parameters and tradeoffs that dictate the design of migration policies. Finally, we identify two simple threshold-based policies that deliver very good performance over a wide range of system parameters and configurations.
Keywords: Wireless Networks

C. S. Wu and G. K. Ma, "Data Transfer Scheme for Wireless Real-Time Communications," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1156, March/April 1998.

Abstract: We propose a link protocol called Burst-oriented Transfer with Time-bounded Retransmission (BTTR) for wireless real-time communications. This scheme uses a large transmission window for sending/receiving a burst of time-sensitive data, and within this window, another smaller observation window is repeatedly used for dynamic error retransmissions. Each retransmission is time-limited so that the burst of data can be received in a timely manner, trading packet loss performance for delay and throughput performance. An analysis is given in terms of the expectations of delay, throughput, and packet drop rate. The result shows that the proposed scheme can meet the delay and throughput requirements at reasonable packet loss performance.

M. A. Arad and A. Leon-Garcia, "A Generalized Processor Sharing Approach to Time Scheduling in Hybrid CDMA/TDMA," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1164, March/April 1998.

Abstract: Future wireless ATM networks are expected to support a variety of services with different bit rates and Quality-of-Service (QoS) requirements. A major challenge for these networks is the design of the multiple access scheme. In our previous work, we proposed a hybrid CDMA/TDMA technique as an appropriate multiple access control (MAC) scheme for wireless ATM networks. We also investigated the problem of power control in such hybrid systems. In this paper, we continue our study and address the time scheduling issue of hybrid CDMA/TDMA. Using the GPS technique, typically employed in TDMA systems, we develop a time scheduling scheme suitable for the class of hybrid CDMA/TDMA systems. We also study the implementation of this scheme in a specific hybrid system we proposed earlier and address its integration with our power control technique.

S. Bhattacharyya, J. F. Kurose, D. Towsley and R. Nagarajan, "Efficient Rate-Controlled Bulk Data Transfer Using Multiple Multicast Groups," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1172, March/April 1998.

Abstract: Controlling the rate of bulk data multicast to a large number of receivers is difficult due to the heterogeneity among the end-systems' capabilities and their available network bandwidth. If the data transfer rate is too high, some receivers will lose data, and retransmission will be required. If the data transfer rate is too low, an inordinate amount of time will be required to transfer the data. In this paper, we examine an approach towards rate-controlled multicast of bulk data in which the sender uses multiple multicast groups to transmit data at different rates to different sub-groups of receivers. We present simple algorithms for determining the transmission rate associated with each multicast channel, based on static resource constraints, e.g., network bandwidth bottlenecks. Transmission rates are chosen so as to minimize the average time needed to transfer data to all receivers. Analysis and simulation are used to show that our policies for rate selection perform well for large and diverse receiver groups and make efficient use of network bandwidth. Moreover, we find that only a small number of multicast groups are needed to reap most of the possible performance benefits.
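The rate-selection idea above can be sketched with a small brute-force search (an illustrative reconstruction, not the paper's algorithms; the function names, the use of receiver capacities as candidate channel rates, and the example numbers are assumptions):

```python
from itertools import combinations

def avg_transfer_time(rates, receiver_caps, data_size):
    # Each receiver joins the fastest multicast channel not exceeding its capacity.
    total = 0.0
    for cap in receiver_caps:
        usable = [r for r in rates if r <= cap]
        total += data_size / max(usable)
    return total / len(receiver_caps)

def best_rates(receiver_caps, k, data_size=1.0):
    # Try all k-subsets of the distinct receiver capacities as channel rates,
    # always including the minimum capacity so every receiver can join a channel,
    # and keep the subset minimizing the average transfer time.
    caps = sorted(set(receiver_caps))
    lo = caps[0]
    best = None
    for combo in combinations(caps, min(k, len(caps))):
        if lo not in combo:
            continue
        t = avg_transfer_time(combo, receiver_caps, data_size)
        if best is None or t < best[0]:
            best = (t, combo)
    return best

caps = [1, 1, 2, 4, 4, 8]
print(best_rates(caps, 2))  # two channels
print(best_rates(caps, 3))  # a third channel shaves the average further
```

With this toy group, a handful of channels already captures most of the benefit, mirroring the abstract's finding that only a small number of multicast groups is needed.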

M. P. Barcellos and P. D. Ezhilchelvan, "An End-to-End Reliable Multicast Protocol Using Polling for Scaleability," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1180, March/April 1998.

Abstract: Reliable sender-based one-to-many protocols do not scale well, due mainly to implosion caused by an excessive rate of feedback packets arriving from receivers. We show that this problem can be circumvented by making the sender poll the receivers at carefully planned timing instants, so that the arrival rate of feedback packets is not large enough to cause implosion. We describe a generic end-to-end protocol which incorporates this polling scheme together with error and flow control mechanisms. We analyze the behavior of our protocol using simulations, which indicate that our scheme can be effective in minimizing losses due to implosion, achieving high throughput with low network cost.
Keywords: Feedback Implosion; Reliable Multicast; Transport Protocols

C. Papadopoulos, G. Parulkar and G. Varghese, "An Error Control Scheme for Large-Scale Multicast Applications," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1188, March/April 1998.

Abstract: Retransmission-based error control for large scale multicast applications is difficult because of implosion and exposure. Existing schemes (SRM, RMTP, TMTP, LBRRM) have good solutions to implosion, but only approximate solutions to exposure. We present a scheme that achieves finer grain fault recovery by exploiting new forwarding services that allow us to create a dynamic hierarchy of receivers. We extend the IP Multicast service model so that routers provide a more refined form of multicasting (which may be useful to other applications) that enables local recovery. The new services are simple to implement and do not require routers to examine or store application packets; hence, they do not violate layering. Besides providing better implosion control and less exposure than other schemes, our scheme integrates well with the current IP model, has small recovery latencies (it requires no back-off delays), and completely isolates group members from topology. Our scheme can be used with a variety of multicast routing protocols, including DVMRP and PIM. We have implemented our scheme in NetBSD Unix, using about 250 lines of new C code. The implementation requires two new IP options, 4 additional bytes in each routing entry, and a slight modification to IGMP reports. The forwarding overhead incurred by the new services is actually lower than forwarding normal multicast traffic.
Keywords: reliable multicast; error control

J. Liebeherr and B. S. Sethi, "A Scalable Control Topology for Multicast Communications," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1197, March/April 1998.

Abstract: Large-scale multicast applications for the Internet require the availability of multicast protocols that enhance the basic connectionless IP Multicast service. A critical requirement of such protocols is their ability to support a large group of simultaneous users. In this paper, we present a new approach for distributing control information within a multicast group. The goal of our approach is to scale to very large group sizes (in excess of 100,000 users). Multicast group members are organized as a logical n-dimensional hypercube, and all control information is transmitted along the edges of the hypercube. We analyze the scalability of the hypercube control topology and show that the hypercube balances the load per member for processing control information better than existing topologies.
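The hypercube bookkeeping is simple to illustrate (a minimal sketch, not the paper's protocol; `hypercube_neighbors` and `control_degree` are hypothetical helper names):

```python
import math

def hypercube_neighbors(label, n):
    # Neighbors in an n-cube are the labels differing from `label` in exactly one bit;
    # control information travels only along these edges.
    return [label ^ (1 << k) for k in range(n)]

def control_degree(group_size):
    # Dimension needed to embed the group: each member exchanges control
    # traffic with at most ceil(log2(group_size)) neighbors, so the per-member
    # load grows only logarithmically in the group size.
    return math.ceil(math.log2(group_size))

print(hypercube_neighbors(0b101, 3))  # [4, 7, 1]
print(control_degree(100_000))        # 17 neighbors for a 100,000-member group
```

The logarithmic degree is what lets the topology balance control load far better than a star or a single shared tree.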

A. Feldmann, J. Rexford and R. Caceres, "Reducing Overhead in Flow-Switched Networks: An Empirical Study of Web Traffic," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1205, March/April 1998.

Abstract: To efficiently transfer large amounts of diverse traffic over high-speed links, modern integrated networks require more efficient packet-switching techniques that can capitalize on recent advances in switch hardware. Several promising approaches attempt to improve performance by creating dedicated shortcut connections for long-lived traffic flows, at the expense of the network overhead for establishing and maintaining these shortcuts. The network can balance these cost-performance tradeoffs through three tunable parameters: the granularity of flow end-point addresses, the timeout for grouping related packets into flows, and the trigger for migrating a long-lived flow to a shortcut connection. Drawing on a continuous one-week trace of Internet traffic, we evaluate the processor and switch overheads for transferring HTTP server traffic through a flow-switched network. In contrast to previous work, we focus on the full probability distributions of flow sizes and cost-performance metrics to highlight the subtle influence of the HTTP protocol and user behavior on the performance of flow switching. We find that moderate levels of aggregation and triggering yield significant reductions in overhead with a negligible reduction in performance. The traffic characterization results further suggest schemes for limiting the shortcut setup rate and the number of simultaneous shortcuts by temporarily delaying the creation of shortcuts during peak load, and by aggregating related packets that share a portion of their routes through the network.
Keywords: Traffic characterization; IP flows; HTTP protocol; switching; signaling, routing

K. C. Almeroth, M. Ammar and Z. Fei, "Scalable Delivery of Web Pages Using Cyclic Best-Effort Multicast," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1214, March/April 1998.

Abstract: The World Wide Web (WWW) has gained tremendously in popularity over the last several years. In this work we explore the use of best-effort UDP multicast as a delivery option. Reliability is achieved through repetitive, cyclic transmission of a requested page. This solution is expected to be most efficient when used for highly requested pages. We view this cyclic multicast technique as a delivery option that can be integrated with the traditional reliable unicast and recently proposed reliable multicast options. We first describe the architecture of an integrated web server employing all three delivery options. We then describe the cyclic multicast technique and consider the various procedures needed for its successful operation. We characterize the gains in performance achieved by our proposal through an extensive performance analysis and simulation of our technique by itself, and when integrated with the other delivery options. We also describe our experience with an implementation of a prototype cyclic multicast server and its performance over the Multicast Backbone (MBone).
Keywords: Multicast; push; Mbone; web server

J. C. Hu, S. Mungee and D. C. Schmidt, "Techniques for Developing and Measuring High Performance Web Servers over High Speed ATM Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1222, March/April 1998.

Abstract: High-performance Web servers are essential to meet the growing demands of the Internet and large-scale intranets. Satisfying these demands requires a thorough understanding of key factors affecting Web server performance. This paper presents empirical analysis illustrating how dynamic and static adaptivity can enhance Web server performance. Two research contributions support this conclusion. First, the paper presents results from a comprehensive empirical study of Web servers (such as Apache, Netscape Enterprise, PHTTPD, Zeus, and JAWS) over high-speed ATM networks. This study illustrates their relative performance and precisely pinpoints the server design choices that cause performance bottlenecks. We found that once network and disk I/O overheads are reduced to negligible constant factors, the main determinants of Web server performance are its protocol processing path and concurrency strategy. Moreover, no single strategy performs optimally for all load conditions and traffic types. Second, we describe the design techniques and optimizations used to develop JAWS, our high-performance, adaptive Web server. JAWS is an object-oriented Web server that was explicitly designed to alleviate the performance bottlenecks we identified in existing Web servers. It consistently outperforms all other Web servers over ATM networks. The performance optimizations used in JAWS include adaptive pre-spawned threading, fixed headers, cached date processing, and file caching. In addition, JAWS uses a novel software architecture that substantially improves its portability and flexibility relative to other Web servers. Our empirical results illustrate that highly efficient communication software is not antithetical to highly flexible software.

M. Crovella and P. Barford, "The Network Effects of Prefetching," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1232, March/April 1998.

Abstract: Prefetching has been shown to be an effective technique for reducing user perceived latency in distributed systems. In this paper we show that even when prefetching adds no extra traffic to the network, it can have serious negative performance effects. Straightforward approaches to prefetching increase the burstiness of individual sources, leading to increased average queue sizes in network switches. However, we also show that applications can avoid the undesirable queuing effects of prefetching. In fact, we show that applications employing prefetching can significantly improve network performance, to a level much better than that obtained without any prefetching at all. This is because prefetching offers increased opportunities for traffic shaping that are not available in the absence of prefetching. Using a simple transport rate control mechanism, a prefetching application can modify its behavior from a distinctly ON/OFF entity to one whose data transfer rate changes less abruptly, while still delivering all data in advance of the user's actual requests.

P. Gupta, S. Lin and N. McKeown, "Routing Lookups in Hardware at Memory Access Speeds," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1241, March/April 1998.

Abstract: Increased bandwidth in the Internet puts great demands on network routers; for example, to route minimum sized Gigabit Ethernet packets, an IP router must process about packets per second per port. Using the rule-of-thumb that it takes roughly 1000 packets per second for every 10^6 bits per second of line rate, an OC-192 line requires 10 x 10^6 routing lookups per second; well above current router capabilities. One limitation of router performance is the route lookup mechanism. IP routing requires that a router perform a longest-prefix-match address lookup for each incoming datagram in order to determine the datagram's next hop. In this paper, we present a route lookup mechanism that when implemented in a pipelined fashion in hardware, can achieve one route lookup every memory access. With current 50ns DRAM, this corresponds to approximately packets per second; much faster than current commercially available routing lookup schemes. We also present novel schemes for performing quick updates to the forwarding table in hardware. We demonstrate using real routing update patterns that the routing tables can be updated with negligible overhead to the central processor.
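The abstract's rule of thumb is easy to check as arithmetic (a worked example using only the figures quoted above; `lookups_per_second` is an illustrative name):

```python
def lookups_per_second(line_rate_bps):
    # Rule of thumb from the abstract: roughly 1000 packets per second
    # for every 10^6 bits per second of line rate.
    return 1000 * line_rate_bps / 1e6

oc192 = 10e9  # OC-192 is roughly 10 Gbit/s
print(lookups_per_second(oc192))  # 10 x 10^6 lookups per second, as the abstract states
```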

B. Lampson, V. Srinivasan and G. Varghese, "IP Lookups Using Multiway and Multicolumn Search," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1248, March/April 1998.

Abstract: IP address lookup is becoming critical because of increasing routing table size, speed, and traffic in the Internet. Our paper shows how binary search can be adapted for best matching prefix using two entries per prefix and by doing precomputation. Next we show how to improve the performance of any best matching prefix scheme using an initial array indexed by the first X bits of the address. We then describe how to take advantage of cache line size to do a multiway search with 6-way branching. Finally, we show how to extend the binary search solution and the multiway search solution for IPv6. For a database of N prefixes with address length W, a naive binary search scheme would take O(W * log N); we show how to reduce this to O(W + log N) using multiple-column binary search. Measurements using a practical (Mae-East) database of 30000 entries yield a worst case lookup time of 490 nanoseconds, five times faster than the Patricia scheme used in BSD UNIX. Our scheme is attractive for IPv6 because of its small storage requirement (2N nodes) and speed (estimated worst case of 7 cache line reads).
Keywords: Longest Prefix Match; IP Lookup
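For contrast with the paper's O(W + log N) scheme, the longest-prefix-match semantics it accelerates can be sketched naively (an O(W) baseline; the hash-table layout and the helper `p` are assumptions for illustration, not the paper's data structure):

```python
def longest_prefix_match(table, addr, width=32):
    # table maps (prefix_value, prefix_len) -> next_hop; addr is an integer.
    # Naive scan from the longest to the shortest prefix length.
    for plen in range(width, -1, -1):
        key = (addr >> (width - plen), plen)
        if key in table:
            return table[key]
    return None

def p(s, plen):
    # Helper: dotted-quad prefix -> (value, length) key. Illustrative only.
    a, b, c, d = (int(x) for x in s.split('.'))
    v = (a << 24) | (b << 16) | (c << 8) | d
    return (v >> (32 - plen), plen) if plen else (0, 0)

table = {p('10.0.0.0', 8): 'A', p('10.1.0.0', 16): 'B', p('0.0.0.0', 0): 'default'}
addr = (10 << 24) | (1 << 16) | 2  # 10.1.0.2
print(longest_prefix_match(table, addr))  # 'B' -- the /16 beats the /8
```

The scan inspects up to W prefix lengths per packet; the paper's contribution is reorganizing the search so the W and log N costs add rather than multiply.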

Y. Birk and T. Kol, "Informed-Source Coding-on-Demand (ISCOD) over Broadcast Channels," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1257, March/April 1998.

Abstract: We present the Informed-Source Coding-On-Demand (ISCOD) approach for efficiently supplying non-identical data from a central server to multiple caching clients through a broadcast channel. The key idea underlying ISCOD is the joint exploitation of the data already cached by each client, the server's full awareness of client-cache contents and client requests, and the fact that each client only needs to be able to derive the items requested by it rather than all the items ever transmitted or even the union of the items requested by the different clients. We present a set of two-phase ISCOD algorithms. The server uses these algorithms to assemble ad-hoc error-correction sets based on its knowledge of every client's cache content and of the items requested; it uses error-correction codes to construct the data that is actually transmitted. Each client uses its cached data and the received supplemental data to derive the items that it has requested. This technique achieves a reduction of up to tens of percent in the amount of data that must be transmitted in order for every client to be able to derive the data requested by it. Finally, we define k-partial cliques in a directed graph, and cast the two-phase approach in terms of partial-clique covers. As a byproduct of the work, bounds and a close approximation for the expected cardinality of the maximum matching in a random graph have been derived and are outlined.
Keywords: ISCOD; caching clients; information-dissemination; multicast; k-partial clique; maximum matching; error correcting codes
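The coding-on-demand idea can be seen in a two-client toy case (an illustrative sketch, not the paper's k-partial-clique algorithms): when each client already caches the item the other wants, a single XOR broadcast serves both requests.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Items held by the server; equal lengths so the XOR lines up.
item1, item2 = b'AAAA', b'BBBB'

# Client 1 caches item2 but wants item1; client 2 caches item1 but wants item2.
broadcast = xor_bytes(item1, item2)  # one transmission instead of two

got1 = xor_bytes(broadcast, item2)   # client 1 recovers item1 from its cache
got2 = xor_bytes(broadcast, item1)   # client 2 recovers item2
print(got1, got2)  # b'AAAA' b'BBBB'
```

Neither client could decode the broadcast without its cached item, which is exactly the "informed source" leverage the abstract describes.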

O. Sharon and M. Spratt, "A CSMA/CD Compatible MAC for Real-Time Transmissions Based on Varying Collision Intervals," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1265, March/April 1998.

Abstract: This paper suggests a CSMA/CD compatible MAC protocol for real-time transmissions in a home or small-office Local Area Network. It has the key advantage of allowing devices implementing the existing Ethernet protocol, such as PCs, printers and ISDN routers, to operate with devices implementing the new MAC. The new MAC enables real-time traffic such as digital video and audio to be transmitted with low delay and jitter.

K. Kumaran and S. Khanna, "On Wireless Spectrum Estimation and Generalized Graph Coloring," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1273, March/April 1998.

Abstract: We address the problem of estimating the spectrum required in a wireless network for a given demand and interference pattern. This problem can be abstracted as a generalization of the graph coloring problem, which typically presents an additional degree of hardness compared to the standard coloring problem. It is worthwhile to note that the question of estimating the spectrum requirement differs markedly from that of allocating channels. The main focus of this work is to obtain strong upper and lower bounds on the spectrum requirement, as opposed to the study of spectrum allocation/management. While the relation to graph coloring establishes the intractability of the spectrum estimation problem for arbitrary network topologies, useful bounds and algorithms are obtainable for specific topologies. In the first part of this work, we establish some new results regarding generalized coloring, which we use to derive tight bounds for specific families of graphs. The latter part is devoted to the hexagonal grid topology, a commonly used topology for wireless networks. We design efficient algorithms that exploit the geometric structure of the hexagonal grid topology to determine upper bounds on the spectrum requirement for arbitrary demand patterns. The slack in our upper bounds is estimated by analyzing subgraphs with specific properties. While we consider worst-case demand patterns to evaluate the performance of our algorithms, we expect them to perform much better in practice.

A. Sen, T. Roxborough and S. Medidi, "Upper and Lower Bounds of a Class of Channel Assignment Problems in Cellular Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1284, March/April 1998.

Abstract: A cellular network is often modeled as a graph and the channel assignment problem is formulated as a coloring problem of the graph. In this paper we introduce the notion of cellular graphs that models the hexagonal cell structures of a cellular network. Exploiting the regular structure of the cellular graphs we compute the upper and the lower bounds for a class of channel assignment problems. Assuming a k-band buffering system where the interference does not extend beyond k cells away from the call originating cell, we provide two different formulations of the channel assignment problem - Distance-k Chromatic Number problem and k-Band Chromatic Bandwidth problem. We give one algorithm for the first problem and two for the second, with all three algorithms assigning channels to the cells. The complexity of the algorithm for the first problem is O(p), where p is the number of cells. For the second problem, the complexity of the first algorithm is O(p) and the complexity of the second algorithm is O(k^5 log k). All the algorithms are asymptotically optimal, in the sense that the order of the upper bound of the number of channels required is the same as the order of the lower bound.
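A greedy distance-k coloring gives a simple upper bound of the kind the paper sharpens (a hedged sketch; the BFS helper and the path-graph example are illustrative assumptions, not the paper's asymptotically optimal algorithms):

```python
from collections import deque

def within_k(adj, src, k):
    # BFS: all vertices at hop distance <= k from src (excluding src itself).
    seen, frontier, out = {src}, deque([(src, 0)]), set()
    while frontier:
        v, d = frontier.popleft()
        if d == k:
            continue
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                out.add(w)
                frontier.append((w, d + 1))
    return out

def distance_k_greedy_coloring(adj, k):
    # Greedy upper bound on the distance-k chromatic number: give each
    # vertex the smallest channel unused within its k-hop neighborhood.
    color = {}
    for v in adj:
        used = {color[w] for w in within_k(adj, v, k) if w in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A path of 5 cells: with k = 2, any 3 consecutive cells need distinct channels.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(distance_k_greedy_coloring(path, 2))  # {0: 0, 1: 1, 2: 2, 3: 0, 4: 1}
```

On a path the greedy bound of 3 channels matches the obvious lower bound; on hexagonal grids the paper's structure-aware algorithms close the gap more carefully.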

Y. Y. Kim and S. Li, "Modeling Fast Fading Channel Dynamics for Packet Data Performance Analysis," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1292, March/April 1998.

Abstract: Fast fading channel modeling traditionally focuses on physical-level dynamics such as signal strength and bit error rate. In this paper we characterize fast fading channel dynamics at the packet level and analyze the corresponding data queuing performance in various environments. The integration of wireless channel modeling and data queuing analysis provides us with a unique way to capture important channel statistics with respect to various wireless network factors such as channel bandwidth, speed and channel coding. The second-order channel statistic, i.e., the channel power spectrum, is identified to play a dominant role in fast fading channel modeling. The data queuing performance is largely dependent on the interaction between the channel power spectrum and the data arrival power spectrum: whichever has more low-frequency power will have the dominant impact on queuing performance. The data arrival power spectrum provides a measure of the burstiness and correlation behavior of data packet arrivals. In queuing analysis, we use a Markov chain modeling technique to match the measured channel statistics.

Q. Zhang, T. F. Wong and J. S. Lehnert, "Stability of a Type-II Hybrid ARQ Protocol for DS-SSMA Packet Radio Systems," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1301, March/April 1998.

Abstract: In this paper, the stability of a slotted direct-sequence spread-spectrum multiple-access (DS-SSMA) packet radio network employing a type-II hybrid automatic-repeat-request (ARQ) protocol is considered. The equilibrium point analysis (EPA) technique is employed to approximately compute the system throughput and delay, and to analyze the system stability. It is found that the system exhibits bistable behavior in some situations. With this information, a desirable region of operation for the DS-SSMA network employing the type-II hybrid ARQ protocol is obtained.

H. Levy, T. Mendelson, M. Sidi and J. Keren Zvi, "Sizing Exit Buffers in ATM Networks under CBR Traffic," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1309, March/April 1998.

Abstract: This paper deals with the sizing of end buffers in ATM networks for sessions subject to Constant Bit Rate (CBR) traffic. Our objective is to predict the cell loss rate at the end buffer as a function of the system parameters. We introduce the D + G/D/1 queue as a generic model to represent exit buffers in telecommunications networks under constant rate traffic and use it to model the end buffer. This is a queue whose arrival rate is equal to its service rate and whose arrivals are generated at regular intervals and materialize after a generally distributed random amount of time. We reveal that under the infinite buffer assumption the system possesses rather intriguing properties: on the one hand it is unstable, so that the steady-state contents of the buffer may exceed any value; on the other hand, in practice, the likelihood of the buffer exceeding even small values is very small. Improper simulation of such systems may therefore lead to false results. We turn to analyze this system under the finite buffer assumption and derive bounds on the cell loss rates. The bounds are expressed in terms of simple formulae of the system parameters. We carry out the analysis for two major types of networks: 1) datagram networks, where the packets (cells) traverse the network via independent paths, and 2) virtual circuit networks, where all cells of a connection traverse the same path. Numerical examination of ATM-like examples shows that the bounds are very good for practical prediction of cell loss and the selection of buffer size.
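The D + G/D/1 behavior described above can be probed with a short simulation (an illustrative sketch under assumed exponential delays; the abstract warns that naive simulation of the infinite-buffer system can mislead, so this only tracks occupancy over a finite run):

```python
import random

def simulate_d_plus_g_d1(num_cells, delay, seed=0):
    # Cell n is generated at slot n and materializes at n + delay_n (D + G input);
    # the server drains one cell per slot, so service rate equals arrival rate.
    rng = random.Random(seed)
    arrivals = sorted(n + delay(rng) for n in range(num_cells))
    queue, i, max_queue = 0, 0, 0
    for slot in range(num_cells + 50):
        while i < len(arrivals) and arrivals[i] <= slot:
            queue += 1
            i += 1
        if queue > 0:
            queue -= 1  # one cell served per slot
        max_queue = max(max_queue, queue)
    return max_queue

# Exponential network delays with mean 2 slots (an assumption for illustration).
peak = simulate_d_plus_g_d1(100_000, lambda rng: rng.expovariate(0.5))
print('peak buffer occupancy over 100k cells:', peak)
```

Even over a long run the peak occupancy stays modest, consistent with the abstract's observation that exceeding even small buffer levels is unlikely in practice.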

G. C. Lin and T. Suda, "On the Impact of Long-Range-Dependent Traffic in Dimensioning ATM Network Buffer," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1317, March/April 1998.

Abstract: Recent measurements of packet networks have shown that packet traffic exhibits long-range dependence. Studies have also discovered that this property is not adequately captured by conventional Markov traffic models and often yields queuing behavior such as heavy-tailed or subexponential queue length distributions. These findings have led to the implication that long-range-dependent traffic has a significant impact on determining queuing performance and dimensioning buffer requirements in general. This paper examines the impact of long-range-dependent traffic on providing connectionless services over ATM networks using the AAL5 protocol. As connectionless data packets enter an ATM network, they are encapsulated into AAL5 frames. Within the ATM network, AAL5 frames are segmented into cells for transmission and reassembled at intermediate connectionless servers for frame switching. A queuing model characterizing the AAL5 frame reassembly operation is developed and analyzed to investigate the impact of long-range-dependent traffic on dimensioning the frame reassembly buffer. The buffer overflow probability is analytically derived. Tight upper and lower bounds on the AAL5 frame loss probability are also established. The analysis shows that long-range-dependent traffic and conventional Markovian traffic yield similar queuing behavior in the frame reassembly operation.

A. Privalov and K. Sohraby, "Per-Stream Jitter Analysis in CBR ATM Multiplexors," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1325, March/April 1998.

Abstract: Constant Bit Rate (CBR) traffic is expected to be a major source of traffic in high-speed networks. Such sources may have stringent delay and loss requirements and in many cases, they should be delivered exactly as they were generated. A simple delay priority scheme will bound the cell delay and jitter for CBR streams, so that in the network switches, CBR traffic will only compete with other CBR traffic in the network. In this paper, we consider a multiplexer in such an environment. We provide an exact analysis of the jitter process in the homogeneous case. In this case, we obtain the complete characterization of the jitter process, showing the inaccuracies of the existing results. Our results indicate that the jitter variance is bounded and never exceeds 2/3 of a slot. It is also shown that the per-stream successive cell inter-departure times are negatively correlated with a lag-1 correlation of -1/2. Higher-order correlation coefficients are shown to be zero. Simple asymptotic results on per-stream behavior are also provided when the number of CBR streams is considered large. In the heterogeneous case, we bound the jitter distribution and moments. Simple results are provided for the computation of the bound on the jitter variance for any mix of CBR streams in this case. It is shown that streams with a low rate (large period) experience little jitter variance. However, the jitter variance for the high-rate streams can be quite substantial.

S. Galmes and R. Puigjaner, "On the Capabilities of On-Off Models to Capture Arbitrary ATM Sources," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1333, March/April 1998.

Abstract: B-ISDN is conceived to support all types of existing and newly emerging applications. In the context of B-ISDN and the new age of information, the user terminal will generate data, voice, video and, in general, multimedia traffic. ATM is the selected mechanism for the efficient transfer of this information across the future worldwide network. The proper design of this network requires cost-effective solutions that satisfy the quality of service constraints of all the connections. To achieve this goal, an accurate characterization of the traffic generated by the sources is needed. This characterization should capture the statistical properties of the ATM traffic that are relevant to network performance. A review of the literature shows that, from a general point of view, this objective has been reached, but with two drawbacks: traffic models are application dependent and, moreover, for some applications, traffic models are not analytically tractable. The final objective of this paper is to construct a general analytical B-ISDN model. In particular, the paper focuses on on-off models with general distributions, for which a complete exact statistical characterization at the cell level was already obtained in a previous paper. On the basis of this characterization, an algorithm devoted to capturing the statistical behavior of any source is formulated and developed. In particular, the algorithm captures the average rate and burstiness of any source in an exact way, and the autocorrelation function in an optimal way. Finally, numerical results for different types of video sources are provided.
Keywords: B-ISDN; ATM; network performance; average rate; burstiness; autocorrelation; cell level; frame level; optimization problem; AAL

H. B. Chiou and Z. Tsai, "Performance of ATM Switches with Age Priority Packet Discarding under the ON-OFF Source Model," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 931, March/April 1998.

Abstract: In the current ATM AAL5 implementation, even a single cell loss event can lead to the corruption of one whole packet. Hence, it has been observed that the throughput of upper layer protocols may easily collapse on a congested ATM network. In this paper, we propose a buffer management method called the Age Priority Packet Discarding (APPD) scheme to be used along with two other schemes: the Early Packet Discarding (EPD) and the Partial Packet Discarding (PPD) schemes. After describing the operations and the pseudo code of the proposed APPD scheme and how it operates with the EPD/PPD schemes, the packet level QoS of APPD and its extended versions are derived analytically under a homogeneous ON-OFF source model. Numerical results obtained via the analytical approach suggest that the proposed APPD scheme can more effectively and fairly reduce packet loss probability than the other schemes.

J. M. Hah and M. C. Yuang, "A Delay and Loss Versatile Scheduling Discipline in ATM Switches," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 939, March/April 1998.

Abstract: In this paper, we propose a versatile scheduling discipline, called Precedence with Partial Push-out (PPP), in Asynchronous Transfer Mode (ATM) switches supporting two delay and two loss priorities. By employing a threshold L, PPP provides delay guarantee by allowing a newly arriving high-delay-priority cell to precede a maximum of L low-delay-priority cells. Through the use of another threshold R, the discipline offers loss guarantee by permitting a newly arriving high-loss-priority cell to push out the last low-loss-priority cell located beyond the Rth location in a full queue. By setting L and R properly, PPP versatilely performs as any one of the four widely accepted disciplines, namely the FCFS, head-of-line, push-out, or head-of-line with push-out disciplines. To determine L and R retaining demanded Quality of Service (QoS), we provide an in-depth queuing analysis for the Cell Delay (CD) and Cell Loss Ratio (CLR) of high-delay-priority, low-loss-priority cells. We further propose a simple, algebra-based analysis for the CD and CLR of low-delay-priority, high-loss-priority cells. On the basis of these analyses, L and R can be dynamically and effectively adjusted to provide adequate delay and loss guarantees for high-priority cells while incurring only minimal performance degradation for other classes of cells. Finally, the paper presents simulation results confirming the accuracy of the analyses.
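The two-threshold insertion rule summarized in this abstract can be illustrated with a short sketch. This is a reconstruction from the abstract's description only, not code from the paper; the list-based queue and the cell field names are assumptions:

```python
def enqueue_ppp(queue, cell, L, R, capacity):
    """Insert `cell` into `queue` (a Python list, head at index 0) under the
    PPP discipline as described in the abstract.  A cell is modeled as a
    dict with boolean fields `high_delay_prio` and `high_loss_prio` (an
    assumed representation).  Returns True if queued, False if dropped.
    """
    if len(queue) >= capacity:
        if not cell["high_loss_prio"]:
            return False  # full queue: low-loss-priority arrival is dropped
        # Loss guarantee: push out the last low-loss-priority cell located
        # beyond the R-th queue position, if one exists.
        for i in range(len(queue) - 1, R - 1, -1):
            if not queue[i]["high_loss_prio"]:
                del queue[i]
                break
        else:
            return False  # no cell eligible for push-out
    # Delay guarantee: a high-delay-priority arrival may precede at most L
    # low-delay-priority cells already waiting at the tail of the queue.
    pos = len(queue)
    if cell["high_delay_prio"]:
        skipped = 0
        while pos > 0 and skipped < L and not queue[pos - 1]["high_delay_prio"]:
            pos -= 1
            skipped += 1
    queue.insert(pos, cell)
    return True
```

As the abstract notes, corner settings recover familiar disciplines: with L = 0 and R = capacity the rule degenerates to FCFS with tail drop, while large L with R = capacity behaves like head-of-line priority.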

C. Fulton, S. Li and A. Lin, "Impact Analysis of Packet-Level Scheduling on an ATM Shared-Memory Switch," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 947, March/April 1998.

Abstract: This paper evaluates the additional buffer requirements of an ATM shared-memory switch using packet-level (store-and-forward) rather than cell-level (cut-through) service scheduling. Packet-level scheduling is important to tag and IP switching for multipoint-to-one services; it allows different connections to be merged into a single Virtual Connection to reduce routing complexity. Cisco Systems will incorporate packet-level scheduling in their LightStream 1010 ATM switch using a feature upgrade card; we thus specifically consider the LightStream 1010 architecture in our analysis.
Keywords: scheduling; store-and-forward; ATM switch; traffic models; performance evaluation; merged VC

Q. Bian, K. Shiomoto and J. Turner, "Dynamic Flow Switching, A New Communication Service for ATM Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 955, March/April 1998.

Abstract: This paper presents a new communication service for ATM networks that provides one-way, adjustable rate, on-demand communication channels. The proposed dynamic flow service is designed to operate within a multi-service cell switched network that supports both conventional switched virtual circuits and IP packet routing and is designed to complement those services. It is particularly well suited to applications that transmit substantial amounts of data (a few tens of kilobytes or more) in fairly short time periods (up to a few seconds). Much of the current world wide web traffic falls within this domain. Like IP packet networks, the new service permits the transmission of data without prior end-to-end connection establishment. It uses a variant of ATM resource management cells to carry dynamic flow setup information along with the data, including an IP destination address and burst transmission rate. Switches along the path dynamically select an outgoing link, using the burst rate to guide the routing decision. The dynamic flow setup protocol is based on soft-state control and is designed to facilitate the use of buffers that are shared across all ports of a switch, minimizing per port memory costs.

Jörg Nonnenmacher and Ernst W. Biersack, "Optimal Multicast Feedback," online in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 964-971, March/April 1998.

Abstract: We investigate the scalability of feedback in multicast communication and propose a new method of probabilistic feedback based on exponentially distributed timers. By analysis and simulation for up to 10^6 receivers, we show that feedback implosion is avoided while feedback latency is low. The mechanism is robust against the loss of feedback messages and robust against homogeneous and heterogeneous delays. We apply the feedback mechanism to reliable multicast and compare it to existing timer-based feedback schemes. Our mechanism achieves lower NAK latency for the same performance in terms of NAK suppression. It is scalable: the amount of state at every group member is independent of the number of receivers. No topological information of the network is used, and data delivery is the only support required from the network. It adapts to the number of receivers and therefore leads to a constant performance for implosion avoidance and feedback latency.
Keywords: Feedback; multicast; reliable multicast; performance evaluation; extreme value theory
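The timer-based suppression idea this abstract describes can be imitated in a few lines of simulation. This is an illustrative sketch, not the authors' code; the truncated-exponential timer density, the parameter values, and the single one-way-delay model are all assumptions:

```python
import math
import random

def feedback_responders(n_receivers, T=1.0, lam=10.0, one_way_delay=0.05, seed=0):
    """Simulate exponential-timer feedback suppression for a multicast group.

    Each receiver draws a timer from a truncated exponential density over
    [0, T] that is biased toward T, so only a few timers expire early.  The
    earliest sender's feedback suppresses every receiver it reaches before
    that receiver's own timer fires; with a one-way delay d, the responders
    are exactly those whose timers lie within d of the earliest timer.
    """
    rng = random.Random(seed)
    # Inverse-transform sampling: t = (T/lam) * ln(1 + (e^lam - 1) * u)
    timers = [(T / lam) * math.log1p((math.exp(lam) - 1.0) * rng.random())
              for _ in range(n_receivers)]
    earliest = min(timers)
    return sum(1 for t in timers if t < earliest + one_way_delay)

for n in (100, 10_000, 1_000_000):
    print(n, feedback_responders(n))
```

Running this shows the point of the mechanism: the number of feedback messages stays small (tens, not thousands) even as the group grows to 10^6 receivers, which is the implosion-avoidance behavior the paper analyzes.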

J. Nonnenmacher, M. Lacher, M. Jung, E. Biersack and G. Carle, "How Bad is Reliable Multicast without Local Recovery?," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 972, March/April 1998.

Abstract: We examine the impact of the loss recovery mechanisms on the performance of a reliable multicast protocol. Approaches to reliable multicast can be divided into two major classes: source-based recovery, and distributed recovery. For both classes we consider the state of the art: For source-based recovery, a type 2 hybrid ARQ scheme with parity retransmission. For distributed recovery, a scheme with local multicast retransmission and local feedback processing. We further show the benefits of combining the two approaches and consider a type 2 hybrid ARQ scheme with local retransmission. The schemes are compared for up to 10^6 receivers under different loss scenarios with respect to network bandwidth usage and completion time of a reliable transfer. We show that the protocol based on local retransmissions via type 2 hybrid ARQ performs best for bandwidth and latency. For networks where local retransmission is not possible, we show that a protocol based on type 2 hybrid ARQ comes close to the performance of a protocol with local retransmission.
Keywords: Reliable Multicast Protocol; Error Control; ARQ; FEC; Performance Evaluation

S. Chen, O. Gunluk and B. Yener, "Optimal Packing of Group Multi-Casting," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 980, March/April 1998.

Abstract: This paper presents algorithms, heuristics and lower bounds addressing optimization issues in many-to-many multicasting. Two main problems are addressed: (1) a precise combinatorial comparison of optimal multicast trees with optimal multicast rings, (2) an optimized sharing of network resources (i.e., nodes and links) among multiple multicast groups that coexist. The former is central to the choice of multicast protocols and their performance, while the latter is crucial for network utilization. The first problem is treated as a comparison of Steiner Tree and Traveling Salesman problems on the same input set. The underlying integer programming problems are solved to optimum by using cutting-plane inequalities and the branch-and-cut algorithm. In addition to these exact solutions, fast heuristics are presented for approximate solutions. The second problem is formulated as a packing problem in which the network tries to accommodate all the multicast groups by optimizing the utilization of resources. Precise mathematical programming formulations, lower bounds and a heuristic for the underlying optimization problem are presented. The heuristic aims to accommodate multiple multicast groups while avoiding bottlenecks on the links for higher throughput. The heuristics and exact algorithms are implemented on various networks and multicast groups. The simulations show that multicast trees can be built by using 25% fewer links than the rings, both for optimal and suboptimal constructions. The packing heuristic is also implemented and its performance is compared to the constructive lower bound.

S. K. Kasera, J. Kurose and D. Towsley, "A Comparison of Server-Based and Receiver-Based Local Recovery Approaches for Scalable Reliable Multicast," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 988, March/April 1998.

Abstract: Local recovery approaches for reliable multicast have the potential to provide significant performance gains in terms of reduced bandwidth and delay, and higher system throughput. In this paper we examine two local recovery approaches: one server-based, and the other receiver-based, and compare their performance. The server-based approach makes use of specially designated hosts, called repair servers, co-located with routers inside the network. In the receiver-based approach, only the end hosts (sender and receivers) are involved in error recovery. Using analytical models, we first show that the two local recovery approaches yield significantly higher protocol throughput and lower bandwidth usage than an approach that does not use local recovery. Next, we demonstrate that server-based local recovery yields higher protocol throughput and lower bandwidth usage than receiver-based local recovery when the repair servers have processing power slightly higher than that of a receiver and several hundred kilobytes of buffer per multicast session.

L. Vicisano, L. Rizzo and J. Crowcroft, "TCP-Like Congestion Control for Layered Multicast Data Transfer," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), March/April 1998.

Abstract: We present a novel congestion control algorithm suitable for use with cumulative, layered data streams in the MBone. Our algorithm behaves similarly to TCP congestion control algorithms, and shares bandwidth fairly with other instances of the protocol and with TCP flows. It is entirely receiver driven and requires no per-receiver status at the sender, in order to scale to large numbers of receivers. It relies on standard functionalities of multicast routers, and is suitable for continuous stream and reliable bulk data transfer. In the paper we illustrate the algorithm, characterize its response to losses both analytically and by simulations, and analyze its behavior using simulations and experiments in real networks. We also show how error recovery can be dealt with independently from congestion control by using FEC techniques, so as to provide reliable bulk data transfer.
Keywords: congestion control; multicast

S. Fahmy, R. Jain, R. Goyal, B. Vandalore, S. Kalyanaraman, S. Kota and P. Samudra, "Feedback Consolidation Algorithms for ABR Point-to-Multipoint Connections in ATM Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1004, March/April 1998.

Abstract: ABR traffic management for point-to-multipoint connections controls the source rate to the minimum rate supported by all the branches of the multicast tree. A number of algorithms have been developed for extending ABR congestion avoidance algorithms to perform feedback consolidation at the branch points. This paper discusses various design options and implementation alternatives for the consolidation algorithms, and proposes a number of new algorithms. The performance of the proposed algorithms and the previous algorithms is compared under a variety of conditions. Results indicate that the algorithms we propose eliminate the consolidation noise (caused if the feedback is returned before all branches respond), while exhibiting a fast transient response.
Keywords: ATM networks; ABR service category; traffic management; congestion control; multipoint communication

I. Matta and A. Bestavros, "A Load Profiling Approach to Routing Guaranteed Bandwidth Flows," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1014, March/April 1998.

Abstract: We study a new approach to routing multi-class traffic flows with guaranteed bandwidth requirements. The approach is based on our recently proposed concept of load profiling [Bestavros and Matta 97]. We thoroughly characterize routing performance using load profiling and contrast it to routing using load balancing and load packing. We do so both analytically and via extensive simulations on Virtual Path (VP) based networks. Our findings confirm that load balancing is not desirable, as it results in VP bandwidth fragmentation, which adversely affects the likelihood of accepting new flow requests. This fragmentation is more pronounced when the granularity of the requests is large. Our simulation results also show that our load-profiling routing scheme performs better or as well as the traditional load-balancing routing in terms of revenue under both skewed and uniform workloads. Furthermore, load-profiling routing improves routing fairness by proactively increasing the chances of admitting high-bandwidth flows.

C. K. Tham and W. S. Soh, "Multi-Service Connection Admission Control Using Modular Neural Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1022, March/April 1998.

Abstract: Although neural networks have been applied for traffic and congestion control in ATM networks, most implementations use multi-layer perceptron (MLP) networks, which are known to converge slowly. In this paper, we present a Connection Admission Control (CAC) scheme which uses a modular neural network with fast learning ability to predict the cell loss ratio (CLR) at each switch in the network. A special type of OAM cell travels from the source node to the destination node and back in order to gather information at each switch. The information is used at the source to make CAC decisions such that Quality of Service (QoS) commitments are not violated. Experimental results which compare the performance of the proposed method with other CAC methods which use the Peak Cell Rate (PCR), Average Cell Rate (ACR) and Equivalent Bandwidth are presented.
Keywords: Congestion and Admission Control; BISDN and ATM; Traffic Management; Distributed Network Algorithms

V. Bose, A. B. Shah and M. Ismert, "Software Radios for Wireless Networking," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1030, March/April 1998.

Abstract: This paper describes a novel architecture for building software wireless network interfaces. These interfaces, implemented in user-level software, run on off-the-shelf PCs and replace all of the link and many of the physical layer functions typically implemented in dedicated hardware on a network interface card (NIC). They provide all of the processing needed to transform between wideband IF signals and network packets. Moving this functionality into user-level software has several advantages. Among other things, it makes it easy to implement protocols that adapt to different applications and environmental conditions. Our approach is compatible with the existing OSI protocol stack, but supports a finer granularity of layering. This finer granularity makes it possible for our NIC to dynamically change functions, such as modulation technique, that are fixed in other NICs. It also offers interfaces that facilitate interoperation with a variety of other systems, e.g., coders. We also present a brief description of our architecture which allows these software NICs to be built, as well as a sample NIC that runs on a PentiumPro, designed to interoperate with a commercial 2.4 GHz ISM band frequency hopping spread spectrum radio.

D. A. Maltz and P. Bhagwat, "MSOCKS: An Architecture for Transport Layer Mobility," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1037, March/April 1998.

Abstract: Mobile nodes of the future will be equipped with multiple network interfaces to take advantage of overlay networks, yet no current mobility systems provide full support for the simultaneous use of multiple interfaces. The need for such support arises when multiple connectivity options are available with different cost, coverage, latency and bandwidth characteristics, and applications want their data to flow over the interface that best matches the characteristics of the data. We present an architecture called Transport Layer Mobility that allows mobile nodes to not only change their point of attachment to the Internet, but also to control which network interfaces are used for the different kinds of data leaving from and arriving at the mobile node. We implement our transport layer mobility scheme using a split-connection proxy architecture and a new technique called TCP Splice that gives split-connection proxy systems the same end-to-end semantics as normal TCP connections.
Keywords: mobile networking; proxies; TCP; connection redirection; SOCKS; firewalls

K. Y. Wang and S. K. Tripathi, "Mobile-End Transport Protocol: An Alternative to TCP/IP Over Wireless Links," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1046, March/April 1998.

Abstract: Unlike wired links, wireless network bandwidth is highly limited and the channel usually suffers from frequent and bursty loss due to its vulnerability to various kinds of interference. At the same time, the traditional TCP with its congestion control is well known for its poor performance over wireless links. This paper proposes a new protocol that (1) replaces TCP/IP over the wireless link by a simpler protocol with smaller headers, if the link is the last hop along a data path, (2) shifts functions needed to communicate with an Internet host using TCP/IP from the mobile host to the base station, so that the distinct wireless link is hidden from the outside Internet, and (3) exploits link-layer acknowledgments and retransmissions to quickly recover losses over the wireless link. Our simulation results show a substantial performance improvement achieved by the new protocol.

K. Tutschku, "Demand-based Radio Network Planning of Cellular Mobile Communication Systems," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1054, March/April 1998.

Abstract: This paper presents a demand-based engineering method for designing radio networks of cellular mobile communication systems. The proposed procedure is based on a forward-engineering method, the Integrated Approach to cellular network planning and is facilitated by the application of a new discrete population model for the traffic description, the Demand Node Concept. The use of the concept enables the formulation of the transmitter locating task as a Maximal Coverage Location Problem (MCLP), which is well known in economics for modeling and solving facility location problems. For the network optimization task, we introduced the Set Cover Base Station Positioning Algorithm (SCBPA), which is based on a greedy heuristic for solving the MCLP problem. Furthermore, we present the planning tool prototype ICEPT (Integrated Cellular network Planning Tool), which is based on these ideas and show a first result from a real world planning case.

D. P. Hong and T. Suda, "Performance of ERICA and QFC for Transporting Bursty TCP Sources with Bursty Interfering Traffic," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1341, March/April 1998.

Abstract: In ATM networks, the Available Bit Rate (ABR) service enables data sources to efficiently utilize network resources without adversely affecting the performance of non-ABR, guaranteed traffic such as video. Most research to date on ABR performance has not used realistic assumptions. Unrealistic assumptions such as persistent data sources, an absence of non-ABR traffic, and inaccurate TCP models may lead to misleading results. In this paper, we address the above shortcomings of previous research. In a comparison of two major ABR schemes, the rate based ERICA and the credit based QFC, we find that the performance may be significantly better with QFC than previously thought when bursty traffic sources and bursty interfering traffic are considered.

B. Awerbuch and Y. Shavitt, "Converging to Approximated Max-Min Flow Fairness in Logarithmic Time," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1350, March/April 1998.

Abstract: Max-Min is a frequently praised criterion for flow control despite its limitations. In practice, the fast rate of changes in the connection layout in modern networks makes it hard for any flow control algorithm to converge to an optimal point. In such an environment, it might be better to trade accuracy for speed. We present algorithms that relax the optimality criterion of max-min flow fairness but achieve a fast convergence time that is logarithmic in the link bandwidth and not a function of the number of active connections (or sessions). The relaxation we use requires rates to be increased or decreased by a certain factor, 1 + ε; in other words, assigned rates can only be a natural power of 1 + ε times some basic bandwidth unit. Under this criterion, the quiescent time of our flow control algorithms is log_{1+ε} B, where B is the maximum link bandwidth in minimum allocation units. This is a great improvement over the super-linear quiescent time of known algorithms, both exact and approximate.
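The rate relaxation this abstract describes is easy to make concrete: rates are snapped to a geometric grid of levels, and the number of distinct levels an algorithm can ever sweep through is log_{1+ε} B. The sketch below is illustrative only (the base unit and ε value are assumptions, not taken from the paper):

```python
import math

def quantize_rate(rate, base=1.0, eps=0.1):
    """Round a requested rate down to the nearest level of the geometric
    grid base * (1+eps)^k, as in the relaxed max-min criterion.  `base` is
    an assumed minimum allocation unit."""
    if rate < base:
        return base
    k = math.floor(math.log(rate / base, 1 + eps))
    return base * (1 + eps) ** k

def levels_to_sweep(max_bandwidth, base=1.0, eps=0.1):
    """Number of discrete rate levels between `base` and `max_bandwidth`:
    log_{1+eps}(B), the quantity that bounds the quiescent time."""
    return math.ceil(math.log(max_bandwidth / base, 1 + eps))

print(levels_to_sweep(10_000))  # → 97: grows with log B, not session count
```

Even for a link of 10,000 allocation units, fewer than a hundred rate levels exist at ε = 0.1, which is the intuition behind the logarithmic convergence bound.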

S. P. Abraham and A. Kumar, "A Stochastic Approximation Approach for Max-Min Fair Adaptive Rate Control of ABR Sessions with MCRs," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1358, March/April 1998.

Abstract: The ABR sessions in an ATM network share the bandwidth left over after guaranteeing service to CBR and VBR traffic. Hence the bandwidth available to ABR sessions is randomly varying. This bandwidth must be shared by the sessions in a max-min fair fashion. Our point of departure in this paper is to formulate the problem of determining the max-min fair session rates as the problem of finding the root of a certain nonlinear vector equation; the same formulation also arises with our notion of max-min fairness with positive MCRs. This formulation allows us to use a stochastic approximation algorithm for online distributed computation of the max-min fair rates. We use the well known ordinary differential equation technique to prove convergence of the algorithm in the synchronous update case. We provide simulation results using the NIST simulator to show that the algorithm is able to track the max-min fair rates for slowly varying random available link bandwidth.

Y. T. Hou, H. H. Y. Tzeng and S. S. Panwar, "A Generalized Max-Min Rate Allocation Policy and Its Distributed Implementation Using ABR Flow Control Mechanism," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1366, March/April 1998.

Abstract: We generalize the classical max-min rate allocation policy with the support of the minimum rate requirement and peak rate constraint for each connection. Since a centralized algorithm for the generalized max-min (GMM) rate allocation requires global information, which is difficult to maintain and manage in a large network, we develop a distributed protocol to achieve the GMM policy using the available bit rate (ABR) flow control mechanism. We give a proof that our distributed protocol converges to the GMM rate allocation through distributed and asynchronous iterations under any network configuration and any set of link distances.
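For intuition about the policy this abstract generalizes, the single-link special case (each connection clamped between its minimum rate and peak rate, with a common max-min share raised until capacity is exhausted) can be computed by bisection. This is an illustrative sketch under those assumptions, not the distributed ABR protocol the paper develops:

```python
def gmm_single_link(capacity, mcrs, pcrs, iters=60):
    """Single-link generalized max-min allocation: connection i receives
    clamp(share, mcrs[i], pcrs[i]), where the common `share` is the largest
    value whose total allocation still fits in `capacity` (found by
    bisection).  Raises AssertionError if the MCRs alone are infeasible."""
    assert sum(mcrs) <= capacity, "MCRs alone exceed link capacity"

    def total(share):
        # Total bandwidth used if every connection gets its clamped share.
        return sum(min(max(share, lo), hi) for lo, hi in zip(mcrs, pcrs))

    lo, hi = 0.0, max(pcrs)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if total(mid) <= capacity:
            lo = mid  # still fits: the fair share can be raised further
        else:
            hi = mid
    return [min(max(lo, l), h) for l, h in zip(mcrs, pcrs)]
```

For example, on a link of capacity 10 with MCRs (1, 1, 1) and PCRs (2, 10, 10), the first connection is capped at its peak rate 2 and the remaining two split the rest at 4 each, which is the bottleneck structure the GMM policy formalizes network-wide.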

P. Kim, "Deterministic Service Guarantees in 802.12 Networks, Part II: the Cascaded Network Case," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1376, March/April 1998.

Abstract: In part I [Kim 97] of this paper, we studied the problem of allocating resources in shared single hub 802.12 networks. We described the packet scheduling model and defined the admission control conditions which enable the network to provide deterministic service guarantees. In this paper, we analyze cascaded (multi-hub) 802.12 topologies. We derive the relevant network parameters and show that the admission control conditions defined in part I also apply to cascaded topologies when used with these parameters. Experimental results were achieved with a UNIX kernel based implementation in cascaded test networks. These confirm the theoretical results obtained for network parameters, throughput and end-to-end delay.

M. V. Ivanovich and M. Zukerman, "Evaluation of Priority and Scheduling Schemes for an IEEE 802.14 MAC Protocol Loaded by Real Traffic," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1384, March/April 1998.

Abstract: We study a new scheme for provision of priorities within the framework of a MAC protocol which is a potential candidate of the (as yet unpublished) IEEE 802.14 standard. A comprehensive simulation study, based on real traffic traces, shows that this new method provides better protection for the high priority traffic than an earlier proposed scheme. Except for physical and MAC layer overheads, the scheme exhibits comparable behaviour to that of an Ideal Multiplexer, for traces with very diverse correlation structures. The significance of using realistic (correlated) traffic streams for modeling purposes is demonstrated and discussed.

M. D. Corner, N. Golmie, J. Liebeherr and D. H. Su, "A Priority Scheme for the IEEE 802.14 MAC Protocol for Hybrid Fiber-Coax Networks," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1400, March/April 1998.

Abstract: In order to provide Quality of Service (QoS) to users with real-time data such as voice, video and interactive services, the evolving IEEE 802.14 standard for Hybrid Fiber Coaxial (HFC) networks must include an effective priority scheme. In this paper we investigate the ability of the current IEEE 802.14 specification to provide priority service and show that a preemptive scheduler is not a sufficient solution. We propose to augment the scheduler with a novel scheme for implementing priority access in an HFC random access environment. The proposed mechanism integrates a multilevel priority collision resolution system into the proposed IEEE 802.14 MAC. The scheme separates and resolves collisions between stations in a priority order. A set of simulation scenarios is presented that shows the robustness and efficiency of the protocol, such as its ability to isolate higher priorities from lower ones and provide quick access to high priority requests.

W. T. Zaumen and J. J. Garcia-Luna-Aceves, "Loop-Free Multipath Routing Using Generalized Diffusing Computations," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1408, March/April 1998.

Abstract: A new distributed algorithm for the dynamic computation of multiple loop-free paths from source to destination in a computer network or Internet is presented, validated, and analyzed. According to this algorithm, which is called DASM (Diffusing Algorithm for Shortest Multipath), each router maintains a set of entries for each destination in its routing table, and each such entry consists of a set of tuples specifying the next router and distance in a loop-free path to the destination. DASM guarantees instantaneous loop freedom of multipath routing tables by means of a generalization of Dijkstra and Scholten's diffusing computations. With generalized diffusing computations, a node in a directed acyclic graph (DAG) defined for a given destination has multiple next nodes in the DAG and is able to modify the DAG without creating a directed loop. DASM is shown to be loop-free at every instant, and its average performance is analyzed by simulation and compared against an ideal link-state algorithm and the Diffusing Update Algorithm (DUAL).

J. Chen, P. Druschel and D. Subramanian, "An Efficient Multipath Forwarding Method," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1418, March/April 1998.

Abstract: We motivate and formally define dynamic multipath routing and present the problem of packet forwarding in the multipath routing context. We demonstrate that for multipath sets that are suffix matched, forwarding can be efficiently implemented with (1) a per packet overhead of a small, fixed-length path identifier, and (2) router space overhead linear in K, the number of alternate paths between a source and a destination. We derive multipath forwarding schemes for suffix matched path sets computed by both decentralized (link-state) and distributed (distance-vector) routing algorithms. We also prove that (1) distributed multipath routing algorithms compute suffix matched multipath sets, and (2) for the criterion of ranked k-shortest paths, decentralized routing algorithms also yield suffix matched multipath sets.
Keywords: Routing; Multipath; Forwarding
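
The forwarding idea the abstract describes can be sketched as follows. This is our own minimal illustration, not code from the paper: each router's table maps a (destination, path identifier) pair to a next hop and the identifier to use there; because the path sets are suffix matched, the remainder of every alternate path is itself one of the next hop's paths, so a small fixed-length identifier suffices.

```python
# Hypothetical sketch of multipath forwarding with fixed-length path ids,
# assuming suffix-matched path sets. All names here are illustrative.

class Router:
    def __init__(self, name):
        self.name = name
        # (destination, path_id) -> (next hop name, path_id to use at next hop)
        self.table = {}

    def forward(self, destination, path_id):
        return self.table[(destination, path_id)]

# Three paths from A to D: A-B-D (id 0), A-C-D (id 1), A-B-C-D (id 2).
# Suffix matching lets B and C reuse their own ids for the suffixes
# B-D, C-D, and B-C-D.
a, b, c = Router("A"), Router("B"), Router("C")
a.table[("D", 0)] = ("B", 0)   # suffix B-D is B's path 0
a.table[("D", 1)] = ("C", 0)   # suffix C-D is C's path 0
a.table[("D", 2)] = ("B", 1)   # suffix B-C-D is B's path 1
b.table[("D", 0)] = ("D", 0)
b.table[("D", 1)] = ("C", 0)
c.table[("D", 0)] = ("D", 0)

def route(src_router, routers, destination, path_id):
    """Follow the path id hop by hop; returns the sequence of hops."""
    hops = [src_router.name]
    router, pid = src_router, path_id
    while router.name != destination:
        nxt, pid = router.forward(destination, pid)
        hops.append(nxt)
        router = routers.get(nxt)
        if router is None:      # reached the destination itself
            break
    return hops

routers = {"A": a, "B": b, "C": c}
print(route(a, routers, "D", 2))   # ['A', 'B', 'C', 'D']
```

Note the space overhead: each router stores one table entry per alternate path it lies on, which is the linear-in-K property the paper claims.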

H. C. Lin and S. C. Lai, "VTDM - A Dynamic Multicast Routing Algorithm," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1426, March/April 1998.

Abstract: In this paper, the dynamic multicast routing problem is studied. The multicast routing problem has been shown to be NP-complete. Many heuristics have been proposed to find the multicast trees for multicast connections. In computer networks, application services may allow nodes to join or leave the multicast connection dynamically. The multicast routing problem in which nodes are allowed to join or leave the multicast connection is called the dynamic multicast routing problem. A new dynamic multicast routing algorithm called Virtual Trunk Dynamic Multicast (VTDM) routing algorithm is proposed for this problem. A Virtual Trunk (VT) is a tree of the underlying graph. It is used as a template for constructing multicast trees. The VTDM routing algorithm constructs multicast trees based on the virtual trunk. Simulations are performed to study the performance of the VTDM routing algorithm. Simulation results show that the performance of the VTDM algorithm is close to that of the KMB algorithm [Kou et al., 81] (a near optimal heuristic for the static multicast routing problem).
Keywords: ATM networks; Dynamic multicast routing algorithm

S. P. Hong, H. Lee and B. H. Park, "An Efficient Multicast Routing Algorithm for Delay-Sensitive Applications with Dynamic Membership," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1433, March/April 1998.

Abstract: We propose an algorithm for finding a multicast tree in packet-switched networks. The objective is to minimize the total cost incurred along the multicast path. The routing model is based on the minimum cost Steiner tree problem. The Steiner problem is extended in our paper to incorporate two additional requirements. First, the delay experienced along the path from the source to each destination is bounded. Second, the destinations are allowed to join and leave multicasting anytime during a session. To minimize the disruption to on-going multicasting, the algorithm adopts the idea of connecting a new destination to the current multicast tree by a minimum cost path satisfying the delay bound. Finding such a path is an NP-hard problem, and an enumerative method relying on generation of delay bounded paths between node pairs is not likely to find a good routing path in acceptable computation time when the network size is large. To cope with this difficulty, the proposed algorithm utilizes an optimization technique called the Lagrangian relaxation method. A computational experiment is done on relatively dense and large Waxman's networks. The results seem to be promising. For sparse networks, the algorithm can find near-optimal multicast trees. Also, the quality of multicast trees does not seem to deteriorate even when the network size grows. Furthermore, the experimental results show that the computational effort for each addition of a node to the call is fairly moderate, roughly that of solving a few shortest path problems.
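
The Lagrangian relaxation idea for delay-bounded cheap paths can be illustrated in miniature. This sketch is our own, not the paper's algorithm: the delay constraint is moved into the objective with a multiplier lam, each candidate path is found by a shortest-path run on the aggregated weight cost + lam*delay, and lam is tuned by bisection.

```python
import heapq

# Illustrative sketch (not the paper's algorithm): find a cheap path whose
# end-to-end delay respects a bound, via Lagrangian relaxation of the
# delay constraint. Graph edges map neighbor -> (cost, delay).

def dijkstra(graph, src, dst, lam):
    """Shortest path under weight cost + lam*delay; returns (path, cost, delay)."""
    pq = [(0.0, 0.0, 0.0, src, [src])]
    seen = set()
    while pq:
        w, cost, delay, node, path = heapq.heappop(pq)
        if node == dst:
            return path, cost, delay
        if node in seen:
            continue
        seen.add(node)
        for nbr, (c, d) in graph[node].items():
            if nbr not in seen:
                heapq.heappush(pq, (w + c + lam * d, cost + c,
                                    delay + d, nbr, path + [nbr]))
    return None, float("inf"), float("inf")

def delay_bounded_path(graph, src, dst, bound, iters=30):
    lo, hi = 0.0, 1000.0
    best = None
    for _ in range(iters):          # bisection on the multiplier lam
        lam = (lo + hi) / 2
        path, cost, delay = dijkstra(graph, src, dst, lam)
        if delay <= bound:
            best = (path, cost, delay)
            hi = lam                # feasible: try penalizing delay less
        else:
            lo = lam                # infeasible: penalize delay more
    return best

# Toy network: the cheapest path s-a-t (cost 2, delay 10) violates the bound,
# so the relaxation settles on s-b-t (cost 6, delay 2).
g = {
    "s": {"a": (1, 5), "b": (3, 1)},
    "a": {"t": (1, 5)},
    "b": {"t": (3, 1)},
    "t": {},
}
print(delay_bounded_path(g, "s", "t", bound=4))   # (['s', 'b', 't'], 6.0, 2.0)
```

Each iteration costs one shortest-path computation, which matches the abstract's observation that adding a node costs roughly a few shortest path problems.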

M. Krunz and A. Makowski, "A Source Model for VBR Video Traffic Based on M/G/infinity Input Processes," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1441, March/April 1998.

Abstract: Statistical evidence suggests that the autocorrelation function of a compressed-video sequence is better captured by p(k) = e^(-b*sqrt(k)) than by p(k) = k^(-b) = e^(-b*log k) (long-range dependence) or p(k) = e^(-bk) (Markovian). A video model with such a correlation structure is introduced based on the so-called M/G/infinity input processes. Though not Markovian, the model exhibits short-range dependence. Using the queuing performance under ``real'' video traffic as a reference, we study via simulations the queuing performance under two video models: the M/G/infinity model and the fractional ARIMA (F-ARIMA) model (which exhibits LRD). Our results indicate that the M/G/infinity model is much more accurate in predicting the actual queuing performance than the F-ARIMA model. Furthermore, only O(n) computations are required to generate an M/G/infinity trace of length n, compared to O(n^2) for a F-ARIMA trace.
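
The O(n) generation claim is easy to see with a small sketch. This is our own illustration, not the authors' code: at each slot, Poisson-many sessions arrive, each staying for a random duration G; the trace value is the count of active sessions, computed with arrival/departure markers and one prefix-sum pass. The Pareto duration law below is an illustrative choice, not the paper's fitted distribution.

```python
import random

# Hedged sketch: sampling an M/G/infinity trace in O(n + arrivals) time.

def poisson(lam, rng):
    """Knuth's method; adequate for small lam."""
    L, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mg_infinity_trace(n, lam, sample_duration, seed=1):
    rng = random.Random(seed)
    delta = [0] * (n + 1)           # +1 at arrival slot, -1 after departure
    for t in range(n):
        for _ in range(poisson(lam, rng)):
            d = sample_duration(rng)
            delta[t] += 1
            delta[min(t + d, n)] -= 1
    trace, active = [], 0
    for t in range(n):              # single prefix-sum pass: O(n)
        active += delta[t]
        trace.append(active)
    return trace

# Example duration law with a heavy-ish tail (illustrative assumption).
def pareto_duration(rng, alpha=2.5):
    return max(1, int(rng.paretovariate(alpha)))

trace = mg_infinity_trace(1000, lam=3.0, sample_duration=pareto_duration)
print(sum(trace) / len(trace))      # sample mean; roughly lam * E[G]
```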

K. Kumaran and Debasis Mitra, "Performance and Fluid Simulations of a Novel Shared Buffer Management System," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1449, March/April 1998.

Abstract: We consider a switching system which has multiple ports that share a common buffer, in which there is a FIFO logical queue for each port. Each port may support a large number of flows or connections, which are approximately homogeneous in their statistical characteristics, with common QoS requirements in cell loss and maximum delay. Heterogeneity may exist across ports. Our first contribution is a buffer management scheme based on Buffer Admission Control, which is integrated with Connection Admission Control at the switch, and is at the same time fair, efficient and robust in sharing the buffer resources across ports. Our scheme is based on the resource-sharing technique of Virtual Partitioning. Our second major contribution is to advance the practice of discrete-event fluid simulations. Such simulations are approximations to cell-level simulations and offer orders of magnitude speed-up. A third contribution of the paper is the formulation and solution of a problem of optimal allocation of bandwidth and buffers to each port having specific delay bounds, in a lossless multiplexing framework. Finally, we report on extensive simulation results. The scheme is found to be effective, efficient and robust.
Keywords: Buffer Management; Fluid Simulations; Virtual Partitioning
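
A minimal sketch in the spirit of virtual partitioning (our own simplification, not the paper's scheme): each port gets a nominal allocation, the allocations deliberately oversubscribe the shared buffer, and a cell is admitted only while its port stays under its nominal share and the buffer has free space. A port that hogs the buffer is throttled at its nominal share, protecting the others.

```python
# Hedged sketch of virtual-partitioning-style buffer admission.
# Parameter names and policy details are illustrative assumptions.

class SharedBuffer:
    def __init__(self, total, nominal):
        self.total = total                 # total shared buffer (cells)
        self.nominal = nominal             # per-port nominal allocations
        self.occupancy = {p: 0 for p in nominal}

    def used(self):
        return sum(self.occupancy.values())

    def admit(self, port):
        if self.used() >= self.total:
            return False                   # buffer physically full
        if self.occupancy[port] >= self.nominal[port]:
            return False                   # port over its virtual partition
        self.occupancy[port] += 1
        return True

    def depart(self, port):
        if self.occupancy[port] > 0:
            self.occupancy[port] -= 1

# Oversubscription is the point: 6 + 6 > 10, so idle capacity is shareable,
# yet no port can exceed its nominal share.
buf = SharedBuffer(total=10, nominal={"A": 6, "B": 6})
for _ in range(8):
    buf.admit("A")
print(buf.occupancy["A"])     # 6: admissions beyond the nominal share fail
print(buf.admit("B"))         # True: B is still under its allocation
```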

P. R. Jelenkovic, "Long-Tailed Loss Rates in a Single Server Queue," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1462, March/April 1998.

Abstract: Consider a single server queue with i.i.d. arrival and service processes. For a fluid queue with capacity c and M/G/infinity arrivals, explicit asymptotic formulas for the long-tailed loss rates are derived. Accuracy of the asymptotic relations is verified with extensive numerical and simulation experiments. These explicit formulas have potential application in designing communication networks that will carry traffic with long-tailed characteristics, e.g., Internet data services.
Keywords: Long-tailed traffic models; Subexponential distributions; Long-range dependency; Network multiplexer; Finite buffer queue; Fluid flow queue; M/G/infinity process

J. Y. Lee and Y. H. Kim, "Performance Analysis of a Hybrid Priority Control Scheme for Input and Output Queuing ATM Switches," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), pp. 1470, March/April 1998.

Abstract: In this paper, we consider a nonblocking ATM switch with capacity C which adopts a hybrid priority control scheme in input buffers. Each input queue has two separate buffer spaces with different sizes for two classes of traffic, and adopts a general state-dependent scheduling scheme which assigns priorities to each class dynamically. We obtain the distribution of input queue length, loss probabilities and mean waiting times of the two traffic classes using a semi-Markov process concept and compare the performances of various scheduling schemes. Numerical analysis and simulation indicate that the utilization of the switch with the hybrid priority control scheme satisfying the QoS requirements of each traffic class is much higher than that of the switch without control. Moreover, the required buffer size is reduced while satisfying the same QoS requirements. Among the five scheduling schemes we have considered in this paper, the queue length threshold (QLT) scheme yields the highest utilization of the switch and has flexible performance adequate to satisfy various QoS values for the two traffic classes.

Samrat Bhattacharjee and Zongming Fei, "A Novel Server Selection Technique for Improving the Response Time of a Replicated Service," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (San Francisco, California), March/April 1998.

Abstract: Server replication is an approach often used to improve the ability of a service to handle a large number of clients. One of the important factors in the efficient utilization of replicated servers is the ability to direct client requests to the best server, according to some optimality criteria. In this paper we target an environment in which servers are distributed across the Internet, and clients identify servers using our application-layer anycasting service. Our goal is to allocate servers to clients in a way that minimizes a client's response time. To that end, we develop an approach for estimating the performance that a client would experience when accessing particular servers. Such information is maintained in a resolver that clients can query to obtain the identity of the server with the best response time. Our performance collection technique combines server push with client probes to estimate the expected response time. A set of experiments is used to demonstrate the properties of our performance determination approach and to show its advantages when used within the application-layer anycasting architecture.
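
The resolver's selection step can be sketched as follows. This is a hypothetical miniature of the idea, with names of our own invention, not the paper's API: server-pushed processing-time reports and client-side probe measurements are combined into a per-server response-time estimate, and the resolver answers queries with the minimizer.

```python
# Hedged sketch of response-time-based server selection for an
# application-layer anycasting resolver. Class and method names are ours.

class AnycastResolver:
    def __init__(self):
        self.server_push = {}    # server -> pushed processing-time report (s)
        self.probe_rtt = {}      # server -> client-probed network RTT (s)

    def report(self, server, processing_time):
        self.server_push[server] = processing_time

    def probe(self, server, rtt):
        self.probe_rtt[server] = rtt

    def estimated_response(self, server):
        # Crude additive model: server processing time + network round trip.
        return (self.server_push.get(server, float("inf"))
                + self.probe_rtt.get(server, float("inf")))

    def best_server(self):
        return min(self.server_push, key=self.estimated_response)

resolver = AnycastResolver()
resolver.report("us-east", 0.050); resolver.probe("us-east", 0.120)
resolver.report("eu-west", 0.020); resolver.probe("eu-west", 0.030)
print(resolver.best_server())    # eu-west (0.050 total vs 0.170)
```

The division of labor mirrors the abstract: servers push what only they know (their load), while probes capture what only the client's vantage point reveals (the network path).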

Jim Challenger, Arun Iyengar and Paul Dantzig, "A Scalable System for Consistently Caching Dynamic Web Data," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (New York), Mar. 1999.

Abstract: This paper presents a new approach for consistently caching dynamic Web data in order to improve performance. Our algorithm, which we call Data Update Propagation (DUP), maintains data dependence information between cached objects and the underlying data which affect their values in a graph. When the system becomes aware of a change to underlying data, graph traversal algorithms are applied to determine which cached objects are affected by the change. Cached objects which are found to be highly obsolete are then either invalidated or updated. DUP was a critical component at the official Web site for the 1998 Olympic Winter Games. By using DUP, we were able to achieve cache hit rates close to 100% compared with 80% for an earlier version of our system which did not employ DUP. As a result of the high cache hit rates, the Olympic Games Web site was able to serve data quickly even during peak request periods.
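
The traversal step of DUP can be sketched in a few lines. This is our own minimal version, not the system's code: dependence edges run from each underlying datum to every object whose value depends on it, and when a datum changes, a breadth-first traversal collects the cached objects to invalidate or update.

```python
from collections import defaultdict, deque

# Hedged sketch in the spirit of Data Update Propagation (DUP).
# Node names are illustrative; the real system's schema is not shown here.

class DependenceGraph:
    def __init__(self):
        self.dependents = defaultdict(set)   # node -> objects depending on it

    def add_dependency(self, underlying, dependent):
        self.dependents[underlying].add(dependent)

    def affected_by(self, changed):
        """All transitive dependents of the changed datum (BFS)."""
        seen, queue = set(), deque([changed])
        while queue:
            node = queue.popleft()
            for dep in self.dependents[node]:
                if dep not in seen:
                    seen.add(dep)
                    queue.append(dep)
        return seen

g = DependenceGraph()
g.add_dependency("scores:hockey", "page:hockey-results")
g.add_dependency("page:hockey-results", "page:front")
g.add_dependency("scores:skating", "page:skating-results")

cache = {"page:hockey-results", "page:front", "page:skating-results"}
stale = g.affected_by("scores:hockey") & cache
cache -= stale                       # invalidate; regenerate on next request
print(sorted(stale))                 # ['page:front', 'page:hockey-results']
```

Because only the transitive dependents of the changed datum are touched, unrelated pages (the skating results above) stay cached, which is what keeps hit rates high.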
Keywords: Internet and experimental systems

netbib software created by H. Schulzrinne. Report problems to schulzrinne@cs.columbia.edu. Mon Jul 03 12:51:10 EDT 2000