Advanced Internet Services (CS E6998) Project Report
Rui-Feng Raymond Liao, Sang Hyun Hwang, and Michael E. Kounavis
Comet Group
Center for Telecommunications Research
Columbia University
Mview is a software tool for visualizing the operation of the MBone; it uses several independent network tools to collect topological information and performance statistics. The current version of mview does not make use of the QoS feedback provided by RTCP, the control protocol of RTP. Our task is to enhance mview by adding RTP monitoring functionality to its core mechanisms for visualizing MBone topology and performance, while complying with mview's fundamental design philosophy: the separation of network utilities from user interface facilities.
The Internet Multicast Backbone (MBone) is an overlay network deployed on the Internet to support multicast applications. The MBone consists of "multicast islands", areas of the Internet that support multicast services, connected to each other through IP tunnels [6].
Mview [1] is a software tool developed to visualize and manage the MBone. It allows the user to collect and monitor performance statistics on MBone routers and links, aiding in the diagnosis of network problems through its graphical user interface.
Mview learns about the MBone topology in multiple ways, using MBone utilities such as mrinfo, mrtree, and mtrace, DNS queries via nslookup, or SNMP commands such as snmpoidget. For example, mview uses mrinfo to determine topology information, tunnel thresholds, and tunnel metrics; mtrace to determine the packet path between two MBone routers; and mrtree to print the multicast tree rooted at a given node.
Mview's essential functionalities include process dispatch, data parsing, and front-end display. The current version of mview, however, does not support RTCP Quality-of-Service (QoS) feedback.
The Real-time Transport Protocol (RTP [2]) carries data that has real-time properties and provides mechanisms, relying on lower-layer services, to support timely delivery and other QoS guarantees. As an integral part of RTP's role as a transport protocol, the RTP Control Protocol (RTCP [2]) periodically transmits control packets to all participants in a multicast session, using the same distribution mechanism as the RTP data packets. The primary function of RTCP is to provide feedback on the quality of the data distribution, which can then be used by the flow and congestion control functions of other transport protocols.
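For concreteness, the feedback mview will consume arrives in the per-source report blocks that RTCP carries in its sender and receiver reports. The C rendering below is a minimal sketch of that layout as defined in RFC 1889 [2]; the type and field names are ours for illustration:

    #include <stdint.h>

    /* Per-source report block carried in RTCP SR/RR packets (RFC 1889).
     * The first byte of lost_info is the fraction lost; the remaining
     * 24 bits are the cumulative number of packets lost.  A portable
     * parser should extract these with explicit shifts. */
    typedef struct {
        uint32_t ssrc;         /* SSRC of the source being reported on */
        uint32_t lost_info;    /* fraction lost (8 bits) + cumulative lost (24 bits) */
        uint32_t highest_seq;  /* extended highest sequence number received */
        uint32_t jitter;       /* interarrival jitter estimate */
        uint32_t lsr;          /* last SR timestamp (middle 32 bits of NTP time) */
        uint32_t dlsr;         /* delay since last SR, in 1/65536-second units */
    } rtcp_report_block;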
In IP multicasting, it is desirable to know who is participating at any moment and how well they are receiving the data. A sender may use the RTCP reception reports to probe the quality of the transport path and adjust its adaptive encodings. Moreover, it is crucial to get feedback from the receivers in order to diagnose faults in the distribution. With a distribution mechanism like IP multicast, it is possible for an entity not involved in the session, such as a network service provider, to receive the feedback information and act as a third-party monitor to diagnose network problems. Mview is a natural candidate for such a task. The goal of this project is therefore to enhance the current MBone monitor mview so that it can receive and process RTCP QoS feedback, measure the statistics, and present them in a graphical and hierarchical fashion.
There have been several MBone applications capable of displaying the RTCP QoS feedback through a GUI. The LBNL Audio Conferencing Tool (vat [4]) is one example. Vat v4.0 uses pull-down menus to display the statistics on RTCP QoS feedback. It presents statistics (total, delta, and exponentially weighted average) for a dozen RTP packet and error counts in a table that is updated every second. Vat's GUI design ideas and data processing functions provided us with a valuable reference for this mview enhancement project.
Another example of a third-party RTCP monitoring tool is rtpmon, developed at UC Berkeley. The data presentation style of rtpmon [5] is strongly influenced by vat, but it lacks mview's graphical and hierarchical display capabilities, such as rendering the network topology with QoS-based highlighting of congestion and failure spots.
This project requires programming skills in both C and Tcl/Tk. The first task in carrying out the project was the reverse engineering of mview, since we want to preserve most of mview's inherent functionality, including its GUIs, data structures, and graphical displays, while adding the proposed enhancements for RTCP QoS feedback. Due to the lack of documentation on the source code, much of our initial effort was focused on understanding how mview really works and on figuring out how to integrate the RTCP support tightly with the existing mview.
The fundamental design philosophy of mview is the decoupling of network utilities from user interfaces. The resulting simplicity and flexibility allow easy integration of new utilities, such as the RTCP handling extensions developed in this project.
Mview binds each network-related Tcl/Tk command from the user interface to a network utility, such as mrinfo or snmpoidget, and to its data-processing peer function, called a handler, inside mview. Mview has a general process-control mechanism for all the network utilities. It uses internal functions such as exec_handler, start_handler, and collect to fork child processes that execute the network utilities in a command-line fashion, to create a one-way pipe from the network utility and collect its output, and to invoke the corresponding handler to process the data.
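The sketch below illustrates this dispatch pattern under our reading of the code; the function name dispatch and its signature are ours for illustration, not mview's actual identifiers, which split these steps across exec_handler, start_handler, and collect.

    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Illustrative sketch of mview-style process dispatch: fork a child
     * to execute a network utility, read its output through a one-way
     * pipe, and hand the collected text to the data-processing handler. */
    static void dispatch(const char *utility, char *const argv[],
                         void (*handler)(const char *output))
    {
        int fd[2];
        char buf[4096];
        ssize_t n;
        size_t len = 0;
        pid_t pid;

        pipe(fd);
        pid = fork();
        if (pid == 0) {               /* child: stdout goes into the pipe */
            close(fd[0]);
            dup2(fd[1], STDOUT_FILENO);
            execvp(utility, argv);
            _exit(1);                 /* only reached if exec fails */
        }
        close(fd[1]);                 /* parent: collect the output */
        while ((n = read(fd[0], buf + len, sizeof(buf) - 1 - len)) > 0)
            len += n;
        buf[len] = '\0';
        close(fd[0]);
        waitpid(pid, NULL, 0);
        handler(buf);                 /* invoke the corresponding handler */
    }

A production version would check return values and grow the buffer dynamically; the fixed-size buffer here keeps the sketch short.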
All mview handlers have a uniform function interface: the output data is passed in a string as a function argument. The string stores the output of the network utility line by line and is parsed by the handler corresponding to that utility. The parsed network information is stored in the mview database and used to refresh the display.
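As an illustration of this uniform interface, a hypothetical handler simply walks the output string line by line (the name example_hlr and the parsing stub are ours, not mview's):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical handler with mview's uniform interface: the collected
     * utility output arrives in a single string and is parsed line by
     * line before the database and display are updated. */
    static void example_hlr(const char *output)
    {
        char line[256];
        const char *p = output;

        while (*p != '\0') {
            size_t full = strcspn(p, "\n");                /* line length */
            size_t n = full < sizeof(line) ? full : sizeof(line) - 1;
            memcpy(line, p, n);
            line[n] = '\0';
            printf("parsed: %s\n", line);  /* real code updates the database here */
            p += full;
            if (*p == '\n')
                p++;
        }
    }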
Therefore, mview's key design features can be summarized as: (1) the decoupling of network utilities from user interface facilities; (2) a uniform process-control mechanism that dispatches network utilities and collects their output; and (3) a uniform handler interface that parses utility output and updates the internal database and display.
Clearly, the above mview design features dictate the design approach of this project. Instead of building into mview an RTCP packet receiver/parser that continuously processes RTCP packets and updates the display, we create a stand-alone RTCP utility called rtpstats, which receives and parses RTCP packets for a specified duration of time and outputs formatted statistics before terminating. Inside mview, we create a task-specific handler for rtpstats, called rtpstats_hlr, to process the data and update the database for highlighting links and showing the QoS statistics on links.
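A minimal sketch of the rtpstats structure is given below, assuming a UDP socket bound to the session's RTCP port (data port plus one) and a fixed collection window; the actual packet parsing and output format are elided, and all constants are illustrative.

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/select.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Sketch of a stand-alone RTCP collector in the spirit of rtpstats:
     * listen on the session's RTCP port for a fixed duration, count (in
     * a real parser, decode) each control packet, then print formatted
     * statistics and exit so that the handler can consume the output. */
    int main(void)
    {
        int port = 21021;              /* RTCP port = RTP port + 1 (example) */
        int duration = 30;             /* collection window, in seconds */
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr;
        unsigned char pkt[2048];
        long reports = 0;
        time_t stop;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        bind(sock, (struct sockaddr *)&addr, sizeof(addr));
        /* for a multicast session, an IP_ADD_MEMBERSHIP setsockopt on
         * the group address (e.g. 224.0.1.14) would be issued here */

        stop = time(NULL) + duration;
        while (time(NULL) < stop) {
            struct timeval tv = {1, 0};
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(sock, &rfds);
            if (select(sock + 1, &rfds, NULL, NULL, &tv) > 0 &&
                recv(sock, pkt, sizeof(pkt), 0) > 0)
                reports++;             /* real code parses SR/RR/SDES here */
        }
        printf("packets %ld\n", reports); /* line-oriented output for the handler */
        close(sock);
        return 0;
    }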
Mview's internal database was originally designed to support the highlighting of nodes and links based on different thresholds and display criteria. However, it is a per-link or per-node text-based flat structure that can store only per-link or per-node aggregate RTCP statistics, not the detailed RTCP statistics, which have several levels of hierarchy: per multicast group, per sender, and per receiver. Therefore, to meet the specific needs of displaying detailed RTCP statistics, a separate internal RTCP database is created to store all the RTCP-related statistics; it is retrieved by a display function called show_rtpstats when triggered by a Tcl command.
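The sketch below shows what such a hierarchy can look like in C; the type and field names are ours for illustration and are not mview's actual declarations.

    #include <stdint.h>

    /* Illustrative hierarchy for the separate RTCP database: statistics
     * are kept per multicast group, then per sender within the group,
     * then per receiver reporting on that sender. */
    typedef struct receiver_stats {
        uint32_t ssrc;                  /* receiver's SSRC */
        double   fraction_lost;         /* from its latest report block */
        uint32_t cumulative_lost;
        uint32_t jitter;
        struct receiver_stats *next;
    } receiver_stats;

    typedef struct sender_stats {
        uint32_t ssrc;                  /* sender's SSRC */
        char     cname[64];             /* canonical name from SDES */
        double   pkt_sent_rate;         /* drives sender-rate highlighting */
        receiver_stats *receivers;      /* reports about this sender */
        struct sender_stats *next;
    } sender_stats;

    typedef struct mgroup_stats {
        uint32_t group_addr;            /* multicast group address */
        uint16_t rtcp_port;
        sender_stats *senders;
        struct mgroup_stats *next;
    } mgroup_stats;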
In summary, our design approach is to comply with mview's basic design philosophy: to decouple the rtpstats utility from mview's display, to use mview's original database only for aggregate link-QoS highlighting, and to create a separate, general data structure and display mechanism for the detailed RTCP statistics.
In this section, some initial measurements on an MBone RTP multicast session (the MBone video broadcast of the IETF meeting on Dec. 12, address 224.0.1.14/21020) are presented. The measurements include a topological view of all the members, their traffic congestion status, and detailed reception reports.
Figure 6.1 displays the macro-topology-level view of the multicast group, with asname as the cloud definition.
Figure 6.1 Macro-Topology Window
Figure 6.2 shows the link highlighting effect based on the received packet rate. The clouds are defined as asname to preserve a two-level topology hierarchy. The link coloring criterion is shown on the right of the figure. The node in the middle, labeled with the multicast group address, is the virtual root node for this multicast group. The view within one cloud, Columbia University, is shown in Figure 6.3: here mview is monitoring the multicast session from host vega.ctr.columbia.edu, while another host, orion.ctr.columbia.edu, is participating in the session.
Figure 6.2 Example of Link Highlighting in Micro-Topology Window
Figure 6.3 Link Highlighting Within a Topology Cloud
Figure 6.4 presents the detailed aggregated statistics from node orion.ctr.columbia.edu.
Figure 6.4 Detailed Nodal RTP Statistics - RTCP Receiver
Similarly, Figure 6.5 shows the link highlighting effect based on the sender packet rate. The link coloring criterion is shown on the right of the figure. This is a convenient way to identify the active sender. The view within the BARRNET-BLK cloud is shown in Figure 6.6; here video2.mtg.ietf.org is the sender.
This example also illustrates the statistics aggregation function of mview: within the BARRNET-BLK cloud, there are four nodes participating in the multicast session, while only one is the active sender. In the higher-level view, the link between the BARRNET-BLK cloud and the virtual root node is green, a color decided by the maximum PacketSentRate among all the nodes in the cloud, as sketched below.
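The aggregation rule itself is simple; reusing the hypothetical sender_stats type from the database sketch above, the cloud-to-root link color derives from the maximum rate found among the cloud's member nodes:

    /* Illustrative cloud aggregation: the link between a cloud and the
     * virtual root node is colored by the maximum PacketSentRate found
     * among the cloud's member nodes. */
    static double cloud_sent_rate(const sender_stats *nodes)
    {
        double max = 0.0;
        const sender_stats *n;
        for (n = nodes; n != NULL; n = n->next)
            if (n->pkt_sent_rate > max)
                max = n->pkt_sent_rate;
        return max;
    }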
Figure 6.5 Link Highlighting in Micro-Topology Window
Figure 6.6 Link Highlighting Within a Topology Cloud
Figure 6.7 presents the detailed aggregated statistics from the sender node.
Figure 6.7 Detailed Nodal RTP Statistics - RTCP Sender
Figure 6.8 displays the micro-topology-level view of the same multicast group, with clouds defined as the full host names. Here all the multicast group members are displayed with their host names whenever available.
Figure 6.8 Micro-Topology Window
Figure 6.9 illustrates the SDES information of the multicast group members, which can be obtained by clicking on the virtual root node of the multicast group. The first node shown is the active sender.
Figure 6.9 Detailed Nodal RTP Statistics - RTCP SDES Information
The source code and binary directories for Solaris 2.5 are available for viewing, without warranty.
In particular, the following mview source code modules are created or modified to process RTCP statistics:
rtpstats.h and rtpstats.c for RTCP packet reception and initial parsing;
structure.h for the internal RTCP database structure;
handler.c for the major part of the new handler code, including rtpstats_hlr();
app.h for mapping C functions rtpstats_hlr(), show_rtpstats() into Tcl commands;
display.tcl and misc.tcl for RTCP related GUI;
global.h for the addition of the curr_mgroup variable;
display.c for the function ecolor(), where the guard "rln >= 0" is added to all the "if" conditions to prevent a non-valid reverse link from locking the link display color in some display settings (see the sketch below).
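The form of the ecolor() change is sketched below; only the rln >= 0 guard is taken from the actual fix, while the surrounding names and thresholds are hypothetical.

    /* Illustrative form of the ecolor() fix in display.c: a negative rln
     * marks a non-valid reverse link, so every color-selection "if" is
     * guarded with rln >= 0 to keep such links from locking the link
     * display color.  All names other than rln are hypothetical. */
    static const char *pick_color(int rln, const double *rate, double thresh)
    {
        if (rln >= 0 && rate[rln] >= thresh)
            return "green";             /* healthy link */
        if (rln >= 0 && rate[rln] > 0.0)
            return "yellow";            /* degraded link */
        return "gray";                  /* no valid reverse-link data */
    }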
There are two major areas that need further enhancement:
Currently, the RTCP statistics collection function is invoked manually by the user; it would be better to have this function invoked automatically and periodically. Also, an internal timeout mechanism should be added to remove out-of-date members from the display and the database.
The text-based RTCP statistics display could be enhanced with a GUI similar to those of [4] and [5].
[1] Merit Network Inc. and University of Michigan, source code and documentation of mview.
[2] H. Schulzrinne et al., "RTP: A Transport Protocol for Real-Time Applications," RFC 1889, January 1996.
[3] H. Schulzrinne, source code of rtpdump.
[4] S. McCanne et al., vat, the LBNL Audio Conferencing Tool.
[5] A. Swan et al., rtpmon, UC Berkeley.
[6] C. Huitema, Routing in the Internet, Prentice Hall, 1995.
For comments, please send email to {liao, sang, mk}@ctr.columbia.edu.
Last update: January 16, 1997.