- Two providers - C&W and Savvis - delivered picture-perfect availability, with zero downtime. Savvis ran trouble-free for the full monthlong test; C&W also had perfect uptime, though its test window began a few days later than the other providers' because of a test configuration error on its part.
- The networks of four providers - C&W, Level 3, Savvis and WilTel - met or exceeded the vaunted “five nines” standard for network uptime during normal operations (the sketch after this list shows how little downtime that standard allows).
- Sprint’s average delay matched the theoretical minimum for a beam of light traveling cross-country (see story; the sketch after this list shows the calculation).
- Average jitter for all ISPs was measured in microseconds, well below the point where application performance could suffer.
- Packet loss for all providers averaged just 0.01%.
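
Two of the bullets above reduce to one-line calculations. The Python sketch below is a sanity check, not part of the test: the route length and fiber refractive index are assumed values (the article gives neither), and the downtime budgets follow directly from the definition of 99.999 percent availability.

    # Sanity-check sketch. ROUTE_KM and FIBER_INDEX are assumed,
    # illustrative values, not figures from the test.

    SECONDS_PER_YEAR = 365 * 24 * 3600
    SECONDS_PER_MONTH = 30 * 24 * 3600

    def downtime_budget(availability: float, window_s: int) -> float:
        """Seconds of downtime allowed by an availability target."""
        return (1.0 - availability) * window_s

    # "Five nines" = 99.999% availability.
    print(f"yearly budget:  {downtime_budget(0.99999, SECONDS_PER_YEAR):.0f} s")   # ~315 s (~5.3 min)
    print(f"monthly budget: {downtime_budget(0.99999, SECONDS_PER_MONTH):.1f} s")  # ~25.9 s

    # Theoretical minimum cross-country delay: light in fiber travels at
    # roughly c/n; real routes are longer than great-circle distance.
    C_KM_PER_S = 299_792.458   # speed of light in vacuum
    FIBER_INDEX = 1.47         # assumed refractive index of silica fiber
    ROUTE_KM = 4_000           # assumed coast-to-coast fiber route

    one_way_ms = ROUTE_KM / (C_KM_PER_S / FIBER_INDEX) * 1000
    print(f"one-way minimum: {one_way_ms:.1f} ms; round trip: {2 * one_way_ms:.1f} ms")

Under these assumptions, five nines permits only about 26 seconds of downtime per month, which is what makes the zero-downtime results above stand out.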
That day, AT&T handled 431 million voice calls - 20 percent more than usual and the most it had ever carried on a business day.
At AT&T's Network and Computing Services organization, one of the most important gauges of network reliability is Defects Per Million. This measurement is a statistically valid record of how many calls per million did not go through the first time because of a network procedural, hardware or software failure. Defects Per Million is not an average; it is an accurate accounting of network performance, tallied by the day as well as by month-to-date and year-to-date.
During 1997, AT&T's Defects-Per-Million performance was 173, which means that of every one million calls placed on the AT&T network, only 173 did not go through the first time due to a network failure. That equals a network reliability rate of 99.98 percent for 1997.
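
The two figures are consistent: 173 failures per million is a 0.0173 percent failure rate, which rounds to the stated 99.98 percent reliability. A minimal sketch of the arithmetic, with hypothetical function names:

    def defects_per_million(failed_first_try: int, total_calls: int) -> float:
        """Calls per million that failed on the first attempt
        because of a network failure."""
        return failed_first_try / total_calls * 1_000_000

    def reliability_percent(dpm: float) -> float:
        """Network reliability rate implied by a DPM figure."""
        return (1_000_000 - dpm) / 1_000_000 * 100

    print(f"{reliability_percent(173):.4f}%")   # 99.9827% -> reported as 99.98%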