The origins of the testbed initiative date back to 1987, when CNRI submitted a proposal to NSF and was subsequently awarded a grant to plan a research program on very high speed networks. The original proposal, which involved participants from industry and the university research community, was written by Robert Kahn of CNRI and David Farber of the University of Pennsylvania. Farber later became an active researcher on the follow-on effort, while CNRI ran the overall initiative. As part of this planning, CNRI issued a call for white papers in October 1988. This call, published in the Commerce Business Daily, requested submissions in the form of white papers from organizations with technological capabilities relevant to very high speed networking.
The selection of organizations to participate in the resulting testbed effort was carried out in accordance with normal government practices. A panel of fourteen members, drawn largely from the government, was assembled to review the white papers and to make recommendations for inclusion in the program. Those recommendations formed the basis for determining the government-funded participants. CNRI then worked with telecommunications carriers to obtain commitments for wide-area transmission facilities and with others in industry to develop a cohesive plan for structuring the overall program.
A subsequent proposal was submitted to NSF in mid-1989 for government funding of the non-industry research participants, with the wide-area transmission facilities and industrial research participation to be provided by industry at no cost to the government. A Cooperative Agreement, funded jointly by NSF and DARPA, was awarded to CNRI later that year to carry out testbed research on gigabit networks. The research efforts were underway by Spring 1990. Government funding over the resulting five-year duration of the project totaled approximately $20M; these funds were used primarily for university research efforts, and the total value of industry contributions over this period is estimated to be perhaps 10 to 20 times greater than the government funding.
The overall effort was managed by CNRI in conjunction with NSF and DARPA program officials. Within NSF, Darleen Fisher of the CISE directorate provided program management throughout the entire effort. A series of program managers, beginning with Ira Richer, were responsible for the effort at DARPA. Many others at both NSF and DARPA were also involved over the duration of the effort. In addition, each testbed had its own internal management structure consisting of at least one representative from each participating organization in that testbed; the particular form and style of internal management was left to each testbed's discretion.
The coordinating role of a lead organization, played by CNRI, was essential in helping to bridge the many gaps between the individual research projects, industry, government agencies and potential user communities. At the time this effort began, there was no clearly visible path to making this kind of progress happen.
To provide an independent critique of project goals and progress, CNRI formed an advisory group consisting of six internationally recognized experts in networking and computer applications. A different, yet similarly constituted, panel was formed by NSF to review progress during the second year of the project.
Administrative coordination of the testbeds was carried out in part through the formation of the Gigabit Testbed Coordinating Committee ("Gigatcc"), made up of one or two representatives from each participating testbed organization and CNRI/NSF/DARPA project management. The Gigatcc, chaired by Professor Farber, met three to four times per year during the course of the initiative. In addition, each research organization provided CNRI with quarterly material summarizing progress, and each testbed submitted annual reports at the completion of each of the first three years of the initiative. Final reports for each testbed were prepared and are being submitted along with this document.
To encourage cross-fertilization of ideas and information sharing between the testbeds, CNRI held an annual three-day workshop attended by researchers and others from the five testbeds, plus invited attendees from government, industry, and the general networking research community. Attendance at these workshops typically ranged from 200 to 300 people, and the workshops served both as a vehicle for information exchange among project participants and as a stimulus for the transfer of testbed knowledge to industry. CNRI also assisted the U.S. Government in hosting a Gigabit Symposium in 1991, attended by over 600 individuals and chaired by Al Vezza of MIT.
A number of small inter-testbed workshops were also held during the course of the project to address specific testbed-related topics that could especially benefit from intensive group discussion. A total of seven such workshops were held, on topics including HIPPI/ATM/SONET interworking, gigabit TCP/IP implementation, gigabit applications and support tools, and operating system issues. In addition, an online database was established at CNRI early in the project to make information available via the Internet to project participants about new vendor products relevant to gigabit networking, and to maintain a list of publications and reports generated by testbed researchers.
The five testbeds were located around the U.S. as shown in Figure 3-1.
The original four Aurora testbed research participants were Bellcore, IBM Research, MIT, and Penn. Arizona was added as a research collaborator during 1992. Both Bellcore and IBM contributed state-of-the-art laboratory equipment and Bellcore provided financial support to MIT and the University of Pennsylvania. The participating carriers were Bell Atlantic and NYNEX for regional connectivity and MCI for long distance links between Bell Atlantic and NYNEX facilities.
Figure 3-2. Aurora Physical Topology
Four research sites were connected by carrier-provided SONET links through regional carrier equipment offices (the rectangles in the figure above), using commercial SONET termination equipment at each research site. The research locations were Cambridge, Massachusetts; White Plains, New York; Morristown, New Jersey (via West Orange, N.J.); and Philadelphia, Pennsylvania. The facility design allowed two distinct linear topologies to be used in parallel, one for ATM experiments and one for PTM experiments, as shown in Figure 3-3.
Figure 3-3. Aurora Logical Topologies
All the intersite physical links used SONET OC-48 2.5 Gbps transmission rates, with each link containing four SONET OC-12 622 Mbps full duplex logical channels. Three of these channels were used in the testbed to realize the dual linear topologies shown in Figure 3-3, using manually configured routing connections in the SONET equipment within the carrier sites. Thus each research site could act as a switching point between two other sites for one of the topologies, using two of the 622 Mbps channels, and as an endpoint in the other topology, using the third 622 Mbps channel. Prototype wide-area ATM switches were provided by Bellcore, and prototype wide-area PTM switches were provided by IBM.
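As a rough illustration of the channel arithmetic (a sketch for this report's reader, not part of the original testbed documentation), the following Python fragment checks how four OC-12 channels fit within an OC-48 link and how the three channels in use at a site divide between the switching and endpoint roles described above:

```python
# Back-of-the-envelope check of the Aurora SONET channelization described above.
# SONET rates are multiples of the STS-1/OC-1 rate of 51.84 Mbps.
OC1_MBPS = 51.84

def oc_rate(n: int) -> float:
    """Nominal line rate in Mbps of an OC-n SONET signal."""
    return n * OC1_MBPS

oc48 = oc_rate(48)   # ~2.5 Gbps intersite physical link
oc12 = oc_rate(12)   # ~622 Mbps logical channel

print(f"OC-48 link rate:    {oc48:.0f} Mbps")
print(f"OC-12 channel rate: {oc12:.0f} Mbps")
print(f"OC-12 channels per OC-48 link: {int(round(oc48 / oc12))}")  # 4

# Three of the four channels are in use at each site: two carry the pass-through
# connection for one linear topology (ATM or PTM) and one terminates there as an
# endpoint of the other topology.
roles = {"pass-through channels": 2, "endpoint channel": 1}
print(f"Channels in use per site: {sum(roles.values())} of 4")
```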
The research goals of the Aurora testbed were focused on gigabit networking technology for interconnecting workstation-class computers. Specific research topics included exploration of both ATM and PTM end-to-end wide-area switching technologies, design and evaluation of local ATM and PTM distribution technologies and their wide-area interworking, investigation of workstation interface architectures and operating system issues for gigabit networking, design and evaluation of wide-area control algorithms, and investigation of distributed shared memory for wide-area gigabit networking.
Research participants in the Blanca testbed included AT&T Bell Laboratories, UC Berkeley, Illinois, Lawrence Berkeley Laboratory (LBL), the National Center for Supercomputing Applications (NCSA), and Wisconsin. AT&T provided the long distance facilities and wide-area ATM prototype switches. AT&T also provided portions of the local link facilities, along with Pacific Bell and Bell Atlantic. Astronautics was also involved during part of the work. The physical topology of Blanca is shown in Figure 3-4, where solid lines indicate 622 Mbps or higher data rate links and dashed lines indicate 45 Mbps T3 links. Approximate distances are shown in the figure for the high speed wide-area portion of the testbed between Wisconsin and Illinois.
Two separate high speed portions of Blanca were established: one connected the Illinois-NCSA campus located in Urbana-Champaign with Madison, Wisconsin via Chicago using all-optical links, and the second, using HIPPI technology, connected the UC Berkeley Computer Science department with LBL, both located in Berkeley, California. The 45 Mbps links were part of a lower-speed university testbed previously established by AT&T called Xunet (experimental university network), which involved other sites besides those shown in Figure 3-4.
Blanca research goals included both wide-area network technology and supercomputer applications, with a strong emphasis on collaboration between the network and application researchers. Network technology research was focused on multimedia quality-of-service multiplexing strategies, flow/congestion control, ATM fast call setup, switch control software, distributed shared memory latency hiding strategies, and supercomputer gigabit-rate I/O software. Applications research was focused on remote visualization and collaboration for supercomputer modeling, for example thunderstorm modeling, and on the development of middleware software to support the remote visualization and collaboration involved in the applications.
Casa testbed research participants included Caltech, JPL, Los Alamos National Laboratory (LANL), and the San Diego Supercomputer Center (SDSC) in conjunction with UCLA. Additional software development support was provided by the Parasoft Corporation. The participating carriers were MCI, Pacific Bell and US West, each providing SONET transmission and termination equipment. The Casa physical topology is shown in Figure 3-5, with MCI providing long distance SONET OC-48 2.5 Gbps links between Las Cruces, New Mexico and a junction point in Los Angeles, and from Los Angeles to termination equipment located at SDSC in San Diego. Pacific Bell provided the SONET links between Los Angeles and Pasadena to connect Caltech and JPL, and US West provided the link from Las Cruces to Los Alamos, New Mexico. The approximate distance of each major link is shown in the figure.
Figure 3-5. Casa Physical Topology
In Casa, the 2.5 Gbps links were divided into 16 SONET OC-3c 155 Mbps logical channels, with 8 of these channels made available at each site to provide a 1.2 Gbps aggregate full duplex rate for use by the researchers. The testbed logical topology is shown in Figure 3-6. While the Los Angeles junction for the LANL SONET link in Figure 3-5 was much closer to Caltech than to SDSC, carrier provisioning considerations resulted in the continuation of this link to SDSC using 1.2 Gbps of the OC-48 physical link between Los Angeles and San Diego, with the other 1.2 Gbps used to connect SDSC through Los Angeles to Caltech.
Figure 3-6. Casa Logical Topology
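The Casa channel allocation can be illustrated with a similar back-of-the-envelope calculation (an illustrative sketch, not taken from the testbed's own documentation), showing how sixteen OC-3c channels fit in an OC-48 link and how eight of them yield the 1.2 Gbps aggregate rate per site:

```python
# Rough check of the Casa channelization described above (illustrative only).
OC1_MBPS = 51.84

oc48 = 48 * OC1_MBPS          # ~2.5 Gbps physical link
oc3c = 3 * OC1_MBPS           # ~155 Mbps per OC-3c logical channel

channels_per_link = int(round(oc48 / oc3c))    # 16 OC-3c channels per OC-48 link
per_site_channels = 8                          # channels made available at each site
per_site_gbps = per_site_channels * oc3c / 1000

print(f"OC-3c channels per OC-48 link: {channels_per_link}")
print(f"Aggregate full duplex rate per site: {per_site_gbps:.2f} Gbps")  # ~1.24 Gbps

# The OC-48 between Los Angeles and San Diego was split in half: eight channels
# carried the continuation of the LANL link to SDSC, and the other eight connected
# SDSC through Los Angeles to Caltech.
```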
The primary focus of Casa research was on distributed heterogeneous supercomputing applications involving very large computational problems. Three representative applications were selected for investigation: global climate modeling, quantum chemical dynamics modeling, and geophysical modeling, using state-of-the-art supercomputers distributed among the sites. To support application distribution and experimentation, the Casa work included development of new software to provide distributed communication, execution control, and progress monitoring. Network technology research also played an important role in Casa, through the development of gateway technology to interface local HIPPI site distribution technology to wide-area SONET links and to use the local HIPPI sites for wide-area PTM switching, the exploration of outboard protocol processing using simple interfacing protocols, and the investigation of wide-area transport protocol performance at gigabit speeds.
The Nectar testbed research collaborators consisted of Carnegie-Mellon University (CMU), the Pittsburgh Supercomputing Center (PSC), and Bellcore. The participating carrier in this testbed was Bell Atlantic. Nectar explored issues for wide-area gigabit networking using the physical testbed shown in Figure 3-7. This configuration consisted of a carrier-supplied connection between the CMU campus and the PSC with specially designed ATM/SONET transmission equipment located at each site. The optical transmission portion was developed by Alcatel and ran at 2.4 Gbps. A HIPPI-ATM-SONET box developed by Bellcore allowed multiple OC-3 SONET channels to be aggregated. Each OC-3 channel contained HIPPI packets broken into multiple ATM cells.
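The kind of segmentation such a box performs can be sketched as follows; the AAL5-style trailer and padding used here are assumptions for illustration only, since the text does not specify the adaptation layer Bellcore actually used:

```python
# Illustrative sketch of HIPPI-over-ATM segmentation: a HIPPI packet is carried
# on an OC-3 channel as a stream of 53-byte ATM cells, each with a 48-byte payload.
import math

ATM_CELL_BYTES = 53
ATM_PAYLOAD_BYTES = 48
AAL5_TRAILER_BYTES = 8   # assumed AAL5-style trailer (illustrative assumption)

def cells_for_hippi_packet(packet_bytes: int) -> int:
    """ATM cells needed to carry one HIPPI packet under the AAL5-style assumption."""
    return math.ceil((packet_bytes + AAL5_TRAILER_BYTES) / ATM_PAYLOAD_BYTES)

burst = 64 * 1024            # example: a 64 KB HIPPI burst
n_cells = cells_for_hippi_packet(burst)
overhead = n_cells * ATM_CELL_BYTES / burst - 1
print(f"{burst} B HIPPI burst -> {n_cells} ATM cells "
      f"(~{overhead:.1%} cell header and padding overhead)")
```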
This testbed was built on a previously established testbed local to the CMU campus, which originally operated at speeds of 100 Mbps. Some aspects of the earlier testbed's local networking architecture were carried over to the gigabit testbed, such as the use of crossbar switches for local distribution. The gigabit testbed connectivity allowed a number of different supercomputer architectures located at PSC during the project to be used in conjunction with workstations and an experimental parallel computer located at CMU.
Nectar research goals included both networking technology and distributed computation, with a particular focus on developing new software programming environments for general application classes using both supercomputers and workstations. In the network technology area, the testbed investigated solutions to the problem of wide-area ATM/SONET carrier network access at gigabit rates when different local area technologies are used, and host I/O operating system and protocol issues for both workstations and highly parallel distributed memory machines. Two types of applications were used in the work, one for distributed heterogeneous computation of chemical plant process modeling and the second for distributed homogeneous computation of large combinatorial problems using a network of workstations.
Vistanet testbed research participants included the Computer Science and Radiation Oncology departments at the University of North Carolina (UNC), BellSouth Research, GTE Laboratories, and MCNC in conjunction with North Carolina State University (NCSU). BellSouth and GTE also provided financial support to the research participants as well as SONET transmission facilities for this testbed, with the physical topology and distances involved shown in Figure 3-8. In addition, BellSouth provided a prototype ATM wide-area carrier switch in their Chapel Hill central office (CO) for testbed use, and GTE provided a cross-connect switch in their central office.
The Vistanet testbed was unique in its use of ATM switching within a carrier CO. This provided a star switching topology in which all ATM traffic between MCNC, UNC/CS and UNC/RO was switched within the Chapel Hill CO. The testbed was also unique in its use of synchronous OC-12c 622 Mbps SONET channels, rather than striping over lower speed channels, in conjunction with its OC-48 2.5 Gbps trunking.
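The difference can be illustrated with a small comparison using nominal SONET rates (an illustrative sketch, not drawn from the testbed documentation) of a single concatenated OC-12c channel versus striping the same capacity over four OC-3 channels:

```python
# Illustrative contrast between a concatenated OC-12c channel and striping the
# same capacity over four OC-3 channels (nominal SONET rates).
OC1_MBPS = 51.84

oc12c = 12 * OC1_MBPS            # one ~622 Mbps channel: a single flow sees the full rate
striped = [3 * OC1_MBPS] * 4     # four ~155 Mbps channels: a flow must be split and reassembled

print(f"Concatenated OC-12c rate: {oc12c:.0f} Mbps")
print(f"Striped alternative: {len(striped)} x {striped[0]:.0f} Mbps = {sum(striped):.0f} Mbps, "
      f"requiring per-channel striping and reordering at the endpoints")
```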
Vistanet research goals were centered around the use of an interactive 3-D cancer radiation treatment planning medical application, which provided a focus for investigating a number of networking as well as application issues. The testbed provided connectivity between a supercomputer at MCNC, a special-purpose parallel rendering computer at UNC/CS (Computer Science), and a medical visualization workstation at UNC/RO (Radiation Oncology). In addition to investigating how to achieve interactive user response times in conjunction with the distributed computation, a specific research focus involved an exploration of new software graphics techniques for providing gigabit network-based interaction. Network technology goals included developing techniques for interfacing local networks with wide-area switched ATM/SONET at synchronous 622 Mbps speeds, exploring outboard protocol processing, evaluating different transport protocols, and investigating real-time data traffic capture and characterization using actual application traffic.