Second Progress Report on the HICID Project,

March 1, 1998 - May 31, 1998.

 

Peter T. Kirstein and Jon Crowcroft

May 28, 1998

 

 

  1. INTRODUCTION

    In the last report [1], we stated that effort devoted to the project really started only from February 1, 1998. Moreover, its budget envisages only 10 mm (man-months), and much of that effort is designated for QoS experiments with running multimedia sessions carried out under the HIGHVIEW and JAVIC projects. In addition, we intend to carry out many of the experimental activities using LEARNET and MECCANO - as mentioned in the original proposal.

     

    In practice there have been further delays in the accompanying projects. EU bureaucracy delayed MECCANO; it has now started - but only on May 1, 1998. LEARNET has been delayed further by problems in accessing the BT tower termination; the UCL connection is now not expected to be complete before June 1998. JAVIC does not expect to do any on-line WAN sessions before 1999. HIGHVIEW has been doing a number of trials over SuperJanet, so we could have been instrumenting the network performance at the same time. In addition to the above, the CAIRN routers have been undergoing further substantial upgrades during the present period. These are very significant, and are discussed further below. Finally, Jon Crowcroft returned from his sabbatical only towards the end of April.

     

    In view of the above, we have still been charging minimal effort to the project - though we have concentrated on building a firm technical base for the time when we devote more. This allows us to carry out the requisite support experiments with JAVIC and HICID until the third quarter of 1999 - using the work-plan outlined in the proposal. We have maintained the plan mentioned in [1] of not doing serious testing of our QoS tools before August 1998. We have modified it in one important respect: we will now use only the version of the CAIRN routers that has a dual IPv6/IPv4 stack. This will make the project look much further into the future.

     

  2. The Proposed Work-Plan

    The work plan envisaged that many of the basic router configurations would be set up under the DARPA and MECCANO projects. The former has been proceeding; the latter has been delayed as stated.

     

    We repeat below the progress originally promised, together with our revised schedule, based on a start date of February 1, 1998.

     

     

    NO | ACTIVITY | Proposal Month | Revised Month
    1 | Set up workstations, conference rooms, networks, basic routers | 4 | 8
    2 | Set up gateways, measurement facilities, servers, routers with PIM and with DVMRP | 2 | 6
    3 | Do test experiments with PIM and DVMRP; set up RSVP, CBT, WFQ and RED | 3 | 10
    4 | Do test experiments with WFQ + RSVP, CBT and RSVP, WFQ + RED | 6 | 10
    5 | Do application-level experiments and pilot trials on MECCANO, HIGHVIEW and JAVIC tools with different servers with single streams | 9 | 12
    6 | Extend test and application trials to use the later versions of the above tools, with improved QoS reservations | 15 | 18

     

    There are clear delays; the reasons for them are discussed further in Section 3. We believe the resulting system will be much more in tune with future Internet development. Moreover, the much longer duration originally proposed for the HIGHVIEW and JAVIC projects ensures that we will, as a result, be able to support these projects for longer.

  3. Progress to Date

    3.1 Main Activities

    The main activity in the last quarter has been to understand the current status of the routers we want to use in HICID, and to set up a suitable testbed inside UCL to test the system with different QoS algorithms. We describe below the reasons for choosing a particular router implementation, its capability to support ATM, a mechanism for adding QoS algorithms, an analysis of the queuing models to be studied, and a status report on the implementation.

    3.2 Choice of Router

    Fundamental to our approach is the use of QoS-aware PC routers. We are deploying IP routers built out of PCs and using them as tools for research and experimental network development. The platform is based on BSD UNIX; in particular we are using a FreeBSD 2.2.X release, which is a 4.4 BSD UNIX derivative with non-restrictive copyrights. FreeBSD was chosen because it is still the reference TCP/IP implementation whenever there is a need for an open platform into which new protocols and architectures can easily be imported and tested. This is the case in HICID and related UCL activities, which require support for high-speed network technologies such as ATM or satellite network interface cards, combined with a low-overhead IP forwarding path. PCs are particularly suitable for deployment in significant numbers for experiments; their processing power is increasing rapidly, while their cost is dropping. These router configurations are already being deployed successfully in the US nation-wide CAIRN testbed - of which UCL is the only non-US site. We have also liaised on this router with Alan O'Neill's group at BT Laboratory; they have been interested in acquiring it, and we have facilitated that process.

       

      IP Integrated Services and QoS management protocol implementations are available for this platform. Prototypes of future services such as IPv6, security (IPSEC), Mobile IP, and Internet service discrimination and pricing are also already available (or will be in the near future) for the platform.

       

    3.3 ATM Support

    A critical criterion for the platform selection is its support for high-speed network technologies such as ATM. An ATM driver has been written for BSD UNIX by C. Cranor; it was modified to support BPF (Berkeley Packet Filter) for efficient packet capture on network interfaces, and also rate limiting on PVCs. The latter is a particularly useful feature for wide-area ATM links. The implementation also assigns a separate logical interface (pseudo-interface) to each PVC created; this is a very useful feature for routing protocols, since it allows each PVC to be treated as a separate link, and the same IP address is not used for terminating multiple PVCs. This implementation also has support for native multicast. For more information, see the documentation in the modified driver distribution at http://www.cairn.net/contributions.html
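
      The per-PVC rate limiting can be pictured as a token-bucket shaper attached to each pseudo-interface. The following user-space sketch is purely illustrative - it is not the driver code, and the structure, parameter names and rates are our own:

        #include <stdio.h>

        /* Illustrative token-bucket shaper, one instance per PVC pseudo-interface.
         * Not the actual driver code; names and parameter values are hypothetical. */
        struct shaper {
            double rate_bps;    /* sustained rate allowed on the PVC        */
            double depth_bits;  /* maximum burst the bucket can accumulate  */
            double tokens;      /* tokens (bits) currently available        */
            double last;        /* time of last update, in seconds          */
        };

        /* Return 1 if a packet of pkt_bits may be sent at time now, else 0. */
        static int shaper_allow(struct shaper *s, double now, double pkt_bits)
        {
            s->tokens += (now - s->last) * s->rate_bps;   /* accumulate tokens  */
            if (s->tokens > s->depth_bits)
                s->tokens = s->depth_bits;                /* cap the burst size */
            s->last = now;
            if (s->tokens < pkt_bits)
                return 0;                                 /* would exceed the rate */
            s->tokens -= pkt_bits;
            return 1;
        }

        int main(void)
        {
            /* e.g. a 2 Mbit/s PVC allowed a 24 kbit burst */
            struct shaper pvc0 = { 2e6, 24e3, 24e3, 0.0 };
            double t;
            /* offer a 1500-byte packet every millisecond (12 Mbit/s) */
            for (t = 0.0; t < 0.01; t += 0.001)
                printf("t=%.3f  send=%d\n", t, shaper_allow(&pvc0, t, 1500 * 8));
            return 0;
        }

      Offered traffic above the PVC rate is held back by the shaper instead of being sent as a burst that the ATM network would discard.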

       

    3.4 ALTQ Framework for Alternative Queuing

    The importance of traffic management in the Internet has long been realised. Traffic management can be implemented by a diverse set of mechanisms, but the heart of the technology is the packet scheduling mechanism (queuing). Queuing disciplines for traffic management have largely been proposed and studied by analysis and simulation, because they are difficult to implement in network equipment; BSD UNIX has only a simple drop-tail FIFO queue. ALTQ [2] is a framework for incorporating a variety of queuing disciplines with different components, such as scheduling strategies, packet drop strategies, buffer allocation strategies and multiple priority levels. It has been developed at Sony Computer Science Laboratories. ALTQ today supports CBQ, RED, Weighted Fair Queuing, RSVP stubs for CBQ, and Explicit Congestion Notification (ECN). Our work will be centred mostly on CBQ and RED deployment. We will use ALTQ in our routers - as does much of the CAIRN community.
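
      Conceptually, a framework of this kind replaces the single output queue of an interface with a pluggable discipline that supplies its own enqueue and dequeue operations. The sketch below is our own simplified model of that idea (the types and names are invented), not the actual ALTQ kernel interface:

        #include <stdio.h>
        #include <stdlib.h>

        /* A packet and a pluggable queuing discipline.  This mirrors the idea behind
         * ALTQ (a discipline provides its own enqueue/dequeue), but the types and
         * names here are hypothetical, not the real kernel interface. */
        struct pkt { int len; struct pkt *next; };

        struct discipline {
            const char *name;
            int  (*enqueue)(struct discipline *d, struct pkt *p); /* 0 = accepted  */
            struct pkt *(*dequeue)(struct discipline *d);         /* NULL = empty  */
            void *state;                                          /* per-discipline data */
        };

        /* A plain drop-tail FIFO as the default discipline. */
        struct fifo_state { struct pkt *head, *tail; int len, limit; };

        static int fifo_enqueue(struct discipline *d, struct pkt *p)
        {
            struct fifo_state *q = d->state;
            if (q->len >= q->limit) { free(p); return -1; }       /* tail drop */
            p->next = NULL;
            if (q->tail) q->tail->next = p; else q->head = p;
            q->tail = p; q->len++;
            return 0;
        }

        static struct pkt *fifo_dequeue(struct discipline *d)
        {
            struct fifo_state *q = d->state;
            struct pkt *p = q->head;
            if (p) { q->head = p->next; if (!q->head) q->tail = NULL; q->len--; }
            return p;
        }

        int main(void)
        {
            struct fifo_state st = { NULL, NULL, 0, 4 };
            struct discipline ifq = { "drop-tail", fifo_enqueue, fifo_dequeue, &st };
            struct pkt *p; int i;

            for (i = 0; i < 6; i++) {                 /* offer 6 packets to a queue of 4 */
                p = malloc(sizeof *p); p->len = 1500;
                if (ifq.enqueue(&ifq, p) != 0)
                    printf("packet %d dropped\n", i);
            }
            while ((p = ifq.dequeue(&ifq)) != NULL) { /* the driver drains the queue */
                printf("sent %d bytes\n", p->len);
                free(p);
            }
            return 0;
        }

      Replacing the two function pointers (and the state) with a CBQ or RED implementation changes the queuing behaviour without touching the driver's transmit path; that is the property we rely on.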

       

    3.5 The Queuing Models to be Studied

    We give some details here of the different algorithms mentioned in Section 3.4.

       

      CBQ (Class Based Queuing). CBQ was introduced by Jacobson and Floyd, and achieves partitioning and sharing of link bandwidth through hierarchically structured classes. Each class has its own queue and is assigned its share of the bandwidth. A child class can borrow bandwidth from its parent class in the hierarchy as long as excess bandwidth is available. The basic components are the classifier and the estimator. The classifier assigns arriving packets to the appropriate class, and the estimator estimates the bandwidth recently used by a class; if a class has exceeded its predefined limit, the estimator marks the class as overlimit. The scheduler determines the next packet to be sent from the various classes, based on the priorities and states of the classes.
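
      As a rough illustration of the estimate/schedule cycle, the toy model below keeps one byte counter per class and lets an overlimit class continue only when no underlimit class has traffic to send. The classes, shares and window size are invented, and the real CBQ algorithm (with its hierarchical borrowing rules) is considerably more involved:

        #include <stdio.h>

        /* Toy two-class CBQ-like scheduler on a 10 Mbit/s link.  The estimator marks
         * a class overlimit once it exceeds its share over a fixed window; the
         * scheduler then prefers underlimit classes, letting an overlimit class
         * borrow only when the link would otherwise go idle.  Purely illustrative. */
        #define LINK_BPS  10e6
        #define WINDOW_S  0.02              /* estimator averaging window (seconds) */

        struct cls {
            const char *name;
            double share;                    /* fraction of link bandwidth allotted */
            double sent_bits;                /* bits sent in the current window     */
            int backlog;                     /* packets waiting in this class       */
        };

        static int overlimit(const struct cls *c)
        {
            return c->sent_bits > c->share * LINK_BPS * WINDOW_S;
        }

        static struct cls *schedule_next(struct cls *c, int n)
        {
            int i;
            for (i = 0; i < n; i++)          /* serve backlogged underlimit classes first */
                if (c[i].backlog > 0 && !overlimit(&c[i]))
                    return &c[i];
            for (i = 0; i < n; i++)          /* otherwise allow borrowing of the idle link */
                if (c[i].backlog > 0)
                    return &c[i];
            return NULL;
        }

        int main(void)
        {
            struct cls classes[2] = {
                { "audio", 0.3, 0.0, 3 },    /* 30% share, 3 packets queued  */
                { "bulk",  0.7, 0.0, 30 },   /* 70% share, 30 packets queued */
            };
            struct cls *c;
            while ((c = schedule_next(classes, 2)) != NULL) {
                c->sent_bits += 1500 * 8;    /* account one 1500-byte packet */
                c->backlog--;
                printf("sent from %-5s (%s)\n", c->name,
                       overlimit(c) ? "overlimit, borrowing" : "underlimit");
            }
            return 0;
        }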

       

      RED (Random Early Detection). RED was introduced by S. Floyd and V. Jacobson, and is an active queue management scheme for best-effort services. It seems more attractive than CBQ for this purpose, since it requires neither flow state nor class state, and is simpler and more scalable. It can be used as an implicit congestion notification mechanism that drops or marks packets stochastically according to the average queue length. Since RED does not require per-flow state, it is considered scalable and suitable for backbone routers. At the same time, RED can be viewed as a buffer management mechanism and can be integrated into other packet scheduling schemes.
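
      The core of RED is an exponentially weighted moving average of the queue length, with a drop (or mark) probability that rises linearly between two thresholds. The sketch below follows the published algorithm only in outline - the parameter values are illustrative, and refinements such as the count-based probability correction of the original paper are omitted:

        #include <stdio.h>

        /* Simplified RED: maintain an EWMA of the queue length and drop an arriving
         * packet with a probability that grows linearly between min_th and max_th.
         * Parameter values below are illustrative only. */
        #define WQ      0.002   /* EWMA weight for the average queue length  */
        #define MIN_TH  5.0     /* below this average, never drop            */
        #define MAX_TH  15.0    /* above this average, always drop           */
        #define MAX_P   0.1     /* drop probability as avg approaches max_th */

        static double avg = 0.0;            /* running average queue length */

        /* Update the average for the instantaneous queue length q and return the
         * probability of dropping (or ECN-marking) the arriving packet. */
        static double red_drop_prob(int q)
        {
            avg = (1.0 - WQ) * avg + WQ * q;
            if (avg < MIN_TH)
                return 0.0;
            if (avg >= MAX_TH)
                return 1.0;
            return MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH);
        }

        int main(void)
        {
            int q, t;
            /* Let the queue build up steadily and watch the drop probability grow. */
            for (t = 0, q = 0; t < 20000; t++) {
                double p = red_drop_prob(q);
                if (t % 2000 == 0)
                    printf("t=%5d  queue=%2d  avg=%6.2f  drop_prob=%.3f\n", t, q, avg, p);
                if (t % 1000 == 999 && q < 20)
                    q++;            /* queue grows by one packet every 1000 arrivals */
            }
            return 0;
        }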

       

      RSVP (Resource Reservation Protocol). RSVP is a resource reservation set-up protocol for integrated services. It is not part of ALTQ; the reference implementation comes from ISI. RSVP is a signalling protocol used to set up the traffic control modules of the routers along a path. However, no traffic control module comes with the ISI distribution, so there is great demand for a queuing implementation capable of traffic control. CBQ was not designed for use with RSVP, but Sun has used it as the traffic control module for RSVP in their Solaris implementation.
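
      Conceptually, a reservation carries a token-bucket traffic specification which the router must turn into state in its traffic control module, for example a CBQ class with the requested rate. The sketch below shows that idea only in caricature; the structures and the admission test are our own invention and bear no relation to the ISI rsvpd interfaces:

        #include <stdio.h>

        /* Conceptual link between an RSVP reservation and a queuing module: the
         * flow's token-bucket Tspec is turned into a class in the packet scheduler.
         * Structure and function names are our own, not any real traffic control API. */
        struct tspec {
            double rate_bps;     /* sustained rate requested by the receiver */
            double bucket_bytes; /* burst tolerance of the flow              */
        };

        struct tc_class {
            int    id;
            double rate_bps;     /* bandwidth guaranteed to the flow         */
        };

        /* Admission control: accept the reservation only if enough of the link is
         * still unreserved, then create a class for the flow. */
        static int install_reservation(const struct tspec *t, double link_bps,
                                       double *reserved_bps, struct tc_class *out)
        {
            static int next_id = 1;
            if (*reserved_bps + t->rate_bps > link_bps)
                return -1;                       /* reservation refused */
            *reserved_bps += t->rate_bps;
            out->id = next_id++;
            out->rate_bps = t->rate_bps;
            return 0;
        }

        int main(void)
        {
            double reserved = 0.0;               /* on a 10 Mbit/s link */
            struct tspec flows[3] = { {4e6, 8e3}, {4e6, 8e3}, {4e6, 8e3} };
            struct tc_class cls;
            int i;
            for (i = 0; i < 3; i++) {
                if (install_reservation(&flows[i], 10e6, &reserved, &cls) == 0)
                    printf("flow %d admitted: class %d at %.1f Mbit/s\n",
                           i, cls.id, cls.rate_bps / 1e6);
                else
                    printf("flow %d refused: only %.1f Mbit/s left\n",
                           i, (10e6 - reserved) / 1e6);
            }
            return 0;
        }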

       

    3.6 The Current State of the Implementations

    We have made progress, but are not yet ready to deploy the routers outside UCL. This is one of the reasons for the significant delays mentioned in Section 2.

       

      We have identified problems in the process of integrating the above-mentioned modules. These are mostly due to the fact that the modules are developed by different groups, and interoperability is not complete. We have concentrated mostly on the ATM case, since ATM links are used in the wide area, where bandwidth is scarce and resource management is particularly important. The ALTQ distribution supports the original ATM driver; however, this version of the driver does not support rate limiting on PVCs. This renders it inappropriate for our purposes, because the wide-area PVCs have a bandwidth limitation, and thus the driver must provide traffic shaping; otherwise there will be traffic bursts at a rate which the PVC cannot support - leading to cell loss. Because an IP packet is made up of many cells, relatively small amounts of regular cell loss can lead to large losses of IP packets. This version of the driver could therefore only be used for local tests, where the PVC can easily be configured with enough capacity that it is not affected by the bursty traffic. We have had success in integrating other versions of the driver that do provide traffic shaping. We are still having difficulty with adding CBQ; it seems that it can be supported on only one of the pseudo-interfaces (namely en0). We are in the process of verifying this.
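
      To see why even modest cell loss matters (the figures below are purely an example): a 1500-byte packet carried over AAL5 occupies about 32 cells, and the whole packet is lost if any one of its cells is lost, so random cell loss is multiplied by roughly the number of cells per packet:

        #include <stdio.h>
        #include <math.h>

        /* Rough effect of random ATM cell loss on IP packet loss.  A packet is lost
         * if any of its cells is lost, so with independent cell losses
         * P(packet lost) = 1 - (1 - p_cell)^cells.  The packet size and overheads
         * here are only an example (1500-byte IP packet over AAL5 with LLC/SNAP). */
        int main(void)
        {
            int payload  = 1500;                        /* IP packet size in bytes        */
            int overhead = 8 + 8;                       /* LLC/SNAP header + AAL5 trailer */
            int cells = (payload + overhead + 47) / 48; /* 48-byte cell payloads, rounded up */
            double rates[] = { 1e-4, 1e-3, 1e-2 };      /* example cell loss rates        */
            int i;

            printf("a %d-byte packet occupies %d cells\n", payload, cells);
            for (i = 0; i < 3; i++)
                printf("cell loss %.2f%%  ->  packet loss %.1f%%\n",
                       rates[i] * 100.0,
                       (1.0 - pow(1.0 - rates[i], cells)) * 100.0);
            return 0;
        }

      With these assumptions a 1% cell loss rate translates into roughly 27% packet loss, which is why the driver must shape traffic to the PVC rate rather than let the network discard cells.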

       

      ALTQ maintains some state about the network interface queues, but most queue management is done in the driver, so most of the drivers on which ALTQ is supported have been modified. The main problem is that, when a packet is ready to go out on the wire, the driver has to manipulate the state required by ALTQ: specifically, the (pseudo-)network interface which actually sent the packet has to update its queue state. However, at the time of packet transmission the ATM card has no idea which network interface the packet came from; all it knows is a transmission slot, and that slot may be shared by more than one PVC. Each slot is associated with a particular rate limit, and each PVC with a given rate limit is mapped to the same transmission slot. The driver supports only 7 PVCs, and since there are about 10 transmission slots we could have a one-to-one mapping from interfaces to transmission slots; but this has not yet been implemented, and only this will allow ALTQ to be supported on multiple interfaces on a single physical ATM card. Until then, for ALTQ purposes, the driver assumes that ALTQ-related packets have come from a single interface, and updates that interface's state (en0).
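
      The intended fix can be pictured as a one-to-one table from pseudo-interfaces to transmission slots, so that when a transmission completes the slot identifies exactly one interface whose ALTQ state should be updated. The sketch below is only our own model of that bookkeeping; the driver's real data structures differ and all names here are invented:

        #include <stdio.h>

        #define NPVC   7        /* PVCs supported by the driver       */
        #define NSLOT 10        /* transmission slots on the ATM card */

        /* One ALTQ-style counter per pseudo-interface (en0..en6). */
        static int pkts_sent[NPVC];

        /* slot_owner[s] = index of the pseudo-interface using slot s, or -1. */
        static int slot_owner[NSLOT];

        /* Give every PVC its own transmission slot (possible since NPVC <= NSLOT),
         * so a completed transmission can be attributed to the right interface. */
        static void map_one_to_one(void)
        {
            int s, pvc;
            for (s = 0; s < NSLOT; s++)
                slot_owner[s] = -1;
            for (pvc = 0; pvc < NPVC; pvc++)
                slot_owner[pvc] = pvc;
        }

        /* Called when the card reports that a packet left via a given slot. */
        static void tx_complete(int slot)
        {
            int pvc = slot_owner[slot];
            if (pvc >= 0)
                pkts_sent[pvc]++;   /* update the owning interface's queue state */
        }

        int main(void)
        {
            int i;
            map_one_to_one();
            tx_complete(0);         /* traffic on en0's slot */
            tx_complete(3);         /* traffic on en3's slot */
            tx_complete(3);
            for (i = 0; i < NPVC; i++)
                printf("en%d: %d packets accounted\n", i, pkts_sent[i]);
            return 0;
        }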

       

    3.7 Workstations

    We have not yet set up any workstations (PCs), but have acquired two for a small test cell at UCL. These are just being set up. We are using CAIRN routers with dual stacks for IPv4 and IPv6. We expect, therefore, not only to support the IPv4 applications we have used hitherto, but also some which can exercise the QoS parameters from the application. As a first example of this, we have negotiated with ISI to supply a version of the Mbone tools which uses IPv6 and will allow the scope to be set from within the application.
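
      For reference, the scope of an IPv6 multicast group is carried in the second nibble of the group address itself, so an application can control how far its traffic propagates simply by its choice of group address. A small illustration (the scope values shown are the standard ones; the program itself is only an example):

        #include <stdio.h>

        /* IPv6 multicast scope is the low nibble of the second address byte:
         * ff02::/16 is link-local, ff05::/16 site-local, ff0e::/16 global. */
        static const char *scope_name(unsigned char second_byte)
        {
            switch (second_byte & 0x0f) {
            case 0x2: return "link-local";
            case 0x5: return "site-local";
            case 0x8: return "organization-local";
            case 0xe: return "global";
            default:  return "other";
            }
        }

        int main(void)
        {
            /* first two bytes of some example multicast group addresses */
            unsigned char groups[][2] = { {0xff, 0x02}, {0xff, 0x05}, {0xff, 0x0e} };
            int i;
            for (i = 0; i < 3; i++)
                printf("ff%02x::...  ->  %s scope\n", groups[i][1], scope_name(groups[i][1]));
            return 0;
        }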

       

      The process of integrating and maintaining such a diversity of third-party sources (various drivers, alternative queuing schemes, and IPv6) in the kernel is very tedious. For this reason, we are currently putting some effort into using the Concurrent Versions System (CVS) to manage the sources, so that we can retrieve specific configurations depending on the functionality required.

       

    3.8 Other Activities

    We have appointed one LEARNET Research Fellow, and provided fibre connectivity between the Electronic Engineering Department and the Computer Science Department. While these are not strictly HICID activities, they are germane to the full deployment of the HICID system.

  4. Plans for Next Quarter

    During the next quarter, we expect to complete the connection to LEARNET at UCL. This will allow us to put CAIRN routers at Essex University and UCL, and enable the implementation of QoS between the two sites. Because HIGHVIEW has server-based traffic generation, we should be able to put up enough traffic to exercise the QoS facilities. We will investigate which of the protocols of Section 3.4 would be the easiest to put up first, and should have real experimental results by the end of the quarter.

     

  5. REFERENCES

 

  1. Kirstein, P.T. and J. Crowcroft, "First Progress Report on the HICID Project", October 1, 1997 - March 31, 1998.
  2. Cho, K., "A Framework for Alternate Queuing: Towards Traffic Management by PC-UNIX Based Routers", to appear in USENIX '98.
  3. Bhogavilli, S. (ISI), private communication.