
Re: Attempt to summarize Links on the Blink


From: "Dave O'Leary" <doleary () cisco com>
Date: Mon, 13 Nov 1995 13:39:58 -0800


Various responses included below.

From: Avi Freedman <freedman@netaxs.com>
Subject: Re: Attempt to summarize Links on the Blink
To: cook@cookreport.com
Date: Mon, 13 Nov 1995 11:16:14 -0500 (EST)
Cc: nanog@merit.edu

I would like to try to understand better where this discussion seems to 
have come to rest.  Yesterday the suggestion was made that the major 
providers add more bandwidth to their backbones.  There seemed to be no 
assertion as to how this could be done.

1.  OC-3 is not yet routable on backbones.  Is that correct?

Well, I was told that the Cisco AIP card can talk point-to-point to another
AIP card at OC-3 speed using either HDLC or PPP.

It is true that AIPs can talk back to back over an OC-3 link; however,
they don't use PPP or HDLC, they just send and receive cells (i.e. standard
ATM, just no switch).  So you get 155 Mb/s minus the ATM overhead,
or about 130 Mb/s (depending on what you count as data, the type of
traffic, etc.).
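
A rough sketch of that arithmetic in Python (standard SONET/ATM
framing numbers; the exact figure depends on the traffic mix):

    # Rough check of the ~130 Mb/s figure (illustrative only; real
    # throughput also loses bits to AAL5 trailers and cell padding).
    OC3_LINE_RATE = 155.52e6    # b/s, SONET OC-3c line rate
    SONET_PAYLOAD = 149.76e6    # b/s left after SONET transport overhead
    CELL_SIZE = 53              # bytes per ATM cell
    CELL_PAYLOAD = 48           # payload bytes per cell (5-byte header)

    best_case = SONET_PAYLOAD * CELL_PAYLOAD / CELL_SIZE
    overhead_pct = (1 - best_case / OC3_LINE_RATE) * 100
    print("best-case payload rate: %.1f Mb/s" % (best_case / 1e6))  # ~135.6
    print("total overhead: %.1f%%" % overhead_pct)
    # AAL5 encapsulation and padding for real packet sizes pull the
    # usable rate down further, toward the ~130 Mb/s quoted above.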

2.  What is the routing impact of parallel T-3s?  Or of the creation of
a mesh of T-3s?  I have the impression that this is not feasible, either
because it would expand the routing tables unacceptably or because of
the question of how you would load balance among them?

If data can be routed over parallel T3s on a per-connection basis, so
that packet ordering within a connection isn't scrambled, then some
benefit is achieved, though no single application or site can use more
than a T3 of bandwidth.

This is the way that our boxes work (i.e. each destination is locked
onto a single interface).  Packet reordering isn't too much of a problem
from a functionality perspective for TCP; it just ( ;-) ) affects the
performance.  It is also atypical for a single site to need more than
T3 bandwidth (well, they may want it, but they can't really buy it yet).
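
A minimal sketch of that destination-locking idea (hypothetical code,
not the actual router implementation; interface names are made up):

    import zlib

    T3_LINKS = ["Serial0", "Serial1", "Serial2"]  # parallel T3s (made up)

    def pick_link(dst_ip):
        # Hash the destination so every packet toward the same host
        # leaves on the same T3: ordering within a connection is
        # preserved, but no one destination gets more than one T3.
        h = zlib.crc32(dst_ip.encode("ascii"))
        return T3_LINKS[h % len(T3_LINKS)]

    print(pick_link("192.0.2.7"))  # always the same link for this host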

3.  There seems to be some consensus that we will see an increase in the
number of NAP- or MAE-like interchange points, which could cut down on the
traffic that must traverse long-haul backbones.  *BUT* doesn't each
additional interchange point used by all the top-level providers mean
another new set of global routes crowding router memories?

It depends.  If routing decisions are made locally and the routes heard
at smaller or private exchange points by NSP x are not distributed to
NSP x's larger peering/route-decision routers, then possibly no.  That
would mean only hearing routes at private exchange points that were also
heard elsewhere (at a major peering point).
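
A toy illustration of that scoping rule (plain sets standing in for
BGP tables; the prefixes are made up):

    # Routes heard at a private exchange are used for local forwarding
    # only if the same prefix is already heard at a major peering
    # point, so nothing new reaches the big route-decision routers.
    heard_at_major_nap = {"192.0.2.0/24", "198.51.100.0/24"}
    heard_at_private_xp = {"192.0.2.0/24", "203.0.113.0/24"}

    usable_locally = heard_at_private_xp & heard_at_major_nap
    not_installed = heard_at_private_xp - heard_at_major_nap

    print("prefer locally:", usable_locally)  # {'192.0.2.0/24'}
    print("ignore:", not_installed)           # {'203.0.113.0/24'}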

Private interconnects (point to point) between the larger backbone
providers and new "public" interconnects can be used as appropriate
depending on what problem you are trying to solve, and what tradeoffs
you are ready to make.

4.  How much help will regional NAPs like Tucson be?  Their goal is to 
keep local traffic local and off long-haul backbones.  What likelihood is 
there that these will grow in numbers quickly enough to make a 
difference?  If the majors start showing up at these points does their 
arrival mean that the problem of crowding memory in their backbone 
routers will be increased?

See above.

Avi

This depends on a number of factors, like how well the local providers
are able to aggregate routes and how the local interconnects link
into the rest of the world - for example, if the group of local providers
gets a large CIDR block to use as a group, then they can appear as a
single prefix to the rest of the net, while providing portability of
numbers for customers that might move between them.
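
A worked example of that aggregation (addresses are illustrative;
ipaddress is Python's standard library module):

    import ipaddress

    # Four local providers each hold a /18 carved from one /16 that
    # the group obtained together.
    provider_blocks = [
        ipaddress.ip_network("198.18.0.0/18"),
        ipaddress.ip_network("198.18.64.0/18"),
        ipaddress.ip_network("198.18.128.0/18"),
        ipaddress.ip_network("198.18.192.0/18"),
    ]

    # collapse_addresses merges the adjacent blocks into the covering
    # prefix, so the rest of the net sees one route instead of four.
    print(list(ipaddress.collapse_addresses(provider_blocks)))
    # [IPv4Network('198.18.0.0/16')]
    # A customer moving between providers renumbers inside
    # 198.18.0.0/16, so the global tables see no change.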

                                        dave


