nanog mailing list archives

Re: CIDR Report


From: Chris Williams <chris.williams () third-rail net>
Date: Mon, 15 May 2000 10:29:29 -0400


The group of providers can transfer routing information between
themselves using the routing protocol of their choice. This would mean
a small increase in the size of local (i.e. within the ASs of the
group) routing tables, but a negligible increase in the size of the
global BGP tables.
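
(To make the bookkeeping concrete, here is a minimal sketch in Python,
assuming the group's customer /24s all come out of one shared block;
the prefixes and group size are invented for illustration.)

import ipaddress

# Hypothetical customer /24s for a few cooperating providers, all drawn
# from one shared block (10.10.0.0/22 is an invented example).
group_prefixes = [ipaddress.ip_network(f"10.10.{i}.0/24") for i in range(4)]

# Inside the group, every member carries each specific /24, so local
# tables grow by a handful of routes.
local_routes = list(group_prefixes)

# Toward the rest of the Internet, the specifics collapse into the shared
# aggregate, so the global BGP table sees only one new entry.
global_announcements = list(ipaddress.collapse_addresses(group_prefixes))

print("carried inside the group :", [str(n) for n in local_routes])
print("announced to the world   :", [str(n) for n in global_announcements])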

The problem with this strategy is that it does not eliminate the single
point of failure of an incompetent routing engineer at one of the
providers in the group -- the chance of a single mistake by one provider
in the group bringing down the whole group would be MUCH higher than the
chance of a single human error bringing down two unrelated providers.
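
(A quick back-of-the-envelope comparison of the two failure modes; the
per-provider error rate and group size below are invented numbers, not
measurements.)

# Assume each provider independently makes a route-breaking configuration
# error with probability p in a given month (p = 0.05 is invented).
p = 0.05
group_size = 3

# Two unrelated upstreams: you only lose everything if both err at once.
p_both_unrelated = p * p

# A cooperating group sharing one aggregate: a single member's mistake can
# take the whole group's announcement down, so any one error is enough.
p_any_in_group = 1 - (1 - p) ** group_size

print(f"both unrelated upstreams erring together: {p_both_unrelated:.4f}")
print(f"at least one of {group_size} group members erring: {p_any_in_group:.4f}")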

Speaking as a former employee of a small, multi-homed company with
several diverse /24s, I can say that there are definitely valid reasons
for this configuration. We were not large enough to request our
own address space, and our upstream would not give us all the address
space we eventually needed at the outset, so we ended up with a bunch of
random class Cs. In order to achieve the level of reliability we needed,
we had to be multihomed to separate providers; during the time I worked
there, more than 70% of our outages were due to one of our upstreams
goofing, not due to a single circuit being down, so any solution which
results in multiple paths to non-independent networks would not have met
our requirements.

It seems to me it is kind of approaching the problem backward to say
"Well, these are the limits of our routers, so this is what services we
can offer moving forwards." Wouldn't it make more sense to identify what
the actual needs of end users are (and I think portable /24s is a
legitimate need!), and then plan the future of the backbone based upon
those needs?
If we need to go to Cisco and say, "Hey, make a GSR that does BGP
updates faster", then that's what we need to do! Imposing limitations on
end users which make the internet less useful is not a solution to the
problem, at best it's a kludge which reduces the headache for backbone
providers, but doesn't actually solve any long-term problems.

Also, I don't really buy the "how do we manage 250K routes?" argument.
Any well-designed system which can effectively manage 10K of something
ought, in general, to be able to handle 250K of it; it's just a question
of scaling up, and there's no question that processors are getting
faster and memory cheaper every day. If there's some magic number of
routes that suddenly becomes unmanageable, I'd love to hear why.
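
(For what it's worth, a back-of-the-envelope look at the memory side of
that scaling; the bytes-per-route figure is a guess, not a vendor number.)

# Rough memory estimate for a BGP table, assuming an invented figure of
# about 250 bytes of RIB state per route (prefix, attributes, bookkeeping).
# Real per-route overhead varies a lot by implementation.
BYTES_PER_ROUTE = 250

for routes in (10_000, 250_000):
    megabytes = routes * BYTES_PER_ROUTE / (1024 * 1024)
    print(f"{routes:>7} routes -> roughly {megabytes:.1f} MB of route state")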


