nanog mailing list archives

RE: multi-homing fixes


From: RJ Atkinson <rja () inet org>
Date: Thu, 23 Aug 2001 19:07:46 -0400


At 18:23 23/08/01, Roeland Meyer wrote:
<quote>
"Half of the companies that are multihomed should have gotten better service
from their providers," says Patrik Faltstrom, a Cisco engineer and co-chair
of the IETF's Applications Area. "ISPs haven't done a good enough job
explaining to their customers that they don't need to multihome."
</quote>

Is Patrik Faltstrom still an IETF co-chair? Is he still helping the
[failing] credibility of the IETF? Maybe that's why? How can any ISP, or
anyone else, credibly guarantee that they'll still be in business next year?
Or, that they won't sell out to the very rich bad guys? Or, that circuit
provisioning will drop to under 5 calendar days?  Because that is the
*only* way you will convince business customers that they don't need to
multi-home.

        Rather than just bash the IETF (which is easy), it might be 
just slightly more productive to wander over, subscribe to the 
relevant list(s), and inject some operational perspective and/or clue.

        Browsing http://www.ietf.org will yield information on current
drafts, WG charters, and how to join any lists of interest.


At $99US for 512MB of PC133 RAM (the point is, RAM is disgustingly cheap and
getting cheaper), more RAM in the routers is a quick answer. Router clusters
are another answer, and faster CPUs are yet another. All of the above
should get us by until we get a better router architecture. If the IETF is
being at all effective, that should start now and finish sometime next year,
so that we can start the 5-year technology roll-out cycle.

        One belief (right or wrong) is that the end-to-end path convergence 
algorithm in BGP is close to hitting its scaling limits.  That's 
a problem that infinite RAM could not solve.  A proof that the 
algorithm is not a danger here would be most welcome in many circles.  
If you've got such a proof, please do share.
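
As a rough illustration of the concern, here is a toy sketch (plain
Python, with every name and the model itself invented for the example;
it is not real BGP and certainly not a proof either way) of the
path-hunting behaviour behind it: after a prefix is withdrawn, speakers
in a full mesh transiently chase longer and longer stale AS paths before
agreeing the prefix is gone, and the number of UPDATE messages grows
much faster than the number of peers. Extra memory does not change that.

# Toy model of BGP path hunting in a full mesh of n ASes.  Illustrative
# sketch only: names like simulate() are invented here, and the model
# ignores timers, policy, and iBGP.  The point is just that the UPDATE
# count after one withdrawal grows far faster than the number of peers.

def best_path(node, paths_from_peers, originates):
    # Shortest loop-free AS path known to `node`; ties broken lexically.
    candidates = [(node,)] if originates else []
    for path in paths_from_peers.values():
        if path is not None and node not in path:   # AS-path loop check
            candidates.append((node,) + path)
    return min(candidates, key=lambda p: (len(p), p)) if candidates else None

def simulate(n):
    nodes = list(range(n))
    # rib_in[i][j] = AS path most recently announced by peer j to node i
    rib_in = {i: {j: None for j in nodes if j != i} for i in nodes}
    selected = {i: None for i in nodes}
    originates = {i: (i == 0) for i in nodes}    # AS 0 originates the prefix

    def run_to_convergence():
        messages = 0
        while True:
            # Synchronous round: everyone recomputes from its current RIB-In.
            new = {i: best_path(i, rib_in[i], originates[i]) for i in nodes}
            changed = [i for i in nodes if new[i] != selected[i]]
            if not changed:
                return messages
            # Changed nodes announce the new path (or a withdrawal) to peers.
            for i in changed:
                selected[i] = new[i]
                for j in nodes:
                    if j != i:
                        rib_in[j][i] = selected[i]
                        messages += 1

    run_to_convergence()          # settle with the prefix reachable
    originates[0] = False         # the origin withdraws the prefix
    return run_to_convergence()   # count updates until everyone gives up

if __name__ == "__main__":
    for n in (4, 6, 8, 10, 12):
        print(n, "ASes ->", simulate(n), "updates after one withdrawal")

Running it for a handful of mesh sizes shows the update count after a
single withdrawal climbing well past linear in the number of ASes, which
is the flavour of limit that cheaper RAM cannot buy back.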

Ran
rja () inet org

