nanog mailing list archives

Re: Selfish routing


From: Jack Bates <jbates () brightok net>
Date: Sun, 27 Apr 2003 16:09:04 -0500


alex () yuriev com wrote:

> > And I wouldn't evangelize that faith, as stated.  I do happen to believe
> > in "special" (or if you prefer, "selfish") technology that measures
> > problems in networks I do not control, and if they can be avoided (say by
> > using a different network or a different injection point), avoid them.  In
> > practice, that extra "if" doesn't change the equation much, since:
>
> So, the brilliant technology costs money but does not provide excellent
> results under all circumstances? Simply not making stupid mistakes
> designing the network *already* achieves exactly the same result for no
> additional cost.


And what, pray tell, governs stupid mistakes in designing the network? For that matter, which network? I've run traffic through some networks for years without a problem. Then one day, one of those networks makes a mistake and clobbers the traffic I send through it. Naturally, I redirect traffic via other networks, but the spare capacity on those networks does not match the traffic I'm shifting, so while I've improved QoS for my customers relative to the broken path, I have still shortchanged them.

It could be argued that more spare capacity should have been allotted to the other networks, yet if the first network hadn't had a problem, that money would have been wasted on capacity that wasn't needed. It is an art to provision enough bandwidth to absorb redirects from networks having problems while still keeping costs at a reasonable level.

Hypothetical: you are interconnected with three networks, pushing 100Mb/s through each. The Slammer worm appears and makes two of the networks untrustworthy because of their internal policies. The third network is fine, but your capacity to it probably won't hold 300Mb/s. Do you a) spend the money to handle your full capacity out of every peer, paying two to three times your normal traffic cost per peer, or b) allot reasonable capacity to each peer and accept that there are times when the capacity will fall short?
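The tradeoff in that hypothetical can be put into numbers. Here is a minimal sketch; the figures (100Mb/s per peer, the 300Mb/s and 150Mb/s provisioning options) come straight from the scenario above, and everything else is an illustrative assumption, not real capacity data:

```python
# Capacity-planning sketch for the 3-peer hypothetical above.
# Assumption: each peer normally carries 100Mb/s, and when peers are
# avoided, their load is shifted evenly onto the survivors.

NORMAL_LOAD = 100   # Mb/s normally pushed through each peer
PEERS = 3

def shortfall(capacity_per_peer, failed_peers):
    """Traffic (Mb/s) that cannot be carried when `failed_peers` peers
    are avoided and their load shifts to the surviving peers."""
    surviving = PEERS - failed_peers
    total_demand = NORMAL_LOAD * PEERS       # 300 Mb/s must go somewhere
    usable = capacity_per_peer * surviving   # what the healthy peers absorb
    return max(0, total_demand - usable)

# Option (a): provision every peer for full failover (300 Mb/s each) --
# no shortfall even with two peers down, but 3x the normal capacity cost.
print(shortfall(300, failed_peers=2))  # -> 0

# Option (b): provision "reasonable" headroom (say 150 Mb/s each) --
# survives a single-peer failure, but comes up short in the worm scenario.
print(shortfall(150, failed_peers=1))  # -> 0
print(shortfall(150, failed_peers=2))  # -> 150
```

The point is that option (b) is not a mistake; it is a deliberate bet that two simultaneous peer failures are rare enough not to justify paying for 300Mb/s on every link.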

Network planning is not just about whether you make a mistake or not. Performance is dependent upon the networks we interconnect with, the issues they may have, and how well we can scale our network to route around those issues while remaining cost effective. My hypothetical is simplistic. Increase the number of peers, as well as the number of peering points to each peer, determine the cost and capacity necessary during multiple simultaneous failures, add the cost within your own network of redirecting traffic that normally takes more diverse routes, apply chaos theory, and recalculate.

-Jack
