
Re: Traffic Engineering


From: "James R Grinter" <jrg () watching org>
Date: Sat, 20 Sep 1997 15:17:14 +0100



freedman () netaxs com (Avi Freedman) writes:
> smd () clock org (Sean Doran) writes:
> > P.S.: Curtis Villamizar had another interesting approach
> >       which involved pushing content far afield to
> >       machines with the same transport-layer (IP)
> >       addresses, relying upon closest-exit routing to
> >       connect one to the topologically-closest replication
> >       machine.  Unfortunately, while this could be really

(This is something we've done a little of at work: we do have it in
production use, but it's not so effective when there's asymmetric
routing - which we have quite a bit of for some European networks, eg
via two peerings with Ebone. Duplicating IP addresses definitely has
its place, eg internal to a network for directing a bunch of RASs to a
local version of a 'service' address.)

> And Alec Peterson (now of Erols) has figured out an even
> arguably slicker way to do it.

This is a problem I've been musing over lately.  The best way that I
can see is to return suitable DNS answers to people.  This avoids the
route instability problems that Sean comments on, above:

 Client gets a TCP session established.

 Routes change, and the client's inbound path now reaches a different
 system which knows nothing about the connection - and *boom* goes
 the TCP session.

 Routes return to their previous state, but by then it's too late.

(Even worse is when the client connects *during* the route instability.)

But with DNS you've still got to return a suitable answer. (You could
use the same duplicated-address routing trick to reach your DNS
servers, but that doesn't address the asymmetry problems.)
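
To make that failure mode concrete, here's a toy model - a minimal
sketch, not real routing code. The POP names and client are invented,
and route_to stands in for whatever closest-exit routing currently
delivers:

# Toy model of the route-instability failure: two machines share one
# service address, TCP state lives only on the box that saw the SYN,
# and a route change silently moves the client's packets elsewhere.

class Replica:
    def __init__(self, name):
        self.name = name
        self.established = set()      # connections this box knows about

    def receive(self, client, segment):
        if segment == "SYN":
            self.established.add(client)
            return "SYN-ACK"
        if client in self.established:
            return "ACK"              # normal case: the state is here
        return "RST"                  # unknown connection: session dies

east = Replica("east-pop")
west = Replica("west-pop")

route_to = east                                # closest exit picks east
print(route_to.receive("client-a", "SYN"))     # SYN-ACK: session up

route_to = west                                # routes change mid-session
print(route_to.receive("client-a", "DATA"))    # RST: west has no state

route_to = east                                # routes return - too late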

If the bulk of your data flows from you to the client (web, euuw), then
it's OK, because you are in a position to decide the best route back
across your network and to pick the nearest/most appropriate server.
Requests usually come from the client's nearest recursing/resolving DNS
server, so deciding the best way back is just a SMOP (a simple matter
of programming).
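
As a rough sketch of that SMOP - the prefixes, server addresses, and
the static table below are all invented placeholders; real policy
would be derived from your routing data:

import ipaddress

# Which server answers best for clients behind a given resolver range
# (example prefixes and addresses only).
PREFERENCE = [
    (ipaddress.ip_network("192.0.2.0/24"),   "198.51.100.10"),
    (ipaddress.ip_network("203.0.113.0/24"), "198.51.100.20"),
]
DEFAULT = "198.51.100.10"

def answer_for(resolver_ip):
    """Pick the A record to hand back, based on who is asking."""
    addr = ipaddress.ip_address(resolver_ip)
    for net, server in PREFERENCE:
        if addr in net:
            return server
    return DEFAULT

print(answer_for("192.0.2.53"))     # -> 198.51.100.10
print(answer_for("203.0.113.7"))    # -> 198.51.100.20

(You'd also want a short TTL on those answers, or cached copies will
outlive your idea of "nearest".)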

Where you get really screwed up is having to support large numbers of
IP addresses - good ol' HTTP/1.0 requests carry no Host header, so
every virtual web site needs its own address (it's that damn web,
again). Using (n x number of "sites") addresses isn't practical. Maybe
the real operational issue is working out when we can stop assigning
all these zillions of IP addresses for web servers. The marketing
people don't want to make the change whilst their competitors aren't.
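
(For what it's worth, the escape hatch is the Host request header -
required by HTTP/1.1 and already sent by many newer clients - which
lets one address carry many sites. A minimal sketch, assuming clients
that do send it; the hostnames and responses are made up:

import socket

# One IP address, many web sites, dispatched on the Host: header
# rather than on the destination address.
SITES = {
    "www.example.com": b"welcome to example.com\n",
    "www.example.net": b"welcome to example.net\n",
}

def serve(port=8080):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        request = conn.recv(4096).decode("latin-1")
        host = ""
        for line in request.split("\r\n")[1:]:
            if line.lower().startswith("host:"):
                # drop any :port suffix on the header value
                host = line.split(":", 1)[1].strip().split(":")[0]
                break
        body = SITES.get(host)
        if body is None:
            # HTTP/1.0 clients may send no Host: at all - which is
            # exactly why each site has needed its own address.
            conn.sendall(b"HTTP/1.0 400 Bad Request\r\n\r\n")
        else:
            conn.sendall(b"HTTP/1.0 200 OK\r\n\r\n" + body)
        conn.close()

if __name__ == "__main__":
    serve()
)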

James.

