nanog mailing list archives

Re: Peering Policies and Route Servers


From: Matt Zimmerman <mdz () netrail net>
Date: Tue, 30 Apr 1996 15:36:31 -0400 (EDT)

On Tue, 30 Apr 1996, Randy Bush wrote:

> peerage, like sex these years, has a great potential for the transmission
> of disease.  so one is very careful about with whom one peers.  the route
> servers are more like a bath house, you're exchanging bodily fluids with
> every and any body.

Interesting analogy...but not quite accurate.  RS-based peering would be
more like...a routing orgy among a group of peers, where a central entity
organizes the bodily routing fluids and directs them, according to
prearranged policy, to the appropriate indirect peers.  The analogy breaks
down there...:-)

Your statement implies that, when peering via an RA route server, one is
required to accept (and advertise) any and all routes.  This is wholly
untrue.  The RSs use the policy information registered in the RADB to do
the routing computations, resulting in a centralized routing model rather
than a distributed one.  Yes, it makes policy decisions for you...but only
according to the policy you yourself have registered.
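For example (a purely illustrative sketch, with made-up AS numbers), a
RIPE-181-style aut-num entry registered in the RADB looks roughly like:

    aut-num: AS65001
    descr:   Hypothetical RS participant
    as-in:   from AS65002 100 accept AS65002
    as-in:   from AS65003 100 accept AS65003
    as-out:  to AS65002 announce AS65001
    as-out:  to AS65003 announce AS65001

The RS only passes a participant's routes to you when policies like these
on both sides permit it, so nothing shows up in your sessions that your
registered policy didn't ask for.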

> some exchanges have multi-lats.  how many are actually used by the bigger
> players and how many are ignored in favor of multiple bi-lats?

I don't know of very many exchanges that have MLPAs, but at the Ameritech
(Chicago) NAP...

-----

Date: 29 Apr 96 08:21:16 -0500
From: MARK.A.CNOTA () x400gw ameritech com
To: nathan () netrail net
Cc: nap-info () aads net
Subject: Multi-Lateral Peering Agreement

Nathan,

Below is the current list of MLPA participants.

Mark Cnota
AADS Operations - Chicago


Alpha-Net
AGIS
Argonne Nat. Laboratory
Concentric Research
Hyperspace Networks
ISI/Merit (Routing Arbiter)
NAP.NET
Network 99
Netcom On-Line
Univ. of Chicago
One Call Communications

// Matt Zimmerman       Chief of System Management           NetRail, Inc.
// mdz () netrail net                                       sales () netrail net
// (703) 524-4800 [voice]    (703) 524-4802 [data]    (703) 534-5033 [fax]




