nanog mailing list archives

Re: The 100 Gbit/s problem in your network


From: Saku Ytti <saku () ytti fi>
Date: Mon, 11 Feb 2013 14:23:16 +0200

On (2013-02-11 12:16 +0000), Aled Morris wrote:

> I don't see why, as an ISP, I should carry multiple, identical, payload
> packets for the same content.  I'm more than happy to replicate them closer
> to my subscribers on behalf of the content publishers.  How we do this is
> the question, i.e. what form the "multi"-"casting" takes.
>
> It would be nice if we could take advantage of an inherent design of IP and
> the hardware it runs on, to duplicate the actual packets in-flow as near as
> is required to the destination.
>
> Installing L7 content delivery boxes or caches is OK, but doesn't seem as
> efficient as an overall technical solution.

As an overall technical solution, Internet-scale multicast simply does not
work today.
Even if it did work, the next hurdle would be getting the tier-1s to play
ball: they get paid for bits transported, so it's not in their best interest
to reduce that amount.

Now maybe, if we really wanted to, we could do some N:1 compression of IP
traffic, where N is something like 3 to 10. Far worse than multicast, but with
this method we might be able to devise a technical solution where the IP core
does not learn replication state at all.
We could abuse the long IPv6 addresses (DADDR + SADDR) plus an extension
header to pack information about which destination ASNs should receive this
group. This could be handled statelessly in hardware in core networks; only
at the ASN edge would you need to add classic multicast state intelligence.
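A back-of-the-envelope sketch of the packing idea (the 32-bit ASN width, the
cap on list length, and the header layout are my assumptions here, nothing
standardized):

```python
import struct

# Hypothetical budget: DADDR (16 B) + SADDR (16 B) + a small extension
# header leave room for only a handful of 32-bit ASNs per packet.
MAX_ASNS = 30

def pack_asn_list(asns):
    """Pack a list of 32-bit ASNs into bytes that could ride in a
    hypothetical IPv6 extension header. Illustrative only: no such
    option type exists."""
    if len(asns) > MAX_ASNS:
        raise ValueError("too many ASNs for one header")
    return struct.pack(f"!{len(asns)}I", *asns)

def unpack_asn_list(blob):
    """Recover the ASN list at the ASN edge, where classic multicast
    state would take over for final replication."""
    count = len(blob) // 4
    return list(struct.unpack(f"!{count}I", blob))

group = [64512, 65001, 13335]
assert unpack_asn_list(pack_asn_list(group)) == group
```

With ~30 ASNs per packet you get roughly the N:1 compression mentioned above:
the core forwards one copy per packed group, stateless, and only edge routers
fan out to subscribers.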
-- 
  ++ytti
