Re: pontification bloat (was 10GE TOR port buffers (was Re: 10G switch recommendaton))


From: Leo Bicknell <bicknell () ufp org>
Date: Fri, 27 Jan 2012 17:56:01 -0800

In a message written on Sat, Jan 28, 2012 at 10:31:20AM +0900, Randy Bush wrote:
>>> when a line card is designed to buffer the b*d of a trans-pac 40g, the
>>> oddities on an intra-pop link have been observed to spike to multiple
>>> seconds.
>>
>> Please turn that buffer down.
>>
>> It's bad enough to take a 100ms hop across the pacific.  It's far
>> worse when there is +0-100ms of additional buffer. :(
>>
>> Unless that 40G has like 4x10Gbps TCP flows on it you don't need
>> b*d of buffer.  I bet many of your other problems go away.  10ms
>> of buffer would be a good number.
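
To put rough numbers on that (a back-of-envelope sketch in Python,
assuming a ~100ms trans-pac RTT and a fully loaded 40G link; both
figures are illustrative):

  link_bps = 40e9               # 40 Gb/s line rate (assumed)
  rtt_s    = 0.100              # ~100 ms trans-pacific RTT (assumed)

  # A full bandwidth*delay product of buffer:
  bdp_bytes = link_bps * rtt_s / 8
  print("b*d buffer: %.0f MB" % (bdp_bytes / 1e6))      # ~500 MB

  # Versus the suggested 10ms of buffer:
  small_bytes = link_bps * 0.010 / 8
  print("10ms buffer: %.0f MB" % (small_bytes / 1e6))   # ~50 MB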

> so, do you have wred enabled anywhere?  who actually has it enabled?
>
> (embarrassed to say, but to set an honest example, i do not believe iij
> does)

My current employment offers few places where it is appropriate.
However, cribbing from a previous job where I rolled it out network
wide (the arguments to each random-detect precedence line below are
the min threshold, max threshold, and mark-probability denominator,
in packets):

policy-map atm-queueing-out
  class class-default
   fair-queue
   random-detect
   random-detect precedence 0   10    40    10
   random-detect precedence 1   13    40    10
   random-detect precedence 2   16    40    10
   random-detect precedence 3   19    40    10
   random-detect precedence 4   22    40    10
   random-detect precedence 5   25    40    10
   random-detect precedence 6   28    40    10
   random-detect precedence 7   31    40    10

int atm1/0.1
 pvc 1/105
  vbr-nrt 6000 5000 600
  tx-ring-limit 4
  service-policy output atm-queueing-out

Those packet thresholds were computed as the best balance for
6-20 Mbps PVCs on an ATM interface (the vbr-nrt line above shapes
that PVC to a 6 Mbps peak / 5 Mbps sustained rate).  Also notice
that the hardware tx-ring-limit had to be reduced in order to make
wred effective: on the platforms in question (7206VXRs) there is a
hardware buffer below the software wred queue that is way too big,
so the software queue never fills and wred never kicks in unless
the tx ring is shrunk.
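
As a sanity check on those thresholds (a sketch in Python, assuming
uniform 1500-byte packets; the max threshold of 40 bounds the
worst-case queue delay per PVC):

  pkt_bytes  = 1500     # assumed MTU-sized packets
  max_thresh = 40       # wred max threshold from the policy above

  for pvc_mbps in (6, 20):
      delay_ms = max_thresh * pkt_bytes * 8 / (pvc_mbps * 1e6) * 1e3
      print("%2d Mbps PVC: %3.0f ms worst-case queue delay"
            % (pvc_mbps, delay_ms))
  # -> 80 ms at 6 Mbps, 24 ms at 20 Mbps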

Here's one to wrap your head around.  You have an ATM OC-3 with 40
PVCs on it.  Each PVC has a WRED config allowing up to 40 packets
to be buffered.  Some genius in security fires off a network
scanning tool across all 40 sites.

Yes, you now have 40*40, or 1600 packets of buffer on your single
physical port.  :(  If you work with Frame Relay or ATM, or even
dot1q vlans, you have to be careful of per-subinterface buffering.
It can quickly get absurd.
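
For scale (again a Python sketch, assuming 1500-byte packets and
~149.76 Mb/s of usable OC-3 payload, before ATM cell tax):

  pvcs, pkts_per_pvc = 40, 40
  pkt_bytes = 1500             # assumed packet size
  oc3_bps   = 149.76e6         # OC-3 payload rate, pre cell tax

  total_pkts  = pvcs * pkts_per_pvc        # 1600 packets
  total_bytes = total_pkts * pkt_bytes     # 2.4 MB queued
  drain_ms    = total_bytes * 8 / oc3_bps * 1e3
  print("%d packets, %.1f MB, ~%.0f ms to drain the port"
        % (total_pkts, total_bytes / 1e6, drain_ms))   # ~128 ms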

-- 
       Leo Bicknell - bicknell () ufp org - CCIE 3440
        PGP keys at http://www.ufp.org/~bicknell/
