nanog mailing list archives

Re: High throughput BGP links using Gentoo + stripped kernel


From: Phil Fagan <philfagan () gmail com>
Date: Sun, 19 May 2013 11:34:59 -0600

Not noise!
On May 19, 2013 10:20 AM, "Nick Khamis" <symack () gmail com> wrote:

On 5/19/13, Zachary Giles <zgiles () gmail com> wrote:
I had two Dell R3xx 1U servers with quad GigE cards in them and a few
small BGP connections for a few years. They were running CentOS 5 +
Quagga with a bunch of stuff turned off. Worked extremely well. We also
had really small traffic back then.

Server hardware has become amazingly fast under the covers these days. It
certainly still can't match an ASIC-based solution from Cisco etc., but
it should be able to push several GB/s of traffic.
In HPC storage applications, for example, we have multiple servers with
quad 40GigE and IB pushing ~40 GB/s of traffic in fairly large blocks.
It's not networking, but it does demonstrate pushing data up into daemon
applications and back down to the kernel at high rates.
Certainly a kernel routing table with no iptables and a small Quagga
daemon in the background can push similar rates.
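
For the curious, the moving parts really are that small: enable
forwarding with sysctl -w net.ipv4.ip_forward=1, and point bgpd at your
upstreams. A skeletal /etc/quagga/bgpd.conf might look something like
the following (the ASNs, addresses, and prefix are documentation-range
placeholders, not anyone's real config):

    ! Skeletal Quagga bgpd.conf -- illustrative only; all numbers
    ! are placeholders (documentation addresses, private/reserved ASNs)
    hostname edge1
    password zebra
    !
    router bgp 64512
     bgp router-id 192.0.2.1
     ! one block per upstream session
     neighbor 198.51.100.1 remote-as 64496
     neighbor 198.51.100.1 description upstream-a
     ! originate only our own aggregate
     network 203.0.113.0/24
    !
    line vty

Learned routes land in the kernel FIB via zebra, and the kernel does the
actual forwarding; bgpd never touches the data path.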

In other words, get new hardware and design it for the flow.

What we are having a hard time with right now is finding that
"perfect" setup without going the whitebox route. For example, the
x3250 M4 has one full-length PCIe Gen 3 x8 slot (great!) and one Gen 2
x4 (not so good...). The ideal in our case would be a newish System x
server with two full-length Gen 3 x8, or even x16, slots in a nice 1U
form factor, humming along and handling up to 64 GT/s of traffic,
firewall and NAT rules included.
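
For a rough sanity check on those slot numbers (raw signaling rates;
usable throughput is lower after protocol overhead):

    Gen 3 x8:  8 GT/s x 8 lanes  = 64 GT/s raw; 128b/130b encoding
               -> ~7.9 GB/s (~63 Gb/s) per direction
    Gen 3 x16: 8 GT/s x 16 lanes = 128 GT/s raw
               -> ~15.8 GB/s (~126 Gb/s) per direction
    Gen 2 x4:  5 GT/s x 4 lanes  = 20 GT/s raw; 8b/10b encoding
               -> 2 GB/s (16 Gb/s) per direction

So the Gen 2 x4 slot can't keep up with even a single 40GigE port,
while a Gen 3 x8 slot can feed one comfortably.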

Hope this is not considered noise on an old problem; however, any help
is greatly appreciated, and we will keep everyone posted on the final
numbers post-upgrade.

N.



