nanog mailing list archives

Re: scaling linux-based router hardware recommendations


From: joel jaeggli <joelja () bogus com>
Date: Mon, 26 Jan 2015 19:23:47 -0800

On 1/26/15 5:43 PM, Mike Hammett wrote:
Aren't most of the new whitebox/open source platforms based on
switching and not routing? I'd assume that the "cloud-scale" data
centers deploying this stuff still have more traditional big iron at
their cores.

An L3 Ethernet switch and a "router" are effectively indistinguishable;
the actual feature set you need drives which platforms are appropriate.

A significant push for DCs, particularly those with Clos architectures,
is away from modular chassis-based switches toward dense but fixed
configuration switches. This drives the complexity, and a significant
chunk of the cost, out of these switches.

The small/medium-sized ISP is usually left behind. They're not big
enough to afford the big new hardware, but all of their users'
Netflix and porn and whatever else they do is chewing up bandwidth.

Everyone in the industry is under margin pressure. Done well, every
subsequent generation of your infrastructure is less costly per bit
delivered while also being faster.

For example, the small/medium ISPs are at the Nx10GigE stage now. The
new hardware is expensive, and the old hardware (besides being old) is
likely in a huge chassis if you can get any sort of port density at
all.

If you're a small consumer-based ISP, how many routers do you actually
need to have a full table? (The customer access network doesn't need it.)

48 port GigE switches with a couple 10GigE can be had for $100.

I'm not aware of that being the case. With respect to merchant silicon,
there are a limited number of common L3 switch ASIC building blocks which
all switch/router vendors can avail themselves of:

Broadcom Trident+, Trident 2, and Arad; Intel FM6000; Marvell Prestera; etc.

A
minimum of 24-port 10GigE switches (except for the occasional IBM
switch) is 30x to 40x that. Routers (BGP, MPLS, etc.) with
more than just a couple of 10GigEs are even more money, I'd assume.

A 64-port 10 or mixed 10/40Gb/s switch can forward more than half a Tb/s
worth of 64-byte packets, do so with cut-through forwarding, and in a
thermal envelope of 150 watts. Devices like that retail for ~20k; in
reality you need more than one. The equivalent gigabit product is 15 or
20% of the price.
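[A quick sanity check on that throughput claim: on the wire, every 64-byte frame also costs 8 bytes of preamble and a 12-byte inter-frame gap, so the line-rate math works out in a few lines of shell arithmetic.]

```shell
# Back-of-envelope: aggregate 64-byte-packet rate for a 64 x 10GigE switch.
# Each minimum-size frame occupies 64 + 8 + 12 = 84 bytes = 672 bits on the wire.
ports=64
gbps=10
bits_per_pkt=$(( (64 + 8 + 12) * 8 ))
total_bps=$(( ports * gbps * 1000000000 ))
echo "$(( total_bps / 1000000000 )) Gb/s aggregate"           # 640 Gb/s
echo "$(( total_bps / bits_per_pkt / 1000000 )) Mpps at 64-byte frames"  # 952 Mpps
```

So "more than half a Tb/s of 64-byte packets" corresponds to roughly 950 million packets per second across the box.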

You mention MPLS support, so that dictates platforms and ASICs where the
appropriate support is available.


I thought vMX was going to save the day, but its pricing for 10 gigs
of traffic (licensed by throughput and standard/advanced licenses) is
really about 5x - 10x what I'd be willing to pay for it.

The servers capable of relatively high-end forwarding feats aren't free
either, nor are the equivalents.

Haven't gotten a quote from AlcaLu yet.

Vyatta (last I checked, which was admittedly some time ago) doesn't
have MPLS.

The FreeBSD world can bring zero software cost and a stable platform,
but no MPLS.

MPLS implementations have abundant IPR, which among other things prevents
practical merging with the Linux kernel.

Mikrotik brings most (though not all) of the features one would
want... a good enough feature set, let's say... but is a non-stop
flow of bugs. I don't think a week or two goes by where one of my
friends doesn't submit some sort of reproducible bug to Mikrotik.
They've also been "looking into" DPDK for 2.5 years now; it hasn't
shown up yet. I've used MT for 10 years and I'm always left wanting
just a little more, but it may be the best balance between the features
and performance I want and the ability to pay for it.




-----
Mike Hammett
Intelligent Computing Solutions
http://www.ics-il.com

----- Original Message -----

From: "Mehmet Akcin" <mehmet () akcin net>
To: "micah anderson" <micah () riseup net>
Cc: nanog () nanog org
Sent: Monday, January 26, 2015 6:06:53 PM
Subject: Re: scaling linux-based router hardware recommendations

Cumulus Networks has some stuff,

http://www.bigswitch.com/sites/default/files/presentations/onug-baremetal-2014-final.pdf


Pretty decent presentation with more details you'd like.

Mehmet

On Jan 26, 2015, at 8:53 PM, micah anderson <micah () riseup net>
wrote:


Hi,

I know that specially programmed ASICs on dedicated hardware like
Cisco, Juniper, etc. are always going to outperform a general-purpose
server running GNU/Linux, *BSD... but I find the idea of
trying to use proprietary, NSA-backdoored devices difficult to
accept, especially when I don't have the budget for it.

I've noticed that even with a relatively modern system (Supermicro
with a 4-core 1265LV2 CPU with 9MB cache, Intel E1G44HTBLK
server adapters, and 16 gigs of RAM), you still tend to get a high
percentage of time working on softirqs on all the CPUs when pps
reaches somewhere around 60-70k and the traffic approaches
600-900 Mbit/sec (during a DDoS, such hardware typically cannot
cope).
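[For readers hitting the same wall: a quick way to see whether that softirq time is piling up on one core is to look at the NET_RX counters and the NIC's IRQ layout. A minimal sketch, assuming a Linux box; `eth0` is a placeholder interface name, and the `|| true` guards just let it run harmlessly where a tool or interface is absent.]

```shell
# Show how NET_RX softirq work (packet receive processing) is spread
# across CPUs; one counter far larger than the rest means a single
# core is absorbing most of the receive load.
awk 'NR==1 || /NET_RX/' /proc/softirqs 2>/dev/null || true

# See which IRQs the NIC raises and which CPUs have been servicing them.
grep -i eth0 /proc/interrupts 2>/dev/null || true

# Multi-queue NICs can spread receive load across cores via RSS;
# check how many RX queues (channels) the adapter exposes.
ethtool -l eth0 2>/dev/null || true
```

If all the NET_RX growth is on CPU0, the NIC's queues or IRQ affinity are the first place to look before blaming raw CPU speed.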

It seems like finding hardware more optimized for very high
packet-per-second counts would be a good thing to do. I just have no
idea what is out there that could meet these goals. I'm unsure if faster
CPUs or more CPUs is really the problem, or networking cards, or
just plain old-fashioned tuning.
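[On the plain-old-tuning front, the usual knobs for exactly this symptom are software receive steering and bigger buffers. A minimal sketch, not a definitive recipe: it assumes Linux, root, and an interface actually named `eth0`; the `|| true` guards let it run harmlessly where those assumptions don't hold.]

```shell
# Spread software receive processing (RPS) across CPUs 0-3 when the
# NIC exposes only one RX queue; the hex mask f = 0b1111 selects
# those four CPUs.
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus 2>/dev/null || true

# Let each CPU queue more packets before the kernel starts dropping
# them during bursts.
sysctl -w net.core.netdev_max_backlog=30000 2>/dev/null || true

# Larger NIC ring buffers absorb short bursts, at some latency cost.
ethtool -G eth0 rx 4096 2>/dev/null || true
```

None of this turns a general-purpose server into an ASIC, but it often moves the point where softirq saturation kicks in.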

Any ideas or suggestions would be welcome! micah



