
Re: Software router state of the art


From: Sargun Dhillon <sdhillon () decarta com>
Date: Mon, 28 Jul 2008 08:54:38 -0700

This is not exactly true. The modern Linux kernel (2.6) uses some amount of flow tracking in order to do route caching. You can see this on your system with:
"ip route show cache"

It keeps track of source/destination, QoS bits, the inbound and outbound adapters, and so on. Additionally, most systems have the iptables modules loaded in the kernel, including conntrack, which immediately activates connection tracking and considerably slows down software routing. The most practical way of speeding this up would be to keep the route cache in faster memory, though it would be nicer to get rid of the route cache entirely: it can cause problems with eccentric setups, and because cache entries take a moment to be deleted or to expire, it pushes convergence times higher.
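
For example, to check whether connection tracking is active on a box (a rough sketch; module names and /proc paths vary across 2.6 kernel versions):

    # Is a conntrack module loaded?
    lsmod | grep conntrack

    # How many connections are currently being tracked?
    cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count

And if the kernel has the raw table, you can tell netfilter to skip tracking entirely, which avoids the conntrack overhead on a pure router:

    iptables -t raw -A PREROUTING -j NOTRACK
    iptables -t raw -A OUTPUT -j NOTRACK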

Joe Greco wrote:
> On Sat, Jul 26, 2008, Florian Weimer wrote:
>> Was this with one packet flow, or with millions of them?
> I believe it was >1 flow. The guy is using an Ixia; I don't know how
> he has it configured.
>
>> Traditionally, software routing performance on hosts systems has been
>> optimized for few and rather long flows.
> Yup.
>
> And I always ask that question when people claim really high(!) throughput on
> software forwarding. It turns out their throughput was single source/single
> dest, and/or large packets (so high throughput, but low pps.)

I'm not sure where the claims about "{one, few} flow{s}" are coming from.
Certainly the number of flows on a typical UNIX box acting as a router is
not that relevant unless you specifically configure something like
stateful firewalling, because the typical UNIX box simply doesn't have a
*concept* of "flows."  It deals with packets.  This has its own problems,
of course, but handling high levels of traffic in many flows is not one of
them.

There are other software routing platforms that DO flows, but the above
mentions "host[s] systems", so I'm reading that as "UNIX router."

On the flip side, packet size is definitely a consideration.  This topic
has been beaten to death on the Zebra mailing lists by myself and others
in the past.

With yesterday's technology (P4 3.0G, 512MB RAM, PCI-X, FreeBSD 4) we were
successfully dealing with >300Kpps about 3 years ago, without substantial
work.  That was single source/single dest, but with a full routing table.
There's no real optimization for that within the FreeBSD framework, so it
is about the same performance you'd have gotten with multi source/multi
dest.

... JG
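
A quick back-of-the-envelope on the pps vs. throughput point above (my numbers, ignoring Ethernet preamble/IFG and framing):

    1 Gbit/s / (1500 B * 8) =   ~83 Kpps  (large packets)
    1 Gbit/s / (  64 B * 8) = ~1.95 Mpps  (minimum-size packets)

So the same "gigabit" forwarding claim can differ by a factor of ~23 in packets per second, which is why single-flow, large-packet tests look so much better than they should.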



--
+1.925.202.9485
Sargun Dhillon
deCarta
sdhillon () decarta com
www.decarta.com




