Firewall Wizards mailing list archives

Re: Cisco ASA and FWSM


From: Chuck Swiger <chuck () codefab com>
Date: Wed, 9 May 2007 11:24:44 -0700

On May 4, 2007, at 6:42 AM, <nick.nauwelaerts () thomson com> wrote:
Timo Schoeler wrote:
yeah, and for the ASA-5520 (e.g.) they share one single interrupt.
worst hardware design ever.

Why would that be? Sharing interrupts will result in fewer context
switches. Different interrupts will result in a context switch for each
interrupt, while with shared interrupts multiple interrupts can be
handled in the same context switch.

There's going to be a context switch for each interrupt, regardless  
of whether they are shared or not.  It's typically the case that you  
disable receiving new interrupts while running the interrupt service  
routine (ISR), which means that you aren't going to get or handle  
multiple interrupts in a single context switch.
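As a rough sketch of that model (Linux-style types; the MYNIC_*
registers and the mynic_rx() helper are made up for illustration), a
classic single-threaded ISR looks something like this:

    /* Classic ISR: the kernel keeps this IRQ line masked while the
     * handler runs, so no further interrupt on the line is delivered
     * until we return and the line is acknowledged/unmasked. */
    static irqreturn_t mynic_isr(int irq, void *dev_id)
    {
            struct mynic *nic = dev_id;
            u32 status = readl(nic->regs + MYNIC_INT_STATUS);

            if (!(status & MYNIC_INT_PENDING))
                    return IRQ_NONE;   /* not ours; matters on shared lines */

            writel(status, nic->regs + MYNIC_INT_ACK);  /* quiet the device */
            mynic_rx(nic);             /* all the work happens in IRQ context */
            return IRQ_HANDLED;
    }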

Some of the better-designed systems use fine-grained mutex locking
for each instance of a device driver and/or have a "fast interrupt
handler" which returns as quickly as it can to minimize interrupt
service latency; that approach depends on a separate kernel thread
to perform the bulk of the work which would normally have to be done
within a traditionally designed interrupt handler before it would be
allowed to return.
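For a concrete flavor of that split (this borrows the shape of Linux's
request_threaded_irq() API purely as an illustration; the MYNIC_* names
and mynic_rx() are again invented), the fast handler only quiets the
device and defers the real work to a kernel thread:

    /* Fast handler: runs with the line masked, does the bare minimum. */
    static irqreturn_t mynic_quick(int irq, void *dev_id)
    {
            struct mynic *nic = dev_id;

            if (!(readl(nic->regs + MYNIC_INT_STATUS) & MYNIC_INT_PENDING))
                    return IRQ_NONE;

            writel(MYNIC_INT_MASK_ALL, nic->regs + MYNIC_INT_MASK);
            return IRQ_WAKE_THREAD;    /* defer to the thread below */
    }

    /* Threaded handler: an ordinary kernel thread, so it can sleep,
     * take normal mutexes, and drain the RX ring at its leisure. */
    static irqreturn_t mynic_thread(int irq, void *dev_id)
    {
            struct mynic *nic = dev_id;

            mynic_rx(nic);
            writel(0, nic->regs + MYNIC_INT_MASK);  /* re-enable device IRQs */
            return IRQ_HANDLED;
    }

    /* Registered roughly as:
     *   request_threaded_irq(irq, mynic_quick, mynic_thread,
     *                        IRQF_SHARED, "mynic", nic);
     */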

Whenever you have a shared interrupt, all of the associated ISRs need
to run until one of them recognizes that the interrupt came from its
own device, which causes more work and adds latency to interrupt
handling.  And even if one device has a well-written ISR which is
SMP-safe and does the necessary fine-grained locking, if another
device sharing that interrupt is not SMP-safe, then the system will
still have to obtain "Giant"/"the big kernel lock"/etc for that ISR
to run, which causes significant slowdowns.
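Conceptually, the dispatch for a shared line looks something like the
sketch below (generic C, not any particular kernel's code; the
giant-lock calls are stubs standing in for the real thing):

    #include <stddef.h>

    enum irqreturn { IRQ_NONE, IRQ_HANDLED };

    struct irq_action {
            enum irqreturn (*handler)(int irq, void *dev_id);
            void *dev_id;
            int needs_giant;            /* driver is not SMP-safe */
            struct irq_action *next;
    };

    static void lock_giant(void)   { /* stub for the big kernel lock */ }
    static void unlock_giant(void) { }

    /* Every handler registered on the line has to poke its own device
     * to find out whether the interrupt was really meant for it, so
     * every device on the line pays a cost for every interrupt. */
    static void handle_shared_irq(int irq, struct irq_action *actions)
    {
            struct irq_action *act;
            int handled = 0;

            for (act = actions; act != NULL; act = act->next) {
                    if (act->needs_giant)
                            lock_giant();   /* serializes the whole kernel */

                    if (act->handler(irq, act->dev_id) == IRQ_HANDLED)
                            handled = 1;

                    if (act->needs_giant)
                            unlock_giant();
            }

            if (!handled) {
                    /* spurious interrupt: nobody claimed it */
            }
    }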

In existing SMP x86 hardware, you'll commonly see that when you have  
something like a USB port sharing an IRQ line with a NIC, the result  
will be significantly reduced network performance.

And running their drivers in polling mode instead of interrupt mode
would make this matter even less, which would make quite some sense
in the ASA's case.

Polling has some significant advantages in that it tries to work
on a "process to completion" model and was designed to service
multiple outstanding requests during a single context switch, but you
end up running the polling service routine very often-- typically
some significant fraction, like 50%, of scheduler ticks-- and you'll
generally want to be using a scheduler quantum of around 1ms or so,
which means you're doing perhaps 500 polling ISRs per second
regardless of load.
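A minimal sketch of such a poll routine (the mynic_* helpers and the
per-poll budget are hypothetical) shows the process-to-completion
idea: one call drains whatever has queued up since the last tick,
whether that's fifty packets or none at all:

    /* Called from the periodic scheduler tick rather than from an IRQ.
     * It runs on its share of ticks regardless of load, which is where
     * the "500 polls per second even when idle" cost comes from. */
    static void mynic_poll(struct mynic *nic)
    {
            int budget = 64;            /* arbitrary per-poll work limit */

            while (budget-- > 0 && mynic_rx_ready(nic))
                    mynic_rx_one(nic);  /* process one queued packet */
    }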

If the device was seeing more than 500 interrupts/sec, then polling
would typically improve efficiency, but if it was mostly idle, it's
still going to be using a lot of CPU when polling is in use.  Newer
NICs have much bigger packet buffers and can use interrupt mitigation
techniques to delay firing the IRQ line so that they might accumulate
several packets before the ISR activates, and thus they gain some of
the advantages of polling without also gaining the disadvantage of
keeping the CPU busy even when the device is mostly idle.
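As a toy illustration of why mitigation helps (all of the numbers
below are made-up examples, not figures from any real NIC):

    #include <stdio.h>

    int main(void)
    {
            const double pps         = 100000.0; /* assumed packet rate      */
            const int    max_frames  = 32;       /* coalesce up to 32 frames */
            const double max_wait_us = 100.0;    /* ...or fire after 100 us  */

            /* Time needed to accumulate max_frames packets at this rate. */
            double fill_us = max_frames / pps * 1e6;

            /* The IRQ fires at whichever limit is reached first. */
            double interval_us = fill_us < max_wait_us ? fill_us : max_wait_us;

            printf("without mitigation: %.0f interrupts/sec\n", pps);
            printf("with mitigation:    %.0f interrupts/sec\n", 1e6 / interval_us);
            return 0;
    }

With those example numbers the timeout dominates, so the interrupt
rate drops from 100,000/sec to about 10,000/sec while each packet
waits at most 100 us before the ISR sees it.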

-- 
-Chuck

_______________________________________________
firewall-wizards mailing list
firewall-wizards () listserv icsalabs com
https://listserv.icsalabs.com/mailman/listinfo/firewall-wizards

