Firewall Wizards mailing list archives

Re: Firewalls Compared


From: Devdas Bhagat <devdas () dvb homelinux org>
Date: Wed, 23 Jun 2004 01:47:59 +0530

On 22/06/04 15:18 -0400, Paul D. Robertson wrote:
On Tue, 22 Jun 2004, Devdas Bhagat wrote:

On 22/06/04 12:28 -0400, Paul D. Robertson wrote:
<snip>
While the incidence of worms and DDoS attacks is high, the event costs
often pale in comparison to insider abuse or a critical intrusion.
The costs of worms and DDoS are spread across a very large number of
people, including those of us who actually follow best practices.

Sure, but other than a few specific instances, the costs are relatively
low, especially at places that do things well.  On the other hand, most
the insider stuff I've seen has been in the millions of dollars range
damage-wise, and sometimes into the hundreds of millions.
But those few sites have the resources to do things right. And damage to
those sites does not necessarily affect a large number of other people.
This in no way implies that insider attacks should not be defended
against.
Also, the costs per hit are low. The trouble is that the hit rate is
so high that the numbers start to mount up.
The cost of dealing with spam in the US alone has been estimated
at a few *billion* dollars [1]. Most of that spam comes from worm-infested
systems. And that's just spam. Then there are the other damages (see the
recent firewall-wizards thread on air gaps, which you started, for an example).
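To see how a tiny per-hit cost mounts up, here is a back-of-the-envelope sketch; all of the input numbers are illustrative assumptions, not figures from the cited ASRG presentation:

```python
# Back-of-the-envelope spam cost estimate.
# All inputs are illustrative assumptions, not figures from [1].
spam_per_day = 1e9          # assumed spam messages delivered per day in the US
seconds_per_message = 4     # assumed time to recognize and delete one message
hourly_cost = 20.0          # assumed loaded cost of an employee hour, USD

hours_wasted_per_day = spam_per_day * seconds_per_message / 3600
daily_cost = hours_wasted_per_day * hourly_cost
yearly_cost = daily_cost * 250          # working days per year

print(f"~${yearly_cost / 1e9:.1f} billion per year")   # ~$5.6 billion per year
```

Even with deliberately modest per-message numbers, the aggregate lands in the billions, which is the point about collateral damage.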

Thinking about it a bit more, I guess that what I am saying is that
people who actually follow BCP are collateral damage. *That* is not
appreciated.



Frequent attacks with a lower cost will surpass infrequent attacks with
a higher cost in many cases.  Still, longer-term and strategically,
intrusions and infrastructure compromise are much more worrisome than
local desktop disruption.  DDoS can be taken care of with end-to-end QoS,
The desktop disruption is potentially damaging because of the sheer
number of desktops.

Sure, however insider abuse is potentially damaging because of
specialized knowledge of the environment and the sheer amount of access
afforded.
True. Most people try to defend themselves against one big attack. They
forget the costs of dealing with tons of little ones.

an evil we may eventually have to bite the bullet on, just like voice
Except that a few thousand zombies spewing out a few kbit of traffic
will not really be QoSable.
64 kbps of traffic * 4000 zombies [1] = 256 Mbps of traffic.
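The arithmetic above can be made concrete, including a comparison against the T1 lines mentioned later in the thread (the zombie count and per-host rate are the assumed figures from the line above):

```python
# Aggregate bandwidth of a distributed low-rate flood.
per_zombie_kbps = 64        # each zombie sends only 64 kbit/s
zombies = 4000              # assumed botnet size, per the figure above

total_kbps = per_zombie_kbps * zombies
total_mbps = total_kbps / 1000
t1_mbps = 1.544             # capacity of a T1 line

print(f"{total_mbps:.0f} Mbps total")                 # 256 Mbps total
print(f"{total_mbps / t1_mbps:.0f}x a T1's capacity") # 166x a T1's capacity
```

No single source stands out as abusive, yet the aggregate swamps anything short of a large datacenter's uplink.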

Sure they will- if QoS is enforced on the leaf nodes.
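For concreteness, leaf-node enforcement of this sort might be sketched with Linux tc; the interface name, rates, and class layout here are illustrative assumptions, not anything prescribed in the thread:

```shell
# Sketch: egress shaping at an access/leaf device using Linux tc.
# Interface (eth0) and all rates are illustrative assumptions.

# Cap the subscriber's total egress at a contracted 1 Mbit/s.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit

# Police ICMP (IP protocol 1) much harder, so a ping flood from an
# infected host is dropped at the edge instead of aggregating upstream.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip protocol 1 0xff \
    police rate 32kbit burst 10k drop flowid 1:10
```

The catch, as the reply below notes, is that this only works where a managed device actually sits at the leaf.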

Are you going to enforce QoS at that level? It need not be ICMP traffic
either. 64 kbit/sec of http traffic, or SMTP, or anything else.
It would be easy to write a program that generates a limited amount of
traffic against a site from a single host and distribute it to a large
number of hosts.

Sure, but if you have end-to-end QoS, you can potentially allow things
per-flow once you move control out of band.  QoS allows you to do things
One packet establishes a flow? The router in the middle dies.

like allow a routing arbiter to get through, or even
authentication/authorization traffic like the tagged packet "marker dye"
stuff.
Yeah. But that doesn't let the user work.

QoS will *not* work against a DDoS for most sites. The large sites have
enough bandwidth to handle the traffic, but for those people whose
servers are not in large datacenters but on T1 or equivalent lines, this
could be a nightmare.

It depends on the architecture. If we go end-to-end QoS with channelized
bandwidth, it could very well work, especially if you get trusted-neighbor
damping; the damping packet would just have to flow back to the origin,
but we're already doing forward-path-based routing, so I think it'd be a
fairly minor thing.

http://groups.google.com/groups?dq=&hl=en&lr=&ie=UTF-8&c2coff=1&safe=off&threadm=cb3ace%241ddm%241%40FreeBSD.csie.NCTU.edu.tw&prev=/groups%3Fdq%3D%26num%3D25%26hl%3Den%26lr%3D%26ie%3DUTF-8%26group%3Dmailing.postfix.users%26c2coff%3D1%26safe%3Doff%26start%3D50

The damage from Blaster/Welchia/Nachi could not really have been controlled
by QoS. One 92-byte ICMP packet going out is *NOT* QoSable. It did cause
routers to fall over and die due to the sheer number of packets being
originated to different destinations.
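The failure mode here is state exhaustion rather than raw bandwidth, which a rough sketch makes clear (the scan rate and infected-host count are assumed, illustrative numbers; only the 92-byte packet size comes from the thread):

```python
# Why a 92-byte ICMP scan kills routers: state, not bandwidth.
# Scan rate and host count are assumed, illustrative numbers.
pkt_bytes = 92              # Welchia's ICMP echo request size
scan_pps = 100              # assumed packets/s emitted by one infected host
infected_hosts = 500        # assumed infected hosts behind one router

bandwidth_mbps = pkt_bytes * 8 * scan_pps * infected_hosts / 1e6
new_dests_per_sec = scan_pps * infected_hosts  # scanning: each packet hits a new destination

print(f"{bandwidth_mbps:.1f} Mbps of traffic")    # 36.8 Mbps: easy to carry
print(f"{new_dests_per_sec} new destinations/s")  # 50000: fatal for per-flow caches
```

A router that forwards 36.8 Mbps without breaking a sweat can still die creating tens of thousands of new flow-cache or route-lookup entries every second.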

It could if you did QoS on the switch- you just have to be able to push
QoS policy out to the leaf nodes, where the bandwidth matters.  More integrated
Again, we are speaking of home users on broadband. What switch?

QoS to take into account spikes is probably a better improvement, if we
can't go to channelized stuff, but I really think channels are the way to
go- you fight for the channel that user traffic goes on, and we'll treat
floods like collisions- DWDM makes that almost interesting in a MAN
environment...

<snip> 
Or as MJR once proposed, don't give them full Turing-complete
systems. Give them embedded systems (or dumbed-down PCs) which do very
few things.

That's really only feasible once we slow down innovation-wise.
I agree with your point. I pointed out one more solution.
Though a Linux box with OS and apps in ROM would be interesting.

<snip>
In my experience, it's been more ignorance that they *could* set the
firewall up that way, or a lack of power to set it up that way.
Different locations, different perspectives :).
The cost of the system is a very large factor in influencing firewall
purchases.

Devdas Bhagat

[1] http://word-to-the-wise.com/asrg.ppt

_______________________________________________
firewall-wizards mailing list
firewall-wizards () honor icsalabs com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards

