Firewall Wizards mailing list archives

Re: Proxy advantage


From: David Lang <david () lang hm>
Date: Mon, 29 Apr 2013 08:25:09 -0700 (PDT)

If you start with the premise that the only thing that counts as a firewall is a packet filter, especially with deep packet inspection being optional, then you are going to be in rather bad shape.

I have run a fairly large organization (800+ people, 100+ separate networks) on proxy firewalls; it can be done. In some areas it bypasses whole classes of problems.

Even for user desktops you can do it, but you need to get a good proxy, not just install squid and think that you've gained a lot.

Yes, it breaks some things, but rather than only 10% of apps being 'good', it's more like 1% of apps being completely broken and 20% needing special configuration. The vast majority of that 20% are not desktop apps, and if you are willing to look at other tools rather than fighting to make a proxy-unfriendly tool work, it's usually not a big problem.

Remember that you will need to do SSL MITM with your proxy, so you will need to deploy your own CA certs on desktops.
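
Roughly, generating such a CA might look something like this (a minimal sketch only; the file names, the CN and the certutil import step below are placeholders, not a recommendation of any particular setup):

    # A minimal sketch, assuming a Unix host with the openssl CLI installed.
    # It creates the private CA the proxy uses to re-sign server certificates;
    # proxy-ca.crt is the public half that has to be pushed to every desktop's
    # trust store. File names and the CN are illustrative only.
    import subprocess

    def make_proxy_ca(key="proxy-ca.key", crt="proxy-ca.crt", days=3650):
        # CA private key -- this stays on the proxy and nowhere else
        subprocess.run(["openssl", "genrsa", "-out", key, "4096"], check=True)
        # self-signed CA certificate -- this is what gets distributed
        subprocess.run(["openssl", "req", "-x509", "-new", "-key", key,
                        "-sha256", "-days", str(days),
                        "-subj", "/CN=Internal Proxy CA", "-out", crt],
                       check=True)

    if __name__ == "__main__":
        make_proxy_ca()
        # On Windows desktops the public cert can then be imported with
        #   certutil -addstore -f Root proxy-ca.crt
        # or, more realistically at this scale, pushed out centrally (e.g. via
        # AD group policy).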

David Lang

On Tue, 16 Apr 2013, Magosányi Árpád wrote:

On 04/15/2013 11:13 PM, Paul D. Robertson wrote:
I've always railed against DNS tunneling.  It seems to be rearing its ugly head again.  Today with all the in-band HTTP 
attacks, it once again seems the major advantage of a proxy server is not having to pass DNS down to the client.  Should 
this be a best practice?

It seems like a good idea which is easy to execute. I can also see you ending up either with hundreds of angry end-users who were using non-HTTP applications, or carefully migrating thousands of them one by one to a new AD domain which does not know about your real DNS servers, and then spending two months busily analysing HTTP proxy logs to figure out how many of your users were connected to the C&C.
Okay, I am exaggerating, and I do think that the idea is worth a thought. I just wanted to point out that:
1) There are exceptions, and there always will be: you will still have to provide Internet DNS to them and keep your measures against DNS tunneling in place. And yes, it is much easier if you know that anything doing more than 10 lookups/min is either your HTTP proxy or a reverse proxy (see the sketch after this list).
2) You will still be hit by HTTP reverse proxies. And yes, you at least have the opportunity to control them from a central point, as before.
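
To illustrate the lookup-rate point in 1) (a minimal sketch only; the log format, the proxy addresses and the 10/min threshold are assumptions for the example, not something prescribed here), flagging chatty resolvers could look like this:

    # A minimal sketch, assuming a pre-parsed DNS query log with one
    # "unix_timestamp client_ip queried_name" entry per line on stdin.
    # The threshold and the proxy addresses are placeholders.
    from collections import Counter
    import sys

    PROXIES = {"10.0.0.10", "10.0.0.11"}   # your forward/reverse proxies
    THRESHOLD = 10                          # lookups per minute per client

    def noisy_clients(lines, threshold=THRESHOLD):
        per_minute = Counter()
        for line in lines:
            ts, client = line.split()[:2]
            per_minute[(client, int(float(ts)) // 60)] += 1
        # any non-proxy client above the per-minute rate deserves a look
        return sorted({client for (client, _minute), count in per_minute.items()
                       if count > threshold and client not in PROXIES})

    if __name__ == "__main__":
        for client in noisy_clients(sys.stdin):
            print(client)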

On a general level:

The best practice would be to proxy everything, and to let in only the traffic which adheres to the relevant standards, which the firewall understands, and which it finds harmless.
Let's see how it works out in the real world:
1. Adheres to standards
   Maybe 10% of the current traffic? Proprietary protocols and protocol
extensions, misimplementations, horrific web pages, etc.
2. The firewall understands it
   Your average packet filter is ignorant of nearly anything that is not needed for pushing the traffic through the device.
   Your average proxy firewall knows a bit more about the basic protocols, so it can stop some attacks at that level.
   And there are the toolkit firewalls (I know only Zorp as an instance of this kind), which know all the ins and outs of the basic protocols, can do anything with them, and are relatively easy to teach higher-level ones. But they need a lot of tuning to reach the level where they really give better protection than an average firewall.
   There are high-level gateways (like the XML proxies) which may understand things even at layer 7, but they know only a very few protocols, and in most cases only a subset of each.
   And there are the ESBs, which can do anything at the cost of configuration complexity (nearly like a toolkit firewall, though perhaps for fewer protocols), but they have a distinct use case, which is not about security.
3. The firewall finds it harmless
   If the traffic adheres to standards and we understood it, then we already know whether it is harmless. With protocols and passive content this is easy, and we can prove that we understood the content by disassembling and reassembling it (this is what Zorp and ESBs do; see the toy sketch after this list).
   But active content (from software updates through PDF/Word documents to JavaScript) is another matter. We either trust it based on the provider of the content, deny it, try to get some assurance about it, or use some kind of sandbox (from the one built into the web browser/Java VM to malware isolation products). These options are either unacceptable from the business perspective (deny), inherently insecure (most of the malware detection stuff violates the "default deny" principle), carry an extensive operational burden (maintaining the trust-related database / ensuring leakless sandboxen), or all of the above.
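
As a toy illustration of the disassemble/reassemble idea above (a sketch of the principle only, not how Zorp or any ESB is actually implemented; the allowed methods and headers are arbitrary examples), rebuilding a request from only the fields we parsed might look like this:

    # Toy sketch: parse a request into fields we understand, then rebuild it
    # from those fields alone, so anything we did not parse does not survive.
    ALLOWED_METHODS = {"GET", "HEAD", "POST"}
    ALLOWED_HEADERS = {"host", "user-agent", "accept",
                       "content-type", "content-length"}

    def rebuild_request(raw: str) -> str:
        request_line, *header_lines = raw.strip().splitlines()
        method, target, version = request_line.split()
        if method not in ALLOWED_METHODS or version not in ("HTTP/1.0", "HTTP/1.1"):
            raise ValueError("request does not adhere to what we understand")
        headers = []
        for line in header_lines:
            name, _, value = line.partition(":")
            if name.strip().lower() in ALLOWED_HEADERS:
                headers.append(f"{name.strip()}: {value.strip()}")
        # reassemble strictly from the parsed, recognised fields
        return "\r\n".join([f"{method} {target} {version}", *headers, "", ""])

    # the unknown X-Weird header simply does not make it through the rebuild
    print(rebuild_request("GET / HTTP/1.1\nHost: example.com\nX-Weird: payload"))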

Once upon a time we optimistically assumed that if enough operators denied non-adhering, potentially harmful content, the providers of such content would adhere to safe standards. It turned out to be a dream.

_______________________________________________
firewall-wizards mailing list
firewall-wizards () listserv icsalabs com
https://listserv.icsalabs.com/mailman/listinfo/firewall-wizards
