Firewall Wizards mailing list archives
Re[2]: Next Generation Firewall (long)
From: Rick_Giering_at_mpg003 () ccmailgw mcgawpark baxter com
Date: Wed, 3 Dec 1997 16:56:25 -0600
Subject: Re: Next Generation Firewall
Author: James Slupsky <jslu () alc ca>
Date: 12/2/97 9:41 AM

>Rick Giering said:
>>
>>C) Developers will move to encryption/compression both to protect
>>their content and applets and as a way to defeat these controls.
>>(i.e. has anyone tried to detect stuff coming through an SSL pipe? I
>>don't think so)
>>D) Once Microsoft makes it brain-dead easy to develop client/server
>>apps using RPC (probably using COM/DCOM), developers will
>>move to it very quickly. The result will be many holes punched
>>through a firewall; one for each application/version.
>
>Rick, I have a few questions:
>
>1. If the system of encryption and authentication for the RPCs is a
>good one (such as Kerberos or DCE version 1.1), then don't some of
>these concerns go away? (specifically, the authorized user concerns)

It sounds like you're looking at a "customer/vendor" model where the
customer is "out there somewhere" connecting to your services. Remember
this is only one half of the environment. But, given your model ...

First, you'll have to assume that the authentication used is a good one
(i.e. can't be faked, out-guessed, etc.) and, more importantly, that the
application developers have implemented it correctly. It's very easy to
circumvent good security for the sake of things like fast-authentication
and auto-reconnect. Second, you'll have to assume that the application is
free from bugs, buffer overflows, and the like. Third, you'll have to
assume that the developers didn't put in back doors so they could debug
and test their application from the "outside." I don't think any of these
assumptions are valid, but let's assume they are...

>2. If we can be pretty sure of who is connecting to our systems,
>then is your concern above primarily for DOS attacks? I feel I am
>missing something here (I am a newbie to this group, so forgive me).

Nothing wrong with being a newbie.
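To make the point about fast-authentication and auto-reconnect concrete, here is a minimal sketch (all names and fields invented for illustration, not from any real Kerberos or DCE implementation) of how a reconnect shortcut can gut an otherwise sound scheme:

```c
#include <assert.h>

/* Hypothetical sketch: a Kerberos-style ticket carries an expiry
 * time.  The correct check enforces it; a "fast reconnect" path
 * skips it to save a round trip to the authentication server. */

struct ticket {
    int  issued_ok;     /* ticket was validly issued at some point */
    long expires;       /* expiry time, seconds since epoch        */
};

/* Correct check: the ticket must be valid AND unexpired. */
int auth_strict(const struct ticket *t, long now)
{
    return t->issued_ok && now < t->expires;
}

/* The shortcut: on reconnect, accept any previously issued ticket.
 * A captured or replayed ticket now works forever -- the flaw. */
int auth_fast_reconnect(const struct ticket *t, long now)
{
    (void)now;          /* expiry deliberately ignored */
    return t->issued_ok;
}
```

The protocol on the wire can be flawless; one convenience path like this in the application code is all it takes.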
We all were one at one time, and I consider myself one when compared to
the likes of MJR.

In the case you've laid out (a customer attaching to a vendor), recognize
that you've established a tie between two "trusted" zones. The tie is the
application interface, and the trust zones are your company and the
customer's environment (either home or company). Also remember that a
server-side application is frequently completely trusted on the system
it's running on, and most client-side applications have free rein on
their systems. You have assumed your customer's intentions honorable,
that his system is secure from those that aren't so honorable, that he
practices good system administration, and that his system is free from
agents watching and waiting for connections like this. (I won't go into
why these assumptions aren't necessarily valid.) If all of these
assumptions are true, you are safe. Unfortunately, history has proven
that none of these assumptions holds. Even exhaustive code examination
and testing doesn't attack the assumptions about your "customer's"
environment.

Someone correct me if I'm wrong, but this is like a "sendmail" situation
to me. Sendmail accepts mail from a "customer" and agrees to perform some
activity (delivering the message) on behalf of that "customer" in the
trusted environment. Once it has the message, the trust associated with
the message escalates from that of the "customer" (probably very little)
to that of the internal mail delivery system (probably pretty high).
James, if you don't know about sendmail vulnerabilities, I strongly
suggest that you read up on them, because I consider them classic and a
good indicator of things to watch out for. Also, don't forget to research
buffer overflows and other common program bugs. If you need a starting
point, check out books on the "Internet Worm" and look at how it
exploited "finger."
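For anyone researching that starting point, the class of bug the Worm exploited in finger looks roughly like this (an illustrative sketch, not the actual fingerd source):

```c
#include <stdio.h>
#include <string.h>

#define LINESZ 64

/* The bug: strcpy() will happily write past 'line' if the request
 * is longer than LINESZ, clobbering adjacent stack memory -- which
 * is exactly what a crafted request exploits. */
void read_request_unsafe(const char *request, char line[LINESZ])
{
    strcpy(line, request);                  /* no bounds check */
}

/* The fix: bound every copy to the destination size and guarantee
 * the result is NUL-terminated. */
void read_request_safe(const char *request, char line[LINESZ])
{
    snprintf(line, LINESZ, "%s", request);  /* truncates safely */
}
```

One missing length check in a trusted server, and the "customer" is suddenly running code with the server's privileges.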
And even if you assume the sendmail interface is perfect, it still
doesn't address the issues of embedded applets or scripts (HTML-formatted
mail), embedded commands (the infamous "TO: |..."), and embedded viruses
in any attachments. These all present dangers injected into your trusted
zone through this supposedly perfect interface.

>And some comments:
>If you used something like DCE on your firewall, and had separate
>domains (one for the protected environment and one for the internal),
>then I believe the concern about RPC ports would go away. In this
>case, your firewall would act as a sort of RPC proxy, or in
>client-server architectures, it would represent the middle tier in a
>3-tier architecture.

Yes, the firewall would be turned into an RPC proxy. But, without a
knowledge of the underlying functions and activities, it can do nothing
but act as a simple pass-through. However, this removes the protection
the firewall was put in place to provide. Part of my assumptions about
the future is that the number of new RPC-based applications will quickly
outstrip any way to keep up with them.

Also, if any of the RPC calls/results include system names, IP addresses,
port #'s, and the like, pass-through probably won't work. An example is
"port mapper": a simple pass-through of port mapper calls from an outside
caller to an inside system is useless, because the results include port
#'s on a system the outside caller can't get to! The firewall is forced
to completely understand the port mapper programmatic
mechanism/semantics (not just the RPC calls) and create a secure
equivalent. And this may not even be possible!

For the average developer using canned tools to use RPC, this means
understanding how RPC works (invalidating the investment in the tools)
and doing double the work: first, to create the application assuming no
firewall, and second, to create the application assuming a firewall is in
place.
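The port mapper point can be sketched in a few lines (a hypothetical illustration; the names and the 40000+ listening range are invented). The inside portmapper answers "service X is on port 2049" -- a port on a host the outside caller can't reach -- so the proxy must substitute a port it services itself and remember the mapping to relay later connections inward:

```c
#define MAX_MAP 32

/* Table mapping ports the proxy hands out to the real inside ports. */
static struct { int outside; int inside; } port_map[MAX_MAP];
static int map_count    = 0;
static int next_outside = 40000;   /* assumed proxy listening range */

/* Rewrite a port number taken from an inside portmapper reply into
 * a port the proxy itself will service. */
int proxy_rewrite_port(int inside_port)
{
    int out = next_outside++;
    port_map[map_count].outside = out;
    port_map[map_count].inside  = inside_port;
    map_count++;
    return out;                    /* what the outside caller sees */
}

/* When a connection later arrives on an outside port, find the real
 * inside port to relay it to (-1 if unknown). */
int proxy_lookup_port(int outside_port)
{
    for (int i = 0; i < map_count; i++)
        if (port_map[i].outside == outside_port)
            return port_map[i].inside;
    return -1;
}
```

And that's the easy part: the proxy also has to parse the RPC reply to find the port field in the first place, hold state for every outstanding mapping, and do the equivalent for every other protocol that embeds addresses or ports in its payload.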
I don't think development management will want the cost of a project to
double just because it's going through a firewall. The other alternative
is to use a transport that is already "pass through" on most firewalls
(can everyone say "http"? I thought you could :). In the end, the
application team will either find a way around the firewall (probably
using http) or require a hole punched in it. Any hole opens another
avenue of exploitation. Once there are a lot of holes, why have a
firewall?

>I agree that security frequently takes a back seat among IT managers
>(especially those responsible for systems development). For some
>reason, a manager responsible for operation of a network is more apt
>to listen to security concerns.

I agree. I think it comes back to vision from the top. Since top
management doesn't place a high value on security, no one else does. In
companies where there is concern and understanding regarding security at
the highest level, everyone becomes concerned.

This is already so long, but here's a small story about how top
management's attitude toward security has a direct impact. I know of a
company where, over a period of months, laptops and fax machines turned
up missing. This led to a policy that security guards who saw an
unattended laptop out in the open would pick it up and leave a note.
(FYI - only top executives have doors, and even those don't lock.) Within
two weeks, a guard picked up the laptop of an executive who had left it
on top of his desk over a long weekend. The executive didn't like this,
and the policy was quickly changed to exclude all executive laptops from
this protection. This sent a message through the entire organization,
and, over the ensuing months, all manner of security policies started to
be questioned. Within a couple of weeks, the guards stopped picking up
any laptops at all.

>James Slupsky, P.Eng.
>Sr Telecom Analyst
>Atlantic Lottery Corp
>(506)867-5466
>jslu () alc ca