Firewall Wizards mailing list archives

Re: Why is this secure??


From: chuck <chuck () yerkes com>
Date: Fri, 26 Nov 1999 11:18:48 -0800

Steve,

It CAN be more secure because you are using cgi-scripts to limit the
queries that can be made.  That said, you are trusting your business
and security to those CGI scripts and to the limits that your database
server can impose.

The firewall should keep undesired protocols from getting "inside"
but those services allowed IN are unmonitored by the firewall.

A common way around that is to use a proxy.  For example, for ftp
you might use the TIS FWTK ftp proxy.  With this, you connect TO the
proxy and the proxy connects to the target FOR YOU.  The proxy ALSO
allows the admin to 'monitor' at the protocol level.  For example,
I might allow ftp "GET"s for my users, but not allow them to PUT
files out.  The proxy also does some bounds checking.  5000-character
lines might overflow buffers on an ftp server.  The proxy can sit
there and check for that and stop any "wrong" packets or lines
from transiting.
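As a toy sketch of the kind of protocol-level check such a proxy performs (this is hypothetical illustration in Python, not the actual FWTK code; the command set and line limit are assumptions):

```python
# Hypothetical sketch of a protocol-level FTP proxy filter.
# NOT the TIS FWTK implementation -- just the idea: whitelist verbs,
# bounds-check lines, and refuse anything that smells wrong.

ALLOWED_COMMANDS = {"USER", "PASS", "CWD", "PWD", "LIST",
                    "TYPE", "PASV", "RETR", "QUIT"}   # RETR = "GET"; STOR ("PUT") absent
MAX_LINE = 512   # assumed cap; 5000-character lines get dropped, not forwarded

def check_command_line(line: str) -> bool:
    """Return True if the client's command line may pass to the real server."""
    if len(line) > MAX_LINE:            # bounds check: overlong lines may be overflow attempts
        return False
    verb = line.split(" ", 1)[0].upper()
    return verb in ALLOWED_COMMANDS     # policy check: downloads yes, uploads no

print(check_command_line("RETR report.txt"))     # True  -- GETs allowed
print(check_command_line("STOR evil.exe"))       # False -- PUTs blocked
print(check_command_line("RETR " + "A" * 5000))  # False -- overlong line rejected
```

A real proxy would of course track protocol state and relay the data channel too; the point is only that it inspects each line before it ever reaches the server behind it.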

So the firewall may allow a protocol through, but if the data
coming in on that protocol is insecure, then you're screwed.  So
when you put a firewall in front of your IIS server, the IIS server
is STILL vulnerable to attacks using port 80, but may be protected
from scans, and ICMP floods, etc by the firewall.  The firewall
is protecting the machine, but the protocols allowed through still
provide a vulnerability.

That said, with your proposal, the CGI scripts are acting as that proxy.
Now anything bad that can be done through the CGI scripts will
not be stopped by the firewall.  This is why it's CRITICAL that
these scripts:
(1) don't trust the input - it must be examined closely.
    SQL commands are derived by parsing the data, not by executing
    it.  And you do bounds checking and check the whole thing for
    legal characters (e.g. no NewLines, ESCs, etc).
    Basically, you want the script to parse the input, figure it out and
    rebuild it with its own routines.
(2) don't allow arbitrary forms to run this script.  It's usually
    a simple hack to take someone else's form, modify it to your needs
    and run it with THEIR CGI script.  I especially love forms that use
    "hidden" variables for things like filenames.  Just change those on
    your version of the form and you can read any file.  My users get
    cranky that I won't allow CGI that I haven't audited, but I find
    this kind of thing a fair amount.
(3) These servers have to be as secure as you want your data.  I get
    retentive and lock Apache into a chroot area on a read-only
    disk.  And it's on a DMZ protected by a firewall.  The only
    read/write area might be for logs.
    Oh yeah, expect that someone can break into this machine anyhow and
    plan for that.  That's where the DMZ comes in.  Once they have the
    DMZ machine, they shouldn't be allowed anywhere else.  Were this on
    the inside, and you have the typical 'soft chewy center' of most
    companies, then you're immediately screwed.
    Again, treat the DMZ machine as a hostile, pre-corrupted machine.
(4) Secure the actual DATA server.  Perhaps it only allows user WEB
    to come from the DMZ hosts and user WEB can only see certain parts
    of certain tables.
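Point (1) can be sketched in a few lines of Python (a hypothetical example; the part-number format and query are assumptions, not your actual schema):

```python
# Hypothetical sketch of point (1): don't trust the input.  Parse it,
# bounds-check it, whitelist legal characters, and rebuild the SQL with
# your own routines -- never execute what the user sent.
import re

# Assumed format: letters, digits, and dashes, at most 20 characters.
PART_NUMBER_RE = re.compile(r"^[A-Za-z0-9-]{1,20}$")

def build_parts_query(user_input: str) -> str:
    """Derive the SQL from parsed data; the input is only ever a value."""
    if "\n" in user_input or "\x1b" in user_input:   # no NewLines or ESCs
        raise ValueError("illegal control character")
    if not PART_NUMBER_RE.fullmatch(user_input):     # bounds + legal characters in one check
        raise ValueError("not a valid part number")
    # Rebuild the statement ourselves instead of pasting raw input in.
    return "SELECT description, price FROM parts WHERE part_no = '%s'" % user_input

print(build_parts_query("AX-200"))
# build_parts_query("x'; DROP TABLE parts; --") raises ValueError
```

With a modern database driver you would also hand the value to a parameterized query rather than interpolate it into a string, but the whitelist-and-rebuild step is the point here: the script decides what the query looks like, not the user.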


Need to justify costs?
There was a group I know of who went cheap on support in their
organization.  They hired a grad student to do part-time SA work instead
of buying into the organization's IT setup.  Much cheaper, it seemed, to
do it themselves.  Patches were neglected, but ya know, they were mostly
just desktops, no rocket science.  No firewalls, but who's gonna want
these machines?

Then they found out they'd been broken into (sniffers everywhere,
etc).  The org unplugged them from the net.  They had to entirely
rebuild EVERY machine.  No binaries were allowed back on. All source
that was developed (little utilities to digest lab data for students,
that sort) on those had to be audited for back doors.  Every bit of
DATA was suspect.  They had to go back to old tapes to compare the
pre-breakin data, one file at a time.  Had to get the paper books
to compare post-breakin data.

No net access for a couple MONTHS.  No new work for weeks while they
rebuilt their old data.  Had to hire a couple folks to come rebuild
their network.  No trust from the IT folks.

The crackers didn't get anything of value; this target offered
NOTHING.  But they still cost this group tens of thousands of dollars
DIRECTLY, and a hard-to-estimate amount indirectly for work that wasn't
getting done.

Odds of this happening?  More than none.  Cost of your data being
(1) copied or (2) changed?  More than a firewall and SQL proxy?
(in cash and in administrator time to fix it).

Should they insist on no protection, then be sure to get something
in writing showing that they chose that.  This might be a memo from you
to them saying something like "this is to reiterate that your
decision is to not implement the protections that we have outlined
to protect the company's data and network resources"

File a copy of this.  At home.

chuck

Quoting Steve Meeters (meeters () excite com):
I'm not a security expert but have been asked to find a way to allow
customers on the Internet to look up parts information on a server behind
our firewall. The server has a lot of business applications on it and can't
be put in front of the firewall. We are using a Gauntlet firewall. 

I have been reading and following discussions on this list for a while and
have come up with a plan to put an external web server on the third leg of
the firewall and have customers go to this web server, fill out a request
form and submit it. Using cgi scripting, the web server will send the
request through the firewall to the internal server which will then send the
requested information back to the web server, which will forward it to the
customer. 

Like I said, I'm not an expert at this and have come up with this plan based
on what I've read here and in some books. What I need to know is why is this
more secure than letting Internet traffic through the firewall directly to a
web server on this internal system? Putting up an external server is going
to cost more, we'll need another system, web software, and another interface
for the firewall. 

What threats am I specifically opening our network up to by creating a rule
that allows all traffic to the internal server? I read this is a bad idea
but why can't the firewall protect against this? Assume for the sake of
argument the firewall is secure.

What protection does this type of firewall still provide to our network if this
rule is in place? At what OSI levels?

In my plan a rule will be created that will only allow traffic coming from
the external web server to pass through the firewall to the internal server.
This narrows the field from everyone on the Internet to just the one server.

How does this help secure the internal network?

If the external server is compromised doesn't the attacker now have an open
path to the internal server, the same as if the external server wasn't there
at all?

I know these questions sound elementary to you but I drew the short straw on
this one. I think I am heading towards a relatively secure solution, but I
need to justify the $$$.


