Firewall Wizards mailing list archives

Re: Web Site Hacks


From: mcnabb () argus-systems com (Paul McNabb)
Date: Tue, 9 Dec 1997 10:25:23 -0600

 From: Aleph One <aleph1 () dfw net>
 
 On Fri, 5 Dec 1997, Chad Schieken wrote:
 
The question I wrestle with every day is how to protect the webservers
from themselves (CGI, NSAPI, server plugins, etc.). It's been my experience
that most of the web applications being developed take very few steps to
protect themselves.

My solution has been an individual review of each app. This is hugely
expensive, and not reliable (IMHO). But what alternatives are there?

Even putting the "perfect" firewall in front of the webserver doesn't
protect it from its biggest liability: itself.

I think the webservers need to implement some sort of sanity checking
of input to the various server side applications, like CGI, or server
plugins, etc. 

Has anyone ever seen this even considered in any webserver?
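The sanity checking described above could look like the following minimal Python sketch, which rejects any request field that is unexpected, too long, or fails a whitelist pattern. The field names, patterns, and length limits here are illustrative assumptions, not part of any real server.

```python
# Minimal sketch of whitelist-based input sanity checking for CGI-style
# form data.  Field names, patterns, and limits are assumptions.
import re

# Expected fields: name -> (regex the value must match, max length)
ALLOWED_FIELDS = {
    "username": (re.compile(r"^[A-Za-z0-9_]+$"), 32),
    "email":    (re.compile(r"^[^@\s]+@[^@\s]+$"), 128),
}

def sanitize(form):
    """Return only fields that pass the whitelist; raise on anything else."""
    clean = {}
    for name, value in form.items():
        if name not in ALLOWED_FIELDS:
            raise ValueError("unexpected field: %r" % name)
        pattern, max_len = ALLOWED_FIELDS[name]
        if len(value) > max_len or not pattern.match(value):
            raise ValueError("bad value for field: %r" % name)
        clean[name] = value
    return clean
```

The point is that rejection is the default: anything not explicitly expected never reaches the application.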
 
 The solution is to use a trusted operating system and run each CGI script
 or set of CGI scripts in its own compartment. Of course this helps you
 little if you are using NSAPI or some other web server API where the
 program actually runs in the same address space as the web server. Another
 alternative is to have the web server forward CGI requests to another
 server for execution and forward the results to the browser. This should be
 easily accomplished. Locate the CGI server on some firewalled network
 segment where it can't do damage.

Chad is absolutely correct here.  As has been said a million times here
(well, maybe only a few hundred times), a firewall is at best only part
of the solution.  You have to have a security policy, documentation,
training, products, and managerial/corporate willpower.  A firewall
lets SOME traffic through, otherwise a snip with some wirecutters does
a much better and cheaper job.  It is nearly impossible to analyze every
possible packet that could come through, so the bottom line is that you
have unknown people on the outside getting your machines on the inside
to do something on the outsiders' behalf.  And if there are any bugs or
holes in the software or hardware running on the inside machines, those
might be exploitable and allow a breakin, no matter how powerful the
firewall.  Complex protocols, such as file transfer, remote login, http,
and mail services are particularly prone to nasty, exploitable holes.

How can you "crack" a 1024-bit key in less than 0.1 second?

Get the OS that is supposed to be protecting it to reveal it.

After all, the SecurID database, the PGP keys, the SSL keys, the (I hope
encrypted) user passwords, the host tables, and everything else that is
used for security are, in the best case, stored in memory, and most things
are stored somewhere on disk.  Add to that the fact that the OS and all
executable programs are just files that are protected by the OS, and it
becomes clear that OS security needs to be looked at just as much as
network encryption, firewalls, authentication tokens, intrusion detection,
and other more popular and publicized forms of security.

B-level security does a great job in protecting websites, web servers,
administration machines, network servers, etc. from poorly designed,
poorly implemented, poorly integrated, or poorly configured software.
BTW, B-level security isn't the only way to harden an OS, but it is one
of the best ways; and it has the advantage of having thousands of man-years
of thought put into its design, development, and deployment.
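Compartments of the kind Aleph One mentions are a trusted-OS feature, but a rough stock-UNIX analog is to fork and drop privileges per CGI program. The sketch below assumes the nobody uid/gid (65534) and is far weaker than real mandatory access control; it only illustrates the idea of per-script isolation.

```python
# Sketch: a weak, stock-UNIX analog of per-script compartments --
# run each CGI program in its own unprivileged child process.
# The uid/gid values and workdir are assumptions.
import os

def run_compartmented(script_path, uid=65534, gid=65534, workdir="/"):
    """Fork, drop to an unprivileged uid/gid (when running as root),
    then exec the CGI program.  Returns the raw waitpid exit status."""
    pid = os.fork()
    if pid == 0:                      # child
        os.chdir(workdir)
        if os.getuid() == 0:          # privilege drop requires root
            os.setgid(gid)            # drop group first, then user
            os.setuid(uid)
        os.execv(script_path, [script_path])
    _, status = os.waitpid(pid, 0)    # parent waits for the script
    return status
```

A B-level system enforces this separation in the kernel with labels; this sketch only approximates it with discretionary UNIX credentials.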

paul

---------------------------------------------------------
Paul McNabb                     Argus Systems Group, Inc.
Vice President and CTO          1809 Woodfield Drive
mcnabb () argus-systems com        Savoy, IL 61874 USA
TEL 217-355-6308
FAX 217-355-1433                "Securing the Future"
---------------------------------------------------------


