Firewall Wizards mailing list archives

Re: WWW protectors


From: Bennett Todd <bet () rahul net>
Date: Fri, 6 Mar 1998 08:45:39 -0800

1998-03-06-11:39:10 Mark Le Vea:
> I'm looking for pointers to packages that protect web servers.

So far, I've always been happy with a simple approach:

1) Configure up the server as tight as possible. If you have to have
   complex active content, databases, etc., split it up onto different
   servers as much as possible to help confine penetrations. Don't do
   any DNS at all. On the base server don't allow anything except port
   80, and use none but the most closely-audited CGI scripts. If other
   protocols or large, complex CGI scripts are required, evict them onto
   a separate server. (A rough hardening sketch follows this list.)

2) Stick the server[s] behind a screening router. Allow nothing but port
   80 to the main front server from the Internet. If you have split into
   multiple servers, hang each of them off its own router interface
   and use the tightest possible rules to govern what traffic is
   permitted between machines. (A sample rule set follows this list.)

3) Manage the content on the server[s] with ssh configured up really
   tight. I like to have an inside development area, with developers
   permitted to check into a CVS tree, and a separate group of people
   who _can't_ check stuff into the tree but who can run a script
   (sketched after this list) to

        (a) check the new snapshot out;
        (b) run validity checks --- e.g. weblint;
        (c) pop up their browser of choice, to let them review the content;
        (d) tag the CVS tree to document the push; and finally
        (e) use rsync over ssh to update the real live content.

   The people authorized to update the live site would typically be the
   same people who are authorized to issue press releases and
   advertising and so on.
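
Just to make step 1 concrete, here's a rough sketch of the sort of
stripping-down I mean, assuming a Unix box where all the extra services
come out of inetd; the paths are illustrative and vary from system to
system:

    # Comment out every service in inetd.conf (telnet, ftp, finger, ...)
    sed 's/^[^#]/#&/' /etc/inetd.conf > /etc/inetd.conf.tight &&
        mv /etc/inetd.conf.tight /etc/inetd.conf
    kill -HUP `cat /var/run/inetd.pid`     # pid file location varies

    # Then confirm the web server on port 80 is the only thing listening.
    netstat -an | grep LISTEN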
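
For step 2 any screening router will do; if it happened to be a Linux
box doing the filtering with ipchains, the rules for the front server
might look roughly like this (the addresses are made up for
illustration):

    ipchains -P forward DENY                # deny everything by default

    # Internet -> front web server: port 80 only.
    ipchains -A forward -p tcp -d 192.0.2.10 80 -j ACCEPT
    # Replies back out from the server (no SYNs originating there).
    ipchains -A forward -p tcp -s 192.0.2.10 80 ! -y -j ACCEPT

    # Content pushes over ssh from the inside management host.
    ipchains -A forward -p tcp -s 10.1.1.5 -d 192.0.2.10 22 -j ACCEPT
    ipchains -A forward -p tcp -s 192.0.2.10 22 -d 10.1.1.5 ! -y -j ACCEPT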
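
And the push script in step 3 might look roughly like the sketch below;
the repository, module, tag, host and path names are all made up for
illustration:

    #!/bin/sh
    # Push the current CVS snapshot to the live web server.
    set -e

    repo=/cvs/repository
    module=www
    snapshot=push.$$
    tag=push-`date +%Y%m%d-%H%M`

    cd /tmp

    # (a) check the new snapshot out
    cvs -d $repo checkout -d $snapshot $module

    # (b) run validity checks, e.g. weblint
    find $snapshot -name '*.html' -print | xargs weblint

    # (c) let the pusher review it in their browser of choice
    echo "Review file:///tmp/$snapshot/index.html, then press return to push"
    read junk

    # (d) tag the CVS tree to document the push
    (cd $snapshot && cvs tag $tag)

    # (e) rsync over ssh to update the real live content
    rsync -av --delete -e ssh $snapshot/ webpush@www-server:/home/httpd/html/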

With an architecture like this I don't see where a ``package to protect
a web server'' could add anything. And I don't see how such a package
could match the overall protection of this architecture.

-Bennett


