nanog mailing list archives

Re: problems sending to prodigy.net hosted email


From: Stephen Satchell <list () satchell net>
Date: Mon, 19 Mar 2018 18:20:20 -0700

On 03/17/2018 02:04 PM, Chris wrote:
> Stephen Satchell wrote:
>
>> (I know in my consulting practice I strongly discourage having ANY
>> other significant services on DNS servers.  RADIUS and DHCP, ok, but
>> not mail or web.  For CPanel and PLESK web boxes, have the NS records
>> point to a pair of DNS-dedicated servers, and sync the zone files
>> with the ones on the Web boxes.)
> Why not? Never had a problem with multiple services on linux, in
> contrast to windows where every service requires its own box (or at
> least vm).
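
Before I answer, let me make the quoted suggestion concrete. One way to do the zone sync, as a rough sketch assuming BIND on the dedicated servers and key-based SSH (host names and paths are placeholders):

  # On the cPanel/Plesk box, after zone files change: push them to
  # both dedicated name servers and reload BIND on each.
  rsync -az /var/named/ ns1.example.net:/var/named/
  rsync -az /var/named/ ns2.example.net:/var/named/
  ssh ns1.example.net rndc reload
  ssh ns2.example.net rndc reload

You could just as well have the dedicated servers pull the zones as secondaries via AXFR; the point is that the NS records never point at the web box itself.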

As to "why not":

1. Spreading the attack surface across multiple systems. Someone nailing your web server doesn't affect your DNS server, or mail, or authentication, or logging, or...

2. Robustness during maintenance. Browsers cache DNS responses, including NXDOMAIN responses; if DNS lives on the same box as the web server, taking that box down means clients cache the failure. Keep them separate, and your web server being inaccessible while you do something with it doesn't mean the browser is completely disconnected when you bring it back up. Ditto for mail servers.

3. Application-specific attack mitigation on each type of server. It's far easier to lock down a DNS server if you don't have any other significant services running. Ditto for mail, ditto for Web, ditto for syslog servers, and so forth.

4. Limit what an attacker can do if s/he "breaks through" your protections. I even go so far as to impose severe limits on the internal, nominally "trusted" network, to minimize cross-server attacks through the local network; see the sketch after this list. In short, systems should *not* blindly trust neighboring systems.
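
To make point 4 concrete: a minimal sketch of the kind of limits I mean, using iptables on the DNS box. The addresses are placeholders from the documentation range, and the specific allowances are assumptions; adjust to what each host actually needs:

  # Allow only what this host actually needs from the "trusted" LAN:
  # SSH from the admin workstation, zone pushes from the web box.
  iptables -A INPUT -s 192.0.2.10 -p tcp --dport 22 -j ACCEPT
  iptables -A INPUT -s 192.0.2.20 -p tcp --dport 22 -j ACCEPT
  # Everything else arriving from the local network is dropped.
  iptables -A INPUT -s 192.0.2.0/24 -j DROP
  # (Port 53 from the world is permitted by separate rules, as usual.)

The point is the default: traffic from the LAN gets dropped unless explicitly allowed, not the other way around.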

I don't like publicly facing VMs. They are fine for internal functions that are *not* exposed to the world. (You can't help it with cloud-hosted objects, but just remember that cloud servers can be compromised just as easily as iron-hosted servers, perhaps even more easily.)

About cloud: I prefer that any cloud-hosted servers not have mission-critical functions. The best use of cloud servers, in my opinion, is to host user-interface tasks and tunnel through to servers in my machine room for the data. It depends on the application, but my critical data stays on my machines.
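
A rough sketch of that tunnel pattern, assuming plain SSH; the host names and port are placeholders:

  # Run from a machine-room server: opens port 5432 on the cloud UI
  # host, forwarded back here over SSH, so the UI box can reach the
  # data service with no inbound hole in the machine-room firewall.
  ssh -f -N -R 127.0.0.1:5432:127.0.0.1:5432 deploy@ui.cloud.example.net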

I still recall when I was working on the ARPAnet and the Morris worm first launched. I also recall when the IMPs were flipping bits so that packets would have their addresses changed and keep circulating forever until the entire ARPAnet was restarted. That's when a TTL was introduced into the NCP implementation of the time.

