nanog mailing list archives

Re: Filter NTP traffic by packet size?


From: James R Cutler <james.cutler () consultant com>
Date: Thu, 20 Feb 2014 16:40:26 -0500


On Feb 20, 2014, at 4:05 PM, Laszlo Hanyecz <laszlo () heliacal net> wrote:

Filtering will always break something.  Filtering 'abusive' network traffic is inherently difficult, because the abuse 
deliberately looks legitimate - you either let it be, or you filter it along with the 'good' network traffic that it's 
pretending to be.  How can you even tell it's NTP traffic - by the port numbers?  What if someone is running OpenVPN on 
those ports?  What about IP options?  What about servers that legitimately return extra data?

This is really not a network operator problem; it's an application problem, if anything.  While it makes sense to 
temporarily filter a large flood to keep the rest of your customers online, that's a very blunt instrument - the 
affected customer is usually still taken offline - and that's with specific, targeted filters.  Blanket filtering 
based on packet sizes is sure to generate some really hard-to-debug failure cases that you didn't account for.

Unfortunately, as long as Facebook loads, most users are happy, so these kinds of practices will likely be 
implemented in many places, with some operators opting to filter NTP or UDP entirely.  Maybe that buys you a little 
peace and quiet today, but tomorrow the same thing will be happening on a different port or protocol that you can't 
inspect deeply and don't dare block.  I can imagine, 10 years from now, writing code that fragments replies into 
100-byte packets to get past filters like this, and everyone loses: the filter is circumvented, the application 
performs worse, and the 'bad guys' have found another path that you can't filter.  When all that's left is TCP port 
443, that's what all the 'abuse' traffic will be using too.

Laszlo


On Feb 20, 2014, at 8:41 PM, Edward Roels <edwardroels () gmail com> wrote:

Curious if anyone else thinks filtering out NTP packets above a certain
packet size is a good or terrible idea.

From my brief testing, it seems 90 bytes for IPv4 and 110 bytes for IPv6 are the typical packet sizes for a client to
successfully synchronize with an NTP server.
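
For what it's worth, those figures line up with the on-the-wire framing of a standard 48-byte NTP client/server
packet.  A quick back-of-the-envelope check in Python, assuming the sizes above are Ethernet capture lengths and the
packets carry no IP options:

    # Sanity check of the capture sizes quoted above (an assumption:
    # they include Ethernet framing and carry no IP options).
    NTP_PAYLOAD = 48   # standard NTP client/server packet (modes 3/4)
    UDP_HEADER  = 8
    IPV4_HEADER = 20
    IPV6_HEADER = 40
    ETH_HEADER  = 14

    print(NTP_PAYLOAD + UDP_HEADER + IPV4_HEADER + ETH_HEADER)   # 90
    print(NTP_PAYLOAD + UDP_HEADER + IPV6_HEADER + ETH_HEADER)   # 110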

If I query a server for its list of peers (ntpq -np <ip>), I've seen
packets as large as 522 bytes in a single packet in response to a 54-byte
query.  I'll admit I'm not 100% clear on what is happening
protocol-wise when I perform this query.  I see there are multiple packets
back and forth between me and the server, depending on the number of peers it
has?
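
For what it's worth, ntpq -p speaks the NTP mode 6 control protocol: a READSTAT query fetches the list of
associations (peers), then a READVAR query is sent per association, and any response that does not fit into one
control-message fragment (a 12-byte header plus up to 468 data octets) is split across several UDP packets with a
"more" bit set.  That would explain both the packet count growing with the number of peers and the sizes above: 54
bytes is the bare 12-byte request plus Ethernet/IP/UDP framing, and 522 bytes appears to be a full 468-octet fragment
plus the same overhead.  Below is a minimal Python sketch of the two query types for comparing reply sizes against a
server you control; the address is a placeholder, and the printed byte counts are UDP payload sizes, i.e. before the
framing overhead discussed above.

    import socket
    import struct

    SERVER = "192.0.2.1"   # placeholder: point this at an NTP server you control
    PORT = 123

    # Bare-bones 48-byte client request: LI=0, VN=4, Mode=3, everything else zeroed.
    client_req = struct.pack("!B47x", (4 << 3) | 3)

    # Minimal 12-byte mode 6 control request: VN=2, Mode=6, opcode 1 (READSTAT),
    # sequence number 1, remaining header fields zero.
    mode6_req = struct.pack("!BBHHHHH", (2 << 3) | 6, 1, 1, 0, 0, 0, 0)

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2.0)
        for name, req in (("client (mode 3)", client_req),
                          ("readstat (mode 6)", mode6_req)):
            sock.sendto(req, (SERVER, PORT))
            try:
                data, _ = sock.recvfrom(65535)
                print("%s: sent %d, received %d bytes of UDP payload"
                      % (name, len(req), len(data)))
            except socket.timeout:
                print("%s: no reply (restricted or filtered)" % name)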


Would I be breaking something important if I started to filter NTP packets
larger than 200 bytes coming into my network?



While filtering NTP packets may be a work-around, for any network with firewall isolation from the general Internet it 
would make more sense to:

1.  Establish an internal peer group of NTP server instances.  As noted, a distributed group of four is the absolute 
minimum; six is more than sufficient.
2.  Default restrict noquery on all internal NTP servers (a configuration sketch follows below).
3.  Use a common list of external NTP servers for all internal servers.
4.  Provide that list of external NTP servers to the firewall engineer to add to a permit ACL (deny all other NTP traffic).
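
For items 1-3, a minimal ntp.conf sketch along these lines - the host names are placeholders rather than
recommendations, and "noquery" refuses mode 6/7 queries (ntpq, ntpdc/monlist) while leaving ordinary time service
untouched:

    # Item 2: serve time, but refuse queries and configuration changes by default.
    restrict default kod nomodify notrap nopeer noquery
    restrict -6 default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1
    restrict -6 ::1

    # Item 1: peer with the other internal NTP servers.
    # (Depending on ntpd version, per-peer restrict lines omitting "nopeer"
    # may also be needed.)
    peer ntp2.internal.example
    peer ntp3.internal.example
    peer ntp4.internal.example

    # Item 3: the same short list of external servers on every internal server;
    # this is also the list handed to the firewall engineer for the permit ACL
    # in item 4.
    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst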

James R. Cutler - james.cutler () consultant com
PGP keys at http://pgp.mit.edu




