nanog mailing list archives

RE: VeriSign's rapid DNS updates in .com/.net


From: Sam Stickland <sam_ml () spacething org>
Date: Thu, 22 Jul 2004 12:03:43 +0100 (BST)


Well, a naive calculation, based on reducing the TTL from 24 hours to 15
minutes to match VeriSign's new update times, would suggest that the number
of queries would increase by (24 * 60) / 15 = 96 times (or twice that if
you factor in the Nyquist interval).
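As a back-of-the-envelope sketch only (the constants are just the figures from the paragraph above; real resolver query rates also depend on client demand and cache hit ratios, so this is an upper-bound style estimate, not a prediction):

```python
# Naive query-rate multiplier when a TTL drops from 24 hours to 15 minutes.
# Assumes each caching resolver re-fetches roughly once per TTL expiry --
# a deliberate simplification for illustration.

OLD_TTL_MINUTES = 24 * 60   # current 24-hour TTL
NEW_TTL_MINUTES = 15        # 15-minute TTL matching the new update interval

multiplier = OLD_TTL_MINUTES / NEW_TTL_MINUTES
print(multiplier)        # 96.0

# Polling at twice the update rate (the Nyquist parenthetical) doubles it:
print(2 * multiplier)    # 192.0
```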

Are there any resources out there that have information on global
DNS statistics? i.e. the average TTL currently in use.

But I guess it remains to be seen whether this will have a knock-on effect
like the one described below. VeriSign are only doing this for the nameserver
records at the present time - it just depends on whether the expectation of
such rapid changes gets pushed on down.

Sam

On Thu, 22 Jul 2004, Ray Plzak wrote:


Good point!  You can reduce TTLs to such a point that the servers will
become preoccupied with doing something other than providing answers.

Ray

-----Original Message-----
From: owner-nanog () merit edu [mailto:owner-nanog () merit edu] On Behalf Of
Daniel Karrenberg
Sent: Thursday, July 22, 2004 3:12 AM
To: Matt Larson
Cc: nanog () merit edu
Subject: Re: VeriSign's rapid DNS updates in .com/.net


Matt, others,

I am quite concerned about these zone update speed improvements
because they are likely to result in considerable pressure to reduce
TTLs **throughout the DNS** for little to no good reason.

It will not be long before the marketeers discover that they do not
deliver what they (implicitly) promise to customers in the case of **changes
and removals**, rather than just additions to a zone.

Reducing TTLs across the board will be the obvious *solution*.

Yet, the DNS architecture is built around effective caching!

Are we sure that the DNS as a whole will remain operational when
(not if) this happens in a significant way?

Can we still mitigate that trend by education of marketeers and users?

Daniel
