nanog mailing list archives

Re: "2M today, 10M with no change in technology"? An informal survey.


From: "David W. Hankins" <David_Hankins () isc org>
Date: Mon, 27 Aug 2007 09:59:43 -0700

On Sat, Aug 25, 2007 at 08:44:45PM -0400, William Herrin wrote:
> NNTP is similar to BGP in that every message must spread to every
> node. Usenet scaled up beyond what anyone thought it could. Sort of.
> Its not exactly fast and enough messages are lost that someone had to
> go invent "par2".

I think the context of (the other) David's question was whether or not
there need to be any changes in technology.

In that context, I don't think NNTP is a good analogy to prove the
point that no changes in technology are necessary.

NNTP achieved its ends in large part due to a protocol update for
'streaming' feeds - the CHECK and TAKETHIS commands to de-synchronize
sender and receiver (supplanting 'IHAVE' and 'SENDME') allowed servers
to fill the socket buffer and make full use of TCP large-window and
selective-ACK.  I do not think I overstate the importance of this
change to call it an 'NNTP rewrite'; it literally reversed NNTP's core
design.  There was at least one company that sold commercial NNTP
software - and provided a catalyst that caused most other software to
reflect upon itself and redesign core processes.  Virtually all
software changed significantly (and there's some debate whether it was
for the better).
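To make the point concrete, here's a toy back-of-the-envelope model (my
numbers, not anything measured) of why de-synchronizing sender and
receiver mattered: lock-step IHAVE pays a round trip per article
offered, while pipelined CHECK/TAKETHIS keeps the socket full and pays
the round trip roughly once for the whole feed.

```python
# Toy model: lock-step IHAVE vs. pipelined (streaming) CHECK/TAKETHIS.
# The RTT and per-article transfer times below are illustrative
# assumptions, not measurements.

def feed_time_ihave(n_articles, rtt, xfer):
    # Lock-step: offer one article, wait a full round trip for the
    # accept/reject response, then transfer it; repeat per article.
    return n_articles * (rtt + xfer)

def feed_time_streaming(n_articles, rtt, xfer):
    # Pipelined: all CHECK offers are in flight at once, so the sender
    # pays the RTT roughly once and is otherwise transfer-limited.
    return rtt + n_articles * xfer

if __name__ == "__main__":
    n, rtt, xfer = 10_000, 0.08, 0.002  # 80 ms RTT, 2 ms/article
    print(feed_time_ihave(n, rtt, xfer))      # 820 seconds
    print(feed_time_streaming(n, rtt, xfer))  # about 20 seconds
```

The point of the sketch: the speedup isn't from faster links, it's from
removing the per-article synchronization - which is exactly what the
streaming commands did.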

But the biggest part of NNTP's survival, I think, was the
behind-the-scenes news mega hubs - expensive machines with a lot of
memory bandwidth, solid state disks, and fat network connections,
taking and giving feeds to anyone who would ask.  Some (most, I think)
were operated at a loss - purely to support the network.

-- 
Ash bugud-gul durbatuluk agh burzum-ishi krimpatul.
-- 
David W. Hankins        "If you don't do it right the first time,
Software Engineer                    you'll just have to do it again."
Internet Systems Consortium, Inc.               -- Jack T. Hankins
