nanog mailing list archives

Re: off-topic: historical query concerning the Internet bubble


From: Haudy Kazemi <kaze0010@umn.edu>
Date: Fri, 13 Aug 2010 18:01:20 -0500

Roland Perry wrote:
> Kenny Sallee writes:
>> So the whole 'myth' of the Internet doubling every 100 days is, to me, something someone (O'Dell, it seems) made up to appease someone higher in the chain, or a government committee that really doesn't get it.

> [Whether it was really 100 days, or 200 days...] a statistic like this has very real operational significance, because it sets expectations in the minds of senior management and investors that the new shiny hardware (or leased line, or peering agreement...) you just put in place isn't going to last "a lifetime", and will need replacing/upgrading really quite soon.

Part of this rapid hardware replacement cycle almost certainly had to do with how quickly CPU capabilities were growing relative to software: new classes of applications and capabilities were opening up just as fast as CPUs would allow. Many network appliances use embedded processors based on the same chips found in desktops or laptops of similar vintage, running custom software, sometimes alongside additional dedicated chips. Hence developments like the FrankenPix and the custom Linksys wireless router firmwares.

Today even a 3(+)-year-old machine can do a fine job running office tasks (given enough RAM), whereas in the late 90s/early 00s a three-year-old PC was unlikely to run the then-current software well. CPUs have now progressed so far ahead of most software that we can consolidate multiple systems into one through virtualization and still get good performance. Tasks formerly given to dedicated chips (RAID, sample-rate conversion, compression) are now commonly done on CPUs and GPUs.

I also recall articles/webpages/blog-precursors discussing how many packets a particular CPU could route per second, often in relation to building custom Linux-based routers and router hybrids (router-bridges, adding QoS, etc.). A rough sketch of that back-of-the-envelope math follows.
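For fun, here's a minimal sketch (Python, with illustrative CPU and link speeds that are my assumptions, not figures from those articles) of the kind of calculation those pages did: the per-packet cycle budget for a software router keeping up with minimum-size Ethernet frames at line rate.

    # Back-of-the-envelope cycle budget for a software router.
    # Worst case is minimum-size (64-byte) Ethernet frames; each one
    # occupies 84 bytes on the wire once the 8-byte preamble/SFD and
    # 12-byte inter-frame gap are counted.

    PREAMBLE_SFD = 8   # bytes: preamble + start-of-frame delimiter
    IFG = 12           # bytes: inter-frame gap
    MIN_FRAME = 64     # bytes: minimum Ethernet frame

    def worst_case_pps(link_bps):
        """Packets per second at line rate with minimum-size frames."""
        bits_per_packet = (MIN_FRAME + PREAMBLE_SFD + IFG) * 8  # 672 bits
        return link_bps / bits_per_packet

    def cycles_per_packet(cpu_hz, link_bps):
        """CPU cycles available per packet while keeping up with the link."""
        return cpu_hz / worst_case_pps(link_bps)

    # e.g. a late-90s 450 MHz box on 100 Mbps Ethernet, and a 1 GHz
    # box on 100 Mbps and 1 Gbps links (hypothetical examples)
    for cpu, link in [(450e6, 100e6), (1e9, 100e6), (1e9, 1e9)]:
        print("%4d MHz CPU, %4d Mbps link: %9.0f pps, %5.0f cycles/packet"
              % (cpu / 1e6, link / 1e6, worst_case_pps(link),
                 cycles_per_packet(cpu, link)))

The point those pages kept making falls out immediately: at minimum packet size the budget is a few thousand cycles per packet at best, which is why packets per second, not bits per second, was the number everyone quoted.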

I feel that many recent changes in information technology have become less revolutionary and more evolutionary, as we look for reasons to build newer/faster/stronger/better equipment. The rise of netbooks as low-CPU/GPU-power machines underlines this evolutionary shift. The next series of revolutionary changes is still waiting in the wings (compact/portable devices and realtime 3D, gaming, scientific, and rendering applications are still pushing the envelope).

> Another meme at the time (at least in the UK) was the idea of "Internet Time", where things happened four times as fast as "real life". So you'd realise that things like a "five year plan" were really only going to last just over a year. And, of course, policy and law related to the Internet gets out of date four times as fast, too.

I know organizations where equipment refresh/purchase cycles have stretched from 3 years in the early 2000s to 5 years now, both because the need for the latest and greatest has slowed and as a response to budget pressures. Replacement periods are now driven less by technological obsolescence than by equipment failures and expiring warranties.


