Firewall Wizards mailing list archives

Re: Counterrant (Was Rant, Was Our friend FTP)


From: Bennett Todd <bet () newritz mordor net>
Date: Mon, 19 Apr 1999 17:01:38 +0000

I don't necessarily disagree with everything you are saying; there's certainly
an appeal to the idea that we can get where we want to go via small steps.
But you wrote:

1999-04-17-01:19:09 Robert Graham:
> Consider the problems coax Ethernet had. The cable was very sensitive to
> faults, leading to frequent LAN-wide outages. Furthermore, it didn't scale
> well beyond 30% utilization, due to all the collisions. These were problems
> that could not be fixed.

Actually, these were problems that did not exist, outside of lies told by
assault salesmen trying to push token ring on the ignorant.

I was there, then. One of my first jobs, we wired a building for ethernet; we
bought a big spool of the fat orange coax, a ladder, and a sledgehammer,
knocked holes through the walls up above the suspended ceiling, and hauled the
coax through. Didn't have the right equipment to terminate that stuff, so we
just used a knife, cut the end in layers, and soldered on some resistors.
Looked pretty disgusting, worked like a champ. We used the vampire
transceivers everyone was selling; as I recall they came from AMP or some such
manufacturer; some of ours were labelled Cabletron, some Black Box. We were
just a little careful installing them and never had a "LAN-wide outage". For
sure, LAN-wide outages were not frequent. I knew a bunch of labs that were using
ether at Duke in the mid-to-late '80s, and just can't substantiate your claims
from my experience.

As for the "30% utilization" caca, I wish I could find the paper, but I know
one was published that debunked that claim --- and again, my experience
definitely disagrees with it: keep your ether up over 75-80% utilization and
yeah, some things will feel slow, but the "knee" at 30% only shows up under
pessimal circumstances that are hard to produce outside of a simulation.
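
If you want to see where the scary numbers come from, here's a toy contention
model --- a minimal sketch, and I mean toy: it assumes pure random access with
no carrier sense and single-slot frames, which is exactly the pessimal setup
I'm talking about. Real CSMA/CD with sensible frame sizes does far better.

    # Toy contention model, NOT real CSMA/CD: no carrier sense, no
    # backoff, single-slot frames. Each of N stations transmits in a
    # slot with probability p; a slot carries useful data only when
    # exactly one station talks. These are the pessimal assumptions
    # behind the "melts at 30%" folklore.
    import random

    def useful_fraction(n_stations, p, slots=100_000):
        good = 0
        for _ in range(slots):
            senders = sum(random.random() < p for _ in range(n_stations))
            good += (senders == 1)
        return good / slots

    for p in (0.005, 0.01, 0.02, 0.05):
        offered = 50 * p  # mean transmit attempts per slot, 50 stations
        print(f"offered {offered:.2f}/slot -> useful {useful_fraction(50, p):.2f}")

Throughput sags in that model precisely because it omits carrier sense; let
stations defer to a busy wire and stretch the frames out, and the "knee"
moves way, way up.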

> IBM came out with a new technology called Token Ring. It wired the network
> in a star so that any cable problem only affected one user rather than all
> users. It used a much saner token passing mechanism rather than collision
> detection/backoff.

Token ring was far less interoperable than ethernet; at any point in time
ethernet was available for a whole lot more different sorts of machines than
token ring was.

Token ring wasn't intrinsically a star; if ethernet was intrinsically a
bus, then token ring was by nature a ring. If you want to fantasize about
theoretical concerns, in token ring idle workstations need to pass the
token on around so someone else can grab it; that's a worrisomely fragile
alternative to CSMA/CD. With suitably homogeneous gear you can make _anything_
work, and since there were so few machines that supported token ring they
kluged it into working.
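
To make the fragility concrete, here's a minimal sketch of the naive ring ---
and I'll stress naive: real 802.5 MAUs had bypass relays and ring-recovery
machinery precisely because of this, but the protocol still leans on every
active station cooperatively forwarding the token.

    # Naive token ring: the token only moves when the station holding
    # it forwards it, so one silent station starves everyone downstream.
    # (Real 802.5 hardware added bypass relays and ring recovery to
    # paper over exactly this failure mode.)
    def circulate(station_up, rounds=3):
        at = 0
        for _ in range(rounds * len(station_up)):
            if not station_up[at]:
                return f"ring dead: station {at} ate the token"
            at = (at + 1) % len(station_up)
        return "token circulated fine"

    print(circulate([True] * 8))                        # all up: fine
    print(circulate([True, True, False] + [True] * 5))  # one down: stalled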

> Which technology won the race? Ethernet, of course.

Somehow I don't even remember a race. Ethernet is the only LAN technology I've
ever heard of that was competitive for interconnecting diverse machines;
various vendor-specific alternatives like token ring and appletalk never
really tried to compete in the multi-vendor arena, and the multi-vendor arena
was the only one I played in:-).

And now that you mention it, you don't see a lot of coax these days. There's
not much CSMA/CD left in a modern switched net wired in a star with cat-5
drops. But computers haven't had to notice.

> The collision-based architecture only had scalability problems in widely
> separated, repeater-interconnected, terminal-based (small frames) networks,
> which would melt at 30% load.

A suitably rigged simulation could claim such problems, but I've never heard
of anyone who succeeded in assembling a real net that could demonstrate them.

> Back in the early 1990s, the prediction was that the Internet was about to
> melt down.

I was on Usenet back shortly after it was born, when it was a star-configured
network with "duke" at the hub. "Death of the net expected; news at 11" was
old then.

Was anybody on this list around when the first generation of ARPAnet was
rolling out? Was the imminent death of the net an old prediction in the early
'70s?

> Routing tables were exploding, IP wasn't secure, mobile computers didn't
> work, yadda yadda.

Growing pains, they drive progress. Incremental progress by and large, as you
point out.

> A comprehensive, complete solution to this problem has already been
> proposed. It is called the OSI/ISO suite of protocols.

Were they a solution to this problem? I always thought they were a solution to
the problem that the internet --- and TCP/IP protocols --- worked great; this
was regarded as a problem by the telephone companies, particularly the European
PTTs, so they did a brilliant job of designing a bunch of official standards
that were guaranteed to not work, then stood back puzzled when the internet
failed to conform to their new standards.

> Anyone who has been involved with the IETF knows the difficulty faced with
> trying to replace any existing protocol. It would take years to get
> consensus, everyone needs to have their opinions heard, and neophytes will
> simultaneously hold very strong opinions about things they don't understand
> and need severe amounts of education.

Depends on what you mean by "replace an existing protocol". Sure, trying to
club people with a mandate from a standards committee won't tend to work too
well; instead, you've got to lure 'em into doing something new and better.
HTTP is a terrific example of how the process can work; provide functionality
that people like and document the protocol you use to implement it, provide
portable source (even if it's poor source) to let people hack about with
implementations, and you can roll out a new protocol pretty darned quick. The
hardest part is providing the bait.

You want to roll out a new protocol suite? I'll give you the formula, for
free.

 - Provide some new functionality. How about web-browsing-style downloads,
   plus efficient anonymous batch-file downloads, plus easy and efficient
   uploads, all combined with a nice elegant security model making it usable
   as a replacement for https, etc. Make it the dream protocol.

 - Now put up a big fat machine (they don't cost much these days) with some
   exciting content. Start by just being the mirror site from hades. Add other
   services.

 - Publish a teaser from a lean, simple web server: a description of what
   content would be available if people hit your site with your protocol,
   plus downloads of clients for popular platforms, source code, and
   documentation. (A sketch of what the wire exchange might look like
   follows below.)

Add frills as fast as you can hack 'em into place. Provide a free "webmail"
service that works great. Provide perl modules to make it really easy to
publish new content.
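
For flavor, here's a minimal sketch of the "document the protocol, ship the
source" step. Everything here is made up for illustration --- the one-line
FETCH wire format, the port, the content --- the point is how little source
you need to publish before people can start hacking on clients.

    # Entirely hypothetical toy protocol: the client sends
    # "FETCH <name>\n" and the server answers with the named content.
    import socket, threading, time

    def serve(port=4242):
        content = {"hello": b"greetings from the mirror site from hades\n"}
        s = socket.socket()
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", port))
        s.listen(1)
        conn, _ = s.accept()
        name = conn.recv(1024).decode().split()[1]   # "FETCH hello" -> "hello"
        conn.sendall(content.get(name, b"no such item\n"))
        conn.close()
        s.close()

    threading.Thread(target=serve, daemon=True).start()
    time.sleep(0.2)  # give the toy server a moment to come up
    c = socket.socket()
    c.connect(("127.0.0.1", 4242))
    c.sendall(b"FETCH hello\n")
    print(c.recv(1024).decode(), end="")
    c.close()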

> The OSI suite failed because the design methodology was to scrap what had
> gone on before and to start from a clean slate, which throws out a huge
> amount of implementation experience with the bathwater.

No, the OSI suite failed because its goal was to prevent people from
communicating with the freedom and reliability they enjoyed with IP, and the
promulgators of the OSI suite couldn't find anyone to club into submission to
force 'em to use OSI. I seem to remember the U.S. Govt tried to mandate it
(GOSIP?), didn't get anywhere. IP was designed to weather anything, and OSI
didn't even make it burp:-).

> My 2-cents is that the IETF (or any other "body") will only be able to solve
> problems with incremental improvements. Any revolutionary change will come
> about by somebody implementing a solution. In particular, you've got a great
> opportunity to drop any change into open Mozilla and Apache. You could also
> choose the stick instead of the carrot: create hacker programs that exploit
> the vulnerabilities and spread them wide on the Internet. Either way, don't
> hope that "they" will solve the problem, work on it yourself. "They" will
> never be competent.

I think we actually agree on the essential point here, I just don't like some
of your examples:-).

-Bennett


