nanog mailing list archives

Re: Is anyone actually USING IP QoS?


From: Danny McPherson <danny@qwest.net>
Date: Tue, 15 Jun 1999 15:12:55 -0600



Just bare logic (a lost art in the modern datacom world, I guess).

logic/insanity, L2/L3, no real difference, right...

Make this Gedankenexperiment: take each mcast packet-replicating point
and replace it with a cache with a very small retention time.  The thing
will replicate data in exactly the same way, and the packets will flow
along the same paths.  Therefore it is _at least_ as efficient.

So you're saying that an application-specific cache would be more efficient 
than non-application-specific multicast?  OK, let's consider this.  How would 
distribution/replication be done?  What was the destination address of the 
original packet .. let's say, 225.X.X.X?  Wow!
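
For what it's worth, the quoted thought experiment amounts to a pull-through
cache with a very short TTL: the first downstream request inside the retention
window triggers exactly one upstream fetch, and every other request inside that
window is served from the local copy, the same one-in, N-out fan-out a
replicating router performs.  A toy sketch in Python (hypothetical names and
numbers, not anything actually built or proposed in this thread):

import time

class CacheNode:
    # Toy model of one replication point replaced by a short-TTL cache.

    def __init__(self, upstream, ttl=0.5):
        self.upstream = upstream       # parent cache, or an origin fetch function
        self.ttl = ttl                 # the "very small retention time", in seconds
        self.store = {}                # key -> (payload, time it was cached)
        self.upstream_fetches = 0      # how often we had to go upstream

    def get(self, key):
        payload, cached_at = self.store.get(key, (None, 0.0))
        if payload is None or time.time() - cached_at > self.ttl:
            # Miss or expired: one upstream fetch per retention window.
            payload = self.upstream(key)
            self.upstream_fetches += 1
            self.store[key] = (payload, time.time())
        return payload

def origin(key):
    return f"content for {key}"

node = CacheNode(origin, ttl=0.5)

# Ten downstream requests arriving within one retention window: the origin
# is hit once, the other nine are served locally; the cache fans the data
# out the way a multicast replication point would.
for _ in range(10):
    node.get("/some/object")
print(node.upstream_fetches)   # -> 1

Note the model only covers pull-style delivery of named objects; a raw datagram
addressed to 225.X.X.X carries no request for the cache lookup to hang off of,
which is exactly the objection above.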

(Note that I do not consider the possibility of building mcast trees dependent
on traffic or bandwidth reservation - the algorithmic complexity involved
makes that an intractable problem (it is believed to be NP-complete in the
general case); even the best heuristic algorithms for such planning place it
beyond the realm of the computable for networks a fraction of the size of the
present Internet).
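
For context, the complexity being invoked here is essentially the Steiner tree
problem: find the cheapest tree connecting a source to an arbitrary subset of
receivers, which is NP-hard even before any traffic or bandwidth constraints
are folded in.  The usual workaround is the metric-closure heuristic: take
shortest-path distances between the group members and build a spanning tree
over those.  A hedged sketch in Python over a made-up four-node graph (purely
illustrative, not any deployed algorithm):

import heapq
from itertools import combinations

# Made-up toy topology: node -> {neighbor: link cost}.  A real network would
# have orders of magnitude more nodes and links.
graph = {
    "a": {"b": 1, "c": 4},
    "b": {"a": 1, "c": 2, "d": 5},
    "c": {"a": 4, "b": 2, "d": 1},
    "d": {"b": 5, "c": 1},
}

def shortest_path_cost(src, dst):
    # Plain Dijkstra: cost of the cheapest path from src to dst.
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

def approx_group_tree_cost(members):
    # Metric-closure heuristic: build the complete graph over the group
    # members weighted by shortest-path cost, then take its minimum
    # spanning tree (Kruskal).  A 2-approximation for the unconstrained
    # Steiner tree; adding per-link bandwidth constraints makes even
    # approximation much harder.
    edges = sorted((shortest_path_cost(u, v), u, v)
                   for u, v in combinations(sorted(members), 2))
    parent = {m: m for m in members}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for cost, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += cost
    return total

print(approx_group_tree_cost({"a", "d"}))        # source plus one receiver -> 4
print(approx_group_tree_cost({"a", "b", "d"}))   # a slightly larger group  -> 4

Even this unconstrained version needs per-group shortest-path and spanning-tree
computations over the whole topology; layering traffic or bandwidth reservation
on top is what pushes it, per the note above, out of practical reach at anything
near Internet scale.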

But we're not discussing that.  We're discussing multicast vs. unicast (or was 
it QoS?), but you're somehow tying the two together when they're completely 
different things.

And are you suggesting that dense-mode (blindly distributing), to coin the 
term, is more efficient than, say, sparse-mode (multicast?) .. not even in your 
bare-logic world.
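
To make the dense/sparse distinction concrete: dense mode pushes a new group's
traffic out every branch and relies on prunes coming back from branches with no
listeners, while sparse mode forwards only along branches that have explicitly
joined.  A toy link count over a made-up tree in Python (illustrative only,
nothing like a real PIM implementation):

# Made-up distribution tree: node -> list of children.  The source is at "core".
tree = {
    "core": ["pop1", "pop2", "pop3"],
    "pop1": ["cust-a", "cust-b"],
    "pop2": ["cust-c"],
    "pop3": ["cust-d", "cust-e"],
}
# Only two leaves actually have group members behind them.
receivers = {"cust-a", "cust-d"}

def wants_traffic(node):
    # A branch is wanted if it leads to at least one receiver.
    if node in receivers:
        return True
    return any(wants_traffic(child) for child in tree.get(node, []))

def links_used_dense(node):
    # Dense mode: the initial flood goes down every branch; prune state
    # only comes back afterwards.
    return sum(1 + links_used_dense(child) for child in tree.get(node, []))

def links_used_sparse(node):
    # Sparse mode: forward only on branches that have joined.
    return sum(1 + links_used_sparse(child)
               for child in tree.get(node, [])
               if wants_traffic(child))

print(links_used_dense("core"))    # 8: every link touched by the first flood
print(links_used_sparse("core"))   # 4: only the joined branches carry traffic

On a real topology with sparse receiver sets, the gap between those two counts
is the whole flood-and-prune objection.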
 
Caching does not employ any routing information exchange.  Therefore
it is a) oblivious to the state of other caches or to changes in
network topology and b) invulnerable to bogus routing information
and flap-like DoS attacks.

a) Yes, it indeed does have different challenges than multicast does.  It still 
relies on the underlying layers, which b) are not oblivious to such DoS attacks 
.. in today's Internet (maybe not yours, but today's), where 2-million-line 
prefix filters are about as real as millions of multicast groups.

99% of Web content is write-once.  It does not need any fancy management.
The remaining 1% can be delivered end-to-end.

I don't recall scoping this to only the "Web", but it does seem to best augment 
your argument, so better stick to it :-)  I'm of the opinion that an 
application-agnostic approach is a better idea.

(BTW, I do consider intelligent cache-synchronization development efforts
seriously misguided; there's a much simpler and much more scalable solution
to the cache performance problem.  If someone wants to invest, I'd like
to talk about it :)

Ahh, I see.  Everything sux, but give me cash (pun intended :-) and I'll give 
you cold fusion.

It is not that caching is a silver bullet; it is rather that multicasting
is unusable at a large scale.

Many-to-many, perhaps it does have its challenges.  One-to-many, it works 
today.
 
Well, philosophical note: science is _all_ about generalizing.  To an inventor
of a perpetuum mobile, the flat refusal of a modern physicist to look into the
details before asserting that it will not work surely looks like oversimplifying.
After all, the details of the actual construction surely are a lot more complex
than the second law of thermodynamics.

Second law of thermodynamics:  OK, Coke loses its fizz, but putting a cap on 
the bottle wasn't that hard.

In this case, I just do not care to go into the details of implementations.
L2/L3 mcasting is not scalable and _cannot be made_ scalable for reasons having
nothing to do with deficiencies of the protocols.

Summary:  It sux, I can't tell you why, but it just sux!

Caching algorithms do not have similar limitations, solely because they do
not rely on distributed computations.  So they have a chance of working.
Of course, nothing "just works".


PS To those who point out that provider ABC already sells mcast service: there's
   an old saying at NASA that with enough thrust even pigs can fly.  However, no
   reactively propelled hog is likely to make it to orbit all on its own.


However, if he sends enough mail to NANOG, he'll likely find someone to fund 
even shoving a rocket up a dog's ass :-)

-danny (who won't waste any additional time in this thread)




