nanog mailing list archives

RE: Is anyone actually USING IP QoS?


From: Jamie Scheinblum <jamie () fast net>
Date: Tue, 15 Jun 1999 17:14:51 -0400


While this thread is slowly drifting, I disagree with your assertion that so
much of the web traffic is cacheable (NLANR's caching effort, if I remember,
only got around a 60% hit rate in the cache, pooled over a large number of
clients.  That is probably close to the real percentage of cacheable content
on the net).  If anything, the net is moving to be *more* dynamic.  The
problem is that web sites are putting unrealistic expires on images and html
files because they're being driven by ad revenues.  I doubt that any of the
US-based commercial websites are interested in losing the entries in their
hit logs.  Caching is also the type of thing that is totally broken by
session-ids (sites like amazon.com and cdnow).
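
To make that concrete, here's a rough sketch (Python, purely illustrative;
the session-id patterns and the decision rules are my assumptions, not any
real proxy's policy) of the kind of check a shared cache ends up doing:

# Rough sketch: why session-ids and aggressive Expires headers defeat caching.
# The patterns and rules below are assumptions for illustration only.
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

SESSION_HINTS = ("session-id", "sessionid", "sid=", "jsessionid")  # assumed patterns

def looks_cacheable(url, headers):
    """Very rough guess at whether a response is worth caching at all."""
    # A session-id in the URL makes every client's copy look unique, so a
    # shared cache can never get a hit on it (the amazon.com / cdnow case).
    if any(hint in url.lower() for hint in SESSION_HINTS):
        return False
    cc = headers.get("Cache-Control", "").lower()
    if "no-store" in cc or "no-cache" in cc or "private" in cc:
        return False
    # An Expires stamp of "0" or a date in the past is the ad-driven trick
    # for forcing every request back to the origin so it shows up in the logs.
    expires = headers.get("Expires")
    if expires in ("0", "-1"):
        return False
    if expires:
        try:
            if parsedate_to_datetime(expires) <= datetime.now(timezone.utc):
                return False
        except (TypeError, ValueError):
            return False        # unparsable Expires: play it safe
    return True

# A banner image stamped already-expired fails this test:
print(looks_cacheable("http://example.com/ads/banner.gif",
                      {"Expires": "Thu, 01 Jan 1970 00:00:00 GMT"}))   # -> False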

The only way caching is going to truly be viable in the next 5 years is
either for a commercial company to step in and work with commercial content
providers (which is happening now), or for webserver software vendors to
work with content companies on truly embracing a hit-reporting protocol.
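
To make the hit-reporting idea concrete, here's one hypothetical shape it
could take; the collector URL and the record format are made up for the
example, nothing like this is standardized:

# Hypothetical sketch of a cache that answers hits locally but still reports
# them back to the content provider, so the origin keeps its hit-log entries.
import json
import urllib.request

def report_hit(collector_url, url, client_ip, timestamp):
    """Best-effort, fire-and-forget hit record to the provider's collector."""
    record = json.dumps({"url": url, "client": client_ip, "time": timestamp}).encode()
    req = urllib.request.Request(collector_url, data=record,
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=2)   # a real cache would batch these
    except OSError:
        pass                                     # never let reporting block a hit

def serve(cache, url, client_ip, timestamp, fetch_from_origin):
    if url in cache:                             # cache hit: no origin fetch...
        report_hit("http://origin.example/hit-report", url, client_ip, timestamp)
        return cache[url]                        # ...but the provider still sees the hit
    body = fetch_from_origin(url)                # miss: go to the origin as usual
    cache[url] = body
    return body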

So basically, my assertion is that L4 caching on any protocol will not work
if the content provider is given any control over TTLs and metrics.  The only
way web caching *really* works is when people get aggressive and ignore the
expire tags, setting policy from a network administrator's point of view
rather than a content company's.  From what I remember, that was the only way
some Australian ISPs were able to make very aggressive caching work for them.
Further, the more you rely on L4 implementations for caching, the more it
seems you would be open to broken implementations... although that is a broad
statement...
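
For what it's worth, "ignore the expire tags" boils down to something like
the sketch below; the TTL numbers are the administrator's own choices
(assumptions here), not anything the origin asked for:

# Sketch of an administrator-driven freshness policy: cached objects are kept
# for a locally chosen TTL no matter what the origin's Expires header said.
# The TTL table is an illustrative assumption, not a recommendation.
ADMIN_TTL_SECONDS = {
    "image": 7 * 24 * 3600,   # keep images a week, whatever the origin claims
    "html":  4 * 3600,        # html pages for a few hours
    "other": 3600,
}

def seconds_until_stale(content_type, age_seconds):
    """How long this object stays 'fresh', ignoring the origin's Expires."""
    if content_type.startswith("image/"):
        kind = "image"
    elif "html" in content_type:
        kind = "html"
    else:
        kind = "other"
    return max(0, ADMIN_TTL_SECONDS[kind] - age_seconds)

# A GIF fetched three days ago is still served from cache for ~4 more days,
# even if the origin stamped it with an already-expired Expires header.
print(seconds_until_stale("image/gif", 3 * 24 * 3600))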

-jamie () networked org

-----Original Message-----
From: Vadim Antonov [SMTP:avg () kotovnik com]
Sent: Tuesday, June 15, 1999 4:23 PM
To:   Brett_Watson () enron net; nanog () merit edu
Subject:      Re: Is anyone actually USING IP QoS?

99% of Web content is write-once.  It does not need any fancy management.
The remaining 1% can be delivered end-to-end.

(BTW, i do consider intelligent cache-synchronization development efforts
seriously misguided; there's a much simpler and much more scalable solution
to the cache performance problem.  If someone wants to invest, i'd like
to talk about it :)

even if i assume caching is as efficient as, or more so than, multicast,
i'm still just trading one set of security/scalability concerns for others.
caching is no more a silver bullet than multicast.

It is not that caching is a silver bullet; it is rather that multicasting
is unusable at a large scale.

i won't deny the potential scalability problems, but i think you're
generalizing/oversimplifying to say caching just works and has no security
or scalability concerns.

Well, philosophical note: science is _all_ about generalizing.  To an
inventor of a perpetuum mobile, the flat refusal of a modern physicist to
look into the details before asserting that it will not work sure looks
like oversimplifying.  After all, the details of the actual construction
sure are a lot more complex than the second law of thermodynamics.

In this case, i just do not care to go into the details of implementations.
L2/L3 multicasting is not scalable and _cannot be made_ scalable, for
reasons having nothing to do with deficiencies of the protocols.

Caching algorithms do not have similar limitations, solely because they do
not rely on distributed computations.  So they have a chance of working.
Of course, nothing "just works".
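
To illustrate the contrast: a cache's decisions need nothing beyond local
state, roughly along the lines of the generic LRU sketch below (illustrative
only, and not the "much simpler solution" alluded to above), whereas
multicast forwarding needs per-group state kept consistent across routers.

# A generic LRU object cache: storing, serving and evicting use only state
# that lives on this one box; no other cache or router has to be consulted.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.objects = OrderedDict()          # url -> body, ordered by recency

    def get(self, url):
        body = self.objects.get(url)
        if body is not None:
            self.objects.move_to_end(url)     # purely local bookkeeping
        return body

    def put(self, url, body):
        if url in self.objects:
            self.used -= len(self.objects.pop(url))
        self.objects[url] = body
        self.used += len(body)
        while self.used > self.capacity:      # evict locally, no coordination
            _, evicted = self.objects.popitem(last=False)
            self.used -= len(evicted)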

--vadim

PS To those who point out that provider ABC already sells mcast service:
   there's an old saying at NASA that with enough thrust even pigs can fly.
   However, no reactively propulsed hog is likely to make it into orbit all
   on its own.


