Interesting People mailing list archives

QoS (author is a Motorola Chief Software Architect)


From: David Farber <dave () farber net>
Date: Mon, 23 Jun 2008 03:53:51 -0700


________________________________________
From: Waclawsky John-A52165 [jgw () motorola com]
Sent: Monday, June 23, 2008 1:08 AM
To: David Farber
Subject: RE: [IP] Re: Net Neutrality: A Radical Form of Non-Discrimination by Hal Singer

Hi Dave, Some QoS perspectives that I have learned. First, the main
problem: QoS really isn't needed when you have big pipes. The Internet
has plenty of capacity, and most applications don't really need huge
amounts of bandwidth to work well. Go to
http://www.networkworld.com/news/2007/021507-dont-expect-video.html and
read the 5th paragraph; it begins with "In the long haul...". So what
is the average utilization of these big pipes? Single digits, and for
lots of good reasons. I used to work on this stuff professionally in
another life; what I remember about it is below, and I think most, if
not all, of it still applies.

The main problem I see is with the term itself: "QoS" is NOT about
Quality and it is NOT about Service. It is about billing! I think the
entertaining satire at
http://ss7.net/ss7-blog/2006/05/16/mr-bandwidth-qos/ that mocks QoS has
figured this out. Looking at it in a billing context is the only view
that makes any sense, IMHO. Technically it doesn't really work! Even
the Internet crowd has figured this out (see
https://www.educause.edu/ir/library/pdf/CSD4577.pdf); they seem to be
saying that if you don't have scarcity, what good is it? It's really
more trouble than it is worth!

You have probably figured out by now that I am not a big fan (or even a
believer) of QoS. I do admit it is great theory (with lots of "theory"
papers that go back decades), but decades of experimentation have
failed to show any practical advantage over just having inexpensive and
reliable capacity. The practical issues include ROI, cost, complexity,
identifying the traffic flows, applying policy, federation across
networks (end to end), etc.

With that said, I can still see simple prioritization mechanisms being
useful at the edge of the network, if they are under the control of the
end user, who is really the final arbiter of what he thinks is
important on his bottleneck access link and knows which traffic flows
"he" wants to give an advantage. Prioritization could also be applied
at specific bottleneck locations if additional capacity is NOT
available. But remember that trying to tie prioritization into a
network-wide scheme called QoS, administering QoS, and the other
trappings of QoS are much more expensive than simple capacity. So the
rule is: prioritization only when there is NO other option, but never
QoS. Of course these prioritization mechanisms can evolve into
attractive nuisances for billing purposes too ...it's a slippery slope.
(A toy sketch of the kind of user-side prioritization I mean follows.)
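To make the edge idea concrete, here is a minimal sketch (my
illustration; the class names and packets are invented) of
user-controlled strict priority on an access link, in Python:

    # Toy sketch of user-side strict priority at an access-link
    # bottleneck. The point is that the *end user* decides which
    # flows get the advantage on his own bottleneck link.
    import heapq

    class EdgePriorityQueue:
        def __init__(self):
            self._heap = []
            self._seq = 0  # tie-breaker keeps FIFO order within a class

        def enqueue(self, packet, priority):
            # Lower number = higher priority (0 = interactive, 1 = bulk).
            heapq.heappush(self._heap, (priority, self._seq, packet))
            self._seq += 1

        def dequeue(self):
            return heapq.heappop(self._heap)[2] if self._heap else None

    q = EdgePriorityQueue()
    q.enqueue("p2p-chunk", 1)   # bulk traffic the user cares less about
    q.enqueue("voip-frame", 0)  # the user marked VoIP as important
    print(q.dequeue())          # -> "voip-frame" goes out first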
Still, the industry shouldn't make end-to-end promises it can't keep
(as I mentioned, there are huge QoS issues in federation, security, and
configuration management across different boxes from different vendors,
with different OSes, different releases, different quality definitions,
different parameter names, and the same parameter names that do
different things depending on the vendor, etc.). In my experience,
people running networks don't want to deal with QoS. Here is a list of
what I experienced/remember...

1) First, let's consider the future that is emerging! How can ANY
technology like QoS, which relies on extensive core network control and
takes an application focus, adapt to overlay techniques found in P2P
networks, or to trends (such as mash-ups) toward dynamically composed
and instantiated concoctions (formerly known as applications) at the
edge of the network? Now consider emerging technologies like traffic
scattering (www.asankya.com) and network coding. How can all the data
streams associated with ideas like mash-ups, traffic scattering, or
network coding be QoS-managed?

2) QoS will encourage end users to use encryption or
packet-obfuscation techniques as the network provider tries to
inspect/control their traffic. QoS simply creates a major incentive to
hide one type of data (e.g., a video stream) as another type of data
(e.g., VoIP). There is also the difficult question of what to do with
data that really can't be identified because of encryption (and number
1 above): do you give all encrypted traffic the lowest priority? If you
don't, then it is likely that everyone will encrypt everything (you are
seeing this with P2P traffic today); if you do, then you've introduced
a communications medium where privacy is systematically discriminated
against.

3) How do network providers find the people to manage QoS? ...and
when/if you do find someone "qualified" to hire, they are too
expensive. Consider technology-employee burden rates of well over 150k
per year per individual, plus the complaints about office space,
parking, lighting, heating, window views, health care, etc. Bandwidth,
with its ever declining prices, looks better and better and better, and
bandwidth never complains :-). Even if you do hire someone with real
practical skills (and finding them is another question), you won't let
them touch the production network (aren't the majority of network
outages caused by people touching the network?). These expensive people
just wind up looking at utilization reports and ordering bandwidth. I
built a tool for IBM to supply information about packet flows useful
for QoS and policy applications, so I got to see first hand how it was
used/applied (or rather, NOT used/applied).

4) Network designs that require any level of reliability have failover
designs/mechanisms, which means everything in the network must run at
reasonably low utilization and be able to take on additional load if
something else fails (or multiple things fail, e.g., a hub box). If you
don't do this, you lose your job when the whole network goes down
because a single link or box failure cascades (I have seen it happen).

5) Network capacity must be provisioned to be sufficient for peak loads
(even if the peaks occur once a year). This statement and the previous
network design statement mean networks are routinely run at very, very
low utilization. (A back-of-the-envelope sketch of how these two
requirements compound is below.)
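A back-of-the-envelope calculation (mine; the peak-to-average ratio is
an assumption) showing how points 4 and 5 compound:

    # With N equal links in a design that must survive one link
    # failure, each link must stay below (N-1)/N of capacity so the
    # survivors can absorb the failed link's load; then divide again
    # for peak headroom.
    def max_safe_average_utilization(n_links, peak_to_average=3.0):
        failover_ceiling = (n_links - 1) / n_links  # survive one failure
        return failover_ceiling / peak_to_average   # and the yearly peak

    print(max_safe_average_utilization(2))  # -> ~0.17: mostly idle by design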

6) IBM did a study a long time ago about boxes running at high
utilization and found that the higher the utilization, the more likely
the box is to fail (it is less reliable at high utilization) because of
problems you don't see at lower utilizations: multiple buffer pools
filling at the same time and confusing the task dispatcher, strange
race conditions in control blocks and data, buffer threads becoming
high-latency paths, etc.

7) What does it mean to run a link/box at 10% or less (typical for an
Internet link)? 90% of the time there is no queue, so at least 90% of
the time you don't need QoS, and in reality the numbers are worse. To
do anything meaningful you need a queue depth of at least three packets
(one in transmission and two waiting), and even this doesn't occur very
often on a high-speed link at less than 10% utilization. (A toy
simulation is below.)
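Here is a toy simulation (mine, and it assumes Poisson arrivals, which,
as I argue at the end, real traffic does not follow; treat it only as
intuition for why queues are rare at low load):

    # How often does a queue of depth >= 3 (one packet transmitting,
    # two waiting) build up on a link offered ~10% load? Slotted model:
    # the link sends one packet per slot, arrivals are Poisson per slot.
    import numpy as np

    rng = np.random.default_rng(1)
    SLOTS, LOAD = 1_000_000, 0.10
    arrivals = rng.poisson(LOAD, SLOTS)
    queue = deep_slots = 0
    for a in arrivals:
        queue += a
        if queue >= 3:
            deep_slots += 1
        if queue:
            queue -= 1  # one packet leaves per slot
    print(f"slots with >= 3 packets queued: {deep_slots / SLOTS:.4%}")
    # A tiny fraction of a percent: almost never anything to reorder.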

8) QoS is in a race with Moore's Law: the link queue can empty faster
than you can run instructions, check policy databases, etc., to make
QoS decisions. By the time you have made a QoS/policy decision, the
link is empty or the packets being considered have left. Or, worse yet,
another set of packets is there in the queue and you need to re-run the
QoS machinery again. This gives you high database utilization and high
QoS and policy usage numbers, and you might actually think you are
really helping the network with QoS when all you are really doing is
SPINNING on QoS, policy, database dips, etc.: busy but NOT productive.
(Some illustrative arithmetic is below.)
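Some illustrative arithmetic on the race (my numbers; the lookup cost
in particular is an assumption):

    # Compare how long a full-size packet occupies a fast link with
    # how long one policy decision takes. All figures are assumptions.
    PACKET_BITS = 1500 * 8        # full-size Ethernet frame
    LINK_BPS = 10e9               # 10 Gb/s link
    LOOKUP_S = 50e-6              # assume 50 us per policy-database dip

    serialization = PACKET_BITS / LINK_BPS  # ~1.2 microseconds on the wire
    print(f"packet serialization: {serialization * 1e6:.1f} us")
    print(f"packets drained per lookup: {LOOKUP_S / serialization:.0f}")
    # ~40 packets can leave while one QoS decision is still in flight.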

9) Most of the time the SERVERS ARE SLOW, NOT the network. This makes
the problem a task-dispatching/resource one at the servers, assuming
they have multiple types of tasks; if not, it is a simple server
capacity-planning problem. Is QoS tied back from the end user through
the network, or rather multiple networks (called the Internet), all the
way back to the server and even the database administrator? Of course
not, so what real QoS guarantee can you offer? I can't see how QoS is
enforceable across a collection of heterogeneous networks. It seems to
be embraced by the Telcos as one possible way to manage (or possibly
stop investing in) capacity; it basically encourages artificial
scarcity, so a need for QoS is artificially created, and thus it is all
about billing.

10) All code has bugs per KLOC. With QoS code you are adding bugs and
unreliability to your network. (A crude illustration is below.)
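A crude illustration (the defect densities are commonly cited ballpark
figures, not measurements of any product, and the code size is an
assumption):

    # If a vendor's QoS/policy feature set adds on the order of
    # 100 KLOC, even optimistic delivered-defect densities imply
    # real new bugs in the network.
    QOS_KLOC = 100               # assumed size of the added QoS code
    DEFECTS_PER_KLOC = (1, 25)   # often-quoted low/high ballpark
    print([QOS_KLOC * d for d in DEFECTS_PER_KLOC])  # -> [100, 2500]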

11) In general, QoS adds complexity and all the problems that come with
complexity; this is a VERY big slam against QoS ...re-read all of the
above :-)

12) I believe QoS has a negative ROI. (It's funny that you can never
find any numbers showing real, tangible benefit; don't you think that
if it were so wonderful, or even mildly useful, someone "with QoS to
sell" would put out some numbers, even questionable marketing ones?
;-) ) But when you think about the marketing/savings angle: if you
really wanted to save money, you might be tempted to ignore QoS and all
its trappings and just manage by the bandwidth. (Of course you need to
do capacity planning and traffic measurements, but these can be done
simply and cheaply. I have done them with clients using simple
utilization reports with thresholds; not elegant, but it worked every
time. A sketch follows.)
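For flavor, here is a minimal sketch (link names, numbers, and the
threshold are all invented) of the kind of dumb-but-effective report I
mean:

    # Flag links whose busy-hour utilization crosses a threshold and
    # order capacity for them; everything else needs no attention.
    UPGRADE_THRESHOLD = 0.50  # assumed: buy bandwidth well before saturation

    busy_hour_utilization = {  # made-up sample data (e.g., from SNMP polling)
        "core-1 <-> core-2": 0.07,
        "pop-3 uplink": 0.62,
        "customer-agg-9": 0.48,
    }

    for link, util in sorted(busy_hour_utilization.items(),
                             key=lambda kv: -kv[1]):
        action = "ORDER CAPACITY" if util >= UPGRADE_THRESHOLD else "ok"
        print(f"{link:20s} {util:5.0%}  {action}")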

13) No one can really understand or predict what QoS will do in a
production network (sure, you can look at individual specifics, but not
at the entire network), etc. You should get my drift by now... In my
experience QoS is really meaningless/bogus stuff.

14) And finally, who are you going to call when it is broken, or even
to try to understand IF it is broken? Problem determination is a
challenge, to say the least...

I view QoS like folklore and Bigfoot/the Chimera. People talk about it,
but it really doesn't exist in practice. Consider one trapping of QoS:
queuing theory. Many QoS advocates use complex queuing-theory math as
one way to make their case for QoS through mathematical proofs. I
turned negative after studying queuing theory and attempting to apply
it to real networks. I could never, ever find any Poissons or
exponentials (in traffic patterns, in work quanta per packet, or
otherwise) in any network data, and I looked at thousands of network
traces. Network traffic is all deterministic, and the self-similar
nature of network traffic is at odds with any queuing-theory approach.
(A toy comparison is below.) My 2 cents.
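One quick way to see the mismatch (a toy comparison of my own):
exponential interarrivals, which queuing theory assumes, have a thin
tail, while self-similar traffic is usually modeled with heavy tails.
At the same mean, the heavy-tailed process produces far more extreme
gaps and bursts:

    # Compare tail weight of exponential vs. Pareto interarrival
    # samples with identical means. The Pareto shape (1.5) is an
    # assumption chosen to give a heavy (infinite-variance) tail.
    import numpy as np

    rng = np.random.default_rng(0)
    n, mean, alpha = 1_000_000, 1.0, 1.5
    expo = rng.exponential(mean, n)
    # numpy's pareto() is Lomax; (x + 1) gives classical Pareto with
    # minimum 1, then rescale so the sample mean matches `mean`.
    pareto = (rng.pareto(alpha, n) + 1) * mean * (alpha - 1) / alpha

    for name, x in (("exponential", expo), ("pareto", pareto)):
        print(f"{name:12s} P(gap > 10x mean) = {np.mean(x > 10 * mean):.5f}")
    # exponential ~0.00005 vs. pareto ~0.006: orders of magnitude apart.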

Best Regards
John


-----Original Message-----
From: David Farber [mailto:dave () farber net]
Sent: Sunday, June 22, 2008 11:39 PM
To: ip
Subject: [IP] Re: Net Neutrality: A Radical Form of Non-Discrimination
by Hal Singer


________________________________________
From: Valdis.Kletnieks () vt edu [Valdis.Kletnieks () vt edu]
Sent: Sunday, June 22, 2008 11:23 PM
To: David Farber
Cc: ip
Subject: Re: [IP] Net Neutrality: A Radical Form of Non-Discrimination
by Hal Singer

On Sun, 22 Jun 2008 09:06:43 EDT, David Farber said:

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1001480

"Net neutrality represents the prohibition of any contracting for
enhanced service or guaranteed quality of service (QoS) between a
broadband service provider and an Internet content provider. Such a
prohibition would unwind existing contracts for QoS between broadband
service providers and content providers. The anticompetitive harms
that would be allegedly spared from such a prohibition pale in
comparison to the efficiencies made possible by such contracting."

"efficiencies". Yeah, right.

There are exactly *3* cases to deal with:

1) There's enough bandwidth available end-to-end. QoS is totally
meaningless in this case, and does nothing.

2) There's a bottleneck, and some traffic has been flagged as "this
data gets preferential treatment". If QoS takes effect, then some
*other* traffic will by necessity be pushed to the rear of the queue or
dropped entirely. This is what most providers call QoS. The problem is
that very rarely does the dropped traffic belong to the same customer
that asked for the QoS. In other words, if *I* flag a data stream as
"preferred" because it's VoIP or something, and the provider drops some
other traffic of *mine*, that's not a big problem. The "network
neutrality" problem is when Content Provider X flags something with
QoS, and in the process of providing that traffic to some other
customer of my provider, *my* traffic gets dropped.

3) There's a bottleneck, and some traffic has been flagged as
"bandwidth scavenger/can be dropped". When QoS kicks in, that is of
course the first data to get heaved over the side. This would be nice
if it happened, but as far as I can tell, it's basically a mythical
beast that's rarely if ever actually sighted in the wild.

The problem is that the "efficiencies" (mostly not needing as much of
an upstream pipe, because you know which data has requested dropping)
happen in the third case, but most providers try very hard to conflate
that case with the second one, which can provide a revenue stream for
them... (A toy illustration of case 2 follows.)
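A toy illustration of why case 2 is the net-neutrality problem (a
sketch, not part of the original message; owners and capacity are
invented): at the bottleneck, flagged packets go first, so the drops
usually land on someone else's traffic:

    # Packets are (owner, preferred) pairs; the bottleneck can forward
    # `capacity` of them, preferred first, and drops the rest.
    def drain_bottleneck(packets, capacity):
        ordered = sorted(packets, key=lambda p: not p[1])  # preferred first
        return ordered[:capacity], ordered[capacity:]

    queue = [("content-provider-X", True)] * 6 + [("me", False)] * 6
    sent, dropped = drain_bottleneck(queue, capacity=8)
    print("dropped:", dropped)  # all four drops hit *my* unflagged traffic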





-------------------------------------------
Archives: http://www.listbox.com/member/archive/247/=now
RSS Feed: http://www.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com


