
RE: The Qos PipeDream [Was: RE: Two Tiered Internet]


From: "Christopher L. Morrow" <christopher.morrow () mci com>
Date: Fri, 16 Dec 2005 19:02:23 +0000 (GMT)



On Fri, 16 Dec 2005, Min Qiu wrote:

Hi Chris,


hey :)


-----Original Message-----
From: owner-nanog () merit edu on behalf of Christopher L. Morrow
Sent: Thu 12/15/2005 10:29 PM
To: John Kristoff
Cc: nanog () merit edu
Subject: Re: The Qos PipeDream [Was: RE: Two Tiered Internet]

snip...

Speaking to MCI's offering on the public network, it's (not sold much) just
qos on the end link to the customer... It's supposed to help VoIP or other
jitter-prone things behave 'better'. I'm not sure that we do much in the
way of qos towards the customer aside from respecting the bits on the
packets that arrive (no remarking, as I recall). So, what does this get you
aside from 'feeling better'?
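('respecting the bits' here means honoring the DSCP value the customer
already set in the IP header -- DSCP is just the top six bits of the old
TOS byte, per RFC 2474. A quick illustration in Python, assuming nothing
fancier than the bit layout:)

    # DSCP is the high 6 bits of the old IPv4 TOS byte (RFC 2474);
    # the low 2 bits are ECN. Illustrative only.
    def dscp_from_tos(tos: int) -> int:
        return (tos >> 2) & 0x3F

    # EF (Expedited Forwarding, the usual VoIP marking) is DSCP 46,
    # which rides in the TOS byte as 0xb8:
    assert dscp_from_tos(0xB8) == 46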

Not 100% true.  Though I agree QoS has little impact in a core
with non-congested OCxx links (more comments below), at the
edge it does have its place, as Stephen Sprunk and Mikael Abrahamsson
explained/described.  I recall we were busy at one time trying to find
out why one of our _most_ important T1 customers had poor VoIP performance.
It turned out his T1 was peaked (running full) during those periods.

yup, for t1 customers (or dsl or dial) qos matters only if your link is
full when you want to do something with stringent delay/jitter/loss
requirements (voip).  Possibly a better solution for both parties in the
above case would have been MLFR ... possibly. (someone would have to run
the numbers; I'm not sure how much the 'qos' service costs in real $$, not
sales marked-down-for-fire-sale $$)
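(Back-of-envelope on why a full T1 hurts VoIP: it's mostly serialization
delay behind large packets. A rough sketch in Python, assuming the
standard 1.536 Mbps T1 payload rate:)

    # Time to clock one packet onto the wire at a given link rate.
    def serialization_ms(pkt_bytes: int, link_bps: float) -> float:
        return pkt_bytes * 8 / link_bps * 1000.0

    T1  = 1_536_000     # T1 payload rate, bps
    DS3 = 44_736_000    # DS3 line rate, bps

    print(serialization_ms(1500, T1))    # ~7.8 ms -- one big packet ahead
                                         # of a voice frame eats most of a
                                         # typical jitter budget
    print(serialization_ms(1500, DS3))   # ~0.27 ms -- a non-issue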



snip...

most large networks (as was said a few times, I think) don't really need it
in their cores. I think I've seen a nice presentation regarding the
queuing delay induced on 'large pipe' networks, basically showing that qos
is pointless if your links are DS3 or faster and not 100% full. Someone
might have a pointer handy for that?
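(I don't have the pointer, but the argument usually runs like this toy
M/M/1 model: mean queueing delay is the packet service time scaled by
rho/(1-rho), so on fat pipes it stays negligible until utilization is very
close to 1. A sketch in Python, with the Poisson-arrival caveat raised
just below:)

    # Toy M/M/1 model: mean waiting time W_q = (rho/(1-rho)) * T_s,
    # with T_s the mean packet service (serialization) time. Real
    # traffic is burstier than Poisson, so treat these as lower bounds.
    def mm1_wait_ms(link_bps: float, rho: float, pkt_bytes: int = 500) -> float:
        t_s = pkt_bytes * 8 / link_bps           # mean service time, seconds
        return rho / (1 - rho) * t_s * 1000.0    # mean queueing delay, ms

    for name, bps in [("T1", 1_536_000), ("DS3", 44_736_000),
                      ("OC48", 2_488_320_000)]:
        print(name, [round(mm1_wait_ms(bps, r), 3) for r in (0.5, 0.8, 0.95)])
    # T1 climbs into tens of ms by 95% utilization; DS3 stays ~2 ms;
    # OC48 never leaves the noise floor.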

There is a little problem here.  Most of the studies assume packet arrival
rates governed by a Poisson process.  Those data were collected from
application sessions --> normal distribution as the number of sessions -->
infinity.  However, this can only apply to the core, especially a two-tiered
core, where packet arrival rates are smoothed/aggregated.  I did experience
long delays on a DS3 backbone when utilization reached 75%~80%.  Packets
would drop like crazy when link util reached ~90% (not 100% tied to
queueing, I guessed).  That said,

i think this is where WRED is used... avoid the sawtooth effect of tcp
sessions: randomly drop some packets and force random flows to back off and
behave. I think I recall WRED allowing (with a significant number of flows)
usage to reach 95+% or so smoothly on a ds3... (though that is from some
cisco marketing slides)
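(Sketch of the WRED idea, since it keeps coming up: below a minimum
average queue depth drop nothing, above a maximum drop everything, and
ramp the drop probability linearly in between; the 'weighted' part is
just separate thresholds per precedence/DSCP class. Thresholds below are
made up, not anyone's shipping defaults:)

    import random

    # Bare-bones WRED drop decision on the *average* queue depth
    # (packets). Parameters are illustrative only.
    def wred_drop(avg_qdepth: float, min_th: float = 30,
                  max_th: float = 90, max_p: float = 0.1) -> bool:
        if avg_qdepth < min_th:
            return False                 # shallow queue: never drop
        if avg_qdepth >= max_th:
            return True                  # deep queue: tail-drop region
        # Ramp drop probability linearly from 0 to max_p between the
        # thresholds, so flows back off at random times instead of all
        # at once (the synchronized-sawtooth problem).
        p = max_p * (avg_qdepth - min_th) / (max_th - min_th)
        return random.random() < p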

it only moves the threshold in the core from DS3 to OC12 or OC48 (see Ferit
and Erik's paper "Network Characterization Using Constraint-Based Definitions
of Capacity, Utilization, and Efficiency",
http://www.comsoc.org/ci1/Public/2005/sep/current.html -- I don't have
access).  I'm not sure the study can be applied to the customer access edge,
where traffic tends to be bursty and the link capacity is smaller in
general.

Maybe part of the discussion problem here is the overbroad use of 'QOS in
the network!'? Perhaps saying, as I think people have, that QOS applied to
select-speed edge interfaces is reasonable. I'd bet it still depends on the
cost to the operator and the increased cost to the end-user. It may be
cheaper to get a second T1 than it is to do QOS, and more effective.

Alternatively, customers could use other methods aside from QOS to do the
shaping, assuming 'QOS' is defined as tos-bit setting and DSCP-like
functions, not rate-shaping on protocol or port or source/dest pairs.
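(For 'other methods', think a token-bucket shaper on the customer's own
gear, keyed on port/protocol/address pairs; no upstream marking needed.
A minimal sketch in Python, purely illustrative:)

    import time

    # Minimal token-bucket shaper: admit a packet only if enough tokens
    # have accumulated; classify per port/protocol/src-dst upstream of
    # this check. No DSCP marking involved.
    class TokenBucket:
        def __init__(self, rate_bps: float, burst_bytes: int) -> None:
            self.rate = rate_bps / 8.0            # fill rate, bytes/sec
            self.capacity = float(burst_bytes)
            self.tokens = float(burst_bytes)
            self.last = time.monotonic()

        def allow(self, pkt_bytes: int) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= pkt_bytes:
                self.tokens -= pkt_bytes
                return True
            return False                          # over rate: drop or queue

    # e.g. hold bulk traffic to 512 kbps with an 8 kB burst allowance:
    bulk = TokenBucket(512_000, 8192)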

