nanog mailing list archives

Re: CAR


From: "Ken Yeo" <kenyeo () on-linecorp com>
Date: Thu, 18 Apr 2002 17:09:32 -0500


Hi Mathew,

Thanks for the explanation. Based on your email, if CAR is applied at
router B to rate limit inbound and outbound traffic to 128kbps but the
realtime video takes 512kbps, the traffic will traverse from
upstream-->backbone and CAR will drop packets at 384kbps in router B. At
that point, I guess the RTSP or H.323 application will error out and stop
requesting traffic? So in order to use CAR to rate limit traffic to
customers, we need to apply CAR at both the ingress and egress routers.
How is everyone else doing that?
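
Just to check my own arithmetic, a rough sketch (illustrative Python,
not actual CAR/IOS configuration; the numbers are the ones from the
example above):

# Illustrative only -- a policer admits traffic up to the configured
# rate and drops the excess.
stream_rate_kbps = 512   # real-time video offered load
car_limit_kbps = 128     # CAR rate limit applied at router B

admitted_kbps = min(stream_rate_kbps, car_limit_kbps)
dropped_kbps = stream_rate_kbps - admitted_kbps

print(f"admitted {admitted_kbps} kbps, dropped {dropped_kbps} kbps")
# -> admitted 128 kbps, dropped 384 kbps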


Suan "Ken" Yeo
Network Engineer
Aurum Technology
ken.yeo () aurumtechnology com
----- Original Message -----
From: "Mathew Lodge" <mathew () cplane com>
To: "Ken Yeo" <kenyeo () on-linecorp com>; <nanog () merit edu>
Sent: Thursday, April 18, 2002 12:23 PM
Subject: Re: CAR


At 11:20 AM 4/18/2002 -0500, Ken Yeo wrote:
-For UDP based audio/video traffic, if the applications use RTSP and
H.323, RTCP/H.245 will signal the sender to slow down the transmission
if the receiver loses packets.

No, that won't happen. Much real-time voice and video traffic is constant
bit-rate, though there are CODECs that offer variable bit rates within an
envelope of min/max bit rates. When packet loss is encountered, the
equipment will usually try error concealment to deal with the lost data --
how sophisticated this is depends on the equipment. Re-sending the lost
data is pointless because it will arrive too late.
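
As a toy illustration of the simplest concealment strategy -- replay the
last good frame when a packet is missing rather than waiting for a
resend -- here's a hypothetical Python sketch (not any particular
vendor's implementation):

# Packets are (sequence_number, frame) pairs, possibly with gaps.
def play_out(packets):
    expected = packets[0][0]
    last_frame = None
    output = []
    for seq, frame in packets:
        # Conceal any gap by repeating the previous frame.
        while expected < seq and last_frame is not None:
            output.append(last_frame)
            expected += 1
        output.append(frame)
        last_frame = frame
        expected = seq + 1
    return output

print(play_out([(1, "A"), (2, "B"), (4, "D")]))  # -> ['A', 'B', 'B', 'D']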

More importantly, the sender will not slow down its transmission. It can't
-- at any instant, there's a fixed amount of bandwidth required to carry
the real-time voice and video, and there's no way to magically use less
bandwidth. Buffering options are limited because the stream is real time
and end-to-end latency must be bounded. And if the required bandwidth of
the stream is greater than the available bandwidth over a long period of
time (long in this case means seconds), no amount of buffering can help
you. You need to abandon real-time and switch to a store-and-forward
paradigm.
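
To put rough numbers on that (illustrative only, re-using the 512 vs
128 kbps example from earlier in the thread):

stream_kbps = 512   # bandwidth the real-time stream requires
link_kbps = 128     # bandwidth the rate-limited path delivers

for seconds in (1, 5, 10):
    deficit_kbits = (stream_kbps - link_kbps) * seconds
    print(f"after {seconds}s the receiver is {deficit_kbits} kbits behind")
# after 1s the receiver is 384 kbits behind
# after 5s the receiver is 1920 kbits behind
# after 10s the receiver is 3840 kbits behind
# The shortfall grows without bound, so no finite playout buffer (or
# acceptable latency bound) can absorb it.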

For example, a G.729a encoded real-time voice call requires 8Kbit/sec
constant bit rate (excluding IP and RTP packet headers). If it can't get
8Kbit/sec, the user gets crappy voice quality.
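
To see why the header exclusion matters, here's a worked example in
Python. It assumes a common packetization of 20 ms of audio per packet
and 40 bytes of IPv4+UDP+RTP headers -- typical values, not figures from
this thread:

codec_kbps = 8                # G.729a payload rate
frame_interval_s = 0.020      # 20 ms of audio per packet (assumed)
payload_bytes = codec_kbps * 1000 * frame_interval_s / 8   # 20 bytes
header_bytes = 20 + 8 + 12    # IPv4 + UDP + RTP
packets_per_s = 1 / frame_interval_s                       # 50 pps

wire_kbps = (payload_bytes + header_bytes) * 8 * packets_per_s / 1000
print(f"{wire_kbps:.0f} kbit/s on the wire for {codec_kbps} kbit/s of payload")
# -> 24 kbit/s on the wire for 8 kbit/s of payload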

Some streaming systems (non real-time -- the content is stored for
on-demand viewing) encode the video/audio at several different data rates
and try to guess the available bandwidth for a customer's connection. This
process typically happens at the start of the streaming session. However,
automatically switching to a lower rate later is hard to do, so it's rare
to see it implemented. But none of this can be done for real-time voice or
video.
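
A sketch of that "guess the bandwidth, pick an encoding" step (the rate
ladder and headroom factor below are invented for illustration):

RATE_LADDER_KBPS = [56, 128, 300, 700]   # pre-encoded versions of the content

def pick_encoding(estimated_kbps, headroom=0.8):
    # Choose the highest encoding that fits comfortably in the estimate.
    fits = [r for r in RATE_LADDER_KBPS if r <= estimated_kbps * headroom]
    return fits[-1] if fits else RATE_LADDER_KBPS[0]

# e.g. a probe at session start measures ~400 kbps of available bandwidth:
print(pick_encoding(400))   # -> 300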

Did I miss anything? How about UDP traffic that is not using
RTSP/H.323?

H.323 is all about call signalling. It doesn't have anything to do with
congestion management. Also, I think you mean H.323 (optionally including
H.245) paired up with RTP -- this is the usual combination for VoIP calls.
RTSP is typically used for streamed content.

Other real-time UDP protocols (protocols used by games such as Half Life,
for example) typically use the same kind of concepts and techniques that
are in RTSP to detect packet loss so that they can conceal it or react in
some other way. But if they're carrying a real-time service that has
minimum bandwidth requirements, then there's no way for the sender to slow
down.
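
A minimal sketch of that loss-detection idea -- number every datagram
and treat a gap in sequence numbers as loss, so the receiver can conceal
it or react some other way (purely illustrative, not any particular
game's protocol):

def detect_losses(received_seqs):
    # Return the sequence numbers that never arrived.
    lost = []
    expected = received_seqs[0]
    for seq in received_seqs:
        while expected < seq:
            lost.append(expected)
            expected += 1
        expected = seq + 1
    return lost

print(detect_losses([10, 11, 14, 15]))   # -> [12, 13]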

Cheers,

Mathew



| Mathew Lodge                 | mathew () cplane com     |
| Director, Product Management | Ph: +1 408 789 4068   |
| CPLANE, Inc.                 | http://www.cplane.com |

