nanog mailing list archives

Re: SLA for voice and video over IP/MPLS


From: Anton Kapela <tkapela () gmail com>
Date: Mon, 28 Feb 2011 20:08:58 -0600

On Mon, Feb 28, 2011 at 5:23 PM, William Herrin <bill () herrin us> wrote:

> Who uses BER to measure packet switched networks?

I do; some 'packet' test gear can; bitstream-oriented software often will; etc.

> Is it even possible to measure a bit error rate on a multihop network
> where a corrupted packet will either be discarded in its entirety or
> transparently resent?

Absolutely -- folks can use BER in the context of packet networks,
given that many bit-oriented applications are packetized. Once
processed by a bit-, byte-, or other message-level interleaving
mechanism and encoded (or expanded with CRC and the FEC-du-jour), BER
is arguably more applicable. These packetized bitstreams, when
subjected to variable and sundry packet-loss processes, may present
only a few bits of residual error to the application. I would argue
that in this way, BER and PER are flexible terms given (the OP's A/V)
context.

For example, if we have 1 bit lost in 1,000,000, that'd be ~1 packet
lost in every ~83 packets received, for an IP packet of 1500 bytes
(12,000 bits). More importantly, this assumes we're able to *detect* a
single bit error (e.g., CRC detection isn't absolute; it's
probabilistic). Such error expansion due to packetization has the
effect of making a 10^-6 BER appear as if we lost the nearest 11,999
bits as well. However, not all networks check L2 CRCs, and some are
designed to explicitly ignore them -- an advantage given
application-level data encoding schemes.
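The arithmetic above can be sketched quickly; this is my own
illustration (function names and the independent-bit-error assumption
are mine, not from the thread -- real links tend to burst):

```python
def per_from_ber(ber: float, packet_bits: int = 1500 * 8) -> float:
    """Probability that a packet of `packet_bits` bits contains at
    least one errored bit, assuming independent bit errors."""
    return 1.0 - (1.0 - ber) ** packet_bits

# 1500-byte IP packet = 12,000 bits; BER of 10^-6 expands to a
# packet error rate of roughly 1.2%, i.e. ~1 packet in ~83-84.
per = per_from_ber(1e-6)
print(per, 1.0 / per)
```

The linear shortcut in the text (10^6 bits / 12,000 bits per packet)
gives the same ballpark figure.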

It follows that if 1 in ~83 packets becomes corrupted, regardless of
whether a CRC system detects and drops it, then we have a link no
*better* than 10^-6. If the CRC system detected an error, it's
possible that more than 1 bit was corrupted. This implies that we
can't know precisely how much *worse* than 10^-6 the link is, since
we're limited to a resolution of +/- 12,000 bits at a time.
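Put another way, each dropped packet only tells us "at least one of
its bits was bad," so observed loss yields a *lower* bound on the
effective BER. A minimal sketch of that bound (my naming, not from
the thread):

```python
def ber_lower_bound(lost_packets: int, total_packets: int,
                    packet_bits: int = 1500 * 8) -> float:
    """Best-case effective BER: assume exactly one errored bit per
    lost packet. The true BER can only be this or worse."""
    return lost_packets / (total_packets * packet_bits)

# 1 loss per 83 packets of 12,000 bits -> about 10^-6,
# the "no better than" figure above.
print(ber_lower_bound(1, 83))
```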

-Tk
