
Re: Calculating Jitter


From: Fred Baker <fred@cisco.com>
Date: Fri, 10 Jun 2005 09:56:38 -0700


You saw Marshall's comment. If you're interested in a moving average, he's pretty close.

If I understood your question, though, you simply wanted to quantify the jitter in a set of samples. I should think there are two obvious definitions there.

A statistician would look, I should think, at the variance of the set. Reaching for my CRC book of standard math formulae and tables, I find that it defines the variance as the square of the standard deviation of the set, which is to say

        sum of ((x(i) - xmean)^2)
        -------------------------
                 n - 1

where the n values x(i) are the members of the set, xmean is the mean of those values, and n is the number of x(i).

A sample set with a larger standard deviation or variance than another contains more jitter.
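
In code, that first definition is just the textbook sample variance. Here is a minimal sketch in Python, assuming the delay samples are already in hand as a list of one-way delays in milliseconds (the names and numbers are illustrative):

    import math

    def jitter_variance(samples):
        # Sample variance: sum of squared deviations from the mean, over n - 1.
        n = len(samples)
        mean = sum(samples) / n
        return sum((x - mean) ** 2 for x in samples) / (n - 1)

    delays = [20.1, 21.4, 19.8, 25.0, 20.6]   # hypothetical delay samples, ms
    var = jitter_variance(delays)
    print(var, math.sqrt(var))                # variance and standard deviation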

In this context, the other thought that comes to mind is the variation from nominal. If the speed-of-light delay between here and there is M, the jitter might be defined as the root-mean-square difference from M, which would be something like

        sum of ((x(i) - xmin)^2)
        ------------------------
                 n - 1

with the same variables, except that xmin, the least value in the set, stands in for M.
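
Again as a minimal sketch, under the same assumptions as above, with xmin standing in for M:

    import math

    def jitter_from_min(samples):
        # Mean squared deviation from the smallest sample, as in the formula
        # above; taking the square root gives the root-mean-square figure.
        n = len(samples)
        xmin = min(samples)
        return sum((x - xmin) ** 2 for x in samples) / (n - 1)

    delays = [20.1, 21.4, 19.8, 25.0, 20.6]   # hypothetical delay samples, ms
    print(math.sqrt(jitter_from_min(delays)))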

