Round-Off Error Variance
This appendix shows how to derive that the noise power of amplitude
quantization error is $q^2/12$, where $q$ is the quantization step
size.
Each round-off error in quantization noise $e(n)$ is modeled as a
uniform random variable between $-q/2$ and $q/2$. It therefore
has the probability density function (pdf)
$$
p_e(x) = \begin{cases}
\dfrac{1}{q}, & -\dfrac{q}{2} \le x \le \dfrac{q}{2} \\
0, & \text{otherwise}.
\end{cases}
$$
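As a quick sanity check of this uniform-error model, the following sketch (the `quantize` helper and parameter values are illustrative assumptions, not code from the text) rounds random samples to a step size $q$ and confirms that every resulting error lies in $[-q/2, q/2]$:

```python
import random

# A hypothetical rounding quantizer with step size q (illustrative;
# this helper is not defined in the text).
def quantize(x, q):
    """Round x to the nearest multiple of q."""
    return q * round(x / q)

rng = random.Random(0)
q = 0.25
samples = [rng.uniform(-1.0, 1.0) for _ in range(10_000)]
errors = [quantize(x, q) - x for x in samples]

# Rounding can never miss by more than half a step.
max_abs_error = max(abs(e) for e in errors)
```

The errors also scatter on both sides of zero, consistent with the symmetric pdf above.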
Thus, the probability that a given round-off error $e$ lies in the
interval $[x_1, x_2]$ is given by
$$
P(x_1 \le e \le x_2) = \int_{x_1}^{x_2} p_e(x)\,dx = \frac{x_2 - x_1}{q},
$$
assuming of course that $x_1$ and $x_2$ lie in the allowed range
$[-q/2, q/2]$. We might loosely refer to $p_e(x)$ as a probability
distribution, but technically it is a probability density function,
and to obtain probabilities, we have to integrate over one or more
intervals, as above. We use probability distributions for variables
which take on discrete values (such as dice), and we use probability
densities for variables which take on continuous values (such
as round-off errors).
The mean of a random variable $e$ is defined as
$$
\mu_e \triangleq \int_{-\infty}^{\infty} x\, p_e(x)\, dx.
$$
In our case, the mean is zero because we are assuming the use of
rounding (as opposed to truncation, etc.).
The mean of a signal $e(n)$ is the same thing as the
expected value of $e(n)$, which we write as $E\{e(n)\}$.
In general, the expected value of any function $f(v)$ of a
random variable $v$ is given by
$$
E\{f(v)\} \triangleq \int_{-\infty}^{\infty} f(x)\, p_v(x)\, dx.
$$
Since the quantization-noise signal $e(n)$ is modeled as a series of
independent, identically distributed (iid) random variables, we can
estimate the mean by averaging the signal over time.
Such an estimate is called a sample mean.
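Under the iid model, the sample mean of a long stretch of simulated round-off noise should come out close to zero. A minimal sketch (the step size and sample count are assumptions for illustration):

```python
import random

rng = random.Random(2)
q = 0.5
N = 100_000
errors = [rng.uniform(-q / 2, q / 2) for _ in range(N)]  # iid error model

sample_mean = sum(errors) / N   # estimates the true mean, which is zero
```

The estimate's standard deviation shrinks as $1/\sqrt{N}$, so longer averages give tighter estimates.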
Probability distributions are often characterized by their
moments.
The $n$th moment of the pdf $p_e(x)$ is defined as
$$
m_n \triangleq \int_{-\infty}^{\infty} x^n\, p_e(x)\, dx.
$$
Thus, the mean $\mu_e = m_1$ is the first moment of the
pdf. The second moment is simply the expected value of the random variable
squared, i.e., $E\{e^2\} = m_2$.
The variance $\sigma_e^2$ of a random variable $e$ is defined as the
second central moment of the pdf:
$$
\sigma_e^2 \triangleq E\{(e - \mu_e)^2\}
 = \int_{-\infty}^{\infty} (x - \mu_e)^2\, p_e(x)\, dx.
$$
``Central'' just means that the moment is evaluated after subtracting out
the mean, that is, looking at $e - \mu_e$ instead of $e$. In
the case of round-off errors, the mean is zero, so subtracting out the mean
has no effect. Plugging in the constant pdf for our random variable $e$,
which we assume is uniformly distributed on $[-q/2, q/2]$, we obtain the
variance
$$
\sigma_e^2 = \int_{-q/2}^{q/2} x^2\, \frac{1}{q}\, dx
 = \frac{1}{q}\left[\frac{x^3}{3}\right]_{-q/2}^{q/2}
 = \frac{q^2}{12}.
$$
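A Monte Carlo check of the $q^2/12$ result, assuming the same uniform model (parameter values are illustrative):

```python
import random

rng = random.Random(3)
q = 0.25
N = 500_000
errors = [rng.uniform(-q / 2, q / 2) for _ in range(N)]

var_theory = q * q / 12                          # the result derived above
var_empirical = sum(e * e for e in errors) / N   # mean square (mean is zero)
```

The empirical mean square converges on $q^2/12$ as the sample count grows.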
Note that the variance of $e(n)$ can be estimated by averaging $e^2(n)$
over time, that is, by computing the mean square. Such an estimate
is called the sample variance. For sampled physical processes, the
sample variance is proportional to the average power in the signal.
Finally, the square root of the sample variance (the rms level) is
sometimes called the standard deviation of the signal, but this term
is only precise when the random variable has a Gaussian pdf.
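The same relationship can be observed with an actual rounding-error signal rather than a random model. In this sketch (the test signal, amplitude, frequency, and step size are illustrative assumptions), the sinusoid spans many quantization steps, so the measured error power lands near $q^2/12$ and the rms level near $q/\sqrt{12}$:

```python
import math

q = 2.0 ** -8          # assumed quantization step size
N = 100_000
errors = []
for n in range(N):
    x = 0.9 * math.sin(2 * math.pi * 0.1234 * n)   # spans many steps of q
    errors.append(q * round(x / q) - x)            # round-off error signal

power = sum(e * e for e in errors) / N   # sample variance (mean is ~zero)
rms = math.sqrt(power)                   # the "rms level" of the noise
```

When the signal crosses only a few quantization steps, the error becomes correlated with the signal and this uniform-noise approximation breaks down.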
Some good textbooks in the area of statistical signal processing
include [22,41,52,27].
``Mathematics of the Discrete Fourier Transform (DFT)'',
by Julius O. Smith III,
W3K Publishing, 2003, ISBN 0-9745607-0-7.
Copyright © 2003-10-09 by Julius O. Smith III
Center for Computer Research in Music and Acoustics (CCRMA),
Stanford University