

Signal Metrics

This section defines some useful functions of signals.

The mean of a signal $ x$ (more precisely the ``sample mean'') is defined as its average value:

$\displaystyle \mu_x \isdef \frac{1}{N}\sum_{n=0}^{N-1}x_n$   $\displaystyle \mbox{(mean of $x$)}$

The total energy of a signal $ x$ is defined as the sum of its squared moduli:

$\displaystyle {\cal E}_x \isdef \sum_{n=0}^{N-1}\left\vert x_n\right\vert^2$   $\displaystyle \mbox{(energy of $x$)}$

Energy is the ``ability to do work.'' In physics, energy and work are in units of ``force times distance,'' ``mass times velocity squared,'' or other equivalent combinations of units.

The average power of a signal $ x$ is defined as the energy per sample:

$\displaystyle {\cal P}_x \isdef \frac{{\cal E}_x}{N} = \frac{1}{N} \sum_{n=0}^{N-1}\left\vert x_n\right\vert^2$   $\displaystyle \mbox{(average power of $x$)}$

Another common description when $ x$ is real is the ``mean square.'' When $ x$ is a complex sinusoid, i.e., $ x(n) = Ae^{j(\omega nT + \phi)}$, then $ {\cal P}_x = A^2$; in other words, for complex sinusoids, the average power equals the instantaneous power, which is the amplitude squared. For the real signal $ y =$   re$ \left\{x\right\}$, the average power is $ {\cal P}_y = A^2/2$.
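These power relations are easy to verify numerically. The following sketch (not from the text; NumPy assumed, with illustrative values $ A=3$, $ N=64$, and a whole number of periods $ k=5$ chosen so the averages come out exact) computes the average power of a sampled complex sinusoid and of its real part:

```python
import numpy as np

# Illustrative values (not from the text): amplitude A, length N, phase phi.
A, N, phi = 3.0, 64, 0.7
k = 5                          # whole number of periods in N samples
wT = 2 * np.pi * k / N         # omega * T, chosen so the averages are exact
n = np.arange(N)
x = A * np.exp(1j * (wT * n + phi))    # complex sinusoid x(n)

P_x = np.sum(np.abs(x) ** 2) / N       # average power of x
P_y = np.sum(np.real(x) ** 2) / N      # average power of y = re{x}

print(P_x)   # ~ A**2 = 9.0
print(P_y)   # ~ A**2 / 2 = 4.5
```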

Power is always in physical units of energy per unit time. It therefore makes sense to define the average signal power as the total signal energy divided by its length. We normally work with signals which are functions of time. However, if the signal happens instead to be a function of distance (e.g., samples of displacement along a vibrating string), then the ``power'' defined here is really a spatial energy density (energy per unit distance); power proper, in contrast, is a temporal energy density (energy per unit time).

The root mean square (RMS) level of a signal $ x$ is simply $ \sqrt{{\cal P}_x}$. However, note that in practice (especially in audio work) an RMS level may be computed after subtracting out the mean value. In that case, the squared RMS level coincides with the variance, defined next.

The variance (more precisely the sample variance) of the signal $ x$ is defined as the power of the signal with its mean removed:

$\displaystyle \sigma_x^2 \isdef \frac{1}{N}\sum_{n=0}^{N-1}\left\vert x_n - \mu_x\right\vert^2$   $\displaystyle \mbox{(sample variance of $x$)}$

It is straightforward to show that, for real signals, we have

$\displaystyle \sigma_x^2 = {\cal P}_x - \mu_x^2$

which is the ``mean square minus the mean squared.'' We think of the variance as the power of the non-constant signal components (i.e., everything but dc). The terms ``sample mean'' and ``sample variance'' come from the field of statistics, particularly the theory of stochastic processes. The field of statistical signal processing [22,28,52] is firmly rooted in statistical topics such as ``probability,'' ``random variables,'' ``stochastic processes,'' and ``time series analysis.'' In this book, we will only touch lightly on a few elements of statistical signal processing in a self-contained way.
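The identity above can be spot-checked with a few lines of code (a sketch, assuming NumPy; the signal values are arbitrary illustrative numbers):

```python
import numpy as np

x = np.array([2.0, -1.0, 3.0, 0.5, -2.5])   # arbitrary real test signal
N = len(x)

mu  = np.sum(x) / N                  # sample mean
P   = np.sum(x ** 2) / N             # average power ("mean square")
var = np.sum((x - mu) ** 2) / N      # sample variance (power with mean removed)

# "mean square minus the mean squared":
print(np.isclose(var, P - mu ** 2))  # True
```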

The norm of a signal $ x$ is defined as the square root of its total energy:

$\displaystyle \Vert x\Vert \isdef \sqrt{{\cal E}_x} = \sqrt{\sum_{n=0}^{N-1}\left\vert x_n\right\vert^2}$   $\displaystyle \mbox{(norm of $x$)}$

We think of $ \Vert x\Vert$ as the length of $ x$ in $ N$-space. Furthermore, $ \Vert x-y\Vert$ is regarded as the distance between $ x$ and $ y$. The norm can also be thought of as the ``absolute value'' or ``radius'' of a vector.

Example: Going back to our simple 2D example $ x=[2, 3]$, we can compute its norm as $ \Vert x\Vert = \sqrt{2^2 + 3^2} = \sqrt{13} = 3.6056\ldots\,$. The physical interpretation of the norm as a distance measure is shown in Fig. 5.5.

Figure: Geometric interpretation of a signal norm in 2D.
\scalebox{0.7}{\includegraphics{eps/vec2dlen.eps}}
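As a numerical cross-check of this example, here is a sketch (assuming NumPy) that mirrors the definition of the norm above rather than calling any library norm function:

```python
import numpy as np

x = np.array([2.0, 3.0])                    # the 2D example from the text
norm_x = np.sqrt(np.sum(np.abs(x) ** 2))    # ||x|| per the definition above
print(norm_x)                               # sqrt(13), approx. 3.6056
```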

Example: Let's also look again at the vector-sum example, redrawn in Fig. 5.6.

Figure: Length of vectors in sum.
\scalebox{0.7}{\includegraphics{eps/vecsumdist.eps}}

The norm of the vector sum $ w=x+y$ is

$\displaystyle \Vert w\Vert \isdef \Vert x+y\Vert \isdef \Vert(2, 3) + (4, 1)\Vert = \Vert(6, 4)\Vert = \sqrt{6^2 + 4^2} = \sqrt{52} = 2\sqrt{13}$

while the norms of $ x$ and $ y$ are $ \sqrt{13}$ and $ \sqrt{17}$, respectively. We find that $ \Vert x+y\Vert<\Vert x\Vert+\Vert y\Vert$, which is an example of the triangle inequality. (Equality occurs only when $ x$ and $ y$ are collinear and point in the same direction, as can be seen geometrically from studying Fig. 5.6.)
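The vector sum and the triangle inequality can be checked the same way (a NumPy sketch using the example vectors from the text):

```python
import numpy as np

def norm(v):
    """Signal norm: square root of total energy."""
    return np.sqrt(np.sum(np.abs(v) ** 2))

x = np.array([2.0, 3.0])
y = np.array([4.0, 1.0])
w = x + y                             # (6, 4)

print(norm(w))                        # 2*sqrt(13), approx. 7.2111
print(norm(w) < norm(x) + norm(y))    # True: strict, since x and y are not collinear
```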

Example: Consider the vector-difference example shown in Fig. 5.7.

Figure: Length of a difference vector.
\scalebox{0.7}{\includegraphics{eps/vecdist.eps}}

The norm of the difference vector $ w=x-y$ is

$\displaystyle \Vert w\Vert \isdef \Vert x-y\Vert \isdef \Vert(2, 3) - (4, 1)\Vert = \Vert(-2, 2)\Vert = \sqrt{(-2)^2 + (2)^2} = 2\sqrt{2}.$
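The distance interpretation of the norm can likewise be checked numerically (a NumPy sketch with the same example vectors):

```python
import numpy as np

x = np.array([2.0, 3.0])
y = np.array([4.0, 1.0])
dist = np.sqrt(np.sum(np.abs(x - y) ** 2))   # ||x - y||, the distance between x and y
print(dist)                                  # 2*sqrt(2), approx. 2.8284
```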




``Mathematics of the Discrete Fourier Transform (DFT)'', by Julius O. Smith III, W3K Publishing, 2003, ISBN 0-9745607-0-7.


Copyright © 2003-10-09 by Julius O. Smith III
Center for Computer Research in Music and Acoustics (CCRMA),   Stanford University