Matrix Multiplication

Let $A^{\mathsf{T}}$ be a general $M\times L$ matrix, and let $B$ denote a general $L\times N$ matrix. Denote the matrix product by $C=A^{\mathsf{T}}B$ or $C=A^{\mathsf{T}}\cdot B$. Then matrix multiplication is carried out by computing the inner product of every row of $A^{\mathsf{T}}$ with every column of $B$. Let the $i$th row of $A^{\mathsf{T}}$ be denoted by $\underline{a}^{\mathsf{T}}_i$, $i=1,2,\ldots,M$, and the $j$th column of $B$ by $\underline{b}_j$, $j=1,2,\ldots,N$. Then the matrix product $C=A^{\mathsf{T}}B$ is defined as

$\displaystyle C = A^{\mathsf{T}}B = \left[\begin{array}{ccc}
\langle\underline{a}^{\mathsf{T}}_1,\underline{b}_1\rangle & \cdots & \langle\underline{a}^{\mathsf{T}}_1,\underline{b}_N\rangle \\
\vdots & \ddots & \vdots \\
\langle\underline{a}^{\mathsf{T}}_M,\underline{b}_1\rangle & \cdots & \langle\underline{a}^{\mathsf{T}}_M,\underline{b}_N\rangle
\end{array}\right]
$

This definition can be extended to complex matrices by using a definition of inner product which does not conjugate its second argument.
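
For a concrete check, here is a minimal NumPy sketch (not from the text; the sizes and random test values are assumed for illustration) that forms each entry of $C$ as an unconjugated inner product of a row of $A^{\mathsf{T}}$ with a column of $B$, including the complex case:

import numpy as np

rng = np.random.default_rng(0)
M, L, N = 3, 4, 2
At = rng.standard_normal((M, L)) + 1j*rng.standard_normal((M, L))   # plays the role of A^T
B  = rng.standard_normal((L, N)) + 1j*rng.standard_normal((L, N))

# Entry (i, j) is the inner product of row i of A^T with column j of B,
# with no conjugation of the second argument, even for complex entries.
C = np.array([[np.sum(At[i, :] * B[:, j]) for j in range(N)]
              for i in range(M)])

assert np.allclose(C, At @ B)   # agrees with the built-in matrix product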

Examples:

$\displaystyle \left[\begin{array}{cc} a & b \\ c & d \\ e & f \end{array}\right]
\left[\begin{array}{cc} \alpha & \beta \\ \gamma & \delta \end{array}\right]
= \left[\begin{array}{cc}
a\alpha+b\gamma & a\beta+b\delta \\
c\alpha+d\gamma & c\beta+d\delta \\
e\alpha+f\gamma & e\beta+f\delta
\end{array}\right]
$

$\displaystyle \left[\begin{array}{cc} \alpha & \beta \\ \gamma & \delta \end{array}\right]
\left[\begin{array}{ccc} a & c & e \\ b & d & f \end{array}\right]
= \left[\begin{array}{ccc}
\alpha a + \beta b & \alpha c + \beta d & \alpha e + \beta f \\
\gamma a + \delta b & \gamma c + \delta d & \gamma e + \delta f
\end{array}\right]
$

$\displaystyle \left[\begin{array}{c} \alpha \\ \beta \end{array}\right]
\cdot
\left[\begin{array}{ccc} a & b & c \end{array}\right]
= \left[\begin{array}{ccc}
\alpha a & \alpha b & \alpha c \\
\beta a & \beta b & \beta c
\end{array}\right]
$

$\displaystyle \left[\begin{array}{ccc} a & b & c \end{array}\right]
\cdot
\left[\begin{array}{c} \alpha \\ \beta \\ \gamma \end{array}\right]
= a\alpha + b\beta + c\gamma
$
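
As a numeric spot-check of the last two examples (an outer product and an inner product), a minimal NumPy sketch with assumed values:

import numpy as np

col = np.array([[2.0], [3.0]])        # column [alpha, beta]^T (values assumed)
row = np.array([[5.0, 7.0, 11.0]])    # row [a, b, c] (values assumed)

outer = col @ row                     # 2x3: every product alpha*a, ..., beta*c
assert np.allclose(outer, np.outer(col, row))

inner = row @ np.array([[2.0], [3.0], [4.0]])   # row times column
assert inner.shape == (1, 1)                    # a 1x1 matrix holding the inner product
assert inner[0, 0] == 5*2 + 7*3 + 11*4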

An $M\times L$ matrix $A$ can be multiplied on the right by an $L\times N$ matrix, where $N$ is any positive integer. An $L\times N$ matrix $A$ can be multiplied on the left by an $M\times L$ matrix, where $M$ is any positive integer. Thus, the number of columns in the matrix on the left must equal the number of rows in the matrix on the right.

Matrix multiplication is non-commutative, in general. That is, normally $AB\neq BA$, even when both products are defined (such as when the matrices are square).
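
Both points are easy to check numerically; in the following sketch (shapes and values assumed), the incompatible product raises an error, and a pair of square matrices fails to commute:

import numpy as np

A = np.arange(6.0).reshape(2, 3)    # 2x3
B = np.arange(12.0).reshape(3, 4)   # 3x4

assert (A @ B).shape == (2, 4)      # inner dimensions match: (2x3)(3x4)

try:
    B @ A                           # (3x4)(2x3): 4 != 2, so the product is undefined
except ValueError as err:
    print("incompatible shapes:", err)

P = np.array([[0.0, 1.0], [0.0, 0.0]])
Q = np.array([[0.0, 0.0], [1.0, 0.0]])
assert not np.allclose(P @ Q, Q @ P)   # square, yet PQ != QP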

The transpose of a matrix product is the product of the transposes in reverse order:

$\displaystyle (A B)^{\mathsf{T}} = B^{\mathsf{T}} A^{\mathsf{T}}
$
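
A quick numeric spot-check of this identity (random shapes and values assumed):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

assert np.allclose((A @ B).T, B.T @ A.T)   # (AB)^T = B^T A^T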

The identity matrix is denoted by $ I$ and is defined as

$\displaystyle I \isdef \left[\begin{array}{ccccc}
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{array}\right]
$

Identity matrices are always square. The $N\times N$ identity matrix $I$, sometimes denoted $I_N$, satisfies $A\cdot I_N = A$ for every $M\times N$ matrix $A$. Similarly, $I_M\cdot A = A$ for every $M\times N$ matrix $A$.
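
For example (sizes assumed for illustration):

import numpy as np

rng = np.random.default_rng(2)
M, N = 3, 5
A = rng.standard_normal((M, N))

assert np.allclose(A @ np.eye(N), A)   # A * I_N = A (right identity)
assert np.allclose(np.eye(M) @ A, A)   # I_M * A = A (left identity)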

As a special case, a matrix $A^{\mathsf{T}}$ times a vector $\underline{x}$ produces a new vector $\underline{y} = A^{\mathsf{T}}\underline{x}$ which consists of the inner product of every row of $A^{\mathsf{T}}$ with $\underline{x}$:

$\displaystyle A^{\mathsf{T}}\underline{x} = \left[\begin{array}{c}
\langle\underline{a}^{\mathsf{T}}_1,\underline{x}\rangle \\
\vdots \\
\langle\underline{a}^{\mathsf{T}}_M,\underline{x}\rangle
\end{array}\right]
$

A matrix $A^{\mathsf{T}}$ times a vector $\underline{x}$ defines a linear transformation of $\underline{x}$. In fact, every linear function of a vector $\underline{x}$ can be expressed as a matrix multiply. In particular, every linear filtering operation can be expressed as a matrix multiply applied to the input signal. As a special case, every linear, time-invariant (LTI) filtering operation can be expressed as a matrix multiply in which the matrix is Toeplitz, i.e., $A^{\mathsf{T}}[i,j] = A^{\mathsf{T}}[i-j]$ (constant along diagonals).
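
To illustrate the Toeplitz case, the following sketch (with an assumed impulse response $h$ and input signal $x$) builds the convolution matrix $T$ satisfying $T[i,j] = h[i-j]$ and checks it against direct convolution:

import numpy as np

h = np.array([1.0, 0.5, 0.25])       # FIR impulse response (assumed)
x = np.array([1.0, 2.0, 3.0, 4.0])   # input signal (assumed)

Ny = len(h) + len(x) - 1             # length of the full convolution
T = np.zeros((Ny, len(x)))
for i in range(Ny):
    for j in range(len(x)):
        if 0 <= i - j < len(h):
            T[i, j] = h[i - j]       # entries depend only on i - j (Toeplitz)

assert np.allclose(T @ x, np.convolve(h, x))   # LTI filtering = Toeplitz multiply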

As a further special case, a row vector on the left may be multiplied by a column vector on the right to form a single inner product:

$\displaystyle \underline{y}^{\ast}\underline{x} = \langle\underline{x},\underline{y}\rangle
$

Use of the ``Hermitian transpose'' notation ``$\ast$'' (defined in §6.10) allows the above result to hold also for complex vectors. We may now rewrite the general matrix multiply as

$\displaystyle C = A^{\mathsf{T}}B = \left[\begin{array}{ccc}
\underline{a}^{\mathsf{T}}_1\underline{b}_1 & \cdots & \underline{a}^{\mathsf{T}}_1\underline{b}_N \\
\vdots & \ddots & \vdots \\
\underline{a}^{\mathsf{T}}_M\underline{b}_1 & \cdots & \underline{a}^{\mathsf{T}}_M\underline{b}_N
\end{array}\right]
$
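
As a final numeric check (random complex test vectors assumed), the row-times-column product $\underline{y}^{\ast}\underline{x}$ indeed equals $\langle\underline{x},\underline{y}\rangle$ when the inner product conjugates its second argument:

import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(4) + 1j*rng.standard_normal(4)
y = rng.standard_normal(4) + 1j*rng.standard_normal(4)

ip = np.sum(x * np.conj(y))              # <x, y>: second argument conjugated
assert np.allclose(np.conj(y) @ x, ip)   # y^* x, a row times a column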

