7. Matrix-vector algebra
Matrices support addition/subtraction and scalar multiplication just as vectors do, by acting elementwise. But things get more complicated when we start considering how to multiply matrices and vectors together.
7.1. Matrix times vector
The linear combination definition serves as the foundation of multiplication between a matrix and a vector.
Given \(\bfA\in\cmn{m}{n}\) and \(\bfx\in\complex^{n}\), the product \(\bfA\bfx\) is defined as

\[
\bfA\bfx = x_1 \bfa_1 + x_2 \bfa_2 + \cdots + x_n \bfa_n,
\]

where \(\bfa_j\) refers to the \(j\)th column of \(\bfA\).
Warning
In order for \(\bfA\bfx\) to be defined, the number of columns in \(\bfA\) has to be the same as the number of elements in (dimension of) \(\bfx\).
Note that when \(\bfA\) is \(m\times n\), then \(\bfx\) must have dimension \(n\) and \(\bfA\bfx\) has dimension \(m\).
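To make the definition concrete, here is a minimal MATLAB sketch (not from the text; the matrix and vector values are chosen arbitrarily) that builds \(\bfA\bfx\) column by column and compares it with the built-in product.

A = [ 1 -1 0; 2 2 -3; 4 0 1 ];    % a 3x3 matrix, so x must have 3 entries
x = [ 2; 1; -1 ];
v = zeros(size(A,1),1);           % accumulate the linear combination
for j = 1:size(A,2)
    v = v + x(j)*A(:,j);          % x_j times the j-th column of A
end
[v, A*x]                          % the two columns should agree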
Example
Calculate the product
Solution
The product is equivalent to
We often don’t write out the product in this much detail just to calculate an instance. Instead we “zip together” the rows of the matrix with the entries of the vector:
You might recognize the “zip” expressions in this vector as dot products from vector calculus.
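Since the example's specific values aren't reproduced above, the following sketch uses arbitrary numbers to illustrate the same "zip" idea: entry \(i\) of \(\bfA\bfx\) is row \(i\) of \(\bfA\) times \(\bfx\).

A = [ 1 -1 0; 2 2 -3; 4 0 1 ];    % arbitrary example values
x = [ 2; 1; -1 ];
u = zeros(size(A,1),1);
for i = 1:size(A,1)
    u(i) = A(i,:)*x;              % "zip" row i of A with x (a dot product)
end
[u, A*x]                          % the two columns should agree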
What justifies calling this operation multiplication? In large part, it’s the natural distributive properties

\[
\bfA(\bfx+\bfy) = \bfA\bfx + \bfA\bfy, \qquad (\bfA+\bfB)\bfx = \bfA\bfx + \bfB\bfx,
\]

which can be checked with a little effort. It’s also true that \(\bfA(c\bfx)=c(\bfA\bfx)\) for any scalar \(c\). However, there is a big departure from multiplication as we usually know it.
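These properties can also be spot-checked numerically. The sketch below uses arbitrarily chosen matrices, vectors, and scalar (none of them from the text); each line should print a zero vector.

A = [ 1 -1 0; 2 2 -3; 4 0 1 ];    % arbitrary values for a spot check
B = [ 2 0 1; -1 3 0; 0 1 5 ];
x = [ 2; 1; -1 ];
y = [ 0; -3; 2 ];
c = 4;
A*(x+y) - (A*x + A*y)             % distributes over vector addition
(A+B)*x - (A*x + B*x)             % distributes over matrix addition
A*(c*x) - c*(A*x)                 % scalars can move freely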
Warning
Matrix-vector products are not commutative. In fact, \(\bfx\bfA\) is not defined even when \(\bfA\bfx\) is.
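As a small illustration (arbitrary values again), asking MATLAB for x*A with a column vector x triggers a dimension error, while A*x is fine.

A = [ 1 -1 0; 2 2 -3; 4 0 1 ];
x = [ 2; 1; -1 ];
A*x                               % (3x3)*(3x1): defined, result is 3x1
try
    x*A                           % (3x1)*(3x3): inner dimensions disagree
catch err
    disp(err.message)             % MATLAB reports the size mismatch
end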
7.2. Connection to linear systems
Earlier we said that a linear system is equivalent to a statement about a linear combination of vectors. Now we have said that a linear combination of vectors is equivalent to a matrix-vector multiplication. Putting these together,
The linear system with coefficient matrix \(\bfA\), right-side vector \(\bfb\), and solution \(\bfx\) is equivalent to the equation \(\bfA\bfx=\bfb\).
This observation finally brings us back around to the introduction of linear systems through the insultingly simple scalar equation \(ax=b\). But to fully solve it in the vector case, we need a bit more preparation.
7.3. MATLAB
We can now connect solving linear systems with backslash to matrix-vector multiplication. The system defined by
A = [ 1 -1 0; 2 2 -3; 4 0 1 ];
b = [ 3; -1; 1 ];
is solved via
x = A \ b
x =
0.500000000000000
-2.500000000000000
-1.000000000000000
We check the result:
b - A*x
ans =
0
0
0
In general, we cannot expect all the entries of the vector to be exactly zero, due to rounding errors.
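A common way to summarize how far a computed solution is from satisfying the system is the norm of the residual; a brief sketch using the same A, b, and x as above:

norm(b - A*x)                     % a single number; typically tiny, but not always exactly zero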