10. Detecting singularity¶
There are many ways to characterize singular matrices, some of which are computationally attractive. We focus on just two of them.
10.1. RREF¶
Here is a simple criterion based on something we already know how to do.
A square matrix is invertible if and only if its RREF is an identity matrix.
Note
In this section we are referring to the RREF of just the coefficient matrix, not the augmented matrix.
In lieu of a formal proof, let's consider how limited the options are for the RREF of a square matrix. There are just as many rows as there are columns. So if each row is to have a leading one, then the leading ones must march along the diagonal of the matrix, i.e., the leading one of row \(i\) is in column \(i\). But those are the only nonzeros in their columns, so they are the only nonzeros in the entire matrix. Voilà, an identity matrix!
The contrapositive observation is that if \(\bfA\) is singular, then it must have one or more rows, and therefore one or more columns, without a leading one. That is,
A square matrix is singular if and only if its RREF has at least one row of zeros.
Example
Determine whether
is singular.
Solution
Its RREF is
Hence this matrix is singular.
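This criterion is easy to check by computer. Below is a minimal sketch using SymPy's `rref` method; the matrix here is a made-up singular example (its third row is the sum of the first two), not the one from this section.

```python
import sympy as sp

# A made-up 3x3 singular matrix: row 3 = row 1 + row 2
A = sp.Matrix([[1, 2, 3],
               [4, 5, 6],
               [5, 7, 9]])

# rref() returns the RREF and the indices of the pivot columns
R, pivots = A.rref()

# Singular iff the RREF is not the identity, i.e., there are
# fewer pivots (leading ones) than rows
is_singular = len(pivots) < A.rows
print(R)            # last row is all zeros
print(is_singular)  # → True
```

Here the RREF has a row of zeros, confirming singularity without ever attempting to invert the matrix.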
10.2. Determinant¶
You probably saw some \(2\times 2\) and \(3\times 3\) determinants in vector calculus. The \(2\times 2\) case is easy to describe:

\[
\det\left( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right) = ad - bc.
\]
This definition can be extended to create a real-valued function for square matrices of any size. The formalities can be complicated, but we are going to use a practical approach.
If \(\bfA\) is \(n\times n\), then its determinant is

\[
\det(\bfA) = \sum (-1)^{i+j}\, A_{ij} \det(\mathbf{M}_{ij}),
\]

where the sum is taken over any row or column of \(\bfA\) and \(\mathbf{M}_{ij}\) is the matrix that results from deleting row \(i\) and column \(j\) from \(\bfA\).
The definition, which is called cofactor expansion, is recursive: the \(n\times n\) case is defined in terms of the \((n-1)\times (n-1)\) case, and so on all the way back down to \(2\times 2\). Since expanding along any row or column gives the same result, it can be advantageous to choose one with lots of zeros to cut down on the total computation.
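The recursion described above translates almost directly into code. The following is a sketch of cofactor expansion along the first row; it is fine for small matrices by hand-checking, though far too slow to be a practical algorithm for large ones.

```python
def det(A):
    """Determinant by cofactor expansion along the first row.

    A is a list of lists representing a square matrix.
    """
    n = len(A)
    if n == 1:
        return A[0][0]
    if n == 2:
        # Base case: the 2x2 formula ad - bc
        return A[0][0] * A[1][1] - A[0][1] * A[1][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue  # zeros contribute nothing, so skip them
        # M is A with row 0 and column j deleted
        M = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(M)
    return total

print(det([[2, 0, 3],
           [1, 1, 2],
           [0, 5, -2]]))  # → -9
```

The `continue` line mirrors the advice in the text: a zero entry kills its whole term, so rows or columns with many zeros make the expansion cheaper.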
Example
Compute the determinant of
Solution
Using cofactor expansion along the first row,
In this case it might have been a tad easier to exploit the zeros by expanding along the second column instead:
There are a few facts about determinants that are good to know.
Let \(\bfA\) and \(\bfB\) be \(n\times n\), and let \(c\) be a scalar. Then
\(\det(c\bfA) = c^n \det(\bfA)\),
\(\det(\bfA\bfB) = \det(\bfA)\det(\bfB)\),
\(\det(\bfA)=0\) if and only if \(\bfA\) is singular, and
If \(\bfA\) is nonsingular, \(\det(\bfA^{-1})=\bigl[\det(\bfA)\bigr]^{-1}\).
It’s the third property above that we will be using. The determinant is often the easiest way to check for singularity of a small matrix by hand.
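For matrices too large to handle by hand, these properties are easy to spot-check numerically. A sketch with NumPy, using random matrices chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))  # random matrices are nonsingular
B = rng.standard_normal((n, n))  # with probability 1
c = 3.0

# det(cA) = c^n det(A)
assert np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A))
# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# det(A^{-1}) = 1 / det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))
```

Note the first property: scaling a \(4\times 4\) matrix by \(3\) multiplies its determinant by \(3^4 = 81\), not by \(3\), because every one of the \(n\) rows gets scaled.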
10.2.1. Cramer’s Rule¶
Even though a \(2\times 2\) inverse is easy to write down, it’s still not the most convenient way to solve a linear system \(\bfA\bfx=\bfb\) by hand. There is an even faster equivalent shortcut known as Cramer’s Rule: the \(i\)th component of the solution is

\[
x_i = \frac{\det(\bfA_i)}{\det(\bfA)},
\]

where \(\bfA_i\) is the matrix obtained by replacing column \(i\) of \(\bfA\) with \(\bfb\).
Obviously this does not work if \(\det(\bfA)=0\), i.e., when the matrix is singular. Instead you have to fall back on our other methods.
Example
Solve
by Cramer’s Rule.
Solution
Plug and play (or is it plug and pray?):
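Cramer's Rule is also a one-loop program. The sketch below (the function name and the \(2\times 2\) test system are our own, not from the text) solves \(\bfA\bfx=\bfb\) by replacing one column of \(\bfA\) at a time with \(\bfb\); like cofactor expansion, it is a hand-calculation device, not a serious numerical method.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's Rule (impractical for large n)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("matrix is singular; Cramer's Rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b  # replace column i with the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

# det(A) = 5; det(A_1) = 4, det(A_2) = 7, so x = (0.8, 1.4)
print(cramer_solve([[2, 1], [1, 3]], [3, 5]))
```

The explicit singularity check reflects the warning above: when \(\det(\bfA)=0\) the formula divides by zero, and another method (such as row reduction) is required.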