10. Detecting singularity

There are many ways to characterize singular matrices, some of which are computationally attractive. We focus on just two of them.

10.1. RREF

Here is a simple criterion based on something we already know how to do.

Theorem 10.1

A square matrix is invertible if and only if its RREF is an identity matrix.

Note

In this section we are referring to the RREF of just the coefficient matrix, not the augmented matrix.

In lieu of a formal proof, let’s consider how limited the options are for the RREF of a square matrix. There are just as many rows as there are columns. So if each row is to have a leading one, then they must march along the diagonal of the matrix, i.e., the leading one of row \(i\) is in column \(i\). But those are the only nonzeros in their columns, so they are the only nonzeros in the entire matrix. Voila, an identity matrix!

The contrapositive observation is that if \(\bfA\) is singular, then its RREF must have one or more rows, and therefore one or more columns, without a leading one. That is,

Theorem 10.2

A square matrix is singular if and only if its RREF has at least one row of zeros.

Example

Determine whether

\[\begin{split}\bfA = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}\end{split}\]

is singular.
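
For a machine check, here is a minimal sketch using SymPy's exact-arithmetic `Matrix.rref` (any system with an RREF routine works the same way): a zero row in the RREF, equivalently fewer pivots than rows, signals singularity by Theorem 10.2.

```python
# Checking Theorem 10.2 with SymPy's exact-arithmetic RREF.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])

R, pivots = A.rref()          # R is the RREF; pivots lists the pivot columns
print(R)                      # bottom row is all zeros, so A is singular
print(len(pivots) < A.rows)   # True: fewer leading ones than rows
```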

10.2. Determinant

You probably saw some \(2\times 2\) and \(3\times 3\) determinants in vector calculus. The \(2\times 2\) case is easy to describe:

\[\det\left( \twomat{a}{b}{c}{d} \right) = \twodet{a}{b}{c}{d} = ad-bc.\]

This definition can be extended to create a real-valued function for square matrices of any size. The formalities can be complicated, but we are going to use a practical approach.

Definition 10.3 (Determinant)

If \(\bfA\) is \(n\times n\), then its determinant is

\[\det(\bfA) = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} \det\bigl( \mathbf{M}_{ij} \bigr),\]

where \(i\) is any fixed row (summing instead down any fixed column gives the same value) and \(\mathbf{M}_{ij}\) is the \((n-1)\times(n-1)\) matrix that results from deleting row \(i\) and column \(j\) from \(\bfA\).

The definition, which is called cofactor expansion, is recursive: the \(n\times n\) case is defined in terms of the \((n-1)\times (n-1)\) case, and so on all the way back down to \(2\times 2\). Since expanding along any row or column gives the same result, it can be advantageous to choose one with lots of zeros to cut down on the total computation.
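
To make the recursion concrete, here is a naive Python sketch of cofactor expansion along the first row; the helper name `cofactor_det` is ours, not a library routine, and the exponential cost makes it suitable for illustration only.

```python
# Naive recursive determinant via cofactor expansion along row 1.
def cofactor_det(A):
    n = len(A)
    if n == 1:                       # base case: 1x1 determinant
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:             # zero entries contribute nothing
            continue
        # the minor: A with row 0 and column j deleted
        M = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * cofactor_det(M)
    return total

print(cofactor_det([[1, 2], [3, 4]]))   # -2, matching ad - bc
```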

Example

Compute the determinant of

\[\begin{split} \begin{bmatrix} 2 & 0 & -1 \\ -2 & 3 & -1 \\ 2 & 0 & 1 \end{bmatrix}. \end{split}\]
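
Expanding along the second column is convenient here, since it has only one nonzero entry:

\[(-1)^{2+2}\,(3)\,\twodet{2}{-1}{2}{1} = 3\bigl[(2)(1)-(-1)(2)\bigr] = 12.\]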

There are a few facts about determinants that are good to know.

Property 10.4

Let \(\bfA\) and \(\bfB\) be \(n\times n\), and let \(c\) be a scalar. Then

  1. \(\det(c\bfA) = c^n \det(\bfA)\),

  2. \(\det(\bfA\bfB) = \det(\bfA)\det(\bfB)\),

  3. \(\det(\bfA)=0\) if and only if \(\bfA\) is singular, and

  4. if \(\bfA\) is nonsingular, \(\det(\bfA^{-1})=\bigl[\det(\bfA)\bigr]^{-1}\).

It’s the third property above that we will be using. The determinant is often the easiest way to check for singularity of a small matrix by hand.
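
These properties are easy to spot-check numerically. The sketch below uses NumPy's floating-point `numpy.linalg.det`, so the equalities hold only up to roundoff; for property 3 we reuse the singular matrix from the RREF example.

```python
# Spot-checking Property 10.4 with NumPy (floating point, so up to roundoff).
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
c = 3.0

print(np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A)))   # property 1
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))             # property 2
S = np.array([[1., 2, 3], [4, 5, 6], [7, 8, 9]])
print(np.linalg.det(S))                # essentially zero: S is singular (property 3)
print(np.isclose(np.linalg.det(np.linalg.inv(A)),
                 1 / np.linalg.det(A)))                            # property 4
```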

10.2.1. Cramer’s Rule

Even though a \(2\times 2\) inverse is easy to write down, it's still not the most convenient way to solve a linear system \(\bfA\bfx=\bfb\) by hand. There is an equivalent, even faster shortcut known as Cramer's Rule:

\[\begin{align*} x_1 & = \frac{ \twodet{b_1}{a_{12}}{b_2}{a_{22}} }{ \det(\bfA) }\\[1ex] x_2 & = \frac{ \twodet{a_{11}}{b_1}{a_{21}}{b_2} }{ \det(\bfA) }. \end{align*}\]

Obviously this does not work if \(\det(\bfA)=0\), i.e., when the matrix is singular; in that case you have to fall back on our other methods.
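
Cramer's Rule extends to any \(n\times n\) system: \(x_i\) equals the determinant of \(\bfA\) with its \(i\)th column replaced by \(\bfb\), divided by \(\det(\bfA)\). Here is a minimal NumPy sketch of that pattern (the helper `cramer_solve` is ours, not a library routine, and elimination is far cheaper for anything beyond tiny systems):

```python
# Cramer's Rule for A x = b: x[i] = det(A with column i replaced by b) / det(A).
import numpy as np

def cramer_solve(A, b):
    d = np.linalg.det(A)
    if np.isclose(d, 0):
        raise ValueError("matrix is (numerically) singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                  # replace column i with the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2., 1.], [1., 3.]])
b = np.array([5., 10.])
print(cramer_solve(A, b))             # [1. 3.]
```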

Example

Solve

\[\begin{align*} -x + 3y & = 1 \\ 3x + y & = 7 \end{align*}\]

by Cramer’s Rule.
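
Here \(\bfA = \twomat{-1}{3}{3}{1}\) and \(\det(\bfA) = (-1)(1)-(3)(3) = -10\), so

\[x = \frac{\twodet{1}{3}{7}{1}}{-10} = \frac{1-21}{-10} = 2, \qquad y = \frac{\twodet{-1}{1}{3}{7}}{-10} = \frac{-7-3}{-10} = 1,\]

and it is easy to check that \(x=2\), \(y=1\) satisfies both equations.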