13. Eigenvalues
The last stop on our whirlwind tour of linear algebra is the hardest to motivate right away. However, the topic is important to the differential equations we study in future chapters, in the sense that oxygen is an important part of breathing.
We are still operating with square matrices only.
Suppose \(\bfA\in\cmn{n}{n}\). If there exist a scalar \(\lambda\) and a nonzero vector \(\bfv\) such that

\[
\bfA \bfv = \lambda \bfv,
\]

then \(\lambda\) is an eigenvalue of \(\bfA\) with associated eigenvector \(\bfv\).
If you think of \(\bfA\) as acting on vectors, then an eigenvector is a direction in which the action of \(\bfA\) is the same as multiplication by a scalar; we have found a little one-dimensional oasis in which the behavior of \(\bfA\) is easy to comprehend.
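For a concrete illustration, here is a quick MATLAB check of the definition, using the matrix and an eigenvector taken from the worked example later in this section:

A = [ 1 1; 4 1 ];
v = [ 1; 2 ];
A*v    % returns [3; 6], which equals 3*v, so lambda = 3 is an eigenvalue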
13.1. Eigenspaces
An eigenvalue is a clean, well-defined target. Eigenvectors are a little slipperier. For starters, if \(\bfA\bfv=\lambda\bfv\), then for any nonzero scalar \(c\),

\[
\bfA(c\bfv) = c\,(\bfA\bfv) = c\,(\lambda\bfv) = \lambda(c\bfv).
\]
Note
Every nonzero multiple of an eigenvector is also an eigenvector for the same eigenvalue.
But there can be even more ambiguity than scalar multiples.
Example
Let \(\meye\) be an identity matrix. Then \(\meye\bfx=\bfx\) for any vector \(\bfx\), so every nonzero vector is an eigenvector!
Fortunately we already have the tools we need to describe a more robust target, based on the very simple reformulation

\[
\bfA\bfv = \lambda\bfv \quad \Longleftrightarrow \quad (\bfA-\lambda\meye)\bfv = \bfzero.
\]
Let \(\lambda\) be an eigenvalue of \(\bfA\). The eigenspace associated with \(\lambda\) is the general solution of \((\bfA-\lambda\meye)\bfx = \bfzero\).
Eigenspaces, unlike eigenvectors, are unique. We have to be a bit careful, though, because we usually express such spaces using basis vectors, and those bases are not themselves unique. It’s also not unusual for problems and discussions to use eigenvectors and just put up with the nonuniqueness.
13.2. Computing eigenvalues and eigenvectors
Note that if \(\lambda\) is not an eigenvalue, then by definition the only solution of \((\bfA-\lambda\meye)\bfv=\bfzero\) is \(\bfv=\bfzero\), which happens exactly when \(\bfA-\lambda\meye\) is invertible.
\(\lambda\) is an eigenvalue of \(\bfA\) if and only if \(\bfA-\lambda\meye\) is singular.
In practice the most common way to find eigenvalues by hand is through the equivalent condition \(\det(\bfA-\lambda\meye)=0\). This determinant has a particular form and name.
Suppose \(\bfA\) is an \(n\times n\) matrix. The function \(p(z) = \det(\bfA-z\meye)\) is a polynomial of degree \(n\) in \(z\), known as the characteristic polynomial of \(\bfA\).
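For a generic \(2\times 2\) matrix, for instance, expanding the determinant gives

\[
\det \begin{bmatrix} a - z & b \\ c & d - z \end{bmatrix} = (a-z)(d-z) - bc = z^2 - (a+d)z + (ad - bc),
\]

a quadratic whose two roots (counted with multiplicity) are the eigenvalues.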
Given an \(n\times n\) matrix \(\bfA\):

1. Find the characteristic polynomial \(p\) of \(\bfA\).
2. Let \(\lambda_1,\ldots,\lambda_k\) be the distinct roots of \(p\). These are the eigenvalues. (If \(k<n\), it's because one or more roots has multiplicity greater than 1.)
3. For each \(\lambda_j\), find the general solution of \((\bfA-\lambda_j\meye)\bfv=\bfzero\). This is the eigenspace associated with \(\lambda_j\).
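These steps can be imitated in MATLAB, at least approximately, since the poly and roots functions work in floating point rather than exactly. (As noted below, this is not how MATLAB's eig actually finds eigenvalues.) Using the matrix of the example that follows:

A = [ 1 1; 4 1 ];
p = poly(A)          % coefficients of the characteristic polynomial, highest degree first
lambda = roots(p)    % the eigenvalues are the roots of p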
Example
Find the eigenvalues and eigenspaces of

\[
\bfA = \begin{bmatrix} 1 & 1 \\ 4 & 1 \end{bmatrix}.
\]
Solution
Start by computing the characteristic polynomial:

\[
\det(\bfA - \lambda\meye) = \det \begin{bmatrix} 1-\lambda & 1 \\ 4 & 1-\lambda \end{bmatrix} = (1-\lambda)^2 - 4 = \lambda^2 - 2\lambda - 3.
\]

We find eigenvalues by finding its roots. Since \(\lambda^2 - 2\lambda - 3 = (\lambda-3)(\lambda+1)\), these are \(\lambda_1=3\) and \(\lambda_2=-1\).
For \(\lambda_1=3\),

\[
\bfA - 3\meye = \begin{bmatrix} -2 & 1 \\ 4 & -2 \end{bmatrix}.
\]

The homogeneous solution can be expressed as \(x_1=s/2\), \(x_2=s\), or \(\bfx=s\,\twovec{1/2}{1}\). So \(\twovec{1/2}{1}\) is a basis for this eigenspace. Since eigenvectors can be rescaled at will, we prefer to use \(\twovec{1}{2}\) as the basis vector.
For \(\lambda_2=-1\),

\[
\bfA + \meye = \begin{bmatrix} 2 & 1 \\ 4 & 2 \end{bmatrix},
\]

leading to the eigenspace basis \(\twovec{-1/2}{1}\), or equivalently, \(\twovec{-1}{2}\).
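These bases can be double-checked numerically: the null function returns an orthonormal basis for the homogeneous solution, which should be a rescaled version of each vector found by hand.

A = [ 1 1; 4 1 ];
null(A - 3*eye(2))   % a multiple of [1; 2]
null(A + eye(2))     % a multiple of [-1; 2]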
13.3. MATLAB
MATLAB computes eigenvalues (through an entirely different process) with the eig command. From the preceding example, for instance,
A = [ 1 1; 4 1 ];
lambda = eig(A)
lambda =
3.000000000000000
-1.000000000000000
If you want eigenvectors as well, use an alternate form for the output:
[V,D] = eig(A)
V =
0.447213595499958 -0.447213595499958
0.894427190999916 0.894427190999916
D =
3.000000000000000 0
0 -1.000000000000000
In most cases, column V(:,k) is an eigenvector for the eigenvalue D(k,k). (For eigenvalues of multiplicity greater than 1, the interpretation can be more complicated.) Keep in mind that any scalar multiple of an eigenvector is equally valid.
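Stacking the definition across all the columns of V gives the single matrix equation A*V = V*D, which makes a convenient sanity check:

A = [ 1 1; 4 1 ];
[V,D] = eig(A);
norm(A*V - V*D)   % zero, up to rounding error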
13.4. Eigenvectors for \(2\times 2\)
Finding the exact roots of a cubic or higher-degree polynomial is not an easy matter unless the polynomial is special. Thus most of our hand computations will be with \(2\times 2\) matrices. Suppose \(\lambda\) is known to be an eigenvalue of \(\bfA\). Then \(\bfA-\lambda\meye\) must be singular, so its rows are scalar multiples of one another and row elimination will zero out one of them entirely, leaving at most one row to consider. That allows us to deduce the following.
Let \(\lambda\) be an eigenvalue of \(\bfA\).

- If \(\bfA-\lambda\meye\) is the zero matrix, then all of \(\complex^2\) is the eigenspace of \(\lambda\).
- Otherwise, designate a nonzero row of \(\bfA-\lambda\meye\) by \([\alpha,\ \beta]\). Then the vector \(\twovec{\beta}{-\alpha}\) is a basis of the eigenspace of \(\lambda\).
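To see why the second case works, observe that the proposed basis vector is annihilated by the designated row, and the other row of the singular matrix is a scalar multiple of that row:

\[
\begin{bmatrix} \alpha & \beta \end{bmatrix} \twovec{\beta}{-\alpha} = \alpha\beta - \beta\alpha = 0.
\]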
Example
Find the eigenstuff of

\[
\bfA = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}.
\]
Solution
We start by finding eigenvalues:

\[
\det(\bfA - \lambda\meye) = \det \begin{bmatrix} 1-\lambda & 1 \\ -1 & 1-\lambda \end{bmatrix} = (1-\lambda)^2 + 1 = \lambda^2 - 2\lambda + 2.
\]

The eigenvalues are therefore roots of \(\lambda^2 - 2\lambda + 2\), or

\[
\lambda = \frac{2 \pm \sqrt{4 - 8}}{2} = 1 \pm i.
\]
This is our first case of a real matrix that has complex eigenvalues. We proceed just as before, only now using complex arithmetic.
The eigenspace for \(\lambda_1=1+i\) is the homogeneous solution of

\[
(\bfA - (1+i)\meye)\,\bfv = \begin{bmatrix} -i & 1 \\ -1 & -i \end{bmatrix} \bfv = \bfzero.
\]

To find a basis we just use the first row as explained above, getting \(\twovec{1}{i}\).
Now we get a nice reward for using complex numbers. Since the matrix is real, the other eigenvalue is the conjugate of \(\lambda_1\), and it turns out that the same is true of the eigenspace as well. So \(\twovec{1}{-i}\) is a basis for the second eigenspace.
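MATLAB takes complex eigenvalues in stride. As a check on this example:

A = [ 1 1; -1 1 ];
[V,D] = eig(A)   % the diagonal of D contains 1+1i and 1-1i, and each column
                 % of V is a rescaling of the corresponding basis vector above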