8.4. Krylov subspaces#

The power and inverse iterations have a flaw that seems obvious once it is pointed out. Given a seed vector \(\mathbf{u}\), they produce a sequence of vectors \(\mathbf{u}_1,\mathbf{u}_2,\ldots\) that are scalar multiples of \(\mathbf{u},\mathbf{A}\mathbf{u},\mathbf{A}^{2}\mathbf{u},\ldots\), but only the most recent vector is used to produce an eigenvector estimate.

It stands to reason that we could do no worse, and perhaps much better, if we searched among all linear combinations of the vectors seen in the past. In other words, we seek a solution in the range (column space) of the matrix

(8.4.1)#\[\mathbf{K}_m = \begin{bmatrix} \mathbf{u} & \mathbf{A}\mathbf{u} & \mathbf{A}^{2} \mathbf{u} & \cdots & \mathbf{A}^{m-1} \mathbf{u} \end{bmatrix}.\]
Definition 8.4.1 :  Krylov matrix and subspace

Given \(n\times n\) matrix \(\mathbf{A}\) and \(n\)-vector \(\mathbf{u}\), the \(m\)th Krylov matrix is the \(n\times m\) matrix (8.4.1). The range (i.e., column space) of this matrix is the \(m\)th Krylov subspace \(\mathcal{K}_m\).

In general, we expect that the dimension of the Krylov¹ subspace \(\mathcal{K}_m\), which is the rank of \(\mathbf{K}_m\), equals \(m\), though it may be smaller.
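
To make the definition concrete, here is a minimal sketch in Julia (our own illustration, with an arbitrary matrix and seed vector) that builds \(\mathbf{K}_m\) column by column and checks its rank.

using LinearAlgebra

n,m = 6,4
A = rand(n,n)
u = rand(n)
K = zeros(n,m)
K[:,1] = u
for j in 2:m
    K[:,j] = A*K[:,j-1]     # each new column is A times the previous one
end
@show rank(K);              # typically m, though it can be smaller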

Properties#

As we have seen with the power iteration, part of the appeal of the Krylov matrix is that it can be generated in a way that fully exploits the sparsity of \(\mathbf{A}\), simply through repeated matrix-vector multiplication. Krylov matrices and subspaces also have some important mathematical properties, summarized in the following theorem.

Theorem 8.4.2

Suppose \(\mathbf{A}\) is \(n\times n\), \(0<m<n\), and a vector \(\mathbf{u}\) is used to generate Krylov subspaces. If \(\mathbf{x}\in\mathcal{K}_m\), then the following hold:

  1. \(\mathbf{x} = \mathbf{K}_m \mathbf{z}\) for some \(\mathbf{z}\in\mathbb{C}^m\).

  2. \(\mathbf{x} \in \mathcal{K}_{m+1}\).

  3. \(\mathbf{A}\mathbf{x} \in \mathcal{K}_{m+1}\).

Proof

If \(\mathbf{x}\in\mathcal{K}_m\), then for some coefficients \(c_1,\ldots,c_m\),

\[\mathbf{x} = c_1 \mathbf{u} + c_2 \mathbf{A} \mathbf{u} + \cdots + c_m \mathbf{A}^{m-1} \mathbf{u}.\]

Thus let \(\mathbf{z}= \begin{bmatrix} c_1 & \cdots & c_m \end{bmatrix}^T\). Also \(\mathbf{x}\in\mathcal{K}_{m+1}\), as we can add zero times \(\mathbf{A}^{m}\mathbf{u}\) to the sum. Finally,

\[\mathbf{A}\mathbf{x} = c_1 \mathbf{A} \mathbf{u} + c_2 \mathbf{A}^{2} \mathbf{u} + \cdots + c_m \mathbf{A}^{m} \mathbf{u} \in \mathcal{K}_{m+1}.\]
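
The claims are easy to observe numerically. In this small check (our own illustration with an arbitrary random matrix), appending either \(\mathbf{x}\) or \(\mathbf{A}\mathbf{x}\) to the columns of \(\mathbf{K}_{m+1}\) does not increase the rank, since both vectors already lie in \(\mathcal{K}_{m+1}\).

using LinearAlgebra

n,m = 6,3
A = rand(n,n)
u = rand(n)
Km  = hcat( (A^j*u for j in 0:m-1)... )    # Krylov matrix K_m
Km1 = hcat( (A^j*u for j in 0:m)... )      # Krylov matrix K_{m+1}
x = Km*rand(m)                             # an arbitrary member of K_m
@show rank(Km1), rank([Km1 x]), rank([Km1 A*x]);    # all three equal m+1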

Dimension reduction#

The problems \(\mathbf{A}\mathbf{x}=\mathbf{b}\) and \(\mathbf{A}\mathbf{x}=\lambda\mathbf{x}\) are statements about a very high-dimensional space \(\mathbb{C}^n\). One way to approximate them is to replace the full \(n\)-dimensional space with a much lower-dimensional \(\mathcal{K}_m\) for \(m\ll n\). This is the essence of the Krylov subspace approach.

For instance, we can interpret \(\mathbf{A}\mathbf{x}_m\approx \mathbf{b}\) in the sense of linear least squares. Using Theorem 8.4.2 to write \(\mathbf{x}=\mathbf{K}_m\mathbf{z}\), we arrive at

(8.4.2)#\[\min_{\mathbf{x}\in\mathcal{K}_m} \| \mathbf{A}\mathbf{x}-\mathbf{b} \| = \min_{\mathbf{z}\in\mathbb{C}^m} \| \mathbf{A}(\mathbf{K}_m\mathbf{z})-\mathbf{b} \| = \min_{\mathbf{z}\in\mathbb{C}^m} \| (\mathbf{A}\mathbf{K}_m)\mathbf{z}-\mathbf{b} \|.\]

The natural seed vector for \(\mathcal{K}_m\) in this case is the vector \(\mathbf{b}\). In the next example we try to implement (8.4.2). We do take one precaution: because the vectors \(\mathbf{A}^{k}\mathbf{b}\) may become very large or small in norm, we normalize after each multiplication by \(\mathbf{A}\), just as we did in the power iteration.

Demo 8.4.3

First we define an upper triangular matrix with known eigenvalues and a random vector \(\mathbf{b}\).

using LinearAlgebra, Plots, LaTeXStrings

λ = @. 10 + (1:100)
A = triu(rand(100,100),1) + diagm(λ)
b = rand(100);

Next we build up the first 30 Krylov matrices iteratively, using renormalization after each matrix-vector multiplication.

Km = [b zeros(100,29)]
for m in 1:29      
    v = A*Km[:,m]
    Km[:,m+1] = v/norm(v)
end

Now we solve least-squares problems for Krylov matrices of increasing dimension, recording the residual in each case.

resid = zeros(30)
for m in 1:30  
    z = (A*Km[:,1:m])\b
    x = Km[:,1:m]*z
    resid[m] = norm(b-A*x)
end

The linear system approximations show smooth linear convergence at first, but the convergence stagnates after only a few digits have been found.

plot(0:29,resid,m=:o,
    xaxis=(L"m"),yaxis=(:log10,L"\| b-Ax_m \|"), 
    title="Residual for linear systems",leg=:none)

The Arnoldi iteration#

The breakdown of convergence in Demo 8.4.3 is due to a critical numerical defect in our approach: the columns of the Krylov matrix (8.4.1) increasingly become parallel to the dominant eigenvector, as (8.2.3) predicts, and therefore to one another. As we saw in Section 3.3, near-parallel vectors create the potential for numerical cancellation. This manifests as a large condition number for \(\mathbf{K}_m\) as \(m\) grows, eventually creating excessive error when solving the least-squares system.
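
The growth in the condition number is easy to witness. The following sketch (our own illustration, using an arbitrary random matrix and the same column normalization as in Demo 8.4.3) typically shows \(\kappa\) growing by orders of magnitude with each new column.

using LinearAlgebra

n = 60
A = rand(n,n)
u = rand(n)
K = zeros(n,10)
K[:,1] = u/norm(u)
for m in 1:9
    v = A*K[:,m]
    K[:,m+1] = v/norm(v)
    println("m = $(m+1):  cond = ", cond(K[:,1:m+1]))
end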

The polar opposite of an ill-conditioned basis for \(\mathcal{K}_m\) is an orthonormal one. Suppose we had a thin QR factorization of \(\mathbf{K}_m\):

\[\begin{align*} \mathbf{K}_m = \mathbf{Q}_m \mathbf{R}_m & = \begin{bmatrix} \mathbf{q}_1& \mathbf{q}_2 & \cdots & \mathbf{q}_m \end{bmatrix} \begin{bmatrix} R_{11} & R_{12} & \cdots & R_{1m} \\ 0 & R_{22} & \cdots & R_{2m} \\ \vdots & & \ddots & \\ 0 & 0 & \cdots & R_{mm} \end{bmatrix}. \end{align*}\]

Then the vectors \(\mathbf{q}_1,\ldots,\mathbf{q}_m\) are the orthonormal basis we seek for \(\mathcal{K}_m\). By Theorem 8.4.2, we know that \(\mathbf{A}\mathbf{q}_m \in \mathcal{K}_{m+1}\), and therefore

(8.4.3)#\[\mathbf{A} \mathbf{q}_m = H_{1m} \, \mathbf{q}_1 + H_{2m} \, \mathbf{q}_2 + \cdots + H_{m+1,m}\,\mathbf{q}_{m+1}\]

for some choice of the \(H_{ij}\). Note that by using orthonormality, we have

(8.4.4)#\[\mathbf{q}_i^* (\mathbf{A}\mathbf{q}_m) = H_{im},\qquad i=1,\ldots,m.\]

Since we started by assuming that we know \(\mathbf{q}_1,\ldots,\mathbf{q}_m\), the only unknowns in (8.4.3) are \(H_{m+1,m}\) and \(\mathbf{q}_{m+1}\). But they appear only as a product, and we know that \(\mathbf{q}_{m+1}\) is a unit vector, so they are uniquely defined (up to sign) by the other terms in the equation.

We can now proceed iteratively.

Algorithm 8.4.4 :  Arnoldi iteration

Given matrix \(\mathbf{A}\) and vector \(\mathbf{u}\):

  1. Let \(\mathbf{q}_1= \mathbf{u} \,/\, \| \mathbf{u}\|\).

  2. For \(m=1,2,\ldots\)

    a. Use (8.4.4) to find \(H_{im}\) for \(i=1,\ldots,m\).

    b. Let

    (8.4.5)#\[\mathbf{v} = (\mathbf{A} \mathbf{q}_m) - H_{1m} \,\mathbf{q}_1 - H_{2m}\, \mathbf{q}_2 - \cdots - H_{mm}\, \mathbf{q}_m.\]

    c. Let \(H_{m+1,m}=\|\mathbf{v}\|\).

    d. Let \(\mathbf{q}_{m+1}=\mathbf{v}\,/\,H_{m+1,m}\).

The Arnoldi iteration finds nested orthonormal bases for a family of nested Krylov subspaces.
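
The nesting is illustrated by the following sketch, which relies on the arnoldi function presented below as Function 8.4.7 (the matrix and seed vector here are arbitrary). Running the iteration longer does not alter the basis vectors already produced.

using LinearAlgebra

A = rand(7,7)
u = rand(7)
Q3,_ = arnoldi(A,u,3)          # 4 orthonormal columns spanning K_1,...,K_4
Q5,_ = arnoldi(A,u,5)          # 6 orthonormal columns spanning K_1,...,K_6
@show norm( Q5[:,1:4] - Q3 );  # essentially zero: the bases are nested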

Demo 8.4.5

We illustrate a few steps of the Arnoldi iteration for a small matrix.

using LinearAlgebra

A = rand(1.:9.,6,6)
6×6 Matrix{Float64}:
 1.0  7.0  1.0  9.0  2.0  8.0
 4.0  7.0  1.0  5.0  2.0  1.0
 1.0  7.0  9.0  9.0  5.0  8.0
 7.0  1.0  4.0  9.0  9.0  8.0
 7.0  4.0  3.0  9.0  4.0  1.0
 3.0  5.0  1.0  3.0  2.0  6.0

The seed vector we choose here determines the first member of the orthonormal basis.

u = randn(6)
Q = u/norm(u);

Multiplication by \(\mathbf{A}\) gives us a new vector in \(\mathcal{K}_2\).

Aq = A*Q[:,1];

We subtract off its projection in the previous direction. The remainder is rescaled to give us the next orthonormal column.

v = Aq - dot(Q[:,1],Aq)*Q[:,1]
Q = [Q v/norm(v)];

On the next pass, we have to subtract off the projections in two previous directions.

Aq = A*Q[:,2]
v = Aq - dot(Q[:,1],Aq)*Q[:,1] - dot(Q[:,2],Aq)*Q[:,2]
Q = [Q v/norm(v)];

At every step, \(\mathbf{Q}_m\) is an ONC matrix.

@show opnorm( Q'*Q - I );
opnorm(Q' * Q - I) = 4.2302354693299417e-16

And the columns of \(\mathbf{Q}_m\) span the same space as the columns of the three-dimensional Krylov matrix.

K = [ u A*u A*A*u ];
@show rank( [Q K] );
rank([Q K]) = 3

Key identity#

Up to now we have focused only on finding the orthonormal basis that lies in the columns of \(\mathbf{Q}_m\). But the \(H_{ij}\) values found during the iteration are also important. Writing (8.4.3) with each of \(1,2,\ldots,m\) in place of \(m\), and collecting the results by columns, leads to

(8.4.6)#\[\begin{split} \mathbf{A}\mathbf{Q}_m &= \begin{bmatrix} \mathbf{A}\mathbf{q}_1 & \cdots & \mathbf{A}\mathbf{q}_m \end{bmatrix}\\ & = \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_{m+1} \end{bmatrix}\: \begin{bmatrix} H_{11} & H_{12} & \cdots & H_{1m} \\ H_{21} & H_{22} & \cdots & H_{2m} \\ & H_{32} & \ddots & \vdots \\ & & \ddots & H_{mm} \\ & & & H_{m+1,m} \end{bmatrix} = \mathbf{Q}_{m+1} \mathbf{H}_m, \end{split}\]

where the matrix \(\mathbf{H}_m\) has a particular “triangular plus one” structure.

Definition 8.4.6 :  Upper Hessenberg matrix

A matrix \(\mathbf{H}\) is upper Hessenberg if \(H_{ij}=0\) whenever \(i>j+1\).

Equation (8.4.6) is a fundamental identity of Krylov subspace methods.
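
Identity (8.4.6) is easy to check numerically once the Arnoldi iteration has been implemented. The following sketch uses the arnoldi function given below as Function 8.4.7, with an arbitrary matrix and seed vector of our own choosing.

using LinearAlgebra

A = rand(8,8)
u = rand(8)
m = 5
Q,H = arnoldi(A,u,m)             # Function 8.4.7, defined below
@show norm( A*Q[:,1:m] - Q*H );  # should be at roundoff level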

Implementation#

Function 8.4.7 :  arnoldi

Arnoldi iteration for Krylov subspaces

 1"""
 2    arnoldi(A,u,m)
 3
 4Perform the Arnoldi iteration for `A` starting with vector `u`, out
 5to the Krylov subspace of degree `m`. Returns the orthonormal basis
 6(`m`+1 columns) and the upper Hessenberg `H` of size `m`+1 by `m`.
 7"""
 8function arnoldi(A,u,m)
 9    n = length(u)
10    Q = zeros(n,m+1)
11    H = zeros(m+1,m)
12    Q[:,1] = u/norm(u)
13    for j in 1:m
14        # Find the new direction that extends the Krylov subspace.
15        v = A*Q[:,j]
16        # Remove the projections onto the previous vectors.
17        for i in 1:j
18            H[i,j] = dot(Q[:,i],v)
19            v -= H[i,j]*Q[:,i]
20        end
21        # Normalize and store the new basis vector.
22        H[j+1,j] = norm(v)
23        Q[:,j+1] = v/H[j+1,j]
24    end
25    return Q,H
26end

An implementation of the Arnoldi iteration is given in Function 8.4.7. A careful inspection shows that the loop starting at line 17 does not exactly implement (8.4.4) and (8.4.5). The reason is numerical stability. Though the described and implemented versions are mathematically equivalent in exact arithmetic (see Exercise 6), the approach in Function 8.4.7 is more stable.
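
For comparison, here is a sketch of a variant that transcribes (8.4.4) and (8.4.5) literally, computing every \(H_{ij}\) from the original vector \(\mathbf{A}\mathbf{q}_j\) (classical Gram-Schmidt orthogonalization). The name arnoldi_cgs is ours, not part of the text. In exact arithmetic it returns the same \(\mathbf{Q}\) and \(\mathbf{H}\) as Function 8.4.7, but it is more prone to loss of orthogonality in floating point.

using LinearAlgebra

# Literal transcription of (8.4.4)-(8.4.5), for comparison with Function 8.4.7 only.
function arnoldi_cgs(A,u,m)
    n = length(u)
    Q = zeros(n,m+1)
    H = zeros(m+1,m)
    Q[:,1] = u/norm(u)
    for j in 1:m
        Aq = A*Q[:,j]
        # Every projection coefficient uses the original vector Aq, per (8.4.4).
        for i in 1:j
            H[i,j] = dot(Q[:,i],Aq)
        end
        # Subtract all the projections at once, per (8.4.5).
        v = Aq - Q[:,1:j]*H[1:j,j]
        H[j+1,j] = norm(v)
        Q[:,j+1] = v/H[j+1,j]
    end
    return Q,H
end

On easy problems the two versions give nearly identical output; the stability difference shows up when many nearly parallel directions must be orthogonalized.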

In the next section we revisit the idea of approximately solving \(\mathbf{A}\mathbf{x}=\mathbf{b}\) over a Krylov subspace \(\mathcal{K}_m\), using the ONC matrix \(\mathbf{Q}_m\) in place of \(\mathbf{K}_m\). A related idea, explored in Exercise 7, approximates the eigenvalue problem for \(\mathbf{A}\); it is the approach that underlies eigs for sparse matrices.

Exercises#

  1. ✍ Let \(\mathbf{A}=\displaystyle \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{bmatrix}.\)

    (a) Find the Krylov matrix \(\mathbf{K}_3\) for the seed vector \(\mathbf{u}=\mathbf{e}_1\).

    (b) Find \(\mathbf{K}_3\) for the seed vector \(\mathbf{u}=\begin{bmatrix}1; \: 1;\: 1; \: 1\end{bmatrix}.\)

  2. ⌨ For each matrix, make a table of the 2-norm condition numbers \(\kappa(\mathbf{K}_m)\) for \(m=1,\ldots,10\). Use a vector of all ones as the Krylov seed.

    (a) Matrix from Demo 8.4.3

    (b) \(\begin{bmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{bmatrix} \: (100\times 100)\)

    (c) \(\begin{bmatrix} -2 & 1 & & & 1 \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ 1 & & & 1 & -2 \end{bmatrix} \:(200\times 200) \)

  3. ✍ Show that if \(\mathbf{x}\in\mathcal{K}_m\), then \(\mathbf{x}=p(\mathbf{A})\mathbf{u}\) for a polynomial \(p\) of degree at most \(m-1\). (See (7.2.6) for applying a polynomial to a matrix.)

  4. ✍ Compute the asymptotic flop requirements for Function 8.4.7. Assume that due to sparsity, a matrix-vector multiplication \(\mathbf{A}\mathbf{u}\) requires only \(c n\) flops for a constant \(c\), rather than the usual \(O(n^2)\).

  5. ⌨ When Arnoldi iteration is performed on the Krylov subspace generated using the matrix \(\mathbf{A}=\displaystyle \begin{bmatrix} 2& 1& 1& 0\\ 1 &3 &1& 0\\ 0& 1& 3& 1\\ 0& 1& 1& 2 \end{bmatrix}\), the results can depend strongly on the initial vector \(\mathbf{u}\).

    (a) Apply Function 8.4.7 and output Q and H when using the following seed vectors.

    (i) u=[1,0,0,0] \(\qquad\) (ii) u=[1,1,1,1] \(\qquad\) (iii) u=rand(4)

    (b) Can you explain why case (ii) in part (a) cannot finish successfully? (Hint: What line(s) of the function can possibly return NaN when applied to finite values?)

  6. ✍ As mentioned in the text, Function 8.4.7 does not compute \(H_{ij}\) as defined by (8.4.4), but rather

    \[ S_{ij} = \mathbf{q}_i^* ( \mathbf{A}\mathbf{q}_j - S_{1j}\,\mathbf{q}_1 - \cdots - S_{i-1,j}\,\mathbf{q}_{i-1} ) \]

    for \(i=1,\ldots,j\). Show that \(S_{ij}=H_{ij}\). (Hence the function is mathematically equivalent to our Arnoldi formulas.)

  7. One way to approximate the eigenvalue problem \(\mathbf{A}\mathbf{x}=\lambda\mathbf{x}\) is to restrict \(\mathbf{x}\) to the low-dimensional subspace \(\mathcal{K}_m\).

    (a) ✍ Show starting from (8.4.6) that

    \[ \mathbf{Q}_m^* \mathbf{A} \mathbf{Q}_m = \tilde{\mathbf{H}}_m, \]

    where \(\tilde{\mathbf{H}}_m\) is the upper Hessenberg matrix resulting from deleting the last row of \(\mathbf{H}_m\). What is the size of this matrix?

    (b) ✍ Show the reasoning above leads to the approximate eigenvalue problem \(\tilde{\mathbf{H}}_m\mathbf{z} \approx \lambda\mathbf{z}\). (Hint: Start with \(\mathbf{A}\mathbf{x} \approx \lambda\mathbf{x}\), and let \(\mathbf{x}=\mathbf{Q}_m\mathbf{z}\) before applying part (a).)

    (c) ⌨ Apply Function 8.4.7 to the matrix of Demo 8.4.3 using a random seed vector. Compute eigenvalues of \(\tilde{\mathbf{H}}_m\) for \(m=1,\ldots,40\), keeping track in each case of the error between the largest of those values (in magnitude) and the largest eigenvalue of \(\mathbf{A}\). Make a log-linear graph of the error as a function of \(m\).


¹ The proper pronunciation of “Krylov” is something like “kree-luv,” but American English speakers often say “kreye-lahv.”