3.3. The QR factorization#

Sets of vectors satisfying the property defined next, orthogonality, are useful both theoretically and computationally.

Definition 3.3.1 :  Orthogonal vectors

Two vectors \(\mathbf{u}\) and \(\mathbf{v}\) in \(\mathbb{R}^n\) are orthogonal if \(\mathbf{u}^T\mathbf{v}=0\). We say that a collection of vectors \(\mathbf{q}_1,\ldots,\mathbf{q}_k\) is orthogonal if

(3.3.1)#\[i \neq j \quad \Rightarrow \quad \mathbf{q}_i^T\mathbf{q}_j = 0.\]

If (3.3.1) applies and also \(\mathbf{q}_i^T\mathbf{q}_i=1\) for all \(i=1,\ldots,k\), we say the vectors are orthonormal.

In two and three dimensions, orthogonality is the same as perpendicularity.

Orthogonal vectors simplify inner products. For example, if \(\mathbf{q}_1\) and \(\mathbf{q}_2\) are orthogonal, then

(3.3.2)#\[\| \mathbf{q}_1 - \mathbf{q}_2 \|_2^2 = (\mathbf{q}_1-\mathbf{q}_2)^T(\mathbf{q}_1-\mathbf{q}_2) = \mathbf{q}_1^T\mathbf{q}_1 - 2 \mathbf{q}_1^T\mathbf{q}_2 + \mathbf{q}_2^T\mathbf{q}_2 = \|\mathbf{q}_1\|_2^2 + \|\mathbf{q}_2\|_2^2.\]

As in the rest of this chapter, we will be using the 2-norm exclusively.

Equation (3.3.2) is the key to the computational attractiveness of orthogonality. Fig. 3.3.1 shows how nonorthogonal vectors can allow a multidimensional version of subtractive cancellation, in which \(\|\mathbf{x}-\mathbf{y}\|\) is much smaller than \(\|\mathbf{x}\|\) and \(\|\mathbf{y}\|\). As the figure illustrates, orthogonal vectors do not allow this phenomenon. By (3.3.2), the magnitude of the difference or sum of two orthogonal vectors is never smaller than the magnitude of either of the original vectors.


Fig. 3.3.1 Nonorthogonal vectors can cause cancellation when subtracted, but orthogonal vectors never do.#

Observation 3.3.2

Addition and subtraction of vectors are guaranteed to be well conditioned when the vectors are orthogonal.
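As a small numerical illustration of Observation 3.3.2 (a sketch of our own using made-up vectors, not one of this chapter's demos), compare the difference of two nearly parallel vectors with the difference of two orthogonal ones:

using LinearAlgebra
x = [1.0, 1.0];  y = [1.0, 1.0 + 1e-12]    # nearly parallel
@show norm(x - y) / norm(x);               # tiny ratio: severe cancellation
u = [1.0, 0.0];  v = [0.0, 1.0]            # orthogonal
@show norm(u - v) / norm(u);               # ratio is √2, as (3.3.2) predicts

The first ratio is many orders of magnitude below 1, the hallmark of cancellation, while the second equals \(\sqrt{2}\).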

Orthogonal and ONC matrices#

Statements about orthogonal vectors are often made more easily in matrix form. Let \(\mathbf{Q}\) be an \(n\times k\) matrix whose columns \(\mathbf{q}_1, \ldots, \mathbf{q}_k\) are orthogonal vectors. The orthogonality conditions (3.3.1) become simply that \(\mathbf{Q}^T\mathbf{Q}\) is a diagonal matrix, since

\[\begin{split}\mathbf{Q}^T \mathbf{Q} = \begin{bmatrix} \mathbf{q}_1^T \\[1mm] \mathbf{q}_2^T \\ \vdots \\ \mathbf{q}_k^T \end{bmatrix} \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_k \end{bmatrix} = \begin{bmatrix} \mathbf{q}_1^T\mathbf{q}_1 & \mathbf{q}_1^T\mathbf{q}_2 & \cdots & \mathbf{q}_1^T\mathbf{q}_k \\[1mm] \mathbf{q}_2^T\mathbf{q}_1 & \mathbf{q}_2^T\mathbf{q}_2 & \cdots & \mathbf{q}_2^T\mathbf{q}_k \\ \vdots & \vdots & & \vdots \\ \mathbf{q}_k^T\mathbf{q}_1 & \mathbf{q}_k^T\mathbf{q}_2 & \cdots & \mathbf{q}_k^T\mathbf{q}_k \end{bmatrix}.\end{split}\]

If the columns of \(\mathbf{Q}\) are orthonormal, then \(\mathbf{Q}^T\mathbf{Q}\) is the \(k\times k\) identity matrix. This is such an important property that we will break with common practice here and give this type of matrix a name.

Definition 3.3.3 :  ONC matrix

An ONC matrix is one whose columns are an orthonormal set of vectors.

Theorem 3.3.4 :  ONC matrix

Suppose \(\mathbf{Q}\) is a real \(n\times k\) ONC matrix (matrix with orthonormal columns). Then:

  1. \(\mathbf{Q}^T\mathbf{Q} = \mathbf{I}\) (\(k\times k\) identity).

  2. \(\| \mathbf{Q}\mathbf{x} \|_2 = \| \mathbf{x} \|_2\) for all \(k\)-vectors \(\mathbf{x}\).

  3. \(\| \mathbf{Q} \|_2=1\).

Proof

The first part is derived above. The second part follows a pattern that has become well established by now:

\[\| \mathbf{Q}\mathbf{x} \|_2^2 = (\mathbf{Q}\mathbf{x})^T(\mathbf{Q}\mathbf{x}) = \mathbf{x}^T \mathbf{Q}^T \mathbf{Q} \mathbf{x} = \mathbf{x}^T \mathbf{I} \mathbf{x} = \| \mathbf{x} \|_2^2.\]

The last part of the theorem is left to the exercises.
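The norm-preserving property is easy to confirm numerically. The following sketch is our own; like Demo 3.3.9 below, it obtains a matrix with orthonormal columns by converting the Q factor of a QR factorization to a standard matrix.

using LinearAlgebra
A = randn(6, 3)
Q = Matrix(qr(A).Q)          # 6×3 matrix with orthonormal columns
x = randn(3)
@show norm(Q*x) - norm(x);   # zero, up to roundoff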

Of particular interest is a square ONC matrix.1

Definition 3.3.5

An orthogonal matrix is a square matrix with orthonormal columns.

Orthogonal matrices have properties beyond Theorem 3.3.4.

Theorem 3.3.6 :  Orthogonal matrix

Suppose \(\mathbf{Q}\) is an \(n\times n\) real orthogonal matrix. Then:

  1. \(\mathbf{Q}^T = \mathbf{Q}^{-1}\).

  2. \(\mathbf{Q}^T\) is also an orthogonal matrix.

  3. \(\kappa(\mathbf{Q})=1\) in the 2-norm.

  4. For any other \(n\times n\) matrix \(\mathbf{A}\), \(\| \mathbf{A}\mathbf{Q} \|_2=\| \mathbf{A} \|_2\).

  5. If \(\mathbf{U}\) is another \(n\times n\) orthogonal matrix, then \(\mathbf{Q}\mathbf{U}\) is also orthogonal.

Proof

Since \(\mathbf{Q}\) is an ONC matrix, \(\mathbf{Q}^T\mathbf{Q}=\mathbf{I}\). All three matrices are \(n\times n\), so \(\mathbf{Q}^{-1}=\mathbf{Q}^T\). The proofs of the other statements are left to the exercises.
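The conclusions of Theorem 3.3.6 are also easy to observe numerically. Here is a short sketch of our own, using an orthogonal matrix extracted from the QR factorization of a random square matrix:

using LinearAlgebra
Q = Matrix(qr(randn(5, 5)).Q)     # 5×5 orthogonal matrix
A = randn(5, 5)
@show opnorm(Q' - inv(Q));        # Q' equals the inverse of Q, up to roundoff
@show cond(Q);                    # 2-norm condition number equals 1
@show opnorm(A*Q) - opnorm(A);    # multiplying by Q preserves the 2-norm of A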

Orthogonal factorization#

Now we come to another important way to factor a matrix: the QR factorization. As we will show below, the QR factorization plays a role in linear least squares analogous to the role of LU factorization in linear systems.

Theorem 3.3.7 :  QR factorization

Every real \(m\times n\) matrix \(\mathbf{A}\) (\(m\ge n\)) can be written as \(\mathbf{A}=\mathbf{Q}\mathbf{R}\), where \(\mathbf{Q}\) is an \(m\times m\) orthogonal matrix and \(\mathbf{R}\) is an \(m\times n\) upper triangular matrix.

In most introductory books on linear algebra, the QR factorization is derived through a process known as Gram–Schmidt orthogonalization. However, while it is an important tool for theoretical work, the Gram–Schmidt process is numerically unstable. We will consider an alternative construction in Section 3.4.
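For concreteness, here is a minimal sketch of classical Gram–Schmidt orthogonalization. The function name clgs and the code are ours, for illustration only; in exact arithmetic it produces a factorization \(\mathbf{A}=\mathbf{Q}\mathbf{R}\) in which \(\mathbf{Q}\) has orthonormal columns and \(\mathbf{R}\) is upper triangular, provided \(\mathbf{A}\) has full column rank, but, as noted above, it should not be relied on in floating point.

using LinearAlgebra
# Classical Gram–Schmidt applied to the columns of A.
function clgs(A)
    m, n = size(A)
    Q = zeros(m, n);  R = zeros(n, n)
    for j in 1:n
        v = A[:, j]
        for i in 1:j-1
            R[i, j] = dot(Q[:, i], A[:, j])   # component along column i of Q
            v -= R[i, j] * Q[:, i]            # remove that component
        end
        R[j, j] = norm(v)
        Q[:, j] = v / R[j, j]                 # normalize what remains
    end
    return Q, R
end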

When \(m\) is much larger than \(n\), which is often the case, there is a compressed form of the factorization that is more efficient. In the product

\[\begin{split}\mathbf{A} = \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_m \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ 0 & r_{22} & \cdots & r_{2n} \\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & r_{nn} \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix},\end{split}\]

the vectors \(\mathbf{q}_{n+1},\ldots,\mathbf{q}_m\) always get multiplied by zero. Nothing about \(\mathbf{A}\) is lost if we delete them and reduce the factorization to the equivalent form

(3.3.3)#\[\begin{split}\mathbf{A} = \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_n \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ 0 & r_{22} & \cdots & r_{2n} \\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & r_{nn} \end{bmatrix} = \hat{\mathbf{Q}} \hat{\mathbf{R}}.\end{split}\]
Definition 3.3.8 :  Thin QR factorization

The thin QR factorization is \(\mathbf{A} = \hat{\mathbf{Q}} \hat{\mathbf{R}}\), where \(\hat{\mathbf{Q}}\) is \(m\times n\) and ONC, and \(\hat{\mathbf{R}}\) is \(n\times n\) and upper triangular.

Demo 3.3.9

Julia provides access to both the thin and full forms of the QR factorization.

using LinearAlgebra
A = rand(1.:9.,6,4)
@show m,n = size(A);
(m, n) = size(A) = (6, 4)

Here is a standard call:

Q,R = qr(A);
Q
6×6 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}
R
4×4 Matrix{Float64}:
 -13.3791  -9.94089  -12.4074   -10.1651
   0.0     -9.06525   -3.27175   -2.31099
   0.0      0.0       -7.09588   -2.01752
   0.0      0.0        0.0       -3.90632

If you look carefully, you see that we seemingly got a full \(\mathbf{Q}\) but a thin \(\mathbf{R}\). However, the \(\mathbf{Q}\) above is not a standard matrix type. If you convert it to a true matrix, then it reverts to the thin form.

To enter the accented character Q̂, type Q\hat followed by Tab.

Q̂ = Matrix(Q)
6×4 Matrix{Float64}:
 -0.523205   0.463431   -0.426249    0.53949
 -0.373718  -0.582986    0.217627    0.437008
 -0.149487  -0.608253   -0.162799    0.0649388
 -0.298974  -0.113393   -0.552364   -0.405609
 -0.523205   0.242808    0.661963   -0.148022
 -0.448461  -0.0597777  -0.0338473  -0.572127

We can test that \(\mathbf{Q}\) is an orthogonal matrix:

opnorm(Q'*Q - I)
6.131630276306551e-16

The thin \(\hat{\mathbf{Q}}\) cannot be an orthogonal matrix, because it is not square, but it is still ONC:

Q̂'*Q̂ - I
4×4 Matrix{Float64}:
 -1.11022e-16   8.27629e-17  -9.64813e-17  -2.85109e-17
  8.27629e-17  -2.22045e-16   1.02104e-16  -3.04724e-17
 -9.64813e-17   1.02104e-16  -2.22045e-16  -1.70738e-16
 -2.85109e-17  -3.04724e-17  -1.70738e-16  -1.11022e-16

Least squares and QR#

If we substitute the thin factorization (3.3.3) into the normal equations (3.2.1), we can simplify expressions a great deal.

\[\begin{split}\begin{split} \mathbf{A}^T\mathbf{A} \mathbf{x} &= \mathbf{A}^T \mathbf{b}, \\ \hat{\mathbf{R}}^T \hat{\mathbf{Q}}^T \hat{\mathbf{Q}} \hat{\mathbf{R}} \mathbf{x} &= \hat{\mathbf{R}}^T \hat{\mathbf{Q}}^T \mathbf{b}, \\ \hat{\mathbf{R}}^T \hat{\mathbf{R}} \mathbf{x}& = \hat{\mathbf{R}}^T \hat{\mathbf{Q}}^T \mathbf{b}. \end{split}\end{split}\]

In order to have the normal equations be well posed, we require that \(\mathbf{A}\) is not rank-deficient (as proved in Theorem 3.2.4). This is enough to guarantee that \(\hat{\mathbf{R}}\) is nonsingular (see Exercise 4). Therefore, its transpose is nonsingular as well, and we arrive at

(3.3.4)#\[\hat{\mathbf{R}} \mathbf{x}=\hat{\mathbf{Q}}^T \mathbf{b}.\]
Algorithm 3.3.10 :  Solution of linear least squares by thin QR
  1. Compute the thin QR factorization \(\hat{\mathbf{Q}}\hat{\mathbf{R}}=\mathbf{A}\).

  2. Compute \(\mathbf{z} = \hat{\mathbf{Q}}^T\mathbf{b}\).

  3. Solve the \(n\times n\) linear system \(\hat{\mathbf{R}}\mathbf{x} = \mathbf{z}\) for \(\mathbf{x}\).

This algorithm is implemented in Function 3.3.11. It is essentially the algorithm used internally by Julia when A\b is called. The execution time is dominated by the factorization, the most common method for which is described in Section 3.4.

Function 3.3.11 :  lsqrfact

Linear least-squares solution by QR factorization

 1"""
 2    lsqrfact(A,b)
 3
 4Solve a linear least-squares problem by QR factorization. Returns
 5the minimizer of ||b-Ax||.
 6"""
 7function lsqrfact(A,b)
 8    Q,R = qr(A)
 9    c = Q'*b
10    x = backsub(R,c)
11    return x
12end

The solution of least-squares problems via QR factorization is more stable than when the normal equations are formulated and solved directly.

Demo 3.3.12

We’ll repeat the experiment of Demo 3.2.10, which exposed instability in the normal equations.

t = range(0,3,length=400)
f = [ x->sin(x)^2, x->cos((1+1e-7)*x)^2, x->1. ]
A = [ f(t) for t in t, f in f ]
x = [1.,2,1]
b = A*x;

The error in the solution by Function 3.3.11 is comparable to the size predicted by the condition number.

observed_error = norm(FNC.lsqrfact(A,b)-x)/norm(x);
@show observed_error;
κ = cond(A)
@show error_bound = κ*eps();
observed_error = 4.665273501889628e-9
error_bound = κ * eps() = 4.053030228488391e-9

Exercises#

  1. ✍ Prove part 3 of Theorem 3.3.4.

  2. ✍ Prove Theorem 3.3.6. For the third part, use the definition of the 2-norm as an induced matrix norm, then apply some of our other results as needed.

  3. ⌨ Let \(t_0,\ldots,t_m\) be \(m+1\) equally spaced points in \([-1,1]\). Let \(\mathbf{A}\) be the matrix in (3.1.2) for \(m=400\) and fitting by polynomials of degree less than 5. Find the thin QR factorization of \(\mathbf{A}\), and, on a single graph, plot every column of \(\hat{\mathbf{Q}}\) as a function of the vector \(t\).

  4. ✍ Prove that if the \(m\times n\) (\(m\ge n\)) matrix \(\mathbf{A}\) is not rank-deficient, then the factor \(\hat{\mathbf{R}}\) of the thin QR factorization is invertible. (Hint: Suppose on the contrary that \(\hat{\mathbf{R}}\) is singular. Show using the factored form of \(\mathbf{A}\) that this would imply that \(\mathbf{A}\) is rank-deficient.)

  5. ✍ Let \(\mathbf{A}\) be \(m\times n\) with \(m>n\). Show that if \(\mathbf{A}=\mathbf{Q}\mathbf{R}\) is a QR factorization and \(\mathbf{R}\) has rank \(n\), then \(\mathbf{A}^+=\mathbf{R}^+\mathbf{Q}^T\).

  6. ✍ Let \(\mathbf{A}\) be \(m\times n\) with \(m>n\). Show that if \(\mathbf{A}=\hat{\mathbf{Q}}\hat{\mathbf{R}}\) is a thin QR factorization and \(\hat{\mathbf{R}}\) is invertible, then \(\mathbf{A}^+=\hat{\mathbf{R}}^{-1}\hat{\mathbf{Q}}^T\).

  7. ⌨ Repeat Exercise 3.1.2, but use thin QR factorization rather than the backslash to solve the least-squares problem.

  8. ✍ The matrix \(\mathbf{P}=\hat{\mathbf{Q}} \hat{\mathbf{Q}}^T\) derived from the thin QR factorization has some interesting and important properties.

    (a) Prove that \(\mathbf{P}=\mathbf{A}\mathbf{A}^+\).

    (b) Prove that \(\mathbf{P}^2=\mathbf{P}\). (This property defines a projection matrix.)

    (c) Any vector \(\mathbf{x}\) may be written trivially as \(\mathbf{x}=\mathbf{u}+\mathbf{v}\), where \(\mathbf{u}=\mathbf{P}\mathbf{x}\) and \(\mathbf{v}=(\mathbf{I}-\mathbf{P})\mathbf{x}\). Prove that \(\mathbf{u}\) and \(\mathbf{v}\) are orthogonal. (Together with part (b), this means that \(\mathbf{P}\) is an orthogonal projector.)


1

Confusingly, a square matrix whose columns are orthogonal is not necessarily an orthogonal matrix; the columns must be orthonormal, which is a stricter condition.