7.3. Singular value decomposition
We now introduce another factorization that is as fundamental as the EVD.
The singular value decomposition of an \(m\times n\) matrix \(\mathbf{A}\) is

\[
\mathbf{A} = \mathbf{U} \mathbf{S} \mathbf{V}^*, \tag{7.3.1}
\]

where \(\mathbf{U}\in\mathbb{C}^{m\times m}\) and \(\mathbf{V}\in\mathbb{C}^{n\times n}\) are unitary and \(\mathbf{S}\in\mathbb{R}^{m\times n}\) is real and diagonal with nonnegative elements.

The columns of \(\mathbf{U}\) and \(\mathbf{V}\) are called left and right singular vectors, respectively. The diagonal elements of \(\mathbf{S}\), written \(\sigma_1,\ldots,\sigma_r\), for \(r=\min\{m,n\}\), are called the singular values of \(\mathbf{A}\) and are ordered so that

\[
\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r \ge 0. \tag{7.3.2}
\]
We call \(\sigma_1\) the principal singular value and \(\mathbf{u}_{1}\) and \(\mathbf{v}_{1}\) the principal singular vectors.
Every \(m\times n\) matrix has an SVD. The singular values of a matrix are unique, but the singular vectors are not. If the matrix is real, then \(\mathbf{U}\) and \(\mathbf{V}\) in (7.3.1) can be chosen to be real, orthogonal matrices.
The nonuniqueness is easy: for instance, we can replace \(\mathbf{U}\) and \(\mathbf{V}\) by their negatives without affecting (7.3.1). Proof of the other statements usually relies on induction on the size of \(\mathbf{A}\) and can be found in advanced linear algebra texts.
It is easy to check that

\[
\begin{bmatrix} 3 \\ 4 \end{bmatrix}
= \begin{bmatrix} 3/5 & -4/5 \\ 4/5 & 3/5 \end{bmatrix}
\begin{bmatrix} 5 \\ 0 \end{bmatrix}
\begin{bmatrix} 1 \end{bmatrix}
\]

meets all the requirements of an SVD. Interpreted as a matrix, the vector \([3,4]\) has the lone singular value 5.
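This claim is easy to verify numerically; in the line below, the `reshape` merely turns the vector into a \(2\times 1\) matrix.

using LinearAlgebra
svdvals(reshape([3.0, 4.0], 2, 1))    # 1-element vector, ≈ [5.0]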
Suppose \(\mathbf{A}\) is a real matrix and that \(\mathbf{A}=\mathbf{U}\mathbf{S}\mathbf{V}^T\) is an SVD. Then \(\mathbf{A}^T=\mathbf{V}\mathbf{S}^T\mathbf{U}^T\) meets all the requirements of an SVD for \(\mathbf{A}^T\): the first and last matrices are orthogonal, and the middle matrix is diagonal with nonnegative elements. Hence \(\mathbf{A}\) and \(\mathbf{A}^T\) have the same singular values.
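For an arbitrary example matrix (not one from the text), we can confirm this agreement numerically:

using LinearAlgebra
A = randn(6, 4)               # arbitrary real test matrix
svdvals(A) ≈ svdvals(A')      # true: A and its transpose share singular values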
Connections to the EVD
The nonzero eigenvalues of \(\mathbf{A}^*\mathbf{A}\) are the squares of the singular values of \(\mathbf{A}\).
Let \(\mathbf{A}=\mathbf{U}\mathbf{S}\mathbf{V}^*\) be \(m\times n\), and compute the square hermitian matrix \(\mathbf{B}=\mathbf{A}^*\mathbf{A}\) (recall that \(\mathbf{S}\) is real, so \(\mathbf{S}^*=\mathbf{S}^T\)):

\[
\mathbf{B} = (\mathbf{V}\mathbf{S}^T\mathbf{U}^*)(\mathbf{U}\mathbf{S}\mathbf{V}^*) = \mathbf{V}(\mathbf{S}^T\mathbf{S})\mathbf{V}^*.
\]

Note that \(\mathbf{S}^T\mathbf{S}\) is a diagonal \(n \times n\) matrix. There are two cases to consider. If \(m \ge n\), then

\[
\mathbf{S}^T\mathbf{S} = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2).
\]

On the other hand, if \(m<n\), then

\[
\mathbf{S}^T\mathbf{S} = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_m^2, 0, \ldots, 0).
\]

In either case, \(\mathbf{V}(\mathbf{S}^T\mathbf{S})\mathbf{V}^*\) is an EVD of \(\mathbf{B}\), so the nonzero eigenvalues of \(\mathbf{B}\) are the squares of the nonzero singular values of \(\mathbf{A}\).
Except for some unimportant technicalities, the eigenvectors of \(\mathbf{A}^*\mathbf{A}\), when appropriately ordered and normalized, are right singular vectors of \(\mathbf{A}\). The left singular vectors could then be deduced from the identity \(\mathbf{A}\mathbf{V} = \mathbf{U}\mathbf{S}\).
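The following short computation illustrates the relationship, using an arbitrary test matrix:

using LinearAlgebra
A = randn(5, 3)
λ = reverse(eigvals(Symmetric(A' * A)))   # eigenvalues of A*A, largest first
σ = svdvals(A)                            # singular values, largest first
λ ≈ σ .^ 2                                # true, up to roundoff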
Another close connection between EVD and SVD comes via the \((m+n)\times (m+n)\) matrix

\[
\mathbf{C} =
\begin{bmatrix}
\boldsymbol{0} & \mathbf{A}^* \\
\mathbf{A} & \boldsymbol{0}
\end{bmatrix}. \tag{7.3.3}
\]

If \(\sigma\) is a singular value of \(\mathbf{A}\), then \(\sigma\) and \(-\sigma\) are eigenvalues of \(\mathbf{C}\), and the associated eigenvector immediately reveals a left and a right singular vector (see Exercise 11). This connection is implicitly exploited by software to compute the SVD.
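A small experiment, again with an arbitrary test matrix, bears this out: the eigenvalues of \(\mathbf{C}\) are the singular values of \(\mathbf{A}\) and their negatives, plus zeros when \(m\neq n\).

using LinearAlgebra
A = randn(4, 3)
m, n = size(A)
C = [zeros(n, n) A'; A zeros(m, m)]   # the matrix in (7.3.3)
λ = eigvals(Symmetric(C))             # ascending order
σ = svdvals(A)                        # descending order
# Largest n eigenvalues of C are the singular values; smallest n are their negatives.
@show reverse(λ[end-n+1:end]) ≈ σ;
@show λ[1:n] ≈ -σ;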
Interpreting the SVD
Another way to write \(\mathbf{A}=\mathbf{U}\mathbf{S}\mathbf{V}^*\) is

\[
\mathbf{A}\mathbf{V} = \mathbf{U}\mathbf{S}.
\]

Taken columnwise, this equation means

\[
\mathbf{A} \mathbf{v}_k = \sigma_k \mathbf{u}_k, \qquad k=1,\ldots,r=\min\{m,n\}.
\]

In words, each right singular vector is mapped by \(\mathbf{A}\) to a scaled version of its corresponding left singular vector; the scaling factor is the corresponding singular value.
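This mapping property is easy to check numerically for an arbitrary test matrix:

using LinearAlgebra
A = randn(5, 3)
U, σ, V = svd(A)
# Each right singular vector maps to σₖ times the matching left singular vector.
maximum(norm(A*V[:, k] - σ[k]*U[:, k]) for k in eachindex(σ))   # essentially zero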
Both the SVD and the EVD describe a matrix in terms of some special vectors and a small number of scalars. Table 7.3.1 summarizes the key differences. The SVD sacrifices having the same basis in both source and image spaces—after all, they may not even have the same dimension—but as a result gains orthogonality in both spaces.
Table 7.3.1: Comparison of the EVD and the SVD

| EVD | SVD |
| --- | --- |
| exists for most square matrices | exists for all rectangular and square matrices |
| \(\mathbf{A}\mathbf{x}_k = \lambda_k \mathbf{x}_k\) | \(\mathbf{A} \mathbf{v}_k = \sigma_k \mathbf{u}_k\) |
| same basis for domain and range of \(\mathbf{A}\) | two orthonormal bases |
| may have poor conditioning | perfectly conditioned |
Thin form
In Section 3.3 we saw that a matrix has both full and thin forms of the QR factorization. A similar situation holds with the SVD.
Suppose \(\mathbf{A}\) is \(m\times n\) with \(m > n\) and let \(\mathbf{A}=\mathbf{U}\mathbf{S}\mathbf{V}^*\) be an SVD. Because \(\mathbf{S}\) is diagonal, its last \(m-n\) rows are all zero. Hence

\[
\mathbf{U}\mathbf{S}
= \mathbf{U} \begin{bmatrix} \hat{\mathbf{S}} \\ \boldsymbol{0} \end{bmatrix}
= \hat{\mathbf{U}} \hat{\mathbf{S}},
\]

in which \(\hat{\mathbf{U}}\) is the \(m\times n\) matrix formed from the first \(n\) columns of \(\mathbf{U}\) and \(\hat{\mathbf{S}}\) is the upper \(n\times n\) block of \(\mathbf{S}\). This allows us to define the thin SVD

\[
\mathbf{A} = \hat{\mathbf{U}} \hat{\mathbf{S}} \mathbf{V}^*,
\]

in which \(\hat{\mathbf{S}}\) is square and diagonal and \(\hat{\mathbf{U}}\) is ONC (has orthonormal columns) but not square.
Given the full SVD of Example 7.3.3, the corresponding thin SVD is

\[
\begin{bmatrix} 3 \\ 4 \end{bmatrix}
= \begin{bmatrix} 3/5 \\ 4/5 \end{bmatrix}
\begin{bmatrix} 5 \end{bmatrix}
\begin{bmatrix} 1 \end{bmatrix}.
\]
The thin form retains all the information about \(\mathbf{A}\) from the SVD; the factorization is still an equality, not an approximation. It is computationally preferable when \(m \gg n\), since it requires far less storage than a full SVD. For a matrix with more columns than rows, one can derive a thin form by taking the adjoint of the thin SVD of \(\mathbf{A}^*\).
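In Julia, the difference shows up in the sizes of the factors returned by `svd`; the `full` keyword selects between the two forms. Here is a small illustration with an arbitrary tall matrix.

using LinearAlgebra
A = randn(6, 2)
Fthin = svd(A)                  # thin form (the default): U is 6×2
Ffull = svd(A, full=true)       # full form: U is 6×6
@show size(Fthin.U), size(Ffull.U);
@show Fthin.U * Diagonal(Fthin.S) * Fthin.V' ≈ A;   # still an exact factorization, up to roundoff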
SVD and the 2-norm
The SVD is intimately connected to the 2-norm, as the following theorem describes.
Let \(\mathbf{A}\in\mathbb{C}^{m\times n}\) have an SVD \(\mathbf{A}=\mathbf{U}\mathbf{S}\mathbf{V}^*\) in which (7.3.2) holds. Then:
1. The 2-norm satisfies

   \[
   \| \mathbf{A} \|_2 = \sigma_1. \tag{7.3.6}
   \]

2. The rank of \(\mathbf{A}\) is the number of nonzero singular values.

3. Let \(r=\min\{m,n\}\). Then

   \[
   \kappa_2(\mathbf{A}) = \|\mathbf{A}\|_2\,\|\mathbf{A}^{+}\|_2 = \frac{\sigma_1}{\sigma_r}, \tag{7.3.7}
   \]

   where a division by zero implies that \(\mathbf{A}\) does not have full rank.
The conclusion (7.3.6) can be proved by vector calculus. In the square case \(m=n\), \(\mathbf{A}\) having full rank is equivalent to being invertible. The SVD is the usual means for computing the 2-norm and condition number of a matrix.
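The rank statement can also be checked numerically. In the sketch below, the matrix and tolerance are chosen purely for illustration; the built-in `rank` function counts singular values in essentially the same way.

using LinearAlgebra
B = [1.0 2 3; 2 4 6; 1 1 1]              # rank-deficient: row 2 is twice row 1
σ = svdvals(B)
tol = maximum(size(B)) * eps() * σ[1]    # threshold for "numerically zero"
@show count(>(tol), σ);                  # number of nonzero singular values: 2
@show rank(B);                           # also 2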
We verify some of the fundamental SVD properties using standard Julia functions from `LinearAlgebra`.
using LinearAlgebra
A = [i^j for i=1:5, j=0:3]
5×4 Matrix{Int64}:
1 1 1 1
1 2 4 8
1 3 9 27
1 4 16 64
1 5 25 125
To get only the singular values, use `svdvals`.
σ = svdvals(A)
4-element Vector{Float64}:
146.69715365883005
5.738569780953698
0.9998486640841032
0.1192808268524209
Here is verification of the connections between the singular values, norm, and condition number.
@show opnorm(A,2);
@show σ[1];
opnorm(A, 2) = 146.69715365883005
σ[1] = 146.69715365883005
@show cond(A,2);
@show σ[1]/σ[end];
cond(A, 2) = 1229.8468876337497
σ[1] / σ[end] = 1229.8468876337497
To get singular vectors as well, use `svd`. The thin form of the factorization is the default.
U,σ,V = svd(A);
@show size(U);
@show size(V);
size(U) = (5, 4)
size(V) = (4, 4)
We verify the orthogonality of the singular vectors as follows:
@show opnorm(U'*U - I);
@show opnorm(V'*V - I);
opnorm(U' * U - I) = 1.6204179089100082e-15
opnorm(V' * V - I) = 8.307352235414462e-16
Exercises
1. ✍ Each factorization below is algebraically correct. The notation \(\mathbf{I}_n\) means an \(n\times n\) identity. In each case, determine whether it is an SVD. If it is, write down \(\sigma_1\), \(\mathbf{u}_1\), and \(\mathbf{v}_1\). If it is not, state all of the ways in which it fails the required properties.
(a) \(\begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\qquad \) (b) \(\begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix} = \mathbf{I}_2 \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix} \mathbf{I}_2\)
(c) \(\begin{bmatrix} 1 & 0\\ 0 & \sqrt{2}\\ 1 & 0 \end{bmatrix} = \begin{bmatrix} \alpha & 0 & -\alpha \\ 0 & 1 & 0 \\ \alpha & 0 & -\alpha \end{bmatrix} \begin{bmatrix} \sqrt{2} & 0 \\ 0 & \sqrt{2} \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad \alpha=1/\sqrt{2}\)
(d) \(\begin{bmatrix} \sqrt{2} & \sqrt{2}\\ -1 & 1\\ 0 & 0 \end{bmatrix} = \mathbf{I}_3 \begin{bmatrix} 2 & 0 \\ 0 & \sqrt{2} \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \alpha & \alpha \\ -\alpha & \alpha \end{bmatrix}, \quad \alpha=1/\sqrt{2}\)
2. ✍ Apply Theorem 7.3.5 to find an SVD of \(\mathbf{A}=\displaystyle \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ -1 & -1 \end{bmatrix}.\)
3. ⌨ Let `x` be a vector of 1000 equally spaced points between 0 and 1. Suppose \(\mathbf{A}_n\) is the \(1000\times n\) matrix whose \((i,j)\) entry is \(x_i^{j-1}\) for \(j=1,\ldots,n\).

(a) Print out the singular values of \(\mathbf{A}_1\), \(\mathbf{A}_2\), and \(\mathbf{A}_3\).

(b) Make a log-linear plot of the singular values of \(\mathbf{A}_{40}\).

(c) Repeat part (b) after converting the elements of `x` to type `Float32` (i.e., single precision).

(d) Having seen the plot for part (c), which singular values in part (b) do you suspect may be incorrect?
4. ⌨ See Demo 7.1.7 for how to get the “mandrill” test image. Make a log-linear scatter plot of the singular values of the matrix of grayscale intensity values. (The shape of this graph is surprisingly similar across a wide range of images.)
5. ✍ Prove that for a square real matrix \(\mathbf{A}\), \(\| \mathbf{A} \|_2=\| \mathbf{A}^T \|_2\).
6. ✍ Prove (7.3.7) of Theorem 7.3.7, given that (7.3.6) is true. (Hint: If the SVD of \(\mathbf{A}\) is known, what is the SVD of \(\mathbf{A}^{+}\)?)
7. ✍ Suppose \(\mathbf{A}\in\mathbb{R}^{m\times n}\), with \(m>n\), has the thin SVD \(\mathbf{A}=\hat{\mathbf{U}}\hat{\mathbf{S}}\mathbf{V}^T\). Show that the matrix \(\mathbf{A}\mathbf{A}^{+}\) is equal to \(\hat{\mathbf{U}}\hat{\mathbf{U}}^T\). (You must be careful with matrix sizes in this derivation.)
8. ✍ In (3.2.3) we defined the 2-norm condition number of a rectangular matrix as \(\kappa(\mathbf{A})=\|\mathbf{A}\|\cdot \|\mathbf{A}^{+}\|\), and then claimed (in the real case) that \(\kappa(\mathbf{A}^*\mathbf{A})=\kappa(\mathbf{A})^2\). Prove this assertion using the SVD.
9. ✍ Show that the square of each singular value of \(\mathbf{A}\) is an eigenvalue of the matrix \(\mathbf{A}\mathbf{A}^*\) for any \(m\times n\) matrix \(\mathbf{A}\). (You should consider the cases \(m>n\) and \(m\le n\) separately.)
10. ✍ In this problem you will see how (7.3.6) is proved in the real case.
(a) Use the technique of Lagrange multipliers to show that among vectors that satisfy \(\|\mathbf{x}\|_2^2=1\), any vector that maximizes \(\|\mathbf{A}\mathbf{x}\|_2^2\) must be an eigenvector of \(\mathbf{A}^T\mathbf{A}\). It will help to know that if \(\mathbf{B}\) is any symmetric matrix, the gradient of the scalar function \(\mathbf{x}^T\mathbf{B}\mathbf{x}\) with respect to \(\mathbf{x}\) is \(2\mathbf{B}\mathbf{x}\).
(b) Use the result of part (a) to prove (7.3.6) for real matrices.
11. ✍ Suppose \(\mathbf{A}\in\mathbb{R}^{m \times n}\), and define \(\mathbf{C}\) as in (7.3.3).
(a) Suppose that \(\mathbf{v}=\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix}\), and write the block equation \(\mathbf{C}\mathbf{v} = \lambda \mathbf{v}\) as two individual equations involving both \(\mathbf{x}\) and \(\mathbf{y}\).
(b) By applying some substitutions, rewrite the equations from part (a) as one in which \(\mathbf{x}\) has been eliminated, and another in which \(\mathbf{y}\) has been eliminated.
(c) Substitute the SVD \(\mathbf{A}=\mathbf{U}\mathbf{S}\mathbf{V}^T\) and explain why \(\lambda^2=\sigma_k^2\) for some singular value \(\sigma_k\).
(d) As a more advanced variation, modify the argument to show that \(\lambda=0\) is another possibility if \(\mathbf{A}\) is not square.