```
```{raw} latex
%%start demo%%
```
We set up a $5\times 5$ triangular matrix with prescribed eigenvalues on its diagonal.
```{code-cell}
λ = [1,-0.75,0.6,-0.4,0]
# Make a triangular matrix with eigenvalues on the diagonal.
A = triu(ones(5,5),1) + diagm(λ)
```
We run inverse iteration with the shift $s=0.7$ and take the final estimate as our "exact" answer to observe the convergence.
```{code-cell}
s = 0.7
β,x = FNC.inviter(A,s,30)
eigval = β[end]
```
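For context, here is a minimal sketch of what a shifted inverse iteration routine like `FNC.inviter` may look like; the name `inviter_sketch` and its details are illustrative assumptions, not the book's actual implementation.

```julia
using LinearAlgebra

# Illustrative sketch of shifted inverse iteration (not the actual FNC.inviter).
# Returns the vector of eigenvalue estimates and the final eigenvector estimate.
function inviter_sketch(A, s, numiter)
    n = size(A, 1)
    x = ones(n)                  # starting vector (largest entry equal to 1)
    β = zeros(numiter)
    fact = lu(A - s*I)           # the shift is fixed, so factor only once
    for k in 1:numiter
        y = fact \ x             # solve (A - sI) y = x using stored factors
        m = argmax(abs.(y))      # position of the largest-magnitude entry
        β[k] = x[m]/y[m] + s     # eigenvalue estimate, since x[m]/y[m] ≈ λ - s
        x = y / y[m]             # renormalize so the largest entry is 1
    end
    return β, x
end
```

Because the shift never changes, the single `lu` factorization is reused for every solve inside the loop.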
As expected, the eigenvalue that was found is the one closest to 0.7. The convergence is again linear.
```{code-cell}
err = @. abs(eigval - β)
plot(0:28, err[1:end-1], m=:o,
    title="Convergence of inverse iteration",
    xlabel=L"k", yaxis=(L"|\lambda_3-\beta_k|", :log10, [1e-16, 1]))
```
We estimate the observed linear convergence rate from the ratio of successive errors.
```{code-cell}
@show observed_rate = err[22]/err[21];
```
```{index} ! Julia; sortperm
```
::::{panels}
:column: col-7 left-side
:card: border-0 shadow-none
```{raw} latex
\begin{minipage}[t]{0.5\textwidth}
```
We reorder the eigenvalues to enforce {eq}`shiftorder`.
```{raw} latex
\end{minipage}\hfill
```
---
:column: col-5 right-side
:card: shadow-none comment
```{raw} latex
\begin{minipage}[t]{0.4\textwidth}\begin{mdframed}[default]\small
```
The `sortperm` function returns the index permutation needed to sort the given vector, rather than the sorted vector itself.
```{raw} latex
\end{mdframed}\end{minipage}
```
::::
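As a quick illustration of the difference between `sort` and `sortperm`:

```julia
v = [0.3, -0.05, 0.1]
p = sortperm(v)      # returns [2, 3, 1]: the permutation, not the sorted values
v[p]                 # returns [-0.05, 0.1, 0.3], the same as sort(v)
```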
```{code-cell}
λ = λ[sortperm(abs.(λ .- s))]
```
Hence the theoretical convergence rate is
```{code-cell}
@show theoretical_rate = (λ[1]-s) / (λ[2]-s);
```
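The observed and theoretical rates should agree in magnitude. With the eigenvalues and shift used here, the two smallest distances to $s=0.7$ are $0.1$ (for $\lambda=0.6$) and $0.3$ (for $\lambda=1$), giving a rate of $1/3$; a self-contained check:

```julia
λ = [1, -0.75, 0.6, -0.4, 0]
s = 0.7
d = sort(abs.(λ .- s))    # distances from the shift, in increasing order
rate = d[1] / d[2]        # |λ₁ - s| / |λ₂ - s| ≈ 1/3
```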
```{raw} html

```
```{raw} latex
%%end demo%%
```
## Dynamic shifting
There is a clear opportunity for positive feedback in the inverse iteration algorithm: each eigenvalue estimate $\beta_k$ is typically closer to the target eigenvalue than the shift $s$ is, so replacing $s$ with the latest $\beta_k$ should accelerate convergence in the subsequent iterations.
```{raw} latex
%%start demo%%
```
```{code-cell}
λ = [1,-0.75,0.6,-0.4,0]
# Make a triangular matrix with eigenvalues on the diagonal.
A = triu(ones(5,5),1) + diagm(λ)
```
We begin with a shift $s=0.7$, which is closest to the eigenvalue 0.6.
```{code-cell}
s = 0.7
x = ones(5)
y = (A-s*I)\x
β = x[1]/y[1] + s
```
Note that the result is not yet any closer to the targeted 0.6. But we proceed (without being too picky about normalization here).
```{code-cell}
s = β
x = y/y[1]
y = (A-s*I)\x
β = x[1]/y[1] + s
```
Still not much apparent progress. However, in just a few more iterations the results are dramatically better.
```{code-cell}
for k in 1:4
    s = β
    x = y/y[1]
    y = (A-s*I)\x
    @show β = x[1]/y[1] + s
end
```
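Packaged as a function, the dynamically shifted iteration above might look like the following sketch (the name `dynamic_inviter` is hypothetical). Note that $(\mathbf{A}-s\mathbf{I})$ becomes nearly singular as $s$ converges, which is harmless in floating point as long as $s$ never lands exactly on an eigenvalue.

```julia
using LinearAlgebra

# Hypothetical packaging of the loop above: inverse iteration with the
# eigenvalue estimate fed back in as the shift after each step.
function dynamic_inviter(A, s, numiter)
    x = ones(size(A, 1))
    β = s
    for k in 1:numiter
        y = (A - s*I) \ x       # a fresh factorization every iteration
        β = x[1]/y[1] + s
        x = y / y[1]
        s = β                   # positive feedback: reuse the estimate as shift
    end
    return β
end
```

Running six iterations from $s=0.7$ reproduces the sequence computed step by step in the demo above.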
```{raw} html

```
```{raw} latex
%%end demo%%
```
There is a price to pay for this improvement. The matrix of the linear system to be solved, $(\mathbf{A}-s\mathbf{I})$, now changes with each iteration. That means that we can no longer do just one LU factorization for the entire iteration. The speedup in convergence usually makes this tradeoff worthwhile, however.
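To make the tradeoff concrete, here is a sketch of the fixed-shift case, where a single `lu` factorization is hoisted out of the loop (the function name is illustrative):

```julia
using LinearAlgebra

# With a fixed shift the O(n³) factorization happens once; each subsequent
# iteration costs only the O(n²) triangular solves.
function fixed_shift_power(A, s, x, numiter)
    F = lu(A - s*I)             # factor once, outside the loop
    for k in 1:numiter
        x = F \ x               # reuse the stored factors
        x = x / maximum(abs, x) # renormalize
    end
    return x
end
```

With a dynamic shift, the `lu(A - s*I)` call would have to move inside the loop, restoring the full factorization cost at every iteration.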
In practice, power and inverse iteration are not as effective as the algorithms used by `eigs`, which are based on the mathematics described in the rest of this chapter. However, inverse iteration can be useful for turning an eigenvalue estimate into an eigenvector estimate.
## Exercises
(problem-invitercomp)=
1. ⌨ Use {numref}`Function {number}