9.3. Stability of polynomial interpolation#
With barycentric interpolation available in the form of Function 9.2.3, we can explore polynomial interpolation using a numerically stable algorithm. Any remaining sensitivity to error is due to the conditioning of the interpolation process itself.
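To make the idea concrete, here is a minimal sketch of the second (true) barycentric formula. The name `baryinterp` is hypothetical and the weight computation is a naive \(O(n^2)\) loop; Function 9.2.3 (`FNC.polyinterp`) is the book's actual implementation.

```julia
# Minimal sketch of the second barycentric formula (hypothetical helper,
# not the book's FNC.polyinterp).
function baryinterp(t, y)
    # barycentric weights: wₖ = 1 / ∏_{j≠k} (tₖ - tⱼ), computed naively
    w = [1 / prod(t[k] - t[j] for j in eachindex(t) if j != k)
         for k in eachindex(t)]
    return function (x)
        k = findfirst(isequal(x), t)
        k === nothing || return y[k]               # x is exactly a node
        terms = [w[k] / (x - t[k]) for k in eachindex(t)]
        return sum(terms .* y) / sum(terms)
    end
end

p = baryinterp([0.0, 1.0, 2.0], [1.0, 3.0, 7.0])   # data from x² + x + 1
p(0.5)   # ≈ 1.75
```

Because the same weights appear in numerator and denominator, common large factors cancel, which is the source of the formula's numerical stability.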
We choose a function over the interval \([0,1]\).
f = x -> sin(exp(2*x));
Here is a graph of \(f\) and its polynomial interpolant using seven equally spaced nodes.
plot(f,0,1,label="function",legend=:bottomleft)
t = (0:6)/6
y = f.(t)
scatter!(t,y,label="nodes")
p = FNC.polyinterp(t,y)
plot!(p,0,1,label="interpolant",title="Equispaced interpolant, n=6")
This looks pretty good. We want to track the behavior of the error as \(n\) increases. We will estimate the error in the continuous interpolant by sampling it at a large number of points and taking the max-norm.
n = 5:5:60; err = zeros(size(n))
x = range(0,1,length=2001) # for measuring error
for (i,n) in enumerate(n)
t = (0:n)/n # equally spaced nodes
y = f.(t) # interpolation data
p = FNC.polyinterp(t,y)
err[i] = norm( (@. f(x)-p(x)), Inf )
end
plot(n,err,m=:o,title="Interpolation error for equispaced nodes",
    xaxis=(L"n"),yaxis=(:log10,"max error"))
The error initially decreases as one would expect, but then it begins to grow. Both phases occur at rates that are exponential in \(n\), i.e., \(O(K^n)\) for some constant \(K\), which appears linear on a semi-log plot.
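The connection between an \(O(K^n)\) error and a linear semi-log plot can be checked with a tiny synthetic experiment (the data below are made up for illustration, not taken from the demo): the slope of \(\log(\text{err})\) versus \(n\) recovers \(\log K\).

```julia
# Synthetic illustration: if errₙ = C·Kⁿ, then log(errₙ) is linear in n
# and its slope equals log K. (C and K are arbitrary choices here.)
nvals = 5:5:30
C, K = 0.5, 1/1.8
errs = @. C * K^nvals
slope = (log(errs[end]) - log(errs[1])) / (nvals[end] - nvals[1])
exp(slope)   # recovers K ≈ 0.5556
```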
Runge phenomenon#
The disappointing loss of convergence in Demo 9.3.1 is a sign of ill conditioning due to the use of equally spaced nodes. We will examine this effect using the error formula (9.1.5) as a guide:

$$
f(x) - p(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!}\, \Phi(x), \qquad \Phi(x) = \prod_{i=0}^{n} (x - t_i).
$$
While the dependence on \(f\) is messy here, the error indicator \(\Phi(x)\) can be studied as a function of the nodes only.
We plot \(|\Phi(x)|\) over the interval \([-1,1]\) with equispaced nodes for different values of \(n\).
plot(xaxis=(L"x"),yaxis=(:log10,L"|\Phi(x)|",[1e-25,1]),legend=:bottomleft)
x = range(-1,1,length=2001)
for n in 10:10:50
t = range(-1,1,length=n+1)
Φ = [ prod(xₖ.-t) for xₖ in x ]
scatter!(x,abs.(Φ),m=(1,stroke(0)),label="n=$n")
end
title!("Error indicator for equispaced nodes")
Each time \(\Phi\) passes through zero at an interpolation node, the value on the log scale should go to \(-\infty\), which explains the numerous cusps on the curves.
Two observations from the result of Demo 9.3.2 are important. First, \(|\Phi|\) decreases exponentially at each fixed location in the interval (note that the spacing between curves is constant for constant increments of \(n\)). Second, \(|\Phi|\) is larger at the ends of the interval than in the middle, by an exponentially growing factor. This gap is what can ruin the convergence of polynomial interpolation.
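The endpoint-versus-center gap can be quantified directly. The sketch below (our own check, with an arbitrary off-node offset of \(0.01\)) compares \(|\Phi|\) just inside \(x=-1\) with \(|\Phi|\) just off the center \(x=0\):

```julia
# Compare |Φ| near an endpoint vs. near the center for equispaced nodes.
# The 0.01 offsets are arbitrary off-node sample points (our choice).
Φ(x, t) = prod(x - tᵢ for tᵢ in t)
ratio(n) = begin
    t = range(-1, 1, length=n+1)
    abs(Φ(-1 + 0.01, t)) / abs(Φ(0.01, t))
end
[ratio(n) for n in (10, 20, 30)]   # grows rapidly with n
```

The ratio increases by orders of magnitude each time \(n\) grows by 10, consistent with an exponentially growing endpoint/center gap.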
The next function has infinitely many continuous derivatives on the entire real line and looks easy to approximate over \([-1,1]\).
f = x -> 1/(x^2 + 16)
plot(f,-1,1,title="Test function",legend=:none)
We start by doing equispaced polynomial interpolation for some small values of \(n\).
plot(xaxis=(L"x"),yaxis=(:log10,L"|f(x)-p(x)|",[1e-20,1]))
x = range(-1,1,length=2501)
n = 4:4:12
for (k,n) in enumerate(n)
t = range(-1,1,length=n+1) # equally spaced nodes
y = f.(t) # interpolation data
p = FNC.polyinterp(t,y)
err = @. abs(f(x)-p(x))
plot!(x,err,m=(1,:o,stroke(0)),label="degree $n")
end
title!("Error for low degrees")
The convergence so far appears rather good, though not uniformly so. However, notice what happens as we continue to increase the degree: the error near the center keeps shrinking, while near the endpoints it begins to grow.
n = @. 12 + 15*(1:3)
plot(xaxis=(L"x"),yaxis=(:log10,L"|f(x)-p(x)|",[1e-20,1]))
for (k,n) in enumerate(n)
t = range(-1,1,length=n+1) # equally spaced nodes
y = f.(t) # interpolation data
p = FNC.polyinterp(t,y)
err = @. abs(f(x)-p(x))
plot!(x,err,m=(1,:o,stroke(0)),label="degree $n")
end
title!("Error for higher degrees")