5.4. Finite differences#
Now we turn to one of the most common and important applications of interpolants: finding derivatives of functions. Because differentiation is a linear operation, we will constrain ourselves to formulas that are linear in the nodal values.
A finite-difference formula is a list of values \(a_{-p},\ldots,a_q\), called weights, such that for all \(f\) in some class of functions,

\[
f'(x) \approx \frac{1}{h} \sum_{k=-p}^{q} a_k\, f(x+kh). \tag{5.4.1}
\]
The weights are independent of \(f\) and \(h\). The formula is said to be convergent if the approximation becomes equality in the limit \(h\to 0\) for a suitable class of functions.
Note that while (5.4.1) is about finding the derivative at a single point \(x\), the same formula can be applied for different \(x\). The usual situation is a regularly spaced grid of nodes, \(a,a+h,a+2h,\ldots,b\), and then the value of \(f\) at each node takes part in multiple applications of the formula. This will be demonstrated in Example 5.4.3 below.
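As an illustrative sketch of this grid-based usage (the function, interval, and \(h\) here are hypothetical, not from the text), the simplest forward difference \(\bigl(f(x+h)-f(x)\bigr)/h\) can be applied at every node except the last:

```julia
# Hypothetical example: apply the forward difference (f(x+h)-f(x))/h at
# each node of a regular grid a, a+h, ..., b. Every interior nodal value
# is reused by two applications of the formula.
f = x -> sin(x)                    # illustrative function
a, b, h = 0.0, 1.0, 0.25           # illustrative interval and spacing
t = a:h:b                          # the grid of nodes
y = f.(t)                          # values of f at the nodes
fd = (y[2:end] - y[1:end-1]) / h   # estimates of f' at t[1], ..., t[end-1]
```

Each difference `y[k+1] - y[k]` serves as a derivative estimate at node `t[k]`; the last node has no neighbor to its right, which is one reason formulas of both one-sided orientations are useful.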
Common examples#
There are three appealing special cases of (5.4.1) that get special attention.
A forward difference formula is characterized by (5.4.1) with \(p=0\), a backward difference formula has \(q=0\), and a centered difference formula has \(p=q\).
The simplest example of a forward difference formula is inspired by the familiar limit definition of a derivative:

\[
f'(x) \approx \frac{f(x+h)-f(x)}{h}, \tag{5.4.2}
\]

which is (5.4.1) with \(p=0\), \(q=1\), \(a_0=-1\), and \(a_1=1\). Analogously, we have the backward difference

\[
f'(x) \approx \frac{f(x)-f(x-h)}{h}, \tag{5.4.3}
\]

in which \(p=1\), \(q=0\).
Suppose \(f(x)=x^2\), and we take \(h=\frac{1}{4}\) over the interval \([0,1]\). This results in the nodes \(0,\frac{1}{4},\frac{1}{2},\frac{3}{4},1\). We evaluate \(f\) at the nodes to get

\[
0, \ \tfrac{1}{16}, \ \tfrac{1}{4}, \ \tfrac{9}{16}, \ 1.
\]

This gives four forward difference estimates,

\[
f'(0) \approx \tfrac{1}{4}, \quad f'\!\left(\tfrac{1}{4}\right) \approx \tfrac{3}{4}, \quad f'\!\left(\tfrac{1}{2}\right) \approx \tfrac{5}{4}, \quad f'\!\left(\tfrac{3}{4}\right) \approx \tfrac{7}{4}.
\]

We also get four backward difference estimates,

\[
f'\!\left(\tfrac{1}{4}\right) \approx \tfrac{1}{4}, \quad f'\!\left(\tfrac{1}{2}\right) \approx \tfrac{3}{4}, \quad f'\!\left(\tfrac{3}{4}\right) \approx \tfrac{5}{4}, \quad f'(1) \approx \tfrac{7}{4}.
\]
Notice that it’s the same four differences each time, but we’re interpreting them as derivative estimates at different nodes.
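These four values are easy to confirm numerically; here is a brief check (not part of the original example's code):

```julia
# Check the worked example: f(x) = x^2 on [0,1] with h = 1/4.
f = x -> x^2
h = 1/4
t = 0:h:1                            # nodes 0, 1/4, 1/2, 3/4, 1
y = f.(t)                            # nodal values 0, 1/16, 1/4, 9/16, 1
diffs = (y[2:end] - y[1:end-1]) / h  # the four differences: 1/4, 3/4, 5/4, 7/4
```

Read as forward differences, `diffs[k]` estimates \(f'\) at `t[k]`; read as backward differences, the same number estimates \(f'\) at `t[k+1]`.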
As pointed out in Example 5.4.3, the only real distinction between (5.4.2) and (5.4.3) is whether we think that \(f'\) is being evaluated at the left node or the right one. Symmetry would suggest that we should evaluate it halfway between. That is the motivation behind centered difference formulas.
Let’s derive the shortest centered formula using \(p=q=1\). For simplicity, we will set \(x=0\) without affecting the result. This means that \(f(-h)\), \(f(0)\), and \(f(h)\) are all available in (5.4.1).
Note that (5.4.2) is simply the slope of the line through the points \(\bigl(0,f(0)\bigr)\) and \(\bigl(h,f(h)\bigr)\). One route to using all three function values is to differentiate the quadratic polynomial that interpolates \(\bigl(-h,f(-h)\bigr)\) as well (see Exercise 1):

\[
Q(x) = \frac{x(x-h)}{2h^2}\, f(-h) + \frac{h^2-x^2}{h^2}\, f(0) + \frac{x(x+h)}{2h^2}\, f(h). \tag{5.4.4}
\]

This leads to

\[
f'(0) \approx Q'(0) = \frac{f(h)-f(-h)}{2h}. \tag{5.4.5}
\]

This result is equivalent to (5.4.1) with \(p=q=1\) and weights \(a_{-1}=-\frac{1}{2}\), \(a_0=0\), and \(a_1=\frac{1}{2}\). Observe that while the value of \(f(0)\) was available during the derivation, its weight ends up being zero.
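Carrying out the differentiation explicitly (a worked version of the step above, using the quadratic \(Q\) from (5.4.4)) shows where the zero weight and the \(2h\) denominator come from:

```latex
Q'(x) = \frac{2x-h}{2h^2}\,f(-h) - \frac{2x}{h^2}\,f(0) + \frac{2x+h}{2h^2}\,f(h),
\qquad\text{so}\qquad
Q'(0) = -\frac{1}{2h}\,f(-h) + 0\cdot f(0) + \frac{1}{2h}\,f(h)
      = \frac{f(h)-f(-h)}{2h}.
```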
Besides the aesthetic appeal of symmetry, in Section 5.5 we will see another important advantage of (5.4.5) compared to the one-sided formulas.
We can in principle derive any finite-difference formula from the same process: Interpolate the given function values, then differentiate the interpolant exactly. Some results of the process are given in Table 5.4.1 for centered differences, and in Table 5.4.2 for forward differences. Both show the weights for estimating the derivative at \(x=0\). To get backward differences, you change the signs and reverse the order of the coefficients in any row of Table 5.4.2; see Exercise 2.
Table 5.4.1: Weights of centered finite-difference formulas.

| order | \(-4h\) | \(-3h\) | \(-2h\) | \(-h\) | \(0\) | \(h\) | \(2h\) | \(3h\) | \(4h\) |
|---|---|---|---|---|---|---|---|---|---|
| 2 | | | | \(-\frac{1}{2}\) | \(0\) | \(\frac{1}{2}\) | | | |
| 4 | | | \(\frac{1}{12}\) | \(-\frac{2}{3}\) | \(0\) | \(\frac{2}{3}\) | \(-\frac{1}{12}\) | | |
| 6 | | \(-\frac{1}{60}\) | \(\frac{3}{20}\) | \(-\frac{3}{4}\) | \(0\) | \(\frac{3}{4}\) | \(-\frac{3}{20}\) | \(\frac{1}{60}\) | |
| 8 | \(\frac{1}{280}\) | \(-\frac{4}{105}\) | \(\frac{1}{5}\) | \(-\frac{4}{5}\) | \(0\) | \(\frac{4}{5}\) | \(-\frac{1}{5}\) | \(\frac{4}{105}\) | \(-\frac{1}{280}\) |
Table 5.4.2: Weights of forward finite-difference formulas.

| order | \(0\) | \(h\) | \(2h\) | \(3h\) | \(4h\) |
|---|---|---|---|---|---|
| 1 | \(-1\) | \(1\) | | | |
| 2 | \(-\frac{3}{2}\) | \(2\) | \(-\frac{1}{2}\) | | |
| 3 | \(-\frac{11}{6}\) | \(3\) | \(-\frac{3}{2}\) | \(\frac{1}{3}\) | |
| 4 | \(-\frac{25}{12}\) | \(4\) | \(-3\) | \(\frac{4}{3}\) | \(-\frac{1}{4}\) |
The main motivation for using more function values in a formula is to improve the accuracy. This is measured by order of accuracy, which is shown in the tables and explored in Section 5.5.
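As a small preview of how order of accuracy manifests (an illustrative experiment, with a hypothetical test function, not from the text): halving \(h\) should roughly halve the error of an order-1 formula and quarter the error of an order-2 formula.

```julia
# Illustrative check of order of accuracy with f = sin (a hypothetical choice).
f, dfdx = sin, cos
x = 0.8
fd_err(h) = abs((f(x+h) - f(x)) / h - dfdx(x))       # order-1 forward difference
cd_err(h) = abs((f(x+h) - f(x-h)) / (2h) - dfdx(x))  # order-2 centered difference
ratios = (fd_err(0.1) / fd_err(0.05), cd_err(0.1) / cd_err(0.05))
# ratios comes out near (2, 4), consistent with orders 1 and 2
```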
According to the tables, here are three specific finite-difference formulas:

\[
f'(0) \approx \frac{f(h)-f(-h)}{2h},
\]
\[
f'(0) \approx \frac{f(-2h) - 8f(-h) + 8f(h) - f(2h)}{12h},
\]
\[
f'(0) \approx \frac{-3f(0) + 4f(h) - f(2h)}{2h}.
\]
If \(f(x)=e^{\,\sin(x)}\), then \(f'(0)=1\).
f = x -> exp(sin(x));
Here are the first two centered differences from Table 5.4.1.
h = 0.05
CD2 = (-f(-h) + f(h)) / 2h
CD4 = (f(-2h) - 8f(-h) + 8f(h) - f(2h)) / 12h
@show (CD2,CD4);
(CD2, CD4) = (0.9999995835069508, 1.0000016631938748)
Here are the first two forward differences from Table 5.4.2.
FD1 = (-f(0) + f(h)) / h
FD2 = (-3f(0) + 4f(h) - f(2h)) / 2h
@show (FD1,FD2);
(FD1, FD2) = (1.024983957209069, 1.0000996111012461)
Finally, here are the backward differences that come from reverse-negating the forward differences.
BD1 = (-f(-h) + f(0)) / h
BD2 = (f(-2h) - 4f(-h) + 3f(0)) / 2h
@show (BD1,BD2);
(BD1, BD2) = (0.9750152098048326, 0.9999120340342049)
Higher derivatives#
Many applications require the second derivative of a function. It’s tempting to use the finite difference of a finite difference. For example, applying (5.4.5) to \(f'\) gives

\[
f''(0) \approx \frac{f'(h) - f'(-h)}{2h}.
\]

Then applying (5.4.5) to approximate the appearances of \(f'\) leads to

\[
f''(0) \approx \frac{f(2h) - 2f(0) + f(-2h)}{4h^2}. \tag{5.4.6}
\]
This is a valid formula, but it uses values at \(\pm 2h\) rather than the closer values at \(\pm h\). A better and more generalizable tactic is to return to the quadratic \(Q(x)\) in (5.4.4) and use \(Q''(0)\) to approximate \(f''(0)\). Doing so yields

\[
f''(0) \approx \frac{f(-h) - 2f(0) + f(h)}{h^2}, \tag{5.4.7}
\]

which is the simplest centered second-difference formula. As with the first derivative, we can choose larger values of \(p\) and \(q\) in (5.4.1) to get new formulas, such as

\[
f''(0) \approx \frac{f(0) - 2f(h) + f(2h)}{h^2}, \tag{5.4.8}
\]

and

\[
f''(0) \approx \frac{2f(0) - 5f(h) + 4f(2h) - f(3h)}{h^2}. \tag{5.4.9}
\]

For the second derivative, converting a forward difference to a backward difference requires reversing the order of the weights, while not changing their signs.
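The simplest centered second-difference weights can be read off by differentiating the quadratic \(Q\) from (5.4.4) twice; this worked step (filled in here for completeness) shows where (5.4.7) comes from:

```latex
Q''(x) = \frac{1}{h^2}\,f(-h) - \frac{2}{h^2}\,f(0) + \frac{1}{h^2}\,f(h)
\quad\Longrightarrow\quad
f''(0) \approx Q''(0) = \frac{f(-h) - 2f(0) + f(h)}{h^2}.
```

Since \(Q\) is quadratic, \(Q''\) is constant, so the same weights result no matter where the second derivative is evaluated.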
If \(f(x)=e^{\,\sin(x)}\), then \(f''(0)=1\).
f = x -> exp(sin(x));
Here is a centered estimate given by (5.4.7).
h = 0.05
CD2 = (f(-h) - 2f(0) + f(h)) / h^2
@show CD2;
CD2 = 0.9993749480847745
For the same \(h\), here are forward estimates given by (5.4.8) and (5.4.9).
FD1 = (f(0) - 2f(h) + f(2h)) / h^2
FD2 = (2f(0) - 5f(h) + 4f(2h) - f(3h)) / h^2
@show (FD1,FD2);
(FD1, FD2) = (0.9953738443129188, 1.0078811479598213)
Finally, here are the backward estimates that come from reversing (5.4.8) and (5.4.9).
BD1 = (f(-2h) - 2f(-h) + f(0)) / h^2
BD2 = (-f(-3h) + 4f(-2h) - 5f(-h) + 2f(0)) / h^2
@show (BD1,BD2);
(BD1, BD2) = (0.9958729691748489, 1.0058928192789194)
Arbitrary nodes#
Although function values at equally spaced nodes are a common and convenient situation, the node locations may be arbitrary. The general form of a finite-difference formula is

\[
f^{(m)}(0) \approx \sum_{k=0}^{r} c_k\, f(t_k).
\]

We no longer assume equally spaced nodes, so there is no “\(h\)” to be used in the formula. As before, the weights may be applied after any translation of the independent variable. The weights again follow from the interpolate/differentiate recipe, but the algebra becomes complicated. Fortunately there is an elegant recursion known as Fornberg’s algorithm that can calculate these weights for any desired formula. We present it without derivation as Function 5.4.7.
Fornberg’s algorithm for finite-difference weights
"""
    fdweights(t,m)

Compute weights for the `m`th derivative of a function at zero using
values at the nodes in vector `t`.
"""
function fdweights(t,m)
    # This is a compact implementation, not an efficient one.
    # Recursion for one weight.
    function weight(t,m,r,k)
        # Inputs
        #   t: vector of nodes
        #   m: order of derivative sought
        #   r: number of nodes to use from t
        #   k: index of node whose weight is found

        if (m<0) || (m>r)          # undefined coeffs must be zero
            c = 0
        elseif (m==0) && (r==0)    # base case of one-point interpolation
            c = 1
        else                       # generic recursion
            if k<r
                c = (t[r+1]*weight(t,m,r-1,k) -
                    m*weight(t,m-1,r-1,k)) / (t[r+1]-t[k+1])
            else
                numer = r > 1 ? prod(t[r]-x for x in t[1:r-1]) : 1
                denom = r > 0 ? prod(t[r+1]-x for x in t[1:r]) : 1
                β = numer/denom
                c = β*(m*weight(t,m-1,r-1,r-1) - t[r]*weight(t,m,r-1,r-1))
            end
        end
        return c
    end
    r = length(t)-1
    return [ weight(t,m,r,k) for k=0:r ]
end
We will estimate the derivative of \(\cos(x^2)\) at \(x=0.5\) using five nodes.
t = [ 0.35, 0.5, 0.57, 0.6, 0.75 ]   # nodes
f = x -> cos(x^2)
dfdx = x -> -2*x*sin(x^2)
exact_value = dfdx(0.5)
-0.24740395925452294
We have to shift the nodes so that the point of estimation for the derivative is at \(x=0\). (To subtract a scalar from a vector, we must use the `.-` operator.)
w = FNC.fdweights(t .- 0.5, 1)
5-element Vector{Float64}:
  -0.5303030303030298
 -21.61904761904763
  45.09379509379508
 -23.333333333333307
   0.38888888888888845
The finitedifference formula is a dot product (i.e., inner product) between the vector of weights and the vector of function values at the nodes.
fd_value = dot(w, f.(t))
-0.247307422906135
We can reproduce the weights in the finite-difference tables by using equally spaced nodes with \(h=1\). For example, here is a one-sided formula at four nodes.
FNC.fdweights(0:3, 1)
4-element Vector{Float64}:
 -1.8333333333333333
  3.0
 -1.5
  0.3333333333333333
By giving nodes of type `Rational`, we can get exact values instead.
FNC.fdweights(Rational.(0:3), 1)
4-element Vector{Rational{Int64}}:
 -11//6
   3//1
  -3//2
   1//3
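As a sanity check on the tables (a quick experiment, not part of the text): an order-4 centered formula should differentiate every polynomial of degree at most 4 exactly, regardless of the spacing. This snippet verifies that property for the order-4 row of Table 5.4.1:

```julia
# Verify the order-4 centered weights from Table 5.4.1 on monomials x^n.
w = [1/12, -2/3, 0, 2/3, -1/12]   # weights at nodes -2h, -h, 0, h, 2h
h = 0.3                           # arbitrary spacing; exactness is h-independent
nodes = (-2:2) * h
for n in 0:4
    p = x -> x^n
    dp0 = (n == 1) ? 1.0 : 0.0    # exact value of d/dx x^n at x = 0
    est = sum(w .* p.(nodes)) / h
    @assert abs(est - dp0) < 1e-12
end
```

The weights sum to zero (the formula annihilates constants), and the same kind of check could be applied to any row of either table.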
Exercises#
✍ This problem refers to \(Q(x)\) defined by (5.4.4).
(a) Show that \(Q(x)\) interpolates the three values of \(f\) at \(x=-h\), \(x=0\), and \(x=h\).
(b) Show that \(Q'(0)\) gives the finitedifference formula defined by (5.4.5).
(a) ✍ Table 5.4.2 lists forward difference formulas in which \(p=0\) in (5.4.1). Show that the change of variable \(g(x) = f(-x)\) transforms these formulas into backward difference formulas with \(q=0\), and write out the table analogous to Table 5.4.2 for backward differences.
(b) ⌨ Suppose you are given the nodes \(t_0=0.9\), \(t_1=1\), and \(t_2=1.1\), and \(f(x) = \sin(2x)\). Using formulas from Table 5.4.1 and Table 5.4.2, compute second-order accurate approximations to \(f'\) at each of the three nodes.
⌨ Let \(f(x)=e^{x}\), \(x=0.5\), and \(h=0.2\). Using Function 5.4.7 to get the necessary weights on five nodes centered at \(x\), find finite-difference approximations to the first, second, third, and fourth derivatives of \(f\). Make a table showing the derivative values and the errors in each case.
⌨ In the manner of Demo 5.4.8, use Function 5.4.7 on centered node vectors of length 3, 5, 7, and 9 to produce a table analogous to Table 5.4.1 for the second derivative \(f''(0)\). (You do not need to show the orders of accuracy, just the weights.)
⌨ For this problem, let \(f(x)=\tan(2x)\).
(a) ⌨ Apply Function 5.4.7 to find a finite-difference approximation to \(f''(0.3)\) using the five nodes \(t_j=0.3+jh\) for \(j=-2,\ldots,2\) and \(h=0.05\). Compare to the exact value of \(f''(0.3)\).
(b) ⌨ Repeat part (a) for \(f''(0.75)\) on the nodes \(t_j=0.75+jh\). Why is the finite-difference result so inaccurate? (Hint: A plot of \(f\) might be informative.)
✍ Find the finite-difference formula for \(f''(0)\) that results from applying (5.4.2) to \(f'\) and then applying (5.4.3) to the appearances of \(f'\) within that result.
(a) ✍ Show using L’Hôpital’s Rule that the centered formula approximation (5.4.5) converges to an equality as \(h\to 0\).
(b) ✍ Derive two conditions on the finitedifference weights in (5.4.1) that arise from requiring convergence as \(h\to 0\). (Hint: Consider what is required in order to apply L’Hôpital’s Rule, as well as the result of applying it.)