# 6.4. Runge–Kutta methods

We come now to one of the major and most-used types of methods for initial-value problems: **Runge–Kutta** (RK) methods.[^1] They are one-step methods in the sense of (6.2.5), though they are not often written in that form. RK methods boost the accuracy past first order by evaluating the ODE function \(f(t,u)\) more than once per time step.

## A second-order method

Consider a Taylor series expansion of the exact solution of \(u'=f(t,u)\),

\[
\hat{u}(t_{i+1}) = \hat{u}(t_i) + h \hat{u}'(t_i) + \frac{h^2}{2} \hat{u}''(t_i) + O(h^3).
\tag{6.4.1}
\]

If we replace \(\hat{u}'\) by \(f\) and keep only the first two terms on the right-hand side, we obtain the Euler method. To get more accuracy we will need to compute or estimate the third term as well. Note that

\[
\hat{u}''(t) = \frac{d}{dt} f\bigl(t,\hat{u}(t)\bigr) = f_t\bigl(t,\hat{u}(t)\bigr) + f_u\bigl(t,\hat{u}(t)\bigr)\,\hat{u}'(t),
\]

where we have applied the multidimensional chain rule to the derivative, because both of the arguments of \(f\) depend on \(t\). Using this expression in (6.4.1), we obtain

\[
\hat{u}(t_{i+1}) = \hat{u}(t_i) + h\left[ f + \frac{h}{2} f_t + \frac{h}{2} f f_u \right] + O(h^3),
\tag{6.4.2}
\]

where \(f\) and its partial derivatives are evaluated at \(\bigl(t_i,\hat{u}(t_i)\bigr)\).

We have no desire to calculate and then code those partial derivatives of \(f\) directly; an approximation is called for. Observe that

\[
f\bigl(t_i+\alpha,\, \hat{u}(t_i)+\beta\bigr) = f + \alpha f_t + \beta f_u + O\bigl(\alpha^2 + |\alpha\beta| + \beta^2\bigr),
\tag{6.4.3}
\]

where again \(f\) and its partial derivatives are evaluated at \(\bigl(t_i,\hat{u}(t_i)\bigr)\).

Matching this expression to the term in brackets in (6.4.2), it seems natural to select \(\alpha = h/2\) and \(\beta = \frac{1}{2}h f\bigl(t_i,\hat{u}(t_i)\bigr)\). Doing so, we find

\[
\hat{u}(t_{i+1}) = \hat{u}(t_i) + h f\!\left(t_i+\tfrac{1}{2}h,\; \hat{u}(t_i)+\tfrac{1}{2}h f\bigl(t_i,\hat{u}(t_i)\bigr)\right) + O(h^3).
\]

Truncating the series here results in a new one-step method.

The **improved Euler method** is the one-step formula

\[
u_{i+1} = u_i + h f\!\left(t_i+\tfrac{1}{2}h,\; u_i+\tfrac{1}{2}h f(t_i,u_i)\right).
\tag{6.4.4}
\]

Thanks to the definitions above of \(\alpha\) and \(\beta\), the omitted terms are of size

\[
O\bigl(h\alpha^2 + h|\alpha\beta| + h\beta^2 + h^3\bigr) = O(h^3).
\]

Therefore \(h\tau_{i+1}=O(h^3)\), and the order of accuracy of improved Euler is two.
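The \(O(h^3)\) local error can be checked numerically. The following sketch (an assumed setup for illustration, not code from the book; the function here takes arguments in the order \((t,u)\) to match the math notation) takes a single improved Euler step for \(u'=-2tu\) starting from the exact value at \(t=1\); halving \(h\) should cut the one-step error by roughly a factor of 8.

```julia
# One improved Euler step for u' = -2tu, started on the exact solution
# u(t) = 2exp(-t^2) at t = 1. The local (one-step) error scales like
# O(h^3), so halving h should cut it by roughly a factor of 8.
f(t, u) = -2t*u
uhat(t) = 2exp(-t^2)                 # exact solution with u(0) = 2
function ie2_step(f, t, u, h)
    uhalf = u + h/2*f(t, u)          # Euler half-step
    return u + h*f(t + h/2, uhalf)   # full step with the midpoint slope
end
localerr(h) = abs(ie2_step(f, 1.0, uhat(1.0), h) - uhat(1.0 + h))
println(localerr(0.1) / localerr(0.05))   # roughly 8, consistent with O(h^3)
```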

## Implementation

Runge–Kutta methods are called *multistage* methods. We can see why if we interpret (6.4.4) from the inside out. In the first stage, the method takes an Euler half-step to time \(t_i+\frac{1}{2}h\):

\[
v = u_i + \tfrac{1}{2}h f(t_i, u_i).
\]

The second stage employs an Euler-style strategy over the whole time step, but using the value from the first stage to get the slope:

\[
u_{i+1} = u_i + h f\!\left(t_i+\tfrac{1}{2}h,\; v\right).
\]

Our implementation of IE2 is shown in Function 6.4.2.

**Improved Euler method for an IVP**

```
"""
    ie2(ivp,n)

Apply the Improved Euler method to solve the given IVP using `n`
time steps. Returns a vector of times and a vector of solution
values.
"""
function ie2(ivp,n)
    # Time discretization.
    a,b = ivp.tspan
    h = (b-a)/n
    t = [ a + i*h for i in 0:n ]

    # Initialize output.
    u = fill(float(ivp.u0),n+1)

    # Time stepping.
    for i in 1:n
        uhalf = u[i] + h/2*ivp.f(u[i],ivp.p,t[i])
        u[i+1] = u[i] + h*ivp.f(uhalf,ivp.p,t[i]+h/2)
    end
    return t,u
end
```
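As a quick check, `ie2` only reads the fields `f`, `u0`, `tspan`, and `p` from its first argument, so a named tuple can stand in for an `ODEProblem` (an illustrative shortcut, not how the book normally drives the function):

```julia
# ie2 reads only the fields f, u0, tspan, p from its argument, so a named
# tuple works as a stand-in for an ODEProblem. (The function body repeats
# Function 6.4.2 above only so that this snippet runs on its own.)
function ie2(ivp, n)
    a, b = ivp.tspan
    h = (b - a)/n
    t = [a + i*h for i in 0:n]
    u = fill(float(ivp.u0), n + 1)
    for i in 1:n
        uhalf = u[i] + h/2*ivp.f(u[i], ivp.p, t[i])
        u[i+1] = u[i] + h*ivp.f(uhalf, ivp.p, t[i] + h/2)
    end
    return t, u
end

ivp = (f = (u, p, t) -> -2t*u, u0 = 2.0, tspan = (0.0, 1.0), p = nothing)
t, u = ie2(ivp, 200)
println(abs(u[end] - 2exp(-1.0)))   # error against the exact value 2e^{-1}
```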

## More Runge–Kutta methods

While it is interesting to interpret IE2 as a pair of Euler-like steps, the Taylor series derivation is the only way to see that it is more accurate than Euler, and it is also the path to deriving other methods. Moving to higher orders of accuracy requires introducing additional stages, each having free parameters so that more terms in the series may be matched. The algebra grows rapidly in complexity, though there is a sophisticated theory for keeping track of it. We do not give the derivation details.

A generic \(s\)-stage RK method takes the form

\[
\begin{aligned}
k_1 &= h f(t_i, u_i),\\
k_2 &= h f(t_i + c_2 h,\; u_i + a_{21} k_1),\\
k_3 &= h f(t_i + c_3 h,\; u_i + a_{31} k_1 + a_{32} k_2),\\
&\;\vdots\\
k_s &= h f(t_i + c_s h,\; u_i + a_{s1} k_1 + \cdots + a_{s,s-1} k_{s-1}),\\
u_{i+1} &= u_i + b_1 k_1 + \cdots + b_s k_s.
\end{aligned}
\]

This recipe is completely determined by the number of stages \(s\) and the constants \(a_{ij}\), \(b_j\), and \(c_i\). Often an RK method is presented as just a table of these numbers (a *Butcher tableau*), as in

\[
\begin{array}{c|cccc}
c_1 & & & & \\
c_2 & a_{21} & & & \\
c_3 & a_{31} & a_{32} & & \\
\vdots & \vdots & & \ddots & \\
c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} \\
\hline
& b_1 & b_2 & \cdots & b_s
\end{array}
\]

For example, IE2 is given by

\[
\begin{array}{c|cc}
0 & & \\
\tfrac{1}{2} & \tfrac{1}{2} & \\
\hline
& 0 & 1
\end{array}
\]

Here are two more two-stage, second-order methods, **modified Euler** and **Heun’s method**, respectively:

\[
\begin{array}{c|cc}
0 & & \\
1 & 1 & \\
\hline
& \tfrac{1}{2} & \tfrac{1}{2}
\end{array}
\qquad\qquad
\begin{array}{c|cc}
0 & & \\
\tfrac{2}{3} & \tfrac{2}{3} & \\
\hline
& \tfrac{1}{4} & \tfrac{3}{4}
\end{array}
\]

**Attention:** Euler, improved Euler (IE2), and modified Euler (ME2) are all distinct numerical methods.
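To make the tableau description concrete, here is a sketch (not a function from the book; argument order \((t,u)\) follows the math notation) of a single explicit RK step driven by arbitrary tableau entries \(a_{ij}\), \(b_j\), \(c_i\). With the IE2 tableau it reproduces one improved Euler step.

```julia
# One explicit RK step defined entirely by a Butcher tableau (A, b, c).
# A is strictly lower triangular, so entries with j >= i never contribute.
function rk_step(f, t, u, h, A, b, c)
    s = length(b)
    k = zeros(s)
    for i in 1:s
        k[i] = h*f(t + c[i]*h, u + sum(A[i,j]*k[j] for j in 1:s))
    end
    return u + sum(b[j]*k[j] for j in 1:s)
end

# IE2 tableau: c = (0, 1/2), a21 = 1/2, b = (0, 1)
A = [0 0; 1/2 0]
b = [0.0, 1.0]
c = [0.0, 0.5]
f(t, u) = -2t*u
u1 = rk_step(f, 0.0, 2.0, 0.1, A, b, c)   # one IE2 step from u(0) = 2
```

By hand, the same step is \(u_1 = 2 + 0.1\, f(0.05,\, 2) = 1.98\), since the Euler half-step leaves \(u\) unchanged here (\(f(0,2)=0\)).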

The most commonly used RK method, and perhaps the most popular IVP method of all, is the fourth-order one given by

\[
\begin{array}{c|cccc}
0 & & & & \\
\tfrac{1}{2} & \tfrac{1}{2} & & & \\
\tfrac{1}{2} & 0 & \tfrac{1}{2} & & \\
1 & 0 & 0 & 1 & \\
\hline
& \tfrac{1}{6} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{6}
\end{array}
\]

This formula is often referred to as *the* fourth-order RK method, even though there are many others, and we refer to it as **RK4**. Written out, the recipe is as follows:

\[
\begin{aligned}
k_1 &= h f(t_i, u_i),\\
k_2 &= h f(t_i + h/2,\; u_i + k_1/2),\\
k_3 &= h f(t_i + h/2,\; u_i + k_2/2),\\
k_4 &= h f(t_i + h,\; u_i + k_3),\\
u_{i+1} &= u_i + \tfrac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right).
\end{aligned}
\]

Our implementation is given in Function 6.4.4.

**Fourth-order Runge–Kutta for an IVP**

```
"""
    rk4(ivp,n)

Apply the common Runge-Kutta 4th order method to solve the given
IVP using `n` time steps. Returns a vector of times and a vector of
solution values.
"""
function rk4(ivp,n)
    # Time discretization.
    a,b = ivp.tspan
    h = (b-a)/n
    t = [ a + i*h for i in 0:n ]

    # Initialize output.
    u = fill(float(ivp.u0),n+1)

    # Time stepping.
    for i in 1:n
        k₁ = h*ivp.f( u[i], ivp.p, t[i] )
        k₂ = h*ivp.f( u[i]+k₁/2, ivp.p, t[i]+h/2 )
        k₃ = h*ivp.f( u[i]+k₂/2, ivp.p, t[i]+h/2 )
        k₄ = h*ivp.f( u[i]+k₃, ivp.p, t[i]+h )
        u[i+1] = u[i] + (k₁ + 2(k₂+k₃) + k₄)/6
    end
    return t,u
end
```

We solve the IVP \(u'=\sin[(u+t)^2]\) over \(0\le t \le 4\), with \(u(0)=-1\).

```
f = (u,p,t) -> sin((t+u)^2)
tspan = (0.0,4.0)
u₀ = -1.0
ivp = ODEProblem(f,u₀,tspan)
```

```
ODEProblem with uType Float64 and tType Float64. In-place: false
timespan: (0.0, 4.0)
u0: -1.0
```

We use a `DifferentialEquations` solver to construct an accurate approximation to the exact solution.

```
u_ref = solve(ivp,Tsit5(),reltol=1e-14,abstol=1e-14);
```

Now we perform a convergence study of our two Runge–Kutta implementations.

```
n = [ round(Int,2*10^k) for k in 0:0.5:3 ]
err_IE2, err_RK4 = [], []
for n in n
    t,u = FNC.ie2(ivp,n)
    push!( err_IE2, maximum(@. abs(u_ref(t)-u)) )
    t,u = FNC.rk4(ivp,n)
    push!( err_RK4, maximum(@. abs(u_ref(t)-u)) )
end
pretty_table([n err_IE2 err_RK4],["n","IE2 error","RK4 error"])
```

```
┌──────┬─────────────┬─────────────┐
│ n │ IE2 error │ RK4 error │
├──────┼─────────────┼─────────────┤
│ 2 │ 1.76903 │ 0.820651 │
│ 6 │ 0.512684 │ 0.791925 │
│ 20 │ 0.0240594 │ 0.00081269 │
│ 63 │ 0.00225327 │ 8.06216e-6 │
│ 200 │ 0.000222419 │ 7.60655e-8 │
│ 632 │ 2.22528e-5 │ 7.513e-10 │
│ 2000 │ 2.22177e-6 │ 7.45259e-12 │
└──────┴─────────────┴─────────────┘
```
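The observed orders of accuracy can be estimated directly from successive rows of the table via \(p \approx \log(e_k/e_{k+1})/\log(n_{k+1}/n_k)\). Here is a quick check using the last few table values, copied in by hand:

```julia
# Estimate the convergence order from successive errors in the table above.
n = [200, 632, 2000]
err_IE2 = [0.000222419, 2.22528e-5, 2.22177e-6]
err_RK4 = [7.60655e-8, 7.513e-10, 7.45259e-12]
order(n, e) = [ log(e[k]/e[k+1]) / log(n[k+1]/n[k]) for k in 1:length(n)-1 ]
println(round.(order(n, err_IE2), digits=1))   # ≈ [2.0, 2.0]
println(round.(order(n, err_RK4), digits=1))   # ≈ [4.0, 4.0]
```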

The amount of computational work at each time step is assumed to be proportional to the number of stages. Let’s compare on an apples-to-apples basis by using the number of \(f\)-evaluations on the horizontal axis.

```
plot([2n 4n],[err_IE2 err_RK4],m=3,label=["IE2" "RK4"],
    xaxis=(:log10,"f-evaluations"),yaxis=(:log10,"inf-norm error"),
    title="Convergence of RK methods",leg=:bottomleft)
plot!(2n,1e-5*(n/n[end]).^(-2),l=:dash,label=L"O(n^{-2})")
plot!(4n,1e-10*(n/n[end]).^(-4),l=:dash,label=L"O(n^{-4})")
```

The fourth-order variant is more efficient in this problem over a wide range of accuracy.

## Efficiency

As with rootfinding and integration, the usual point of view is that evaluations of \(f\) are the only significant computations and are therefore to be minimized in number. One of the most important characteristics of a multistage method is that each stage requires an evaluation of \(f\); that is, a single time step of an \(s\)-stage method requires \(s\) evaluations of \(f\).

For \(s\le 4\), an explicit RK method can achieve order of accuracy equal to \(s\). Since each added order multiplies the error by another factor of \(h\), trading a stage for an increase in order is a good deal. But \(s=5\), \(6\), or \(7\) gives a maximal order of accuracy of \(s-1\); this decreases to \(s-2\) for \(s=8\) and \(s=9\), etc. Fourth order, which needs only four stages, is considered adequate and the sweet spot for many applications.

## Exercises

✍ For each IVP, write out (possibly using a calculator) the first time step of the improved Euler method with \(h=0.2\).

**(a)** \(u' = -2t u, \ 0 \le t \le 2, \ u(0) = 2;\ \hat{u}(t) = 2e^{-t^2}\)

**(b)** \(u' = u + t, \ 0 \le t \le 1, \ u(0) = 2;\ \hat{u}(t) = -1-t+3e^t\)

**(c)** \((1+x^3)uu' = x^2,\ 0 \le x \le 3, \ u(0) = 1;\ \hat{u}(x) = [1+(2/3)\ln (1+x^3)]^{1/2}\)

✍ Use the modified Euler method to solve the problems in the preceding exercise.

⌨ Modify Function 6.4.4 to implement the modified Euler method. Test your function on the IVP in part (a) of Exercise 1 by solving with \(n=30,60,90,\ldots,300\) and plotting the convergence of the error at the final time together with a line showing \(O(n^{-2})\).

✍ Use Heun’s method to solve the problems in Exercise 1 above.

⌨ Modify Function 6.4.4 to implement Heun’s method. Test your function on the IVP in part (a) of Exercise 1 by solving with \(n=30,60,90,\ldots,300\) and plotting the convergence of the error at the final time together with a line showing \(O(n^{-2})\).

✍ Use RK4 to solve the problems in Exercise 1 above.

✍ Using (6.4.2) and (6.4.3), show that the modified Euler method has order of accuracy at least 2.

✍ Using (6.4.2) and (6.4.3), show that Heun’s method has order of accuracy at least 2.

⌨ For each IVP, compute the solution using Function 6.4.4. (i) Plot the solution for \(n=300\). (ii) For \(n=100,200,300,\ldots,1000\), compute the error at the final time and make a log-log convergence plot, including a reference line for fourth-order convergence.

**(a)** \(u''+ 9u = 9t, \: 0< t< 2\pi, \: u(0) = 1,\: u'(0) = 1; \: \hat{u}(t) = t+\cos (3t)\)

**(b)** \(u''+ 9u = \sin(2t), \: 0< t< 2\pi, \: u(0) = 2,\: u'(0) = 1;\) \(\hat{u}(t) = (1/5) \sin(3t) + 2 \cos (3t)+ (1/5) \sin (2t)\)

**(c)** \(u''- 9u = 9t, \: 0< t< 1, \: u(0) = 2,\: u'(0) = -1; \: \hat{u}(t) = e^{3t} + e^{-3t}-t\)

**(d)** \(u''+ 4u'+ 4u = t, \: 0< t< 4, \: u(0) = 1,\: u'(0) = 3/4; \: \hat{u}(t) = (3t+5/4)e^{-2t} + (t-1)/4\)

**(e)** \(x^2 y'' +5xy' + 4y = 0,\: 1<x<e^2, \: y(1) = 1, \: y'(1) = -1; \: \hat{y}(x) = x^{-2}( 1 + \ln x)\)

**(f)** \(2 x^2 y'' +3xy' - y = 0,\: 1<x<16, \: y(1) = 4, \: y'(1) = -1; \: \hat{y}(x) = 2(x^{1/2} + x^{-1})\)

**(g)** \(x^2 y'' -xy' + 2y = 0,\: 1<x<e^{\pi}, \: y(1) = 3, \: y'(1) = 4;\) \(\hat{y}(x) = x \left[ 3 \cos \left( \ln x \right)+\sin \left( \ln x \right) \right]\)

**(h)** \(x^2 y'' + 3xy' + 4y = 0,\: e^{\pi/12} < x < e^{\pi}, \: y(e^{\pi/12}) = 0, \: y'(e^{\pi/12}) = -6;\) \(\hat{y}(x) = x^{-1} \left[ 3 \cos \left( 3 \ln x \right)+\sin \left( 3 \ln x \right) \right]\)
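RK4 as written works on one state vector, so each second-order equation above must first be rewritten as a first-order system. As an illustrative sketch (an assumed helper, not a solution to the exercise), part (a) with \(v = [u,\, u']\) becomes \(v' = [v_2,\; 9t - 9v_1]\):

```julia
# RK4 applied componentwise to the system form of u'' + 9u = 9t,
# with state vector v = [u, u'] and v' = [v[2], 9t - 9v[1]].
f(v, p, t) = [v[2], 9t - 9v[1]]
function rk4_system(f, v0, tspan, n)
    a, b = tspan
    h = (b - a)/n
    t = [a + i*h for i in 0:n]
    v = Vector{Vector{Float64}}(undef, n + 1)
    v[1] = float.(v0)
    for i in 1:n
        k1 = h*f(v[i],        nothing, t[i])
        k2 = h*f(v[i] + k1/2, nothing, t[i] + h/2)
        k3 = h*f(v[i] + k2/2, nothing, t[i] + h/2)
        k4 = h*f(v[i] + k3,   nothing, t[i] + h)
        v[i+1] = v[i] + (k1 + 2k2 + 2k3 + k4)/6
    end
    return t, v
end

t, v = rk4_system(f, [1.0, 1.0], (0.0, 2π), 300)
println(abs(v[end][1] - (2π + 1)))   # tiny: exact solution is t + cos(3t)
```

The first component of each `v[i]` approximates \(u(t_i)\); at \(t=2\pi\) the exact value is \(2\pi + \cos(6\pi) = 2\pi + 1\).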

⌨ Do Exercise 6.3.4, but using Function 6.4.4 instead of `solve`.

✍ Consider the problem \(u'=c u\), \(u(0) = 1\) for constant \(c\) and \(t>0\).

**(a)** Find an explicit formula in terms of \(h\) and \(c\) for \(u_{i+1}/u_i\) in the modified Euler method.

**(b)** Show that if \(ch=-3\), then \(|u_i|\to\infty\) as \(i\to\infty\), while the exact solution \(\hat{u}(t)\) approaches zero as \(t\to\infty\).

[^1]: Americans tend to pronounce these German names as “run-ghuh kut-tah.”