# 10.5. Nonlinearity and boundary conditions

Collocation for nonlinear differential equations operates on the same principle as for linear problems: replace functions by vectors and replace derivatives by differentiation matrices. But because the differential equation is nonlinear, the resulting algebraic equations are as well. We will therefore need to use a quasi-Newton or similar method as part of the solution process.

We consider the TPBVP (10.1.1), reproduced here:

\[
u''(x) = \phi\bigl(x, u, u'\bigr), \quad a \le x \le b, \qquad g_1\bigl(u(a), u'(a)\bigr) = 0, \quad g_2\bigl(u(b), u'(b)\bigr) = 0.
\]

As in Section 10.4, the function \(u(x)\) is replaced by a vector \(\mathbf{u}\) of its approximated values at nodes \(x_0,x_1,\ldots,x_n\) (see Equation (10.4.2)). We define derivatives of the sampled function as in (10.4.3) and (10.4.4), using suitable differentiation matrices \(\mathbf{D}_x\) and \(\mathbf{D}_{xx}\).

The collocation equations, ignoring boundary conditions for now, are

\[
\mathbf{f}(\mathbf{u}) = \mathbf{D}_{xx} \mathbf{u} - \mathbf{r}(\mathbf{u}) = \boldsymbol{0},
\]

where

\[
r_i(\mathbf{u}) = \phi(x_i, u_i, u_i'), \qquad i = 0, \ldots, n,
\]

and \(\mathbf{u}'=\mathbf{D}_x\mathbf{u}\).

We impose the boundary conditions in much the same way as in Section 10.4. Again define the rectangular boundary-removal matrix \(\mathbf{E}\) as in (10.4.8), and replace the equations in those two rows by the boundary conditions:

\[
\begin{bmatrix} g_1(u_0,\, u_0') \\ \mathbf{E}\,\mathbf{f}(\mathbf{u}) \\ g_2(u_n,\, u_n') \end{bmatrix} = \boldsymbol{0},
\]

where \(\mathbf{f}(\mathbf{u})\) is the vector of collocation equations above. The left-hand side of (10.5.3) is a nonlinear function of the unknowns in the vector \(\mathbf{u}\), so (10.5.3) is a set of \(n+1\) nonlinear equations, amenable to solution by the techniques of Chapter 4.
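To make this structure concrete, here is a minimal self-contained sketch (not the book's Function 10.5.2) of the whole method: it uses a plain Newton iteration with an explicit Jacobian rather than a quasi-Newton method, and the test problem \(u'' = 2u^3\), \(u(0)=1\), \(u(1)=\tfrac{1}{2}\) is chosen here only because its exact solution \(u(x) = 1/(x+1)\) makes the error easy to check.

```julia
using LinearAlgebra

n = 40
h = 1/n
x = [i*h for i in 0:n]

# Second-derivative differentiation matrix; only the interior rows
# matter, since the first and last rows are overwritten by the
# boundary conditions below.
Dxx = zeros(n+1, n+1)
for i in 2:n
    Dxx[i, i-1:i+1] = [1, -2, 1] / h^2
end

# Collocation residual in the form of (10.5.3): ODE rows in the
# interior, Dirichlet conditions in the first and last rows.
function residual(u)
    f = Dxx*u - 2*u.^3
    f[1] = u[1] - 1
    f[n+1] = u[n+1] - 1/2
    return f
end

# Jacobian of the residual, with identity rows for the conditions.
function jacobian(u)
    J = Dxx - Diagonal(6*u.^2)
    J[1, :] .= 0;    J[1, 1] = 1
    J[n+1, :] .= 0;  J[n+1, n+1] = 1
    return J
end

# Plain Newton iteration on the nonlinear algebraic system.
function newton_solve(u)
    for k in 1:12
        du = jacobian(u) \ residual(u)
        u = u - du
        norm(du, Inf) < 1e-12 && break
    end
    return u
end

u = newton_solve(1 .- x/2)            # linear initial estimate
err = norm(u - 1 ./ (x .+ 1), Inf)    # should be O(h^2)
```

With \(n=40\) the error is on the order of \(10^{-4}\), consistent with second-order accuracy of the differencing.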

Given the BVP

we compare to the standard form (10.5.1) and recognize

Suppose \(n=3\) for an equispaced grid, so that \(h=\frac{1}{2}\), \(x_0=0\), \(x_1=\frac{1}{2}\), \(x_2=1\), and \(x_3=\frac{3}{2}\). There are four unknowns. We compute
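For a grid this small, the differentiation matrices can be written out explicitly. The following sketch builds them from the standard second-order centered weights in the interior and one-sided weights at the ends; these should agree with what the book's `diffmat2` constructs, but the row-by-row layout here is a reconstruction for illustration.

```julia
h = 1/2
x = [0, 1/2, 1, 3/2]

# First-derivative matrix: one-sided 3-point rows at the ends,
# centered rows in the interior.
Dx = (1/(2h)) * [ -3  4 -1  0
                  -1  0  1  0
                   0 -1  0  1
                   0  1 -4  3 ]

# Second-derivative matrix: one-sided 4-point rows at the ends,
# centered rows in the interior.
Dxx = (1/h^2) * [  2 -5  4 -1
                   1 -2  1  0
                   0  1 -2  1
                  -1  4 -5  2 ]

# Both matrices differentiate quadratics exactly:
Dx * x.^2     # equals 2x
Dxx * x.^2    # equals [2, 2, 2, 2]
```

The check at the end reflects the fact that all the rows above have at least second-order accuracy.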

## Implementation¶

Our implementation using second-order finite differences is Function 10.5.2. It’s surprisingly short, considering how general it is, because we have laid a lot of groundwork already.

**Solve a nonlinear boundary-value problem**

```
 1  """
 2      bvp(ϕ, xspan, g₁, g₂, init)
 3
 4  Use finite differences to solve a two-point boundary value problem
 5  with ODE u'' = `ϕ`(x,u,u') for x in `xspan`, left boundary condition
 6  `g₁`(u,u')=0, and right boundary condition `g₂`(u,u')=0. The vector
 7  `init` is an initial estimate for the values of the solution u at
 8  equally spaced values of x, which also sets the number of nodes.
 9
10  Returns vectors for the nodes and the values of u.
11  """
12  function bvp(ϕ, xspan, g₁, g₂, init)
13      n = length(init) - 1
14      x, Dₓ, Dₓₓ = diffmat2(n, xspan)
15      h = x[2] - x[1]
16
17      function residual(u)
18          # Residual of the ODE at the nodes.
19          du_dx = Dₓ*u        # discrete u'
20          d2u_dx2 = Dₓₓ*u     # discrete u''
21          f = d2u_dx2 - ϕ.(x, u, du_dx)
22
23          # Replace first and last values by boundary conditions.
24          f[1] = g₁(u[1], du_dx[1]) / h
25          f[n+1] = g₂(u[n+1], du_dx[n+1]) / h
26          return f
27      end
28
29      u = levenberg(residual, init)
30      return x, u[end]
31  end
```

About the code

The nested function `residual` uses differentiation matrices computed externally to it, rather than computing them anew on each invocation. As in Function 10.4.1, there is no need to form the row-deletion matrix \(\mathbf{E}\) explicitly. In lines 24–25, we divide the values of \(g_1\) and \(g_2\) by a factor of \(h\). This scales the residual components more uniformly and improves the robustness of convergence a bit.

In order to solve a particular problem, we must write a function that computes \(\phi\) for vector-valued inputs \(\mathbf{x}\), \(\mathbf{u}\), and \(\mathbf{u}'\), and functions for the boundary conditions. We also have to supply `init`, which is an estimate of the solution used to initialize the quasi-Newton iteration. Since this argument is a vector of length \(n+1\), it sets the value of \(n\) in the discretization.

Suppose a damped pendulum satisfies the nonlinear equation \(\theta'' + 0.05\theta'+\sin \theta =0\). We want to start the pendulum at \(\theta=2.5\) and give it the right initial velocity so that it reaches \(\theta=-2\) at exactly \(t=5\). This is a boundary-value problem with Dirichlet conditions \(\theta(0)=2.5\) and \(\theta(5)=-2\).

The first step is to define the function \(\phi\) that equals \(\theta''\).

```
ϕ = (t,θ,ω) -> -0.05*ω - sin(θ);
```

Next, we define the boundary conditions.

```
g₁(u,du) = u - 2.5
g₂(u,du) = u + 2;
```

The last ingredient is an initial estimate of the solution. Here we choose \(n=100\) and a linear function between the endpoint values.

```
init = collect( range(2.5,-2,length=101) );
```

We find a solution with negative initial slope, i.e., the pendulum is initially pushed back toward equilibrium.

```
t,θ = FNC.bvp(ϕ,[0,5],g₁,g₂,init)
plot(t,θ,xaxis=(L"t"),yaxis=(L"\theta(t)"),
title="Pendulum over [0,5]")
```

If we extend the time interval longer for the same boundary values, then the initial slope must adjust.

```
t,θ = FNC.bvp(ϕ,[0,8],g₁,g₂,init)
plot(t,θ,xaxis=(L"t"),yaxis=(L"\theta(t)"),
title="Pendulum over [0,8]")
```

This time, the pendulum is initially pushed toward the unstable equilibrium in the upright vertical position before gravity pulls it back down.

The initial solution estimate can strongly influence how quickly a solution is found, or whether the quasi-Newton iteration converges at all. In situations where multiple solutions exist, the initialization can determine which is found.

We look for a solution to the parameterized membrane deflection problem from Example 10.1.4,

\[
w'' + \frac{1}{r} w' = \frac{\lambda}{w^2}, \quad 0 < r \le 1, \qquad w'(0) = 0, \; w(1) = 1.
\]

Here is the problem definition. We use a truncated domain to avoid division by zero at \(r=0\).

```
domain = [eps(),1]
λ = 0.5
ϕ = (r,w,dwdr) -> λ/w^2 - dwdr/r
g₁(w,dw) = dw
g₂(w,dw) = w-1;
```

First we try a constant function as the initialization.

```
init = ones(301)
r,w₁ = FNC.bvp(ϕ,domain,g₁,g₂,init)
plot(r,w₁,xaxis=(L"r"),yaxis=(L"w(r)"),
title="Solution of the membrane problem")
```

It’s not necessary that the initialization satisfy the boundary conditions. In fact, by choosing a different constant function as the initial guess, we arrive at another valid solution.

```
init = 0.5*ones(301)
r,w₂ = FNC.bvp(ϕ,domain,g₁,g₂,init)
plot!(r,w₂,title="Two solutions of the membrane problem")
```

## Parameter continuation¶

Sometimes the best way to get a useful initialization is to use the solution of a related easier problem, a technique known as **parameter continuation**. In this approach, one solves the problem at an easy parameter value, and gradually changes the parameter value to the desired value. After each change, the most recent solution is used to initialize the iteration at the new parameter value.
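As a self-contained illustration of the idea (this is not the book's code), the following sketch applies continuation to the Bratu problem \(u'' + \lambda e^u = 0\), \(u(0)=u(1)=0\), a problem assumed here only for demonstration. It steps \(\lambda\) from an easy value up to \(\lambda=3\), with each solution initializing the plain Newton iteration at the next parameter value.

```julia
using LinearAlgebra

n = 80
h = 1/n
x = [i*h for i in 0:n]

# Second-derivative differentiation matrix (interior rows only; the
# end rows are replaced by the boundary conditions).
Dxx = zeros(n+1, n+1)
for i in 2:n
    Dxx[i, i-1:i+1] = [1, -2, 1] / h^2
end

# Collocation residual with Dirichlet conditions in the end rows.
function residual(u, λ)
    f = Dxx*u + λ*exp.(u)
    f[1] = u[1]
    f[n+1] = u[n+1]
    return f
end

function jacobian(u, λ)
    J = Dxx + Diagonal(λ*exp.(u))
    J[1, :] .= 0;    J[1, 1] = 1
    J[n+1, :] .= 0;  J[n+1, n+1] = 1
    return J
end

function newton_solve(u, λ)
    for k in 1:20
        du = jacobian(u, λ) \ residual(u, λ)
        u = u - du
        norm(du, Inf) < 1e-12 && break
    end
    return u
end

# Continuation in λ: the most recent solution initializes the
# iteration at the next, harder parameter value.
u = zeros(n+1)
for λ in 0.5:0.5:3.0
    global u = newton_solve(u, λ)
end
```

Starting Newton at \(\lambda=3\) directly from the zero function may converge slowly or not at all; the graded steps keep each starting guess close to the solution being sought.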

We solve the stationary **Allen–Cahn equation**,

\[
\epsilon u'' = u^3 - u, \quad 0 \le x \le 1, \qquad u'(0) = 0, \; u(1) = 1.
\]

```
ϕ = (x,u,dudx) -> (u^3 - u) / ϵ;
g₁(u,du) = du
g₂(u,du) = u-1;
```

Finding a solution is easy at larger values of \(\epsilon\).

```
ϵ = 0.02
init = collect( range(-1,1,length=141) )
x,u₁ = FNC.bvp(ϕ,[0,1],g₁,g₂,init)
plot(x,u₁,label=L"\epsilon = 0.02",leg=:bottomright,
xaxis=(L"x"),yaxis=(L"u(x)"),title="Allen–Cahn solution")
```

However, finding a good initialization is not trivial for smaller values of \(\epsilon\). Note below that the iteration stops without converging to a solution.

```
ϵ = 0.002;
x,z = FNC.bvp(ϕ,[0,1],g₁,g₂,init);
```

```
┌ Warning: Maximum number of iterations reached.
└ @ FundamentalsNumericalComputation /Users/driscoll/.julia/dev/FundamentalsNumericalComputation/src/chapter04.jl:166
```

The iteration succeeds if we use the first solution instead as the initialization here.

```
x,u₂ = FNC.bvp(ϕ,[0,1],g₁,g₂,u₁)
plot!(x,u₂,label=L"\epsilon = 0.002")
```

In this case we can continue further.

```
ϵ = 0.0005
x,u₃ = FNC.bvp(ϕ,[0,1],g₁,g₂,u₂)
plot!(x,u₃,label=L"\epsilon = 0.0005")
```

## Exercises¶

✍ This exercise is about the nonlinear boundary-value problem

\[ u'' = \frac{3(u')^2}{u}, \quad u(-1) = 1, \; u(2) = \frac{1}{2}. \]

**(a)** Verify that the exact solution is \(u(x) = (x+2)^{-1/2}\).

**(b)** Write out the finite-difference approximation (10.5.3) with a single interior point (\(n=2\)).

**(c)** Solve the equation of part (b) for the lone interior value \(u_1\).

⌨ **(a)** Use Function 10.5.2 to solve the problem of Exercise 1 for \(n=80\). In a 2-by-1 subplot array, plot the finite-difference solution and its error.

**(b)** For each \(n=10,20,40,\ldots,640\), find the infinity norm of the error on the same problem. Make a log-log plot of error versus \(n\) and include a graphical comparison to second-order convergence.

⌨ (Adapted from [AP98].) Use Function 10.5.2 twice with \(n=200\) to solve

\[ u'' + e^{u+0.5} = 0, \quad u(0) = u(1) = 0, \]

with initializations \(7 \sin(x)\) and \(\frac{1}{4} \sin(x)\). Plot the solutions together on one graph.

⌨ Use Function 10.5.2 to compute the solution to the Allen–Cahn equation in Demo 13.4.5 with \(\epsilon=0.02\). Determine numerically whether it is antisymmetric around the line \(x=0.5\)—that is, whether \(u(1-x)=-u(x)\). You should supply evidence that your answer is independent of \(n\).

⌨ Consider the pendulum problem from Example 10.1.3 with \(g=L=1\). Suppose we want to release the pendulum from rest such that \(\theta(5)=\pi/2\). Use Function 10.5.2 with \(n=200\) to find one solution that passes through \(\theta=0\), and another solution that does not. Plot \(\theta(t)\) for both cases together.

⌨ The BVP

\[ u'' = x \operatorname{sign}(1-x) u, \quad u(-6)=1, \; u'(6)=0, \]forces \(u''\) to be discontinuous at \(x=1\), so finite differences may not converge to the solution at their nominal order of accuracy.

**(a)** Solve the problem using Function 10.5.2 with \(n=1400\), and make a plot of the solution. Store the value at \(x=6\) for use as a reference high-accuracy solution.

**(b)** For each \(n=100,200,300,\ldots,1000\), apply Function 10.5.2, and compute the error at \(x=6\). Compare the convergence graphically to second order.

⌨ The following nonlinear BVP was proposed by Carrier (for the special case \(b=1\) in [Car70]):

\[ \epsilon u'' + 2(1-x^2)u + u^2 = 1, \quad u(-1) = u(1) = 0. \]

In order to balance the different components of the residual, it's best to implement each boundary condition numerically as \(u/\epsilon=0\).

**(a)** Use Function 10.5.2 to solve the problem with \(\epsilon=0.003\), \(n=200\), and an initial estimate of all zeros. Plot the result; you should get a solution with 9 local maxima.

**(b)** Starting with the result of part (a) as an initialization, continue the parameter through the sequence

\[ \epsilon = 3\times 10^{-3}, \; 3\times 10^{-2.8}, \; 3\times 10^{-2.6}, \ldots, \; 3\times 10^{-1}. \]

The most recent solution should be used as the initialization for each new value of \(\epsilon\). Plot the end result for \(\epsilon=0.3\); it should have one interior local maximum.

**(c)** Starting with the last solution of part (b), reverse the continuation steps to return to \(\epsilon=0.003\). Plot the result, which is an entirely different solution from part (a).

⌨ Demo 13.4.3 finds two solutions at \(\lambda=0.5\). Continue both solutions by taking 50 steps from \(\lambda=0.5\) to \(\lambda=0.79\). Make a plot with \(\lambda\) on the horizontal axis and \(w(0)\) on the vertical axis, with one point to represent each solution found. You should get two paths that converge as \(\lambda\) approaches \(0.79\) from below.

The `collect` function turns a range object into a true vector.