10

Introduction to Finite Difference Schemes for Ordinary Differential Equations

Consideration of problems in the natural sciences and engineering often leads to differential equations, and the study of such problems requires solving these equations. In some cases, it is possible to write their solutions in terms of known elementary functions. As a rule, however, this is impossible in principle, so the construction of a solution in terms of an explicit closed-form expression cannot be considered a standard method for solving differential equations. Nevertheless, one cannot say that the analytic approach has lost its value. It remains a necessary instrument for the study of simplified, so-called model problems. The study of model problems allows one to draw some conclusions about the nature of the more complicated original problems.

10.1 Elementary Example of a Finite Difference Scheme

For the sake of simplicity, consider an example of a difference scheme for the numerical solution of the equation

du/dx = −x u/(1 + x),   0 ≤ x ≤ l,   u(0) = 1.   (10.1)

Firstly, we introduce a set of points on the interval [0, l]: 0 = x0 < x1 < … < xN = l. This is an example of the simplest computational grid, and the distance between the points, h_n = x_{n+1} − x_n, is called the step size. In what follows we shall consider the simpler case of the uniform grid, when h_n = h = const. The approximate solution of equation (10.1) is determined at the grid points; that is, it is a set of values

u_n ≈ u(x_n),   n = 0, 1, …, N.

This set of values is often called the grid function. Secondly, we need to replace the derivative by the difference approximation. The simplest way to do this is based on the derivative definition:

du/dx = lim_{Δx→0} [u(x + Δx) − u(x)]/Δx,

and it is permissible to drop the limit if Δx is taken sufficiently small. After introducing this difference approximation we get, in place of differential equation (10.1), a difference equation at every point of the grid (x → x_n, Δx → h):

(u_{n+1} − u_n)/h = −x_n u_n/(1 + x_n),   n = 0, …, N − 1.   (10.2)

The set of difference equations defined on some grid is called the finite difference scheme (or simply the difference scheme). Difference scheme (10.2) can be used to compute the approximate solution. To implement this computation we rewrite (10.2) in the form of a recursion relation:

u_{n+1} = u_n − h x_n u_n/(1 + x_n),   n = 0, …, N − 1.

Starting with

u_0 = u(0) = 1

and sequentially taking n = 0, …, N − 1, we find the approximate solution at all grid points. Generally speaking, there are many ways to construct a difference scheme for a given differential equation. For instance, we can write the following difference schemes for equation (10.1):

(u_{n+1} − u_n)/h = −x_{n+1} u_{n+1}/(1 + x_{n+1})   (10.3)

with the recursion form

u_{n+1} = u_n (1 + x_{n+1})/(1 + (1 + h) x_{n+1}),

or

(u_{n+1} − u_{n−1})/(2h) = −x_n u_n/(1 + x_n)   (10.4)

with the recursion form

u_{n+1} = u_{n−1} − 2h x_n u_n/(1 + x_n).

Naturally, these schemes have different properties. We can see these differences in Figure 10.1, which shows the results of computations with difference schemes (10.2), (10.3), and (10.4), together with the exact solution of equation (10.1).

image

Figure 10.1 Approximate solutions for problem (10.1) generated by different schemes (l = 5, h = 0.2); open circles, exact solution u(x) = (1 + x)exp(−x); solid line, scheme (10.2); short-dashed line, scheme (10.3); dashed line, scheme (10.4).
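As a quick cross-check of Figure 10.1, schemes (10.2) and (10.3) can be sketched in a few lines of Python. The model equation du/dx = −xu/(1 + x) is the one encoded in Listing 10.1, and the exact solution is the one quoted in the figure caption; the error thresholds in the final comparison are illustrative, not sharp bounds.

```python
import math

def exact(x):
    # Exact solution quoted in the caption of Figure 10.1
    return (1.0 + x) * math.exp(-x)

def f(x, u):
    # Right-hand side of the model equation du/dx = -x*u/(1 + x)
    return -x * u / (1.0 + x)

l, h = 5.0, 0.2
N = int(round(l / h))
xs = [i * h for i in range(N + 1)]

# Scheme (10.2): explicit Euler, u_{n+1} = u_n + h*f(x_n, u_n)
u = [1.0]
for n in range(N):
    u.append(u[n] + h * f(xs[n], u[n]))

# Scheme (10.3): implicit Euler, solved exactly for u_{n+1} as in Listing 10.1
v = [1.0]
for n in range(N):
    x1 = xs[n + 1]
    v.append(v[n] * (1.0 + x1) / (1.0 + (1.0 + h) * x1))

err_expl = max(abs(a - exact(x)) for x, a in zip(xs, u))
err_impl = max(abs(a - exact(x)) for x, a in zip(xs, v))
```

Both schemes are first-order, so at this coarse step both stay within a few percent of the exact curve, as the figure suggests.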

In the following program, we solve the problem in equation 10.1 using the strategies given in equations 10.3 and 10.4.

Listing 10.1

1. l = 5; h = 0.1; x = 0:h:l; n = length(x);
2. u = zeros(1, n); u(1) = 1;
3. for i = 2:n
4. u(i) = ((1 + x(i))/(1 + (1 + h)*x(i))) * u(i-1);
5. end;
6. plot(x, u); hold on;
7. v = zeros(1, n); v(1) = 1; v(2) = 1;
8. for i = 2:n-1
9. v(i+1) = v(i-1) - (2*h*x(i)*v(i))/(1 + x(i));
10. end;
11. plot(x, v); hold off;

Line 1 sets the length of the interval, l, defines the step size h, and creates the vector x. Line 2 initializes the output vector u and sets the initial value. Lines 3–5 implement the recursive form of equation (10.3). Line 7 initializes the output vector for the second scheme, which needs two starting values. Lines 8–10 implement the recursive form of equation (10.4). The resulting plot shows both solutions, and they are the same as the corresponding curves in Figure 10.1.

10.2 Approximation and Stability

In this section, we consider definitions of approximation, stability, and convergence. Suppose that a differential initial- or boundary value problem is given on some interval I. This means that we want to find the solution u(x) of a differential equation (or system of equations) on the interval I under auxiliary conditions on u(x) at one or both ends of this interval. We will write the problem in the symbolic form

Du = f,   (10.5)

where D is a given differential operator and ƒ is a given right-hand side. Thus, for example, to write problem (10.1) in form (10.5) we need only take

Du ≡ du/dx + x u/(1 + x)  and  f ≡ 0, together with the initial condition u(0) = 1.

We will assume that the solution u(x) of problem (10.5) on the interval I exists. In order to calculate this solution by the method of finite differences, we choose a finite set of points on the interval, I. This set is called a grid and will be designated as Ih = {xn} (Figure 10.2).

image

Figure 10.2

image

Figure 10.3 One-dimensional grid.

Next, we set out to find not the solution u(x) of problem (10.5), but a grid function

u(e) = {u(x_n)},   n = 0, 1, …, N,

which is a set of values of the solution at the points of the grid, Ih. It is assumed that the grid Ih depends on a parameter h = x_{n+1} − x_n, which can take on positive values as small as desired. We are concerned with the computation of the grid function u(e) because, as the grid is refined, that is, as h→0, it gives us an increasingly complete representation of the solution. Via interpolation it is possible, with increasing accuracy as h→0, to construct the solution everywhere within I. However, we will not succeed in computing u(e) exactly. Instead of the grid function u(e), we look for another grid function

u(a) = {u_n(a)},   n = 0, 1, …, N,

which converges to u(e) as the grid is refined. For this purpose, one can make use of difference equations. Suppose that for the approximate computation of the solution of problem (10.5), we have constructed a difference scheme, which we will write symbolically, by analogy with equation (10.5), in the form

Dh u(a) = f(a).   (10.6)

For example, difference scheme (10.2) may be written in the form (10.6) if we set

Dh u(a) ≡ (u_{n+1}(a) − u_n(a))/h + x_n u_n(a)/(1 + x_n),  n = 0, …, N − 1, together with u_0(a) = 1, and f(a) ≡ 0.

Before proceeding further, we need some way of measuring a grid function. In fact, we already have one, because grid functions defined on the grid Ih are elements of the linear space RN (vectors). Therefore, all definitions considered in section 8.1 are applicable here, and the measure of the deviation of u(a) from u(e) is taken to be the norm of their difference, that is, the quantity ||u(e) − u(a)||.

When u(e) is substituted into difference scheme (10.6), some sort of residual will form

Dh u(e) = f(a) + δf(a).

We will say that the difference scheme Dh u(a) = f(a) approximates the problem Du = f on the solution u(x) if ||δf(a)|| → 0 as h→0. If, in addition, the inequality

||δf(a)|| ≤ C h^k

is satisfied, where C > 0 and k > 0 are constants, then we will say that the approximation (difference scheme) is of order k with respect to the magnitude of h.

In the above examples, we constructed difference schemes by replacing derivatives in the differential equation with difference expressions. This is a general approach, such that for any differential problem with a smooth enough solution u(x), we can construct a difference scheme with any prescribed order of approximation. Consider examples for constructing some difference expressions that are useful in practice:

(u_{n+1} − u_n)/h = (du/dx)(x_n) + O(h),
(u_n − u_{n−1})/h = (du/dx)(x_n) + O(h),
(u_{n+1} − u_{n−1})/(2h) = (du/dx)(x_n) + O(h²),
(u_{n+1} − 2u_n + u_{n−1})/h² = (d²u/dx²)(x_n) + O(h²).
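A hedged sketch of how such difference quotients behave in practice: the following Python fragment estimates the empirical order of the one-sided and central quotients by halving h. The test function sin and the point x0 = 1 are arbitrary choices for illustration.

```python
import math

def forward(f, x, h):
    # First-order one-sided quotient: error O(h)
    return (f(x + h) - f(x)) / h

def central(f, x, h):
    # Second-order central quotient: error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def second(f, x, h):
    # Second-order quotient for the second derivative: error O(h^2)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x0 = 1.0   # arbitrary test point; for f = sin, f'(x0) = cos(x0), f''(x0) = -sin(x0)
e_fwd = [abs(forward(math.sin, x0, h) - math.cos(x0)) for h in (0.1, 0.05)]
e_cen = [abs(central(math.sin, x0, h) - math.cos(x0)) for h in (0.1, 0.05)]
ratio_fwd = e_fwd[0] / e_fwd[1]   # halving h halves the error: order 1
ratio_cen = e_cen[0] / e_cen[1]   # halving h quarters the error: order 2
```

The observed ratios, close to 2 and 4 respectively, are the numerical signature of first- and second-order approximation.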

Consider the perturbed difference problem

Dh z(a) = f(a) + e(a),   (10.7)

obtained from problem (10.6) through the addition of a perturbation e(a) to the right-hand side. We will call difference scheme (10.6) stable if there exist numbers h0 > 0 and δ > 0 such that for any h < h0 and any ||e(a)|| < δ, difference problem (10.7) has a unique solution z(a). This solution deviates from the solution u(a) by a grid function z(a) − u(a) satisfying the bound ||z(a) − u(a)|| ≤ C||e(a)||, where the constant C does not depend on h. This inequality means that a small perturbation e(a) of the right-hand side of difference scheme (10.6) evokes only a small perturbation in the solution.

When the differential operator in (10.5) is linear, the following definition of stability is equivalent to that given above. We will call the difference scheme (10.6) stable if for any f(a) the equation Dhu(a) = f(a) has the unique solution u(a) and

||u(a)|| ≤ C ||f(a)||,   (10.8)

where C is some constant not dependent on h.

In practice, we are concerned with how strongly the approximate solution deviates from the exact one and how this deviation can be made sufficiently small.

Suppose that difference scheme (10.6) approximates problem (10.5) on the solution u(x) to order h^k and is stable. Then the approximate solution u(a) converges to u(e); that is, ||u(a) − u(e)|| → 0 as h → 0. In addition, the inequality

||u(a) − u(e)|| ≤ C h^k

is satisfied, where C is some constant not depending on h. In this case we say that convergence is of order h^k, or that difference scheme (10.6) has accuracy of kth order. The requirement that a difference scheme be convergent is fundamental. When this requirement is met, the solution u(e) can be approximated to any prescribed accuracy, within machine precision, if h is taken sufficiently small.
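The statement that approximation plus stability yields convergence of order h^k can be probed empirically. The Python sketch below applies the first-order scheme (10.2) to the model equation du/dx = −xu/(1 + x) from section 10.1 and checks that halving h roughly halves the maximum error; the bracketing constants in the check are illustrative.

```python
import math

def euler_error(h, l=1.0):
    """Max-norm error of the first-order scheme (10.2) applied to
    du/dx = -x*u/(1 + x), u(0) = 1, whose exact solution is (1 + x)*exp(-x)."""
    n = int(round(l / h))
    x, u, err = 0.0, 1.0, 0.0
    for _ in range(n):
        u += h * (-x * u / (1.0 + x))   # one Euler step
        x += h
        err = max(err, abs(u - (1.0 + x) * math.exp(-x)))
    return err

e1, e2 = euler_error(0.02), euler_error(0.01)
ratio = e1 / e2   # close to 2 for a scheme of accuracy O(h)
```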

10.3 Numerical Solution of Initial Value Problems

In the first part of this section we consider various methods for approximating the solution u(x) to a problem of the form

du/dx = f(x, u),   a ≤ x ≤ b,   (10.9)

subject to an initial condition u(a) = α.

Later in the section we deal with the extension of those methods to a system of first-order differential equations in the form

du_i/dx = f_i(x, u_1, …, u_N),   i = 1, …, N,   (10.10)

or

du/dx = f(x, u)

for a ≤ x ≤ b, subject to the initial conditions

u_i(a) = α_i,   i = 1, …, N.

We assume that the initial value problem (10.10) (or (10.9)) has a unique and stable solution. The solution u(x) is called stable if for any ε > 0 there is a δ > 0 such that any other solution u*(x) satisfying ||u(a) − u*(a)|| ≤ δ also satisfies ||u(x) − u*(x)|| ≤ ε for all x ≥ a. The solution u(x) is called asymptotically stable if, in addition to being stable, it satisfies ||u(x) − u*(x)|| → 0 as x → ∞. The solution u(x) is called unperturbed and u*(x) perturbed. In other words, stability means that a small perturbation in the initial data results in a small perturbation in the solution.

Before describing the methods for approximating the solution to problem (10.9), we consider one of the methods for analyzing stability of difference schemes for initial value problems.

10.3.1 Stability of difference schemes for linear problems

In solving initial-value problems, the grid function u(a) is computed in moving sequentially from one node of the grid to another neighboring node. If we can get a bound on the growth of the solution u(a) = {un(a)} after each such move, we will have at our disposal one of the most widely used methods for the study of stability. We develop this method for a simple test equation,

du/dx = −γu,   γ = const > 0,   x ≥ 0,   u(0) = u_0.   (10.11)

Any difference scheme for this equation may be represented in the following canonical form

y_{n+1} = R y_n,   (10.12)

where y_n (generally a vector) depends on u(a), and R = R(h) is the transition operator (generally some matrix). In this case ||u(a)|| = max_n ||y_n|| and ||f(a)|| = ||y_0||. It is easy to obtain that y_n = R^n y_0. Thus we can write

||u(a)|| = max_n ||y_n|| = max_n ||R^n y_0||.

According to our definition of stability (10.8), we get the following condition

max_n ||R^n y_0|| ≤ C ||y_0||,

or

||R^n|| ≤ C,   n = 0, 1, …, N,

which means that the norm of powers of the transition operator is bounded. We certainly satisfy this condition for any N if imageRimage ≤ 1. It is known from linear algebra that the spectral radius of a matrix is less than or equal to any norm of the matrix. Taking this into account, we obtain the necessary criterion for stability of a difference scheme for problem (10.11):

|λ_m(R)| ≤ 1,   (10.13)

where λm(R) are eigenvalues of the transition operator R.

The stability regions of the various schemes considered below, in the case of test equation (10.11), are given in Table 10.1. The stability condition has the form γh ≤ β, so the table provides values of the parameter β. The sign ∞ denotes unconditional stability.

Table 10.1

Stability regions

image
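Criterion (10.13) is easy to observe numerically. For the Euler scheme applied to the test equation du/dx = −γu, the transition operator is the scalar R = 1 − γh, so the scheme is stable precisely when h ≤ 2/γ. The Python sketch below (with an arbitrary γ = 10) shows decay on one side of that threshold and blow-up on the other.

```python
def euler_test_eq(gamma, h, steps, u0=1.0):
    """Euler scheme for the test equation du/dx = -gamma*u:
    u_{n+1} = (1 - gamma*h) * u_n, i.e. transition operator R = 1 - gamma*h."""
    R = 1.0 - gamma * h
    u = u0
    for _ in range(steps):
        u = R * u
    return u

gamma = 10.0                                   # Euler is stable for h <= 2/gamma = 0.2
decay = abs(euler_test_eq(gamma, 0.15, 100))   # |R| = 0.5 < 1: solution decays
growth = abs(euler_test_eq(gamma, 0.25, 100))  # |R| = 1.5 > 1: solution blows up
```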

This method is specifically designed for test equation (10.11). However, it can be used for the analysis of difference schemes for equation (10.9). Suppose that the desired integral curve of equation (10.9) passes through the point with coordinates x = x*, u* = u(x*). Near this point we have

f(x, u) ≈ f(x*, u*) + (∂f/∂u)(x*, u*)(u − u*) + (∂f/∂x)(x*, u*)(x − x*).

Therefore, equation (10.9), to a certain accuracy, may be replaced by equation

du/dx = −γ* u + φ(x),

where

γ* = −(∂f/∂u)(x*, u*),

φ(x) = f(x*, u*) − (∂f/∂u)(x*, u*) u* + (∂f/∂x)(x*, u*)(x − x*).

Here we assume that differential equation (10.9) is stable, so ∂f/∂u is nonpositive. This equation has the form of our test equation (10.11) (the right-hand function φ(x) can be ignored because it has no effect on stability), so the results of the stability analysis of difference schemes for that equation may be applied in the general case of equation (10.9). Any conditionally stable scheme applied to test equation (10.11) has a restriction on the step size, h ≤ β/γ, β = const > 0. Of course, at different points of the integral curve the value of the coefficient γ = γ* will differ. Therefore, we should modify this condition, taking into account not only one value of γ*, but a whole set of such values which sample the range of variation of ∂f/∂u along the integral curve. Two ways to do this follow:

1) we can compute with the variable step size hn, which satisfies the condition

h_n ≤ β/γ_n,   γ_n = |(∂f/∂u)(x_n, u_n)|;

2) if we can estimate the value of u(x) on the integration interval, then we can choose the step size from the condition

h ≤ β/γ_max,   γ_max = max_{a ≤ x ≤ b} |∂f/∂u|,

and this value of h provides stability of a difference scheme for the whole integration interval.

In the majority of cases encountered in practice, such approaches turn out to be good enough to achieve stability.

10.3.2 Runge-Kutta methods

The simplest Runge-Kutta (RK) scheme for problem (10.9) is one we have already met (see scheme (10.2)). This is the Euler scheme

u_{n+1} = u_n + h f(x_n, u_n),   u_0 = α,   (10.14)

which possesses first-order approximation (and accuracy). First-order schemes provide rather slow convergence, and this results in a considerable amount of computational work in order to achieve the required accuracy. The explicit RK schemes with higher-order approximation are constructed in the following manner:

k_1 = f(x_n, u_n),
k_l = f(x_n + c_l h, u_n + h Σ_{m=1}^{l−1} a_{lm} k_m),   l = 2, …, s,
u_{n+1} = u_n + h Σ_{l=1}^{s} b_l k_l,   (10.15)

where s is called the number of stages, and b_l, c_l, a_{lm} (l = 1, …, s; m = 1, …, l − 1) are some real parameters. Upon specifying the number of stages, we can determine these parameters in such a way as to provide the highest possible order of approximation. The Runge-Kutta schemes (10.15) may be symbolically represented in the form of the Butcher table:

c_1 |
c_2 | a_21
c_3 | a_31  a_32
 …  |  …
c_s | a_s1  a_s2  …  a_s,s−1
----+------------------------
    | b_1   b_2   …  b_s−1   b_s
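Scheme (10.15) can be coded once, generically, and then driven by any Butcher table. The Python sketch below is one possible reading of (10.15); the tableau filled in is the classical fourth-order one, and the test problem u' = −u is an arbitrary choice for illustration.

```python
import math

def rk_step(f, x, u, h, A, b, c):
    """One step of the generic explicit RK scheme (10.15):
    k_l = f(x + c_l*h, u + h * sum_{m<l} a_{lm} * k_m), then
    u_new = u + h * sum_l b_l * k_l."""
    s = len(b)
    k = []
    for l in range(s):
        ul = u + h * sum(A[l][m] * k[m] for m in range(l))
        k.append(f(x + c[l] * h, ul))
    return u + h * sum(b[l] * k[l] for l in range(s))

# Classical fourth-order Butcher table (one member of the RK4 family)
A = [[0, 0, 0, 0],
     [0.5, 0, 0, 0],
     [0, 0.5, 0, 0],
     [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 0.5, 0.5, 1]

f = lambda x, u: -u    # illustrative test problem u' = -u, u(0) = 1
x, u, h = 0.0, 1.0, 0.1
for _ in range(10):
    u = rk_step(f, x, u, h, A, b, c)
    x += h
err = abs(u - math.exp(-1.0))   # fourth-order accurate: tiny even at h = 0.1
```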

There is a family of RK schemes for each order of approximation. For example, two schemes with the second-order approximation are

image

(10.16)

or

image

Next, several higher-order schemes are exemplified. The table

image

(10.17)

defines the coefficient for the third-order scheme, and tables

image

(10.18)

and

image

define coefficients for the fourth-order schemes. The explicit Runge-Kutta schemes are conditionally stable (see Table 10.1). Those results are valid not only for schemes (10.16), (10.17), and (10.18), but also for a variety of other schemes because all p-stage explicit RK schemes of order p have the same stability regions.

The following function implements the fourth-order RK scheme. The first argument, f, is a handle to a function implementing the right-hand side f(x, u) of equation (10.9). The second argument, range, defines the lowest and highest values of the independent variable, x. The third argument, initial_value, is the initial value of the dependent variable, u. The last argument, N, defines the number of steps, that is, the number of partitions of the range.

Listing 10.2

1. function [x y] = rk4(f, range, initial_value, N)
2. % This method computes the rk4 approximation of the
3. % input function f in the range [a,b] with the initial
4. % condition y(a) = initial_value and number of steps N
5. a = range(1); b = range(2); h = (b-a)/N;
6. x = a:h:b; y = zeros(1, N + 1); y(1) = initial_value;
7. for n = 1:length(y)-1
8. f1 = h * feval(f, x(n), y(n));
9. f2 = h * feval(f, x(n) + h/2, y(n) + f1/2);
10. f3 = h * feval(f, x(n) + h/2, y(n) + f2/2);
11. f4 = h * feval(f, x(n) + h, y(n) + f3);
12. y(n + 1) = y(n) + (f1 + 2*f2 + 2*f3 + f4)/6;
13. end;
14. x = x';
15. y = y';

Lines 8–11 make four calls to the “feval” function. This is a rather expensive function, since it dynamically dispatches to the passed-in function and evaluates it. These lines are enclosed in a “for” loop which is executed N times, which multiplies the number of calls to “feval” by N. This rather clean implementation leaves a lot of room for performance improvement. A simple approach is to replace each “feval” call with an explicit calculation of the function f.

Listing 10.3

1. function [x y] = rk4_2(range, initial_value, N)
2. a = range(1); b = range(2); h = (b-a)/N;
3. x = a:h:b; y = zeros(1, N + 1); y(1) = initial_value;
4. for n = 1:length(y)-1
5. t1 = -(x(n)*y(n))/(1 + x(n)); f1 = h*t1;
6. t2 = -((x(n) + h/2)*(y(n) + f1/2))/(1 + (x(n) + h/2)); f2 = h*t2;
7. t3 = -((x(n) + h/2)*(y(n) + f2/2))/(1 + (x(n) + h/2)); f3 = h*t3;
8. t4 = -((x(n) + h)*(y(n) + f3))/(1 + (x(n) + h)); f4 = h*t4;
9. y(n + 1) = y(n) + (f1 + 2*f2 + 2*f3 + f4)/6;
10. end;
11. x = x'; y = y';

The t’s in lines 5–8 of the above listing are the explicitly calculated values of the function f in equation (10.1). The disadvantage is that this function cannot be applied to an arbitrary problem, unlike the previous implementation. An even more efficient implementation of the RK scheme is shown in the following function.

Listing 10.4

1. public static List jrk4_2(IMatrix range, IMatrix initialValue, IMatrix n) {
2. double a = range.getMatrixElement(1, 1).getReal();
3. double b = range.getMatrixElement(1, 2).getReal();
4. int N = (int) n.getMatrixElement(1).getReal();
5. double h = (b - a) / N;
6. double[][] x = new double[N + 1][1]; double[][] y = new double[N + 1][1];
7. x[0][0] = a; y[0][0] = initialValue.getMatrixElement(1, 1).getReal();
8. double t1, t2, t3, t4, f1, f2, f3, f4;
9. for (int i = 1; i <= N; i++) {
10. x[i][0] = x[i-1][0] + h;
11. t1 = -(x[i-1][0]*y[i-1][0])/(1 + x[i-1][0]); f1 = h*t1;
12. t2 = -((x[i-1][0] + h/2)*(y[i-1][0] + f1/2))/(1 + (x[i-1][0] + h/2)); f2 = h*t2;
13. t3 = -((x[i-1][0] + h/2)*(y[i-1][0] + f2/2))/(1 + (x[i-1][0] + h/2)); f3 = h*t3;
14. t4 = -((x[i-1][0] + h)*(y[i-1][0] + f3))/(1 + (x[i-1][0] + h)); f4 = h*t4;
15. y[i][0] = y[i-1][0] + (f1 + 2*f2 + 2*f3 + f4)/6;
16. }
17. List rtn = new ArrayList();
18. rtn.add(alg.createMatrix(x));
19. rtn.add(alg.createMatrix(y));
20. return rtn; }

This implementation is the Java translation of the previous m-script implementation; it is much faster, but at the cost of code complexity.

10.3.3 Adams type methods

To obtain u_{n+1} by a Runge-Kutta scheme, with u_n given, one must evaluate the function f(x, u) s times at some intermediate points. The computed s values are then not used any further. In the Adams schemes, to compute the next value u_{n+1} we include not only u_n, but also some approximations prior to u_n. In addition, computation of the value u_{n+1} requires not more than one evaluation of f(x, u), regardless of the order of approximation. The Adams schemes may be obtained as follows. Suppose u(x) is the solution of problem (10.9). If we define f(x, u(x)) = F(x), then

u(x_{n+1}) = u(x_n) + ∫_{x_n}^{x_{n+1}} F(x) dx.

It is known that there is one and only one polynomial Pk(x) of order not higher than k, which at the points xn, xn–1, …, xn–k takes on the values of F(xn), F(xn–1), …, F(xn–k), respectively. This polynomial, for a sufficiently smooth function F(x), deviates from F(x) on the interval [xn, xn + 1] by a quantity of order hk + 1, so that

u(x_{n+1}) = u(x_n) + ∫_{x_n}^{x_{n+1}} P_k(x) dx + O(h^{k+2}).

The explicit Adams schemes (Adams-Bashforth methods) have the form

u_{n+1} = u_n + ∫_{x_n}^{x_{n+1}} P_k(x) dx.   (10.19)

In general, the explicit Adams schemes may be represented in the following form

u_{n+1} = u_n + h Σ_{k=0}^{p−1} a_k f(x_{n−k}, u_{n−k}),   (10.20)

where parameters ak are given in Table 10.2.

Table 10.2

Coefficients of explicit Adams schemes up to order four.

p = 1:  a_0 = 1
p = 2:  a_0 = 3/2,  a_1 = −1/2
p = 3:  a_0 = 23/12,  a_1 = −16/12,  a_2 = 5/12
p = 4:  a_0 = 55/24,  a_1 = −59/24,  a_2 = 37/24,  a_3 = −9/24

To start computing via scheme (10.20) when p ≥ 2, one must know p values u_m(a), m = 0, …, p − 1, but only u_0(a) = α is given. These values may be found by the Runge-Kutta methods or by expansion of the solution in a Taylor series about the point x = a. Let us consider the latter possibility more closely. The value u(x_m) = u(a + mh) may be calculated through the following expansion:

u(a + mh) = u(a) + mh (du/dx)(a) + ((mh)²/2!) (d²u/dx²)(a) + …

Using equation (10.9) one can express the higher derivatives of u(x) via the function f(x,u):

du/dx = f(x, u),   d²u/dx² = ∂f/∂x + (∂f/∂u) f,   and so on.

Thus the starting values um(a) are determined as follows:

u_m(a) = α + mh f(a, α) + ((mh)²/2) [∂f/∂x + (∂f/∂u) f](a, α) + …,   m = 1, …, p − 1,
with the expansion truncated so that the overall order p is preserved.

One of the features of the Adams methods is the fact that in the computation of u_{n+1} (given the values of f(x_{n−1}, u_{n−1}), f(x_{n−2}, u_{n−2}), …, already found in the calculation of u_n, u_{n−1}, …), one needs to compute only the value of f(x_n, u_n). Thus the advantage of the Adams methods over the Runge-Kutta methods is the smaller computational effort required for each step. The basic disadvantages are the need for special starting procedures, and the fact that one cannot (without complicating the computational scheme) change the step size h in the course of the computation. The stability regions for explicit Adams schemes in the case of test equation (10.11) are more restrictive than those for the Runge-Kutta methods (see Table 10.1).
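As an illustration of an explicit Adams scheme with a starting procedure, the following Python sketch implements the fourth-order Adams-Bashforth formula with the standard coefficients (55, −59, 37, −9)/24. For brevity, the p − 1 extra starting values are taken from the exact solution of the test problem u' = −u rather than from a Runge-Kutta start.

```python
import math

def ab4(f, a, b, u_start, N):
    """Fourth-order explicit Adams (Adams-Bashforth) scheme with the standard
    coefficients (55, -59, 37, -9)/24; u_start supplies the four starting
    values u_0, ..., u_3."""
    h = (b - a) / N
    x = [a + i * h for i in range(N + 1)]
    u = list(u_start)
    fv = [f(x[i], u[i]) for i in range(4)]
    for n in range(3, N):
        u.append(u[n] + (h / 24) * (55*fv[n] - 59*fv[n-1] + 37*fv[n-2] - 9*fv[n-3]))
        fv.append(f(x[n + 1], u[n + 1]))   # one new f evaluation per step
    return x, u

# Illustrative test problem u' = -u, u(0) = 1, exact solution exp(-x)
f = lambda x, u: -u
N, a, b = 20, 0.0, 1.0
h = (b - a) / N
start = [math.exp(-i * h) for i in range(4)]   # starting values from the exact solution
x, u = ab4(f, a, b, start, N)
err = abs(u[-1] - math.exp(-1.0))
```

Note that each step costs a single evaluation of f, in contrast to the s evaluations per step of an s-stage Runge-Kutta scheme.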

If we use the point x_{n+1} to construct the polynomial P_k(x) in (10.19), then we obtain the implicit Adams schemes (Adams-Moulton methods). They may be represented in the following form

u_{n+1} = u_n + h Σ_{k=0}^{p−1} b_k f(x_{n+1−k}, u_{n+1−k}),   (10.21)

where parameters bk are given in Table 10.3.

Table 10.3

Coefficients of implicit Adams schemes up to order four

p = 1:  b_0 = 1
p = 2:  b_0 = 1/2,  b_1 = 1/2
p = 3:  b_0 = 5/12,  b_1 = 8/12,  b_2 = −1/12
p = 4:  b_0 = 9/24,  b_1 = 19/24,  b_2 = −5/24,  b_3 = 1/24

Table 10.4

Order of approximation, p | α_p | α_c
1 | 1/2 | −1/2
2 | 5/12 | −1/12
3 | 3/8 | −1/24
4 | 251/720 | −19/720

The implicit Adams schemes produce more accurate results than the explicit schemes of the same order. In addition, the implicit schemes have wider stability regions. However, the implicit schemes are not computationally efficient, because, in general, difference equation (10.21) cannot be solved explicitly for un + 1. We have to solve a nonlinear equation with respect to un + 1 using some iterative technique, but this increases the amount of computational work considerably.
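One common iterative technique for the nonlinear equation in u_{n+1} is simple fixed-point iteration, which converges when h times the Lipschitz constant of f with respect to u is small enough. The Python sketch below does this for the lowest-order implicit scheme (backward Euler) on an arbitrary test equation u' = −5u; higher-order Adams-Moulton steps lead to an equation of exactly the same shape.

```python
def implicit_step(f, x1, u_old, h, tol=1e-12, max_iter=100):
    """Solve u = u_old + h*f(x1, u) for the new value by fixed-point iteration.
    This is the lowest-order implicit (backward Euler) step; it converges
    when h times the Lipschitz constant of f in u is below 1."""
    u = u_old                     # crude predictor: start from the old value
    for _ in range(max_iter):
        u_new = u_old + h * f(x1, u)
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

f = lambda x, u: -5.0 * u                 # illustrative test equation u' = -5u
u1 = implicit_step(f, 0.1, 1.0, 0.1)      # converges to 1/(1 + 5*0.1) = 2/3
```

Here h·|∂f/∂u| = 0.5 < 1, so the iteration contracts; for stiffer problems a Newton iteration would be needed instead, which is exactly the extra computational work the text refers to.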

In practice, implicit Adams schemes usually are not used alone. Often, they are used to improve approximations obtained by explicit methods. The combination of an explicit and implicit technique is called a predictor-corrector method. The explicit scheme predicts an approximation, and the implicit scheme corrects this prediction. This procedure can be organized in the following manner:

(1) calculate a predicted approximation ũ_{n+1} using the explicit Adams scheme of order p as the predictor:

ũ_{n+1} = u_n + h Σ_{k=0}^{p−1} a_k f(x_{n−k}, u_{n−k});

(2) use the implicit Adams scheme of order p as the corrector to improve the approximation:

u_{n+1} = u_n + h [ b_0 f(x_{n+1}, ũ_{n+1}) + Σ_{k=1}^{p−1} b_k f(x_{n+1−k}, u_{n+1−k}) ].

When p > 2, predictor-corrector methods are still more efficient than RK schemes. These schemes are explicit, and they have much wider stability regions in comparison with the explicit Adams methods.

The following function implements the fourth-order Adams predictor-corrector scheme. The first argument, func, is a handle to a function implementing the right-hand side f(x, u) of equation (10.9). The second argument, range, defines the lowest and highest values of the independent variable, x. The third argument, initial_value, is the initial value of the dependent variable, u. The last argument, N, defines the number of steps, that is, the number of partitions of the range.

Listing 10.5

1. function [x y] = adams(func, range, initial_value, N)
2. % This method computes the Adams approximation of the
3. % input function func in the range [a,b] with the initial
4. % condition y(a) = initial_value and number of steps N
5. a = range(1); b = range(2); h = (b-a)/N;
6. x = a:h:b; y = zeros(1, N + 1);
7. f = zeros(1, N + 1); y(1) = initial_value; f(1) = feval(func, x(1), y(1));
8. y(2) = y(1) + (h/24) * (55*f(1)); %prediction
9. f(2) = feval(func, x(2), y(2));
10. y(2) = y(1) + (h/24) * (19*f(1) + 9*f(2)); %correction
11. f(2) = feval(func, x(2), y(2));
12. y(3) = y(2) + (h/24) * (-59*f(1) + 55*f(2)); %prediction
13. f(3) = feval(func, x(3), y(3));
14. y(3) = y(2) + (h/24) * (-5*f(1) + 19*f(2) + 9*f(3)); %correction
15. f(3) = feval(func, x(3), y(3));
16. y(4) = y(3) + (h/24) * (37*f(1) - 59*f(2) + 55*f(3)); %prediction
17. f(4) = feval(func, x(4), y(4));
18. y(4) = y(3) + (h/24) * (f(1) - 5*f(2) + 19*f(3) + 9*f(4)); %correction
19. f(4) = feval(func, x(4), y(4));
20. for n = 4:length(y)-1
21. y(n + 1) = y(n) + (h/24) * (-9*f(n-3) + 37*f(n-2) - 59*f(n-1) + 55*f(n)); %prediction
22. f(n + 1) = feval(func, x(n + 1), y(n + 1));
23. y(n + 1) = y(n) + (h/24) * (f(n-2) - 5*f(n-1) + 19*f(n) + 9*f(n+1)); %correction
24. f(n + 1) = feval(func, x(n + 1), y(n + 1));
25. end;
26. x = x'; y = y';

The above code is commented to indicate the expressions for prediction and correction. There are a number of calls to the “feval” function, which results in poor performance. As previously mentioned, one solution is to replace these calls with explicit calculation of the function in question, or to implement the scheme in the Java language, as demonstrated in the case of the RK implementation.

10.3.4 Systems of differential equations

In practice, we are frequently faced with an initial value problem which involves not just a single first-order differential equation but a system of N simultaneous first-order differential equations. All of the above schemes for the numerical solution of the initial-value problem for the first-order differential equation (10.9) automatically generalize to a system of first-order equations (10.10). To see this, we need only replace the scalar u_n(a) by the vector u_n(a) of approximate values of all N unknowns, and the scalar f(x, u_n(a)) by the vector of right-hand sides f(x, u_n(a)). Then the Runge-Kutta and Adams type schemes preserve their meaning and applicability. For example, the system of equations

image (10.22)

may be written in the form

image

if we take

image

For example, the equation for un(a) in Runge-Kutta scheme (10.16),

image

may be written out for equations (10.22) as:

image

The initial value problem

d^N u/dx^N = f(x, u, du/dx, …, d^{N−1}u/dx^{N−1}),
u(a) = α_1,   (du/dx)(a) = α_2,   …,   (d^{N−1}u/dx^{N−1})(a) = α_N,

may be reduced to a system of first-order equations (10.10) via changes in the dependent variables. How this can be accomplished is clear from the following example. The equation

d²u/dx² + sin(x du/dx + u²) = 0,   u(0) = 1,   (du/dx)(0) = 0,

will take the required form if we set

u_1 = u,   u_2 = du/dx.

We then get

du_1/dx = u_2,   du_2/dx = −sin(x u_2 + u_1²),   u_1(0) = 1,   u_2(0) = 0.

The following program solves the above system of simultaneous differential equations using the Euler method.

Listing 10.6

1. a = 0; b = 5; N = 100;
2. initial_values = [1 0];
3. h = (b-a)/N; x = a:h:b;
4. u1 = zeros(1, N + 1); u2 = zeros(1, N + 1);
5. u1(1) = initial_values(1); u2(1) = initial_values(2);
6. for n = 1:length(x)-1
7. u1(n + 1) = u1(n) + h * u2(n);
8. u2(n + 1) = u2(n) + h * -sin(x(n)*u2(n) + u1(n).^2);
9. end;
10. plot(x, u2);

Line 1 defines the start and end values, [a, b], of x and the number of iterations, N. Line 2 sets the initial values α1 and α2. Line 3 sets the step size and the values of x. Line 4 initializes the output vectors u1 and u2 to zeros. Line 5 sets their initial values. Lines 6–9 implement the Euler scheme for the simultaneous differential equations. The result is plotted in the following figure.

image

Figure 10.4

10.4 Numerical Solution of Boundary Value Problems

In this section, we show how to approximate the solution to boundary value problems – differential equations where conditions are imposed at different points. There is a great diversity of forms of boundary value problems. We will develop some methods for the numerical solution of such problems using an example of the differential equations of second order, specifically of the form:

d²u/dx² = f(x, u, du/dx),   a ≤ x ≤ b,

with the boundary conditions prescribed at x = a and x = b. We will assume that f(…) and its partial derivatives with respect to u and du/dx are continuous, that the partial derivative with respect to u is positive, and that the partial derivative with respect to du/dx is bounded. These are all reasonable conditions for boundary value problems representing physical problems.

10.4.1 Conversion of difference schemes to systems of equations

Here, we consider the suitability of the discretizations studied in section 10.2. To begin with, we look at the linear second-order boundary value problem

d²u/dx² + p(x) du/dx − q(x) u = f(x),   u(a) = α,   u(b) = β.   (10.23)

Let us introduce a grid with nodes xn = a + nh, n = 0, …, N, where h = (b–a)/N is the step size. The next step is to substitute, for derivatives in (10.23), the approximations

du/dx ≈ (u_{n+1} − u_{n−1})/(2h),   d²u/dx² ≈ (u_{n+1} − 2u_n + u_{n−1})/h².   (10.24)

After performing such substitution, we obtain the following difference scheme:

(u_{n+1} − 2u_n + u_{n−1})/h² + p_n (u_{n+1} − u_{n−1})/(2h) − q_n u_n = f_n,   n = 1, …, N − 1,
u_0 = α,   u_N = β,
where p_n = p(x_n), q_n = q(x_n), f_n = f(x_n).

This difference scheme approximates problem (10.23) to order h2, because difference approximations for derivatives are of order h2, and boundary conditions are satisfied exactly. After collecting like terms, our difference scheme may be written as

(1 + h p_n/2) u_{n+1} − (2 + h² q_n) u_n + (1 − h p_n/2) u_{n−1} = h² f_n,   n = 1, …, N − 1,
u_0 = α,   u_N = β.   (10.25)

It is easy to see that these expressions define a system of linear equations with a tridiagonal matrix (see section 8.6). If h|pn|/2 < 1 and qn > 0, then conditions (8.9) are satisfied and the sweep method provides a stable and efficient technique for solving equations (10.25). Upon solving this system, we obtain {un}, the approximate solution at all grid points.
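A sketch of this solution process in Python, assuming the linear equation has the form d²u/dx² + p(x)du/dx − q(x)u = f(x), consistent with the coefficient conditions h|pn|/2 < 1, qn > 0 quoted above. The forward-elimination/back-substitution pair below is the sweep (Thomas) algorithm; the model problem u'' − u = 0 with u(0) = 0, u(1) = 1 and exact solution sinh(x)/sinh(1) is an illustrative choice, not taken from the text.

```python
import math

def solve_bvp(p, q, f, a, b, alpha, beta, N):
    """Central-difference scheme for u'' + p(x)u' - q(x)u = f(x), u(a)=alpha,
    u(b)=beta, solved by the sweep (Thomas) algorithm."""
    h = (b - a) / N
    x = [a + i * h for i in range(N + 1)]
    # Tridiagonal rows A_n u_{n-1} + B_n u_n + C_n u_{n+1} = F_n, n = 1..N-1
    A = [1 - h * p(x[n]) / 2 for n in range(1, N)]
    B = [-(2 + h * h * q(x[n])) for n in range(1, N)]
    C = [1 + h * p(x[n]) / 2 for n in range(1, N)]
    F = [h * h * f(x[n]) for n in range(1, N)]
    F[0] -= A[0] * alpha        # fold the boundary values into the first
    F[-1] -= C[-1] * beta       # and last right-hand sides
    # Forward sweep (elimination of the subdiagonal)
    for i in range(1, N - 1):
        w = A[i] / B[i - 1]
        B[i] -= w * C[i - 1]
        F[i] -= w * F[i - 1]
    # Back substitution
    u = [0.0] * (N + 1)
    u[0], u[N] = alpha, beta
    u[N - 1] = F[-1] / B[-1]
    for i in range(N - 3, -1, -1):
        u[i + 1] = (F[i] - C[i] * u[i + 2]) / B[i]
    return x, u

# Model problem u'' - u = 0, u(0) = 0, u(1) = 1; exact solution sinh(x)/sinh(1)
x, u = solve_bvp(lambda t: 0.0, lambda t: 1.0, lambda t: 0.0, 0.0, 1.0, 0.0, 1.0, 50)
err = max(abs(ui - math.sinh(xi) / math.sinh(1.0)) for xi, ui in zip(x, u))
```

Since the scheme is of second order, the error shrinks like h² as the grid is refined.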

The construction of a difference scheme for the general nonlinear boundary value problem

d²u/dx² = f(x, u, du/dx),   u(a) = α,   u(b) = β,   (10.26)

is similar to that applied to linear problems. However, the system of equations will not be linear, so some iterative method is required to solve it. If we replace derivatives in (10.26) by their difference approximations (10.24), then the difference scheme may be written as

(u_{n+1} − 2u_n + u_{n−1})/h² = f(x_n, u_n, (u_{n+1} − u_{n−1})/(2h)),   n = 1, …, N − 1,
u_0 = α,   u_N = β.   (10.27)

This scheme can be represented as a system of nonlinear equations of the form

F(v) = 0.   (10.28)

The solution of this system, v = (v_0, …, v_N), approximates u(a). To solve system (10.28) we can use various iterative methods. The initial approximation v^(0) to v may be taken as the linear function which passes through the points (a, α) and (b, β):

v_n^(0) = α + (β − α)(x_n − a)/(b − a),   n = 0, …, N.


10.4.2 Method of time development

The solution of a boundary value problem may be treated as some equilibrium state. One may consider this equilibrium as the result of the approach to steady state of processes developing in time. Often, the computational treatment of such processes is simpler than the direct calculation of the equilibrium itself. We illustrate the use of the method of time development via the example of an algorithm for the numerical solution of the problem:

d²u/dx² = f(x, u),   u(a) = α,   u(b) = β.   (10.29)

To begin with, we present some introductory considerations which outline the method of time development. Consider the auxiliary nonstationary problem

∂v/∂t = ∂²v/∂x² − f(x, v),   v(a, t) = α,   v(b, t) = β,   v(x, 0) = g0(x),   (10.30)

where g0(x) describes an arbitrary initial condition. Since the boundary conditions are time independent, it is natural to expect that the solution v(x,t) will change more and more slowly with time. Thus, this solution, in the limit as t→∞, will evolve into the equilibrium solution u(x), characterized by problem (10.29), that is

image

Therefore, instead of the stationary problem (10.29) one can solve the nonstationary problem (10.30) until time t, when the solution stops changing within the accuracy we require. This is the idea behind the solution of stationary problems by the method of time development.

In accordance with these considerations we will construct a difference scheme for problem (10.30) instead of (10.29). First, we introduce a computational grid with nodes (xn,tk). Now the grid has a slightly more complex structure than before: it is defined by two parameters, h = xn + 1 − xn and τk = tk + 1 − tk, which are called the space step and the time step, respectively.

As before, we replace derivatives in (10.30) by their difference approximations, and this results in the following difference scheme:

image (10.31)

Here the superscript k denotes the kth time level, not the kth power.

To implement the computation of the approximate solution, we rewrite this difference scheme in the form

image (10.32)

Since the initial condition v(x,0) = g0(x) implies that vn0 = g0(xn) for each n = 0, …, N, these values can be used in difference equation (10.32) to find the values v1 = {vn1}. Once all the approximations v1 are known, reapplying this procedure yields the values v2, v3, … in a similar manner.

It is evident from the previous discussion that difference scheme (10.31) is explicit; as we have seen, explicitness implies conditional stability. If we apply the stability analysis to difference scheme (10.31), we can obtain the following criterion

image

where

image

The method of time development is a very reliable technique, because whenever it converges, it does so from any initial approximation. When the approximate solution vk stops changing within the accuracy we require, that is,

image

the process of time development is complete.
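The time-development loop can be sketched as follows (a Python sketch; the model problem u″ = f(x) on [0, 1] with zero boundary values, and the particular f chosen so that the exact solution is sin(πx), are assumptions made only so that the computed equilibrium can be checked):

```python
import numpy as np

# Model problem (assumed for illustration): u'' = f(x) on [0, 1] with
# u(0) = alpha, u(1) = beta.  For f(x) = -pi^2*sin(pi*x) and
# alpha = beta = 0 the exact solution is u(x) = sin(pi*x).
f = lambda x: -np.pi**2 * np.sin(np.pi * x)
alpha, beta = 0.0, 0.0

N = 50
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
tau = 0.4 * h**2              # time step inside the stability bound tau <= h^2/2

v = np.zeros(N + 1)           # initial approximation g0(x)
v[0], v[-1] = alpha, beta     # time-independent boundary conditions

eps = 1e-8
for k in range(200_000):      # cap on the number of time levels
    v_new = v.copy()
    # one explicit step: v_t replaced by a forward difference, v_xx by
    # the usual three-point second difference, marching v_t = v_xx - f
    v_new[1:-1] = v[1:-1] + tau * ((v[2:] - 2.0*v[1:-1] + v[:-2]) / h**2
                                   - f(x[1:-1]))
    done = np.max(np.abs(v_new - v)) < eps
    v = v_new
    if done:                  # solution has stopped changing
        break

err = np.max(np.abs(v - np.sin(np.pi * x)))
print(err)                    # discretization error of the equilibrium, O(h^2)
```

Note that the final accuracy is limited by the spatial discretization error, not by the stopping tolerance eps, which only controls how close the iteration gets to the discrete equilibrium.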

10.4.3 Accurate approximation of boundary conditions when derivatives are specified at boundary points

So far we have considered boundary conditions of the first kind, in which the solution itself is specified at the boundary points. Many physical problems have boundary conditions involving derivatives specified at boundary points. By way of example let us consider the following problem:

image (10.33)

The finite-difference approximation of the differential operator is the same as that for (10.27):

image (10.34)

The obvious way to approximate du/dx at the point a is to use the forward difference

image (10.35)

However, this approximation is only of order h, whereas difference operator (10.34) is of order h2, so the difference scheme described by (10.34) and (10.35) approximates the problem to order h only. This shortcoming can be remedied. Let us introduce a fictitious node at x–1 = a–h. This node allows us to construct a more accurate (central-difference) approximation of the boundary condition du/dx = α:

image (10.36)

To exclude the unknown u–1, we write equation (10.34) for n = 0:

image

After substituting (10.36) into previous expression, we finally get

image (10.37)

To find the approximate solution of problem (10.33) we need to solve system (10.28), except that the first equation of that system should be replaced by the following equation:

image

What we have considered is a simple implementation of the more general approach called the method of fictitious domains. This method is successfully used in various situations where boundary conditions involve derivatives.
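The gain from the fictitious-node treatment can be checked numerically. The sketch below (Python; the test problem u″ = eˣ on [0, 1] with u′(0) = 1, u(1) = e, whose exact solution is u = eˣ, is an illustrative assumption) solves the same boundary value problem with the first-order forward-difference boundary row and with the fictitious-node row, then halves the step: the error drops by a factor of about 2 in the first case and about 4 in the second.

```python
import numpy as np

# Illustrative problem (an assumption): u'' = f(x) on [0, 1] with
# u'(0) = alpha, u(1) = beta; exact solution u(x) = exp(x), so
# f(x) = exp(x), alpha = 1, beta = e.
f = np.exp
alpha, beta = 1.0, np.e

def solve(N, ghost):
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    A = np.zeros((N + 1, N + 1))
    rhs = np.zeros(N + 1)
    for n in range(1, N):              # interior rows: three-point scheme (10.34)
        A[n, n - 1], A[n, n], A[n, n + 1] = 1.0, -2.0, 1.0
        rhs[n] = h * h * f(x[n])
    A[N, N] = 1.0                      # condition of the first kind at x = 1
    rhs[N] = beta
    if ghost:
        # fictitious-node row: eliminating u_{-1} = u_1 - 2*h*alpha from
        # (10.34) at n = 0 gives 2*(u1 - u0)/h^2 = f(x0) + 2*alpha/h
        A[0, 0], A[0, 1] = -2.0, 2.0
        rhs[0] = h * h * f(x[0]) + 2.0 * h * alpha
    else:
        # first-order forward-difference row: (u1 - u0)/h = alpha
        A[0, 0], A[0, 1] = -1.0, 1.0
        rhs[0] = h * alpha
    v = np.linalg.solve(A, rhs)
    return np.max(np.abs(v - np.exp(x)))

for ghost in (False, True):
    ratio = solve(40, ghost) / solve(80, ghost)   # error drop when h is halved
    print('fictitious node' if ghost else 'forward difference', ratio)
```

A single low-order boundary row is enough to drag the whole scheme down to first order, which is why the fictitious-node correction matters even though it changes only one equation of the system.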

10.5 Error Estimation and Control

In this section we will investigate several ways to estimate the error and to control it. By specifying an error tolerance εp, we can demand either a more accurate (and more expensive) solution or a less accurate (and cheaper) one.

The essential idea of the methods described in what follows is to use two different approximate solutions to estimate the error. We begin with error estimation in the case of the initial-value problem. Let us calculate two solutions un(a) and image at xn, where image denotes the more accurate solution. Then image gives an estimate of the local error

image

for the less accurate approximate solution. A pair of Runge-Kutta methods of orders p and p + 1 (called an embedded pair) may be used to compute these solutions. The essential idea of embedded methods is that such a pair will share stage computations. Thus, an s-stage embedded pair may be represented as

image

where sets of parameters bn(1) and bn(2) (n = 1, …, s) give methods of orders p and p + 1, respectively.

Once approximate solutions un(a) and image have been computed with the step size hn,1, we can check if

image

If this inequality is not satisfied, then the step size hn,1 is rejected and another step size hn,2 is selected instead. If the method for finding un(a) has order p, then

image

so we choose hn,2 to satisfy

image

This process is repeated until an acceptable step size is found. Once a step size is accepted, it can be used as the initial step size hn + 1,1 = hn,2 for the next step.

For predictor-corrector methods, the error estimate can be expressed in terms of the difference between the predictor and the corrector. Let us consider the error estimation when the predictor (Adams-Bashforth scheme) and the corrector (Adams-Moulton scheme) have the same order of approximation. Now, for a method of order p, the local error can be written for the predictor as

image

or for the corrector as

image

where αc and αp are independent of h. Parameters αc and αp are obtained from the approximation analysis, and they are given in Table 10.4.

Thus, from these two formulae we can express the principal error term chp:

image

Neglecting terms of degree p + 1 and above, it is easy to make an estimate of the local error for the corrected (more accurate) values:

image

By performing local extrapolation, one can obtain a more accurate approximate solution. Indeed, if we write

image

then the approximate solution

image

has a principal error of order hp + 1.
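This estimate and the local extrapolation can be illustrated on a single step. The Python sketch below assumes the second-order pair, a two-step Adams-Bashforth predictor with a trapezoidal (Adams-Moulton) corrector, whose error constants are αp = 5/12 and αc = −1/12 (the constants for other orders are the ones tabulated in Table 10.4); the test equation u′ = −u and the exact starting values are assumptions for illustration.

```python
import numpy as np

# One step of a second-order predictor-corrector pair on the illustrative
# test equation u' = -u: two-step Adams-Bashforth predictor, trapezoidal
# (Adams-Moulton) corrector.  Error constants assumed: alpha_p = 5/12,
# alpha_c = -1/12, so alpha_c / (alpha_p - alpha_c) = -1/6.
h = 0.05
x2 = 2.0 * h
u0, u1 = 1.0, np.exp(-h)               # exact starting values u(0), u(h)

u_p = u1 + (h / 2.0) * (3.0 * (-u1) - (-u0))     # predictor step
u_c = u1 * (1.0 - h / 2.0) / (1.0 + h / 2.0)     # corrector, solved exactly

est = -(u_c - u_p) / 6.0     # estimated local error of the corrected value
true = np.exp(-x2) - u_c     # actual local error
u_star = u_c + est           # locally extrapolated (higher-order) value

print(est, true, abs(np.exp(-x2) - u_star))
```

With exact starting values the estimate agrees with the actual local error to within a few percent, and the extrapolated value u_star is an order of magnitude more accurate than the corrected one.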

In the case of the boundary value problem, we can estimate the global error e = u(e) − u(a). Such an estimate is compared against a specified tolerance or used to select a new grid. We discretize and solve a given boundary value problem on a sequence of grids. The error of the solution on the current grid is estimated, and this information is used to decide what the next grid should be, if a next grid is needed at all. The first grid is a guess. The error estimate can be obtained by extrapolation. For second-order boundary value problems, we constructed difference schemes which approximate those problems to order h2. Given a solution u(a) obtained with step size h, and another one v(a) obtained with step size h/2, we have

image

So

image

and

image

If ‖e2‖ ≤ εp, then solution v(a) is accepted, and no further grid refinement is needed. One can obtain the extrapolated solution

image

which is a fourth-order accurate solution. However, we do not get a good error estimate for this solution.
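The grid-refinement estimate and the extrapolation can be sketched in Python. The model problem here (u″ = −π² sin(πx) on [0, 1] with zero boundary values, so the exact solution is sin(πx)) is an assumption chosen so that both the estimated and the actual errors can be inspected.

```python
import numpy as np

# Model problem (assumed for illustration): u'' = -pi^2*sin(pi*x) on
# [0, 1], u(0) = u(1) = 0, with exact solution u(x) = sin(pi*x).
f = lambda x: -np.pi**2 * np.sin(np.pi * x)

def solve(N):
    # second-order three-point scheme; unknowns are the interior values
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    A = np.zeros((N - 1, N - 1))
    np.fill_diagonal(A, -2.0)
    np.fill_diagonal(A[1:], 1.0)       # subdiagonal
    np.fill_diagonal(A[:, 1:], 1.0)    # superdiagonal
    v = np.zeros(N + 1)                # zero boundary values at both ends
    v[1:-1] = np.linalg.solve(A, h * h * f(x[1:-1]))
    return x, v

N = 20
x1, u_h = solve(N)            # solution with step h
x2, v_h2 = solve(2 * N)       # solution with step h/2
v_common = v_h2[::2]          # finer solution restricted to the coarse nodes

e2 = (u_h - v_common) / 3.0   # estimated error of the finer solution
v_extrap = v_common - e2      # extrapolated, fourth-order accurate values

exact = np.sin(np.pi * x1)
print(np.max(np.abs(e2 - (v_common - exact))))   # estimate vs actual error
print(np.max(np.abs(v_extrap - exact)))          # far below either grid's error
```

The factor 1/3 follows from the error expansions: the coarse solution carries error Ch² and the fine one Ch²/4, so their difference is (3/4)Ch², three times the error of the fine solution.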

10.6 Exercises

Consider an initial value problem for the first order ordinary differential equation

image

Calculate the approximate solutions of this equation with step sizes 0.05, 0.02, and 0.01 using one of the difference schemes. Show the convergence of the approximate solution. Plot the obtained solution. To test the validity of the program, solve the test problem

image

which has the exact solution u(x) = 1 − exp(cos(x) − 1).

Suggested difference schemes are

(1) second-order Runge-Kutta scheme,

(2) third-order Runge-Kutta scheme,

(3) fourth-order Runge-Kutta scheme,

(4) second-order explicit Adams scheme,

(5) third-order explicit Adams scheme,

(6) fourth-order explicit Adams scheme,

(7) second-order predictor-corrector method,

(8) third-order predictor-corrector method,

(9) fourth-order predictor-corrector method.

Set of initial value problems:

1. u′ = 1 + 0.2u sin(x) − u2, u(0) = 0,

2. u′ = cos(x + u) + 0.5(x − u), u(0) = 0

3. image

4. image

5. image

6. image

7. image

8. image

9. image

10. image

11. image

12. image

13. image

14. image

15. image

16. image

17. image

18. image

19. image

20. image

Consider a boundary value problem for the second order ordinary differential equation

image

Calculate the approximate solutions of this equation with two steps: h and h/2. Estimate the error of the more accurate solution. Plot the obtained solution. Suggested difference schemes are

(1) difference scheme (10.25) (problems 1–20),

(2) method of time development (10.32) (problems 11–20)

Set of boundary value problems:

image
