27.3.4 Solving the System of Coupled ODEs Using the Forward Euler Method

As an illustration, we will solve the system of coupled ODEs given by Eq. 27.43 using the Euler method. For the sake of simplicity, we assume unitless numbers for the variables using

D = 1

k = 1

m = 5

S = 1

h = 0.5

As we can see from Fig. 27.12, this is an underdamped oscillation. We start with the initial values F(…,0) at t = 0 which are given by

F(s,0) = 1

F(v,0) = 0

from which we can calculate the initial values X(…,0) at t = 0 using Eq. 27.43a and Eq. 27.43b to find

X_{s,0} = v_0 = F_{v,0} = 0
X_{v,0} = -\frac{1}{m}\left(k F_{v,0} + D F_{s,0}\right) = -\frac{1}{5}\left(0 + 1\right) = -0.2

Calculating the First Step. We now apply the forward Euler method according to Eq. 27.8 to find the values F(…,1) as

F_{s,1} = F_{s,0} + h X_{s,0} = 1 + 0.5 \cdot 0 = 1
F_{v,1} = F_{v,0} + h X_{v,0} = 0 - 0.5 \cdot 0.2 = -0.1

These values form the basis for the subsequent step. Obviously, we need values of X(…,1) at t = 0.5, which we obtain using Eq. 27.43a and Eq. 27.43b as

X_{s,1} = F_{v,1} = -0.1
X_{v,1} = -\frac{1}{m}\left(k F_{v,1} + D F_{s,1}\right) = -\frac{1}{5}\left(-0.1 + 1\right) = -0.18

Iteration. In the next step, we would use F(s,1), F(v,1), X(s,1), and X(v,1) to calculate the next step in time. Obviously, it is more reasonable to stop the manual calculation here and transfer this scheme to an algorithm for making the calculations easier.
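Before transferring the scheme to Maple, the two manual steps above can be retraced in a few lines of code. The following sketch uses Python rather than Maple (a choice made purely for illustration; the worksheet itself uses Maple) and simply repeats the arithmetic of Eq. 27.43 and Eq. 27.8:

```python
# Unitless parameters from the text: D = 1, k = 1, m = 5, step width h = 0.5.
D, k, m, h = 1.0, 1.0, 5.0, 0.5

# Initial values: F(s,0) = 1 (displaced), F(v,0) = 0 (at rest).
Fs0, Fv0 = 1.0, 0.0

# Slopes X(s,0) and X(v,0) from the system of Eq. 27.43.
Xs0 = Fv0                          # ds/dt = v
Xv0 = -(k * Fv0 + D * Fs0) / m     # dv/dt = -(k v + D s)/m, gives -0.2

# One forward Euler step (Eq. 27.8).
Fs1 = Fs0 + h * Xs0                # 1 + 0.5 * 0 = 1
Fv1 = Fv0 + h * Xv0                # 0 - 0.5 * 0.2 = -0.1

# Slopes at the new point, needed for the next step.
Xs1 = Fv1
Xv1 = -(k * Fv1 + D * Fs1) / m     # -(1/5)(-0.1 + 1) = -0.18

print(Fs1, Fv1, Xs1, Xv1)
```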

27.3.5 Maple Worksheet for the Forward Euler Method

Listing 27.7 shows the listing we apply for solving higher-order ODEs or systems of ODEs using the Euler method. As usual, we first restart the Maple server and include Core.txt in line 1. In the following, we define functions for the different oscillation cases (see line 4). These functions implement Eq. 27.41, Eq. 27.39, and Eq. 27.37. In line 8 we implement the forward Euler method again. This is almost the same implementation as used in listing 27.1. However, in this case, we expect a system of ODEs as the parameter df and a list of dependent variables as the parameter dependent. In comparison, listing 27.1 expected only one ODE and therefore did not require the dependent variables. We then proceed by first finding the number of dependent variables in line 9, which also defines the size of the return array; it must be big enough to hold solutions for all of the dependent variables (see line 10). We then calculate the step width h as the variable delta in line 11. Given the fact that we have several dependent variables, we require an initial value for each of them, which is why the parameter initialvalues is expected to be an array as well. We insert the initial values in line 14.

Listing 27.7

[Maple] Listing for the Euler method applied to solving higher-order ODEs and systems of ODEs. This is the first part of the listing NumericsODESystem.mw which can be found in the supplementary material.

 1  restart:read "Core.txt":
 2
 3  #these are the oscillation cases
 4  underdamped:=(D,k,m,t)->exp(-k/(2*m)*t)*(k/sqrt(4*m*D-k^2)*sin(sqrt(4*m*D-k^2)/(2*m)*t)+cos(sqrt(4*m*D-k^2)/(2*m)*t));
 5  overdamped:=(D,k,m,t)->1/(2*sqrt(k^2-4*m*D))*((k+sqrt(k^2-4*m*D))*exp((-k+sqrt(k^2-4*m*D))/(2*m)*t)+(-k+sqrt(k^2-4*m*D))*exp((-k-sqrt(k^2-4*m*D))/(2*m)*t));
 6  criticallydamped:=(D,k,m,t)->(1+k/(2*m)*t)*exp(-k/(2*m)*t);
 7
 8  EulerODEs:=proc(df,dependent,_from,_to,initialvalues,points) local ret,delta,i,j,numdep:
 9   numdep:=numelems(dependent):
10   ret:=Matrix(points,1+numdep):
11   delta:=(_to-_from)/(points-1):
12   ret[1,1]:=_from:
13   for i to numdep do
14     ret[1,1+i]:=initialvalues[i]:
15   end do:
16   for i from 2 to points do:
17     ret[i,1]:=evalf(ret[i-1,1]+delta):
18     for j to numdep do:
19       ret[i,j+1]:=evalf(ret[i-1,j+1]+delta*df[j](ret[i-1,1],seq(ret[i-1,1+k],k=1..numdep))):
20    end do:
21   end do:
22  return ret;
23  end proc:
24
25  display(quickplot([t->overdamped(1,3,1,t)],0..30,"t","s(t)"),pointplot(extracttoplot(EulerODEs([(t,s,u)->u,(t,s,u)->-(3*u+s)],[s,u],0,30,[1,0],60),[1,2]),symbolsize=7,color=black,symbol=solidcircle));
26  display(quickplot([t->criticallydamped(1,2,1,t)],0..30,"t","s(t)"),pointplot(extracttoplot(EulerODEs([(t,s,u)->u,(t,s,u)->-(2*u+s)],[s,u],0,30,[1,0],60),[1,2]),symbolsize=7,color=black,symbol=solidcircle));
27  display(quickplot([t->underdamped(1,1,5,t)],0..30,"t","s(t)"),pointplot(extracttoplot(EulerODEs([(t,s,u)->u,(t,s,u)->-1/5*(u+s)],[s,u],0,30,[1,0],60),[1,2]),symbolsize=7,color=black,symbol=solidcircle));

The loop starting in line 16 implements the forward Euler method. We first calculate the current value of the independent variable in line 17 before calculating each of the dependent variables inside of the loop in line 18. The calculation of F(j,i) is done in line 19. This line calculates for each dependent variable the value F(j,i) from F(j,i−1). It expects the ODE for the dependent variable j at the position j in the array df. It expects this entry to be given as a function of the independent variable and the dependent variables in the correct order. It calls this ODE, i.e., the function, with a parameter list consisting of the value of the independent variable and the values of all dependent variables at F(…,i−1). The Maple function seq allows us to create a comma-separated list of the dependent values which we can pass to the function stored at df[j] as its parameter list.
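For readers who want to verify the scheme outside of Maple, the procedure EulerODEs can be mirrored in a few lines of Python (the language choice and the name euler_odes are our own assumptions; the logic follows the listing above):

```python
# Forward Euler for a system of first-order ODEs, mirroring EulerODEs.
# Each entry of df is a function f(t, *dependents) returning the derivative
# of one dependent variable; initialvalues holds one value per variable.
def euler_odes(df, t_from, t_to, initialvalues, points):
    h = (t_to - t_from) / (points - 1)          # step width, like delta
    rows = [[t_from] + list(initialvalues)]     # row: [t, y1, y2, ...]
    for _ in range(points - 1):
        t, *y = rows[-1]
        y_new = [yj + h * f(t, *y) for yj, f in zip(y, df)]
        rows.append([t + h] + y_new)
    return rows

# Underdamped case of listing 27.7, line 27: m = 5, D = 1, k = 1,
# with 60 points as in the listing (h = 30/59, roughly 0.51).
sol = euler_odes([lambda t, s, u: u,
                  lambda t, s, u: -(u + s) / 5],
                 0.0, 30.0, [1.0, 0.0], 60)
print(sol[1])   # state after the first Euler step
```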

Line 25 applies the function EulerODEs to the three cases for which we derived analytical solutions, i.e., underdamped, critically-damped, and overdamped oscillation. The analytical solutions have already been shown in Fig. 27.12. The three lines starting from line 25 display the output of EulerODEs alongside the analytical solutions. These plots are shown in Fig. 27.13. We will discuss them in just a moment. First, we want to look at the way the ODEs are passed to the function EulerODEs. Each line consists of a display command which packs one quickplot (which displays the analytical solution) with one pointplot in a single plot. Within each pointplot, we call the function extracttoplot which extracts the data to be plotted from the return value of EulerODEs. For the plot we need the first and the second column of the return array, as we are only interested in a plot of the dependent variable s (which is in the second column) as a function of the independent variable t (which is in the first column). The third column contains the second dependent variable we introduced in order to reduce the second-order ODE to two first-order ODEs. This second dependent variable is the velocity v, which we could obviously also plot by extracting the first and third column for the plot.

Fig. 27.13 Numerical output of the forward Euler and the fourth-order Runge-Kutta method applied for solving the three cases found for the harmonic oscillation: overdamped, critically-damped, and underdamped oscillation. The analytical solutions given by Eq. 27.41, Eq. 27.39, and Eq. 27.37 are shown in comparison. Points show the discrete solution of the respective methods, the solid line is the analytical solution. As can be seen, the fourth-order Runge-Kutta method gives a very good approximation to the analytical solution. The stepwidth used was h = 0.5. Parameters used for the overdamped oscillation: m = 1, D = 1, k = 3; parameters used for the critically-damped oscillation: m = 1, D = 1, k = 2; parameters used for the underdamped oscillation: m = 5, D = 1, k = 1.

As parameters, we pass to EulerODEs first the ODEs in the order we also indicate in the parameter dependent. Here we choose to pass s before v, which is why we first pass the ODE for s (see Eq. 27.43b) and then the ODE for v (see Eq. 27.43a). We keep this order while passing the initial values: first we pass the value for s (which is 1), then the value for v (which is 0). We request the solution on the interval 0 ≤ t ≤ 30 with a total of 60 points, which makes h ≈ 0.5. Obviously, we must pass the ODEs without symbolic parameters, which is why we insert the values for m, k, and D.

27.3.6 Maple Worksheet for the Fourth-Order Runge-Kutta Method

Before discussing the output of the forward Euler method created with line 25, we will quickly complement listing 27.7 with a second function, which implements the fourth-order Runge-Kutta method. We will apply this function to the same system of ODEs and therefore obtain a comparison of two numerical methods: the Euler method (being a fairly simple numerical method) and the fourth-order Runge-Kutta method (being somewhat more computationally expensive but also more exact).

For this we add a second function shown in listing 27.8. This function implements the fourth-order Runge-Kutta method. It is analogous to the implementation shown in listing 27.6, with the one exception that we use it to solve a system of ODEs. As in listing 27.7, we therefore expect the parameters df, dependent, and initialvalues to be arrays. As before, the function first determines the number of dependent variables, prepares the return value matrix, and sets the initial values. We know that the Runge-Kutta method requires four constants c1, c2, c3, and c4 in order to calculate F(…,i) from F(…,i−1). Obviously, each of the dependent variables requires a separate set of these constants, which is why we create the array c which holds these values for each dependent variable. The implementation of the fourth-order Runge-Kutta method is the loop starting in line 10. We first calculate the value of the independent variable in line 11 before calculating the constants. It is important to note that we only need F(…,i−1) to calculate the values of c1 for each dependent variable. In order to calculate the values of c2, we require the values of c1 and F(…,i−1). This is the reason why we loop through the calculation of the constants sequentially, starting with c1 in line 13 before calculating c2 in line 16, etc. As you can see, for each of these constants, we need to evaluate the respective ODE at intermediary support points which are calculated with the help of the previously calculated constants.

Listing 27.8

[Maple] Listing for the fourth-order Runge-Kutta method applied to solving higher-order ODEs and systems of ODEs. This is the second part of the listing NumericsODESystem.mw which can be found in the supplementary material.

 1 RungeKutta4ODEs:=proc(df,dependent,_from,_to,initialvalues,points) local ret,numdep,delta,i,j,c:
 2  numdep:=numelems(dependent):
 3  ret:=Matrix(points,1+numdep):
 4  c:=Matrix(numdep,4):
 5  delta:=(_to-_from)/(points-1):
 6  ret[1,1]:=_from:
 7  for j to numdep do:
 8    ret[1,1+j]:=initialvalues[j]:
 9  end do:
10  for i from 2 to points do:
11    ret[i,1]:=evalf(ret[i-1,1]+delta):
12    for j to numdep do:
13      c[j,1]:=evalf(delta*df[j](ret[i-1,1],seq(ret[i-1,1+k],k=1..numdep))):
14    end do:
15    for j to numdep do:
16      c[j,2]:=evalf(delta*df[j](ret[i-1,1]+delta/2,seq(ret[i-1,1+k]+c[k,1]/2,k=1..numdep))):
17    end do:
18    for j to numdep do:
19      c[j,3]:=evalf(delta*df[j](ret[i-1,1]+delta/2,seq(ret[i-1,1+k]+c[k,2]/2,k=1..numdep))):
20    end do:
21    for j to numdep do:
22      c[j,4]:=evalf(delta*df[j](ret[i-1,1]+delta,seq(ret[i-1,1+k]+c[k,3],k=1..numdep))):
23    end do:
24    for j to numdep do:
25      ret[i,1+j]:=evalf(ret[i-1,1+j]+1/6*(c[j,1]+2*c[j,2]+2*c[j,3]+c[j,4])):
26    end do:
27  end do:
28  return ret;
29 end proc:
30
31 display(quickplot([t->overdamped(1,3,1,t)],0..30,"t","s(t)"),pointplot(extracttoplot(RungeKutta4ODEs([(t,s,u)->u,(t,s,u)->-(3*u+s)],[s,u],0,30,[1,0],60),[1,2]),symbolsize=7,color=black,symbol=solidcircle));
32 display(quickplot([t->criticallydamped(1,2,1,t)],0..30,"t","s(t)"),pointplot(extracttoplot(RungeKutta4ODEs([(t,s,u)->u,(t,s,u)->-(2*u+s)],[s,u],0,30,[1,0],60),[1,2]),symbolsize=7,color=black,symbol=solidcircle));
33 display(quickplot([t->underdamped(1,1,5,t)],0..30,"t","s(t)"),pointplot(extracttoplot(RungeKutta4ODEs([(t,s,u)->u,(t,s,u)->-1/5*(u+s)],[s,u],0,30,[1,0],60),[1,2]),symbolsize=7,color=black,symbol=solidcircle));

Having all of these values available, we can then apply them to calculate F(j,i) in line 25. Lines 31 to 33 are identical to the code used in listing 27.7, but use the fourth-order Runge-Kutta method instead of the forward Euler method. These plots are shown side-by-side with the output of the forward Euler method in Fig. 27.13.
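As with the Euler scheme, the procedure can be mirrored outside of Maple. The following Python sketch (the language and the name rk4_odes are our own assumptions) implements the same constant-by-constant scheme and compares the endpoint against the analytical underdamped solution with m = 5, D = 1, k = 1:

```python
import math

# Classical fourth-order Runge-Kutta for a system of first-order ODEs,
# mirroring RungeKutta4ODEs: one set of constants c1..c4 per variable.
def rk4_odes(df, t_from, t_to, initialvalues, points):
    h = (t_to - t_from) / (points - 1)
    rows = [[t_from] + list(initialvalues)]
    for _ in range(points - 1):
        t, *y = rows[-1]
        c1 = [h * f(t, *y) for f in df]
        c2 = [h * f(t + h/2, *[a + b/2 for a, b in zip(y, c1)]) for f in df]
        c3 = [h * f(t + h/2, *[a + b/2 for a, b in zip(y, c2)]) for f in df]
        c4 = [h * f(t + h,   *[a + b   for a, b in zip(y, c3)]) for f in df]
        rows.append([t + h] + [yj + (p + 2*q + 2*r + s) / 6
                               for yj, p, q, r, s in zip(y, c1, c2, c3, c4)])
    return rows

# Underdamped oscillator; 61 points so that h = 0.5 exactly.
sol = rk4_odes([lambda t, s, u: u, lambda t, s, u: -(u + s) / 5],
               0.0, 30.0, [1.0, 0.0], 61)
t_end, s_end = sol[-1][0], sol[-1][1]

# Analytical underdamped solution (Eq. 27.37 form) with m = 5, D = 1, k = 1.
w = math.sqrt(19) / 10     # sqrt(4*m*D - k^2)/(2*m)
exact = math.exp(-t_end / 10) * (math.sin(w * t_end) / math.sqrt(19)
                                 + math.cos(w * t_end))
print(abs(s_end - exact))  # small: RK4 tracks the analytical solution closely
```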

27.3.7 Comparison of the Forward Euler and the Fourth-Order Runge-Kutta Method

Fig. 27.13 shows the numerical output for the solution of the three cases we find for the harmonic oscillation: overdamped, critically-damped, and underdamped oscillation. We already derived the analytical solutions in Eq. 27.41, Eq. 27.39, and Eq. 27.37. The plot shows the output of listing 27.7, line 25 and listing 27.8, line 31 side-by-side. As you can see, the forward Euler method does a decent job at finding a solution to this ODE system, whereas the fourth-order Runge-Kutta method yields very precise results. Both of our functions handle all three cases (which we had to distinguish analytically) without modification.

In order to increase the precision of the forward Euler method, we would require smaller stepwidths. Try increasing the number of data points in listing 27.7 to 600. You will see that the forward Euler method gives a substantially better result with the increased number of data points. In contrast, the fourth-order Runge-Kutta method does not require additional data points for a better result. Recall that Runge-Kutta methods make use of intermediary points within the stepwidth interval for increasing the precision. Therefore, this method relies on a larger number of points internally, which obviously increases the computational cost. It is therefore often necessary to balance performance and precision. The forward Euler method is simple to implement; it gives decent results at larger stepwidths and increasingly better results at smaller stepwidths. Its computational cost increases linearly with the number of points. The fourth-order Runge-Kutta method is computationally more expensive as it requires several evaluations of the ODEs within a single step. Therefore, if a large number of points is required or the computational domain is large, the Runge-Kutta method may be too expensive to use.
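The first-order convergence of the forward Euler method can be made concrete with a small, self-contained experiment (a Python sketch using the simple test equation y' = -y instead of the oscillator; both choices are our own assumptions for illustration): reducing the stepwidth by a factor of ten reduces the global error by roughly the same factor.

```python
import math

# Scalar forward Euler, returning only the value at the end of the interval.
def euler_scalar(f, t0, t1, y0, points):
    h = (t1 - t0) / (points - 1)
    t, y = t0, y0
    for _ in range(points - 1):
        y += h * f(t, y)
        t += h
    return y

# Test problem y' = -y, y(0) = 1, with exact solution y(t) = exp(-t).
exact = math.exp(-5.0)
err_coarse = abs(euler_scalar(lambda t, y: -y, 0.0, 5.0, 1.0, 51)  - exact)
err_fine   = abs(euler_scalar(lambda t, y: -y, 0.0, 5.0, 1.0, 501) - exact)
print(err_coarse / err_fine)   # close to 10: first-order convergence
```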

Exporting RungeKutta4ODEs. The implemented function RungeKutta4ODEs for solving ODE systems using the fourth-order Runge-Kutta method is a very handy function that we will use later on. This is why we export it to CoreFunctions.txt to have it available for later use.

27.3.8 Conclusion

In this section we have studied methods used to solve higher-order ODEs. These can always be converted to a system of first-order ODEs, which can be solved with any of the methods discussed in section 27.2. We have discussed this procedure using one of the classical second-order ODEs in engineering mechanics, i.e., the harmonic oscillation, and the three cases derived from it: overdamped, critically-damped, and underdamped oscillation. We have adapted our implementations of the forward Euler method and the fourth-order Runge-Kutta method such that they allow solving systems of ODEs. Obviously, any numerical method suitable for solving ODEs can likewise be adapted for systems of ODEs.

27.4 Numerical Solutions to Systems of Ordinary Differential Equations with Boundary Conditions

27.4.1 Introduction

Until now, we have looked at higher-order ODEs and systems of ODEs derived thereof and found very convenient numerical methods for solving them. If we look back at the example of the harmonic oscillation (see section 27.3.1), we could almost effortlessly find solutions to all three cases discussed once we had the numerical schemes implemented. The ODE we found for the harmonic oscillation was of second-order, which is already a very realistic application scenario.

However, we had one advantage in this example: The system did not have boundary conditions. To be fully correct, the system had boundary conditions, but we interpreted them as initial conditions. We stated that at the beginning of our experiment, the mass was displaced and at rest. This gave us the two initial conditions

F(s,0) = 1 (normalized)

F(v,0) = 0

In this case, we had boundary conditions for both dependent variables s and v at the same value of the independent variable t, i.e., at the beginning of the experiment at t = 0. However, this is rarely the case, and in fact, we do not have to look very far to find an example where it is not.

27.4.2 Navier-Stokes Equation for the Circular Cross-Section

We already know the first example we will use. It is the Navier-Stokes equation for the circular cross-section, i.e., the underlying ODE for the Hagen-Poiseuille flow given by Eq. 16.18 as

\frac{1}{r}\frac{dv}{dr} + \frac{d^2v}{dr^2} = \frac{1}{\eta}\frac{dp}{dz} \quad (Eq. 27.45)

We know the analytical solution to this equation which is given by Eq. 16.19

v(r) = \frac{R^2}{4\eta}\frac{dp}{dz}\left(\left(\frac{r}{R}\right)^2 - 1\right) \quad (Eq. 27.46)

Setting Up. We will now try to find a numerical solution to this differential equation. For this we will make use of the function RungeKutta4ODEs from listing 27.8, which we exported to CoreFunctions.txt. We proceed as outlined in section 27.3 by first rewriting our second-order ODE as a system of ODEs as

\frac{dv}{dr} = u
\frac{1}{r}u + \frac{du}{dr} = \frac{1}{\eta}\frac{dp}{dz} \quad (Eq. 27.47)

\frac{du}{dr} = \frac{1}{\eta}\frac{dp}{dz} - \frac{u}{r} \quad (Eq. 27.48)

We will use the following values for the variables

R = 25\,\mu\text{m}, \quad \frac{dp}{dz} = -0.1\,\text{mbar}\,\text{mm}^{-1}, \quad \eta = 1\,\text{mPa}\,\text{s}

These values are taken from Fig. 16.7b where we already showed the resulting velocity profiles. Inserting these values into Eq. 27.46 yields the analytical solution

v(r) = -1.5625\,\text{mm}\,\text{s}^{-1}\left(\left(\frac{r}{R}\right)^2 - 1\right) \quad (Eq. 27.49)

Inserting the values into Eq. 27.47 and Eq. 27.48 yields

\frac{dv}{dr} = u\,\text{s}^{-1} \quad (Eq. 27.50)

\frac{du}{dr} = \left(-10000 - \frac{u}{r}\right)\text{mm}^{-1}\,\text{s}^{-1} \quad (Eq. 27.51)

where the radius is expected to be given in mm. Eq. 27.50 and Eq. 27.51 are the two equations which we can now apply RungeKutta4ODEs to.
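As a quick sanity check (our own addition, not part of the worksheet), one can verify numerically that the analytical solution Eq. 27.49 satisfies this reduced system. The following Python sketch uses central finite differences for the derivatives:

```python
# Analytical solution Eq. 27.49 and its hand-differentiated derivative u.
R = 0.025                                   # radius in mm
v = lambda r: -1.5625 * ((r / R) ** 2 - 1)  # velocity in mm/s
u = lambda r: -5000.0 * r                   # u = dv/dr, differentiated by hand

# Central finite differences at an interior point r = 0.01 mm.
r, eps = 0.01, 1e-6
dvdr = (v(r + eps) - v(r - eps)) / (2 * eps)
dudr = (u(r + eps) - u(r - eps)) / (2 * eps)

print(abs(dvdr - u(r)))                     # close to 0: Eq. 27.50 holds
print(abs(dudr - (-10000.0 - u(r) / r)))    # close to 0: Eq. 27.51 holds
```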

Boundary Conditions. However, we obviously still require boundary conditions. If you recall section 16.3.2 we have the following boundary conditions

v(r = R) = 0 \quad (Eq. 27.52)

u(r = 0) = 0 \quad (Eq. 27.53)

The first boundary condition is derived from the fact that the flow has no-slip boundary conditions at the wall; therefore the velocity at the wall must be zero. The second condition is due to the fact that the flow profile is expected to be symmetric, thus the derivative of the velocity profile in the center is zero. And this is where the problem begins: the boundary values are given for two different values of the independent variable. The first boundary condition is given for r = R, the second is given for r = 0. However, we require two boundary conditions at the same value of the independent variable, either at r = 0 or at r = R. We cannot use our numerical schemes if we do not have boundary values at the same position.

27.4.3 The Method of Shooting

One of the obvious ways of addressing this problem of mixed boundary conditions is the method of shooting. As the name suggests, this is a trial-and-error method which aims at “converting” one boundary condition into an initial condition.

27.4.3.1 Choosing a Boundary Value

If we return to our problem, we have a boundary condition at r = 0 and one boundary condition at r = R. We now decide from which boundary we want to use our numerical schemes. If you look at Eq. 27.51, you can see that the independent variable r occurs in the denominator. This is a very good indication that choosing the boundary for r = 0 is not a good idea, as this would immediately produce a division by zero. Obviously, if this is the first value we calculate in our numerical scheme, it will render all subsequent calculations impossible. However, if this value occurs as the last value in our calculation, it may be significantly less problematic, as no subsequent calculations would have to deal with an infinite value. We have already encountered such a scenario when solving the contour of sessile and pendant drops numerically. Refer to section 24.4.2.3 for details.

These cases occur pretty often, especially when working in spherical or cylindrical coordinates. It is therefore a good idea to check precisely which boundary is best to start the calculation from. Obviously, the choice of boundary does not affect the result in terms of precision; proceeding end-to-front should produce the same results as proceeding front-to-end. However, in many cases, one of these directions is simply not possible, for reasons such as the one we just encountered.

Therefore, we opt for r = R as the boundary from which we want to proceed. However, we still have not overcome the problem of the mixed boundary conditions.

27.4.3.2 Rationale

In our case, we require a boundary value for u (r = R). The question is, how can we derive this value from the equations we have? The answer is simple: we cannot. We can only guess the value. Using a guessed value, we will make a “shot” at the problem and see which value of u (r = 0) our guess for u (r = R) ends up producing. If this value is far from the correct value, we take a second guess, checking if this guess produces a better result.

This is the reason why this method is referred to as shooting. It is pretty much an analogy to shooting arrows. If your first arrow is off to the left of the target, you would aim your second shot a bit further to the right. If the second shot is too far right of the target, you would aim your next shot a bit further to the left again. We can do this until we end up with a guess for u (r = R), which produces a value for u (r = 0) which is reasonably close to the value we expect. In our case, this value would be 0.

27.4.3.3 Implementation

We now use the method of shooting for solving Eq. 27.50 and Eq. 27.51. The implementation is shown in listing 27.9. In line 1 we restart the Maple server and include Core.txt and CoreFunctions.txt the latter of which contains the function RungeKutta4ODEs. The next line contains a directive that tells Maple to display an array with all elements. Usually, Maple will only display an array in abbreviated form once it becomes too large. However, it is easier to see the numerical results if we can see the whole matrix.

Listing 27.9

[Maple] Listing for application examples of the fourth-order Runge-Kutta method applied to solving higher-order ODEs and systems of ODEs. This is the first part of the listing NumericsODESystemRKApplications.mw which can be found in the supplementary material.

1 restart:read "Core.txt":read "CoreFunctions.txt":
2 interface(rtablesize=infinity):
3
4 guess1:=RungeKutta4ODEs([(r,v,u)->u,(r,v,u)->-10E3-u/r],[v,u],0.025,0,[0,1],11);
5 guess2:=RungeKutta4ODEs([(r,v,u)->u,(r,v,u)->-10E3-u/r],[v,u],0.025,0,[0,100],11);
6 guess3:=RungeKutta4ODEs([(r,v,u)->u,(r,v,u)->-10E3-u/r],[v,u],0.025,0,[0,-100],11);
7 guess4:=RungeKutta4ODEs([(r,v,u)->u,(r,v,u)->-10E3-u/r],[v,u],0.025,0,[0,-125],11);
8
9 display(quickplot([r->-1.5625*((r/0.025)^2-1)],0..0.025,typeset("r",[mm]),typeset(v[x]^empty,[mm/s])),pointplot(extracttoplot(guess3,[1,2]),symbolsize=7,color=black,symbol=solidcircle));
10 display(quickplot([r->-1.5625*((r/0.025)^2-1)],0..0.025,typeset("r",[mm]),typeset(v[x]^empty,[mm/s])),pointplot(extracttoplot(guess4,[1,2]),symbolsize=7,color=black,symbol=solidcircle));

Line 4 shows our first guess. We call the function RungeKutta4ODEs with Eq. 27.50 and Eq. 27.51 (in the correct order) and the two dependent variables v and u. We indicate the range to go from r = 0.025 mm to r = 0 mm, i.e., back-to-front in order to avoid the problem of the division by zero. Just to illustrate, you may want to change the order, in which case Maple will abort the method immediately with a division by zero error. The next argument is the critical point: the initial values. We know the exact initial value v (R) = 0 (which is zero due to the no-slip boundary condition), but we do not know the boundary condition for u (R). Here, the first guess is 1, which is as good as any guess. The last parameter indicates that we want a total of 11 points to be calculated, which is not a lot, but sufficient.

If you run line 4, you will see that the first guess is horribly wrong. The array displayed shows us that the value the algorithm finds for the second boundary is u (0) = ∞. This value results from the division by zero and would not in itself be problematic. However, if you look at the second-to-last value, you will see that it is u (0.0025) = 1247.5.

This value is definitely not approaching 0 and, in fact if we look at the calculated values of u (r), we can see that they are growing exponentially. This guess is therefore definitely wrong.

Our second guess (see line 5) makes things even worse as u (0.0025) = 2237.5 and the function is growing even more rapidly. Therefore it seems reasonable to assume a negative value for u (R). In fact, as we know the analytical solution given by Eq. 27.49, we know that this value must be negative. We even know its exact value to be

u(r) = \frac{dv}{dr} = -1.5625\,\text{mm}\,\text{s}^{-1}\cdot\frac{2r}{R^2}
u(R) = \frac{-3.125\,\text{mm}\,\text{s}^{-1}}{25\,\mu\text{m}} = -125\,\text{s}^{-1}

However, in most applications, we obviously do not have access to the correct solution, so shooting is the only option. Our third guess (see line 6) is definitely better. However, the function u (r) still grows where it should actually converge to zero. If we plot the output of this guess together with the analytical solution, we obtain Fig. 27.14a. As you can see, the solution deviates from the correct result. If we run the last guess (see line 7), we find that the function u (r) now converges to zero. If we plot the results from this guess, we obtain Fig. 27.14b. As you can see, this is the correct result. The plots are produced by line 9 and line 10.

Fig. 27.14 Example of the method of shooting applied to the ODE for the circular channel cross-section using the fourth-order Runge-Kutta method. In this example, the boundary condition u (R) must be guessed. Points show the numerical output, the solid line is the analytical solution.

27.4.3.4 Pendant Drop Revisited

Introduction. We will introduce a second example of a shooting method using a system of ODEs that we have derived during the study of the pendant drop (see section 24.5). We have used a numerical scheme (see listing 24.2) to solve the rather complicated ODE given by Eq. 24.10, which describes the shape of the drop. For the approach of del Río and Neumann, we required Eq. 24.22, Eq. 24.23, Eq. 24.24, and Eq. 24.16 given by

\frac{d\theta}{ds} = -\frac{\sin\theta}{x} + \frac{\Delta\rho g z}{\gamma} + \frac{2}{r_{max}}
\frac{dx}{ds} = \cos\theta
\frac{dz}{ds} = \sin\theta
\frac{dV}{ds} = \pi x^2 \sin\theta
\frac{dr_{max}}{ds} = 0

At the time, we noted that using the path variable s is actually impractical because this value tells us little about our problem, and we do not require it as a result. Also it would be desirable to rewrite these equations as a function of the volume V of the drop because this is the actual independent variable. We can rewrite these equations using

ds = \frac{dV}{\pi x^2 \sin\theta}

in which case we find the following system of ODEs

\frac{d\theta}{dV} = -\frac{1}{\pi x^3} + \frac{\Delta\rho g}{\gamma}\frac{z}{\pi x^2 \sin\theta} + \frac{2}{r_{max}\,\pi x^2 \sin\theta} \quad (Eq. 27.54)

\frac{dx}{dV} = \frac{\cos\theta}{\pi x^2 \sin\theta} = \frac{1}{\pi x^2 \tan\theta} \quad (Eq. 27.55)

\frac{dz}{dV} = \frac{\sin\theta}{\pi x^2 \sin\theta} = \frac{1}{\pi x^2} \quad (Eq. 27.56)

\frac{dr_{max}}{dV} = 0 \quad (Eq. 27.57)

where we have dropped the ODE for the path variable s, as we are not interested in this value.

Boundary Values. Obviously, our problem has a number of boundary values given by

\theta(0) = 0, \quad \theta(V_{max}) = ?
x(0) = 0, \quad x(V_{max}) = r_0
z(0) = 0, \quad z(V_{max}) = ?
r_{max}(0) = ?, \quad r_{max}(V_{max}) = ?

where Vmax (the volume of the drop) and r0 (the diameter of the capillary to which it is suspended) are given and known values. Obviously, it would be ideal to start from the lower boundary, i.e., going from V = 0 to V = Vmax, as we know almost all boundary conditions there and merely have to determine rmax. However, if we look at our ODEs, we can see that the value of x occurs in several denominators; therefore, we would be faced with division-by-zero errors.

Therefore we will have to proceed from back-to-front starting from V = Vmax. Unfortunately, there are quite a number of values which we do not know and we have to shoot for three unknown boundary conditions at V = Vmax.

Implementation. Listing 27.10 shows the implementation of this shooting approach. It starts by implementing Eq. 27.54, Eq. 27.55, Eq. 27.56, and Eq. 27.57 (see line 1). The term Δρg/γ is converted to the value −0.13625 using the values for water, keeping in mind that the gravity for the pendant drop is negative. We then apply the function RungeKutta4ODEs from CoreFunctions.txt in line 6. We pass the four ODEs and the range for V. Here we use a water droplet with a total volume of 18.24 µl. This example is taken from listing 24.2; it is the largest droplet studied in that listing. We use a total of 201 data points and pass the initial values, of which only x (Vmax) = r0 = 0.5 is known exactly. This value is also taken from the example in listing 24.2.

Listing 27.10

[Maple] Listing for application examples of the fourth-order Runge-Kutta method applied to solving the pendant drop ODE. This is the second part of the listing NumericsODESystemRKApplications .mw which can be found in the supplementary material.

1 dphidV:=(V,phi,x,z,rmax)->-1/(x^3*Pi)-0.13625*z/(Pi*x^2*sin(phi))+2/(rmax*Pi*x^2*sin(phi));
2 dxdV:=(V,phi,x,z,rmax)->1/(Pi*x^2*tan(phi));
3 dzdV:=(V,phi,x,z,rmax)->1/(Pi*x^2);
4 drmaxdV:=(V,phi,x,z,rmax)->0;
5
6 sol:=RungeKutta4ODEs([dphidV,dxdV,dzdV,drmaxdV],[phi,x,z,rmax],18.24,0,[1.445,0.5,4.315,1.4356],201):sol[201];
7
8 quickplot([extracttoplot(sol,[3,4])],0..5,"x [mm]","z [mm]");

You will note that the output is written in a way that only the last row of the created array is displayed (see line 6). The reason for this is that we are only interested in the values of this last row, which gives the values at the boundary V = 0. These values must be close to the known boundary values. If the shooting is correct, we should see matching values here. For the boundary values shown in listing 27.10, line 6, we obtain the following values at the lower boundary

\theta(0) = 0.022 \approx 0
x(0) = 0.036 \approx 0
z(0) = -0.018 \approx 0

which are all sufficiently close to the desired values. Fig. 27.15 compares the values produced by our shooting method with the values we obtained previously in listing 24.2. As you can see, the results are very close.

Fig. 27.15 Numerical output of the shooting method applied to the pendant drop ODE. The data shown are the values produced from listing 27.10 using the fourth-order Runge-Kutta method with a shooting approach for finding the unknown boundary conditions, compared to the results obtained in listing 24.2. As you can see, the two data sets match very well.

Speeding Up the Shooting Process. It may take a significant amount of time to apply the shooting method to a complicated case such as the one shown here. Shooting three unknown boundary values obviously is a purely trial-and-error search for a good set of values. However, this process can be automated because an algorithm can try a wide range of values and determine the ones that yield the best result. This, in fact, is a strength of shooting. The quality of the result can be assessed by simply checking how close the produced values are to the desired ones. Obviously, in a case such as the one described here, searching manually is a very tedious process.
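Such an automated search can be sketched in a few lines. The following Python sketch (the language and the helper names rk4 and miss are our own assumptions, not part of listing 27.9) automates the single-parameter case of section 27.4.3.3: because the miss distance at the inner end of the interval varies monotonically with the guessed value u(R), a simple bisection replaces the manual trial-and-error.

```python
# Minimal RK4 integrator for a system, returning only the final state.
def rk4(df, t0, t1, y0, points):
    h = (t1 - t0) / (points - 1)
    t, y = t0, list(y0)
    for _ in range(points - 1):
        c1 = [h * f(t, *y) for f in df]
        c2 = [h * f(t + h/2, *[a + b/2 for a, b in zip(y, c1)]) for f in df]
        c3 = [h * f(t + h/2, *[a + b/2 for a, b in zip(y, c2)]) for f in df]
        c4 = [h * f(t + h,   *[a + b   for a, b in zip(y, c3)]) for f in df]
        y = [a + (p + 2*q + 2*s + w) / 6
             for a, p, q, s, w in zip(y, c1, c2, c3, c4)]
        t += h
    return y

# Eq. 27.50 and Eq. 27.51; we integrate from r = R = 0.025 mm down to
# r = 0.0025 mm, stopping short of r = 0 to avoid the division by zero.
df = [lambda r, v, u: u, lambda r, v, u: -10000.0 - u / r]

def miss(guess):           # u at the inner end for a guessed value of u(R)
    return rk4(df, 0.025, 0.0025, [0.0, guess], 91)[1]

# Bisection on the guessed u(R); the bracket is chosen so miss changes sign.
lo, hi = -200.0, 0.0
for _ in range(60):
    mid = (lo + hi) / 2
    if miss(lo) * miss(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(lo)   # close to the analytical value u(R) = -125 1/s
```

Note that because we stop short of r = 0, the converged guess lands slightly above the exact value of −125 s⁻¹; driving the endpoint closer to zero tightens the result.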

27.4.3.5 Summary

As we have seen in this section, the method of shooting is one of the easiest methods to implement whenever an ODE or a system of ODEs with mixed boundary conditions is encountered. However, it may require a significant number of trials until a good solution is obtained. In many application scenarios, it is reasonable to assume that a vague idea about the size, or at least the sign, of the unknown second boundary condition is available. This speeds up the solution process significantly. We have seen one example derived from the Navier-Stokes equation in cylindrical coordinates that we were able to solve sufficiently accurately using a shooting approach to derive the unknown boundary condition.

27.4.4 Function Approximation Methods

27.4.4.1 Introduction

The method of shooting is not the only method we can use when we have mixed boundary conditions. One very important set of methods is referred to as function approximation methods. As the name suggests, we use a function that we assume can approximate the solution to the ODE. We usually refer to this function as the trial function. There are many different types of trial functions one can use, but they will contain some unknown parameters P0, P1, . . . , Pn which need to be determined. The trial function is usually denoted T (x, P0, P1, … , Pn) where x is the independent variable and P0, P1, . . . , Pn are the parameters.

In the next step, we substitute the trial function into the ODE. If we are very lucky, we will find that the trial function solves the ODE, which would immediately make it a solution. As can be imagined, this case rarely occurs in practical applications. We are instead left with an equation that contains n + 1 unknowns and which (being a single equation) cannot be solved for them.

Collocation Methods. This is where we use a collocation method. Instead of demanding that the substituted trial function solve the original ODE on the whole domain of x, we simply demand that it satisfy it at n + 1 discrete values xi. Evaluating the substituted trial function at these values gives a set of n + 1 equations, which we have to solve for the n + 1 unknowns, i.e., the parameters pi. At each discrete value xi the ODE is then satisfied, which is why these approaches are referred to as collocation methods. At the n + 1 distinct collocation points, the ODE is fulfilled exactly. Obviously, the more parameters we use, the more collocation points we have and the more accurate the solution becomes.
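To make the idea concrete before turning to our example, here is a minimal, hypothetical collocation sketch in Python (the worksheets in this chapter use Maple) for the simple ODE y' = y with y(0) = 1, which is not taken from the text: a quadratic trial function with two free parameters is made to satisfy the ODE at two collocation points.

```python
import numpy as np

p0 = 1.0                      # fixed by the boundary condition y(0) = 1
points = [1/3, 2/3]           # two collocation points, chosen arbitrarily

# Residual of y' - y for the trial function T(x) = p0 + p1*x + p2*x**2:
#   (p1 + 2*p2*x) - (p0 + p1*x + p2*x**2)
# Demanding that it vanish at each point gives one linear equation per point:
#   p1*(1 - x) + p2*(2*x - x**2) = p0
A = np.array([[1 - x, 2*x - x**2] for x in points])
b = np.array([p0, p0])
p1, p2 = np.linalg.solve(A, b)

T = lambda x: p0 + p1*x + p2*x**2
print(T(1.0))                 # about 2.636; the exact value e is 2.71828...
```

Adding more parameters and collocation points would drive the approximation closer to the exact solution e^x.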

Example. In order to illustrate this method, we will again use the Navier-Stokes equation in cylindrical coordinates Eq. 27.45, which is given by

\frac{1}{r}\frac{dv}{dr} + \frac{d^2v}{dr^2} = \frac{1}{\eta}\frac{dp}{dz}  (Eq. 27.58)

Obviously, we know the analytical solution to this ODE, which is given by Eq. 16.20. As we will see, we can also derive this solution using a collocation approach.

27.4.4.2 Trial Function

A very common type of trial function is a polynomial series, for one simple reason: such series are linear in the parameters and can be differentiated term by term. As an example, consider the following trial function

T(x, p_0, p_1, \ldots, p_n) = \sum_{i=0}^{n} p_i\, f_i(x)  (Eq. 27.59)

where fi(x) are functions which are chosen because we assume that the solution may be well approximated by them. Given our ODE, we may assume that a suitable series of functions would be

T(r, p_0, p_1, \ldots, p_n) = \sum_{i=0}^{n} p_i\, r^i  (Eq. 27.60)

If we differentiate this trial function with respect to the independent variable, we obtain the general form

\frac{dT}{dx} = \sum_{i=0}^{n} p_i\, \frac{df_i}{dx}(x)

This means that irrespective of the type of function f_i, we again obtain a series of the same structure, this time in the derivatives \frac{df_i}{dx}. For our assumed solution we obtain

\frac{dT}{dr}(r, p_0, p_1, \ldots, p_n) = \sum_{i=1}^{n} p_i\, i\, r^{i-1}

27.4.4.3 Residual Function

Obviously, the trial function must be substituted into the ODE in order to test whether or not it is a good approximation of a solution. The resulting function is referred to as the residual function R(x, p_0, p_1, \ldots, p_n). If we substitute our trial function Eq. 27.60 into our ODE Eq. 27.58, we obtain

\frac{1}{r}\sum_{i=1}^{n} p_i\, i\, r^{i-1} + \sum_{i=2}^{n} p_i\, i(i-1)\, r^{i-2} = \frac{1}{\eta}\frac{dp}{dz}  (Eq. 27.61)

As you can see, we are now left with an ordinary equation, as the differentials have disappeared. Unfortunately, it still contains the unknowns pi, for which we do not have values at the moment. However, we are now free to evaluate this equation at arbitrary values ri. If we evaluate it at sufficiently many points ri, we obtain one residual equation per point. We now demand that at each of these points Eq. 27.61 is fulfilled, i.e., that the left-hand and the right-hand sides match. This gives us the required number of equations to solve for all unknowns pi.
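As a sketch, the residual of Eq. 27.61 can be written as an ordinary Python function of r once the parameters p_i are given (the worksheets in this chapter use Maple); C stands for the right-hand side (1/η) dp/dz, which we assume to be a constant, and the values R = 1 and C = -4 are assumed purely for illustration:

```python
# Sketch of the residual function of Eq. 27.61 for the polynomial trial
# function Eq. 27.60; C stands for the (assumed constant) right-hand side
# (1/eta)*dp/dz.
def residual(r, p, C):
    """(1/r) * sum_i i*p[i]*r**(i-1) + sum_i i*(i-1)*p[i]*r**(i-2) - C"""
    n = len(p) - 1
    first = sum(i * p[i] * r**(i - 1) for i in range(1, n + 1)) / r
    second = sum(i * (i - 1) * p[i] * r**(i - 2) for i in range(2, n + 1))
    return first + second - C

# For the known exact solution v(r) = (C/4)*(r**2 - R**2) with R = 1 and
# C = -4, i.e., p = [1, 0, -1, 0], the residual vanishes for every r:
print(residual(0.37, [1.0, 0.0, -1.0, 0.0], -4.0))   # 0.0
```

Demanding that this function return zero at chosen points r_i is exactly what generates the residual equations of the collocation approach.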

27.4.4.4 Adding Boundary Conditions

One point to keep in mind is that we do not require all of these equations to come from collocation points, because we also have boundary conditions. The no-slip boundary condition allows us to set our trial function to zero at r = R

T(R, p_0, p_1, \ldots, p_n) = \sum_{i=0}^{n} p_i\, R^i = 0

The symmetry condition allows us to set the derivative \frac{dT}{dr} to zero at r = 0

\frac{dT}{dr}(0, p_0, p_1, \ldots, p_n) = \sum_{i=1}^{n} p_i\, i\, 0^{i-1} = p_1 = 0

This means that if we have a total of n + 1 unknown parameters, we only require n - 1 additional equations, as we already have two equations derived from the boundary conditions.

27.4.4.5 Example

In the following, we will illustrate the method using a trial function with n = 3 given by

T(r, p_0, p_1, p_2, p_3) = p_0 + p_1 r + p_2 r^2 + p_3 r^3
\frac{dT}{dr}(r, p_0, p_1, p_2, p_3) = p_1 + 2 p_2 r + 3 p_3 r^2
\frac{d^2T}{dr^2}(r, p_0, p_1, p_2, p_3) = 2 p_2 + 6 p_3 r  (Eq. 27.62)

which we resubstitute into Eq. 27.58 to find the residual function

\frac{1}{r}\left(p_1 + 2 p_2 r + 3 p_3 r^2\right) + 2 p_2 + 6 p_3 r = \frac{1}{\eta}\frac{dp}{dz}  (Eq. 27.63)

We now have the two boundary conditions, which give rise to the following two equations

p_0 + p_1 R + p_2 R^2 + p_3 R^3 = 0 \quad \text{(no-slip)}  (Eq. 27.64)

p_1 + 2 p_2 \cdot 0 + 3 p_3 \cdot 0^2 = 0 \quad \text{(symmetry)}  (Eq. 27.65)

We therefore only require two additional equations. We choose, somewhat arbitrarily, r_1 = \frac{1}{3}R and r_2 = \frac{2}{3}R, in which case we find from Eq. 27.63 the two equations

\frac{3}{R}\left(p_1 + 2 p_2 \frac{R}{3} + 3 p_3 \left(\frac{R}{3}\right)^2\right) + 2 p_2 + 6 p_3 \frac{R}{3} = \frac{1}{\eta}\frac{dp}{dz}

\frac{3}{2R}\left(p_1 + 2 p_2 \frac{2R}{3} + 3 p_3 \left(\frac{2R}{3}\right)^2\right) + 2 p_2 + 6 p_3 \frac{2R}{3} = \frac{1}{\eta}\frac{dp}{dz}

which can be simplified to

\frac{3}{R} p_1 + 4 p_2 + 3 R\, p_3 = \frac{1}{\eta}\frac{dp}{dz}  (Eq. 27.66)

\frac{3}{2R} p_1 + 4 p_2 + 6 R\, p_3 = \frac{1}{\eta}\frac{dp}{dz}  (Eq. 27.67)

In general, the four equations Eq. 27.64, Eq. 27.65, Eq. 27.66, and Eq. 27.67 make up a linear system of equations which can be solved by any of the methods discussed in section 25.2 or section 25.3. However, in this case, the system is so simple that it can be solved by hand. From Eq. 27.65 we can see that p_1 = 0, in which case we are left with the following three equations

p_0 + R^2 p_2 + R^3 p_3 = 0  (Eq. 27.68)

4 p_2 + 3 R\, p_3 = \frac{1}{\eta}\frac{dp}{dz}  (Eq. 27.69)

4 p_2 + 6 R\, p_3 = \frac{1}{\eta}\frac{dp}{dz}  (Eq. 27.70)

Subtracting Eq. 27.69 from Eq. 27.70 yields

3 R\, p_3 = 0 \quad \Rightarrow \quad p_3 = 0

in which case we find from Eq. 27.69 that

p_2 = \frac{1}{4\eta}\frac{dp}{dz}

and from Eq. 27.68 that

p_0 = -R^2 p_2 = -\frac{R^2}{4\eta}\frac{dp}{dz}

At this point, all unknowns have been determined and we can note down the final solution according to Eq. 27.62

v(r) = T(r, p_0, p_1, p_2, p_3) = p_0 + p_1 r + p_2 r^2 + p_3 r^3 = \frac{1}{4\eta}\frac{dp}{dz}\left(r^2 - R^2\right) = \frac{R^2}{4\eta}\frac{dp}{dz}\left(\left(\frac{r}{R}\right)^2 - 1\right)

which obviously is the correct solution (see Eq. 16.20).
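Instead of solving by hand, the linear system Eq. 27.64 to Eq. 27.67 can also be handed to a linear solver. The following Python sketch (the book's worksheets use Maple) assumes R = 1 and a constant right-hand side C = (1/η) dp/dz = -4, and recovers the coefficients found above:

```python
import numpy as np

R, C = 1.0, -4.0            # assumed values: R = 1 and C = (1/eta)*dp/dz = -4
r1, r2 = R/3, 2*R/3         # the two collocation points chosen in the text

# One row per equation, columns ordered [p0, p1, p2, p3]:
A = np.array([
    [1.0, R,     R**2, R**3 ],   # no-slip, Eq. 27.64:   T(R) = 0
    [0.0, 1.0,   0.0,  0.0  ],   # symmetry, Eq. 27.65:  dT/dr(0) = 0
    [0.0, 1/r1,  4.0,  9*r1 ],   # collocation at r1, Eq. 27.66
    [0.0, 1/r2,  4.0,  9*r2 ],   # collocation at r2, Eq. 27.67
])
b = np.array([0.0, 0.0, C, C])

p0, p1, p2, p3 = np.linalg.solve(A, b)
# recovers p1 = p3 = 0, p2 = C/4, p0 = -C*R**2/4,
# i.e., v(r) = (C/4)*(r**2 - R**2)
```

For a larger number of collocation points, the same assembly loop would simply append one row per point, which is why collocation maps so naturally onto the linear solvers of sections 25.2 and 25.3.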

Assessment. This result is somewhat astonishing, as we did not (in the classical sense) even use numerical methods to obtain it. This is due to the nature of the trial function we used: we picked the correct type of trial function, which allowed us to simply determine the missing coefficients. Obviously, if we had chosen a different trial function, we would have been left with a somewhat different result, which would have required a numerical solution.

27.4.4.6 Summary

In this section we have discussed function approximation methods and how they are used to solve ODEs with boundary conditions. Approximation methods are a large class of methods that are used in many aspects of numerical modeling. We have briefly outlined their use with the Navier-Stokes equation in cylindrical coordinates as an exemplary ODE, for which we were able to derive the correct solution.

27.5 Summary

In this section we have studied methods for numerically solving first- and higher-order ordinary differential equations, as well as systems of differential equations. As we have seen, there is a wide range of methods to choose from. These methods are usually selected based on their computational cost, their numerical stability, and the speed at which they converge to a solution; in most applications, a trade-off between speed, stability, and precision must be made, and the best choice depends on the application. As we have seen, one of the most commonly used numerical schemes for solving ODEs is the fourth-order Runge-Kutta method. You will find it to be a very good first candidate whenever an ODE must be solved.

References

[1] L. Euler. Institutionum calculi integralis. Vol. 1. Acad. imp. Saènt., 1768 (cit. on p. 550).

[2] J. Crank and P. Nicolson. "A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type". Mathematical Proceedings of the Cambridge Philosophical Society, Vol. 43, No. 1, Cambridge University Press, 1947, pp. 50–67 (cit. on p. 560).

[3] F. Bashforth and J. C. Adams. An attempt to test the theories of capillary action: by comparing the theoretical and measured forms of drops of fluid. University Press, 1883 (cit. on p. 563).

[4] B. Seidu. "A Matrix System for Computing the Coefficients of the Adams-Bashforth-Moulton Predictor-Corrector Formulae". International Journal of Computational and Applied Mathematics, Vol. 6, No. 3, 2011, pp. 215–220 (cit. on p. 566).

[5] K. Heun. "Neue Methoden zur approximativen Integration der Differentialgleichungen einer unabhängigen Veränderlichen". Z. Math. Phys., Vol. 45, 1900, pp. 23–38 (cit. on p. 572).

[6] W. Kutta. "Beitrag zur näherungsweisen Integration totaler Differentialgleichungen". Z. Math. Phys., Vol. 46, 1901, pp. 435–453 (cit. on p. 572).


1 John Crank was a British mathematician. He was interested in the study of mass and heat transport and made important contributions to the numerical methods required for solving the underlying differential equations. Crank coauthored the Crank-Nicolson method, an FDM for solving differential equations [2].

2 Phyllis Nicolson was a British mathematician who may be best known for her contribution to the development of the Crank-Nicolson method [2].
