As an illustration, we will solve the system of coupled ODEs given by Eq. 27.43 using the Euler method. For the sake of simplicity, we assume unitless values for the variables:
D = 1
k = 1
m = 5
S = 1
h = 0.5
As we can see from Fig. 27.12, this is an underdamped oscillation. We start with the initial values F(…,0) at t = 0 which are given by
F(s,0) = 1
F(v,0) = 0
from which we can calculate the initial values X(…,0) at t = 0 using Eq. 27.43a and Eq. 27.43b to find
Calculating the First Step. We now apply the forward Euler method according to Eq. 27.8 to find the values F(…,1) as
These values form the basis for the subsequent step. Obviously, we need values of X(…,1) at t = 0.5, which we obtain using Eq. 27.43a and Eq. 27.43b as
Iteration. In the next step, we would use F(s,1), F(v,1), X(s,1), and X(v,1) to calculate the next step in time. Obviously, it is more reasonable to stop the manual calculation here and transfer this scheme to an algorithm for making the calculations easier.
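To verify the arithmetic, the first step can be retraced in a few lines of Python (the book uses Maple; furthermore, the right-hand sides below assume the unforced form s' = v, v' = -(k s + D v)/m, since Eq. 27.43 is not reproduced here):

```python
# One forward Euler step for the damped oscillator, written as the
# first-order system s' = v, v' = -(k*s + D*v)/m.
# NOTE: this unforced form is an assumption for illustration only.
D, k, m = 1.0, 1.0, 5.0
h = 0.5

s0, v0 = 1.0, 0.0          # initial values: displaced and at rest

# X(...,0): right-hand sides evaluated at the initial values
Xs0 = v0
Xv0 = -(k * s0 + D * v0) / m

# forward Euler update according to Eq. 27.8: F(.,1) = F(.,0) + h * X(.,0)
s1 = s0 + h * Xs0
v1 = v0 + h * Xv0
print(s1, v1)              # s1 = 1.0, v1 = -0.1
```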
Listing 27.7 shows the code we use for solving higher-order ODEs or systems of ODEs using the Euler method. As usual, we first restart the Maple server and include Core.txt in line 1. We then define functions for the different oscillation cases (see line 4). These functions implement Eq. 27.41, Eq. 27.39, and Eq. 27.37. In line 8 we implement the forward Euler method again. This is almost the same implementation as used in listing 27.1. However, in this case, we expect a system of ODEs as the parameter df and a list of dependent variables as the parameter dependent. In comparison, listing 27.1 expected only one ODE and therefore did not require the dependent variables. We then determine the number of dependent variables in line 10, which also defines the size of the return array: it must be big enough to hold the solutions for all of the dependent variables. We then calculate the step width h as the variable delta. Since we have several dependent variables, we require an initial value for each of them, which is why the parameter initialvalues is expected to be an array as well. We insert the initial values in line 14.
The loop starting in line 16 implements the forward Euler method. We first calculate the current value of the independent variable in line 17 before calculating each of the dependent variables inside of the loop in line 18. The calculation of F(j,i) is done in line 19. For each dependent variable j, this line calculates the value F(j,i) from F(j,i-1). It expects the ODE for the dependent variable j at position j in the array df. It expects this entry to be given as a function of the independent variable and the dependent variables in the correct order. It calls this ODE, i.e., the function, with a parameter list consisting of the value of the independent variable and the values of all dependent variables at F(…,i-1). The Maple function seq allows us to create a comma-separated list of the dependent values which we can pass to the function stored at df[j] as a parameter list.
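The scheme described above can be sketched in Python as follows (the book's implementation is in Maple; function and parameter names here are illustrative, and the oscillator right-hand sides assume the unforced form s' = v, v' = -(k s + D v)/m):

```python
# A sketch of the Euler solver for ODE systems described above; the
# book's version is written in Maple, names here are illustrative.
# df is a list of right-hand sides, each called as df[j](t, y0, ..., yn).
def euler_odes(df, initialvalues, t0, t1, points):
    n = len(df)                            # number of dependent variables
    delta = (t1 - t0) / points             # step width h
    rows = [[t0] + list(initialvalues)]    # first column: independent variable
    for i in range(1, points + 1):
        prev = rows[-1]
        current = [t0 + i * delta]
        for j in range(n):
            # F(j,i) = F(j,i-1) + h * df_j(t_{i-1}, F(0,i-1), ..., F(n-1,i-1))
            current.append(prev[j + 1] + delta * df[j](prev[0], *prev[1:]))
        rows.append(current)
    return rows

# damped oscillator as above, assuming s' = v, v' = -(k*s + D*v)/m
D, k, m = 1.0, 1.0, 5.0
rows = euler_odes([lambda t, s, v: v,
                   lambda t, s, v: -(k * s + D * v) / m],
                  [1.0, 0.0], 0.0, 30.0, 60)
print(rows[1])    # values after the first step, t = 0.5
```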
Line 25 applies the function EulerODEs to the three cases for which we derived analytical solutions, i.e., underdamped, critically-damped, and overdamped oscillation. The analytical solutions have already been shown in Fig. 27.12. The three lines starting from line 25 display the output of EulerODEs alongside the analytical solutions. These plots are shown in Fig. 27.13. We will discuss them in just a moment. First, we want to look at the way the ODEs are passed to the function EulerODEs. Each line consists of a display command which packs one quickplot (which displays the analytical solution) with one pointplot in a single plot. Within each pointplot, we call the function extracttoplot which extracts the data to be plotted from the return value of EulerODEs. For the plot we need the first and the second column of the return array, as we are only interested in a plot of the dependent variable s (which is in the second column) as a function of the independent variable t (which is in the first column). The third column contains the second dependent variable we introduced in order to reduce the second-order ODE to two first-order ODEs. This second dependent variable is the velocity v, which we could obviously also plot by extracting the first and third column for the plot.
As parameters, we pass to EulerODEs first the ODEs in the order we also indicate in the parameter dependent. Here we choose to pass s before v, which is why we first pass the ODE for s (see Eq. 27.43b), and then the ODE for v (see Eq. 27.43a). We also keep this order while passing the initial values: First we pass the value for s (which is 1) before passing the value for v (which is 0). We request the solution to be created on the interval 0 ≤ t ≤ 30 with a total of 60 points, which makes h = 0.5. Obviously, we must pass the ODEs without free variables, which is why we insert the values for m, k, and D.
Before discussing the output of the forward Euler method created with line 25, we will quickly complement listing 27.7 with a second method, which implements the fourth-order Runge-Kutta method. We will apply this function in solving the same system of ODEs and therefore, have a comparison of two numerical methods: The Euler method (being a fairly simple numerical method) and the fourth-order Runge-Kutta method (being somewhat more computationally expensive but also more exact).
For this we add a second function shown in listing 27.8. This method implements the fourth-order Runge-Kutta method. It is analogous to the implementation shown in listing 27.6, with the one exception that we use it to solve a system of ODEs. As in listing 27.7, we therefore expect the parameters df, dependent, and initialvalues to be arrays. As before, the function first determines the number of dependent variables, prepares the return value matrix, and sets the initial values. We know that the Runge-Kutta method requires four constants c1, c2, c3, and c4 in order to calculate F(…,i) from F(…,i-1). Obviously, each of the dependent variables requires a separate set of these constants, which is why we create the array c which holds these values for each dependent variable. The implementation of the fourth-order Runge-Kutta method is the loop starting in line 10. We first calculate the value of the independent variable in line 11 before calculating the constants. It is important to note that we only need F(…,i-1) to calculate the values of c1 for each dependent variable. In order to calculate the values for c2, we require the values of c1 and F(…,i-1). This is the reason why we loop through the calculation of the constants sequentially, starting with c1 in line 13 before calculating c2 in line 16, etc. As you can see, for each of these constants, we need to evaluate the respective ODE at intermediary support points which are calculated with the help of the previously determined constants.
Having all of these values available, we can then apply them to calculate F(j,i) in line 25. Line 31 is identical to the code used in listing 27.7, but uses the fourth-order Runge-Kutta method instead of the forward Euler method. These plots are shown side-by-side with the output of the forward Euler method in Fig. 27.13.
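A Python sketch of the same scheme may help clarify the sequential calculation of the constants; the function name mirrors the book's RungeKutta4ODEs, but the implementation details here are our own:

```python
import math

# Fourth-order Runge-Kutta for a system of first-order ODEs; a sketch
# of the book's Maple function RungeKutta4ODEs, details are our own.
# df[j] is called as df[j](t, y0, ..., yn).
def rk4_odes(df, initialvalues, t0, t1, points):
    n = len(df)
    h = (t1 - t0) / points
    rows = [[t0] + list(initialvalues)]
    for _ in range(points):
        t, y = rows[-1][0], rows[-1][1:]
        # four sets of constants c1..c4, one value per dependent variable;
        # each set requires the previous one to build its support point
        c1 = [df[j](t, *y) for j in range(n)]
        y2 = [y[j] + h / 2 * c1[j] for j in range(n)]
        c2 = [df[j](t + h / 2, *y2) for j in range(n)]
        y3 = [y[j] + h / 2 * c2[j] for j in range(n)]
        c3 = [df[j](t + h / 2, *y3) for j in range(n)]
        y4 = [y[j] + h * c3[j] for j in range(n)]
        c4 = [df[j](t + h, *y4) for j in range(n)]
        rows.append([t + h] + [y[j] + h / 6 * (c1[j] + 2 * c2[j] + 2 * c3[j] + c4[j])
                               for j in range(n)])
    return rows

# quick check against a known solution: y' = y, y(0) = 1, so y(1) = e
rows = rk4_odes([lambda t, y: y], [1.0], 0.0, 1.0, 10)
print(rows[-1][1])    # very close to math.e even with only 10 steps
```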
Fig. 27.13 shows the numerical output for the solution of the three cases we find for the harmonic oscillation: overdamped, critically-damped, and underdamped oscillation. We already derived the analytical solutions in Eq. 27.41, Eq. 27.39, and Eq. 27.37. The plot shows the output of listing 27.7, line 25 and listing 27.8, line 31 side-by-side. As you can see, the forward Euler method does a decent job at finding a solution to this ODE system, whereas the fourth-order Runge-Kutta method yields very precise results. All three cases (which we had to distinguish analytically) can be handled by both of our functions.
In order to increase the precision of the forward Euler method, we would require smaller stepwidths. Try increasing the number of data points in listing 27.7 to 600. You will see that the forward Euler method gives a substantially better result if the number of data points is increased. In contrast, the fourth-order Runge-Kutta method does not require additional data points for a better result. Recall that Runge-Kutta methods make use of intermediary points within the stepwidth interval for increasing the precision. Therefore, this method relies on a larger number of points internally. This obviously increases the computational cost. It is therefore often necessary to balance performance and precision. The forward Euler method is simple to implement and gives decent results at larger stepwidths, and increasingly better results at smaller stepwidths. The computational cost increases linearly with the number of points. The fourth-order Runge-Kutta method is computationally more expensive as it requires several evaluations of the ODEs within a single step. Therefore, if a large number of points is required or the computational domain is large, the Runge-Kutta method may be too expensive to use.
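The linear convergence of the forward Euler method can be checked with a minimal experiment (in Python; the test problem y' = y is our own choice, not the oscillator from the listing):

```python
import math

# forward Euler on y' = y, y(0) = 1 over 0 <= t <= 1; the exact value
# at t = 1 is e, so the return value is the global error
def euler_error(points):
    h = 1.0 / points
    y = 1.0
    for _ in range(points):
        y += h * y
    return abs(y - math.e)

# a tenfold increase in points reduces the error roughly tenfold
print(euler_error(60), euler_error(600))
```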
Exporting RungeKutta4ODEs. The implemented method RungeKutta4ODEs for solving ODE systems using the fourth-order Runge-Kutta method is a very handy method that we will use later on. This is why we will export it to CoreFunctions.txt to have it available for later use.
In this section we have studied methods used to solve higher-order ODEs. These can always be converted to a system of first-order ODEs, which can be solved with any of the methods discussed in section 27.2. We have discussed this procedure using one of the classical second-order ODEs in engineering mechanics, i.e., the harmonic oscillation, and the three cases derived from it: overdamped, critically-damped, and underdamped oscillation. We have adapted our implementations for the forward Euler method and the fourth-order Runge-Kutta method such that they allow solving systems of ODEs. Obviously, any numerical method suitable for solving ODEs can likewise be adapted for systems of ODEs.
Until now, we have looked at higher-order ODEs and systems of ODEs derived thereof and found very convenient numerical methods for solving them. If we look back at the example of the harmonic oscillation (see section 27.3.1), we could almost effortlessly find solutions to all three cases discussed once we had the numerical schemes implemented. The ODE we found for the harmonic oscillation was of second-order, which is already a very realistic application scenario.
However, we had one advantage in this example: The system did not have boundary conditions. To be fully correct, the system had boundary conditions, but we interpreted them as initial conditions. We stated that at the beginning of our experiment, the mass was displaced and at rest. This gave us the two initial conditions
F(s,0) = 1 (normalized)
F(v,0) = 0
In this case, we had boundary conditions for both dependent variables s and v at the same value of the independent variable t, i.e., at the beginning of the experiment for t = 0. However, this is rarely the case and in fact, we do not have to look very long to find an example, where this is not the case.
We already know the first example we will use. It is the Navier-Stokes equation for the circular cross-section, i.e., the underlying ODE for the Hagen-Poiseuille flow given by Eq. 16.18 as
We know the analytical solution to this equation which is given by Eq. 16.19
Setting Up. We will now try to find a numerical solution to this differential equation. For this we will make use of the function RungeKutta4ODEs from listing 27.8, which we exported to CoreFunctions.txt. We proceed as outlined in section 27.3 by first rewriting our second-order ODE as a system of ODEs as
We will use the following values for the variables
These values are taken from Fig. 16.7b where we already showed the resulting velocity profiles. Inserting these values into Eq. 27.46 yields the analytical solution
Inserting the values into Eq. 27.47 and Eq. 27.48 yields
where the radius is expected to be given in mm. Eq. 27.50 and Eq. 27.51 are the two equations which we can now apply RungeKutta4ODEs to.
Boundary Conditions. However, we obviously still require boundary conditions. If you recall section 16.3.2 we have the following boundary conditions
The first boundary condition is derived from the fact that the flow has a no-slip boundary condition at the wall; therefore, the velocity at the wall must be zero. The second condition is due to the fact that the flow profile is expected to be symmetric; thus, the derivative of the velocity profile in the center is zero. And this is where the problem begins: The boundary values are given for two different values of the independent variable. The first boundary condition is given for r = R, the second is given for r = 0. Obviously, we require two boundary conditions at the same value of the independent variable, either at r = 0 or at r = R. We cannot use our numerical schemes if we do not have the boundary values at the same position.
One of the obvious ways of addressing this problem of mixed boundary conditions is the method of shooting. As the name suggests, this is a trial-and-error method which aims at “converting” one boundary condition into an initial condition.
If we return to our problem, we have a boundary condition at r = 0 and one boundary condition at r = R. We now decide from which boundary we want to use our numerical schemes. If you look at Eq. 27.51, you can see that the independent variable r occurs in the denominator. This is a very good indication that choosing the boundary for r = 0 is not a good idea, as this would immediately produce a division by zero. Obviously, if this is the first value we calculate in our numerical scheme, it will render all subsequent calculations impossible. However, if this value occurs as the last value in our calculation, it may be significantly less problematic, as no subsequent calculations would have to deal with an infinite value. We have already encountered such a scenario when solving the contour of sessile and pendant drops numerically. Refer to section 24.4.2.3 for details.
These cases occur pretty often, especially when working in spherical or cylindrical coordinates. It is therefore, a good idea to check precisely which boundary is the best to start the calculation from. Obviously, choosing the boundary will not affect the result in terms of precision. Proceeding end-to-front should produce the same results as proceeding front-to-end. However, in many cases, one of these directions is not possible for reasons such as the one we just encountered.
Therefore we opt for r = R as the boundary condition from which we want to proceed. However, we still have not overcome the problem of the mixed boundary condition.
In our case, we require a boundary value for u (r = R). The question is, how can we derive this value from the equations we have? The answer is simple: we cannot. We can only guess the value. Using a guessed value, we will make a “shot” at the problem and see which value of u (r = 0) our guess for u (r = R) ends up producing. If this value is far from the correct value, we take a second guess, checking whether this guess produces a better result.
This is the reason why this method is referred to as shooting. It is pretty much an analogy to shooting arrows. If your first arrow is off to the left of the target, you would aim your second shot a bit further to the right. If the second shot is too far right of the target, you would aim your next shot a bit further to the left again. We can do this until we end up with a guess for u (r = R), which produces a value for u (r = 0) which is reasonably close to the value we expect. In our case, this value would be 0.
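The guessing procedure can be sketched in Python for the pipe-flow system v' = u, u' = C - u/r; the constant C stands in for the pressure-gradient term, and its value below is an arbitrary assumption rather than the book's numbers:

```python
# "Shooting" by hand at the pipe-flow system v' = u, u' = C - u/r.
# C stands for the pressure-gradient term; the value used here is an
# arbitrary illustration, not the book's numbers.
C, R = -4.0, 1.0

def shoot(u_guess, points=1000):
    """Integrate back-to-front from r = R towards r = 0 and return the
    last computed u, which should approach the symmetry condition u(0) = 0."""
    h = R / points
    r, v, u = R, 0.0, u_guess              # no-slip: v(R) = 0
    for _ in range(points - 1):            # stop before r = 0 (1/r term!)
        v, u = v - h * u, u - h * (C - u / r)
        r -= h
    return u

for guess in (1.0, -1.0, -2.0):
    print(guess, shoot(guess))
```

For this choice of C, the first two guesses blow up towards the center, while a guess near the correct value keeps u close to zero, mirroring the behavior observed with the guesses in listing 27.9.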
We now use the method of shooting for solving Eq. 27.50 and Eq. 27.51. The implementation is shown in listing 27.9. In line 1 we restart the Maple server and include Core.txt and CoreFunctions.txt the latter of which contains the function RungeKutta4ODEs. The next line contains a directive that tells Maple to display an array with all elements. Usually, Maple will only display an array in abbreviated form once it becomes too large. However, it is easier to see the numerical results if we can see the whole matrix.
Line 4 shows our first guess. We call the function RungeKutta4ODEs with Eq. 27.50 and Eq. 27.51 (in the correct order) and the two dependent variables v and u. We indicate the range to go from r = 0.025mm to r = 0mm, i.e., back-to-front in order to avoid the problem of the division by zero. Just to illustrate, you may want to change the order, in which case Maple will abort the method immediately with a division by zero error. The next argument is the critical point: the initial values. We know the exact initial value for v (R) = 0 (which is zero due to the no-slip boundary condition), but we do not know the boundary condition for u (R). Here, the first guess is 1 which is as good as any guess. The last parameter indicates that we want a total of 11 points to be calculated, which is not a lot, but sufficient.
If you run line 4, you will see that the first guess is horribly wrong. The array displayed shows us that the value the algorithm finds for the second boundary is u (0) = ∞. This value results from the division by zero and would not by itself be problematic. However, if you look at the second-to-last value, you will see that it is u (0.0025) = 1247.5.
This value is definitely not approaching 0 and, in fact, if we look at the calculated values of u (r), we can see that they are growing exponentially. This guess is therefore definitely wrong.
Our second guess (see line 5) makes things even worse, as u (0.0025) = 2237.5 and the function is growing even more rapidly. Therefore, it seems reasonable to assume a negative value for u (R). In fact, as we know the analytical solution given by Eq. 27.49, we know that this value must be negative. We even know its exact value to be
However, in most applications, we do obviously not have access to the correct solutions so shooting is the only option. Our third guess (see line 6) is definitely better. However the function u (r) still grows where it should actually converge to zero. If we plot the output of this guess together with the analytical solution, we obtain Fig. 27.14a. As you can see, the solution is off from the correct result. If we run the last guess (see line 7), we find that the function u (r) now converges to zero. If we plot the results from this guess, we obtain Fig. 27.14b. As you can see, this is the correct result. The plots are produced from line 9 and line 10.
Introduction. We will introduce a second example of a shooting method using a system of ODEs that we have derived during the study of the pendant drop (see section 24.5). We have used a numerical scheme (see listing 24.2) to solve the rather complicated ODE given by Eq. 24.10, which describes the shape of the drop. For the approach of del Río and Neumann, we required Eq. 24.22, Eq. 24.23, Eq. 24.24, and Eq. 24.16 given by
At the time, we noted that using the path variable s is actually impractical because this value tells us little about our problem, and we do not require it as a result. Also it would be desirable to rewrite these equations as a function of the volume V of the drop because this is the actual independent variable. We can rewrite these equations using
in which case we find the following system of ODEs
where we have dropped the ODE for the path variable s, as we are not interested in this value.
Boundary Values. Obviously, our problem has a number of boundary values given by
where Vmax (the volume of the drop) and r0 (the diameter of the capillary to which it is suspended) are given and known values. Obviously, it would be ideal to start from the lower boundary, i.e., going from V = 0 to V = Vmax, as we know almost all boundary conditions and merely have to determine rmax. However, if we look at our ODEs, we can see that the value of x appears in several denominators; therefore, we would be faced with division-by-zero errors.
Therefore we will have to proceed from back-to-front starting from V = Vmax. Unfortunately, there are quite a number of values which we do not know and we have to shoot for three unknown boundary conditions at V = Vmax.
Implementation. Listing 27.10 shows the implementation of this shooting approach. It starts by implementing Eq. 27.54, Eq. 27.55, Eq. 27.56, and Eq. 27.57 (see line 1). The term is converted to the value -0.13625 using the values for water, keeping in mind that the gravity for the pendant drop is negative. We then apply the function RungeKutta4ODEs from CoreFunctions.txt in line 6. We pass the four ODEs and the range for V. Here we use a water droplet with a total volume of 18.24 µl. This example is taken from listing 24.2 - it is the largest droplet studied in this listing. We use a total of 201 data points and pass the only initial value we know, which is x (Vmax) = r0 = 0.5. This value is also taken from the example in listing 24.2.
You will note that the output is written in a way that only the last row of the array created will be displayed (see line 6). The reason for this is that we are only interested in the values of this last row, which gives the values of the boundary for V = 0. These values must be close to the known values for the boundary. If the shooting was correct, we should see a similar output here. For the boundary values shown in listing 27.10, line 6 we obtain the following values at the lower boundary
which are all sufficiently close to the desired values. Fig. 27.15 compares the values produced by our shooting method with the values we obtained previously in listing 24.2. As you can see, the results are very close.
Speeding Up the Shooting Process. It may take a significant amount of time to apply the shooting method to a complicated case such as the one shown here. Shooting three unknown boundary values obviously is a purely trial-and-error search for a good set of values. However, this process can be automated because an algorithm can try a wide range of values and determine the ones that yield the best result. This, in fact, is a strength of shooting. The quality of the result can be assessed by simply checking how close the produced values are to the desired ones. Obviously, in a case such as the one described here, searching manually is a very tedious process.
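As a sketch of such an automated search, the following Python snippet bisects on the far-boundary mismatch for a simple toy problem (y'' = -y with y(0) = 0 and y(π/2) = 1, which is not taken from the book):

```python
import math

# Automating the shooting search: instead of guessing by hand, bracket
# the unknown boundary value and bisect on the mismatch at the far
# boundary. Toy problem: y'' = -y, y(0) = 0, y(pi/2) = 1; we shoot for
# the unknown slope y'(0), whose exact value is 1.
def far_boundary_error(slope, points=2000):
    h = (math.pi / 2) / points
    y, v = 0.0, slope              # y' = v, v' = -y
    for _ in range(points):
        y, v = y + h * v, v - h * y
    return y - 1.0                 # mismatch against y(pi/2) = 1

lo, hi = 0.0, 5.0                  # bracket chosen so the mismatch changes sign
for _ in range(50):
    mid = (lo + hi) / 2
    if far_boundary_error(lo) * far_boundary_error(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(mid)                         # close to the exact slope y'(0) = 1
```

Any root-finding scheme works here; bisection is merely the simplest choice, and the quality measure is exactly the one described above: how close the produced boundary value is to the desired one.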
As we have seen in this section, the method of shooting is one of the easiest methods to implement whenever an ODE or system of ODEs with mixed boundary conditions is encountered. However, it may require a significant number of trials until a good solution is obtained. In many application scenarios, it is reasonable to assume that a vague idea about the size or at least the sign of the unknown second boundary condition is available. This speeds up the solution process significantly. We have seen one example derived from the Navier-Stokes equation in cylindrical coordinates that we were able to solve with sufficient accuracy using a shooting approach to determine the unknown boundary condition.
The method of shooting is not the only method we can use when we have mixed boundary conditions. One very important set of methods is referred to as function approximation methods. As the name suggests, we use a function that we assume can approximate the solution to the ODE. We usually refer to this function as the trial function. There are many different types of trial functions one can use, but they will contain some unknown parameters P0, P1, …, Pn which need to be determined. The trial function is usually denoted T (x, P0, P1, …, Pn), where x is the independent variable and P0, P1, …, Pn are the parameters.
In the next step, we would substitute the trial function into the ODE. If we are very lucky, we will find that the trial function solves the ODE. This would make the trial function a solution immediately. As can be imagined, this case rarely happens in practical applications. We would rather be left with an equation that has n + 1 unknowns and which (being a single equation) will therefore not yield a solution at all.
Collocation Methods. This is where we will use a collocation method. Instead of demanding that the substituted trial function solve the original ODE on the whole domain of x, we simply demand that it satisfies it at n + 1 discrete values xi. Evaluating the substituted trial function at these values will give a set of n + 1 equations, which we have to solve for the n + 1 unknowns, i.e., the parameters Pi. At each discrete value xi the ODE would then be satisfied. This is why these approaches are referred to as collocation methods. At the n + 1 distinct collocation points, our solution will be exact. Obviously, the higher the number of parameters used, the more collocation points we will have and the more exact the solution.
Example. In order to illustrate this method, we will use the Navier-Stokes equation in cylindrical coordinates Eq. 27.45 again, which is given by
Obviously, we know the analytical solution to this ODE, which is given by Eq. 16.20. As we will see, we can also derive this solution using a collocation approach.
A very common type of trial function is a polynomial series, for one simple reason: such functions are linear in the parameters and can be differentiated term by term. As an example, consider the following trial function
where fi (r) are functions which are chosen because we assume that the solution may be well approximated by them. Given our ODE, we may assume that a suitable series of functions would be
If we differentiate this trial function with respect to the independent variable, we obtain the general form
This means that irrespective of the type of function fi, we obtain again a polynomial, but this time in the derivatives dfi/dr. For our assumed solution we would obtain
Obviously, the trial function must be substituted into the ODE in order to test whether or not it is a good approximation of a solution. The resulting function is referred to as the residual function ϱ(x, P0, P1, …, Pn). If we apply our trial function Eq. 27.60 to our ODE Eq. 27.58 we obtain
As you can see, we are now left with an ordinary equation as the differentials have disappeared. Unfortunately, it contains the unknowns Pi for which we do not have values at the moment. However, we are now free to evaluate this function for arbitrary values ri. If we do this for n values ri, this leaves us with n residual equations. We now demand that at each of these points, Eq. 27.61 is fulfilled, i.e., that the left-hand and the right-hand sides match. This gives us the required number of equations to solve for all unknowns Pi.
One point to keep in mind is that we do not require all of these equations from collocation points, because we have boundary conditions. The no-slip boundary condition allows us to set our trial function to zero for r = R
The symmetry condition allows us to set the derivative to zero at r = 0
This means that if we have a total of n + 1 parameters, we only require n - 1 additional equations, as we already have two equations derived from the boundary conditions.
In the following, we will illustrate the method using a trial function with n = 3 given by
which we resubstitute into Eq. 27.58 to find the residual function
We now have the two boundary conditions, which give rise to the following two equations
We therefore only require two additional equations. We choose, somewhat arbitrarily, r1 = R and r2 = , in which case we find from Eq. 27.63 the two equations
In general, the four equations Eq. 27.64, Eq. 27.65, Eq. 27.66, and Eq. 27.67 make up a linear system of equations which can be solved by any method discussed in section 25.2 or section 25.3. However, in this case, the system is so simple that it can be solved by hand. From Eq. 27.65 we can see that P1= 0 in which case we are left with the following three equations
Subtracting Eq. 27.69 from Eq. 27.70 yields
in which case we find from Eq. 27.69 that
and from Eq. 27.68 that
At this point, all unknowns have been determined and we can note down the final solution according to Eq. 27.62
which obviously is the correct solution (see Eq. 16.20).
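The hand calculation above can be retraced numerically (in Python; the constant C stands in for the pressure-gradient term, and the second collocation point r2 = R/2 is our own assumption):

```python
# Collocation sketch for T'' + T'/r = C with trial function
# T(r) = P0 + P1*r + P2*r^2 + P3*r^3. C stands for the pressure-gradient
# term; its value and the choice r2 = R/2 are assumptions for illustration.
C, R = -4.0, 1.0

def residual_row(r):
    # residual = P1/r + 4*P2 + 9*P3*r  (coefficients of P0..P3)
    return [0.0, 1.0 / r, 4.0, 9.0 * r]

A = [
    [1.0, R, R**2, R**3],      # T(R) = 0          (no-slip)
    [0.0, 1.0, 0.0, 0.0],      # T'(0) = 0         (symmetry)
    residual_row(R),           # residual at r1 = R
    residual_row(R / 2),       # residual at r2 = R/2 (assumed choice)
]
b = [0.0, 0.0, C, C]

def solve(A, b):
    # plain Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(col + 1, n):
            f = M[i][col] / M[col][col]
            for j in range(col, n + 1):
                M[i][j] -= f * M[col][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

P = solve(A, b)
print(P)    # P1 = P3 = 0, P2 = C/4, P0 = -C*R**2/4, i.e. T = (C/4)*(r**2 - R**2)
```

The numerical solution reproduces the hand calculation: the odd coefficients vanish and the remaining two recover the parabolic profile.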
Assessment. This result is somewhat astonishing, as we did not even use numerical methods (in the classical sense) to solve the problem. This is due to the nature of the trial function we used: We picked the correct type of collocation function, which allowed us to simply determine the missing coefficients. Obviously, if we had chosen a different collocation function, we would have been left with somewhat different results, which would have required a numerical solution.
In this section we have discussed function approximation methods and how they are used to solve ODEs with boundary conditions. Approximation methods are a large class of methods that are used in many aspects of numerical modeling. We have briefly outlined their use using the Navier-Stokes equation in cylindrical coordinates as exemplary ODE for which we were able to derive the correct solution.
In this section we have studied methods for solving first- and higher-order ordinary differential equations, as well as systems of differential equations numerically. As we have seen, there is a wide choice of methods available from which we can choose. These methods are usually selected based on their computational cost, as well as their numerical stability and the speed at which they converge to a solution. In most applications, a trade-off between speed, stability, and precision must be made. The choice of method usually depends on the application. As we have seen, one of the most commonly used numerical schemes for solving ODEs is the fourth-order Runge-Kutta method. You will find this method to be a very good first candidate whenever an ODE must be solved.