Chapter 29

Computational Fluid Dynamics

29.1 Introduction

Modern fluid mechanical problems would be impossible to solve without the use of computational fluid dynamics (CFD). As we have seen, the scope of analytical solutions to the fundamental equations of fluid mechanics is very limited and, once a more difficult geometry is encountered, we usually have to resort to a numerical method to obtain a solution. In the following sections we will study the most important techniques in CFD: weighted residual methods, with the most important algorithm being the Galerkin method (see section 29.2); spectral methods (see section 29.2.4); FDM (see section 30); FVM (see section 31); and FEM (see section 32). All of these methods have their advantages and disadvantages. In general, they differ in their general applicability, their robustness, and their computational cost. As a rough rule of thumb, FEM is the most robust and most widely applicable method. Its mesh generation is flexible and can be adapted to almost all geometries. FVM is less flexible than FEM but more exact. It still allows adaptation to a wide range of geometries and usually yields better results. FDM yields even more precise results, but the method is less widely applicable. This is mainly due to the fact that it uses regular elements such as squares (in two dimensions) and cubes (in three dimensions), which adapt less well to, e.g., curved surfaces. In addition, the method is prone to numerical instability, especially if the solution involves singularities. Still, for regular geometries (as commonly encountered in microfluidics) FDM is suitable for obtaining exact solutions. Spectral methods can only be applied to a small range of problems, typically those with periodic boundary conditions. However, spectral methods are robust and yield the highest numerical precision of all of these methods, although at substantial computational cost.
In this section we will begin by introducing the Galerkin method, which is the underlying concept used for all relevant numerical methods. As we will see, the underlying mathematics of all of these methods is not complicated and can be conveniently implemented algorithmically. Many commercial solvers, e.g., ANSYS (www.ansys.com) or COMSOL (www.comsol.com), implement one or several of these tools. They form the backbone of many solutions to problems in fluid mechanics and adjacent fields that cannot be solved analytically and which require numerical solutions.

The term CFD is used to refer to a wide spectrum of numerical methods used for solving complex three-dimensional and time-dependent flow problems. This scientific field originally developed from early approaches to solve the Navier-Stokes equation numerically. Solving this equation remains one of the most challenging problems in numerical physics even today.

Until the advent of modern computers, numerical solutions to fluid mechanical problems were restricted to special cases where simplification methods based, e.g., on the method of characteristics (see section 8.3.2) could be used. Relaxation schemes were among the first methods to see wider use, as several simplifications to the manual schemes eased the calculations. One of the most important methods was described by Southwell [1]. Frankel was the first to introduce SOR (see section 25.3.4) for similar applications [2].

The first approaches to what would eventually become FEM were provided by Ritz [3], who used the concept of trial functions to satisfy the differential equation on small patches only. The problem with this method was that it did not cope well with boundary values. Courant later adapted the method, using triangular patches and introducing the concept of the pyramidal trial functions [4]. This concept was finally molded to the form we know today as FEM by Clough in 1960 [5]. Clough is also credited with having first used the term “finite element method.” FEM has since become one of the most important and influential methods for solving differential equations in structural mechanics and fluid mechanics.

The first paper of note on FDM is the paper by Courant, Friedrichs, and Lewy [6]. During the Second World War major contributions were made by von Neumann, as well as Crank and Nicolson [7]. In 1968 Chorin developed the concept of artificial compressibility, which is, as we will discuss in section 34.2.2, a very important prerequisite for solving fluid mechanical problems for incompressible fluids [8]. Based on this work, Patankar and Spalding introduced the semi-implicit method for pressure-linked equations (SIMPLE) algorithm [9] and subsequent improvements [10]. The most recent method is FVM, which was introduced in the late 1980s, with some of the most important contributions by Hu and Joseph [11]. More information on the development of CFD methods can be found in the excellent reviews by Thomée [12] and Afonso et al. [13].

In the following, we will discuss the most important methods, starting with the weighted residual methods.

29.2 Galerkin Method

29.2.1 Introduction

The Galerkin method is one of the most important methods in finite element analysis. Originally developed for structural mechanics, the method itself can be used to find solutions to all types of differential equations. It is especially suited for all applications of differential equations where boundary conditions must be applied.

The Galerkin method is a special type of weighted residual method. We will therefore begin with a short introduction to weighted residual methods in general.

29.2.2 Weighted Residual Methods

29.2.2.1 Linearly Independent Functions

In the following, we are looking for the dependent function g, which is a function of the independent variable x. In our example, we therefore restrict ourselves to a function with only one independent variable. We do not know the function g (x). The only thing we know of this function is given in the form of a differential equation. Now if we have a set of linearly independent functions ψi, we can express the unknown function g (x) as an infinite sum of these functions:

\[ g(x) = \sum_{i=1}^{\infty} c_i \psi_i(x) \]  (Eq. 29.1)

If we use an infinite number of terms, we obtain the function g (x) exactly. However, in order to make these methods computationally feasible, residual methods approximate the solution with a finite number of terms only. This will make the solution inexact. We therefore refer to it as an approximation function g̃ (x) given by

\[ \tilde{g}(x) = \sum_{i=1}^{N} c_i \psi_i(x) \]  (Eq. 29.2)

Here we have used a total of N terms to approximate g (x). Obviously, the higher N, the more exact the solution will be, but the more computationally expensive the method will become. In all cases, the functions ψi will be known or there will be good initial guesses about which functions should be used. What remains to be determined are the factors ci, which are referred to as the weighting factors. We still need to find a way to derive these constants.

29.2.2.2 Differential Operator

For finding ci we turn to the differential equation, which would also be the basis of an analytical solution. We rewrite the differential equation in the general form using the linear differential operator L as

\[ L\left(x, g, \frac{dg}{dx}, \frac{d^2g}{dx^2}\right) = 0 \]  (Eq. 29.3)

In this notation, L is a function of x as well as of g, dg/dx, and d²g/dx², the latter three themselves being functions of the only independent variable x. Obviously, Eq. 29.3 must hold for all values of x, i.e., the left-hand side must amount to zero at every point within the domain.

29.2.2.3 Weighting Function

We could now multiply the left-hand side by an arbitrary weighting function w (x). Obviously, we make no mistake by doing this because the left-hand side amounts to zero for all values of x. Multiplying zero by an arbitrary function will again amount to zero. Therefore we remain exact if we write

\[ w(x)\, L\left(x, g, \frac{dg}{dx}, \frac{d^2g}{dx^2}\right) = 0 \]  (Eq. 29.4)

We could also integrate this equation to

\[ \int w(x)\, L\left(x, g, \frac{dg}{dx}, \frac{d^2g}{dx^2}\right) dx = 0 \]  (Eq. 29.5)

which would still be exact. The reason for this is the fact that integrating a function that is zero for all values of the independent variable will obviously yield a total of zero for the integral. At this point it is not obvious why we need the weighting function at all.

29.2.2.4 Approximation Functions

Our main problem is that we still do not know the function g (x). However, we have stated in Eq. 29.1 that we can simply replace the unknown function by a series, in which case we find Eq. 29.4 to be

\[ w(x)\, L\left(x, \sum_{i=1}^{\infty} c_i \psi_i(x), \frac{d}{dx}\left(\sum_{i=1}^{\infty} c_i \psi_i(x)\right), \frac{d^2}{dx^2}\left(\sum_{i=1}^{\infty} c_i \psi_i(x)\right)\right) = 0 \]
\[ w(x)\, L\left(x, \sum_{i=1}^{\infty} c_i \psi_i(x), \sum_{i=1}^{\infty} c_i \frac{d\psi_i}{dx}, \sum_{i=1}^{\infty} c_i \frac{d^2\psi_i}{dx^2}\right) = 0 \]  (Eq. 29.6)

where we still have an exact solution. However, as already stated, we need this to work with only a finite number N of functions ψi in order to obtain a computationally useful method. Doing so, we obtain inexact solutions, as indicated in Eq. 29.2. Eq. 29.6 then reads

\[ w(x)\, L\left(x, \sum_{i=1}^{N} c_i \psi_i(x), \sum_{i=1}^{N} c_i \frac{d\psi_i}{dx}, \sum_{i=1}^{N} c_i \frac{d^2\psi_i}{dx^2}\right) = R \]  (Eq. 29.7)

where we now have a nonzero right-hand side. The error we make by neglecting terms in the approximation is referred to as the residual R. The problem with the residual is that we make small mistakes for all values of the independent variable. Obviously, we would like the overall deviation from the correct solution in the whole computational domain to be as small as possible. We obtain this if we demand that the integral of the residual over the whole domain be zero:

\[ \int R \, dx \overset{!}{=} 0 \]  (Eq. 29.8)

We therefore integrate Eq. 29.7 to find

\[ \int w(x)\, L\left(x, \sum_{i=1}^{N} c_i \psi_i(x), \sum_{i=1}^{N} c_i \frac{d\psi_i}{dx}(x), \sum_{i=1}^{N} c_i \frac{d^2\psi_i}{dx^2}(x)\right) dx = \int R \, dx \overset{!}{=} 0 \]  (Eq. 29.9)

Obviously, we still require a weighting function. The different schemes for solving these types of problems differ in the type of weighting function applied.

29.2.3 Galerkin Method

29.2.3.1 Weighting Function

The Galerkin method uses the basis functions ψj of our approximation of g (x) as weighting functions. We find the following equations:

\[ \int \psi_j(x)\, L\left(x, \sum_{i=1}^{N} c_i \psi_i(x), \sum_{i=1}^{N} c_i \frac{d\psi_i}{dx}(x), \sum_{i=1}^{N} c_i \frac{d^2\psi_i}{dx^2}(x)\right) dx = 0 \]  (Eq. 29.10)

where we have used the index j for the weighting functions since each weighting function yields a separate equation. As we have a total of N basis functions ψj, we will obtain a total of N algebraic equations from Eq. 29.10. From these we can derive the unknown coefficients ci.
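
To make the recipe concrete, here is a minimal Python/SymPy sketch (our own illustration; the book's worksheets use Maple) of Eq. 29.10 for the simple ODE d²g/dz² + 100 = 0 on −1 ≤ z ≤ 1 with g(±1) = 0, using the single basis function ψ = z² − 1:

```python
import sympy as sp

# Galerkin sketch for g''(z) + 100 = 0 on [-1, 1], g(-1) = g(1) = 0,
# with a single basis function psi(z) = z**2 - 1 (it satisfies the BCs).
z, c = sp.symbols('z c')
psi = z**2 - 1
g_approx = c * psi                        # one-term approximation, cf. Eq. 29.2
residual = sp.diff(g_approx, z, 2) + 100  # the residual R, cf. Eq. 29.7
# Weight the residual with the basis function and demand the integral to vanish:
equation = sp.integrate(psi * residual, (z, -1, 1))
c_value = sp.solve(equation, c)[0]
print(c_value)  # -50, i.e., g ~ -50*(z**2 - 1)
```

Because the residual of the quadratic ansatz is constant here, the one-term Galerkin solution happens to coincide with the exact solution g(z) = −50 (z² − 1).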

Weak Form. Eq. 29.10 is often referred to as the weak form of a differential equation. This is due to the fact that a differential equation requires a solution to satisfy the equation locally at every point within the domain. In contrast to this, residual methods turn the differential equation into an integral form. In this form, the equation must be satisfied across the domain on a global level. This means that the solution does not necessarily fulfill the differential equation at every point within the domain. The solution we obtain will be designed such that the global error, i.e., the residual summed up over the whole domain, will be zero. This means that at certain points the solution will have values that are too high, whereas at other points in the domain it will have values that are too low. In sum, the errors cancel and we obtain an average solution to the differential equation. The fact that this solution is not necessarily exact at each and every point within the domain makes it a weak solution. In many applications, weak solutions are perfectly sufficient. However, it is important to keep in mind that many of the methods commonly used in numerical analysis of fluid mechanical problems only yield weak solutions.

29.2.3.2 Example

Hagen-Poiseuille Flow. We will illustrate the Galerkin method using the ODE from which we derived the Hagen-Poiseuille flow profile, Eq. 16.18. We simplify the equation by assuming the following values for the variables:

\[ \eta = 1\,\mathrm{mPa\,s}, \quad \frac{dp}{dz} = -0.001\,\mathrm{mbar\,mm^{-1}} \quad \Rightarrow \quad -\frac{1}{\eta}\frac{dp}{dz} = 100\,\mathrm{mm^{-1}\,s^{-1}} \]

We also normalize the radius, thereby bounding it to 0 ≤ r ≤ 1. In this case we find from Eq. 16.18

\[ \frac{1}{r}\frac{dv}{dr} + \frac{d^2v}{dr^2} = -100 \quad \Leftrightarrow \quad \frac{d^2v}{dr^2} + \frac{1}{r}\frac{dv}{dr} + 100 = 0 \]  (Eq. 29.11)

where we dropped the unit by assuming that the solution would be obtained in the unit mm s−1. Obviously, the boundary conditions are

\[ \frac{dv}{dr}(0) = 0 \quad \text{(symmetrical profile)}, \qquad v(1) = 0 \quad \text{(no-slip)} \]

Basis Functions. We have to choose suitable basis functions ψi for our approximated solution and must make sure that the chosen functions fulfill the boundary conditions. The chosen example is a very common case where the first derivative at the lower boundary and the value of the solution at the upper boundary are zero. One obvious function that fulfills this criterion is the cosine function. We therefore choose a series of cosines as basis functions given by

\[ \psi_0 = \cos\left(\frac{\pi}{2} r\right) \]  (Eq. 29.12)

\[ \psi_1 = \cos\left(\frac{3\pi}{2} r\right) \]  (Eq. 29.13)

\[ \psi_2 = \cos\left(\frac{5\pi}{2} r\right), \qquad \psi_i = \cos\left(\frac{(2i+1)\pi}{2} r\right) \]  (Eq. 29.14)

where we will restrict ourselves to the first three terms. As you can see, when setting r = 1 we obtain zero in all cases. We therefore find the following approximation for v (r) and its derivatives:

\[ v(r) \approx c_0 \cos\left(\frac{\pi}{2}r\right) + c_1 \cos\left(\frac{3\pi}{2}r\right) + c_2 \cos\left(\frac{5\pi}{2}r\right) \]  (Eq. 29.15)

\[ \frac{dv}{dr}(r) \approx -c_0 \frac{\pi}{2}\sin\left(\frac{\pi}{2}r\right) - c_1 \frac{3\pi}{2}\sin\left(\frac{3\pi}{2}r\right) - c_2 \frac{5\pi}{2}\sin\left(\frac{5\pi}{2}r\right) \]  (Eq. 29.16)

\[ \frac{d^2v}{dr^2}(r) \approx -c_0 \left(\frac{\pi}{2}\right)^2\cos\left(\frac{\pi}{2}r\right) - c_1 \left(\frac{3\pi}{2}\right)^2\cos\left(\frac{3\pi}{2}r\right) - c_2 \left(\frac{5\pi}{2}\right)^2\cos\left(\frac{5\pi}{2}r\right) \]  (Eq. 29.17)

As you can see, the first derivative is made up of sine terms, which are obviously zero at r = 0. In general, mixed boundary conditions usually give rise to sine or cosine approaches. However, in some cases it may also be suitable to choose a combination of polynomial and trigonometric functions, which we are free to do as long as the basis functions remain linearly independent. This is the main advantage of weighted residual methods: we can choose the basis functions to suit our needs.

Determining the Weighting Constants. Next we need to determine the unknown weighting constants. For this we apply Eq. 29.10 using each of the three basis functions ψ0, ψ1, and ψ2 in turn. Rewriting Eq. 29.11 with our approximation for v (r) and its derivatives yields

\[ -c_0\left(\frac{\pi}{2}\right)^2\cos\left(\frac{\pi}{2}r\right) - c_1\left(\frac{3\pi}{2}\right)^2\cos\left(\frac{3\pi}{2}r\right) - c_2\left(\frac{5\pi}{2}\right)^2\cos\left(\frac{5\pi}{2}r\right) - \frac{1}{r}\left(c_0\frac{\pi}{2}\sin\left(\frac{\pi}{2}r\right) + c_1\frac{3\pi}{2}\sin\left(\frac{3\pi}{2}r\right) + c_2\frac{5\pi}{2}\sin\left(\frac{5\pi}{2}r\right)\right) + 100 = 0 \]  (Eq. 29.18)

We begin with ψ0 for which we find the first integral according to Eq. 29.10 as

\[ \int_0^1 \cos\left(\frac{\pi}{2}r\right)\left(-c_0\left(\frac{\pi}{2}\right)^2\cos\left(\frac{\pi}{2}r\right) - c_1\left(\frac{3\pi}{2}\right)^2\cos\left(\frac{3\pi}{2}r\right) - c_2\left(\frac{5\pi}{2}\right)^2\cos\left(\frac{5\pi}{2}r\right) - \frac{1}{r}\left(c_0\frac{\pi}{2}\sin\left(\frac{\pi}{2}r\right) + c_1\frac{3\pi}{2}\sin\left(\frac{3\pi}{2}r\right) + c_2\frac{5\pi}{2}\sin\left(\frac{5\pi}{2}r\right)\right) + 100\right) dr = 0 \]  (Eq. 29.19)

For ψ1 we would obtain

\[ \int_0^1 \cos\left(\frac{3\pi}{2}r\right)\left(-c_0\left(\frac{\pi}{2}\right)^2\cos\left(\frac{\pi}{2}r\right) - c_1\left(\frac{3\pi}{2}\right)^2\cos\left(\frac{3\pi}{2}r\right) - c_2\left(\frac{5\pi}{2}\right)^2\cos\left(\frac{5\pi}{2}r\right) - \frac{1}{r}\left(c_0\frac{\pi}{2}\sin\left(\frac{\pi}{2}r\right) + c_1\frac{3\pi}{2}\sin\left(\frac{3\pi}{2}r\right) + c_2\frac{5\pi}{2}\sin\left(\frac{5\pi}{2}r\right)\right) + 100\right) dr = 0 \]  (Eq. 29.20)

And finally for ψ2, we obtain

\[ \int_0^1 \cos\left(\frac{5\pi}{2}r\right)\left(-c_0\left(\frac{\pi}{2}\right)^2\cos\left(\frac{\pi}{2}r\right) - c_1\left(\frac{3\pi}{2}\right)^2\cos\left(\frac{3\pi}{2}r\right) - c_2\left(\frac{5\pi}{2}\right)^2\cos\left(\frac{5\pi}{2}r\right) - \frac{1}{r}\left(c_0\frac{\pi}{2}\sin\left(\frac{\pi}{2}r\right) + c_1\frac{3\pi}{2}\sin\left(\frac{3\pi}{2}r\right) + c_2\frac{5\pi}{2}\sin\left(\frac{5\pi}{2}r\right)\right) + 100\right) dr = 0 \]  (Eq. 29.21)

Obviously, Eq. 29.19, Eq. 29.20, and Eq. 29.21 can be solved for the unknown constants, although the calculation is somewhat tedious to carry out by hand.
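
Before turning to the Maple worksheet, it is instructive to see that the same system can be assembled and solved numerically. The following Python sketch (an illustrative re-implementation using scipy; the variable names are our own) evaluates the integrals of Eq. 29.19 to Eq. 29.21 by quadrature and solves the resulting linear system:

```python
import numpy as np
from scipy.integrate import quad

# Galerkin solution of  v'' + v'/r + 100 = 0,  v'(0) = 0, v(1) = 0,
# with the basis psi_i(r) = cos(k_i r), k_i = (2i + 1)*pi/2.
N = 3
k = np.array([(2 * i + 1) * np.pi / 2 for i in range(N)])

A = np.zeros((N, N))  # system matrix, row j = weighting function psi_j
b = np.zeros(N)       # right-hand side (the constant 100 moved over)
for j in range(N):
    for i in range(N):
        # second-derivative term: -k_i^2 * int cos(k_j r) cos(k_i r) dr
        A[j, i] -= k[i]**2 * quad(lambda r: np.cos(k[j]*r) * np.cos(k[i]*r), 0, 1)[0]
        # (1/r)-term: -k_i * int cos(k_j r) sin(k_i r) / r dr
        A[j, i] -= k[i] * quad(lambda r: np.cos(k[j]*r) * np.sin(k[i]*r) / r, 0, 1)[0]
    b[j] = -100 * quad(lambda r: np.cos(k[j]*r), 0, 1)[0]

c = np.linalg.solve(A, b)

def v(r):
    return sum(ci * np.cos(ki * r) for ci, ki in zip(c, k))

print(c)  # approx. [25.6927, -0.9858, 0.1803]
```

The coefficients agree with the values the Maple worksheet determines below, and the no-slip condition v(1) = 0 is satisfied by construction of the basis.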

29.2.3.3 Maple Worksheet

At this point, we will implement the Galerkin method in Maple. The listing is shown in listing 29.1. The function DoGalerkin implements the Galerkin method. It expects the operator L of the differential equation as its first argument. This operator must recreate the differential equation from the arguments d²g/d(independent)², dg/d(independent), g (independent), and independent, in this order. The argument independent indicates the independent variable. psi is expected to be a list of basis functions. The domain must be given as the argument range.

Listing 29.1

[Maple] Listing for the Galerkin method. This listing is part of the listing GalerkinMethod.mw, which can be found in the supplementary material.

 1 restart:read "Core.txt":
 2
 3 DoGalerkin:=proc(L,independent,psi,range) local n,i,j,wf,equs,approx,dapprox,ddapprox,currentpsi,sol;
 4   n:=numelems(psi):
 5   wf:=Vector(1..n):
 6   equs:=Vector(1..n):
 7   for i to n do
 8     wf[i]:=c[i];
 9   end do:
10   approx:=0;
11   for i to n do
12     approx:=approx+wf[i]*psi[i];
13   end do:
14   dapprox:=diff(approx,independent):
15   ddapprox:=diff(dapprox,independent):
16   for j to n do
17     currentpsi:=unapply(psi[j],independent);
18     equs[j]:=evalf(int(currentpsi(independent)*L(ddapprox,dapprox,approx,independent),independent=range))=0:
19   end do:
20   sol:=solve({seq(equs[i],i=1..n)});
21   approx:=unapply(approx,independent,seq(c[i],i=1..n));
22   return unapply(approx(independent,seq(rhs(sol[i]),i=1..n)),independent);
23 end proc:
24
25 fGalerkin:=DoGalerkin((ddg,dg,g,r)->ddg+1/r*dg+100,r,[cos(Pi/2*r),cos(3*Pi/2*r),cos(5*Pi/2*r)],0..1);
26 fGalerkinChebyshev:=DoGalerkin((ddg,dg,g,z)->ddg+100,z,[z^2-1],-1..1);

The function first determines the number of basis functions ψi from which the total number of weighting factors n is determined (see line 4). It then initializes the vector wf, which stores the weighting factors. These are initialized to the variables ci (see line 8). Next the function builds the estimation function approx by summing up the individual functions ψi multiplied with the respective weighting factor ci. It then builds the first (see line 14) and the second (see line 15) derivatives from this function. In our chosen example, these would be Eq. 29.15, Eq. 29.16, and Eq. 29.17. The loop starting from line 16 first converts the basis functions ψi to a function of the independent variable and then forms equation j as given by Eq. 29.10 by forming the integral of the product of basis function ψj with the differential operator L. As you can see, the operator is treated as a function that must “reassemble” the differential equation if provided with the second derivative of the estimation function, the first derivative of the estimation function, the estimation function itself, and the independent variable. The integral is then formed over the domain and set to 0. The equation is then stored as entry j in the vector equs (see line 18).

Line 20 solves the equations stored in equs and returns the values found for the weighting factors ci. Line 21 turns the estimation function into a function of the weighting factors and the independent variable. This step is required in order to facilitate assigning the actual values for the factors that are stored in sol. Line 22 performs this extraction and turns the approximation function into a function of the independent variable only.

Line 25 returns to our original example. As you can see, we provide the differential operator L in the form of a function. This function accepts the two derivatives of the estimation function, the estimation function itself, and the independent variable as parameters. If these values are given this function recovers Eq. 29.11. We then indicate the independent variable r and give the basis functions, Eq. 29.12, Eq. 29.13, and Eq. 29.14. The domain is obviously 0 ≤ r ≤ 1. If you execute this line, Maple will yield the result

\[ r \mapsto 25.69265276\cos\left(\frac{\pi}{2}r\right) - 0.9857786406\cos\left(\frac{3\pi}{2}r\right) + 0.1803217180\cos\left(\frac{5\pi}{2}r\right) \]

which means that the weighting factors were determined to be

\[ c_0 = 25.69265276, \quad c_1 = -0.9857786406, \quad c_2 = 0.1803217180 \]

We will ignore line 26 for a moment as we will discuss this example later.

Comparison With the Correct Solution. We obviously know the analytical solution given by Eq. 16.19 to be

\[ v(r) = -25\,(r^2 - 1) \]

for our chosen constants. If we plot the analytical solution next to the solution obtained with the Galerkin method, we obtain Fig. 29.1a. As you can see, the Galerkin solution is very close to the exact solution. Fig. 29.1b shows the absolute error between the analytical solution and the solution found with the Galerkin method for different numbers of basis functions. As can be seen, the higher the number of terms, the smaller the absolute error.

Fig. 29.1 Example of the Galerkin method used to solve the differential equation for the Hagen-Poiseuille flow. The basis functions ψi used were a total of three cosine functions. a) As can be seen, the solution is very close to the analytical solution. b) Absolute error between the analytical solution and the solution found via the Galerkin method for different numbers of basis functions ψi. As can be seen, the errors become very small for a higher number of terms.

29.2.3.4 Multiple Independent Variables

So far, we have derived the weighted residual method for a single independent variable. However, the method can be applied to any number of independent variables x0, x1, . . . , xn. In this case the function g becomes a function of multiple independent variables, g (x0, x1, . . . , xn), and the differential operator L as well as the weighting function according to Eq. 29.5 must account for this. Eq. 29.5 then becomes

\[ \int \cdots \int w(x_0, x_1, \ldots, x_n)\, L\left(x_0, x_1, \ldots, x_n, g, \frac{\partial g}{\partial x_0}, \frac{\partial g}{\partial x_1}, \ldots, \frac{\partial g}{\partial x_n}, \frac{\partial^2 g}{\partial x_0^2}, \frac{\partial^2 g}{\partial x_1^2}, \ldots, \frac{\partial^2 g}{\partial x_n^2}\right) dx_0\, dx_1 \cdots dx_n = 0 \]  (Eq. 29.22)

Obviously, the integration must then be performed over all independent variables, i.e., the residual must vanish in the integral over all of them. We now also require a different approach for the estimation function g̃, which becomes a function of all independent variables, g̃ (x0, x1, . . . , xn). It would consist of N basis functions ψj (x0, x1, . . . , xn), which give rise to N linearly independent equations given by Eq. 29.10 as

\[ \int \cdots \int \psi_j(x_0, x_1, \ldots, x_n)\, L\left(x_0, x_1, \ldots, x_n, \sum_{i=1}^{N} c_i \psi_i, \sum_{i=1}^{N} c_i \frac{\partial \psi_i}{\partial x_0}, \sum_{i=1}^{N} c_i \frac{\partial \psi_i}{\partial x_1}, \ldots, \sum_{i=1}^{N} c_i \frac{\partial \psi_i}{\partial x_n}, \sum_{i=1}^{N} c_i \frac{\partial^2 \psi_i}{\partial x_0^2}, \sum_{i=1}^{N} c_i \frac{\partial^2 \psi_i}{\partial x_1^2}, \ldots, \sum_{i=1}^{N} c_i \frac{\partial^2 \psi_i}{\partial x_n^2}\right) dx_0\, dx_1 \cdots dx_n = 0 \]  (Eq. 29.23)

Obviously, this is simply a little more tedious to calculate but still not problematic. We will expand the function DoGalerkin from listing 29.1 to allow processing multiple independent variables.

Adapted Maple Worksheet. The listing is shown in listing 29.2. As you can see, the implementation of DoGalerkin2 is very similar to DoGalerkin apart from a few exceptions. The differential operator given as parameter L is now expected to be a function of (in this order) the following:

Listing 29.2

[Maple] Listing for the adapted Galerkin method. This listing is part of the listing GalerkinMethod.mw, which can be found in the supplementary material.

 1 DoGalerkin2:=proc(L,independent,psi,range) local n,nind,i,j,wf,equs,approx,dapprox,ddapprox,currentpsi,sol;
 2   n:=numelems(psi):
 3   nind:=numelems(independent):
 4   wf:=Vector(1..n):
 5   equs:=Vector(1..n):
 6   for i to n do
 7     wf[i]:=c[i];
 8   end do:
 9   approx:=0;
10   for i to n do
11     approx:=approx+wf[i]*psi[i];
12   end do:
13   dapprox:=seq(diff(approx,independent[i]),i=1..nind):
14   ddapprox:=seq(diff(diff(approx,independent[i]),independent[i]),i=1..nind):
15   for j to n do
16     currentpsi:=unapply(psi[j],seq(independent[i],i=1..nind));
17     equs[j]:=evalf(int(currentpsi(seq(independent[i],i=1..nind))*L(ddapprox,dapprox,approx,independent),seq(independent[i]=range[i],i=1..nind)))=0:
18   end do:
19   sol:=solve({seq(equs[i],i=1..n)});
20   approx:=unapply(approx,seq(independent[i],i=1..nind),seq(c[i],i=1..n));
21   return unapply(approx(seq(independent[i],i=1..nind),seq(rhs(sol[i]),i=1..n)),seq(independent[i],i=1..nind));
22 end proc:
23
24 fGalerkinRectPoisson:=DoGalerkin2((ddgy,ddgz,dgy,dgz,g,y,z)->ddgy+ddgz+100,[y,z],[cos(Pi/2*y)*cos(Pi/2*z),cos(3*Pi/2*y)*cos(Pi/2*z),cos(Pi/2*y)*cos(3*Pi/2*z),cos(3*Pi/2*y)*cos(3*Pi/2*z)],[0..1,0..1]);

- The second derivative of the dependent variable with respect to the first independent variable x0 given in independent
- The second derivative of the dependent variable with respect to the second independent variable x1 given in independent
- The second derivative of the dependent variable with respect to the ith independent variable xi given in independent
- The first derivative of the dependent variable with respect to the first independent variable x0 given in independent
- The first derivative of the dependent variable with respect to the second independent variable x1 given in independent
- The first derivative of the dependent variable with respect to the ith independent variable xi given in independent
- The dependent variable g
- The independent variables x0, x1, . . . , xn in the order given in independent

The argument independent is now assumed to be a list of the independent variables. The functions given in the argument psi are now expected to be functions of the independent variables in the same order as given in independent. The ranges are now expected to be a list of ranges referring to the independent variables in the same order as given in independent.

The function then determines the number of independent variables and stores it in nind (see line 3). In line 13 the first derivative of the approximation function is created with respect to the individual independent variables. The result is a list that is stored in dapprox. The same is repeated for the second derivative. The rest of the implementation is identical to the implementation of DoGalerkin with the exception that each call to the independent variables is replaced by a list of the independent variables.

Example: Two-Dimensional Poisson Equation. As a simple test of the Galerkin method we will solve the Poisson equation for rectangular flow profiles given by Eq. 16.40 as

\[ \frac{\partial^2 v_x}{\partial z^2} + \frac{\partial^2 v_x}{\partial y^2} = \frac{1}{\eta}\frac{dp}{dx} \]  (Eq. 29.24)

This example is interesting because we now have a PDE instead of the ODE of our last example (see Eq. 29.18). However, this fact is irrelevant for the Galerkin method, since the differential operator may involve ordinary or (as in this case) partial derivatives; the concept is the same. We reuse the values from the last example to turn the right-hand side into a numerical term. This again sets the unit of our solution to mm s−1. We can then rewrite Eq. 29.24 as

\[ \frac{\partial^2 v_x}{\partial z^2} + \frac{\partial^2 v_x}{\partial y^2} + 100 = 0 \]  (Eq. 29.25)

As you can see from the example in listing 29.2, line 24, the differential operator is simply the second derivative of the dependent function with respect to y, plus the second derivative with respect to z, plus 100. The independent variables are y and z. In this case, we need to come up with clever basis functions ψi. As in the last example, we note that along each of the two axes the function value must be zero at the boundaries due to the no-slip condition. We therefore choose our domains as −1 ≤ y ≤ 1 and −1 ≤ z ≤ 1. Along each axis, we can find several potential functions that would ensure the boundary conditions. However, we choose (again) a cosine series because we expect not only the no-slip condition at the boundaries but also symmetry in the center, i.e., the derivative in the center must be zero along both directions.

Please note that this choice of basis functions is arbitrary; there are many other functions that would suit our needs here. For the cosine series, ensuring that the boundary conditions are fulfilled, we choose

\[ \tilde{g}_{i,y}(y) = \cos\left(\frac{(2i+1)\pi}{2} y\right), \qquad \tilde{g}_{j,z}(z) = \cos\left(\frac{(2j+1)\pi}{2} z\right) \]
\[ \tilde{g}_{i,j}(y,z) = \tilde{g}_{i,y}(y)\, \tilde{g}_{j,z}(z) = \cos\left(\frac{(2i+1)\pi}{2} y\right)\cos\left(\frac{(2j+1)\pi}{2} z\right) \]
\[ \tilde{g}(y,z) = \sum_{i=0}^{i_{max}} \sum_{j=0}^{j_{max}} c_{i,j} \cos\left(\frac{(2i+1)\pi}{2} y\right)\cos\left(\frac{(2j+1)\pi}{2} z\right) \]

where we have exploited the fact that multiplying the two functions ensures that the boundary conditions are satisfied everywhere. Please note that this is not a particularly clever approach; there may be many more functions that would fulfill these requirements. However, as we will see, it is a completely sufficient approach.

If you look at listing 29.2, line 24, you can see that the chosen basis functions were g̃0,0, g̃1,0, g̃0,1, and g̃1,1. If you execute this line, Maple will return

\[ (y,z) \mapsto 32.85114320\cos\left(\frac{\pi}{2}y\right)\cos\left(\frac{\pi}{2}z\right) - 2.190076213\cos\left(\frac{3\pi}{2}y\right)\cos\left(\frac{\pi}{2}z\right) - 2.190076213\cos\left(\frac{\pi}{2}y\right)\cos\left(\frac{3\pi}{2}z\right) + 0.4055696692\cos\left(\frac{3\pi}{2}y\right)\cos\left(\frac{3\pi}{2}z\right) \]

with the determined values of c0, c1, c2, and c3 for the N = 4 basis functions.
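
Because the chosen cosines are orthonormal on −1 ≤ y, z ≤ 1, the Galerkin system for Eq. 29.25 actually decouples, and each coefficient can be written down directly. The following Python sketch (our own cross-check, not part of the Maple worksheet) reproduces the values returned by Maple:

```python
import numpy as np

# The cosines cos(k_i y) with k_i = (2i + 1)*pi/2 satisfy
# int_{-1}^{1} cos(k_a y) cos(k_b y) dy = delta_ab, so the Galerkin
# system for  v_yy + v_zz + 100 = 0  decouples into
#   c_ij = 400*(-1)**(i + j) / (k_i * k_j * (k_i**2 + k_j**2))
def k(i):
    return (2 * i + 1) * np.pi / 2

def coeff(i, j):
    return 400 * (-1) ** (i + j) / (k(i) * k(j) * (k(i) ** 2 + k(j) ** 2))

for (i, j) in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(i, j, coeff(i, j))  # 32.8511..., -2.1900..., -2.1900..., 0.4055...
```

This decoupling is a direct consequence of the orthogonality of the basis; for non-orthogonal basis functions the full linear system would have to be solved.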

Comparison With Correct Solution. Obviously, we know the analytical solution to Eq. 16.40 for which we have actually derived several solutions analytically (see Eq. 16.53 and Eq. 16.58). If we compare these solutions to the solution found using the Galerkin method we find that they match well (see Fig. 29.2).

Fig. 29.2 Example of the Galerkin method used to solve the two-dimensional Poisson equation for the rectangular channel flow. The basis functions ψi used were a total of four cosine functions. The analytical solution for comparison is Eq. 16.58 and was developed to nmax = 20. a) Again, the solution is very close to the analytical solution and it is almost impossible to distinguish them. b) Absolute error between the analytical solution and the solution found via the Galerkin method.

29.2.3.5 Systems of Differential Equations

As we have shown, it is straightforward to extend the Galerkin method to differential equations with multiple independent variables. The next obvious question is how to use the method if we have to solve a system of differential equations instead of a single equation.

Obviously, as we have a system consisting of M differential equations, we would be looking for a total of M dependent variables g1, g2, . . . , gM, each dependent on the independent variables x0, x1, . . . , xn. The differential operator Lk is then obviously different for each differential equation k. For each differential equation k we would choose suitable basis functions ψk,i. The basis functions can be identical, but they may also be different for each differential equation k in order to adapt to specific properties of the respective function. We would then obtain a total of M integrals of the form

\[ \int \cdots \int \psi_{k,j}(x_0, x_1, \ldots, x_n)\, L_k\left(x_0, x_1, \ldots, x_n, \sum_{i=1}^{N} c_{k,i} \psi_{k,i}, \sum_{i=1}^{N} c_{k,i} \frac{\partial \psi_{k,i}}{\partial x_0}, \sum_{i=1}^{N} c_{k,i} \frac{\partial \psi_{k,i}}{\partial x_1}, \ldots, \sum_{i=1}^{N} c_{k,i} \frac{\partial \psi_{k,i}}{\partial x_n}, \sum_{i=1}^{N} c_{k,i} \frac{\partial^2 \psi_{k,i}}{\partial x_0^2}, \sum_{i=1}^{N} c_{k,i} \frac{\partial^2 \psi_{k,i}}{\partial x_1^2}, \ldots, \sum_{i=1}^{N} c_{k,i} \frac{\partial^2 \psi_{k,i}}{\partial x_n^2}\right) dx_0\, dx_1 \cdots dx_n = 0 \]  (Eq. 29.26)

where k is the index of the differential equation with 1 ≤ k ≤ M, i is the index of the basis function with 1 ≤ i ≤ N, and j is the index of the weighting function with 1 ≤ j ≤ N, which is (for the Galerkin method) identical with the set of basis functions. For each differential equation k we have a set of N equations for determining the unknown coefficients ck,i. Once these coefficients have been determined, we have approximated solutions for each differential equation k.

Suitability for Algorithmic Implementation. We note one major problem with the Galerkin method for direct implementation as an algorithm: each equation requires the evaluation of N integrals of the form given in Eq. 29.26. Unless we use “clever” basis functions that are very easy to integrate numerically, the integrals must be evaluated analytically. One very common approach to simplifying this step is to use basis functions with known integration patterns such as trigonometric or exponential functions. Spectral methods exploit these properties, which makes the integration step significantly easier (see section 29.2.4). In most cases, such simplifications will also allow turning the evaluated integrals of Eq. 29.26 into simple expressions involving the unknown constants ck,i. These are the unknowns that need to be determined from the N · M equations of the form given by Eq. 29.26. If suitable basis functions are chosen, these equations can be converted to matrix form and solved using numerical methods for linear systems (see section 25.3).

29.2.4 Spectral Methods

There is another commonly employed class of techniques for solving differential equations: the spectral methods. Spectral methods follow the approach of the Galerkin method but use specific basis functions ψj in Eq. 29.10. The type of basis function used depends on the boundary conditions of the physical problem.

29.2.4.1 Periodic Boundary Condition

The basis functions of choice are sine and cosine functions of the form

\[ \psi_j(x) = \cos\left(j\frac{2\pi}{\lambda}x\right) + i \sin\left(j\frac{2\pi}{\lambda}x\right) = e^{i\frac{2\pi j}{\lambda}x} \]

where we have used the Euler formula (see Eq. 3.38) to rewrite the sine and cosine as an exponential term. These types of sine and cosine approaches are commonly used if the boundary conditions of a problem are periodic with a given periodicity λ, which shows up in the function.

The main advantage of spectral methods is that the basis functions can be differentiated very conveniently due to the exponential term: the pth derivative is simply the original function multiplied by the constant (i 2πj/λ)p. Differentiation thus reduces to a multiplication, which makes these methods computationally very efficient. In addition, spectral methods converge rapidly.
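This property can be demonstrated numerically. The sketch below (our illustration, assuming NumPy's FFT conventions) differentiates a periodic function by multiplying each Fourier mode by its wavenumber factor i·2πj/λ:

```python
import numpy as np

# Spectral differentiation sketch (our illustration): on a periodic domain,
# the p-th derivative of each mode e^{i 2 pi j x / lambda} is the mode itself
# multiplied by (i 2 pi j / lambda)^p. With the FFT this costs O(n log n).
n = 64
lam = 2 * np.pi                                 # periodicity lambda
x = np.arange(n) * lam / n
u = np.sin(x)

k = 2 * np.pi * np.fft.fftfreq(n, d=lam / n)    # wavenumbers 2 pi j / lambda
du = np.real(np.fft.ifft((1j * k) * np.fft.fft(u)))   # p = 1: multiply by i k

err = np.max(np.abs(du - np.cos(x)))
print(err)   # near machine precision: spectral accuracy
```

Note that only a multiplication in Fourier space is required; no finite-difference stencil or integration is needed, which is what makes these methods so efficient for periodic problems.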

29.2.4.2 Nonperiodic Boundary Condition

Chebyshev Polynomials of First Kind. Unfortunately, in most physical problems the boundary conditions are not periodic. As an example, take the no-slip condition on a microfluidic channel wall: this is a nonperiodic boundary condition. For these cases there is a second commonly employed class of basis functions, the Chebyshev1 polynomials.

The first polynomials are given by

\[ \begin{aligned} T_0 &= 1\\ T_1 &= x\\ T_2 &= 2x^2 - 1\\ T_3 &= 4x^3 - 3x\\ T_4 &= 8x^4 - 8x^2 + 1\\ T_5 &= 16x^5 - 20x^3 + 5x\\ T_j &= 2x\,T_{j-1}(x) - T_{j-2}(x) \end{aligned} \]

As you can see, subsequent polynomials can be calculated recursively. Think of these polynomials as being tailored to fit the requirements of nonperiodic boundary conditions. In some ways they are similar to the Bessel functions (see section 3.2.4), which are common solutions to ODEs and PDEs in cylindrical and spherical coordinates. In fact, Chebyshev polynomials of the first kind Tν are solutions to the Chebyshev differential equation, given by

\[ (1 - x^2)\frac{d^2y}{dx^2} - x\frac{dy}{dx} + \nu^2 y = 0 \]
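The recurrence Tj = 2x Tj−1(x) − Tj−2(x) is straightforward to implement. The following sketch (our illustration) evaluates it iteratively and checks the result against the explicit T5 and against the well-known identity Tj(cos θ) = cos(jθ):

```python
import math

# Chebyshev polynomials of the first kind via the recurrence
# T_j(x) = 2x T_{j-1}(x) - T_{j-2}(x) with T_0 = 1 and T_1 = x.
def chebyshev_T(j, x):
    if j == 0:
        return 1.0
    t_prev, t = 1.0, x
    for _ in range(j - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

x = 0.3
a = chebyshev_T(5, x)                 # via the recurrence
b = 16*x**5 - 20*x**3 + 5*x           # explicit T_5 from the list above
c = math.cos(5 * math.acos(x))        # identity T_j(cos t) = cos(j t)
print(a, b, c)                        # all three agree
```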

The Chebyshev polynomials of the first kind are defined on the domain −1 ≤ x ≤ 1, which is why any differential equation must be scaled to fit this domain in order to be solvable using these polynomials.

Chebyshev Polynomials of Second Kind. There are also Chebyshev polynomials of second kind Uν, which are defined by

\[ \begin{aligned} U_0 &= 1\\ U_1 &= 2x\\ U_2 &= 4x^2 - 1\\ U_3 &= 8x^3 - 4x\\ U_4 &= 16x^4 - 12x^2 + 1\\ U_5 &= 32x^5 - 32x^3 + 6x\\ U_j &= 2x\,U_{j-1}(x) - U_{j-2}(x) \end{aligned} \]

The Chebyshev polynomials of the second kind Uν are solutions to the Chebyshev differential equation of second kind given by

\[ (1 - x^2)\frac{d^2y}{dx^2} - 3x\frac{dy}{dx} + \nu(\nu + 2)y = 0 \]
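The second-kind recurrence can be verified the same way (our illustration); the Uj also satisfy the identity Uj(cos θ) = sin((j+1)θ)/sin θ:

```python
import math

# Chebyshev polynomials of the second kind via the recurrence
# U_j(x) = 2x U_{j-1}(x) - U_{j-2}(x) with U_0 = 1 and U_1 = 2x.
def chebyshev_U(j, x):
    if j == 0:
        return 1.0
    u_prev, u = 1.0, 2.0 * x
    for _ in range(j - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

x = 0.3
a = chebyshev_U(5, x)                          # via the recurrence
b = 32*x**5 - 32*x**3 + 6*x                    # explicit U_5 from the list above
t = math.acos(x)
c = math.sin(6 * t) / math.sin(t)              # identity U_j(cos t) = sin((j+1)t)/sin t
print(a, b, c)                                 # all three agree
```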

Visualization. The first four polynomials of the first kind Tν and of the second kind Uν are plotted in Fig. 29.3.

Fig. 29.3 The first four Chebyshev polynomials.

29.2.4.3 Example

We have unknowingly already used a spectral method approach in listing 29.1 and listing 29.2 when choosing a cosine function as a basis function to solve the differential equation, which yields the Hagen-Poiseuille flow profile. Here, we will add another example taken from the flow in an infinitesimally extended channel (see section 16.2) for which we found the ODE Eq. 16.9 given by

\[ \frac{d^2 v_x}{dz^2} = \frac{1}{\eta}\frac{dp}{dx} \]  (Eq. 29.27)

which we solved to yield the flow profile given by Eq. 16.13. This is a very good example of the use of Chebyshev polynomials.

We first normalize the channel height to h = 2 and shift the origin of the coordinate system into the center of the channel. Thus the independent variable z is defined on the domain −1 ≤ z ≤ 1, where the no-slip condition implies that vx (z = −1) = vx (z = 1) = 0. As stated, these are the prerequisites for using the Chebyshev polynomials. We are then virtually free to choose any of the polynomials. However, as Eq. 29.27 contains a second derivative, it makes little sense to use a polynomial of order less than 2, since such a polynomial would vanish upon differentiating twice. The first polynomial that fulfills this requirement is T2.
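The single Galerkin step behind this example can be sketched as follows. This is our own Python reconstruction, not the book's Maple listing: we use the trial function v(z) = c(1 − z²), a combination of T0 and T2 that satisfies the no-slip conditions, and assume (1/η)dp/dx = −100 mm⁻¹ s⁻¹ so that the flow is in the positive x-direction.

```python
# Galerkin sketch for Eq. 29.27 (our reconstruction, not the book's listing).
# Trial function: v(z) = c * (1 - z^2), which satisfies v(-1) = v(1) = 0.
def integrate(f, a=-1.0, b=1.0, n=2000):
    # simple trapezoidal rule on [a, b]
    h = (b - a) / n
    return sum(0.5 * (f(a + i*h) + f(a + (i+1)*h)) * h for i in range(n))

rhs = -100.0                                   # assumed (1/eta) dp/dx in mm^-1 s^-1
w = lambda z: 1.0 - z**2                       # Galerkin weight = basis function

# Residual of the trial function: v''(z) = -2c, so R(z) = -2c - rhs.
weighted_residual = lambda c: integrate(lambda z: w(z) * (-2.0 * c - rhs))

# The weighted residual is linear in c: (4/3) * (-2c + 100) = 0  =>  c = 50.
print(weighted_residual(50.0))   # ~ 0, hence v_x(z) = 50 * (1 - z^2) mm/s
```

Because the residual of this trial function is constant in z, a single weighted-residual equation suffices and the Galerkin step reproduces the parabolic profile exactly.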

We skipped line 26 of listing 29.1 earlier since it illustrates the usage of the Chebyshev polynomials. For this example, we use T2 as the only basis function for the Galerkin method. If you execute this line, Maple will return the solution

\[ -50z^2 + 50 \]  (Eq. 29.28)

which is the correct solution if you compare it with Eq. 16.13, which yields for the values chosen

\[ \eta = 1\,\mathrm{mPa\,s}, \quad \frac{dp}{dx} = -0.001\,\mathrm{mbar\,mm^{-1}}, \quad \frac{1}{\eta}\frac{dp}{dx} = -100\,\mathrm{mm^{-1}\,s^{-1}}, \quad -\frac{h^2}{8\eta}\frac{dp}{dx} = 50\,\mathrm{mm\,s^{-1}} \]

and therefore

\[ v_x(z) = 50\,\mathrm{mm\,s^{-1}}\,(1 - z^2) \]

As you can see, using the Chebyshev polynomial the Galerkin method finds the correct solution without requiring additional terms, in contrast to what we saw previously when using the cosine series. This is because this example is prototypical for the use of Chebyshev polynomials, which are especially suitable for these types of problems.

As you can see, spectral methods have a wide range of applications in fluid mechanics because the requirement of symmetrical boundary conditions (fulfilled in particular by no-slip boundary conditions) is easily met. Spectral methods are very versatile for solving these types of problems. Although we have introduced them as a special choice of basis functions for the Galerkin method, they can be used with any type of weighted residual method.

29.3 Summary

In this section we have introduced the concept of the weighted residual methods using the commonly used Galerkin method as an example. As we have seen, the underlying concept of these methods is finding a suitable approximation function for the unknown solution. When inserted into the differential equation, this function gives rise to a residual that is nonzero because (as the name implies) the approximation function is only an approximation to the actual solution. If the approximation function was chosen well, the residual integrated over the whole domain will be very small. In order to ensure that this integral becomes exactly zero, a suitably chosen weighting function is used. This function is multiplied with the residual obtained after inserting the approximation function into the differential equation. If the integral over the product is zero, then the weighting function suitably “adjusts” the approximation function, i.e., corrects it. Usually the approximation functions come with unknown coefficients that have to be determined. Therefore, we obtain an algebraic equation with N unknowns corresponding to the number of unknown coefficients. For each weighting function we obtain one equation. As the weighting functions must be linearly independent, the equations obtained will also be linearly independent. Therefore, we obtain a system of linear equations that can be solved to yield the unknown constants.

In general, weighted residual methods form the basis of many different types of numerical solvers. The choice of weighting functions differs from method to method; the Galerkin method uses the basis functions of the approximation function as weighting functions. We have discussed spectral methods, which use specific classes of basis functions and are commonly used for applications where the boundary conditions are periodic. As we have seen, there are many examples in fluid mechanics that can be conveniently solved by these types of methods.

References

[1] Southwell R.V. Relaxation Methods in Engineering Science. London: Oxford University Press; 1940 (cit. on p. 609).

[2] Frankel S.P. Convergence rates of iterative treatments of partial differential equations. Mathematical Tables and Other Aids to Computation. 1950;4(30):65–75 (cit. on p. 609).

[3] Ritz W. Über eine neue Methode zur Lösung gewisser Variationsprobleme der mathematischen Physik. Journal für die reine und angewandte Mathematik. 1909;135:1–61 (cit. on p. 609).

[4] Courant R., et al. Variational methods for the solution of problems of equilibrium and vibrations. Bull. Amer. Math. Soc. 1943;49(1):1–23 (cit. on p. 609).

[5] Clough R.W. The finite element method in plane stress analysis. American Society of Civil Engineers second Conference on Electronic Computation. 1960 (cit. on p. 609).

[6] Courant R., Friedrichs K., Lewy H. Über die partiellen Differenzengleichungen der mathematischen Physik. Mathematische Annalen. 1928;100(1):32–74 (cit. on p. 609).

[7] Crank J., Nicolson P. A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type. Mathematical Proceedings of the Cambridge Philosophical Society. 1947;43(1):50–67 (cit. on p. 609).

[8] Chorin A.J. Numerical solution of the Navier-Stokes equations. Mathematics of computation. 1968;22(104):745–762 (cit. on p. 610).

[9] Patankar S.V., Spalding D.B. A calculation procedure for heat, mass and momentum transfer in three-dimensional parabolic flows. International Journal of Heat and Mass Transfer. 1972;15(10):1787–1806 (cit. on p. 610).

[10] Patankar S. Numerical heat transfer and fluid flow. CRC Press; 1980 (cit. on p. 610).

[11] Hu H.H., Joseph D.D. Numerical simulation of viscoelastic flow past a cylinder. Journal of Non-Newtonian Fluid Mechanics. 1990;37(2):347–377 (cit. on p. 610).

[12] Thomée V. From finite differences to finite elements: A short history of numerical analysis of partial differential equations. Journal of Computational and Applied Mathematics. 2001;128(1–2):1–54. Numerical Analysis 2000, Vol. VII: Partial Differential Equations (cit. on p. 610).

[13] Afonso A., Oliveira M., Oliveira P., Alves M., Pinho F. The finite-volume method in computational rheology. In-Tech Open Publishers; 2012 (cit. on p. 610).

[14] Galerkin B. Series development for some cases of equilibrium of plates and beams. Wjestnik Ingenerow Petrograd. 1915;19:897 (cit. on p. 610).


1 Richard Courant was a German-American mathematician who is most famous for his seminal work on the development of FEM.

1 Boris Grigorjewitsch Galjorkin was a Soviet engineer and mathematician who made important contributions to continuum mechanics and especially plate bending problems. In 1915 he suggested a method using weighted residuals for approximating solutions to differential equations [14]. This Galerkin method has become one of the most important schemes in finite element analysis.

1 Pafnuty Chebyshev was a Russian mathematician who made important contributions to number theory and statistics.
