Chapter 31

Finite Volume Method

31.1 Introduction

The next method we will discuss is the finite volume method (FVM). Just as with the Galerkin method, FVM can be applied to all differential equations that can be written in divergence form, i.e., using divergence operators (see section 7.1.3.3). The equation is then integrated over the volume. Applying Gauss's theorem (see section 7.2.1) converts the volume integral over the divergence into a surface integral across the boundaries. Instead of integrating the differentials of the dependent variable inside the cells, we therefore evaluate surface integrals of the fluxes of the dependent variable across the cell boundaries. These integrals can usually be calculated using suitable numerical approximation methods, which simplifies the differential equation substantially.

We have used similar approaches previously, e.g., when deriving the mass conservation using Gauss’s theorem (see section 10.5). This approach was based on the concept that all mass that would “diverge” out of the control volume must inherently pass the boundary of the control volume at some time. If this flux is integrated over time, the total change of mass in the control volume can be derived. FVM uses the same approach. By monitoring the fluxes of the dependent variable across the boundary of the cells, a conservation approach of the dependent variable is obtained. This is in contrast to FDM, which seeks to approximate the differentials of the differential equation, i.e., the changes of the dependent variable across the cell.

FVM solvers rank among the most popular methods in CFD and there are numerous commercial software packages that use this method. Some of the most commonly used FVM solver packages are Phoenics (www.cham.co.uk), ANSYS Fluent (www.ansys.com), Flow-3D (www.flow3d.com), and SOLIDWORKS FloWorks (www.solidworks.com).

31.2 Conservative Form Notation

In general, FVM requires the dependent variable to be presented in a specific type of conservative form. If we consider an arbitrary conserved dependent variable g which depends on a number of independent variables x_1, x_2, ..., x_n, the conservative equation is given by

$$\frac{\partial g}{\partial t}(\vec{x},t) + \nabla\cdot\vec{F}(\vec{x},t) = f(\vec{x},t) \qquad \text{(Eq. 31.1)}$$

where we have combined the individual independent variables in the vector $\vec{x}$. $\vec{F}$ denotes a vector function which depends on the independent variables and time t. The left-hand side of the equation consists of two characteristic terms. The first term is referred to as the time-dependent change of the dependent variable. We have often used comparable terms when assessing the temporal change of a dependent variable in a control volume. As an example, take the change of mass in a control volume over time, which we used during the derivation of the conservation of mass (see Eq. 10.12).

The second term is the flux or advection term. You may also come across the term convection term. It denotes the change in the dependent variable as a consequence of in- and outflow into and out of the control volume. Again, we have used similar terms before, e.g., during the derivation of the conservation of mass where we have summed up all fluxes along the coordinate axes (see Eq. 10.13, Eq. 10.14, and Eq. 10.15). The expression

$$\nabla\cdot\vec{F} = \frac{\partial F_1}{\partial x_1} + \frac{\partial F_2}{\partial x_2} + \dots + \frac{\partial F_n}{\partial x_n} \qquad \text{(Eq. 31.2)}$$

is referred to as the space divergence operator. The right-hand side of Eq. 31.1 denotes the mixed source and diffusion term; it sums up the creation or dissipation of the dependent variable's quantity (source/sink behavior) and the diffusion of the quantity within the control volume.
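A familiar instance, recalled here purely as an illustration, is the conservation of mass from section 10.5: with the density ρ as dependent variable g, the mass flux $\vec{F} = \rho\,\vec{v}$, and no sources or sinks (f = 0), Eq. 31.1 becomes the continuity equation

$$\frac{\partial \rho}{\partial t} + \nabla\cdot\left(\rho\,\vec{v}\right) = 0$$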

Example: One-Dimensional Heat Equation. As a first example, we will consider the one-dimensional heat equation (see section 6.9.2) given by Eq. 6.82

$$\frac{\partial T}{\partial t} - c^2\,\Delta T = 0 \qquad \text{(Eq. 31.3)}$$

where c² is a positive constant. Here, the dependent function T describes the temperature along a one-dimensional domain through which heat is transported at a rate given by c². This constant is the thermal diffusivity α, which can be thought of as a "diffusion coefficient" for heat. If we convert this to the form of Eq. 31.1, we find that the vector $\vec{x}$ of the independent variables actually only consists of a single entry. The same holds true for $\vec{F}$. The function f is simply zero, as there is no term on the right-hand side to account for. We therefore find

$$\vec{x} = (x), \qquad \vec{F}(\vec{x},t) = -c^2\,\nabla T, \qquad f(\vec{x},t) = 0$$

in which case we can rewrite Eq. 31.3 to

$$\frac{\partial T}{\partial t}(x,t) + \nabla\cdot\vec{F}(x,t) = 0$$
$$\frac{\partial T}{\partial t}(x,t) - c^2\,\nabla\cdot\big(\nabla T\big)(x,t) = 0$$
$$\frac{\partial T}{\partial t}(x,t) - c^2\,\frac{\partial^2 T}{\partial x^2}(x,t) = 0$$

which obviously is the heat equation (Eq. 31.3). As a matter of fact, all PDEs which involve conservative physical quantities can be rewritten in this way.

31.3 Integral Form of the Conservative Notation

31.3.1 Derivation

As stated, the notation of Eq. 31.1 is required for transferring the equation into conservative form. Conservative means that the quantity of the dependent variable within the control volume changes in dependence of the fluxes of the dependent variable across the boundary: outflowing quantity results in a reduction of the dependent variable within the control volume and vice versa. As stated, we have derived many differential equations using such conservation approaches on infinitesimally small control volumes, within which the quantity of the dependent variable was balanced.

Eq. 31.1 is in differential notation, i.e., the changes of the dependent variable within the control volume over time are balanced by the sources, the sinks, and the diffusion inside the control volume, as well as the advection into and out of the control volume. This equation is valid for every point in the computational domain. If we worked with the differential form, we would obtain an FDM problem that can be solved if we take into account control volumes of infinitesimally small lateral size. However, as we have seen, FDM is computationally very expensive because it requires a large number of control volumes with very small lateral dimensions in order to yield exact results. It would be desirable to "inflate" the control volumes to larger volumes, which is exactly what we do in FVM. The reason why FVM allows working with larger control volumes is that it does not approximate the changes of the dependent variable along the control volume, but performs a conservation of the dependent variable for each control volume. This allows calculating an average value for the dependent variable within the cell; the control volume is then represented by a single value. Obviously, the larger the control volumes, the more "averaged" the solution will become, i.e., finer features of the solution, such as spikes in concentration, will not be visible anymore because they are blurred by the averaging process. The fact that FVM averages gives rise to one very important property: FVM is surprisingly stable against discontinuities and large changes in the dependent variables. These tend to be problematic for FDM because at discontinuities, a function usually has no derivative and thus FDM cannot approximate values at these locations.

31.3.2 Integration

In order to obtain the conservative form of Eq. 31.1 we integrate this equation over the control volume resulting in

$$\int_V \frac{\partial g}{\partial t}(\vec{x},t)\, dV + \int_V \nabla\cdot\vec{F}(\vec{x},t)\, dV = \int_V f(\vec{x},t)\, dV \qquad \text{(Eq. 31.4)}$$

Obviously, this notation is not very useful as of yet. We note two things. First, we can exchange the differential and the integral in the first term on the left-hand side. We can do this because our control volume is fixed in space, i.e., it is not a function of time. This allows us to convert the partial derivative into a regular derivative

$$\int_V \frac{\partial g}{\partial t}\, dV = \frac{\partial}{\partial t}\int_V g\, dV = \frac{d}{dt}\int_V g\, dV$$

Second, we can use Gauss’s theorem (see section 7.2.1) to convert the volume integrals into surface integrals. Doing this, we can rewrite Eq. 31.4 to

$$\frac{d}{dt}\int_V g\, dV + \oint_{\partial V} \vec{F}\cdot\vec{n}\, dS = \int_V f\, dV \qquad \text{(Eq. 31.5)}$$

where we have converted the volume integral of the divergence into a surface integral. This is a major simplification: instead of looking at the changes of the dependent quantity in the whole volume V, we simply have to consider the fluxes across the surface of the control volume ∂V. We can make this control volume as large as required. For finer resolution, V should be small, whereas for coarser solutions, the volume can be chosen larger.

31.3.3 Right-Hand Side Terms

If you look at the right-hand side of Eq. 31.5, you will see that we still have an integral and there seems to be no clever way to make this integral disappear, because we know nothing of the function f. In the general case, this is true. However, we often find that the function f consists mainly of three types of terms.

Constants: Sources and Sinks. Very often, we will have constants on the right-hand side that are obviously easy to integrate. In many cases, we will even have volume-specific constants with the physical unit of "quantity of the dependent variable per volume." As an example, consider the right-hand side of the Navier–Stokes equation (see Eq. 11.40) where we find the volume forces $\vec{k}$, which are volume-specific constants. These are simply integrated by multiplication with the volume.

Constants are physically interpreted as sources (in the case of positive values) or sinks (in the case of negative values) of the dependent variable’s quantity. As an example, a source or sink of concentration could be a chemical reaction that creates or consumes substance. A source for momentum could be forces, e.g., the volume forces in the Navier–Stokes equation.

Nabla Operators: Divergence. f will often contain divergence terms, i.e., terms with a prepended nabla operator of the form $\nabla\cdot\vec{p}$. These can obviously be integrated using Gauss's theorem as well. As an example, consider the right-hand side of the Navier–Stokes equation where we actually have two such terms. One is the pressure gradient $\nabla p$, which can be integrated using Gauss's theorem to

$$\int_V \nabla p\, dV = \oint_{\partial V} p\,\vec{n}\, dS$$

where we have again converted the volume to a surface integral.

Laplace Operators: Diffusion. A second example, which may not be visible intuitively, is the viscous term $\eta\,\Delta\vec{v}$ on the right-hand side. Remember that the Laplace operator is nothing other than the squared nabla operator, therefore

$$\eta\,\Delta\vec{v} = \eta\,\nabla\cdot\big(\nabla\vec{v}\big)$$

in which case we can use Gauss's theorem to find the integral

$$\int_V \nabla\cdot\big(\eta\,\nabla\vec{v}\big)\, dV = \oint_{\partial V} \big(\eta\,\nabla\vec{v}\big)\cdot\vec{n}\, dS$$

As you can see, even the Laplace operator can be integrated accordingly. The physical interpretation of these terms is diffusion of the dependent variable's quantity, which can obviously distribute, as well as reduce or increase, the overall quantity in the control volume. As an example, a fluid is able to transport momentum via diffusion. This is what the viscous term of the right-hand side of the Navier–Stokes equation expresses. Another example is the diffusion equation (see Eq. 8.4) that has a Laplace term on the right-hand side. Obviously, the concentration of a substance c in the control volume cannot only change by advection (convection) or by sources and sinks. It can also be transported out of the control volume by means of diffusion. This is what this term accounts for.
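As a brief illustration of this last point (our own rearrangement, following exactly the steps above), the diffusion equation (see Eq. 8.4), $\frac{\partial c}{\partial t} = D\,\Delta c$, can be brought into the integral form of Eq. 31.5 by identifying $\vec{F} = 0$ and integrating the right-hand side with Gauss's theorem:

$$\frac{d}{dt}\int_V c\, dV = \oint_{\partial V} D\,\nabla c\cdot\vec{n}\, dS$$

i.e., the amount of substance inside the control volume changes only by the diffusive fluxes across its boundary.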

31.3.4 Interpretation

The physical interpretation of Eq. 31.5 is simple. The total change of the dependent variable within the control volume is balanced by the advection of quantity across the boundaries of the control volume, the diffusion of the quantity across the boundaries, and the action of the sources and sinks. However, as we have seen, the right-hand side terms usually give rise either to a constant, or to a function that is again evaluated on the boundary only. This means that the action of the source, sink, and diffusion terms can be evaluated also by looking only at their contribution to advection into and out of the control volume.

As stated, the size of the control volume can be arbitrary. It can be as large as the atmosphere of earth. Obviously, such a control volume is of no use if the dynamics of earth’s atmosphere should be studied, as it would virtually be a very coarse one-voxel-solution.

Another important point is that the order of the equation has been reduced by one from the differential form (see Eq. 31.1) to the conservative or integral form (see Eq. 31.5). You can also see this if you consider the example of the viscous term of the Navier–Stokes equation we just discussed: in the conservative form, the equation contains only first-order derivatives. As already mentioned, functions with discontinuities or very steep gradients tend to be problematic for FDM. The underlying reason is that a function with a discontinuity does not have a derivative at the location of the discontinuity, so a finite difference solver would not be able to approximate a derivative there. In FVM, the reduction in order avoids problems with nonexistent derivatives because the spatial derivatives no longer occur on the left-hand side. For all cases where the right-hand side of the equation in differential form contains a Laplace operator (as does the Navier–Stokes equation), there will still be a first-order partial derivative in the final equation. However, the value of this derivative is only required on the boundary of the control volume. If the discontinuity happens not to lie on the boundary of a control volume, it is of no importance. Obviously, the layout of the control volumes can be adapted to work around such critical points.

31.3.5 Volume-Averaged Conservative Form

In the following step, we perform a discretization of the computational domain resulting in a total number of N cells. Each cell j has a given (known) volume δV^{(j)}. We will replace the value of the dependent variable within the cell j with the volume-averaged value of the dependent variable $\bar{g}^{(j)}$ given by

$$\bar{g}^{(j)} = \frac{1}{\delta V^{(j)}} \int_{\delta V^{(j)}} g\, dV$$

in which case Eq. 31.5 can be written as

$$\delta V^{(j)}\,\frac{d\bar{g}^{(j)}}{dt} + \oint_{\partial \delta V^{(j)}} \vec{F}\cdot\vec{n}\, dS = \int_{\delta V^{(j)}} f\, dV \qquad \text{(Eq. 31.6)}$$

Please note that Eq. 31.5 and Eq. 31.6 are exact. There was no simplification made during the derivation.

31.4 Discretization

As in all finite methods, the discretization is often the most crucial point. Bad discretization can lead to inexact solutions, or even to instability of the solver, especially if discontinuities occur at the wrong location.

Cells. The fact that FVM requires the dependent variable to be in conservative form also has implications for the volume discretization of our computational domain. The most important implication is that the discrete volumes must remain fixed in space. The discrete volumes are generally referred to as cells, in contrast to the term elements used in FEM. The cells in two-dimensional space are quadrangles, which are not necessarily squares (see Fig. 31.1a). In three dimensions, a six-faced hexahedron is used (see Fig. 31.1b); the simplest geometry in three dimensions is a cube. The shapes are adapted to the discretization of the volume and should be designed such that they allow a good resolution of the domain, especially in areas which are expected to show large changes of the dependent variables. As an example, if calculating a flow around a sharp edge, the volume in close proximity to the edge should be divided into very fine cells, as the changes in the vicinity of this discontinuity are expected to be large. The further one moves away from this discontinuity, the smaller the expected changes; in this area, the mesh may be significantly less fine. In general, the method of mesh discretization is crucial for obtaining good results using FVM. If the volume is incorrectly discretized, the resulting solutions will be inexact.

Fig. 31.1 Cells used in FVM. a) A two-dimensional computational domain showing the typical quadrangles. At locations where large changes in the dependent variables are expected, the resolution is higher. b) A three-dimensional uneven hexahedron with the unit normal vectors facing the respective neighboring cells.

Another important point to consider is the fact that the shape of the cells can be adapted to suit the geometry of the environment. This is in contrast to FDM, which usually requires the control volumes to be equally shaped. This makes FVM solvers the better choice for geometries with curved surfaces or radii. As we have seen, these tend to be problematic in FDM (see Fig. 30.6a).

In commercial packages for FVM, the code for the discretization is often a crucial aspect of the software. Good algorithms split up the computational domain in an intelligent way, making use of changing resolutions wherever larger changes are expected (see Fig. 31.1a). Given the fact that the structure of the cells can vary significantly, the underlying software must be able to handle arbitrary contours in space in order to, e.g., calculate the surface planes and the normal vectors. This requires an implementation of the rudimentary algorithms of vector calculus. This is somewhat less problematic in finite difference methods, where the domain is simply divided into cubes.

Computational Cells. In the following, we assume that we have discretized our computational domain and have obtained a number of computational cells with the volume δV(j) where j is the index in x-direction. We will restrict ourselves to the one-dimensional case, but the three-dimensional case would be derived identically. In this case, the volumes would be δV(j,k,l) with j, k, l being the indices in x-, y- and z-directions. Please note that the volumes of these cells are not usually identical.

31.5 Function Reconstruction

The basis of the calculation is Eq. 31.6. In the case of a time-dependent function, we must be provided with an initial value profile $g^{(t=0)}(x) = g(t=0,x)$. From this initial profile, we step forward in time using, e.g., a forward or backward Euler scheme. At each point in time t, we have to find $g^{(t)}(x)$. However, we do not know the general profile of this function. Therefore, we need to approximate the solution as we would for weighted residual methods (see Eq. 29.2). For this we again require a set of N linearly independent basis functions. Usually, these are simple polynomials. The approximation to our solution would then be given as

$$\tilde{g}(x) = \sum_{i=1}^{N} c_i\,\psi_i(x) \qquad \text{(Eq. 31.7)}$$

where we obviously do not know the unknown coefficients ci.

31.5.1 Determining the Constants ci

Similar to the Galerkin method, we require a set of N equations that allow us to determine the ci. The Galerkin method used the constraint that the integral of the residual across the domain must be zero. Here, we use a similar approach. We state that the integral of the approximation function within each cell must equal the average value $\bar{g}$ within the cell. If this is not the case, the approximation function is inexact, as it does not yield the same average value. We therefore find

$$\int_{\delta V^{(j)}} \tilde{g}^{(j)}\, dV \overset{!}{=} \delta V^{(j)}\,\bar{g}^{(j)} \qquad \text{(Eq. 31.8)}$$

Obviously this must be true for each cell. So if our polynomial has a total of N terms, we have a total of N unknown coefficients ci, and therefore we require a total of N cells. For each cell j in the computational domain we can thus state

$$\int_{\delta V^{(j)}} \tilde{g}^{(j)}\, dV \overset{!}{=} \delta V^{(j)}\,\bar{g}^{(j)} \;\;\Rightarrow\;\; \sum_{i=1}^{N} c_i \int_{\delta V^{(j)}} \psi_i(x)\, dV \overset{!}{=} \delta V^{(j)}\,\bar{g}^{(j)} \qquad \text{(Eq. 31.9)}$$

which gives us a total of N independent equations to determine the coefficients ci.

31.5.2 Usage of $\tilde{g}$

One may ask why we go through all the work of determining the coefficients ci, i.e., why we need the approximation function $\tilde{g}$ at all. The reason is that the average values $\bar{g}^{(j)}$ for each of the cells j only give us a single, averaged value of the dependent variable "inside" each cell. As stated, we expect this value to be constant over the whole volume of the cell. We usually refer to these values as being cell-centered, which means that we know the value of the dependent variable at the center of each cell (see Fig. 31.2). However,

Fig. 31.2 Function reconstruction schemes commonly used in FVM. a) Piecewise-constant profile. The individual cells are assumed to have a constant value. The value at the boundary between cell j and cell j + 1 is assumed to be the average of the two values. b) Piecewise-linear profile. The value of the dependent variable is assumed to change linearly from one cell to another. For both profiles, the function value at the cells' centers should ideally be exact.

Eq. 31.6 requires us to evaluate the function on the boundaries between two cells. Obviously, if the reconstruction is done cell by cell, the value at the boundary will generally be discontinuous depending on the direction from which we approach it: it will be either the value of the reconstruction in the left cell (if approaching from the left) or the value of the reconstruction in the right cell (if approaching from the right). Usually, this problem is resolved by stating that the value at the boundary is the average of the two values. In fact, we will use this simplification in an example in just a moment.

However, if the right-hand side function f in Eq. 31.6 contains a diffusion term, we end up with a first derivative in Eq. 31.6 that we need to evaluate at the discontinuity at the boundary of the cells. Yet we cannot determine the derivative of the function at such a discontinuity.

These problems are circumvented if we approximate g by the function $\tilde{g}$, which we can design such that it has the values $\bar{g}^{(j)}$ at each cell's center and is continuous on the boundaries. Obviously, the type of function we expect only depends on the basis functions ψi. In the following we will discuss the most commonly encountered basis functions for the one-dimensional case. In general, the approximation function used is

$$\tilde{g}^{(j)} = \sum_{i=0}^{N} c_i \left(\frac{x - x^{(j)}}{\delta x^{(j)}}\right)^{i} \qquad \text{(Eq. 31.10)}$$

which uses the simple polynomials $\left(\frac{x - x^{(j)}}{\delta x^{(j)}}\right)^{i}$ as basis functions ψi. Please note that the polynomials are shifted to $x - x^{(j)}$, i.e., into the center of the cell. Additionally, the shifted term is normalized to the width of the interval. We often use the notation

$$\tilde{g}^{(j)} = \sum_{i=0}^{N} c_i \left(\xi^{(j)}\right)^{i} \qquad \text{(Eq. 31.11)}$$

with

$$\xi^{(j)} = \frac{x - x^{(j)}}{\delta x^{(j)}}, \qquad d\xi^{(j)} = \frac{dx}{\delta x^{(j)}} \;\Rightarrow\; dx = \delta x^{(j)}\, d\xi^{(j)} \qquad \text{(Eq. 31.12)}$$

where $-\tfrac{1}{2} \le \xi \le \tfrac{1}{2}$. From Eq. 31.9 we can derive the following equations for calculating the ci

$$\sum_{i=0}^{N} c_i \int_{-\frac{1}{2}}^{\frac{1}{2}} \left(\xi^{(j)}\right)^{i} \delta x^{(j)}\, d\xi^{(j)} \overset{!}{=} \delta x^{(j)}\,\bar{g}^{(j)} \qquad \text{(Eq. 31.13)}$$

which is a system of N linearly independent equations (one for each cell j).
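To make the determination of the ci concrete, the following minimal numpy sketch assembles and solves the small linear system of Eq. 31.13 for a regular grid (all cells of equal width, so the δx^{(j)} cancel). The function name reconstruction_coefficients and the choice of a stencil centered as symmetrically as possible around cell j are our own assumptions for illustration, not prescribed by the text.

```python
import numpy as np

def reconstruction_coefficients(g_bar, order):
    """Solve the linear system of Eq. 31.13 for the coefficients c_i
    on a regular grid (all delta x identical, so they cancel).

    g_bar : cell averages of the stencil cells, ordered left to right
    order : polynomial order N; the stencil must contain N + 1 cells
    """
    n_cells = order + 1
    if len(g_bar) != n_cells:
        raise ValueError("stencil must contain order + 1 cell averages")
    # offsets of the stencil cell centres relative to cell j (in units of dx);
    # e.g. [-1, 0, 1] for N = 2, [0, 1] for N = 1 (the ENO choice of 31.5.4)
    offsets = np.arange(n_cells) - (n_cells - 1) // 2
    A = np.zeros((n_cells, n_cells))
    for row, m in enumerate(offsets):
        lo, hi = m - 0.5, m + 0.5
        for i in range(n_cells):
            # exact moment integral of xi^i over the cell [lo, hi]
            A[row, i] = (hi**(i + 1) - lo**(i + 1)) / (i + 1)
    return np.linalg.solve(A, np.asarray(g_bar, dtype=float))

# Example: parabolic reconstruction (N = 2) from three cell averages
print(reconstruction_coefficients([1.0, 2.0, 4.0], order=2))
```

For N = 2 and a regular grid, the resulting coefficients reproduce the expressions derived below in section 31.5.5.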

31.5.3 Piecewise-Constant Profile

The piecewise-constant profile (see Fig. 31.2a) is the simplest profile. It assumes that the value $\tilde{g}^{(j)}$ is constant within a cell. Therefore, N = 0 in Eq. 31.13, in which case we find

$$c_0 \int_{-\frac{1}{2}}^{\frac{1}{2}} \left(\xi^{(j)}\right)^{0} \delta x^{(j)}\, d\xi^{(j)} \overset{!}{=} \delta x^{(j)}\,\bar{g}^{(j)} \;\Rightarrow\; c_0\,\delta x^{(j)} \overset{!}{=} \delta x^{(j)}\,\bar{g}^{(j)} \;\Rightarrow\; c_0 \overset{!}{=} \bar{g}^{(j)}$$

and therefore

$$\tilde{g}(x) = \bar{g}^{(j)}, \qquad x^{(j)} \le x < x^{(j+1)}$$

As we have already discussed, the value at the boundary of the cells is also of importance. For piecewise-constant profiles, the value at the interface between cell j and cell j + 1 is defined as the average of the two values

$$\tilde{g}^{(j \to j+1)} = \frac{\tilde{g}^{(j)} + \tilde{g}^{(j+1)}}{2}$$

As stated, this profile will be unsuitable if we have a diffusion term in our equation which requires the evaluation of the derivative, i.e., the flux term at the boundary.

31.5.4 Piecewise-Linear Profile

The second commonly used profile is the piecewise-linear profile (see Fig. 31.2b). It assumes the value of $\tilde{g}^{(j)}$ to change linearly between the interpolation points that are located at the cells' centers. We obtain this approximation by using N = 1 in Eq. 31.10, from which we find

$$\tilde{g}^{(j)} = c_0 + c_1\,\xi^{(j)}$$

Obviously, we require two cells in order to solve for the two unknowns c0 and c1.

Stencils. This is a very common mechanism used in the reconstruction of polynomials from fixed (given) values. Fitting algorithms have to solve similar tasks: given a finite number of points, they need to fit a given curve such that a minimal error is obtained. For polynomials, methods based on stencils are very common for this task. In addition to the cell j, we can choose the cell j − 1 or j + 1 to contribute. The choice is arbitrary. Usually, the selection pattern of neighboring cells is referred to as a reconstruction stencil, and certain patterns lend themselves better than others. The stencil is selected due to its ability to achieve stable solutions which do not oscillate. Therefore, this type of reconstruction is commonly referred to as essentially non-oscillatory (ENO) reconstruction. In this type of reconstruction, only one stencil is selected. The second type of reconstruction scheme is weighted essentially non-oscillatory (WENO) reconstruction. Here, all potential stencils are taken into account and weighted in order to obtain the most suitable solution.

A Simple ENO Reconstruction. In the following we will use a simple ENO scheme, which is sufficient for linear reconstruction. WENO reconstruction becomes more important for a higher number of terms, i.e., for higher values of N. Here, we choose cell j + 1 as the second cell for our stencil. We then state that, in order to be a good approximation, the integral of $\tilde{g}$ must

• amount to the average value $\bar{g}^{(j)}$ in cell j, i.e., for $-\tfrac{1}{2} \le \xi \le \tfrac{1}{2}$, and

• amount to the average value $\bar{g}^{(j+1)}$ in cell j + 1, i.e., for $\tfrac{1}{2} \le \xi \le \tfrac{3}{2}$.

From this we derive the following equations

$$\int_{-\frac{1}{2}}^{\frac{1}{2}} \left(c_0 + c_1\,\xi^{(j)}\right) \delta x^{(j)}\, d\xi^{(j)} = \delta x^{(j)}\,\bar{g}^{(j)}$$
$$\int_{\frac{1}{2}}^{\frac{3}{2}} \left(c_0 + c_1\,\xi^{(j)}\right) \delta x^{(j)}\, d\xi^{(j)} = \delta x^{(j+1)}\,\bar{g}^{(j+1)}$$

or

$$\int_{-\frac{1}{2}}^{\frac{1}{2}} \left(c_0 + c_1\,\xi^{(j)}\right) d\xi^{(j)} = \bar{g}^{(j)}$$
$$\int_{\frac{1}{2}}^{\frac{3}{2}} \left(c_0 + c_1\,\xi^{(j)}\right) d\xi^{(j)} = \frac{\delta x^{(j+1)}}{\delta x^{(j)}}\,\bar{g}^{(j+1)}$$

After performing the integrations we obtain

$$c_0 = \bar{g}^{(j)}, \qquad c_0 + c_1 = \frac{\delta x^{(j+1)}}{\delta x^{(j)}}\,\bar{g}^{(j+1)}$$

from which we derive

$$c_0 = \bar{g}^{(j)}, \qquad c_1 = \frac{\delta x^{(j+1)}}{\delta x^{(j)}}\,\bar{g}^{(j+1)} - \bar{g}^{(j)}$$
$$\tilde{g}\!\left(\xi^{(j)}\right) = \bar{g}^{(j)}\left(1 - \xi^{(j)}\right) + \frac{\delta x^{(j+1)}}{\delta x^{(j)}}\,\bar{g}^{(j+1)}\,\xi^{(j)}, \qquad -\tfrac{1}{2} \le \xi^{(j)} \le \tfrac{3}{2} \qquad \text{(Eq. 31.14)}$$

Obviously, if the grid is regular and δx^{(j)} = δx^{(j+1)}, we obtain

$$\tilde{g}\!\left(\xi^{(j)}\right) = \bar{g}^{(j)}\left(1 - \xi^{(j)}\right) + \xi^{(j)}\,\bar{g}^{(j+1)}, \qquad -\tfrac{1}{2} \le \xi^{(j)} \le \tfrac{3}{2} \qquad \text{(Eq. 31.15)}$$

This function can be evaluated at the cell's edges by simply setting ξ = −1/2 for the left edge and ξ = 1/2 for the right edge. In contrast to the piecewise-constant function, we can form the derivative of Eq. 31.15, which is

$$\frac{d\tilde{g}^{(j)}}{d\xi^{(j)}} = \bar{g}^{(j+1)} - \bar{g}^{(j)}, \qquad -\tfrac{1}{2} \le \xi^{(j)} \le \tfrac{3}{2} \qquad \text{(Eq. 31.16)}$$

which is simply the difference between the two cells’ average values. As you can see, this function can be differentiated without problems and the fluxes can be calculated at the cells’ boundaries.

Conversion to the Differential d/dx. If you consider Eq. 31.16 closely, you will see that the derivative has the same unit as the average value, which obviously is incorrect: it should have the unit "dependent variable unit per length." The reason for this is that Eq. 31.16 is the derivative with respect to ξ^{(j)}. If we require the derivative with respect to x, we can see from Eq. 31.12 that we have to divide Eq. 31.16 by δx^{(j)}. This also results in the correct unit.
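For the piecewise-linear profile, the two quantities that the flux evaluation in the upcoming example actually needs, namely the value and the d/dx-slope at the face between two cells on a regular grid, can thus be written as the following small helper (a sketch; the function name is our own choice):

```python
def face_value_and_slope(g_left, g_right, dx):
    """Piecewise-linear reconstruction between two neighbouring cells on a
    regular grid: value and slope (with respect to x) at the shared face."""
    value = 0.5 * (g_left + g_right)     # Eq. 31.15 evaluated at xi = 1/2
    slope = (g_right - g_left) / dx      # Eq. 31.16 divided by delta x
    return value, slope
```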

31.5.5 Piecewise-Parabolic Profile

The third commonly used approximation is piecewise-parabolic, with N = 2 in Eq. 31.10, from which we find

$$\tilde{g}^{(j)} = c_0 + c_1\,\xi^{(j)} + c_2\left(\xi^{(j)}\right)^2 \qquad \text{(Eq. 31.17)}$$

where we again have the problem that we need to apply a stencil in order to determine the three unknowns. We take into account the left- and right-hand side neighbor cells j − 1 and j + 1 and state that the approximation $\tilde{g}^{(j)}$ should be able to recover the average values $\bar{g}^{(j-1)}$, $\bar{g}^{(j)}$, and $\bar{g}^{(j+1)}$ correctly. In this case we find from integrating Eq. 31.17

$$\int_{-\frac{3}{2}}^{-\frac{1}{2}} \left(c_0 + c_1\,\xi^{(j)} + c_2\left(\xi^{(j)}\right)^2\right) \delta x^{(j)}\, d\xi^{(j)} = \delta x^{(j-1)}\,\bar{g}^{(j-1)}$$
$$\int_{-\frac{1}{2}}^{\frac{1}{2}} \left(c_0 + c_1\,\xi^{(j)} + c_2\left(\xi^{(j)}\right)^2\right) \delta x^{(j)}\, d\xi^{(j)} = \delta x^{(j)}\,\bar{g}^{(j)}$$
$$\int_{\frac{1}{2}}^{\frac{3}{2}} \left(c_0 + c_1\,\xi^{(j)} + c_2\left(\xi^{(j)}\right)^2\right) \delta x^{(j)}\, d\xi^{(j)} = \delta x^{(j+1)}\,\bar{g}^{(j+1)}$$

for which we find after performing the integration

$$c_0 - c_1 + \tfrac{13}{12}\,c_2 = \frac{\delta x^{(j-1)}}{\delta x^{(j)}}\,\bar{g}^{(j-1)}, \qquad c_0 + \tfrac{1}{12}\,c_2 = \bar{g}^{(j)}, \qquad c_0 + c_1 + \tfrac{13}{12}\,c_2 = \frac{\delta x^{(j+1)}}{\delta x^{(j)}}\,\bar{g}^{(j+1)}$$

which can be solved to yield

$$c_0 = \frac{26\,\delta x^{(j)}\,\bar{g}^{(j)} - \delta x^{(j-1)}\,\bar{g}^{(j-1)} - \delta x^{(j+1)}\,\bar{g}^{(j+1)}}{24\,\delta x^{(j)}}$$
$$c_1 = \frac{\delta x^{(j+1)}\,\bar{g}^{(j+1)} - \delta x^{(j-1)}\,\bar{g}^{(j-1)}}{2\,\delta x^{(j)}}$$
$$c_2 = \frac{\delta x^{(j-1)}\,\bar{g}^{(j-1)} - 2\,\delta x^{(j)}\,\bar{g}^{(j)} + \delta x^{(j+1)}\,\bar{g}^{(j+1)}}{2\,\delta x^{(j)}}$$

In the case of a regular grid, δx^{(j−1)} = δx^{(j)} = δx^{(j+1)}, which simplifies the constants to

$$c_0 = \frac{-\bar{g}^{(j+1)} + 26\,\bar{g}^{(j)} - \bar{g}^{(j-1)}}{24}, \qquad c_1 = \frac{\bar{g}^{(j+1)} - \bar{g}^{(j-1)}}{2}, \qquad c_2 = \frac{\bar{g}^{(j+1)} - 2\,\bar{g}^{(j)} + \bar{g}^{(j-1)}}{2}$$

in which case our approximation function from Eq. 31.17 becomes

$$\tilde{g}\!\left(\xi^{(j)}\right) = \frac{-\bar{g}^{(j+1)} + 26\,\bar{g}^{(j)} - \bar{g}^{(j-1)}}{24} + \frac{\bar{g}^{(j+1)} - \bar{g}^{(j-1)}}{2}\,\xi^{(j)} + \frac{\bar{g}^{(j+1)} - 2\,\bar{g}^{(j)} + \bar{g}^{(j-1)}}{2}\left(\xi^{(j)}\right)^2, \qquad -\tfrac{3}{2} \le \xi^{(j)} \le \tfrac{3}{2} \qquad \text{(Eq. 31.18)}$$

Again, this function can be evaluated at the boundaries, i.e., for ξ^{(j)} = −3/2 (left boundary of cell j − 1), ξ^{(j)} = −1/2 (right boundary of cell j − 1; left boundary of cell j), ξ^{(j)} = 1/2 (right boundary of cell j; left boundary of cell j + 1), and ξ^{(j)} = 3/2 (right boundary of cell j + 1). Similar to the piecewise-linear function, Eq. 31.18 can be differentiated, finding

$$\frac{d\tilde{g}}{d\xi^{(j)}} = \frac{\bar{g}^{(j+1)} - \bar{g}^{(j-1)}}{2} + \left(\bar{g}^{(j+1)} - 2\,\bar{g}^{(j)} + \bar{g}^{(j-1)}\right)\xi^{(j)}, \qquad -\tfrac{3}{2} \le \xi^{(j)} \le \tfrac{3}{2} \qquad \text{(Eq. 31.19)}$$

which can be evaluated at the boundaries for calculating the fluxes.
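The same evaluation can be written compactly in code. The sketch below computes the regular-grid coefficients of Eq. 31.18 and returns the value and slope (with respect to ξ) at the right-hand face of cell j; the function name is our own choice:

```python
def parabolic_face_state(g_m1, g_0, g_p1, xi=0.5):
    """Piecewise-parabolic reconstruction on a regular grid (Eq. 31.18):
    value and slope (with respect to xi) at position xi, default xi = 1/2."""
    c0 = (-g_p1 + 26.0 * g_0 - g_m1) / 24.0
    c1 = (g_p1 - g_m1) / 2.0
    c2 = (g_p1 - 2.0 * g_0 + g_m1) / 2.0
    value = c0 + c1 * xi + c2 * xi**2          # Eq. 31.18
    slope = c1 + 2.0 * c2 * xi                 # Eq. 31.19
    return value, slope
```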

31.5.6 WENO Schemes

We already briefly touched on the problem of selecting a suitable stencil. Stencils tend to produce oscillatory results, especially near discontinuities in the dependent function. These oscillations may lead to instability and (ultimately) to the inability to produce a useful function approximation. In order to prevent this from happening, stencils are weighted by their susceptibility to oscillation and combined in order to reduce these problems. If we return to the case of the piecewise-linear approximation, we found the stencil for the cells j and j + 1 to be (see Eq. 31.15)

$$\tilde{g}_{j,j+1}\!\left(\xi^{(j)}\right) = \bar{g}^{(j)}\left(1 - \xi^{(j)}\right) + \xi^{(j)}\,\bar{g}^{(j+1)}, \qquad -\tfrac{1}{2} \le \xi^{(j)} \le \tfrac{3}{2} \qquad \text{(Eq. 31.20)}$$

We stated that we could also have taken a stencil using the cells j − 1 and j. In this case, we would have found

$$\int_{-\frac{3}{2}}^{-\frac{1}{2}} \left(c_0 + c_1\,\xi^{(j)}\right) \delta x^{(j)}\, d\xi^{(j)} = \delta x^{(j-1)}\,\bar{g}^{(j-1)}$$
$$\int_{-\frac{1}{2}}^{\frac{1}{2}} \left(c_0 + c_1\,\xi^{(j)}\right) \delta x^{(j)}\, d\xi^{(j)} = \delta x^{(j)}\,\bar{g}^{(j)}$$

which we can solve to

$$c_0 - c_1 = \frac{\delta x^{(j-1)}}{\delta x^{(j)}}\,\bar{g}^{(j-1)}, \qquad c_0 = \bar{g}^{(j)}$$

from which we obtain

$$c_0 = \bar{g}^{(j)}, \qquad c_1 = \bar{g}^{(j)} - \frac{\delta x^{(j-1)}}{\delta x^{(j)}}\,\bar{g}^{(j-1)}$$

which results in the approximation

$$\tilde{g}\!\left(\xi^{(j)}\right) = \bar{g}^{(j)}\left(1 + \xi^{(j)}\right) - \xi^{(j)}\,\frac{\delta x^{(j-1)}}{\delta x^{(j)}}\,\bar{g}^{(j-1)}$$

In the case of a regular grid with δx^{(j−1)} = δx^{(j)}, we find this second stencil to be

$$\tilde{g}_{j-1,j}\!\left(\xi^{(j)}\right) = \bar{g}^{(j)}\left(1 + \xi^{(j)}\right) - \xi^{(j)}\,\bar{g}^{(j-1)}, \qquad -\tfrac{3}{2} \le \xi^{(j)} \le \tfrac{1}{2} \qquad \text{(Eq. 31.21)}$$

Both of the functions $\tilde{g}_{j,j+1}$ and $\tilde{g}_{j-1,j}$ produced from these stencils are said to be second-order stencils because they take two cells into consideration.

Transition to Third-Order WENO Stencils. We will now create a WENO stencil by using a weighted sum of the two functions $\tilde{g}_{j,j+1}$ and $\tilde{g}_{j-1,j}$. The ideal weighting constants are 2/3 for $\tilde{g}_{j,j+1}$ and 1/3 for $\tilde{g}_{j-1,j}$. These numbers are not random, as we will see in a moment. In doing so, we create a new approximation function that takes the cells j − 1, j, and j + 1 into account and that is given by

$$\tilde{g}_{j-1,j,j+1}\!\left(\xi^{(j)}\right) = \frac{2}{3}\left(\bar{g}^{(j)}\left(1 - \xi^{(j)}\right) + \xi^{(j)}\,\bar{g}^{(j+1)}\right) + \frac{1}{3}\left(\bar{g}^{(j)}\left(1 + \xi^{(j)}\right) - \xi^{(j)}\,\bar{g}^{(j-1)}\right)$$
$$= \bar{g}^{(j)}\left(1 - \tfrac{1}{3}\,\xi^{(j)}\right) + \xi^{(j)}\left(\tfrac{2}{3}\,\bar{g}^{(j+1)} - \tfrac{1}{3}\,\bar{g}^{(j-1)}\right), \qquad -\tfrac{3}{2} \le \xi^{(j)} \le \tfrac{3}{2} \qquad \text{(Eq. 31.22)}$$

If we evaluate this function for ξ^{(j)} = 1/2 we find

$$\tilde{g}_{j-1,j,j+1}\!\left(\tfrac{1}{2}\right) = -\tfrac{1}{6}\,\bar{g}^{(j-1)} + \tfrac{5}{6}\,\bar{g}^{(j)} + \tfrac{1}{3}\,\bar{g}^{(j+1)} \qquad \text{(Eq. 31.23)}$$

We have already derived a scheme that uses three cells when deriving the piecewise-parabolic approximation (see Eq. 31.18). If we evaluate this function for ξ^{(j)} = 1/2 we find

$$\tilde{g}\!\left(\tfrac{1}{2}\right) = \frac{-\bar{g}^{(j+1)} + 26\,\bar{g}^{(j)} - \bar{g}^{(j-1)}}{24} + \frac{\bar{g}^{(j+1)} - \bar{g}^{(j-1)}}{4} + \frac{\bar{g}^{(j+1)} - 2\,\bar{g}^{(j)} + \bar{g}^{(j-1)}}{8}$$
$$= \bar{g}^{(j-1)}\left(-\tfrac{1}{24} - \tfrac{1}{4} + \tfrac{1}{8}\right) + \bar{g}^{(j)}\left(\tfrac{26}{24} - \tfrac{1}{4}\right) + \bar{g}^{(j+1)}\left(-\tfrac{1}{24} + \tfrac{1}{4} + \tfrac{1}{8}\right)$$
$$= -\tfrac{1}{6}\,\bar{g}^{(j-1)} + \tfrac{5}{6}\,\bar{g}^{(j)} + \tfrac{1}{3}\,\bar{g}^{(j+1)} \qquad \text{(Eq. 31.24)}$$

As you can see, Eq. 31.23 and Eq. 31.24 yield the same result. By adequate weighting with the (seemingly random) weighting factors 1/3 and 2/3, we have obtained the same result as obtained using the stencil from the piecewise-parabolic scheme, which is of third order. As one would expect, the higher the order of the scheme, the more stable it will be.
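The equality of Eq. 31.23 and Eq. 31.24 is easy to verify symbolically. The following sympy snippet (purely a check, not part of the scheme itself) confirms that the 2/3, 1/3 combination of the two second-order stencils reproduces the piecewise-parabolic face value:

```python
import sympy as sp

g_m1, g_0, g_p1 = sp.symbols('g_jm1 g_j g_jp1')
xi = sp.Rational(1, 2)

# second-order stencils, Eq. 31.20 and Eq. 31.21 (regular grid)
g_right = g_0 * (1 - xi) + xi * g_p1            # stencil (j, j+1)
g_left = g_0 * (1 + xi) - xi * g_m1             # stencil (j-1, j)

# weighted combination with the ideal weights 2/3 and 1/3 (Eq. 31.22)
combo = sp.Rational(2, 3) * g_right + sp.Rational(1, 3) * g_left

# piecewise-parabolic reconstruction, Eq. 31.18, evaluated at xi = 1/2
parabolic = ((-g_p1 + 26 * g_0 - g_m1) / 24
             + (g_p1 - g_m1) / 2 * xi
             + (g_p1 - 2 * g_0 + g_m1) / 2 * xi**2)

print(sp.simplify(combo - parabolic))            # prints 0
```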

Fifth-Order WENO Stencils: WENO-5. If we return to the derivation of the piecewise-parabolic approximation in section 31.5.5, we could have chosen one of three stencils:

• cells j − 2, j − 1, and j

• cells j − 1, j, and j + 1

• cells j, j + 1, and j + 2

We derived the second stencil of this list, which is given by Eq. 31.18. If this stencil is evaluated for ξ = 1/2 we find

$$\tilde{g}_{j-1,j,j+1}\!\left(\tfrac{1}{2}\right) = \frac{-\bar{g}^{(j+1)} + 26\,\bar{g}^{(j)} - \bar{g}^{(j-1)}}{24} + \frac{\bar{g}^{(j+1)} - \bar{g}^{(j-1)}}{4} + \frac{\bar{g}^{(j+1)} - 2\,\bar{g}^{(j)} + \bar{g}^{(j-1)}}{8} = \frac{-\bar{g}^{(j-1)} + 5\,\bar{g}^{(j)} + 2\,\bar{g}^{(j+1)}}{6} \qquad \text{(Eq. 31.25)}$$

We will not derive the two other stencils, as they are calculated likewise. When evaluating them for ξ = 1/2 we find

$$\tilde{g}_{j-2,j-1,j}\!\left(\tfrac{1}{2}\right) = \frac{2\,\bar{g}^{(j-2)} - 7\,\bar{g}^{(j-1)} + 11\,\bar{g}^{(j)}}{6} \qquad \text{(Eq. 31.26)}$$

$$\tilde{g}_{j,j+1,j+2}\!\left(\tfrac{1}{2}\right) = \frac{2\,\bar{g}^{(j)} + 5\,\bar{g}^{(j+1)} - \bar{g}^{(j+2)}}{6} \qquad \text{(Eq. 31.27)}$$

Each of the three stencils used to calculate Eq. 31.25, Eq. 31.26, and Eq. 31.27 is of third order. Combining them with appropriate weights yields a fifth-order scheme, which is usually referred to as WENO-5. The weights can be determined by performing a piecewise-polynomial approximation for the cells j − 2, j − 1, j, j + 1, and j + 2. The scheme is given by

$$\tilde{g}_{j-2,j-1,j,j+1,j+2}\!\left(\tfrac{1}{2}\right) = \tfrac{1}{10}\,\tilde{g}_{j-2,j-1,j}\!\left(\tfrac{1}{2}\right) + \tfrac{6}{10}\,\tilde{g}_{j-1,j,j+1}\!\left(\tfrac{1}{2}\right) + \tfrac{3}{10}\,\tilde{g}_{j,j+1,j+2}\!\left(\tfrac{1}{2}\right) \qquad \text{(Eq. 31.28)}$$

WENO-5 is a very stable scheme. As you can see, taking more cells into account resembles the procedure used in numerical schemes for solving ODEs. As an example, take the Adams–Bashforth schemes (see section 27.2.6). Depending on the order of the method, they rely on more data points "before" and "after" the current location j. The more values are taken into account, the better and more stable the numerical scheme will be. This is the same with stencil methods. Obviously, the disadvantage is the fact that higher-order stencils require knowledge about more points to the left and the right of the current location. As we approach the boundaries, these points may fall outside of the domain.
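For reference, the WENO-5 face value of Eq. 31.28 can be written as a one-line combination of the three third-order stencil values. Note that this sketch uses the fixed (ideal) weights 1/10, 6/10, 3/10; a full WENO implementation would replace them by weights that depend on smoothness indicators of the three stencils, which is beyond the scope of this section.

```python
def weno5_face_value(g_m2, g_m1, g_0, g_p1, g_p2):
    """Face value at the right-hand boundary of cell j (xi = 1/2) obtained by
    combining the three third-order stencils with the ideal weights of Eq. 31.28."""
    s1 = (2.0 * g_m2 - 7.0 * g_m1 + 11.0 * g_0) / 6.0   # cells j-2, j-1, j   (Eq. 31.26)
    s2 = (-g_m1 + 5.0 * g_0 + 2.0 * g_p1) / 6.0          # cells j-1, j, j+1   (Eq. 31.25)
    s3 = (2.0 * g_0 + 5.0 * g_p1 - g_p2) / 6.0           # cells j, j+1, j+2   (Eq. 31.27)
    return 0.1 * s1 + 0.6 * s2 + 0.3 * s3
```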

31.5.7 Summary

Stencil methods are required whenever an approximation function must be constructed. As we have seen, if a polynomial of order N = 0 is required, we only need to take cell j into account; these methods are first-order. When the order of the polynomial is N = 1, we obtain a second-order stencil requiring two cells, either j − 1 and j or j and j + 1. For a second-order polynomial, we obtain a third-order stencil which can be based either on the cells j − 2, j − 1, j, or on the cells j − 1, j, j + 1, or on the cells j, j + 1, j + 2. As we have also seen, it is possible to construct higher-order stencils by appropriately weighting stencils of lower order.

As stated, the approximation function (and its derivative) are required for evaluating the function and the gradient on the boundary of cell j. As you have seen, we usually only evaluate the value for ξ = 1/2, which is the right-hand boundary of the current cell.

31.6 Example: One-Dimensional Heat Equation

Having derived the theoretical concepts, we will now discuss a practical example using FVM. For this, we return to the example of the one-dimensional heat equation (see Eq. 31.3), which we will solve numerically using FVM. We will use the same example as in section 8.3.3 where we solved the heat equation analytically. The domain is bounded by 0 ≤ x ≤ L, and a constant initial temperature distribution T(t = 0)(x) = 1 is given. The boundaries of the domain are kept at the constant temperature T(t)(0) = T(t)(L) = 0.

Cells. We will use a cell-centered grid that is shown schematically in Fig. 31.3. The total number of cells is n and the cells have equal width Δx. Please note that this is only done for convenience; the cells' lateral dimensions need not be identical.

Fig. 31.3 Example of FVM using the one-dimensional heat equation. The domain 0 ≤ x ≤ L is subdivided into cells of length Δx. Virtual cells are added to satisfy the boundary conditions.

Initial Values. Fig. 31.3 also shows the initial values of the problem. As we are working on a cell-centered domain, only the values at the centers of the cells are of importance. At t = 0 all cells have the constant temperature T(t = 0)(x) = 1.

Approximation Function. We will choose a piecewise-linear profile for the approximation function (see section 31.5.4). As we will see in just a moment, this profile is very suitable for accounting for the boundary conditions.

Boundary Conditions. On a cell-centered grid, it is important to account for the boundary values. As we are only evaluating the values of the dependent function, i.e., the temperature, at the centers of the cells, the grid we chose seems somewhat unsuitable for accounting for the boundary conditions, as the boundaries lie on the edges of cells. An obvious approach would be to modify the grid such that a cell's center would be located at the boundary. However, there is a very simple approach to solving this problem in the displayed case, which is the introduction of virtual cells. These cells are outside of the computational domain and are added only to account for the boundary conditions.

As stated, we use a piecewise-linear approximation function. If we assume our grid to be regular, i.e., all cells to have the same width, we can force the value on the left-hand boundary of the first cell in our domain to be zero by simply setting the value of the virtual cell to the left of it to the negative value of the first cell. We therefore add two virtual cells, one on the left-hand boundary of the domain and one on the right-hand boundary of the domain. For these cells (indexed j = 0 and j = N + 1) we state

$$T(t,0) = -T(t,1) \qquad \text{(Eq. 31.29)}$$

$$T(t,N+1) = -T(t,N) \qquad \text{(Eq. 31.30)}$$

which obviously ensures the boundary conditions for all values of t. Please note that this procedure is very similar to the way we performed Fourier series expansions of constants in order to ensure symmetry and therefore, the expansion to pure sine functions (see Eq. 4.30).

Neumann Boundary Conditions. As you can see, we have just discussed how to implement Dirichlet boundary conditions (see section 3.1.7.2). The second type of boundary condition we regularly require is the Neumann boundary condition, where we fix the first derivative at the boundary. Obviously, by using virtual cells, we could just as conveniently implement this type of boundary condition by simply setting the value of the virtual cell to a value dependent on the neighboring cell. If, for example, the gradient must be 0, we could simply set

$$T(t,0) = T(t,1), \qquad T(t,N+1) = T(t,N)$$

which would fulfill Neumann boundary conditions.
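In code, the virtual cells simply become two extra entries that are recomputed from the current field before every flux evaluation. A minimal numpy sketch, assuming the regular grid of Fig. 31.3 (the function name and the string arguments are our own choices):

```python
import numpy as np

def add_virtual_cells(T, bc="dirichlet"):
    """Pad the cell-centred temperatures T (cells j = 1 ... N) with one
    virtual cell on each side.

    'dirichlet': boundary value 0 via Eq. 31.29 / Eq. 31.30 (negative mirror)
    'neumann'  : zero boundary gradient (plain mirror of the neighbouring cell)
    """
    if bc == "dirichlet":
        return np.concatenate(([-T[0]], T, [-T[-1]]))
    if bc == "neumann":
        return np.concatenate(([T[0]], T, [T[-1]]))
    raise ValueError("unknown boundary condition: " + bc)
```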

Surface Integrals. We next convert the original PDE Eq. 31.3 into the conservative form of Eq. 31.5 for which we find

$$\frac{dT}{dt}\int_V dV + \oint_{\partial V} \vec{F}\cdot\vec{n}\, dS = 0 \qquad \text{(Eq. 31.31)}$$

where the vector function $\vec{F}$ has only one nonzero entry

$$\vec{F}(t,x) = \begin{pmatrix} -c^2\,\dfrac{\partial T}{\partial x} \\ 0 \\ 0 \end{pmatrix}$$

because we only have one space variable x. The surface integral $\oint_{\partial V}\vec{F}\cdot\vec{n}\,dS$ requires the evaluation of the temperature gradient ∂T/∂x on the surfaces of the individual cell, multiplied with the normal vector of the respective surface. If you refer to Fig. 31.3, you can see that the normal vector on the left-hand side of the cells points in the negative x-direction, whereas the normal vector on the right-hand side of the cells points in the positive x-direction. The integral also requires the total area of the left-hand and right-hand side cell surfaces, which is simply given as

$$\partial V = 1 \cdot 1 = 1$$

because the problem is one-dimensional and the lateral dimensions of the cells in the y-direction and z-direction are therefore normalized to 1. We can then rewrite the surface integral of cell j as

$$\oint_{\partial \delta V^{(j)}} \vec{F}\cdot\vec{n}\, dS = c^2\left(\frac{\partial T}{\partial x}(t,\,j\!-\!1\!\to\! j) - \frac{\partial T}{\partial x}(t,\,j\!\to\! j\!+\!1)\right) \qquad \text{(Eq. 31.32)}$$

where $(j-1 \to j)$ and $(j \to j+1)$ denote the left- and right-hand boundaries of cell j, at which we need approximations for the first derivatives. As we chose a piecewise-linear profile, we can use Eq. 31.16 to approximate the first derivative at the right-hand cell boundary by

$$\frac{d\tilde{g}^{(j)}}{dx}\!\left(\tfrac{1}{2}\right) = \frac{1}{\delta x^{(j)}}\left(\bar{g}^{(j+1)} - \bar{g}^{(j)}\right)$$

and the first derivative at the left-hand side of the cell boundary by

$$\frac{d\tilde{g}^{(j)}}{dx}\!\left(-\tfrac{1}{2}\right) = \frac{1}{\delta x^{(j-1)}}\left(\bar{g}^{(j)} - \bar{g}^{(j-1)}\right)$$

which is simply Eq. 31.16 applied to cell j − 1. Please note that both equations have been converted to derivatives with respect to x instead of ξ, which accounts for the prefactors. Using this approximation, we can rewrite Eq. 31.32 to

$$\oint_{\partial \delta V^{(j)}} \vec{F}\cdot\vec{n}\, dS = -c^2\left(\frac{\bar{g}^{(j-1)} - \bar{g}^{(j)}}{\delta x^{(j-1)}} + \frac{\bar{g}^{(j+1)} - \bar{g}^{(j)}}{\delta x^{(j)}}\right) \qquad \text{(Eq. 31.33)}$$

Numerical Scheme. We can now reinsert Eq. 31.33 into Eq. 31.31 to find

$$\frac{dT}{dt}\int_V dV - c^2\left(\frac{\bar{g}^{(j-1)} - \bar{g}^{(j)}}{\delta x^{(j-1)}} + \frac{\bar{g}^{(j+1)} - \bar{g}^{(j)}}{\delta x^{(j)}}\right) = 0 \qquad \text{(Eq. 31.34)}$$

where we can replace the volume integral by the expression

$$\frac{dT}{dt}\int_V dV = \frac{dT(t,j)}{dt}\,\Delta x^{(j)}\,\Delta y^{(j)}\,\Delta z^{(j)} = \frac{dT(t,j)}{dt}\,\Delta x^{(j)} \qquad \text{(Eq. 31.35)}$$

as Δy = Δz = 1. Inserting Eq. 31.35 into Eq. 31.34 yields

$$\frac{dT(t,j)}{dt}\,\Delta x^{(j)} - c^2\left(\frac{\bar{g}^{(j-1)} - \bar{g}^{(j)}}{\delta x^{(j-1)}} + \frac{\bar{g}^{(j+1)} - \bar{g}^{(j)}}{\delta x^{(j)}}\right) = 0$$
$$\frac{dT(t,j)}{dt} = \frac{c^2}{\Delta x^{(j)}}\left(\frac{\bar{g}^{(j-1)} - \bar{g}^{(j)}}{\delta x^{(j-1)}} + \frac{\bar{g}^{(j+1)} - \bar{g}^{(j)}}{\delta x^{(j)}}\right) \qquad \text{(Eq. 31.36)}$$

In order to simplify the procedure slightly, we assume the grid to be regular and thus Δx(j) = Δx to be constant. In this case, we can rewrite Eq. 31.36 to

$$\frac{dT(t,j)}{dt} = \frac{c^2}{(\Delta x)^2}\left(T(t,j+1) + T(t,j-1) - 2\,T(t,j)\right) \qquad \text{(Eq. 31.37)}$$

Eq. 31.37 is an ODE which can be solved by various methods we have already discussed. We will solve it by a backward Euler approach (see Eq. 27.14), stepping forward from time t to t + 1 according to

$$T(t+1,j) = T(t,j) + \Delta t\,\frac{dT}{dt}(t+1,j)$$
$$T(t+1,j) = T(t,j) + \frac{c^2\,\Delta t}{(\Delta x)^2}\left(T(t+1,j+1) + T(t+1,j-1) - 2\,T(t+1,j)\right)$$

which we need to solve for T(t + 1, j) according to

$$T(t+1,j)\left(1 + \frac{2\,c^2\,\Delta t}{(\Delta x)^2}\right) = T(t,j) + \frac{c^2\,\Delta t}{(\Delta x)^2}\left(T(t+1,j+1) + T(t+1,j-1)\right)$$
$$T(t+1,j) = \frac{1}{1 + \frac{2\,c^2\,\Delta t}{(\Delta x)^2}}\left(T(t,j) + \frac{c^2\,\Delta t}{(\Delta x)^2}\left(T(t+1,j+1) + T(t+1,j-1)\right)\right) \qquad \text{(Eq. 31.38)}$$

where Eq. 31.38 is the numerical scheme we were looking for. As we have used the backward Euler method, we obtain an implicit scheme which requires iteration. Obviously, there would have been many more methods to solve Eq. 31.37, e.g., the fourth-order Runge–Kutta method (see Tab. 27.6).
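Before turning to the spreadsheet, the complete scheme fits into a few lines of Python. The sketch below mirrors the spreadsheet example discussed next (40 cells, Δx = 2.5, Δt = 0.2, c² = 3.62, Dirichlet boundaries via virtual cells); the number of time steps and the fixed-point iteration used to resolve the implicit coupling of Eq. 31.38 are our own choices.

```python
import numpy as np

c2, dx, dt = 3.62, 2.5, 0.2              # thermal diffusivity, cell width, time step
n_cells, n_steps = 40, 50
prefactor = c2 * dt / dx**2              # appears twice in Eq. 31.38

T = np.ones(n_cells)                     # initial condition T(t = 0, x) = 1

for step in range(n_steps):
    T_old = T.copy()                     # values at time t
    for _ in range(200):                 # fixed-point iteration for time t + 1
        Tg = np.concatenate(([-T[0]], T, [-T[-1]]))   # Dirichlet virtual cells
        T_new = (T_old + prefactor * (Tg[2:] + Tg[:-2])) / (1.0 + 2.0 * prefactor)
        converged = np.max(np.abs(T_new - T)) < 1e-12
        T = T_new
        if converged:
            break

x = dx * (np.arange(n_cells) + 0.5)      # cell centres: 1.25, 3.75, ...
print(np.round(T, 4))                    # temperature profile after n_steps * dt
```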

Implementing the Scheme Using Microsoft Excel. In the following, we will implement Eq. 31.38 using Microsoft Excel. You will find the file FVMHeatEquation1D.xlsx in the supplementary material, which contains the spreadsheet. It is shown in Fig. 31.4. At the top of the spreadsheet, you can see the variables that we use for our calculation, i.e., Δx, Δt, and the constant c², which is the thermal diffusivity. We combine these variables into the variable prefactor given by

Fig. 31.4 Microsoft Excel spreadsheet used to solve the one-dimensional heat equation using FVM. A digital version of this spreadsheet can be found under the name FVMHeatEquation1D.xlsx in the supplementary material.

$$\text{prefactor} = \frac{c^2\,\Delta t}{(\Delta x)^2}$$

which we require twice in Eq. 31.38. Underneath the variables, you can see the calculation field. Column A contains the points in time starting from t = 0 in increments of Δt. For each point in time we calculate a total of 40 individual cells with a virtual cell on each side. These are the columns with the heading "virt.". The values in these columns are simply the negative values of the adjacent cells, as stated by Eq. 31.29 and Eq. 31.30. The entry of each cell implements Eq. 31.38, as can be seen from the formula displayed for the highlighted cell in Fig. 31.4. This formula is written once (taking care that the references to the variable values are entered such that they remain fixed when the cell is copied) and then simply copied along the remaining columns. The complete row is then simply copied in order to add more data points in time.

As discussed, Eq. 31.38 is an iterative formula and thus we need to activate iteration in Microsoft Excel (see section 30.3.3). In the example spreadsheet, we use a total of 40 cells for approximating Eq. 31.38. The calculation of time step t is used as input for time step t + 1. If you observe the values of x, you can see that they indicate the cell centers, not the beginning of the individual cells. This is the reason why the first non-virtual cell's location is x(t,1) = 1.25. The boundary values are not indicated explicitly, but due to the value of the left virtual cell at x(t,0) = −1.25, the left boundary of the first cell is zero. The same holds true for the right-hand side.

As you can see, we use a reasonably large time step of Δt = 0.2. As we have used the backward Euler method, we obtain stable solutions. In this example we chose Δx = 2.5, which is also a reasonably large value. We chose c² = 3.62, which is the thermal diffusivity of stainless steel that we already used for Fig. 8.6a. The output of our numerical calculation for different points in time is shown in Fig. 31.5. If you compare these results with the analytical results from Fig. 8.6a, you can see that they are identical.

Fig. 31.5 Numerical output of the Microsoft Excel spreadsheet used to solve the one-dimensional heat equation using FVM for different points in time. If you compare this result with the analytical solutions in Fig. 8.6a, you can see that they are identical.

Obviously, this is just a simple example of how FVM can be used to solve differential equations. Even from our simple spreadsheet, we obtain results which are very close to the analytical solution. This shows you the potential of this technique.

31.7 Two-Dimensional Problems of First Order in Time and Space

31.7.1 Introduction

The example we just derived was a one-dimensional example containing a first-order partial derivative with respect to time and a second-order partial derivative with respect to space. In the next step, we want to solve a problem with more than one spatial dimension. Returning to the general form of Eq. 31.1, we assume that the vector function $\vec{F}$ now depends not only on the spatial coordinate x₁ = x, but also on the spatial coordinate x₂ = y (see Eq. 31.2). In this case we find

$$\frac{\partial g}{\partial t}(x,y,t) + \nabla\cdot\vec{F}(x,y,t) = 0$$
$$\frac{\partial g}{\partial t}(x,y,t) + \frac{\partial F_x}{\partial x}(x,y,t) + \frac{\partial F_y}{\partial y}(x,y,t) = 0 \qquad \text{(Eq. 31.39)}$$

We restrict ourselves to the homogeneous case here. As we have seen, right-hand side terms are treated likewise; they have been covered in section 31.3.3. The domain on which we perform FVM is a plane, shown in Fig. 31.6. The respective cell is indexed with j along the x-direction and k along the y-direction. As usual for FVM, the surfaces of the cells need not be identical. The domain of the cell (j, k) is given by Ω with the counterclockwise circumference ∂Ω.

Fig. 31.6 Two-dimensional mesh for FVM. The domain Ω and its counterclockwise circumference ∂Ω are shown. The line integrals are performed in a counterclockwise fashion, starting from point 1 over 2 and 3 to 4. At each segment, the normal vector pointing outwards is required.

31.7.2 Integration

The concept is identical to the general derivation of FVM as outlined in section 31.3. We first perform the integration of Eq. 31.39 on the domain Ω resulting in

$$\int_\Omega \frac{\partial g}{\partial t}\, d\Omega + \int_\Omega \frac{\partial F_x}{\partial x}\, d\Omega + \int_\Omega \frac{\partial F_y}{\partial y}\, d\Omega = 0$$
$$\frac{d}{dt}\int_\Omega g\, d\Omega + \int_\Omega \left(\frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y}\right) d\Omega = 0 \qquad \text{(Eq. 31.40)}$$

where we have rewritten the partial derivative with respect to time as an ordinary derivative, as before.

31.7.3 Green’s Theorem

We now apply Green's theorem (see section 7.2.3), which helps in simplifying surface and contour integrals of vector functions (see Eq. 7.13). It states that

$$\int_\Omega \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) d\Omega = \oint_{\partial\Omega} \vec{F}\cdot d\vec{s} \qquad \text{(Eq. 31.41)}$$

where the vector $d\vec{s}$ is the infinitesimal line element on the circumference given by

$$d\vec{s} = \begin{pmatrix} dx \\ dy \end{pmatrix}$$

in which case we can rewrite Eq. 31.41 to

$$\int_\Omega \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) d\Omega = \oint_{\partial\Omega} \left(F_x\, dx + F_y\, dy\right) \qquad \text{(Eq. 31.42)}$$

Using Green's theorem, we have converted the surface integral into an integral along the contour. Please note that the contours of the cells are usually straight lines, which are not necessarily parallel to one of the coordinate axes. When the cells are aligned with the coordinate system, we can make a couple of additional simplifications.

31.7.4 Linearization

Applying Eq. 31.42 to the vector function (−F_y, F_x) instead of (F_x, F_y) yields the divergence form of Green's theorem, which allows us to rewrite the integral of Eq. 31.40 as follows

$$\int_\Omega \left(\frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y}\right) d\Omega = \oint_{\partial\Omega} \left(F_x\, dy - F_y\, dx\right) = \oint_{1\to2\to3\to4\to1} \left(F_x\, dy - F_y\, dx\right)$$
$$= \int_{1\to2} \left(F_x\, dy - F_y\, dx\right) + \int_{2\to3} \left(F_x\, dy - F_y\, dx\right) + \int_{3\to4} \left(F_x\, dy - F_y\, dx\right) + \int_{4\to1} \left(F_x\, dy - F_y\, dx\right) \qquad \text{(Eq. 31.43)}$$

where we have split the line integral along the circumference into the four segments defined by the points of Fig. 31.6. Please note that for all sections of the circumference where the path is oriented in the negative direction of the coordinate system, dx and dy are negative, so these sections automatically contribute with the correct sign. We now need to determine the individual integrals. For this we need an approach for the functions F_x and F_y along the circumference. As for the one-dimensional case, we assume linear changes of the functions within the cell, which allows us to linearize the differentials. For cell (j, k) we therefore find

$$dx^{(j,k)}_{12} \approx \Delta x^{(j,k)}_{12} = x^{(j,k)}_2 - x^{(j,k)}_1, \qquad dy^{(j,k)}_{12} \approx \Delta y^{(j,k)}_{12} = y^{(j,k)}_2 - y^{(j,k)}_1$$
$$dx^{(j,k)}_{23} \approx \Delta x^{(j,k)}_{23} = x^{(j,k)}_3 - x^{(j,k)}_2, \qquad dy^{(j,k)}_{23} \approx \Delta y^{(j,k)}_{23} = y^{(j,k)}_3 - y^{(j,k)}_2$$
$$dx^{(j,k)}_{34} \approx \Delta x^{(j,k)}_{34} = x^{(j,k)}_4 - x^{(j,k)}_3, \qquad dy^{(j,k)}_{34} \approx \Delta y^{(j,k)}_{34} = y^{(j,k)}_4 - y^{(j,k)}_3$$
$$dx^{(j,k)}_{41} \approx \Delta x^{(j,k)}_{41} = x^{(j,k)}_1 - x^{(j,k)}_4, \qquad dy^{(j,k)}_{41} \approx \Delta y^{(j,k)}_{41} = y^{(j,k)}_1 - y^{(j,k)}_4$$

We also need a function approximation for F_x and F_y on the circumference in order to perform the integration. Recall that we are working on a cell-centered grid; however, we require the function values on the circumference, i.e., on the boundary between two cells. Obviously, we require a similar approximation function as we did for the one-dimensional case. However, instead of deriving a two-dimensional approximation function, we simply use a linear function along each axis. This function is already given by Eq. 31.15, which we simply evaluate for ξ^{(j,k)} = 1/2 to find the value at the right-hand side of cell (j, k). For this we find

$$F^{(j,k)}_x\!\left(\tfrac{1}{2}\right) = \frac{F^{(j,k)}_x + F^{(j+1,k)}_x}{2}$$

This means that the value of the functions on the circumference between cell (j, k) and cell (j + 1, k) can be approximated as the average of the values of cell (j, k) and cell (j + 1, k). This is the approximation along the x-direction. Obviously, the approximation along the y-direction proceeds likewise. We find the following linearization

$$F^{(j,k)}_{x,12} \approx \frac{F^{(j,k-1)}_x + F^{(j,k)}_x}{2}, \qquad F^{(j,k)}_{y,12} \approx \frac{F^{(j,k-1)}_y + F^{(j,k)}_y}{2}$$
$$F^{(j,k)}_{x,23} \approx \frac{F^{(j,k)}_x + F^{(j+1,k)}_x}{2}, \qquad F^{(j,k)}_{y,23} \approx \frac{F^{(j,k)}_y + F^{(j+1,k)}_y}{2}$$
$$F^{(j,k)}_{x,34} \approx \frac{F^{(j,k)}_x + F^{(j,k+1)}_x}{2}, \qquad F^{(j,k)}_{y,34} \approx \frac{F^{(j,k)}_y + F^{(j,k+1)}_y}{2}$$
$$F^{(j,k)}_{x,41} \approx \frac{F^{(j-1,k)}_x + F^{(j,k)}_x}{2}, \qquad F^{(j,k)}_{y,41} \approx \frac{F^{(j-1,k)}_y + F^{(j,k)}_y}{2}$$

If we now insert the linearizations into Eq. 31.43 we find

$$\int_\Omega \left(\frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y}\right) d\Omega \approx F_{x,12}\,\Delta y_{12} - F_{y,12}\,\Delta x_{12} + F_{x,23}\,\Delta y_{23} - F_{y,23}\,\Delta x_{23} + F_{x,34}\,\Delta y_{34} - F_{y,34}\,\Delta x_{34} + F_{x,41}\,\Delta y_{41} - F_{y,41}\,\Delta x_{41}$$
$$= \frac{F^{(j,k-1)}_x + F^{(j,k)}_x}{2}\left(y^{(j,k)}_2 - y^{(j,k)}_1\right) - \frac{F^{(j,k-1)}_y + F^{(j,k)}_y}{2}\left(x^{(j,k)}_2 - x^{(j,k)}_1\right)$$
$$+ \frac{F^{(j,k)}_x + F^{(j+1,k)}_x}{2}\left(y^{(j,k)}_3 - y^{(j,k)}_2\right) - \frac{F^{(j,k)}_y + F^{(j+1,k)}_y}{2}\left(x^{(j,k)}_3 - x^{(j,k)}_2\right)$$
$$+ \frac{F^{(j,k)}_x + F^{(j,k+1)}_x}{2}\left(y^{(j,k)}_4 - y^{(j,k)}_3\right) - \frac{F^{(j,k)}_y + F^{(j,k+1)}_y}{2}\left(x^{(j,k)}_4 - x^{(j,k)}_3\right)$$
$$+ \frac{F^{(j-1,k)}_x + F^{(j,k)}_x}{2}\left(y^{(j,k)}_1 - y^{(j,k)}_4\right) - \frac{F^{(j-1,k)}_y + F^{(j,k)}_y}{2}\left(x^{(j,k)}_1 - x^{(j,k)}_4\right) \qquad \text{(Eq. 31.44)}$$

Obviously, Eq. 31.44 still looks very complicated, which is mainly due to the fact that it is written out for arbitrary grids.

31.7.5 Regular Axes-Incident Grid

We can simplify Eq. 31.44 by assuming a regular grid with constant values of Δx^{(j,k)} and Δy^{(j,k)}. In addition, we state that the grid is incident with the coordinate system axes, in which case

$$\Delta x^{(j,k)}_{23} = \Delta x^{(j,k)}_{41} = 0, \qquad \Delta y^{(j,k)}_{12} = \Delta y^{(j,k)}_{34} = 0$$

and, for the counterclockwise orientation, Δx^{(j,k)}_{12} = −Δx^{(j,k)}_{34} = Δx^{(j,k)} and Δy^{(j,k)}_{23} = −Δy^{(j,k)}_{41} = Δy^{(j,k)}. In this case, Eq. 31.44 simplifies to

$$\int_\Omega \left(\frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y}\right) d\Omega = \left(F_{y,34} - F_{y,12}\right)\Delta x^{(j,k)} + \left(F_{x,23} - F_{x,41}\right)\Delta y^{(j,k)} \qquad \text{(Eq. 31.45)}$$

$$= \frac{F^{(j,k)}_y + F^{(j,k+1)}_y - F^{(j,k-1)}_y - F^{(j,k)}_y}{2}\,\Delta x^{(j,k)} + \frac{F^{(j,k)}_x + F^{(j+1,k)}_x - F^{(j-1,k)}_x - F^{(j,k)}_x}{2}\,\Delta y^{(j,k)}$$
$$= \frac{F^{(j,k+1)}_y - F^{(j,k-1)}_y}{2}\,\Delta x^{(j,k)} + \frac{F^{(j+1,k)}_x - F^{(j-1,k)}_x}{2}\,\Delta y^{(j,k)} \qquad \text{(Eq. 31.46)}$$

We can now reinsert Eq. 31.46 into Eq. 31.40 to derive our numerical scheme.

31.7.6 Numerical Scheme

This scheme is given by

$$\frac{d}{dt}\int_\Omega g\, d\Omega + \frac{F^{(j,k+1)}_y - F^{(j,k-1)}_y}{2}\,\Delta x^{(j,k)} + \frac{F^{(j+1,k)}_x - F^{(j-1,k)}_x}{2}\,\Delta y^{(j,k)} = 0$$
$$\frac{d\bar{g}^{(j,k)}}{dt}\,\Delta x^{(j,k)}\,\Delta y^{(j,k)} + \frac{F^{(j,k+1)}_y - F^{(j,k-1)}_y}{2}\,\Delta x^{(j,k)} + \frac{F^{(j+1,k)}_x - F^{(j-1,k)}_x}{2}\,\Delta y^{(j,k)} = 0$$

which we can simplify to

$$\frac{d\bar{g}^{(j,k)}}{dt} + \frac{F^{(j,k+1)}_y - F^{(j,k-1)}_y}{2\,\Delta y^{(j,k)}} + \frac{F^{(j+1,k)}_x - F^{(j-1,k)}_x}{2\,\Delta x^{(j,k)}} = 0 \qquad \text{(Eq. 31.47)}$$

where we have again replaced the surface integral of the dependent variable g by the averaged value $\bar{g}^{(j,k)}$ in cell (j, k). Please refer to the derivation of Eq. 31.6 for details.

If we consider Eq. 31.47, we see that it is an ODE in time which can be solved, e.g., by forward or backward Euler schemes. The components for the change in space are nothing other than central difference schemes (see Eq. 4.6). In fact, we have derived the same numerical scheme as we would have derived using an FDM approach.
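As a closing illustration, the right-hand side of Eq. 31.47 on a regular, axis-aligned grid translates directly into array operations. The sketch below evaluates the semi-discrete time derivative of ḡ for the interior cells only; the boundary treatment (e.g., via virtual cells as in the one-dimensional example) and the choice of F_x and F_y depend on the actual problem and are left open here.

```python
import numpy as np

def dg_dt(Fx, Fy, dx, dy):
    """Semi-discrete right-hand side of Eq. 31.47 for the interior cells of a
    regular, axis-aligned grid. Fx, Fy hold the flux components at the cell
    centres; the first array axis is the x-index j, the second the y-index k."""
    rhs = np.zeros_like(Fx)
    rhs[1:-1, 1:-1] = -(
        (Fx[2:, 1:-1] - Fx[:-2, 1:-1]) / (2.0 * dx)    # (F_x^(j+1,k) - F_x^(j-1,k)) / (2 dx)
        + (Fy[1:-1, 2:] - Fy[1:-1, :-2]) / (2.0 * dy)  # (F_y^(j,k+1) - F_y^(j,k-1)) / (2 dy)
    )
    return rhs
```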

We will not solve this equation at this point, as it only served to illustrate how FVM can be applied to solve a two-dimensional differential equation of first order in time and space.
