Chapter 8

Differential Equations

8.1 Important Differential Equations

In this section we will look at some very important differential equations that we often come across in practical physics and fluid mechanics. We will not derive each equation here; for this, the interested reader is referred to textbooks that discuss the physics underlying the differential equations. However, several of these equations will be treated throughout this book and will therefore be derived. Wherever possible the equations will be briefly introduced and then given in a standard form, using f to denote the function to be solved for, c for the constants, and x, y, and t for the independent variables. Occasionally, we will write c² instead of simply c; in this case we want to indicate that the constant must be positive, as c² always is.

8.1.1 Transport Equation

The first differential equation we will introduce is the transport equation. The transport equation is the equation that describes the movement of a function along time and space. Imagine throwing a stone in the water. The stone will cause water waves that will travel away from the point where the stone sank. If you look very closely you will see that these waves are perturbations of the surface that simply travel along the surface.

As you sit outside of the water and watch the waves there are two potential ways of looking at this phenomenon. First, you could focus on a given point of the water surface that is at rest initially. As the wave propagates toward this point you will see that the point is lifted upwards first and then drops downwards again, pretty much like a sine wave. If there are a couple of waves this up and down movement will occur a couple of times until the waves have completely passed and the surface is yet again at rest.

The second way of looking at it would be to focus on, e.g., the highest point of the first wave and move your eyes along with this point. You would then see that the waves (which look like a sine function) will actually not change over time. They will seem "fixed" and merely travel along the surface. If you look carefully you will see that the height of the highest point of the wave will slowly drop over time. This is due to the fact that the wave experiences damping and therefore decays over time. Without decay, the wave would travel on forever.

This is just a very simple example of the transport equation. Imagine any function, any physical shape or contour which has a "fixed" shape. Let us denote this shape or profile by the function f (x). This profile travels along a physical direction. If damping is considered, the function will slowly decay over time. This effectively makes the function dependent both on time t and space x as its independent variables. Therefore we are looking at a function f (x, t). Whenever you have such an effect, you are looking at a phenomenon that is described by the transport equation.

The transport equation is given by

\frac{\partial f(x,t)}{\partial t} + c\,\frac{\partial f(x,t)}{\partial x} + a\,f(x,t) = 0   (Eq. 8.1)

where c is the velocity at which the profile travels and a is the damping factor. The damping factor will let the function decay over time. As we will see, this decay process is really only a function of time.

8.1.2 Heat Conduction

Heat conduction is among the oldest problems that yield partial differential equations. The resulting differential equation is referred to as the heat conduction equation. It is derived in section 6.9.2 and given by Eq. 6.82 as

\frac{\partial T}{\partial t} = \alpha\,\Delta T \qquad\Longleftrightarrow\qquad \frac{\partial f}{\partial t} = c^2\,\Delta f   (Eq. 8.2)

One-Dimensional Heat Conduction

Reduced to one dimension, the heat conduction equation becomes

\frac{\partial T}{\partial t} = \alpha\,\frac{\partial^2 T}{\partial x^2} \qquad\Longleftrightarrow\qquad \frac{\partial f}{\partial t} = c^2\,\frac{\partial^2 f}{\partial x^2}   (Eq. 8.3)

8.1.3 Diffusion Equation

Very closely related to heat conduction is mass transport by means of diffusion. The underlying PDE is derived in section 6.9.3 and given by Eq. 6.90 as

\frac{\partial \rho}{\partial t} = D\,\Delta \rho \qquad\Longleftrightarrow\qquad \frac{\partial f}{\partial t} = c^2\,\Delta f   (Eq. 8.4)

Comparing Eq. 8.2 and Eq. 8.4 the analogy becomes obvious: heat and mass transport are governed by the same underlying PDE which simply has different physical meanings. Therefore, finding solutions to one equation will inevitably also yield solutions to the other.

One-Dimensional Diffusion

Reduced to one dimension, the diffusion equation becomes

\frac{\partial \rho}{\partial t} = D\,\frac{\partial^2 \rho}{\partial x^2} \qquad\Longleftrightarrow\qquad \frac{\partial f}{\partial t} = c^2\,\frac{\partial^2 f}{\partial x^2}   (Eq. 8.5)

8.1.4 Wave Equation

8.1.4.1 Introduction

The wave equation is one of the most important equations in mechanics. It describes not only the movement of strings and wires, but also the movement of fluid surfaces, e.g., water waves. The wave equation is surprisingly simple to derive and not very complicated to solve although it is a second-order PDE.

We will derive the wave equation using the model of the suspended string (see Fig. 8.1). We will apply a few simplifications. First, the string is only assumed to move along the direction of the y-axis. Second, we will assume the problem to be planar, i.e., there is no movement in the z-direction. Third, we will assume the string to be perfectly bendable, i.e., the forces F1 and F2 act tangentially to the contour of the string.

Fig. 8.1 Derivation of the wave equation using the suspended string of length L.

8.1.4.2 Force Balance

The force balance along the y-axis is given as

F_y = F_{2,y} - F_{1,y} = F_2\sin\beta - F_1\sin\alpha   (Eq. 8.6)

As there is (by definition) no movement in the x-direction, the forces along the x-axis balance to a constant value, F2,x = F1,x = F0

F_{2,x} - F_{1,x} = 0 \qquad\Rightarrow\qquad F_{2,x} = F_{1,x} = F_0

Geometrically we can see that

F_{1,y} = F_{1,x}\tan\alpha = F_0\tan\alpha   (Eq. 8.7)

F_{2,y} = F_{2,x}\tan\beta = F_0\tan\beta   (Eq. 8.8)

Inserting Eq. 8.7 and Eq. 8.8 into Eq. 8.6 yields

F_y = F_0\left(\tan\beta - \tan\alpha\right)   (Eq. 8.9)

8.1.4.3 Newton’s Second Law of Motion

Now we apply Newton’s second law of motion (see Eq. 11.1), which states

dF = dm\,a = dm\,\frac{\partial^2 u}{\partial t^2}   (Eq. 8.10)

which we expressed as a function of the string’s contour line u (x). The mass of the string in the interval dx between points 1 and 2 is given by

dm = \rho_m\,dx   (Eq. 8.11)

where ρm is the mass of the string per unit length x (linear density) with the SI unit kg m− 1.

Inserting Eq. 8.10 and Eq. 8.11 into Eq. 8.9 yields

d\left(F_0\tan\beta - F_0\tan\alpha\right) = \rho_m\,\frac{\partial^2 u}{\partial t^2}\,dx
\frac{d\left(\tan\beta - \tan\alpha\right)}{dx} = \frac{\rho_m}{F_0}\,\frac{\partial^2 u}{\partial t^2}   (Eq. 8.12)

for which we will need to express the change in the angles α and β as a function of x. The angle α is defined by the inclination of u at the location x = x1 and is therefore given by

\tan\alpha = \left.\frac{\partial u}{\partial x}\right|_{x=x_1}

whereas the angle β is given by the inclination of u at the location x = x2, given by

\tan\beta = \left.\frac{\partial u}{\partial x}\right|_{x=x_2} = \left.\frac{\partial u}{\partial x}\right|_{x=x_1} + d\left(\frac{\partial u}{\partial x}\right)

in which case we find

\frac{d\left(\tan\beta - \tan\alpha\right)}{dx} = \frac{d\left(\frac{\partial u}{\partial x}\right)}{dx} = \frac{\partial^2 u}{\partial x^2}   (Eq. 8.13)

where we also account for the fact that u is a function both of x and the time t.

8.1.4.4 The Wave Equation

Finally inserting Eq. 8.13 into Eq. 8.12 yields

\frac{\partial^2 u}{\partial t^2} = \frac{F_0}{\rho_m}\,\frac{\partial^2 u}{\partial x^2}   (Eq. 8.14)

\frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2}   (Eq. 8.15)

which is referred to as the wave equation. Eq. 8.14 is the wave equation in physical notation, whereas Eq. 8.15 uses a standard notation denoting the constant expression F0/ρm by the constant c². The constant is written in this way in order to denote that this value is always positive.

General Formulation

The most commonly used general notation for the wave equation is given by

\frac{\partial^2 f}{\partial t^2} = c^2\,\Delta f

where we have used the dependent variable f. Here we have used the Laplace operator, which we formally introduced in section 7.1.3.5.

One-Dimensional Wave Equation

In the case of the suspended string it would be sufficient to solve the wave equation in the one-dimensional form, which is given by

\frac{\partial^2 f}{\partial t^2} = c^2\,\frac{\partial^2 f}{\partial x^2}   (Eq. 8.16)
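
A quick aside: the one-dimensional wave equation is solved by any twice-differentiable traveling profiles of the form F (x − ct) and G (x + ct), the same traveling-profile structure we will meet again for the transport equation in section 8.3.2. The following minimal sketch (assuming sympy is available; it is not part of the derivation above) checks this symbolically.

```python
# Sketch (assuming sympy is available): verify symbolically that any twice-
# differentiable profiles F and G combined as u = F(x - ct) + G(x + ct)
# satisfy the one-dimensional wave equation Eq. 8.16.
import sympy as sp

x, t, c = sp.symbols('x t c')
F = sp.Function('F')
G = sp.Function('G')

u = F(x - c*t) + G(x + c*t)                    # traveling-profile ansatz
residual = u.diff(t, 2) - c**2 * u.diff(x, 2)  # left-hand side minus right-hand side
print(sp.simplify(residual))                   # prints 0
```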

Three-Dimensional Wave Equation

The most general case of the wave equation is the three-dimensional wave equation, given by

\frac{\partial^2 f}{\partial t^2} = c^2\left(\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}\right)

8.1.5 Poisson Equation

The Poisson equation is an equation with which we will work a lot. The most important equation in fluid mechanics, the Navier-Stokes equation, can, in many cases, be simplified to a Poisson equation. This equation is derived in section 11 and given by Eq. 11.40. If considering only stationary incompressible flow conditions, the Navier-Stokes equation can be simplified to

\Delta \vec{v} = \frac{1}{\eta}\left(\nabla p - \vec{k}\right)

In the general formulation the Poisson equation is given by

\Delta f = g(x, y)   (Eq. 8.17)

Two-Dimensional Poisson Equation

In two dimensions, the Poisson equation is given by

\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = g(x, y)

As we will see, finding solutions to Poisson equations is very important in fluid mechanics.

8.1.6 Laplace Equation

The Laplace equation is a special case of the Poisson equation. It is the homogeneous Poisson equation for which g (x, y) = 0 is assumed. It is given by

\Delta f = 0   (Eq. 8.18)

Two-Dimensional Laplace Equation

In two dimensions it is given by

\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = 0

Gravitation

Newton’s law of gravity states that the attractive force between two masses m and M, which are separated by a distance r, is given as

F = G\,\frac{mM}{r^2}\,\vec{e}_r   (Eq. 8.19)

where G is the gravitational constant with the value 6.673 84 × 10− 11 m3 kg− 1 s− 2. Please note that this force acts in the radial direction, thus the unit vector e_r. By dividing Eq. 8.19 by M we obtain

\frac{\vec{F}}{M} = \vec{a}_m = -G\,\frac{m}{r^2}\,\vec{e}_r

which gives the gravitational acceleration a_m created by the mass m. The sign has been introduced to correct the direction of the acceleration. We can understand a_m as a field for which we can derive the total flux through the surface of a sphere with radius R given by

\oint \vec{a}_m\cdot\vec{n}\,dA = \oint a_m\,dA = -\frac{Gm}{R^2}\oint dA

where we can now insert the infinitesimal change in the surface of a sphere given by Eq. 3.107 to find

\oint \vec{a}_m\cdot\vec{n}\,dA = -\frac{Gm}{R^2}\int_0^R 8\pi r\,dr = -4\pi G m   (Eq. 8.20)

Eq. 8.20 is referred to as the first formulation of Newton’s law in Gauss’s notation. By using Gauss’s theorem (see Eq. 7.10) we can now set

\oint \vec{a}_m\cdot\vec{n}\,dA = \int \nabla\cdot\vec{a}_m\,dV   (Eq. 8.21)

By combining Eq. 8.20 and Eq. 8.21 we find

\int \nabla\cdot\vec{a}_m\,dV = -4\pi G m   (Eq. 8.22)

where we can exploit the fact that we can express the mass m by means of the density ρ as m = ∫ρ dV, which simplifies Eq. 8.22 to

\int \nabla\cdot\vec{a}_m\,dV = -4\pi G\int\rho\,dV \qquad\Rightarrow\qquad \nabla\cdot\vec{a}_m = -4\pi G\rho   (Eq. 8.23)

which gives the second formulation of Newton’s law in Gauss’s notation. In the last step, we introduce the gravitational potential Ψ. We know from Eq. 7.4 that a vector field such as a_m can be obtained from a potential if the field is rotation-free. For gravitation this is the case. Therefore we can obtain a_m from Ψ as

\vec{a}_m = -\nabla\Psi   (Eq. 8.24)

When combining Eq. 8.23 and Eq. 8.24 we obtain

\Delta\Psi = -\nabla\cdot\vec{a}_m = 4\pi G\rho   (Eq. 8.25)

Eq. 8.25 tells us that the gravitational potential is related to the mass distribution by a Poisson equation; in regions free of mass (ρ = 0) it reduces to a Laplace equation.

8.1.7 Helmholtz Equation

The Helmholtz equation is given by

\Delta f = -c^2 f   (Eq. 8.26)

For c = 0 it results in the Laplace equation. The Helmholtz equation often occurs during a substitution and separation of variables approach to a differential equation.

One-Dimensional Helmholtz Equation

In one dimension the Helmholtz equation is given by

\frac{\partial^2 f}{\partial x^2} = -c^2 f

8.1.8 Classification

There is a common classification scheme used to distinguish PDEs. Referring to PDEs with the independent variables time t and space x, the most general form of a second-order PDE can be written as

A(t,x)\,\frac{\partial^2 f}{\partial x^2} + B(t,x)\,\frac{\partial^2 f}{\partial x\,\partial t} + C(t,x)\,\frac{\partial^2 f}{\partial t^2} + D(t,x)\,\frac{\partial f}{\partial x} + E(t,x)\,\frac{\partial f}{\partial t} + F(t,x)\,f = G(t,x)

where A (t, x), B (t, x), C (t, x), D (t, x), E (t, x), F (t, x), and G (t, x) are the coefficients, which are functions of the independent variables. The expression B² − 4AC is referred to as the discriminant of the equation. Depending on the value of this discriminant, PDEs are classified as being the following (a small classification sketch is given after the list):

1. elliptic if B² − 4AC < 0, e.g., the Laplace equation (A = C = 1)

2. parabolic if B² − 4AC = 0, e.g., the heat equation (A > 0, B = C = 0)

3. hyperbolic if B² − 4AC > 0, e.g., the wave equation (A = 1, C < 0, B = 0)
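
This classification is easily automated; the following minimal sketch (the helper name classify_pde is ours, not from the text) evaluates the discriminant for constant coefficients.

```python
# Minimal sketch; the helper name classify_pde is ours and not from the text.
def classify_pde(A: float, B: float, C: float) -> str:
    """Classify a second-order PDE from its coefficients via B^2 - 4AC."""
    discriminant = B**2 - 4*A*C
    if discriminant < 0:
        return "elliptic"
    if discriminant == 0:
        return "parabolic"
    return "hyperbolic"

print(classify_pde(A=1, B=0, C=1))   # Laplace equation -> elliptic
print(classify_pde(A=1, B=0, C=0))   # heat equation    -> parabolic
print(classify_pde(A=1, B=0, C=-1))  # wave equation    -> hyperbolic
```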

8.2 General Solutions to Selected Ordinary Differential Equations

8.2.1 Introduction

In this section we will look at solutions to some of the most commonly encountered ODEs. As it turns out, there are a few techniques that are used to solve these equations and we will briefly discuss them using suitable examples.

8.2.2 Separation of Variables and Partial Integration

Before venturing into more complicated differential equations, we will start with the simplest case of solving a differential equation, which we have already used in the example of the barrier lake. This method is generally referred to as separation of variables. Let us consider Eq. 3.6 that we used during the derivation of the solution to the volume equation of the barrier lake. This differential equation was given by

\frac{dV_{lake}}{dt} = c

which we will write in standard form

\frac{df}{dt} = c

In the next step, we now separate the variables, i.e., the dependent from the independent variables, resulting in

df = c\,dt   (Eq. 8.27)

where we can now integrate both sides of the equation to result in

\int df = \int c\,dt \qquad\Rightarrow\qquad f = ct + c_1

More often than expected, simple separation of the variables will allow solving a differential equation.

Higher-Order Integration

Sometimes, the notation of higher-order integrations during separation of variables causes a bit of trouble. Let us consider the following example:

\frac{d^2 f}{dx^2} = c   (Eq. 8.28)

This notation sometimes causes difficulties in rewriting this equation the way we did for Eq. 8.27. However, if we rewrite Eq. 8.28 as

\frac{d^2 f}{dx^2} = \frac{d}{dx}\left(\frac{df}{dx}\right) = c

it becomes much clearer to perform the separation, which would read

\frac{d}{dx}\left(\frac{df}{dx}\right) = c
d\left(\frac{df}{dx}\right) = c\,dx
\frac{df}{dx} = cx + c_1
df = \left(cx + c_1\right)dx
f = \frac{c}{2}x^2 + c_1 x + c_2
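
Simple equations of this kind can also be handed to a computer algebra system; a minimal sketch (assuming sympy is available) reproduces the result of the repeated integration above.

```python
# Sketch (assuming sympy is available): let a computer algebra system repeat
# the double integration of d^2 f/dx^2 = c carried out above.
import sympy as sp

x, c = sp.symbols('x c')
f = sp.Function('f')

solution = sp.dsolve(sp.Eq(f(x).diff(x, 2), c), f(x))
print(solution)   # f(x) = C1 + C2*x + c*x**2/2, matching the result above
```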

8.2.3 Linear Differential Equations With Constant Coefficients

The second commonly encountered case of differential equations is differential equations that only contain linear terms. An ordinary inhomogeneous differential equation for the function f (x) with order n and constant coefficients is given by

\frac{d^n f}{dx^n} + a_{n-1}\frac{d^{n-1} f}{dx^{n-1}} + \ldots + a_1\frac{df}{dx} + a_0 f = B_m(x)
B_m(x) = b_m x^m + b_{m-1}x^{m-1} + \ldots + b_1 x + b_0   (Eq. 8.29)

where Bm (x) is referred to as the disturbance function. For Bm (x) = 0 the differential equation becomes a homogeneous differential equation. The ai and bi are constants. If an ≠ 1 the equation is first divided by an.

8.2.3.1 Homogeneous and Particular Solution

The solution to an inhomogeneous differential equation is given by the sum of the homogeneous fhom (x) and the inhomogeneous solution (referred to as the particular solution) fpart (x):

f(x) = f_{hom}(x) + f_{part}(x)   (Eq. 8.30)

This concept is very important as we will often proceed this way when solving differential equations. First we find the homogeneous solution; then in a second step, we derive the particular solution.

Homogeneous Solution

The approach for the solution of the homogeneous case for Eq. 8.29 is

f_{hom}(x) = e^{\lambda x}   (Eq. 8.31)

Inserting Eq. 8.31 into the homogeneous version of Eq. 8.29 results in

\lambda^n e^{\lambda x} + a_{n-1}\lambda^{n-1}e^{\lambda x} + \ldots + a_1\lambda\,e^{\lambda x} + a_0\,e^{\lambda x} = 0
\lambda^n + a_{n-1}\lambda^{n-1} + \ldots + a_1\lambda + a_0 = 0   (Eq. 8.32)

where Eq. 8.32 is referred to as the characteristic polynomial. It will have n solutions λ1, λ2, …, λn which are referred to as roots of the characteristic polynomial. The sum of the individual solutions will form the solution to the homogeneous differential equation, and therefore

f_{hom}(x) = c_n\,e^{\lambda_n x} + c_{n-1}\,e^{\lambda_{n-1} x} + \ldots + c_1\,e^{\lambda_1 x}

where the (integration) constants c1, c2, …, cn must be determined by the boundary conditions. Please note that the λi can be complex values, in which case the solutions may be written as sine and cosine functions using Euler’s formula, Eq. 3.38:

f_{hom}(x) = c_n\,e^{(a + ib)x} = c_n\,e^{ax}e^{ibx} = c_n\,e^{ax}\left(\cos(bx) + i\sin(bx)\right)

8.2.3.2 Inhomogeneous Solution

The approach for an inhomogeneous solution uses a polynomial of one order higher than the disturbance function. The approach in general is

f_{part}(x) = c_{m+1}x^{m+1} + \ldots + c_1 x + c_0

The particular solution is then inserted into Eq. 8.29, and the coefficients are compared.

8.2.3.3 Characteristic Polynomials With Recurring Roots

In many cases, a characteristic polynomial may have recurring solutions. As an example, consider the differential equation

\frac{d^2 f}{dx^2} = 0   (Eq. 8.33)

which has the characteristic polynomial λ² = 0. In this case, there are two solutions: λ1 = λ2 = 0. Whenever a solution λ to the characteristic polynomial is recurring, the solution is a polynomial of type

f_r(x) = \left(\sum_{n=0}^{r-1}c_n x^n\right)e^{\lambda x}   (Eq. 8.34)

where the parameter r is the number of recurrences of the solution. In the given case, the solution to the ODE Eq. 8.33 would be

f(x) = c_0 x^0 e^{0\cdot x} + c_1 x^1 e^{0\cdot x} = c_0 + c_1 x

By inserting this equation into Eq. 8.33 one can easily verify that this is correct.

8.2.3.4 Characteristic Polynomials With Complex Roots

The characteristic polynomial may have complex solutions. As an example, consider the following equation:

\frac{d^2 f}{dx^2} + 3f = 0   (Eq. 8.35)

for which the characteristic polynomial is λ² = − 3, which yields the complex solutions λ1 = √3 i and λ2 = − √3 i. In all cases of complex solutions we will always have a conjugated complex pair of solutions denoted by λ1 = α + i β and λ2 = α − i β. In this case our solution will be

f(x) = c_0\,e^{(\alpha + i\beta)x} + c_1\,e^{(\alpha - i\beta)x} = c_0\,e^{\alpha x}e^{i\beta x} + c_1\,e^{\alpha x}e^{-i\beta x}

where we now apply Euler’s formula, Eq. 3.38 and Eq. 3.39, to convert them to sine and cosine functions, in which case we find

f(x) = c_0\,e^{\alpha x}\left(\cos(\beta x) + i\sin(\beta x)\right) + c_1\,e^{\alpha x}\left(\cos(\beta x) - i\sin(\beta x)\right)
     = e^{\alpha x}\left(\cos(\beta x)\,(c_0 + c_1) + i\sin(\beta x)\,(c_0 - c_1)\right)
     = e^{\alpha x}\left(c_3\cos(\beta x) + i\,c_4\sin(\beta x)\right)

Here it is important to remember that both the real and the imaginary parts of the function obtained are solutions to our differential equation. This is why, in the case of a characteristic polynomial that has the complex roots α ± i β, we find the solution to be

f(x) = c_0\,e^{\alpha x}\cos(\beta x) + c_1\,e^{\alpha x}\sin(\beta x)   (Eq. 8.36)

Regarding our example ODE, Eq. 8.35, we find α = 0 and β = √3, which yields

f(x) = c_0\cos\left(\sqrt{3}\,x\right) + c_1\sin\left(\sqrt{3}\,x\right)

as our solutions.
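
The roots of a characteristic polynomial can also be obtained numerically; a minimal sketch (assuming numpy is available) for the polynomial of Eq. 8.35:

```python
# Sketch (assuming numpy is available): numerical roots of the characteristic
# polynomial lambda^2 + 3 = 0 belonging to Eq. 8.35.
import numpy as np

roots = np.roots([1, 0, 3])   # coefficients of lambda^2 + 0*lambda + 3
print(roots)                  # approx. [0+1.732j, 0-1.732j], i.e., +/- sqrt(3) i
```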

8.2.3.5 Characteristic Polynomials With Recurring Complex Roots

Combining both special cases we have just studied, whenever you come across a polynomial that has complex roots λ = α ± iβ, which recurs r times, Eq. 8.36 is simply multiplied by the terms used for recurring roots (see Eq. 8.34), yielding

f(x) = \left(\sum_{n=0}^{r-1}c_n x^n\right)\left(A\,e^{\alpha x}\cos(\beta x) + B\,e^{\alpha x}\sin(\beta x)\right)   (Eq. 8.37)

8.2.3.6 Examples

In the following, we will look at some examples.

Second-Order Inhomogeneous Ordinary Differential Equation

The following differential equation is given:

\frac{d^2 f}{dx^2} - 2\,\frac{df}{dx} = x^2   (Eq. 8.38)

with the boundary conditions

f(0) = 1, \qquad \left.\frac{df}{dx}\right|_{x=0} = 0

We first solve the homogeneous differential equation for which we find the characteristic polynomial to be

\lambda^2 - 2\lambda = 0

for which we find the two solutions λ1 = 0 and λ2 = 2. Therefore the homogeneous solution to Eq. 8.38 is

f_{hom}(x) = c_1 + c_2\,e^{2x}

We now proceed to finding the particular solution. We need a polynomial of order 3, for which we use the approach (writing the constants as bi in order to distinguish them from the constants of the homogeneous solution)

f_{part}(x) = b_3 x^3 + b_2 x^2 + b_1 x + b_0   (Eq. 8.39)

Inserting Eq. 8.39 into Eq. 8.38 yields

6b_3 x + 2b_2 - 6b_3 x^2 - 4b_2 x - 2b_1 = x^2
-6b_3 x^2 + \left(6b_3 - 4b_2\right)x + \left(2b_2 - 2b_1\right) = x^2

from which we find the following system of equations

-6b_3 = 1 \qquad\Rightarrow\qquad b_3 = -\frac{1}{6}
6b_3 - 4b_2 = 0 \qquad\Rightarrow\qquad b_2 = -\frac{1}{4}
2b_2 - 2b_1 = 0 \qquad\Rightarrow\qquad b_1 = -\frac{1}{4}

Therefore the general solution to Eq. 8.38 is

f(x) = f_{hom}(x) + f_{part}(x) = c_1 + c_2\,e^{2x} - \frac{1}{6}x^3 - \frac{1}{4}x^2 - \frac{1}{4}x

Using the boundary conditions, we can determine c2 = 1/8 and c1 = 1 − c2 = 7/8.
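
As a cross-check of this worked example, a minimal sketch (assuming sympy is available; the solver call is generic and not specific to this book) solves Eq. 8.38 together with its boundary conditions:

```python
# Sketch (assuming sympy is available): solve Eq. 8.38 including the boundary
# conditions f(0) = 1 and f'(0) = 0 and compare with the result derived above.
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

ode = sp.Eq(f(x).diff(x, 2) - 2*f(x).diff(x), x**2)
solution = sp.dsolve(ode, f(x), ics={f(0): 1, f(x).diff(x).subs(x, 0): 0})
print(sp.expand(solution.rhs))   # 7/8 + exp(2*x)/8 - x**3/6 - x**2/4 - x/4
```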

Solving the Differential Equation for Meniscus Formation at Liquid/Solid Interfaces

During the derivation of the shape of a meniscus forming due to surface tension, we have a very practical example of a linear ordinary differential equation with constant coefficients (see Eq. 21.9). The equation is given by

\frac{d^2 z(x)}{dx^2} - \frac{\rho g}{\gamma}\,z(x) = 0

This is a homogeneous second-order differential equation with constant coefficients. We proceed by finding the characteristic polynomial, which is given by

\lambda^2 - \frac{\rho g}{\gamma} = 0 \qquad\Rightarrow\qquad \lambda = \pm\sqrt{\frac{\rho g}{\gamma}}

Therefore we find the general solution to this differential equation to be

z(x) = c_1\,e^{\sqrt{\frac{\rho g}{\gamma}}\,x} + c_2\,e^{-\sqrt{\frac{\rho g}{\gamma}}\,x}

8.2.4 Linear Differential Equations With Nonconstant Coefficients

After having studied methods for solving differential equations with constant coefficients we will now look at differential equations that take functions instead of constants. For linear differential equations of first order, these can be solved conveniently.

8.2.4.1 Introduction

We will look at differential equations of the type

a(x)\,\frac{df(x)}{dx} + b(x)\,f(x) = c(x) \qquad\Longleftrightarrow\qquad a\,\frac{df}{dx} + b\,f = c   (Eq. 8.40)

with the coefficients a, b, and c being functions of x instead of constants: a (x), b (x), and c (x). These types of differential equations occur frequently, and they are very easy to solve. In the first step, we rewrite Eq. 8.40 so that the factor in front of the derivative becomes 1, resulting in

\frac{df}{dx} + \frac{b}{a}\,f = \frac{c}{a}   (Eq. 8.41)

8.2.4.2 Integration Factor

In the next step we apply the product rule of differentiation (see Eq. 3.13). We assume that the left-hand side of the equation is the result of the differentiation of the product r (x) f (x) where we introduced a new coefficient r (x), which we still have to determine. This coefficient is referred to as an integration factor. We will see in a moment where this name comes from. If we differentiate r f with respect to x we are left with

\frac{d(rf)}{dx} = r\,\frac{df}{dx} + \frac{dr}{dx}\,f   (Eq. 8.42)

If we multiply Eq. 8.41 with r we find

r\,\frac{df}{dx} + r\,\frac{b}{a}\,f = r\,\frac{c}{a}   (Eq. 8.43)

Now the left-hand side of Eq. 8.43 already looks very similar to the differentiated product in Eq. 8.42. In order to match exactly, we would need to ensure that

r\,\frac{b}{a} = \frac{dr}{dx}

by comparing the coefficients. From this equation, we can derive r (x) which is given by

\frac{dr}{r} = \frac{b}{a}\,dx
\ln r = \int\frac{b}{a}\,dx
r(x) = e^{\int\frac{b(x)}{a(x)}dx}   (Eq. 8.44)

We have now determined the integration factor r (x). Using this factor we can now rewrite Eq. 8.43 as

\frac{d(rf)}{dx} = r\,\frac{c}{a}
d(rf) = r\,\frac{c}{a}\,dx
rf = \int r\,\frac{c}{a}\,dx
f(x) = \frac{1}{r(x)}\int r(x)\,\frac{c(x)}{a(x)}\,dx   (Eq. 8.45)

8.2.4.3 Solution

Eq. 8.44 and Eq. 8.45 give us the solution to the first-order differential equation with nonconstant coefficients, which is given by

f(x) = \frac{1}{e^{\int\frac{b(x)}{a(x)}dx}}\int e^{\int\frac{b(x)}{a(x)}dx}\,\frac{c(x)}{a(x)}\,dx   (Eq. 8.46)

We used the integration factor r (x) in order to combine the differential df/dx and the function f on the left-hand side of our original differential equation into one combined differential. This is the reason why these factors are referred to as integration factors. Integration factors allow expressing an (unknown) differential by an integral that depends on the coefficients.

8.2.4.4 Example

In the following we will solve the following differential equation using the method we just discussed:

2x\,\frac{df}{dx} - 3f = 9x^3   (Eq. 8.47)

where a (x) = 2x, b (x) = − 3, and c (x) = 9x³. According to Eq. 8.46 the solution to this ODE is

f(x) = \frac{1}{e^{\int -\frac{3}{2x}dx}}\int e^{\int -\frac{3}{2x}dx}\,\frac{9x^3}{2x}\,dx = \frac{1}{e^{-\frac{3}{2}\ln x + c_0}}\int e^{-\frac{3}{2}\ln x + c_0}\,\frac{9}{2}x^2\,dx

Here, we can choose any value for c0. For convenience we will set it to 0, in which case we find

f(x) = \frac{1}{e^{-\frac{3}{2}\ln x}}\int e^{-\frac{3}{2}\ln x}\,\frac{9}{2}x^2\,dx
     = x^{\frac{3}{2}}\int x^{-\frac{3}{2}}\,\frac{9}{2}x^2\,dx
     = x^{\frac{3}{2}}\int \frac{9}{2}\,x^{\frac{1}{2}}\,dx
     = x^{\frac{3}{2}}\left(3x^{\frac{3}{2}} + c_1\right)
     = 3x^3 + c_1 x^{\frac{3}{2}}   (Eq. 8.48)

Let us quickly verify that this is in fact a solution to our differential equation. Using Eq. 8.48 in Eq. 8.47 yields

2x\left(9x^2 + \frac{3}{2}c_1 x^{\frac{1}{2}}\right) - 3\left(3x^3 + c_1 x^{\frac{3}{2}}\right) = 9x^3
18x^3 + 3c_1 x^{\frac{3}{2}} - 9x^3 - 3c_1 x^{\frac{3}{2}} = 9x^3

which is obviously correct.
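
The same verification can be done symbolically; a minimal sketch (assuming sympy is available):

```python
# Sketch (assuming sympy is available): symbolic check that Eq. 8.48 satisfies
# Eq. 8.47 for an arbitrary integration constant c1.
import sympy as sp

x, c1 = sp.symbols('x c1', positive=True)
f = 3*x**3 + c1*x**sp.Rational(3, 2)

residual = 2*x*f.diff(x) - 3*f - 9*x**3
print(sp.simplify(residual))   # prints 0
```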

8.2.5 Laplace Transform for Solving the Harmonic Oscillation ODE

8.2.5.1 Introduction

In section 5.2 we have introduced the Laplace transform, which is an extremely useful tool for solving complex ODEs. In this section we will use the Laplace transform for solving the ODE of the damped harmonic oscillation. The details of this problem are introduced in section 27.3.1. The resulting equation is a second-order homogeneous ODE given by Eq. 27.32 as

m\,\frac{d^2 s}{dt^2}(t) + k\,v(t) + D\,s(t) = 0   (Eq. 8.49)

The independent variable is the time t, and the dependent variable is the position or path s, which is a function of time. Obviously, the velocity v is the first derivative of s with respect to time, and the acceleration is the second derivative with respect to time. The constants are the oscillating mass m, the damping constant k, and the spring constant D.

As in section 27.3.1 we assume the following initial values: s (t = 0) = S and v (t = 0) = 0. At the beginning of the experiment, the mass is at rest but displaced to the maximum amplitude S from which it is left to oscillate.

We will now derive the solution to this equation using the Laplace transform. The analytical solutions are derived in section 27.3.1 using a different approach. We will compare the results found with the Laplace transform with these equations.

8.2.5.2 Laplace Transform of the ODE

We first need to look up the Laplace transforms of the individual terms from Tab. 5.2, from which we find Eq. 5.44 that gives the Laplace transforms of derivatives in time domain as

\mathcal{L}\left\{\frac{d^2 s}{dt^2}(t)\right\} = f^2 S(f) - f\,s(0) - \frac{ds}{dt}(0) = f^2 S(f) - fS
\mathcal{L}\left\{\frac{ds}{dt}(t)\right\} = f\,S(f) - s(0) = f\,S(f) - S
\mathcal{L}\left\{s(t)\right\} = S(f)

If we insert these equations into Eq. 8.49 we find

m\left(f^2 S(f) - fS\right) + k\left(f\,S(f) - S\right) + D\,S(f) = 0   (Eq. 8.50)

which we have to solve for S (f), finding

m f^2 S(f) + \left(kf + D\right)S(f) - S\left(fm + k\right) = 0
S(f)\left(m f^2 + kf + D\right) = S\left(fm + k\right)
S(f) = \frac{S\left(fm + k\right)}{m f^2 + kf + D}   (Eq. 8.51)

We have now found the solution to the ODE of Eq. 8.49. Unfortunately, this solution is still in the frequency domain and we first have to reconvert it to the time domain.

8.2.5.3 Method of Partial Fraction Expansion

As we have already discussed in section 5.2, the inversion of the Laplace transform is generally the problematic step. If we look at Tab. 5.2 we can see that we do not find an obvious transform pair that would match Eq. 8.51.

However, we can see that there are numerous transform pairs which have terms of the type (f − a) in the denominator. Here, a is a constant. It is a root of the denominator, i.e., a constant that makes the denominator equal to 0. These terms result from a process which is referred to as the method of partial fraction expansion.

If we look at Tab. 5.2 we can see that we have numerous transform pairs where the function in the frequency domain is of the form of simple fraction terms such as c/(f − a). We also know from section 5.2.3 that we can very conveniently retransform sums of transformed functions from the frequency domain to the time domain.

Finding Roots

The method of partial fractions allows us to transform a rather complicated looking polynomial fraction such as the one we derived in Eq. 8.51 to a product of several simple fractions. As an example, take

\frac{2f - 8}{f^2 - 5f + 6} = \frac{4}{f - 2} - \frac{2}{f - 3}   (Eq. 8.52)

You can check this by finding a common denominator of the two fractions, which is done by multiplying the denominator of one fraction with the denominator of the other, respectively. Mathematically speaking, we have decomposed the polynomial f² − 5f + 6 into a product of its roots, which are λ1 = 2 and λ2 = 3. These can be calculated using the formula for finding the roots of the quadratic equation ax² + bx + c = 0, given by

x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}   (Eq. 8.53)

Approaches for Partial Fractions

Obviously, the roots do not necessarily have to be real numbers, they may also be complex. Depending on the order of the denominator and the type of roots found, several common approaches for partial fractions are used. The most common ones are listed in Tab. 8.1. As you can see, the approaches are lacking the constants Ai, B, which need to be determined. We will carry out such a partial fraction expansion for Eq. 8.51.

Tab. 8.1

Common approaches for partial fraction expansion, required, e.g., for the inversion of the Laplace transform. The polynomial to decompose is P(f)/Q(f), where the degree of the denominator is larger than the degree of the numerator. Please note that the denominator polynomial must be normalized, i.e., the coefficient of the highest-order term must be 1

Polynomial and expansion approach — roots of the denominator — comment:

1. P(f)/Q(f) = P(f)/(f² + af + b) = A/(f − λ1) + B/(f − λ2); roots λ1, λ2; denominator with two different noncomplex roots

2. P(f)/Q(f) = P(f)/(f² + af + b) = A/(f − λ) + B/(f − λ)²; roots λ1 = λ2 = λ; denominator with one recurring noncomplex root

3. P(f)/Q(f) = P(f)/(f^n + a_{n−1} f^{n−1} + … + a_0) = A_1/(f − λ_1) + … + A_n/(f − λ_n); roots λ_1, …, λ_n; denominator with n different real roots

4. P(f)/Q(f) = P(f)/(f² + af + b) = (A (f − Re{λ}) + B)/((f − Re{λ})² + Im{λ}²); roots λ and its complex conjugate; denominator with two complex-conjugated roots
Performing Partial Fraction Expansion

In many cases, partial fraction expansion allows us to transform a fraction with a polynomial denominator into a sum of very simple fractions with low-order denominators. As an example, take the right-hand side of Eq. 8.52. This is significantly easier to retransform to the time domain than the left-hand side. The process is very simple. If we consider Eq. 8.52 we find the following roots:

\lambda_1 = \frac{5 - \sqrt{25 - 24}}{2} = 2, \qquad \lambda_2 = \frac{5 + \sqrt{25 - 24}}{2} = 3

which we obtain by applying Eq. 8.53. We could then rewrite Eq. 8.52 as

\frac{2f - 8}{f^2 - 5f + 6} = \frac{A}{f - \lambda_1} + \frac{B}{f - \lambda_2} = \frac{A}{f - 2} + \frac{B}{f - 3}   (Eq. 8.54)

where A and B are constants. We determine these by comparison with the left-hand side of Eq. 8.54. For this we reexpand the right-hand side of Eq. 8.54 on a common denominator, which we do by multiplying the first term with (f − 3) and the second term with (f − 2), yielding

\frac{2f - 8}{f^2 - 5f + 6} = \frac{2f - 8}{(f - 2)(f - 3)} = \frac{A(f - 3) + B(f - 2)}{(f - 2)(f - 3)} = \frac{f(A + B) - 3A - 2B}{(f - 2)(f - 3)}   (Eq. 8.55)

We now simply have to compare coefficients to find that A + B = 2, i.e., B = 2 − A, and − 3A − 2B = − 3A − 2 (2 − A) = − 8, which is satisfied for A = 4; in this case B = − 2, which is exactly the right-hand side of Eq. 8.52.
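
Computer algebra systems perform this decomposition directly; a minimal sketch (assuming sympy is available, writing the frequency variable as f as in the text):

```python
# Sketch (assuming sympy is available; the frequency variable is written as f
# as in the text): partial fraction expansion of the left-hand side of Eq. 8.52.
import sympy as sp

f = sp.symbols('f')
expression = (2*f - 8) / (f**2 - 5*f + 6)
print(sp.apart(expression, f))   # 4/(f - 2) - 2/(f - 3)
```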

When to Use Partial Fraction Expansion

Partial fraction expansion allowed us to decompose the rather complicated polynomial fraction into two simple fractions that can be conveniently retransformed using Tab. 5.2. To be exact, we would use Eq. 5.29 with which we can write down the retransformed function as

h(t) = 4e^{2t} - 2e^{3t}

In many cases, partial fraction expansion significantly simplifies the retransformation step. However, as we will see in a moment, if a comprehensive look-up table is available, it may not be required to perform the full partial fraction expansion since intermediary polynomials may already be available for look-up.

8.2.5.4 Performing the Retransform

As stated, we require the roots for the denominator of Eq. 8.51, i.e., the values fi for which the denominator becomes zero:

0 = m f^2 + kf + D \qquad\Rightarrow\qquad f_{1,2} = \frac{-k \pm \sqrt{k^2 - 4mD}}{2m}

where we applied Eq. 8.53 for finding the roots. If you compare this calculation with the derivation of the analytical solution in section 27.3.1 you will see that we derived the same characteristic polynomial as given by Eq. 27.33, which obviously has the same roots. We have to make a case distinction here because depending on the value in the square root we either get complex roots or not. In these cases we have to use different approaches (see Tab. 8.1).

Overdamped Oscillation: k² − 4mD > 0

The first case we will consider is the overdamped oscillation. It occurs when the damping consumes the momentum of the moving mass so quickly that the displacement simply decays without oscillating. At this point, we will insert numbers for the variables in order to simplify the calculations a little. We set D = 1, k = 3, and m = 1, which are the values used in section 27.3.1 as well as in listing 27.7. We also set S = 1. In this case, we obtain the roots

f_1 = \frac{-k + \sqrt{k^2 - 4mD}}{2m} = \frac{-3 + \sqrt{5}}{2}, \qquad f_2 = \frac{-k - \sqrt{k^2 - 4mD}}{2m} = \frac{-3 - \sqrt{5}}{2}

in which case we can decompose the denominator polynomial of Eq. 8.51 according to

\frac{S\left(fm + k\right)}{m f^2 + kf + D} = \frac{1\cdot(f + 3)}{(f - f_1)(f - f_2)} = \frac{f}{(f - f_1)(f - f_2)} + 3\,\frac{1}{(f - f_1)(f - f_2)}

Obviously, we could further decompose the term into full polynomials but this is not required at this point. If we look at Tab. 5.2 we can see that we have suitable transform pairs we can use here (see Eq. 5.33 and Eq. 5.34). Using these equations, we find the solution

s(t) = \frac{f_1 e^{f_1 t} - f_2 e^{f_2 t}}{f_1 - f_2} + 3\,\frac{e^{f_1 t} - e^{f_2 t}}{f_1 - f_2} = \frac{1}{f_1 - f_2}\left((f_1 + 3)\,e^{f_1 t} - (f_2 + 3)\,e^{f_2 t}\right)
f_1 - f_2 = \sqrt{5}, \qquad f_1 + 3 = \frac{3 + \sqrt{5}}{2}, \qquad f_2 + 3 = \frac{3 - \sqrt{5}}{2}
s(t) = \frac{\sqrt{5}}{10}\left((3 + \sqrt{5})\,e^{\frac{-3 + \sqrt{5}}{2}t} - (3 - \sqrt{5})\,e^{\frac{-3 - \sqrt{5}}{2}t}\right)   (Eq. 8.56)

where we have found the retransformed solution in the time domain. If you compare Eq. 8.56 with the analytical solution (see Eq. 27.38) you will see that this is in fact the correct result. As you can see, with a good look-up table the conversion of the Laplace transformed solution back into the time domain is simple.
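
The result can also be cross-checked numerically. The following sketch (assuming numpy and scipy are available) integrates the oscillator ODE for the overdamped parameter set and compares it against the closed-form solution Eq. 8.56.

```python
# Sketch (assuming numpy and scipy are available): integrate the oscillator ODE
# m s'' + k s' + D s = 0 for the overdamped parameters and compare the result
# against the closed-form solution Eq. 8.56.
import numpy as np
from scipy.integrate import solve_ivp

m, k, D = 1.0, 3.0, 1.0

def rhs(t, y):
    s, v = y
    return [v, -(k*v + D*s)/m]   # rewritten as a first-order system

t = np.linspace(0.0, 10.0, 200)
numerical = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t,
                      rtol=1e-10, atol=1e-12)

f1, f2 = (-3 + np.sqrt(5))/2, (-3 - np.sqrt(5))/2
closed_form = (np.sqrt(5)/10) * ((3 + np.sqrt(5))*np.exp(f1*t)
                                 - (3 - np.sqrt(5))*np.exp(f2*t))

print(np.max(np.abs(numerical.y[0] - closed_form)))   # very small: curves agree
```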

Critically Damped Oscillation: k² − 4mD = 0

The second case is referred to as critically damped oscillation. It is the case in which the oscillation decays most quickly. We choose the following values for this case: D = 1, k = 2, and m = 1. We now have a recurring root that is given by

f_1 = f_2 = -\frac{k}{2m} = -\frac{2}{2} = -1

We can therefore decompose the denominator polynomial of Eq. 8.51 according to

\frac{S\left(fm + k\right)}{m f^2 + kf + D} = \frac{1\cdot(f + 2)}{(f - f_1)^2} = \frac{f}{(f - f_1)^2} + 2\,\frac{1}{(f - f_1)^2}

Referring to Tab. 5.2 we can find Eq. 5.31 and Eq. 5.30, which give the following retransformed solution:

s(t) = (1 + f_1 t)\,e^{f_1 t} + 2t\,e^{f_1 t} = (1 - t)\,e^{-t} + 2t\,e^{-t} = (1 + t)\,e^{-t}

which is the solution in the time domain. If you compare this equation with the analytical solution derived in Eq. 27.40 you will see that this is the correct result.

Underdamped Oscillation: k² − 4mD < 0

The last case we will study is called underdamped oscillation, in which the damping does not consume the momentum of the moving mass fast enough to hinder the system from oscillating. We choose the following values for this case: D = 1, k = 1, and m = 5. We find Eq. 8.51 as

S(f) = \frac{S\left(fm + k\right)}{m f^2 + kf + D} = \frac{5f + 1}{5f^2 + f + 1} = \frac{f + \frac{1}{5}}{f^2 + \frac{1}{5}f + \frac{1}{5}}

The reason for the last rearrangement is that, as indicated in Tab. 8.1, we have to scale the equation such that the factor of the f2 term in the denominator polynomial is 1. We now find the following conjugated complex roots:

f_1 = -\frac{1}{10} + \frac{\sqrt{-19}}{10} = -\frac{1}{10} + i\,\frac{\sqrt{19}}{10}, \qquad f_2 = \overline{f_1} = -\frac{1}{10} - i\,\frac{\sqrt{19}}{10}

Referring to Tab. 8.1 we will use the following approach for the decomposition:

\frac{f + \frac{1}{5}}{f^2 + \frac{1}{5}f + \frac{1}{5}} \stackrel{!}{=} \frac{A\left(f + \frac{1}{10}\right) + B}{\left(f + \frac{1}{10}\right)^2 + \frac{19}{100}}   (Eq. 8.57)

It takes a moment to see that the denominators are actually equal. We quickly verify this by expanding the denominator of the right-hand side of Eq. 8.57, for which we find

\left(f + \frac{1}{10}\right)^2 + \frac{19}{100} = f^2 + \frac{1}{5}f + \frac{1}{100} + \frac{19}{100} = f^2 + \frac{1}{5}f + \frac{1}{5}

We now simply have to determine the missing constants A and B by comparing coefficients in the numerator of Eq. 8.57 for which we find

f + \frac{1}{5} \stackrel{!}{=} A\left(f + \frac{1}{10}\right) + B

from which we find A = 1 and B = 1/10, which completes the partial fraction decomposition; for the decomposition we find

\frac{f + \frac{1}{5}}{f^2 + \frac{1}{5}f + \frac{1}{5}} = \frac{\left(f + \frac{1}{10}\right) + \frac{1}{10}}{\left(f + \frac{1}{10}\right)^2 + \frac{19}{100}} = \frac{f + \frac{1}{10}}{\left(f + \frac{1}{10}\right)^2 + \frac{19}{100}} + \frac{\frac{1}{10}}{\left(f + \frac{1}{10}\right)^2 + \frac{19}{100}}   (Eq. 8.58)

From Tab. 5.2 we use Eq. 5.41 and Eq. 5.42 for retransforming Eq. 8.58 into the time domain for which we find

s(t) = e^{-\frac{t}{10}}\cos\left(\frac{\sqrt{19}}{10}t\right) + \frac{1}{\sqrt{19}}\,e^{-\frac{t}{10}}\sin\left(\frac{\sqrt{19}}{10}t\right)
     = e^{-\frac{t}{10}}\left(\cos\left(\frac{\sqrt{19}}{10}t\right) + \frac{1}{\sqrt{19}}\sin\left(\frac{\sqrt{19}}{10}t\right)\right)
     = e^{-\frac{t}{10}}\left(\frac{\sqrt{19}}{19}\sin\left(\frac{\sqrt{19}}{10}t\right) + \cos\left(\frac{\sqrt{19}}{10}t\right)\right)

which is the solution in the time domain. If you compare this equation with the analytical solution derived in Eq. 27.42 you will see that this is the correct result.

Visualization

At this point, we will not bother visualizing our results because we will do this when studying numerical methods for which we will use this example as well. Please refer to section 27.3.1 for details on the solution and to Fig. 27.12 for a visualization of the results.

8.2.5.5 Summary

In this section we have briefly studied the Laplace transform as a method for solving ODEs in the time domain using damped oscillation as an example. As we have seen, the transformation is simple and we quickly come up with a solution of the ODE in the frequency domain. The critical aspect about the Laplace transform is the retransformation. We have used a rather complicated example which gave rise to three distinct cases each of which produced a different solution in the frequency domain. Retransforming these solutions into the time domain is possible using adequate partial fraction expansions, which allow us to derive polynomial fractions for which we can look up the corresponding equations in the time domain from reference tables.

The Laplace transform is a very important method to solve ODEs on semi-infinite domains, i.e., whenever the independent variable is bound from 0 to ∞. In this example, the time t was the independent variable and we were interested in solutions to the ODE for values of t ≥ 0. Even though we usually refer to this domain as the time domain, we could more generally refer to it as the original domain. In many cases, the independent variable may not be time but, e.g., a space coordinate such as x. Then the frequency domain would better be referred to as the image domain.

Besides ODEs, the Laplace transform is a very important tool for solving PDEs. We will study an example where the Laplace transform is used to solve a PDE in section 8.3.7.

8.3 General Solutions to Selected Partial Differential Equations

8.3.1 Introduction

Having introduced methods to solve ODEs we will turn to methods for solving PDEs. As it turns out, PDEs are significantly more challenging to solve and there is only a very limited number of general solutions that can be obtained for PDEs. Luckily, some of the most relevant PDEs that we will come across can be solved. In the following we will look at a selection of these equations and discuss the approaches used to solve them. We will see most of these equations again in later chapters and the methods for solving them form the basis of some of the concepts that we will employ later for solving the fundamental equations in fluid mechanics.

Rationale for Solving PDEs

With very few exceptions, the general method for solving a PDE is to first turn it into a system of several independent ODEs. For each independent variable, we will require a separate ODE. The main trick in solving PDEs lies in the method for splitting them into ODEs. Once these ODEs are obtained, the task is straightforward and we may apply any suitable method discussed in section 8.2 to solve the ODEs. After the individual solutions are found, we will reassemble these equations into one single equation which will (if we made no mistakes) be a solution to the PDE.

8.3.2 The Method of Characteristics: The Transport Equation

In this section we will solve the transport equation (see section 8.1.1).

8.3.2.1 Characteristic Lines

The transport equation is very easy to solve, which is why we will quickly do it. We begin by forming the total differential (see section 3.1.5.1) of the function f, which is given by

df = \frac{\partial f}{\partial t}\,dt + \frac{\partial f}{\partial x}\,dx   (Eq. 8.59)

As x is a function of the time, we can rewrite dx as

dx = \frac{dx}{dt}\,dt

in which case we find from Eq. 8.59

df = \frac{\partial f}{\partial t}\,dt + \frac{\partial f}{\partial x}\,\frac{dx}{dt}\,dt
\frac{df}{dt} = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial x}\,\frac{dx}{dt}   (Eq. 8.60)

If we compare Eq. 8.60 with Eq. 8.1 we see that, along paths for which dx/dt = c, the total derivative becomes df/dt = − a f. We can therefore simply derive the following equations by comparison of the coefficients:

c = \frac{dx}{dt} \qquad\Rightarrow\qquad x(t) = ct + c_0   (Eq. 8.61)

a = -\frac{1}{f}\,\frac{df}{dt} \qquad\Rightarrow\qquad f(t) = e^{-at + c_1} = c_2\,e^{-at}   (Eq. 8.62)

Eq. 8.61 gives us a very important piece of information. It states that x (t) − ct = c0. If you recall the second way of looking at the traveling water wave, you will see that this is exactly what this equation tells you: As long as you move your eyes along with the velocity c you will not see a relative shift in the height of the wave. The point you choose to look at is given by c0. There obviously is an infinite number of points you could choose to look at. Any point would do as long as you move your eyes at the correct velocity. The lines along which you move your eyes are referred to as characteristic lines, which is why this method of solving the differential equation is sometimes referred to as the method of characteristics.

8.3.2.2 Solution

We can now use Eq. 8.62 to formulate f (x, t) as

f(x, t) = c_2\,e^{-at}   (Eq. 8.63)

for which we have to find c2 by using a boundary condition. Let this boundary condition be at t = 0, for which we find

f(x(0), 0) = f(c\cdot 0 + c_0, 0) = f(c_0, 0) = c_2\,e^{-a\cdot 0} = c_2

where we have inserted Eq. 8.61 to convert x (t). If we solve Eq. 8.61 for c0 we can write

f(c_0, 0) = f(x - ct, 0) := F(x - ct)   (Eq. 8.64)

where we have introduced the function F (x − ct), which can be any chosen function. If we now insert Eq. 8.64 into Eq. 8.63 we find

f(x, t) = F(x - ct)\,e^{-at}   (Eq. 8.65)

which is the solution to the transport equation. The transport equation can be solved by any function F (x − ct) that is multiplied with an exponential term that describes the decay over time. As discussed, the decay is a function only of time, and it is independent of x.

On one hand it seems strange that we may really use "any" function as solution. However, in principle any physical shape or contour can be transported, no matter which function. This may be a traveling rectangular pulse, a sine wave, or any other function we may come up with. The only condition they have to fulfill is that they are defined as functions of the independent variable term x − ct.

8.3.2.3 Visualization

It may be easier to understand the implications of this solution when looking at some practical examples. In Fig. 8.2a you can see a sine wave that is traveling at constant speed c = 1. This may be a water wave from our introductory example. It is defined as

Fig. 8.2 Sine wave function as a solution to the transport equation. The function travels at the speed c = 1 toward the right. The function is shown for the time points t = 0, t = 1, t = 3, t = 5, and t = 10. a) Undamped case. The function does not decay over time. b) Damped case assuming a damping factor a = 0.01. As you can see, the function decays over time.

f_1(x, t) = \begin{cases}\sin(x - ct) & 0 \le x - ct \le 2\pi\\ 0 & \text{otherwise}\end{cases}

As can be seen from the plot, the water wave travels to the right, i.e., it moves in space without actually changing its shape. It is "transported" along: a classical example of a transport phenomenon. For Fig. 8.2a the damping factor was set to a = 0, which is why you do not see a decrease in the function over time. In Fig. 8.2b you can see the same function, but this time it is damped using a damping factor a = 0.01. Obviously, the function still moves to the right but it slowly decays over time. This is a somewhat more realistic scenario since we would expect to see damping, also in the introductory example of the water waves.

Just to give you an idea of how a different solution function to the transport equation would look, Fig. 8.3 shows the pulse function transported without damping (see Fig. 8.3a) and with damping (see Fig. 8.3b). This function is defined as

Fig. 8.3 Pulse function as a solution to the transport equation. The function travels at the speed c = 1 toward the right. The function is shown for the time points t = 0, t = 1, t = 3, t = 5, and t = 10. a) Undamped case. The function does not decay over time. b) Damped case assuming a damping factor a = 0.01. As you can see, the function decays over time.

f_2(x, t) = \begin{cases}1 & 0 \le x - ct \le 0.75\\ 0 & \text{otherwise}\end{cases}

As you can see, this function is a solution to the transport equation.
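
Plots in the spirit of Fig. 8.3b are easy to reproduce; the following minimal sketch (numpy and matplotlib assumed available; the helper name profile is ours) evaluates the damped rectangular pulse f2 (x, t) = F (x − ct) e^(− at) from Eq. 8.65 at the same time points.

```python
# Sketch (assuming numpy and matplotlib are available; the helper name profile
# is ours): evaluate the damped rectangular pulse f2(x,t) = F(x - ct) e^(-at)
# at several time points, in the spirit of Fig. 8.3b.
import numpy as np
import matplotlib.pyplot as plt

c, a = 1.0, 0.01

def profile(xi):
    """Rectangular pulse F(xi): 1 for 0 <= xi <= 0.75, 0 otherwise."""
    return np.where((xi >= 0.0) & (xi <= 0.75), 1.0, 0.0)

x = np.linspace(-1.0, 12.0, 1000)
for t in (0, 1, 3, 5, 10):
    plt.plot(x, profile(x - c*t) * np.exp(-a*t), label=f"t = {t}")

plt.xlabel("x")
plt.ylabel("f(x, t)")
plt.legend()
plt.show()
```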

Wave Collision

A phenomenon we often observe is two waves that run in opposite directions and eventually "collide." They temporarily fuse into a single wave, pass each other, and travel onwards. This effect can be visualized quite conveniently using the sine wave we have used in Fig. 8.2. For this we simply define two waves traveling in opposite directions, i.e., for one wave we assume c = 1 while for the other we assume c = − 1. The effect of the waves "passing each other" is shown in Fig. 8.4.

Fig. 8.4 Two waves "colliding." The waves travel in the opposite direction with one wave at c = 1 while the other wave is at c = − 1. The second wave is offset to start at the right side.

8.3.3 Substitution and Separation of Variables: The One-Dimensional Heat Equation

The next method we will look into is often also referred to as "separation of variables" because it actually uses this method. However, it uses a very important trick before performing separation of variables, which is why we will refer to it as substitution and separation of variables. We will discuss this method using a practical example: The one-dimensional heat equation (see Eq. 8.3).

\frac{\partial f}{\partial t} = c^2\,\frac{\partial^2 f}{\partial x^2}   (Eq. 8.66)

As we can see, f depends on both of our independent variables x and t.

8.3.3.1 Substitution

We will assume that f is actually a function that consists of two functions X and T. Each of these functions is assumed to only depend on one of the independent variables. We assume

f(x, t) = X(x)\,T(t)   (Eq. 8.67)

which allows us to find the two partial derivatives in Eq. 8.66 as

\frac{\partial f}{\partial t} = X\,\frac{\partial T}{\partial t} + \frac{\partial X}{\partial t}\,T = X\,\frac{\partial T}{\partial t}   (Eq. 8.68)

\frac{\partial^2 f}{\partial x^2} = \frac{\partial}{\partial x}\left(T\,\frac{\partial X}{\partial x} + X\,\frac{\partial T}{\partial x}\right) = \frac{\partial}{\partial x}\left(T\,\frac{\partial X}{\partial x}\right) = T\,\frac{\partial^2 X}{\partial x^2} + \frac{\partial X}{\partial x}\,\frac{\partial T}{\partial x} = T\,\frac{\partial^2 X}{\partial x^2}   (Eq. 8.69)

where we have applied the product rule (see Eq. 3.13). Please note that as X is only a function of x we know that ∂X/∂t = 0. Likewise, we know ∂T/∂x = 0 as T is only a function of t.

8.3.3.2 Separation of Variables

We can now insert Eq. 8.68 and Eq. 8.69 into Eq. 8.66 and obtain

X\,\frac{\partial T}{\partial t} = c^2\,T\,\frac{\partial^2 X}{\partial x^2}
\frac{\frac{\partial T}{\partial t}}{c^2\,T} = \frac{\frac{\partial^2 X}{\partial x^2}}{X}   (Eq. 8.70)

which we have rewritten such that all terms involving X are collected on the right-hand side whereas all terms involving T have been collected on the left-hand side of Eq. 8.70. This is the stated "separation" part.

8.3.3.3 Separation Into Two Differential Equations

After separation of variables we see that the left-hand side of Eq. 8.70 only depends on the independent variable t, whereas the right-hand side of the equation only depends on the independent variable x. Still, for all values of t and x these two expressions must be equal. If this must be true for all potential values that the two independent variables could have, then there is only one possible solution: the expressions must both be equal to a constant. We denote this constant by − λ which, by convention, is written with a negative sign. This now turns our differential equation Eq. 8.66 into two differential equations:

\frac{\frac{\partial T}{\partial t}}{c^2\,T} = -\lambda, \qquad \frac{\frac{\partial^2 X}{\partial x^2}}{X} = -\lambda

As in both cases the respective functions depend on only one independent variable, we can rewrite these equations as

\frac{\frac{dT}{dt}}{c^2\,T} = -\lambda   (Eq. 8.71)

\frac{\frac{d^2 X}{dx^2}}{X} = -\lambda   (Eq. 8.72)

8.3.3.4 Solving T (t)

Eq. 8.71 is very easy to solve. We rewrite it as

dTdtdTdt+λc2T==λc2T0

si184_e  (Eq. 8.73)

which yields a linear differential equation with constant coefficients (section 8.2.3). We find the characteristic polynomial (using a instead of λ in order to avoid confusion) to be

a + \lambda c^2 = 0 \qquad\Rightarrow\qquad a = -\lambda c^2

Therefore we find

T(t) = e^{-\lambda c^2 t}   (Eq. 8.74)

8.3.3.5 Solving X (x)

Now that we have solved T (t) we shall proceed with solving X (x), which can be solved in exactly the same manner. From Eq. 8.72 we find

\frac{d^2 X}{dx^2} = -\lambda X   (Eq. 8.75)

Eq. 8.75 is a Helmholtz equation (see Eq. 8.26), which is a linear differential equation with constant coefficients. We find the characteristic polynomial (again using a instead of λ) to be

a^2 = -\lambda   (Eq. 8.76)

There are three cases we need to distinguish here.

Case 1: λ = 0

For λ = 0 we find a1 = a2 = 0 to be a recurring root (see Eq. 8.34) in which case we find the solution to be

X_{\lambda=0}(x) = c_1 x + c_2

Case 2: λ > 0

For λ > 0 Eq. 8.76 has two complex solutions: a1 = i √λ and a2 = − i √λ. In this case we find the solutions according to Eq. 8.36 to be

X_{1,\lambda>0}(x) = c_1\cos\left(\sqrt{\lambda}\,x\right), \qquad X_{2,\lambda>0}(x) = c_2\sin\left(\sqrt{\lambda}\,x\right)

Case 3: λ < 0

For λ < 0 Eq. 8.76 has two noncomplex solutions, which are given by a1 = √(− λ) and a2 = − √(− λ) in which case we find

X_{\lambda<0}(x) = c_1\,e^{\sqrt{-\lambda}\,x} + c_2\,e^{-\sqrt{-\lambda}\,x}

Here, the two solutions are

X_{1,\lambda<0}(x) = c_1\,e^{\sqrt{-\lambda}\,x}, \qquad X_{2,\lambda<0}(x) = c_2\,e^{-\sqrt{-\lambda}\,x}

8.3.3.6 Preliminary Solution

We now have derived the solution that we can reassemble according to Eq. 8.67. The solution is given by

f(x, t) = \begin{cases} c_1 x + c_2 & \lambda = 0\\ e^{-\lambda c^2 t}\left(c_1\cos\left(\sqrt{\lambda}\,x\right) + c_2\sin\left(\sqrt{\lambda}\,x\right)\right) & \lambda > 0\\ e^{-\lambda c^2 t}\left(c_1\,e^{\sqrt{-\lambda}\,x} + c_2\,e^{-\sqrt{-\lambda}\,x}\right) & \lambda < 0\end{cases}   (Eq. 8.77)

This is the solution to the one-dimensional heat conduction equation. In all cases, the constants will be defined by the boundary conditions.

8.3.3.7 Principle of Superposition

You may wonder why the solution presented for λ > 0 and λ < 0 is the sum of the given solution and why (in case of λ > 0) we simply ignored the complex constant i. The reason is that a general solution to a differential equation must always include all potential solutions. Each of the solutions is multiplied by a constant. So in case a given solution does not apply to a given problem, the boundary values will determine that respective constant to be 0 and thus this particular solution disappears. However, whenever you have more than one solution you must always include all of them. This is referred to as the principle of superposition.

8.3.3.8 Initial Boundary Value Problem

In the next step we will look at an example where we apply boundary conditions and initial values to the problem. In this case, we will assume a rod that is fixed between two fixtures (see Fig. 8.5). At the time t = 0 the rod is assumed to have a given temperature distribution T (t = 0, x). For the sake of simplicity we will assume this distribution to be the constant temperature T0. Heat will be conducted from the rod into the fixtures that are held at the temperature T = 0, which gives rise to the two boundary values we need for solving the heat equation: T (t, 0) = T (t, L) = 0. Please note that this temperature can be any temperature as long as the temperature profile of the rod is offset accordingly. Thus the temperature is given as a difference value to the temperature of the fixture.

Fig. 8.5 Analytical solution of the one-dimensional heat equation with initial values and boundary values. a) At t = 0 a rod fixed between two points whose ends are held at a constant temperature (in this case T = 0) has a given heat distribution. In this case the heat is distributed equally and the rod has the temperature T0 at all points of x. b) At t = t1 the temperature distribution will have changed. Heat will be conducted along the rod toward the two ends, giving rise to a temperature profile. The ambiance is assumed to be perfectly insulated.

We already know the solutions to the heat equation that is given by Eq. 8.77. We now apply the boundary conditions for the three cases we determined.

Case 1: λ = 0

For λ = 0 we found the solution

f_{\lambda=0}(x, t) = c_1 x + c_2

Applying the boundary values results in

f(0, t) = c_2 \stackrel{!}{=} 0 \qquad\Rightarrow\qquad c_2 = 0
f(L, t) = c_1 L \stackrel{!}{=} 0 \qquad\Rightarrow\qquad c_1 = 0

which only yields the trivial solution

f_{\lambda=0}(x, t) = 0

Case 3: λ < 0

We will treat the third case here because, as we will see, it will also yield only a trivial solution. For λ < 0 we found the solution

f_{\lambda<0}(x, t) = e^{-\lambda c^2 t}\left(c_1\,e^{\sqrt{-\lambda}\,x} + c_2\,e^{-\sqrt{-\lambda}\,x}\right)

Applying the boundary conditions yields

f_{\lambda<0}(0, t) = e^{-\lambda c^2 t}\left(c_1 + c_2\right) \stackrel{!}{=} 0 \qquad\Rightarrow\qquad c_1 + c_2 = 0
f_{\lambda<0}(L, t) = e^{-\lambda c^2 t}\left(c_1\,e^{\sqrt{-\lambda}\,L} + c_2\,e^{-\sqrt{-\lambda}\,L}\right) \stackrel{!}{=} 0 \qquad\Rightarrow\qquad c_1\,e^{\sqrt{-\lambda}\,L} + c_2\,e^{-\sqrt{-\lambda}\,L} = 0

which we again can only achieve in the case of a trivial solution where c1 = c2 = 0

f_{\lambda<0}(x, t) = 0

Case 2: λ > 0

For λ > 0 we found the solution

f_{\lambda>0}(x, t) = e^{-\lambda c^2 t}\left(c_1\cos\left(\sqrt{\lambda}\,x\right) + c_2\sin\left(\sqrt{\lambda}\,x\right)\right)

Applying the boundary conditions yields

f_{\lambda>0}(0, t) = e^{-\lambda c^2 t}\,c_1 \stackrel{!}{=} 0 \qquad\Rightarrow\qquad c_1 = 0
f_{\lambda>0}(L, t) = e^{-\lambda c^2 t}\,c_2\sin\left(\sqrt{\lambda}\,L\right) \stackrel{!}{=} 0

from which we find two potential solutions. One is the trivial case for c2 = 0 which yields the trivial solution

f_{\lambda>0,\text{trivial}}(x, t) = 0

8.3.3.9 Eigenvalues and Eigenfunctions

The second solution is for sin (√λ L) = 0. This is the most important case. In order for this case to be fulfilled, we must ensure that the term √λ L takes on a multiple of π (at which point the sine function is 0; see section 3.2.1.1). Therefore we have a solution for each n where

\sqrt{\lambda}\,L = n\pi \qquad\Rightarrow\qquad \sqrt{\lambda} = \frac{n\pi}{L} \qquad\Rightarrow\qquad \lambda_n = \left(\frac{n\pi}{L}\right)^2 \qquad \text{for all } n = 1, 2, \ldots

These values λn are referred to as the eigenvalues of the corresponding eigenfunctions fn (x, t). An eigenfunction is a solution to a differential equation that satisfies the boundary conditions. In our case, the eigenfunctions would be

f_n(x, t) = c_n\,e^{-\left(\frac{n\pi c}{L}\right)^2 t}\sin\left(\frac{n\pi x}{L}\right)

Each eigenfunction has a different value for cn. These are defined by the initial value of our problem. As you can see, we have an infinite number of solutions to the differential equation. According to the principle of superposition, we find the general solution to the one-dimensional heat equation to be

f(x, t) = \sum_{n=1}^{\infty} c_n\,e^{-\left(\frac{n\pi c}{L}\right)^2 t}\sin\left(\frac{n\pi x}{L}\right)   (Eq. 8.78)

Please note that until now we still have unknown constants in our equation that need to be determined. We will achieve this by looking at the initial values of our problem.

8.3.3.10 Initial Values

The initial values will allow us to determine the unknown constants in our solution Eq. 8.78 to yield a useful result. Returning to the example in Fig. 8.5 we assume the heat distribution to be described by the function f (x, t = 0) = f0 (x). We will first solve the more general case before proceeding to the case displayed where the initial temperature distribution is a constant f0 (x) = T0.

We will first expand the function f0 (x) into a Fourier series. We do this because our solution Eq. 8.78 is a sine series. We expand the function on the interval 0 ≤ x ≤ L according to Eq. 4.46 in which case we find

f_0(x) = \frac{2}{L}\sum_{n=1}^{\infty}\left(\int_0^L f_0(x)\sin\left(\frac{n\pi x}{L}\right)dx\right)\sin\left(\frac{n\pi x}{L}\right)   (Eq. 8.79)

If we apply Eq. 8.79 to Eq. 8.78 we find

f(x, 0) = \sum_{n=1}^{\infty} c_n\sin\left(\frac{n\pi x}{L}\right) \stackrel{!}{=} \frac{2}{L}\sum_{n=1}^{\infty}\left(\int_0^L f_0(x)\sin\left(\frac{n\pi x}{L}\right)dx\right)\sin\left(\frac{n\pi x}{L}\right) = f_0(x)

from which we learn that

c_n = \frac{2}{L}\int_0^L f_0(x)\sin\left(\frac{n\pi x}{L}\right)dx   (Eq. 8.80)

As you can see, the initial values allowed us to derive the constants we were missing for the final solutions that we can now formulate.

8.3.3.11 Solution

Combining Eq. 8.80 with Eq. 8.78 allows us to find the general solution to the one-dimensional heat equation that is given by

f(x, t) = \sum_{n=1}^{\infty}\frac{2}{L}\left(\int_0^L f_0(x)\sin\left(\frac{n\pi x}{L}\right)dx\right)e^{-\left(\frac{n\pi c}{L}\right)^2 t}\sin\left(\frac{n\pi x}{L}\right)   (Eq. 8.81)

Solution for the Special Case f0 (x) = T0

In our original description of the heat equation we assumed the initial temperature distribution to be a constant T0. We will now simplify Eq. 8.81 for this special case. First, we solve the integral

\int_0^L T_0\sin\left(\frac{n\pi x}{L}\right)dx = \frac{T_0 L}{n\pi}\left[-\cos\left(\frac{n\pi x}{L}\right)\right]_0^L = \frac{T_0 L}{n\pi}\left(-\cos(n\pi) + 1\right)

As you can see, for all even n = 2, 4, … the value of cos (n π) = 1, in which case the bracket term is 0 and thus the whole term will be zero. For all odd n = 1, 3, 5, …, cos (n π) = − 1 and the bracket term will amount to 2. In this case the integral evaluates to

\int_0^L T_0\sin\left(\frac{n\pi x}{L}\right)dx = \frac{2\,T_0 L}{n\pi}

This allows us to simplify Eq. 8.81 to

T(x, t) = \sum_{n=0}^{\infty}\frac{2}{L}\,\frac{2\,T_0 L}{(2n+1)\pi}\,e^{-\left(\frac{(2n+1)\pi c}{L}\right)^2 t}\sin\left(\frac{(2n+1)\pi x}{L}\right) = \frac{4\,T_0}{\pi}\sum_{n=0}^{\infty}\frac{1}{2n+1}\,e^{-\left(\frac{(2n+1)\pi c}{L}\right)^2 t}\sin\left(\frac{(2n+1)\pi x}{L}\right)   (Eq. 8.82)

where we have substituted 2n + 1 for n in order to only account for odd values of n. As you can see Eq. 8.82 consists of two terms. The sine term gives the spatial distribution of the temperature; it does not depend on the time. The second term is an exponential function that depends on the time t and on the constant c², which is (if you reconsider the equation given by Eq. 8.3) the thermal diffusivity, i.e., a measure of how quickly the material can transport heat. The exponential term only depends on the physical parameters, e.g., the length of the rod L, but not on the independent variable x, i.e., the location. The exponential function makes the temperature profile decay over time while the sine sum creates the spatial distribution.

Fig. 8.6 shows plotted results for the simplified one-dimensional heat equation, Eq. 8.82, for two different materials: stainless steel and glass. In both cases, rods of length 10 cm were assumed that were heated to the uniform temperature T0. Obviously, the two materials differ in their property to conduct heat, i.e., in their thermal diffusivity α (see Eq. 6.82). For stainless steel we find λ = 14 W m− 1 K− 1, a density of around ρ = 7.9 g cm− 3, and a specific heat capacity of cp = 0.49 J g− 1 K− 1 from Tab. 6.8. In this case we find the thermal diffusivity α of 3.62 mm2 s− 1 according to Eq. 6.82. For glass we find λ = 0.095 W m− 1 K− 1, a density of around ρ = 2.5 g cm− 3, and a specific heat capacity of cp = 0.67 J g− 1 K− 1 from Tab. 6.8. In this case we find the thermal diffusivity α of 0.057 mm2 s− 1.

Fig. 8.6 Heat conduction in two rods made of stainless steel (a) and glass (b). The rods are both 10 cm long and were at the temperature T0 at the start of the experiment. At t0 = 0 they were clamped between two fixtures both of which are at the temperature 0. The temperature profiles for (a) are at 0 s, 1 s, 10 s, 100 s, and 500 s. For (b) the profiles are drawn for 0 s, 10 s, 100 s, 1000 s, and 10 000 s. As the glass rod conducts heat significantly worse than stainless steel it will keep its temperature longer.

As can be seen, the temperature profile for the stainless steel rod is (more or less) uniform at the start of the experiment. The artifacts caused by the Fourier series expansion of the constant are still obvious. Due to the high thermal diffusivity, the temperature profile of the stainless steel rod drops quickly whereas the glass rod hardly changes temperature even after very long time periods.
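
A truncated version of Eq. 8.82 is straightforward to evaluate numerically. The following minimal sketch (numpy assumed available; variable names are ours) computes the temperature profile of the stainless steel rod of Fig. 8.6a at a few time points, with T0 normalized to 1.

```python
# Sketch (assuming numpy is available; variable names are ours): evaluate the
# truncated series Eq. 8.82 for the stainless steel rod of Fig. 8.6a
# (c^2 = alpha = 3.62 mm^2/s, L = 100 mm, T0 normalized to 1).
import numpy as np

alpha = 3.62          # thermal diffusivity c^2 in mm^2/s
L, T0 = 100.0, 1.0    # rod length in mm, initial temperature

def temperature(x, t, n_terms=200):
    n = np.arange(n_terms)
    k = (2*n + 1) * np.pi / L                      # spatial wave numbers
    decay = np.exp(-k**2 * alpha * t) / (2*n + 1)  # time-dependent factor of Eq. 8.82
    return (4*T0/np.pi) * np.sum(decay[:, None] * np.sin(np.outer(k, x)), axis=0)

x = np.linspace(0.0, L, 11)
for t in (0.0, 1.0, 10.0, 100.0, 500.0):
    print(t, np.round(temperature(x, t), 3))
```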
