Chapter 3

Engineering Mathematics

3.1 Differential Equations

3.1.1 Introduction

In this section, we will look at some of the most important (partial) differential equations that we come across in practical mathematics, physics, and (obviously) fluid mechanics. It is always astonishing to see how different physical phenomena can be reduced to the same type of differential equation. Obviously, once a solution to a prototypical differential equation has been found, this may open up entirely new roads to solving a large set of physical problems. These problems may, from the physical point of view, have nothing in common; still, they can be reduced to the same type of underlying equation.

Before introducing the most important differential equations, we first want to make sure we understand what a differential equation actually is.

3.1.2 Functions

Without going into too much detail, we will quickly introduce the term function and explain what we mean when we refer to a function. For this, we first need to introduce a different term.

3.1.2.1 Variable

In the mathematical sense, a variable is a placeholder for a value we do not know at the moment. Usually, it is desirable to write relations not using numbers. In many cases, these numbers change and we do not want to redo the whole calculation. Therefore, we use placeholders, or variables.

3.1.2.2 Equations

If we combine several variables and establish relationships between them, we end up with something that is generally considered an equation. An equation puts several variables in a context that is physically meaningful. As an example, when calculating the mass of an object for which we know the density ρ and the volume V, we usually write

m = \rho V  (Eq. 3.1)

Eq. 3.1 is an equation. It puts several variables in a physically meaningful context. Equations are valid for all values we plug into the variables. Obviously, we may need to adapt the units if they are not provided in International System of Units (SI) format, but other than that, this equation will always hold true. We can even reformulate this equation if we know the volume V and the mass m of an object, but not the density. In this case, Eq. 3.1 becomes

\rho = \frac{m}{V}  (Eq. 3.2)

3.1.2.3 Functions

A function is a special type of an equation. In a function, we select one variable that we can choose at will, which is referred to as the independent variable. Depending on the value of this independent variable, other values of an equation have to change. These values are dependent on the values of the independent variable and are therefore referred to as the dependent variable. Returning to our example, Eq. 3.1, we can say that the density ρ and the volume V can be changed as desired. In this case, our problem has two independent variables (which is common). Depending on the values of these two variables, the third variable, the mass (m), has to change in order to balance the equation. Therefore, the mass, m, is the dependent variable in our problem.

Whenever we find a dependency that describes how a dependent variable changes when the independent variable changes, we have what we refer to as a function. A function is therefore a “recipe” that tells us how to interlink the independent variable with the dependent variable. Please note that a function can only have one dependent variable but (theoretically) an unlimited number of independent variables.

Usually, we indicate functions by writing the independent variables in parentheses. Therefore, we could rewrite Eq. 3.1 as

m(\rho, V) = \rho V  (Eq. 3.3)

which would be a valid function definition. We could imagine a scenario where we want to determine the mass of different fluids but always take a given volume V0 in each experiment. Therefore, we could consider the volume to be a constant rather than an independent variable. In this case, we would rewrite Eq. 3.3 as

m(\rho) = \rho V_0  (Eq. 3.4)

where we have a function with only one independent variable. Please note that whenever the independent variables have been defined (or can be derived from context), functions are often written without the independent variables in parentheses. This is sometimes misleading but common.

3.1.3 Differential Equations

After having introduced the concept of a function, we can now turn to the so-called differential equations. A differential equation is an equation that uses the derivative of a function instead of or in addition to the function itself. Such an equation then gives us information about the change in the dependent variable as a function of the independent variables. A regular function would give us information about the absolute value of the dependent variable as a function of the independent variable.

Let us consider an example. During a mountain hike, we come by a barrier lake. Into this lake flows a small river (see Fig. 3.1). The volume of the lake can be regulated by opening a valve through which water flows out. Now we are interested in the total volume of water in the lake as a function of time: Vlake (t). Obviously, in this scenario the time, t, is our independent variable whereas the volume of the lake, Vlake, is the dependent variable.

Fig. 3.1 Example of a differential equation: Calculating the volume of a barrier lake.

How can we calculate this volume? Obviously, if the shape of the lake is geometrically complex, we cannot simply calculate the volume. In this case, there will be no chance for us of finding the function Vlake (t) directly. However, as we watch the water flowing in and out of the lake, we may be able to (at least) determine something about the change of volume of the lake. This change over time will be equivalent to the amount of liquid V̇_in flowing into the lake via the river in a given time interval minus the amount of water V̇_out flowing out of the lake via the valve. Supposing we are able to exactly measure these flows (which seems reasonable), we can define the change of volume in the lake over time as

\frac{dV_\text{lake}}{dt} = \dot{V}_\text{in} - \dot{V}_\text{out}  (Eq. 3.5)

Eq. 3.5 is a differential equation. It tells us nothing about the actual value of the function Vlake we are interested in, but it tells us something about how this value changes over time. For simplicity let us assume that the flow of the river is steady and so is the flow through the valve. We therefore assume the right-hand side of Eq. 3.5 to be a constant, c, in which case we find

\frac{dV_\text{lake}}{dt} = c  (Eq. 3.6)

We can now solve this differential equation. In this case, we simply use separation of variables (see section 8.2.2), which results in

V_\text{lake} = ct + c_0  (Eq. 3.7)

which gives the function Vlake (t) we have been looking for. Unfortunately, the integration produces an integration constant, c0. This constant must be given; otherwise, we have a pretty useless formula. Let us assume that the barrier lake was empty on its opening day; in this case, we can set c0 = 0. Since that day, all water influxes and outfluxes of the lake have been recorded. In our example, we assumed these influxes/outfluxes to be constants. Therefore, we are able to calculate the content of the lake at any given point in time by simply forming the product of the net influx/outflux constant and the time passed.

This is only a very quick example of how differential equations can be used to derive the underlying functions. Unfortunately, in many cases, the problems are significantly more complex.
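The lake example can also be treated numerically. The following Python sketch (all parameter values are assumptions chosen for illustration) integrates Eq. 3.6 with the explicit Euler method and recovers the linear solution of Eq. 3.7:

```python
# Euler integration of dV/dt = c (Eq. 3.6); all parameter values are assumed.
c = 2.5        # net influx in m^3/s (constant inflow minus outflow)
V0 = 0.0       # lake assumed empty at t = 0 (boundary condition, c0 = 0)
dt = 0.1       # time step in s
steps = 1000   # integrate up to t = 100 s

V = V0
for _ in range(steps):
    V += c * dt           # Euler step: V(t + dt) = V(t) + (dV/dt) * dt

t_end = steps * dt
V_exact = c * t_end + V0  # closed-form solution, Eq. 3.7
```

Because the right-hand side is constant, the Euler steps reproduce the exact linear solution up to rounding.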

3.1.3.1 A Second Example: The Boiled Egg

Another very simple example of a differential equation is the following. Imagine you cook yourself a boiled egg. After the egg has been in the boiling water for a given amount of time, you take it out of the boiling water and drop it into cold water. The question is: What is the temperature of the egg in the cold water as a function of time? At the start of the experiment, the egg has a given initial temperature T0, which should be somewhere around 100 °C given that you just boiled it in water at about this temperature.

There is a certain intuition to this problem. We can assume that heat will flow from the egg into the cold water at a given rate; the more heat it loses, the colder the egg will become. We would also assume that the amount of heat flowing per unit of time is proportional to the difference in temperature between the egg and its surroundings. Therefore, the change of temperature of the egg will also be proportional to this difference. Taking T to be the temperature of the egg measured relative to the water, we can write

\frac{dT}{dt} \propto T \quad \Rightarrow \quad \frac{dT}{dt} = cT

This is a simple example of a differential equation that can be solved quickly, in this case by separation of variables and subsequent integration (see section 8.2.2), resulting in

\frac{dT}{dt} = cT \quad \Rightarrow \quad \frac{dT}{T} = c \, dt \quad \Rightarrow \quad \ln T = ct + c_0 \quad \Rightarrow \quad T = c_1 e^{ct}

As you can see, we have an integration constant, which is given by the initial value: T (t = 0) = c1 = T0. Since the egg loses heat, the constant c is negative, and we see that the temperature of the egg will decrease exponentially over time.

This is a very practical example of a differential equation.
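The cooling law can likewise be integrated numerically. In the sketch below, T denotes the temperature of the egg relative to the water, and the values of T0 and c are assumptions; the numerical result is compared against the closed-form solution T = T0 e^{ct}:

```python
import math

# Euler integration of dT/dt = c*T; parameter values are assumed.
T0 = 80.0       # initial temperature difference in K
c = -0.01       # cooling constant in 1/s (negative: the egg cools down)
dt = 0.01       # time step in s
steps = 10000   # integrate up to t = 100 s

T = T0
for _ in range(steps):
    T += c * T * dt       # Euler step for dT/dt = c*T

T_exact = T0 * math.exp(c * dt * steps)   # closed-form: T = T0 * e^(c*t)
```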

3.1.3.2 Order of a Differential Equation

In many differential equations, we may come across higher derivatives of the function. The order n of a differential equation is defined by the highest derivative used. The following list gives a few examples:

\frac{df}{dx} = 0 \quad \text{(first-order ODE)}
\frac{\partial f}{\partial x} = 0 \quad \text{(first-order PDE)}
\frac{d^2 f}{dx^2} + \frac{df}{dx} = 0 \quad \text{(second-order ODE)}
\frac{\partial^2 f}{\partial x^2} + \frac{\partial f}{\partial x} = 0 \quad \text{(second-order PDE)}

3.1.3.3 Homogeneous and Inhomogeneous Differential Equations

A differential equation is considered to be homogeneous if, after moving all terms containing the function f to the left-hand side of the equation, the right-hand side of the equation is 0. If the right-hand side of the equation is nonzero, the equation is considered to be inhomogeneous. It is of no importance if the right-hand side term is a simple constant or a function. The following list expands on this example:

\frac{\partial^2 f}{\partial x^2} + \frac{\partial f}{\partial x} = 0 \quad \text{(second-order homogeneous PDE)}
\frac{\partial^2 f}{\partial x^2} + \frac{\partial f}{\partial x} = 2 \quad \text{(second-order inhomogeneous PDE)}
\frac{\partial^2 f}{\partial x^2} + \frac{\partial f}{\partial x} = x^2 + y^2 \quad \text{(second-order inhomogeneous PDE)}
\frac{d^2 f}{dx^2} + \frac{df}{dx} = 0 \quad \text{(second-order homogeneous ODE)}
\frac{\partial f}{\partial x} = 0 \quad \text{(first-order homogeneous PDE)}
\frac{d^2 f}{dx^2} = \frac{dp}{dz} \quad \text{(second-order inhomogeneous ODE)}

Please note that the last example is an interesting one. Here, the right-hand side is not simply a constant or a function, but itself a derivative. However, this derivative is taken with respect to z and not with respect to the independent variable x of our differential equation. Thus the differential equation can be solved assuming dp/dz to be a constant.

3.1.3.4 Obtaining the Differential Equation

Obtaining the differential equation may sometimes be difficult. The differential equation contains the underlying physics; therefore, a sound understanding of the physical phenomena associated with the problem is necessary, as otherwise the differential equation cannot be obtained. Interestingly, however, in most cases obtaining the differential equation is not the most problematic point.

3.1.3.5 Solving the Differential Equation

In most cases, the differential equation can be obtained readily, but actually solving it may be very difficult. This is where the beauty and the irony come together: Differential equations often describe the most complex phenomena in a short, precise, and accurate form. However, in many cases obtaining the differential equation is as far as we get to solving the problem. A good example that is highly relevant for fluid mechanics is the Navier-Stokes equation, Eq. 11.40. Everything we need in order to describe even the most complex fluid mechanical effects is contained in this differential equation. But solving this equation is, in most cases, simply impossible. Luckily, many technically relevant, specialized cases exist in which solutions to this equation can actually be found or, at least, approximated.

3.1.4 Ordinary and Partial Differential Equations

There are two types of differential equations: ordinary differential equations (ODEs) and partial differential equations (PDEs).

3.1.4.1 Ordinary Differential Equations

Ordinary differential equations are differential equations of functions that have only one independent variable. Eq. 3.6, which we just discussed, is an example of an ODE. In our example, the volume of the lake is a function of time only: Vlake (t). Thus the change of the function as we change this single independent variable is the only change we need to consider. In such a case, the change of the function can be written using the notation df/dx. This denotes an ordinary or total differential, indicating that there are no other independent variables that need to be taken into account. In our example, this was the change of the lake's volume with respect to time, dVlake/dt.

We will not discuss the rules of forming derivatives here because they can be looked up in any good differential calculus textbook.

3.1.4.2 Partial Differential Equations

If a function has more than one independent variable, the total change in the function will depend on more than one variable. In such a case, we are faced with a partial differential equation (PDE). Eq. 3.3 was an example of a function that depends on more than one independent variable. If we want to determine how a change in the volume V influences the mass m (which is the function we are investigating), we need to treat the function as if the second independent variable, the density ρ, were a constant. In this case, we use the notation ∂m/∂V, which indicates that (for the given case) we only want to consider the change of the mass as we change the volume, keeping all other independent variables constant.

These special types of derivatives are referred to as partial derivatives, which give rise to the term PDE. They are formed exactly as such: Simply keep all other independent variables constant and form a regular derivative. As an example, we will quickly form the two partial derivatives of Eq. 3.3, which are

\frac{\partial m}{\partial \rho} = \frac{\partial (\rho V)}{\partial \rho} = V \frac{d\rho}{d\rho} = V
\frac{\partial m}{\partial V} = \frac{\partial (\rho V)}{\partial V} = \rho \frac{dV}{dV} = \rho

As you can see, in both cases the other independent variable is simply treated as a constant, which reduces the partial derivative to an ordinary derivative with respect to the remaining variable.
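This recipe ("hold the other variables fixed") can be checked numerically with central finite differences. In the Python sketch below, the sample values for density and volume are assumptions:

```python
# Finite-difference check of the partial derivatives of m(rho, V) = rho*V.
def m(rho, V):
    return rho * V

rho0 = 1000.0   # assumed density in kg/m^3 (roughly water)
vol0 = 0.002    # assumed volume in m^3
h = 1e-6        # step size for the central differences

# dm/drho: vary rho while holding V fixed.
dm_drho = (m(rho0 + h, vol0) - m(rho0 - h, vol0)) / (2 * h)
# dm/dV: vary V while holding rho fixed.
dm_dV = (m(rho0, vol0 + h) - m(rho0, vol0 - h)) / (2 * h)

# Analytically, dm/drho = V and dm/dV = rho.
```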

3.1.4.3 ODEs and PDEs

If a differential equation contains a partial derivative it is referred to as a PDE. If it only contains regular derivatives it is referred to as an ODE. In general, ODEs are significantly simpler to solve than PDEs. Unfortunately, PDEs are more common in practical physics and we therefore need to understand how to solve these equations.

3.1.4.4 Comment on the Usage of Independent Variable Lists

In section 3.1.2.3 we already briefly stated that the independent variables are often not appended to the function, especially if they have been clearly defined. You will often find that whenever a function is written as a differential, there is no need to state the independent variable. If a differential is written as df/dx, one can clearly see that the dependent variable (or function) f has only one independent variable, which is x. If f has more than one independent variable, we must write ∂f/∂x instead. In this case, we can tell that f has more than one independent variable and that x is one of them. Here it is advisable to repeat the independent variable list and write ∂f(x,y)/∂x. However, in many cases we do not look for the partial differential with respect to only one variable but for the effect of all variables. Therefore, we may come across an equation of the form

\frac{\partial f}{\partial x} + \frac{\partial f}{\partial y} = 0

Here we would assume that the two independent variables of f are x and y, and only those two. In this case, it is acceptable not to repeat them. You will find these rules applied very inconsistently in the literature. Most authors argue that it keeps equations shorter and more readable if the independent variable list is not repeated every time. If the problem has been clearly introduced and the reader is aware of the independent variables, this seems acceptable in many cases. However, if a differential equation is only briefly mentioned and the reader has no notion of the meaning of the individual terms and from where they have been derived, it is often advisable to repeat the independent variables.

3.1.5 Differentials

After having introduced the concept of ordinary and partial derivatives we will now discuss uses of these derivatives. They are important because they allow us to determine the change in the function as we change the respective independent variables. In general these “changes in functions as we change the independent variable” are referred to as differentials. Think of the derivative as the tangent to a function f (x) at a given point. If you now want to know how big the change in the function’s value will be if you change the independent variable x by dx, you end up with the so-called differential change or simply the differential of the function, which is directly derived from the definition of the derivative:

df = \frac{df}{dx} dx  (Eq. 3.8)

f(x + \Delta x) = f(x) + \left. \frac{df}{dx} \right|_x \Delta x \quad \text{for } \Delta x \to 0  (Eq. 3.9)

Graphically, this can be interpreted as the inclination or the slope of a function with respect to the variable x (see Fig. 3.2a).

Fig. 3.2 Graphical representation of differentials and derivatives. a) In the one-dimensional case the derivative can be interpreted as the slope of a function f (x) along a variable x at the point S0. b) In the two-dimensional case of a function f (x, y) dependent on two variables x and y, the differential can be interpreted as the sum of the changes in height of a function along the two variables. At S0 the slope along x gives rise to a tangential plane. Likewise, the slope along y gives rise to a tangential plane. Adding the two slopes (or overlying the two tangential planes) will amount to the change of f (x, y) along x and y.

3.1.5.1 Total Differential

If f is dependent on only one variable, only one slope must be taken into account, and the differential is referred to as the total differential. As stated, it is denoted df/dx in the case of the differential of f (x) with respect to x. The total differential represents the overall change df if the independent variable x changes by dx.

3.1.5.2 Total Differential From Partial Differentials

As we already stated, if a function f (x, y) is dependent on two independent variables x and y, we need to apply partial derivatives. However, we are still interested in the total differential of f (x, y), which has to be expressed in terms of changes with respect to both of the independent variables x and y. Surprisingly it is very easy to study this case graphically (see Fig. 3.2b). Following up on the concept of a slope along a changing variable, this two-dimensional case gives rise to two tangential planes (in three-dimensional space). The first plane will be defined by the slope of the function along x, and the second plane will be defined by the slope of the function along y. Both slopes are defined at the point S0. Each of these two slopes will contribute to the change of the function’s value df and the total differential is defined as

df = df_x + df_y  (Eq. 3.10)

Following Eq. 3.8, the two values dfx (originating from the tangential plane along the x-axis) and dfy (originating from the tangential plane along the y-axis) can be calculated from the derivatives of f (x, y) along x and y in S0, respectively. As discussed, these two partial derivatives are calculated with respect to a given axis while ignoring any change along the other axis, or rather, while keeping the value for the other axis fixed. They are ∂f(x,y)/∂x (for the partial derivative with respect to x) and ∂f(x,y)/∂y (for the partial derivative with respect to y). Using these partial derivatives, dfx and dfy can be expressed as

df_x = \frac{\partial f}{\partial x} dx, \qquad df_y = \frac{\partial f}{\partial y} dy

which then expresses the total differential of f (x, y) from Eq. 3.10 with respect to x and y as

df = df_x + df_y = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy  (Eq. 3.11)

For functions with more than two independent variables the total differential is calculated in the following form:

df(x_1, x_2, \ldots, x_n) = \frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2 + \ldots + \frac{\partial f}{\partial x_n} dx_n  (Eq. 3.12)
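The total differential can be illustrated numerically: for small steps dx and dy, the sum of the partial contributions approximates the actual change of the function. The sample function f(x, y) = x²y and all values below are assumptions:

```python
# Check df ≈ (∂f/∂x)dx + (∂f/∂y)dy for the assumed f(x, y) = x^2 * y.
def f(x, y):
    return x * x * y

x0, y0 = 2.0, 3.0        # assumed evaluation point
dx, dy = 1e-5, -2e-5     # assumed small changes of the independent variables

# Actual change of the function.
df_actual = f(x0 + dx, y0 + dy) - f(x0, y0)

# Total differential: ∂f/∂x = 2*x*y and ∂f/∂y = x^2 for this sample f.
df_total = (2 * x0 * y0) * dx + (x0 * x0) * dy
```

The two values agree up to second-order terms in dx and dy, which vanish as the steps shrink.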

3.1.6 Important Rules in Differential Calculus

There are some rules that can be applied in order to simplify finding derivatives of functions. In this section, we will briefly discuss some of the most important ones. Although we will be using the notation for ordinary differentials, these rules apply likewise to partial differentials.

3.1.6.1 Product Rule

We often need to find the derivative of a function that contains a product. As an example, consider the function

f(x) = g(x) \, h(x)

In this case, the differential of f is found as

\frac{df}{dx} = g \frac{dh}{dx} + h \frac{dg}{dx}  (Eq. 3.13)

Eq. 3.13 is referred to as the product rule. As a more practical example taken from fluid mechanics, the change of momentum along a coordinate axis is used in the derivation of the Navier-Stokes equation (see section 11). Here, the partial derivative of the term ρv with respect to x needs to be found. Using Eq. 3.13, this term can be expanded as

\frac{\partial (\rho v)}{\partial x} = \rho \frac{\partial v}{\partial x} + v \frac{\partial \rho}{\partial x}
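The product rule can be verified numerically for an assumed pair of functions, here g(x) = sin x and h(x) = x²:

```python
import math

# Compare a finite-difference derivative of g(x)*h(x) with the
# product rule g*h' + h*g' (Eq. 3.13). The sample functions are assumed.
def g(x):
    return math.sin(x)

def h(x):
    return x * x

x0 = 1.3      # assumed evaluation point
eps = 1e-6    # finite-difference step

# Central finite difference of the product.
lhs = (g(x0 + eps) * h(x0 + eps) - g(x0 - eps) * h(x0 - eps)) / (2 * eps)
# Product rule: g*h' + h*g' with h'(x) = 2x and g'(x) = cos(x).
rhs = g(x0) * 2 * x0 + h(x0) * math.cos(x0)
```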

3.1.6.2 Chain Rule

Another important (and commonly employed) trick in differential calculus is the expansion of a (partial) derivative term in case the variables of a function are interdependent. As an example, if a function f (x, t) is dependent on x and t, it may be necessary to find the derivative df(x,t)/dt of f (x, t) with respect to t. In some cases, f depends primarily on x and only x depends on t. As an example, let

f(x,t) = x^2 \cdot 5t^2

be a function that depends on the variable x. Let x be time-variant with

x(t) = e^{2t}

Inserting x (t) into f (x, t) would allow finding df/dt directly, but the resulting expression is somewhat unwieldy to differentiate. Applying the chain rule of differentiation makes the solution much easier to find, yielding

\frac{df}{dt} = \frac{df}{dx} \frac{dx}{dt}  (Eq. 3.14)

In this case, the differential has been slightly rewritten and now requires the derivatives df/dx and dx/dt instead of only df/dt. These are very easy to find in the given case:

\frac{df}{dx} = 2x \cdot 5t^2 = 10xt^2, \qquad \frac{dx}{dt} = 2e^{2t}

With these, we find

\frac{df}{dt} = \frac{df}{dx} \frac{dx}{dt} = 10xt^2 \cdot 2e^{2t} = 20xt^2 e^{2t}

This trick is referred to as a special form of the chain rule of differentiation which can be used in case of such interdependent variables. Note that these types of expansion of differentials can also be used on total differentials.
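The chain rule can be checked numerically. For simplicity, the sketch below uses a variant in which f depends on t only through x; the sample functions f(x) = x² and x(t) = e^{2t} are assumptions chosen for easy verification:

```python
import math

# Chain rule check: d/dt f(x(t)) = (df/dx)*(dx/dt) for f(x) = x^2, x = e^(2t).
def x_of_t(t):
    return math.exp(2 * t)

def f_of_x(x):
    return x * x

t0 = 0.5      # assumed evaluation point
eps = 1e-6    # finite-difference step

# Direct finite difference of f(x(t)) with respect to t.
direct = (f_of_x(x_of_t(t0 + eps)) - f_of_x(x_of_t(t0 - eps))) / (2 * eps)

# Chain rule, Eq. 3.14: df/dx = 2x and dx/dt = 2*e^(2t).
x0 = x_of_t(t0)
chain = (2 * x0) * (2 * math.exp(2 * t0))
```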

3.1.6.3 Integration by Parts

In contrast to the product rule for differentiation, integrating a product is a bit more complicated. Assume we have the following product of two functions: f (x) and g (x). We find the differential to be

\frac{d(fg)}{dx} = f \frac{dg}{dx} + g \frac{df}{dx}

Integration on both sides results in

\int \frac{d(fg)}{dx} dx = \int f \frac{dg}{dx} dx + \int g \frac{df}{dx} dx
\int d(fg) = \int f \frac{dg}{dx} dx + \int g \frac{df}{dx} dx
fg = \int f \frac{dg}{dx} dx + \int g \frac{df}{dx} dx
\int f \frac{dg}{dx} dx = fg - \int g \frac{df}{dx} dx  (Eq. 3.15)

Eq. 3.15 is referred to as integration by parts (or partial integration): only one part of the product is integrated directly, while the resulting expression still contains an integral.

Example. Assume we have the following function for which we need to find the integral over a circular area:

\int \frac{1}{r} \frac{dc}{dr} dr

where c (r) is a function of the radius r. Referring to Eq. 3.15, we define f = 1/r and dg/dr = dc/dr. This assignment is arbitrary; we could also have chosen it the other way around. However, as we will see, this is the better approach. Using Eq. 3.15, we note that

f(r) = \frac{1}{r}, \qquad \frac{df}{dr} = -\frac{1}{r^2}, \qquad \frac{dg}{dr} = \frac{dc}{dr}, \qquad g(r) = c(r)

with which we can define the integral according to Eq. 3.15 as

\int \frac{1}{r} \frac{dc}{dr} dr = \frac{c}{r} + \int \frac{c}{r^2} dr

which only leaves the integral ∫ c/r² dr to be evaluated.
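The worked example can be verified numerically for an assumed sample function, e.g., c(r) = r² on the interval [1, 2], for which both sides of the integration-by-parts identity can be evaluated by simple quadrature:

```python
# Check: ∫(1/r)(dc/dr)dr = [c/r] + ∫(c/r^2)dr for the assumed c(r) = r^2.
def c(r):
    return r * r

def dc_dr(r):
    return 2 * r

a, b, n = 1.0, 2.0, 100000   # assumed integration interval and resolution

def integrate(func, a, b, n):
    # Midpoint rule; accurate enough for these smooth integrands.
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

lhs = integrate(lambda r: (1.0 / r) * dc_dr(r), a, b, n)
boundary = c(b) / b - c(a) / a   # the term c/r evaluated between the bounds
rhs = boundary + integrate(lambda r: c(r) / r ** 2, a, b, n)
```

For this choice of c (r), both sides evaluate to 2, confirming the identity.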

3.1.7 Boundary Conditions

We now turn to a very important concept when solving differential equations, the so-called boundary condition (BC). We have already alluded to the concept of boundary conditions when solving the problem with the volume of the barrier lake. In Eq. 3.7 we integrated the ODE and obtained an integration constant. Obviously, these constants always occur during integration and we need information about the physical problem in order to substitute them for reasonable values. These reasonable values are given by the boundary conditions.

A boundary condition is a value that the solution to our differential equation must have at a given point of the domain on which we want to solve the equation. Boundary conditions are physically meaningful demands that require our solution to have a certain value at a certain point in the domain. In our example, we assume the lake to be empty at t = 0. This is a boundary condition: it requires our solution, the function Vlake (t), to have a given value at a given point of the domain, in this case Vlake (t = 0) = 0. Here the time is the domain on which we solve the differential equation, and at the boundary of this domain we now have a fixed value. These values provide the correct "offset" to our solution. In principle, the solution given by Eq. 3.7 is already mathematically correct: any function of this type, with any value of the integration constant, satisfies the differential equation, Eq. 3.5. We therefore have a set of solutions. However, there is only one solution that is physically meaningful. This specific solution is given by the boundary condition.

3.1.7.1 Boundary Value Problems

A differential equation for which boundary conditions are given is commonly referred to as a boundary value problem (BVP).

How Many Boundary Conditions Do We Need? Often, we may know quite a number of things about a physical problem. From this knowledge, we may deduce several boundary conditions. The question arises: How many boundary conditions do we need? In general, for each differential equation we need a number of boundary conditions corresponding to the order of the differential equation. For a second-order differential equation we require two boundary conditions; for a first-order differential equation we require one.

3.1.7.2 Types of Boundary Conditions

There are usually two types of boundary conditions which we may come across. The first type indicates the value of the solution on the boundary of the domain.

Dirichlet Boundary Conditions. Dirichlet boundary conditions state the value that the solution function f of the differential equation must have on the boundary of the domain C. The boundary is usually denoted ∂C. In a two-dimensional domain described by x and y, a typical Dirichlet boundary condition would be

f(x,y) = g(x,y,\ldots), \quad \text{where } (x,y) \in \partial C

Here the function g may depend not only on x and y, but also on additional independent variables, e.g., the time t. Our example of the barrier lake used a Dirichlet boundary condition stating that the volume of the lake was 0 at t = 0. In that example the function g was a constant, but this need not be the case.

Neumann Boundary Conditions. The second important type of boundary condition is the Neumann boundary condition. Neumann boundary conditions state that the derivative of the solution function f of the differential equation must have a given value on the boundary of the domain C. A typical Neumann boundary condition would be

\frac{\partial f(x,y)}{\partial x} = g(x,y,\ldots), \quad \text{where } (x,y) \in \partial C

or

\frac{\partial f(x,y)}{\partial y} = g(x,y,\ldots), \quad \text{where } (x,y) \in \partial C

Again the function g may not only depend on x and y, but also on additional variables such as time. We use an example of a Neumann boundary condition when deriving the velocity profile of a fluid layer on a plane driven by gravity (see section 15.3). While solving the differential equation we state that the shear stress on the open surface of the fluid must be 0 (see Eq. 15.12). This is the so-called free-surface condition (see section 9.8.3). As the shear stress is calculated from the first derivative of the velocity, this is a Neumann boundary condition.
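A minimal sketch of a boundary value problem combining both types of conditions (the equation and all values are assumptions, loosely modeled on a gravity-driven film): f″(x) = −1 on [0, L] with the Dirichlet condition f(0) = 0 and the Neumann condition f′(L) = 0. Integrating twice gives f(x) = −x²/2 + Lx, where the two integration constants follow from the two boundary conditions (one per order of the equation). The code checks this candidate solution numerically:

```python
# Candidate solution of f''(x) = -1 with f(0) = 0 (Dirichlet)
# and f'(L) = 0 (Neumann); L is an assumed domain length.
L = 2.0

def f(x):
    return -0.5 * x * x + L * x

# Dirichlet condition at x = 0.
dirichlet = f(0.0)

# Neumann condition: central-difference estimate of f'(L).
eps = 1e-6
neumann = (f(L + eps) - f(L - eps)) / (2 * eps)

# The differential equation itself: the central second difference at an
# interior point should give f'' = -1.
x0, h = 0.7, 1e-4
second = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / (h * h)
```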

3.1.8 Initial Values

Besides boundary conditions, we often also need to provide the initial value (IV). As the name suggests, initial values are the values that our solution has at the initial point in time. As an example, let us consider a string suspended between two static points (see Fig. 3.3). We want to find the displacement of the string at any given point in time t. This displacement is described by a differential equation, and the function y (t, x) is its solution. Obviously, the string will be attached between the two points for the entire experiment, which is why we can state y (t, 0) = y (t, L) = 0. These are the boundary conditions. However, this function will not only depend on the boundary conditions, but also on the shape of the string at the time t = 0. If the string was already displaced at that time, e.g., because somebody stretched it, the solution at a later time t = t1 will look different.

Fig. 3.3 Examples of boundary conditions and initial values at the string suspended between two static points. a) The initial value y (0, x) gives the initial shape of the string at the point t = 0. The boundary conditions state that the displacement of the string at x = 0 and x = L must be zero at all times. b) At the point t = t1 the string will have assumed a given displacement. The boundary conditions are still valid. Obviously, the function y (t1, x) depends both on the initial values and the boundary conditions.

Initial Value Problems. This is why the shape of the string at t = t0 must be taken into account. This shape is the solution to the differential equation at t = t0 and is the so-called initial value of the solution. Obviously, a differential equation may have an initial value but no boundary values. A problem for which initial values are given is referred to as an initial value problem (IVP).

Initial Boundary Value Problems. If, in addition to initial values, a differential equation also has boundary values, then the problem is referred to as an initial boundary value problem (IBVP).

3.1.9 Systems of Differential Equations

In many applications in practical physics, mechanics, and fluid mechanics, we may not only end up with one differential equation but with several. For a good example, refer to the analysis of the pendant drop in section 24.5. Here we have several first-order differential equations for several functions that need to be solved in parallel.

Even though they are differential equations, the same rules as for classical equations apply: if you have n unknown functions, you need n differential equations in order to solve for all of the unknowns. Obviously, depending on the orders of the differential equations, you will also need a given number of boundary conditions. The total number of boundary conditions required is equal to the sum of the orders of the differential equations. Returning to the example of pendant drop analysis, we have a total of five unknown functions: θ (s), x (s), z (s), V (s), and rmax (s) (see section 24.5 or listing 24.2 for details about their meaning). For these unknown functions we have five first-order differential equations. Consequently, we need five boundary conditions.

In many cases, we may come across physical problems for which we "know too much", i.e., we know more potential boundary conditions than required. In these cases, it is important to reduce the number of boundary conditions and select only those most relevant to the problem. It may well be that a solution exists that fulfills all boundary conditions, but it is often easier to first deduce a solution satisfying only the required number of boundary conditions and to then test this solution for correctness using the remaining ones. This is especially true when solving differential equations numerically: if a numerical solver receives too many boundary conditions, it may not come up with a solution at all.
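A small system of first-order differential equations can be integrated numerically by stepping all unknown functions forward in parallel. The sketch below (an assumed textbook system, not the pendant-drop equations) solves x′ = y, y′ = −x with the two required initial conditions x(0) = 1, y(0) = 0, whose exact solution is x = cos t, y = −sin t:

```python
import math

# Classical fourth-order Runge-Kutta integration of the system
#   x'(t) = y(t),  y'(t) = -x(t)
def rhs(x, y):
    return y, -x

def rk4_step(x, y, dt):
    k1x, k1y = rhs(x, y)
    k2x, k2y = rhs(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
    k3x, k3y = rhs(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
    k4x, k4y = rhs(x + dt * k3x, y + dt * k3y)
    x_new = x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    y_new = y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
    return x_new, y_new

x, y = 1.0, 0.0          # the two required initial conditions
dt, steps = 0.001, 1000  # integrate up to t = 1

for _ in range(steps):
    x, y = rk4_step(x, y, dt)

x_exact = math.cos(1.0)   # exact solution at t = 1
y_exact = -math.sin(1.0)
```

Note that both equations are advanced together in each step; the unknowns are coupled and cannot be integrated one after the other.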

3.2 Important Functions

In this section we will look at a couple of important functions that we will require for solving some of the differential equations we will be working with.

3.2.1 Trigonometric Functions

We will now look at some of the most important properties of the trigonometric functions. As we will see, a good knowledge of their properties, especially their symmetry and their roots, is often required.

3.2.1.1 Sine Function

The sine function sin x is periodic over the period length T = 2π (see Fig. 3.4a). It is point-symmetric to the origin and is therefore referred to as an odd function. It obtains specific values for

f03-04-9781455731411
Fig. 3.4 Important trigonometric functions. a) Sine, cosine, and tangent functions. Please note the functions are periodic. b) Hyperbolic sine, cosine, and tangent functions. As you can see, these functions are not periodic.

sin x = 0   for all x = nπ
sin x = 1   for all x = (4n+1)π/2
sin x = −1  for all x = (4n−1)π/2

As the sine function is point-symmetric to the origin we note that

sin(−x) = −sin x  (Eq. 3.16)

sin(nπ + x) = −sin x  for all odd values of n  (Eq. 3.17)

sin(nπ + x) = sin x  for all even values of n  (Eq. 3.18)

3.2.1.2 Cosine Function

The cosine function cos x is also periodic over the period length T = 2π (see Fig. 3.4a). It is mirror-symmetric to x = 0 and is therefore referred to as an even function. It obtains specific values for

cos x = 1   for all x = 2nπ
cos x = −1  for all x = (2n+1)π
cos x = 0   for all x = (2n+1)π/2

As the cosine function is mirror-symmetric about x = 0 we note that

cos(−x) = cos x  (Eq. 3.19)

cos(nπ + x) = cos x  for all even values of n  (Eq. 3.20)

cos(nπ + x) = −cos x  for all odd values of n  (Eq. 3.21)

3.2.1.3 Tangent Function

The tangent function tan x is also periodic, but over the period length T = π (see Fig. 3.4a). It is defined as

tan x = sin x / cos x

It is point-symmetric to the origin and therefore also an odd function. It obtains specific values for

tan x = 0   for all x = nπ
tan x → ±∞  for x → (2n+1)π/2

As you can see, the tangent function is discontinuous for all values of x = (2n+1)π/2.

3.2.1.4 Hyperbolic Sine Function

The hyperbolic sine function sinh x is not periodic (see Fig. 3.4b). It is point-symmetric to the origin and therefore also an odd function. It is defined as

sinh x = ½ (e^x − e^(−x))

The most important points to note are that sinh (x = 0) = 0 and that the function does not obtain the value 0 for any other value of x.

3.2.1.5 Hyperbolic Cosine Function

The hyperbolic cosine function cosh x is also nonperiodic (see Fig. 3.4b). It is mirror-symmetric to x = 0 and therefore also an even function. It is defined as cosh x = ½ (e^x + e^(−x))

The most important point is the fact that cosh (x = 0) = 1, and it never obtains the value of 0 for any value of x.

3.2.1.6 Hyperbolic Tangent Function

The hyperbolic tangent function tanh x is also nonperiodic (see Fig. 3.4b) and defined, in analogy to the regular tangent function, as

tanh x = sinh x / cosh x

It is point-symmetric to the origin and therefore also an odd function. It obtains specific values for

 tanh (x = 0) = 0

 tanh x → 1 for x → ∞ (it is already very close to 1 for x > 2)

 tanh x → −1 for x → −∞ (it is already very close to −1 for x < −2)

Occasionally, you may come across the hyperbolic cotangent function, which is defined as

coth x = cosh x / sinh x

3.2.1.7 Derivatives

In the following, we will note a couple of important derivatives. The derivatives of sine and cosine are cyclic, yielding

d/dx sin x = cos x
d/dx cos x = −sin x
d/dx (−sin x) = −cos x
d/dx (−cos x) = sin x

The derivatives of the tangent and cotangent functions are

d/dx tan x = 1/cos² x
d/dx cot x = −1/sin² x

The derivatives of the hyperbolic functions are

d/dx sinh x = cosh x
d/dx cosh x = sinh x
d/dx tanh x = 1/cosh² x
d/dx coth x = −1/sinh² x
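These derivative rules are easy to verify numerically; the following sanity check (our own helper, Python standard library only) compares each rule against a central difference:

```python
import math

def num_diff(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
checks = [
    (num_diff(math.sin, x),  math.cos(x)),           # d/dx sin x  =  cos x
    (num_diff(math.cos, x), -math.sin(x)),           # d/dx cos x  = -sin x
    (num_diff(math.tan, x),  1 / math.cos(x)**2),    # d/dx tan x  =  1/cos^2 x
    (num_diff(math.sinh, x), math.cosh(x)),          # d/dx sinh x =  cosh x
    (num_diff(math.cosh, x), math.sinh(x)),          # d/dx cosh x =  sinh x
    (num_diff(math.tanh, x), 1 / math.cosh(x)**2),   # d/dx tanh x =  1/cosh^2 x
]
```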

3.2.2 Trigonometric Equivalences

This section will briefly list some of the more important trigonometric equivalences. Please refer to the right triangle (see Fig. 3.5) for a reminder of the names of its sides.

f03-05-9781455731411
Fig. 3.5 The right triangle.

3.2.2.1 Pythagorean Theorem

More often than expected, the Pythagorean theorem is used, which states

sin² x + cos² x = 1  (Eq. 3.22)

3.2.2.2 Inverse Functions

As a short reminder, each trigonometric function has a reciprocal function, defined as

sine:      sin x = opposite/hypotenuse = 1/csc x
cosecant:  csc x = hypotenuse/opposite = 1/sin x
cosine:    cos x = adjacent/hypotenuse = 1/sec x
secant:    sec x = hypotenuse/adjacent = 1/cos x
tangent:   tan x = opposite/adjacent = 1/cot x
cotangent: cot x = adjacent/opposite = 1/tan x

3.2.2.3 Conversion From Sine

The following equations allow expressing the other trigonometric functions by means of the sine function:

sin x = ±√(1 − cos² x)  (Eq. 3.23)

= ± tan x / √(1 + tan² x)  (Eq. 3.24)

= ± 1 / √(1 + cot² x)  (Eq. 3.25)

3.2.2.4 Conversion From Cosine

The following equations allow expressing the other trigonometric functions by means of the cosine function:

cos x = ±√(1 − sin² x)  (Eq. 3.26)

= ± 1 / √(1 + tan² x)  (Eq. 3.27)

= ± cot x / √(1 + cot² x)  (Eq. 3.28)

3.2.2.5 Conversion From Tangent

The following equations allow expressing the other trigonometric functions by means of the tangent function:

tan x = ± sin x / √(1 − sin² x)  (Eq. 3.29)

= ± √(1 − cos² x) / cos x  (Eq. 3.30)

= 1 / cot x  (Eq. 3.31)

3.2.2.6 Conversion From Cotangent

The following equations allow expressing the other trigonometric functions by means of the cotangent function:

cot x = ± √(1 − sin² x) / sin x  (Eq. 3.32)

= ± cos x / √(1 − cos² x)  (Eq. 3.33)

= 1 / tan x  (Eq. 3.34)

3.2.2.7 Conversions For Half-Angles

The following equations can be used in order to convert half-angles into equations of full angles:

sin(x/2) = ±√((1 − cos x)/2)  (Eq. 3.35)

cos(x/2) = ±√((1 + cos x)/2)  (Eq. 3.36)

tan(x/2) = ±√((1 − cos x)/(1 + cos x)) = sin x/(1 + cos x) = (1 − cos x)/sin x  (Eq. 3.37)

3.2.3 Euler’s Formula

Euler’s1 formula states that

e^(ix) = cos x + i sin x  (Eq. 3.38)

e^(−ix) = cos(−x) + i sin(−x) = cos x − i sin x  (Eq. 3.39)

This is used extensively to represent trigonometric functions. The following functions are important to keep in mind:

sine functions

sin x = 1/(2i) (e^(ix) − e^(−ix))  (Eq. 3.40)

sin(ix) = 1/(2i) (e^(−x) − e^(x)) = −1/(2i) (e^(x) − e^(−x)) = i sinh x

sinh x = ½ (e^x − e^(−x))  (Eq. 3.41)

cosine functions

cos x = ½ (e^(ix) + e^(−ix))
cos(ix) = ½ (e^(−x) + e^(x)) = cosh x  (Eq. 3.42)

cosh x = ½ (e^x + e^(−x))  (Eq. 3.43)
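The relations of this section can be checked directly with complex arithmetic; the script below (our own illustration using Python's cmath module) verifies Euler's formula and the exponential representations:

```python
import cmath
import math

x = 1.2

# Euler's formula (Eq. 3.38): e^{ix} = cos x + i sin x
lhs = cmath.exp(1j * x)
rhs = complex(math.cos(x), math.sin(x))

# Eq. 3.40 and Eq. 3.42: sine and cosine from complex exponentials
sin_from_exp = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j
cos_from_exp = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2

# Imaginary arguments: sin(ix) = i sinh x and cos(ix) = cosh x
sin_ix = cmath.sin(1j * x)
cos_ix = cmath.cos(1j * x)
```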

3.2.4 Bessel Functions

3.2.4.1 Standard Bessel Functions

The Bessel2 functions are named after the German mathematician Friedrich Wilhelm Bessel, who studied them systematically while working on many-body problems, particularly planetary motion [7]. Bessel was not the first to describe these functions, but he was the first to use them extensively for solving partial differential equations.

Definition. Bessel functions are solutions to a specific type of differential equation referred to as Bessel’s differential equation. This differential equation is a special case of the Sturm-Liouville boundary value problem, which often occurs in polar and cylindrical coordinates. The Bessel differential equation of order ν for the function f (x) is given by

x² d²f(x)/dx² + x df(x)/dx + (α²x² − ν²) f(x) = 0  (Eq. 3.44)

In this equation, ν is a nonnegative integer. In general, ν could also be a real number, but the solutions to these differential equations are somewhat different. Eq. 3.44 is a second-order ordinary differential equation and is solved by a set of two functions: the Bessel function of first kind Jν (x) and the Bessel function of second kind Yν (x), which is also referred to as the Weber function. The Bessel function of first kind Jν (x) can be approximated by a series of gamma functions. The Bessel function of second kind Yν (x) can be expressed using the Bessel function of first kind.

The first five Bessel functions Jν (x) and Yν (x), respectively, are displayed in Fig. 3.6. As can be seen, the functions look roughly like decaying sine or cosine functions.

f03-06-9781455731411
Fig. 3.6 The first five Bessel functions Jν (x) and Yν (x), respectively.

The general solution to Eq. 3.44 is

f(x) = c₁ Jν(αx) + c₂ Yν(αx)  (Eq. 3.45)

In physical applications, c2 = 0 in most cases. This is due to the fact that Yν (α x) tends to negative infinity as x → 0, which yields physically unreasonable results (unless there is a very strong source or sink at x = 0). Therefore, the general solution to Eq. 3.44 in most practical physical problems is

f(x) = c₁ Jν(αx)  (Eq. 3.46)

Important Properties. Some of the most important properties of the Bessel function are for Jν

d/dx (Jν(αx)) = α Jν−1(αx) − (ν/x) Jν(αx)  (Eq. 3.47)

d/dx (x^ν Jν(αx)) = α x^ν Jν−1(αx)  (Eq. 3.48)

d/dx (x^(−ν) Jν(αx)) = −α x^(−ν) Jν+1(αx)  (Eq. 3.49)

Jν+1(αx) = (2ν/(αx)) Jν(αx) − Jν−1(αx)  (Eq. 3.50)

d/dx (x^ν Jν(αx)) = ν x^(ν−1) Jν(αx) + x^ν ((ν/x) Jν(αx) − α Jν+1(αx))  (Eq. 3.51)

d/dx (Jν(αx)) = (ν/x) Jν(αx) − α Jν+1(αx)  (Eq. 3.52)

d/dx (x^(ν+3) Jν+1(αx)) − 2 x^(ν+2) Jν+1(αx) = α x^(ν+3) Jν(αx)  (Eq. 3.53)

Jν(−λx) = (−1)^ν Jν(λx)  (Eq. 3.54)

for Yν

d/dx (Yν(αx)) = α Yν−1(αx) − (ν/x) Yν(αx)  (Eq. 3.55)

d/dx (x^ν Yν(αx)) = α x^ν Yν−1(αx)  (Eq. 3.56)

which for ν = 1 gives the important relations

d/dx (x J₁(αx)) = α x J₀(αx)  (Eq. 3.57)

d/dx (x Y₁(αx)) = α x Y₀(αx)  (Eq. 3.58)

Here you can also verify one important property given by

dJ₀(x)/dx = −J₁(x)  (Eq. 3.59)

Recurrence Relation. Among the most important properties of Bessel functions are the so-called recurrence relations, which state that Bessel functions of higher order can be expressed by Bessel functions of lower order and vice versa. The recurrence relations read

Jν−1(αx) = (2ν/(αx)) Jν(αx) − Jν+1(αx)  (Eq. 3.60)

Yν−1(αx) = (2ν/(αx)) Yν(αx) − Yν+1(αx)  (Eq. 3.61)
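The recurrence relation can be verified numerically. The helper below evaluates Jν through the series of Eq. 3.64 (introduced a few paragraphs further down) and checks Eq. 3.60 at an arbitrary point; the function name is our own:

```python
import math

def bessel_j(nu, x, terms=40):
    """Series evaluation of J_nu(x) for integer nu >= 0 (Eq. 3.64)."""
    return sum((-1)**n / (math.factorial(n) * math.factorial(nu + n))
               * (x / 2)**(nu + 2 * n) for n in range(terms))

# Recurrence (Eq. 3.60): J_{nu-1}(x) = (2 nu / x) J_nu(x) - J_{nu+1}(x)
nu, x = 2, 3.7
lhs = bessel_j(nu - 1, x)
rhs = 2 * nu / x * bessel_j(nu, x) - bessel_j(nu + 1, x)
```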

General Notation. In many applications the standard Bessel differential equation occurs in a slightly different form. Compared to Eq. 3.44, a more general notation can be written as

x² d²f(x)/dx² + (2p+1) x df(x)/dx + (α²x^(2r) + ν²) f(x) = 0  (Eq. 3.62)

The general solution to Eq. 3.62 is

f(x) = x^(−p) (c₁ J_(q/r)((α/r) x^r) + c₂ Y_(q/r)((α/r) x^r))  with  q = √(p² − ν²)  (Eq. 3.63)

where, again, the second term is often dropped in order to obtain physically meaningful results.

Series Notation of the Bessel Functions of First Kind. The Bessel function of first kind is often written out explicitly as a series that is given by

Jν(x) = Σ_(n=0)^∞ (−1)^n / (n! (ν+n)!) (x/2)^(ν+2n)  (Eq. 3.64)

Now this notation does not look very convenient, but it is simple to write it out for explicit values of ν. The first two Bessel functions for ν = 0 and ν = 1 can be written out as

J₀(x) = 1 − (x/2)² + 1/(2!)² (x/2)⁴ − 1/(3!)² (x/2)⁶ + …  (Eq. 3.65)

J₁(x) = x/2 − 1/(1!2!) (x/2)³ + 1/(2!3!) (x/2)⁵ − 1/(3!4!) (x/2)⁷ + …  (Eq. 3.66)

which look distinctly more familiar.
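The truncated series can be compared against an independent evaluation. The sketch below (our own helpers) checks the series for J₀ against Bessel's integral representation J₀(x) = (1/π) ∫₀^π cos(x sin t) dt, a standard identity not derived in this text:

```python
import math

def j0_series(x, terms=30):
    """J_0 as the power series of Eq. 3.65."""
    return sum((-1)**n / math.factorial(n)**2 * (x / 2)**(2 * n)
               for n in range(terms))

def j0_integral(x, n=10_000):
    """Independent check: J_0(x) = (1/pi) * integral of cos(x sin t)
    over t in [0, pi], evaluated with the midpoint rule."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h))
               for k in range(n)) * h / math.pi

x = 2.5
```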

Gamma Function. In these derivations we have always assumed ν to be an integer, which it does not necessarily have to be. If we allow all real values for ν the series notation of the Bessel function of first kind can be upheld, but it is rewritten using the so-called gamma function Γ which is defined as

Γ(x) = ∫₀^∞ t^(x−1) e^(−t) dt  (x > 0)

Fig. 3.7 shows a plot of the gamma function Γ (x). The gamma function is an important function in integral and differential analysis and can be used to express the Bessel function of first kind for all ν including noninteger values as

f03-07-9781455731411
Fig. 3.7 Gamma function Γ (x).

Jν(x) = Σ_(n=0)^∞ (−1)^n / (n! Γ(ν+n+1)) (x/2)^(ν+2n)  (Eq. 3.67)
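Python's standard library exposes the gamma function directly, which makes the connection to the factorial easy to confirm:

```python
import math

# Gamma generalizes the factorial: Γ(n) = (n-1)! for positive integers
for n in range(1, 8):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

# A frequently used half-integer value: Γ(1/2) = sqrt(pi)
g_half = math.gamma(0.5)
```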

3.2.4.2 Modified Bessel Functions

Definition. If Eq. 3.44 is taken with a purely imaginary argument, the resulting differential equation is referred to as the modified Bessel differential equation. Modified Bessel functions are solutions to the modified Bessel differential equation of order ν which, for the function f (x), is given by

x² d²f(x)/dx² + x df(x)/dx − ((αx)² + ν²) f(x) = 0  (Eq. 3.68)

where α is a constant. The general solution to Eq. 3.68 is given by a set of two functions, the modified Bessel function of first kind Iν (α x) and the modified Bessel function of second kind Kν (α x). The solution is given by

f(x) = c₁ Iν(αx) + c₂ Kν(αx)

For the same reason as for the standard Bessel functions, we set c₂ = 0 in order to obtain physically meaningful results, which simplifies the solution to

f(x) = c₁ Iν(αx)  (Eq. 3.69)

The first five modified Bessel functions Iν (x) and Kν (x), respectively, are displayed in Fig. 3.8.

f03-08-9781455731411
Fig. 3.8 The first five modified Bessel functions Iν (x) and Kν (x), respectively.

Important Properties. Iν (x) can be expressed by the standard Bessel function of first kind as

Iν(x) = i^(−ν) Jν(ix)

Likewise, Kν (x) can be expressed by the standard Bessel functions of first and second kind as

Kν(x) = (π/2) i^(ν+1) (Jν(ix) + i Yν(ix))

Similar to the standard Bessel functions, modified Bessel functions also feature recurrence relationships. Therefore, derivatives of higher Bessel functions can be expressed by Bessel functions of lower orders. The general relationship is the following:

for Iν

d/dx (Iν(αx)) = α Iν−1(αx) − (ν/x) Iν(αx)  (Eq. 3.70)

d/dx (x^ν Iν(αx)) = α x^ν Iν−1(αx)  (Eq. 3.71)

for Kν

d/dx (Kν(αx)) = −α Kν−1(αx) − (ν/x) Kν(αx)  (Eq. 3.72)

d/dx (x^ν Kν(αx)) = −α x^ν Kν−1(αx)  (Eq. 3.73)

which for ν = 0 gives the important relations

d/dx I₀(αx) = α I₁(αx)  (Eq. 3.74)

d/dx K₀(αx) = −α K₁(αx)  (Eq. 3.75)
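Eq. 3.74 can be checked numerically using the series for Iν, which is the series of Eq. 3.64 without the alternating sign (the helper below is our own):

```python
import math

def bessel_i(nu, x, terms=40):
    """Series for the modified Bessel function I_nu(x), integer nu >= 0
    (the series of Eq. 3.64 with all terms positive)."""
    return sum((x / 2)**(nu + 2 * n)
               / (math.factorial(n) * math.factorial(nu + n))
               for n in range(terms))

# Check d/dx I_0(x) = I_1(x) (Eq. 3.74 with alpha = 1) by central difference
x, h = 1.3, 1e-6
deriv = (bessel_i(0, x + h) - bessel_i(0, x - h)) / (2 * h)
```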

Recurrence Relation. Just like the regular Bessel functions, the modified Bessel functions also show recurrence relations:

Iν−1(αx) = (2ν/(αx)) Iν(αx) + Iν+1(αx)  (Eq. 3.76)

Kν−1(αx) = −(2ν/(αx)) Kν(αx) + Kν+1(αx)  (Eq. 3.77)

3.2.4.3 Roots of the Bessel Functions

As you can see from Fig. 3.6, the Bessel functions of first and second kind have a number of roots, i.e., values xn at which Jν (xn) = 0 and Yν (xn) = 0. As it turns out, these values are very important and frequently used when dealing with Bessel functions and in particular with Bessel differential equations. Since the roots are simply numbers, they can be calculated once and tabulated conveniently. Tab. 3.1 lists the first ten roots of the first five Bessel functions of first and second kind, respectively.

Tab. 3.1

Roots of the Bessel functions of first kind Jν and second kind Yν of order ν

Root number    ν = 0      ν = 1      ν = 2      ν = 3      ν = 4
Bessel function of first kind Jν of order ν
1               2.404 83   3.831 71   5.135 62   6.380 16   7.588 34
2               5.520 08   7.015 59   8.417 24   9.761 02  11.064 71
3               8.653 73  10.173 47  11.619 84  13.015 20  14.372 54
4              11.791 53  13.323 69  14.795 95  16.223 47  17.615 97
5              14.930 92  16.470 63  17.959 82  19.409 42  20.826 93
6              18.071 06  19.615 86  21.117 00  22.582 73  24.019 02
7              21.211 64  22.760 08  24.270 11  25.748 17  27.199 09
8              24.352 47  25.903 67  27.420 57  28.908 35  30.371 01
9              27.493 48  29.046 83  30.569 20  32.064 85  33.537 14
10             30.634 61  32.189 68  33.716 52  35.218 67  36.699 00
Bessel function of second kind Yν of order ν
1               0.893 58   2.197 14   3.384 24   4.527 02   5.645 15
2               3.957 68   5.429 68   6.793 81   8.097 55   9.361 62
3               7.086 05   8.596 01  10.023 48  11.396 47  12.730 14
4              10.222 35  11.749 15  13.209 99  14.623 08  15.999 63
5              13.361 10  14.897 44  16.378 97  17.818 46  19.224 43
6              16.500 92  18.043 40  19.539 04  20.997 28  22.424 81
7              19.641 31  21.188 07  22.693 96  24.166 24  25.610 27
8              22.782 03  24.331 94  25.845 61  27.328 80  28.785 89
9              25.922 96  27.475 29  28.995 08  30.486 99  31.954 69
10             29.064 03  30.618 29  32.143 00  33.642 05  35.118 53
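Since the roots are simply numbers, they can be recomputed at any time; the sketch below (our own helpers) finds the first root of J₀ by bisection on the series of Eq. 3.64 and reproduces the first entry of Tab. 3.1:

```python
import math

def bessel_j(nu, x, terms=40):
    """Series evaluation of J_nu(x) for integer nu >= 0 (Eq. 3.64)."""
    return sum((-1)**n / (math.factorial(n) * math.factorial(nu + n))
               * (x / 2)**(nu + 2 * n) for n in range(terms))

def bisect(f, a, b, tol=1e-12):
    """Simple bisection; assumes f changes sign on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# First root of J_0, tabulated above as 2.404 83
root = bisect(lambda x: bessel_j(0, x), 2.0, 3.0)
```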


3.2.5 Delta Function

The delta function is often also referred to as the Dirac delta function, named after the English physicist Paul Dirac1. It is not a function in the classical sense and is defined as

δ(x) = { ∞   x = 0
         0   otherwise    (Eq. 3.78)

The delta function reaches infinity at a single point and is zero at every other point. Its most important property is that its integral is always one:

∫_(−∞)^(+∞) δ(x) dx = 1

You may think of the delta function as the approximation of a rectangular pulse with the pulse width approaching zero. Let this pulse be defined as

δ_pulse(x) = { 1/a   −a/2 ≤ x ≤ a/2
               0     otherwise    (Eq. 3.79)

Obviously, the smaller the parameter a is, the narrower and higher the pulse. For a → 0 the pulse height will move toward infinity. Fig. 3.9 shows this pulse for different values of a. Obviously, the integral over the pulse will always be 1.
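The statement that the integral stays 1 while the pulse narrows can be checked numerically (our own midpoint-rule helper):

```python
def delta_pulse(x, a):
    """Rectangular pulse of width a and height 1/a (Eq. 3.79)."""
    return 1.0 / a if -a / 2 <= x <= a / 2 else 0.0

def integrate(f, lo, hi, n=100_000):
    """Midpoint-rule quadrature."""
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

# The pulse narrows and grows as a -> 0, but its integral stays 1
areas = [integrate(lambda x: delta_pulse(x, a), -1.0, 1.0)
         for a in (1.0, 0.1, 0.01)]
```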

f03-09-9781455731411
Fig. 3.9 Approximation of the delta function δ (x) by a pulse of width a and height 1/a. As a → 0 the pulse height converges to infinity.

Usage. The delta function can be used to model singularities in physical problems. An example that we will also discuss later is given in the case of diffusion. At the beginning of the experiment we introduce a given amount of substance, e.g., a salt in the form of a soluble piece of solid, into a fluid volume. Over time this substance will dissolve into the volume, changing the concentration locally. At the beginning of the experiment, all the substance is confined in the small piece of solid. At this location, the concentration is effectively infinite, as a finite amount of substance is concentrated in a vanishingly small volume (see Fig. 3.10a). Over time, the solid will slowly dissolve in the fluid, leading to a broadening of the concentration profiles (see Fig. 3.10b). The profiles will transform from a delta function more toward a distribution we may associate with a Gaussian distribution.

f03-10-9781455731411
Fig. 3.10 Soluble solid as an example for a delta function in two dimensions. a) Initially the solid is in a compact form. The concentration profiles along the x-axis and the y-axis are delta functions. b) After a given amount of time, the profiles will broaden and the concentration in both directions will look more like a distribution.

Mathematical Treatment. The problem with the delta function is that it is, mathematically speaking, not a function but rather a distribution. Continuum mechanics and the underlying mathematics have trouble with functions that are noncontinuous. This is why we usually try to convert the delta function to a form that we can treat better mathematically. The most convenient means of doing so is by converting the delta function to a Fourier series. We will cover the mathematics of Fourier series in section 4.3. Being able to convert the delta function to a Fourier series is a very helpful technique. It allows using this mathematically rather impractical distribution as a continuous function.

Fourier Series of the Delta Function. The Fourier expansion is rather straightforward. We use Eq. 4.24 and expand the delta function on the interval −a ≤ x ≤ + a. We begin with the Fourier coefficient a0, for which we find

a₀ = (2/2a) ∫_(−a)^(+a) δ(x) dx = 1/a

where we have used the fact that ∫_(−a)^(+a) δ(x) dx = 1 per definition. In order to calculate an and bn we will need to employ a little trick which is often referred to as the “cutting ability of the delta function.” From Eq. 4.27 we find

an = (2/2a) ∫_(−a)^(+a) δ(x) cos(nπx/a) dx = (1/a) cos(nπ·0/a) ∫_(−a)^(+a) δ(x) dx = 1/a

This simplification may seem a little weird at first sight. The delta function will “cut off” all contributions of the function cos(nπx/a) except the value at x = 0, where the delta function is nonzero. Therefore, we only have to evaluate this function at x = 0, which results in the term cos(nπ·0/a) = 1. However, this term is constant, as it is the function evaluated at one specific value of x. Therefore we can move this constant in front of the integral and are left with the integral of the delta function ∫_(−a)^(+a) δ(x) dx, which is, again, equal to one per definition. We proceed similarly with bn and find

bn = (2/2a) ∫_(−a)^(+a) δ(x) sin(nπx/a) dx = (1/a) sin(nπ·0/a) = 0

In sum we therefore find for the delta function according to Eq. 4.25

δ_Fourier(x) = 1/(2a) + Σ_(n=1)^(nmax) (1/a) cos(nπx/a) = (1/a) (1/2 + Σ_(n=1)^(nmax) cos(nπx/a))  (Eq. 3.80)

Fig. 3.11 shows the expanded Fourier series δFourier (x) of the delta function δ (x). As you can see, the higher the expansion order, the more pronounced the peak becomes.
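Eq. 3.80 is straightforward to evaluate. The sketch below confirms the two properties visible in Fig. 3.11: the peak height grows with the expansion order, while the integral over the interval stays 1:

```python
import math

def delta_fourier(x, a, nmax):
    """Truncated Fourier series of the delta function, Eq. 3.80."""
    return (0.5 + sum(math.cos(n * math.pi * x / a)
                      for n in range(1, nmax + 1))) / a

a = 1.0
peak_10 = delta_fourier(0.0, a, 10)   # peak height is (nmax + 1/2)/a
peak_50 = delta_fourier(0.0, a, 50)

# midpoint-rule integral over [-a, +a]: stays 1 for any truncation order
n = 20_001
h = 2 * a / n
area = sum(delta_fourier(-a + (k + 0.5) * h, a, 10) for k in range(n)) * h
```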

f03-11-9781455731411
Fig. 3.11 Fourier series of the delta function δ (x) with different expansion orders nmax.

Shifting the Delta Function. We have expanded the delta function around x = 0, but it can be simply shifted to a value x = x0 by using δ (x − x0) instead of δ (x). Fig. 3.12 shows an example of a delta function shifted to x0 = 5.

f03-12-9781455731411
Fig. 3.12 Delta function δ (x − 5) which is a delta function shifted to x0 = 5.

Fourier Series of Shifted Delta Function. We will derive the Fourier series for the shifted delta function δ (x − x0), where −a ≤ x0 ≤ + a, which is given by

a₀ = (2/2a) ∫_(−a)^(+a) δ(x − x₀) dx = 1/a

an = (2/2a) ∫_(−a)^(+a) δ(x − x₀) cos(nπx/a) dx = (1/a) cos(nπx₀/a)

bn = (2/2a) ∫_(−a)^(+a) δ(x − x₀) sin(nπx/a) dx = (1/a) sin(nπx₀/a)

with

δ_Fourier(x) = 1/(2a) + (1/a) Σ_(n=1)^(nmax) (cos(nπx₀/a) cos(nπx/a) + sin(nπx₀/a) sin(nπx/a))

Two-Dimensional Delta Function. The delta function can be easily extended to multiple dimensions. As the function effectively “cuts out” a value, we may simply multiply a delta function along the x-axis with a delta function along the y-axis in order to obtain a two-dimensional delta function. It is simply defined as

δ_xy(x, y) = δ(x) δ(y)  (Eq. 3.81)

The corresponding Fourier series on the interval −a < x < + a and −b < y < + b is thus defined as

δ_xy,Fourier(x, y) = (1/a) (1/2 + Σ_(n=1)^(nmax) cos(nπx/a)) · (1/b) (1/2 + Σ_(n=1)^(nmax) cos(nπy/b))  (Eq. 3.82)

Fig. 3.13 shows Eq. 3.82 with different expansion orders. As you can see, it resembles a two-dimensional peak at the origin.

f03-13-9781455731411
Fig. 3.13 Fourier series of a two-dimensional delta function with different expansion orders.

Fourier Transformation of the Delta Function. We will now derive the Fourier transformation of the delta function. We will cover Fourier transforms in detail in section 5.1, so do not worry if at this point the following derivation still seems obscure. From Eq. 5.1 we see that the Fourier transformation is defined as

ℱ{δ(t)} = ∫_(−∞)^(+∞) δ(t) e^(−2πift) dt

However, the delta function is zero for all t ≠ 0; therefore, the exponential only needs to be evaluated at t = 0:

ℱ{δ(t)} = e^(−2πif·0) ∫_(−∞)^(+∞) δ(t) dt = 1  (Eq. 3.83)

3.2.6 Heaviside Function

The next important function we will require is the Heaviside1 function. The Heaviside function is often considered the integral of the delta function. It is defined as

Θ(x) = { 0   x < 0
         1   x ≥ 0    (Eq. 3.84)

Obviously, at x = 0 it has a discontinuity: the jump corresponds to an infinite derivative over an infinitely small interval. This derivative is effectively the delta function δ. Just like the delta function, the Heaviside function is not a function in the classical sense, which is why it is sometimes referred to as a distribution.

Usage. Just as the delta function, the Heaviside function can be used to model sudden changes in a physical system. Returning to our example of the soluble solids in a fluid, the Heaviside function can be used to model a compact block of solid that effectively forms a concentration jump (see Fig. 3.14a). This concentration jump is a good example of a Heaviside function. After a given amount of time, the profile will broaden (see Fig. 3.14b).

f03-14-9781455731411
Fig. 3.14 Soluble solid as an example for a Heaviside function in two dimensions. a) Initially the solid is in a compact form forming a “wall” inside of the fluid, effectively creating a sudden jump in the concentration. The resulting concentration profile is a Heaviside function. b) After a given amount of time, the profile will broaden and the concentration will look more like a distribution.

Fourier Series of the Heaviside Function. In analogy to our approach with the delta function, we will expand the Heaviside function to a Fourier series. This will allow us to treat this noncontinuous distribution as a standard function. Again, we use Eq. 4.24 and expand the function on the interval −a ≤ x ≤ + a, where the step will be at x = 0. Just as with the delta function, we may shift this step to any other value of x by using Θ (x − x0) instead of Θ (x). For the Fourier coefficients we find

a₀ = (2/2a) ∫_(−a)^(+a) Θ(x) dx = a/a = 1

an = (2/2a) ∫_(−a)^(+a) Θ(x) cos(nπx/a) dx = (1/a) ∫_0^a cos(nπx/a) dx = (1/nπ) [sin(nπx/a)]_0^a = (1/nπ) (sin(nπ) − sin 0) = 0

bn = (2/2a) ∫_(−a)^(+a) Θ(x) sin(nπx/a) dx = (1/a) ∫_0^a sin(nπx/a) dx = −(1/nπ) [cos(nπx/a)]_0^a = −(1/nπ) (cos(nπ) − cos 0) = −(1/nπ) (cos(nπ) − 1)

Here we take into account that

cos(nπ) − 1 = {  0   n = 0, 2, 4, …  as cos(nπ) = 1
              { −2   n = 1, 3, 5, …  as cos(nπ) = −1

Therefore, keeping only the nonvanishing odd terms and reindexing with n = 0, 1, 2, …, we can rewrite bn as

bn = 2/((2n+1)π)

In sum we therefore find for the Heaviside function, according to Eq. 4.25,

Θ_Fourier(x) = 1/2 + Σ_(n=0)^(nmax) (2/((2n+1)π)) sin((2n+1)πx/a)  (Eq. 3.85)

Fig. 3.15 shows the expanded Fourier series ΘFourier (x) of the Heaviside function Θ (x). As we already know from the Fourier expansion of the step function (which is in fact very similar to the Heaviside function), the Fourier series matches the step with good fidelity.
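Eq. 3.85 can likewise be evaluated directly; away from the jump the truncated series approaches 0 and 1, and at the jump itself it yields exactly 1/2:

```python
import math

def heaviside_fourier(x, a, nmax):
    """Truncated sine series of the Heaviside step, Eq. 3.85."""
    return 0.5 + sum(2 / ((2 * n + 1) * math.pi)
                     * math.sin((2 * n + 1) * math.pi * x / a)
                     for n in range(nmax + 1))

a = 1.0
left  = heaviside_fourier(-0.5, a, 2000)   # approaches Theta(-0.5) = 0
right = heaviside_fourier(+0.5, a, 2000)   # approaches Theta(+0.5) = 1
mid   = heaviside_fourier(0.0, a, 2000)    # exactly 1/2 at the jump
```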

f03-15-9781455731411
Fig. 3.15 Fourier series of the Heaviside function Θ (x) with different expansion orders nmax.

Fourier Transformation of the Heaviside Function. We will now derive the Fourier transformation of the Heaviside function. Here we will use the signum function, which we will introduce in section 3.2.7. We will use the independent variable t instead of x in the following section in order to stay consistent with the notation commonly used for Fourier transformations (see Tab. 5.1). The Heaviside and the signum functions are related as

Θ(t) = 1/2 + (1/2) sign(t)

which we can directly see from Eq. 3.87. We can then apply Fourier transformations to both sides, leading to

ℱ{Θ(t)} = ℱ{1/2} + (1/2) ℱ{sign(t)} = πδ(f) + 1/(if)  (Eq. 3.86)

where we have used ℱ{1/2} = (1/2) ℱ{1} = πδ(f). The Fourier transformation of the signum function will be derived in Eq. 3.90.

3.2.7 Signum Function

The signum function is defined as

sign(x) = {  1   x > 0
             0   x = 0
            −1   x < 0    (Eq. 3.87)

It is closely related to the Heaviside function as

sign(x) = 2Θ(x) − 1  (Eq. 3.88)

Its derivative is closely related to the Delta function as

d/dx sign(x) = 2 dΘ(x)/dx = 2δ(x)  (Eq. 3.89)

Fourier Transformation of the Signum Function. We will quickly derive the Fourier transform of the signum function using Eq. 3.89 as a basis. In order to stay consistent with the notation used in Tab. 5.1 we use the independent variable t instead of x here. In this case we find

ℱ{d/dt sign(t)} = 2 ℱ{δ(t)}
if ℱ{sign(t)} = 2
ℱ{sign(t)} = 2/(if)    (Eq. 3.90)

where we have used ℱ{δ(t)} = 1.

3.2.8 Error Function

When working with the Heaviside function, we often encounter the so-called error function erf (x). It can be considered as the transition from the Heaviside function (which is rather a distribution than a function) to an algebraic function. Very often, you may rather work with the error function than with the Heaviside distribution when solving, e.g., differential equations.

Definition. The error function is defined as

erf(x) = (2/√π) ∫_0^x e^(−ξ²) dξ  (Eq. 3.91)

Symmetry. The error function is point-symmetric to the origin:

erf(−x) = −erf(x)  (Eq. 3.92)

Complementary Error Function. The second function often encountered when using the error function is the complementary error function erfc (x), which is defined as

erfc(x) = (2/√π) ∫_x^∞ e^(−ξ²) dξ  (Eq. 3.93)

Its most important property is that it adds up with the error function to the value 1:

erf(x) + erfc(x) = 1  ⇒  erfc(x) = 1 − erf(x)  (Eq. 3.94)
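Both functions are available in Python's math module, so the symmetry and the complementary relation are easy to confirm:

```python
import math

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    # point symmetry (Eq. 3.92): erf(-x) = -erf(x)
    assert math.isclose(math.erf(-x), -math.erf(x))
    # complementary relation (Eq. 3.94): erf(x) + erfc(x) = 1
    assert math.isclose(math.erf(x) + math.erfc(x), 1.0)

e_half = math.erf(0.5)
```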

Visualization. The error function erf (x) and complementary error function erfc (x) are displayed in Fig. 3.16. As you can see, the error function resembles the Heaviside step function.

f03-16-9781455731411
Fig. 3.16 Plot of the error function erf (x) and complementary error function erfc (x).

3.3 Commonly Used Calculus Tricks

This section will highlight some tricks which are often used to simplify calculations.

3.3.1 Simplification of Denominators Containing Sums

In the fraction 1/(a + bc), which has a denominator containing a sum, we can make the following simplification if bc ≪ a:

1/(a + bc) ≈ 1/a − bc/a²  (Eq. 3.95)

The proof for this is

1 ≈ (1/a − bc/a²)(a + bc) = 1 + bc/a − bc/a − (bc/a)² = 1 − (bc/a)² ≈ 1

as (bc/a)² ≪ 1 because bc ≪ a.
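A quick numerical check of Eq. 3.95, with values chosen arbitrarily so that bc ≪ a:

```python
a, b, c = 100.0, 0.3, 0.2            # bc = 0.06, much smaller than a = 100
exact  = 1 / (a + b * c)
approx = 1 / a - b * c / a**2        # Eq. 3.95
error  = abs(exact - approx)         # of order (bc/a)^2 / a
```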

3.3.2 Product of a Variable and Its Derivative

Once in a while the following trick can be used to simplify the product of a function ψ and its derivative:

ψ dψ/dx = d/dx (ψ²/2)  (Eq. 3.96)

The proof is very simple; it simply uses the product rule of differentiation, Eq. 3.13

d/dx (ψ²/2) = (2ψ/2) dψ/dx = ψ dψ/dx

3.3.3 Curvature

The curvature κ is a quantitative means of describing how “bent” a function is. This is in agreement with our intuition of what “curved” actually means. Mathematically, the curvature is the magnitude of the second derivative of the position vector φ(s) with respect to the path length s

κ = |d²φ/ds²| = |dT/ds|  (Eq. 3.97)

with T = dφ/ds being the tangent vector. The curvature therefore is the change of the tangent with respect to ds or, in other words, the degree by which the tangent changes. This is equivalent to the degree by which the angle θ of the tangent changes; therefore,

κ = dθ/ds  (Eq. 3.98)

Looking at Fig. 3.17, it can be seen that this is intuitive. If the tangent changes rapidly (see Fig. 3.17a) the curve is strongly curved, thus κ is high. If the tangent changes only slowly (see Fig. 3.17b), the curve is only slightly curved; thus κ is low. The curvature is the inverse of the radius, and therefore

f03-17-9781455731411
Fig. 3.17 Curvature κ of a strongly curved function (a) and a weakly curved function (b).

κ = 1/r

The curvature for a planar function ψ can be derived from Fig. 3.18 from which we find

f03-18-9781455731411
Fig. 3.18 Derivation of the curvature of a planar function.

(ds)² = (dx)² + (dψ)²  ⇒  ds/dx = √(1 + (dψ/dx)²)  (Eq. 3.99)

as well as

tan θ = dψ/dx  (Eq. 3.100)

We can extract the change in the angle θ required for Eq. 3.98 by finding the derivative with respect to x of Eq. 3.100, resulting in

d(tan θ)/dx = (1 + tan²θ) dθ/dx = d²ψ/dx²  ⇒  dθ/dx = (d²ψ/dx²)/(1 + tan²θ) = (d²ψ/dx²)/(1 + (dψ/dx)²)  (Eq. 3.101)

where we have used Eq. 3.100. Now we can find dθ/ds by dividing Eq. 3.101 by Eq. 3.99:

dθ/ds = (dθ/dx)(dx/ds) = (d²ψ/dx²)/(1 + (dψ/dx)²) · (1 + (dψ/dx)²)^(−1/2) = (d²ψ/dx²)/(1 + (dψ/dx)²)^(3/2)  (Eq. 3.102)

For many cases, especially if the changes in a curve are steady and not too strong, Eq. 3.102 can be linearized to

dθ/ds ≈ d²ψ/dx²  (Eq. 3.103)

as (1 + (dψ/dx)²)^(3/2) ≈ 1^(3/2) = 1 for small values of dψ/dx.
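Eq. 3.102 can be tested on a curve of known curvature: a circle of radius r has κ = 1/r everywhere. The sketch below (our own finite-difference helper) evaluates Eq. 3.102 on a semicircle:

```python
import math

def curvature(psi, x, h=1e-5):
    """Curvature of a planar curve y = psi(x) via Eq. 3.102, with
    central-difference first and second derivatives."""
    d1 = (psi(x + h) - psi(x - h)) / (2 * h)
    d2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2
    return abs(d2) / (1 + d1**2)**1.5

r = 2.0
semicircle = lambda x: math.sqrt(r**2 - x**2)
kappa = curvature(semicircle, 0.5)   # should equal 1/r = 0.5
```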

3.3.4 Infinitesimal Changes in Commonly Encountered Geometries

There are several geometries for which we often use the infinitesimal changes in surface dA or volume dV . This section derives the most commonly encountered equations.

Circle. For a circle the infinitesimal change of area dAcircle is given by

dA_circle = π((r + dr)² − r²) = π(r² + 2r dr + dr² − r²) = 2πr dr  (Eq. 3.104)

where we neglected dr2 as being very small.

Cylinder. For a cylinder of length L the infinitesimal change in lateral surface area dAcylinder is given by

dA_cylinder = 2πL((r + dr) − r) = 2πL dr  (Eq. 3.105)

In analogy, the infinitesimal change in volume dVcylinder is given by

dV_cylinder = πL((r + dr)² − r²) = πL(r² + 2r dr + dr² − r²) = 2πLr dr  (Eq. 3.106)

Sphere. For a sphere the infinitesimal change in area dAsphere is given by

dA_sphere = 4π((r + dr)² − r²) = 4π(r² + 2r dr + dr² − r²) = 8πr dr  (Eq. 3.107)

where we have again neglected all terms containing drn with n ≥ 2 as being too small. The infinitesimal change of volume dVsphere of a sphere can be calculated as

dV_sphere = (4/3)π((r + dr)³ − r³) = (4/3)π(r³ + 3r² dr + 3r dr² + dr³ − r³) = 4πr² dr  (Eq. 3.108)
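These differentials are simply the first-order terms of the corresponding expansions, which is easy to confirm numerically, e.g., for the sphere volume of Eq. 3.108:

```python
import math

def sphere_volume(r):
    """V = (4/3) pi r^3"""
    return 4.0 / 3.0 * math.pi * r**3

r, dr = 2.0, 1e-6
dV_exact  = sphere_volume(r + dr) - sphere_volume(r)   # exact finite change
dV_approx = 4 * math.pi * r**2 * dr                    # Eq. 3.108
rel_error = abs(dV_exact - dV_approx) / dV_approx      # of order dr/r
```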

3.4 Summary

In this section, we have revisited some of the most important functions and concepts of engineering mathematics. Knowledge of these will be crucial to understanding concepts such as series and transforms, as well as methods for solving differential equations, which we will cover in the following sections.

References

[1] Dirichlet P.G.L. Über einen neuen Ausdruck zur Bestimmung der Dichtigkeit einer unendlich dünnen Kugelschale, wenn der Werth des Potentials derselben in jedem Punkte ihrer Oberfläche gegeben ist. Deutsche Akademie der Wissenschaften zu Berlin. Mathematische Abhandlungen der Königlichen Akademie der Wissenschaften zu Berlin. 1850;35: (cit. on p. 29).

[2] Neumann C. Das Dirichlet’sche Princip in seiner Anwendung auf die Riemann’schen Flächen. Leipzig: B.G. Teubner; 1865 (cit. on p. 29).

[3] Euler L. De summis serierum reciprocarum ex potestatibus numerorum naturalium ortarum dessertatio altera, in qua eaedem summationes ex fonte maxime divreso derivantur. Miscellanea Berolinensia. 1743;7(1743):172–192 (cit. on p. 35).

[4] Euler L. In: Acad. imp. Saènt; . Institutionum calculi integralis. 1768;Vol. 1 (cit. on p. 35).

[5] Euler L. Variae observationes circa series infinitas. Commentarii academiae scientiarum imperialis Petropolitanae. 1737;9(1737):160–188 (cit. on p. 35).

[6] Euler L. Principes généraux du mouvement des fluides. Mém. Acad. Sci. Berlin. 1755;11(1755):274–315 (cit. on p. 35).

[7] Bessel F.W. Untersuchung des Theils der planetarischen Störungen, welcher aus der Bewegung der Sonne entsteht. Druckerei der Königlichen Akademie der Wissenschaften; 1826 (cit. on p. 35).

[8] Dirac P.A.M. The Principles of Quantum Mechanics. The Clarendon Press; 1930 (cit. on p. 39).

[9] Fourier J.B.J. Suite de Mémoire intitulé: Théorie du mouvement de la chaleur dans les corps solides. 1826 (cit. on p. 39).


1 Peter Dirichlet was a German mathematician who extensively studied the type of boundary conditions commonly referred to as Dirichlet boundary conditions [1]. Dirichlet succeeded Gauß as chair of advanced mathematics in Göttingen in 1855.

2 Carl Neumann was a German mathematician who introduced the boundary conditions commonly referred to as Neumann boundary conditions in 1865 [2].

1 Leonhard Euler was a Swiss mathematician and physicist who made major contributions to mathematics and is considered one of the most important mathematicians of all time. Among his many achievements is the solution of the so-called “Basel problem”, i.e., the determination of the value to which the sum of the reciprocals of the square numbers converges [3]. Euler also provided one of the most important methods for solving ODEs numerically, i.e., the so-called Euler method [4]. While studying differential equations and methods for solving them, Euler developed a transform which is nowadays known as the z-transform; this formed the basis of the Laplace transform [5]. In 1755, Euler derived a simplified form of the Navier-Stokes equation that we still use nowadays for studying flow problems in circular tubes. This equation is often referred to as the Euler equation [6].

2 Friedrich Bessel was a German mathematician who was interested in astrophysics. Bessel made many important contributions to physical mathematics and engineering mechanics. Among others, he developed the so-called Bessel functions, which solve certain types of differential equations that are common in many-body problems and astrophysics [7].

1 Paul Dirac was an English physicist who introduced the function that we refer to today as the Dirac delta function. He was the first to use this function for quantum physics in his seminal work dating from 1930 [8]. However, Fourier had used a similar function in his work on the Fourier series [9]. Dirac was the first to call this function the delta function and introduced the notation we still use today.

1 Oliver Heaviside was an English scientist. He introduced complex numbers into the study of electrical circuits and thus made important contributions to early electrical engineering as well as physics and signal processing.
