Chapter 4

Series

4.1 Introduction

4.1.1 Definition

In general a series sn is defined as a sum of a given number of terms ai such that

$$s_n = a_1 + a_2 + \dots + a_n = \sum_{i=1}^{n} a_i$$

If n = ∞, the series is said to be an infinite series. If n is finite, it is said to be a finite series.

4.1.2 Convergence

In most practical applications, a series will have to be evaluated up to a given upper bound n. The higher this bound is chosen, the more exact the solution, but also the more time-consuming, complicated and computationally expensive the solution will become. Therefore a suitable trade-off for n must be found.

However, some of the more important series actually sum up to a defined value. In such cases a series is said to be a convergent series. Obviously, convergent series are significantly easier to use as they can simply be replaced by the convergence value. The convergence value s is defined as

$$s = \lim_{n \to \infty} \sum_{i=1}^{n} a_i$$
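To make convergence tangible, here is a small Python sketch (illustrative only; the text itself uses Maple) that evaluates partial sums of the geometric series with q = 1/2, which converges to 1/(1 − q) = 2:

```python
# Illustrative sketch: partial sums s_n of the geometric series
# a_i = q^(i-1) with q = 1/2 approach the convergence value s = 1/(1-q) = 2.
def partial_sum(a, n):
    """Return s_n = a(1) + a(2) + ... + a(n)."""
    return sum(a(i) for i in range(1, n + 1))

def geometric(i, q=0.5):
    return q ** (i - 1)

for n in (5, 10, 20):
    print(n, partial_sum(geometric, n))  # approaches 2 as n grows
```

The trade-off discussed above is visible directly: each additional term buys a little more accuracy at a little more computational cost.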

Certain criteria can be used in order to test whether a series converges.

Ratio Test. The ratio test checks if

$$\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = q$$

If q < 1, the series converges; for q > 1, the series diverges. For q = 1, the test is inconclusive, i.e., it makes no statement about whether or not the series converges.

Root Test. The root test checks if

$$\lim_{n \to \infty} \sqrt[n]{|a_n|} = q$$

For q < 1, the series converges; for q > 1, the series diverges. For q = 1, the test is inconclusive, i.e., it makes no statement about whether or not the series converges.

Other Tests. Additional tests can be used to test a series for convergence. The interested reader may refer to standard textbooks on engineering mathematics for more details.
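As a rough numerical sketch (an illustration, not a proof: the tests are defined via limits, which we can only approximate), the ratio-test quotient can be estimated by evaluating |a(n+1)/a(n)| at a large n:

```python
import math

# Sketch: estimate the ratio-test limit q at a large n. For a_n = 1/n! the
# ratio tends to 0 < 1 (convergence); for a_n = 2^n / n it tends to 2 > 1
# (divergence).
def ratio_estimate(a, n=100):
    return abs(a(n + 1) / a(n))

q_conv = ratio_estimate(lambda n: 1 / math.factorial(n))
q_div = ratio_estimate(lambda n: 2.0 ** n / n)
print(q_conv, q_div)  # q_conv < 1, q_div > 1
```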

4.1.3 Important Series

In the following sections, we will occasionally have to evaluate series. The most important ones are summarized in Tab. 4.1 together with their convergence values.

Tab. 4.1

Important series and their convergence values

Series | Convergence value | Comment
$\sum_{n=1}^{\infty} q^{n-1}$ | $\frac{1}{1-q}$ | Geometric series, converges for $|q| < 1$
$\sum_{n=1}^{\infty} (-1)^{n+1} \frac{1}{n}$ | $\ln 2$ |
$\sum_{n=0}^{\infty} \frac{1}{n!}$ | $e$ | Sometimes referred to as Euler series
$\sum_{n=1}^{\infty} \frac{1}{n^s}$ | $\zeta(s)$ | Riemann zeta function $\zeta(s)$, converges for $s > 1$
$\sum_{n=0}^{\infty} \frac{1}{2n+1}$ | diverges |
$\sum_{n=0}^{\infty} \frac{1}{(2n+1)^2}$ | $\frac{1}{8}\pi^2$ |
$\sum_{n=0}^{\infty} \frac{1}{(2n+1)^3}$ | $\frac{7}{8}\zeta(3) \approx 1.0518$ |
$\sum_{n=0}^{\infty} \frac{1}{(2n+1)^4}$ | $\frac{1}{96}\pi^4$ |
$\sum_{n=0}^{\infty} \frac{1}{(2n+1)^5}$ | $\frac{31}{32}\zeta(5) \approx 1.0045$ |
$\sum_{n=0}^{\infty} \frac{1}{(2n+1)^6}$ | $\frac{1}{960}\pi^6$ |
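Two of the tabulated values can be checked numerically with a short Python sketch (the infinite sums are truncated at N terms, so the partial sums only approximate the limits):

```python
import math

# Numerical spot check of two entries in Tab. 4.1 (truncated at N terms).
N = 100_000
s2 = sum(1 / (2 * n + 1) ** 2 for n in range(N))  # should approach pi^2 / 8
s4 = sum(1 / (2 * n + 1) ** 4 for n in range(N))  # should approach pi^4 / 96
print(s2, math.pi ** 2 / 8)
print(s4, math.pi ** 4 / 96)
```

Note how the faster-decaying series (fourth power) gets much closer to its limit for the same number of terms.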

Riemann Zeta Function. We will look briefly at one very important series, which is the Riemann zeta function ζ(s). The Riemann zeta function is the generalized form of a whole group of series, several of which appear in Tab. 4.1. The case s = 2 is tied to the name of Euler, who found its convergence value π²/6.

The zeta function evaluates to real numbers, as shown in Fig. 4.1. As you can see, the function has a discontinuity at s = 1. The Riemann zeta function is often required when evaluating the convergence values of series, as higher-power series such as $\sum_{n=0}^{\infty} \frac{1}{(2n+1)^3}$ can be expressed in terms of the zeta function. More importantly, the convergence value can often be obtained as a function of the zeta function. The convergence value of $\sum_{n=0}^{\infty} \frac{1}{(2n+1)^3}$ is $\frac{7}{8}\zeta(3) \approx 1.0518$ (see Tab. 4.1) and can be conveniently evaluated using the zeta function.

Fig. 4.1 The Riemann zeta function ζ (s).

4.2 Taylor Series

4.2.1 Introduction

A Taylor series or a Taylor series expansion is an approximation of an analytical function using an infinite sum of terms, i.e., a polynomial. The idea that a function can be expressed as a sum of terms had been suggested before, but Taylor was the first to introduce a general method for calculating these terms.

Taylor series are very important tools in many areas of practical mathematics. In many applications, it is impossible to find a closed analytical expression of a function. In such scenarios, it may sometimes be significantly easier to approximate the function by a given number of terms, i.e., a polynomial. Obviously, if we are talking about an infinite sum, there would be an infinite number of terms in this polynomial one needs to take into account. However, in many scenarios, it is acceptable to consider only a finite number of terms and thus approximate the function taking into account that the result may be incorrect to a small degree. Obviously, the higher the number of terms considered, the smaller the error.

A Taylor series is, so to speak, the link between analytical and numerical calculus. Whenever exact analytical solutions to problems such as PDEs cannot be found, we often have no choice but to default to numerical approximations of the solution.

A number of prerequisites must be fulfilled in order to be able to work with a Taylor series. As this book does not intend to give a fully accounted introduction to Taylor series, we will restrict ourselves to a few easy-to-understand requirements. First of all, a Taylor series is always constructed at a given point of the function. We usually say that the Taylor series is “created around” or “expanded around” a given value a. If we move away from the point a, the Taylor series will give a less ideal approximation of the function. Therefore, in order to construct a function in a given domain, we may need to create many series at different points a1, a2, a3, and so on in order to obtain a good approximation. As we will see, the Taylor series expansion uses the derivatives of the function. Obviously one could argue, if we do not know the function, chances are we do not know anything about the derivatives. This is often not true. Many problems in practical physics and mathematics are described by (partial) differential equations. In these cases, we may know nothing of a function itself, i.e., the solution of the (partial) differential equation. However, we do know something about the (higher) derivatives.

From this concept arises one important prerequisite in order to be able to create a Taylor series: the function must not be too “funky” around the expansion point a. In this context, “funky” means that the function must be differentiable at a; otherwise, we would not be able to find the higher derivatives that we need in order to create the Taylor series. As long as this prerequisite is met, it is very likely that we are able to create a Taylor series. Obviously, in many cases (especially if we have no intuitive idea about the function we are looking for), we cannot be sure that the function really is differentiable. In these cases, it makes sense to carefully consider the physical problem that this differential equation describes. Would the physics really allow the function to be nondifferentiable at a given point? Can, e.g., the velocity really be infinite at a given point? If this is not the case, chances are that the function will behave somewhat predictably and we will be able to create the Taylor series.

4.2.2 Building a Taylor Series

A Taylor series of a function f (x) around a value a is given by

$$f(x) = f(a) + \frac{1}{1!}\frac{df(a)}{dx}(x-a)^1 + \frac{1}{2!}\frac{d^2f(a)}{dx^2}(x-a)^2 + \frac{1}{3!}\frac{d^3f(a)}{dx^3}(x-a)^3 + \dots = \sum_{n=0}^{\infty} \frac{1}{n!}\frac{d^n f(a)}{dx^n}(x-a)^n$$  (Eq. 4.1)

$$= \sum_{n=0}^{n_{max}} \frac{1}{n!}\frac{d^n f(a)}{dx^n}(x-a)^n + O_{n_{max}}$$  (Eq. 4.2)

Obviously, the higher the value of nmax, the better our approximation. However, in order to limit the computational effort, nmax is usually chosen to be a reasonably small number. This value is referred to as the expansion order. Depending on the expansion order, the Taylor series is correct only to a certain extent. The difference from the actual value of the function is given by the truncation error function $O_{n_{max}}$ of order nmax.

4.2.2.1 Rationale

Before seeing some examples of how a Taylor expansion actually works, we want to understand a little more of the process itself. If we look at Eq. 4.1, we see that the first term in the series is the value of the function at the expansion point a. A Taylor series is nothing other than a function that allows us to understand how much a function changes as we move away from a. The remaining terms of the expansion describe this: the relative change of the function. This first term is important because it defines the offset of the function.

The second expansion term uses the first derivative. It multiplies the value of this derivative with the distance by which x is away from a. The fraction term gives a certain weight to the contribution of the first derivative. Looking at this term, it is easy to understand why it is important to keep x − a very small: the derivative is a linearization of the slope of the function at the point a. The change as we move away from a must be proportional to this slope, which is the origin of this term.

But what do the other terms do? Suppose the function is a rather complicated function for which the derivative is not a constant. In this case, we would make significant mistakes even if x is very close to a. So the obvious solution to this problem is to correct the nonlinearity of the first derivative by finding the derivative to it. This corrects for nonlinearity if this second derivative is a constant. If not, we repeat this process over and over until the corrections finally become unnoticeable. As stated, the fraction terms give a certain weight to these higher derivatives because, obviously the nth correction is less important than the first correction.

As stated, the farther we move away from the point a, the worse the approximation of our function (usually) becomes. However, in many cases, expanding a Taylor series around a representative point may even be a very good approximation for the whole function.

4.2.2.2 Taylor Series Expansion of the Exponential Function

As a first example, we will derive the Taylor series expansion for the exponential function f (x) = ex. We will look at the expanded series at the expansion orders n = 1, n = 2, n = 3, and n = 5. We first expand the series around a = 0. According to Eq. 4.1 we find

$$\begin{aligned}
n = 1:\quad f(x) &= 1 \\
n = 2:\quad f(x) &= 1 + x \\
n = 3:\quad f(x) &= 1 + x + \tfrac{1}{2}x^2 \\
n = 5:\quad f(x) &= 1 + x + \tfrac{1}{2}x^2 + \tfrac{1}{6}x^3 + \tfrac{1}{24}x^4
\end{aligned}$$

The functions are plotted in Fig. 4.2. As you can see, for n = 5, we find a very nice approximation of the exponential function. As long as we evaluate the function only around the expansion point a = 0, we are not making significant mistakes. However, if we wanted to use our approximations in the range of x = −3 (see Fig. 4.3), we would be making huge mistakes. The remedy is to expand the Taylor series around a = −3, which leads to significantly different terms. According to Eq. 4.1, we find

Fig. 4.2 Taylor series expansion of the exponential function around a = 0. The displayed function is ex together with the Taylor expansion of the given expansion order n.
Fig. 4.3 The Taylor series expansion of the exponential function around a = 0 gives bad approximations at around x = –3

$$\begin{aligned}
n = 1:\quad f(x) &= e^{-3} \\
n = 2:\quad f(x) &= e^{-3} + e^{-3}(x+3) \\
n = 3:\quad f(x) &= e^{-3} + e^{-3}(x+3) + \tfrac{1}{2}e^{-3}(x+3)^2 \\
n = 5:\quad f(x) &= e^{-3} + e^{-3}(x+3) + \tfrac{1}{2}e^{-3}(x+3)^2 + \tfrac{1}{6}e^{-3}(x+3)^3 + \tfrac{1}{24}e^{-3}(x+3)^4
\end{aligned}$$

As you can see, these terms are different. When using these equations, we find very good approximations again (see Fig. 4.4).
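The effect of the expansion point can be verified with a short Python sketch (an illustration only; the text uses Maple). Since every derivative of the exponential function is the exponential function itself, the Taylor coefficients around a are simply exp(a)/n!:

```python
import math

# Sketch: evaluate the two expansions above at x = -3. Every derivative of
# exp is exp, so the coefficient of (x - a)^n is exp(a) / n!.
def taylor_exp(x, a, order):
    return sum(math.exp(a) / math.factorial(n) * (x - a) ** n
               for n in range(order))

x = -3.0
print(taylor_exp(x, 0.0, 5), taylor_exp(x, -3.0, 5), math.exp(x))
# the expansion around a = 0 is far off at x = -3; around a = -3 it is accurate
```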

Fig. 4.4 Taylor series expansion of the exponential function around a = −3. The displayed function is ex together with the Taylor expansion of the given expansion order n. As you can see, when expanding the series at the correct location, we will obtain significantly better fits.

4.2.2.3 Taylor Series Expansion of the Sine Function

As a final example, we will approximate the sine function using a Taylor series around a = 0. As we will see in a moment, we need higher expansion orders in order to approximate the sine function sufficiently. Here we will use n = 1, n = 3, n = 6, and n = 9, for which we find from Eq. 4.1

$$\begin{aligned}
n = 1:\quad f(x) &= 0 \\
n = 3:\quad f(x) &= 0 + x \\
n = 6:\quad f(x) &= 0 + x - \tfrac{1}{6}x^3 + \tfrac{1}{120}x^5 \\
n = 9:\quad f(x) &= 0 + x - \tfrac{1}{6}x^3 + \tfrac{1}{120}x^5 - \tfrac{1}{5040}x^7
\end{aligned}$$

Fig. 4.5 shows the results of the Taylor series. As we can see, with n = 9, we achieve a decent approximation around a = 0.

Fig. 4.5 Taylor series expansion of the sine function around a = 0. The displayed function is the sine function together with the Taylor expansion of the given expansion order n.
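The n = 9 polynomial above can be checked against the library sine in a few lines of Python (a sketch; accuracy degrades as we move away from the expansion point a = 0):

```python
import math

# Sketch of the n = 9 expansion: sin x ≈ x - x^3/6 + x^5/120 - x^7/5040.
def sin_taylor9(x):
    return x - x ** 3 / 6 + x ** 5 / 120 - x ** 7 / 5040

for x in (0.5, 1.0, 2.0):
    print(x, sin_taylor9(x), math.sin(x))  # close near a = 0, worse farther out
```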

4.2.3 Maple Listing

In this section, we will transfer what we have just discussed into a Maple worksheet in order to have a tool available to quickly calculate the Taylor series. The listing is shown in listing 4.1.

Listing 4.1

[Maple] Listing for Taylor series expansion. A digital version of the listing can be found under the name TaylorSeries.mw in the supplementary material.

1  restart:read "Core.txt":
2
3  #this function makes a taylor expansion up to the given expansion order
4  expandtaylor:=proc(f::function,variable,around,expansionorder)
5   return unapply(
6     sum(
7       eval(
8         diff(f,variable$(i-1))
9       ,variable=around)
10      /((i-1)!)
11      *(variable-around)^(i-1)
12    ,i=1..expansionorder)
13    ,variable);
14 end proc:
15
16 #expand the taylor series for the exponential function around a=0
17 expandtaylor(exp(x),x,0,1);expandtaylor(exp(x),x,0,2);expandtaylor(exp(x),x,0,3);expandtaylor(exp(x),x,0,5);
18
19 #expand the taylor series for the exponential function around a=-3
20 expandtaylor(exp(x),x,-3,1);expandtaylor(exp(x),x,-3,2);expandtaylor(exp(x),x,-3,3);expandtaylor(exp(x),x,-3,5);
21
22 #expand the taylor series for the sine function around a=0
23 expandtaylor(sin(x),x,0,1);expandtaylor(sin(x),x,0,3);expandtaylor(sin(x),x,0,6);expandtaylor(sin(x),x,0,9);
24
25 #plot the expanded taylor series for the exponential function around 0
26 quickplot([x->exp(x),expandtaylor(exp(x),x,0,1)],-1..1,"x",typeset(e^x),[typeset(e^x),n=1]);
27 quickplot([x->exp(x),expandtaylor(exp(x),x,0,2)],-1..1,"x",typeset(e^x),[typeset(e^x),n=2]);
28 quickplot([x->exp(x),expandtaylor(exp(x),x,0,3)],-1..1,"x",typeset(e^x),[typeset(e^x),n=3]);
29 quickplot([x->exp(x),expandtaylor(exp(x),x,0,5)],-1..1,"x",typeset(e^x),[typeset(e^x),n=5]);
30
31 #however see that our series gives a bad approximation for x=-3 and below
32 quickplot([x->exp(x),expandtaylor(exp(x),x,0,6)],-5..1,"x",typeset(e^x),[typeset(e^x),n=6]);
33
34 #if we expand the series around x=-3 the fit becomes much better
35 quickplot([x->exp(x),expandtaylor(exp(x),x,-3,1)],-5..1,"x",typeset(e^x),[typeset(e^x),n=1]);
36 quickplot([x->exp(x),expandtaylor(exp(x),x,-3,3)],-5..1,"x",typeset(e^x),[typeset(e^x),n=3]);
37 quickplot([x->exp(x),expandtaylor(exp(x),x,-3,5)],-5..1,"x",typeset(e^x),[typeset(e^x),n=5]);
38 quickplot([x->exp(x),expandtaylor(exp(x),x,-3,7)],-5..1,"x",typeset(e^x),[typeset(e^x),n=7]);
39
40 #next we approximate the sine function around a=0
41 quickplot([x->sin(x),expandtaylor(sin(x),x,0,1)],-Pi..Pi,"x",sin(x),[sin(x),n=1]);
42 quickplot([x->sin(x),expandtaylor(sin(x),x,0,3)],-Pi..Pi,"x",sin(x),[sin(x),n=3]);
43 quickplot([x->sin(x),expandtaylor(sin(x),x,0,6)],-Pi..Pi,"x",sin(x),[sin(x),n=6]);
44 quickplot([x->sin(x),expandtaylor(sin(x),x,0,9)],-Pi..Pi,"x",sin(x),[sin(x),n=9]);

We will briefly walk through the listing. In line 1 we restart the Maple server and include Core.txt. In line 4 we define the function expandtaylor, which performs a Taylor expansion. This function expects four arguments: the function f to expand, the independent variable variable to use during expansion, the value around around which the series is expanded, and finally the expansion order expansionorder.

The function first calculates the (i − 1)th derivative of the function f with respect to the independent variable (see line 8) and evaluates this derivative at the point around (see line 7). This value is then divided by the factorial (i − 1)! (see line 10) and multiplied by (variable − around)^(i−1) (see line 11). These terms are summed using the function sum up to the value given by expansionorder (see line 12). We use (i − 1) as the index in the sum in order to also include the constant first term.

Finally, the obtained term is turned into a function of the independent variable variable using the function unapply (see line 5). This function performs all the magic required. The following lines in the listing merely use this function and output its results.

In line 17 we create the Taylor expansion for ex around 0 with the expansion orders n = 1, n = 2, n = 3, and n = 5. This corresponds to the terms we already discussed. In line 20 we repeat the expansion around −3, and in line 23 we expand the sine function around 0. All of the terms that Maple will output have already been discussed. The code starting from line 26 creates the plots shown in Fig. 4.2, line 32 and the following lines produce Fig. 4.3, the code starting from line 35 produces Fig. 4.4, and finally line 41 and the subsequent lines produce Fig. 4.5.

As you can see, it is not very difficult to use these functions. Feel free to modify them and evaluate different functions.
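For readers without Maple, a rough Python analogue of expandtaylor can be sketched. Python's standard library has no symbolic differentiation, so this sketch estimates the derivatives numerically with nested central differences (an assumption of this version, not part of the original worksheet), which is only reliable for low expansion orders:

```python
import math

# Rough Python analogue of the Maple expandtaylor (a sketch, not the book's
# code): derivatives are estimated by nested central differences, so only
# low expansion orders are numerically reliable.
def nth_derivative(f, x, n, dx=1e-2):
    if n == 0:
        return f(x)
    return (nth_derivative(f, x + dx, n - 1, dx)
            - nth_derivative(f, x - dx, n - 1, dx)) / (2 * dx)

def expand_taylor(f, around, order):
    coeffs = [nth_derivative(f, around, n) / math.factorial(n)
              for n in range(order)]
    return lambda x: sum(c * (x - around) ** n for n, c in enumerate(coeffs))

approx = expand_taylor(math.exp, 0.0, 5)
print(approx(0.1), math.exp(0.1))
```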

4.2.4 Finite Changes as Taylor Series

We will now discuss a particularly useful aspect of the Taylor series, i.e., their ability to express finite changes in the function we are approximating. We already discussed that in many scenarios we may know something about the changes, i.e., the derivatives of a function but not of the function itself. In these cases, Taylor series expansions become very useful as they allow, as we will see in a moment, deriving the value of a function at a given point x if we know the values of the derivative of the function at the point x. These values may often be read directly from the (partial) differential equation.

4.2.4.1 Infinitesimal Positive Change

Suppose we have expanded the Taylor series around a point a and now want to know how much the function changes as we move from a to a + ∆x. In this case, Eq. 4.1 becomes

$$\begin{aligned}
f(a + \Delta x) &= f(a) + \frac{1}{1!}\frac{df}{dx}(a + \Delta x - a)^1 + \frac{1}{2!}\frac{d^2f}{dx^2}(a + \Delta x - a)^2 + \frac{1}{3!}\frac{d^3f}{dx^3}(a + \Delta x - a)^3 + \dots \\
&= f(a) + \frac{1}{1!}\frac{df}{dx}(\Delta x)^1 + \frac{1}{2!}\frac{d^2f}{dx^2}(\Delta x)^2 + \frac{1}{3!}\frac{d^3f}{dx^3}(\Delta x)^3 + \dots \\
&= \sum_{n=0}^{\infty} \frac{1}{n!}\frac{d^n f(a)}{dx^n}(\Delta x)^n = \sum_{n=0}^{n_{max}} \frac{1}{n!}\frac{d^n f(a)}{dx^n}(\Delta x)^n + O_{n_{max}+1}
\end{aligned}$$  (Eq. 4.3)

Note that nothing spectacular has happened here. We now simply have an idea how the function changes as we move the infinitesimal distance ∆x in the positive direction.

4.2.4.2 Infinitesimal Negative Change

In analogy, we now determine the Taylor series as we move from a by the infinitesimally small distance ∆x into the negative direction, i.e., to a − ∆x. In this case Eq. 4.1 becomes

$$\begin{aligned}
f(a - \Delta x) &= f(a) + \frac{1}{1!}\frac{df}{dx}(a - \Delta x - a)^1 + \frac{1}{2!}\frac{d^2f}{dx^2}(a - \Delta x - a)^2 + \frac{1}{3!}\frac{d^3f}{dx^3}(a - \Delta x - a)^3 + \dots \\
&= f(a) - \frac{1}{1!}\frac{df}{dx}(\Delta x)^1 + \frac{1}{2!}\frac{d^2f}{dx^2}(\Delta x)^2 - \frac{1}{3!}\frac{d^3f}{dx^3}(\Delta x)^3 + \dots \\
&= \sum_{n=0}^{\infty} (-1)^n \frac{1}{n!}\frac{d^n f(a)}{dx^n}(\Delta x)^n = \sum_{n=0}^{n_{max}} (-1)^n \frac{1}{n!}\frac{d^n f(a)}{dx^n}(\Delta x)^n + O_{n_{max}+1}
\end{aligned}$$  (Eq. 4.4)

As you can see, Eq. 4.3 and Eq. 4.4 are almost identical. However, in Eq. 4.4 the odd-order terms carry negative signs. This alternation is the basic concept that we will use in the next step.

4.2.4.3 Deriving the First Derivative: Central Finite Difference

Now we will simply subtract Eq. 4.4 from Eq. 4.3. In this case, we find

$$\begin{aligned}
f(a + \Delta x) - f(a - \Delta x) &= f(a) + \frac{1}{1!}\frac{df}{dx}(\Delta x)^1 + \frac{1}{2!}\frac{d^2f}{dx^2}(\Delta x)^2 + \frac{1}{3!}\frac{d^3f}{dx^3}(\Delta x)^3 + \dots \\
&\quad - \left( f(a) - \frac{1}{1!}\frac{df}{dx}(\Delta x)^1 + \frac{1}{2!}\frac{d^2f}{dx^2}(\Delta x)^2 - \frac{1}{3!}\frac{d^3f}{dx^3}(\Delta x)^3 + \dots \right) \\
&= 2\left( \frac{1}{1!}\frac{df}{dx}(\Delta x)^1 + \frac{1}{3!}\frac{d^3f}{dx^3}(\Delta x)^3 + \dots \right) \\
&= 2 \sum_{n=0}^{\infty} \frac{1}{(2n+1)!}\frac{d^{2n+1} f(a)}{dx^{2n+1}}(\Delta x)^{2n+1}
\end{aligned}$$  (Eq. 4.5)

In many applications, especially in numerical simulations, Eq. 4.5 is expanded no farther than its first term, in which case we find

$$\begin{aligned}
f(a + \Delta x) - f(a - \Delta x) &= 2\frac{df}{dx}\Delta x \\
\frac{df}{dx} &= \frac{f(a + \Delta x) - f(a - \Delta x)}{2\,\Delta x} \\
\frac{df}{dx} &= \frac{f(x + \Delta x) - f(x - \Delta x)}{2\,\Delta x} + O_3
\end{aligned}$$  (Eq. 4.6)

where we have replaced the expansion point a with the variable x. Eq. 4.6 is referred to as the central finite difference approximation. It allows deriving the first derivative of a function if we know the values of the function at x + ∆x and x − ∆x, i.e., at two points centered around x, which is where the name comes from.

As the formula stands, it does nothing but calculate the slope.
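Eq. 4.6 translates directly into a few lines of Python (a sketch for illustration; the step size ∆x must be small but not so small that floating-point cancellation dominates):

```python
import math

# Sketch of Eq. 4.6 for f = sin at x = 1, where the exact slope is cos(1).
def central_diff(f, x, dx=1e-5):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

print(central_diff(math.sin, 1.0), math.cos(1.0))
```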

4.2.4.4 Deriving the First Derivative: Forward and Backward Finite Difference

We can derive a second method for putting into relation the first derivative and the values of the function at two given points. For this we consider Eq. 4.3 and expand the series only to n = 1, in which case we find

$$\begin{aligned}
f(a + \Delta x) &= f(a) + \frac{1}{1!}\frac{df}{dx}\Delta x \\
\frac{df}{dx} &= \frac{f(a + \Delta x) - f(a)}{\Delta x} \\
\frac{df}{dx} &= \frac{f(x + \Delta x) - f(x)}{\Delta x} + O_2
\end{aligned}$$  (Eq. 4.7)

where we have again replaced a with x as the independent variable. Eq. 4.7 is referred to as the forward finite difference approximation. It allows deriving the first derivative of a function if we know the values of the function at x and x + ∆x (and vice versa).

We can perform the same operation starting from Eq. 4.4, in which case we find (again expanding only to n = 1)

$$\begin{aligned}
f(a - \Delta x) &= f(a) - \frac{1}{1!}\frac{df}{dx}\Delta x \\
\frac{df}{dx} &= \frac{f(a) - f(a - \Delta x)}{\Delta x} \\
\frac{df}{dx} &= \frac{f(x) - f(x - \Delta x)}{\Delta x} + O_2
\end{aligned}$$  (Eq. 4.8)

where we have again replaced a with x as the independent variable. Eq. 4.8 is referred to as the backward finite difference approximation. It allows deriving the first derivative of a function if we know the values of the function at x and x − ∆x (and vice versa).
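Both one-sided formulas can be sketched in Python just as compactly (illustrative only; they are first-order accurate, so for the same step ∆x their error is larger than that of the central difference):

```python
import math

# Sketch of Eq. 4.7 and Eq. 4.8 for f = exp at x = 0 (exact derivative: 1).
def forward_diff(f, x, dx=1e-5):
    return (f(x + dx) - f(x)) / dx

def backward_diff(f, x, dx=1e-5):
    return (f(x) - f(x - dx)) / dx

print(forward_diff(math.exp, 0.0), backward_diff(math.exp, 0.0))
```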

4.2.4.5 Deriving the Second Derivative

In the next step, we will add Eq. 4.3 and Eq. 4.4. Subtraction resulted in the central finite difference approximation of the first derivative. Interestingly, something quite different happens in the case of addition. Here we find

$$\begin{aligned}
f(a + \Delta x) + f(a - \Delta x) &= f(a) + \frac{1}{1!}\frac{df}{dx}(\Delta x)^1 + \frac{1}{2!}\frac{d^2f}{dx^2}(\Delta x)^2 + \frac{1}{3!}\frac{d^3f}{dx^3}(\Delta x)^3 + \dots \\
&\quad + f(a) - \frac{1}{1!}\frac{df}{dx}(\Delta x)^1 + \frac{1}{2!}\frac{d^2f}{dx^2}(\Delta x)^2 - \frac{1}{3!}\frac{d^3f}{dx^3}(\Delta x)^3 + \dots \\
&= 2\left( f(a) + \frac{1}{2!}\frac{d^2f}{dx^2}(\Delta x)^2 + \frac{1}{4!}\frac{d^4f(a)}{dx^4}(\Delta x)^4 + \dots \right) \\
&= 2\left( f(a) + \sum_{n=1}^{\infty} \frac{1}{(2n)!}\frac{d^{2n} f(a)}{dx^{2n}}(\Delta x)^{2n} \right)
\end{aligned}$$  (Eq. 4.9)

Again, we will expand the series only to the first term (n = 1, in this case as the series does not start at 0), in which case we find

$$\begin{aligned}
f(a + \Delta x) + f(a - \Delta x) &= 2 f(a) + \frac{d^2f}{dx^2}(\Delta x)^2 \\
\frac{d^2f}{dx^2} &= \frac{f(a + \Delta x) + f(a - \Delta x) - 2 f(a)}{(\Delta x)^2} \\
\frac{d^2f}{dx^2} &= \frac{f(x + \Delta x) + f(x - \Delta x) - 2 f(x)}{(\Delta x)^2} + O_3
\end{aligned}$$  (Eq. 4.10)

where we have again replaced a by x as the independent variable. In this case, we have found a correlation between the second derivative and the values of the function at x, x + ∆x, and x − ∆x. If we know these function values, we can calculate the value of the second derivative. Equally, if we know the second derivative at x together with the values of the function at two of these points, we can calculate the remaining one.
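Eq. 4.10 can be sketched in Python as well (an illustration; here ∆x = 10⁻⁴ balances truncation error against floating-point cancellation):

```python
import math

# Sketch of Eq. 4.10 for f = sin at x = 1; the exact second derivative is -sin(1).
def second_central_diff(f, x, dx=1e-4):
    return (f(x + dx) + f(x - dx) - 2 * f(x)) / dx ** 2

print(second_central_diff(math.sin, 1.0), -math.sin(1.0))
```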

4.2.4.6 Implications

We have now derived four very important equations that allow us to derive the first derivative (see Eq. 4.6, Eq. 4.7, and Eq. 4.8) and the second derivative (see Eq. 4.10) of a function at the location x if we know the values of the function at x + ∆x and x − ∆x. More importantly, however, these functions also enable us to do the reverse process: if we know the values of the derivatives at the location x, we are able to derive the change in the actual function between x + ∆x and x − ∆x. If we know the value of the function at a boundary (usually because a boundary condition is given), we can calculate the changes from this absolute value if we know one of the derivatives.

As we will see, in many problems in practical physics and especially fluid mechanics, we often know something about a function’s second derivative because the most important PDE in fluid mechanics, the Navier-Stokes equation, is a second-order PDE (see Eq. 11.40) of the velocity. In many cases, we want to derive the velocity, which is why finding equations that relate the second derivative of a function and the function itself is very important.

It is important to keep one thing in mind here. As we have seen, a Taylor series expansion only does a good job approximating a function around the value it was expanded around. Therefore, we will need to make sure that the values for ∆x we are working with are very small. Obviously, the higher the local changes in the functions become, the less precise a Taylor series approximation that takes into account only the linear terms (as do Eq. 4.6, Eq. 4.7, Eq. 4.8, and Eq. 4.10) becomes.

4.2.5 Useful Taylor Series

There are several noteworthy and handy Taylor series. Tab. 4.2 summarizes the most commonly occurring series.

Tab. 4.2

Useful Taylor series. B2n is the 2nth Bernoulli number (see Tab. 4.3)

Function | Taylor series | Expanded series for nmax = 2 | Comment | Reference
$\sin x$ | $\sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!}$ | $\sin x \approx x - \frac{x^3}{6} + \frac{x^5}{120}$ | | Eq. 4.11
$\cos x$ | $\sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!}$ | $\cos x \approx 1 - \frac{x^2}{2} + \frac{x^4}{24}$ | | Eq. 4.12
$\tan x$ | $\sum_{n=1}^{\infty} \frac{B_{2n}(-4)^n(1-4^n)}{(2n)!} x^{2n-1}$ | $\tan x \approx x + \frac{x^3}{3} + \frac{2}{15}x^5$ | $|x| < \frac{\pi}{2}$ | Eq. 4.13
$\arcsin x$ | $\sum_{n=0}^{\infty} \frac{(2n)!}{2^{2n}(n!)^2(2n+1)} x^{2n+1}$ | $\arcsin x \approx x + \frac{x^3}{6} + \frac{3}{40}x^5$ | $|x| \leq 1$ | Eq. 4.14
$\arccos x$ | $\frac{\pi}{2} - \arcsin x$ | $\arccos x \approx \frac{\pi}{2} - \left(x + \frac{x^3}{6} + \frac{3}{40}x^5\right)$ | $|x| \leq 1$ | Eq. 4.15
$\arctan x$ | $\sum_{n=0}^{\infty} (-1)^n \frac{1}{2n+1} x^{2n+1}$ | $\arctan x \approx x - \frac{x^3}{3} + \frac{x^5}{5}$ | $|x| \leq 1$ | Eq. 4.16
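Two of the truncations in Tab. 4.2 can be spot-checked with a short Python sketch (illustrative; close to x = 0 the truncated polynomials nearly match the library functions):

```python
import math

# Sketch: evaluate two truncations from Tab. 4.2 at x = 0.1.
x = 0.1
sin_approx = x - x ** 3 / 6 + x ** 5 / 120
arctan_approx = x - x ** 3 / 3 + x ** 5 / 5
print(sin_approx, math.sin(x))
print(arctan_approx, math.atan(x))
```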

Tab. 4.3

The Bernoulli numbers B0 to B16

Index | Number
0 | $1$
1 | $\pm\frac{1}{2}$
2 | $\frac{1}{6}$
3 | $0$
4 | $-\frac{1}{30}$
5 | $0$
6 | $\frac{1}{42}$
7 | $0$
8 | $-\frac{1}{30}$
9 | $0$
10 | $\frac{5}{66}$
11 | $0$
12 | $-\frac{691}{2730}$
13 | $0$
14 | $\frac{7}{6}$
15 | $0$
16 | $-\frac{3617}{510}$


4.3 Fourier Series

4.3.1 Introduction

After discussing the Taylor series, we now look at the second important series we will require, the Fourier series. Just as the Taylor series can be used to approximate analytical functions by a sum of individual terms, i.e., a polynomial, the Fourier series approximates a function by a sum of sine and cosine functions.

Generally speaking, a Fourier series is the counterpart of a Taylor series: it uses sine and cosine terms, whereas the Taylor series uses polynomial terms. A good example is given in section 4.2.2.3 where we approximated the sine function using a polynomial only. Given that a Fourier series uses sine and cosine terms, it is best used to approximate a periodic function, i.e., a function for which f(x) = f(x + 2π). However, as we will see, many functions that we approximate using a Fourier series are not actually periodic. We may simply expand them into a Fourier series, thus making them periodic, even though we are only interested in the function in the first interval, thus ignoring the periodicity. We will look at an example in a moment.

This periodicity gives rise to another important difference between the Taylor and Fourier series. A Taylor series always needs to be expanded around a point and is a good approximation of the function only near this expansion point. A Fourier series on the other hand will approximate the function in the whole domain in which it is defined. Therefore it approximates the function well for any point of the domain for which it was defined.

4.3.2 Building a Fourier Series

A Fourier series of a function f (x) is given by

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right)$$  (Eq. 4.17)

$$= \frac{a_0}{2} + \sum_{n=1}^{n_{max}} \left( a_n \cos(nx) + b_n \sin(nx) \right) + O_{n_{max}}$$  (Eq. 4.18)

with

$$a_0 = \frac{1}{\pi} \int_0^{2\pi} f(x)\,dx$$  (Eq. 4.19)

$$a_n = \frac{1}{\pi} \int_0^{2\pi} f(x) \cos(nx)\,dx$$  (Eq. 4.20)

$$b_n = \frac{1}{\pi} \int_0^{2\pi} f(x) \sin(nx)\,dx$$  (Eq. 4.21)

Just as for the Taylor series, a Fourier series is usually not expanded to infinity (see Eq. 4.17) but is truncated after a given expansion order nmax, as indicated in Eq. 4.18. This gives rise to the truncation error function $O_{n_{max}}$ of order nmax. Obviously, the higher the expansion order, the better the approximation.

4.3.2.1 Rationale

Again, before seeing some examples of the Fourier series, we want to understand how the Fourier series expansion actually works. First, the Fourier series uses three types of coefficients: a0, an, and bn. The coefficient a0 is obtained if n = 0 is applied to the term an cos(nx) + bn sin(nx). In this case, the sine function is 0 and the cosine function is 1, in which case the term a0 is obtained from Eq. 4.20. This is why this constant is given as a0 and not b0. If we look at Eq. 4.17, we see that the function is thus approximated with sine and cosine functions of increasing frequency. For n = 1, we obtain the functions cos x and sin x; for n = 2, we obtain cos(2x) and sin(2x); and so on.

The cosine and the sine functions are “weighted” with the constants an and bn, respectively. They determine how well a given sine or cosine function at the given frequency will approximate the function. If you look carefully at Eq. 4.20 and Eq. 4.21, you will see why. These functions effectively multiply f (x), the function we want to approximate, with the sine or cosine function of the given frequency. Now, if this specific sine or cosine approximates the function well, they will overlap, which is why the total surface area of the product of these two functions should be a high value that is either positive or negative at most points of the domain (see Fig. 4.6a). If we take the integral of such a function, we expect a large value. As this integral gives rise to an and bn, we expect these weighting factors to be high (see Eq. 4.20 and Eq. 4.21). On the other hand, if the sine or cosine function does a poor job of approximating the function, the product of the two will either be essentially 0 or feature positive and negative regions. If we take the integral of such a function, we expect a significantly smaller integral and thus a smaller weighting factor (see Fig. 4.6b).

Fig. 4.6 Graphical explanation of the Fourier series weighting factors an and bn. A square wave function is approximated by a sine and a cosine function. a) The sine function matches the square wave function, and the two functions overlap largely, thereby the product of the functions is positive in the entire domain. As a consequence, the weighting factor b1 (we are looking at sin x) will be large. b) The cosine function does a poor job in matching the square wave function. Therefore the product of the functions is alternatingly positive and negative with the two balancing out. Therefore the integral (from which a1 is calculated) is 0. Therefore the Fourier series approximation will not use this cosine function.

4.3.2.2 Fourier Series Expansion of the Square Wave Function

We have already briefly introduced the square wave function as a good example of how the Fourier series expansion works (see Fig. 4.6). In this section, we perform the Fourier series expansion of the square wave function, which is defined as

$$f_{square}(x) = \begin{cases} 1 & x < \pi \\ -1 & \text{otherwise} \end{cases}$$  (Eq. 4.22)

Now let us, as an example, expand the Fourier series for this function up to n = 1. From Eq. 4.19 we find

$$a_0 = \frac{1}{\pi} \int_0^{2\pi} f(x)\,dx = \frac{1}{\pi} \left( \int_0^{\pi} 1\,dx + \int_{\pi}^{2\pi} (-1)\,dx \right) = \frac{1}{\pi} \left( \int_0^{\pi} 1\,dx - \int_{\pi}^{2\pi} 1\,dx \right) = 0$$

Furthermore, we find from Eq. 4.20 for n = 1

$$a_1 = \frac{1}{\pi} \int_0^{2\pi} f(x) \cos x\,dx = \frac{1}{\pi} \left( \int_0^{\pi} \cos x\,dx - \int_{\pi}^{2\pi} \cos x\,dx \right) = \frac{1}{\pi} \left( (\sin\pi - \sin 0) - (\sin 2\pi - \sin\pi) \right) = 0$$

which tells us that a1 = 0. In fact, we find that an = 0 for all n, which leaves only the terms bn as relevant terms. From Eq. 4.21 we find for n = 1

$$b_1 = \frac{1}{\pi} \int_0^{2\pi} f(x) \sin x\,dx = \frac{1}{\pi} \left( \int_0^{\pi} \sin x\,dx - \int_{\pi}^{2\pi} \sin x\,dx \right) = \frac{1}{\pi} \left( -(\cos\pi - \cos 0) + (\cos 2\pi - \cos\pi) \right) = \frac{4}{\pi}$$
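The value b1 = 4/π can be confirmed numerically by approximating the integral in Eq. 4.21 with a midpoint Riemann sum (a sketch; the step count is an arbitrary choice):

```python
import math

# Sketch: approximate Eq. 4.21 with a midpoint Riemann sum for the square
# wave, confirming b1 = 4/pi and that the even coefficients vanish.
def b_n(f, n, steps=20000):
    dx = 2 * math.pi / steps
    return dx / math.pi * sum(f((k + 0.5) * dx) * math.sin(n * (k + 0.5) * dx)
                              for k in range(steps))

def square(x):
    return 1.0 if x < math.pi else -1.0

print(b_n(square, 1), 4 / math.pi)
print(b_n(square, 2))  # ~0
```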

So in total we find for the Fourier expansion of the square wave function up to n = 1

$$f_{square}(x) \approx \frac{4}{\pi}\sin x$$

Higher order expansions can be calculated likewise. For these we find

$$\begin{aligned}
n = 1:\quad f_{square}(x) &\approx \frac{4\sin x}{\pi} \\
n = 3:\quad f_{square}(x) &\approx \frac{4\sin x}{\pi} + \frac{4\sin(3x)}{3\pi} \\
n = 5:\quad f_{square}(x) &\approx \frac{4\sin x}{\pi} + \frac{4\sin(3x)}{3\pi} + \frac{4\sin(5x)}{5\pi} \\
n = 7:\quad f_{square}(x) &\approx \frac{4\sin x}{\pi} + \frac{4\sin(3x)}{3\pi} + \frac{4\sin(5x)}{5\pi} + \frac{4\sin(7x)}{7\pi} \\
n = 9:\quad f_{square}(x) &\approx \frac{4\sin x}{\pi} + \frac{4\sin(3x)}{3\pi} + \frac{4\sin(5x)}{5\pi} + \frac{4\sin(7x)}{7\pi} + \frac{4\sin(9x)}{9\pi}
\end{aligned}$$

The expansion functions together with the square wave function are shown in Fig. 4.7. As you can see, an expansion to n = 1 gives only a very vague approximation of the square wave function. If more and more sine functions of higher orders are added, the approximation becomes gradually better until for n = 9 one can already see the square wave function quite clearly.

Fig. 4.7 Fourier series expansion of the square wave function. The displayed function is the square wave function together with the Fourier expansion of the given expansion order n.

Gibbs Phenomenon. As you can see in Fig. 4.7, the Fourier series expansion of a piecewise continuous function tends to form periodic oscillations at locations of discontinuities. This phenomenon is named after Willard Gibbs, who described it in 1899 [5].
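The overshoot can be observed in a short Python sketch (illustrative; the scan step of 0.001 near the jump is an arbitrary choice):

```python
import math

# Sketch: the n = 9 partial sum of the square-wave series overshoots near
# the jump at x = pi (the Gibbs phenomenon).
def square_series(x, n_max=9):
    return sum(4 / (n * math.pi) * math.sin(n * x)
               for n in range(1, n_max + 1, 2))

peak = max(square_series(math.pi - 0.001 * k) for k in range(1, 500))
print(peak)  # clearly above the function value of 1
```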

4.3.2.3 Fourier Series Expansion of a Triangular-Like Function

As a next example, we will look at a somewhat odd function that vaguely resembles a triangular function. The function is given by

$$f_{triang}(x) = \begin{cases} \frac{1}{\pi}x & x < \pi \\ 1 - \frac{1}{\pi}x & \text{otherwise} \end{cases}$$  (Eq. 4.23)

We can repeat the indicated process to determine the coefficients a0, an, and bn. The Fourier series expansions to a given expansion order n can then be found to be

$$\begin{aligned}
n = 1:\quad f_{triang}(x) &\approx -\frac{4\cos x}{\pi^2} + \frac{2\sin x}{\pi} \\
n = 3:\quad f_{triang}(x) &\approx -\frac{4\cos x}{\pi^2} + \frac{2\sin x}{\pi} - \frac{4\cos(3x)}{9\pi^2} + \frac{2\sin(3x)}{3\pi} \\
n = 5:\quad f_{triang}(x) &\approx \dots - \frac{4\cos(5x)}{25\pi^2} + \frac{2\sin(5x)}{5\pi} \\
n = 7:\quad f_{triang}(x) &\approx \dots - \frac{4\cos(7x)}{49\pi^2} + \frac{2\sin(7x)}{7\pi} \\
n = 9:\quad f_{triang}(x) &\approx \dots - \frac{4\cos(9x)}{81\pi^2} + \frac{2\sin(9x)}{9\pi}
\end{aligned}$$

where each expansion contains all terms of the previous one.

The expansion of the Fourier series is shown in Fig. 4.8. As you can see, at expansion orders around or above n = 9, we can approximate the function to a very high degree.

Fig. 4.8 Fourier series expansion of the triangular-like function. The displayed function is the triangular-like function together with the Fourier expansion of the given expansion order n.

4.3.2.4 Discontinuities

When you carefully look at both cases we just discussed, you will notice one important point: both functions have discontinuities, i.e., sharp steps at which the function cannot be differentiated. We briefly touched upon differentiability when discussing the Taylor series, for which we stated that the function must be continuous and the derivatives must be defined in the domain of interest; otherwise, the Taylor expansion will fail. This is not the case for the Fourier series. In fact, if a function has a discontinuity at x0 and the values of the function when approaching “from the left side” (often written as f (x → x0−)) and “from the right side” (written as f (x → x0+)) are not equal, the Fourier series will deliver the arithmetic mean of the two values. This is interesting and can be clearly seen, e.g., when looking at Fig. 4.7. The discontinuity here is at x = π, the two limit values are 1 and −1, and the Fourier series expansion has the value 0 there.
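This mean-value property can be illustrated numerically; a small sketch in Python (the helper name `square_partial` is ours, not from the text):

```python
import math

def square_partial(x, n_max):
    # partial Fourier sum of the square wave (odd harmonics only)
    return sum(4 / (n * math.pi) * math.sin(n * x)
               for n in range(1, n_max + 1, 2))

# at the jump x = pi every term contains sin(n*pi) = 0, so the partial sum
# sits exactly at the arithmetic mean of the limits +1 and -1, i.e. at 0
print(square_partial(math.pi, 99))
```

Regardless of the expansion order, the value at the discontinuity stays at the mean of the two one-sided limits.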

This is a very important property of the Fourier series. This series is stable even if the function does something very “funky” within the domain in which we are investigating it.

4.3.2.5 Maple Listing for the Fourier Expansion of f (x)

Just as we did for the Taylor series, we will now convert the equations we found into a Maple worksheet that will allow us to create Fourier series. The listing is shown in listing 4.2.

Listing 4.2

[Maple] Listing for Fourier series expansion of periodic functions. A digital version of the listing can be found under the name FourierSeries.mw in the supplementary material.

1  restart:read "Core.txt":
2
3  #makes a Fourier expansion
4  expandfourier:=proc(f,variable,expansionorder) local i;
5   return unapply(
6     1/(2*Pi)*int(f,variable=0..2*Pi)
7     + sum(
8         1/Pi*int(f*cos(i*variable),variable=0..2*Pi)*cos(i*variable)
9       + 1/Pi*int(f*sin(i*variable),variable=0..2*Pi)*sin(i*variable)
10    ,i=1..expansionorder),variable);
11 end proc:
12
13 #make a square wave function
14 f_square:=x->piecewise(x<Pi,1,x>=Pi,-1):
15
16 #make the Fourier expansion
17 expandfourier(f_square(x),x,1);
18 expandfourier(f_square(x),x,3);
19 expandfourier(f_square(x),x,5);
20 expandfourier(f_square(x),x,7);
21 expandfourier(f_square(x),x,9);
22
23 #and plot us some functions
24 quickplot([expandfourier(f_square(x),x,1),x->f_square(x)],0..2*Pi,-2..2,"x","y",["n=1","square function"]);
25 quickplot([expandfourier(f_square(x),x,3),x->f_square(x)],0..2*Pi,-2..2,"x","y",["n=3","square function"]);
26 quickplot([expandfourier(f_square(x),x,5),x->f_square(x)],0..2*Pi,-2..2,"x","y",["n=5","square function"]);
27 quickplot([expandfourier(f_square(x),x,9),x->f_square(x)],0..2*Pi,-2..2,"x","y",["n=9","square function"]);
28
29 #next we look at a quasi-triangle function
30 f_triangle:=x->piecewise(x<Pi,x/Pi,x>=Pi,1-x/Pi):
31
32 #make the Fourier expansion
33 expandfourier(f_triangle(x),x,1);
34 expandfourier(f_triangle(x),x,3);
35 expandfourier(f_triangle(x),x,5);
36 expandfourier(f_triangle(x),x,7);
37 expandfourier(f_triangle(x),x,9);
38
39 #and plot us some functions
40 quickplot([expandfourier(f_triangle(x),x,1),x->f_triangle(x)],0..2*Pi,-2..2,"x","y",["n=1","triang. function"]);
41 quickplot([expandfourier(f_triangle(x),x,3),x->f_triangle(x)],0..2*Pi,-2..2,"x","y",["n=3","triang. function"]);
42 quickplot([expandfourier(f_triangle(x),x,5),x->f_triangle(x)],0..2*Pi,-2..2,"x","y",["n=5","triang. function"]);
43 quickplot([expandfourier(f_triangle(x),x,9),x->f_triangle(x)],0..2*Pi,-2..2,"x","y",["n=9","triang. function"]);

We will briefly walk through the listing. In line 1 we restart the Maple server and include Core.txt. Next we define the procedure that performs the Fourier series expansion (see line 4). It takes three arguments: the function f to expand, the independent variable variable, and the expansion order expansionorder. The procedure first calculates the coefficients a0 (see line 6), an (see line 8), and bn (see line 9) and forms their products with the cosine and sine functions, respectively, before summing them up to the given expansion order (see line 7). The sum is then turned into a function of the independent variable (see line 5) and returned.
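The Maple procedure has a direct numerical counterpart; the sketch below (Python, trapezoidal quadrature in place of symbolic integration; `integrate` and `expand_fourier` are our hypothetical helper names) reproduces the n = 9 expansion of the square wave:

```python
import math

def integrate(f, a, b, steps=2000):
    # trapezoidal quadrature as a stand-in for Maple's symbolic int
    h = (b - a) / steps
    return h * (0.5 * f(a) + 0.5 * f(b)
                + sum(f(a + i * h) for i in range(1, steps)))

def expand_fourier(f, order):
    # Fourier partial sum on [0, 2*pi], mirroring the Maple expandfourier
    a0 = integrate(f, 0, 2 * math.pi) / (2 * math.pi)
    coeffs = []
    for n in range(1, order + 1):
        an = integrate(lambda x, n=n: f(x) * math.cos(n * x), 0, 2 * math.pi) / math.pi
        bn = integrate(lambda x, n=n: f(x) * math.sin(n * x), 0, 2 * math.pi) / math.pi
        coeffs.append((an, bn))
    def s(x):
        return a0 + sum(an * math.cos((n + 1) * x) + bn * math.sin((n + 1) * x)
                        for n, (an, bn) in enumerate(coeffs))
    return s

f_square = lambda x: 1 if x < math.pi else -1
s9 = expand_fourier(f_square, 9)
print(s9(math.pi / 2))   # close to +1
```

The numerically determined partial sum behaves just like the symbolic one: close to +1 on the first half-period and close to −1 on the second.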

In the following, we use this function. First, we define the square wave function Eq. 4.22 in line 14 and then find the Fourier series with the expansion orders n = 1, n = 3, n = 5, n = 7, and n = 9 (see lines 17–21). The output of these code lines has already been presented. Next we plot these functions (see lines 24–27), which produces the plots shown in Fig. 4.7.

In the next step, we define the triangular-like function given by Eq. 4.23 (see line 30). Again, we perform the Fourier series expansion (see lines 33–37), for which we already presented the results. These functions are then plotted (see lines 40–43), which produces the plots shown in Fig. 4.8.

As you can see, it is not very difficult to use these functions. Feel free to modify them and evaluate different functions.

4.3.3 Fourier Series of Nonperiodic Functions

In many cases, we need to define Fourier series of functions that are nonperiodic. In most cases we are only interested in the values of a function f (x) in an interval xmin ≤ x ≤ xmax. We can then imagine that this function is periodic, expand a Fourier series (which yields a periodic function), and simply ignore all values which are outside of the domain we are interested in (see Fig. 4.9a). However, as you will see, it is very useful to set the function outside this interval to the negative values of f (x) (see Fig. 4.9b). If we do this, we can approximate the function using a series of only sine functions (see Fig. 4.9c).

Fig. 4.9 Approximating a function by a Fourier sine series. a) A function f (x) in which we are only interested in the interval 0 ≤ x ≤ xmax. b) Outside of this interval, we set the functions to the negative value of f (x). c) This allows us to approximate the function as a sine function.

In order to process a nonperiodic function, we first need to convert it to a format that ensures that the function’s values at xmin and xmax are 0 and that the function therefore resembles a sine function. In order to keep the equations simpler, we assume xmin = 0 for the following cases. If this value is different from 0, we can simply shift the function to xmin = 0.

4.3.3.1 Period Length

When working with periodic functions, we come across the angular frequency $\omega_0 = 2\pi\nu = \frac{2\pi}{T}$, which is nothing other than $2\pi$ times the reciprocal of the period length T. The angular frequency converts the physical length T, or the frequency ν, into an expression of π, which is required for periodic functions.

In order to convert our physical length xmax into an expression of π, we will do a similar thing. We will replace the independent variable x by the independent variable $\tilde{x} = \frac{2\pi}{x_{max}}x$. If we now look at our function, we find the values $\tilde{x}(x=0) = 0$ and $\tilde{x}(x=x_{max}) = 2\pi$, both of which would become 0 if the sine function were applied to them. By using this scheme, we are able to convert our function into an expression that can be approximated using a sine function.

Please note that the integration boundaries can be chosen more or less arbitrarily as long as they span a full period, i.e., a full interval. The interval can be shifted and offset in order to match the function best. This is due to the fact that we will make the function periodic. Thus it does not matter at which point of the interval we start: as long as we integrate along a full interval, shifting the boundaries will amount to the same integral.
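This shift invariance is easy to verify numerically; a small sketch in Python (trapezoidal quadrature, our own hypothetical helper name `integrate`):

```python
import math

def integrate(f, a, b, steps=4000):
    # simple trapezoidal quadrature
    h = (b - a) / steps
    return h * (0.5 * f(a) + 0.5 * f(b)
                + sum(f(a + i * h) for i in range(1, steps)))

# integrating a periodic integrand over any full period gives the same value
g = lambda x: math.cos(3 * x) ** 2   # 2*pi-periodic
i1 = integrate(g, 0, 2 * math.pi)
i2 = integrate(g, -1.3, -1.3 + 2 * math.pi)  # shifted by an arbitrary offset
print(i1, i2)  # both equal pi (up to rounding)
```

The arbitrary offset of the integration window does not change the result, exactly as claimed.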

4.3.3.2 Fourier Expansion

Using our nonperiodic functions, we rewrite the coefficients for the Fourier expansion slightly resulting in

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos\left(n\frac{2\pi}{x_{max}-x_{min}}x\right) + b_n\sin\left(n\frac{2\pi}{x_{max}-x_{min}}x\right)\right)$$  (Eq. 4.24)

$$= \frac{a_0}{2} + \sum_{n=1}^{n_{max}}\left(a_n\cos\left(n\frac{2\pi}{x_{max}-x_{min}}x\right) + b_n\sin\left(n\frac{2\pi}{x_{max}-x_{min}}x\right)\right) + O_{n_{max}}$$  (Eq. 4.25)

$$a_0 = \frac{2}{x_{max}-x_{min}}\int_{x_{min}}^{x_{max}} f(x)\,\mathrm{d}x$$  (Eq. 4.26)

$$a_n = \frac{2}{x_{max}-x_{min}}\int_{x_{min}}^{x_{max}} f(x)\cos\left(n\frac{2\pi}{x_{max}-x_{min}}x\right)\mathrm{d}x$$  (Eq. 4.27)

$$b_n = \frac{2}{x_{max}-x_{min}}\int_{x_{min}}^{x_{max}} f(x)\sin\left(n\frac{2\pi}{x_{max}-x_{min}}x\right)\mathrm{d}x$$  (Eq. 4.28)
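Eq. 4.26, Eq. 4.27, and Eq. 4.28 translate directly into code. A numerical sketch in Python (trapezoidal quadrature; `integrate` and `fourier_coeffs` are our own hypothetical helper names); the first coefficients of the exponential function on 0 . . . 6 reproduce those appearing in Eq. 4.45:

```python
import math

def integrate(f, a, b, steps=4000):
    # trapezoidal quadrature standing in for the symbolic integrals
    h = (b - a) / steps
    return h * (0.5 * f(a) + 0.5 * f(b)
                + sum(f(a + i * h) for i in range(1, steps)))

def fourier_coeffs(f, x_min, x_max, order):
    # a0, an, bn on an arbitrary interval, following Eq. 4.26 - Eq. 4.28
    L = x_max - x_min
    a0 = 2 / L * integrate(f, x_min, x_max)
    an = [2 / L * integrate(lambda x, n=n: f(x) * math.cos(n * 2 * math.pi / L * x),
                            x_min, x_max) for n in range(1, order + 1)]
    bn = [2 / L * integrate(lambda x, n=n: f(x) * math.sin(n * 2 * math.pi / L * x),
                            x_min, x_max) for n in range(1, order + 1)]
    return a0, an, bn

a0, an, bn = fourier_coeffs(math.exp, 0, 6, 2)
print(a0 / 2, an, bn)
```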

4.3.3.3 Fourier Expansion of a Constant to a Sine Series in One Dimension

A Constant Function. We will now derive a very important Fourier series that we will need later, i.e., the Fourier series of a constant. The function we want to expand is defined as

$$f_\mathrm{constant}(x) = \begin{cases}c_0 & 0 \leq x \leq a\\ -c_0 & \text{otherwise}\end{cases}$$  (Eq. 4.29)

which we convert to a function of π as detailed in section 4.3.3

$$f_\mathrm{constant}\left(\frac{\pi x}{a}\right) = \begin{cases}c_0 & 0 \leq x < a\\ -c_0 & \text{otherwise}\end{cases}$$  (Eq. 4.30)

As you can see, Eq. 4.30 looks very much like the square wave function. Please note that we are only interested in the function in the interval 0 ≤ x ≤ a. In most cases we want the function to have the value 0 at the boundaries, i.e., at x = 0 and x = a. We define the function’s value to be a constant c0 in this interval. We set the function to the negative value outside this interval as displayed in Fig. 4.9, which will make the terms a0 and an disappear, as we will see in a moment. We also use the interval 0 . . . 2a instead of only 0 . . . a, which results in the term $\frac{\pi}{a}$ instead of $\frac{2\pi}{a}$.

Fourier Series of the Constant in One Dimension. We will now derive the terms required for the Fourier series according to Eq. 4.26, Eq. 4.27, and Eq. 4.28. In this case we find

$$a_0 = \frac{2}{2a}\int_0^{2a} f(x)\,\mathrm{d}x = \frac{1}{a}\left(c_0 a - c_0 a\right) = 0$$  (Eq. 4.31)

$$\begin{aligned}
a_n &= \frac{2}{2a}\int_0^{2a} f(x)\cos\left(\frac{n\pi x}{a}\right)\mathrm{d}x = \frac{c_0}{a}\left(\int_0^{a}\cos\left(\frac{n\pi x}{a}\right)\mathrm{d}x - \int_a^{2a}\cos\left(\frac{n\pi x}{a}\right)\mathrm{d}x\right)\\
&= \frac{c_0}{a}\frac{a}{n\pi}\left(\left[\sin\left(\frac{n\pi x}{a}\right)\right]_0^a - \left[\sin\left(\frac{n\pi x}{a}\right)\right]_a^{2a}\right)\\
&= \frac{c_0}{n\pi}\Big(\left(\sin(n\pi) - \sin 0\right) - \left(\sin(2n\pi) - \sin(n\pi)\right)\Big) = 0
\end{aligned}$$  (Eq. 4.32)

$$\begin{aligned}
b_n &= \frac{2}{2a}\int_0^{2a} f(x)\sin\left(\frac{n\pi x}{a}\right)\mathrm{d}x = \frac{c_0}{a}\left(\int_0^{a}\sin\left(\frac{n\pi x}{a}\right)\mathrm{d}x - \int_a^{2a}\sin\left(\frac{n\pi x}{a}\right)\mathrm{d}x\right)\\
&= \frac{c_0}{a}\frac{a}{n\pi}\left(-\left[\cos\left(\frac{n\pi x}{a}\right)\right]_0^a + \left[\cos\left(\frac{n\pi x}{a}\right)\right]_a^{2a}\right)\\
&= \frac{c_0}{n\pi}\Big(\left(-\cos(n\pi) + \cos 0\right) + \left(\cos(2n\pi) - \cos(n\pi)\right)\Big)\\
&= \frac{c_0}{n\pi}\left(-\cos(n\pi) + 1 + 1 - \cos(n\pi)\right) = \frac{c_0}{n\pi}\left(2 - 2\cos(n\pi)\right)
\end{aligned}$$  (Eq. 4.33)

We will simplify the expression for bn a little more, taking into account that for all n = 0, 2, 4, . . ., cos (nπ) = 1 and therefore bn = 0 because the term in the bracket is 0. On the other hand, for all n = 1, 3, 5, . . ., the expression cos (nπ) = −1, in which case the term in the bracket becomes 4. Therefore our Fourier series must be evaluated only for the odd values of n. From this we find the Fourier series to Eq. 4.30 to be

$$f_\mathrm{constant}\left(\frac{\pi x}{a}\right) = \sum_{n=1,3,5,\dots}\frac{4}{n\pi}c_0\sin\left(\frac{n\pi x}{a}\right) = c_0\sum_{n=0}^{\infty}\frac{4}{(2n+1)\pi}\sin\left(\frac{(2n+1)\pi x}{a}\right)$$  (Eq. 4.34)

Eq. 4.34 is the function we are looking for. It expresses a constant in one dimension as a Fourier series. It is plotted together with the constant function Eq. 4.29 in Fig. 4.10. As you can see, the function does a pretty good job at approximating the constant function.
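Eq. 4.34 can also be verified numerically; a small sketch in Python (the helper name `constant_sine_series` is ours, not from the text):

```python
import math

def constant_sine_series(x, a, c0, n_max):
    # Eq. 4.34: Fourier sine series of the constant c0 on 0 <= x <= a
    return sum(4 * c0 / ((2 * n + 1) * math.pi)
               * math.sin((2 * n + 1) * math.pi * x / a)
               for n in range(n_max + 1))

# inside the interval the series approaches c0, outside it approaches -c0
print(constant_sine_series(1.0, 2.0, 5.0, 200))
print(constant_sine_series(-1.0, 2.0, 5.0, 200))
```

Well inside the interval the truncated series sits close to c0; in the mirrored region it sits close to −c0, exactly as constructed.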

Fig. 4.10 Fourier series expansion of the constant function to a sine series. The plot shows the constant function and the Fourier series according to Eq. 4.34 expanded to nmax = 10.

4.3.3.4 Fourier Expansion of a Constant to a Sine Series in Two Dimensions

In the next step, we will do an expansion of a constant in two dimensions. This function is defined as

$$f_\mathrm{constant}(x,y) = \begin{cases}c_0 & 0 \leq x < a \text{ and } 0 \leq y \leq b\\ -c_0 & \text{otherwise}\end{cases}$$  (Eq. 4.35)

where we use the constants a and b to indicate the length on which the constant is defined in our domain along the x-axis and the y-axis, respectively. You can think of it as being a plateau of height c0 for all values (x, y) with 0 ≤ x < a and 0 ≤ y < b and −c0 for all other values of (x, y).

Here we can directly refer to Eq. 4.34, which gives us the Fourier series expansion in one dimension. If we plot this function in two dimensions, we obtain the plot shown in Fig. 4.11. As can be seen, the function is (obviously) not a function of y. In order to obtain the desired plateau, we need to set the values to −c0 for all values y < 0 and y > b. For this we need to make a second Fourier series expansion, this time along the y-axis. Here one thing comes in very handy: the Fourier series expanded along the x-axis is not dependent on y and can therefore be treated as a constant during this second expansion. As the expansion along the x-axis already took care of the scaling, i.e., by taking the value c0 into account, we can make this second expansion simply taking + 1 and −1 as the function values. Therefore

Fig. 4.11 One-dimensional Fourier expansion interpreted in two dimensions. The resulting function is already correct for 0 ≤ x ≤ a. In the next step, we must adapt this function to also be correct along the y-axis.

$$f_\mathrm{constant}(x,y) = \begin{cases}\displaystyle\sum_{n=0}^{\infty}\frac{4}{(2n+1)\pi}c_0\sin\left(\frac{(2n+1)\pi x}{a}\right) & 0 \leq y \leq b\\[8pt] \displaystyle-\sum_{n=0}^{\infty}\frac{4}{(2n+1)\pi}c_0\sin\left(\frac{(2n+1)\pi x}{a}\right) & \text{otherwise}\end{cases}$$  (Eq. 4.36)

Thus we repeated the same series expansion simply by using a different constant. We will not repeat the calculation already performed in Eq. 4.31, Eq. 4.32, and Eq. 4.33. We will obtain an expression similar to Eq. 4.34 along the y-axis given by

$$\begin{aligned}
f_\mathrm{constant}\left(\frac{\pi x}{a},\frac{\pi y}{b}\right) &= \sum_{m=0}^{\infty}\frac{4}{(2m+1)\pi}\left(\sum_{n=0}^{\infty}\frac{4}{(2n+1)\pi}c_0\sin\left(\frac{(2n+1)\pi x}{a}\right)\right)\sin\left(\frac{(2m+1)\pi y}{b}\right)\\
&= \frac{16 c_0}{\pi^2}\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\frac{1}{(2m+1)(2n+1)}\sin\left(\frac{(2n+1)\pi x}{a}\right)\sin\left(\frac{(2m+1)\pi y}{b}\right)
\end{aligned}$$  (Eq. 4.37)

Eq. 4.37 is the desired function for Fourier expansion of a constant in two dimensions (see Fig. 4.12). The constant function Eq. 4.35 is shown in Fig. 4.12a, and the Fourier series expanded to nmax = mmax = 25 is shown in Fig. 4.12b. As can be seen, the approximation is very good.

Fig. 4.12 Fourier expansion of a constant to a sine series in two dimensions showing the target function (a) and the Fourier series expanded to nmax = mmax = 25 (b). As can be seen, the function approximates the constant between 0 ≤ x ≤ a and 0 ≤ y ≤ b very well. The function shows the periodicity that always occurs when expanding a function to a Fourier series. However, if using the approximation only within the domain required, this periodicity can be ignored.

Periodicity. The Fourier series expansion to our constant function is periodic. As already discussed, this will occur whenever expanding a function to a Fourier series. However, the periodicity can be ignored if we restrict ourselves to using the function only in the domain for which we defined it, i.e., to all values of (x, y) with 0 ≤ x ≤ a and 0 ≤ y ≤ b.
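The double sum of Eq. 4.37 is also easy to check numerically; a sketch in Python (the helper name `constant_2d` is ours, not from the text):

```python
import math

def constant_2d(x, y, a, b, c0, n_max):
    # Eq. 4.37: double sine series for a plateau of height c0 on [0,a] x [0,b]
    s = 0.0
    for n in range(n_max + 1):
        for m in range(n_max + 1):
            s += (math.sin((2 * n + 1) * math.pi * x / a)
                  * math.sin((2 * m + 1) * math.pi * y / b)
                  / ((2 * n + 1) * (2 * m + 1)))
    return 16 * c0 / math.pi ** 2 * s

# near the centre of the plateau the double sum should be close to c0
print(constant_2d(0.5, 0.5, 1.0, 1.0, 3.0, 50))
```

Evaluated at the centre of the plateau, the truncated double series is close to c0, confirming the two-dimensional expansion.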

4.3.3.5 Fourier Expansion of a Constant to a Cosine Series in One Dimension

For the sake of completeness, we will also show the expansion of a constant to a cosine series in one and two dimensions. Obviously, one may come across an equation that is defined as a pure cosine function, in which case it is more convenient to expand a constant to a cosine series.

As the cosine function is symmetric about the y-axis, it is more convenient to choose the constant symmetrically in order to make the term a0 disappear. We therefore define the constant as

$$f_\mathrm{constant}\left(\frac{\pi x}{a}\right) = \begin{cases}c_0 & -\frac{a}{2} \leq x < \frac{a}{2}\\ -c_0 & \text{otherwise}\end{cases}$$  (Eq. 4.38)

Again we choose the period length 2a but only define the function in the interval $-\frac{a}{2} \ldots +\frac{a}{2}$, setting the function to −c0 everywhere else. Deriving the Fourier coefficients according to Eq. 4.26, Eq. 4.27, and Eq. 4.28 yields

$$\begin{aligned}
a_0 &= \frac{2}{2a}\int_{-a}^{a} f(x)\,\mathrm{d}x = \frac{1}{a}\left(-\int_{-a}^{-\frac{a}{2}} c_0\,\mathrm{d}x + \int_{-\frac{a}{2}}^{\frac{a}{2}} c_0\,\mathrm{d}x - \int_{\frac{a}{2}}^{a} c_0\,\mathrm{d}x\right)\\
&= \frac{c_0}{a}\left(-\left[x\right]_{-a}^{-\frac{a}{2}} + \left[x\right]_{-\frac{a}{2}}^{\frac{a}{2}} - \left[x\right]_{\frac{a}{2}}^{a}\right) = \frac{c_0}{a}\left(-\frac{a}{2} + a - \frac{a}{2}\right) = 0
\end{aligned}$$  (Eq. 4.39)

$$\begin{aligned}
a_n &= \frac{2}{2a}\int_{-a}^{a} f(x)\cos\left(\frac{n\pi x}{a}\right)\mathrm{d}x\\
&= \frac{c_0}{a}\left(-\int_{-a}^{-\frac{a}{2}}\cos\left(\frac{n\pi x}{a}\right)\mathrm{d}x + \int_{-\frac{a}{2}}^{\frac{a}{2}}\cos\left(\frac{n\pi x}{a}\right)\mathrm{d}x - \int_{\frac{a}{2}}^{a}\cos\left(\frac{n\pi x}{a}\right)\mathrm{d}x\right)\\
&= \frac{c_0}{n\pi}\left(-\left[\sin\left(\frac{n\pi x}{a}\right)\right]_{-a}^{-\frac{a}{2}} + \left[\sin\left(\frac{n\pi x}{a}\right)\right]_{-\frac{a}{2}}^{\frac{a}{2}} - \left[\sin\left(\frac{n\pi x}{a}\right)\right]_{\frac{a}{2}}^{a}\right)\\
&= \frac{c_0}{n\pi}\left(-\left(\sin\left(-\frac{n\pi}{2}\right) - \sin(-n\pi)\right) + \left(\sin\frac{n\pi}{2} - \sin\left(-\frac{n\pi}{2}\right)\right) - \left(\sin(n\pi) - \sin\frac{n\pi}{2}\right)\right)\\
&= \frac{c_0}{n\pi}\left(\sin\frac{n\pi}{2} + \sin\frac{n\pi}{2} + \sin\frac{n\pi}{2} + \sin\frac{n\pi}{2}\right) = \frac{4 c_0}{n\pi}\sin\frac{n\pi}{2}
\end{aligned}$$  (Eq. 4.40)

$$\begin{aligned}
b_n &= \frac{2}{2a}\int_{-a}^{a} f(x)\sin\left(\frac{n\pi x}{a}\right)\mathrm{d}x\\
&= \frac{c_0}{a}\left(-\int_{-a}^{-\frac{a}{2}}\sin\left(\frac{n\pi x}{a}\right)\mathrm{d}x + \int_{-\frac{a}{2}}^{\frac{a}{2}}\sin\left(\frac{n\pi x}{a}\right)\mathrm{d}x - \int_{\frac{a}{2}}^{a}\sin\left(\frac{n\pi x}{a}\right)\mathrm{d}x\right)\\
&= \frac{c_0}{n\pi}\left(\left[\cos\left(\frac{n\pi x}{a}\right)\right]_{-a}^{-\frac{a}{2}} - \left[\cos\left(\frac{n\pi x}{a}\right)\right]_{-\frac{a}{2}}^{\frac{a}{2}} + \left[\cos\left(\frac{n\pi x}{a}\right)\right]_{\frac{a}{2}}^{a}\right)\\
&= \frac{c_0}{n\pi}\left(\left(\cos\frac{n\pi}{2} - \cos(n\pi)\right) - \left(\cos\frac{n\pi}{2} - \cos\frac{n\pi}{2}\right) + \left(\cos(n\pi) - \cos\frac{n\pi}{2}\right)\right) = 0
\end{aligned}$$  (Eq. 4.41)

where we have used the symmetry of the cosine function cos (−x) = cos x and the sine function sin (−x) = − sin x, as well as the fact that sin (nπ) = 0 for all values of n. As expected, the terms a0 and bn disappear. We can further simplify the expression for an using the fact that for all n = 0, 2, 4, . . ., the term $\sin\frac{n\pi}{2} = 0$. Therefore we can rewrite an to account only for the odd terms. For these, the term $\sin\frac{n\pi}{2}$ alternates between 1 (for all n = 1, 5, 9, . . .) and −1 (for all n = 3, 7, 11, . . .). Therefore we can rewrite an as

$$a_n = (-1)^n\frac{4 c_0}{(2n+1)\pi}$$

Therefore, the Fourier expansion of a constant to a cosine series is given by

$$f_\mathrm{constant}\left(\frac{\pi x}{a}\right) = c_0\sum_{n=0}^{\infty}(-1)^n\frac{4}{(2n+1)\pi}\cos\left(\frac{(2n+1)\pi x}{a}\right)$$  (Eq. 4.42)

Fig. 4.13 shows the expansion of a constant to a cosine series expanded to nmax = 10.
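A quick numerical check of Eq. 4.42 in Python (the helper name `constant_cosine_series` is ours, not from the text):

```python
import math

def constant_cosine_series(x, a, c0, n_max):
    # Eq. 4.42: Fourier cosine series of the constant c0 on -a/2 <= x <= a/2
    return sum((-1) ** n * 4 * c0 / ((2 * n + 1) * math.pi)
               * math.cos((2 * n + 1) * math.pi * x / a)
               for n in range(n_max + 1))

# c0 at the centre of the interval, -c0 outside of it
print(constant_cosine_series(0.0, 2.0, 5.0, 200))
print(constant_cosine_series(2.0, 2.0, 5.0, 200))
```

At the centre x = 0 the truncated series is close to c0; outside the interval, e.g. at x = a, it is close to −c0.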

Fig. 4.13 Fourier series expansion of the constant function to a cosine function. The plot shows the constant function and the Fourier series according to Eq. 4.42 expanded to nmax = 10.

4.3.3.6 Fourier Expansion of a Constant to a Cosine Series in Two Dimensions

The expansion of a constant to a cosine Fourier series in two dimensions can be done accordingly. The constant is then defined as

$$f_\mathrm{constant}(x,y) = \begin{cases}c_0 & -\frac{a}{2} \leq x < \frac{a}{2} \text{ and } -\frac{b}{2} \leq y < \frac{b}{2}\\ -c_0 & \text{otherwise}\end{cases}$$  (Eq. 4.43)

Again, the expansion in one dimension is given by Eq. 4.42, which is simply expanded along the second dimension, in this case the y-axis, as it is not a function of y. We find

$$\begin{aligned}
f_\mathrm{constant}\left(\frac{\pi x}{a},\frac{\pi y}{b}\right) &= \sum_{m=0}^{\infty}(-1)^m\frac{4}{(2m+1)\pi}\left(\sum_{n=0}^{\infty}(-1)^n\frac{4}{(2n+1)\pi}c_0\cos\left(\frac{(2n+1)\pi x}{a}\right)\right)\cos\left(\frac{(2m+1)\pi y}{b}\right)\\
&= \frac{16 c_0}{\pi^2}\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\frac{(-1)^{n+m}}{(2m+1)(2n+1)}\cos\left(\frac{(2n+1)\pi x}{a}\right)\cos\left(\frac{(2m+1)\pi y}{b}\right)
\end{aligned}$$  (Eq. 4.44)

Eq. 4.44 is the desired function for the Fourier expansion of a constant to a cosine series in two dimensions (see Fig. 4.14). The constant function Eq. 4.43 is shown in Fig. 4.14a, and the Fourier series expanded to nmax = mmax = 25 is shown in Fig. 4.14b.

Fig. 4.14 Fourier expansion of a constant to a cosine series in two dimensions showing the target function (a) and the Fourier series expanded to nmax = mmax = 25 (b). As can be seen, the function approximates the constant between $-\frac{a}{2} \leq x \leq \frac{a}{2}$ and $-\frac{b}{2} \leq y \leq \frac{b}{2}$ very well.

4.3.3.7 Maple Listing for the Fourier Expansion of Nonperiodic Functions f(x)

Just as we did for the periodic function, we will build ourselves a Maple worksheet that will help us with the Fourier expansion of nonperiodic functions. The listing is shown in listing 4.3.

Listing 4.3

[Maple] Listing for Fourier series expansion of nonperiodic functions. A digital version of the listing can be found under the name FourierSeriesNonPeriodic.mw in the supplementary material.

1  restart:read "Core.txt":
2
3  #makes a Fourier expansion
4  expandfourier:=proc(f,variable,_from,_to,expansionorder) local i,interval;
5  interval:=_to-_from;
6  return unapply(
7     1/interval*int(f,variable=_from.._to)
8     + sum(
9        2/interval*int(f*cos(i*2*Pi/interval*variable),variable=_from.._to)*cos(i*2*Pi/interval*variable)
10      + 2/interval*int(f*sin(i*2*Pi/interval*variable),variable=_from.._to)*sin(i*2*Pi/interval*variable)
11    ,i=1..expansionorder),variable);
12 end proc:
13 
14 #expand the exp-function
15 expandfourier(exp(x),x,0,6,2);
16 quickplot([expandfourier(exp(x),x,0,6,50),x->exp(x)],0..12,"x",typeset(exp(x)),["n=50",typeset(exp(x))]);
17 
18 #expand to a purely sine function
19 f_sineexp:=(x,a)->piecewise(x>=0 and x<=a,exp(x),x>=-a and x<0,-exp(-x),0);
20 expandfourier(f_sineexp(x,6),x,-6,6,2);
21 quickplot([expandfourier(f_sineexp(x,6),x,-6,6,50),x->f_sineexp(x,6)],-6..6,"x",typeset(exp(x)),["n=50",typeset(exp(x))]);
22 
23 #expand to a purely cosine function
24 f_cosineexp:=(x,a)->piecewise(x>=0 and x<=a,exp(x),x>=-a and x<0,exp(-x),0);
25 expandfourier(f_cosineexp(x,6),x,-6,6,2);
26 quickplot([expandfourier(f_cosineexp(x,6),x,-6,6,50),x->f_cosineexp(x,6)],-6..6,"x",typeset(exp(x)),["n=50",typeset(exp(x))]);

We will briefly walk through the listing. In line 1 we restart the Maple server and include Core.txt. Next we define the procedure that performs the Fourier series expansion (see line 4). It takes five arguments: the function f to expand, the independent variable variable, the lower interval boundary _from, the upper interval boundary _to, and the expansion order expansionorder. In line 5 the interval is calculated and stored in interval. The procedure then calculates the coefficients a0 (see line 7), an (see line 9), and bn (see line 10) according to Eq. 4.26, Eq. 4.27, and Eq. 4.28 and forms their products with the cosine and sine functions, respectively, before summing them up to the given expansion order (see line 8). The sum is then turned into a function of the independent variable (see line 6) and returned.

In the following lines we use the function expandfourier to expand the exponential function (see line 15). Here we expand the function ex only to an expansion order of n = 2 on the interval 0 . . . 6 which yields

$$\begin{aligned}
e^x \approx \frac{e^6-1}{6} &+ \frac{1}{\pi^2+9}\left(3\left(e^6-1\right)\cos\left(\frac{\pi x}{3}\right) - \pi\left(e^6-1\right)\sin\left(\frac{\pi x}{3}\right)\right)\\
&+ \frac{1}{4\pi^2+9}\left(3\left(e^6-1\right)\cos\left(\frac{2\pi x}{3}\right) - 2\pi\left(e^6-1\right)\sin\left(\frac{2\pi x}{3}\right)\right)
\end{aligned}$$  (Eq. 4.45)

In line 16 we plot this expansion together with the exponential function. The output is shown in Fig. 4.15. As you can see, the expansion matches the function well. Eq. 4.45 contains both sine and cosine terms, i.e., neither an nor bn is 0. The remaining lines of code will use a trick in order to convert this function into a series consisting of only sine or only cosine terms.

Fig. 4.15 Fourier expansion of the nonperiodic exponential function on the interval 0 . . . 6. The function is shown on the interval 0 . . . 12. The expansion order is n = 50.

4.3.3.8 Fourier Expansion of a Nonperiodic Function f(x) to a Sine Series Within a Given Interval

In many cases, it would be desirable to have a function f (x) that we can expand to a purely sine or cosine series within a given interval. This can be done conveniently using a little trick. We will start with expanding this function to a purely sine function in the interval 0 . . . a. If the lower boundary is different from 0, we can simply shift the function to offset it to zero. We will demonstrate this trick (which we already used during the expansion of the constant to a sine function in Eq. 4.29) using the exponential function, which we expanded in listing 4.3 and Fig. 4.15. We define our new function f' (x) as

$$f'(x) = \begin{cases}f(x) & 0 \leq x < a\\ -f(-x) & -a \leq x < 0\\ 0 & \text{otherwise}\end{cases}$$  (Eq. 4.46)

This will create a function f′ (x) that is point-symmetrical about the origin and thus expands to a purely sine series. This is accomplished by listing 4.3, line 19, which yields

$$f'(x) = e^x \approx \frac{2\pi\left(e^6+1\right)}{\pi^2+36}\sin\left(\frac{\pi x}{6}\right) + \frac{\pi\left(1-e^6\right)}{\pi^2+9}\sin\left(\frac{\pi x}{3}\right)$$  (Eq. 4.47)

As you can see, the function expands to a purely sine function. Eq. 4.46 and Eq. 4.47 are shown in Fig. 4.16a for the exponential function in the interval 0 . . . a. This plot is generated by listing 4.3, line 21.
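The coefficients of Eq. 4.47 can be checked numerically against the odd extension of Eq. 4.46; a sketch in Python (trapezoidal quadrature; `integrate` and `f_odd` are our own hypothetical names):

```python
import math

def integrate(f, a, b, steps=4000):
    # trapezoidal quadrature
    h = (b - a) / steps
    return h * (0.5 * f(a) + 0.5 * f(b)
                + sum(f(a + i * h) for i in range(1, steps)))

# odd extension of exp on [-6, 6] as in Eq. 4.46: f'(x) = -exp(-x) for x < 0
f_odd = lambda x: math.exp(x) if x >= 0 else -math.exp(-x)

L = 12.0
b1 = 2 / L * integrate(lambda x: f_odd(x) * math.sin(2 * math.pi / L * x), -6, 6)
b2 = 2 / L * integrate(lambda x: f_odd(x) * math.sin(4 * math.pi / L * x), -6, 6)

# compare against the closed forms of Eq. 4.47
print(b1, 2 * math.pi * (math.exp(6) + 1) / (math.pi ** 2 + 36))
print(b2, math.pi * (1 - math.exp(6)) / (math.pi ** 2 + 9))
```

The numerically computed b1 and b2 agree with the closed-form coefficients, and the cosine coefficients of the odd extension vanish as expected.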

Fig. 4.16 Exponential function expanded to a purely sine (a) or purely cosine (b) series. In both cases the expansion interval was 0 . . . 6 and the expansion order n = 50.

4.3.3.9 Fourier Expansion of a Function f(x) to a Cosine Series Within a Given Interval

In the second step, we will expand the exponential function to a purely cosine function. We do this by defining f′ (x) as

f(x)={f(x)0x<af(x)ax<00otherwise

si105_e  (Eq. 4.48)

This will create a function f′ (x) that is mirror-symmetrical about the y-axis and thus expands to a purely cosine series. This is accomplished by listing 4.3, line 24, which yields

$$f'(x) = e^x \approx \frac{1}{6}\left(e^6-1\right) - \frac{12\left(e^6+1\right)}{\pi^2+36}\cos\left(\frac{\pi x}{6}\right) + \frac{3\left(e^6-1\right)}{\pi^2+9}\cos\left(\frac{\pi x}{3}\right)$$  (Eq. 4.49)

As you can see, this function expands to a purely cosine function. Eq. 4.48 and Eq. 4.49 are shown in Fig. 4.16b for the exponential function in the interval 0 . . . a. This plot is generated by listing 4.3, line 26.
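The cosine coefficients of Eq. 4.49 can be checked the same way against the even extension of Eq. 4.48; a sketch in Python (trapezoidal quadrature; `integrate` and `f_even` are our own hypothetical names):

```python
import math

def integrate(f, a, b, steps=4000):
    # trapezoidal quadrature
    h = (b - a) / steps
    return h * (0.5 * f(a) + 0.5 * f(b)
                + sum(f(a + i * h) for i in range(1, steps)))

# even extension of exp on [-6, 6] as in Eq. 4.48: f'(x) = exp(-x) for x < 0
f_even = lambda x: math.exp(abs(x))

L = 12.0
a0_half = 1 / L * integrate(f_even, -6, 6)
a1 = 2 / L * integrate(lambda x: f_even(x) * math.cos(2 * math.pi / L * x), -6, 6)

# compare against the closed forms of Eq. 4.49
print(a0_half, (math.exp(6) - 1) / 6)
print(a1, -12 * (math.exp(6) + 1) / (math.pi ** 2 + 36))
```

Both the constant term a0/2 and the first cosine coefficient agree with Eq. 4.49, while the sine coefficients of the even extension vanish.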
