Chapter 5

Transforms

5.1 Fourier Transform

5.1.1 Introduction

Having introduced the concept of the Fourier series in section 4.3, we will now introduce the Fourier transform. You will recall that the Fourier series is the representation of a function as a series of sine and cosine terms. If you look back at the definition of the individual terms in Eq. 4.24, you can see that these terms vary in two ways:

- the amplitude or strength, which is given by the prefactor derived from the integrals Eq. 4.26, Eq. 4.27, and Eq. 4.28

- the frequency, which is given by the index n, which occurs as a multiplier in the sine and cosine terms

Until now we have simply calculated these terms individually. However, there may be a better option. What if there were a function into which we could plug a frequency n as the independent variable and which would output the amplitude of the respective term? This is exactly what the Fourier transform does. If applied to a function g (t), it will output the Fourier-transformed function G (f) which takes the frequency f as the independent variable and outputs the amplitude that this specific frequency would have if g (t) were expanded to a Fourier series.

Compared to the Fourier series, the Fourier transform will allow us to treat functions which are nonperiodic. If you recall the definition of the Fourier series, we introduced the period T of the function to be expanded. The Fourier transform can be considered as an expanded version of the Fourier series where we let the period length T grow to infinity, so that the function extends from −∞ to +∞ and we effectively obtain a nonperiodic function.

5.1.2 Definition

We will not derive the Fourier transform in detail here, as this derivation is best left to reference textbooks in mathematics. The Fourier transform is defined as

\mathcal{F}\{g(t)\} = G(f) = \int_{-\infty}^{+\infty} g(t)\, e^{-2\pi i f t}\, dt   (Eq. 5.1)

Obviously, a Fourier-transformed function can be reconverted to the original function. This process is referred to as the inverse Fourier transform and is defined as

\mathcal{F}^{-1}\{G(f)\} = g(t) = \int_{-\infty}^{+\infty} G(f)\, e^{+2\pi i f t}\, df   (Eq. 5.2)

The Fourier transform is often referred to as the continuous Fourier transform as opposed to the discrete Fourier transform (which we have worked with so far), as it also allows us to use real numbers for f, whereas the discrete Fourier transform is restricted to integer values of n.
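
The connection between the two can be illustrated numerically: if we sample g (t) on a fine grid, the discrete transform scaled by the sample spacing approximates the integral in Eq. 5.1 at the discrete frequencies. The following sketch (using NumPy and the Gaussian e^{-\pi t^2}, which is its own Fourier transform, as test function; the grid and tolerances are choices of this example, not of the text) demonstrates this:

```python
import numpy as np

# Sample g(t) = exp(-pi t^2) on a fine grid; its continuous Fourier
# transform is G(f) = exp(-pi f^2), i.e., the Gaussian is its own transform.
dt = 0.01
t = np.arange(-10, 10, dt)
g = np.exp(-np.pi * t**2)

# The DFT scaled by dt approximates the continuous transform at the
# discrete frequencies; the phase factor accounts for the grid starting
# at t[0] = -10 rather than at 0.
f = np.fft.fftfreq(len(t), d=dt)
G = np.fft.fft(g) * dt * np.exp(-2j * np.pi * f * t[0])

G_exact = np.exp(-np.pi * f**2)
max_err = np.max(np.abs(G - G_exact))
```

Note that the discrete transform only samples G (f) at the frequencies k/(N dt); for arbitrary real values of f in between, the integral in Eq. 5.1 would have to be evaluated directly.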

Please note that we have used the following notation so far. The original function is a function of the independent variable t which may (in most cases) be the time domain t. The Fourier transform will then be in the frequency domain f. However, the Fourier transform can also be carried out on functions g not in the time domain, i.e., with different independent variables. Please note that the resulting independent variable after the Fourier transform is still a frequency. However, we then usually refer to it as being in the image domain.

5.1.3 Properties

The Fourier transform has a number of important properties, the most important of which we will shortly introduce. Most of these properties are intuitive to understand. The interested reader may refer to engineering or technical mathematics textbooks for the derivation of these properties.

Linearity. The Fourier transform is linear. Therefore

\mathcal{F}\{c_1 g_1(t) + c_2 g_2(t)\} = \mathcal{F}\{c_1 g_1(t)\} + \mathcal{F}\{c_2 g_2(t)\} = c_1 \mathcal{F}\{g_1(t)\} + c_2 \mathcal{F}\{g_2(t)\} = c_1 G_1(f) + c_2 G_2(f)

Shift Property. If we are interested in the Fourier transform of a shifted function g (t − a) where a is a real number, we find

\mathcal{F}\{g(t-a)\} = e^{-2\pi i f a}\, G(f)

Scaling Property. In contrast to the linearity property, scaling in the time domain introduces a scaling factor

\mathcal{F}\{g(ct)\} = \frac{1}{|c|}\, G\!\left(\frac{f}{c}\right)

Differentiation Property. This property may be the most important property when dealing with a differential equation. The Fourier transform of a differential is given by

\mathcal{F}\left\{\frac{dg(t)}{dt}\right\} = 2\pi i f\, G(f)

As you can see, in the frequency domain, a differentiation in the time domain will lead to a mere multiplication. This obviously makes life significantly simpler in many ways.
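
This property is easy to verify numerically. The sketch below (again using NumPy and the Gaussian as test function, with its analytic derivative dg/dt = -2\pi t\, e^{-\pi t^2}; the setup is a choice of this example, not taken from the text) checks that the transform of the derivative equals 2\pi i f\, G(f):

```python
import numpy as np

dt = 0.01
t = np.arange(-10, 10, dt)
f = np.fft.fftfreq(len(t), d=dt)
phase = np.exp(-2j * np.pi * f * t[0])   # accounts for the grid starting at t[0]

g = np.exp(-np.pi * t**2)                # test function
dg = -2 * np.pi * t * g                  # its analytic derivative

G = np.fft.fft(g) * dt * phase           # approximation of F{g}
dG = np.fft.fft(dg) * dt * phase         # approximation of F{dg/dt}

# Differentiation property: F{dg/dt} = 2*pi*i*f * G(f)
max_err = np.max(np.abs(dG - 2j * np.pi * f * G))
```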

Convolution Property. The Fourier transform of a convolution of two functions g1 (t) and g2 (t) which is usually written as

g_1(t) * g_2(t) = \int_{-\infty}^{+\infty} g_1(T)\, g_2(t-T)\, dT

results in a mere multiplication of the two functions once Fourier-transformed

\mathcal{F}\{g_1(t) * g_2(t)\} = G_1(f)\, G_2(f)

Modulation Property. On the other hand, the Fourier transform of two functions that are multiplied results in a convolution

\mathcal{F}\{g_1(t)\, g_2(t)\} = G_1(f) * G_2(f)

Parseval’s Theorem. Parseval’s1 theorem states that

\int_{-\infty}^{+\infty} |g(t)|^2\, dt = \int_{-\infty}^{+\infty} |G(f)|^2\, df

The integral of this squared magnitude is often referred to as the energy of the function contained across all time spans (left-hand side) and frequencies (right-hand side). Obviously, if the Fourier transform gives an exact account of the function, the “energy” must be the same.
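
This equality can also be checked numerically. In the following sketch (NumPy, Gaussian test function; the discretized sums approximate the two integrals) the “energy” is computed on both sides:

```python
import numpy as np

dt = 0.005
t = np.arange(-8, 8, dt)
g = np.exp(-np.pi * t**2)          # test function

f = np.fft.fftfreq(len(t), d=dt)
df = f[1] - f[0]
G = np.fft.fft(g) * dt             # |G| is unaffected by the phase of the grid offset

energy_time = np.sum(np.abs(g)**2) * dt    # approximates the left-hand side
energy_freq = np.sum(np.abs(G)**2) * df    # approximates the right-hand side
```

For the scaled DFT this equality even holds exactly (up to rounding), which is the discrete counterpart of Parseval’s theorem.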

Duality Theorem. The duality theorem can be directly derived from Eq. 5.1 and Eq. 5.2 and states that if \mathcal{F}\{g(t)\} = G(f) then

\mathcal{F}\{G(t)\} = g(-f)

5.1.4 Important Fourier Transform Pairs

There are a number of important Fourier transform pairs which we encounter commonly. The most important ones are summarized in Tab. 5.1.

Tab. 5.1

Important Fourier transform pairs. See Eq. 5.1 for the definition of the Fourier transform and Eq. 5.2 for the definition of the inverse Fourier transform. Expanded from [2].

Function f(t) | Fourier transform | Reference | Comment

Shifting and scaling
f(t - c) | F(f)\, e^{-2\pi i c f} | Eq. 5.3 | time shifting property; see section 5.1.3
f(t)\, e^{2\pi i c t} | F(f - c) | Eq. 5.4 | frequency shifting property
f(ct) | \frac{1}{|c|} F\!\left(\frac{f}{c}\right) | Eq. 5.5 | time scaling property; see section 5.1.3
\frac{1}{|c|} f\!\left(\frac{t}{c}\right) | F(cf) | Eq. 5.6 | frequency scaling property

Delta, Heaviside, and signum function
\delta(t) | 1 | Eq. 5.7 | Delta function; see section 3.2.5 and Eq. 3.83
\delta(t - t_0) | e^{-2\pi i f t_0} | Eq. 5.8 | shifted delta function; see Eq. 5.7 and section 5.1.3
1 | \delta(f) | | inverse transformed delta function; see Eq. 5.7
e^{2\pi i f_0 t} | \delta(f - f_0) | | inverse shifted delta function; see Eq. 5.8
\Theta(t) | \frac{1}{2}\delta(f) + \frac{1}{2\pi i f} | | Heaviside function; see section 3.2.6 and Eq. 3.86
\mathrm{sign}(t) | \frac{1}{i\pi f} | Eq. 5.9 | signum function; see section 3.2.7 and Eq. 3.90
\frac{i}{\pi t} | \mathrm{sign}(f) | | inverse signum function; see Eq. 5.9

Decay functions
e^{-c|t|}, c > 0 | \frac{2c}{c^2 + 4\pi^2 f^2} | Eq. 5.10 |
e^{-\pi t^2} | e^{-\pi f^2} | Eq. 5.11 |
e^{i\pi t^2} | e^{i\pi(1/4 - f^2)} | |

Trigonometric functions
\sin(2\pi f_0 t + c) | \frac{i}{2}\left(e^{-ic}\delta(f + f_0) - e^{ic}\delta(f - f_0)\right) | | pure sine function
\cos(2\pi f_0 t + c) | \frac{1}{2}\left(e^{-ic}\delta(f + f_0) + e^{ic}\delta(f - f_0)\right) | | pure cosine function
f(t)\, \sin(2\pi f_0 t) | \frac{i}{2}\left(F(f + f_0) - F(f - f_0)\right) | | modulated sine function
f(t)\, \cos(2\pi f_0 t) | \frac{1}{2}\left(F(f + f_0) + F(f - f_0)\right) | | modulated cosine function
\sin^2 t | \frac{1}{4}\left(2\delta(f) - \delta\!\left(f - \tfrac{1}{\pi}\right) - \delta\!\left(f + \tfrac{1}{\pi}\right)\right) | | squared sine function
\cos^2 t | \frac{1}{4}\left(2\delta(f) + \delta\!\left(f - \tfrac{1}{\pi}\right) + \delta\!\left(f + \tfrac{1}{\pi}\right)\right) | | squared cosine function

Geometric functions
\mathrm{rect}\!\left(\frac{t}{T}\right) = \begin{cases} 1 & |t| \le \frac{T}{2} \\ 0 & \text{otherwise} \end{cases} | T\, \mathrm{sinc}(Tf) | Eq. 5.12 | rectangular function with applied scaling property; see section 5.1.3

Fourier Transform of the Rectangular Function: The sinc Function. As a quick example, we will derive the Fourier transform of the rectangular function which is given by Eq. 5.12. From Eq. 5.1 we find

\mathcal{F}\left\{\mathrm{rect}\!\left(\frac{t}{T}\right)\right\} = G(f) = \int_{-\infty}^{+\infty} g(t)\, e^{-2\pi i f t}\, dt = \int_{-T/2}^{+T/2} 1 \cdot e^{-2\pi i f t}\, dt = -\frac{1}{2\pi i f}\left[e^{-2\pi i f t}\right]_{-T/2}^{+T/2} = -\frac{1}{2\pi i f}\left(e^{-\pi i f T} - e^{\pi i f T}\right) = \frac{1}{2\pi i f}\left(e^{\pi i f T} - e^{-\pi i f T}\right) = \frac{1}{\pi f} \cdot \frac{1}{2i}\left(e^{i\pi f T} - e^{-i\pi f T}\right)

where we apply Eq. 3.40 to rewrite the bracket as a sine function to result in

\mathcal{F}\left\{\mathrm{rect}\!\left(\frac{t}{T}\right)\right\} = \frac{1}{\pi f}\sin(\pi f T) = T\,\frac{\sin(\pi f T)}{\pi f T} = T\,\mathrm{sinc}(fT)

where we used the normalized sinc function which is defined as

\mathrm{sinc}\, x = \frac{\sin(\pi x)}{\pi x}   (Eq. 5.13)

In mathematics, the unnormalized sinc function is often used which is defined as

\mathrm{sinc}\, x = \frac{\sin x}{x}   (Eq. 5.14)

Fig. 5.1 shows the two sinc functions. As you can see, they are different. The normalized sinc function is commonly used in signal processing, whereas the unnormalized sinc function is more commonly used in mathematics.

Fig. 5.1 Normalized and unnormalized sinc function.
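
The result \mathcal{F}\{\mathrm{rect}(t/T)\} = T\,\mathrm{sinc}(fT) can be spot-checked by evaluating the integral in Eq. 5.1 directly; outside of |t| ≤ T/2 the integrand vanishes, so a simple midpoint-rule quadrature over [-T/2, T/2] suffices. The following sketch (NumPy; the choice of T, the test frequencies, and the quadrature resolution are assumptions of this example) compares the quadrature with T sinc(fT):

```python
import numpy as np

def rect_ft_numeric(freq, T, n=200000):
    """Midpoint-rule approximation of the integral of exp(-2*pi*i*f*t)
    over [-T/2, T/2], i.e., of F{rect(t/T)} evaluated at f = freq."""
    dt = T / n
    t = -T / 2 + (np.arange(n) + 0.5) * dt
    return np.sum(np.exp(-2j * np.pi * freq * t)) * dt

T = 2.0
freqs = np.array([0.1, 0.3, 0.7, 1.2])
numeric = np.array([rect_ft_numeric(fv, T) for fv in freqs])

# np.sinc is the normalized sinc function sin(pi x)/(pi x) of Eq. 5.13
exact = T * np.sinc(freqs * T)
max_err = np.max(np.abs(numeric - exact))
```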

5.1.5 Application of the Fourier Transform

The Fourier transform is one of the most important tools when solving ODEs and in particular, PDEs. We will look at an example which makes use of the Fourier transform in section 8.3.6 where we solve the one-dimensional diffusion equation.

In general, the Fourier transform is a very useful tool when solving differential equations on domains ranging from −∞ to +∞. This is due to the fact that the Fourier transform contains an integral. This integral leads to very useful features when put into a differential equation: the integral and the differential effectively cancel, which makes solving the differential equation significantly easier. However, for this, the variable which we Fourier-transform must be allowed to range from −∞ to +∞ as this is what the Fourier transform effectively models. As we will see, in many applications in practical physics we may often allow one of the independent variables to span from minus to plus infinity. A typical example is diffusion in an infinite volume. If we do not limit the length scale of the system (in this case, the size of the vessel), the independent variable of the problem can be assumed to range from −∞ to +∞. However, in many applications we may need to impose certain limits on the scope of the independent variables, and we may also need to set the value of the dependent function to known values at finite values of the independent variables. These conditions are referred to as boundary conditions (see section 3.1.7.2). The Fourier transform is not a particularly suitable method if we need to take into account boundary values.

5.2 Laplace Transform

5.2.1 Introduction

Having introduced the Fourier transform, we will discuss the Laplace1 transform. The Laplace transform will, similar to the Fourier transform, transform a function from the time-domain to the frequency-domain. It is particularly useful for all systems or processes which are only to be studied for t > 0, i.e., for positive values of the time. Obviously, in practical applications, these are the more common cases as, e.g., the behavior of an oscillating system before the start of the experiment is usually not of interest. Similarly to the Fourier transform, the Laplace transform uses an integral which makes the differential equations which arise when studying the system significantly easier to solve. In many cases, ODEs and even PDEs can be solved surprisingly conveniently once Laplace-transformed. However, compared to the Fourier transform, the inversion of the transform is usually significantly more complex. This leaves us, in many cases, with solutions to complex differential equations which we cannot reconvert from the frequency- to the time-domain.

The Laplace transform as we use it today is based on work by Euler who used a similar transform (the z-transform) for solving differential equations [4]. Laplace used a modified form of this equation for his seminal work on probability theory which would later become the reason the transform is named after him [3]. The form as we use it today is based on the work of Doetsch and dates back to 1937 [5]. Doetsch also provided a very detailed account of the history of this method.

5.2.2 Definition

The Laplace transform of a function g (t) is defined as

\mathcal{L}\{g(t)\} = G(f) = \int_{0}^{\infty} g(t)\, e^{-f t}\, dt   (Eq. 5.15)

As already stated, there is no intuitive formula for the inversion of the Laplace transform. The inverse of a Laplace transformation must usually be looked up in suitable tables in which functions and their Laplace transforms are summarized.
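
For specific functions, Eq. 5.15 can of course be evaluated numerically, which is a convenient way of checking a table entry. The following sketch (NumPy; the truncation point t_max, the resolution, and the test values are assumptions of this example) verifies the pair \mathcal{L}\{e^{-at}\} = 1/(f + a) (Eq. 5.29 in Tab. 5.2) by quadrature:

```python
import numpy as np

def laplace_numeric(g, f, t_max=60.0, n=600000):
    """Midpoint-rule approximation of L{g}(f) = int_0^inf g(t) e^{-f t} dt,
    truncating the integral at t_max (valid once the integrand has decayed)."""
    dt = t_max / n
    t = (np.arange(n) + 0.5) * dt
    return np.sum(g(t) * np.exp(-f * t)) * dt

a = 0.5
errs = [abs(laplace_numeric(lambda t: np.exp(-a * t), f) - 1.0 / (f + a))
        for f in (1.0, 2.0, 3.5)]
max_err = max(errs)
```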

5.2.3 Properties

The Laplace transform has several important properties, the most important of which we will shortly introduce.

Existence. The Laplace transform of a function exists only if

\int_{0}^{\infty} \left| g(t)\, e^{-f t} \right| dt < \infty

If this property is not fulfilled, the integral will not converge and the Laplace transform does not exist.

Linearity and Superposition Property. The Laplace transform is a linear operation and thus

\mathcal{L}\{c_1 g(t) + c_2 h(t)\} = c_1 \mathcal{L}\{g(t)\} + c_2 \mathcal{L}\{h(t)\} = c_1 G(f) + c_2 H(f)

Shift Property (Time-Domain). Time-shifted functions occur quite often when studying dynamic systems. If a function g (t) is time-shifted by a time a > 0, it is written as g (t − a) where we must ensure t − a ≥ 0 because the Laplace transform is only defined for positive values of time. This essentially means that the function g (t) is “delayed” by a time a before which nothing happens. In order to ensure that shifting does not shift portions of g (t) into the domain which originally were located at t ≤ 0 and which might disturb our calculations, we use the Heaviside function which is defined in section 3.2.6. We then transform the overall function

\mathcal{L}\{g(t-a)\,\Theta(t-a)\} = \int_{0}^{\infty} g(t-a)\,\Theta(t-a)\, e^{-f t}\, dt   (Eq. 5.16)

Now, the Heaviside function is 0 for all values t < a and 1 for all values t ≥ a. This means that we can skip the integral below a, in which case we can simplify Eq. 5.16 to

\mathcal{L}\{g(t-a)\,\Theta(t-a)\} = \int_{0}^{a} 0 \cdot e^{-f t}\, dt + \int_{a}^{\infty} g(t-a)\, e^{-f t}\, dt = \int_{a}^{\infty} g(t-a)\, e^{-f t}\, dt   (Eq. 5.17)

We now make the following change of variables, introducing u = t − a → t = u + a and the differential dt = du (since a is a constant, da = 0); the lower integration boundary t = a becomes u = 0. We then find from Eq. 5.17

\mathcal{L}\{g(t-a)\,\Theta(t-a)\} = \int_{0}^{\infty} g(u)\, e^{-f(u+a)}\, du = e^{-f a} \int_{0}^{\infty} g(u)\, e^{-f u}\, du

which is the transform we are looking for. Therefore, for a time-shifted function g (t − a) we find

\mathcal{L}\{g(t-a)\,\Theta(t-a)\} = e^{-f a} \int_{0}^{\infty} g(u)\, e^{-f u}\, du = e^{-f a}\, \mathcal{L}\{g(t)\} = e^{-f a}\, G(f)

Shift Property (Frequency-Domain) or Dampening Property. If we are interested in the Laplace transform of a function g (t) multiplied by an exponential term e^{−at}, where a is a real number, we find

\mathcal{L}\{e^{-a t} g(t)\} = \int_{0}^{\infty} g(t)\, e^{-a t}\, e^{-f t}\, dt = \int_{0}^{\infty} g(t)\, e^{-(f+a)t}\, dt = G(f + a)

Please note that this multiplication effectively results in the occurrence of an exponential decay term in the time-domain. This is why this property is often also referred to as the dampening property or the complex shift property.

Time-Scaling. The time-scaling property of the Laplace transform states that for a > 0

\mathcal{L}\left\{g\!\left(\frac{t}{a}\right)\right\} = a\, G(a f) \qquad \mathcal{L}\{g(a t)\} = \frac{1}{a}\, G\!\left(\frac{f}{a}\right)

Derivative in the Time-Domain. The Laplace transform of the nth derivative of g (t) in the time-domain is given by

\mathcal{L}\left\{\frac{d^n g(t)}{dt^n}\right\} = f^n G(f) - \sum_{i=1}^{n} f^{\,n-i}\, \frac{d^{\,i-1} g}{dt^{\,i-1}}(0)

The first three derivatives are therefore given by

\mathcal{L}\left\{\frac{dg}{dt}\right\} = f\, G(f) - g(0)

\mathcal{L}\left\{\frac{d^2 g}{dt^2}\right\} = f^2 G(f) - f\, g(0) - \frac{dg}{dt}(0)

\mathcal{L}\left\{\frac{d^3 g}{dt^3}\right\} = f^3 G(f) - f^2 g(0) - f\,\frac{dg}{dt}(0) - \frac{d^2 g}{dt^2}(0)
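
The first of these rules can be verified numerically. The sketch below (NumPy; the test function g(t) = cos(3t) and the quadrature setup are choices of this example) compares \mathcal{L}\{dg/dt\}, computed from the analytic derivative, with f G(f) - g(0):

```python
import numpy as np

def laplace_numeric(g, f, t_max=60.0, n=600000):
    """Midpoint-rule approximation of L{g}(f), truncated at t_max."""
    dt = t_max / n
    t = (np.arange(n) + 0.5) * dt
    return np.sum(g(t) * np.exp(-f * t)) * dt

a = 3.0
g = lambda t: np.cos(a * t)
dg = lambda t: -a * np.sin(a * t)    # analytic derivative of g

f = 2.0
lhs = laplace_numeric(dg, f)                 # L{dg/dt}
rhs = f * laplace_numeric(g, f) - g(0.0)     # f G(f) - g(0)
err = abs(lhs - rhs)
```

Both sides evaluate to -a^2/(f^2 + a^2), in agreement with the sine and cosine pairs Eq. 5.37 and Eq. 5.38 in Tab. 5.2.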

Derivative in the Frequency-Domain. The nth derivative in the frequency-domain corresponds to repeated multiplication by time in the time-domain, given by

\mathcal{L}\{t^n g(t)\} = (-1)^n \frac{d^n G(f)}{df^n}

The first three derivatives are therefore given by

\mathcal{L}\{t\, g(t)\} = -\frac{dG(f)}{df} \qquad \mathcal{L}\{t^2 g(t)\} = \frac{d^2 G(f)}{df^2} \qquad \mathcal{L}\{t^3 g(t)\} = -\frac{d^3 G(f)}{df^3}

Time-Integration Property. Integrals in the time-domain with the lower boundary a and the upper boundary t, where a ≤ T ≤ t for the integration variable T, will result in divisions in the frequency-domain given by

\mathcal{L}\left\{\int_{a}^{t} g(T)\, dT\right\} = \frac{1}{f}\left(G(f) - \int_{0}^{a} g(T)\, dT\right), \qquad G(f) = \mathcal{L}\{g(t)\}

Obviously, for a = 0 this equation simplifies to

\mathcal{L}\left\{\int_{0}^{t} g(T)\, dT\right\} = \frac{G(f)}{f}

Convolution in Time-Domain Property. The Laplace transform of a convolution of two functions g1 (t) and g2 (t) results in a product of the Laplace transforms as given by

\mathcal{L}\{g_1(t) * g_2(t)\} = G_1(f)\, G_2(f)

Initial Value Property. The limit value for t → 0 in the time-domain can be transformed to the frequency-domain as

\lim_{t \to 0} g(t) = \lim_{f \to \infty} f\, G(f)

Final Value Property. The limit value for t → ∞ in the time-domain can be transformed to the frequency-domain as

\lim_{t \to \infty} g(t) = \lim_{f \to 0} f\, G(f)
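
Both limit properties can be illustrated with the exponentially approaching function g(t) = 1 - e^{-at}, whose transform G(f) = a/(f(f+a)) appears as Eq. 5.32 in Tab. 5.2, so that f G(f) = a/(f+a). The sketch below (plain Python; the probe values stand in for the limits and are a choice of this example) checks both limits:

```python
a = 2.0
fG = lambda f: a / (f + a)   # f * G(f) for g(t) = 1 - exp(-a t), using Eq. 5.32

# Final value property: lim_{f -> 0} f G(f) = lim_{t -> inf} g(t) = 1
final_value = fG(1e-9)

# Initial value property: lim_{f -> inf} f G(f) = g(0) = 0
initial_value = fG(1e9)
```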

5.2.4 Important Laplace Transform Pairs

As already stated, it is usually difficult to find the inverse Laplace transform which is why extensive tables have been compiled which serve as lookup tables for the retransformation. Each Laplace transform has a unique time-domain equivalent. The most important ones are summarized in Tab. 5.2. In the following, we will derive a couple of the Laplace transforms noted in Tab. 5.2.
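
As an illustration of how such a lookup works in the inverse direction, suppose a calculation leaves us with G(f) = 1/((f+1)(f+2)). Eq. 5.33 in Tab. 5.2 with a = -1 and b = -2 immediately gives g(t) = e^{-t} - e^{-2t}. The sketch below (NumPy; the quadrature parameters are choices of this example) confirms the pair by evaluating Eq. 5.15 numerically:

```python
import numpy as np

def laplace_numeric(g, f, t_max=60.0, n=600000):
    """Midpoint-rule approximation of L{g}(f), truncated at t_max."""
    dt = t_max / n
    t = (np.arange(n) + 0.5) * dt
    return np.sum(g(t) * np.exp(-f * t)) * dt

# Eq. 5.33 with a = -1, b = -2:
# (e^{-t} - e^{-2t}) / (-1 - (-2)) = e^{-t} - e^{-2t}  <->  1/((f+1)(f+2))
g = lambda t: np.exp(-t) - np.exp(-2 * t)

errs = [abs(laplace_numeric(g, f) - 1.0 / ((f + 1) * (f + 2)))
        for f in (0.5, 1.0, 2.0)]
max_err = max(errs)
```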

Tab. 5.2

Important Laplace transform pairs. See Eq. 5.15 for the definition of the Laplace transform. Expanded from [2, 6].

Function g(t) | Laplace transform \mathcal{L}\{g(t)\} = G(f) | Ref. | Comment

Shifting, scaling, adding, multiplying
c_1 g_1(t) + c_2 g_2(t) | c_1 G_1(f) + c_2 G_2(f) | Eq. 5.18 | linearity property; see section 5.2.3
1 | \frac{1}{f} | Eq. 5.19 |
g(t - t_0), t - t_0 > 0 | e^{-t_0 f}\, G(f) | Eq. 5.20 | time shifting property; see section 5.2.3
e^{-a t} g(t), a > 0 | G(f + a) | Eq. 5.21 | frequency shifting property; see section 5.2.3
g(a t), a > 0 | \frac{1}{a} G\!\left(\frac{f}{a}\right) | Eq. 5.22 | time scaling property; see section 5.2.3
g_1(t) * g_2(t) | G_1(f)\, G_2(f) | Eq. 5.23 | time-convolution property; see section 5.2.3
g_1(t)\, g_2(t) | G_1(f) * G_2(f) | Eq. 5.24 | frequency-convolution property

Delta, Heaviside, and signum function
\delta(t) | 1 | Eq. 5.25 | Delta function; see section 3.2.5
\delta(t - a), a > 0 | e^{-a f} | Eq. 5.26 | shifted Delta function
\Theta(t) | \frac{1}{f} | Eq. 5.27 | Heaviside function; see section 3.2.6
t^n\, \Theta(t) | \frac{n!}{f^{n+1}} | Eq. 5.28 | step nth power function
t\, \Theta(t) | \frac{1}{f^2} | | step ramp function; see Eq. 5.28 for n = 1
t^2\, \Theta(t) | \frac{2}{f^3} | | step parabola function; see Eq. 5.28 for n = 2

Decay and exponential functions
e^{-a t} | \frac{1}{f + a} | Eq. 5.29 | exponential decay
t\, e^{-a t} | \frac{1}{(f + a)^2} | Eq. 5.30 |
t^2 e^{-a t} | \frac{2}{(f + a)^3} | |
(1 - a t)\, e^{-a t} | \frac{f}{(f + a)^2} | Eq. 5.31 |
1 - e^{-a t} | \frac{a}{f (f + a)} | Eq. 5.32 | exponentially approaching function
\frac{e^{a t} - 1}{a} | \frac{1}{f (f - a)} | |
\frac{e^{a t} - e^{b t}}{a - b} | \frac{1}{(f - a)(f - b)} | Eq. 5.33 |
\frac{a\, e^{a t} - b\, e^{b t}}{a - b} | \frac{f}{(f - a)(f - b)} | Eq. 5.34 |
\frac{e^{a t} - a t - 1}{a^2} | \frac{1}{f^2 (f - a)} | |
\frac{e^{a t}(a t - 1) + 1}{a^2} | \frac{1}{f (f - a)^2} | |
e^{a t}(a t + 1) | \frac{f}{(f - a)^2} | |
e^{a t}\left(\frac{a^2 t^2}{2} + 2 a t + 1\right) | \frac{f^2}{(f - a)^3} | |
\frac{a\, e^{-a^2/(4t)}}{2\sqrt{\pi t^3}} | e^{-|a| \sqrt{f}} | Eq. 5.35 |
\frac{e^{-a^2/(4t)}}{\sqrt{\pi t}} | \frac{e^{-|a| \sqrt{f}}}{\sqrt{f}} | |
\mathrm{erfc}\!\left(\frac{a}{\sqrt{4t}}\right) | \frac{e^{-a \sqrt{f}}}{f} | Eq. 5.36 |

Trigonometric functions
\sin(a t) | \frac{a}{f^2 + a^2} | Eq. 5.37 | sine function
\sin(a t + b) | \frac{f \sin b + a \cos b}{f^2 + a^2} | |
\cos(a t) | \frac{f}{f^2 + a^2} | Eq. 5.38 | cosine function
\cos(a t + b) | \frac{f \cos b - a \sin b}{f^2 + a^2} | |
\sinh(a t) | \frac{a}{f^2 - a^2} | Eq. 5.39 | hyperbolic sine function
\cosh(a t) | \frac{f}{f^2 - a^2} | Eq. 5.40 | hyperbolic cosine function
e^{-b t} \sin(a t), a > 0 | \frac{a}{(f + b)^2 + a^2} | Eq. 5.41 | sine function with exponential decay
e^{-b t} \cos(a t), a > 0 | \frac{f + b}{(f + b)^2 + a^2} | Eq. 5.42 | cosine function with exponential decay

Differentials and integrals
t^n g(t) | (-1)^n \frac{d^n G}{df^n}(f) | Eq. 5.43 | frequency-domain derivative
t\, g(t) | -\frac{dG}{df}(f) | | first frequency-domain derivative; see Eq. 5.43 for n = 1
t^2 g(t) | \frac{d^2 G}{df^2}(f) | | second frequency-domain derivative; see Eq. 5.43 for n = 2
\frac{d^n g(t)}{dt^n} | f^n G(f) - \sum_{i=1}^{n} f^{\,n-i} \frac{d^{\,i-1} g}{dt^{\,i-1}}(0) | Eq. 5.44 | time-domain derivative; see section 5.2.3
\frac{dg(t)}{dt} | f G(f) - g(0) | Eq. 5.45 | first time-domain derivative; see section 5.2.3 for n = 1
\frac{d^2 g(t)}{dt^2} | f^2 G(f) - f g(0) - \frac{dg}{dt}(0) | | second time-domain derivative; see section 5.2.3 for n = 2
\int_{a}^{t} g(T)\, dT, a \le T \le t | \frac{1}{f}\left(G(f) - \int_{0}^{a} g(T)\, dT\right) | Eq. 5.46 | time-integration property; see section 5.2.3
\int_{0}^{t} g(T)\, dT | \frac{G(f)}{f} | | special case of Eq. 5.46 for a = 0

Geometric functions
g(t) = \begin{cases} 1 & 0 < t < a \\ -1 & a < t < 2a \end{cases} (periodic) | \frac{1}{f}\, \frac{1 - e^{-a f}}{1 + e^{-a f}} = \frac{1}{f} \tanh\frac{a f}{2} | | rectangular function with periodicity 2a
g(t) = \begin{cases} 1 & 0 < t < a \\ 0 & a < t < 2a \end{cases} (periodic) | \frac{1}{f}\, \frac{1}{1 + e^{-a f}} | | rectangular function with periodicity 2a
g(t) = \begin{cases} 1 & a < t < b \\ 0 & \text{otherwise} \end{cases} | \frac{e^{-a f} - e^{-b f}}{f} | | pulse function of width b − a with b > a

Laplace Transform of the Delta Function. The Laplace transform of the Delta function is easy to derive. From Eq. 5.15 it is given by

\mathcal{L}\{\delta(t)\} = \int_{0}^{\infty} \delta(t)\, e^{-f t}\, dt = e^{-f \cdot 0} = 1

where we exploit the fact that the Delta function is zero everywhere except for t = 0.

Laplace Transform of a Derivative With Respect to Time. Next we will derive the Laplace transform of the first derivative of g with respect to t (see Eq. 5.45). The Laplace transform is given by

\mathcal{L}\left\{\frac{dg}{dt}\right\} = \int_{0}^{\infty} \frac{dg}{dt}\, e^{-f t}\, dt

where we need to apply integration by parts (see Eq. 3.15) as we have the product of two functions of t. Using Eq. 3.15 we find

\int_{0}^{\infty} e^{-f t}\, \frac{dg}{dt}\, dt = \left[g(t)\, e^{-f t}\right]_{0}^{\infty} - \int_{0}^{\infty} g(t)\,(-f)\, e^{-f t}\, dt = 0 - g(0) + f \int_{0}^{\infty} g(t)\, e^{-f t}\, dt = f\, G(f) - g(0)

which is Eq. 5.45.

Laplace Transform of a Partial Derivative With Respect to a Variable Different Than Time. Now that we have derived the Laplace transform of a derivative with respect to time, let us consider the partial derivative of a function g (x, t) with respect to a variable different than time. In this case, let us find the Laplace transform of the partial derivative with respect to x, i.e., \frac{\partial g}{\partial x}. Applying Eq. 5.15 we find

\mathcal{L}\left\{\frac{\partial g}{\partial x}\right\} = \int_{0}^{\infty} \frac{\partial g}{\partial x}\, e^{-f t}\, dt

where we can now apply Leibniz’s rule which tells us that we can change the order of the integration and the differentiation which means

\mathcal{L}\left\{\frac{\partial g}{\partial x}\right\} = \int_{0}^{\infty} \frac{\partial g}{\partial x}\, e^{-f t}\, dt = \frac{\partial}{\partial x} \int_{0}^{\infty} g(x,t)\, e^{-f t}\, dt

At this point, we can make a subtle but very important modification. If you look closely at the integral operation you will see that during the Laplace transform, the independent variable t will be integrated and thus disappear. The independent variable x is constant during this integration. The term we obtain from the integration will then be a function of f but not of t which means that we can safely replace the partial differential with a normal differential according to

\mathcal{L}\left\{\frac{\partial g}{\partial x}\right\} = \int_{0}^{\infty} \frac{\partial g}{\partial x}\, e^{-f t}\, dt = \frac{\partial}{\partial x} \int_{0}^{\infty} g(x,t)\, e^{-f t}\, dt = \frac{dG(x,f)}{dx}   (Eq. 5.47)

Obviously, this also holds true for higher differentials. This is one of the most important properties of the Laplace transform: it converts partial to normal differentials. As we can imagine, this is a very useful property when solving PDEs.

Leibniz Integral Rule. We have just used the so-called Leibniz1 rule which allows us to switch the order of the integration and the differentiation. This rule is a special case of the more general Leibniz integral rule which we will shortly introduce. The Leibniz integral rule allows us to convert a derivative of an integral whose boundaries are functions of the differentiation variable into a more useable form. As an example, take a function f which is dependent on x (our integration variable) and a second independent variable z. We now want to find the derivative with respect to z of the integral \int_{a(z)}^{b(z)} f(x,z)\, dx where a (z) and b (z) are both functions of z. The Leibniz integral rule lets us rewrite this derivative as

\frac{d}{dz} \int_{a(z)}^{b(z)} f(x,z)\, dx = \int_{a(z)}^{b(z)} \frac{\partial f}{\partial z}\, dx + f(b(z),z)\, \frac{db}{dz} - f(a(z),z)\, \frac{da}{dz}   (Eq. 5.48)

As you can see, we require the derivatives of both a (z) and b (z) with respect to z as well as the values of the function f (a (z), z) and f (b (z), z). Eq. 5.48 is generally applicable. In the case we just discussed, a and b were constants, in which case \frac{da}{dz} = \frac{db}{dz} = 0, which simplifies Eq. 5.48 to

\frac{d}{dz} \int_{a}^{b} f(x,z)\, dx = \int_{a}^{b} \frac{\partial f}{\partial z}\, dx   (Eq. 5.49)

which tells us that we are perfectly allowed to exchange the order of the integration and the differentiation, as we have done.
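
The general rule Eq. 5.48 can also be verified numerically. The sketch below (NumPy; the integrand f(x, z) = sin(xz), the boundaries a(z) = z and b(z) = z^2, and all tolerances are choices of this example) compares a finite-difference derivative of the integral with the right-hand side of Eq. 5.48:

```python
import numpy as np

def integral(func, lo, hi, n=200000):
    """Midpoint-rule approximation of the integral of func from lo to hi."""
    dx = (hi - lo) / n
    x = lo + (np.arange(n) + 0.5) * dx
    return np.sum(func(x)) * dx

f_int = lambda x, z: np.sin(x * z)     # integrand f(x, z)
fz = lambda x, z: x * np.cos(x * z)    # its partial derivative with respect to z

def I(z):
    # I(z) = integral of f(x, z) from a(z) = z to b(z) = z^2
    return integral(lambda x: f_int(x, z), z, z**2)

z0, h = 1.3, 1e-5
lhs = (I(z0 + h) - I(z0 - h)) / (2 * h)  # direct derivative dI/dz

# Right-hand side of Eq. 5.48: integral of df/dz plus the boundary terms
rhs = (integral(lambda x: fz(x, z0), z0, z0**2)
       + f_int(z0**2, z0) * 2 * z0       # f(b(z), z) * db/dz with db/dz = 2z
       - f_int(z0, z0) * 1.0)            # f(a(z), z) * da/dz with da/dz = 1
err = abs(lhs - rhs)
```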

5.2.5 Application of the Laplace Transform

The Laplace transform is one of the most important tools used for solving ODEs and specifically PDEs, as it converts partial differentials to regular differentials, as we have just seen. In general, the Laplace transform is used for applications in the time-domain for t ≥ 0. However, the transformation variable need not necessarily be time. It can be any independent variable x on the domain from 0 to +∞.

Compared to the Fourier transform, the Laplace transform generates nonperiodic solutions. As we have seen in section 4.3.3, Fourier series of nonperiodic functions will always expand to periodic series. Once these series are used to solve differential equations, the solutions will also be periodic. This can be problematic because the solutions may interfere. We will see a very prominent example of this problem when studying Aris-Taylor dispersion in section 19.9.3. In that section we will create the Fourier transform of the balanced rectangular function, which is a rectangular pulse sequence with sufficient spacing between the pulses. The spacing is required as we only require a solution in a given domain from a single pulse. However, all other pulses will also become solutions to our original differential equation. If the pulses are not sufficiently spaced, they may overlap, which results in the generation of false solutions. This is an intrinsic problem of Fourier series solutions. The Laplace transform generates nonperiodic solutions; it could therefore be used in a similar scenario without the risk of interference from artifacts.

We will illustrate the usability of the Laplace transform in section 8.2.5 where we discuss an example using the Laplace transform to solve an ODE. In section 8.3.7 we will use the Laplace transform for solving a PDE.

5.3 Summary

In this section, we have introduced the Fourier and the Laplace transforms, two very important tools for solving differential equations. As we have seen, these transforms allow the significantly simplified derivation of solutions to differential equations given that we are able to find the retransformations. We will see examples of how these transforms can be used to solve differential equations in section 8.

References

[1] Des Chênes M.-A. P. Mémoires présentés à l’Institut des Sciences, Lettres et Arts, par divers savans, et lus dans ses assemblées. Sciences mathématiques et physiques. 1806;1:638–648 (cit. on p. 82).

[2] Stoeckli M. Tables of Common Transform Pairs. 2012. (visited on 03/05/2015) (cit. on pp. 83, 88) http://www.mechmat.ethz.ch/Lectures/tables.pdf.

[3] Laplace P.S. Théorie analytique des probabilités. Mme Ve Courcier; 1812 (cit. on p. 85).

[4] Euler L. Variae observationes circa series infinitas. Commentarii academiae scientiarum imperialis Petropolitanae. 1737;9(1737):160–188 (cit. on p. 85).

[5] Doetsch G. Theorie und Anwendung der Laplace-transformation. 1937 (cit. on p. 85).

[6] Papula L. Mathematische Formelsammlung. Vol. 7. Springer; 2003 (cit. on p. 88).

[7] Newton I. Philosophiae Naturalis Principia Mathematica. 1686 (cit. on p. 91).

[8] Child J. The manuscripts of Leibniz on his discovery of the differential calculus. Part II. The Monist. 1917;238–294 (cit. on p. 91).


1 Marc-Antoine Parseval was a French mathematician who introduced the Parseval theorem in 1799 in a memoir which was published in a more general account of his work in 1806 [1].

1 Pierre-Simon, marquis de Laplace was a French mathematician and physicist who may be most commonly known for his work on the Laplace transform which he developed from concepts originally described by Euler [3]. In this work he applied the transform to probability theory. Laplace’s career is tightly linked to the life of Napoleon Bonaparte whom he examined for suitability for service in the French Royal Artillery. He later voted for Napoleon to become emperor of France, a favor which Napoleon thanked him for by making him secretary of the Interior of France.

1 Gottfried Wilhelm Leibniz was a German mathematician and philosopher who made significant contributions to both mathematics and philosophy. He is credited with having developed calculus independently of Newton. Leibniz published his work on calculus one year before Newton published his Principia[7] which spurred a discussion about who is to be credited with its “invention”. Leibniz developed, among others, very early forms of mechanical calculators using a binary system as the basis of operations, much like modern computers. Leibniz is credited as being one of the most important philosophers of the 17th century coining what is today known as optimism and rationalism. Many important theorems and formulas in mathematics carry his name, the most commonly known may be the Leibniz integral rule which he originally published in a manuscript in 1675 [8].
