Having introduced the concept of the Fourier series in section 4.3, we will now introduce the Fourier transform. You will recall that the Fourier series is the representation of a function as a series of sine and cosine terms. If you look back at the definition of the individual terms in Eq. 4.24, you can see that these terms differ in two respects:
• the amplitude or strength, which is given by the prefactors derived from the integrals Eq. 4.26, Eq. 4.27, and Eq. 4.28
• the frequency, which is given by the index n that occurs as a multiplier in the sine and cosine terms
Until now we have simply calculated these terms individually. However, there may be a better option. What if there were a function into which we could plug a frequency n as the independent variable and which would output the amplitude of the respective term? This is exactly what the Fourier transform does. If applied to a function g(t), it will output the Fourier-transformed function G(f) which takes the frequency f as the independent variable and outputs the amplitude that this specific frequency would have if g(t) were expanded to a Fourier series.
Compared to the Fourier series, the Fourier transform will also allow us to represent functions which are nonperiodic. If you recall the definition of the Fourier series, we introduced the period T of the represented function. The Fourier transform can be considered an expanded version of the Fourier series in which we let the period length T tend to infinity, so that the interval of representation extends from −∞ to +∞ and we effectively obtain a nonperiodic function.
We will not derive the Fourier transform in detail here, as this derivation is best left to reference textbooks in mathematics. The Fourier transform is defined as

G(f) = \mathcal{F}\{g(t)\} = \int_{-\infty}^{+\infty} g(t)\, e^{-2\pi i f t}\, dt \qquad (Eq. 5.1)
Obviously, a Fourier-transformed function can be converted back to the original function. This process is referred to as the inverse Fourier transform and is defined as

g(t) = \mathcal{F}^{-1}\{G(f)\} = \int_{-\infty}^{+\infty} G(f)\, e^{+2\pi i f t}\, df \qquad (Eq. 5.2)
The Fourier transform is often referred to as the continuous Fourier transform as opposed to the discrete Fourier transform (which we have worked with so far), as it also allows us to use real numbers for f, whereas the discrete Fourier transform is restricted to integer values of n.
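The continuous transform can be approximated numerically by truncating the integral and evaluating it as a Riemann sum. The sketch below is our own illustration (not part of the text): it assumes the ordinary-frequency convention G(f) = ∫ g(t) e^{−2πift} dt and uses the Gaussian g(t) = e^{−πt²}, which is known to be its own Fourier transform.

```python
import math

def fourier_transform(g, f, t_max=10.0, n=20000):
    """Approximate G(f) = ∫ g(t) e^{-2πift} dt by a midpoint Riemann sum.

    The infinite integral is truncated to [-t_max, t_max]; g must decay
    fast enough for the truncation error to be negligible.
    """
    dt = 2 * t_max / n
    total = 0j
    for k in range(n):
        t = -t_max + (k + 0.5) * dt
        total += g(t) * complex(math.cos(2 * math.pi * f * t),
                                -math.sin(2 * math.pi * f * t)) * dt
    return total

# g(t) = exp(-π t²) is its own Fourier transform: G(f) = exp(-π f²)
g = lambda t: math.exp(-math.pi * t * t)
G1 = fourier_transform(g, 1.0)
print(abs(G1 - math.exp(-math.pi)))  # close to 0
```

Since f can be any real number here, this is the continuous transform; a discrete Fourier transform would only sample integer multiples of a base frequency.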
Please note the notation we have used so far: the original function g is a function of the independent variable t, which in most cases is time; its transform G is then a function in the frequency domain with the independent variable f. However, the Fourier transform can also be carried out on functions g that are not in the time domain, i.e., with different independent variables. The resulting independent variable after the Fourier transform is still a frequency, but we then usually refer to the transformed function as being in the image domain.
The Fourier transform has a number of important properties, the most important of which we will shortly introduce. Most of these properties are intuitive to understand. The interested reader may refer to engineering or technical mathematics textbooks for the derivation of these properties.
Linearity. The Fourier transform is linear. Therefore

\mathcal{F}\{a\, g_1(t) + b\, g_2(t)\} = a\, G_1(f) + b\, G_2(f)
Shift Property. If we are interested in the Fourier transform of a shifted function g(t − a) where a is a real number, we find

\mathcal{F}\{g(t - a)\} = e^{-2\pi i a f}\, G(f) \qquad (Eq. 5.3)
Scaling Property. In contrast to the linearity property, scaling in the time domain introduces a scaling factor

\mathcal{F}\{g(at)\} = \frac{1}{|a|}\, G\!\left(\frac{f}{a}\right) \qquad (Eq. 5.5)
Differentiation Property. This property may be the most important property when dealing with a differential equation. The Fourier transform of a derivative is given by

\mathcal{F}\left\{\frac{d^n g(t)}{dt^n}\right\} = (2\pi i f)^n\, G(f)
As you can see, a differentiation in the time domain leads to a mere multiplication in the frequency domain. This obviously makes life significantly simpler in many ways.
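The differentiation property can be checked numerically. The sketch below is our own illustration under the convention G(f) = ∫ g(t) e^{−2πift} dt: it transforms the derivative of a Gaussian by quadrature and compares the result with 2πif · G(f).

```python
import math

def fourier_transform(g, f, t_max=10.0, n=20000):
    # Midpoint Riemann-sum approximation of G(f) = ∫ g(t) e^{-2πift} dt,
    # truncated to [-t_max, t_max] (assumes g decays fast).
    dt = 2 * t_max / n
    total = 0j
    for k in range(n):
        t = -t_max + (k + 0.5) * dt
        total += g(t) * complex(math.cos(2 * math.pi * f * t),
                                -math.sin(2 * math.pi * f * t)) * dt
    return total

g = lambda t: math.exp(-math.pi * t * t)                       # test function
dg = lambda t: -2 * math.pi * t * math.exp(-math.pi * t * t)   # its derivative

f = 1.0
lhs = fourier_transform(dg, f)                        # F{g'}(f)
rhs = 2 * math.pi * 1j * f * fourier_transform(g, f)  # 2πif · G(f)
print(abs(lhs - rhs))  # close to 0
```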
Convolution Property. The Fourier transform of a convolution of two functions g1(t) and g2(t), which is usually written as

(g_1 * g_2)(t) = \int_{-\infty}^{+\infty} g_1(\tau)\, g_2(t - \tau)\, d\tau

results in a mere multiplication of the two functions once Fourier-transformed

\mathcal{F}\{(g_1 * g_2)(t)\} = G_1(f)\, G_2(f)
Modulation Property. On the other hand, the Fourier transform of two functions that are multiplied results in a convolution

\mathcal{F}\{g_1(t)\, g_2(t)\} = (G_1 * G_2)(f)
Parseval’s Theorem. Parseval’s theorem states that

\int_{-\infty}^{+\infty} |g(t)|^2\, dt = \int_{-\infty}^{+\infty} |G(f)|^2\, df
The integral of this squared magnitude is often referred to as the energy of the function contained across all time spans (left-hand side) and frequencies (right-hand side). Obviously, if the Fourier transform gives an exact account of the function, the “energy” must be the same.
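Parseval’s theorem can also be verified numerically. The following sketch is our own illustration (convention G(f) = ∫ g(t) e^{−2πift} dt, Gaussian test function): it computes the energy on both sides of the equality by simple quadrature.

```python
import math

def ft(g, f, t_max=5.0, n=2000):
    # Midpoint Riemann-sum approximation of G(f) = ∫ g(t) e^{-2πift} dt
    dt = 2 * t_max / n
    total = 0j
    for k in range(n):
        t = -t_max + (k + 0.5) * dt
        total += g(t) * complex(math.cos(2 * math.pi * f * t),
                                -math.sin(2 * math.pi * f * t)) * dt
    return total

g = lambda t: math.exp(-math.pi * t * t)

# left-hand side: energy in the time domain
dt = 10.0 / 2000
energy_t = sum(g(-5.0 + (k + 0.5) * dt) ** 2 * dt for k in range(2000))

# right-hand side: energy in the frequency domain (G decays fast, so
# integrating f over [-3, 3] suffices)
df = 6.0 / 300
energy_f = sum(abs(ft(g, -3.0 + (k + 0.5) * df)) ** 2 * df for k in range(300))

print(energy_t, energy_f)  # both ≈ 1/√2 ≈ 0.7071
```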
Duality Theorem. The duality theorem can be directly derived from Eq. 5.1 and Eq. 5.2 and states that if \mathcal{F}\{g(t)\} = G(f) then

\mathcal{F}\{G(t)\} = g(-f)
There are a number of important Fourier transform pairs which we encounter commonly. The most important ones are summarized in Tab. 5.1.
Tab. 5.1
Important Fourier transform pairs. See Eq. 5.1 for the definition of the Fourier transform and Eq. 5.2 for the definition of the inverse Fourier transform. Expanded from [2].
Function g(t) | Fourier transform G(f) | Reference | Comment |
Shifting and scaling | | | |
g(t − a) | e^{−2πiaf} G(f) | Eq. 5.3 | time shifting property; see section 5.1.3 |
e^{2πiat} g(t) | G(f − a) | Eq. 5.4 | frequency shifting property |
g(at), a > 0 | (1/a) G(f/a) | Eq. 5.5 | time scaling property; see section 5.1.3 |
(1/a) g(t/a), a > 0 | G(af) | Eq. 5.6 | frequency scaling property |
Delta, Heaviside, and signum function | | | |
δ(t) | 1 | Eq. 5.7 | Delta function; see section 3.2.5 and Eq. 3.83 |
δ(t − a) | e^{−2πiaf} | Eq. 5.8 | shifted delta function; see Eq. 5.7 and section 5.1.3 |
1 | δ(f) | | inverse transformed delta function; see Eq. 5.7 |
e^{2πiat} | δ(f − a) | | inverse shifted delta function; see Eq. 5.8 |
Θ(t) | (1/2) δ(f) + 1/(2πif) | | Heaviside function; see section 3.2.6 and Eq. 3.86 |
sgn(t) | 1/(πif) | Eq. 5.9 | signum function; see section 3.2.7 and Eq. 3.90 |
1/(πit) | −sgn(f) | | inverse signum function; see Eq. 5.9 |
Decay functions | | | |
Θ(t) e^{−at}, a > 0 | 1/(a + 2πif) | Eq. 5.10 | one-sided exponential decay |
e^{−a·abs(t)}, a > 0 | 2a/(a² + 4π²f²) | Eq. 5.11 | two-sided exponential decay |
Trigonometric functions | | | |
sin(2πat) | (1/(2i))[δ(f − a) − δ(f + a)] | | pure sine function |
cos(2πat) | (1/2)[δ(f − a) + δ(f + a)] | | pure cosine function |
g(t) sin(2πat) | (1/(2i))[G(f − a) − G(f + a)] | | modulated sine function |
g(t) cos(2πat) | (1/2)[G(f − a) + G(f + a)] | | modulated cosine function |
sin²(2πat) | (1/4)[2δ(f) − δ(f − 2a) − δ(f + 2a)] | | squared sine function |
cos²(2πat) | (1/4)[2δ(f) + δ(f − 2a) + δ(f + 2a)] | | squared cosine function |
Geometric functions | | | |
rect(t/T) | T sinc(Tf) | Eq. 5.12 | rectangular function with applied scaling property; see section 5.1.3 |
Fourier Transform of the Rectangular Function: The sinc Function. As a quick example, we will derive the Fourier transform of the rectangular function rect(t/T) which is given by Eq. 5.12. From Eq. 5.1 we find

G(f) = \int_{-T/2}^{+T/2} e^{-2\pi i f t}\, dt = \frac{1}{-2\pi i f}\left[e^{-2\pi i f t}\right]_{-T/2}^{+T/2} = \frac{e^{+\pi i f T} - e^{-\pi i f T}}{2\pi i f}

where we apply Eq. 3.40 to rewrite the bracket as a sine function to result in

G(f) = \frac{\sin(\pi f T)}{\pi f} = T\, \mathrm{sinc}(fT)

where we used the normalized sinc function which is defined as

\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}

In mathematics, the unnormalized sinc function is often used which is defined as

\mathrm{sinc}(x) = \frac{\sin(x)}{x}
Fig. 5.1 shows the two sinc functions. As you can see, they are different. The normalized sinc function is commonly used in signal processing, whereas the unnormalized sinc function is more commonly used in mathematics.
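The rect-to-sinc pair is easy to confirm numerically. The sketch below is our own illustration (convention G(f) = ∫ g(t) e^{−2πift} dt, width T = 1): it integrates the rectangular function by quadrature and compares the result with the normalized sinc function.

```python
import math

def sinc(x):
    # normalized sinc: sin(πx)/(πx), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def ft_rect(f, T=1.0, n=20000):
    # G(f) = ∫_{-T/2}^{T/2} e^{-2πift} dt by the midpoint rule;
    # the imaginary part vanishes by symmetry, so only cos is summed
    dt = T / n
    total = 0.0
    for k in range(n):
        t = -T / 2 + (k + 0.5) * dt
        total += math.cos(2 * math.pi * f * t) * dt
    return total

for f in (0.0, 0.5, 1.5, 2.25):
    print(f, ft_rect(f), sinc(f))  # numeric transform matches T·sinc(fT)
```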
The Fourier transform is one of the most important tools when solving ODEs and in particular, PDEs. We will look at an example which makes use of the Fourier transform in section 8.3.6 where we solve the one-dimensional diffusion equation.
In general, the Fourier transform is a very useful tool when solving differential equations on domains ranging from −∞ to +∞. This is due to the fact that the Fourier transform contains an integral. This integral leads to very useful features when put into a differential equation. Obviously, an integral and a differential cancel, which makes solving the differential equation significantly easier. However, for this case, the variable which we Fourier-transform must be allowed to range from −∞ to +∞ as this is what the Fourier transform effectively models. As we will see, in many applications in practical physics we may often allow one of the independent variables to span from minus to plus infinity. A typical example is diffusion in an infinite volume. If we do not limit the length scale of the system (in this case, the size of the vessel) the independent variable of the problem can be assumed to range from −∞ to +∞. However, in many applications we may need to impose certain limits to the scope of the independent variables and we may also need to set the value of the dependent function to known values at finite values of the independent variables. These conditions are referred to as boundary conditions (see section 3.1.7.2). The Fourier transform is not a particularly suitable method if we need to take boundary values into account.
Having introduced the Fourier transform, we will discuss the Laplace transform. The Laplace transform will, similar to the Fourier transform, transform a function from the time-domain to the frequency-domain. It is particularly useful for all systems or processes which are only to be studied for t > 0, i.e., for positive values of time. Obviously, in practical applications, these are the more common cases as, e.g., the behavior of an oscillating system before the start of the experiment is usually not of interest. Similar to the Fourier transform, the Laplace transform uses an integral which makes the differential equations that arise when studying the system significantly easier to solve. In many cases, ODEs and even PDEs can be solved surprisingly conveniently once Laplace-transformed. However, compared to the Fourier transform, the inversion of the transform is usually significantly more complex. This leaves us, in many cases, with solutions to complex differential equations which we cannot reconvert from the frequency- to the time-domain.
The Laplace transform as we use it today is based on work by Euler who used a similar transform (the z-transform) for solving differential equations [4]. Laplace used a modified form of this equation for his seminal work on probability theory which would later become the reason the transform is named after him [3]. The form as we use it today is based on the work of Doetsch and dates back to 1937 [5]. Doetsch also provided a very detailed account of the history of this method.
The Laplace transform of a function g(t) is defined as

G(s) = \mathcal{L}\{g(t)\} = \int_{0}^{\infty} g(t)\, e^{-st}\, dt \qquad (Eq. 5.15)
As already stated, there is no intuitive formula for the inversion of the Laplace transform. The inverse of a Laplace transformation must usually be looked up in suitable tables in which functions and their Laplace transforms are summarized.
The Laplace transform has several important properties, the most important of which we will shortly introduce.
Existence. The Laplace transform of a function exists only if the function does not grow faster than exponentially, i.e., only if there are constants M > 0 and γ such that

|g(t)| \leq M\, e^{\gamma t} \quad \text{for all } t \geq 0

in which case the integral converges for all s > γ. If this property is not fulfilled, the integral will not converge and the Laplace transform does not exist.
Linearity and Superposition Property. The Laplace transform is a linear operation and thus

\mathcal{L}\{a\, g_1(t) + b\, g_2(t)\} = a\, G_1(s) + b\, G_2(s) \qquad (Eq. 5.18)
Shift Property (Time-Domain). Time-shifted functions occur frequently when studying dynamic systems. If a function g(t) is time-shifted by a time a > 0, it is written as g(t − a) where we must ensure t − a ≥ 0 because the Laplace transform is only defined for positive values of time. This essentially means that the function g(t) is “delayed” by a time a before which nothing happens. In order to ensure that shifting does not shift portions of g(t) into the domain which originally were located at t ≤ 0 and which might disturb our calculations, we use the Heaviside function Θ(t) which is defined in section 3.2.6. We then transform the overall function

\mathcal{L}\{\Theta(t - a)\, g(t - a)\} = \int_{0}^{\infty} \Theta(t - a)\, g(t - a)\, e^{-st}\, dt \qquad (Eq. 5.16)
Now, the Heaviside function is 0 for all values t < a and 1 for all values t ≥ a. This means that we can skip the part of the integral below a, in which case we can simplify Eq. 5.16 to

\int_{a}^{\infty} g(t - a)\, e^{-st}\, dt \qquad (Eq. 5.17)
We now make the following change of variables: introducing u = t − a → t = u + a and the differential dt = du (because a is a constant), we then find from Eq. 5.17

\int_{0}^{\infty} g(u)\, e^{-s(u + a)}\, du = e^{-as} \int_{0}^{\infty} g(u)\, e^{-su}\, du = e^{-as}\, G(s)
which is the transform we are looking for. Therefore, for a time-shifted function g(t − a) we find

\mathcal{L}\{\Theta(t - a)\, g(t - a)\} = e^{-as}\, G(s) \qquad (Eq. 5.20)
Shift Property (Frequency-Domain) or Dampening Property. If we are interested in the Laplace transform of a function g(t) that is multiplied by an exponential decay term e^{−at} where a is a real number, we find

\mathcal{L}\{e^{-at}\, g(t)\} = G(s + a) \qquad (Eq. 5.21)

Please note that a shift in the frequency-domain thus corresponds to the occurrence of an exponential decay term in the time-domain. This is why this property is often also referred to as the dampening property or the complex shift property.
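The frequency-shift (dampening) property L{e^{−at} g(t)} = G(s + a) can be verified numerically. The sketch below is our own illustration, writing s for the Laplace variable: it approximates the transform integral with a midpoint Riemann sum for g(t) = sin t, whose transform G(s) = 1/(s² + 1) is a standard table pair.

```python
import math

def laplace(g, s, t_max=50.0, n=50000):
    # Midpoint Riemann-sum approximation of G(s) = ∫₀^∞ g(t) e^{-st} dt,
    # truncated at t_max (the integrand must have decayed by then)
    dt = t_max / n
    return sum(g((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt) * dt
               for k in range(n))

a, s = 2.0, 1.5
g = math.sin                          # g(t) = sin t, so G(s) = 1/(s² + 1)
damped = lambda t: math.exp(-a * t) * g(t)

lhs = laplace(damped, s)              # L{e^{-at} g(t)} evaluated at s
rhs = 1.0 / ((s + a) ** 2 + 1.0)      # G(s + a) from the table pair
print(lhs, rhs)
```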
Time-Scaling. The time-scaling property of the Laplace transform states that for a > 0

\mathcal{L}\{g(at)\} = \frac{1}{a}\, G\!\left(\frac{s}{a}\right) \qquad (Eq. 5.22)
Derivative in the Time-Domain. The Laplace transform of the nth derivative of g(t) in the time-domain is given by

\mathcal{L}\left\{\frac{d^n g(t)}{dt^n}\right\} = s^n\, G(s) - s^{n-1}\, g(0) - s^{n-2}\, g'(0) - \ldots - g^{(n-1)}(0) \qquad (Eq. 5.44)

The first three derivatives are therefore given by

\mathcal{L}\{g'(t)\} = s\, G(s) - g(0) \qquad (Eq. 5.45)
\mathcal{L}\{g''(t)\} = s^2\, G(s) - s\, g(0) - g'(0)
\mathcal{L}\{g'''(t)\} = s^3\, G(s) - s^2\, g(0) - s\, g'(0) - g''(0)
Derivative in the Frequency-Domain. The nth derivative in the frequency-domain corresponds to a repeated multiplication by time in the time-domain given by

\mathcal{L}\{t^n\, g(t)\} = (-1)^n\, \frac{d^n G(s)}{ds^n} \qquad (Eq. 5.43)

The first three derivatives are therefore given by

\mathcal{L}\{t\, g(t)\} = -\frac{dG(s)}{ds}
\mathcal{L}\{t^2\, g(t)\} = \frac{d^2 G(s)}{ds^2}
\mathcal{L}\{t^3\, g(t)\} = -\frac{d^3 G(s)}{ds^3}
Time-Integration Property. Integrals in the time-domain with the lower boundary a and the upper boundary t (with a ≤ τ ≤ t) will result in divisions by s in the frequency-domain given by

\mathcal{L}\left\{\int_{a}^{t} g(\tau)\, d\tau\right\} = \frac{G(s)}{s} - \frac{1}{s} \int_{0}^{a} g(\tau)\, d\tau \qquad (Eq. 5.46)

Obviously, for a = 0 this equation simplifies to

\mathcal{L}\left\{\int_{0}^{t} g(\tau)\, d\tau\right\} = \frac{G(s)}{s}
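The special case a = 0 can be checked numerically. The sketch below is our own illustration (s denotes the Laplace variable): for g(t) = cos t we have ∫₀ᵗ cos τ dτ = sin t, so the transform of the integral should equal G(s)/s.

```python
import math

def laplace(g, s, t_max=60.0, n=60000):
    # Midpoint Riemann-sum approximation of G(s) = ∫₀^∞ g(t) e^{-st} dt
    dt = t_max / n
    return sum(g((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt) * dt
               for k in range(n))

s = 2.0
G = laplace(math.cos, s)     # L{cos t} = s/(s² + 1)
integral = math.sin          # ∫₀ᵗ cos τ dτ = sin t
lhs = laplace(integral, s)   # L{∫₀ᵗ g(τ) dτ}
rhs = G / s                  # time-integration property
print(lhs, rhs)  # both ≈ 1/(s² + 1) = 0.2
```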
Convolution in Time-Domain Property. The Laplace transform of a convolution of two functions g1(t) and g2(t) results in a product of the Laplace transforms as given by

\mathcal{L}\left\{\int_{0}^{t} g_1(\tau)\, g_2(t - \tau)\, d\tau\right\} = G_1(s)\, G_2(s)
Initial Value Property. The limit value for t → 0 in the time-domain can be obtained from the frequency-domain as

\lim_{t \to 0^+} g(t) = \lim_{s \to \infty} s\, G(s)
Final Value Property. The limit value for t → ∞ in the time-domain can be obtained from the frequency-domain as

\lim_{t \to \infty} g(t) = \lim_{s \to 0} s\, G(s)
As already stated, it is usually difficult to find the inverse Laplace transform which is why extensive tables have been compiled which serve as lookup tables for the retransformation. Each Laplace transform has a unique time-domain equivalent. The most important ones are summarized in Tab. 5.2. In the following, we will derive a couple of the Laplace transforms noted in Tab. 5.2.
Tab. 5.2
Important Laplace transform pairs. See Eq. 5.15 for the definition of the Laplace transform. Expanded from [2, 6].
Function g(t) | Laplace transform G(s) | Ref. | Comment |
Shifting, scaling, adding, multiplying | | | |
a g1(t) + b g2(t) | a G1(s) + b G2(s) | Eq. 5.18 | linearity property; see section 5.2.3 |
1 | 1/s | Eq. 5.19 | |
Θ(t − a) g(t − a) | e^{−as} G(s) | Eq. 5.20 | time shifting property; see section 5.2.3 |
e^{−at} g(t) | G(s + a) | Eq. 5.21 | frequency shifting property; see section 5.2.3 |
g(at), a > 0 | (1/a) G(s/a) | Eq. 5.22 | time scaling property; see section 5.2.3 |
t g(t) | −dG(s)/ds | Eq. 5.23 | time multiplication |
g1(t) g2(t) | | Eq. 5.24 | frequency-convolution property; see section 5.2.3 |
Delta, Heaviside, and signum function | | | |
δ(t) | 1 | Eq. 5.25 | Delta function; see section 3.2.5 |
δ(t − a) | e^{−as} | Eq. 5.26 | shifted Delta function |
Θ(t) | 1/s | Eq. 5.27 | Heaviside function; see section 3.2.6 |
tⁿ | n!/s^{n+1} | Eq. 5.28 | step nth power function |
t | 1/s² | | step ramp function; see Eq. 5.28 for n = 1 |
t² | 2/s³ | | step parabola function; see Eq. 5.28 for n = 2 |
Decay and exponential functions | | | |
e^{−at} | 1/(s + a) | Eq. 5.29 | exponential decay |
t e^{−at} | 1/(s + a)² | Eq. 5.30 | |
t² e^{−at} | 2/(s + a)³ | | |
(1 − at) e^{−at} | s/(s + a)² | Eq. 5.31 | |
1 − e^{−at} | a/(s(s + a)) | Eq. 5.32 | exponentially approaching function |
| | Eq. 5.33 | |
| | Eq. 5.34 | |
| | Eq. 5.35 | |
| | Eq. 5.36 | |
Trigonometric functions | | | |
sin(at) | a/(s² + a²) | Eq. 5.37 | sine function |
sin(at + b) | (s sin(b) + a cos(b))/(s² + a²) | | |
cos(at) | s/(s² + a²) | Eq. 5.38 | cosine function |
cos(at + b) | (s cos(b) − a sin(b))/(s² + a²) | | |
sinh(at) | a/(s² − a²) | Eq. 5.39 | hyperbolic sine function |
cosh(at) | s/(s² − a²) | Eq. 5.40 | hyperbolic cosine function |
e^{−at} sin(at), a > 0 | a/((s + a)² + a²) | Eq. 5.41 | sine function with exponential decay |
e^{−bt} cos(at), a > 0 | (s + b)/((s + b)² + a²) | Eq. 5.42 | cosine function with exponential decay |
Differential and integrals | | | |
tⁿ g(t) | (−1)ⁿ dⁿG(s)/dsⁿ | Eq. 5.43 | frequency-domain derivative |
t g(t) | −dG(s)/ds | | first frequency-domain derivative; see Eq. 5.43 for n = 1 |
t² g(t) | d²G(s)/ds² | | second frequency-domain derivative; see Eq. 5.43 for n = 2 |
dⁿg/dtⁿ | sⁿ G(s) − s^{n−1} g(0) − … − g^{(n−1)}(0) | Eq. 5.44 | time-domain derivative; see section 5.2.3 |
dg/dt | s G(s) − g(0) | Eq. 5.45 | first time-domain derivative; see section 5.2.3 for n = 1 |
d²g/dt² | s² G(s) − s g(0) − g′(0) | | second time-domain derivative; see section 5.2.3 for n = 2 |
∫ₐᵗ g(τ) dτ | G(s)/s − (1/s) ∫₀ᵃ g(τ) dτ | Eq. 5.46 | time-integration property; see section 5.2.3 |
∫₀ᵗ g(τ) dτ | G(s)/s | | special case of Eq. 5.46 for a = 0 |
Geometric functions | | | |
| | | rectangular function with periodicity 2a |
| | | rectangular function with periodicity 2a |
Θ(t − a) − Θ(t − b) | (e^{−as} − e^{−bs})/s | | pulse function of width b − a with b > a |
Laplace Transform of the Delta Function. The Laplace transform of the Delta function is easy to derive. It is given by Eq. 5.15 as

\mathcal{L}\{\delta(t)\} = \int_{0}^{\infty} \delta(t)\, e^{-st}\, dt = e^{-s \cdot 0} = 1

where we exploit the fact that the Delta function is zero everywhere except for t = 0.
Laplace Transform of a Derivative With Respect to Time. Next we will derive the Laplace transform of the first derivative of g with respect to t (see Eq. 5.45). The Laplace transform is given by

\mathcal{L}\{g'(t)\} = \int_{0}^{\infty} g'(t)\, e^{-st}\, dt

where we need to apply integration by parts (see Eq. 3.15) as we have the product of two functions of t. Using Eq. 3.15 we find

\mathcal{L}\{g'(t)\} = \left[g(t)\, e^{-st}\right]_{0}^{\infty} + s \int_{0}^{\infty} g(t)\, e^{-st}\, dt = -g(0) + s\, G(s)

where the boundary term vanishes at the upper limit because g must not grow faster than exponentially (see the existence property). This is Eq. 5.45.
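This result can be confirmed numerically. The sketch below is our own illustration (s denotes the Laplace variable): for g(t) = cos(at), with g(0) = 1, it compares L{g′} against s·G(s) − g(0), both evaluated by quadrature.

```python
import math

def laplace(g, s, t_max=60.0, n=60000):
    # Midpoint Riemann-sum approximation of G(s) = ∫₀^∞ g(t) e^{-st} dt
    dt = t_max / n
    return sum(g((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt) * dt
               for k in range(n))

a, s = 3.0, 1.0
g = lambda t: math.cos(a * t)         # g(0) = 1
dg = lambda t: -a * math.sin(a * t)   # g'(t)

lhs = laplace(dg, s)                  # L{g'}
rhs = s * laplace(g, s) - g(0)        # s·G(s) − g(0)
print(lhs, rhs)  # both ≈ −a²/(s² + a²) = −0.9
```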
Laplace Transform of a Partial Derivative With Respect to a Variable Different Than Time. Now that we have derived the Laplace transform of a derivative with respect to time, let us consider the partial derivative of a function g(x, t) with respect to a variable different than time. In this case, let us find the Laplace transform of the partial derivative with respect to x, i.e., ∂g(x, t)/∂x. Applying Eq. 5.15 we find

\mathcal{L}\left\{\frac{\partial g(x,t)}{\partial x}\right\} = \int_{0}^{\infty} \frac{\partial g(x,t)}{\partial x}\, e^{-st}\, dt
where we can now apply Leibniz’s rule which tells us that we can change the order of the integration and the differentiation which means

\mathcal{L}\left\{\frac{\partial g(x,t)}{\partial x}\right\} = \frac{\partial}{\partial x} \int_{0}^{\infty} g(x,t)\, e^{-st}\, dt = \frac{\partial}{\partial x}\, G(x,s)
At this point, we can make a subtle but very important modification. If you look closely at the integral operation you will see that during the Laplace transform, the independent variable t is integrated out and thus disappears. The independent variable x is constant during this integration. The term we obtain from the integration will then be a function of s but not of t, which means that we can safely replace the partial differential with a normal differential according to

\mathcal{L}\left\{\frac{\partial g(x,t)}{\partial x}\right\} = \frac{d}{dx}\, G(x,s)
Obviously, this also holds true for higher differentials. This is one of the most important properties of the Laplace transform: it converts partial to normal differentials. As we can imagine, this is a very useful property when solving PDEs.
Leibniz Integral Rule. We have just used the so-called Leibniz rule which allows us to switch the order of the integration and the differentiation. This rule is a special case of the more general Leibniz integral rule which we will shortly introduce. The Leibniz integral rule allows us to convert a derivative of an integral whose boundaries are functions of the differentiation variable into a more useable form. As an example, take a function f which is dependent on x (our integration variable) and a second independent variable z. We now want to find the derivative of the integral \int_{a(z)}^{b(z)} f(x,z)\, dx where a(z) and b(z) are both functions of z. The Leibniz integral rule lets us rewrite this derivative as

\frac{d}{dz} \int_{a(z)}^{b(z)} f(x,z)\, dx = f(b(z),z)\, \frac{db(z)}{dz} - f(a(z),z)\, \frac{da(z)}{dz} + \int_{a(z)}^{b(z)} \frac{\partial f(x,z)}{\partial z}\, dx \qquad (Eq. 5.48)
As you can see, we require the derivatives of both a(z) and b(z) with respect to z as well as the values of the function f(a(z), z) and f(b(z), z). Eq. 5.48 is generally applicable. In the case we just discussed, a and b were constants, in which case da/dz = db/dz = 0, which simplifies Eq. 5.48 to

\frac{d}{dz} \int_{a}^{b} f(x,z)\, dx = \int_{a}^{b} \frac{\partial f(x,z)}{\partial z}\, dx

which tells us that we are perfectly allowed to exchange the order of the integration and the differentiation as we have done.
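The general Leibniz integral rule can be spot-checked numerically. The sketch below is our own illustration with the arbitrarily chosen integrand f(x, z) = x·z and boundaries a(z) = z, b(z) = z²: it compares a finite-difference derivative of the integral with the right-hand side of the rule.

```python
def quad(func, a, b, n=4000):
    # midpoint-rule quadrature of func over [a, b]
    h = (b - a) / n
    return sum(func(a + (k + 0.5) * h) * h for k in range(n))

f  = lambda x, z: x * z   # integrand, chosen for illustration
fz = lambda x, z: x       # ∂f/∂z
a  = lambda z: z          # lower boundary a(z), with a'(z) = 1
b  = lambda z: z * z      # upper boundary b(z), with b'(z) = 2z

def I(z):
    return quad(lambda x: f(x, z), a(z), b(z))

z = 1.5
# left-hand side: derivative of the integral, by central difference
h = 1e-5
lhs = (I(z + h) - I(z - h)) / (2 * h)
# right-hand side: Leibniz integral rule
rhs = f(b(z), z) * (2 * z) - f(a(z), z) * 1.0 + quad(lambda x: fz(x, z), a(z), b(z))
print(lhs, rhs)
```

For this integrand I(z) = (z⁵ − z³)/2, so both sides equal (5z⁴ − 3z²)/2.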
The Laplace transform is one of the most important tools used for solving ODEs and specifically PDEs, as it converts partial differentials into regular differentials, as we have just seen. In general, the Laplace transform is used for applications in the time-domain for t ≥ 0. However, the transformation variable need not necessarily be time. It can be any independent variable x on the domain from 0 to ∞.
Compared to the Fourier series, the Laplace transform generates nonperiodic solutions. As we have seen in section 4.3.3, Fourier series of nonperiodic functions will always expand to periodic series. Once these series are used to solve differential equations, the solutions will also be periodic. This can be problematic because the solutions may interfere. We will see a very prominent example of this problem when studying the Aris-Taylor dispersion in section 19.9.3. In that section we will create the Fourier series of the balanced rectangular function, which is a rectangular pulse sequence with sufficient spacing between the pulses. The spacing is required as we only require the solution of a single pulse in a given domain. However, all other pulses will also become solutions to our original differential equation. If the pulses are not sufficiently spaced, they may overlap, which results in the generation of false solutions. This is an intrinsic problem of Fourier series solutions. The Laplace transform, on the other hand, generates nonperiodic solutions and could therefore be used in a similar scenario without the risk of interference from such artifacts.
We will illustrate the usability of the Laplace transform in section 8.2.5 where we discuss an example using the Laplace transform to solve an ODE. In section 8.3.7 we will use the Laplace transform for solving a PDE.
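As a small preview of that workflow, the sketch below is our own illustration (not the example from section 8.2.5): it solves the simple ODE y′ + a·y = 0 with y(0) = y0 by transforming, solving algebraically for Y(s), reading the inverse from the table, and numerically confirming that the candidate solution transforms back to Y(s).

```python
import math

def laplace(g, s, t_max=40.0, n=40000):
    # Midpoint Riemann-sum approximation of G(s) = ∫₀^∞ g(t) e^{-st} dt
    dt = t_max / n
    return sum(g((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt) * dt
               for k in range(n))

# ODE: y' + a·y = 0 with y(0) = y0.
# Transforming with the derivative property: s·Y(s) − y0 + a·Y(s) = 0,
# hence Y(s) = y0 / (s + a), which the table identifies as y(t) = y0·e^{-at}.
a, y0 = 0.5, 3.0
y = lambda t: y0 * math.exp(-a * t)   # candidate solution from the table

s = 2.0
print(laplace(y, s), y0 / (s + a))    # both ≈ 1.2
```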
In this section, we have introduced the Fourier and the Laplace transforms, two very important tools for solving differential equations. As we have seen, these transforms allow the significantly simplified derivation of solutions to differential equations given that we are able to find the retransformations. We will see examples of how these transforms can be used to solve differential equations in section 8.