1.1. Introduction
In this second part of the book, we will concentrate on the representation and processing of continuous-time signals. Such signals are familiar to us: voice, music, images, and video coming from radios, cell phones, iPods, and MP3 players exemplify them. Clearly each of these signals carries some type of information, but what is not clear is how we could capture, represent, and perhaps modify these signals and their information content.
To process signals we need to understand their nature—to classify them—so as to clarify the limitations of our analysis and our expectations. Several realizations could then come to mind. One could be that almost all signals vary randomly and continuously with time. Consider a voice signal. If you are able to capture such a signal, by connecting a microphone to your computer and using the hardware and software necessary to display it, you realize that when you speak into the microphone a rather complicated signal that changes in unpredictable ways is displayed. You would ask yourself how it is that your spoken words are converted into this signal, and how it could be represented mathematically to allow you to develop algorithms to change it. In this book we consider the representation of deterministic—rather than random—signals, clearly a first step in the long process of answering these significant questions.
A second realization could be that to input signals into a computer the signals must be in binary form. How do we convert the voltage signal generated by the microphone into a binary form? This requires that we compress the information in a way that permits us to get it back, as when we wish to listen to the voice signal stored in the computer.
One more realization could be that the processing of signals requires us to consider systems. In our example, one could think of the human vocal system and of a microphone as a system that converts differences in air pressure into a voltage signal. Signals and systems go together. We will consider the interaction of signals and systems in the next chapter.
Specifically in this chapter we will discuss the following issues:
■ The mathematical representation of signals—Generally, how to think of a signal as a function of either time (e.g., music and voice signals), space (e.g., images), or of time and space (e.g., videos). In this book we will concentrate on time-dependent signals.
■ Classification of signals—Using practical characteristics of signals we offer a classification of signals indicating the way a signal is stored, processed, or both. As indicated, this second part of the book will concentrate on the representation and analysis of continuous-time signals and systems, while the next part will discuss the representation and analysis of discrete-time signals and systems.
■ Signal manipulation—What it means to delay or advance a signal, to reflect it, or to find its odd or even components. These operations will help us in the representation and processing of signals.
■ Basic signal representation—We show that any signal can be represented using basic signals. This will permit us to highlight certain characteristics of the signal and to simplify finding the corresponding outputs of systems. In particular, the representation in terms of sinusoids is of great interest as it allows the development of the so-called Fourier representation, which is essential in the development of the theory of linear systems.
1.3. Continuous-Time Signals
That signals are functions of time carrying information is easily illustrated with a recorded voice signal. Such a signal can be thought of as a continuously varying voltage, generated by a microphone, that can be transformed into an audible acoustic signal—providing the voice information—by means of an amplifier and speakers. Thus, the speech signal is represented by a function of time
v(t),   tb ≤ t ≤ tf,   (1.1)
where tb is the time at which this signal starts and tf the time at which it ends. The function v(t) varies continuously with time, and its amplitude can take any possible value (as long as the speakers are not too loud!). This signal obviously carries the information provided by the voice message.
Not all signals are functions of time alone. A digital image stored in a computer provides visual information. The intensity of the illumination of the image depends on its location within the image. Thus, a digital image can be represented as a function of two space variables (m, n) that vary discretely, creating an array of values called picture elements or pixels. The visual information in the image is thus provided by the signal p(m, n), where 0 ≤ m ≤ M − 1 and 0 ≤ n ≤ N − 1 for an image of size M × N pixels. Each of the pixel values can be represented, for instance, by 256 gray-scale values or 8 bits/pixel. Thus, the signal p(m, n) varies discretely in space and in amplitude. A video, as a sequence of images in time, is accordingly a function of time and of two space variables. Signals are thus characterized by how their time or space variables and their amplitudes vary.
For a time-dependent signal, time and amplitude vary continuously or discretely. Thus, according to the independent variable, signals are continuous-time or discrete-time signals—that is, t takes values in a continuum or in a countable set. Likewise, the amplitude of either a continuous-time or a discrete-time signal can vary continuously or discretely. Thus, continuous-time signals can be continuous-amplitude as well as discrete-amplitude signals. Continuous-amplitude, continuous-time signals are called analog signals, given that they resemble the pressure variations caused by an acoustic signal. A continuous-amplitude, discrete-time signal is called a discrete-time signal. A digital signal has discrete time and discrete amplitude. If the samples of a digital signal are given as binary codes, the signal is called a binary signal.
A good way to illustrate the signal classification is to consider the steps needed to process the voice signal v(t) in Equation (1.1) with a computer. As indicated above, in v(t) time varies continuously between tb and tf, and the amplitude also varies continuously; we assume it could take any possible real value (i.e., v(t) is an analog signal). As such, v(t) cannot be processed with a computer: it would require storing an innumerable number of signal values (even when tb is very close to tf), and an accurate representation of the amplitude values v(t) might need a large number of bits. Thus, it is necessary to reduce the amount of data without losing the information provided by the signal. To accomplish that, we sample the signal by taking signal values at equally spaced times nTs, where n is an integer and Ts is the sampling period, which is appropriately chosen for this signal (in Chapter 7 we will learn how to choose Ts).
As a result of the sampling, we obtain the discrete-time signal v(nTs), where Ts = (tf − tb)/N and we have taken samples at the times tb + nTs. Clearly, this discretization of the time variable reduces the number of values to enter into the computer, but the amplitudes of these samples can still take innumerably many values. Now, to represent each of the v(nTs) values with a certain number of bits, we also discretize the amplitude of the samples. To do so, the dynamic range (the difference between the maximum and the minimum amplitude) of the analog signal is divided into a certain number of equal levels, and a sample value falling within one of these levels is allocated a unique binary code. For instance, if we want each sample to be represented by 8 bits, we have 2^8 or 256 possible levels. These operations are called quantization and coding. The resulting signal is digital, where each sample is represented as a binary number.
Given that many of the signals we encounter in practical applications are analog, if we wish to process such signals with a computer, the above procedure is commonly applied. The device that converts an analog signal into a digital signal is called an analog-to-digital converter (ADC), and it is characterized by the number of samples it takes per second (the sampling rate 1/Ts) and by the number of bits it allocates to each sample. To convert a digital signal into an analog signal, a digital-to-analog converter (DAC) is used. Such a device inverts the ADC process: binary values are converted into pulses with amplitudes approximating those of the original samples, which are then smoothed out, resulting in an analog signal. We will discuss in Chapter 7 how the sampling, binary representation, and reconstruction of an analog signal are done. Figure 1.1 shows how the discretization of an analog signal in time and amplitude can be understood, while Figure 1.2 illustrates the sampling and quantization of a segment of speech.
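The discretization just described can be sketched in a few lines of MATLAB. The following script is not from the text; the test signal, sampling period, and number of bits are assumptions chosen only to illustrate sampling followed by uniform quantization, in the spirit of Figures 1.1 and 1.2.
% Sketch (assumed parameters) of sampling and 8-bit uniform quantization
t = 0:0.0001:0.01;               % finely spaced time, standing in for continuous time
x = sin(2*pi*440*t);             % assumed analog signal: a 440-Hz tone
Ts = 0.001;                      % assumed sampling period
ts = 0:Ts:0.01;
xs = sin(2*pi*440*ts);           % sampled signal x(nTs)
B = 8; L = 2^B;                  % 8 bits per sample gives 2^8 = 256 levels
xmin = min(xs); q = (max(xs) - xmin)/L;      % quantization step over the dynamic range
xq = xmin + q*floor((xs - xmin)/q);          % quantized samples, ready to be coded
plot(t,x); hold on; stem(ts,xq,'filled'); hold off; grid
xlabel('t (sec)')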
A continuous-time signal can be thought of as a real- (or complex-) valued function of time, x(t), −∞ < t < ∞. Thus, the independent variable is time t, and the value of the function at some time t0, x(t0), is a real (or a complex) value. (Although in practice signals are real, it is useful in theory to have the option of complex-valued signals.) It is assumed that both time t and signal amplitude x(t) can vary continuously, if needed, from −∞ to ∞.
The term analog used for continuous-time signals derives from the similarity of such signals to the physical quantities they represent—for instance, the pressure variations generated by voice, music, or any other acoustic source. The terms continuous-time and analog are used interchangeably for these signals.
Characterize the sinusoidal signal x(t) = √2 cos(πt/2 + π/4), −∞ < t < ∞.
Solution
The signal x(t) is
■ Deterministic, as the value of the signal can be obtained for any possible value of t.
■ Analog, as there is a continuous variation of the time variable
t from −∞ to ∞, and of the amplitude of the signal between
to
.
■ Of infinite support, as the signal does not become zero outside any finite interval.
The amplitude of the sinusoid is √2, its frequency is Ω = π/2 (rad/sec), and its phase is π/4 rad (notice that Ωt has radians as units so that it can be added to the phase). Because of the infinite support, this signal cannot exist in practice, but we will see that sinusoids are extremely important in the representation and processing of signals.
A complex signal y(t) is defined as y(t) = (1 + j)e^{jπt/2} for 0 ≤ t ≤ 10, and zero otherwise. Express y(t) in terms of the signal x(t) from Example 1.1. Characterize y(t).
Solution
Since 1 + j = √2 e^{jπ/4}, then using Euler's identity,
y(t) = √2 e^{j(πt/2 + π/4)} = √2 cos(πt/2 + π/4) + j√2 sin(πt/2 + π/4), 0 ≤ t ≤ 10.
Thus, the real and imaginary parts of this signal are √2 cos(πt/2 + π/4) and √2 sin(πt/2 + π/4) for 0 ≤ t ≤ 10, and zero otherwise. The signal y(t) can be written as y(t) = x(t) + jx(t − 1) for 0 ≤ t ≤ 10 and zero otherwise. Notice that y(t) is:
■ Analog of finite support—that is, the signal is zero outside the interval 0 ≤ t ≤ 10.
■ Complex, composed of two sinusoids of frequency Ω = π/2 rad/sec, phase π/4 rad, and amplitude √2, in 0 ≤ t ≤ 10, and zero outside that time interval.
Consider the pulse signal p(t) = 1 for 0 ≤ t ≤ 10, and zero elsewhere. Characterize this signal, and use it, along with x(t) in Example 1.1, to represent y(t) in the above example.
Solution
The analog signal p(t) is of finite support and real-valued. We have that y(t) = x(t)p(t) + jx(t − 1)p(t). The multiplication by p(t) makes x(t)p(t) and x(t − 1)p(t) finite-support signals. This operation is called time windowing, as the signal p(t) only allows us to see the values of x(t) wherever p(t) = 1, while ignoring the values of x(t) wherever p(t) = 0. It acts like a window.
Example 1.1, Example 1.2, and Example 1.3 not only illustrate how different types of signals can be related to each other, but also how signals can be defined in shorter or more precise forms. Although the representations for y(t) in Example 1.2 and in this example are equivalent, the one here is shorter and easier to visualize by the use of the pulse p(t).
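A short numerical sketch of the windowing idea follows; it assumes the sinusoid of Example 1.1 and the unit pulse p(t), and plots the real part of y(t), x(t)p(t). It is only an illustration of the operation, not code from the original example.
% Sketch of time windowing, assuming x(t) = sqrt(2)*cos(pi*t/2 + pi/4)
t = -5:0.01:15;
x = sqrt(2)*cos(pi*t/2 + pi/4);      % assumed x(t) from Example 1.1
p = double((t >= 0) & (t <= 10));    % pulse p(t) = 1 for 0 <= t <= 10
plot(t, x, ':', t, x.*p); grid
xlabel('t'); legend('x(t)', 'x(t)p(t)')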
1.3.1. Basic Signal Operations—Time Shifting and Reversal
The following are basic signal operations used in the representation and processing of signals (for some of these operations we indicate the system that is used to realize the operation):
■ Signal addition—Two signals x(t) and y(t) are added to obtain their sum z(t). An adder is used.
■ Constant multiplication—A signal x(t) is multiplied by a constant α. A constant multiplier is used.
■ Time and frequency shifting—The signal x(t) is delayed τ seconds to get x(t − τ), and advanced by τ to get x(t + τ). A signal can be shifted in frequency, or frequency modulated, by multiplying it by a complex exponential or a sinusoid. A delay shifts a time signal to the right, while a modulator shifts the signal in frequency.
■ Time scaling—The time variable of a signal x(t) is scaled by a constant α to give x(αt). If α = −1, the signal is reversed in time (i.e., x(−t)), or reflected. Only the delay can be implemented in practice.
■ Time windowing—A signal x(t) is multiplied by a window signal w(t) so that x(t) is available in the support of w(t).
Given the simplicity of the first two operations we will only discuss the others. In this section we consider time shifting and reflection (a special case of the time scaling) and leave the rest for a later section.
In Figure 1.3 we show the diagrams used for the implementation of the addition of two signals, the multiplication of a signal by a constant, the delay of a signal, and the time windowing or modulation of a signal. These will be used in the block diagrams for systems in the next chapters.
It is important to understand that advancing or reflecting cannot be implemented in real time—that is, as the signal is being processed. Delays can be implemented in real time. Advancing and reflection require that the signal be saved or recorded. Thus, an acoustic signal recorded on magnetic tape can be delayed or advanced with respect to an initial time, or played back faster or slower, but it can only be delayed if the signal is coming from a live microphone.
We will see later in this chapter that shifting in frequency results in the process of signal modulation, which is of great significance in communications. Scaling of the time variable results in a contracted and expanded version of the original signal and causes changes in the frequency content of the signal.
■ For a positive value τ, a signal x(t − τ) is the original signal x(t) shifted right or delayed by τ seconds, as illustrated in Figure 1.4(b). That the original signal has been shifted to the right can be verified by noting that the value x(0) of the original signal appears in the delayed signal at t = τ (which results from making t − τ = 0).
■ Likewise, a signal x(t + τ) is the original signal x(t) shifted left or advanced by τ seconds, as illustrated in Figure 1.4(c). The original signal is now shifted to the left—that is, the value x(0) of the original signal now occurs earlier (i.e., it has been advanced) at time t = −τ.
■ Reflection consists in negating the time variable. Thus, the reflection of x(t) is x(−t). This operation can be visualized as flipping the signal about the origin. See Figure 1.4(d).
Given an analog signal x(t) and τ > 0 we have that with respect to x(t):
(a) x(t − τ) is delayed or shifted right τ seconds.
(b) x(t + τ) is advanced or shifted left τ seconds.
(c) x(−t) is reflected.
(d) x(−t − τ) is reflected and shifted left τ seconds, while x(−t + τ) is reflected and shifted right τ seconds.
Remarks
Whenever we combine the delaying or advancing with reflection, delaying and advancing are swapped. Thus, x(−t + 1) is x(t) reflected and delayed, or shifted to the right, by 1. Likewise, x(−t − 1) is x(t) reflected and advanced, or shifted to the left by 1. Again, the value x(0) of the original signal is found in x(−t + 1) at t = 1, and in x(−t − 1) at t = −1.
Consider an analog pulse x(t) = 1 for 0 ≤ t ≤ 1, and zero otherwise. Find mathematical expressions for x(t) delayed by 2, advanced by 2, and for the reflected signal x(−t).
Solution
The delayed signal x(t − 2) can be found mathematically by replacing the variable t by t − 2, so that x(t − 2) = 1 for 0 ≤ t − 2 ≤ 1, or 2 ≤ t ≤ 3, and zero otherwise. The value x(0) (which in x(t) occurs at t = 0) now occurs in x(t − 2) when t = 2, so that the signal x(t) has been shifted to the right two units of time, and since the values are occurring later, the signal x(t − 2) is said to be "delayed" by 2 with respect to x(t).
The signal x(t + 2) can be seen to be the advanced version of x(t), as it is this signal shifted to the left by two units of time. The value x(0) for x(t + 2) now occurs at t = −2, which is ahead of t = 0.
Finally, the signal x(−t) is given by x(−t) = 1 for −1 ≤ t ≤ 0, and zero otherwise. This signal is a mirror image of the original: the value x(0) still occurs at the same time, but x(1) now occurs at t = −1.
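The shifted and reflected pulses can also be displayed numerically. The following sketch assumes the unit pulse of this example and simply evaluates x(t − 2), x(t + 2), and x(−t) on a time grid; it is an illustration, not code from the text.
% Sketch: plotting x(t), x(t-2), x(t+2), and x(-t) for the unit pulse
t = -4:0.01:4;
pulse = @(s) double((s >= 0) & (s <= 1));   % x(t) = 1 for 0 <= t <= 1
subplot(411); plot(t, pulse(t));   axis([-4 4 -0.2 1.2]); ylabel('x(t)')
subplot(412); plot(t, pulse(t-2)); axis([-4 4 -0.2 1.2]); ylabel('x(t-2)')
subplot(413); plot(t, pulse(t+2)); axis([-4 4 -0.2 1.2]); ylabel('x(t+2)')
subplot(414); plot(t, pulse(-t));  axis([-4 4 -0.2 1.2]); ylabel('x(-t)'); xlabel('t')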
When the shifting and reflecting operations are considered together, the best approach to visualize the operation is to make a table computing several values of the new signal and comparing these with those from the original signal. Consider the pulse in Example 1.4 and plot the signal x(−t + 2).
Solution
Although one can see that this signal is reflected, it is not clear whether it is advanced or delayed by 2. By computing a few values:
t | x(−t + 2) |
---|---|
2 | x(0) = 1 |
1.5 | x(0.5) = 1 |
1 | x(1) = 1 |
0 | x(2) = 0 |
−1 | x(3) = 0 |
it becomes clear that x(−t + 2) is reflected and "delayed" by 2. In fact, as indicated above, whenever the signal is a function of −t (i.e., reflected), the −t + τ operation becomes reflection and "delay," and −t − τ becomes reflection and "advancing."
Remarks
When computing the convolution integral later on, we will consider the signal x(t − τ) as a function of τ for different values of t. As indicated in Example 1.5, this signal is a reflected version of x(τ) shifted to the right by t seconds. To see this, consider t = 0; then x(t − τ)|t=0 = x(−τ), the reflected version, and x(0) occurs at τ = 0. When t = 1, then x(t − τ)|t=1 = x(1 − τ) and x(0) occurs at τ = 1, so that x(1 − τ) is x(−τ) shifted to the right by 1, and so on.
1.3.2. Even and Odd Signals
Symmetry with respect to the origin differentiates signals and will be useful in their Fourier analysis. We have that an analog signal x(t) is called:
■ Even whenever x(t) coincides with its reflection x(−t). Such a signal is symmetric with respect to the time origin.
■ Odd whenever x(t) coincides with −x(−t)—that is, the negative of its reflection. Such a signal is asymmetric with respect to the time origin.
Even and odd signals are thus defined by the conditions x(t) = x(−t) and x(t) = −x(−t), respectively.
Even and odd decomposition: Any signal y(t) is representable as a sum of an even component ye(t) and an odd component yo(t):
y(t) = ye(t) + yo(t).
Using the definitions of even and odd signals, any signal y(t) can be decomposed into the sum of an even and an odd function. Indeed, the following is an identity:
y(t) = 0.5[y(t) + y(−t)] + 0.5[y(t) − y(−t)],
where the first term is the even component ye(t) and the second is the odd component yo(t) of y(t). It can be easily verified that ye(t) is even and that yo(t) is odd.
Consider the analog signal x(t) = cos(2πt + θ), −∞ < t < ∞. Determine the values of θ for which x(t) is even and odd. If θ = π/4, is x(t) = cos(2πt + π/4), −∞ < t < ∞, even or odd?
Solution
The reflection of x(t) is x(−t) = cos(−2πt + θ) = cos(2πt − θ). Then:
1. x(t) is even if x(t) = x(−t), or cos(2πt + θ) = cos(2πt − θ), which requires θ = −θ, or θ = 0, π. Thus, x1(t) = cos(2πt) as well as x2(t) = cos(2πt + π) = −cos(2πt) are even.
2. For x(t) to be odd, we need that x(t) = −x(−t), or cos(2πt + θ) = −cos(2πt − θ), which can be obtained with θ = −θ ∓ π, or θ = ∓π/2. Indeed, cos(2πt − π/2) = sin(2πt) and cos(2πt + π/2) = −sin(2πt) are both odd. Thus, x3(t) = ±sin(2πt) is odd.
When θ = π/4, x(t) = cos(2π t + π/4) is neither even nor odd according to the above.
Consider the signal x(t), a causal sinusoid that is zero for t ≤ 0, so that in particular x(0) = 0. Find its even and odd decomposition. What would happen if x(0) = 2 instead of 0—that is, when we define the sinusoid at t = 0? Explain.
Solution
The signal x(t) is neither even nor odd given that its values for t ≤ 0 are zero. For its even–odd decomposition, the even component is xe(t) = 0.5[x(t) + x(−t)] and the odd component is xo(t) = 0.5[x(t) − x(−t)], which when added together give back the given signal. If we define x(0) = 2 instead of 0, only the even component changes—its value at t = 0 becomes 2 rather than 0—while the odd component is the same. The even component has a discontinuity at t = 0.
1.3.3. Periodic and Aperiodic Signals
A useful characterization of signals is whether they are periodic or aperiodic (nonperiodic).
An analog signal x(t) is periodic if
■ it is defined for all possible values of t, −∞ < t < ∞, and
■ there is a positive real value T0, the period of x(t), such that x(t + kT0) = x(t) for any integer k.
The period of x(t) is the smallest possible value of T0 > 0 that makes the periodicity possible. Thus, although NT0 for an integer N > 1 is also a period of x(t), it should not be considered the period.
Remarks
■ The infinite support and the unique characteristic of the period make periodic signals nonexistent in practical applications. Despite this, periodic signals are of great significance in the Fourier representation of signals and in their processing, as we will see later. The representation of aperiodic signals is obtained from that of periodic signals, and the response of systems to periodic sinusoids is fundamental in the theory of linear systems.
■ Although seemingly redundant, the first part of the definition of a periodic signal indicates that it is not possible to have a nonzero periodic signal with a finite support (i.e., the analog signal is zero outside an interval t ∈ [t1, t2]). This first part of the definition is needed for the second part to make sense.
■ It is exasperating to find the period of a constant signal x(t) = A; visually x(t) is periodic but its period is not clear. Any positive value could be considered the period, but none will be taken. The reason is that x(t) = A = A cos(0t) or of zero frequency, and as such its period is not determined since we would have to divide by zero—not permitted. Thus, a constant signal is a periodic signal of nondefinable period!
Consider the analog sinusoid x(t) = A cos(Ω0t + θ), −∞ < t < ∞. Determine the period of this signal, and indicate for what frequency Ω0 the period of x(t) is not clearly defined.
Solution
The analog frequency is Ω0 = 2π/T0, so T0 = 2π/Ω0 is the period. Whenever T0 > 0 (or Ω0 > 0) these sinusoids are periodic. For instance, consider a sinusoid of frequency Ω0 = 2 (rad/sec). Its period is found by noticing that Ω0 = 2 = 2πf0 (rad/sec), or the hertz frequency is f0 = 1/π = 1/T0, so that T0 = π is the period in seconds. That this is the period can be seen by taking an integer N and noting that x(t + NT0) = x(t), since adding 2πN (a multiple of 2π) to the angle of the cosine gives the original angle. If Ω0 = 0—that is, dc frequency—the period cannot be defined because of the division by zero when finding T0 = 2π/Ω0.
Consider a periodic signal x(t) of period T0. Determine whether the following signals are periodic, and if so, find their corresponding periods:
(a) y(t) = A + x(t), where A is a constant.
(b) z(t) = x(t) + v(t), where v(t) is periodic of period T1 = NT0, where N is a positive integer.
(c) w(t) = x(t) + u(t), where u(t) is periodic of period T1, not necessarily a multiple of T0. Determine under what conditions w(t) could be periodic.
Solution
(a) Adding a constant to a periodic signal does not change the periodicity, so y(t) is periodic of period T0—that is, for an integer k, y(t + kT0) = A + x(t + kT0) = A + x(t) since x(t) is periodic of period T0.
(b) The period T1 = NT0 of v(t) is also a period of x(t), and so z(t) is periodic of period T1 since for any integer k,
z(t + kT1) = x(t + kNT0) + v(t + kT1) = x(t) + v(t) = z(t),
given that v(t + kT1) = v(t), and that kN is an integer so that x(t + kNT0) = x(t). The periodicity can be visualized by considering that in one period of v(t) we can place N periods of x(t).
(c) The condition for w(t) to be periodic is that the ratio of the periods of x(t) and of u(t) be a rational number, T0/T1 = M/N, where N and M are positive integers not divisible by each other, so that MT1 = NT0 becomes the period of w(t). That is,
w(t + MT1) = x(t + MT1) + u(t + MT1) = x(t + NT0) + u(t) = x(t) + u(t) = w(t).
Let x(t) = e^{j2t} and y(t) = e^{jπt}, and consider their sum z(t) = x(t) + y(t), and their product w(t) = x(t)y(t). Determine if z(t) and w(t) are periodic, and if so, find their periods. Is p(t) = (1 + x(t))(1 + y(t)) periodic?
Solution
According to Euler's identity,
x(t) = e^{j2t} = cos(2t) + j sin(2t) and y(t) = e^{jπt} = cos(πt) + j sin(πt),
indicating that x(t) is periodic of period T0 = π (the frequency of x(t) is Ω0 = 2 = 2π/T0) and y(t) is periodic of period T1 = 2 (the frequency of y(t) is Ω1 = π = 2π/T1).
For z(t) to be periodic requires that T1/T0 be a rational number, which is not the case as T1/T0 = 2/π. So z(t) is not periodic.
The product is w(t) = x(t)y(t) = e^{j(2+π)t} = cos(Ω2t) + j sin(Ω2t), where Ω2 = 2 + π = 2π/T2 so that T2 = 2π/(2 + π); thus w(t) is periodic of period T2.
The terms 1 + x(t) and 1 + y(t) are periodic of periods T0 = π and T1 = 2, and from the case of the product above one might expect their product to be periodic. But since p(t) = 1 + x(t) + y(t) + x(t)y(t) and x(t) + y(t) is not periodic, p(t) is not periodic.
■ Analog sinusoids of frequency Ω0 > 0 are periodic of period T0 = 2π/Ω0. If Ω0 = 0, the period is not well defined.
■ The sum of two periodic signals x(t) and y(t), of periods T1 and T2, is periodic if the ratio of the periods T1/T2 is a rational number N/M, with N and M being nondivisible. The period of the sum is MT1 = NT2.
■ The product of two sinusoids is periodic. The product of two periodic signals is not necessarily periodic.
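These conditions can be visualized numerically. The following sketch (an illustration with an assumed time grid, not code from the text) plots a sum of sinusoids with harmonically related frequencies, cos(2πt) + cos(4πt), and the sum cos(2πt) + cos(2t), whose frequencies 2π and 2 have an irrational period ratio: the first trace repeats every second, while the second never repeats exactly.
% Sketch: periodic versus nonperiodic sums of sinusoids
t = 0:0.001:6;
x = cos(2*pi*t) + cos(4*pi*t);   % periods 1 and 1/2: periodic of period 1
y = cos(2*pi*t) + cos(2*t);      % ratio of periods 1/pi is irrational: not periodic
subplot(211); plot(t,x); grid; ylabel('x(t)')
subplot(212); plot(t,y); grid; ylabel('y(t)'); xlabel('t')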
1.3.4. Finite-Energy and Finite-Power Signals
Another possible classification of signals is based on their energy and power. The concepts of energy and power introduced in circuit theory can be extended to any signal. Recall that for a resistor of unit resistance its instantaneous power is given by
p(t) = v(t)i(t) = i²(t) = v²(t),
where i(t) and v(t) are the current and voltage in the resistor. The energy in the resistor for an interval [t0, t1], of duration T = t1 − t0, is the accumulation of the instantaneous power over that time interval,
ET = ∫_{t0}^{t1} p(t) dt,
and the power in the interval T = t1 − t0 is the average energy
PT = ET/T,
corresponding to the heat dissipated by the resistor (and for which you pay the electric company). The energy and power concepts can thus be easily generalized.
The energy and the power of an analog signal x(t) are defined for either finite- or infinite-support signals as
Ex = ∫_{−∞}^{∞} |x(t)|² dt,   Px = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} |x(t)|² dt.
The signal x(t) is then said to be finite energy, or square integrable, whenever Ex < ∞. The signal is said to have finite power if Px < ∞.
Remarks
■ The above definitions of energy and power are valid for any signal of finite or infinite support, since a finite-support signal is zero outside its support.
■ In the formulas for energy and power we are considering the possibility that the signals might be complex and so we are squaring its magnitude: If the signal being considered is real, this simply is equivalent to squaring the signal.
■ According to the above definitions, a finite-energy signal has zero power. Indeed, if the energy of the signal is some constant Ex < ∞, then Px = lim_{T→∞} Ex/(2T) = 0.
■ An analog signal x(t) is said to be absolutely integrable if x(t) satisfies the condition ∫_{−∞}^{∞} |x(t)| dt < ∞.
Find the energy and the power of the following:
(a) The periodic signal x(t) = cos(π t/2 + π/4).
(b) The complex signal y(t) = (1 + j)e^{jπt/2}, for 0 ≤ t ≤ 10 and zero otherwise.
(c) The pulse z(t) = 1, for 0 ≤ t ≤ 10 and zero otherwise.
Determine whether these signals are finite energy, finite power, or both.
Solution
The energy in these signals is computed as follows:
Ex = ∫_{−∞}^{∞} cos²(πt/2 + π/4) dt → ∞,
Ey = ∫_{0}^{10} |(1 + j)e^{jπt/2}|² dt = 2(10) = 20,
Ez = ∫_{0}^{10} 1² dt = 10,
where we used |(1 + j)e^{jπt/2}|² = |1 + j|²|e^{jπt/2}|² = |1 + j|² = 2. Thus, x(t) is an infinite-energy signal, while y(t) and z(t) are finite-energy signals. The power of y(t) and of z(t) is zero because they have finite energy. The power of x(t) can be calculated by using the periodicity of the squared signal and letting T = NT0, where T0 = 4 is the period of x(t):
Px = lim_{N→∞} (1/(2NT0)) ∫_{−NT0}^{NT0} cos²(πt/2 + π/4) dt = (1/T0) ∫_{0}^{T0} cos²(πt/2 + π/4) dt.
Using the trigonometric identity cos²(θ) = 0.5 cos(2θ) + 0.5, we get
Px = (1/T0) ∫_{0}^{T0} [0.5 cos(πt + π/2) + 0.5] dt.
The first integral is the area of the sinusoid over two of its periods, thus zero, so Px = 0.5. We thus have that x(t) is a finite-power but infinite-energy signal, while y(t) and z(t) are finite-power and finite-energy signals.
Consider an aperiodic signal x(t) = e^{−at}, a > 0, for t ≥ 0 and zero otherwise. Find the energy and the power of this signal and determine whether the signal is finite energy, finite power, or both.
Solution
The energy of x(t) is given by
Ex = ∫_{0}^{∞} e^{−2at} dt = 1/(2a) < ∞
for any value of a > 0. The power of x(t) is then zero. Thus, x(t) is a finite-energy and finite-power signal.
Consider the following analog signal, which we call a causal sinusoid because it is zero for t < 0:
x(t) = 2 cos(4t − π/4) for t ≥ 0, and zero otherwise.
This is the kind of signal that you would get from a signal generator that is started at a certain initial time (in this case 0) and that continues until the signal generator is switched off (in this case possibly infinity). Determine if this signal is finite energy, finite power, or both.
Solution
Clearly, the analog signal x(t) has infinite energy:
Ex = ∫_{0}^{∞} 4 cos²(4t − π/4) dt → ∞.
Although this signal has infinite energy, it has finite power. Letting T = NT0, where T0 is the period of 2 cos(4t − π/4) (or T0 = 2π/4), its power is
Px = lim_{N→∞} (1/(2NT0)) ∫_{0}^{NT0} 4 cos²(4t − π/4) dt = 1,
which is a finite value, and therefore the signal has finite power but infinite energy.
As we will see later in the Fourier series representation, any periodic signal is representable as a possibly infinite sum of sinusoids of frequencies that are multiples of the fundamental frequency of the periodic signal being represented. These frequencies are said to be harmonically related, and in this case the power of the signal can be shown to be the sum of the powers of each of the sinusoidal components—that is, there is superposition of the power. This superposition is still possible when a sum of sinusoids creates a nonperiodic signal. This is illustrated in Example 1.14.
Consider the signals x(t) = cos(2π t) + cos(4π t) and y(t) = cos(2π t) + cos(2t), −∞ < t < ∞. Determine if these signals are periodic, and if so, find their periods. Compute the power of these signals.
Solution
The sinusoids cos(2πt) and cos(4πt) have periods T1 = 1 and T2 = 1/2, so x(t) is periodic since T1/T2 = 2, with period T1 = 2T2 = 1. The two frequencies are harmonically related. The sinusoid cos(2t) has period T3 = π. Therefore, the ratio of the periods of the sinusoidal components of y(t) is T1/T3 = 1/π, which is not rational, and so y(t) is not periodic and the frequencies 2π and 2 are not harmonically related.
Using the trigonometric identity cos²(θ) = 0.5 + 0.5 cos(2θ) (and the corresponding identity for the product of two cosines), x²(t) is again a sum of a constant and of harmonically related sinusoids, so that x²(t) is periodic of period T0 = 1. As in the previous examples, we have
Px = (1/T0) ∫_{0}^{T0} x²(t) dt,
which reduces to the integral of the constant term, since the integrals of the other terms are zero. In this case, we used the periodicity of x(t) and x²(t) to calculate the power directly. That is not possible when computing the power of y(t) because it is not periodic, so we have to consider each of its components. Expanding y²(t) in the same way, and letting T4, T5, T6, and T7 denote the periods of the sinusoidal components of y²(t), only the integral of the constant term is not zero; the others are zero (the average over a period of each of the sinusoidal components of y²(t)). Fortunately, then, the power of x(t) and the power of y(t) are the sums of the powers of their components: each sinusoidal component has power 1/2, as in the previous examples, so that Px = 0.5 + 0.5 = 1 and, likewise, Py = 1.
The power of a sum of sinusoids,
x(t) = Σk Ak cos(Ωkt + θk),
with harmonically or nonharmonically related frequencies {Ωk}, is the sum of the powers of each of the sinusoidal components,
Px = Σk Ak²/2.
1.4. Representation Using Basic Signals
A fundamental idea in signal processing is to attempt to represent signals in terms of basic signals, which we know how to process. In this section we consider some of these basic signals (complex exponentials, sinusoids, impulse, unit-step, and ramp) that will be used to represent signals and for which we will obtain their responses in a simple way in the next chapter.
1.4.1. Complex Exponentials
A complex exponential is a signal of the form
x(t) = Ae^{at}, −∞ < t < ∞,
where A = |A|e^{jθ} and a = r + jΩ0 are complex numbers.
Using Euler's identity, e^{jϕ} = cos(ϕ) + j sin(ϕ), and from the definitions of A and a as complex numbers, we have that
x(t) = Ae^{at} = |A|e^{rt}[cos(Ω0t + θ) + j sin(Ω0t + θ)].
We will see later that complex exponentials are fundamental in the Fourier representation of signals.
Remarks
■ Suppose that A and a are real; then x(t) = Ae^{at} is a decaying exponential if a < 0, and a growing exponential if a > 0. See Figure 1.5.
■ If A is real but a = jΩ0, then we have x(t) = Ae^{jΩ0t}, where the real part of x(t) is Re[x(t)] = A cos(Ω0t) and the imaginary part of x(t) is Im[x(t)] = A sin(Ω0t).
■ If both A and a are complex, x(t) is a complex signal and we need to consider separately its real and imaginary parts. For instance, the real part is
g(t) = Re[x(t)] = |A|e^{rt} cos(Ω0t + θ).
The envelope of g(t) can be found by considering that −1 ≤ cos(Ω0t + θ) ≤ 1 and that, when multiplied by |A|e^{rt} > 0, this gives −|A|e^{rt} ≤ g(t) ≤ |A|e^{rt}. Whenever r < 0, the signal g(t) is a damped sinusoid, and when r > 0, g(t) grows, as illustrated in Figure 1.5.
■ According to the above, several signals can be obtained from the complex exponential.
Sinusoids
Sinusoids are of the general form
x(t) = A cos(Ω0t + θ), −∞ < t < ∞,
where A is the amplitude of the sinusoid, Ω0 = 2πf0 (rad/sec) is the frequency, and θ is a phase shift. The frequency and time variables are inversely related: Ω0 = 2π/T0, where T0 is the period.
The cosine and the sine signals, as indicated above, are out of phase by π/2 radians. The frequency can also be expressed in hertz or 1/sec units, and in that case Ω0 = 2π f0, and the period is found by the relation f0 = 1/T0 (it is important to point out the inverse relation between time and frequency shown here, which will be important in the representation of signals later on).
Recall from Chapter 0 that Euler's identity provides the relation of the sinusoids with the complex exponential,
e^{jΩ0t} = cos(Ω0t) + j sin(Ω0t),
which will allow us to represent in terms of sines and cosines any signal that is represented in terms of complex exponentials. Likewise, Euler's identity also permits us to represent sines and cosines in terms of complex exponentials, since
cos(Ω0t) = 0.5[e^{jΩ0t} + e^{−jΩ0t}] and sin(Ω0t) = [e^{jΩ0t} − e^{−jΩ0t}]/(2j).
Remarks
A sinusoid is characterized by its amplitude, frequency, and phase. When we allow these three parameters to be functions of time—an amplitude A(t), a frequency Ω(t), and a phase θ(t)—the following different types of modulation systems in communications are obtained:
■ Amplitude modulation (AM)—The amplitude A(t) changes according to the message, while the frequency and the phase are constant.
■ Frequency modulation (FM)—The frequency Ω(t) changes according to the message, while the amplitude and phase are constant.
■ Phase modulation (PM)—The phase θ(t) varies according to the message and the other parameters are kept constant.
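A rough sketch of what AM and FM waveforms look like is given below; the message, carrier frequency, and modulation depth are assumptions chosen only to make the amplitude and frequency variations visible, and the FM waveform is written directly in terms of a time-varying phase.
% Sketch (assumed message and carrier) of AM and FM waveforms
t = 0:0.001:2;
m = cos(2*pi*t);                           % assumed message signal
xam = (1 + 0.8*m).*cos(20*pi*t);           % AM: the envelope follows the message
xfm = cos(20*pi*t + 10*sin(2*pi*t));       % FM: the phase (hence frequency) follows the message
subplot(211); plot(t, xam); grid; ylabel('AM')
subplot(212); plot(t, xfm); grid; ylabel('FM'); xlabel('t')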
1.4.2. Unit-Step, Unit-Impulse, and Ramp Signals
Unit-Step and Unit-Impulse Signals
Consider a rectangular pulse of duration Δ and unit area,
pΔ(t) = 1/Δ for −Δ/2 ≤ t ≤ Δ/2, and zero otherwise.
The pulse pΔ(t) and its integral uΔ(t) are shown in Figure 1.6.
Suppose that Δ → 0, then
■ The pulse pΔ(t) still has unit area but becomes an extremely narrow pulse. We will call the limit the unit-impulse signal,
δ(t) = lim_{Δ→0} pΔ(t),
which is zero for all values of t except at t = 0, where its value is not defined.
■ The integral uΔ(t), as Δ → 0, has a left-side limit uΔ(−ϵ) → 0 and a right-side limit uΔ(ϵ) → 1, for some infinitesimal ϵ > 0, and at t = 0 it is 1/2. Thus, the limit is a unit step. Ignoring the value at t = 0, we define the unit-step signal as u(t) = 1 for t > 0 and u(t) = 0 for t < 0.
You can think of the u(t) as the switching of a dc signal generator from off to on, while δ(t) is a very strong pulse of very short duration.
The impulse signal δ(t) is:
■ Zero everywhere except at the origin, where its value is not well defined (i.e., δ(t) = 0 for t ≠ 0, and undefined at t = 0).
■ Of unit area, that is, ∫_{−∞}^{∞} δ(t) dt = 1.
The unit-step signal is
u(t) = 1 for t > 0, and 0 for t < 0.
The δ(t) and u(t) are related as follows:
δ(t) = du(t)/dt and u(t) = ∫_{−∞}^{t} δ(τ) dτ.
According to calculus we have
uΔ(t) = ∫_{−∞}^{t} pΔ(τ) dτ,
and so letting Δ → 0 we obtain the relation between u(t) and δ(t).
Remarks
■ Since u(t) is not a continuous function—it jumps from 0 to 1 instantaneously around t = 0—from the calculus point of view it should not have a derivative. That δ(t) is its derivative must therefore be taken with suspicion, which makes δ(t) also a suspicious signal. Such signals can, however, be formally defined using the theory of distributions.
■ The impulse δ(t) is impossible to generate physically, but it characterizes very brief pulses of any shape. It can be derived using pulses or functions different from the rectangular pulse (see Eq. 1.22). In Problem 1.7 at the end of the chapter it is indicated how it can be derived from either a triangular pulse or a sinc function of unit area.
■ Signals with jump discontinuities can be represented as the sum of a continuous signal and unit-step signals at the discontinuities. This is useful in computing the derivative of these signals.
Ramp Signal
The ramp signal is defined as
r(t) = t u(t).
Its relation to the unit-step and the unit-impulse signals is
r(t) = ∫_{−∞}^{t} u(τ) dτ.
The ramp is a continuous function, and its derivative is given by
dr(t)/dt = u(t).
Consider the discontinuous signals x1(t) = cos(2πt)[u(t) − u(t − 1)] and x2(t), a signal with jump discontinuities at t = 0, 1, and 2. Represent each of these signals as the sum of a continuous signal and unit-step signals, and find their derivatives.
Solution
The signal x1(t) is a period of a cosine of period T0 = 1 on 0 ≤ t ≤ 1, with a discontinuity of 1 at t = 0 and at t = 1. Subtracting u(t) − u(t − 1) from x1(t) we obtain a continuous signal, but to compensate we must add a unit pulse between t = 0 and t = 1, giving
x1(t) = [cos(2πt) − 1][u(t) − u(t − 1)] + [u(t) − u(t − 1)],
where the first term x1a(t) is continuous and the second x1b(t) is discontinuous. The derivative is
dx1(t)/dt = −2π sin(2πt)[u(t) − u(t − 1)] + δ(t) − δ(t − 1).
The term δ(t) in the derivative indicates that there is a jump from 0 to 1 in x1(t) at t = 0, and the term −δ(t − 1) indicates a jump of −1 (from 1 to 0) at t = 1. See Figure 1.7.
The signal x2(t) has jump discontinuities at t = 0, t = 1, and t = 2, and we can think of it as completely discontinuous, so that its continuous component is 0. Its derivative therefore consists only of impulses located at these discontinuities, and the area of each of the deltas coincides with the jump of the signal at the corresponding discontinuity.
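If the Symbolic Math Toolbox is available, MATLAB can carry out this kind of derivative symbolically, with heaviside and dirac playing the roles of u(t) and δ(t). The following sketch applies it to the windowed cosine x1(t); it is an illustration under that assumption, not code from the text.
% Sketch (Symbolic Math Toolbox assumed): derivative of the windowed cosine
syms t
x1 = cos(2*pi*t)*(heaviside(t) - heaviside(t - 1));
dx1 = diff(x1, t)    % a sinusoidal term inside the window plus dirac terms at t = 0 and t = 1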
Signal Generation with MATLAB
In the following examples we illustrate how to generate analog signals using MATLAB. This is done by either approximating continuous-time signals by discrete-time signals or by using the symbolic toolbox. The function plot uses an interpolation algorithm that makes the plots of discrete-time signals look like analog signals.
Write a script and the necessary functions to generate the signal
y(t) = 3r(t + 3) − 6r(t + 1) + 3r(t) − 3u(t − 3).
Then plot it and verify analytically that the obtained figure is correct.
Solution
We wrote functions ramp and ustep to generate ramp and unit-step signals for obtaining a numeric approximation of the signal y(t). The following script shows how these functions are used to generate y(t). The arguments of ramp determine the support of the signal, the slope, and the shift (for advance, a positive number, and for delay, a negative number). For ustep we need to provide the support and the shift.
%%%%%%%%%%%%%%%%%%%
% Example 1.16
%%%%%%%%%%%%%%%%%%%
clear all; clf
Ts = 0.01; t = -5:Ts:5; % support of signal
% ramp with support [-5, 5], slope of 3 and advanced
% (shifted left) with respect to the origin by 3
y1 = ramp(t,3,3);
y2 = ramp(t,-6,1);
y3 = ramp(t,3,0);
% unit-step function with support [-5,5], delayed by 3
y4 = -3* ustep(t,-3);
y = y1 + y2 + y3 + y4;
plot(t,y,'k'); axis([-5 5 -1 7]); grid
Our functions ramp and ustep are as follows.
function y = ramp(t,m,ad)
% ramp generation
% m: slope of ramp
% ad : advance (positive), delay (negative) factor
% USE: y = ramp(t,m,ad)
N = length(t);
y = zeros(1,N);
for i = 1:N,
if t(i)>= -ad,
y(i) = m*(t(i)+ad);
end
end
function y=ustep(t,ad)
% generation of unit step
% t: time
% ad : advance (positive), delay (negative)
% USE y = ustep(t,ad)
N = length(t);
y = zeros(1,N);
for i = 1:N,
if t(i) >= -ad,
y(i) = 1;
end
end
Analytically,
■ y(t) = 0 for t < −3 and for t > 3, so the chosen support −5 ≤ t ≤ 5 displays the signal in a region where the signal occurs.
■ For −3 ≤ t ≤ −1, y(t) is 3r(t + 3) = 3(t + 3), which is 0 at t = −3 and 6 at t = −1.
■ For −1 ≤ t ≤ 0, y(t) is 3r(t + 3) − 6r(t + 1) = 3(t + 3) − 6(t + 1) = −3t + 3, which is 6 at t = −1 and 3 at t = 0.
■ For 0 ≤ t ≤ 3, y(t) is 3r(t + 3) − 6r(t + 1) + 3r(t) = −3t + 3 + 3t = 3.
■ For t ≥ 3 the signal is 3r(t + 3) − 6r(t + 1) + 3r(t) − 3u(t − 3) = 3 − 3 = 0.
These coincide with the signal shown in
Figure 1.8.
Consider the following script that uses the functions ramp and ustep to generate a signal y(t). Obtain analytically the formula for the signal y(t). Write a function to compute and plot the even and odd components of y(t).
clear all; clf
t = -5:0.01:5;
y1 = ramp(t,2,2.5);
y2 = ramp(t,-5,0);
y3 = ramp(t,3,-2);   % this line is missing in the text; an assumed reconstruction consistent with the plot axes
y4 = ustep(t,-4);
y = y1 + y2 + y3 + y4;
plot(t,y,'k'); axis([-5 5 -3 5]); grid
The signal y(t) = 0 for t < −5 and t > 5.
Solution
The signal y(t), displayed in Figure 1.9(a), is given analytically by the sum of the ramp and unit-step signals generated in the script. Clearly, y(t) is neither even nor odd. To find its even and odd components we use the function evenodd, shown in the following code, with inputs the signal and its support and outputs the even and odd components. The results are shown in the bottom plots of Figure 1.9. Adding these two signals gives back the original signal y(t). The script used is as follows.
%%%%%%%%%%%%%%%%%%%
% Example 1.17
%%%%%%%%%%%%%%%%%%%
[ye, yo] = evenodd(t,y);
subplot(211)
plot(t,ye,'r')
grid
axis([min(t) max(t) -2 5])
subplot(212)
plot(t,yo,'r')
axis([min(t) max(t) -1 5])
function [ye,yo] = evenodd(t,y)
% even/odd decomposition
% t: time
% y: analog signal
% ye, yo: even and odd components
% USE [ye,yo] = evenodd(t,y)
%
yr = fliplr(y);
ye = 0.5*(y + yr);
yo=0.5*(y - yr);
The MATLAB function fliplr reverses the values of the vector y giving the reflected signal.
Use symbolic MATLAB to generate the following analog signals.
(a) For the damped sinusoid signal
x(t) = e^{−t} cos(2πt),
obtain a script to generate x(t) and its envelope.
(b) For a rough approximation of a periodic pulse generated by adding three cosines of frequencies that are multiples of Ω0 = π/10—that is,
x1(t) = 1 + 1.5 cos(2πt/10) − 0.6 cos(4πt/10),
where the constant corresponds to the zero-frequency cosine—write a script to generate x1(t).
Solution
The following script generates the damped sinusoid signal, and its envelope y(t) = ±e−t.
%%%%%%%%%%%%%%%%%%%
% Example 1.18 --- damped sinusoid
%%%%%%%%%%%%%%%%%%%
t = sym('t');
x = exp(-t)*cos(2*pi*t);
y = exp(-t);
ezplot(x,[-2,4])
grid
ezplot(y,[-2,4])
hold on
ezplot(-y,[-2,4])
axis([-2 4 -8 8])
hold off
The approximate pulse signal is generated by the following script.
clear; clf
t = sym('t');
% sum of constant and cosines
x = 1 + 1.5*cos(2*pi*t/10)-.6*cos(4*pi*t/10);
ezplot(x,[-10,10]); grid
The plots of the damped sinusoid and the approximate pulse are given in Figure 1.10.
Consider the generation of a triangular signal
Λ(t) = t for 0 ≤ t ≤ 1, 2 − t for 1 < t ≤ 2, and zero otherwise,
using ramp signals r(t). Use the unit-step signal to represent the derivative dΛ(t)/dt.
Solution
The triangular pulse can be represented as
Λ(t) = r(t) − 2r(t − 1) + r(t − 2).    (1.32)
In fact, since r(t − 1) and r(t − 2) have values different from 0 only for t ≥ 1 and t ≥ 2, respectively, then for 0 ≤ t ≤ 1, Λ(t) = r(t) = t, and for 1 < t ≤ 2, Λ(t) = r(t) − 2r(t − 1) = t − 2(t − 1) = 2 − t. Finally, for t > 2 the three ramp signals are different from zero, so
Λ(t) = r(t) − 2r(t − 1) + r(t − 2) = t − 2(t − 1) + (t − 2) = 0,
and by definition Λ(t) is zero for t < 0. So the given expression for Λ(t) in terms of the ramp functions is identical to its given mathematical definition.
Using the mathematical definition of the triangular function, its derivative is given by
dΛ(t)/dt = u(t) − 2u(t − 1) + u(t − 2),
and using the representation in Equation (1.32) this derivative is also obtained, since dr(t)/dt = u(t).
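Using the ramp function written earlier in this chapter (Example 1.16), the representation in Equation (1.32) can be generated and plotted directly; the time grid below is an assumption of the sketch.
% Sketch: triangular pulse built from three ramps, r(t) - 2r(t-1) + r(t-2)
t = -1:0.01:4;
lam = ramp(t,1,0) + ramp(t,-2,-1) + ramp(t,1,-2);
plot(t, lam); axis([-1 4 -0.2 1.2]); grid
xlabel('t'); ylabel('\Lambda(t)')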
Consider a full-wave rectified signal x(t) of period T0 = 0.5 (see Figure 1.12). Obtain a representation for a period between 0 and 0.5, and represent x(t) in terms of shifted versions of it. A full-wave rectified signal is used in designing dc sources; it is a first step in converting an alternating voltage into a dc voltage.
Solution
The period between 0 and 0.5 can be expressed as a signal x1(t) that equals x(t) for 0 ≤ t ≤ 0.5 and is zero otherwise. Since x(t) is a periodic signal of period T0 = 0.5, we then have
x(t) = Σk x1(t − 0.5k),
where the sum is over all integers k.
Generate a causal train of pulses that repeats every two units of time, using as its first period
s(t) = u(t) − 2u(t − 1) + u(t − 2).
Find the derivative of the train of pulses.
Solution
Considering that s(t) is the first period of the train of pulses of period two, then
ρ(t) = Σ_{k=0}^{∞} s(t − 2k)
is the desired signal. Notice that ρ(t) equals zero for t < 0; thus it is causal. Given that the derivative of a sum of signals is the sum of the derivatives of each of the signals, the derivative of ρ(t) is
dρ(t)/dt = Σ_{k=0}^{∞} [δ(t − 2k) − 2δ(t − 2k − 1) + δ(t − 2k − 2)],
which can be simplified to
dρ(t)/dt = δ(t) + Σ_{k=1}^{∞} [2δ(t − 2k) − 2δ(t − 2k + 1)],
where δ(t), 2δ(t − 2k), and −2δ(t − 2k + 1) for k ≥ 1 occur at t = 0, t = 2k, and t = 2k − 1 for k ≥ 1—that is, at the times at which the discontinuities of ρ(t) occur. The value associated with each δ corresponds to the jump of the signal from the left to the right. Thus, δ(t) indicates there is a discontinuity in ρ(t) at zero, as it jumps from 0 to 1, while the discontinuities at 2, 4, … have a jump of 2, from −1 to 1, increasing. The discontinuities indicated by −2δ(t − 2k + 1), occurring at 1, 3, 5, …, are from 1 to −1 (i.e., decreasing, so the value of −2). See Figure 1.13.
1.4.3. Special Signals—the Sampling Signal and the Sinc
Two signals of great significance in the sampling of continuous-time signals and their reconstruction are the sampling signal and the sinc. Sampling a continuous-time signal consists in taking samples of the signal at uniform times. One can think of this process as the multiplication of a continuous-time signal x(t) by a train of very narrow pulses with the sampling period Ts. For simplicity, considering that the width of the pulses is much smaller than Ts, the train of pulses can be approximated by a train of impulses that is periodic of period Ts—that is, the sampling signal is
δTs(t) = Σk δ(t − kTs),
where the sum is over all integers k. The sampled signal xs(t) is then
xs(t) = x(t)δTs(t) = Σk x(kTs)δ(t − kTs),
or a sequence of uniformly shifted impulses whose amplitudes are the values of the signal x(t) at the times when the impulses occur.
A fundamental result in sampling theory is the recovery of the original signal, under certain constraints, by means of an interpolation using sinc signals. Moreover, we will see that the sinc is connected with ideal low-pass filters. The sinc function is defined as
S(t) = sin(πt)/(πt), −∞ < t < ∞.
This signal has the following characteristics:
■ The time support of this signal is infinite.
■ It is an even function of t, since S(−t) = sin(−πt)/(−πt) = sin(πt)/(πt) = S(t).
■ At t = 0 the numerator and the denominator of the sinc are zero; thus the limit as t → 0 is found using L'Hôpital's rule, giving S(0) = lim_{t→0} π cos(πt)/π = 1.
■ S(t) is bounded—that is, since −1 ≤ sin(πt) ≤ 1, then for t > 0, −1/(πt) ≤ S(t) ≤ 1/(πt), and given that S(t) is even, it is equally bounded for t < 0. As t → ±∞, S(t) → 0.
■ The zero-crossing times of S(t) are found by letting the numerator equal zero—that is, when sin(πt) = 0, the zero-crossing times are such that πt = kπ, or t = k for a nonzero integer k (k = ±1, ±2, …).
■ A property that is not obvious, and that requires the frequency representation of S(t), is the value of the integral of S(t) over −∞ < t < ∞. Recall that we computed this integral in Chapter 0 using numeric and symbolic MATLAB.
The sinc signal will appear in several places in the rest of the book.
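The properties listed above are easy to see in a plot. The following sketch evaluates S(t) directly (handling t = 0 separately, in agreement with the L'Hôpital limit) and marks the zero crossings at the nonzero integers; the time grid is an assumption of the sketch.
% Sketch: the sinc signal S(t) = sin(pi t)/(pi t) and its zero crossings
t = -5:0.01:5;
S = ones(size(t));                               % S(0) = 1
S(t ~= 0) = sin(pi*t(t ~= 0))./(pi*t(t ~= 0));
k = [-5:-1, 1:5];                                % zero crossings at nonzero integers
plot(t, S); hold on; plot(k, zeros(size(k)), 'o'); hold off; grid
xlabel('t'); ylabel('S(t)')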
1.4.4. Basic Signal Operations—Time Scaling, Frequency Shifting, and Windowing
Given a signal x(t), and real values α ≠ 0 or 1, and ϕ > 0:
■ x(αt) is said to be contracted if |α| > 1, and if α < 0 it is also reflected.
■ x(αt) is said to be expanded if |α| < 1, and if α < 0 it is also reflected.
■ x(t)e^{jϕt} is said to be shifted in frequency by ϕ (rad/sec).
■ For a window signal w(t), x(t)w(t) displays x(t) within the support of w(t).
To illustrate the time scaling, consider a signal x(t) with a finite support t0 ≤ t ≤ t1. Assume that α > 1, then x(αt) is defined in t0 ≤ αt ≤ t1 or t0/α ≤ t ≤ t1/α, a smaller support than the original one. For instance, for α = 2, t0 = 2, and t1 = 4, then the support of x(2t) is 1 ≤ t ≤ 2, while the support of x(t) is 2 ≤ t ≤ 4. If α = −2, then x(−2t) is not only contracted but also reflected. Similarly, x(0.5t) would have a support of 2t0 ≤ t ≤ 2t1, which is larger than the original support.
Multiplication by a complex exponential shifts the frequency of the original signal. To illustrate this, consider the case of an exponential x(t) = e^{jΩ0t} of frequency Ω0. If we multiply x(t) by an exponential e^{jϕt}, then
x(t)e^{jϕt} = e^{j(Ω0 + ϕ)t},
so that the frequency of the new exponential is greater than Ω0 if ϕ > 0 or smaller if ϕ < 0. So we have shifted the frequency of x(t). If we have a sum of exponentials (they do not need to be harmonically related as in the Fourier series we will consider later), multiplying the sum by e^{jϕt} shifts each of the frequencies of the signal x(t) by ϕ. This shifting of the frequency is significant in the development of amplitude modulation, and as such this frequency-shift process is called modulation—that is, the signal x(t) modulates the exponential, and x(t)e^{jϕt} is the modulated signal.
Notice that time scaling also changes the frequency content of the signal. For instance, a signal x(t) = cos(Ω0t) is periodic of period T0 = 2π/Ω0, while the scaled signal x(αt) = cos(αΩ0t) has a period T0/α, or a frequency αΩ0, which is larger than the original frequency Ω0 when α > 1 and smaller than Ω0 when 0 < α < 1.
Remarks
We can thus summarize the above as follows:
■ If x(t) is periodic of period T0, then the time-scaled signal x(αt), α ≠ 0, is also periodic, of period T0/|α|.
■ The support in time of a periodic or nonperiodic signal is inversely proportional to the support in frequency for that signal.
■ The frequencies present in a signal can be changed by modulation—that is, multiplying the signal by a complex exponential or, equivalently, by sines and cosines. The frequency change is also possible by expansion and compression of the signal.
■ Reflection is a special case of time scaling with α = −1.
Let x1(t), for 0 ≤ t ≤ T0, be one period of a periodic signal x(t) of period T0. Represent x(t) in terms of advanced and delayed versions of x1(t). What would be x(2t)?
Solution
The periodic signal x(t) can be written as
x(t) = Σk x1(t − kT0),
where the sum is over all integers k, and the contracted signal x(2t) is then
x(2t) = Σk x1(2t − kT0) = Σk x1(2(t − kT0/2)),
which is periodic of period T0/2.
An acoustic signal x(t) has a duration of 3.3 minutes and a radio station would like to use the signal for a three-minute segment. Indicate how to make it possible.
Solution
We need to contract the signal by a factor of α = 3.3/3 = 1.1, so that x(1.1t) can be used in the three-minute piece. If the signal is recorded on tape, the tape player can be run 1.1 times faster than the recording speed. This would change the voice or music on the tape, as the frequencies x(1.1t) are increased with respect to the original frequencies in x(t).
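If the signal is available as samples, the same contraction can be sketched in MATLAB by playing the samples back at a rate 1.1 times the one used to acquire them; the sampling frequency and the synthetic stand-in for the acoustic signal below are assumptions of the sketch.
% Sketch: contracting a 3.3-minute signal to 3 minutes by faster playback
fs = 8000;                        % assumed sampling frequency of the recording
t = 0:1/fs:3.3*60;                % 3.3 minutes of samples
x = cos(2*pi*440*t);              % synthetic stand-in for the acoustic signal x(t)
sound(x, 1.1*fs)                  % playback at 1.1*fs lasts 3.3/1.1 = 3 minutes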
One way of transmitting a message over the airwaves is to multiply it by a sinusoid of frequency higher than those in the message, thus changing the frequency content of the signal. The resulting signal is called an amplitude-modulated (AM) signal: the message changes the amplitude of the sinusoid. To recover the message from the transmitted signal, one can make the envelope of the modulated signal be related to the message. Use again the ramp and ustep functions to generate a signal y(t) = 2r(t + 2) − 4r(t) + r(t − 2) + r(t − 3) + u(t − 3) to modulate a so-called carrier signal x(t) = sin(5πt) to give the AM signal z(t). Obtain a script to generate and plot the AM signal. Indicate whether the envelope of the AM signal is connected with the message signal y(t).
Solution
The signal y(t) analytically equals 2(t + 2) for −2 ≤ t < 0, 4 − 2t for 0 ≤ t < 2, 2 − t for 2 ≤ t < 3, and zero otherwise. The following script is used to generate the message signal y(t), the AM signal z(t), and the corresponding plots. The MATLAB function sound is used to produce the sound corresponding to 100z(t). In Figure 1.14 we show z(t) and emphasize the envelope (dashed line), which corresponds to ±y(t).
%%%%%%%%%%%%%%%
% Example 1.24 --- AM signal
%%%%%%%%%%%%%%%
t = -5:0.01:5;
x = sin(5*pi*t);
y1 = ramp(t,2,2);
y2 = ramp(t,-4,0);
y3 = ramp(t,1,-2);
y4 = ramp(t,1,-3);
y5 = ustep(t,-3);
y = y1 + y2 + y3 + y4 + y5;
z = y.*x;
sound(100*z,1000)
plot(t,z,'k'); hold on
plot(t,y,'r',t,-y,'r'); axis([-5 5 -5 5]); grid
hold off
xlabel('t'); ylabel('z(t)')
1.4.5. Generic Representation of Signals
Consider the following integral:
∫_{−∞}^{∞} f(t)δ(t) dt.
The product of f(t) and δ(t) gives zero everywhere except at the origin, where we get an impulse of area f(0)—that is, f(t)δ(t) = f(0)δ(t) (let t0 = 0 in Figure 1.15). Therefore,
∫_{−∞}^{∞} f(t)δ(t) dt = f(0) ∫_{−∞}^{∞} δ(t) dt = f(0),
since the area under the curve of the impulse is unity. This property of the impulse function is appropriately called the sifting property. The result of this integration is to sift out f(t) for all t except for t = 0, where δ(t) occurs. If we delay or advance the δ(t) function in the integrand, the result is that all values of f(t) are sifted out except for the value corresponding to the location of the delta function—that is,
∫_{−∞}^{∞} f(t)δ(t − τ) dt = f(τ) ∫_{−∞}^{∞} δ(t − τ) dt = f(τ),
since the last integral is still unity.
Figure 1.15 illustrates the multiplication of a signal f(t) by an impulse signal δ(t − t0), located at t = t0.
By the sifting property of the impulse function δ(t), any signal x(t) can be represented by the following generic representation:
x(t) = ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ.    (1.41)
Figure 1.16 shows a generic representation.
Equation (1.41) basically indicates that any signal can be viewed as a stacking of pulses x(kΔ)pΔ(t − kΔ), which in the limit as Δ → 0 become impulses x(τ)δ(t − τ).
Equation (1.41) provides a generic representation of a signal in terms of basic signals, in this case impulse signals. As we will see in the next chapter, once we determine the response of a system to an impulse we will use the generic representation to find the response of the system to any signal.
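If the Symbolic Math Toolbox is available, the sifting property behind this representation can be checked directly; the choice of f(t) = cos(t) and of the impulse location t0 = 1 below is only an illustration.
% Sketch (Symbolic Math Toolbox assumed): checking the sifting property
syms t
f = cos(t);
int(f*dirac(t - 1), t, -inf, inf)    % returns cos(1), that is, f(t) evaluated at the impulse location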
1.5. What Have We Accomplished? Where Do We Go from Here?
We have taken another step in our long journey. In this chapter we discussed the main classification of signals and have started the study of deterministic, continuous-time signals. We have also discussed important characteristics of signals such as periodicity, energy, power, evenness, and oddness, and learned basic signal operations that will be useful, as we will see, in the next chapters. Interestingly, we began to see how some of these operations lead to practical applications, such as amplitude, frequency, and phase modulations, which are basic in the theory of communications. Very importantly, we have also begun to represent signals in terms of basic signals, which in later chapters will allow us to simplify the analysis and will give us flexibility in the synthesis of systems. These basic signals are used as test signals in control systems.
Table 1.1 displays basic signals.
Table 1.1 Basic Signals
Signal | Definition |
---|---|
Complex exponential | x(t) = Ae^{at}, −∞ < t < ∞, with A = |A|e^{jθ} and a = r + jΩ0 |
Sinusoid | x(t) = A cos(Ω0t + θ), −∞ < t < ∞ |
Unit impulse | δ(t) = 0 for t ≠ 0, undefined at t = 0; unit area; δ(t) = du(t)/dt |
Unit step | u(t) = 1 for t > 0, 0 for t < 0 |
Ramp | r(t) = t u(t); dr(t)/dt = u(t) |
Rectangular pulse | pΔ(t) = 1/Δ for −Δ/2 ≤ t ≤ Δ/2, zero otherwise (unit area) |
Triangular pulse | Λ(t) = r(t) − 2r(t − 1) + r(t − 2) |
Sampling | δTs(t) = Σk δ(t − kTs) |
Sinc | S(t) = sin(πt)/(πt), with S(0) = 1 and S(k) = 0 for nonzero integer k |
Our next step is to connect signals with systems. We are particularly interested in developing a theory that can be used to approximate, to some degree, the behavior of most systems of interest in engineering. After that we consider the analysis of signals and systems in the time and frequency domains.