CHAPTER 3

Stimulus

In some sense, the beginning of a synthetic instrumentation system is the stimulus generation, and the beginning of the stimulus generation is the digital control, or DSP, driving the stimulus side. This is where the stimulus for the DUT comes from. It’s the prime mover or first cause in a stimulus-response measurement, and it’s the source of calibration for response-only measurements. The basic CCC architecture for stimulus comprises a three-block cascade: DSP control, followed by the stimulus codec (a D/A in this case), and finally the signal conditioning that interfaces to the DUT.

image

Figure 3-1 The stimulus cascade

Stimulus Digital Signal Processing

The digital processor section can perform various sorts of functions, ranging from waveform synthesis to pulse generation. Depending on the exact requirements of each of these functions, the hardware implementation of an “optimum” digital processor section can vary in many different, and seemingly incompatible, directions.

Ironically, one “general-purpose” digital controller (in the sense of a general-purpose microprocessor) may not be generally useful. When deciding the synthesis controller capabilities for a CCC synthetic measurement system, it inevitably becomes a choice from among several distinct controller options.

Although there are numerous alternatives for a stimulus controller, these various possible digital processor assets fall into broad categories. Listed in order of complexity, they are:

image Waveform Playback (ARB)

image Direct Digital Synthesis (DDS)

image Algorithmic Sequencing (CPU)

In the following sections, I will discuss these categories in turn.

Waveform Playback

The first and simplest of these categories, waveform playback, represents the class of controllers one finds in a typical arbitrary waveform generator (ARB). These controllers are also akin to the controller in a common CD player: a “dumb” digital playback device. The basic controller consists of a large block of waveform memory, and a simple state machine, perhaps just an address counter, for sequencing through that memory. A counter is a register to which you repeatedly add +1 as shown in Figure 3-2. When the register reaches some predetermined terminal count, it’s reset back to its start count.

image

Figure 3-2 Basic waveform playback controller

The large block of memory contains digitized samples of waveform data. The data may be stored as one continuous data set or as several independent tracks. One interface controls the counter, and another loads waveform data into the RAM.

In basic operation, the waveform playback controller sequences through the data points in the tracks and feeds the waveform data to the codec for conversion to an analog voltage that is then conditioned and used to stimulate the DUT. Customarily, the controller can either play back a selected track repeatedly in a loop or perform a one-time, single-shot playback. There may also be features to access different tracks and play them back in various sequential orders or randomly. The ability to address and play multiple tracks is handy for synthesizing a communications waveform that has a signaling alphabet.
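
To make the playback mechanism concrete, here is a minimal software sketch of a waveform playback controller. The track names, clock rate, and memory contents are illustrative assumptions, not a prescribed implementation; a real controller is a state machine feeding a D/A, but the sequencing logic is the same.

```python
# Minimal sketch of a waveform playback controller (illustrative values).
# A block of "waveform memory" holds one or more tracks; an address counter
# steps through the selected track and wraps at its terminal count.
import numpy as np

SAMPLE_RATE = 100e6                               # assumed 100-MHz playback clock
t = np.arange(1000) / SAMPLE_RATE
waveform_memory = {
    "sine_1MHz": np.sin(2 * np.pi * 1e6 * t),     # 10 full cycles in 1000 samples
    "ramp":      np.linspace(-1.0, 1.0, 500),
}

def play_track(name, repeats=1):
    """Sequence through one track; loop it 'repeats' times (single-shot if 1)."""
    samples = waveform_memory[name]
    terminal_count = len(samples)
    out = []
    for _ in range(repeats):
        address = 0                               # reset to the start count
        while address < terminal_count:           # one sample per clock
            out.append(samples[address])
            address += 1
    return np.asarray(out)                        # this stream would feed the codec

stimulus = play_track("sine_1MHz", repeats=3)     # looped playback of one track
```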

The waveform playback controller has a fundamental limitation when generating periodic waveforms (like a sine wave): it can only generate waveforms whose period is some integer multiple of the basic clock period. For example, a playback controller that runs at 100 MHz can only generate waveforms with periods that are multiples of 10 ns. That is to say, it can only generate 100 MHz, 50 MHz, 33.333 MHz, and so on, for every integer division of 100 MHz. A related limitation is the inability of a playback controller to shift the phase of the waveform being played.

You will see below that direct digital synthesis (DDS) controllers do not have these limitations.

Another fundamental limitation that the waveform playback controller has is that it cannot generate a calculated waveform. For example, in order to produce a sine wave, it needs a sine table. It cannot implement even a simple digital oscillator. This is a distinct limitation from the inability to generate waveforms of arbitrary period, but not unrelated. Both have to do with the ability to perform algorithms, albeit with different degrees of generality.

Direct Digital Synthesis

Direct digital synthesis (DDS) is an enhancement of the basic waveform playback architecture that allows the frequency of periodic waveforms to be tuned with arbitrarily fine steps that are not necessarily submultiples of the clock frequency. Moreover, DDS controllers can provide hooks into the waveform generation process that allow direct parameterization of the waveform for the purposes of modulation. With a DDS architecture, it is dramatically easier to amplitude or phase modulate a waveform than it is with an ordinary waveform playback system.

A block diagram of a DDS controller is shown in Figure 3-3.

image

Figure 3-3 Direct digital synthesizer

The heart of a DDS controller is the phase accumulator. This is a register recursively looped to itself through an adder. One addend is the contents of the accumulator, the other addend is the phase increment. After each clock, the sum in the phase accumulator is increased by the amount of the phase increment.

How is the phase accumulator different from the address counter in a waveform playback controller? The deciding difference is that a phase accumulator has many more bits than needed to address waveform memory. For example, waveform memory may have only 4096 samples. A 12-bit address is sufficient to index this table. But the phase accumulator may have 32 bits. These extra bits represent fractional phase. In the case of a 12-bit waveform address and a 32-bit accumulator, a phase increment of 2^20 would index through one sample per clock. Any phase increment less than 2^20 causes indexing by some fraction of a sample per clock.

You may recall that a waveform playback controller could never generate a period that wasn’t a multiple of the clock period. In a DDS controller, the addition of fractional phase bits allows the period to vary by infinitesimal fractions of a clock cycle. In fact, a DDS controller can be tuned in uniform frequency steps equal to the clock frequency divided by 2^N, where N is the number of bits in the phase accumulator. With a 32-bit accumulator, for example, and a 1-GHz clock, frequencies can be tuned in roughly 1/4-Hz steps (10^9/2^32 ≈ 0.23 Hz).
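
A short sketch may help fix the idea. The following code models a DDS controller with a 32-bit phase accumulator indexing a 4096-entry sine table; the table size, accumulator width, and function names are assumptions made for illustration.

```python
# DDS sketch: 32-bit phase accumulator, 12-bit table address, 20 fractional bits.
import numpy as np

ACC_BITS, ADDR_BITS = 32, 12
FRAC_BITS = ACC_BITS - ADDR_BITS                      # 20 bits of fractional phase
sine_table = np.sin(2 * np.pi * np.arange(2**ADDR_BITS) / 2**ADDR_BITS)

def dds(f_out, f_clk, n_samples):
    """Tuning step is f_clk / 2**ACC_BITS; f_out need not divide f_clk evenly."""
    phase_inc = int(round(f_out / f_clk * 2**ACC_BITS))
    acc = 0
    out = np.empty(n_samples)
    for n in range(n_samples):
        out[n] = sine_table[acc >> FRAC_BITS]         # top 12 bits address the table
        acc = (acc + phase_inc) % 2**ACC_BITS         # wrap = one waveform period
    return out

samples = dds(f_out=1.2345e6, f_clk=100e6, n_samples=8192)
```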

Since the phase accumulator represents the phase of the periodic waveform being synthesized, simply loading the phase accumulator with a specified phase number causes the synthesized phase to jump to a new state. This is handy for phase modulation. Similarly, the phase increment can be varied causing real-time frequency modulation.

image

Figure 3-4 Fractional samples

It’s remarkable that few ARB controllers include this phase accumulator feature. Given that such a simple extension to the address counter has such a large advantage, it’s puzzling why one seldom sees it. In fact, I would say that there really is no fundamental difference between a DDS waveform controller and a waveform playback controller. They are distinguished entirely by the extra fractional bits in the address counter, and the ability to program a phase increment with fractional bits.

Digital Up-Converter

A close relative of the DDS controller is the digital up-converter. In a sense, it is a combination of a straight playback controller and a DDS in a compound stimulus arrangement as discussed in the section titled “Compound Stimulus.” Baseband data (possibly I/Q) in blocks or “tracks” are played back and modulated on a carrier frequency provided by the DDS.
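
As a hedged illustration of the idea, the sketch below plays back a block of baseband I/Q samples and mixes it onto a carrier whose phase comes from a DDS-style accumulator. The sample rate, carrier frequency, and baseband content are arbitrary choices for the example.

```python
# Digital up-converter sketch: baseband I/Q track modulated onto a DDS carrier.
import numpy as np

FS = 100e6                                    # assumed sample clock
ACC_BITS = 32                                 # phase accumulator width
carrier_hz = 15e6
phase_inc = int(round(carrier_hz / FS * 2**ACC_BITS))

n = np.arange(4096, dtype=np.int64)
i_track = np.cos(2 * np.pi * 0.5e6 * n / FS)  # baseband I samples (one "track")
q_track = np.sin(2 * np.pi * 0.5e6 * n / FS)  # baseband Q samples

acc = (phase_inc * n) % 2**ACC_BITS           # accumulator value at each clock
theta = 2 * np.pi * acc / 2**ACC_BITS         # carrier phase
upconverted = i_track * np.cos(theta) - q_track * np.sin(theta)   # to the D/A
```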

Algorithmic Sequencing

A basic waveform playback controller has sequencing capabilities when it can play tracks in a programmed order, or in loops with repeat counts. The DDS controller adds the ability to perform fractional increments through the waveform, but otherwise has no additional programmability or algorithmic support.

Both the basic waveform playback controller and the DDS controller have only rudimentary algorithmic features and functions, but it’s easy to see how more algorithmic features would be useful.

For example, it would be handy to be able to loop through tracks with repeat counts, or to construct subroutines that comprise certain collections of track sequences (playlists, in the verbiage of CD players). Not that we want to turn the instrument into an MP3 player, but playlist capability can also assemble message waveforms on the fly based on a signal alphabet. A stimulus carrying modulated digital data that is provided live, in real time, can be assembled from a “playlist.”

It would also be useful to be able to parameterize playlists so that their contents could be varied based on certain conditions. This leads to the requirement for conditional branching, either on external trigger or gate conditions, or on internal conditions—conditions based on the data or on the sequence itself.

As more features are added in this direction, a critical threshold is reached. Instruction memory appears, and along with it, a way for data and program to intermix. The watershed is a conditional branch that can choose one of two sequences based on some location in memory, combined with the ability to write memory. At this point, the controller is a true Turing machine—a real computer. It can now make calculations. Those calculations can either be about data (for example: delays, loops, patterns, alphabets) or they can be generating the data itself (for example: oscillators, filters, pulses, codes).
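
For instance, one of the simplest “calculated” waveforms is a recursive digital oscillator, which needs no sine table at all, only an ALU and a little writable state. The sketch below is a minimal example of that kind of calculation, with arbitrary frequency and clock values.

```python
# Two-tap recursive oscillator: y[n] = 2*cos(w)*y[n-1] - y[n-2] produces sin(n*w).
import math

def digital_oscillator(f_out, f_clk, n_samples):
    w = 2.0 * math.pi * f_out / f_clk
    k = 2.0 * math.cos(w)                   # recursion coefficient
    y2, y1 = 0.0, math.sin(w)               # seed values y[0], y[1]
    out = [y2, y1]
    for _ in range(n_samples - 2):
        y = k * y1 - y2                     # the whole "algorithm"
        out.append(y)
        y2, y1 = y1, y
    return out

tone = digital_oscillator(f_out=1e6, f_clk=100e6, n_samples=1024)
```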

There is a vast collection of possibilities here. Moving up from simple state machines, adding more algorithmic features, an arithmetic logic unit (ALU), and a state sequencer capable of conditional branches and recursive subroutines, the controller becomes a dual-memory Harvard-architecture DSP-style processor. Or it may move in a slightly different direction toward the general-purpose, single-memory Von Neumann processors. Or, perhaps, the controller might incorporate symmetric multiprocessing or systolic arrays. Or something beyond even that.

Obviously, it’s also beyond the scope of this book to discuss all the possibilities encompassed in the field of computer architecture. There are many fine books[B7] that give a comprehensive treatment of this large topic. I will, however, make a few comments that are particularly relevant to the synthetic instrumentation application.

Synthesis Controller Considerations

At first glance, a designer thinking about what controller to use for a synthetic instrument application might see the controller architecture choice as a speed-complexity trade-off. On one hand, they can use a complex general-purpose processor with moderate speed, and on the other hand, they can use a lean-and-mean state sequencer to get maximum speed. Which to pick?

Fortunately, advances in programmable logic are softening this dilemma. As of this writing, gate arrays can implement signal processing with nearly the general computational horsepower of the best DSP microprocessors, without giving up the task specific horsepower that can be achieved with a lean-and-mean state sequencer. I would expect this gap to narrow into insignificance over the next few years.

Microprocessors are in everything from personal computers to washing machines, from digital cameras to toasters. But it is this very ubiquity that has made us forget that microprocessors, no matter how powerful, are inefficient compared with chips designed to do a specific thing.

—Tredennick and Shimamoto[P1]

Given this trend, perhaps the true dilemma is not in hardware at all. Rather, it might be a question of software architecture and operating system. Does the designer choose a standard microprocessor (or DSP processor) architecture to reap the benefits of a mainstream operating system like VxWorks, Linux, pSOS, BSD, or Windows? Or does the designer roll their own hardware architecture, specially optimized for the synthetic instrument application, at the cost of also needing to roll their own software architecture, at least to some extent?

Again, this gap is narrowing, so the dilemma may not be an issue. Standard processor instruction sets can be implemented in gate arrays, allowing them to run mainstream operating systems, and there is growing support for customized ASIC/PLD-based real-time processing in modern operating systems.

In fact, because of advances in gate array and operating system technology, the world may soon see true, general-purpose digital systems that do not compromise speed for complexity, and do not compromise software support for hardware customization. This trend bodes well for synthetic instrumentation, which shines the brightest when it can be implemented on a single, generalized, CCC cascade.

Extensive computational requirements lead to a general-purpose DSP- or microprocessor-based approach. In contrast, complex periodic waveforms may be best controlled with a high-speed state sequencer implementing a DDS phase accumulator indexing a waveform buffer, while fine-resolution delayed-pulse requirements are often best met with hybrid analog/digital pulse generator circuits. These categories intertwine when considering implementation.

Stimulus Triggering

I have only scratched the surface of the vast body of issues associated with stimulus DSP. One could write a whole book on nothing but stimulus signal synthesis. But my admittedly abbreviated treatment of the topic would be embarrassingly lacking without at least some comment on the issue of triggering.

Triggering is required by many kinds of instruments. How do we synchronize the stimulus, and the subsequent measurement, with external events?

Triggering ties together stimulus and response, much in the same way as calibration does. A stimulus that is triggered requires a response measurement capability in order to measure the trigger signal input. Therefore, an SMS with complete stimulus response closure will provide a mechanism for response (or ordinate) conditions to initiate stimulus events.

Desirable triggering conditions can be as diverse as ingenuity allows. They don’t have to be limited to a signal threshold. Rather, trigger conditions can span the gamut from the rudimentary single-shot and free-run conditions, to complex trigger programs that require several events to transpire in a particular pattern before the ultimate trigger event is initiated.

Stimulus Trigger Interpolation

Generality in triggering requires a programmable state machine controller of some kind. Furthermore, it is often desirable to implement finely quantized (near continuously adjustable) delays after triggering, which seems to lead us to a hybrid of digital and analog delay generation in the controller.

While programmable analog delays can be made to work and meet requirements for fine trigger delay control, it’s a mistake to jump to this hardware-oriented solution. It’s a mistake, in general, to consider only hardware as a solution for requirements in synthetic measurement systems. Introducing analog delays into the stimulus controller for the purpose of allowing finely controlled trigger delay is just one way to meet the requirement. There are other approaches.

For example, as shown in Figure 3-5, based on foreknowledge of the reconstruction filtering in the signal conditioner, it is possible to alter the samples being sent to the D/A in such a way that the phase of the synthesized waveform is controlled with fine precision—finer than the sample interval.

image

Figure 3-5 Fine trigger delay control

The dual of this stimulus trigger interpolation and re-sampling technique will reappear in the response side in the concept of a trigger time interpolator used to re-sample the response waveform based on the precisely known time of the trigger.
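
One way to realize this, sketched below under the assumption of an ideal (sinc-like) reconstruction filter, is to re-sample the stored waveform through a fractional-delay filter before it reaches the D/A. The filter length and window are arbitrary illustrative choices.

```python
# Fractional-delay sketch: shift a waveform by a fraction of a sample interval
# by recomputing its samples, assuming near-ideal reconstruction filtering.
import numpy as np

def fractional_delay(samples, delay_fraction, n_taps=33):
    """Delay by delay_fraction (0..1) of one sample using a windowed shifted sinc."""
    n = np.arange(n_taps) - (n_taps - 1) / 2       # symmetric tap indices
    taps = np.sinc(n - delay_fraction)             # shifted-sinc interpolation kernel
    taps *= np.hamming(n_taps)                     # window to a practical length
    taps /= np.sum(taps)                           # unity gain at DC
    return np.convolve(samples, taps, mode="same")

fs = 100e6
t = np.arange(2048) / fs
pulse = np.exp(-((t - 10e-6) / 0.5e-6) ** 2)       # smooth test pulse
delayed = fractional_delay(pulse, 0.3)             # 3-ns shift at a 10-ns sample interval
```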

The Stimulus D/A

The D/A conversion section, or stimulus side codec (or really just “Co”), creates an analog waveform based on the output of the digital control section. As I will discuss below, the place where the D/A begins and the controller ends may be fuzzy.

D/A converters tend to be constrained by a speed versus accuracy tradeoff much in the same way that controllers trade speed for complexity. This speed-accuracy trade-off reflects the practical reality that fast D/A systems tend to have worse amplitude accuracy than slow D/A systems. I’m using the word “accuracy” in the qualitative sense of finer amplitude resolution, measured in bits, but this may also be measured in SINAD or IMD performance. Speed can be viewed as sampling rate or, equivalently, time resolution. It’s most practical to use low-speed D/A subsystems where amplitude accuracy is the primary requirement, and to use moderate-accuracy systems where speed is paramount.

The following table illustrates typical extremes of this spectrum, showing the bandwidth versus accuracy trade-off one encounters in practice. It does not reflect an exhaustive survey of available D/A technology, nor could any static table on a printed page keep up with the fast pace of change.

Table 3-1

D/A converter trade-off range

Requirement         ENOB     Speed
Pulse Generation    1-bit    100 GHz
Analog Waveforms    12-bit   100 MHz
AC/DC Reference     18-bit   100 kHz

Note that ENOB refers to the effective number of bits provided by the amplitude accuracy of a D/A in the given category.

I have more to say about codec accuracy, ENOB, and other related topics when I discuss the response codec in the section titled “The Response Codec,” as these issues affect both stimulus and response in analogous ways.

Interpolation and Digital Up-Converters in the Codec

One of the signal coding operations an SMS can be asked to perform is modulation on a carrier, resulting in a so-called bandpass signal. On the stimulus side of a synthetic instrument, there are two fundamental ways to generate a bandpass signal:

image Up-convert digitally before the D/A

image Up-convert with analog circuits after the D/A

It’s also possible to do both of these, up-converting to a fixed digital IF before the D/A, and then use analog up-conversion after the D/A to finish the job.

The idea of interpolation in the context of digital signal processing is different from what I mean by interpolation in the context of measurement maps. In DSP terms, interpolation is the process of increasing the sampling rate of a digital signal. It is accomplished by means of an interpolating filter that reconstructs the missing samples with predicted data, based on the assumption of limited signal bandwidth. A good reference on the interpolation process is [B11]. Interpolation goes hand-in-hand with up-conversion, since a higher-frequency up-converted result needs more samples to represent it without aliasing.
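
In case the mechanics are unfamiliar, the sketch below shows integer-factor interpolation in its simplest form: zero-stuffing followed by a lowpass (anti-imaging) filter. The windowed-sinc filter and the factor of 4 are assumptions for illustration; a hardware interpolating filter inside the D/A performs the same operation.

```python
# Interpolation sketch: raise the sample rate by an integer factor.
import numpy as np

def interpolate(samples, factor, n_taps=63):
    stuffed = np.zeros(len(samples) * factor)
    stuffed[::factor] = samples                    # insert zeros between samples
    n = np.arange(n_taps) - (n_taps - 1) / 2
    lp = np.sinc(n / factor) * np.hamming(n_taps)  # lowpass cut at the old Nyquist
    lp *= factor / np.sum(lp)                      # restore the original amplitude
    return np.convolve(stuffed, lp, mode="same")   # filter predicts the missing samples

baseband = np.sin(2 * np.pi * 0.05 * np.arange(200))   # tone at the low rate
high_rate = interpolate(baseband, factor=4)             # 4x the samples, same signal
```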

You may be puzzled why up-conversion and interpolation is a topic for the stimulus codec section of this book. Isn’t digital up-conversion accomplished by the stimulus controller? Isn’t analog up-conversion a signal conditioning function? Yes, I agree that logically the up-conversion function is not part of the codec, but the fact is that many new D/A subsystems are being built with digital up-conversion and interpolation on board.

Actually, it’s quite beneficial for interpolation and digital up-conversion to be accomplished within the stimulus codec, because doing so lowers the data rate required out of the stimulus controller. The stimulus controller can concern itself only with the meaningful portion of the stimulus; the resulting baseband or low-IF signal can then be translated, in a mechanical process, to some high frequency without burdening the controller. In a sense, the codec is merely coding the information-bearing portion of the signal, both as an analog voltage and as a modulation.

The idea of accomplishing signal encoding in the D/A subsystem extends beyond just up-conversion and analog coding. Other forms of encoding can be accomplished most efficiently here. These possibilities include: pulse modulation, AM/FM modulation, television signals such as NTSC/PAL, frequency hopping, and direct sequence spreading.

In general, never assume that all DSP tasks are performed in the stimulus controller; never assume all analog conditioning tasks are performed in the stimulus conditioner. All the hardware is available to be used for anything.

Stimulus Conditioning

In a stimulus system, after the analog signal is synthesized by the D/A, some amount of signal conditioning may need to be applied. This conditioning can include a wide variety of signal processing and DUT specific considerations including, possibly, one or more of the following:

image Amplification: linear, digital, pulse, RF, high voltage or current

image Filtering: fixed, tunable, tracking, adaptive

image Impedance: matched source, programmable mismatch, constant current/voltage

image DUT Interface: probes, connectors, transducers, antennas

DUT interfacing is the normal role for signal conditioning; however, as I just explained with regard to the D/A, there’s no reason that signal encoding or modulation tasks can’t be performed here as well, especially up-conversion and modulation. An analog RF up-converter is a common signal conditioner component.

Signal-conditioning requirements are obviously dependent on the needs of the DUT, but they are also dependent on the performance of the D/A subsystem. If, for example, the D/A is fast enough to generate all frequencies of interest, there is no need for up-conversion; if the D/A can produce the required power, current, or voltage, there is no need for amplification.

The need to interact with a diverse selection of DUTs tends to drive the design of stimulus signal conditioning in the direction of either a parameterized asset, as I discussed in the section titled “Parameterization of CCC Assets,” or to multiple assets in the CRM architecture sense. Often parameterization makes the most sense as it’s more efficient to contain the switching between different conditioner circuit options within some overall conditioner subsystem than it is to force the system-level switching to handle this. Only when faced with a unique and narrow range of conditioning needs specific to a certain class of DUTs does it make sense to create a unique conditioner asset to address that requirement.

An illustration of this principle would be a variable gain amplifier or selectable filter in the signal conditioner. It would make no sense to build separate conditioners just to change a gain or a filter. Similarly, DC offsets, impedances, and other easily parameterized qualities are best implemented that way.

The opposite case would be a situation where the signal conditioner could produce a signal that would be damaging to some class of assets, or where some class of DUTs could do damage to the conditioner, for instance a high voltage stimulus.

Stimulus Conditioner Linearity

Stimulus conditioner circuitry does not have to be linear in all cases. It depends on the requirements of the test. Some applications, pulsed digital test, for instance, might be best served with digital line drivers as the signal conditioner amplifiers. Such drivers are not linear devices.

In other applications, linearity after the D/A is paramount. It’s also generally beneficial to minimize the noise and spurious signals added after the D/A.

Problems with stimulus conditioner linearity are exacerbated when the stimulus conditioner is an analog up-converter. It is very difficult to preserve wide dynamic range, limit the injection of noise, and prevent spurious products from appearing at the stimulus output.

Because linear conditioner design can be challenging and expensive, it’s important to keep an open mind about solutions to these difficulties.

Gain Control

Although it is definitely most convenient to adjust the level (amplitude) of the stimulus by simply adjusting the amplitude of the digital signal entering the D/A, sometimes this is not a good idea. If the signal conditioner has an up-converter, or other gain or spurious producing stages, the junk injected by this conditioner circuitry remains at a constant level as the signal out of the D/A drops. The D/A itself may also inject some unwanted signals at a fixed level. Consequently, the signal-to-noise ratio (SNR) of the stimulus will fall as the stimulus level is decreased relative to the fixed noise. In fact, the fixed noise level may limit the minimum stimulus signal that is discernible, as eventually noise will swamp the signal.

The way around this problem is to adjust signal levels in the stimulus conditioner after most of the junk has been added to the signal. That way, when adjusting the signal level, the SNR stays roughly the same. The signal level can be lowered without fear that it will be swamped by the noise.

image

Figure 3-6 Effect of gain control placement on SNR with varying gain

As a result of this idea, stimulus signal conditioners are often designed with variable gain amplifiers or adjustable output attenuation. This allows us to run the D/A at an optimum signal level relative to headroom and quantization noise as discussed in the section titled “Codec Headroom.”

Adjusting gain in the signal conditioner is not without its disadvantages. Foremost of these is that the variable gain must be implemented so as to work and maintain calibration across the range of frequencies and signals that the conditioner might handle. This can be a challenge in broadband designs. Sometimes a compromise approach is used, with only coarse gain steps (perhaps 10 dB) implemented in the conditioner, with fine steps implemented in the codec or DSP controller.
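
A back-of-the-envelope comparison makes the point. The numbers below are illustrative assumptions (a conditioner noise floor of -70 dBm and a 40-dB level reduction), not measured data.

```python
# SNR versus where the level control is applied (assumed, illustrative numbers).
signal_dbm_full_scale = 0.0      # stimulus at full scale out of the D/A
conditioner_noise_dbm = -70.0    # fixed "junk" added by the conditioner/up-converter
backoff_db = 40.0                # desired reduction in stimulus level

# Case 1: reduce the digital amplitude into the D/A; the fixed noise does not drop.
snr_digital_backoff = (signal_dbm_full_scale - backoff_db) - conditioner_noise_dbm

# Case 2: run the D/A at full scale and attenuate after the conditioner;
# signal and conditioner noise drop together, so the SNR is unchanged.
snr_output_attenuator = signal_dbm_full_scale - conditioner_noise_dbm

print(f"SNR with digital backoff:    {snr_digital_backoff:.0f} dB")   # 30 dB
print(f"SNR with output attenuation: {snr_output_attenuator:.0f} dB") # 70 dB
```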

Adaptive Fidelity Improvement

Often, the designers of synthetic measurement systems struggle to achieve impeccable fidelity in the stimulus signal conditioning so as to preserve all the precision in the stimulus they have generated with the finely quantized D/A. Building a signal conditioner “clean” enough to match the D/A is often a daunting challenge. It’s particularly difficult to meet fidelity specifications in generic hardware when those specifications derive from the performance of a signal-specific instrument. I have seen stimulus system designers struggle with this again and again. Designing up-converters with high fidelity and low spurs is especially difficult.

Granted, it’s harder to make a clean sine wave with a D/A and broadband analog processing than it is with a narrow-band filtered crystal oscillator, but this may be an unnecessary enterprise. As I will say repeatedly in this book, proper synthetic instrument design focuses on the measurement, not on the specifications of some legacy instrument being replaced. Turn to the measurement to see what fidelity is needed. You may find that much less is needed by the measurement than what a blanket fidelity specification would require.

Once we have focused on the measurement and looked at the fidelity performance of reasonable stimulus signal conditioning, if we see that we still don’t meet the requirements for a good measurement, what do we do then? Clearly, the solution to that underspecified problem depends on the details of the situation. In some cases parameterized filtering will help; other times, higher-power amplification can improve linearity. These are all well-known techniques. But there is one technique I want to mention here because it is often overlooked—a technique that has wide applicability to these situations: adaptive processing.

Remember, there is a response system at our disposal, and a fully programmable, DSP driven stimulus system to boot. This combination lends itself nicely to closed-loop adaptive techniques.

If you can measure and you can control, then you can adapt to achieve a measured goal.

Specifically, it is possible to adapt the digital data driving the D/A so as to reduce or eliminate artifacts, spurs, and other fidelity issues introduced by the signal conditioner.

A simple example of this technique would be the elimination of a spurious tone that appears in the stimulus output that is harmful to a measurement. Synthesize a second tone of the same frequency as the spur. Figure 3-7 shows a system for adaptively adjusting the amplitude and phase of the second tone to null the spur, eliminating it from the stimulus and thus making the measurement possible.

image

Figure 3-7 Adaptive nulling
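
To make the idea concrete, here is a hedged software sketch of the loop in Figure 3-7: an LMS-style update adjusts the complex weight (amplitude and phase) of a synthesized tone until it cancels a fixed spur seen at the output. The spur level, frequencies, and step size are arbitrary assumptions, and the response-side measurement is reduced to reading the residual directly.

```python
# Adaptive spur nulling sketch (illustrative values, simplified response path).
import numpy as np

fs, f_spur, n = 100e6, 7e6, 20000
t = np.arange(n) / fs
spur = 0.01 * np.cos(2 * np.pi * f_spur * t + 1.3)   # unwanted spur from the conditioner

ref = np.exp(1j * 2 * np.pi * f_spur * t)            # nulling tone at the same frequency
w = 0.0 + 0.0j                                       # complex weight: amplitude and phase
mu = 0.05                                            # adaptation step size
residuals = np.empty(n)

for k in range(n):
    cancel = np.real(w * ref[k])                     # injected nulling tone
    residuals[k] = spur[k] + cancel                  # what the response side measures
    w -= mu * residuals[k] * np.conj(ref[k])         # LMS step toward a null

print("residual RMS after adaptation:", residuals[-1000:].std())
```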

Adaptive nulling, linearization, or calibration is a nontrivial enterprise to be sure, but it has the unique property in this context of being something that can be implemented purely in DSP software. That doesn’t mean it’s necessarily easier or better than a hardware solution to a fidelity issue. My point, however, is that such techniques should always be considered when the hardware has fidelity issues. Adaptive DSP techniques will often have a significantly lower cost in production than any hardware solution.

Reconstruction Filtering

Depending on the needs of the test, it may be possible to directly use the quantized voltage from the output of the stimulus codec. For example, if the stimulus is a digital logic level, it may be used directly. However, when synthesizing a smooth analog stimulus waveform, it’s often better to use a reconstruction or interpolation filter. This filtering at the output of the D/A reconstructs the analog waveform from the “stair-step” approximation. Spectrally, this filter attenuates high-frequency aliases while it can also correct for the sin(x)/x roll-off effect created by holding the samples through each “tread” in the staircase.
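
For a sense of scale, the short calculation below evaluates the zero-order-hold sin(x)/x droop at a few frequencies, assuming a 100-MHz D/A clock, along with the inverse-sinc gain a pre-emphasis stage would need to apply. The numbers are illustrative, not from any particular converter.

```python
# Zero-order-hold sin(x)/x droop and the corresponding inverse-sinc correction.
import numpy as np

fs = 100e6                                    # assumed D/A sample rate
for f in (1e6, 10e6, 25e6, 40e6):
    droop = np.sinc(f / fs)                   # np.sinc(x) = sin(pi*x)/(pi*x)
    print(f"{f/1e6:5.1f} MHz: {20*np.log10(droop):6.2f} dB droop, "
          f"pre-emphasis gain {1/droop:5.3f}x")
```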

If I know the dynamics of the reconstruction filtering when I am calculating the samples I will send into the stimulus codec, it is possible for me to choose these samples so as to custom tailor and shape the signal conditioner output waveform based on that knowledge.

For example, fine control of rise time and delay is possible using this technique—ten times finer than the sample interval, even. This is a critical fact to keep in mind when designing the codec. It’s easy to see a specification that says “rise time programmable in 1 ns steps” and erroneously conclude that the D/A must run at 1 GHz.

The characteristics of the reconstruction filtering, and other fixed or parameterized filters in the signal conditioner, can also be used to facilitate adaptive techniques, as described in the section titled “Adaptive Fidelity Improvement.”

Stimulus Cascade—Real-World Example

Figure 3-8 shows an example of a real-world stimulus subsystem, the Celerity Series CS25000 Broadband Signal and Environment Generator, from Aeroflex. This is only one of the many different stimulus products made by Aeroflex. I selected this one in particular because it includes both high-fidelity signal conditioning and stimulus processing that comprises a waveform playback controller and a general-purpose CPU with a range of options. Thus, the Aeroflex Broadband Signal and Environment Generator platform represents a complete synthetic measurement system stimulus cascade.

image

Figure 3-8 Aeroflex CS25000

The BSG combines a very deep memory, very high-speed arbitrary waveform generator and a broadband RF up-converter with powerful signal generation software. The BSGs have bandwidths of up to 500 MHz, and full bandwidth signal memory of up to 10 seconds. The bandwidth, memory depth and dynamic range make the BSG a powerful tool for broadband satellite communications, frequency agile radio communications, broadband wireless network communications, and radar test. An open, software-defined instrument architecture allows easy imports of user created waveforms. Vector signal simulator (VSS) software creates signal files for commercial wireless standards as well as generic nPSK, nQAM, nFSK, MSK, CW, tone combs, and notched noise signals.

Any of these generic signal types can be gated or bursted in time, as well as hopped in frequency. Real signals, including recorded signals from Aeroflex’s broadband signal analyzers or other recorder sources, can be imported and combined with digitally generated signals, and then played back on the BSG. Impairments can be added to the signals including thermal noise, phase noise, and passband amplitude and phase distortion. VSS provides the unique ability to mix any combination of signals and impairments to generate complex signal environments.

Aeroflex’s Vector signal player (VSP) software provides simple controls for signal file selection, output frequency control and output power control. Aeroflex’s up-converters use real (non-I/Q) conversion architectures, generating high dynamic range waveforms without the carrier leakage and signal image problems associated with I/Q modulators found in other signal sources.

The high-speed stimulus controller in the BSG is designed with an enhanced version of the waveform playback architecture I discussed in the section titled “Waveform Playback.” This controller allows the BSG to play extremely complicated waveform data files through the use of programmed sequencing. This allows for the predetermined, scenario-based playback of different sections in memory. During playback, the instrument can move from one section of memory to another on a clock cycle. The control of this is analogous to a typical MIDI tone generator driven by a sequencer (a high-tech player piano). The BSG waveform address counter can move in programmed fashion to different sections of memory, building a complete stimulus output without having to put the complete wave train into memory.

This sequencing capability is particularly useful when synthesizing digital modulation waveforms. It makes efficient use of memory while allowing many possible waveforms to be generated. A simple example would be a pulse that is only active for a small amount of time. The pulse can be put into a small block of memory, another small block of memory can hold a piece of interpulse signal (often just zeros). A scenario file programs the system to play the interpulse buffer for a certain number of cycles, then to play the pulse file once, then to start over. There can be multiple pulse profiles and interpulse buffer profiles that can be played to produce extremely complex output stimulus from a small amount of memory.
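
The following toy sequencer captures the spirit of that scenario-based playback. The track contents and scenario format are invented for the example and are not Aeroflex's actual file format.

```python
# Scenario playback sketch: assemble a long wave train from small memory blocks.
import numpy as np

tracks = {
    "pulse":      np.ones(64),                # short active pulse
    "interpulse": np.zeros(256),              # quiet time between pulses
}

# Each scenario entry is (track name, repeat count).
scenario = [("interpulse", 4), ("pulse", 1)] * 3    # three widely spaced pulses

def play_scenario(entries):
    out = []
    for name, repeats in entries:
        for _ in range(repeats):
            out.append(tracks[name])          # jump to another block of memory
    return np.concatenate(out)

wave_train = play_scenario(scenario)          # 3 * (4*256 + 64) = 3264 samples
```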

Table 3-2

BSG performance range

image

Table 3-3

BSG options

image
