CHAPTER 4

Response

This chapter describes concepts and design issues related to the response side of a synthetic measurement system. The main goal of the response subsystem is to measure some aspect of the DUT in a measurement context. A secondary goal is to measure the output of the stimulus system for calibration purposes.

Some of the concepts discussed relating to stimulus also apply to response, and vice versa. Some concepts, however, are unique to response. The response CCC cascade comprises the interface to the DUT and associated signal conditioning, A/D conversion (response codec), and finally a DSP controller. The ordering of this cascade is the opposite of the stimulus cascade, but the functions are completely analogous.

image

Figure 4-1 The response cascade

Response Signal Conditioning

The response signal conditioner is the signal processing interface between the DUT and the response codec. It may be as simple as an amplifier with anti-alias filtering, or it may be as complex as a down-converter.

Input Protection

General-purpose test equipment must be able to withstand typical mistakes made by test engineers (yes, test engineers do sometimes make mistakes). Some of these may expose the response system to excess signal levels. The response signal conditioner should be designed such that reasonable overloads do not damage the subsequent processing.

Response Linearity and Gain Control

As in the stimulus signal conditioner, linearity in the response conditioner is a common requirement that leads to challenging hardware design. Digital-oriented test scenarios may not require full linearity, but most other applications do. Linearity is just as difficult to achieve on the response side as it is on the stimulus side, although on the response side there are some additional options that can ease the problems somewhat.

In a response conditioner, there are two fundamentally different approaches to implementing a linear system. I call these the high-gain and low-gain response processing strategies.

Essentially, the difference between these approaches lies in where the signal conditioner noise floor is placed relative to the A/D quantization noise. Low-gain strategies place the signal conditioner noise near the quantization noise floor; high-gain strategies have enough gain to amplify this noise all the way up to the nominal operating point of the A/D.

image

Figure 4-2 Low-gain versus high-gain

Most measurement systems are low-gain. Communication systems are often high-gain. High-gain systems will always need some form of gain control because the noise is already at the nominal loading point of the A/D. As a signal is introduced and its level increases, a high-gain system will need to back down its gain immediately to prevent overload. In a low-gain system, variable attenuation is optional and is used only with very large signals that threaten to overload the A/D. Many low-gain systems have no pre-A/D gain control at all.
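
To make the distinction concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers are illustrative assumptions (a 14-bit, 2 Vpp codec, 100 MHz of bandwidth, a 50 Ω thermal noise floor at the input), not the parameters of any particular instrument; the point is simply how far apart the two gain strategies land.

    import math

    # Illustrative, assumed numbers -- not the specifications of any real instrument.
    N_BITS = 14              # A/D resolution
    V_FS = 2.0               # A/D full-scale range, volts peak-to-peak
    BW_HZ = 100e6            # analysis bandwidth, Hz
    R_OHMS = 50.0            # source impedance
    K_BOLTZ = 1.380649e-23   # Boltzmann constant, J/K
    T_KELVIN = 290.0         # reference temperature

    # Ideal quantization noise of the A/D, RMS volts (step size / sqrt(12)).
    delta = V_FS / 2**N_BITS
    vq_rms = delta / math.sqrt(12)

    # Thermal noise of the 50-ohm source over the bandwidth, RMS volts.
    vt_rms = math.sqrt(4 * K_BOLTZ * T_KELVIN * R_OHMS * BW_HZ)

    # Low-gain strategy: just enough gain to put the conditioner noise near
    # the quantization noise floor.
    g_low_db = 20 * math.log10(vq_rms / vt_rms)

    # High-gain strategy: enough gain to raise that noise to the nominal
    # operating point, taken here as full scale less 12 dB of headroom.
    v_nominal = (V_FS / 2) * 10 ** (-12 / 20)
    g_high_db = 20 * math.log10(v_nominal / vt_rms)

    print(f"quantization noise floor: {vq_rms * 1e6:6.1f} uV rms")
    print(f"input thermal noise     : {vt_rms * 1e6:6.1f} uV rms")
    print(f"low-gain strategy gain  : {g_low_db:5.1f} dB")
    print(f"high-gain strategy gain : {g_high_db:5.1f} dB")

With these assumed numbers the low-gain design needs only about 12 dB of gain, while the high-gain design needs nearly 90 dB, which is one way of seeing why a high-gain system cannot avoid gain control once a real signal appears.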

It’s somewhat ironic that measurement systems tend to be low-gain designs with limited gain control. The irony stems from the fact that most response signal conditioners have a signal-level sweet spot that maximizes dynamic range. You normally want to keep the measured signal pinned exactly on this sweet spot in order to achieve the best fidelity and consequently the best measurement accuracy. Unfortunately, other design considerations in measurement systems prevent this degree of gain optimization from being achieved. In contrast, communications systems often have automatic gain control (AGC) that keeps the signal level pinned precisely at the optimum level for the detector, even though preservation of dynamic range and accuracy is not the reason this is done.

It’s also interesting to note, by way of analogy with stimulus, that stimulus conditioners are essentially always low-gain in the sense that they try to minimize the D/A quantization noise reflected at the output while running the D/A with a high signal at its nominal level. As such, stimulus conditioners tend to be more constrained, although they also tend to run at their “sweet spot” more consistently. Response conditioners can get away with taking a low-gain or high-gain approach and moving the signal level around, depending on the situation.

Adaptive Techniques

As with the stimulus conditioner, it’s possible to implement system-level adaptive techniques to fix linearity, crosstalk, or spurious signal issues that plague the response conditioner. For example, consider the measurement of signal harmonics in the presence of a powerful fundamental. If the fundamental is powerful enough, and the level of the DUT harmonics to be measured is low enough, then the measurement system harmonics (specifically those of the response conditioner) will swamp those of the DUT. The measurement will become impossible.

But if adaptive nulling techniques are used to attenuate the fundamental without affecting the harmonics, the measurement again becomes possible. This nulling happens in the response signal conditioner, up front, as soon as possible, using the undistorted fundamental from the input of the DUT as the nulling reference.

image

Figure 4-3 Adaptive nulling to improve response measurement

Similar methods can be applied for the measurement of intermodulation and spurs. This is also a good technique for measuring one channel alongside several others in a multiplexed arrangement. Without adaptive nulling, the adjacent channels may make accurate measurement of the desired channel impossible.
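
To sketch how such nulling might be realized, here is a minimal single-tap complex LMS canceller in Python. The signal levels, frequencies, and step size are all made-up illustrative values, and a real conditioner would implement the cancellation in hardware up front; the sketch only shows the principle of subtracting a scaled copy of the reference fundamental while leaving the harmonics untouched.

    import numpy as np

    rng = np.random.default_rng(0)
    fs, n = 1e6, 20000
    t = np.arange(n) / fs
    f0 = 10e3                                       # fundamental frequency (assumed)

    ref = np.exp(2j * np.pi * f0 * t)               # clean fundamental used as the nulling reference
    fund = 1.0 * ref * np.exp(1j * 0.3)             # strong fundamental arriving at the response input
    harm = 1e-4 * np.exp(2j * np.pi * 3 * f0 * t)   # tiny DUT third harmonic we want to measure
    x = fund + harm + 1e-6 * rng.standard_normal(n)

    # Single-tap complex LMS: adapt w so that w * ref cancels the fundamental.
    w, mu = 0.0 + 0.0j, 0.05
    y = np.empty(n, dtype=complex)
    for k in range(n):
        e = x[k] - w * ref[k]            # residual after cancellation
        w += mu * e * np.conj(ref[k])    # LMS weight update
        y[k] = e

    # The residual keeps the harmonic but the fundamental is heavily attenuated,
    # so it no longer drives the measurement system into its own distortion.
    print("power before nulling:", np.mean(np.abs(x) ** 2))
    print("power after nulling :", np.mean(np.abs(y[n // 2:]) ** 2))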

The Response Codec

In this section I will discuss some issues with regard to response digitization. Some of the discussion will apply to the stimulus codec as well. Also included in this section is a description of a state-of-the-art commercial digitizer subsystem.

Fidelity and Measurement Accuracy

An issue always on people’s minds when they begin to contemplate a synthetic solution to a measurement problem is the number of bits in the A/D or D/A (what I refer to collectively as the codec). When comparing two systems, if one has 12 bits and the other has 14 bits, it’s tempting to conclude that the 14-bit system is somehow “better” than the 12-bit system.

But the number of bits in the codec is a rather superficial and misleading metric for specifying the fidelity of a synthetic instrument. Even the supposedly more honest and encompassing effective number of bits or ENOB parameter can be completely misleading. Here’s why.

There is a plethora of error sources that plague a typical measurement system. The signal conditioner, like any analog system, can have offset, drift, noise, distortion, or spurious signals that corrupt the desired signal. The codec itself, with one foot firmly in the analog world, can also be plagued by these analog troubles. Less acknowledged, but certainly possible, the measurement can be additionally corrupted on the digital side by noise, distortion, and spurious signals.

That last statement probably needs some justification if you are under the impression that digital processing is ideal and works just like it says in Oppenheim and Schafer[B10]. Unfortunately, reality does not quite live up to this expectation.

Digital filters, for example, can oscillate and generate spurious signals through limit cycles and other nonlinear behavior. They can also generate noise through coefficient round-off error and can introduce distortion through various finite word-size effects. This noise and distortion can be significantly larger than one might expect given the number of bits in the signal processing path.
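
A toy demonstration makes the limit-cycle point. The filter below is a hypothetical first-order recursive section whose feedback product is rounded to integer LSBs after every update; it is not drawn from any production design, but it shows the characteristic behavior.

    # Zero-input limit cycle in a rounded first-order recursive filter:
    #   y[n] = round(a * y[n-1]),   values held as integer LSBs.
    a = -0.9                     # feedback coefficient (assumed)
    y = 100                      # state left over from an earlier input
    history = []
    for _ in range(60):
        y = int(round(a * y))    # feedback product rounded to the nearest LSB
        history.append(y)

    # With infinite precision the output would decay toward zero; the rounded
    # filter instead settles into a small sustained oscillation of a few LSBs
    # that never dies out -- a spurious signal born entirely in the digital domain.
    print(history[-10:])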

image

Figure 4-4 Sources of noise and distortion in synthetic systems

Therefore, focusing only on the bits in the codec distracts attention from the performance of the whole system. It is necessary to analyze signal flow, noise, and distortion through the whole system in order to draw any conclusions about accuracy and fidelity.

Ideal Quantization

Assuming the codec is an ideal quantizer (say, an ideal A/D converter) with N bits and a quantization step size of Δ, and, additionally, assuming that the input voltage has stationary and uniformly distributed statistics over the analog range quantized by the N bits, a common textbook exercise shows that the mean square error introduced by the quantization process is Δ²/12, that is, an RMS error of Δ/√12 relative to the ideal signal. In terms of dB, with the additional assumption of a bipolar signal (where one bit is used up as the sign bit), the RMS quantization noise is 6N + 6 dB below the ideal signal. Giving up another 6 dB for headroom, the digitization process results in RMS noise that is roughly 6 dB below the desired signal level for every bit in the codec.

This 6 dB per bit quantization noise is a well-known rule of thumb, but it’s important to always remember the assumptions. They are:

1. 6 dB headroom

2. Input signal statistics stationary and uniformly distributed

3. Ideal quantization

Reality may invalidate one or more of these assumptions. Let’s discuss them each in turn.
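
As a quick numerical check of the rule of thumb under assumptions 2 and 3, the short Python sketch below quantizes a uniformly distributed signal that exercises the full converter range (no headroom reserved) and measures the resulting signal-to-quantization-noise ratio; the parameters are illustrative only.

    import numpy as np

    def sqnr_db(n_bits, n_samples=1_000_000, seed=0):
        """SQNR of an ideal n_bits quantizer driven by a uniformly
        distributed signal spanning the full conversion range."""
        rng = np.random.default_rng(seed)
        full_scale = 1.0
        delta = 2 * full_scale / 2**n_bits           # quantization step size
        x = rng.uniform(-full_scale, full_scale, n_samples)
        xq = np.round(x / delta) * delta             # ideal quantization
        err = xq - x
        return 10 * np.log10(np.mean(x**2) / np.mean(err**2))

    for n in (8, 12, 14, 16):
        print(f"{n:2d} bits: {sqnr_db(n):6.1f} dB")   # comes out near 6 dB per bit

The measured ratios land close to 6 dB per bit; reserving headroom, as discussed next, simply subtracts from that total dB-for-dB.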

Codec Headroom

The need for headroom in a codec derives from the fact that real signals (measurements) have a peak-to-average ratio that is greater than one. Even theoretically, the common assumption that a measurement is Gaussian distributed about some mean implies a peak-to-average ratio that is infinite!

The problem with a high peak/average ratio is that the average will determine the overall performance of the codec in terms of quantization noise, but the codec will catastrophically distort the signal if the signal peaks overload the maximum range of the codec. Therefore, it’s best to arrange things so the average is as high as possible with overload never (or rarely) occurring on peaks.

Although real signals aren’t so benign as to have unity peak/average ratios, they are not so malicious as to have infinite peaks. Practice falls somewhere in between. It’s usually possible to come up with a good compromise.

As an example of how to arrive at a headroom compromise, think about your living room stereo set. If your stereo is of any decent quality, it will have a VU meter that displays the signal level. The VU meter gives an excellent graphical depiction of headroom.

image

Figure 4-5 VU meter

The region in red above the 0 dB mark is the headroom. Most audio equipment works “best” when the signal peaks seem to bounce up to the 0 dB mark, with rare excursions higher into the red. This is exactly the same sort of consideration that guides the design of any codec system.

You set your average below the maximum overload level with the headroom approximately the same as the anticipated peak/average ratio. This is called optimal loading of the codec. A consequence of this practice is that the signal-to-quantization-noise ratio in an optimally loaded codec will decrease dB-for-dB with any increase in peak/average ratio. The two parameters represent a counterbalancing trade-off. The dilemma, therefore, is a choice between avoiding overload and minimizing quantization noise.
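
The arithmetic of that dilemma can be sketched directly. Assuming a Gaussian-distributed measurement and an ideal 12-bit codec (both assumptions for illustration only), each decibel of headroom reduces the probability of clipping a sample but costs a decibel of signal-to-quantization-noise ratio:

    import math

    # Headroom trade-off for a Gaussian signal into an ideal N-bit codec.
    N_BITS = 12
    FULL_SCALE_SQNR_DB = 6.02 * N_BITS + 1.76   # textbook full-scale sine reference

    def clip_probability(headroom_db):
        """Probability that a zero-mean Gaussian sample exceeds full scale
        when its RMS level is set headroom_db below full scale."""
        k_sigma = 10 ** (headroom_db / 20)          # full scale, in units of sigma
        return math.erfc(k_sigma / math.sqrt(2))    # two-sided Gaussian tail

    print(" headroom   approx SQNR   P(sample clips)")
    for h_db in (6, 9, 12, 15, 18):
        sqnr = FULL_SCALE_SQNR_DB - h_db            # dB-for-dB loss with headroom
        print(f"  {h_db:3d} dB     {sqnr:5.1f} dB      {clip_probability(h_db):.1e}")

With these assumptions, the clipping probability falls extremely fast as headroom is added, while the SQNR cost grows only linearly in dB; that asymmetry is what makes a workable compromise possible.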

Headroom Trade-off and System Fidelity

The codec never operates in isolation; it always sits in the context of the signal conditioner and the controller. Headroom that optimizes the codec performance may not make sense for the signal conditioner. As I have discussed, the conditioner will have a “sweet spot” that optimizes performance.

In addition, it can’t be assumed that, just because there was an optimal balance between headroom and noise at the codec and conditioner, this balance remains optimal throughout digital processing.

Dynamic range and fidelity are affected in digital signal processing just as they are in analog processing. Although modern DSP tools make it virtually trivial to slap together processing steps, as much care needs to be exerted in designing each step of DSP as is put into designing each step of analog processing if you are to achieve optimum performance.

Response Digital Signal Processing

Just like the stimulus DSP, the response digital processor section can perform various sorts of functions, ranging from simple measurement and analysis tasks, to full-blown digital demodulation. Therefore, once again, depending on the exact requirements of each of these functions, the hardware implementation of an optimum response digital processor section can vary widely. One “general-purpose” DSP controller may not be what’s needed, in general.

Following are two broad categories that divide the capabilities of possible response digital processor assets:

image Waveform Recorder and DSP

image Matched Filter (Demodulator)

In the following sections, I will discuss these categories in turn.

Waveform Recorder and DSP

The first and simplest of these categories, waveform recorder and DSP, is representative of the response controller in most synthetic measurement systems around today. It consists of a block digitizer comprising an A/D and RAM, combined with high-speed DSP immediately after the A/D, and a general-purpose DSP-oriented CPU that works with the RAM-buffered data.

image

Figure 4-6 Waveform recording controller

The high-speed DSP (HSDSP) is normally used to reduce the quantity of data stored, by one of several techniques. These data-rate reduction schemes might include decimating, digital down-converting, averaging, demodulating, decoding, despreading, or computing statistical summaries. Some A/D parts include built-in high-speed DSP. You can live without HSDSP, but as data rates climb, this function becomes essential. Quite often it is implemented with a gate array.
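
The following Python sketch shows the flavor of one such data-rate reduction step, a digital down-converter followed by decimation. The sample rate, center frequency, and filter are illustrative assumptions; a real HSDSP would do the equivalent in a gate array at the full A/D rate.

    import numpy as np
    from scipy import signal

    fs = 100e6                   # A/D sample rate (assumed)
    fc = 21e6                    # center of the band of interest (assumed)
    decim = 50                   # keep 1 sample in 50 -> 2 MS/s complex output

    t = np.arange(1_000_000) / fs
    x = np.cos(2 * np.pi * (fc + 0.2e6) * t)        # stand-in for digitized response data

    # 1. Mix the band of interest down to baseband with a complex local oscillator.
    baseband = x * np.exp(-2j * np.pi * fc * t)

    # 2. Low-pass filter to the reduced Nyquist band, then discard samples.
    lpf = signal.firwin(numtaps=201, cutoff=0.8 * (fs / decim / 2), fs=fs)
    reduced = np.convolve(baseband, lpf, mode='same')[::decim]

    print(f"input : {fs / 1e6:5.1f} MS/s real,    {x.size} samples")
    print(f"output: {fs / decim / 1e6:5.1f} MS/s complex, {reduced.size} samples")

The downstream memory and processing now see one-fiftieth of the original rate, which is the whole point of placing the reduction as early as possible in the cascade.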

The memory controller manages the large block of waveform memory. The large block of memory contains digitized samples of waveform data. Perhaps the data is in one continuous data set, or several independently acquired tracks or blocks of data. The memory controller is a state machine that allows sequencing through that memory, reading or writing. The better systems allow you to read and write at the same time. Although you certainly can get away without simultaneous read and write, when I buy a digitizer system, I look for this feature first. It greatly enhances the capabilities of the system and is most often worth the money. Dual-port access may be implemented with a FIFO, or “ping-pong” buffers, or it may be a true two-port memory design with separate read and write address decoding logic.

Low-speed DSP (LSDSP) represents a microprocessor dedicated to DSP tasks. This may either be within the response controller subsystem, or it may be implemented as part of the host. The purpose of LSDSP is to further reduce the data rate, possibly by computing final ordinates.

Even with memory controllers that allow continuous acquisition capability, typical waveform recording controllers are block-oriented. What they do is “take a block of data” and analyze it. Even the more advanced units with decimators and down-converters boil down to this limited functionality. I say “limited,” notwithstanding the fact that, given a fast enough CPU and a big enough block, any sort of processing could be implemented this way. The reason I say “limited” is that other approaches can be orders of magnitude more resource efficient for certain essential tasks. Thus, the limitation of the simple block-digitize-and-DSP response controller arises from the limits of DSP processing resources.

Controllers can do many more things beyond just “taking a block of data” and running some DSP algorithm on it. They certainly must do more to handle high-speed, real-time interactive testing, particularly with digital modulations from cell phones and military communications equipment.

Matched Filter Demodulator

A matched filter is something I can prove mathematically to be the best way to detect the information modulated on a signal. It represents the gold standard of measurement devices. Matched filtering is described in any good communications theory book[B12], and is an essential vector signal analyzer (VSA) operation.

The simple block diagram of a matched filter is shown in Figure 4-7. A template of the expected signal waveform is correlated with the input signal. The correlation is integrated over the signal duration, resulting in a metric that indicates how close the input signal is to the template.

image

Figure 4-7 Matched filter

The signal template, h(t), in the matched filter is an ideal, undistorted copy of the thing the system is trying to detect and measure. This fact implies that a matched filter actually has the ability to store and generate waveform data, at least for internal use. If this sounds to you suspiciously like the stimulus controller, your suspicion isn’t misplaced. It may be surprising that a response detector contains stimulus generation, yet this is just another example of stimulus-response closure as I discussed in the section titled “Stimulus Response Closure: The Calibration Problem.” The response system cannot be separated from the stimulus system. In this case, an ideal response detector must have exact knowledge of the stimulus it is trying to detect.

The matched filter is worth considering given what it achieves: linear detection of a signal with minimum mean square error. There is no better detector in a least-squares sense. Given how great a matched filter is, it makes a lot of sense to have one or more of these in the response system. You definitely want one for each possible letter in the signal alphabet you are trying to detect.
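
Before turning to the hardware question, here is a minimal software sketch of a matched-filter bank, with a hypothetical two-letter alphabet of tone templates. The waveforms and noise level are assumptions for illustration; the structure is simply one correlator per letter, with the decision going to the template that yields the largest projection.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 256                                   # samples per symbol (assumed)
    t = np.arange(n)

    # One template h(t) per letter of the (hypothetical) signal alphabet.
    alphabet = {
        "mark":  np.cos(2 * np.pi * 0.05 * t),
        "space": np.cos(2 * np.pi * 0.08 * t),
    }

    def matched_filter_bank(received, templates):
        """Correlate the received block against every template and return
        the per-letter metrics plus the best match."""
        metrics = {name: float(np.dot(received, h) / np.dot(h, h))
                   for name, h in templates.items()}
        return max(metrics, key=metrics.get), metrics

    # Receive a noisy "space" and detect it.
    received = alphabet["space"] + 0.8 * rng.standard_normal(n)
    best, metrics = matched_filter_bank(received, alphabet)
    print({k: round(v, 3) for k, v in metrics.items()})
    print("detected letter:", best)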

No doubt a matched filter can be implemented in DSP software, thus one might argue that a matched filter can be implemented with the block digitizer and DSP processor. It’s certainly cheapest to do it this way; a software matched filter will always be cheaper than dedicated matched-filter hardware. Two questions thereby arise: Is there any real need for dedicated matched filtering in the response system hardware? Isn’t this exactly the kind of specificity we should avoid in a synthetic measurement system?

First question: Yes. Matched filtering as a dedicated hardware structure is often needed because it can’t be realistically implemented in DSP software for many real-world scenarios, particularly real-time scenarios. A true matched filter requires a convolution between the prototype impulse response and the response signal. This is computationally intensive. Even in cases where FFT techniques can be used to speed the processing, matched filters take a while to calculate. Then, multiply the already lengthy time for one matched filter by the number of templates in the alphabet, and you see that the computational burden becomes onerous quickly. Certainly, if you want matched filtering for a nontrivial alphabet, you need to dedicate hardware to the task.

Second question: No. Matched filtering isn’t specific. Quite the contrary, matched filtering is the most general form of linear detection around. This is because a matched filter is a general process with all its specificity encapsulated in the signal template it seeks to detect. The template is a parameter; actually, the template is more correctly seen as an abscissa. Any finite signal template can be the basis for a matched filter. The template waveform represents a basis function for the ordinate being measured. The matched filter detection process is an inner product (dot product) operation that determines how much of the signal vector projects onto a particular abscissa represented by the basis. What better way to measure an ordinate!

Response Trigger Time Interpolator

I have already discussed triggering in the context of the stimulus system in the section titled “Stimulus Triggering.” Response triggering is a somewhat different, perhaps simpler problem. When responding to a signal and triggering from it, you have the option of using post-processing to fix up the acquired data based on the trigger. That’s easier to do than with stimulus where, lacking the ability to move back in time and change history, you are stuck with the data previously emitted.

Response controllers or digitizers often have what’s called a trigger time interpolator that tells us precisely when the trigger happened relative to the digitizer clock. Straightforward digital processing techniques can be used to resample the data, transforming it into data synchronous with the trigger event.
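
A sketch of that fix-up step is shown below in Python, assuming the trigger time interpolator reports the trigger's offset as a fraction of a sample period. The band-limited (sinc) interpolation used here is only one of several reasonable resampling choices, and the signal is a made-up example.

    import numpy as np

    def align_to_trigger(samples, trigger_offset):
        """Resample acquired data so that sample 0 lands on the trigger instant.

        trigger_offset is the fraction of a sample period (0..1) by which the
        trigger preceded the first sample clock edge, as reported by a trigger
        time interpolator.  Uses band-limited (sinc) reconstruction.
        """
        idx = np.arange(len(samples))
        kernel = np.sinc((idx + trigger_offset)[:, None] - idx[None, :])
        return kernel @ samples

    # Example: a digitized sine whose trigger fell 0.37 samples before a clock edge.
    fs, f0 = 100.0, 3.0
    n = np.arange(64)
    raw = np.sin(2 * np.pi * f0 * (n - 0.37) / fs)   # what the digitizer actually captured
    aligned = align_to_trigger(raw, 0.37)            # data re-gridded onto the trigger
    ideal = np.sin(2 * np.pi * f0 * n / fs)
    # Away from the record edges, the aligned data approximates the
    # trigger-synchronous ideal; the residual comes from the finite sinc sum.
    print("max interior error:", np.max(np.abs(aligned[8:-8] - ideal[8:-8])))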

Response Cascade—Real-World Example

Figure 4-8 shows an example of a real-world digitizer subsystem, the Acqiris AP240. This is only one of the many different response digitizers made by Acqiris. I selected this one in particular because it includes both signal conditioning and response processing. Thus, Acqiris’ reconfigurable analyzer platform is more than just a digitizer. The full front-end signal conditioning has up to 1-GHz bandwidth, and an onboard FPGA digital processing unit (DPU) allows digitized signals to be processed and analyzed in real time. In fact, a system such as this, along with a host computer for moderate speed processing and control, represents a complete synthetic measurement system response cascade.

image

Figure 4-8 AP240 reconfigurable PCI signal analyzer platform

With SSR firmware options, the DPU can be programmed to perform processing algorithms at the card's maximum sampling rate, easing the requirements on the remainder of the response DSP subsystem.

image Onboard reconfigurable data processing unit (DPU) for real-time operations.

image Front-panel digital I/O connectors for real-time data processing control (DPU Ctrl).

image Synchronous dual-channel mode with independent gain and offset on each channel.

image Interleaved single-channel mode on either input, software selectable.

image 1-GHz analog bandwidth in all FS ranges.

image Up to 2 GS/s sampling rate in single-channel mode.

image Fully featured 50 Ω mezzanine front-end design with internal calibration and input protection.

image Short (1 Mpoints/ch typical) or optional long processing memory (4 Mpoints/ch typical).

image Multipurpose I/O connectors for trigger, clock, reference and status control signals.

image Continuous and start/stop external clock modes.

image High-speed PCI bus transfers data to host PC at sustained rates up to 100 MB/s.

image Device drivers for Windows 95/98/NT4.0/2000/XP, VxWorks and Linux.

image Auto-install software with application code examples for C/C++, Visual Basic, National Instruments LabVIEW and LabWindows/CVI.

The sustained sequential recording (SSR) firmware for the AP240 analyzer platform uses a dual-bank memory system with onboard automatic switching, allowing sequential recording to the host PC in sequence mode at sustained trigger and data rates ten times faster than normal. The SSR firmware also allows 1-GHz bandwidth with synchronous start-on-trigger dual-channel sampling at rates up to 1 GS/s (2 GS/s in single-channel mode). When triggering, there is minimal dead time between successive acquisitions, allowing recording in sequence mode with a sustained trigger rate of up to 100 kHz.
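
To put those numbers in perspective, a rough back-of-the-envelope calculation (assuming one byte per sample; the figures are illustrative, not specifications) shows why onboard processing and dual-bank memory matter so much at these rates:

    # Rough data-rate arithmetic for a digitizer of this class.
    sample_rate = 2e9            # 2 GS/s, single-channel mode
    bytes_per_sample = 1         # assumed
    memory_per_channel = 4e6     # long-memory option, ~4 Mpoints per channel
    bus_rate = 100e6             # sustained PCI transfer, ~100 MB/s

    acq_rate = sample_rate * bytes_per_sample      # raw acquisition rate, bytes/s
    fill_time = memory_per_channel / acq_rate      # time to fill onboard memory
    reduction = acq_rate / bus_rate                # reduction needed to stream continuously

    print(f"raw acquisition rate      : {acq_rate / 1e9:.1f} GB/s")
    print(f"time to fill 4 Mpoints    : {fill_time * 1e3:.1f} ms")
    print(f"reduction needed to stream: {reduction:.0f}x before the bus")

At the full rate the onboard memory fills in a couple of milliseconds, and roughly a twenty-fold reduction is required before continuous transfer over the bus becomes possible, which is exactly the sort of job handed to the DPU or to the HSDSP stage discussed earlier.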
