CHAPTER 11

Ten Mistakes in Synthetic Measurement System Design

Throughout this book, I have attempted to give you a clear and consistent plan for the design of synthetic instrumentation. After reading my plan, and after considering your own individual application requirements, you should be able to design and develop your own SI that reaps the promised benefits of this design approach.

Unfortunately, it has been my experience that the best laid plans to develop synthetic instrumentation go oft astray when battered by the erratic yet often powerful forces that beset system development in real-world industry. Designers don't really want to stray from the plan, but in the fast pace of development it's easy to get hit by a latecomer requirement or an unexpected design shortfall, and get derailed as a consequence. By the time you realize what has happened, you have left the road leading to Synthetic Instrument City and are sidetracked, on your way to Modular-ville or Rack-em-Stackopolis with no way to turn back.

Forewarned is forearmed, and in that spirit I will list ten common detours that designers can encounter during the development of synthetic instrumentation. My hope is that, knowing in advance about these common diversions, you will be able to navigate through the rough spots and ultimately stay on track.

Fixing Performance or Functionality Shortfalls Exclusively by Adding Hardware

Synthetic instrumentation systems are a hybrid of software and hardware developed to meet some measurement need as expressed by a set of specifications. As the system is developed, performance estimates are made, and at some point the predicted performance is held up next to the required performance, and compared. The system designers would have to be living a pretty lucky and enchanted life if all the required performance is met. Normally there is a shortfall somewhere.

Faced with an explicit shortfall in the performance of a hybrid of software and hardware, probably the most common reaction is to modify the hardware in some way to address it, rather than modifying the software, the system design as a whole, or the requirement itself. Just the hardware is changed; frequently something is added or complexity is increased.

This is a mistake.

The reason this is a mistake is that any system-level requirement shortfall is a system-level design shortfall. As such, it should be addressed at the system level in the best manner possible. In some cases this may involve changing hardware, but an unbiased evaluation must be made. Unfortunately, software engineers are often not even invited to consider solutions to things that are "obviously" hardware-oriented at the system level. I believe this is wrong and represents a primary source of failure and schedule/budget overrun in synthetic measurement system development.

To avoid this mistake, designers should be aware of the solve-with-hardware knee-jerk bias and compensate somehow. For example, software engineers should go to hardware meetings, learn the issues, and solve them as best they can. There should be a sincere effort to develop a software solution to all requirement shortfalls even if hardware seems the thing that must change. Sometimes you might be surprised.

“At first I thought we needed an amplifier, but integration solved our sensitivity problem.”

“I was sure we needed another filter, but we coded an adaptive nulling algorithm in the DSP to remove the interference.”

“That spec was impossible to meet with the hardware we were planning, so we convinced the customer that their measurement could be made just as accurately with the system as is.”

“By multiplexing we were able to avoid the need for another channel.”

Those are just a few examples of what system designers might come up with after pushing past their initial instinct to add hardware to solve system-level problems.
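To make the first of those quotes concrete, here is a minimal sketch, with made-up names and values, of how integration (coherent averaging of repeated, synchronized acquisitions) can recover a weak signal in software, with no added amplifier. Averaging N acquisitions reduces uncorrelated noise power by a factor of N:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_samples, n_avg = 1024, 100

t = np.arange(n_samples)
signal = 0.01 * np.sin(2 * np.pi * t / 64)   # weak repetitive signal
noise_rms = 0.1                              # noise floor well above the signal

def acquire():
    """One noisy capture; stands in for a triggered digitizer acquisition."""
    return signal + rng.normal(0.0, noise_rms, n_samples)

single = acquire()
averaged = np.mean([acquire() for _ in range(n_avg)], axis=0)

def snr_db(x):
    """SNR of x relative to the known reference signal, in dB."""
    return 10 * np.log10(np.mean(signal**2) / np.mean((x - signal)**2))

# Uncorrelated noise power drops by a factor of n_avg (about 20 dB here).
print(f"single acquisition SNR: {snr_db(single):6.1f} dB")
print(f"{n_avg}-average SNR:      {snr_db(averaged):6.1f} dB")
```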

Fixing Hardware Mistakes with Software

It may seem that this is the opposite of the first mistake. And it is. “Fixing it with hardware” and “Fixing it with software” are the Scylla and Charybdis of synthetic measurement system design. It’s difficult to steer a wise and safe course that does not wreck on either peril.

Here I am concerned about an over-dependence on software to provide solutions for correctable mistakes made in hardware. Hardware-oriented system engineers are prone to say "we'll just fix it in software" when a shortfall appears that "obviously" can be fixed with an appropriate algorithm. The stepping motor that spins backward, the connector pins that are scrambled, the register that can't be read after being written, the sensor that requires compensation for numerous unrelated parameters (the list is endless): these are all examples of things that really should be fixed by the person who made the mistake.

Again, the solution to this side of the dilemma is to keep a systems view of all things. Software team members need to be present when these decisions are made, and need to be familiar with the hardware issues and politically empowered enough to say no to "just fixing it with software." Perhaps more beneficially, the hardware team members, who are so often oblivious to the innards of the system software, need to get up to speed on these details. A suggestion to "just fix it with software" is not acceptable. Instead, the suggestion should be to "fix it in precisely this way, at precisely this point in the software system design."
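When a hardware quirk genuinely must be absorbed in software, the following sketch shows the spirit of that last suggestion: the compensation lives in one named, documented place at the driver boundary, not scattered through the measurement code. The pin map, channel count, and driver interface here are all hypothetical illustrations:

```python
# Hypothetical sketch: hardware mistakes absorbed at one documented boundary.

# The connector pins were wired in a scrambled order (hardware mistake).
# Remap exactly once, at the edge of the system.
PIN_REMAP = {0: 3, 1: 0, 2: 2, 3: 1}   # logical channel -> physical pin

def read_channels(raw_by_pin):
    """Return readings in logical channel order.

    raw_by_pin: sequence of raw values indexed by physical pin, as
    delivered by the (hypothetical) low-level driver.
    """
    return [raw_by_pin[PIN_REMAP[ch]] for ch in sorted(PIN_REMAP)]

def move_motor(driver, steps):
    """Command a motor move in the logical (documented) direction.

    The stepping motor is wired to spin backward, so the sign is
    flipped here, and nowhere else in the system.
    """
    driver.step(-steps)
```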

Adding Modes or Features Dedicated to Specific Measurements

It's so much easier just to add a voltmeter to the system than to add all that signal conditioner stuff and DSP we need to measure boring old DC voltage with our fancy 96 GHz digitizer. After all, this fancy digitizer fundamentally makes a rotten voltmeter. A slower but more precise A/D is more appropriate. So let's just add the voltmeter module, shall we?

—Anonymous

OK. That may be an extreme example, but you should get the point. The tendency is always to add a new, conventional-style instrument to an SMS rather than to do the extra work needed to make the synthetic design handle the full generality of measurements that need to be made.

This is a mistake in SMS design, although I’ll be the first one to admit that other factors may play a role here. Time and money may be saved in the short run by abandoning the purist SMS design philosophy. I will not argue that point.

But in the long run, the mistake compounds and threatens everything. Eventually you find traditional instruments doing all the measurements, and only a ghost of a synthetic design lurking in the system. All the redundancy and instrument specificity you tried to avoid is now back.

Designing Synthetic Instruments Procedurally

Object-oriented software design techniques have been widely known for over a decade, but still there is a pervasive bias toward procedural software design. This bias is very strong in the ATE community. After all, if you talk about “test procedures” and “test sequences,” everyone knows what you mean. But if you use the word “object,” the eyes around you glaze over and people start to drift away to refresh their coffee.

Using procedural methodologies to design synthetic instruments is a big mistake. The reason it is such a big mistake is that synthetic instruments are built on a hierarchy that naturally fits the OO concept of inheritance. An RMS distortion meter is a special kind of RMS voltmeter, which is a special kind of voltmeter, which is a special kind of meter, which is a special kind of instrument. Similarly, maps, abscissas, ordinates, signals, and block acquisitions form another family tree.

Not taking advantage of the natural structure of this hierarchy results in software redundancy, which leads to a maintenance nightmare. If one improves the voltmeter, the improvements are not necessarily reflected in the RMS voltmeter or the RMS distortion meter, unless somebody redundantly improves them all in the same way.

Failing to orient the design around objects results in related things being redundantly scattered throughout the system. Units are a classic example of this. Some ordinates are best expressed in terms of a certain unit: volts, amps, or dBm, for example. It seems to me that the best place to say what units a given ordinate has is in the ordinate itself. But without object orientation, information about the units a given ordinate is expressed in is hidden in all sorts of places: reports, test parameters, pass/fail criteria, database field names, graphs, and so on. Ironically, sometimes you look at the code that computes the ordinate itself and, unless you are lucky enough to find a comment, the units being used are anyone's guess!

It is my recommendation that all synthetic instrument designers learn OOD principles. They should resist the temptation to just put one foot in front of the other, as they have done in the past, and consider where things really should go in order to eliminate redundancy. The limitations imposed by the realities of non-OO tools and legacy applications are fading. Newer tools and applications are more amenable to system level OOD. But no matter how good the tools, SMS design will never turn object-oriented unless the designers force themselves to think that way.
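Here is a minimal sketch in Python of the hierarchy just described; the class names and method details are my own illustration, not code from any real SMS. Note that each ordinate carries its own units, so the units question raised above answers itself, and an improvement to the voltmeter's measurement code is inherited automatically by its descendants:

```python
import numpy as np

class Ordinate:
    """A measured quantity that knows its own name and units."""
    def __init__(self, name, units):
        self.name, self.units = name, units

class Meter:                                 # a special kind of instrument
    ordinate = Ordinate("reading", "arbitrary")
    def measure(self, samples):
        raise NotImplementedError

class Voltmeter(Meter):                      # a special kind of meter
    ordinate = Ordinate("voltage", "V")
    def measure(self, samples):
        return float(np.mean(samples))       # DC voltage

class RMSVoltmeter(Voltmeter):               # a special kind of voltmeter
    ordinate = Ordinate("rms voltage", "V")
    def measure(self, samples):
        return float(np.sqrt(np.mean(np.square(samples))))

class RMSDistortionMeter(RMSVoltmeter):      # a special kind of RMS voltmeter
    ordinate = Ordinate("distortion", "%")
    def measure(self, samples):
        # Reuses the inherited RMS computation; improve RMSVoltmeter.measure
        # and this class inherits the improvement automatically.
        total_rms = super().measure(samples)
        fund_rms = self._fundamental_rms(samples)
        residual = np.sqrt(max(total_rms**2 - fund_rms**2, 0.0))
        return 100.0 * residual / fund_rms   # residual (distortion + noise), %

    def _fundamental_rms(self, samples):
        # Crude estimate: assumes the fundamental falls on an FFT bin.
        spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
        return float(np.max(spectrum[1:]) * np.sqrt(2))

# Usage: a 1% harmonic on a unit sine reads as 1.00 % distortion.
t = np.arange(4096) / 4096
wave = np.sin(2 * np.pi * 32 * t) + 0.01 * np.sin(2 * np.pi * 96 * t)
meter = RMSDistortionMeter()
print(f"{meter.ordinate.name}: {meter.measure(wave):.2f} {meter.ordinate.units}")
```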

Meeting Legacy Instrument Specifications

Although one of the greatest advantages of synthetic measurement systems is their ability to perform legacy measurements in lieu of some obsolete instrument, there is no more certain way to disembowel a synthetic measurement system design effort than to specify that the synthetic instrument needs to replace a legacy instrument, or to meet the same specifications as some legacy instrument. This approach invariably leads the SMS in directions that are counterproductive. The result is a replacement for the legacy system that is not in any true sense a synthetic instrument.

The reason that this approach goes wrong is because the legacy instrument specifications were chosen in the context of a specific implementation of that old instrument. The legacy specifications reflect that implementation, often more than they reflect the underlying measurement. Therefore, if you try to use the legacy specifications as any sort of guidance for a synthetic implementation, you are likely to be led astray. You end up addressing issues that are irrelevant to the goal of making the underlying measurement.

Consider, for example, a measurement of relative humidity. Maybe you want to replace wet/dry bulb thermometers with a new digital humidity gage based on hygroscopic polymer sensors. Would you use all the specs on the wet/dry bulb system as a specification for the digital system? The size of the liquid reservoir? The minimum air flow for evaporation? Of course not. What you would do is abstract the measurement performed by the wet/dry bulb system and require the digital system to do the same measurement.

To take another specific example, consider an RF spectrum analyzer. This instrument is a form of radio receiver that sweeps across some wide frequency band and plots the ordinate of power versus frequency. In a legacy spectrum analyzer data sheet, you will find all sorts of specifications about the "sweep speed" and the "video averaging," but in a modern synthetic spectrum analyzer there may be a CCC system that digitizes and computes. There is no sweep; there is no video. To understand those old specifications at all, their meaning needs to be recast in terms that are relevant to a DSP implementation. And after you have done the work to figure out what "sweep speed" actually means when nothing is sweeping, what have you gained? Yes, there is some correspondence between a wagon wheel, a car tire, and an oar, but does that correspondence tell us anything of value when it comes to designing these devices? I doubt it.
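To illustrate how the legacy notions dissolve, here is a hedged sketch of a digitize-and-compute spectrum measurement; the function name, parameters, and scaling conventions are assumptions of mine, not any product's API. Nothing sweeps, and the nearest counterpart of "video averaging" is simply averaging successive power spectra:

```python
import numpy as np

def power_spectrum_dbm(blocks, sample_rate, impedance=50.0):
    """Average the power spectra of several digitized blocks.

    blocks: 2D array of shape (n_blocks, n_samples), in volts.
    Returns (frequencies_hz, averaged_power_dbm). Scaling of the DC
    bin is ignored; this is a sketch, not calibrated production code.
    """
    n = blocks.shape[1]
    window = np.hanning(n)
    # Window each block and take the one-sided FFT, scaled to peak volts.
    v_peak = 2.0 * np.fft.rfft(blocks * window, axis=1) / np.sum(window)
    # Sinusoid power into the load, then average across blocks: this
    # averaging is the DSP counterpart of legacy "video averaging".
    power_w = (np.abs(v_peak) ** 2 / (2.0 * impedance)).mean(axis=0)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    return freqs, 10.0 * np.log10(power_w / 1e-3 + 1e-30)   # dBm

# Example: a -10 dBm tone at 1 MHz, digitized in 16 blocks at 8.192 MS/s.
fs, n = 8.192e6, 4096
rng = np.random.default_rng(0)
t = np.arange(n) / fs
amp = np.sqrt(2 * 50 * 1e-4)   # peak volts for 0.1 mW into 50 ohms
blocks = amp * np.sin(2 * np.pi * 1e6 * t) + rng.normal(0, 1e-4, (16, n))
f, p = power_spectrum_dbm(blocks, fs)
print(f"peak: {p.max():.1f} dBm at {f[p.argmax()]/1e6:.2f} MHz")
```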

One might argue that some legacy instrument specifications are relevant to the measurement performed. Obviously, the size, weight, dimensions, interfaces, and power requirements of a legacy instrument are irrelevant, but the accuracy specifications aren’t. It may seem reasonable to believe that it’s possible to select those legacy specifications that have relevance to the underlying measurement, and just use those as specifications for the synthetic instrument.

Unfortunately, there is no such free lunch. Legacy instrument specifications (particularly those that appear on manufacturer’s data sheets) are always tuned to the strengths and weaknesses of a particular hardware implementation and are always colored by marketing considerations. There is an implicit desire to put one’s best foot forward. The specifications that make it to data sheets are chosen in order to look good to customers and sell the product, not to be a specification for manufacturing the product.

Data sheets are poor sources for quantitative information regarding accuracy. Accuracy is seldom defined in terms of standard uncertainty; rather, it is given as a number with no error quantification, or possibly as an absolute "accuracy" figure. Consequently, any rigorous quantitative meaning is absent.

You should realize, therefore, that legacy instrument specifications are a qualitative historic legacy of deprecated technology, not something carved on stone tablets to be preserved in our culture for thousands of years. They represent a distorted snapshot of what was possible in the past with a particular instrument. From them, you should attempt to glean a rough qualitative idea of what the legacy instrument was capable of, measurement-wise and accuracy-wise, keeping in mind that past and present marketing bias colors everything, and that quantitative statements of accuracy in a data sheet are a contradiction in terms.

From the qualitative understanding you can gain from the study of legacy specifications, in combination with the present day test and measurement goals for the new synthetic instrument, you can develop specific measurement requirements that need to be addressed by a new design. You can then produce a design that seems to fit the need. This design can be analyzed, simulated, and prototyped to determine its quantitative performance in the required test scenario. The loop then closes as better requirements are written and the design is revised. The end result is a new instrument that addresses today’s need.

Developing Stimulus Separate from Response

As I discussed earlier, the one unique redundancy that synthetic instrumentation can eliminate is the response components dedicated to calibrating the stimulus, and the stimulus components dedicated to calibrating the response. By using a system-level optimization, you can readily factor out these sorts of redundancies.

If stimulus and response subsystems are developed in isolation, it becomes impossible to design with the assumption of closure. Redundant response functions must then be added to the stimulus, and vice versa. Cost and complexity go up. In general, this is bad.

However, it must be said that there are worse crimes on this list. Sometimes circumstances dictate that a stimulus system must be used in isolation, or at least that there is a firm requirement for it to have the capability of independent operation. In these situations adding redundant components to close leveling loops and provide calibration signals is simply the cost of meeting the requirement.

Not Combining Measurements

The power of the stimulus-response measurement map view of measurements is that it allows multiple measurements to be combined into a single, fast acquisition of a single map. When the system is allowed to combine measurements, acquiring data takes far less time than when the same measurements are made separately; in some cases, orders of magnitude less time.

Unfortunately, there seems to be a reluctance to combine measurements. This reluctance is a result of the way TPSs are constructed: each measurement is separate in a firmly procedural sequence. In the normal world-view, considering that measurements might be thought of as nonprocedural entities borders on insanity. That one could trust an instrument to combine measurements into a high-speed, optimized map is absurd. Inevitably one must specify each switch to throw and each knob to twist for each measurement, mustn't one? Anything else is madness, isn't it?

No, it isn’t madness. It is a fact borne out in practice that a combined map does the same set of integrated measurements faster and more efficiently than the same measurements can be done as separate tests. The true madness is to ignore the gain in performance this represents and continue to do measurements separately.
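A minimal sketch of the idea, with invented names and signal values: the system acquires one block of data and computes several ordinates from it, rather than making a separate hardware trip per measurement:

```python
import numpy as np

def acquire_block(n=4096, fs=1.0e6):
    """Stand-in for a single hardware acquisition of one map point."""
    t = np.arange(n) / fs
    rng = np.random.default_rng(0)
    return np.sin(2 * np.pi * 10e3 * t) + rng.normal(0.0, 0.01, n)

samples = acquire_block()   # the hardware is visited exactly once

# Three "instruments" realized as computations over the same data; no
# second or third acquisition pass is needed.
results = {
    "dc volts (V)":   float(np.mean(samples)),
    "rms volts (V)":  float(np.sqrt(np.mean(samples**2))),
    "peak volts (V)": float(np.max(np.abs(samples))),
}
for name, value in results.items():
    print(f"{name}: {value:.4f}")
```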

Hardware Modularity as a Distraction

As I have explained, synthetic instruments are not necessarily modular. In fact, the whole idea of running specific measurements on general purpose hardware tends to discourage modular approaches. After all, the point of modularity is to be able to conveniently plug in the specific hardware you need. If you can do all your measurements with the same CCC cascade, no modules need to be swapped. The hardware can just sit there, happily, doing all sorts of different measurements. All the modularity has been swept into software.

Thus, efforts toward encouraging modularity in synthetic instrument designs can be a sort of false god. These modularization efforts can drive the design away from a pure synthetic approach. The easier it is to put in different hardware modules, the less incentive there is to make one particular cascade of hardware do all the measurement tasks.

On the other hand, the practical reality of realizable hardware may dictate that one CCC cascade cannot do everything you need. Consequently, you will need to switch in a new conditioner, or codec, or controller. You might as well modularize the portion replaced, so long as you do so in a manner that doesn't undermine the foundation of your synthetic instrument system.

Bad Lab Procedure

This mistake has been alluded to several times and is a contributor to other mistakes in this list. Even so, it deserves to be listed by itself as it is such a grievous and pernicious error.

Anyone who has taken an introductory college level lab course in any hard science, be it physics or chemistry or biology, has been drilled in proper measurement procedure. You learned how to observe and take data from your observations, again and again. You learned how to use control experiments and other techniques to avoid tainted data because of observer bias and wishful thinking. You learned how to tabulate and statistically analyze data, how to calculate sample mean and variance, perform confidence testing, draw X/Y plots with proper divisions, markings and labeling, and so forth. These are basic metrologic and scientific skills that anyone taking these courses either learned (at least a little) or failed the course.

Why then is evidence of a proper grasp of scientific and metrologic techniques so seldom seen in the operation of modern automated measurement systems?

This problem is not specific to synthetic instruments, but as a relatively immature technology, synthetic instruments show flaws that older technologies have had more time to correct. Thus, it has been my experience that first-generation SI efforts are often blemished with basic lab procedure errors that would have earned someone a "D" in physics lab 101. These are not subtle errors, mind you, but simple things, like not putting units on measurements, not making properly labeled plots, or not doing any rudimentary statistical analysis of results to justify the conclusions being drawn.

All these mistakes fall under the category of bad lab procedure. Despite the fact that modern measurement equipment can allow even the unskilled to make measurements, making good measurements still requires skill. Anybody can make these mistakes, whether they are skilled or not, and whether they use a synthetic instrument or not. However, because synthetic instruments are more automated and more user friendly, they facilitate shortcuts and consequent blunders. Therefore, it behooves the designers and operators of synthetic instruments not to forget the basic lab procedure that undoubtedly earned them an "A" back in college.
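As a reminder of what rudimentary statistical analysis looks like in practice, here is a minimal sketch of reporting an automated result with units and a standard uncertainty; the readings are invented illustrative values:

```python
import numpy as np

# Ten repeated readings of a nominally 5 V source (invented values).
readings_v = np.array([4.982, 5.011, 4.997, 5.003, 4.990,
                       5.008, 4.995, 5.001, 4.987, 5.006])

mean = readings_v.mean()
s = readings_v.std(ddof=1)          # sample standard deviation
u = s / np.sqrt(readings_v.size)    # standard uncertainty of the mean

# Report the value WITH units and uncertainty, not as a bare number.
print(f"V = {mean:.4f} V, u = {u:.4f} V (n = {readings_v.size}, k = 1)")
```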

Fear of Change

Synthetic instruments represent a new way to design instruments. They are different from what came before. They are not a concrete, hardware thing, but rather a software abstraction. The combination of innovation and abstraction loses a lot of people right from the start. They don't get it. They keep looking for the rack-em-stack-em, or the modules, or even the virtual instruments. They need something familiar to latch on to that isn't there.

There is a legend about an early inventor of the quartz watch. Supposedly, he took his invention to various mechanical watchmaking companies trying to sell it to them. They looked at his masterpiece and could not see a wristwatch. Where is the spring? Where is the escapement? How does this thing tell time?

They didn’t like it. It wasn’t a real wristwatch. Clearly, he eventually made his point and today, quartz watches dominate. Mechanical watches represent only a small fraction of the total watch sales.

I can tell numerous anecdotes that are basically identical to this story, but with synthetic instruments playing the part of the quartz watch. When you show people a synthetic measurement system, particularly if the measurement software is designed along OO principles, they look at it and can't see the instruments. They ask, "How does this thing do a test?"

This is one reason that I believe LabVIEW and virtual instruments have been successful: not because the approach is superior, but because you can see the instruments. The virtual instruments have graphical front panels that evoke the feel of a legacy instrument. These are wired together from the "back panel" with the interconnections and their procedural interactions clearly in evidence, at least in the smaller systems that are deceptively used to sell the approach.

Synthetic instrumentation systems aren’t as concrete as virtual instruments, and as such can be a harder sell to certain people despite their numerous advantages. Therefore there is often a tendency to make the mistake of concretizing synthetic instruments to placate those that want to see familiar concrete hardware patterns. Frivolous hardware modularization is a symptom of this disease, as is legacy instrument virtualization on the software side.

The way to avoid this mistake is to focus on the measurements. Express them as maps without any legacy instrumentation context. Think like scientists, metrologists, and statisticians.
