CHAPTER 1

What is a Synthetic Instrument?

Engineers often confuse synthetic measurement systems with other sorts of systems. This confusion isn’t because synthetic instrumentation is an inherently complex concept, or because it’s vaguely defined, but rather because there are lots of companies trying to sell their old nonsynthetic instruments with a synthetic spin.

If all you have to sell are pigs, and people want chickens, gluing feathers on your pigs and taking them to market might seem to be an attractive option to some people. If enough people do this, and feathered pigs, goats, cows, as well as turkeys and pigeons are flooding the market, being sold as if they were chickens, real confusion can arise among city folk regarding what a chicken might actually be.

One of the main purposes of this book is to set the record straight. When you are finished reading it, you should be able to tell a synthetic instrument from a traditional instrument. You will then be an educated consumer. If someone offers you a feathered pig in place of a chicken, you will be able to tell that you are being duped.

History of Automated Measurement

Purveyors of synthetic instrumentation often talk disparagingly about traditional instrumentation. But what exactly are they talking about? Often you will hear a system criticized as “traditional rack-em-stack-em.” What does that mean?

In order to understand what’s being held up for scorn, you need to understand a little about the history of measurement systems.

Genesis

In the beginning, when people wanted to measure things, they grabbed a specific measurement device that was expressly designed for the particular measurement they wanted to make. For example, if they wanted to measure a length, they grabbed a scale, or a tape measure, or a laser range finder and carried it over to where they wanted to measure a length. They used that specific device to make their specific length measurement. Then they walked back, put the device away in its carrying case or other storage, and returned it to the shelf where they originally found it (assuming they were tidy).

If you had a set of measurements to make, you needed a set of matching instruments. Occasionally, instruments did double duty (a chronometer with built-in compass), but fundamentally there was a one-to-one correspondence between the instruments and the measurements made.

That sort of arrangement works fine when you have only a few measurements to make, and you aren’t in a hurry. Under those circumstances, you don’t mind taking the time to learn how to use each sort of specific instrument, and you have ample time to do everything manually, finding, deploying, using, and stowing the instrument.

Figure 1-1 Manual measurements

Things went along like this for many centuries. But then in the 20th century, the pace picked up a lot. The minicomputer was invented, and people started using these inexpensive computers to control measurement devices. Using a computer to make measurements allows measurements to be made faster, and it allows them to be made by someone who might not know much about how to operate the instruments. The knowledge for operating the instruments is encapsulated in software that anybody can run.

With computer-controlled measurement devices, you still needed a separate measurement device for each separate measurement. Fortunately, you didn't necessarily need a different computer for each measurement. Common instrument interface buses, like the IEEE-488 bus, allowed multiple devices to be controlled by a single computer. In those days, computers were still expensive, so it helped matters to economize on the number of computers.

And, obviously, using a computer to control measurement devices through a common bus requires measurement devices that can be controlled by a computer in this manner. An ordinary schoolchild’s ruler cannot be easily controlled by a computer to measure a length. You needed a digitizing caliper or some other sort of length measurement device with a computer interface.

Things went along like this for a few years, but folks quickly got tired of taking all those instruments off the shelf, hooking them up to a computer, running their measurements, and then putting everything away. Sloppy, lazy folks that didn’t put their measurement instruments away tripped over the interconnecting wires. Eventually, somebody came up with the idea of putting all these computer-controlled instruments into one big enclosure, making a measurement system that comprised a set of instruments and a controlling computer mounted in a convenient package. Typically, EIA standard 19” racks were used, and the resulting sorts of systems have been described as “rack-em-stack-em” measurement systems. Smaller systems were also developed with instruments plugged into a common frame using a common computer interface bus, but the concept is identical.

At this point, the people that made measurements were quite happy with the situation. They could have a whole slew of measurements made with the touch of a button. The computer would run all the instruments and record the results. There was little to deploy or stow. In fact, since so many instruments were crammed into these rack-em-stack-em measurement systems, some systems got so big that you needed to carry whatever you were measuring to the measurement system, rather than the other way around. But that suited measurement makers just fine.

On the other hand, the people that paid for these measurement systems (seldom the same people as those using them) were somewhat upset. They didn't like how much money these systems were costing, how much room they took up, how much power they used, and how much heat they threw off. Racking up every conceivable measurement instrument into a huge, integrated system cost a mint, and it was obvious to everyone that there were a lot of duplicated parts in these big racks of instruments.

Modular Instruments

As I mentioned above, there was an alternative kind of measurement system where measurement instruments were put into smaller, plug-in packages that connected to a common bus. This sort of approach is called modular instrumentation. Since this is essentially a packaging concept rather than any sort of architecture paradigm, modular instruments are not necessarily synthetic instrumentation at all. In fact, they usually aren't, but since some of the advantages of modular packaging correspond to advantages of synthetic system design, the two are often confused.

Modular packaging can eliminate redundancy in a way that seems the same as how synthetic instruments eliminate redundancy. Modular instruments are boiled down to their essential measurement-specific components, with nonessential things like front panels, power supplies, and cooling systems shared among several modules.

Modular design saves money in theory. In practice, however, cost savings are often not realized with modular packaging. Anyone attempting to specify a measurement or test system in modular VXI packaging knows that the same instrument in VXI often costs more than an equivalent standalone instrument. This seems absurd given the fact that the modular version has no power supply, no front panel, and no processor. Why this economic absurdity occurs is more of a marketing question than a design optimization paradox, but the fact remains that modular approaches, although the darling of engineers, don’t save as much in the real world as you would expect.

One might be tempted to point at the failure of modular approaches to yield true cost savings and predict the same sort of cost savings failure for synthetic instrumentation. The situation is quite different, however. The modular approach to eliminating redundancy and reducing cost does not go nearly as far as the synthetic instrument approach does. A synthetic instrument design will attempt to eliminate redundancy by providing a common instrument synthesis platform that can synthesize any number of instruments with little or no additional hardware. With a modular design, when you want to add another instrument, you add another measurement specific hardware module. With a synthetic instrument, ideally you add nothing but software to add another instrument.

Synthetic Instruments Defined

Synergy means behavior of whole systems unpredicted by the behavior of their parts taken separately.

—R. Buckminster Fuller[B4]

Fundamental Definitions

Synthetic Measurement System

A synthetic measurement system (SMS) is a system that uses synthetic instruments implemented on a common, general purpose, physical hardware platform to perform a set of specific measurements.

Synthetic Instrument

A synthetic instrument (SI) is a functional mode or personality component of a synthetic measurement system that performs a specific synthesis or analysis function using specific software running on generic, nonspecific physical hardware.

There are several key words in these definitions that need to be emphasized and further amplified.

Synthesis and Analysis

The word “synthetic” in the phrase synthetic instrument might seem to indicate that synthetic instruments are synthesizers, that is, that they do synthesis. This is a mistake. When I say synthetic instrument, I mean that the instrument is being synthesized. I am not implying anything about what the instrument itself does.

A synthetic instrument might indeed be a synthesizer, but it could just as easily be an analyzer, or some hybrid of the two.

I’ve heard people suggest the term “analytic instruments” rather than synthetic instruments in the context of some analysis instrument built with a synthetic architecture, and this isn’t really correct either. Remember, you are synthesizing an instrument; the instrument itself may synthesize something, but that’s another matter.

Generic Hardware

Synthetic instruments are implemented on generic hardware. This is probably the most salient characteristic of a synthetic instrument. It’s also one of the bitterest pills to swallow when adopting an SI approach to measurements. Generic means that the underlying hardware is not explicitly designed to do the particular measurement. Rather, the underlying hardware is explicitly designed to be general purpose. Measurement specificity is encapsulated in software.

An exact analogy to this is the relationship between specific digital circuits and a general-purpose CPU. A specific digital circuit can be designed and hardwired with digital logic parts to perform a specific calculation. Alternatively, a microprocessor (or, better yet, a gate array) could be used to perform the same calculation using appropriate software. One case is specific, the other generic, with the specificity encapsulated in software.
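To make the analogy concrete, here is a minimal Python sketch of my own (purely illustrative; none of these names come from any real instrument API) contrasting a hardwired computation with a generic platform whose behavior is defined entirely by the "personality" loaded into it:

```python
# Hardwired approach: the logic is fixed at "design time" and can
# compute only the one function it was built for.
def hardwired_majority(a: int, b: int, c: int) -> int:
    """A specific circuit: 3-input majority vote, and nothing else."""
    return (a & b) | (b & c) | (a & c)

# Generic approach: the hardware is a lookup-table engine; the
# specificity lives entirely in the "software" (the table we load).
class GenericLogicPlatform:
    def __init__(self):
        self.table = {}

    def load_personality(self, truth_table: dict):
        """Loading a new table synthesizes a new function."""
        self.table = truth_table

    def evaluate(self, *inputs: int) -> int:
        return self.table[inputs]

# Express the majority function purely as data, then load it.
majority_table = {
    (a, b, c): (a & b) | (b & c) | (a & c)
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
}

platform = GenericLogicPlatform()
platform.load_personality(majority_table)
assert platform.evaluate(1, 1, 0) == hardwired_majority(1, 1, 0)
```

The hardwired function can never be anything but a majority vote; the generic platform becomes a majority vote, or anything else, depending on the table it is handed.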

The reason this is such a bitter pill is that it moves many instrument designers out of their hardware comfort zone. The orthodox design approach for instrumentation is to design and optimize the hardware so as to meet all measurement requirements. Specifications for measurement systems reflect this optimized-hardware orientation. Software is relegated to a subordinate role of collecting and managing measurement results, but no fundamental measurement requirements are the responsibility of any software.

Figure 1-2 Digital hardwired logic versus CPU

With a synthetic instrumentation approach, the responsibility for meeting fundamental measurement requirements is distributed between hardware and software. In truth, the measurement requirements are now primarily a system-level requirement, with those high-level requirements driving lower-level requirements. If anything, the result is that more responsibility is given to software to meet detailed measurement requirements. After all, the hardware is generic. As such, although there will be some broad-brush optimization applied to the hardware to make it adequate for the required instrumentation tasks, the ultimate responsibility for implementing detailed instrumentation requirements belongs to software.

Once system planners and designers understand the above point, it gives them a way out of a classic dilemma of test system design. I have seen many first attempts at synthetic instrumentation where this was not understood.

In these misguided efforts, the hardware designers continued to bear most or all of the responsibility for meeting system-level measurement performance requirements. Crucial performance aspects that were best or only achievable in system software were assigned to hardware solutions, with hardware engineers struggling against their own system design and the laws of physics to make something of the impossible hand they had been dealt. Software engineers habitually ignored key measurement performance issues under the invalid assumption that “the hardware does the measurement.” They focused instead on well-known test program set (TPS) issues (configuration management, test executive, database, presentation, user interface (UI), and so forth) that are valid concerns, but which should not have been their only concerns.

One of the goals of this book is to raise awareness of this fact among people contemplating the development of synthetic instrumentation: a synthetic instrument is a system-level concept. As such, it needs a balanced system-level development effort to have any chance of being successful. Don't fall into the trap of turning to hardware as the solution for every measurement problem. Instead, synthesize the solution to the measurement problem using software and hardware together.

Organizations that develop synthetic instruments should make sure that the proper emphasis is placed. System-level goals for synthetic instruments are achieved by software. Therefore, the system designer should have a software skill-set and be intimately involved in the software development. When challenges are encountered during design or development, software solutions should be sought vigorously, with hardware solutions strongly discouraged. If every performance specification shortfall is fixed by a hardware change, you know you have things backward.

Dilemma—All Instruments Created Equal

When the Founding Fathers of the United States wrote into the Declaration of Independence the phrase “all men are created equal,” it was clear to everyone then, and still should be clear to everyone now, that this statement is not literally true. Obviously, there are some tall men and some short men; men differ in all sorts of qualities. Half the citizens to which that phrase refers are actually women.

What the Founding Fathers were doing was establishing a government that would treat all of its citizens as if they were equal. They were perfectly aware of the inequalities between people, but they felt that the government should be designed as if citizens were all equivalent and had equivalent rights. The government should be blind to the inherent and inevitable differences between citizens.

Doubtless, the resources of government are always limited. Some citizens who are extremely unequal to others may find that their rights are altered from the norm. For example, an 8-foot tall man might find some difficulty navigating most buildings, but the government would find it difficult to mandate that doorways all be taller than 8 feet.

Thus, a consequence of the “created equal” mandate is that the needs of extreme minorities are neglected. This is a dilemma. Either one finds that extraordinary amounts of resources are devoted to satisfying these minority needs, which is unfair to the majority, or the needs of the minority are sacrificed to the tyranny of the majority. The endless controversies that result are well known in U.S. history.

You may be wondering where I’m going with this digression on U.S. political thought and why it has any place in a book about synthetic instrumentation. Well, the same sort of political philosophy characterizes the design of synthetic instrumentation and synthetic measurement systems. All instruments are created equal by the fiat of the synthetic instrument design paradigm. That means, from the perspective of the system designer, that the hardware design does not focus on and optimize the specific details of the specific instruments to be implemented. Rather, it considers the big picture and attempts to guarantee that all conceivable instruments can flourish with equal “happiness.”

But we all know that instruments aren’t created equal. As with government, there are inevitable trade-offs in trying to provide a level playing field for all possible instruments. Some types of instruments and measurements require far different resources than others. Attempting to provide for these oddball measurements will skew the generic hardware in such a way that it does a bad job supporting more common measurements.

Here’s an example. Suppose there is a need for a general-purpose test and measurement system that would be able to test any of a large number of different items of some general class, and determine if they work. An example of this would be something like a battery tester. You plug your questionable battery into the tester, push a button, and a green light illuminates (or meter deflects to the green zone) if the battery is good, or red if bad.

But suppose that it was necessary to test specialized batteries, like car batteries, or high power computer UPS batteries, or tiny hearing aid batteries. Nothing in a typical consumer battery tester does a good job of this. To legitimately test big batteries you would want to have a high-power load, cables thick enough to handle the current, and so on. Small batteries need tiny connectors and sockets that fit their various shapes. Adding the necessary parts to make these tests would drive up the cost, size, and other aspects of the tester.

Thus, there seems to be an inherent compromise in the design of a generic test instrument. The dilemma is either to accept inflated costs to provide a foundation for rarely needed, oddball tests, or to drop support for those tests, sacrificing the ability to address all test needs.

Fortunately, synthetic instrumentation provides a way to break out of this dilemma to some degree—a far better way than traditional instrumentation provided. In a synthetic instrumentation system, there is always the potential to satisfy a specific, oddball measurement need with software. Although software always has costs (both nonrecurring and recurring), it is most often the case that handling a minority need with software is easier to achieve than it is with hardware.

A good, general example of this is how digital signal processing (DSP) can be applied in post-processing to synthesize a measurement that would normally be done in real time with hardware. A specific case would be demodulating some form of modulated signal. Rather than include a hardware demodulator in order to perform some measurement on the modulated data, DSP can be applied to the raw data to demodulate in post-processing. In this way, a minority need is addressed without adding specialized hardware, as the sketch below illustrates.
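Here is a minimal numpy sketch of the idea, assuming an AM capture from a generic digitizer; the sample rate, carrier frequency, and filter length are arbitrary illustrative choices of mine:

```python
import numpy as np

# Simulate "raw data" captured by a generic digitizer: an AM carrier.
fs = 100_000.0                                   # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)
message = 0.5 * np.sin(2 * np.pi * 200 * t)      # 200 Hz message
carrier = np.cos(2 * np.pi * 10_000 * t)         # 10 kHz carrier
raw = (1.0 + message) * carrier

# Post-processing demodulation: mix down to baseband with a complex
# exponential, then low-pass filter with a simple moving average.
baseband = raw * np.exp(-2j * np.pi * 10_000 * t)
taps = 101
lowpass = np.convolve(baseband, np.ones(taps) / taps, mode="same")
envelope = 2 * np.abs(lowpass) - 1.0             # recovered message

# 'envelope' now approximates 'message', and no demodulator hardware
# was added to the system to get it.
```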

Continuing with this example, if it turns out that DSP post-processing does not have sufficient performance to achieve the goal of the measurement, one option is to upgrade the controller portion of the control, codec, conditioning (CCC) instrument. Maybe then the DSP will run adequately. Yes, the hardware is now altered for the benefit of a single test, but not by adding hardware specific to that test. This is one of my central points. As I will discuss in detail later on, I believe it is a mistake to add hardware specific to a particular test.

Advantages of Synthetic Instruments

No one would design synthetic instruments unless there was an advantage: above all, a cost advantage. In fact, there are several advantages that allow synthetic instruments to be more cost effective than their nonsynthetic competitors.

Eliminating Redundancy

Ordinary rack-em-stack-em instrumentation contains repeated components. Every measurement box contains a slew of parts that also appear in every other measurement box. Typical repeated parts include:

- Power Supply
- Front Panel Controls
- Computer Interfaces
- Computer Controllers
- Calibration Standards
- Mechanical Enclosures
- Interfaces
- Signal Processing

A fundamental advantage of a synthetic approach to measurement system design is that adding a new measurement does not imply that you need to add another measurement box. If you do add hardware, the hardware comes from a menu of generic modules. Any specificity tends to be restricted to the signal conditioning needed by the sensor or effector being used.

Stimulus Response Closure: The Calibration Problem

Many of the redundancies eliminated by synthetic instrumentation are the same as those eliminated by modular instrument approaches. However, one significant redundancy that synthetic instruments are uniquely able to eliminate is the response components embedded in stimulus instruments, and the stimulus components embedded in response instruments. I call this efficiency closure. I will show, however, that this sort of redundancy elimination, while facilitated by synthetic approaches, owes more to system-level optimization than to instrument-level optimization.

A signal generator (a box that generates an AC sine wave at some frequency and amplitude) is a typical stimulus instrument that you may encounter in a test system. When a signal generator creates the stimulus signal, it must do so at a known, calibrated signal level. Most signal generators achieve this by a process called internal leveling. The way internal leveling is implemented is to build a little response measurement system inside the signal generator. The level of the generator is then adjusted in a feedback loop so as to set the level to a known, calibrated point.
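The leveling idea is easy to see in a little simulation. The following Python sketch is a toy model of my own, not any real generator's firmware: a modeled detector measures the output, and a proportional feedback loop trims the drive until the calibrated target level is reached.

```python
# Toy model of a signal generator's internal leveling loop.
TARGET_DBM = 0.0            # requested, calibrated output level
GAIN_ERROR = 0.8            # unknown analog gain the loop must correct

def detector(drive):
    """Modeled response-measurement component inside the generator."""
    return drive * GAIN_ERROR            # measured output (linear units)

target = 10 ** (TARGET_DBM / 20)         # target level in linear units
drive = 1.0                              # initial drive setting
for _ in range(100):                     # iterate the feedback loop
    error = target - detector(drive)
    if abs(error) < 1e-6:
        break
    drive += 0.5 * error                 # proportional correction

# The loop converges so that detector(drive) is very nearly the
# calibrated target: a response measurement buried inside a stimulus
# instrument, which is exactly the redundancy discussed here.
```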

As you can see in Figure 1-3, this stimulus instrument comprises not only stimulus components, but also response measurement components. It may be the case that elsewhere in the overall system, those response components needed internally in the signal generator are duplicated in some other instruments. Those components may even be the primary function of what might be considered a “true” response instrument. If so, the response function in the signal generator is redundant.

Figure 1-3 Signal-generator leveling loop

Naturally, this sort of redundancy is a true waste only in an integrated measurement system with the freedom to use available functions in whatever manner desired, combining as needed. A signal generator has to work standalone, and so must carry a redundant response system within itself. Even a synthetic signal generator designed for standalone modular VXI or PXI use must have this response measurement redundancy within.

Therefore, it would certainly be possible to look at a system comprising a set of nonsynthetic instruments and to optimize away stimulus response redundancy. That would be possible, but it's difficult to do in practice. The reason it is difficult is that nonsynthetic instruments tend to be specific in their stimulus and response functions. It's difficult to match up functions and factor them out.

In contrast, when one looks at a system designed with synthetic stimulus and response components, the chance of finding duplicate functions is much higher. If synthetic functions are all designed with the same signal conditioner, converter, DSP subsystem cascade, then a response system provided in a stimulus instrument will have the same exact architecture as one provided in a response instrument. The duplications factor out directly.

Measurement Integration

One of the most powerful concepts associated with synthetic instrumentation is the concept of measurement integration.

Fundamental Definitions

Measurement Integration

Combining disparate measurements into a single measurement map.

From my discussion of a measurement map in the section titled “Abscissas and Ordinates,” you will learn how to describe a measurement in such a way that encourages measurement integration. When you specify a list of ordinates and abscissas, and state how the abscissas are sequenced, you have effectively packaged a bunch of measurements into a tidy bundle. This is measurement integration in its purest sense.
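To make the idea tangible, here is a hypothetical Python sketch of a measurement map; the class and field names are mine, invented for illustration:

```python
from dataclasses import dataclass, field
import itertools

@dataclass
class MeasurementMap:
    abscissas: dict            # name -> list of points to visit
    ordinates: list            # names of quantities to record
    results: dict = field(default_factory=dict)

    def run(self, measure):
        """Visit every abscissa combination once, recording all
        ordinates together: the integrated-measurement bundle."""
        names = list(self.abscissas)
        for point in itertools.product(*self.abscissas.values()):
            setting = dict(zip(names, point))
            self.results[point] = measure(setting, self.ordinates)

# Example: one integrated map replaces separate gain and noise tests.
def fake_measure(setting, ordinates):
    return {o: 0.0 for o in ordinates}     # stand-in for real hardware

m = MeasurementMap(
    abscissas={"freq_hz": [1e6, 2e6, 5e6], "power_dbm": [-10, 0]},
    ordinates=["gain_db", "noise_figure_db"],
)
m.run(fake_measure)        # six points, both ordinates at each point
```

Sweeping the abscissas once while recording every ordinate at each point is the "tidy bundle" described above.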

Measurement integration is important because it allows you to get the most out of the data you take. The data set is seen as an integrated whole that is analyzed, categorized, and visualized in whatever way makes the most sense for the given test. This is in contrast with the more prevalent approach to test, where a separate measurement is performed sequentially for each test, with no intertest communication (beyond basic prerequisites and an occasional parameter). The result of this redundancy is slow testing and ambiguity in the results.

Measurement Speed

Synthetic instruments are unquestionably faster than ordinary instruments. There are many reasons for this fact, but the principal reason is that a synthetic instrument does a measurement that is exactly tuned to the needs of the test being performed. Nothing more, nothing less. It does exactly the measurement that the test engineer wants.

In contrast, ordinary instruments are designed to perform a certain kind of measurement, but the way they do it may not be optimized for the task at hand. The test engineer is stuck with what the ordinary instrument has in its bag of tricks.

For example, there is a speed-accuracy trade-off on most measurements. If an instrument doesn’t know exactly how much accuracy you need for a given test, it needs to take whatever time it takes for the maximum accuracy you might ever want. Consequently, you get the slowest measurement. It is true that many conventional instruments that make measurements with a severe speed-accuracy trade-off often have provision to specify a preference (e.g., a frequency counter that allows you to specify the count time and/or digits of precision), but the test engineer is always locked into the menu of compromises that the instrument maker anticipated.
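To see why this matters, consider a back-of-the-envelope model (a sketch of my own, not any particular instrument's behavior): if each raw reading carries noise sigma, averaging N readings improves accuracy by a factor of sqrt(N), so measurement time grows with the square of the accuracy demanded.

```python
import math

def readings_needed(sigma: float, tolerance: float) -> int:
    """Smallest N such that sigma / sqrt(N) <= tolerance."""
    return math.ceil((sigma / tolerance) ** 2)

sigma = 1e-3               # noise of one raw reading, volts
t_reading = 100e-6         # seconds per raw reading

for tol in (1e-3, 1e-4, 1e-5):
    n = readings_needed(sigma, tol)
    print(f"tolerance {tol:g} V -> {n} readings, {n * t_reading:g} s")
# Each 10x tighter tolerance costs 100x more measurement time, which
# is why an instrument that always assumes the tightest tolerance is
# always the slowest.
```

A synthetic instrument can solve for the N a given test actually needs, instead of defaulting to the worst case or to a fixed menu of compromises.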

Another big reason why synthetic instrumentation makes faster measurements is that the most efficient measurement techniques and algorithms can be used. Consider, for example, a common swept filter spectrum analyzer. This is a slow instrument when fine frequency resolution is required simultaneously with a wide span. In contrast, a synthetic spectrum analyzer based on fast Fourier transform (FFT) processing will not suffer a slowdown in this situation.
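The following numpy sketch shows the gist (an illustrative toy, not a production analyzer): one FFT over a single capture yields fine resolution across the whole span at once, with no sweeping.

```python
import numpy as np

fs = 1e6                    # sample rate, Hz
n = 2 ** 16                 # record length: resolution = fs/n ~ 15 Hz
t = np.arange(n) / fs
signal = (np.sin(2 * np.pi * 100e3 * t)
          + 0.01 * np.sin(2 * np.pi * 250e3 * t))   # -40 dBc spur

window = np.hanning(n)                    # reduce spectral leakage
spectrum = np.fft.rfft(signal * window)
# Scale so an amplitude-1.0 sine reads ~0 dB (Hann coherent gain 0.5).
power_db = 20 * np.log10(np.abs(spectrum) / (n / 4) + 1e-12)
freqs = np.fft.rfftfreq(n, 1 / fs)

peak = freqs[np.argmax(power_db)]         # ~100 kHz, found in one shot
```

Every bin across the 500 kHz span, at roughly 15 Hz resolution, comes out of the one capture; a swept-filter analyzer would have to crawl across that span at a rate limited by its resolution bandwidth.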

Decreased time to switch between measurements is another noteworthy speed advantage of synthetic instrumentation. This ability goes hand-in-hand with measurement integration. When you can combine several different measurements into one, eliminating both the intermeasurement setup times and the redundancies between sets of overlapping measurements, you can see surprising speed increases.

Longer Service Life

Synthetic measurement systems don’t become obsolete as quickly as less flexible dedicated measurement hardware systems. The reason for this fact is quite evident: Synthetic measurement systems can be reprogrammed to do both future measurements, not yet imagined, at the same time as they can perform legacy measurements done with older systems. Synthetic measurement systems give you your cake and allow you to eat it too, at least in terms of nourishing a wide variety of past, present, and future measurements.

In fact, one of the biggest reasons the U.S. military is so interested in synthetic measurement systems is this unique ability to support the old and the new with one, unchanging system. Millions of man-hours are invested in legacy test programs that expect to run on specific hardware. That specific hardware becomes obsolete. Now what do we do?

Rather than dumping everything and starting over, a better idea is to develop a synthetic measurement system that can implement synthetic instruments that do the same measurements as the hardware instruments from the old system, while at the same time being able to do new measurements. Best of all, the new SMS hardware is generic to the measurements. That means it can go obsolete piecemeal or in great chunks, and the resulting hardware changes (if done right) don't affect the measurements significantly.

Synthetic Instrument Misconceptions

Now that you understand what a synthetic instrument is, let’s tackle some of the common misconceptions surrounding this technology.

Why not Just Measure Volts with a Voltmeter?

The main goal of synthetic instrumentation is to achieve instrument integration through the use of multipurpose stimulus/response synthesis/analysis hardware. Although there may be nonsynthetic, commercial off-the-shelf (COTS) solutions to various requirements, we intentionally eschew them in favor of doing everything with a synthetic CCC approach.

It should be obvious that a COTS, measurement-specific instrument that has been in production for many years, and has gone through myriad optimizations, reviews, updates, and improvements, will probably do a better job of measuring the thing it was designed to measure than a first-revision synthetic instrument.

However, as the synthetic instrument is refined, there comes a day when the performance of the synthetic instrument rivals or even surpasses the performance of the legacy, single-measurement instrument. The reason this is possible is that the synthetic instrument can be continuously improved, with better and better measurement techniques incorporated, even completely new approaches. The traditional instrument can't do that.

Synthetic Musical Instruments

This book is about synthetic measurement instruments, but the concept is not far from that of a synthetic musical instrument. Musical instrument synthesizers generate sound-alike versions of many classic instruments by means of generic synthesis hardware. In fact, the quality of the synthesis in synthetic musical instruments now rivals, and in some cases surpasses, the musical-aesthetic quality of the best classic mechanical musical instruments. Modern musical synthesis systems can also accurately imitate the flaws and imperfections of traditional instruments. This situation is exactly analogous to the eventual goal of synthetic instruments: that they will rival and surpass, as well as imitate, classic dedicated hardware instruments.

Virtual Instruments

In the section titled “History of Automated Measurement,” I described automated, rack-em-stack-em systems. People liked these systems, but they were too big and pricey. As a consequence, modular approaches were developed that reduced size and presumably cost by eliminating redundancy in the design. These modular packaging approaches had an undesirable side effect: they made the instrument front panels tiny and crowded. Anybody that used modular plug-in instruments in the 1970s and 1980s knows how crowded some of these modular instrument front panels got.

Figure 1-4 Crowded front panel on Tektronix Spectrum analyzer

It occurred to designers that if the instrument could be fully controlled by computer, there might be no need for a crowded front panel. Instead, a soft front panel could be provided on a nearby PC that would serve as a way for a human to interact with the instrument. Thus, the concept of a virtual instrument appeared. Virtual instruments were actually conventional instruments implemented with a pure, computer-based user interface.

Certain software technologies, like National Instruments’ LabVIEW product, facilitated the development of virtual instruments of this sort. The very name “virtual instrument” is deeply entwined with LabVIEW. In a sense, LabVIEW defines what a virtual instrument was, and is.

Synthetic instruments running on generic hardware differ radically from ordinary instrumentation, where the hardware is specific to the measurement. Therefore, synthetic instruments also differ fundamentally from virtual instruments where, again, the hardware is specific to a measurement. In this latter case, however, the difference is more disguised since a virtual instrument block diagram might look similar to a synthetic instrument block diagram. Some might call this a purely semantic distinction, but in fact, the two are quite different.

Virtual instruments are a different beast than synthetic instruments because virtual instrument software mirrors and augments the hardware, providing a soft front panel, or managing the data flow to and from a conventional instrument in a rack, but does not start by creating or synthesizing something new from generic hardware.

This is the essential point: synthetic instruments are synthesized. The whole is greater than the sum of the parts. To use Buckminster Fuller’s word, synthetic instruments are synergistic instruments[B4]. Just as a triangle is more than three lines, synthetic instruments are more than the triangle of hardware (control, codec, conditioning) they are implemented on.

Therefore, one way to tell if you have a true synthetic instrument is to examine the hardware design alone and to try to figure out what sort of instrument it might be. If all you can determine are basic facts, like the fact that it’s a stimulus or response instrument, or like the fact that it might do something with signals on optical fiber, but not anything about what it’s particularly designed to create or measure—if the measurement specificity is all hidden in software—then you likely have a synthetic instrument.

I mentioned National Instruments' LabVIEW product earlier in the context of virtual instruments. The capabilities of LabVIEW are tuned more toward an instrument stance than a measurement stance (at least at the time of this writing), and therefore do not currently lend themselves as effectively to the types of abstractions necessary to make flexible synthetic instrumentation as do other software tools. In addition, LabVIEW's non-object-oriented approach to programming prevents the application of powerful object-oriented (OO) benefits like efficient software reuse. Since OO techniques work well with synthetic instrumentation, LabVIEW's shortcoming in this regard represents a significant limitation.

That said, there’s no reason that LabVIEW can’t be used to as a tool for creating and manipulating synthetic instruments, at some level. Just because LabVIEW is currently tuned to be a non-OO virtual instrument tool, doesn’t mean that it can’t be a SI tool to some extent. Also, it should be noted that the C++-based LabWindows environment doesn’t share as many limitations as the non-OO LabVIEW tools.

Analog Instruments

One common misconception about synthetic instruments is that they can only be analog measuring instruments; that is to say, they are not appropriate for digital measurements. The reasoning goes that because the system meets the world through a digitizer, everything it touches is an analog waveform, and what results is only useful for analog measurements.

Nothing could be further from the truth. All good digital hardware engineers know that digital circuitry is no less “analog” than analog circuits. Digital signaling waveforms, ideally thought of as fixed 1 and 0 logic levels, are anything but fixed: they vary, they ring, they droop, they are obscured by glitches, spurs, hum, noise, and other garbage.

Performing measurements on digital systems is a fundamentally analog endeavor. As such, synthetic instrumentation implemented with a CCC hardware architecture is equally appropriate for digital test and analog test.

There is, without doubt, a major difference between the sorts of instruments that are synthesized to address digital versus analog measurement needs. Digital systems often require many more simultaneous channels of stimulus and response measurement than do analog systems. But bandwidths, voltage ranges, and even interfacing requirements are similar enough in enough cases to make the unified synthetic approach useful for testing both kinds of systems with the same hardware asset.

Another difference between analog- and digital-oriented synthetic measurement systems is the signal conditioning used. In situations where only the data is of interest, rather than the voltage waveform itself, the best choice of signal conditioner may be nonlinear: digital-style line drivers and receivers. Digital drivers give better digital waveforms per dollar than linear drivers do. Similarly, when implementing many channels of response measurement, a digital receiver will be far less expensive than a linear response asset.
