Before any rocket propulsion systems are put into operational use, they are subjected to several different types of tests, many of which are outlined below in the approximate sequence in which they are normally performed.
Each of these five types of tests can be performed on at least three basic program types:
The first two program types are usually concerned with novel devices and often involve the testing and measurement of new concepts or conditions using experimental rocket propulsion systems. Examples here include the testing of a new solid propellant grain, the development of a novel control valve assembly, and the measurement of the thermal expansion of a nozzle exhaust cone during a firing.
Production tests are concerned with measurements of a few basic parameters on propulsion systems to assure that their performance, reliability, and operation are within specified tolerance limits. If the number of units is large, test equipment and instrumentation used for these tests are usually partly or fully automated and designed to permit the testing, measurement, recording, and evaluation in a minimum amount of time.
During early development phases of a program, many special or unusual tests are performed on components and on complete rocket systems to prove specific design features and performance characteristics. Special facilities and instrumentation, or modifications of existing test equipment, are used. Examples are large centrifuges to operate propulsion systems under acceleration, or variable‐frequency vibration equipment for shaking complete systems and/or components. During the second type of program, special tests are usually conducted to establish statistically the performance and reliability of a rocket system by operating a number of units of the same design. During this phase, tests are also made to demonstrate the ability of the rocket system to withstand the limits of extreme operating conditions, such as high and low ambient temperatures, variations in fuel composition, changes in the vibration environment, or exposure to moisture, rain, vacuum, or rough handling during storage or transport. To demonstrate safety, intentional malfunctions, spurious signals, or known manufacturing flaws are sometimes introduced into the propulsion system to determine the capability of its control system or safety devices to handle and prevent potential failures.
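The statistical demonstration of reliability mentioned above is often related to the number of consecutive successful firings by the success‐run theorem. The sketch below is an illustration of that relationship only (the function names are ours, and it assumes zero failures are permitted), not part of any actual test specification:

```python
import math

def min_tests_for_reliability(reliability, confidence):
    """Success-run theorem with zero failures: smallest number n of
    consecutive successful firings such that the lower confidence bound
    (1 - confidence)**(1/n) meets the reliability target; rearranged,
    n >= ln(1 - confidence) / ln(reliability)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

def demonstrated_reliability(n_successes, confidence):
    """Lower confidence bound on reliability after n consecutive
    successful firings with no failures."""
    return (1.0 - confidence) ** (1.0 / n_successes)
```

For example, demonstrating 95% reliability at 90% confidence with no failures would require 45 consecutive successful firings, which suggests why full statistical demonstration by firing many units is expensive.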
Before an experimental rocket system can be flown in a vehicle, it has to pass a set of preliminary flight rating tests aimed at demonstrating its safety, reliability, and performance. This is not a single test, but a series of tests under various specified conditions, operating limits, performance tolerances, simulated environments, and intentional malfunctions. Only thereafter may the rocket system be used in experimental flights. However, before they can be put into production, rocket systems have to pass another series of tests under a variety of rigorous specified conditions, known as the qualification test or preproduction test. Once a particular propulsion system has been qualified, that is, has passed such a qualification test, it is usually forbidden to make any changes in design, fabrication, or materials without going through a careful review, extensive documentation, and often also a partial or full set of requalification tests.
The amount and expense of component and complete propulsion system testing has greatly decreased in the last few decades. The reasons include more experience with prior similar systems, less hardware development, better sensors, better analytical model simulations, better prediction of system parameters, use of health monitoring systems to identify incipient damage (which saves hardware), and more confidence in predicting certain failure modes and their locations. Validated computer software has removed many uncertainties and obviated the need for some tests. In some applications the number of firing tests has decreased by a factor of 10 or more.
For chemical rocket propulsion systems, each test facility usually has the following major systems or components:
Most rocket propulsion testing is accomplished in sophisticated facilities under closely controlled conditions. Modern rocket propulsion test facilities are frequently located several miles away from the nearest community to prevent or minimize the effects of excessive noise, vibration, explosions, and toxic exhaust clouds. Figure 21–1 shows one type of open‐air test stand for vertically down‐firing large liquid propellant thrust chambers or medium‐sized SPRMs or LPREs (100,000 to 2 million pounds thrust). It is best to fire the propulsion system in a direction (vertical or horizontal) similar to the actual flight condition. Figure 21–2 shows a simulated altitude test facility for rockets of about 10.5 metric tons thrust force (46,000 lbf). It requires a vacuum chamber in which to mount the engine, a set of steam ejectors to create the vacuum, water sprays to reduce the gas temperature, and a cooled diffuser. With the flow magnitudes of chemical rocket propellant combustion gases, however, it is impossible to maintain a high vacuum in these kinds of facilities; typically, between 4 and 15 torr (equivalent to 20 to 35 km altitude) can be maintained. This type of test facility allows the operation of rocket propulsion systems with high nozzle area ratios that would normally experience flow separation at sea‐level ambient pressure.
Prior to performing any test, it is common practice to educate the test crew by going through repeated dry runs to familiarize each person with his or her responsibilities and procedures, including all emergency procedures.
Typical personnel and plant security and/or safety provisions in a modern test facility include the following:
Open‐air testing of chemical rocket propulsion systems frequently requires measurement and control of exhaust cloud concentrations and gas movement in the surrounding areas to safeguard personnel and the environment. Most test and launch facilities have several stations (both inside and outside the facilities) for collecting and measuring air samples before, during, and after testing. Toxic clouds of exhaust gases (some with particulates) can result from normal rocket operation, from vapors or reaction gases released by unintentional propellant spills, from fires or explosions, or from the intentional destruction of vehicles in flight or of rockets on the launch stand. Government environmental regulations usually limit the maximum local concentration or the total quantity of toxic gas or particulates that can be released to the atmosphere. The toxic nature of some of these liquids, vapors, and gases has been mentioned in Chapters 7 and 13. One method of control is to postpone tests that discharge moderately toxic gases or products until favorable weather and wind conditions are present.
Extensive analytical studies and measurements of the environmental exposure from explosions, industrial smoke, and exhausts from missile and space vehicle launchings provide useful background for predicting the atmospheric diffusion and downwind concentrations of rocket exhaust clouds. In ground‐test analyses, the toxic cloud source is modeled as a point source and in flight tests as a ribbon source. Diffusion rates of exhaust clouds are influenced by many propulsion system variables, including propellant types, vehicle size, exhaust gas temperatures, and thrust duration; also by many atmospheric variables, including wind velocity, direction, turbulence, humidity, vertical stability or lapse rate (see definition below), and by the surrounding terrain. Reference 21–2 describes hazards related to toxic gas cloud concentrations and dispersals. Reference 21–3 evaluates the environmental impact of rocket exhausts from large units on the ozone layer in the stratosphere as well as on the ground weather near the test site (it concludes that the impacts are generally small and temporary). Reference 21–4 describes a test area atmospheric measuring network.
A widely used relationship for predicting atmospheric diffusion of gas clouds has been formulated by O. G. Sutton (Ref. 21–5). Many modern equations and models relating to downwind concentrations of toxic clouds are extensions of O. G. Sutton's theory. Given below are equations from Ref. 21–5 of primary interest to rocket and missile operators.
For instantaneous ground‐level point‐source nonisotropic conditions,

χ = {2Q/[π^(3/2) Cx Cy Cz (ūt)^(3(2−n)/2)]} exp{−(ūt)^(n−2) [(x − ūt)²/Cx² + y²/Cy² + z²/Cz²]}
For continuous ground‐level point‐source nonisotropic conditions,

χ = {2Q/[π Cy Cz ū x^(2−n)]} exp[−x^(n−2) (y²/Cy² + z²/Cz²)]
where χ is concentration in grams per cubic meter, Q is source strength (grams for the instantaneous case, grams per second for the continuous case); Cx, Cy, and Cz are diffusion coefficients in the x, y, and z directions, respectively; ū is average wind velocity in meters per second, t is time in seconds, and the coordinates x, y, z are in meters measured from the center of the moving cloud in the instantaneous case and from a ground point beneath the plume axis in the continuous case. The exponent n is a stability or turbulence coefficient, ranging from almost zero for highly turbulent conditions to 1 as a limit for extremely stable conditions, and usually falling between 0.10 and 0.50.
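As an illustrative numerical sketch (not from the original text), the continuous ground‐level point‐source relation can be evaluated directly; the function name and any sample values of Q, ū, Cy, Cz, and n are hypothetical:

```python
import math

def sutton_continuous_concentration(Q, u, x, y, z, Cy, Cz, n):
    """Downwind concentration (g/m^3) from a continuous ground-level
    point source, per Sutton's relation:
      chi = 2Q / (pi * Cy * Cz * u * x**(2 - n))
            * exp(-x**(n - 2) * (y**2 / Cy**2 + z**2 / Cz**2))
    Q in g/s, u (average wind speed) in m/s, coordinates in meters
    measured from a ground point beneath the plume axis, and n the
    stability coefficient between 0 and 1."""
    prefactor = 2.0 * Q / (math.pi * Cy * Cz * u * x ** (2.0 - n))
    return prefactor * math.exp(-(x ** (n - 2.0))
                                * (y ** 2 / Cy ** 2 + z ** 2 / Cz ** 2))
```

Concentration is highest on the plume centerline (y = z = 0) and falls off both downwind and crosswind, consistent with the qualitative behavior described above.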
A few basic definitions relevant to the study of atmospheric diffusion of exhaust clouds are as follows:
The following are some general rules and observations derived from experience with atmospheric diffusion of rocket exhaust clouds:
Interpretation of the hazards that exist once the concentration of a toxic agent is known also requires knowledge of its effects on the human body and the environment. Government tolerance limits for humans are given in Chapter 7 and in Ref. 7–5. There are usually three limits of interest: one for the short‐time exposure of the general public, one for an 8‐hr exposure limit of workers, and one for concentrations requiring evacuation. Depending on the toxic chemical, the 8‐hr limit may vary from 5000 ppm for a gas such as carbon dioxide to less than 1 ppm for more toxic substances such as fluorine. Human poisoning by rocket exhaust products usually occurs through inhalation of gas and fine solid particles, but solid residuals that may remain around a test facility for weeks or months following a test firing can also enter the body through cuts and other avenues. Moreover, certain liquid propellants may cause burns and skin rashes or are poisonous when ingested, as explained in Chapter 7.
In the last few decades, considerable progress has been made in instrumentation and data management. For further study the reader is referred to standard textbooks on instruments and computers used in testing, such as Ref. 21–6. Some of the physical quantities commonly measured during rocket propulsion testing are:
During any one test of a production RPS, only some of the 11 measurements listed above are made and then automatically compared (by computer) with the same measurements on a prior identical rocket propulsion system that performed properly in previous tests. Any deviation between the two that is identified by the computer is usually investigated; if appropriate, adjustments or hardware modifications are then made to bring the rocket propulsion system being tested to the desired performance.
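The automated comparison against a known‐good prior unit can be sketched as follows; the channel names, values, and tolerance bands are hypothetical, and real production test software is considerably more elaborate:

```python
def compare_to_reference(measured, reference, tolerance_pct):
    """Flag channels whose measured value deviates from the reference
    unit's value by more than the allowed percentage; returns a dict of
    out-of-tolerance channels and their percent deviations."""
    out_of_tolerance = {}
    for channel, ref_value in reference.items():
        deviation = 100.0 * abs(measured[channel] - ref_value) / abs(ref_value)
        if deviation > tolerance_pct[channel]:
            out_of_tolerance[channel] = round(deviation, 2)
    return out_of_tolerance
```

Any channel the comparison flags would then be investigated before the unit is accepted, adjusted, or reworked.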
Reference 21–8 describes some specialized diagnostic techniques used in propulsion systems, such as using nonintrusive optical methods, microwaves, and ultrasound for measurements of temperatures, velocities, particle sizes, or burn rates in solid propellant grains. Many of these sensors incorporate specialized technologies and, often, unique software. Each measured parameter can be obtained with different types of instruments, sensors, and analyzers, as indicated in Ref. 21–9.
Each measurement or measuring system may require one or more sensing elements (often called transducers or pickups), a device for recording, displaying, and/or indicating the sensed information, and often also another device for conditioning, amplifying, correcting, or transforming the sensed signal into a form suitable for recording, indicating, display, or analysis. Recording of rocket propulsion test data has been performed in several ways, such as on chart recorders or in digital form on memory devices such as magnetic tapes or disks. Definitions of several significant terms are given below and in Ref. 21–6.
Range refers to a region extending from the minimum to the maximum rated value over which the measurement system will give a true and linear response. Usually an additional margin is provided to permit temporary overloads without damage to the instrument or need for recalibration.
Errors in measurements are usually of two types: (1) human errors (improperly reading or incorrectly adjusting the instrument, chart, or record and improperly interpreting or correcting these data), and (2) instrument or system errors, which usually fall into four classifications: static errors, dynamic response errors, drift errors, and hysteresis errors (see Refs. 21–6 and 21–5). Static errors are usually fixed errors due to fabrication and installation variations; these errors can usually only be detected by careful calibration, and an appropriate correction can then be applied to the reading. Drift error is the change in output over a period of time, usually caused by random wander and environmental conditions. To avoid drift error the measuring system has to be calibrated at frequent intervals, at standard environmental conditions, against known standard reference values over its whole range. Dynamic response errors occur when the measuring system fails to register the true value of the measured quantity while this quantity is changing, particularly when it is changing rapidly. For example, thrust force has an embedded dynamic component due to vibrations, combustion oscillations, interactions with the support structure, and the like. These dynamic changes can distort or amplify the thrust reading unless the test stand structure, the rocket mounting structure, and the thrust measuring and recording system are properly designed to avoid any harmonic excitation or excessive energy damping. Obtaining good dynamic response requires careful analysis and design of the total system.
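Static‐error calibration of the kind described above amounts to fitting the instrument's raw output against known reference standards. A minimal least‐squares sketch (pure Python, illustrative only; real calibrations often use many more points and check for nonlinearity) is:

```python
def fit_calibration(raw_readings, reference_values):
    """Least-squares straight line mapping raw instrument output to known
    reference standard values, via the normal equations; returns (gain,
    offset) so that corrected = gain * raw + offset."""
    n = len(raw_readings)
    sx = sum(raw_readings)
    sy = sum(reference_values)
    sxx = sum(r * r for r in raw_readings)
    sxy = sum(r * v for r, v in zip(raw_readings, reference_values))
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - gain * sx) / n
    return gain, offset
```

Repeating such a fit at frequent intervals against the same standards is also how drift error is detected and removed.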
The maximum frequency response refers to the highest frequency at which the instrument system will measure true values. The natural frequency of the measuring system should be above the limiting response frequency. Generally, a high‐frequency response requires more complex and expensive instrumentation. The instrument system (sensing elements, modulators, and recorders) must be capable of fast response. Most measurements in rocket testing fall into one of two types: those made under nearly steady static conditions, where only relatively gradual changes in the quantities occur, and those made under fast transient conditions, such as rocket starting, stopping, or vibrations (see Ref. 21–11). Instruments for this latter type have frequency responses above 200 Hz, sometimes as high as 20,000 Hz. Such fast measurements are necessary to evaluate physical phenomena during rapid transients.
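A small sketch of the sampling implications follows; the 2.5× anti‐aliasing margin is an assumption of this example (the theoretical Nyquist minimum is 2×), and the function is illustrative only:

```python
def required_sample_rate(max_signal_hz, system_natural_hz, margin=2.5):
    """Check that the measuring system's natural frequency lies above the
    highest signal frequency of interest, then return a digital sampling
    rate with a practical anti-aliasing margin above the Nyquist minimum."""
    if system_natural_hz <= max_signal_hz:
        raise ValueError("measuring system natural frequency too low")
    return margin * max_signal_hz
```

For a 20,000-Hz transient this sketch would call for sampling at 50,000 samples per second, provided the measuring chain itself responds faster than the signal.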
Linearity of the instrument refers to the ratio of the input (usually pressure, temperature, force, etc.) to the output (usually voltage, output display change, etc.) over the range of the instrument. Very often static calibration errors indicate deviations from a truly linear response; a nonlinear response can cause appreciable errors in dynamic measurements. Resolution refers to the minimum change in the measured quantity that can be detected with a given instrument. Dead zone or hysteresis errors are often caused by the absorption of energy within the instrument system or by instrument‐mechanism play; each may limit the resolution of the instrument.
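In a digital measuring chain, resolution is ultimately bounded by quantization. A sketch for an ideal analog‐to‐digital converter (the 12-bit and 1000-psi figures in the note below are illustrative assumptions, not from the text):

```python
def adc_resolution(full_scale_range, bits):
    """Smallest change distinguishable by an ideal analog-to-digital
    converter: the full-scale range divided by the number of
    quantization steps (2**bits - 1)."""
    return full_scale_range / (2 ** bits - 1)
```

For example, a 12-bit converter spanning 0 to 1000 psi can resolve no better than about 0.24 psi, regardless of the sensor's own resolution.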
Sensitivity refers to the change in response or reading caused by particular influences. For example, temperature sensitivity and acceleration sensitivity refer to the change in measured value caused by ambient temperature and flight acceleration respectively. These are commonly expressed in percent change of measured value per unit of temperature or acceleration. Such information can serve to correct readings to reference or standard conditions.
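Correcting a reading back to reference conditions using a known sensitivity can be sketched as follows; the sensitivity value and standard temperature used below are hypothetical:

```python
def correct_to_standard(reading, ambient_temp_c, sensitivity_pct_per_c,
                        standard_temp_c=25.0):
    """Remove the temperature-sensitivity error from a reading, where
    sensitivity is expressed as percent of measured value per degree C
    away from the standard reference temperature."""
    drift = sensitivity_pct_per_c / 100.0 * (ambient_temp_c - standard_temp_c)
    return reading / (1.0 + drift)
```

An acceleration sensitivity would be handled the same way, with percent change per unit of acceleration in place of the temperature term.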
Errors in measurements can arise from many sources. Reference 21–12 gives a standardized method, including mathematical models, for estimating errors, component by component, as well as any cumulative effect in the instrumentation and recording systems.
Electrical interference or “noise” within an instrumentation system, including the power supply, transmission lines, amplifiers, and recorders, can affect the accuracy of the recorded data, especially when low‐output transducers are in use. Methods for measuring and eliminating objectionable electrical noise are given in Ref. 21–18.
For every test of a rocket propulsion system or one of its components there should be one or more objectives, an identification of the article to be tested, a description of the test, an identification of the test facility, a list of specific measurements to be performed, instructions on storing, analyzing or displaying of the data, an interpretation of the results, and a conclusion.
For many tests, especially with large liquid propellant rocket engines, the amount of data generated can be very large. Manual data analysis by skilled individuals was the original method, but it was too cumbersome; today computers have become commonplace. They can be programmed to correct raw measured data and to store, analyze, and display them. Some control systems can be programmed to control the engine being tested and minimize failures. This approach is really a version of a health monitoring system (HMS) described at the end of Section 11.5 and also discussed in the next section.
Often, only a portion of recorded data is actually analyzed or reviewed during and/or after testing. In complex rocket propulsion system development tests, sometimes between 100 and 400 different instrument measurements are made and recorded. Some data need to be sampled frequently (e.g., some transients may be sampled at rates higher than 1000 times per second), whereas other data need to be taken at lower rates (e.g., the temperature of mounting structures may be needed only every 1 to 5 sec). Multiplexing of data is commonly practiced to simplify data transmission. For most rocket tests, computer systems contain a configuration file to indicate data characteristics for each channel, such as range, gain, data references, type of averaging, parameter characteristics, and/or data correction algorithms. Most data are not analyzed or printed; a detailed analysis occurs only if there is reason for scrutinizing particular test events in more detail. Such analysis may occur months after the actual tests and may not even be done on the same computer.
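The per‐channel configuration file and the wide disparity in sampling rates can be sketched as follows; the channel names, rates, and fields are hypothetical examples of the kinds of entries described above:

```python
# Hypothetical per-channel configuration records (range, units, gain,
# sampling rate), echoing the configuration-file entries described above.
CHANNEL_CONFIG = {
    "chamber_pressure": {"range": (0.0, 10.0), "units": "MPa",
                         "gain": 1.0, "sample_hz": 2000.0},
    "mount_temperature": {"range": (0.0, 600.0), "units": "K",
                          "gain": 1.0, "sample_hz": 0.5},
}

def samples_per_test(config, duration_s):
    """Number of samples each channel produces over a test of the given
    duration, showing why fast transient channels dominate data volume."""
    return {name: int(cfg["sample_hz"] * duration_s)
            for name, cfg in config.items()}
```

Over a 60-second firing the fast pressure channel here would generate 120,000 samples against 30 for the slow temperature channel, which is why multiplexing and selective analysis are common.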
Proper selection of type, number of measurements, and/or frequency of data collection is crucial in the sensing of the health status of any propulsion system to be tested. This selection is usually performed by the engineers while developing the particular propulsion system, by identifying critical parameters that influence performance and possibly also likely impending failures.
Actual test measurements are performed with a variety of sensors, modified to fit standard interfaces so they can be connected to a common computer or network. See Ref. 21–14. Measured outputs from each instrument or sensor are corrected for changes in ambient temperature, instrument calibration factors, nonlinear outputs, conversions of analog data into digital, and/or filtering of data signals to eliminate signals outside ranges of interest. Recording of data can be done by a computer located within the engine, on the ground, or in the flight vehicle. Manipulation of data may include changes of scale or providing displays (table, plots, or curves) of selected engine parameters in support of analysis of test results.
All HMSs rely on receiving corrected data inputs from selected sensors, whose measurements were described in the prior paragraphs. HMSs are also discussed in Section 11.5. From simulated and validated analytical descriptions of the propulsion system it is possible to derive the nominal (desired) values for each parameter at a particular time during the test, and these values are entered into the HMS computer. They can include the nominal chamber pressure, thrust, nozzle wall temperature, voltage, as well as many others for both steady‐state and transient conditions—all at various times of engine operation. For each measured parameter and time, programmed analytical models provide nominal values together with upper and lower limits (also known as red line values) that identify the safe operating range of that parameter. These analytical models usually also include transient conditions (start/stop). Values for each parameter are often validated (and sometimes corrected) by actual test data from prior successful test‐firings from equivalent types of rocket propulsion systems.
Then, automatic comparisons are made by the HMS computer between the actual measured corrected data and equivalent data from analytical simulations. These comparisons will show if any measured parameter is satisfactory (falls between the two limit lines) or unsatisfactory (exceeds safe limits). Since damage to a chemical rocket system can occur very quickly (in less than a second), immediate action is necessary. In an impending failure, other measurements will usually also exceed their safe limits. For example, if the turbine inlet temperature of an LPRE becomes too hot, turbine blades may be damaged. If the turbine inlet gas temperature exceeds its safe limit, then usually the turbopump's shaft speed and the turbine inlet manifold temperature will also exceed their safe limits. By sensing any impending critical failure with more than one red line measurement, likely incipient engine failures are confirmed; also, any occasional single sensor malfunction can be eliminated as an insufficient cause for starting a remedial action.
Decisions on possible remedial actions are made very quickly by a computer (certainly much faster than human decisions). If the computer receives three simultaneous related measurements that exceed their safe limit values, and if all three are related to the same potential failure, then remedial action is allowed; the computer automatically queries a table of remedies, corrective actions that fit the three overlimit values are quickly identified and selected, and the chosen remedy is automatically initiated. For the turbine inlet temperature example given above, this action may include throttling the total flow to the gas generator, changing the mixture ratio of the gas generator flow (provided the gas generator valves have a throttling capability), or shutting the engine down safely. Such fast actions, with decisions made by a computer, have saved much hardware in development testing. They represent a relatively new feature of rocket engine testing.
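The red‐line comparison and remedy selection just described can be sketched as follows; the parameter names, limit bands, and remedy table are hypothetical, and a real HMS is far more elaborate:

```python
def check_redlines(measured, limits):
    """Return the set of parameters whose corrected measured value falls
    outside its programmed (lower, upper) red-line band."""
    return {p for p, v in measured.items()
            if not (limits[p][0] <= v <= limits[p][1])}

def select_remedy(overlimit, remedy_table, min_confirming=3):
    """Act only when several related red lines are exceeded together, so
    that a single faulty sensor cannot trigger remedial action alone."""
    if len(overlimit) < min_confirming:
        return None  # possible sensor malfunction; no remedial action
    for pattern, action in remedy_table.items():
        if pattern <= overlimit:  # all confirming parameters overlimit
            return action
    return "safe_shutdown"  # default when no specific remedy matches
```

With three related turbine overlimits, the table's remedy (say, throttling the gas generator) would be selected; a single overlimit measurement returns no action, matching the single‐sensor‐malfunction logic described above.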
For some component tests programmable logic controllers are used to control test operations instead of general‐purpose computers, and this application usually requires some software development.
Flight testing of the larger rocket propulsion systems is always conducted in conjunction with flight tests of their vehicles and other subsystems such as guidance, vehicle controls, or ground support. These flights usually occur along missile and space launch ranges, often over an ocean. If a flight test vehicle deviates from its intended path and appears to be headed for a populated area, a range safety official (or a computer) will have to either destroy the vehicle, abort the flight, or cause it to correct its course. Many propulsion systems therefore include devices that will either terminate operation (shut off the rocket engine or open thrust termination openings into rocket motor cases as described in Section 14.3) or trigger explosive devices that will cause the vehicle (and therefore also the propulsion system) to disintegrate in flight.
Flight testing requires special launch test range support equipment together with means for observing, monitoring, and recording data (cameras, radar, telemetering, etc.), equipment for assuring range safety and for reducing data and evaluating flight test performance, and specially trained personnel. Different equipment is needed for different kinds of vehicles. This may include launch tubes for shoulder‐launched infantry support missiles, movable turret‐type multiple launchers mounted on an army truck or a navy ship, transporters for larger missiles, and track‐propelled launch platforms or fixed complex launch pads for spacecraft launch vehicles. Launch equipment has to have provisions for loading or placing vehicles into a launch position, for allowing access to various parts and connections to launch support equipment (checkout, monitoring, fueling, etc.), for aligning or aiming vehicles, and for withstanding the exposure to hot rocket plumes at launch.
During experimental flights extensive measurements need to be made of the behavior of the various vehicle subsystems; for example, rocket propulsion parameters, such as chamber pressure, feed pressures, and temperatures, are measured, and these data are telemetered and transmitted to a ground receiving station for recording, monitoring, and analysis. Some flight tests also rely on salvaging and examining the test vehicle after the flight.
In the testing of any rocket propulsion system there will invariably be failures, particularly when some of the operating parameters are close to their limit. With each failure comes an opportunity to learn more about the design, materials, propulsion performance, fabrication methods, and/or the test procedures. A careful and thorough investigation of each failure is needed to learn the likely causes in order to identify remedies or fixes to prevent similar failures in the future. The lessons to be learned from these failures are perhaps some of the most important benefits of development testing. A formalized postaccident approach is often used, particularly if the failure had a major impact, such as high cost, major damage, or personnel injury. Any major failure (e.g., the loss of a space launch vehicle or severe damage to a test facility) often causes the program to be stopped and further testing or flights put on hold until failure causes are determined and remedial actions have been taken to prevent a recurrence.
Of utmost concern immediately after a major failure are the needed steps to respond to the emergency. These may include giving first aid to injured personnel, bringing the propulsion system and/or the test facilities to a safe/stable condition, limiting further damage from chemical hazards to the facility or the environment, working with local fire departments, medical or emergency maintenance staff or ambulance personnel, and debris clearing crews, and quickly providing factual statements to the management, the employees, the news media, and the public. The response also includes controlling access to the facility where the failure has occurred and preserving evidence for the subsequent investigation. All test personnel, particularly the supervisory people, need to be trained not only in preventing accidents and minimizing the impact of a potential failure, but also in how best to respond to the emergency. Reference 21–15 suggests postaccident procedures involving rocket propellants.