Users of instruments need information about the performance characteristics of instruments and how these characteristics are determined and documented if they are to make reliable measurements and report measurement characteristics. Users of measurements need this knowledge if they are to reach valid conclusions based on those measurements.

The lesson begins by exploring static performance characteristics, which leads naturally to a discussion of calibration as the means of determining the values of the performance parameters of a sensor or instrument. This is followed by a thorough description of dynamic performance characteristics and the responses of first- and second-order systems to three classic input examples: step or impulse, ramp or time-varying, and oscillatory inputs.

The instruments used by atmospheric scientists sample the characteristics of a medium that is constantly changing and is subject to modification by upstream influences as well as the local setting. These attributes are discussed in a section on the general characteristics of measurements, focusing on representativeness and homogeneity of the region affecting the measurement.

The last section is devoted to the quantitative estimate of uncertainty, adhering to the standard evaluation methodologies and expressions employed by the National Institute of Standards and Technology (NIST).

Topics introduced in this lesson include the general characteristics of measured data and the importance of homogeneity when assessing how well the measurement represents the phenomenon; i.e., its “representativeness.” Measurement errors and uncertainty play a central role in discussions of instrument performance. In this lesson, we make an effort to discourage the use of the term “accuracy,” which is often misused. We focus instead on the National Institute of Standards and Technology (NIST) guidelines for evaluating and expressing uncertainty. The lesson also describes the types or components of uncertainty that arise from systematic and random effects.

Upon completion of this lesson, you’ll be able to:

- Define and explain the **static performance characteristics** of an instrument.
- Describe the process of **static calibration**.
- Identify the difference between **static and dynamic performance characteristics**.
- Describe how the **principle of superposition** relates to signal analysis.
- Describe the **dynamic performance characteristics** for a first-order system.
- Distinguish between a **first and higher order measurement system** and associated energy reservoirs.
- Describe **representativity and homogeneity** by identifying features within an application that could generate internal boundary layers and affect these characteristics.
- Distinguish between **Type A and Type B uncertainty**.
- Apply the **Law of Propagation of Uncertainty** to a system of multiple sensors.

Instrument performance characteristics can be described as static, where the measurand is constant or slowly varying, or dynamic, where the input to a sensor changes rapidly.

Static performance characteristics are generally determined during instrument calibration, which is a comparison of measurement values with a calibration standard of known uncertainty. Static performance characteristics include the range, span, resolution, static sensitivity, linearity, and stability, plus peculiarities such as hysteresis and sensor threshold.

Every instrument comes with technical specification information (see the example Technical Data for the Vaisala RS41-SGP radiosonde) that describes the static performance characteristics. Technical specifications also include the time constant, which is the time that it takes for an instrument, after a sudden change in measurand value, to reach about 63.2 percent (1 − 1/e) of the new value. The time constant is a dynamic performance characteristic that will be discussed later in greater detail.

Some terminology will be important to know as we proceed through the lesson. Based on what you know about instrumentation, complete the following set of questions.

Select the term that best matches each definition.

The correct answers are shown above.

This lesson presents the science behind many of the characteristics that appear in data sheets such as the previously shown example. Such specifications are usually the starting point for understanding performance of a sensor.

One of the first descriptions of instrument performance characteristics is provided by a transfer curve. A transfer curve shows the output from a sensor as a function of the measurand. The slope of this curve is the static sensitivity, or the change in sensor output based on the change in input. If the response is linear, the transfer curve will be a straight line. In this example, the x-axis shows the measurand and the y-axis indicates the output of the sensor.

A linear relationship between the output and the measurand is given by:

Output = Intercept + (Static Sensitivity x Measurand)

A transfer curve often is determined by calibration and then used inversely to determine an estimate of the measurand based on the output of the sensor. Deviations from a linear transfer equation can be caused by a non-linear response from the sensor, in which case the relationship between input and output might be a transfer curve represented by a higher-degree (>1) polynomial, or by systematic measurement error caused by drift, lag, hysteresis, or failure to calibrate.
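The forward and inverse use of a linear transfer equation can be sketched in a few lines. The intercept and static sensitivity values below are made up for illustration, not taken from any real sensor:

```python
# A sketch of the forward and inverse use of a linear transfer equation.
# The intercept and static sensitivity below are illustrative values only.
intercept = 0.5      # sensor output when the measurand is zero
sensitivity = 0.2    # static sensitivity: change in output per unit measurand

def output_from_measurand(x):
    """Forward transfer equation: Output = Intercept + Sensitivity * Measurand."""
    return intercept + sensitivity * x

def measurand_from_output(m):
    """Inverse use of the calibration: estimate the measurand from the output."""
    return (m - intercept) / sensitivity

m_out = output_from_measurand(25.0)    # 0.5 + 0.2 * 25 = 5.5
x_est = measurand_from_output(m_out)   # recovers 25.0
print(m_out, x_est)
```

Inverting the calibrated transfer curve in this way is exactly how a sensor output is converted back into an estimate of the measurand.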

The random error is related to the precision of the instrument and is also used to determine instrument resolution. The precision of the instrument determines the resolution, and various multiples of the precision are often quoted as the resolution, so it is necessary to consider which convention is used. If the precision of the measurement is given by a value *p*, you can conclude with only about 50% confidence that two measurands are different when they are separated by *p*. At a separation of 2*p*, you can make the same conclusion with about 2/3 confidence [corresponding to one-standard-deviation confidence], and for 95% confidence, a separation of about 3*p* is needed.

The term “resolution” has many definitions, some conflicting. This lesson seeks to clarify why characterizing the resolution is difficult, and it will explain some of the varying usages of this term.

There are several components to a description of resolution, and in practice the term is often applied to these individual components as well as to their net result. The components include:

- The smallest difference between measurands that will lead to corresponding measurements indicating that the measurands are different. The desired indication will have an associated confidence level, often 68.3% or that corresponding to one standard deviation. A primary factor influencing this aspect of resolution is the precision of the instrument.
- The smallest change in the indicator (e.g., meter or digital display) that can be perceived. For example, the instrument resolution may be limited by the ability of an operator to read an analog display, by hysteresis in that display, or by the smallest incremental change reported on a digital display. In the case of a digitized result, the resolution is limited by the change corresponding to one unit in the digitized output.
- For a spectrometer, the smallest difference in frequency that can be resolved.
- For a time series, the temporal resolution is the increment between consecutive measurements, although if a particular sensor responds more slowly than this, the temporal resolution of that sensor may be lower.

Let’s consider definitions #1 and #2 in more detail. (Definitions #3 and #4 are specific to instrument types or to the data series and are largely self-explanatory in their definitions.)

Our first definition of resolution, and the one that has found general acceptance for instrument performance, is **“the smallest difference between measurements that indicates that the corresponding measurands are different”**, which normally uses single measurements of each measurand. Resolution defined this way will be limited primarily by the precision of the measurement. Consider first the case where two measurands x_{1} and x_{2} are the same. If measurements of each (M_{1} and M_{2}) have a random error characterized by 𝜎_{M} and the random errors in repeated measurements are independent, then the expected standard deviation arising from random error in the difference y between the two measurements is 𝜎_{y} = √2𝜎_{M}. The two measurements will differ by less than 𝜎_{y} in about 68.3% of cases, as is appropriate for a confidence level corresponding to one standard deviation, but they will differ by more than 𝜎_{M}, the instrument precision, in about 48% of cases. A measured difference of 𝜎_{M} is therefore an unreliable indicator that the measurands are different; a measured difference of at least √2𝜎_{M} is needed to indicate that the measurands are different. Figure 1 shows the probability distribution functions for the two measurements (as the blue line) and that for the difference (as the red line).
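The probabilities quoted above (about 68.3% of differences within √2𝜎_{M}, and about 48% exceeding the precision 𝜎_{M}) can be checked with a small Monte Carlo sketch, independent of the lesson's figures:

```python
import math
import random

random.seed(1)
sigma_M = 1.0                       # instrument precision (one standard deviation)
sigma_y = math.sqrt(2) * sigma_M    # expected std dev of the difference
n = 200_000

over_precision = 0   # count of |difference| > sigma_M
within_sigma_y = 0   # count of |difference| < sqrt(2) * sigma_M
for _ in range(n):
    # two independent measurements of the SAME measurand
    diff = random.gauss(0, sigma_M) - random.gauss(0, sigma_M)
    if abs(diff) > sigma_M:
        over_precision += 1
    if abs(diff) < sigma_y:
        within_sigma_y += 1

print(over_precision / n)   # ~0.48: differences exceed the precision almost half the time
print(within_sigma_y / n)   # ~0.683: one-standard-deviation confidence for the difference
```

This confirms that a measured difference equal to the instrument precision is an unreliable indicator that the measurands differ.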

To find the resolution, a different limit is needed. In this case we want to find the minimum separation between measurands y' = x_{1}' - x_{2}' that leads to a statistically significant likelihood that measurements will show the two measurands to be different. (Here the primed quantities refer to the measurands and unprimed quantities to the measurements.) We need measurements separated by at least √2𝜎_{M} to be confident, on the basis of measurements with precision 𝜎_{M}, that the measurands are different. We therefore need to find the minimum separation between measurands required to give measurements separated by at least √2𝜎_{M} in at least 68.3% of cases. Then, with one-standard-deviation confidence, we can use the measurements to conclude that the measurands are different.

We seek the separation between measurands at which the probability “P2” of finding a measured difference outside ±√2𝜎_{M} is about 68.3%. For a separation between measurands of d = 𝜎_{M}, that probability is only about 43%, so when the measurands are separated by the precision of the sensor, the probability is still far below the desired limit.

In order to achieve a probability P2 = 68.3% that two measurands are different with a one-standard-deviation confidence limit, the separation between the measurands must be about twice the precision, or d = 2.06 𝜎_{M}.

Sometimes it is desirable to quote a resolution that gives confidence at the two-standard-deviation limit (the 95% confidence limit) that the measurands are different. In this case, the separation d between the measurands must be about five times the precision.
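Both separations can be verified numerically. The sketch below checks that a measurand separation of 2.06 times the precision gives about 68.3% at the √2𝜎 threshold, and that roughly five times the precision (5.2𝜎 is used here, matching the exercises later in the lesson) gives about 95% at the 2√2𝜎 threshold:

```python
import math
import random

random.seed(3)
sigma = 1.0        # instrument precision
n = 200_000

def frac_exceeding(separation, threshold):
    """Fraction of measurement pairs whose difference exceeds `threshold`,
    for two measurands separated by `separation`, each measured with
    precision `sigma`."""
    count = 0
    for _ in range(n):
        m1 = separation + random.gauss(0, sigma)
        m2 = random.gauss(0, sigma)
        if abs(m1 - m2) > threshold:
            count += 1
    return count / n

p_68 = frac_exceeding(2.06 * sigma, math.sqrt(2) * sigma)      # ~0.683
p_95 = frac_exceeding(5.2 * sigma, 2 * math.sqrt(2) * sigma)   # ~0.95
print(p_68, p_95)
```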

The smallest change in an indicator that can be perceived is difficult to characterize for an analog display, but it is straightforward for a digital display. For example, consider a transducer that produces voltages in the range from 0–5 V, connected to an analog-to-digital converter that produces a 12-bit signal covering this full range. Because 2^{12} = 4096, 4096 increments are available, so the smallest voltage difference that can be indicated is 5 / 4096 ≈ 0.00122 V. This limits the resolution to no better than 1.22 mV, and that is often quoted as the resolution of the instrument. However, if the instrumental precision is larger than this, the smallest increment that can be distinguished will be larger than this value, as discussed in the preceding section. The digitizer resolution is often a poor measure of instrumental resolution if the precision is larger than this, but many instrument specifications indicate only this resolution.

The effect of digital resolution on measurements of a smoothly varying measurand is a function of the increments available over the full range. At 4-bit resolution the increments are obvious, but at 6-bit resolution the 64 levels are harder to distinguish. It is common for A-to-D devices to produce at least 12-bit resolution to avoid significant distortion of measurements. If there are no comparable or larger contributions to the precision, the digitizer resolution contributes to the precision of the instrument as 𝜎 = ∆ / √12, where ∆ is the difference in measurand corresponding to a 1-bit change in digitized output (e.g., 1.22 mV in the example in the previous paragraph). However, this relationship to precision applies to encoders using any number of bits because the value (i.e., √12) arises from the standard deviation of randomly occurring values within the range of one bit.
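The 𝜎 = ∆ / √12 relationship can be demonstrated with a short simulation of an idealized 12-bit digitizer, using the 0–5 V example above (a sketch, not a model of any particular converter):

```python
import math
import random

random.seed(0)
delta = 5.0 / 4096    # one-bit increment of a 12-bit ADC spanning 0-5 V (~1.22 mV)
n = 200_000

# Digitize random "true" voltages; the quantization error is uniformly
# distributed over one increment, so its RMS value should be delta/sqrt(12).
sq_sum = 0.0
for _ in range(n):
    v = random.uniform(0.0, 5.0)
    v_digitized = round(v / delta) * delta
    sq_sum += (v - v_digitized) ** 2

rms_error = math.sqrt(sq_sum / n)
print(rms_error, delta / math.sqrt(12))   # the two values agree closely
```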

The following interactive tool can be used to explore the relationship between precision and resolution. After viewing the tool, answer the questions to test your understanding of the material. If you are viewing a downloaded version of this lesson, please see the "Printable Lesson" for the ShinyApp exercise information.

If 𝜎 is the instrument precision (at the one-standard deviation limit), what separation between measurands is required for the difference between measurements to be at least √2𝜎 for at least 80% of the pairs of measurements?

The correct answer is b.

With a separation of 2.6𝜎_{x} or 2.6 times the precision, the measurements will differ by √2𝜎 in 80% of the cases. To see this case, select the 68.3% checkbox (which determines the √2𝜎 limits) and adjust the slider until P2 = 0.8. Other answers correspond to lower (answer a) or higher (answers c and d) probabilities than 80%.

For measurement pairs to differ by at least 2√2𝜎 in 95% of cases, as would be needed for 95% confidence that the measurands are different, what separation between measurands would be required?

The correct answer is b.

The answer is 5.2 𝜎. To see this, select the “95.4%” button and adjust the slider to the point where P2 = 0.95. Answers a and c correspond to significantly lower likelihoods that the measurements would differ by 2√2𝜎.

For measurands separated by the instrument precision, what is the likelihood that pairs of measurements will differ by less than the true separation? Report your answer to the nearest whole number.

(Type your answer in the box, then click Done.)

The correct answer is 42%. The answer can be found by checking 50% and using d = 1. Notice that the red vertical lines are very close to the ±d limits from the center of x1. The integration of the regions **outside** ±d then gives about 0.60, so the fraction **inside** is 0.4. This is only approximate because the limits are not exactly ±d, but it will give an answer within a few percent of the right answer.

Answer the following three questions using this statement:

**A transducer produces a voltage in the range from 0-10 V and is connected to an analog-to-digital converter covering that full range that produces a 10-bit output.**

What is the number of available digital increments covering the full range?

The correct answer is c.

Computers count in base 2. The smallest increment of data on a computer is a bit. A bit is a binary digit, that is, it can hold only one of two values: 0 or 1, corresponding to the electrical values of off or on, respectively. A 10-bit output will, therefore, have 2^{10} available digital increments over the full range of 10 V. Answers a and b are incorrect. Answer a is the number of bits in a byte and answer b is for an 8-bit output.

What is the smallest resolution that this instrument can have?

The correct answer is b.

Answer b is correct under resolution definition #1, in which the resolution is about 2.06 times the precision. The precision set by digitization is ∆/√12, where ∆ is the one-bit increment, 10 V / 1024 ≈ 9.77 mV (about 0.01 V). The precision is therefore (10 V / 1024) / √12 = 2.82 mV, and the resolution is about 2.06 × 2.82 mV ≈ 5.8 mV. Answer a is incorrect because it is the square root of 10 V / 1024, and answer c is 10 V / 2^{9} in mV; 2^{9} has no relevance to a 10-bit output. If we use resolution definition #2 instead, the answer would be the one-bit increment itself, 9.77 mV.

What is the instrument precision in the absence of other (larger) random-error contributions to the precision?

The correct answer is a.

The precision is given by 𝜎 = (difference in measurand corresponding to a one-bit change in digital output) / √12 = 9.77 mV / √12 = 2.82 mV.
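The arithmetic in this set of questions can be reproduced directly; the sketch below simply uses Python as a calculator for the 10-bit, 10 V example:

```python
import math

full_range = 10.0                  # volts
bits = 10
increments = 2 ** bits             # 1024 digital increments over the full range
delta = full_range / increments    # one-bit change in measurand: ~9.77 mV
precision = delta / math.sqrt(12)        # ~2.82 mV
resolution_def1 = 2.06 * precision       # ~5.8 mV (resolution definition #1)

print(increments, delta * 1000, precision * 1000, resolution_def1 * 1000)
```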

Precision is the component of uncertainty arising from the random error in a measurement. An estimate of the precision is obtained from the root-mean-square (RMS) error determined from the unbiased standard deviation.

Use the images to explore the relationships between precision and resolution, then answer the following questions.

If 𝜎 is the instrument precision (at the one-standard deviation limit), what separation between measurands is required for the difference between measurements to be at least √2𝜎 for at least 80% of the pairs of measurements?

The correct answer is b.

With a separation of 2.6𝜎_{x} or 2.6 times the precision, the measurements will differ by √2𝜎 in 80% of the cases. In this case, the integrations are done outside the √2𝜎 limits. Probability Distributions 1 and 3 use integration limits determined by 𝜎 or 2√2𝜎, respectively, and Probability Distribution 4 represents a higher likelihood and also a too-large integration limit.

For measurement pairs to differ by at least 2√2𝜎 in 95% of cases, as would be needed for 95% confidence that the measurands are different, what separation between measurands would be required?

The correct answer is b, 5.2 𝜎.

To report findings at the 95% confidence limit, 95% of measurement pairs must differ by at least 2√2𝜎, which requires the measurands to be separated by 5.2𝜎. The smaller separations in the other answers, 3.74𝜎 and 3.28𝜎, correspond to the 68.3% and 50% confidence limits, respectively.

For measurands separated by the instrument precision, what is the likelihood (in %; i.e. enter a number like 50 for 50%) that pairs of measurements will differ by less than the true separation? Report your answer to the nearest whole number.

The correct answer is 40-42%. The approximate answer can be found using the figure below, which shows the probability distributions for a separation of d = 1 and integration limits covering 50% of the red distribution. Notice that the red vertical lines are very close to the ±d limits from the center of x1. The integration of the regions **outside** ±d then gives about 0.60, so the fraction **inside** is 0.4. This is only approximate because the limits are not exactly ±d, but it will give an answer within a few percent of the right answer.

Answer the following three questions using this statement:

**A transducer produces a voltage in the range from 0-10 V and is connected to an analog-to-digital converter covering that full range that produces a 10-bit output.**

What is the number of available digital increments covering the full range?

The correct answer is c.

Computers count in base 2. The smallest increment of data on a computer is a bit. A bit is a binary digit, that is, it can hold only one of two values: 0 or 1, corresponding to the electrical values of off or on, respectively. A 10-bit output will, therefore, have 2^{10} available digital increments over the full range of 10 V. Answers a and b are incorrect. Answer a is the number of bits in a byte and answer b is for an 8-bit output.

What is the finest resolution that this instrument can have?

The correct answer is b.

The finest resolution is equal to the range divided by the number of increments over that range, or 10 V / 1024 = 0.00977 V = 9.77 mV. Answer a is incorrect because it is the square root of 10 V / 1024, and answer c is 10 V / 2^{9} in mV; 2^{9} has no relevance to a 10-bit output.

What is the instrument precision in the absence of other (larger) random-error contributions to the precision?

The correct answer is a.

The precision is given by 𝜎 = (difference in measurand corresponding to a one-bit change in digital output) / √12 = 9.77 mV / √12 = 2.82 mV.

For more information about resolution, see Appendix 1.

Instrument calibration is one of the primary processes used to determine performance characteristics and quantify measurement uncertainty. A calibration, as described here, involves both the sensor and instrument and tests the transfer curve as well as the conversion of sensor output to an estimate of the measurand. The calibration process generally requires a traceable standard input to test the output signal of the instrument.

Calibration results are used to establish a relationship between the measurand (input signal) and the output of the instrument, and if necessary, make adjustments to bring the instrument in line with the calibrator.

The calibration process typically steps the input up or down through the measurement range of the instrument. Each step allows both the calibrator and the instrument to reach steady-state values for a period of time until a statistically robust mean and standard deviation can be determined for each step value over the range between the lowest and highest inputs.

Which of the following are part of the process of static calibration?

The correct answers are a, c, and d.

Calibration requires a traceable standard input to provide information for stepping the instrument's measurement range up or down. A calibration curve can reveal systematic errors, including zero error or span error.

A calibration curve showing the measured value (output) versus the input value can be constructed. Differences from the identity line indicate systematic error, which can have many forms. An example of systematic error is a consistent offset, indicating the need to adjust the zero offset of the instrument. This type of zero error is shown in the figure below. A span error, where the error depends on the value of the measurement, is also shown.

If the systematic error increases or decreases linearly as values of the input change linearly, then the instrument suffers from a span error, and the static sensitivity [ ∂(Output) / ∂(Input) ] must be adjusted. With the span error and zero error removed during calibration, any remaining systematic error could be due to nonlinearity, which could either be the instrument’s true response to an input, as in the case of a thermistor (a temperature-sensitive semiconductor that produces a very large nonlinear change in resistance for a change in temperature) or the result of other error sources such as drift, hysteresis, or nonlinear span errors.
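As a simple numerical illustration of these two systematic errors (all values below are synthetic, chosen only for illustration), zero and span errors determined during calibration can be removed as follows:

```python
# Removing zero and span errors identified during calibration.
# All numbers here are synthetic, chosen only to illustrate the corrections.
true_inputs = [0.0, 10.0, 20.0, 30.0, 40.0]

zero_error = 1.5    # constant offset present in every reading
span_gain = 1.05    # 5% span error: static sensitivity is 5% too high

# Readings produced by an instrument suffering from both errors:
readings = [span_gain * x + zero_error for x in true_inputs]

# Calibration determines the offset and gain, so readings can be corrected:
corrected = [(m - zero_error) / span_gain for m in readings]
print(corrected)   # recovers the true inputs
```

Any systematic error remaining after this correction would point to nonlinearity or other error sources such as drift or hysteresis, as described above.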

Use the interactive ShinyApp tool to complete the following questions. If you are viewing a downloaded version of this lesson, please see the "Printable Lesson" for the ShinyApp exercise information.

If the sensor output is 55 (M = 55), what would your estimate of the measurand x be using a linear model fit?

The correct answer is b.

To estimate the measurand x for a sensor output of M = 55, we create a plot of x = f(M) and determine the linear fit from least-squares regression. The resulting linear model is x = 0.1929M + 0.6689, with R^{2} = 0.9869. The model yields a value of **x = 11.28 for M = 55**.

The residual standard deviation of the calibration values for x, given M, from that regression line is 0.6967. That is not very good, and the deviations are systematic, so it is justified to try a higher-order fit, using a linear-model fit to determine coefficients for a relationship x = b_{1} + b_{2}M + b_{3}M^{2}.

What is your estimate of x as you increase the degree of the polynomial used to fit the measurand data to a second degree polynomial? Third degree polynomial? Enter both values in the box below.

The correct answer is 2nd degree polynomial, 11.96; 3rd degree polynomial, 11.97.

Estimates of the measurand for M=55 were obtained for 2nd and 3rd degree polynomials. The calibration equations are shown in the figure, along with a plot of trend lines for each of the three polynomial fits. For M=55, a 2nd degree polynomial yields a value for **x = 11.965**, leaving only 1-0.9973 or 0.27% variation unaccounted for by the regression. Using a 3rd degree polynomial improves the fit slightly with **x** = 11.974 and R^{2} = 0.998 or 1-0.998 = 0.2% variation not accounted for by the 3rd degree polynomial regression.

Our objective is to calibrate a sensor so that we can use the calibration to estimate the value x of a measurand from the output of the sensor M. We have a calibrator that produces values of a measurand with very small random error and no systematic error, so its contribution to error in the calibration can be neglected. Our sensor, on the other hand, makes individual measurements with a rather large random error, characterized by the precision σ_{M} = 1.5. The goal of calibration is to minimize systematic error in measurements using this sensor. We decide to make N = 20 measurements equally spaced over a measurand interval (x_{1} – x_{2}), and as a result we get N measurements {M_{i}} that correspond to {x_{i}}. Results are as listed in the following table.

| x | M | x | M | x | M | x | M |
|---|---|---|---|---|---|---|---|
| 1 | 8.9783 | 6 | 23.8731 | 11 | 49.9256 | 16 | 79.8253 |
| 2 | 12.4642 | 7 | 31.8451 | 12 | 58.1260 | 17 | 83.5705 |
| 3 | 14.7360 | 8 | 34.9857 | 13 | 61.4559 | 18 | 90.9026 |
| 4 | 17.9198 | 9 | 39.6738 | 14 | 67.3803 | 19 | 99.7811 |
| 5 | 23.4088 | 10 | 43.6210 | 15 | 71.3742 | 20 | 105.4500 |

Using these data, find a representation of this calibration in the form **x = f(M)**, where f(M) can be a polynomial or other function, so that the output M from this sensor can be used to estimate the measurand x, under static conditions.

Procedure: Download the data from the table to an application or program (e.g., ShinyApp, MS Excel, MatLab, Python, etc.) that you can use to plot x = f(M) and generate a set of calibration curves.
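If you are working in Python, a minimal linear fit of x = f(M) to the calibration table might look like the following sketch (pure-Python least squares is used here for transparency; a library routine such as numpy.polyfit would work equally well):

```python
# Least-squares fit of the linear model x = b1 + b2*M to the calibration
# table, then inverse use of the fit at M = 55.
x = list(range(1, 21))
M = [8.9783, 12.4642, 14.7360, 17.9198, 23.4088,
     23.8731, 31.8451, 34.9857, 39.6738, 43.6210,
     49.9256, 58.1260, 61.4559, 67.3803, 71.3742,
     79.8253, 83.5705, 90.9026, 99.7811, 105.4500]

n = len(x)
mean_x = sum(x) / n
mean_M = sum(M) / n
b2 = (sum((m - mean_M) * (xi - mean_x) for m, xi in zip(M, x))
      / sum((m - mean_M) ** 2 for m in M))   # slope of the regression of x on M
b1 = mean_x - b2 * mean_M                    # intercept
x_est = b1 + b2 * 55.0                       # estimated measurand for M = 55
print(round(b2, 4), round(b1, 4), round(x_est, 2))
```

Higher-degree polynomial fits can then be compared against this linear model, as explored in the questions that follow.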

If the sensor output is 55 (M = 55), what would your estimate of the measurand x be using a linear model fit?

The correct answer is b.

To estimate the measurand x for a sensor output of M = 55, we create a plot of x = f(M) and determine the linear fit from least-squares regression. The resulting linear model is x = 0.1929M + 0.6689, with R^{2} = 0.9869. The model yields a value of **x = 11.28 for M = 55**.

The residual standard deviation of the calibration values for x, given M, from that regression line is 0.6967. That is not very good, and the deviations are systematic, so it is justified to try a higher-order fit, using a linear-model fit to determine coefficients for a relationship x = b_{1} + b_{2}M + b_{3}M^{2}.

What is your estimate of x as you increase the degree of the polynomial used to fit the measurand data to a second degree polynomial? What is your estimate if fitting to a third degree polynomial?

The correct answer is b.

Estimates of the measurand for M = 55 were obtained for 2nd and 3rd degree polynomials. The calibration equations are shown in the figure, along with a plot of trend lines for each of the three polynomial fits. For M = 55, a 2nd degree polynomial yields a value for **x = 11.965**, leaving only 1 - 0.9973 = 0.27% of the variation unaccounted for by the regression. Using a 3rd degree polynomial improves the fit slightly with **x = 11.974** and R^{2} = 0.998, or 1 - 0.998 = 0.2% of the variation not accounted for by the 3rd degree polynomial regression.

The residual standard deviation for the new (2nd degree) fit is 0.3266, so the improvement is significant. Studies of this sensor by other means indicate that its precision is about 1.5 units in the measurement M, so with a representative slope of about b_{2} the expected precision in estimates of the measurand x is about 0.4155; this result is reasonably consistent with the expected precision in measurements. That indicates that higher-order polynomial fits are probably not needed or justified. If a third-order polynomial is used in the fit, the residual standard deviation reduces only to 0.2867, so inclusion of that term does not give any significant improvement in the fit. (The decision regarding which terms are justified is often made on the basis of an analysis of variance, which in this case indicates that the addition of a cubic term to the equation does not lead to statistically significant improvement.)

For a detailed discussion of the answers above and a review of the lessons learned from the Calibration Exercise, view Appendix 2. Additional notes about the software package and code are also included.

Drift, hysteresis, or nonlinear span errors can all influence an instrument’s true response.

Drift is a form of instrument instability that produces a time-dependent systematic error, often associated with a physical change or deterioration of components.

Hysteresis occurs when the sensor input for a given output depends on whether the input is increasing or decreasing. As an example, hygrometers exhibit hysteresis because the rates of sorption and desorption of water molecules on the surface of the sensor are different. Another example of hysteresis is illustrated by the blue curves in the figure below.

Where the measurand (input) changes with time, we observe performance characteristics that are the result of the instrument’s inability to respond instantly. In these situations, the instrument output often lags the instrument input. The time constant, or response time, has been adopted as this dynamic performance characteristic. The inverse of the time constant provides the instrument’s temporal resolution. In the example shown, a time constant is applied to produce the red curve, which corresponds to output values unaffected by hysteresis.

We use differential equations to model dynamic performance. If the instrument or sensor responds linearly, its response can be characterized by a linear differential equation. Consider an instrument that uses a sensor to produce a signal M when exposed to a measurand x. Calibration of a sensor consists of determining the static relationship between M and x, often plotted as a transfer curve showing the relationship as discussed in the section on static response. A sensor can be depicted generically using a diagram like that below, where H(x) will be called the transfer function.

The transfer curve characterizes this relationship under static conditions (M given x), but we are often interested in applications where the measurand x varies in time. The simple box-description of the sensor still applies, but now the output M will depend not only on the present value of x but also on its past history.

As an example, consider a liquid-in-glass thermometer immersed at time t_{0} = 0 into a bath of fluid having temperature T_{F} (the final bath temperature). If the thermometer reads T_{0} before immersion, it will start to move toward T_{F} after immersion and will display a temperature T(t) that moves toward T_{F} over some period of time. The rate at which heat is transferred to the thermometer depends on the temperature difference T(t) - T_{F}, often in a linear relationship, so the displayed temperature will move faster at first and gradually move more slowly toward T_{F} as the temperature difference becomes small.

A linear relationship between the time rate of change in temperature and the temperature difference between the sensor and the bath can be represented by a first-order linear differential equation like dT(t) / dt = (T_{F} - T(t)) / 𝜏, where 𝜏 is a constant that determines the rate at which the sensor responds to its environment and T_{F} is the final temperature. This equation has an exponential solution, such that T(t) approaches T_{F} but never reaches that value, as shown in the figure. This first-order response is characteristic behavior for many sensors and electronic components of sensors.
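A short Euler integration of this equation illustrates the exponential approach to T_{F}; the values of 𝜏, T_{0}, and T_{F} below are illustrative only, not taken from a real thermometer:

```python
# Numerical integration of dT/dt = (T_F - T)/tau with illustrative values.
tau = 4.0     # assumed time constant, seconds
T0 = 20.0     # initial thermometer reading, deg C
T_F = 50.0    # bath temperature, deg C

dt = 0.001    # time step for simple Euler integration
T = T0
t = 0.0
while t < tau:               # integrate up to one time constant
    T += dt * (T_F - T) / tau
    t += dt

fraction = (T - T0) / (T_F - T0)
print(fraction)              # ~0.632, i.e., 1 - 1/e after one time constant
```

After one time constant the reading has covered about 63.2 percent of the step, matching the definition of the time constant given earlier in the lesson.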

Select whether the statement refers to a static performance characteristic or a dynamic performance characteristic.

The correct answers are shown above.

Sensors differ in their response characteristics, but many follow general patterns. A first-order system is one that can be described by a first-order differential equation. A second-order system requires a second-order differential equation for its description. The thermometer is an example of a first-order system in temperature that responds directly to the heat flow induced by the difference in temperature between the sensor and the bath. A wind-measuring system consisting of a wind vane and a cup anemometer, as shown in the next illustration, involves both first-order and second-order responses. The rotor in the cup anemometer has angular momentum that resists change when the wind changes, so it is basically a first-order system in velocity with no preferred orientation or position. An exponential change between the initial and final states describes the solution to a first-order system. The wind vane is a second-order system in position that will tend to oscillate about the equilibrium position when the wind direction changes. Thus, second-order solutions include sinusoidally-varying components that describe the oscillatory nature of the system, one which often includes damping, which is represented by a damping coefficient. Many other commonly used systems fall into one of these two categories.

Drag each of the following to the type of system (first order or second order) it represents.

The correct answers are shown below.

First Order System

Second Order System

Which of the following is(are) true of dynamic performance characteristics for a first-order system?

The correct answers are a and b.

In a first-order system, only slowly varying input fluctuations will be passed through to the output, while rapidly varying input fluctuations will be dampened or attenuated.

Another example of a first-order system is an “RC” circuit, consisting of a resistor R and a capacitor C as shown in the next figure. The same equations apply to this circuit, with time constant = RC. Functionally, a capacitor can be regarded as an integrator because the voltage across it equals the time integral of the current (I(t)) that passes through it divided by the capacitance. The same current passes through the resistor and produces a voltage across it equalling I(t)R, so equating the current passing through the resistor and through the capacitor gives dV_{out} / dt = ( V_{in} - V_{out} ) / (RC). This equation has the same form as the equation for the derivative of temperature, dT(t) / dt in the preceding example, if the time constant 𝜏 is equal to RC.

All electronic components have some resistance and some capacitance associated with their output impedance, making first-order (RC-type) behavior a common influence on dynamic response.

Sensors exhibiting first-order dynamic response can be more complicated, but the simplest (like the preceding examples) have exponential response and are characterized by dM(t) / dt = (H*(x(t)) - M(t)) / 𝜏,

where H*(x) represents the static transfer function or the transfer curve, giving the output value that would be produced by a steady measurand with the value x(t).

The above diagram, based on the equation, is a functional depiction of a first-order dynamic system having time constant 𝜏. It shows how the output M from a sensor having first-order dynamic response depends on the measurand x. The red box is the complete instrument transfer function H. However, x and M both generally vary with time. The triangular symbols represent amplifiers producing an output that is the difference between the two inputs multiplied by the indicated gain (1 or 1 / 𝜏 in these two examples). The box with an integration symbol integrates the input (in this case, the time derivative dM(t) / dt of the measured quantity as given by the preceding equation). Integrating the input produces the measurement, M. However, that measurement is also fed back to the left amplifier, which produces an output determined by the difference between H*(x) and M. The diagram is a representation equivalent to the differential equation shown above, the RC circuit diagram, or the diagram with the generic transfer function H. The transfer function depicted can be considered a low-pass filter because only slowly varying input fluctuations are passed through to the instrument output; rapidly varying input fluctuations are dampened (i.e., attenuated) and do not reach the output. In general, input variations at angular frequencies below 1 / 𝜏 (where 𝜏 is the time constant) pass through to the instrument output, whereas input variations at angular frequencies above 1 / 𝜏 are attenuated.
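The low-pass behavior can be quantified with the standard first-order frequency response: for a sinusoidal input, the amplitude ratio is 1/√(1 + (ω𝜏)²) and the output lags by arctan(ω𝜏). A brief sketch, assuming 𝜏 = 1 s:

```python
import math

def first_order_gain_phase(freq_hz, tau):
    """Amplitude ratio and phase lag of a first-order system,
    dM/dt = (x - M)/tau, for a sinusoidal input (standard low-pass result)."""
    w = 2.0 * math.pi * freq_hz
    gain = 1.0 / math.sqrt(1.0 + (w * tau) ** 2)
    phase_deg = math.degrees(math.atan(w * tau))  # output lags input by this angle
    return gain, phase_deg

tau = 1.0  # assumed time constant in seconds
for f in (0.01, 0.1, 1.0, 10.0):
    g, p = first_order_gain_phase(f, tau)
    print(f"f = {f:5.2f} Hz  gain = {g:.3f}  phase lag = {p:5.1f} deg")
```

Slow fluctuations pass nearly unattenuated with little lag, while fluctuations well above 1/𝜏 are strongly attenuated and lag by almost 90°.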

The transfer function diagram can be used for dynamic systems of higher order. An example is the simplified response of a wind vane to the direction of the wind. *[Note that a wind vane is actually a much more complex system and the following simplification is not realistic except as an example of a second-order dynamic system. For a real wind vane, the restoring torque becomes nonlinear for any but very small deflection angles, varying wind speed changes the restoring torque, and there are aerodynamic contributions to the drag term and other complicating factors.]*

In this simplification, two factors control how the vane will respond to the direction of the wind when the wind speed remains constant. First, a difference between the orientations of the vane and the wind will produce a torque proportional to that difference, with proportionality constant k. However, such a system will tend to oscillate about the correct orientation, like a weight suspended on a spring. Therefore a wind vane will have some resistance to motion built into its mounting structure, and this resistance provides damping of the oscillation.

A schematic representation of the transfer function of this system is shown in the figure, where the amplifier with gain D provides the damping effect of this feedback.

This schematic shows a damped forced harmonic oscillator, where k is analogous to a spring constant but in this case is the proportionality constant between the restoring torque and the angle between the orientation of the wind and of the wind vane. The symbol m in the amplifier with gain 1 / m is analogous to mass for the loaded spring but here represents the moment of inertia of the wind vane. The response to changes in wind direction is determined by the relative magnitudes of the parameters {k, the spring constant; m, the mass; and D, the damping constant}. If the damping constant is very small, the vane tends to oscillate too much, while a large value of D causes the vane to respond too slowly. “Critical damping,” for which the vane just moves to a new equilibrium position without overshooting, occurs when D = 2√(mk), and the natural oscillation without damping has an angular frequency 𝜔_{0} equal to √(k/m).

In contrast to the first-order transfer function, which serves as a low-pass filter, the response of a second-order transfer function can lead to erroneous large-amplitude fluctuations near the natural frequency if the damping is insufficient. In situations with large damping, the transfer function again serves as a low-pass filter.

The governing differential equation can be determined from this and other similar transfer-function diagrams, as follows:

Start at the point in the diagram labeled as d^{2}M(t) / dt^{2} (or whichever term is the highest-order derivative).

Working backward, incorporate each contribution into the differential equation to obtain, in the case of the diagram, d^{2}M(t) / dt^{2} = [ k (H*(x(t)) - M(t)) - D dM(t)/dt ] / m.

It is then possible to solve the resulting differential equations for specified values of the measurand as a function of time. Another benefit of characterizing the sensor with a transfer-function diagram is that it provides a structure easily incorporated, with initial values, into a numerical solution that gives the response to any input.
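As the text notes, the transfer-function structure maps directly onto a numerical solution. Below is a minimal sketch with two Euler integrators (one per integration symbol in the diagram); the parameter values are illustrative, and H*(x) is taken as the identity for simplicity.

```python
import math

def second_order_response(x_of_t, k, m, D, dt=0.001, t_end=20.0):
    """March the transfer-function structure forward in time: the loop
    integrates d2M/dt2 = (k*(x - M) - D*dM/dt) / m with two Euler steps."""
    M, dM, t = 0.0, 0.0, 0.0
    out = []
    while t < t_end:
        d2M = (k * (x_of_t(t) - M) - D * dM) / m
        dM += d2M * dt   # first integrator: acceleration -> rate
        M += dM * dt     # second integrator: rate -> measurement
        t += dt
        out.append((t, M))
    return out

# Step input with critical damping D = 2*sqrt(m*k) (illustrative parameters)
k, m = 1.0, 1.0
D = 2.0 * math.sqrt(m * k)
trace = second_order_response(lambda t: 1.0, k, m, D)
final_M = trace[-1][1]
max_M = max(M for _, M in trace)
print(final_M, max_M)  # settles at 1 with no overshoot when critically damped
```

Changing D relative to 2√(mk) reproduces the underdamped (oscillating) and overdamped (sluggish) behavior described in the text.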

To demonstrate how solutions to these equations or the transfer-function diagrams relate to characteristics of instruments, the responses to some specific types of input will be discussed in the next pages.

The first-order response is characterized uniquely by the time constant 𝜏, so a single representative response to each input function characterizes all such sensors. In the plots that follow, the time constant will be 1 s. However, for a sensor having second-order response, the response is determined by the three terms shown in the equation or as the gain factors in the transfer-function diagram, k, 1/m, and D.

Only the relative sizes of the gain factors are important, so the controlling factors can be reduced to two: the undamped natural angular frequency of the system, given by ω_{0} = √(k/m), and the ratio of the damping term to the critical-damping value, given by 𝛄 = D / (2√(mk)).

An abrupt change in the measurand to a new value, or step-function change, is shown as the red line in the next figure. The first-order response is an exponential (M1, shown by the blue line), but the second-order response (M2, in green) is slower in this case. The shape of the second-order response curve, although still built from exponential terms, differs from the simple first-order exponential. These response curves provide good guidance regarding how long it is necessary to wait before errors caused by time response become negligible in a particular application. Response like that shown is commonly seen; for example, temperature sensors on research aircraft often have first-order time constants of a few seconds, and this imposes important limitations on the detection of spatial structures using such measurements.

The “ramp” variation in the measurand occurs when the measurand increases at a constant rate, as shown by the red line in the figure.

The sensor output in both cases, after a transient period, follows the slope of the measurand but with a delay. For the first-order case, the effective delay, also known as the dynamic lag, is equal to the time constant so this is often a useful way to determine that time constant. In the case of aircraft-borne temperature sensors, this response will result in a difference between aircraft ascent and descent profiles, and the time constant of the sensors can be determined from that difference.

For a second-order sensor, the delay is 2𝛄 / ω and, for the parameters shown, is larger than the first-order delay 𝜏 .
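These lag values can be checked numerically. The sketch below (Euler integration with assumed 𝜏 = 1 s and a critically damped second-order system; step sizes and durations are arbitrary) recovers a steady lag of 𝜏 for the first-order system and 2𝛄/ω₀ for the second-order system.

```python
import math

def ramp_lag_first_order(tau, dt=0.001, t_end=20.0):
    """Feed a unit-slope ramp x(t) = t into dM/dt = (x - M)/tau and
    return the steady-state lag between input and output."""
    M, t = 0.0, 0.0
    while t < t_end:
        M += dt * (t - M) / tau
        t += dt
    return t - M   # after the transient, M(t) ~ t - tau

def ramp_lag_second_order(omega0, gamma, dt=0.0005, t_end=40.0):
    """Same ramp fed into a second-order system with k = omega0^2, m = 1."""
    k, m = omega0 ** 2, 1.0
    D = 2.0 * gamma * math.sqrt(m * k)
    M, dM, t = 0.0, 0.0, 0.0
    while t < t_end:
        d2M = (k * (t - M) - D * dM) / m
        dM += d2M * dt
        M += dM * dt
        t += dt
    return t - M   # after the transient, the lag is 2*gamma/omega0

lag1 = ramp_lag_first_order(tau=1.0)
lag2 = ramp_lag_second_order(omega0=1.0, gamma=1.0)
print(lag1, lag2)  # ~1.0 and ~2.0, matching tau and 2*gamma/omega0
```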

A measurand that varies as a sine wave with frequency arbitrarily selected to be 0.29 Hz (i.e., a period of about 3.4 s) is shown in the next figure. This period is long in comparison to the assumed first-order time constant (𝜏 = 1 s), and the frequency is still below the natural frequency of the second-order system.

In this example, both response curves (M1 and M2) show attenuated response and a significant time lag between the input and response signals. The time lag leads to a phase lag; i.e., the peaks of the output signal (M1 and M2) waves occur after the peak of the input signal (x). For lower frequencies, the attenuation and phase lag are both reduced, but even at 0.1 Hz, important attenuation and phase lag remain evident in the response curves. There is some transient response at the start of this example because the calculation is started with the measurement and its derivative both set to zero, but the initial transient response no longer has much effect after a few cycles of the sine wave.

The next figure shows the response to a 2.5 Hz sine wave. Here the attenuation for both response signals is strong, and the phase lag approaches 90°; i.e., the peak of the output wave occurs 1/4 wavelength after the peak of the input wave.

In this case (as with the previous example), the transfer function acts as a low-pass filter by attenuating fluctuations that are fast compared to the characteristic response times. It is worth noting, though, that this transfer function isn’t a very good filter because the attenuation changes very slowly over a large frequency range and the transfer function introduces substantial lag over a similar range.

Using the observed response to controlled signals like step, ramp and sine functions often makes it possible to learn the response characteristics of a sensor. For example, a simple exponential response to a step function can be studied to determine the time constant of that system, as can the offset in response to a ramp function.

If good time response is needed for a particular measurement, much attention must be devoted to minimizing the effects of time lags and phase shifts. Time lags introduced by delays in signal lines and instrument components are often difficult to minimize. Another concern when sampling time-series measurements is choosing an appropriate sample rate. If a system samples at a frequency f, it is not possible to detect sine-wave components with a frequency faster than f/2, called the Nyquist frequency.

Furthermore, higher-frequency components can be “aliased” to appear as contaminating contributions at lower resolved frequencies. To avoid this contamination, it is best to remove components above the Nyquist frequency by filtering (using filters with better cutoff characteristics than the dynamic systems illustrated in this lesson). General guidance is to sample fast enough to give a Nyquist frequency significantly above the highest frequency of interest and then filter at or below the Nyquist frequency to eliminate higher-frequency components that might influence the resolved frequency range. Study of the frequency content of signals is often done using spectral analysis, either with appropriate equipment or numerically. This topic is beyond the scope of this lesson, but in some studies understanding the spectral content of measurements can be crucial.

Use the interactive ShinyApp tool to answer the following questions. If you are viewing a downloaded version of this lesson, please see the "Printable Lesson" for the ShinyApp exercise information.

A temperature sensor on an aircraft has first-order dynamic response, and you need to make measurements that respond to abrupt (i.e., step-function) changes by reaching 90% of the final value in 100 m of flight path, while flying 200 m/s. What is the required time constant of the sensor? Answer in seconds (within 10%).

The correct answer is c.

In the exercise, with "step" selected, place the cursor over the blue line to find the time corresponding to M1 = 0.90 [which should occur for t = -ln(0.1)]; the desired time constant is 0.5 seconds divided by this value. The answer is c, 0.22 seconds.
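The arithmetic behind this answer can also be done directly from the first-order step response, M(t)/M_final = 1 − e^(−t/𝜏):

```python
import math

# Time available for the sensor to reach 90% of a step change
distance_m = 100.0
speed_mps = 200.0
t_available = distance_m / speed_mps          # 0.5 s

# First-order step response: M(t)/M_final = 1 - exp(-t/tau).
# Reaching 90% requires t = tau * ln(10) ~ 2.3 * tau, so:
tau_required = t_available / math.log(10.0)
print(round(tau_required, 2))  # ~0.22 s, matching answer c
```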

For a ramp input with slope such that x = t/𝛕, at long times a first-order system will lag behind that input by 𝛕 (in these plots, 1 s), so this is a possible way to find the time constant and to provide corrections for such a system. For a second-order system, the response is more complicated and depends on both the characteristic angular frequency and the damping. However, for critical damping the system (at long time) again lags behind the input signal by a fixed amount, so the lag again can be used to find the characteristic angular frequency ω of this system. What value of 𝛚 is needed so that the lag of a second-order critically damped system is also 1? Answer in radians/s (within 10%).

The correct answer is b.

The answer is 2.06 s^{-1} (~ 2.1 s^{-1}). Leave the setting of 𝛄 = 1 and adjust the value of ω until the first-order and second-order response lines overlap at long time. (Hint: you can make fine adjustments to the sliders by left-mouse-clicking the slider button and then using the left-arrow or right-arrow keys.)

To determine the type of response characteristic of an unknown system, it is often revealing to determine the system response to an impulse function. Suppose you have a measuring system with unknown characteristics, and when you apply an impulse as shown by "x" in the figure below you obtain the response "M".

Is this measuring system first-order or second-order? (Answer this question before continuing to the next two questions.)

The correct answer is b.

The system exhibits a second-order response. A first-order system always responds toward the current value of the input, so it would not continue upward after the impulse function returns to zero.

If the system is second order, is it:

The correct answer is c.

The response overshoots just slightly past zero. It would not overshoot if it were critically damped (gamma = 1) or overdamped, and it would oscillate if it were strongly underdamped. Using the ShinyApp tool, vary gamma to generate a second-order response that overshoots just slightly as in the figure. The figure was generated with gamma = 0.7, omega to be determined in the next question, and ordinate scale from -0.25 to +1.

What is the approximate characteristic angular frequency?

The correct answer is d.

The steep initial response requires a large value of omega. Set gamma as required for the preceding question (0.7) and adjust omega to match the amplitude of response shown in the figure for this question.

What is the approximate time constant of the step response of a second-order system with a natural undamped frequency of ω_{0} = 1 s^{-1} and a critical damping ratio 𝛄 = 1?

The correct answer is b.

The time constant is the time that it takes the sensor to reach 63.2% of the step change in the measurand, in this example, about 2.15 seconds. The response for other natural periods will vary. You can explore other possibilities and other waveforms using this application.
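This value can be verified from the analytic step response of a critically damped second-order system, M(t) = 1 − (1 + ω₀t)e^(−ω₀t) (a standard result for 𝛄 = 1; the search loop is illustrative):

```python
import math

def critically_damped_step(t, omega0):
    """Step response of a critically damped (gamma = 1) second-order system."""
    return 1.0 - (1.0 + omega0 * t) * math.exp(-omega0 * t)

# Find the time at which the response reaches 63.2% of the step, omega0 = 1 1/s
omega0 = 1.0
t = 0.0
while critically_damped_step(t, omega0) < 0.632:
    t += 0.001
print(round(t, 2))  # ~2.15 s
```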

Instrument developers find that a damping ratio of 0.7 is better than the critical damping ratio. Why is this true?

The correct answer is d.

A smaller damping ratio decreases the time constant, provides a suitable compromise between response and overshoot, and becomes close to the final state well before a second-order response with critical damping.

Use the interactive application to explore a second-order solution to a step input with a natural undamped angular frequency of ω_{n} = 0.5 radians s^{-1} and a damping ratio 𝛄 = 0.7 to find the time constant.

The correct answer is c.

The answer is approximately 3.5 s.

Use the image below to answer the following question.

A temperature sensor on an aircraft has first-order dynamic response, and you need to make measurements that respond to abrupt (i.e., step-function) changes by reaching 90% of the final value in 100 m of flight path, while flying 200 m/s. What is the required time constant of the sensor? Answer in seconds (within 10%).

The correct answer is c.

The answer is c, 0.22 seconds, or 0.5 seconds / 2.3, where 2.3 is obtained using the following procedure. Follow the blue line (first-order response) to find the time corresponding to M1 = 0.90; the desired time constant is 0.5 s (i.e., 100 m / 200 m s^{-1}) divided by this value. The answer image key is shown below.

Use the image below to answer the following question.

For a ramp input with slope such that x = t/𝛕, at long times a first-order system will lag behind that input by 𝛕 (in these plots, 1 s), so this is a possible way to find the time constant and to provide corrections for such a system. For a second-order system, the response is more complicated and depends on both the characteristic angular frequency and the damping. However, for critical damping the system (at long time) again lags behind the input signal by a fixed amount, so the lag again can be used to find the characteristic angular frequency ω of this system. What value of 𝛚 is needed so that the lag of a second-order critically damped system is also 1?

The correct answer is b.

The answer is 2.06 s^{-1} (~ 2.1 s^{-1}). To find the characteristic angular frequency of the system so that it is equal to the first-order system, the second-order response must overlap the first order response at a time after the transient effects have disappeared. For a second-order system with critical damping (𝛄 = 1) to have the same time constant as a first-order system, its characteristic frequency would have to be approximately twice that of a first order system. See the answer image key below.

Use the image below to answer the following questions.

To determine the type of response characteristic of an unknown system, it is often revealing to determine the system response to an impulse function. Suppose you have a measuring system with unknown characteristics, and you find it responds to an impulse as shown.

Is this measuring system first-order or second-order? (Answer this question before continuing to the next two questions.)

The correct answer is b.

The response is not simple exponential motion toward the input x; the measurement M continues to change after the impulse returns to zero, which a first-order system would not do.

Using the image below, estimate the dynamic lag and the dynamic error for a second-order ramp input. You will use the figure to estimate the lag and error between the input (red line) and the measurement (green line).

For 𝛄 = 1, 𝜔_{n} = 1, estimate the dynamic lag for a second-order ramp input.

The correct answer is c.

The dynamic lag is 2 seconds.

For 𝛄 = 1, 𝜔_{n} = 1, estimate the dynamic error in units of the measurand for a second-order ramp input.

The correct answer is b.

You can estimate the dynamic lag 𝚫t and dynamic error 𝚫x using the image below. Using a horizontal line, determine 𝚫t as shown. It is important that this determination is carried out after the solution has become steady, beyond the transient period. For a second-order ramp input, the dynamic lag is 𝚫t = 2𝛄/𝛚, and since 𝛄 = 1 and 𝜔_{n} = 1 s^{-1}, 𝚫t = 2 seconds. To obtain an estimate of the dynamic error, use a vertical line through the data as shown to determine the difference between the input and the measurement at a specific time after the transient solution has disappeared; for a ramp with slope 1/𝛕, the dynamic error is 𝚫x = 𝚫t / 𝛕 = 2 units of the measurand for these parameter values.

The values can be verified on this graph.

First-order and second-order responses are described by linear ordinary differential equations (ODEs). Linearity in the dynamic sense does not necessarily imply linearity in the static sense: the static sensitivity may not be constant over the range of the sensor (that is, the transfer curve may be nonlinear with a varying slope), yet the sensor can still be described by a linear ODE. Because the governing ODE is linear, the Principle of Superposition applies, meaning that the response of a sensor to a combination of inputs is the sum of its responses to the individual inputs (Brock and Richardson, 2001). This additive property is important and enables the Principle of Superposition to be applied to the frequency decomposition of complex signals by superimposing the responses to individual frequencies.

The Principle of Superposition can be illustrated by considering that the input signal produced by a time-varying measurand can be decomposed into individual sinusoidally-varying signals.

Similarly, individual discrete sources can each produce input signals that add linearly, resulting in a combined input signal that appears to be complex.
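The additive property can be demonstrated numerically using a first-order system as the filter (a minimal sketch; the two signal frequencies and amplitudes are arbitrary choices):

```python
import math

def first_order_filter(xs, tau, dt):
    """Apply the first-order response dM/dt = (x - M)/tau to a sampled signal."""
    M, out = 0.0, []
    for x in xs:
        M += dt * (x - M) / tau
        out.append(M)
    return out

dt, n, tau = 0.01, 2000, 1.0
t = [i * dt for i in range(n)]
sig1 = [math.sin(2 * math.pi * 0.1 * ti) for ti in t]
sig2 = [0.5 * math.sin(2 * math.pi * 0.5 * ti) for ti in t]
combined = [a + b for a, b in zip(sig1, sig2)]

# Response to the sum equals the sum of the individual responses (linearity)
resp_sum = first_order_filter(combined, tau, dt)
sum_resp = [a + b for a, b in zip(first_order_filter(sig1, tau, dt),
                                  first_order_filter(sig2, tau, dt))]
max_diff = max(abs(a - b) for a, b in zip(resp_sum, sum_resp))
print(max_diff)  # ~0: the two results agree to machine precision
```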

Explore the following animations, which illustrate the Principle of Superposition. (For more details, see the Superposition of Waves page created by Daniel A. Russell at The Pennsylvania State University, http://www.acs.psu.edu/drussell/Demos/superposition/superposition.html.)

According to CIMO (2008), the representativeness of an observation is defined as the degree to which it accurately describes the value of the variable needed for a specific purpose. It is not a fixed quality of any observation but results from joint appraisal of instrumentation, measurement interval, and exposure as these characteristics relate to the requirements of a particular application. Measurements taken at a site will always contain a degree of influence from some upstream area and will be affected not only by natural variables such as wind speed, turbulence, and diurnal heating and cooling, but also by instrument characteristics such as the volume of sample collected and analyzed. Site selection is paramount to achieving the expected representativeness of the spatial-temporal domain relevant to the study (see Foundations to Meteorological Instrumentation and Measurements for more on Site Selection).

Homogeneity refers to similarity of a particular characteristic over some interval in space or time. There can be considerable variability in underlying properties of a measured quantity, as in a turbulent field, but the field itself may be homogeneous if the characteristics of the turbulence are similar over some interval. There is a relationship between homogeneity and representativeness. It is easier to make representative measurements of a homogeneous characteristic; for example, one sounding may represent conditions over a large area away from particular weather features but may miss important structure if there are weather systems causing significant variations between sounding stations.

Over a uniform surface and with constant wind, an atmospheric boundary layer develops that has homogeneous properties. However, if the wind then carries the air over land having different properties, the properties of the boundary layer change as it moves toward a different structure that would form over the new surface. The result is the formation of an "internal boundary layer" that increases in depth with distance downwind from the land-surface change.

Representativeness requires an appropriate match between the scale of the phenomenon being studied and the spatial and temporal coverage of the sensor. It also requires collecting a sufficient and appropriately distributed number of measurements covering that scale. For example, a regularly spaced set of measurements over one minute, where the wind speed is about 10 m/s, is representative of an upstream fetch of about 600 meters (10 m/s × 60 s).

The number and spacing of sensors and the sampling frequency required to give representativeness depends on the scale of the phenomenon being studied. An example of a measurand having particularly high temporal and spatial variability is rainfall.

In terms of representativeness, field research presents particular challenges. For example, consider the deployment of mobile instrument platforms needed to characterize the mesoscale features studied during the *Plains Elevated Convection At Night* (PECAN) campaign. In this study, an array of fixed and mobile sites, some with single instruments and others with an assortment of instruments, was used to establish a representative footprint (shown in the map) covering features of interest to the project.

What factors need to be considered to determine representativeness?

All of the above must be considered to determine site representativeness.

The result of a measurement is an estimate of the measurand and is incomplete unless it is accompanied by a quantitative estimate of its uncertainty.

In accordance with the *Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results* (Taylor and Kuyatt, 1994), estimates of uncertainty are classified into two categories, “Type-A” (evaluated by statistical methods) and “Type-B” (evaluated by other means). Uncertainty classification differs from the classification of error as either random or systematic because either type of uncertainty can apply to either type of error. For example, a component in an analysis of uncertainty may be the uncertainty in calibration, for which one component is the estimate that arises from the standard deviation of the calibration data about the calibration curve. This is a Type-A evaluation of an assumed random error in the calibration process, but the resulting uncertainty applies to the systematic error resulting from use of the calibration.

Standard uncertainty is the recommended method for reporting uncertainty, which for Type-A estimates and uncorrelated measurements corresponds to one standard deviation (of the mean) as given by u = √( Σ_{i=1}^{N} (x_{i} - x̄)^{2} / (N(N-1)) ),

where x̄ is the mean of the N measurements x_{i}. The measurement would be reported as x̄ ± u and the degrees of freedom would be reported as v = N-1.
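A Type-A evaluation for the mean of N repeated, uncorrelated measurements can be sketched as follows, assuming the NIST standard-deviation-of-the-mean form u = s/√N; the readings are invented for illustration.

```python
import math

def type_a_standard_uncertainty(samples):
    """Type-A standard uncertainty of the mean of N independent measurements:
    u = s / sqrt(N), where s is the sample standard deviation (N-1 divisor)."""
    n = len(samples)
    mean = sum(samples) / n
    s2 = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(s2 / n), n - 1   # mean, u, degrees of freedom

# Illustrative repeated temperature readings (degC)
readings = [20.12, 20.15, 20.09, 20.14, 20.11, 20.13]
mean, u, dof = type_a_standard_uncertainty(readings)
print(f"{mean:.3f} +/- {u:.3f} degC (v = {dof})")
```

The result would be reported as x̄ ± u together with the degrees of freedom v = N − 1.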

Type A evaluation of standard uncertainty may be based on any valid statistical method for analyzing data. The analysis of variance (ANOVA) is particularly useful since it considers both random and systematic effects between groups relative to the random effects within groups.

For Type-B evaluations, the target is to have the reported uncertainty correspond to similar coverage (roughly one standard deviation). Type-B estimates are difficult to defend as quantitative measures and cannot be assigned rigorous confidence limits.

A Type B evaluation of standard uncertainty is usually based on scientific judgment using all the relevant information available about the measurand and the instrument, such as:

- Previous measurement data
- Experience with the properties and behavior of instruments
- Manufacturer’s specifications
- Calibration data
- Uncertainties associated with reference data
(From *Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results*, Taylor and Kuyatt, 1994)

The combined standard uncertainty of a measurement result u_{C} is obtained by combining the individual standard uncertainties whether arising from Type A or Type B evaluations.

Select the type of uncertainty (Type A or Type B) referred to by the statement.

The correct answers are shown above.

So how can the joint effects of many sources of uncertainty be understood? First, apply all corrections (e.g., from calibration) to compensate for the known systematic errors. The relevant uncertainty to associate with each correction is the standard uncertainty of the applied correction.

*Note: The uncertainty of a correction applied to a measurement result in order to compensate for the systematic effect is NOT the systematic error in the result for that effect. Rather, it is a measure of the uncertainty of the result due to incomplete knowledge of the required value of the correction. The difference between error and uncertainty should always be kept in mind. For example, the result of a measurement after correction can unknowably be very close to the unknown value of the measurand, and thus have a negligible error, even though it may have a large uncertainty.*

(From NIST Guidelines).

After compensating for the systematic errors, the combined standard uncertainty can be estimated using the *law of propagation of uncertainty*, since in many cases the measurand Y (using NIST notation) is not measured directly but is determined from N other quantities X_{1}, X_{2}, ...X_{N} such that Y = f(X_{1}, X_{2}, ...X_{N}).

Included among the quantities needed to determine Y are the corrections for systematic errors and the correlations among the basic measurements used to determine Y.

In keeping with the NIST notation, the estimate of the measurand (output quantity) Y, denoted by y, is obtained from input quantity estimates x_{1}, x_{2}, ...x_{N} using the same functional relationship as above: y = f(x_{1}, x_{2}, ...x_{N}). The combined standard uncertainty u_{c}(y) of the measurement result y is taken as the positive square root of the estimated variance obtained from

u_{c}^{2}(y) = Σ_{i=1}^{N} (∂f/∂x_{i})^{2} u^{2}(x_{i}) + 2 Σ_{i=1}^{N-1} Σ_{j=i+1}^{N} (∂f/∂x_{i})(∂f/∂x_{j}) u(x_{i}, x_{j})

where u(x_{i}) is the standard uncertainty associated with the estimate x_{i} and u(x_{i}, x_{j}) is the covariance associated with x_{i} and x_{j}. The partial derivatives are the sensitivity coefficients (static sensitivities) of the relationship between y and x_{i}. The equation above is based on a first-order Taylor series approximation of Y = f(X_{1}, X_{2}, ...X_{N}) and is referred to as the Law of Propagation of Uncertainty.

*Note: It is recommended that the number of degrees of freedom associated with an uncertainty estimate (e.g., N-1, where N is the number of measurements entering a standard deviation) be reported where applicable. This is often omitted when the number of degrees of freedom is large, but when it is small it can affect how the uncertainty estimate is used. Some recommendations regarding how to determine degrees of freedom for uncertainty estimates arising from many components are contained in the NIST document referenced earlier; see the Welch-Satterthwaite formula.*

Let’s consider the example of the power dissipated in a resistor, P = I^{2}R (which follows from Ohm’s Law), where P is power, I is current, and R is resistance. The uncertainty u_{c} in the power measurement is given by the following equation:

u_{c}^{2}(P) = (∂P/∂I)^{2} u^{2}(I) + (∂P/∂R)^{2} u^{2}(R) + 2 (∂P/∂I)(∂P/∂R) u(I, R)

where the sensitivity coefficients are

∂P/∂I = 2IR

and

∂P/∂R = I^{2}

and u(I), the square root of the variance of I, gives the standard uncertainty in the current.

A set of measurements produces the following estimates of the measurands (current and resistance) during a calibration test.

I = 15.000 Amps

R = 1.0500 Ω

Variance(I) = 8.28E-06 A^{2}

Variance(R) = 4.6888E-09 Ω^{2}

Covariance(I, R) = -5.2E-09 A Ω

The uncertainty in the power measurement, P, can be computed based on the following equation:

Use the values above to solve for u_{c}^{2}(P).

The correct value for u_{c}^{2}(P) is:

The correct answer is d.

Taking the square root, the standard uncertainty u_{c} on the power P is approximately ±0.092 Watts.
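The worked example can be checked numerically by applying the law of propagation of uncertainty to P = I²R with the values given above (a sketch; only the two variance terms and the covariance term are included):

```python
import math

# Values from the calibration example above
I, R = 15.000, 1.0500                 # amps, ohms
var_I, var_R = 8.28e-06, 4.6888e-09   # variances
cov_IR = -5.2e-09                     # covariance

# Sensitivity coefficients for P = I^2 * R
dP_dI = 2.0 * I * R    # = 31.5 W/A
dP_dR = I ** 2         # = 225 A^2

# Law of propagation of uncertainty, including the correlation term
uc_squared = (dP_dI ** 2) * var_I + (dP_dR ** 2) * var_R \
             + 2.0 * dP_dI * dP_dR * cov_IR
uc = math.sqrt(uc_squared)
print(uc_squared, uc)  # uc ~ 0.092 W
```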

It is good practice to report random and systematic effects separately, as random effects influence each measurement differently while systematic effects influence each measurement in the same way. For instance, a zero offset (a systematic effect) will affect two measurements in the same way, so a property derived from taking the difference between the measurements will not be affected. On the other hand, that same property will be affected by the random effects on each measurement. For this reason, quantitative random uncertainty must be reported with every measurement. The convention is to use a multiple of the standard deviation, ±1𝜎, ±2𝜎, etc., which for normally distributed errors would be interpreted as a probability of about 32% (±1𝜎) or 5% (±2𝜎) that the true (unknowable) value lies outside these bounds.

Errors can be reported as absolute values or as percentages; the choice is often a matter of preference. If everything remains the same during the measurement period, it makes sense to report the uncertainty in absolute units. Conversely, if instruments, measurement techniques, or other factors change during the measurement period, making it necessary to determine the combined uncertainty from the law of propagation of uncertainty, it is advisable to use percentages. The uncertainty can be reported in terms of standard error as

- ± X% of full range
- ± X% of the mean (100 𝜎/x̄) or the fraction 𝜎/x̄
- ± 𝜎

Uncertainty can also arise from issues with measurement representativity. Errors associated with representativeness more often appear as random errors, except in cases where a correlation between a specific feature (e.g., a large upwind body of water) and the measurement (wind direction, for example) introduces a systematic effect. In this example, directional persistence of the wind moving over the body of water will bring cooler and moister conditions, which would appear as a directional bias in the data.

Representativeness requires an interpretation of the natural spatial-temporal variability of the measurand. For example, where was an air parcel earlier, what changes occurred during its advection to the observation site, and how is the value of the measurand correlated with its values at other times and locations? Such correlations must be considered when estimating uncertainty because measurements cannot be considered independent in the presence of such correlations.

Exploratory statistical correlations for paired data can be very useful in determining correlations arising from representativity. The ordinary or Pearson correlation coefficient is given by

r_{xy} = Cov(x, y) / (s_{x} s_{y})

where Cov(x, y) = E[(x − x̄)(y − ȳ)], E denotes the expected value, and s_{x} and s_{y} are the sample standard deviations. For testing temporal representativity, the autocorrelation function can be used to compute correlations at various lags. The autocorrelation for a time lag of 𝜏 samples is given by

r(𝜏) = Cov(x_{t}, x_{t+𝜏}) / s_{x}^{2}
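Both statistics are straightforward to compute. A minimal pure-Python sketch (the function names and the short demonstration series are ours, chosen only for illustration):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Ordinary (Pearson) correlation coefficient: Cov(x, y) / (s_x * s_y)."""
    xbar, ybar = mean(x), mean(y)
    n = len(x)
    # Sample covariance with an n-1 denominator, matching stdev()
    cov = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)
    return cov / (stdev(x) * stdev(y))

def autocorr(x, tau):
    """Autocorrelation at a lag of tau samples: correlate the series
    with a copy of itself shifted by tau."""
    return pearson_r(x[:-tau], x[tau:])

# A short series with obvious lag-1 persistence
x = [1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0]
print(round(pearson_r(x, x), 3))  # a series is perfectly correlated with itself → 1.0
print(round(autocorr(x, 1), 3))   # strong positive lag-1 correlation
```

Evaluating `autocorr` over a range of lags traces out the autocorrelation function used below to estimate a correlation time.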

Let’s consider an example in which we measure the component winds (u, v, w) for an hour with a sonic anemometer sampling at 20 Hz. This sampling frequency yields 72,000 measurements (20 samples per second × 3600 seconds). The autocorrelation shows that the measurements have a correlation time of 0.5 minutes; that is, the e-folding time for the autocorrelation coefficient to decrease to 1/e ≈ 0.37 is 0.5 minutes.

In this situation, how many independent measurements do we have? We get an independent measurement every 0.5 minutes, so we have 60 / 0.5 = 120 independent measurements instead of 72,000.
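The arithmetic behind this result can be sketched in a few lines:

```python
sample_rate_hz = 20   # sonic anemometer sampling frequency
duration_s = 3600     # one hour of measurements
corr_time_s = 30      # e-folding time of the autocorrelation (0.5 min)

n_total = sample_rate_hz * duration_s      # raw sample count
n_independent = duration_s // corr_time_s  # one independent sample per correlation time
print(n_total, n_independent)              # 72000 120
```

The effective number of independent samples, not the raw count, is what should enter degrees-of-freedom and standard-error calculations for correlated data.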

For measurements to be representative, atmospheric processes in general must be relatively homogeneous, or vary linearly, across the spatial and temporal domains of an instrument network. Representativeness cannot be defined by an observation or a specific parameter, but results from an assessment of the instrumentation, sampling frequency, exposure, and the application.

This lesson summarized the key performance characteristics of instrumentation used for meteorological measurements. Measurements are necessary to obtain quantitative information about the atmosphere. A good measurement system returns a robust, stable, and reliable output at a scale and resolution suited to quantifying the phenomenon under investigation. Meteorological measurement techniques are a field of steady change and progress, largely but not solely driven by enhancements to electronics, microelectronics, and computer resources (Emeis, 2010). Students and practitioners of instrumentation, measurement, and observing systems need to keep abreast of these changes to maintain timely knowledge and ensure competency and comprehension. The usefulness of a measurement result is largely determined by the quality of the statement of uncertainty that accompanies the measurement.

Static performance characteristics of an instrument include range, span, resolution, static sensitivity, linearity, stability, and sensor threshold. Dynamic performance characteristics include the time constant, sensor time lag, hysteresis, and first-order transfer functions. The selection of a site, in terms of representativity and homogeneity, can affect these performance characteristics.

The importance of calibration was also emphasized as the key step in determining the relationship between a measurand and the output of an instrument. The best calibration process uses a traceable standard that provides values of the measurand covering the measurement range.

Differences between first order dynamic systems and higher order dynamic systems were described. The lesson also covered the principle of superposition and its usefulness in signal analysis, the characterization of uncertainty, the differences between Type A and Type B estimates of uncertainty, and the propagation of uncertainty.

Thank you for completing this lesson. Test your knowledge of the main concepts presented in this training by completing the lesson quiz and share your feedback with us via the user survey.

Brock, F. V., & Richardson, S. J. (2001). *Meteorological measurement systems.* Oxford: Oxford University Press.

Cable, M. (1989). *Calibration principles: A technician’s guide* (ISA Technician Series, Vol. 1). Raleigh, NC: International Society of Automation.

DeFelice, T. P. (2000). *An introduction to meteorological instrumentation and measurement.* Upper Saddle River, NJ: Prentice-Hall.

Emeis, S. (2010). *Measurement methods in atmospheric sciences: in situ and remote; with 28 tables.* Stuttgart: Borntraeger.

Mishra, A. K. (2013). Effect of rain gauge density over the accuracy of rainfall: a case study over Bangalore, India. *SpringerPlus,* **2**, 311. http://doi.org/10.1186/2193-1801-2-311

Nappo, C.J., J.Y. Caneill, R.W. Furman, F.A. Gifford, J.C. Kaimal, M.L. Kramer, T.J. Lockhart, M.M. Pendergast, R.A. Pielke, D. Randerson, J.H. Shreffler, and J.C. Wyngaard, 1982: The workshop on the representativeness of meteorological observations, June 1981, Boulder, Colorado, *Bull. Amer. Meteor. Soc.*, **63**, 761-764.


World Meteorological Organization, *Guide to Meteorological Instruments and Methods of Observation (CIMO guide)*, WMO-No. 8, 2014, 1139 pp.

MetEd and the COMET® Program are a part of the University Corporation for Atmospheric Research's (UCAR's) Community Programs (UCP) and are sponsored by NOAA's National Weather Service (NWS), with additional funding by:

- Bureau of Meteorology of Australia (BoM)
- Bureau of Reclamation, United States Department of the Interior
- European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT)
- Meteorological Service of Canada (MSC)
- NOAA's National Environmental Satellite, Data and Information Service (NESDIS)
- NOAA's National Geodetic Survey (NGS)
- National Science Foundation (NSF)
- Naval Meteorology and Oceanography Command (NMOC)
- U.S. Army Corps of Engineers (USACE)

To learn more about us, please visit the COMET website.

The Instrumentation and Measurement of Atmospheric Parameters training series has been made possible by support from the National Science Foundation (NCAR Award #1642735 and Millersville University Award #1642643).

- Alison Rockwell, NCAR/EOL

- Dr. Richard Clark, Millersville University
- Dr. William Cooper, NCAR/EOL

- Dr. Teresa Campos, NCAR/EOL
- Dr. Julie Haggerty, NCAR/EOL
- Dr. Jorgen Jensen, NCAR/EOL
- Dr. Steven Oncley, NCAR/EOL
- Dr. Holger Vömel, NCAR/EOL
- Cory Wolff, NCAR/EOL

- Amy Stevermer, UCAR/COMET
- Dr. Alan Bol, UCAR/COMET


- Steve Deyo, UCAR/COMET

- Gary Pacheco, UCAR/COMET
- Sylvia Quesada, UCAR/COMET

- Lindsay Johnson, UCAR/COMET
- Marianne Weingroff, UCAR/COMET

- Dr. Elizabeth Mulvihill Page, Director
- Tim Alberta, Assistant Director Operations and IT
- Paul Kucera, Assistant Director International Programs

- Lorrie Alberta, Administrator
- Tara Torres, Program Coordinator

- Bob Bubon, Systems Administrator
- Joshua Hepp, Student Assistant
- Joey Rener, Software Engineer
- Malte Winkler, Software Engineer

- Dr. Alan Bol, Scientist/Instructional Designer
- Tsvetomir Ross-Lazarov, Instructional Designer

- Rosario Alfaro Ocampo, Translator/Meteorologist
- Bruce Muller, Project Manager
- David Russi, Translations Coordinator
- Martin Steinson, Project Manager

- Steve Deyo, Graphic and 3D Designer
- Dolores Kiessling, Software Engineer
- Gary Pacheco, Web Designer and Developer
- Sylvia Quesada, Production Assistant

- Dr. William Bua, Meteorologist
- Patrick Dills, Meteorologist
- Bryan Guarente, Instructional Designer/Meteorologist
- Matthew Kelsch, Hydrometeorologist
- Erin Regan, Student Assistant
- Andrea Smith, Meteorologist
- Amy Stevermer, Meteorologist
- Vanessa Vincente, Meteorologist